forked from p04798526/LLaMA-Factory-Mirror
update readme and webui launch
This commit is contained in:
parent 1409654cef
commit 9d2ce57345
@@ -344,11 +344,12 @@ To enable FlashAttention-2 on the Windows platform, you need to install the prec

#### Use local environment

```bash
export CUDA_VISIBLE_DEVICES=0 # `set CUDA_VISIBLE_DEVICES=0` for Windows
export GRADIO_SERVER_PORT=7860 # `set GRADIO_SERVER_PORT=7860` for Windows
llamafactory-cli webui
```

> [!TIP]
> To modify the default settings in the LLaMA Board GUI, you can use environment variables, e.g., `export CUDA_VISIBLE_DEVICES=0 GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=False` (use the `set` command on Windows).

<details><summary>For Alibaba Cloud users</summary>

If you encounter display problems in LLaMA Board on Alibaba Cloud, try setting the following environment variables before starting LLaMA Board:
@@ -392,7 +393,8 @@ docker compose -f ./docker-compose.yml up -d

 See [examples/README.md](examples/README.md) for usage.

-Use `llamafactory-cli train -h` to display arguments description.
+> [!TIP]
+> Use `llamafactory-cli train -h` to display argument descriptions.

 ### Deploy with OpenAI-style API and vLLM
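The heading above introduces an OpenAI-compatible endpoint. As a minimal sketch of what querying it could look like (the endpoint URL and model name here are assumptions, not values from this commit):

```python
import json
import urllib.request

# Hypothetical local endpoint; the actual host/port depend on deployment flags.
API_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_payload(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_payload("llama3", "Hello!")
body = json.dumps(payload).encode("utf-8")
req = urllib.request.Request(
    API_URL, data=body, headers={"Content-Type": "application/json"}
)
# urllib.request.urlopen(req) would send the request once the server is running.
```

Any OpenAI-compatible client library can be pointed at such an endpoint in the same way, since the request body follows the standard chat-completions schema.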
|
|
@@ -344,11 +344,12 @@ pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/downl

#### Use local environment

```bash
export CUDA_VISIBLE_DEVICES=0 # use `set CUDA_VISIBLE_DEVICES=0` on Windows
export GRADIO_SERVER_PORT=7860 # use `set GRADIO_SERVER_PORT=7860` on Windows
llamafactory-cli webui
```

> [!TIP]
> You can use environment variables to modify the default settings of the LLaMA Board GUI, e.g., `export CUDA_VISIBLE_DEVICES=0 GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=False` (on Windows, use the `set` command).

<details><summary>For Alibaba Cloud users</summary>

If you encounter display problems when using LLaMA Board on Alibaba Cloud, try setting the following environment variables before starting LLaMA Board:
@@ -392,7 +393,8 @@ docker compose -f ./docker-compose.yml up -d

 See [examples/README_zh.md](examples/README_zh.md) for usage.

-Run `llamafactory-cli train -h` to view the argument documentation.
+> [!TIP]
+> Run `llamafactory-cli train -h` to view the argument documentation.

 ### Deploy an OpenAI-style API with vLLM
|
|
@@ -69,8 +69,8 @@ def create_web_demo() -> gr.Blocks:


 def run_web_ui():
-    create_ui().queue().launch(server_name="0.0.0.0")
+    create_ui().queue().launch()


 def run_web_demo():
-    create_web_demo().queue().launch(server_name="0.0.0.0")
+    create_web_demo().queue().launch()
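The code change above drops the hardcoded `server_name="0.0.0.0"` so that Gradio's `GRADIO_SERVER_NAME` and `GRADIO_SERVER_PORT` environment variables (referenced in the README tip in this same commit) can take effect. A minimal sketch of that resolution logic, with defaults assumed to mirror Gradio's documented fallbacks:

```python
import os

def resolve_launch_address() -> tuple:
    """Mimic how launch() picks its bind address when no explicit
    server_name/server_port argument is passed: environment variables
    win, otherwise documented defaults apply."""
    host = os.environ.get("GRADIO_SERVER_NAME", "127.0.0.1")
    port = int(os.environ.get("GRADIO_SERVER_PORT", "7860"))
    return host, port

# With the variables from the README tip set, the app binds to all interfaces:
os.environ["GRADIO_SERVER_NAME"] = "0.0.0.0"
os.environ["GRADIO_SERVER_PORT"] = "7860"
print(resolve_launch_address())  # → ('0.0.0.0', 7860)
```

Removing the hardcoded argument is what makes this override possible: an explicit `server_name` passed to `launch()` would take precedence over the environment.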