diff --git a/README.md b/README.md
index 5e7d61ea..4f363099 100644
--- a/README.md
+++ b/README.md
@@ -344,11 +344,12 @@ To enable FlashAttention-2 on the Windows platform, you need to install the prec
 #### Use local environment
 
 ```bash
-export CUDA_VISIBLE_DEVICES=0 # `set CUDA_VISIBLE_DEVICES=0` for Windows
-export GRADIO_SERVER_PORT=7860 # `set GRADIO_SERVER_PORT=7860` for Windows
 llamafactory-cli webui
 ```
 
+> [!TIP]
+> To modify the default setting in the LLaMA Board GUI, you can use environment variables, e.g., `export CUDA_VISIBLE_DEVICES=0 GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=False` (use `set` command on Windows OS).
+
 <details><summary>For Alibaba Cloud users</summary>
 
 If you encountered display problems in LLaMA Board on Alibaba Cloud, try using the following command to set environment variables before starting LLaMA Board:
@@ -392,7 +393,8 @@ docker compose -f ./docker-compose.yml up -d
 ```
 
 See [examples/README.md](examples/README.md) for usage.
 
-Use `llamafactory-cli train -h` to display arguments description.
+> [!TIP]
+> Use `llamafactory-cli train -h` to display arguments description.
 
 ### Deploy with OpenAI-style API and vLLM
diff --git a/README_zh.md b/README_zh.md
index bfb9feaa..8f9d5513 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -344,11 +344,12 @@ pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/downl
 #### 使用本地环境
 
 ```bash
-export CUDA_VISIBLE_DEVICES=0 # Windows 使用 `set CUDA_VISIBLE_DEVICES=0`
-export GRADIO_SERVER_PORT=7860 # Windows 使用 `set GRADIO_SERVER_PORT=7860`
 llamafactory-cli webui
 ```
 
+> [!TIP]
+> 您可以使用环境变量来修改 LLaMA Board 可视化界面的默认设置，例如 `export CUDA_VISIBLE_DEVICES=0 GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 GRADIO_SHARE=False`（Windows 系统可使用 `set` 指令）。
+
 <details><summary>阿里云用户指南</summary>
 
 如果您在阿里云上使用 LLaMA Board 时遇到显示问题，请尝试在启动前使用以下命令设置环境变量：
@@ -392,7 +393,8 @@ docker compose -f ./docker-compose.yml up -d
 ```
 
 使用方法请参考 [examples/README_zh.md](examples/README_zh.md)。
 
-您可以执行 `llamafactory-cli train -h` 来查看参数文档。
+> [!TIP]
+> 您可以执行 `llamafactory-cli train -h` 来查看参数文档。
 
 ### 利用 vLLM 部署 OpenAI API
diff --git a/src/llmtuner/webui/interface.py b/src/llmtuner/webui/interface.py
index 5f17d76d..459802f2 100644
--- a/src/llmtuner/webui/interface.py
+++ b/src/llmtuner/webui/interface.py
@@ -69,8 +69,8 @@ def create_web_demo() -> gr.Blocks:
 
 
 def run_web_ui():
-    create_ui().queue().launch(server_name="0.0.0.0")
+    create_ui().queue().launch()
 
 
 def run_web_demo():
-    create_web_demo().queue().launch(server_name="0.0.0.0")
+    create_web_demo().queue().launch()
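
Note on the `interface.py` change: dropping the hard-coded `server_name="0.0.0.0"` relies on Gradio's `launch()` resolving the bind address and port from the `GRADIO_SERVER_NAME` and `GRADIO_SERVER_PORT` environment variables when no explicit arguments are passed, which is exactly what the new README tip documents. Below is a minimal, self-contained sketch of that behavior; the placeholder UI and the default values set in the script are illustrative only and not part of this patch.

```python
# Sketch (not part of the patch): a stand-in Blocks app launched the same way
# as run_web_ui() after this change, i.e. without hard-coded server arguments.
import os

import gradio as gr

# Normally these are exported in the shell (see the README tip); they are set
# here only so the sketch runs on its own.
os.environ.setdefault("GRADIO_SERVER_NAME", "0.0.0.0")
os.environ.setdefault("GRADIO_SERVER_PORT", "7860")

with gr.Blocks() as demo:
    gr.Markdown("placeholder for the LLaMA Board UI")  # illustrative content

# No server_name= / server_port= arguments: launch() falls back to the
# GRADIO_SERVER_NAME / GRADIO_SERVER_PORT environment variables, mirroring the
# updated run_web_ui() and run_web_demo().
demo.queue().launch()
```

With no environment variables exported, Gradio uses its own defaults and binds to localhost, so exposing the UI beyond the local machine now requires setting `GRADIO_SERVER_NAME=0.0.0.0` (or enabling `GRADIO_SHARE`) as described in the tip.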