CPM-9G/My_project/推理说明.txt

All of the following steps are performed on the all-in-one machine.
Create the container, publishing port 7860 and setting the container name:
docker run -p 7860:7860 \
--name new_9g_finetuning \
--env PATH=/usr/local/corex-4.0.0/bin:/usr/local/corex-4.0.0/lib64/python3/dist-packages/bin:/usr/local/openmpi/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
--env DEBIAN_FRONTEND=noninteractive \
--env COREX_VERSION=4.0.0 \
--env LD_LIBRARY_PATH=/usr/local/corex-4.0.0/lib64:/usr/local/openmpi/lib:/usr/local/lib: \
--env LANG=en_US.utf8 \
--env LC_ALL=en_US.utf8 \
--env PYTHONPATH=/usr/local/corex-4.0.0/lib64/python3/dist-packages \
--env RUSTUP_DIST_SERVER=https://mirrors.ustc.edu.cn/rust-static \
--env RUSTUP_UPDATE_ROOT=https://mirrors.ustc.edu.cn/rust-static/rustup \
-v /data:/workspace \
--cap-add ALL \
--cgroupns host \
--pid host \
--privileged \
--security-opt label=disable \
9g_finetuning/lora:v3 \
sleep infinity
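As an optional check, confirm that the container exists and that the 7860->7860 port mapping took effect:
docker ps -a --filter name=new_9g_finetuning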
Install the required dependencies:
pip install gradio_client
If the install fails, possible fixes:
pip install importlib-metadata==4.13.0
pip install --upgrade pip setuptools wheel
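As an optional sanity check (assuming python3 is the interpreter inside the image), verify that the client library imports cleanly; no output means success:
python3 -c "import gradio_client"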
Inference
a) Open a system terminal.
b) Switch to root and start the container:
sudo su
docker start new_9g_finetuning
c) Open a shell inside the container:
docker exec -it new_9g_finetuning /bin/bash
d) Change to the LLaMA-Factory directory:
cd /workspace/zksc/LLaMA-Factory
e) Launch the web chat interface (a Gradio UI served through the port 7860 mapped above):
CUDA_VISIBLE_DEVICES=0 llamafactory-cli webchat examples/inference/fm9g_merge.yaml
f) Finally, run the inference code to generate the answer file.
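The inference code itself is not included in this file. Below is a minimal sketch of how such a script might query the running webchat through gradio_client and write the answers to a file; the server URL, the api_name and its arguments are assumptions, so confirm the real endpoint with client.view_api() first.

infer_client.py (hypothetical filename):

from gradio_client import Client

# Assumption: the webchat is reachable on the mapped port 7860.
client = Client("http://127.0.0.1:7860/")
client.view_api()  # prints the actual endpoints and their parameters

questions = ["你好,请介绍一下你自己。"]  # example prompts; replace with the real question set
with open("answers.txt", "w", encoding="utf-8") as f:
    for q in questions:
        # Hypothetical endpoint name and arguments; substitute what view_api() reports.
        answer = client.predict(q, api_name="/chat")
        f.write("Q: " + q + "\nA: " + str(answer) + "\n\n")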