From 5bdad463875100e402329d47cd4c14bf9bc3b84b Mon Sep 17 00:00:00 2001
From: hiyouga
Date: Wed, 15 May 2024 00:05:17 +0800
Subject: [PATCH] update examples

---
 examples/README.md    | 9 +++++++++
 examples/README_zh.md | 9 +++++++++
 2 files changed, 18 insertions(+)

diff --git a/examples/README.md b/examples/README.md
index 0838314a..4b4a8248 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -7,6 +7,7 @@ Make sure to execute these commands in the `LLaMA-Factory` directory.
 - [LoRA Fine-Tuning on A Single GPU](#lora-fine-tuning-on-a-single-gpu)
 - [QLoRA Fine-Tuning on a Single GPU](#qlora-fine-tuning-on-a-single-gpu)
 - [LoRA Fine-Tuning on Multiple GPUs](#lora-fine-tuning-on-multiple-gpus)
+- [LoRA Fine-Tuning on Multiple NPUs](#lora-fine-tuning-on-multiple-npus)
 - [Full-Parameter Fine-Tuning on Multiple GPUs](#full-parameter-fine-tuning-on-multiple-gpus)
 - [Merging LoRA Adapters and Quantization](#merging-lora-adapters-and-quantization)
 - [Inferring LoRA Fine-Tuned Models](#inferring-lora-fine-tuned-models)
@@ -124,6 +125,14 @@ bash examples/lora_multi_gpu/multi_node.sh
 bash examples/lora_multi_gpu/ds_zero3.sh
 ```
 
+### LoRA Fine-Tuning on Multiple NPUs
+
+#### Supervised Fine-Tuning with DeepSpeed ZeRO-0
+
+```bash
+bash examples/lora_multi_npu/ds_zero0.sh
+```
+
 ### Full-Parameter Fine-Tuning on Multiple GPUs
 
 #### Supervised Fine-Tuning with Accelerate on Single Node
diff --git a/examples/README_zh.md b/examples/README_zh.md
index 7fe43954..3b5b2dee 100644
--- a/examples/README_zh.md
+++ b/examples/README_zh.md
@@ -7,6 +7,7 @@
 - [单 GPU LoRA 微调](#单-gpu-lora-微调)
 - [单 GPU QLoRA 微调](#单-gpu-qlora-微调)
 - [多 GPU LoRA 微调](#多-gpu-lora-微调)
+- [多 NPU LoRA 微调](#多-npu-lora-微调)
 - [多 GPU 全参数微调](#多-gpu-全参数微调)
 - [合并 LoRA 适配器与模型量化](#合并-lora-适配器与模型量化)
 - [推理 LoRA 模型](#推理-lora-模型)
@@ -124,6 +125,14 @@ bash examples/lora_multi_gpu/multi_node.sh
 bash examples/lora_multi_gpu/ds_zero3.sh
 ```
 
+### 多 NPU LoRA 微调
+
+#### 使用 DeepSpeed ZeRO-0 训练
+
+```bash
+bash examples/lora_multi_npu/ds_zero0.sh
+```
+
 ### 多 GPU 全参数微调
 
 #### 使用 DeepSpeed 进行单节点训练
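
The `ds_zero0.sh` script added above presumably launches training with a DeepSpeed config whose ZeRO optimization stage is 0 (i.e., data parallelism with no optimizer-state partitioning, the conservative choice for NPU backends). The exact config shipped with the repo is not part of this patch; the sketch below is an assumption showing the minimal shape such a DeepSpeed JSON config takes, using `"auto"` values that DeepSpeed resolves from the HuggingFace `TrainingArguments`:

```json
{
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "zero_optimization": {
    "stage": 0
  }
}
```

Stage 0 keeps full optimizer states on every device, unlike the ZeRO-3 config used by the sibling `ds_zero3.sh` for GPUs, which partitions parameters, gradients, and optimizer states across ranks.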