From 2d43b8bb49057e14a9f79146acdcc0cfa94bcc5a Mon Sep 17 00:00:00 2001
From: hiyouga <467089858@qq.com>
Date: Thu, 13 Jun 2024 16:02:21 +0800
Subject: [PATCH] Update README.md

---
 examples/README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/examples/README.md b/examples/README.md
index 180d5f7b..a6d78936 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -97,25 +97,25 @@ FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.
 #### Supervised Fine-Tuning with 4/8-bit Bitsandbytes Quantization (Recommended)
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_qlora/llama3_lora_sft_bitsandbytes.yaml
+llamafactory-cli train examples/train_qlora/llama3_lora_sft_bitsandbytes.yaml
 ```
 
 #### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml
+llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml
 ```
 
 #### Supervised Fine-Tuning with 4-bit AWQ Quantization
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml
+llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml
 ```
 
 #### Supervised Fine-Tuning with 2-bit AQLM Quantization
 
 ```bash
-CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
+llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml
 ```
 
 ### Full-Parameter Fine-Tuning