We provide diverse examples of fine-tuning LLMs.
```
examples/
├── lora_single_gpu/
│   ├── pretrain.sh: Do continual pre-training using LoRA
│   ├── sft.sh: Do supervised fine-tuning using LoRA
│   ├── reward.sh: Do reward modeling using LoRA
│   ├── ppo.sh: Do PPO training using LoRA
│   ├── dpo.sh: Do DPO training using LoRA
│   ├── orpo.sh: Do ORPO training using LoRA
│   ├── sft_mllm.sh: Do supervised fine-tuning on multimodal data using LoRA
│   ├── prepare.sh: Save tokenized dataset
│   └── predict.sh: Do batch prediction and compute BLEU and ROUGE scores after LoRA tuning
├── qlora_single_gpu/
│   ├── bitsandbytes.sh: Fine-tune 4/8-bit BNB models using QLoRA
│   ├── gptq.sh: Fine-tune 4/8-bit GPTQ models using QLoRA
│   ├── awq.sh: Fine-tune 4-bit AWQ models using QLoRA
│   └── aqlm.sh: Fine-tune 2-bit AQLM models using QLoRA
├── lora_multi_gpu/
│   ├── single_node.sh: Fine-tune model with Accelerate on a single node using LoRA
│   ├── multi_node.sh: Fine-tune model with Accelerate on multiple nodes using LoRA
│   └── ds_zero3.sh: Fine-tune model with DeepSpeed ZeRO-3 using LoRA (weight sharding)
├── full_multi_gpu/
│   ├── single_node.sh: Fully fine-tune model with DeepSpeed on a single node
│   ├── multi_node.sh: Fully fine-tune model with DeepSpeed on multiple nodes
│   └── predict.sh: Do parallel batch prediction and compute BLEU and ROUGE scores after full tuning
├── merge_lora/
│   ├── merge.sh: Merge LoRA weights into the pre-trained models
│   └── quantize.sh: Quantize the fine-tuned model with AutoGPTQ
├── inference/
│   ├── cli_demo.sh: Chat with the fine-tuned model in the CLI with LoRA adapters
│   ├── api_demo.sh: Chat with the fine-tuned model in an OpenAI-style API with LoRA adapters
│   ├── web_demo.sh: Chat with the fine-tuned model in the Web browser with LoRA adapters
│   └── evaluate.sh: Evaluate model on the MMLU/CMMLU/C-Eval benchmarks with LoRA adapters
└── extras/
    ├── galore/
    │   └── sft.sh: Fine-tune model with GaLore
    ├── badam/
    │   └── sft.sh: Fine-tune model with BAdam
    ├── loraplus/
    │   └── sft.sh: Fine-tune model using LoRA+
    ├── mod/
    │   └── sft.sh: Fine-tune model using Mixture-of-Depths
    ├── llama_pro/
    │   ├── expand.sh: Expand layers in the model
    │   └── sft.sh: Fine-tune the expanded model
    └── fsdp_qlora/
        └── sft.sh: Fine-tune quantized model with FSDP+QLoRA
```
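
Each entry above is a self-contained shell script. A minimal sketch of launching one (assuming you run from the repository root; device selection and hyperparameters are set inside each script, so inspect and edit the script before launching):

```shell
# Launch supervised fine-tuning with LoRA on a single GPU.
# The script path comes from the tree above; edit the script first
# to point at your model, dataset, and output directory.
bash examples/lora_single_gpu/sft.sh
```

The same pattern applies to every other script in the tree, e.g. `bash examples/merge_lora/merge.sh` after training to merge the LoRA weights.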