# LLaMA Efficient Tuning
[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Efficient-Tuning?style=social)](https://github.com/hiyouga/LLaMA-Efficient-Tuning/stargazers)
[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Efficient-Tuning)](LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Efficient-Tuning)](https://github.com/hiyouga/LLaMA-Efficient-Tuning/commits/main)
[![PyPI](https://img.shields.io/pypi/v/llmtuner)](https://pypi.org/project/llmtuner/)
[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Efficient-Tuning/pulls)

👋 Join our [WeChat](assets/wechat.jpg).

\[ English | [中文](README_zh.md) \]
## Changelog
[23/08/18] Now we support **resuming training**. Upgrade `transformers` to `4.31.0` to enjoy this feature.

[23/08/12] Now we support **RoPE scaling** to extend the context length of the LLaMA models. Try `--rope_scaling linear` argument in training and `--rope_scaling dynamic` argument at inference to extrapolate the position embeddings.

[23/08/11] Now we support **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [this example](#dpo-training) to train your models (experimental feature).

[23/08/03] Now we support training the **Qwen-7B** model in this repo. Try `--model_name_or_path Qwen/Qwen-7B-Chat` and `--lora_target c_attn` arguments to train the Qwen-7B model. Remember to use `--template chatml` argument when you are using the Qwen-7B-Chat model.

[23/07/31] Now we support **dataset streaming**. Try `--streaming` and `--max_steps 10000` arguments to load your dataset in streaming mode.

[23/07/29] We released two instruction-tuned 13B models on Hugging Face. See these Hugging Face repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/baichuan-13b-sft)) for details.

[23/07/19] Now we support training the **LLaMA-2** models in this repo. Try `--model_name_or_path meta-llama/Llama-2-7b-hf` argument to use the LLaMA-2 model. Remember to use `--template llama2` argument when you are using the LLaMA-2-chat model.

[23/07/18] We developed an **all-in-one Web UI** for training, evaluation and inference. Try `train_web.py` to fine-tune models in your Web browser. Thanks to [@KanadeSiina](https://github.com/KanadeSiina) and [@codemayq](https://github.com/codemayq) for their efforts in the development.

[23/07/11] Now we support training the **Baichuan-13B** model in this repo. Try `--model_name_or_path baichuan-inc/Baichuan-13B-Base` and `--lora_target W_pack` arguments to train the Baichuan-13B model. Remember to use `--template baichuan` argument when you are using the Baichuan-13B-Chat model.

[23/07/09] We released **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹, an easy-to-use package for efficiently editing the factual knowledge of large language models. Please follow [FastEdit](https://github.com/hiyouga/FastEdit) if you are interested.

[23/07/07] Now we support training the **InternLM-7B** model in this repo. Try `--model_name_or_path internlm/internlm-7b` argument to use the InternLM model. Remember to use `--template intern` argument when you are using the InternLM-chat model.

[23/07/05] Now we support training the **Falcon-7B/40B** models in this repo. Try `--model_name_or_path tiiuae/falcon-7b` and `--lora_target query_key_value` arguments to use the Falcon model.

[23/06/29] We provide a **reproducible example** of training a chat model using instruction-following datasets; see this [Hugging Face Repo](https://huggingface.co/hiyouga/baichuan-7b-sft) for details.

[23/06/22] We aligned the [demo API](src/api_demo.py) with [OpenAI's](https://platform.openai.com/docs/api-reference/chat) format, so you can plug the fine-tuned model into **arbitrary ChatGPT-based applications**.

[23/06/15] Now we support training the **Baichuan-7B** model in this repo. Try `--model_name_or_path baichuan-inc/Baichuan-7B` and `--lora_target W_pack` arguments to use the Baichuan-7B model.

[23/06/03] Now we support quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). Try `--quantization_bit 4/8` argument to work with quantized models.

[23/05/31] Now we support training the **BLOOM & BLOOMZ** models in this repo. Try `--model_name_or_path bigscience/bloomz-7b1-mt` and `--lora_target query_key_value` arguments to use the BLOOMZ model.
## Supported Models
| Model | Model size | Default module | Template |
| -------------------------------------------------------- | --------------------------- | ----------------- |----------|
| [LLaMA](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | q_proj,v_proj | - |
| [LLaMA-2](https://huggingface.co/meta-llama) | 7B/13B/70B | q_proj,v_proj | llama2 |
| [BLOOM](https://huggingface.co/bigscience/bloom) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| [BLOOMZ](https://huggingface.co/bigscience/bloomz) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| [Falcon](https://huggingface.co/tiiuae/falcon-7b) | 7B/40B | query_key_value | - |
| [Baichuan](https://github.com/baichuan-inc/baichuan-13B) | 7B/13B | W_pack | baichuan |
| [InternLM](https://github.com/InternLM/InternLM) | 7B | q_proj,v_proj | intern |
| [Qwen](https://github.com/QwenLM/Qwen-7B) | 7B | c_attn | chatml |
| [XVERSE](https://github.com/xverse-ai/XVERSE-13B) | 13B | q_proj,v_proj | - |
| [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)         | 6B                          | query_key_value   | chatglm2 |

- **Default module** is used for the `--lora_target` argument. Please use `python src/train_bash.py -h` to see all available options.
- For the "base" models, the `--template` argument can be chosen from `default`, `alpaca`, `vicuna` etc. But make sure to use the corresponding template for the "chat" models.
## Supported Training Approaches
| Approach | Full-parameter | Partial-parameter | LoRA | QLoRA |
| ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
| Pre-Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Supervised Fine-Tuning | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Reward Modeling | | | :white_check_mark: | :white_check_mark: |
| PPO Training | | | :white_check_mark: | :white_check_mark: |
| DPO Training           | :white_check_mark: |                    | :white_check_mark: | :white_check_mark: |

- Use the `--quantization_bit 4/8` argument to enable QLoRA, as shown in the sketch below.
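
A minimal sketch of a 4-bit QLoRA fine-tuning run (the output path is a placeholder; the remaining arguments mirror the Supervised Fine-Tuning example below):

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path path_to_your_model \
    --do_train \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --quantization_bit 4 \
    --output_dir path_to_qlora_checkpoint \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --fp16
```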
## Provided Datasets
- For pre-training:
- [Wiki Demo (en)](data/wiki_demo.txt)
- [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata)
- [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220)
- [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered)
- For supervised fine-tuning:
- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
- [GPT-4 Generated Data (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [Self-cognition (zh)](data/self_cognition.json)
- [ShareGPT (zh)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chinese-instruction-collection)
- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
- [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
- [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)
- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
- [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa)
- [UltraChat (en)](https://github.com/thunlp/UltraChat)
- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
- For reward modeling or DPO training:
  - [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
  - [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
  - [GPT-4 Generated Data (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)

Please refer to [data/README.md](data/README.md) for details.

Some datasets require confirmation before using them, so we recommend logging in with your Hugging Face account using these commands.
```bash
pip install --upgrade huggingface_hub
huggingface-cli login
```
## Requirements
- Python 3.8+ and PyTorch 1.13.1+
- 🤗Transformers, Datasets, Accelerate, PEFT and TRL
- sentencepiece and tiktoken
- jieba, rouge-chinese and nltk (used for evaluation)
- gradio and matplotlib (used in web_demo.py)
- uvicorn, fastapi and sse-starlette (used in api_demo.py)

And **powerful GPUs**!
## Getting Started
### Data Preparation (optional)
Please refer to `data/example_dataset` for details about the format of dataset files. You can either use a single `.json` file or a [dataset loading script](https://huggingface.co/docs/datasets/dataset_script) with multiple files to create a custom dataset.

Note: please update `data/dataset_info.json` to use your custom dataset. For the format of this file, please refer to `data/README.md`.
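
As an illustration, a custom alpaca-style dataset might be registered like this (a minimal sketch — the dataset name, file name and column mapping are hypothetical; see `data/README.md` for the authoritative schema):

```bash
# Hypothetical example: register data/my_dataset.json so it can be selected
# with --dataset my_dataset in the training commands below.
python - <<'EOF'
import json

with open("data/dataset_info.json", encoding="utf-8") as f:
    info = json.load(f)

info["my_dataset"] = {                  # hypothetical dataset name
    "file_name": "my_dataset.json",     # hypothetical file placed under data/
    "columns": {                        # assumed alpaca-style column mapping
        "prompt": "instruction",
        "query": "input",
        "response": "output",
        "history": "history",
    },
}

with open("data/dataset_info.json", "w", encoding="utf-8") as f:
    json.dump(info, f, indent=2, ensure_ascii=False)
EOF
```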
### Dependency Installation (optional)
```bash
git clone https://github.com/hiyouga/LLaMA-Efficient-Tuning.git
conda create -n llama_etuning python=3.10
conda activate llama_etuning
cd LLaMA-Efficient-Tuning
pip install -r requirements.txt
```
If you want to enable quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library, which supports CUDA 11.1 to 12.1.
```bash
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.39.1-py3-none-win_amd64.whl
```
### All-in-one Web UI
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_web.py
```

We strongly recommend that newcomers use the all-in-one Web UI, since it can also generate training scripts **automatically**.

Currently, the Web UI only supports training on **a single GPU**.
### Pre-Training
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage pt \
    --model_name_or_path path_to_your_model \
    --do_train \
    --dataset wiki_demo \
    --template default \
    --finetuning_type lora \
    --output_dir path_to_pt_checkpoint \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16
```
### Supervised Fine-Tuning

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path path_to_your_model \
    --do_train \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --output_dir path_to_sft_checkpoint \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16
```
### Reward Modeling

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage rm \
    --model_name_or_path path_to_your_model \
    --do_train \
    --dataset comparison_gpt4_en \
    --template default \
    --finetuning_type lora \
    --resume_lora_training False \
    --checkpoint_dir path_to_sft_checkpoint \
    --output_dir path_to_rm_checkpoint \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss \
    --fp16
```
### PPO Training

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage ppo \
    --model_name_or_path path_to_your_model \
    --do_train \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --resume_lora_training False \
    --checkpoint_dir path_to_sft_checkpoint \
    --reward_model path_to_rm_checkpoint \
    --output_dir path_to_ppo_checkpoint \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss
```
### DPO Training

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage dpo \
    --model_name_or_path path_to_your_model \
    --do_train \
    --dataset comparison_gpt4_en \
    --template default \
    --finetuning_type lora \
    --resume_lora_training False \
    --checkpoint_dir path_to_sft_checkpoint \
    --output_dir path_to_dpo_checkpoint \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss \
    --fp16
```
### Distributed Training
#### Use Hugging Face Accelerate
```bash
accelerate config # configure the environment
accelerate launch src/train_bash.py # arguments (same as above)
```

<details><summary>Example config.yaml for training with DeepSpeed ZeRO-2</summary>

```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 4
  gradient_clipping: 0.5
  offload_optimizer_device: none
  offload_param_device: none
  zero3_init_flag: false
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
</details>

#### Use DeepSpeed
```bash
deepspeed --num_gpus 8 --master_port=9901 src/train_bash.py \
    --deepspeed ds_config.json \
    ... # arguments (same as above)
```
<details><summary>Example ds_config.json for training with DeepSpeed ZeRO-2</summary>

```json
{
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "zero_allow_untested_optimizer": true,
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "initial_scale_power": 16,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "zero_optimization": {
    "stage": 2,
    "allgather_partitions": true,
    "allgather_bucket_size": 5e8,
    "reduce_scatter": true,
    "reduce_bucket_size": 5e8,
    "overlap_comm": false,
    "contiguous_gradients": true
  }
}
```
</details>

### Evaluation (BLEU and ROUGE_CHINESE)
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path path_to_your_model \
    --do_eval \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint \
    --output_dir path_to_eval_result \
    --per_device_eval_batch_size 8 \
    --max_samples 100 \
    --predict_with_generate
```
We recommend using `--per_device_eval_batch_size=1` and `--max_target_length 128` for 4/8-bit evaluation.
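
For instance, a 4-bit evaluation run might look like the following (a minimal sketch; it only adds `--quantization_bit 4` and the two flags above to the evaluation command):

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path path_to_your_model \
    --do_eval \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint \
    --output_dir path_to_eval_result \
    --quantization_bit 4 \
    --per_device_eval_batch_size 1 \
    --max_samples 100 \
    --max_target_length 128 \
    --predict_with_generate
```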
### Predict
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path path_to_your_model \
    --do_predict \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint \
    --output_dir path_to_predict_result \
    --per_device_eval_batch_size 8 \
    --max_samples 100 \
    --predict_with_generate
```
### API Demo
```bash
python src/api_demo.py \
    --model_name_or_path path_to_your_model \
    --template default \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint
```
Visit `http://localhost:8000/docs` for API documentation.
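
Since the demo API follows OpenAI's chat completion format (see the changelog entry of 23/06/22), a quick smoke test might look like this (a sketch assuming an OpenAI-style `/v1/chat/completions` route on the default port 8000; the exact request fields may differ from the implementation):

```bash
# Hypothetical smoke test against the running api_demo.py server.
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "default", "messages": [{"role": "user", "content": "Hello!"}]}'
```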
### CLI Demo
```bash
python src/cli_demo.py \
    --model_name_or_path path_to_your_model \
    --template default \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint
```
### Web Demo
```bash
python src/web_demo.py \
    --model_name_or_path path_to_your_model \
    --template default \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint
```
### Export Model
```bash
python src/export_model.py \
    --model_name_or_path path_to_your_model \
    --template default \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint \
    --output_dir path_to_export
```
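
The exported directory is a regular Hugging Face checkpoint, so it can be loaded with the standard `transformers` API — a minimal sketch, assuming `path_to_export` is the placeholder used above:

```bash
python - <<'EOF'
from transformers import AutoModelForCausalLM, AutoTokenizer

# "path_to_export" is the --output_dir passed to src/export_model.py above.
tokenizer = AutoTokenizer.from_pretrained("path_to_export", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("path_to_export", trust_remote_code=True)
print(model.config.model_type)
EOF
```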
## TODO
- [ ] Supporting flash attention ([torch](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) / [xformers](https://github.com/facebookresearch/xformers) / [flashattn](https://github.com/Dao-AILab/flash-attention)).
- [ ] Implementing multi-query attention for faster inference.
- [ ] Supporting full-parameter RLHF training.
## License
This repository is licensed under the [Apache-2.0 License](LICENSE).

Please follow the model licenses to use the corresponding model weights:
- [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md)
- [LLaMA-2](https://ai.meta.com/llama/license/)
- [BLOOM](https://huggingface.co/spaces/bigscience/license)
- [Falcon](LICENSE)
- [Baichuan](https://huggingface.co/baichuan-inc/baichuan-7B/resolve/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)
- [InternLM](https://github.com/InternLM/InternLM#open-source-license)
- [Qwen](https://huggingface.co/Qwen/Qwen-7B-Chat/blob/main/LICENSE)
- [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf)
- [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B/blob/main/MODEL_LICENSE)
## Citation
If this work is helpful, please kindly cite as:
```bibtex
@Misc{llama-efficient-tuning,
title = {LLaMA Efficient Tuning},
author = {hiyouga},
howpublished = {\url{https://github.com/hiyouga/LLaMA-Efficient-Tuning}},
year = {2023}
}
```
## Acknowledgement
This repo is a sibling of [ChatGLM-Efficient-Tuning](https://github.com/hiyouga/ChatGLM-Efficient-Tuning). They share a similar code structure for efficient tuning of large language models.
## Star History
![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/LLaMA-Efficient-Tuning&type=Date)