
![# LLaMA Factory](assets/logo.png)

[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)
[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
[![PyPI](https://img.shields.io/pypi/v/llmtuner)](https://pypi.org/project/llmtuner/)
[![Downloads](https://static.pepy.tech/badge/llmtuner)](https://pypi.org/project/llmtuner/)
[![Citation](https://img.shields.io/badge/citation-21-green)](#projects-using-llama-factory)
[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)

👋 Join our [WeChat](assets/wechat.jpg).

\[ English | [中文](README_zh.md) \]
**Fine-tuning a large language model can be as easy as...**

https://github.com/hiyouga/LLaMA-Factory/assets/16256802/9840a653-7e9c-41c8-ae89-7ace5698baf6

Choose your path:
- **🤗 Spaces**: https://huggingface.co/spaces/hiyouga/LLaMA-Board
- **ModelScope**: https://modelscope.cn/studios/hiyouga/LLaMA-Board
- **Colab**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing
- **Local machine**: Please refer to [usage](#getting-started)
## Table of Contents
- [Features](#features)
- [Benchmark](#benchmark)
- [Changelog](#changelog)
- [Supported Models](#supported-models)
- [Supported Training Approaches](#supported-training-approaches)
- [Provided Datasets](#provided-datasets)
- [Requirement](#requirement)
- [Getting Started](#getting-started)
- [Projects using LLaMA Factory](#projects-using-llama-factory)
- [License](#license)
- [Citation](#citation)
- [Acknowledgement](#acknowledgement)
## Features
- **Various models**: LLaMA, Mistral, Mixtral-MoE, Qwen, Yi, Gemma, Baichuan, ChatGLM, Phi, etc.
- **Integrated methods**: (Continual) pre-training, supervised fine-tuning, reward modeling, PPO and DPO.
- **Scalable resources**: 32-bit full-tuning, 16-bit freeze-tuning, 16-bit LoRA, 2/4/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8.
- **Advanced algorithms**: DoRA, LongLoRA, LLaMA Pro, LoftQ, agent tuning.
- **Practical tricks**: FlashAttention-2, Unsloth, RoPE scaling, NEFTune, rsLoRA.
- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc.
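
For example, metric reporting can typically be switched on through the generic Hugging Face `TrainingArguments` flag; a minimal sketch (assuming `wandb` is installed and you are logged in; `--report_to` is inherited from `transformers` rather than being a LLaMA-Factory-specific option):

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --report_to wandb \
    ... # arguments (same as the training examples in Getting Started)
```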
## Benchmark
Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA-Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging the 4-bit quantization technique, LLaMA-Factory's QLoRA further improves efficiency in terms of GPU memory.

![benchmark](assets/benchmark.svg)
<details><summary>Definitions</summary>

- **Training Speed**: the number of training samples processed per second during training. (bs=4, cutoff_len=1024)
- **Rouge Score**: Rouge-2 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (bs=4, cutoff_len=1024)
- **GPU Memory**: Peak GPU memory usage in 4-bit quantized training. (bs=1, cutoff_len=1024)
- We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA-Factory's LoRA tuning.
</details>

## Changelog
[24/02/28] We supported weight-decomposed LoRA (**[DoRA](https://arxiv.org/abs/2402.09353)**). Try `--use_dora` to activate DoRA training.
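
For example, a minimal sketch that enables DoRA on top of any LoRA command from [Getting Started](#getting-started):

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --use_dora \
    ... # arguments (same as the LoRA examples in Getting Started)
```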
[24/02/15] We supported **block expansion** proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See `tests/llama_pro.py` for usage.

[24/02/05] Qwen1.5 (Qwen2 beta version) series models are supported in LLaMA-Factory. Check this [blog post](https://qwenlm.github.io/blog/qwen1.5/) for details.
<details><summary>Full Changelog</summary>

[24/01/18] We supported **agent tuning** for most models, equipping models with tool-using abilities by fine-tuning with `--dataset glaive_toolcall`.

[23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s implementation to boost LoRA tuning for the LLaMA, Mistral and Yi models. Try the `--use_unsloth` argument to activate the unsloth patch. It achieves a 1.7x speedup in our benchmark; check [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details.

[23/12/12] We supported fine-tuning the latest MoE model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)** in our framework. See hardware requirement [here](#hardware-requirement).

[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)** for Chinese mainland users. See [this tutorial](#use-modelscope-hub-optional) for usage.

[23/10/21] We supported the **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try the `--neftune_noise_alpha` argument to activate NEFTune, e.g., `--neftune_noise_alpha 5`.

[23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Try the `--shift_attn` argument to enable shift short attention.

[23/09/23] We integrated MMLU, C-Eval and CMMLU benchmarks in this repo. See [this example](#evaluation) to evaluate your models.

[23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Try the `--flash_attn` argument to enable FlashAttention-2 if you are using RTX 4090, A100 or H100 GPUs.

[23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Try the `--rope_scaling linear` argument in training and the `--rope_scaling dynamic` argument at inference to extrapolate the position embeddings.

[23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [this example](#dpo-training) to train your models.

[23/07/31] We supported **dataset streaming**. Try the `--streaming` and `--max_steps 10000` arguments to load your dataset in streaming mode.

[23/07/29] We released two instruction-tuned 13B models at Hugging Face. See these Hugging Face Repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft)) for details.

[23/07/18] We developed an **all-in-one Web UI** for training, evaluation and inference. Try `train_web.py` to fine-tune models in your Web browser. Thanks to [@KanadeSiina](https://github.com/KanadeSiina) and [@codemayq](https://github.com/codemayq) for their efforts in the development.

[23/07/09] We released **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹, an easy-to-use package for editing the factual knowledge of large language models efficiently. Please follow [FastEdit](https://github.com/hiyouga/FastEdit) if you are interested.

[23/06/29] We provided a **reproducible example** of training a chat model using instruction-following datasets; see [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft) for details.

[23/06/22] We aligned the [demo API](src/api_demo.py) with [OpenAI's](https://platform.openai.com/docs/api-reference/chat) format, so you can use the fine-tuned model in **arbitrary ChatGPT-based applications**.

[23/06/03] We supported quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). Try the `--quantization_bit 4/8` argument to work with quantized models.

</details>

## Supported Models
| Model | Model size | Default module | Template |
| -------------------------------------------------------- | --------------------------- | ----------------- | --------- |
| [Baichuan2](https://huggingface.co/baichuan-inc) | 7B/13B | W_pack | baichuan2 |
| [BLOOM](https://huggingface.co/bigscience/bloom) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| [BLOOMZ](https://huggingface.co/bigscience/bloomz) | 560M/1.1B/1.7B/3B/7.1B/176B | query_key_value | - |
| [ChatGLM3](https://huggingface.co/THUDM/chatglm3-6b) | 6B | query_key_value | chatglm3 |
| [DeepSeek (MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B | q_proj,v_proj | deepseek |
| [Falcon](https://huggingface.co/tiiuae) | 7B/40B/180B | query_key_value | falcon |
| [Gemma](https://huggingface.co/google) | 2B/7B | q_proj,v_proj | gemma |
| [InternLM2](https://huggingface.co/internlm) | 7B/20B | wqkv | intern2 |
| [LLaMA](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | q_proj,v_proj | - |
| [LLaMA-2](https://huggingface.co/meta-llama) | 7B/13B/70B | q_proj,v_proj | llama2 |
| [Mistral](https://huggingface.co/mistralai) | 7B | q_proj,v_proj | mistral |
| [Mixtral](https://huggingface.co/mistralai) | 8x7B | q_proj,v_proj | mistral |
| [Phi-1.5/2](https://huggingface.co/microsoft) | 1.3B/2.7B | q_proj,v_proj | - |
| [Qwen](https://huggingface.co/Qwen) | 1.8B/7B/14B/72B | c_attn | qwen |
| [Qwen1.5](https://huggingface.co/Qwen) | 0.5B/1.8B/4B/7B/14B/72B | q_proj,v_proj | qwen |
| [StarCoder2](https://huggingface.co/bigcode) | 3B/7B/15B | q_proj,v_proj | - |
| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | q_proj,v_proj | xverse |
| [Yi](https://huggingface.co/01-ai) | 6B/34B | q_proj,v_proj | yi |
| [Yuan](https://huggingface.co/IEITYuan) | 2B/51B/102B | q_proj,v_proj | yuan |
> [!NOTE]
> **Default module** is used for the `--lora_target` argument. You can use `--lora_target all` to specify all the available modules.
>
> For the "base" models, the `--template` argument can be chosen from `default`, `alpaca`, `vicuna`, etc. But make sure to use the **corresponding template** for the "chat" models.

Please refer to [constants.py](src/llmtuner/extras/constants.py) for a full list of the models we support.
## Supported Training Approaches
| Approach | Full-tuning | Freeze-tuning | LoRA | QLoRA |
| ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
| Pre-Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Supervised Fine-Tuning | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Reward Modeling | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| PPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| DPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
> [!NOTE]
> Use the `--quantization_bit 4` argument to enable QLoRA.
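
For example, a hedged sketch of a 4-bit QLoRA run, obtained by adding this single flag to any of the LoRA commands in [Getting Started](#getting-started):

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --quantization_bit 4 \
    ... # arguments (same as the LoRA examples in Getting Started)
```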
## Provided Datasets
<details><summary>Pre-training datasets</summary>

- [Wiki Demo (en)](data/wiki_demo.txt)
- [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)
- [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220)
- [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered)
- [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile)
- [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B)
- [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack)
- [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata)
</details>

<details><summary>Supervised fine-tuning datasets</summary>

- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
- [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [Self Cognition (zh)](data/self_cognition.json)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [ShareGPT (zh)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chinese-instruction-collection)
- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
- [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
- [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)
- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
- [UltraChat (en)](https://github.com/thunlp/UltraChat)
- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
- [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca)
- [SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
- [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa)
- [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa)
- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
- [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
- [Ad Gen (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
- [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)
- [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
- [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
- [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)
- [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)
- [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
- [Glaive Function Calling V2 (en)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
- [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
- [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)
- [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de)
- [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de)
- [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de)
- [Dolphin (de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de)
- [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de)
- [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de)
- [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de)
</details>

<details><summary>Preference datasets</summary>

- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
- [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [GPT-4 Generated Data (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
- [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)
</details>

Please refer to [data/README.md](data/README.md) for details.

Some datasets require confirmation before use, so we recommend logging in to your Hugging Face account with the following commands.
```bash
pip install --upgrade huggingface_hub
huggingface-cli login
```
## Requirement
| Mandatory | Minimum | Recommend |
| ------------ | ------- | --------- |
| python | 3.8 | 3.10 |
| torch | 1.13.1 | 2.2.1 |
| transformers | 4.37.2 | 4.38.2 |
| datasets | 2.14.3 | 2.17.1 |
| accelerate | 0.27.2 | 0.27.2 |
| peft | 0.9.0 | 0.9.0 |
| trl          | 0.7.11  | 0.7.11    |

| Optional     | Minimum | Recommend |
| ------------ | ------- | --------- |
| CUDA | 11.6 | 12.2 |
| deepspeed | 0.10.0 | 0.13.4 |
| bitsandbytes | 0.39.0 | 0.41.3 |
| flash-attn | 2.3.0 | 2.5.5 |
### Hardware Requirement
\* *estimated*

| Method | Bits |   7B  |  13B  |  30B  |   65B  |  8x7B  |
| ------ | ---- | ----- | ----- | ----- | ------ | ------ |
| Full | 16 | 160GB | 320GB | 600GB | 1200GB | 900GB |
| Freeze | 16 | 20GB | 40GB | 120GB | 240GB | 200GB |
| LoRA | 16 | 16GB | 32GB | 80GB | 160GB | 120GB |
| QLoRA | 8 | 10GB | 16GB | 40GB | 80GB | 80GB |
| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 32GB |
## Getting Started
### Data Preparation (optional)
Please refer to [data/README.md](data/README.md) for details about the format of the dataset files. You can either use a single `.json` file or a [dataset loading script](https://huggingface.co/docs/datasets/dataset_script) with multiple files to create a custom dataset.
> [!NOTE]
> Please update `data/dataset_info.json` to use your custom dataset. For the format of this file, please refer to `data/README.md`.
### Dependency Installation (optional)
```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
conda create -n llama_factory python=3.10
conda activate llama_factory
cd LLaMA-Factory
pip install -r requirements.txt
```
If you want to enable quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library, which supports CUDA 11.1 to 12.2.
```bash
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.40.0-py3-none-win_amd64.whl
```
To enable FlashAttention-2 on the Windows platform, you need to install the precompiled `flash-attn` library, which supports CUDA 12.1 to 12.2. Please download the corresponding version from [flash-attention](https://github.com/bdashore3/flash-attention/releases) based on your requirements.
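
For example, a hedged sketch of installing one of those wheels (the file name below is hypothetical; pick the release asset that matches your Python, CUDA and torch versions):

```bash
# Hypothetical wheel name for illustration only.
pip install flash_attn-2.5.5+cu122torch2.2.1cxx11abiFALSE-cp310-cp310-win_amd64.whl
```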
### Use ModelScope Hub (optional)
If you have trouble downloading models and datasets from Hugging Face, you can use LLaMA-Factory together with ModelScope in the following manner.
```bash
export USE_MODELSCOPE_HUB=1 # `set USE_MODELSCOPE_HUB=1` for Windows
```
Then you can train the corresponding model by specifying a model ID from the ModelScope Hub (find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models)).
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
--model_name_or_path modelscope/Llama-2-7b-ms \
... # arguments (same as below)
```
LLaMA Board also supports using the models and datasets on the ModelScope Hub.
```bash
CUDA_VISIBLE_DEVICES=0 USE_MODELSCOPE_HUB=1 python src/train_web.py
```
### Train on a single GPU
> [!IMPORTANT]
> If you want to train models on multiple GPUs, please refer to [Distributed Training](#distributed-training).
#### LLaMA Board GUI
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_web.py
```
#### Pre-Training
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage pt \
    --do_train \
    --model_name_or_path path_to_llama_model \
    --dataset wiki_demo \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir path_to_pt_checkpoint \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16
```
#### Supervised Fine-Tuning
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path path_to_llama_model \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir path_to_sft_checkpoint \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16
```
2023-08-18 01:51:55 +08:00
#### Reward Modeling
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage rm \
    --do_train \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_sft_checkpoint \
    --create_new_adapter \
    --dataset comparison_gpt4_en \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir path_to_rm_checkpoint \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-6 \
    --num_train_epochs 1.0 \
    --plot_loss \
    --fp16
```
#### PPO Training
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage ppo \
    --do_train \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_sft_checkpoint \
    --create_new_adapter \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --reward_model path_to_rm_checkpoint \
    --output_dir path_to_ppo_checkpoint \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --top_k 0 \
    --top_p 0.9 \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss \
    --fp16
```
> [!TIP]
> Use `--adapter_name_or_path path_to_sft_checkpoint,path_to_ppo_checkpoint` to infer the fine-tuned model.

> [!WARNING]
> Use `--per_device_train_batch_size=1` for LLaMA-2 models in fp16 PPO training.
#### DPO Training
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage dpo \
    --do_train \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_sft_checkpoint \
    --create_new_adapter \
    --dataset comparison_gpt4_en \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir path_to_dpo_checkpoint \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss \
    --fp16
```
> [!TIP]
> Use `--adapter_name_or_path path_to_sft_checkpoint,path_to_dpo_checkpoint` to infer the fine-tuned model.
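
For instance, a minimal sketch of chatting with the DPO-tuned model by stacking both adapters on top of the base model (the flags follow the inference examples below):

```bash
python src/cli_demo.py \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_sft_checkpoint,path_to_dpo_checkpoint \
    --template default \
    --finetuning_type lora
```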
### Distributed Training
#### Use Hugging Face Accelerate
```bash
accelerate config # configure the environment
accelerate launch src/train_bash.py # arguments (same as above)
```
<details><summary>Example config for LoRA training</summary>

```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
</details>
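
A common pattern (a convention, not something the framework requires) is to save the YAML above as `accelerate_config.yaml` and pass it to the launcher explicitly:

```bash
accelerate launch --config_file accelerate_config.yaml src/train_bash.py \
    ... # arguments (same as above)
```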
#### Use DeepSpeed
```bash
deepspeed --num_gpus 8 --master_port=9901 src/train_bash.py \
--deepspeed ds_config.json \
... # arguments (same as above)
```
<details><summary>Example config for full-parameter training with DeepSpeed ZeRO-2</summary>

```json
{
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "zero_allow_untested_optimizer": true,
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "initial_scale_power": 16,
    "loss_scale_window": 1000,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "zero_optimization": {
    "stage": 2,
    "allgather_partitions": true,
    "allgather_bucket_size": 5e8,
    "reduce_scatter": true,
    "reduce_bucket_size": 5e8,
    "overlap_comm": false,
    "contiguous_gradients": true
  }
}
```
</details>

### Merge LoRA weights and export model
```bash
python src/export_model.py \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_checkpoint \
    --template default \
    --finetuning_type lora \
    --export_dir path_to_export \
    --export_size 2 \
    --export_legacy_format False
```
> [!WARNING]
> Merging LoRA weights into a quantized model is not supported.

> [!TIP]
> Use `--export_quantization_bit 4` and `--export_quantization_dataset data/c4_demo.json` to quantize the model after merging the LoRA weights.
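
Putting those flags together, a hedged sketch of exporting a merged model quantized to 4 bits:

```bash
python src/export_model.py \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_checkpoint \
    --template default \
    --finetuning_type lora \
    --export_dir path_to_export \
    --export_quantization_bit 4 \
    --export_quantization_dataset data/c4_demo.json
```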
### Inference with OpenAI-style API
```bash
python src/api_demo.py \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_checkpoint \
    --template default \
    --finetuning_type lora
```
> [!TIP]
> Visit `http://localhost:8000/docs` for API documentation.
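
Because the server exposes an OpenAI-style interface, it can be queried with any OpenAI-compatible client. A minimal `curl` sketch (the `/v1/chat/completions` route and the payload shape are assumptions based on the OpenAI chat format; the port matches the default shown above):

```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "default",
          "messages": [{"role": "user", "content": "Hello, who are you?"}]
        }'
```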
### Inference with command line
```bash
python src/cli_demo.py \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_checkpoint \
    --template default \
    --finetuning_type lora
```
### Inference with web browser
```bash
python src/web_demo.py \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_checkpoint \
    --template default \
    --finetuning_type lora
```
### Evaluation
```bash
CUDA_VISIBLE_DEVICES=0 python src/evaluate.py \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_checkpoint \
    --template vanilla \
    --finetuning_type lora \
    --task mmlu \
    --split test \
    --lang en \
    --n_shot 5 \
    --batch_size 4
```
### Predict
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --do_predict \
    --model_name_or_path path_to_llama_model \
    --adapter_name_or_path path_to_checkpoint \
    --dataset alpaca_gpt4_en \
    --template default \
    --finetuning_type lora \
    --output_dir path_to_predict_result \
    --per_device_eval_batch_size 1 \
    --max_samples 100 \
    --predict_with_generate \
    --fp16
```
> [!WARNING]
> Use `--per_device_train_batch_size=1` for LLaMA-2 models in fp16 prediction.

> [!TIP]
> We recommend using `--per_device_eval_batch_size=1` and `--max_target_length 128` for 4/8-bit prediction.
## Projects using LLaMA Factory
1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223)
1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092)
1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816)
1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710)
1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2401.04319)
1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2401.07286)
1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904)
1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625)
1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176)
1. Yang et al. LaCo: Large Language Model Pruning via Layer Collapse. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187)
1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746)
1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801)
1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. 2024. [[arxiv]](https://arxiv.org/abs/2402.11809)
1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819)
1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204)
1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714)
1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for Astronomy, based on ChatGLM2-6B and Qwen-14B.
1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in Chinese legal domain, based on Baichuan-13B, is capable of retrieving and reasoning on legal knowledge.
1. **[Sunsimiao](https://github.com/thomas-yanxin/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.
1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for Chinese medical domain, based on LLaMA2-7B and Baichuan-13B.
1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI Personality large language models, capable of giving any LLM 16 different personality types based on different datasets and training methods.
> [!TIP]
> If you have a project that should be incorporated, please contact us via email or create a pull request.
## License
This repository is licensed under the [Apache-2.0 License](LICENSE).
Please follow the model licenses to use the corresponding model weights: [Baichuan2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [InternLM2](https://github.com/InternLM/InternLM#license) / [LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [LLaMA-2](https://ai.meta.com/llama/license/) / [Mistral](LICENSE) / [Phi-1.5/2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [StarCoder2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yuan](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
## Citation
If this work is helpful, please kindly cite as:
```bibtex
@Misc{llama-factory,
  title = {LLaMA Factory},
  author = {hiyouga},
  howpublished = {\url{https://github.com/hiyouga/LLaMA-Factory}},
  year = {2023}
}
```
## Acknowledgement
This repo benefits from [PEFT](https://github.com/huggingface/peft), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks for their wonderful work.
## Star History
![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/LLaMA-Factory&type=Date)