Commit Graph

972 Commits

Author SHA1 Message Date
hiyouga 6629087e12 update loader 2023-12-24 19:10:23 +08:00
hiyouga e44b82ee24 update patcher 2023-12-23 15:24:27 +08:00
hiyouga 0bbf7118df fix #1909 2023-12-23 14:42:20 +08:00
hiyouga 0ad86a4f62 update readme 2023-12-23 02:17:41 +08:00
hiyouga 779cfefb78 fix unsloth dtype 2023-12-23 01:59:49 +08:00
hiyouga 074745b170 fix dpo trainer 2023-12-23 01:51:55 +08:00
hiyouga 9a18a85639 llama board: add unsloth 2023-12-23 00:35:53 +08:00
hiyouga 7aad0b889d support unsloth 2023-12-23 00:14:33 +08:00
hoshi-hiyouga 315b8367cb Merge pull request #1953 from ShaneTian/model-load-bf16: Fix slow model initialization in bfloat16 dtype. 2023-12-22 17:29:54 +08:00
ShaneTian d032daa4bd Fix slow model initialization in bfloat16 dtype. 2023-12-22 16:27:28 +08:00
hiyouga ba69378841 fix param type 2023-12-21 17:33:01 +08:00
hiyouga 083355fc05 fix ds zero3 check 2023-12-21 01:19:22 +08:00
hiyouga af0194e6d9 match version 2023-12-20 22:17:35 +08:00
hoshi-hiyouga ba4d32bf59 Merge pull request #1932 from ShaneTian/main: Update transformers to 4.36.2 to resolve multi-node saving bug. 2023-12-20 22:13:28 +08:00
ShaneTian 390f0caf7f Update transformers to 4.36.2 to resolve bug when saving a checkpoint in the multi-node setting. 2023-12-20 22:00:41 +08:00
hiyouga 7910dbae92 Update wechat.jpg 2023-12-20 19:24:37 +08:00
hiyouga dec360d5ae fix stop words 2023-12-20 19:06:43 +08:00
hiyouga 5af8841c4f fix yi template #1895 2023-12-20 18:58:16 +08:00
hiyouga 624cc21281 improve quantization 2023-12-20 18:27:16 +08:00
hiyouga c4a3977ad7 add max_memory for gptq #1923 2023-12-20 18:15:17 +08:00
hiyouga 31165a9822 fix #1073 #1462 #1735 #1908 2023-12-20 17:15:40 +08:00
hiyouga ec1fe1daa9 optimize data loading logic 2023-12-20 16:15:41 +08:00
hiyouga c6abbbfe90 fix #1909 2023-12-20 16:11:07 +08:00
hiyouga f86857bd9e fix mixtral inference #1821 2023-12-20 15:11:15 +08:00
hiyouga 0c6ab7c75e fix #1900 2023-12-19 17:21:46 +08:00
hiyouga edb7d177c2 update readme 2023-12-18 22:29:45 +08:00
hiyouga a67a440644 add codegeex template 2023-12-18 19:52:35 +08:00
hiyouga 2df923540c add xverse-65B-2 model 2023-12-18 19:24:09 +08:00
hiyouga 709ac8870a add models 2023-12-18 19:09:31 +08:00
hiyouga 71a9c16171 fix tokenizer for Yi chat models #1617 #1875 2023-12-18 17:18:11 +08:00
hiyouga 2b4e5f0d32 update readme 2023-12-18 15:46:45 +08:00
hiyouga c46879575f fix llama board 2023-12-16 22:17:37 +08:00
hiyouga 870426ff70 fix #1742 2023-12-16 20:50:45 +08:00
hiyouga 7ae6919b9b add xverse-65b-chat model 2023-12-16 20:21:29 +08:00
hiyouga 328ad06bd4 set version 2023-12-16 20:17:51 +08:00
hiyouga a66186b872 add noisy mean initialization #1815 2023-12-16 19:47:51 +08:00
hiyouga b87c74289d support dpo-ftx 2023-12-16 19:21:41 +08:00
hiyouga 71389be37c support autogptq in llama board #246 2023-12-16 16:31:30 +08:00
hoshi-hiyouga 93f64ce9a8 Merge pull request #1868 from yhyu13/improve_hfargparser: Improve logging for unknown args 2023-12-16 16:06:09 +08:00
yhyu13 fc70a92cb6 Use llmtuner logger 2023-12-16 07:15:27 +00:00
yhyu13 26817143ff Improve logging for unknown args 2023-12-16 05:16:29 +00:00
hiyouga 3551171d49 update tips 2023-12-15 23:52:50 +08:00
hiyouga 439a26c276 fix #1770 2023-12-15 23:50:15 +08:00
hiyouga 3524aa1e58 support quantization in export model 2023-12-15 23:44:50 +08:00
hiyouga 87ef3f47b5 update dc link 2023-12-15 22:11:31 +08:00
hoshi-hiyouga e2bd597b3c Merge pull request #1864 from hiyouga/dev: Refactor hyper-parameters of adapters and model loader 2023-12-15 22:06:56 +08:00
hiyouga 00c77104f8 fix bug 2023-12-15 21:54:02 +08:00
hiyouga 9e509b99af fix bug 2023-12-15 21:49:26 +08:00
hiyouga 2740aa9cbb add configurer 2023-12-15 21:46:40 +08:00
hiyouga 0716f5e470 refactor adapter hparam 2023-12-15 20:53:11 +08:00