Commit Graph

2121 Commits

Author SHA1 Message Date
hoshi-hiyouga 3d39d74003 Update parser.py 2024-07-14 23:04:34 +08:00
hoshi-hiyouga 9d64507bd5 Update README.md 2024-07-14 21:27:04 +08:00
hiyouga f1d8d29bc3 add gemma test 2024-07-14 18:01:45 +08:00
hiyouga 173921419d fix test 2024-07-14 15:44:30 +08:00
hiyouga 88a20ba797 fix #4699
slow tokenizer for yi models
2024-07-14 15:34:22 +08:00
hiyouga d3c01552e0 tiny fix 2024-07-14 10:56:45 +08:00
hiyouga 2f6af73da2 fix gemma2 attention 2024-07-13 23:33:45 +08:00
hiyouga 7b19e99ed7 update workflows 2024-07-13 22:31:15 +08:00
hoshi-hiyouga 5da54deb50 Merge pull request #4781 from hzhaoy/fix-dockerfile-cuda
Fix cuda Dockerfile
2024-07-13 22:25:32 +08:00
hiyouga 6b48308ef9 fix #4792 2024-07-13 22:07:58 +08:00
hoshi-hiyouga 32699a82a6 Merge pull request #4804 from codemayq/fix-examples
tiny fix of examples
2024-07-13 20:49:13 +08:00
hoshi-hiyouga f618b80fa2 Update llava1_5.yaml 2024-07-13 20:30:06 +08:00
codingma 982a1cdd24 1. fix output_dir in llama3_lora_pretrain.yaml
2. add llava1_5.yaml for inference
2024-07-13 13:16:22 +08:00
hzhaoy 8bab99c582 tiny fix 2024-07-12 00:28:44 +08:00
hzhaoy 642c6d666f fix #4780 2024-07-12 00:25:48 +08:00
hzhaoy a8bf1abf0f fix #4779 2024-07-12 00:15:15 +08:00
codemayq 67040f149c update wechat_npu.jpg 2024-07-11 20:03:39 +08:00
hoshi-hiyouga 555194e150 Merge pull request #4700 from marko1616/patch-1
Fix Windows command preview
2024-07-10 13:51:50 +08:00
hoshi-hiyouga 40c3b88b68 Merge pull request #4746 from yzoaim/fix
fix src/llamafactory/train/callbacks.py
2024-07-10 13:32:49 +08:00
hoshi-hiyouga 39cd89ce17 Update callbacks.py 2024-07-10 13:32:20 +08:00
-.- cff89a2e89 fix src/llamafactory/train/callbacks.py 2024-07-10 12:05:51 +08:00
hiyouga 51942acee8 fix #4731 2024-07-10 11:32:36 +08:00
hiyouga fb0c400116 fix ppo trainer 2024-07-10 11:05:45 +08:00
hiyouga 2f09520c0d fix #4742 2024-07-09 23:24:24 +08:00
hiyouga 86b1594823 Update wechat.jpg 2024-07-09 09:25:11 +08:00
hoshi-hiyouga 563a27dab7 Merge pull request #4706 from T-Atlas/main
chore: Update vllm_engine.py to support vllm version >= 0.5.1
2024-07-07 15:50:38 +08:00
hoshi-hiyouga f84b007ebb Update packages.py 2024-07-07 15:48:29 +08:00
Lian Junhong 322663bf90 chore: Update vllm_engine.py to support vllm version >= 0.5.1 2024-07-07 15:08:12 +08:00
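For context on what supporting "vllm version >= 0.5.1" typically involves, here is a minimal sketch of gating behavior on the installed vLLM version at runtime. The helper name `is_vllm_at_least` is hypothetical, and this is not the repository's actual vllm_engine.py or packages.py code; it only assumes the standard `importlib.metadata` and `packaging` libraries are available.

```python
# Hypothetical sketch of a dependency version gate, not LLaMA-Factory's own code.
from importlib.metadata import PackageNotFoundError, version

from packaging.version import Version


def is_vllm_at_least(target: str = "0.5.1") -> bool:
    """Return True if the installed vllm package is at least `target`."""
    try:
        return Version(version("vllm")) >= Version(target)
    except PackageNotFoundError:
        return False


if __name__ == "__main__":
    print("vllm >= 0.5.1:", is_vllm_at_least("0.5.1"))
```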
hiyouga a15782cb9f fix #4705 2024-07-07 13:10:06 +08:00
marko1616 e0562521bb Update utils.py
In Windows, a multiline command should look like:
command --arg1 xxx `
--arg2 xxx `
2024-07-06 20:40:13 +08:00
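A minimal Python sketch of how such a platform-aware command preview might be assembled: PowerShell uses a trailing backtick for line continuation where POSIX shells use a backslash. The function name and the `llamafactory-cli train` example are illustrative only and are not the actual utils.py change.

```python
import os


def format_command_preview(args: dict) -> str:
    """Hypothetical sketch: render a multi-line CLI preview with the right
    continuation character (PowerShell backtick on Windows, backslash elsewhere)."""
    cont = "`" if os.name == "nt" else "\\"
    lines = [f"llamafactory-cli train {cont}"]
    pairs = [f"    --{key} {value}" for key, value in args.items()]
    # every line except the last carries a trailing continuation character
    lines += [f"{pair} {cont}" for pair in pairs[:-1]] + pairs[-1:]
    return "\n".join(lines)


print(format_command_preview({"stage": "sft", "model_name_or_path": "meta-llama/Meta-Llama-3-8B"}))
```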
hiyouga 53b1002fb7 add codegeex4, internlm2.5 2024-07-06 16:16:47 +08:00
hiyouga c9bb0757ec update pissa example 2024-07-06 15:47:32 +08:00
codingma 76f3bbcfc0 1. add custom eval dataset support
2. merge load dataset and split dataset function
2024-07-05 15:52:10 +08:00
hiyouga 9f33f1edf5 fix processors 2024-07-05 08:33:22 +08:00
hiyouga e43809bced fix #4683 2024-07-05 00:58:05 +08:00
hiyouga ed232311e8 fix #4674 2024-07-05 00:41:03 +08:00
hiyouga 226a9e563f Merge branch 'main' of https://github.com/hiyouga/LLaMA-Factory 2024-07-04 14:23:37 +08:00
hiyouga 1e27e8c776 fix #4677 2024-07-04 14:22:07 +08:00
hoshi-hiyouga 07d96d497c Merge pull request #4673 from hzhaoy/main
tiny fix
2024-07-04 10:40:41 +08:00
hzhaoy 738df47748 tiny fix 2024-07-04 10:20:28 +08:00
hiyouga 636bb9c1e6 update tests 2024-07-04 04:00:12 +08:00
hiyouga 0c699de39d tiny fix 2024-07-04 03:47:05 +08:00
hiyouga 44747cebd2 tiny fix 2024-07-04 03:02:23 +08:00
hiyouga b5d101e1bf fix data map for packing 2024-07-04 03:01:31 +08:00
hiyouga b03e4a74ba update wechat 2024-07-04 01:55:05 +08:00
hiyouga 6fd6aa4530 fix packing for eager/sdpa attn 2024-07-04 01:52:43 +08:00
hoshi-hiyouga 87d9b2d005 Merge pull request #4224 from chuan298/main
Implement efficient packing without cross-contamination attention
2024-07-04 01:18:54 +08:00
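The idea behind packing without cross-contamination is that several short sequences share one packed example, but attention stays block-diagonal so tokens never attend across sequence boundaries. Below is a minimal NumPy sketch of such a mask; the function name is hypothetical and the merged PR integrates this into the model's attention implementations rather than building dense boolean masks like this.

```python
import numpy as np


def packed_causal_mask(seq_lens: list[int]) -> np.ndarray:
    """Hypothetical sketch: causal attention mask for one packed example.
    True means "may attend"; the blocks keep each original sequence isolated."""
    total = sum(seq_lens)
    mask = np.zeros((total, total), dtype=bool)
    start = 0
    for length in seq_lens:
        # tokens attend only within their own original sequence...
        mask[start:start + length, start:start + length] = True
        start += length
    # ...and only to earlier (or same) positions, i.e. causally
    return np.tril(mask)


# three sequences of lengths 3, 2, and 4 packed into one 9-token example
print(packed_causal_mask([3, 2, 4]).astype(int))
```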
hiyouga cce7083024 update packing 2024-07-04 01:10:55 +08:00
hoshi-hiyouga a36e8f2dd5 Update packing.py 2024-07-03 23:36:01 +08:00
hiyouga c346f79f99 update func name 2024-07-03 23:29:33 +08:00