hoshi-hiyouga | 26082fc6c9 | fix #4917 | 2024-07-22 11:28:31 +08:00
hiyouga | c333e2f49d | tiny fix | 2024-07-22 00:06:03 +08:00
hiyouga | 4135e69406 | fix flashattn + packing | 2024-07-21 17:07:45 +08:00
hiyouga | ad71296a7c | update wechat | 2024-07-20 22:00:44 +08:00
huangpan.foo | 44e48e2b82 | update deepseek template | 2024-07-19 15:02:54 +08:00
hiyouga | 88c7fc1599 | set dev version | 2024-07-19 02:01:46 +08:00
hiyouga | 8f6995081c | update parser | 2024-07-19 01:36:39 +08:00
hiyouga | bbd5a64423 | release v0.8.3 | 2024-07-19 01:21:18 +08:00
hiyouga | cdb0f34f10 | fix test | 2024-07-19 01:17:37 +08:00
hiyouga | e80006795f | fix unittest | 2024-07-19 01:10:30 +08:00
hiyouga | 608de799a2 | add unittest | 2024-07-19 01:06:27 +08:00
hiyouga | 779aae83d2 | follow #4878 fix #4684 | 2024-07-18 22:06:12 +08:00
hoshi-hiyouga | 2516763d69 | Merge pull request #4878 from ly863/main: train only the last turn of the conversation | 2024-07-18 22:03:41 +08:00
Shiyu Zhang | 1e7b396ff2 | only train the last turn of the conversation | 2024-07-18 15:30:25 +08:00
hiyouga | beec77a089 | fix metrics #4786 | 2024-07-17 00:47:00 +08:00
hiyouga | d774b94f12 | support batch_eval_metrics, fix #4826 | 2024-07-17 00:33:00 +08:00
hiyouga | bda302fbfb | tiny fix | 2024-07-15 23:09:50 +08:00
hoshi-hiyouga | f2aaebdbde | Merge pull request #4822 from codemayq/test-ci: add github action check to ignore some test cases | 2024-07-15 23:07:55 +08:00
hoshi-hiyouga | 10289eab15 | Update test_template.py | 2024-07-15 23:04:39 +08:00
hoshi-hiyouga | da990f76b8 | Update test_template.py | 2024-07-15 23:00:27 +08:00
hoshi-hiyouga | 38bc411d42 | Merge pull request #4821 from codemayq/feature-eval-split: add "split" as suffix in eval task name | 2024-07-15 22:59:44 +08:00
hoshi-hiyouga | 91ba083f37 | Update llama3_lora_eval.yaml | 2024-07-15 22:55:12 +08:00
hoshi-hiyouga | 33420bab81 | Update test_template.py | 2024-07-15 22:55:05 +08:00
hoshi-hiyouga | 52a4256ad9 | Update test_template.py | 2024-07-15 22:52:25 +08:00
hiyouga | fd8cc49008 | fix #4820 | 2024-07-15 22:32:07 +08:00
hiyouga | b0aa321a4a | update wechat | 2024-07-15 22:02:52 +08:00
codingma | 32c3afdfa1 | add IN_GITHUB_ACTIONS | 2024-07-15 10:28:07 +08:00
codingma | 645211dc01 | 1. change the task name format; 2. delete split param in data_args.py | 2024-07-15 09:55:33 +08:00
hiyouga | 99ab7a8c1c | allow computing rouge in training | 2024-07-15 01:16:26 +08:00
hiyouga | 29ebcd75d5 | fix up | 2024-07-15 01:04:56 +08:00
hoshi-hiyouga | 15b399a82f | Merge pull request #4691 from codemayq/feature-suppot-eval-dataset: add eval dataset support | 2024-07-15 01:00:34 +08:00
hoshi-hiyouga | cba673f491 | Update data_args.py | 2024-07-15 00:56:03 +08:00
hoshi-hiyouga | df52fb05b1 | Update preprocess.py | 2024-07-15 00:55:36 +08:00
hoshi-hiyouga | 84e4047f8a | Update parser.py | 2024-07-15 00:55:21 +08:00
hoshi-hiyouga | 97a0e291c7 | Update data_utils.py | 2024-07-15 00:54:34 +08:00
hoshi-hiyouga | a5b809516e | Update loader.py | 2024-07-15 00:50:06 +08:00
hiyouga | a4ae3ab4ab | update test template | 2024-07-15 00:49:34 +08:00
hoshi-hiyouga | 3d39d74003 | Update parser.py | 2024-07-14 23:04:34 +08:00
hoshi-hiyouga | 9d64507bd5 | Update README.md | 2024-07-14 21:27:04 +08:00
hiyouga | f1d8d29bc3 | add gemma test | 2024-07-14 18:01:45 +08:00
hiyouga | 173921419d | fix test | 2024-07-14 15:44:30 +08:00
hiyouga | 88a20ba797 | fix #4699: slow tokenizer for yi models | 2024-07-14 15:34:22 +08:00
hiyouga | d3c01552e0 | tiny fix | 2024-07-14 10:56:45 +08:00
hiyouga | 2f6af73da2 | fix gemma2 attention | 2024-07-13 23:33:45 +08:00
hiyouga | 7b19e99ed7 | update workflows | 2024-07-13 22:31:15 +08:00
hoshi-hiyouga | 5da54deb50 | Merge pull request #4781 from hzhaoy/fix-dockerfile-cuda: Fix cuda Dockerfile | 2024-07-13 22:25:32 +08:00
hiyouga | 6b48308ef9 | fix #4792 | 2024-07-13 22:07:58 +08:00
hoshi-hiyouga | 32699a82a6 | Merge pull request #4804 from codemayq/fix-examples: tiny fix of examples | 2024-07-13 20:49:13 +08:00
hoshi-hiyouga | f618b80fa2 | Update llava1_5.yaml | 2024-07-13 20:30:06 +08:00
codingma | 982a1cdd24 | 1. fix output_dir in llama3_lora_pretrain.yaml; 2. add llava1_5.yaml for inference | 2024-07-13 13:16:22 +08:00