Author | Commit | Message | Date
BUAADreamer | 7be7972f28 | add full parameter finetuning of mllm | 2024-05-11 13:11:00 +08:00
BUAADreamer | 8b997e32fb | add push processor to hub | 2024-05-09 14:05:19 +08:00
BUAADreamer | 83f2f0de1d | Merge branch 'hiyouga:main' into main | 2024-05-09 13:45:43 +08:00
BUAADreamer | ef33856380 | add mllm export | 2024-05-08 22:50:42 +08:00
hiyouga | d9cdddd19c | fix #3625 | 2024-05-08 17:12:56 +08:00
hiyouga | 48ee46dac1 | add llama3 chinese chat | 2024-05-08 17:10:03 +08:00
hiyouga | 10ab83f4c4 | add deepseek moe 236B | 2024-05-08 16:37:54 +08:00
BUAADreamer | 0ca1d1967d | modify export model | 2024-05-08 10:36:36 +08:00
hiyouga | 0f8f7d3b90 | fix #3560 | 2024-05-07 19:03:35 +08:00
hiyouga | b0888262e3 | fix #3602 | 2024-05-07 17:50:27 +08:00
hiyouga | 09f3ef1de4 | fix stop param | 2024-05-07 00:41:04 +08:00
hoshi-hiyouga | bcf7ec5ceb | Merge pull request #3527 from zhaonx/dev "add support for vllm api stop parameter" | 2024-05-07 00:37:49 +08:00
hoshi-hiyouga | 17d0005b8c | Update vllm_engine.py | 2024-05-07 00:37:05 +08:00
hoshi-hiyouga | f32eefae3d | Update generating_args.py | 2024-05-07 00:28:16 +08:00
hoshi-hiyouga | 7ae7ae64f0 | Update generating_args.py | 2024-05-07 00:27:56 +08:00
hiyouga | a153039380 | fix gradio args | 2024-05-06 23:33:06 +08:00
hiyouga | 34d33e2257 | update docs | 2024-05-06 21:47:00 +08:00
zhaonx96 | 80645751bc | "add stop parameter in chat.py" | 2024-05-06 10:10:00 +08:00
zhaonx96 | 1abd55dd59 | Merge branch 'main' of https://github.com/zhaonx/LLaMA-Factory into dev | 2024-05-06 10:09:00 +08:00
hiyouga | bd095eeb73 | add version and help to cli | 2024-05-05 02:44:35 +08:00
hiyouga | af596988b1 | update webui | 2024-05-05 00:17:54 +08:00
hiyouga | e984ba3167 | remove empty stream response | 2024-05-04 16:13:52 +08:00
hiyouga | 941924fdbd | fix async stream api response | 2024-05-04 16:11:18 +08:00
hiyouga | ed8f8be752 | update api and support abort eval in webui | 2024-05-04 15:59:15 +08:00
hiyouga | 9d2ce57345 | update readme and webui launch | 2024-05-04 00:43:02 +08:00
hiyouga | 24cc93ab15 | fix eval in webui | 2024-05-04 00:19:19 +08:00
hiyouga | 510e64ee70 | fix webui resume | 2024-05-03 23:15:19 +08:00
hiyouga | 3010154adb | fix slow op in dpo/orpo trainer | 2024-05-03 23:06:52 +08:00
hiyouga | 9585838ebe | fix callback log multigpu #3559 | 2024-05-03 21:24:27 +08:00
hiyouga | 5e6f808e3c | enable tqdm in webui | 2024-05-03 04:42:50 +08:00
hiyouga | 17d2e5147e | fix gen_args | 2024-05-03 04:24:50 +08:00
hiyouga | 530f6b49bb | fix colab gradio | 2024-05-03 03:54:46 +08:00
hiyouga | 245fe47ece | update webui and add CLIs | 2024-05-03 02:58:23 +08:00
hiyouga | 9433c8c215 | fix badam configs | 2024-05-02 02:47:04 +08:00
hoshi-hiyouga | dcd53cb89a | Update train.py | 2024-05-02 02:21:27 +08:00
zhaonx | 42edc81585 | "add support for vllm api stop parameter" | 2024-04-30 17:17:09 +08:00
codingma | 26f7170393 | support BAdam in WebUI | 2024-04-28 11:31:34 +08:00
hiyouga | b3e33c703e | fix llava rlhf | 2024-04-28 03:01:49 +08:00
hiyouga | 4dbbce21d5 | add models to 0.7.0 | 2024-04-28 01:50:30 +08:00
hiyouga | 168f56683a | release v0.7.0 | 2024-04-26 23:18:00 +08:00
hiyouga | 375b25131b | support Qwen1.5 110B | 2024-04-26 19:59:22 +08:00
hiyouga | fc67b736ba | fix llava qlora | 2024-04-26 18:00:23 +08:00
hiyouga | cd3a960f81 | add llava to llamaboard | 2024-04-26 06:41:35 +08:00
hiyouga | 27ba1b63ce | update readme | 2024-04-26 05:44:30 +08:00
hiyouga | e057c8de48 | support mllm hf inference | 2024-04-26 05:34:58 +08:00
hoshi-hiyouga | 7f3bd35c0e | Update preprocess.py | 2024-04-26 04:10:28 +08:00
hoshi-hiyouga | fcd09112d5 | Update aligner.py | 2024-04-26 03:48:34 +08:00
hoshi-hiyouga | f62cadb258 | Update parser.py | 2024-04-26 03:35:39 +08:00
hoshi-hiyouga | 3408af236f | Update loader.py | 2024-04-26 03:33:07 +08:00
hoshi-hiyouga | e16f128dc3 | Update workflow.py | 2024-04-26 03:29:12 +08:00