Katehuuh
|
19a85bf52d
|
Update README_zh.md
Add Projects Nekochu/Luminia-13B-v3
|
2024-05-07 06:28:48 +02:00 |
Katehuuh
|
984f7fbbf7
|
Update README.md
Add Projects Nekochu/Luminia-13B-v3
|
2024-05-07 06:23:36 +02:00 |
hiyouga
|
8e09e20ece
|
update readme
|
2024-05-07 06:19:29 +08:00 |
hiyouga
|
09f3ef1de4
|
fix stop param
|
2024-05-07 00:41:04 +08:00 |
hoshi-hiyouga
|
bcf7ec5ceb
|
Merge pull request #3527 from zhaonx/dev
"add support for vllm api stop parameter"
|
2024-05-07 00:37:49 +08:00 |
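For context on the stop-parameter support merged above: a minimal sketch, assuming the feature forwards an API-level "stop" field into vLLM's SamplingParams (the helper name and surrounding request handling are illustrative, not LLaMA-Factory's actual code):

```python
# Minimal sketch (assumed names): forwarding an API "stop" field to vLLM.
# vLLM's SamplingParams accepts `stop` as a string or list of stop strings.
from vllm import SamplingParams

def build_sampling_params(temperature: float, max_tokens: int, stop=None) -> SamplingParams:
    """Build vLLM sampling parameters, including optional stop strings."""
    return SamplingParams(
        temperature=temperature,
        max_tokens=max_tokens,
        stop=stop or [],  # generation halts when any of these strings is produced
    )

params = build_sampling_params(temperature=0.7, max_tokens=256, stop=["</s>", "Observation:"])
```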
hoshi-hiyouga
|
17d0005b8c
|
Update vllm_engine.py
|
2024-05-07 00:37:05 +08:00 |
hoshi-hiyouga
|
f32eefae3d
|
Update generating_args.py
|
2024-05-07 00:28:16 +08:00 |
hoshi-hiyouga
|
7ae7ae64f0
|
Update generating_args.py
|
2024-05-07 00:27:56 +08:00 |
hoshi-hiyouga
|
d6ca7853fa
|
Merge pull request #3588 from ZeyuTeng96/patch-1
update hf_hub_url for nectar_rm in dataset_info
|
2024-05-07 00:06:11 +08:00 |
hoshi-hiyouga
|
c3910ab98a
|
Update dataset_info.json
|
2024-05-07 00:05:45 +08:00 |
hiyouga
|
f50c365871
|
update readme
|
2024-05-06 23:34:59 +08:00 |
hiyouga
|
a153039380
|
fix gradio args
|
2024-05-06 23:33:06 +08:00 |
hoshi-hiyouga
|
c8cd00bec6
|
Merge pull request #3596 from hiyouga/dev_doc
Add CLI document
|
2024-05-06 23:10:38 +08:00 |
hiyouga
|
047313f48e
|
update examples
|
2024-05-06 23:07:55 +08:00 |
hiyouga
|
f02f87c6fb
|
update example docs
|
2024-05-06 22:51:02 +08:00 |
hiyouga
|
34d33e2257
|
update docs
|
2024-05-06 21:47:00 +08:00 |
ZeyuTeng96
|
044af36442
|
update hf_hub_url for nectar_rm in dataset_info
Hi there,
I cannot find "mlinmg/RLAIF-Nectar" on the Hugging Face Hub; it seems to have been renamed to "AstraMindAI/RLAIF-Nectar", so this PR updates the URL.
See: https://huggingface.co/datasets/AstraMindAI/RLAIF-Nectar
|
2024-05-06 16:44:50 +08:00 |
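To illustrate the kind of change described above: dataset_info.json maps a dataset name to its Hugging Face Hub repo id, so renaming the upstream dataset only requires updating that field. The exact fields below are assumptions shown as a Python dict, not the repository's actual entry:

```python
# Hypothetical shape of the updated registry entry (fields are assumed).
nectar_rm_entry = {
    "nectar_rm": {
        "hf_hub_url": "AstraMindAI/RLAIF-Nectar",  # previously "mlinmg/RLAIF-Nectar"
        "ranking": True,  # assumed flag marking a pairwise-preference dataset for reward modeling
    }
}
```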
zhouwei
|
28ae947161
|
Significantly improve training efficiency on the Ascend 910A by fully utilizing the NPU (Neural Processing Unit) through torch_npu, a PyTorch library optimized for NPUs, yielding roughly a tenfold speedup.
|
2024-05-06 13:29:59 +08:00 |
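As a rough illustration of the torch_npu usage the commit above refers to (a sketch under assumed device names, not the repository's training code): importing torch_npu registers the "npu" device type with PyTorch, after which models and tensors can be moved to the Ascend NPU much like to a CUDA device.

```python
# Sketch only: running a module on an Ascend NPU via torch_npu.
# Requires the Ascend CANN toolkit and the torch_npu package to be installed.
import torch
import torch_npu  # registers the "npu" device type with PyTorch

device = torch.device("npu:0" if torch_npu.npu.is_available() else "cpu")
model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)
y = model(x)  # executes on the NPU when one is available
```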
zhaonx96
|
80645751bc
|
add stop parameter in chat.py
|
2024-05-06 10:10:00 +08:00 |
zhaonx96
|
1abd55dd59
|
Merge branch 'main' of https://github.com/zhaonx/LLaMA-Factory into dev
|
2024-05-06 10:09:00 +08:00 |
hoshi-hiyouga
|
a34f526f10
|
Merge pull request #3578 from pha123661/main
Fix badam example argument
|
2024-05-05 23:41:58 +08:00 |
Oscar
|
eeb415f6fa
|
Fix outdated badam example argument
|
2024-05-05 23:35:19 +08:00 |
codingma
|
845d5acd03
|
update wechat
|
2024-05-05 15:31:47 +08:00 |
hiyouga
|
bd095eeb73
|
add version and help to cli
|
2024-05-05 02:44:35 +08:00 |
hiyouga
|
177604fb6b
|
fix eval scripts
|
2024-05-05 00:53:07 +08:00 |
hiyouga
|
af596988b1
|
update webui
|
2024-05-05 00:17:54 +08:00 |
hiyouga
|
c1a53a0deb
|
update scripts
|
2024-05-04 23:05:17 +08:00 |
hiyouga
|
25aeaae51b
|
add avg ppl
|
2024-05-04 22:35:31 +08:00 |
hiyouga
|
76a077bdce
|
update ppl script
|
2024-05-04 22:13:14 +08:00 |
hiyouga
|
3a666832c1
|
add cal_ppl script
|
2024-05-04 22:02:25 +08:00 |
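For reference on what a perplexity script like the cal_ppl one added above typically computes (a generic sketch, not the repository's actual script): perplexity is the exponential of the average token-level cross-entropy loss, and an average perplexity can then be reported across samples.

```python
# Generic per-sample perplexity with a causal LM; model id is a placeholder.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
model.eval()

def perplexity(text: str) -> float:
    """exp(mean cross-entropy) of the text under the model."""
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        # passing labels makes the model return the mean token-level loss
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

texts = ["Hello world.", "Perplexity measures how well a model predicts text."]
avg_ppl = sum(perplexity(t) for t in texts) / len(texts)
print(f"average perplexity: {avg_ppl:.2f}")
```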
hiyouga
|
57a39783d1
|
update readme
|
2024-05-04 17:01:21 +08:00 |
hiyouga
|
e984ba3167
|
remove empty stream response
|
2024-05-04 16:13:52 +08:00 |
hiyouga
|
941924fdbd
|
fix async stream api response
|
2024-05-04 16:11:18 +08:00 |
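The two streaming commits above concern the OpenAI-style streaming API. A minimal sketch of the general pattern involved, with assumed names and payload shape rather than the repository's actual handler: an async generator yields SSE chunks and skips empty deltas so clients never receive blank messages.

```python
# Illustrative async SSE stream that drops empty chunks (names and payload are assumptions).
import asyncio
import json
from typing import AsyncGenerator

async def generate_tokens() -> AsyncGenerator[str, None]:
    """Stand-in for a model's streaming output; may yield empty strings."""
    for piece in ["Hello", "", " ", "world", ""]:
        await asyncio.sleep(0.01)
        yield piece

async def sse_stream() -> AsyncGenerator[str, None]:
    async for delta in generate_tokens():
        if not delta:  # skip empty deltas instead of emitting blank chunks
            continue
        chunk = {"choices": [{"delta": {"content": delta}}]}
        yield f"data: {json.dumps(chunk)}\n\n"
    yield "data: [DONE]\n\n"

async def main() -> None:
    async for event in sse_stream():
        print(event, end="")

asyncio.run(main())
```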
hiyouga
|
ed8f8be752
|
update api and support abort eval in webui
|
2024-05-04 15:59:15 +08:00 |
hiyouga
|
d4283bb6bf
|
update readme
|
2024-05-04 00:43:53 +08:00 |
hiyouga
|
9d2ce57345
|
update readme and webui launch
|
2024-05-04 00:43:02 +08:00 |
hiyouga
|
1409654cef
|
update readme
|
2024-05-04 00:31:02 +08:00 |
hiyouga
|
24cc93ab15
|
fix eval in webui
|
2024-05-04 00:19:19 +08:00 |
hiyouga
|
510e64ee70
|
fix webui resume
|
2024-05-03 23:15:19 +08:00 |
hiyouga
|
3010154adb
|
fix slow op in dpo/orpo trainer
|
2024-05-03 23:06:52 +08:00 |
hiyouga
|
9585838ebe
|
fix callback log multigpu #3559
|
2024-05-03 21:24:27 +08:00 |
hiyouga
|
5e6f808e3c
|
enable tqdm in webui
|
2024-05-03 04:42:50 +08:00 |
hiyouga
|
17d2e5147e
|
fix gen_args
|
2024-05-03 04:24:50 +08:00 |
hiyouga
|
530f6b49bb
|
fix colab gradio
|
2024-05-03 03:54:46 +08:00 |
hiyouga
|
245fe47ece
|
update webui and add CLIs
|
2024-05-03 02:58:23 +08:00 |
hiyouga
|
39e964a97a
|
Update prepare.sh
|
2024-05-02 17:16:02 +08:00 |
hiyouga
|
9433c8c215
|
fix badam configs
|
2024-05-02 02:47:04 +08:00 |
hoshi-hiyouga
|
f1c0eedeb3
|
Merge pull request #3487 from codemayq/main
support BAdam in WebUI
|
2024-05-02 02:38:01 +08:00 |
hoshi-hiyouga
|
dcd53cb89a
|
Update train.py
|
2024-05-02 02:21:27 +08:00 |
hoshi-hiyouga
|
282b5d5b1f
|
Merge pull request #3490 from khazic/main
Added the second sharegpt format
|
2024-05-02 02:15:23 +08:00 |