Commit Graph

2167 Commits

Author SHA1 Message Date
wql ba90bf1625 chore: sort run_once 2024-09-05 14:41:48 +08:00
wql 36840f0310 fix: fix baichuan2 path 2024-09-05 14:35:47 +08:00
wql 190fddf27d fix: small change 2024-09-05 13:37:17 +08:00
wql 64044380bd fix: add not supported model err msg 2024-09-05 13:21:38 +08:00
wql 8162a54aa5 fix:small fix 2024-09-05 13:10:34 +08:00
wql 95b4b493e6 chore: add echo 2024-09-05 13:09:43 +08:00
wql ceb01459fe fix: fix bug 2024-09-05 13:04:54 +08:00
wql cc99691cf4 fix: fix file type 2024-09-05 13:03:22 +08:00
wql 4058fd7d64 fix: remove no use import 2024-09-05 13:01:49 +08:00
wql 846fb7bfef fix: fix space 2024-09-05 12:59:52 +08:00
wql f23e9d417e fix: fix space 2024-09-05 12:59:02 +08:00
wql 0cf37e5ec1 fix: fix para 2024-09-05 12:57:41 +08:00
wql ae308991fb fix: fix first line 2024-09-05 12:54:15 +08:00
wql 3e548489ed feat: done easy run 2024-09-05 11:28:19 +08:00
wql fa9a9007f9 chore: add lora sft and predict template yaml file 2024-09-04 16:52:15 +08:00
wql a61372ee0f chore: change gpu status save folder 2024-09-04 16:24:15 +08:00
wql 91db09b6c7 feat: add gpu_status.py 2024-09-04 16:15:17 +08:00
wql e7a7a22f19 feat: add save_state in prediction 2024-09-04 16:08:57 +08:00
wql 7af7b40955 chore: change docker to ubuntu22.04 2024-09-04 09:26:09 +08:00
wql d2a4d909c1 chore: change docker env to ubuntu 20.04 2024-08-29 11:19:48 +08:00
wql e0a4a0e451 chore: change pip to tuna source in docker 2024-08-29 09:33:31 +08:00
hiyouga 0f5a0f64f7 update wechat 2024-08-27 12:55:23 +08:00
hiyouga d14edd350d add extra requires 2024-08-27 12:52:12 +08:00
hiyouga f6ae4e75dd tiny fix 2024-08-27 12:49:32 +08:00
hoshi-hiyouga dbe886ae5c Merge pull request #5237 from marko1616/patch-1 (Fix mllm api) 2024-08-27 12:24:43 +08:00
marko1616 df8d5b6985 ruff pass. 2024-08-27 11:30:16 +08:00
marko1616 1545684c3f Update chat.py 2024-08-27 11:27:56 +08:00
hiyouga 72bc8f0111 support liger kernel 2024-08-27 11:20:14 +08:00
marko1616 3a28521710 Force re check. 2024-08-23 14:43:18 +08:00
marko1616 8eb2092921 Update chat.py 2024-08-22 12:24:34 +08:00
marko1616 a4f1de9d82 Update chat.py 2024-08-22 12:14:34 +08:00
hoshi-hiyouga 36039b0fe0 Merge pull request #5230 from MengqingCao/image ([NPU] Update npu base image) 2024-08-21 22:13:07 +08:00
hiyouga 8907150c1b update wechat 2024-08-21 22:07:34 +08:00
MengqingCao b3f4acd1b4 update npu base image 2024-08-21 09:12:38 +00:00
hiyouga c8b4c7fee5 tiny fix 2024-08-20 00:10:52 +08:00
hoshi-hiyouga 15be296347 Merge pull request #5156 from YeQiuO/main (fix Llama-template's system prompt bug) 2024-08-20 00:09:03 +08:00
hoshi-hiyouga ec72eeca52 Update template.py 2024-08-20 00:03:33 +08:00
hoshi-hiyouga da335d42c3 Merge pull request #5163 from liu-zichen/fix_ppo_optim (fix lr not change) 2024-08-19 23:56:24 +08:00
hoshi-hiyouga f59c9bef31 Merge pull request #5185 from chenhuiyu/feature/add-sailorllm-template (Add SailorLLM template) 2024-08-19 23:51:49 +08:00
hoshi-hiyouga d39f4a62d3 Merge pull request #5188 from Zxilly/main (fix: report correct device count for intel xpu) 2024-08-19 23:51:39 +08:00
hoshi-hiyouga 5d5bfc83e6 Merge pull request #5193 from Ricardo-L-C/main (_is_bf16_available judgment supports npu) 2024-08-19 23:40:59 +08:00
hoshi-hiyouga 5f3300ec5d Update template.py 2024-08-19 23:40:16 +08:00
hiyouga 3804ddec9e update readme 2024-08-19 23:32:04 +08:00
Ricardo 384ab8db84 _is_bf16_available judgment supports npu 2024-08-16 02:58:22 +00:00
Zxilly dc36fcc3de fix: report correct device count for intel xpu 2024-08-15 08:30:43 +00:00
Huiyu Chen 2502833a77 Add SailorLLM template 2024-08-15 15:10:14 +08:00
liu-zichen ddee718b31 fix lr not change 2024-08-13 16:33:34 +08:00
codingma 625a0e32c4 add tutorial and doc links 2024-08-13 16:13:10 +08:00
codingma 5b9d99ebc6 update wechat.jpg 2024-08-13 16:12:36 +08:00
“Wzw” bcbbf45063 fix Llama-template's system prompt bug 2024-08-12 19:22:12 +08:00