| Author | Commit | Message | Date |
| ------ | ------ | ------- | ---- |
| wql | 140d7d533c | train: prepare for batch run | 2024-08-22 21:35:59 +08:00 |
| wql | d217df9443 | fix: fix small bug | 2024-08-22 16:22:42 +08:00 |
| wql | 72df1af06e | feat: update gpu status code | 2024-08-22 16:10:27 +08:00 |
| wql | d4cea6f9ac | chore: change yaml | 2024-08-22 15:12:08 +08:00 |
| wql | fdae778fa7 | train: done test1 | 2024-08-22 06:46:44 +00:00 |
| wql | 7f5b10d654 | chore: change yaml and git ignore | 2024-08-22 13:27:24 +08:00 |
| wql | c2b4a2db78 | change: change test1 yaml | 2024-08-22 13:21:55 +08:00 |
| wql | 8eb67cb9f2 | change: change yaml | 2024-08-22 11:21:19 +08:00 |
| wql | f47d38717f | change: change test yaml | 2024-08-22 11:09:58 +08:00 |
| wql | 29a4e49dfe | add: add test yaml | 2024-08-22 10:46:22 +08:00 |
| wql | 1c0f790c9b | update: git ignore | 2024-08-22 10:41:52 +08:00 |
| wql | 1cae7dbe8a | train: run baichuan once | 2024-08-22 02:22:17 +00:00 |
| wql | bfa2e166d7 | change: new train yaml | 2024-08-22 09:35:51 +08:00 |
| wql | 429c1cd574 | train: run qwen single once | 2024-08-21 06:23:09 +00:00 |
| wql | cf1107fbfa | chore: add baichuan fix file | 2024-08-21 13:44:02 +08:00 |
| wql | bd971173c9 | Merge branch 'main' of https://osredm.com/p04798526/LLaMA-Factory-Mirror | 2024-08-21 13:15:59 +08:00 |
| wql | 3fdf2cb71a | chore: change model_name_or_path | 2024-08-21 13:15:54 +08:00 |
| wql | 4e55ae0a1a | chore: include previous inference log | 2024-08-21 02:52:31 +00:00 |
| wql | 9525725e56 | train: run llama and chatglm | 2024-08-21 01:19:12 +00:00 |
| wql | 83c41567b3 | Merge branch 'main' of https://osredm.com/p04798526/LLaMA-Factory-Mirror | 2024-08-20 17:39:26 +08:00 |
| wql | 6e99c064ad | change: gpu status | 2024-08-20 17:38:32 +08:00 |
| wql | 93c80971dc | train: run chatglm | 2024-08-20 09:32:46 +00:00 |
| wql | 8793d13920 | update: update gitignore | 2024-08-20 17:31:35 +08:00 |
| wql | 9411239d8d | change: change for batch run | 2024-08-20 17:25:47 +08:00 |
| wql | af17ae5fb4 | transfer: transfer chatglm file | 2024-08-20 16:28:47 +08:00 |
| wql | 40801b188c | change: change yaml | 2024-08-20 16:02:39 +08:00 |
| wql | aa8eb9bff4 | change: change yaml | 2024-08-20 15:52:44 +08:00 |
| wql | d8a730dcfe | change: change yaml | 2024-08-20 15:48:16 +08:00 |
| wql | 07b328ee23 | feat: add finish add log and gpu status | 2024-08-20 14:31:29 +08:00 |
| wql | abf6ab0743 | Merge branch 'main' of https://osredm.com/p04798526/LLaMA-Factory-Mirror | 2024-08-20 14:30:32 +08:00 |
| wql | 0ae3f28774 | test: test token and gpu status | 2024-08-20 06:14:38 +00:00 |
| wql | 39e97a5c5f | Merge branch 'main' of https://osredm.com/p04798526/LLaMA-Factory-Mirror | 2024-08-20 13:52:46 +08:00 |
| wql | 0ab6f2836b | feat: add cur_time to log | 2024-08-20 10:35:46 +08:00 |
| wql | 368a593cde | Merge branch 'main' of https://osredm.com/p04798526/LLaMA-Factory-Mirror | 2024-08-20 01:49:06 +00:00 |
| wql | c93a5b8b8f | add: add include_num_input_tokens_seen | 2024-08-20 09:42:58 +08:00 |
| wql | 36d18312d3 | add: add test results | 2024-08-19 09:12:34 +00:00 |
| wql | d3f91c8e2f | add: add test yaml | 2024-08-19 16:29:03 +08:00 |
| wql | 7f0b91db6b | change: change yaml | 2024-08-19 13:34:11 +08:00 |
| wql | 25b8dd41f4 | add: add test result | 2024-08-19 05:08:37 +00:00 |
| wql | d7d54df525 | change: change batch run | 2024-08-19 10:48:31 +08:00 |
| wql | 7c5d56ca26 | change: change yaml | 2024-08-19 10:41:43 +08:00 |
| wql | 3981d608f5 | add: add max step 1000 result | 2024-08-19 02:39:02 +00:00 |
| wql | 746ceac74a | train: test train | 2024-08-19 09:57:13 +08:00 |
| wql | 539d4d08f1 | add: add results for llama2 lora and inference | 2024-08-19 01:24:28 +00:00 |
| wql | 40b5fec934 | change: add comment | 2024-08-18 23:54:18 +08:00 |
| wql | f5b14a46be | add: add batch run scripts | 2024-08-18 14:02:58 +08:00 |
| wql | a0569cadda | test: test llama3 example | 2024-08-18 11:11:07 +08:00 |
| wql | 2469598eb4 | chore: add help.txt | 2024-08-16 11:04:18 +00:00 |
| wql | 907282d2d7 | add: inference result | 2024-08-15 03:26:36 +00:00 |
| wql | 0e8b03b638 | change: change predict yaml | 2024-08-15 11:18:12 +08:00 |