Compare commits


23 Commits
FM_9G ... FM_9G

| Author | SHA1 | Message | Date |
|---|---|---|---|
| p83651209 | 1033ad4a75 | Update README.md | 2024-11-12 13:41:57 +08:00 |
| p83651209 | b63fcef8d2 | Update README.md | 2024-11-12 12:22:21 +08:00 |
| p83651209 | db10b9114b | Update README.md | 2024-11-12 11:16:38 +08:00 |
| p83651209 | 58a7967a98 | Update inference.py | 2024-11-03 20:04:38 +08:00 |
| p83651209 | cd1bdcf117 | Add model_final_url.txt | 2024-11-03 13:30:04 +08:00 |
| p83651209 | 4c8196bc84 | Delete model_final | 2024-11-03 13:29:30 +08:00 |
| p83651209 | 124160cb1e | Add model_final | 2024-11-03 12:49:21 +08:00 |
| p83651209 | b0406a26bb | Update README.md | 2024-11-02 17:48:43 +08:00 |
| p83651209 | cc5b9a5ad8 | Update README.md | 2024-11-02 17:00:02 +08:00 |
| p83651209 | 9441c81244 | Update README.md | 2024-11-02 16:54:13 +08:00 |
| p83651209 | ed4e38ea65 | ADD file via upload | 2024-11-02 16:52:22 +08:00 |
| p83651209 | 0ff927cf92 | Delete LLaMA-Factory.zip | 2024-11-02 16:51:52 +08:00 |
| p83651209 | 8807057563 | Update README.md | 2024-11-02 16:34:20 +08:00 |
| p83651209 | 5b57b159f2 | ADD file via upload | 2024-11-02 16:32:04 +08:00 |
| p83651209 | 7519028f67 | Delete sft_code.sh | 2024-11-02 16:30:05 +08:00 |
| p83651209 | 2f35baea6c | ADD file via upload | 2024-11-02 16:28:39 +08:00 |
| p83651209 | b6a00ea9ea | ADD file via upload | 2024-11-02 16:18:55 +08:00 |
| p83651209 | 5858ade20b | Update README.md | 2024-11-02 16:18:09 +08:00 |
| p83651209 | defb7a8bdd | ADD file via upload | 2024-11-02 16:15:33 +08:00 |
| p83651209 | 2b55eb9f69 | Update README.md | 2024-11-02 16:09:56 +08:00 |
| p83651209 | 93ba858875 | ADD file via upload | 2024-11-02 14:51:31 +08:00 |
| p83651209 | 6b868588e1 | Update README.md | 2024-11-02 14:49:03 +08:00 |
| p83651209 | 9cf01675dc | Update README.md | 2024-11-02 13:53:14 +08:00 |
6 changed files with 4568 additions and 63 deletions

LLaMA-Factory.zip (binary, new file)

Binary file not shown.

README.md
@@ -1,67 +1,27 @@
# Jiuge (九格) General-Purpose Foundation Model
## Introduction
The Jiuge large model was developed under the leadership of Qiyuan Lab (启元实验室), jointly with Tsinghua University, Harbin Institute of Technology, the Institute of Computing Technology of the Chinese Academy of Sciences, Peking University, Nankai University, and other partner institutions. It is built for efficient training and inference as well as efficient adaptation and deployment, and provides natural language processing capabilities such as question answering, text classification, machine translation, and text summarization.
Quark drive docker link: https://pan.quark.cn/s/4cda395f13e8
(If you do not have a membership, please contact me for the download.)
## Update Notes
- This release open-sources Jiuge models at two parameter scales: an 8B (8 billion parameter) general-purpose foundation model and a 2B (2 billion parameter) edge-side model. For details on model training and inference, see [QUICK START](https://www.osredm.com/jiuyuan/CPM-9G-8B/tree/FM_9G/quick_start_clean/readmes/quick_start.md).
- If you are still training and inferring with an older version of the Jiuge model, switch to the [master](https://www.osredm.com/jiuyuan/CPM-9G-8B/tree/master/quick_start_clean/readmes/README_ALL.md) branch.
## What's New in This Version
The changes in this iteration are as follows:
- Training: the training code has been upgraded to improve GPU utilization and parallelism, and the 2B model is now compatible with the transformers tokenizer (LlamaTokenizerFast).
- Inference: vLLM is supported for model inference and deployment and can be combined with langchain, openai, and other serving stacks; the 2B model can also be converted to GGUF and other deployment formats (a loading sketch follows this list).
- Multi-dataset validation on the new architecture showed that LoRA training of the 2B model underperforms full-parameter fine-tuning, so we recommend full-parameter fine-tuning for the 2B model; LoRA fine-tuning of the 8B model is done on the master branch.
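A minimal sketch of the two points above, loading the 2B checkpoint's tokenizer and serving the model through vLLM. The local path `./cpm-9g-2b` is a hypothetical placeholder; this assumes the checkpoint is in a Hugging Face format that vLLM can load:

```python
# Sketch only: "./cpm-9g-2b" is a placeholder for a local 2B checkpoint directory.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# The 2B checkpoint is stated to be compatible with LlamaTokenizerFast.
tokenizer = AutoTokenizer.from_pretrained("./cpm-9g-2b", trust_remote_code=True)
print(type(tokenizer).__name__)  # expect: LlamaTokenizerFast

# Serve the same checkpoint with vLLM for batched inference.
llm = LLM(model="./cpm-9g-2b", trust_remote_code=True)
params = SamplingParams(temperature=0.1, max_tokens=128)
for out in llm.generate(["Write a quicksort function in Python."], params):
    print(out.outputs[0].text)
```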
1. Use llama-factory to run full-parameter fine-tuning of the Jiuge model. For the datasets, see `dataset`.
## 2024.08.19 NOTICE
- Multi-dataset validation on the new architecture showed that LoRA training of the 2B model underperforms full-parameter fine-tuning.
- The 2B model is therefore trained with full-parameter fine-tuning; we have added more information about fine-tuning to [QUICK START](https://www.osredm.com/jiuyuan/CPM-9G-8B/tree/FM_9G/quick_start_clean/readmes/quick_start.md).
- LoRA fine-tuning of the 8B model is trained on the master branch.
2. Training and inference have both been verified correct (on a machine with 8×A100 GPUs).
Start the docker container: `sudo docker run -it --runtime=nvidia --gpus all --shm-size=256g wjf:train`
Inference: `python inference.py`
Training: `cd training`, then `sh training.sh`
# Large Model Technology Course Series: Toward General Intelligence
This course series gives a comprehensive introduction to the fundamentals and frontier topics of artificial intelligence and large model technology, combining theoretical study with hands-on practice. It covers foundational units such as "Introduction to Artificial Intelligence and Large Models" and "Neural Networks and Pre-trained Models", as well as practical units such as "The Jiuge Large Model Ecosystem" and "Domain-Specific Large Models in Practice". Core topics include large model training, fine-tuning, knowledge enhancement, ethics and safety, multimodality, embodied intelligence, and autonomous agents; advanced topics include multilingual processing, large model applications for scientific research, efficient computing, and evaluation and data science. The series aims to take learners on a carefully designed journey through general-purpose artificial intelligence.
## Introduction to Artificial Intelligence and Large Models
<video src="https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD%E4%B8%8E%E5%A4%A7%E6%A8%A1%E5%9E%8B%E9%80%9A%E8%AE%BA-%E5%AD%99%E8%8C%82%E6%9D%BE%E8%80%81%E5%B8%88-1124_DeWatermark.mp4" width="800px" height="600px" controls="controls"></video>
[Introduction to Artificial Intelligence and Large Models (slides)](https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/1.%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD%E4%B8%8E%E5%A4%A7%E6%A8%A1%E5%9E%8B%E9%80%9A%E8%AE%BA-PPT.pdf)
3. Inference uses multiple checkpoints and fuses the results of multiple inference runs, as sketched below.
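This is exactly what the tail of inference.py (included in this diff) does: each (checkpoint, prompting strategy) run is scored by `exec_code`, the runs are ordered by score, and the merged submission keeps, for every problem, the first run that passed it. A condensed restatement of that fusion step:

```python
# Condensed sketch of the fusion logic at the bottom of inference.py.
def fuse(runs: dict[float, list[dict]]) -> list[dict]:
    """`runs` maps each run's overall score to its per-problem result dicts."""
    ordered = [runs[score] for score in sorted(runs)]  # runs sorted by score
    merged = []
    for i in range(len(ordered[0])):  # one entry per problem
        passed = next((run[i] for run in ordered if run[i]["result"] == "True"), None)
        merged.append(passed if passed is not None else ordered[0][i])
    return merged
```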
## Key Characteristics and Development Trends of Large Model Technology
<video src="https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8A%80%E6%9C%AF%E7%9A%84%E9%87%8D%E8%A6%81%E7%89%B9%E6%80%A7%E4%B8%8E%E5%8F%91%E5%B1%95%E8%B6%8B%E5%8A%BF-%E5%88%98%E7%9F%A5%E8%BF%9C%E8%80%81%E5%B8%88-1201_DeWatermark.mp4" width="800px" height="600px" controls="controls"></video>
[Key Characteristics and Development Trends of Large Model Technology (slides)](https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/2.%E5%A4%A7%E6%A8%A1%E5%9E%8B%E6%8A%80%E6%9C%AF%E7%9A%84%E9%87%8D%E8%A6%81%E7%89%B9%E6%80%A7%E4%B8%8E%E5%8F%91%E5%B1%95%E8%B6%8B%E5%8A%BF-PPT.pdf)
4. All materials are packaged into the docker image; the docker image alone is sufficient.
## Adaptation and Alignment Techniques for Large Language Models
<video src="https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/2023-12-22-%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%E7%9A%84%E9%80%82%E9%85%8D%E4%B8%8E%E5%AF%B9%E9%BD%90%E6%8A%80%E6%9C%AF-%E4%B8%81%E5%AE%81_DeWatermark.mp4" width="800px" height="600px" controls="controls"></video>
[Adaptation and Alignment Techniques for Large Language Models (slides)](https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/3.%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%E7%9A%84%E9%80%82%E9%85%8D%E4%B8%8E%E5%AF%B9%E9%BD%90%E6%8A%80%E6%9C%AF-PPT.pdf)
5. Starting a training run will overwrite the submitted checkpoint.
## Principles and Practice of Domain Adaptation for Large Models
<video src="https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/2023-12-29%E5%A4%A7%E6%A8%A1%E5%9E%8B%E9%A2%86%E5%9F%9F%E9%80%82%E9%85%8D%E5%8E%9F%E7%90%86%E4%B8%8E%E5%AE%9E%E6%88%98-%E7%8E%8B%E7%A1%95_DeWatermark.mp4" width="800px" height="600px" controls="controls"></video>
[Principles and Practice of Domain Adaptation for Large Models (slides)](https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/4.%E5%A4%A7%E6%A8%A1%E5%9E%8B%E9%A2%86%E5%9F%9F%E9%80%82%E9%85%8D%E5%8E%9F%E7%90%86%E4%B8%8E%E5%AE%9E%E6%88%98-PPT.pdf)
## Knowledge-Enhanced Large Language Models
<video src="https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/%E7%9F%A5%E8%AF%86%E5%A2%9E%E5%BC%BA%E7%9A%84%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B.mp4" width="800px" height="600px" controls="controls"></video>
[Knowledge-Enhanced Large Language Models (slides)](https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/5.%E6%A3%80%E7%B4%A2%E5%A2%9E%E5%BC%BA%E7%9A%84%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B-PPT.pdf)
## Tool Learning with Large Models
<video src="https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%B7%A5%E5%85%B7%E5%AD%A6%E4%B9%A0.mp4" width="800px" height="600px" controls="controls"></video>
[Tool Learning with Large Models (slides)](https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/6.%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%B7%A5%E5%85%B7%E5%AD%A6%E4%B9%A0-PPT.pdf)
## A Basic Implementation of Retrieval-Augmented Generation
<video src="https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/%E6%A3%80%E7%B4%A2%E5%A2%9E%E5%BC%BA%E7%94%9F%E6%88%90%E7%9A%84%E5%9F%BA%E6%9C%AC%E5%AE%9E%E7%8E%B0.mp4" width="800px" height="600px" controls="controls"></video>
[A Basic Implementation of Retrieval-Augmented Generation (slides)](https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/7.%E6%A3%80%E7%B4%A2%E5%A2%9E%E5%BC%BA%E7%94%9F%E6%88%90%E7%9A%84%E5%9F%BA%E6%9C%AC%E5%AE%9E%E7%8E%B0-PPT.pdf)
## Multimodal Semantic Retrieval and Retrieval-Augmentation Techniques
<video src="https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/%E5%A4%9A%E6%A8%A1%E6%80%81%E8%AF%AD%E4%B9%89%E6%A3%80%E7%B4%A2%E4%B8%8E%E6%A3%80%E7%B4%A2%E5%A2%9E%E5%BC%BA%E6%8A%80%E6%9C%AF.mp4" width="800px" height="600px" controls="controls"></video>
[Multimodal Semantic Retrieval and Retrieval-Augmentation Techniques (slides)](https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/8.%E5%A4%9A%E6%A8%A1%E6%80%81%E8%AF%AD%E4%B9%89%E6%A3%80%E7%B4%A2%E4%B8%8E%E6%A3%80%E7%B4%A2%E5%A2%9E%E5%BC%BA%E6%8A%80%E6%9C%AF-PPT.pdf)
## LLM-Driven Multi-Agent Collaboration and Evolution
<video src="https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/400_0121.mp4" width="800px" height="600px" controls="controls"></video>
[LLM-Driven Multi-Agent Collaboration and Evolution (slides)](https://qy-obs-6d58.obs.cn-north-4.myhuaweicloud.com/%E8%AF%BE%E7%A8%8B%E8%A7%86%E9%A2%91/9.%E5%A4%A7%E8%AF%AD%E8%A8%80%E6%A8%A1%E5%9E%8B%E9%A9%B1%E5%8A%A8%E7%9A%84%E5%A4%9A%E6%99%BA%E8%83%BD%E4%BD%93%E5%8D%8F%E4%BD%9C%E4%B8%8E%E6%BC%94%E5%8C%96-PPT.pdf)
6. If docker gets stuck during data processing, it may be an issue with the machine; try running the following inside the container:
`export NCCL_DEBUG=INFO`
`export NCCL_SHM_DISABLE=1`
`export NCCL_P2P_DISABLE=1`
Because multiple checkpoints must be saved, make sure the disk has enough free space (more than 500 GB); a pre-flight check is sketched below.
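A quick way to verify the disk-space requirement before launching training (a sketch; `checkpoint_dir` is a placeholder to point at wherever the checkpoints are written):

```python
# Sketch: abort early when the checkpoint volume has less than the required ~500 GB free.
import shutil

checkpoint_dir = "."  # placeholder: set to your checkpoint output directory
free_gb = shutil.disk_usage(checkpoint_dir).free / 10**9
assert free_gb > 500, f"only {free_gb:.0f} GB free, but more than 500 GB is required"
```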
7. This submission took considerable effort; if you run into any problems, please contact me promptly. Phone: 13121813131

inference.py (new file, 278 lines)

@@ -0,0 +1,278 @@
import json, torch, re, sys, subprocess
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoModel, StoppingCriteria
device = "cuda"  # the device to load the model onto
from tqdm import tqdm


def exec_code(test):
    """Extract Python code from each model output, run it against the test cases, and score it."""
    with open("test_case.json", "r") as f:
        test_cases = json.load(f)
    right_num = 0
    all_num = 0
    package = "import os, sys, math, re, json, random\n"
    for item, test_case in zip(test, test_cases):
        # Pull the code out of a ```python ...``` block if there is one.
        if "```python\n" in item["raw_outputs"]:
            matches = re.findall('```python(.*?)```', item["raw_outputs"], re.DOTALL)
            if len(matches) == 1:
                item["raw_outputs"] = matches[0]
            else:
                matches = re.findall('```python(.*?)assert', item["raw_outputs"], re.DOTALL)
                if len(matches) == 1:
                    item["raw_outputs"] = matches[0]
                else:
                    item["raw_outputs"] = item["raw_outputs"][item["raw_outputs"].index("python\n") + len("python\n"):]
                    print(item)
                    #break
        code = item["raw_outputs"].replace("<|im_end|>", "").replace("</s>", "").replace("```", "").strip().rstrip("\n")
        raw_code = code
        # Truncate everything after the last line containing a `return`.
        codes = raw_code.split("\n")
        last_line = 0
        for index, line in enumerate(codes):
            if " return" in line:
                last_line = index
        code = "\n".join(codes[:last_line + 1])
        '''
        if raw_code != code:
            print("\n--------------------------------------------------------\n", [raw_code], "\n--------------------------------------------------------\n")
            print("clean:\n", [code], "\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\n\n")
        '''
        # Write the candidate plus its test cases to a scratch file and run it with a 3 s timeout.
        with open('code_.py', 'w') as fout:
            fout.write(package + code + "\n" + "\n".join(test_case["test_case"]))
        batcmd = 'timeout 3 ' + sys.executable + ' code_.py'
        try:
            shell_output = subprocess.check_output(batcmd, shell=True).decode('utf8')
            right_num += 1
            item["result"] = "True"
        except Exception as e:
            print("++++++++++++++++++++++++++++++++++++++++++++++++++++\n", raw_code, "\n-----------------------------------------\n\n\n", package + code + "\n--------------------------\n" + "\n".join(test_case["test_case"]))
            print("--------------------------------------------------------\n\n\nitem:", item)
            print("e: ", e, "\n================================================\n")
            item["result"] = "False"
        all_num += 1
        item["raw_outputs"] = [code]
    print(len(test), right_num, all_num, right_num / all_num)
    # `model_path` is the module-level variable set by the loop at the bottom of this file.
    with open(f'wjf_{model_path.replace("/", "-")}{right_num / all_num}.json', "w") as f:
        json.dump(test, f, indent=4)
    return test, right_num / all_num


def get_result(model, tokenizer):
    """Generate completions, seeding the model with a code prefix derived from the question."""
    test = []
    with open("/mnt/disk2/home/wujianfeng/com/code/code_round4.jsonl", "r") as f:
        #test = json.load(f)
        for line in f:
            test.append(json.loads(line))
    all_score = 0
    all_num = 0
    test_num = 1000
    from tqdm import tqdm
    for example in tqdm(test[:]):
        #print(example["question"])
        example["question"] = example["question"].replace("'''", '"""')
        ai_prefix = ""
        if example["question"].split(" ")[0] == "Write":
            # "Write a function ..." style: recover the target function name from the test cases.
            question = example["question"][:example["question"].index("\n")].strip().rstrip()
            test_case = example["question"][example["question"].index("\n"):].split("\n")
            print("test_case: ", test_case)
            function_name = test_case[1].split(" ")[1].split("(")[0]
            ai_prefix = "def " + function_name
            messages = [
                {"role": "user", "content": question + "\n\n" + ("\n".join(test_case))}
            ]
            text = tokenizer.apply_chat_template(
                messages,
                tokenize=False,
                add_generation_prompt=True
            )
            text += ai_prefix
            example["test_case"] = test_case
        else:
            # Docstring style: the prompt is the docstring, the prefix is the surrounding code.
            tmp = re.findall(r'"""(.*?)"""', example["question"], flags=re.DOTALL)[0].split("\n")
            question = ""
            for line in tmp:
                line = line.strip().rstrip()
                if len(line) == 0:
                    continue
                #if "xample" in line and len(line) < 20:
                #    break
                question += line + " "
            code = re.sub(r'"""(.*?)"""', '', example["question"], flags=re.DOTALL).strip().rstrip()
            ai_prefix = code
            messages = [
                {"role": "user", "content": question}
            ]
            text = tokenizer.apply_chat_template(
                messages,
                tokenize=False,
                add_generation_prompt=True
            )
            text += ai_prefix
        example["prompt"] = text
        print("text: ", [text])
        input_ids = tokenizer([text], return_tensors="pt").to(device).input_ids
        output = model.generate(input_ids,
                                #top_p=1.0,
                                max_new_tokens=600,
                                #repetition_penalty=1.1 + t*0.01,
                                temperature=0.1,
                                #no_repeat_ngram_size = 5,
                                ).squeeze()
        output_str = tokenizer.decode(output[input_ids.shape[1]:])
        output_str = ai_prefix + output_str
        print("output_str:\n", output_str, "\n-----------------------------------------------------------------")
        example["raw_outputs"] = output_str  #re.findall(r'```python(.*?)```', output_str)
    return test


def get_result_1(model, tokenizer):
    """Generate completions from the raw question, without any code prefix."""
    test = []
    with open("/mnt/disk2/home/wujianfeng/com/code/code_round4.jsonl", "r") as f:
        #test = json.load(f)
        for line in f:
            test.append(json.loads(line))
    all_score = 0
    all_num = 0
    test_num = 1000
    from tqdm import tqdm
    for example in tqdm(test[:]):
        #print(example["question"])
        messages = [
            {"role": "user", "content": example["question"]}
        ]
        text = tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )
        example["prompt"] = text
        print("text: ", [text])
        input_ids = tokenizer([text], return_tensors="pt").to(device).input_ids
        output = model.generate(input_ids,
                                #top_p=1.0,
                                max_new_tokens=600,
                                #repetition_penalty=1.1 + t*0.01,
                                temperature=0.1,
                                #no_repeat_ngram_size = 5,
                                ).squeeze()
        output_str = tokenizer.decode(output[input_ids.shape[1]:])
        print("output_str:\n", output_str, "\n-----------------------------------------------------------------")
        example["raw_outputs"] = output_str  #re.findall(r'```python(.*?)```', output_str)
    return test


# Run both prompting strategies for every fine-tuned checkpoint; keep each run's results keyed by its score.
answers = {}
for model_path in [
    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all/TACO/",
    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all_new_2/CodeNet4Repair/",
    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all_new_1/CodeExercise-Python-27k/",
]:
    print("model_path: ", model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        torch_dtype="auto",
        device_map=device,
        trust_remote_code=True,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
    test = get_result(model, tokenizer)
    test, score = exec_code(test)
    answers[score] = test
    test = get_result_1(model, tokenizer)
    test, score = exec_code(test)
    answers[score] = test
'''
import os
for path in os.listdir("./"):
    if "home-wujianfeng" in path:
        with open(path, "r") as f:
            test = json.load(f)
        answers[float(path.split(".")[-2].split("-")[-1])] = test
'''
# Fuse the runs: order them by score, then for each problem keep the first run that passed it.
answers = list(dict(sorted(answers.items())).values())
print("answers: ", answers)
right = 0
jiuge_right = 0
merge = []
for i in range(len(answers[0])):
    #for i in range(2):
    flag = 0
    for answer in answers:
        if answer[i]["result"] == "True":
            right += 1
            jiuge_right += 1
            flag = 1
            merge.append(answer[i])
            break
    if flag == 0:
        merge.append(answers[0][i])
print(right / len(answers[0]), jiuge_right / len(answers[0]))
with open("wjf_jiuge.jsonl", "w") as f:
    for item in merge:
        item.pop("result")
        f.write(json.dumps(item, ensure_ascii=False) + '\n')

model_final_url.txt (new file, 14 lines)

@@ -0,0 +1,14 @@
model_weight: shared via Baidu Netdisk:
Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
Extraction code: 6666
#https://www.alipan.com/s/FTPWUSBuz7s
docker:
Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
Extraction code: 6666
#https://www.alipan.com/s/FTPWUSBuz7s
train_data:
Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
Extraction code: 6666
#https://www.alipan.com/s/FTPWUSBuz7s

test_case.json (new file, 4099 lines)

File diff suppressed because it is too large.

train.sh (new file, 154 lines)

@@ -0,0 +1,154 @@
#!/bin/bash
# Sequentially full-parameter fine-tune the 2B SFT model on each code dataset.
# All six runs share the same hyperparameters; only --dataset and --output_dir vary,
# so the repeated deepspeed invocations are expressed as a single loop.
for DATASET in TACO Tested-143k-Python-Alpaca UltraInteract_sft \
               code_instructions_120k_alpaca CodeExercise-Python-27k CodeNet4Repair; do
    deepspeed --include localhost:0,1,2,3,4,5,6,7 --master_port 21666 src/train.py \
        --stage sft \
        --model_name_or_path /mnt/diskhd/Backup/DownloadModel/2b_sft_model/ \
        --do_train \
        --dataset "$DATASET" \
        --template jiuge \
        --finetuning_type full \
        --output_dir "$DATASET" \
        --per_device_train_batch_size 14 \
        --gradient_accumulation_steps 6 \
        --logging_steps 1 \
        --save_steps 300 \
        --lr_scheduler_type cosine_with_restarts \
        --warmup_ratio 0.001 \
        --optim adamw_torch \
        --learning_rate 2e-5 \
        --num_train_epochs 2.0 \
        --plot_loss \
        --bf16 \
        --gradient_checkpointing \
        --report_to tensorboard \
        --deepspeed deepspeed_configs/zero2.json \
        --cutoff_len 2048
done