Compare commits


No commits in common. "FM_9G" and "FM_9G" have entirely different histories.
FM_9G ... FM_9G

3 changed files with 289 additions and 314 deletions

View File

@@ -1,27 +1,24 @@
-Quark Netdisk docker link: https://pan.quark.cn/s/4cda395f13e8
-(If you do not have a Quark membership, contact me for the download.)
-1. Use llama-factory to do full-parameter fine-tuning of the Jiuge (九格) model. The datasets are in dataset.
-Inference: python inference.py (the model paths must be set in inference.py first).
-2. Training and inference have both been verified on an 8x A100 machine.
-Start docker: sudo docker run -it --runtime=nvidia --gpus all --shm-size=256g wjf:train
-Inference: python inference.py
-Training:
-cd training
-sh training.sh
-3. Inference fuses multiple checkpoints and multiple inference passes.
-4. Everything has been packed into the docker image; only the docker image is needed.
-5. Starting training will overwrite the submitted checkpoint.
-6. If docker gets stuck at data processing, it may be a machine issue; inside docker try:
-export NCCL_DEBUG=INFO
-export NCCL_SHM_DISABLE=1
-export NCCL_P2P_DISABLE=1
-Because multiple checkpoints are saved, make sure the disk has enough space (more than 500 GB).
-7. The submission was not easy; if there is any problem, please contact me promptly. Phone: 13121813131
+Approach:
+Full-parameter fine-tuning: train multiple models on different datasets and fuse them with inference-time augmentation.
+Training code:
+LLaMA-Factory.zip. Unzip it, then set up the environment following https://github.com/hiyouga/LLaMA-Factory, or mount the code into docker.
+Training: train.sh. Put the datasets into the LLaMA-Factory/data folder and place train.sh under LLaMA-Factory before running it.
+test_case.json contains the test cases extracted from the problem statement.
+Baidu Netdisk requires payment, so Aliyun Drive links are provided as well.
+model_wight: files shared via Baidu Netdisk:
+Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
+Extraction code: 6666
+https://www.alipan.com/s/FTPWUSBuz7s
+docker:
+Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
+Extraction code: 6666
+https://www.alipan.com/s/FTPWUSBuz7s
+train_data:
+Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
+Extraction code: 6666
+https://www.alipan.com/s/FTPWUSBuz7s
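For reference, the inference step described above comes down to loading a fine-tuned checkpoint with Hugging Face transformers and generating from it, which is how inference.py loads each model in the diff below (AutoModelForCausalLM.from_pretrained per checkpoint). A minimal sketch of that step, assuming a local checkpoint directory and bf16 weights; the path, prompt, and generation settings are illustrative, not the exact values used in inference.py:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint directory; inference.py hard-codes its own paths.
model_path = "/path/to/finetuned-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit on the A100s mentioned above
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))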

View File

@@ -211,11 +211,10 @@ def get_result_1(model, tokenizer):
 answers = {}
 for model_path in [
-    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all/TACO/",
-    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all_new_2/CodeNet4Repair/",
-    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all_new_1/CodeExercise-Python-27k/",
+    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all_new_1/checkpoint-600",
+    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all_new/checkpoint-600/",
 ]:
     print("model_path: ", model_path)
     model = AutoModelForCausalLM.from_pretrained(
@@ -235,21 +234,14 @@ for model_path in [
     test, score = exec_code(test)
     answers[score] = test
-'''
-import os
-for path in os.listdir("./"):
-    if "home-wujianfeng" in path:
-        with open(path, "r") as f:
-            test = json.load(f)
-        answers[float(path.split(".")[-2].split("-")[-1])] = test
-'''
 answers = list(dict(sorted(answers.items())).values())
 print("answers: ", answers)
 right = 0
 jiuge_right = 0
 merge = []
-for i in range(len(answers[0])):
+for i in range(len(answers)):
 #for i in range(2):
     flag = 0
     for answer in answers:
@@ -265,7 +257,7 @@ for i in range(len(answers[0])):
-print(right / len(answers[0]), jiuge_right / len(answers[0]))
+print(right / len(answers), jiuge_right / len(answers))
 with open("wjf_jiuge.jsonl", "w") as f:
     for item in merge:
         item.pop("result")
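The hunks above touch the fusion logic described in the README: each checkpoint produces a list of answers that is scored by executing the generated code (exec_code), the per-checkpoint lists are ordered by score, and the merge loop then walks over the test cases and keeps one answer per case. The selection rule inside that loop is not visible in these hunks, so the sketch below is only an approximation under stated assumptions: per_model_results mirrors the answers dict (score -> list of per-test-case records), a boolean "passed" field marks whether a record's code passed its tests, and higher-scoring checkpoints may override lower-scoring ones.

# Hypothetical helper illustrating the merge step; the names and the "passed" field are assumptions.
def merge_answers(per_model_results):
    # Order the per-checkpoint answer lists by overall score, ascending,
    # mirroring answers = list(dict(sorted(answers.items())).values()) in inference.py.
    ranked = [tests for _, tests in sorted(per_model_results.items())]
    merged = []
    for i in range(len(ranked[0])):   # one iteration per test case
        chosen = ranked[0][i]         # default: lowest-scoring checkpoint's answer
        for tests in ranked:          # later (higher-scoring) lists may override it
            if tests[i].get("passed"):
                chosen = tests[i]
        merged.append(chosen)
    return merged

For example, with per_model_results = {0.41: tests_a, 0.47: tests_b} (scores made up for illustration), tests_b wins every case it passes and tests_a fills in the rest.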

View File

@@ -1,14 +0,0 @@
-model_wight: files shared via Baidu Netdisk:
-Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
-Extraction code: 6666
-#https://www.alipan.com/s/FTPWUSBuz7s
-docker:
-Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
-Extraction code: 6666
-#https://www.alipan.com/s/FTPWUSBuz7s
-train_data:
-Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
-Extraction code: 6666
-#https://www.alipan.com/s/FTPWUSBuz7s