Compare commits


7 Commits
FM_9G ... FM_9G

Author SHA1 Message Date
p83651209 1033ad4a75 Update README.md 2024-11-12 13:41:57 +08:00
p83651209 b63fcef8d2 Update README.md 2024-11-12 12:22:21 +08:00
p83651209 db10b9114b Update README.md 2024-11-12 11:16:38 +08:00
p83651209 58a7967a98 Update inference.py 2024-11-03 20:04:38 +08:00
p83651209 cd1bdcf117 Add model_final_url.txt 2024-11-03 13:30:04 +08:00
p83651209 4c8196bc84 Delete model_final 2024-11-03 13:29:30 +08:00
p83651209 124160cb1e Add model_final 2024-11-03 12:49:21 +08:00
3 changed files with 314 additions and 289 deletions

README.md

@@ -1,24 +1,27 @@
Approach:
Full-parameter fine-tuning; multiple models are trained on different datasets and fused with inference-time augmentation.
Quark Netdisk docker link: https://pan.quark.cn/s/4cda395f13e8
(If you do not have a membership, contact me for the download.)
Training code:
LLaMA-Factory.zip: after unzipping, either set up the environment following https://github.com/hiyouga/LLaMA-Factory or map the code into the docker container.
Training: train.sh. Put the datasets under the LLaMA-Factory/data folder, place train.sh in the LLaMA-Factory directory, then run it.
Inference: python inference.py (set the model paths in inference.py first). test_case.json contains the test cases extracted from the problem statement.
1. Use LLaMA-Factory to run full-parameter fine-tuning of the Jiuge (九格) model. See dataset for the datasets used.
Baidu Netdisk requires a paid account; use the Aliyun Drive link instead.
model_wight: files shared via Baidu Netdisk:
Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
Extraction code: 6666
https://www.alipan.com/s/FTPWUSBuz7s
2. Training and inference have both been verified on an 8x A100 machine.
Docker start: sudo docker run -it --runtime=nvidia --gpus all --shm-size=256g wjf:train
Inference: python inference.py
Training:
cd training
sh training.sh
docker:
Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
Extraction code: 6666
https://www.alipan.com/s/FTPWUSBuz7s
train_data:
Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
Extraction code: 6666
https://www.alipan.com/s/FTPWUSBuz7s
3. Inference fuses multiple checkpoints and multiple inference runs.
4. All materials are packaged into the docker image; the docker image alone is all you need.
5. Starting training will overwrite the submitted checkpoints.
6. If docker hangs during data processing, it may be a machine issue; try running the following inside the container:
export NCCL_DEBUG=INFO
export NCCL_SHM_DISABLE=1
export NCCL_P2P_DISABLE=1
Since multiple checkpoints must be saved, make sure the available disk space is larger than 500 GB.
7. This submission took a lot of effort; if you run into any problems, please contact me promptly by phone: 13121813131
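The multi-checkpoint fusion described in step 3 can be sketched as follows: each checkpoint produces one answer per test case, and for each case the first answer that passes the extracted test cases is kept, falling back to the highest-scoring checkpoint. This is a hypothetical illustration of the idea, not code from the repository; `fuse_answers` and `passes` are made-up names.

```python
def fuse_answers(per_checkpoint_answers, passes):
    """Fuse answers from several checkpoints.

    per_checkpoint_answers: list of answer lists, one per checkpoint,
    ordered best-first; all lists cover the same test cases.
    passes: callable(answer) -> bool, True if the answer passes the
    test cases extracted from the problem statement.
    """
    merged = []
    for i in range(len(per_checkpoint_answers[0])):
        # Fall back to the best checkpoint's answer if nothing passes.
        chosen = per_checkpoint_answers[0][i]
        for answers in per_checkpoint_answers:
            if passes(answers[i]):
                chosen = answers[i]
                break
        merged.append(chosen)
    return merged
```

With two checkpoints and a trivial pass check, a failing answer from the first checkpoint is replaced by a passing one from the second, while passing answers from the first checkpoint are kept.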

inference.py

@@ -211,10 +211,11 @@ def get_result_1(model, tokenizer):
answers = {}
-for model_path in [
-    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all_new_1/checkpoint-600",
-    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all_new/checkpoint-600/",
+for model_path in [
+    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all/TACO/",
+    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all_new_2/CodeNet4Repair/",
+    "/mnt/disk2/home/wujianfeng/LLaMA-Factory/all_new_1/CodeExercise-Python-27k/",
]:
print("model_path: ", model_path)
model = AutoModelForCausalLM.from_pretrained(
@@ -234,14 +235,21 @@ for model_path in [
test, score = exec_code(test)
answers[score] = test
+'''
+import os
+for path in os.listdir("./"):
+    if "home-wujianfeng" in path:
+        with open(path, "r") as f:
+            test = json.load(f)
+        answers[float(path.split(".")[-2].split("-")[-1])] = test
+'''
answers = list(dict(sorted(answers.items())).values())
print("answers: ", answers)
right = 0
jiuge_right = 0
merge = []
-for i in range(len(answers)):
+for i in range(len(answers[0])):
#for i in range(2):
flag = 0
for answer in answers:
@@ -257,7 +265,7 @@ for i in range(len(answers)):
-print(right / len(answers), jiuge_right / len(answers))
+print(right / len(answers[0]), jiuge_right / len(answers[0]))
with open("wjf_jiuge.jsonl", "w") as f:
for item in merge:
item.pop("result")
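The diff above stores each checkpoint's answer set in a dict keyed by its score (`answers[score] = test`) and then recovers the sets ordered by ascending score via `sorted(answers.items())`. That ordering idiom can be isolated as a small sketch; the data here is hypothetical and `order_by_score` is a made-up name, not a function in the repository.

```python
def order_by_score(scored):
    """scored: dict mapping a checkpoint's score -> its answer list.

    Returns the answer lists as a plain list ordered by ascending
    score, mirroring list(dict(sorted(answers.items())).values())
    in the diff above.
    """
    return list(dict(sorted(scored.items())).values())
```

Because `sorted` on `dict.items()` sorts by key, the resulting list visits lower-scoring answer sets first and the strongest set last, regardless of insertion order.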

model_final_url.txt Normal file

@@ -0,0 +1,14 @@
model_wight: files shared via Baidu Netdisk:
Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
Extraction code: 6666
#https://www.alipan.com/s/FTPWUSBuz7s
docker:
Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
Extraction code: 6666
#https://www.alipan.com/s/FTPWUSBuz7s
train_data:
Link: https://pan.baidu.com/s/1paYNO7d5OYESuyw3BVo7Ew
Extraction code: 6666
#https://www.alipan.com/s/FTPWUSBuz7s