add: add train_24_8_12_15_46

wql 2024-08-12 08:13:14 +00:00
parent 01f70612c7
commit 1ee249021b
34 changed files with 187439 additions and 0 deletions

@@ -0,0 +1,64 @@
---
base_model: /home/user/.cache/modelscope/hub/modelscope/Llama-2-7b-ms
library_name: peft
license: other
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: train_24_8_12_15_46
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_24_8_12_15_46
This model is a fine-tuned version of the locally cached `/home/user/.cache/modelscope/hub/modelscope/Llama-2-7b-ms` on the alpaca_zh dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0803
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 7
- gradient_accumulation_steps: 8
- total_train_batch_size: 112
- total_eval_batch_size: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
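The effective batch size above is the product of the per-device batch size, the number of devices, and the gradient accumulation steps; a quick sanity check of the arithmetic:
```python
# Recompute total_train_batch_size from the factors listed above.
train_batch_size = 2               # per-device micro-batch
num_devices = 7                    # multi-GPU run
gradient_accumulation_steps = 8

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)      # 112, matching the value reported above
```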
### Training results
### Framework versions
- PEFT 0.12.0
- Transformers 4.43.4
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1

@@ -0,0 +1,34 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/user/.cache/modelscope/hub/modelscope/Llama-2-7b-ms",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layer_replication": null,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.0,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 8,
"rank_pattern": {},
"revision": null,
"target_modules": [
"down_proj",
"v_proj",
"up_proj",
"k_proj",
"q_proj",
"o_proj",
"gate_proj"
],
"task_type": "CAUSAL_LM",
"use_dora": false,
"use_rslora": false
}
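This adapter config applies rank-8 LoRA (alpha 16, no dropout) to all seven attention and MLP projections of the base Llama-2 model. A minimal loading sketch with PEFT; the base path is the one recorded in the config, while the adapter directory name is an assumption based on the run name:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "/home/user/.cache/modelscope/hub/modelscope/Llama-2-7b-ms"  # from adapter_config.json
adapter_path = "train_24_8_12_15_46"  # assumed local adapter directory

base = AutoModelForCausalLM.from_pretrained(base_path)
tokenizer = AutoTokenizer.from_pretrained(base_path)

# Wrap the base model with the rank-8 LoRA adapter; base weights stay frozen.
model = PeftModel.from_pretrained(base, adapter_path)
model.eval()
```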

@@ -0,0 +1,3 @@
{
"<pad>": 32000
}
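This file maps a new `<pad>` token to id 32000, one slot past Llama-2's original 32,000-token vocabulary. A hedged sketch of how such a token is typically registered, continuing from a loaded `model`/`tokenizer` pair as in the snippet above:
```python
# Register <pad> and grow the embedding matrix so id 32000 is valid.
num_added = tokenizer.add_special_tokens({"pad_token": "<pad>"})
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```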

@@ -0,0 +1,12 @@
{
"epoch": 2.953846153846154,
"eval_loss": 1.0803213119506836,
"eval_runtime": 0.5889,
"eval_samples_per_second": 169.819,
"eval_steps_per_second": 25.473,
"total_flos": 2.4084253524361216e+16,
"train_loss": 1.1769591172536213,
"train_runtime": 36.649,
"train_samples_per_second": 73.672,
"train_steps_per_second": 0.655
}

@@ -0,0 +1,202 @@
---
base_model: /home/user/.cache/modelscope/hub/modelscope/Llama-2-7b-ms
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
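Pending the author's own snippet, a minimal generation sketch under this commit's assumptions (base path from the adapter config; the adapter directory name is assumed):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "/home/user/.cache/modelscope/hub/modelscope/Llama-2-7b-ms"
adapter_path = "train_24_8_12_15_46"  # assumed adapter directory

tokenizer = AutoTokenizer.from_pretrained(base_path)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16),
    adapter_path,
)
model.eval()

# Llama-2 [INST] prompt format; Chinese input matches the alpaca_zh training data.
inputs = tokenizer("[INST] 你好，请介绍一下你自己。 [/INST]", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```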
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0

@@ -0,0 +1,34 @@
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "/home/user/.cache/modelscope/hub/modelscope/Llama-2-7b-ms",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layer_replication": null,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 16,
"lora_dropout": 0.0,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 8,
"rank_pattern": {},
"revision": null,
"target_modules": [
"down_proj",
"v_proj",
"up_proj",
"k_proj",
"q_proj",
"o_proj",
"gate_proj"
],
"task_type": "CAUSAL_LM",
"use_dora": false,
"use_rslora": false
}

@@ -0,0 +1,3 @@
{
"<pad>": 32000
}

@@ -0,0 +1,30 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}

File diff suppressed because it is too large.

@@ -0,0 +1,52 @@
{
"add_bos_token": true,
"add_eos_token": false,
"add_prefix_space": null,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"32000": {
"content": "<pad>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": false
}
},
"bos_token": "<s>",
"chat_template": "{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if loop.index0 == 0 and system_message is defined %}{% set content = '<<SYS>>\n' + system_message + '\n<</SYS>>\n\n' + message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ '<s>' + '[INST] ' + content + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ content + '</s>' }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": false,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "<unk>",
"padding_side": "right",
"sp_model_kwargs": {},
"split_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}
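The `chat_template` above encodes the Llama-2 `[INST]` conversation format. A quick way to render it without tokenizing (the directory name is assumed):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("train_24_8_12_15_46")  # assumed directory holding this config

messages = [{"role": "user", "content": "Hello!"}]
prompt = tok.apply_chat_template(messages, tokenize=False)
print(prompt)  # <s>[INST] Hello! [/INST]
```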

@@ -0,0 +1,47 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 2.953846153846154,
"eval_steps": 500,
"global_step": 24,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 1.2307692307692308,
"grad_norm": 0.3606488108634949,
"learning_rate": 7.500000000000001e-05,
"loss": 1.2817,
"step": 10
},
{
"epoch": 2.4615384615384617,
"grad_norm": 0.30481404066085815,
"learning_rate": 8.688061284200266e-06,
"loss": 1.1033,
"step": 20
}
],
"logging_steps": 10,
"max_steps": 24,
"num_input_tokens_seen": 0,
"num_train_epochs": 3,
"save_steps": 500,
"stateful_callbacks": {
"TrainerControl": {
"args": {
"should_epoch_stop": false,
"should_evaluate": false,
"should_log": false,
"should_save": true,
"should_training_stop": true
},
"attributes": {}
}
},
"total_flos": 2.4084253524361216e+16,
"train_batch_size": 2,
"trial_name": null,
"trial_params": null
}
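`log_history` preserves the periodic training logs; a small sketch for extracting the loss curve from this file (the filename `trainer_state.json` is the Trainer convention and assumed here):
```python
import json

with open("trainer_state.json") as f:  # assumed filename
    state = json.load(f)

for entry in state["log_history"]:
    if "loss" in entry:  # skip records without a logged training loss
        print(f"step {entry['step']:>3}  loss {entry['loss']:.4f}  lr {entry['learning_rate']:.2e}")
# step  10  loss 1.2817  lr 7.50e-05
# step  20  loss 1.1033  lr 8.69e-06
```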

@@ -0,0 +1,7 @@
{
"epoch": 2.953846153846154,
"eval_loss": 1.0803213119506836,
"eval_runtime": 0.5889,
"eval_samples_per_second": 169.819,
"eval_steps_per_second": 25.473
}
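Since `eval_loss` is a mean per-token cross-entropy in nats (the standard causal-LM objective), the corresponding perplexity is its exponential:
```python
import math

eval_loss = 1.0803213119506836
print(math.exp(eval_loss))  # ≈ 2.95 eval perplexity
```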

@@ -0,0 +1,30 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
}
}

File diff suppressed because it is too large.

@@ -0,0 +1,52 @@
{
"add_bos_token": true,
"add_eos_token": false,
"add_prefix_space": null,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"32000": {
"content": "<pad>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": false
}
},
"bos_token": "<s>",
"chat_template": "{% if messages[0]['role'] == 'system' %}{% set system_message = messages[0]['content'] %}{% endif %}{% for message in messages %}{% set content = message['content'] %}{% if loop.index0 == 0 and system_message is defined %}{% set content = '<<SYS>>\n' + system_message + '\n<</SYS>>\n\n' + message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ '<s>' + '[INST] ' + content + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ content + '</s>' }}{% endif %}{% endfor %}",
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"legacy": false,
"model_max_length": 1000000000000000019884624838656,
"pad_token": "<unk>",
"padding_side": "right",
"sp_model_kwargs": {},
"split_special_tokens": false,
"tokenizer_class": "LlamaTokenizer",
"unk_token": "<unk>",
"use_default_system_prompt": false
}

@@ -0,0 +1,8 @@
{
"epoch": 2.953846153846154,
"total_flos": 2.4084253524361216e+16,
"train_loss": 1.1769591172536213,
"train_runtime": 36.649,
"train_samples_per_second": 73.672,
"train_steps_per_second": 0.655
}
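The throughput numbers are internally consistent with the step count and the effective batch size of 112; a quick check (the small gap in samples/second likely comes from epoch-based sample accounting versus the final partial batch):
```python
steps, total_batch, runtime = 24, 112, 36.649

print(steps / runtime)                # ≈ 0.655 -> matches train_steps_per_second
print(steps * total_batch / runtime)  # ≈ 73.3  -> close to the reported 73.672
```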

@@ -0,0 +1,3 @@
{"current_steps": 10, "total_steps": 24, "loss": 1.2817, "learning_rate": 7.500000000000001e-05, "epoch": 1.2307692307692308, "percentage": 41.67, "elapsed_time": "0:00:15", "remaining_time": "0:00:21", "throughput": "0.00", "total_tokens": 0}
{"current_steps": 20, "total_steps": 24, "loss": 1.1033, "learning_rate": 8.688061284200266e-06, "epoch": 2.4615384615384617, "percentage": 83.33, "elapsed_time": "0:00:30", "remaining_time": "0:00:06", "throughput": "0.00", "total_tokens": 0}
{"current_steps": 24, "total_steps": 24, "epoch": 2.953846153846154, "percentage": 100.0, "elapsed_time": "0:00:36", "remaining_time": "0:00:00", "throughput": "0.00", "total_tokens": 0}

@@ -0,0 +1,56 @@
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 2.953846153846154,
"eval_steps": 500,
"global_step": 24,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 1.2307692307692308,
"grad_norm": 0.3606488108634949,
"learning_rate": 7.500000000000001e-05,
"loss": 1.2817,
"step": 10
},
{
"epoch": 2.4615384615384617,
"grad_norm": 0.30481404066085815,
"learning_rate": 8.688061284200266e-06,
"loss": 1.1033,
"step": 20
},
{
"epoch": 2.953846153846154,
"step": 24,
"total_flos": 2.4084253524361216e+16,
"train_loss": 1.1769591172536213,
"train_runtime": 36.649,
"train_samples_per_second": 73.672,
"train_steps_per_second": 0.655
}
],
"logging_steps": 10,
"max_steps": 24,
"num_input_tokens_seen": 0,
"num_train_epochs": 3,
"save_steps": 500,
"stateful_callbacks": {
"TrainerControl": {
"args": {
"should_epoch_stop": false,
"should_evaluate": false,
"should_log": false,
"should_save": true,
"should_training_stop": true
},
"attributes": {}
}
},
"total_flos": 2.4084253524361216e+16,
"train_batch_size": 2,
"trial_name": null,
"trial_params": null
}

Binary file not shown (added image, 39 KiB).