Update README.md

parent 0201e3703b
commit 8c279ccf47
@@ -28,9 +28,7 @@

```bash
pip install transformers==4.16.2
pip install datasets==1.18.0
```

### 2. run

```bash
python run_image_classification.py configs/lora_beans.json
```
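
What `configs/lora_beans.json` switches on is OpenDelta's LoRA wrapper around the backbone. A rough sketch only, assuming OpenDelta's `LoraModel` API with illustrative module names; the JSON config in the repo is the source of truth:

```python
from transformers import ViTForImageClassification
from opendelta import LoraModel

# Sketch: attach LoRA modules to a ViT backbone and freeze everything else,
# roughly what run_image_classification.py does when driven by a lora config.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=3,  # beans has 3 classes
)
delta_model = LoraModel(
    backbone_model=model,
    modified_modules=["attention.query", "attention.value"],  # illustrative
)
delta_model.freeze_module(exclude=["deltas", "classifier"], set_state_dict=True)
delta_model.log()  # show which parameters remain trainable
```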

@@ -46,121 +44,7 @@ Solution 1: open a python console, running the error command again, may not be u

Solution 2: download the dataset yourself on an internet-connected machine, save it to disk, transfer it to your server, and finally load it with `load_from_disk`.
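
For the second solution, a minimal sketch using 🤗 `datasets` (the dataset name and paths here are placeholders):

```python
from datasets import load_dataset, load_from_disk

# On a machine with internet access: download once and save a local copy.
ds = load_dataset("beans")
ds.save_to_disk("./beans_local")

# On the offline server, after copying ./beans_local over: load it from disk.
ds = load_from_disk("./beans_local")
```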

## Link to original training scripts

You may find solutions to other questions about the scripts that are unrelated to OpenDelta at

https://github.com/huggingface/transformers/tree/master/examples/pytorch/image-classification

# Image classification examples

The following examples showcase how to fine-tune a `ViT` for image classification using PyTorch.
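
Concretely, the heart of such a script is image preprocessing plus a standard `Trainer` loop. A minimal sketch of the preprocessing step, assuming transformers' `ViTFeatureExtractor` and the beans dataset's `image`/`labels` columns:

```python
from datasets import load_dataset
from transformers import ViTFeatureExtractor

# Resize and normalize images the way ViT expects; beans stores PIL images
# in an "image" column and class ids in "labels".
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
ds = load_dataset("beans")

def preprocess(batch):
    inputs = feature_extractor(
        [img.convert("RGB") for img in batch["image"]], return_tensors="pt"
    )
    inputs["labels"] = batch["labels"]
    return inputs

train_ds = ds["train"].with_transform(preprocess)  # applied lazily, per batch
```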
## Using datasets from 🤗 `datasets`

Here we show how to fine-tune a `ViT` on the [beans](https://huggingface.co/datasets/beans) dataset.

👀 See the results here: [nateraw/vit-base-beans](https://huggingface.co/nateraw/vit-base-beans).

```bash
python run_image_classification.py \
    --dataset_name beans \
    --output_dir ./beans_outputs/ \
    --remove_unused_columns False \
    --do_train \
    --do_eval \
    --push_to_hub \
    --push_to_hub_model_id vit-base-beans \
    --learning_rate 2e-5 \
    --num_train_epochs 5 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --logging_strategy steps \
    --logging_steps 10 \
    --evaluation_strategy epoch \
    --save_strategy epoch \
    --load_best_model_at_end True \
    --save_total_limit 3 \
    --seed 1337
```

Here we show how to fine-tune a `ViT` on the [cats_vs_dogs](https://huggingface.co/datasets/cats_vs_dogs) dataset.

👀 See the results here: [nateraw/vit-base-cats-vs-dogs](https://huggingface.co/nateraw/vit-base-cats-vs-dogs).

```bash
python run_image_classification.py \
    --dataset_name cats_vs_dogs \
    --output_dir ./cats_vs_dogs_outputs/ \
    --remove_unused_columns False \
    --do_train \
    --do_eval \
    --push_to_hub \
    --push_to_hub_model_id vit-base-cats-vs-dogs \
    --fp16 True \
    --learning_rate 2e-4 \
    --num_train_epochs 5 \
    --per_device_train_batch_size 32 \
    --per_device_eval_batch_size 32 \
    --logging_strategy steps \
    --logging_steps 10 \
    --evaluation_strategy epoch \
    --save_strategy epoch \
    --load_best_model_at_end True \
    --save_total_limit 3 \
    --seed 1337
```

## Using your own data

To use your own dataset, the training script expects the following directory structure:

```bash
root/dog/xxx.png
root/dog/xxy.png
root/dog/[...]/xxz.png

root/cat/123.png
root/cat/nsdf3.png
root/cat/[...]/asd932_.png
```
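
As a side note, recent versions of 🤗 `datasets` (2.0 and later, newer than the 1.18.0 pinned above) can load this exact layout directly with the built-in `imagefolder` builder; a hedged sketch:

```python
from datasets import load_dataset

# Requires datasets >= 2.0; the "imagefolder" builder infers class labels
# from the class-per-subdirectory layout shown above.
ds = load_dataset("imagefolder", data_dir="root")
```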

Once you've prepared your dataset, you can run the script like this:

```bash
python run_image_classification.py \
    --dataset_name nateraw/image-folder \
    --train_dir <path-to-train-root> \
    --output_dir ./outputs/ \
    --remove_unused_columns False \
    --do_train \
    --do_eval
```

### 💡 The above will split the train dir into training and evaluation sets

- To control the split amount, use the `--train_val_split` flag (a rough Python equivalent is sketched after this list).
- To provide your own validation split in its own directory, pass the `--validation_dir <path-to-val-root>` flag.
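
For intuition, the automatic split is roughly equivalent to the following 🤗 `datasets` call; the 0.15 fraction is an assumption about the script's default, not something this README pins down:

```python
from datasets import load_dataset

# Split off an eval set manually, roughly what --train_val_split does.
ds = load_dataset("beans", split="train")
splits = ds.train_test_split(test_size=0.15, seed=1337)
train_ds, eval_ds = splits["train"], splits["test"]
```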
## Sharing your model on 🤗 Hub

0. If you haven't already, [sign up](https://huggingface.co/join) for a 🤗 account.

1. Make sure you have `git-lfs` installed and git set up.

```bash
$ apt install git-lfs
$ git config --global user.email "you@example.com"
$ git config --global user.name "Your Name"
```

2. Log in with your HuggingFace account credentials using `huggingface-cli`:

```bash
$ huggingface-cli login
# ...follow the prompts
```

3. When running the script, pass the following arguments:

```bash
python run_image_classification.py \
    --push_to_hub \
    --push_to_hub_model_id <name-your-model> \
    ...
```