version 0.3.1

shengdinghu 2022-10-17 09:02:45 +00:00
parent f944683087
commit 00ba5bdc8b
4 changed files with 7 additions and 6 deletions

README.md

@@ -26,7 +26,7 @@
 OpenDelta is a toolkit for parameter-efficient tuning methods (we dub it *delta tuning*), with which users can flexibly assign (or add) a small amount of parameters to update while keeping most parameters frozen. By using OpenDelta, users can easily implement prefix-tuning, adapters, LoRA, or any other type of delta tuning with their preferred PTMs.
-- Our repo is tested on Python 3.=-0 and PyTorch 1.9.0. Lower versions may also be supported.
+- The latest version of OpenDelta is tested on Python==3.8.13, PyTorch==1.12.1, transformers==4.22.2. Other versions are likely to be supported as well. If you encounter bugs when using your own package versions, please raise an issue; we will look into it as soon as possible.
 - **A demo of using OpenDelta to modify the PLM (e.g., BART).**
 ![How PLM changes using Delta-tuning](docs/source/imgs/demo.gif)
@@ -162,7 +162,7 @@ delta3.log()
 ## Verified Default Configurations
 - **You can try to use OpenDelta on *any* backbone model based on PyTorch.**
-- However, with small chances thatThe interface of the submodules of the backbone model is not supported. Therefore we verified some commonly
+- However, there is a small chance that the interface of the submodules of the backbone model is not supported. Therefore we have verified some commonly
 used models that OpenDelta is sure to support.
 - We will keep testing more and more emerging models.
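The README section above describes attaching delta modules to a frozen backbone. A minimal sketch of that workflow, assuming a Hugging Face `transformers` backbone and OpenDelta's `LoraModel`; the `modified_modules` names are illustrative and depend on the backbone architecture:

```python
# Minimal sketch of the delta-tuning workflow described in the README.
# Assumes transformers and opendelta are installed; the modified_modules
# names below are illustrative and depend on the backbone architecture.
from transformers import AutoModelForSeq2SeqLM
from opendelta import LoraModel

backbone = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Attach LoRA modules to the attention projections of the backbone.
delta = LoraModel(backbone_model=backbone,
                  modified_modules=["SelfAttention.q", "SelfAttention.v"])

# Freeze everything except the newly added delta parameters.
delta.freeze_module(exclude=["deltas"], set_state_dict=True)

# Inspect the modified structure and the trainable-parameter count.
delta.log()
```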

docs/source/conf.py

@@ -31,8 +31,8 @@ copyright = '{}, {}, Licenced under the Apache License, Version 2.0'.format(date
 # The full version, including alpha/beta/rc tags
-release = '0.3.0'
-version = "0.3.0"
+release = '0.3.1'
+version = "0.3.1"
 html_theme = 'sphinx_rtd_theme'
 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
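The version string is bumped here and again in setup.py below. One way to avoid maintaining it in several places is to derive the Sphinx values from the installed distribution; a sketch using only the standard library:

```python
# Sketch: single-source the version in conf.py instead of hard-coding it.
# importlib.metadata reads the version of the installed opendelta package.
from importlib.metadata import version as dist_version

release = dist_version("opendelta")         # full version, e.g. "0.3.1"
version = ".".join(release.split(".")[:2])  # short X.Y form, e.g. "0.3"
```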

CHANGELOG.md

@@ -2,6 +2,7 @@
 ## Version 0.3.1
 - We updated [must_try.py](https://github.com/thunlp/OpenDelta/tree/main/examples/unittest/must_try.py) as a simple introduction to the core functionality of OpenDelta.
+- Thanks to [Weilin Zhao](https://github.com/Achazwl), we merged the long-developed parallel_adapter branch into the main branch.
 ## Version 0.3.0
@@ -25,4 +26,4 @@
 ## Version 0.2.4
 ### Updates
 - examples/examples_seq2seq and examples/examples_text-classification are deprecated and moved to [legacy](https://github.com/thunlp/OpenDelta/tree/main/examples/legacies)
-- we provide [examples_prompt](https://github.com/thunlp/OpenDelta/tree/main/examples/examples_prompt), as a cleaner and more general framework, which unifies the delta tuning paradigm and the prompt-tuning paradigm. It is still based on [Huggingface Trainers](https://huggingface.co/docs/transformers/main_classes/trainer). In this example framework, the running pipeline is [a unified script](https://github.com/thunlp/OpenDelta/tree/main/examples/examples_prompt/src), the differences in tasks, models, delta tuning models, and even prompt-tuning paradigms are [more modular and be more independent ](https://github.com/thunlp/OpenDelta/tree/main/examples/examples_prompt/backbones). Please try it out!
+- Thanks to [Zhen Zhang](https://github.com/namezhenzhang), we provide [examples_prompt](https://github.com/thunlp/OpenDelta/tree/main/examples/examples_prompt), a cleaner and more general framework that unifies the delta-tuning and prompt-tuning paradigms. It is still based on [Huggingface Trainers](https://huggingface.co/docs/transformers/main_classes/trainer). In this framework the running pipeline is [a unified script](https://github.com/thunlp/OpenDelta/tree/main/examples/examples_prompt/src), and the differences in tasks, models, delta-tuning models, and even prompt-tuning paradigms are [more modular and independent](https://github.com/thunlp/OpenDelta/tree/main/examples/examples_prompt/backbones). Please try it out!

setup.py

@@ -31,7 +31,7 @@ def get_requirements():
 with open('README.md', 'r') as f:
     setuptools.setup(
         name = 'opendelta',
-        version = "0.3.0",
+        version = "0.3.1",
         description = "An open source framework for delta learning (parameter efficient learning).",
         long_description=open("README.md", "r", encoding="utf-8").read(),
         long_description_content_type="text/markdown",
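After this release is published, the bump can be verified against the installed distribution; a small check using only the standard library:

```python
# Sketch: confirm the installed version after `pip install -U opendelta`.
from importlib.metadata import version

print(version("opendelta"))  # expected to print "0.3.1" for this release
```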