record 0.2.4 updates
parent 9587a5e930
commit ae825423fe
@@ -33,6 +33,7 @@ OpenDelta is a toolkit for parameter-efficient tuning methods (we dub it as *delta tuning*)
## News
- **2022.10.14** Release v0.3.0. We make the default configurations of each delta tuning method (i.e., the positions they are attached to) easier to use! If a custom model contains one of our supported models as a submodule, the default configuration is also available (a usage sketch follows this list). Other key changes can be seen in [Update Log](file:///Users/hsd/codes/opendelta_doc/OpenDelta/docs/build/html/notes/update.html#version-0-3-0)
- **2022.10.10** Merge the long-developed v0.2.4 branch into the master branch. Key updates are (1) an example unifying the delta-tuning paradigm and the prompt-tuning paradigm; (2) support for [delta centers](https://www.openbmb.org/toolKits/deltacenter), whose webpage is still under construction. Details can be seen in [Update Log](file:///Users/hsd/codes/opendelta_doc/OpenDelta/docs/build/html/notes/update.html#version-0-2-4)
- **2022.03.24** We noticed several bugs in Soft Prompt Tuning and Prefix Tuning, mainly due to their need to customize attention ids and token_type_ids; we are fixing them. For now, please use the other methods, which are more stable and perform better.
- **2022.03.20** Add a [colab example](https://colab.research.google.com/drive/1uAhgAdc8Qr42UKYDlgUv0f7W1-gAFwGo?usp=sharing) to illustrate efficient training and space-saving multitask serving.
- **2022.03.20** A new pip version has been released.
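
As a rough illustration of the default-configuration behavior mentioned in the v0.3.0 item above, here is a minimal sketch that attaches LoRA without specifying any positions. It assumes OpenDelta's documented `LoraModel`/`freeze_module` API; exact class names and defaults may differ across versions, so treat it as an illustration rather than code from this commit.

```python
# Minimal sketch: attach a delta method at its default positions.
# Assumption: OpenDelta's LoraModel / freeze_module API as shown in its README;
# names may vary slightly between versions.
from transformers import AutoModelForSeq2SeqLM
from opendelta import LoraModel

backbone = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# No `modified_modules` is passed, so the default configuration for this
# backbone (or for a supported model used as a submodule of a custom model)
# decides where the delta modules are attached.
delta_model = LoraModel(backbone_model=backbone)

# Train only the delta parameters; the backbone stays frozen.
delta_model.freeze_module(exclude=["deltas"], set_state_dict=True)
delta_model.log()  # shows which modules were modified and the trainable-parameter ratio
```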
@@ -19,3 +19,7 @@
- SoftPrompt is still not supported for a wrapped model if the model has no `get_input_embeddings` attribute.
- Prefix Tuning is still limited to T5, GPT2, Bart, Bert, Roberta.
## Version 0.2.4
### Updates
- examples/examples_seq2seq and examples/examples_text-classification are deprecated and have been moved to [legacy](https://github.com/thunlp/OpenDelta/tree/main/examples/legacies).
- We provide [examples_prompt](https://github.com/thunlp/OpenDelta/tree/main/examples/examples_prompt) as a cleaner and more general framework that unifies the delta-tuning paradigm and the prompt-tuning paradigm. It is still based on [Huggingface Trainers](https://huggingface.co/docs/transformers/main_classes/trainer). In these examples, the core pipeline uses [unified scripts](https://github.com/thunlp/OpenDelta/tree/main/examples/examples_prompt/src), while the differences across tasks, models, delta tuning methods, and even prompt-tuning paradigms are [separated and made more independent](https://github.com/thunlp/OpenDelta/tree/main/examples/examples_prompt/backbones). Please try it out!