This repository was archived by the owner on Sep 23, 2025. It is now read-only.

Conversation

@minmingzhu
Contributor

No description provided.

Contributor

@carsonwang carsonwang left a comment


Thanks for the work! Summarizing the changes to update, as we discussed offline:

  • Remove the added is_base_model parameter from the finetuning YAML file.
  • Allow users to configure chat_template in the YAML file. In most cases, people won't configure it. Priority order: user-configured chat_template > model's chat_template > our default template.
  • Write the default template by following other models' templates (such as Llama 2 chat), i.e., check the roles in the messages, etc.
  • The original data format needs to be converted to chat format first, before applying the chat template.
  • Add unit tests to verify the result after applying the chat template, covering all use cases.
  • Support chat format as a finetuning dataset format, following OpenAI's format. We can support this in a separate PR.
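The resolution and conversion steps above can be sketched as follows. This is an illustrative sketch, not the PR's actual code: the function names, the record field names (`instruction`/`response`), and the default Jinja template are all assumptions for demonstration.

```python
# Sketch of the two behaviors requested in the review (names are hypothetical):
#  1) template priority: user-configured > model's own > our default
#  2) converting an instruction/response record to OpenAI-style chat messages

# Assumed minimal default template, written in the Jinja style HF tokenizers use.
DEFAULT_TEMPLATE = (
    "{% for message in messages %}"
    "{{ message['role'] }}: {{ message['content'] }}\n"
    "{% endfor %}"
)

def resolve_chat_template(user_template, model_template):
    """Apply the priority order discussed in the review."""
    if user_template:          # 1. user-configured template from the YAML file
        return user_template
    if model_template:         # 2. template shipped with the model/tokenizer
        return model_template
    return DEFAULT_TEMPLATE    # 3. our fallback default

def to_chat_format(example):
    """Convert an original-format record to OpenAI-style messages."""
    return {
        "messages": [
            {"role": "user", "content": example["instruction"]},
            {"role": "assistant", "content": example["response"]},
        ]
    }
```

With a Hugging Face tokenizer, the resolved template would then be assigned to `tokenizer.chat_template` before calling `tokenizer.apply_chat_template(messages)`.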

|Name|Default value|Comment|
|---|---|---|
|lora_config|task_type: CAUSAL_LM<br>r: 8<br>lora_alpha: 32<br>lora_dropout: 0.1|Passed to the LoraConfig `__init__()` method and used as the config to build the PEFT model object.|
|deltatuner_config|"algo": "lora"<br>"denas": True<br>"best_model_structure": "/path/to/best_structure_of_deltatuner_model"|Passed to the DeltaTunerArguments `__init__()` method and used as the config to build the [Deltatuner model](https://github.com/intel/e2eAIOK/tree/main/e2eAIOK/deltatuner) object.|
|enable_gradient_checkpointing|False|Enable gradient checkpointing to save GPU memory, at the cost of additional compute time.|
|chat_template|None|User-defined chat template.|
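As a rough sketch of how the `lora_config` section maps onto PEFT, the parsed YAML block can be forwarded as keyword arguments to `peft.LoraConfig`. The dict literal below stands in for the parsed YAML; the wiring is illustrative, not the repo's actual code.

```python
# The lora_config block from the finetuning YAML, as a parsed dict
# (stand-in for the real YAML loader's output).
lora_config = {
    "task_type": "CAUSAL_LM",
    "r": 8,
    "lora_alpha": 32,
    "lora_dropout": 0.1,
}

try:
    from peft import LoraConfig
    # Dict keys become LoraConfig.__init__() keyword arguments.
    peft_config = LoraConfig(**lora_config)
except ImportError:
    # peft not installed; the sketch above still shows the wiring.
    peft_config = None
```

The resulting `peft_config` would then be handed to `peft.get_peft_model(model, peft_config)` to wrap the base model with LoRA adapters.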
Contributor


Have you compared the impact of different templates on fine-tuning performance?

Contributor Author


Not yet.

minmingzhu and others added 20 commits May 16, 2024 12:00
Signed-off-by: minmingzhu <minming.zhu@intel.com>
2. modify chat template

Signed-off-by: minmingzhu <minming.zhu@intel.com>
Signed-off-by: minmingzhu <minming.zhu@intel.com>
2. add unit test

Signed-off-by: minmingzhu <minming.zhu@intel.com>
Signed-off-by: minmingzhu <minming.zhu@intel.com>
* update

* fix blocking

* update

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>

* update

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>

* fix setup and getting started

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>

* update

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>

* update

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>

* nit

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>

* Add dependencies for tests and update pyproject.toml

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>

* Update dependencies and test workflow

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>

* Update dependencies and fix torch_dist.py

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>

* Update OpenAI SDK installation and start ray cluster

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>

---------

Signed-off-by: Wu, Xiaochang <xiaochang.wu@intel.com>
* single test

* single test

* single test

* single test

* fix hang error
Signed-off-by: minmingzhu <minming.zhu@intel.com>
* use base model mpt-7b instead of mpt-7b-chat

Signed-off-by: minmingzhu <minming.zhu@intel.com>

* manual setting specify tokenizer

Signed-off-by: minmingzhu <minming.zhu@intel.com>

* update

Signed-off-by: minmingzhu <minming.zhu@intel.com>

* update doc/finetune_parameters.md

Signed-off-by: minmingzhu <minming.zhu@intel.com>

---------

Signed-off-by: minmingzhu <minming.zhu@intel.com>
