
port m train to use peft version of aloras #212

@jakelorocco

Description

When the m train code was written, peft had not yet added alora support directly, so we are using a fork of peft for our alora support. That forked version differs from the alora support that was eventually merged into peft.

Class names and parameter names / values changed between the two versions. The PR that added alora support directly to peft includes an example fine-tuning script; we should make sure our fine-tuning example matches that version.
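
For reference, a minimal sketch of what the fine-tuning setup might look like against upstream peft, assuming a peft release that includes the merged alora support (where `alora_invocation_tokens` lives on `LoraConfig`, as the warning below suggests). The base model, invocation string, rank, and target modules here are placeholders, not values from our training code:

```python
# Sketch: aLoRA fine-tuning config against upstream peft (not the fork).
# Assumes a peft version that includes merged aLoRA support; the model
# name, invocation string, r, and target_modules are placeholders.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "ibm-granite/granite-3.3-8b-instruct"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Upstream peft folds aLoRA into LoraConfig: the adapter weights only
# activate after the invocation token sequence appears in the input.
invocation = "<|start_of_role|>assistant<|end_of_role|>"  # placeholder
peft_config = LoraConfig(
    r=32,  # placeholder rank
    target_modules=["q_proj", "k_proj", "v_proj"],  # placeholder modules
    alora_invocation_tokens=tokenizer.encode(invocation, add_special_tokens=False),
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```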

Because the parameter names / values differ, I believe aloras trained with the current script will also stop working. For example, the original granite aloras do not work with the new peft version:

```
/opt/homebrew/Caskroom/miniforge/base/envs/gcom/lib/python3.12/site-packages/peft/config.py:165: UserWarning: Unexpected keyword arguments ['alora_invocation_tokens'] for class LoraConfig, these are ignored. This probably means that you're loading a configuration file that was saved using a higher version of the library and additional parameters have been introduced since. It is highly recommended to upgrade the PEFT version before continuing (e.g. by running `pip install -U peft`).
```
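
As a quick compatibility check, one could inspect a saved adapter's config directly and compare it against the installed peft. This is only a sketch; the adapter path is a placeholder:

```python
# Check whether a saved adapter carries the upstream aLoRA field and
# whether the installed peft will honor it. Adapter path is a placeholder.
import json
from pathlib import Path

import peft

cfg = json.loads(Path("path/to/adapter/adapter_config.json").read_text())
print("installed peft version:", peft.__version__)
print("peft_type:", cfg.get("peft_type"))
print("alora_invocation_tokens:", cfg.get("alora_invocation_tokens"))

# If alora_invocation_tokens is present in the config but the installed
# LoraConfig does not accept it, peft emits the UserWarning above and
# drops the field, so the adapter likely loads as a plain LoRA with no
# aLoRA activation gating.
```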

Labels: bug (Something isn't working), enhancement (New feature or request)