Originally added in #1123, leveraging torchtune model definitions (in contrast to hosting model definitions locally) is something torchchat is gradually moving towards, but the capability has been lost through pin bumps and inactivity.
For example, a command like `python3 torchchat.py generate llama3.1-tune --prompt "write me a story about a boy and his bear"` should load the model definition via torchtune and then hand it back to torchchat for inference, but it currently errors out during model construction due to outdated function signatures.
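For reference, a rough sketch of the flow the command above is supposed to exercise. This assumes torchtune's public `llama3_1_8b` builder; the checkpoint path is a placeholder, and the weights are assumed to already be in torchtune's key format (torchchat's actual loading/registry wiring is not shown):

```python
# Minimal sketch: build the model definition from torchtune instead of
# a locally hosted copy -- the step that currently fails in torchchat.
import torch
from torchtune.models.llama3_1 import llama3_1_8b

model = llama3_1_8b()

# Placeholder path; torchchat normally resolves weights from its own
# model registry/download cache. Assumes torchtune-format state dict keys.
state_dict = torch.load(
    "/path/to/llama3.1-8b/model.pth", mmap=True, weights_only=True
)
model.load_state_dict(state_dict)
model.eval()

# Forward signatures are exactly the kind of thing that drifts between
# pins (e.g. mask/input_pos handling changing across torchtune releases).
tokens = torch.tensor([[128000, 9906, 1917]])  # example token ids
with torch.no_grad():
    logits = model(tokens)
print(logits.shape)  # (batch, seq_len, vocab_size)
```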
Task: Re-enable inference with `python3 torchchat.py generate llama3.1-tune --prompt "write me a story about a boy and his bear"`.
I imagine the process will be iterative: tracing signature changes across torchchat and torchtune and updating the construction path accordingly.
A good gauge of this being fixed: a change like 69da96c should be sufficient to support a new torchtune model in torchchat.