Support Mistral predicted outputs using Pydantic models #3372
Conversation
| """Settings used for a Mistral model request.""" | ||
|
|
||
| # ALL FIELDS MUST BE `mistral_` PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS. | ||
| mistral_prediction: str | MistralPrediction | MistralPredictionTypedDict | None |
Could we support only `str`? It looks like the types don't have any additional fields, and `None` is unnecessary since the key can just be omitted from the dict.
Also, if you're up for updating the OpenAI equivalent to support `str` as well, that'd be great :)
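For reference, a minimal sketch of what the `str`-only field could look like, assuming the settings class keeps the usual `ModelSettings` TypedDict pattern (the import path reflects the current module layout; treat it as an assumption):

```python
from pydantic_ai.settings import ModelSettings


class MistralModelSettings(ModelSettings, total=False):
    """Settings used for a Mistral model request."""

    # ALL FIELDS MUST BE `mistral_` PREFIXED SO YOU CAN MERGE THEM WITH OTHER MODELS.
    # A plain string is enough: omit the key entirely instead of passing None.
    mistral_prediction: str
```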
This is fine with me. It would be the easiest-to-use variant for the user and remains "convertible" to the other models' formats. I will update this, adjust the OpenAI equivalent to support `str`, and update the documentation as well.
Any information about the software architecture would be greatly appreciated; if I have overlooked some documentation, please point it out to me. I wasn't sure whether the preferred data type for the Mistral SDK should be a Pydantic model or a TypedDict/dict, so I went with Pydantic.
At what point would the prediction be added to the general `ModelSettings`? Once at least two or three SDKs support it?
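To illustrate the trade-off: the SDK's Pydantic model can absorb a dict/TypedDict payload via `model_validate`, so the Pydantic model works as the single canonical internal type. A sketch, assuming `MistralPrediction` carries the `type` and `content` fields implied by the validation call quoted below:

```python
from mistralai.models import Prediction as MistralPrediction

# A dict (or TypedDict) payload is promoted to the SDK's Pydantic model,
# so downstream code only ever has to deal with one type.
pred = MistralPrediction.model_validate({'type': 'content', 'content': 'predicted text'})
assert pred.content == 'predicted text'
```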
> Supported by:
As far as I know, only OpenAI and Mistral support it right now.
Q2: Another approach, especially from a usage perspective, is prefill support. I wanted to link it here, but I agree with what you wrote in #2825, @DouweM.
```python
    return MistralPrediction.model_validate(prediction)
else:
    raise RuntimeError(
        f'Unsupported prediction type: {type(prediction)} for MistralModelSettings. Expected str, dict, or MistralPrediction.'
    )
```
With the suggestion above this can be simplified a lot and we won't need this error anymore. But as a note for the future: we don't need errors like this; we can assume the user is type-checking their code.
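For illustration, once only `str` is accepted the whole mapping collapses to a direct wrap, with no isinstance dispatch or error branch (the helper name is hypothetical, and this assumes the SDK model's `type` field has a default):

```python
def _map_setting_prediction(prediction: str) -> MistralPrediction:
    # str is the only accepted type, so no isinstance checks or fallback error are needed.
    return MistralPrediction(content=prediction)
```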
This pull request adds support for handling predicted outputs from Mistral models and integrates them following the approach used in #2098. The change ensures predicted outputs are parsed and validated using a Pydantic model. Closes #3336.
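A hypothetical end-to-end usage sketch (the model name and prompt are illustrative, and the exact import path is an assumption based on the current module layout):

```python
from pydantic_ai import Agent
from pydantic_ai.models.mistral import MistralModelSettings

agent = Agent('mistral:mistral-large-latest')

# Predicted outputs: pass the text the model is expected to largely reproduce,
# so unchanged spans can be accepted quickly instead of regenerated token by token.
result = agent.run_sync(
    'Rename the function add to sum_values in: def add(a, b): return a + b',
    model_settings=MistralModelSettings(mistral_prediction='def sum_values(a, b): return a + b'),
)
print(result.output)
```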
What I changed
Hope that works for you!