add ALLOWED_OPENAI_PARAMS to openai pipeline #107
base: smart-window
Conversation
```js
const allowedOpenAIParams =
  request.allowed_openai_params ??
  request.allowedOpenAIParams ??
  this.#options.allowedOpenAIParams ??
```
@tarekziade not sure whether this `this.#options.allowedOpenAIParams` / `config.allowedOpenAIParams` check is necessary to allow future clients to set this value. Most of the time we probably don't want to add this payload on every client call anyway.

I tested this and it shouldn't break any non-TogetherAI inference servers; for example, it works fine with VertexAI and doesn't affect the response payload. But if we want to be extra cautious and don't want to do a conditional check against the model id in client-side code, we could do this in the MLPA layer (check for the model id prefix being `together_*` and add this additional payload before passing off to LiteLLM).

Long term, LiteLLM should ideally support custom model configurations that allow passing through model-specific request parameters.

Would love to hear your thoughts on this.
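For context, a minimal sketch of how the truncated resolution chain above could complete, with the default fallback taken from the PR description; the class shape and the `buildCompletionParams` helper name are assumptions for illustration, not the actual patch:

```js
// Default from the PR description: "tools" and "tool_choice".
const DEFAULT_ALLOWED_OPENAI_PARAMS = ["tools", "tool_choice"];

class OpenAIPipeline {
  #options;

  constructor(options = {}) {
    this.#options = options;
  }

  // Hypothetical helper; mirrors the resolution order shown in the diff.
  buildCompletionParams(request, messages) {
    const allowedOpenAIParams =
      request.allowed_openai_params ?? // snake_case request key
      request.allowedOpenAIParams ?? // camelCase request key
      this.#options.allowedOpenAIParams ?? // per-pipeline options
      DEFAULT_ALLOWED_OPENAI_PARAMS; // default fallback

    return {
      model: this.#options.model,
      messages,
      // LiteLLM reads allowed_openai_params to pass the listed parameters
      // through to providers that would otherwise reject them.
      allowed_openai_params: allowedOpenAIParams,
    };
  }
}
```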
```js
let config = {};
options.applyToConfig(config);
config.backend = config.backend || "openai";
if (!config.allowedOpenAIParams) {
```
how do you expect this config to be set? if it comes from createEngine, the allowed-keys filter strips it out anyway: https://searchfox.org/firefox-main/rev/77b6c9748bdd784eb5e0ee42603c408b34559d7d/toolkit/components/ml/content/EngineProcess.sys.mjs#769-770

at least for the fork, we could make this conditional on the modelId (see the sketch after the log below)? a quick test shows adding these params works for qwen, gpt4o, and together, but not mistral:
```
500 litellm.APIConnectionError: Vertex_aiException - b'
{"detail":[
{"type":"model_attributes_type","loc":["body","ChatCompletionRequest","tool_choice","ToolChoice"],"msg":"Input should be a valid dictionary or object to extract fields from","input":null},
{"type":"enum","loc":["body","ChatCompletionRequest","tool_choice","function-after[operator.attrgetter('value')(), str-enum[ToolChoiceEnum]]"],"msg":"Input should be 'auto', 'none', 'any' or 'required'","input":null,"ctx":{"expected":"'auto', 'none', 'any' or 'required'"}},
{"type":"missing","loc":["body","CompletionRequest","prompt"],"msg":"Field required","input":{"model":"mistral-small-2503","messages":[{"role":"system","content":"You are a …Today's date: Thursday, November 13, 2025"},{"role":"user","content":"hi"}]},
{"type":"extra_forbidden","loc":["body","OCRRequest","tools"],"msg":"Extra inputs are not permitted","input":[{"type":"function","function":{"name":"search_open_tabs","description":"Search …","parameters":{"type":"object","properties":{"type":{"type":"string","description":"The …"}},"required":["type"]}}},{"type":"function","function":{"name":"get_page_content","description":"Retrieve text from a specific browser tab. Choose whether to read the current viewport, Reader Mode, or full page content. The content is cleaned for analysis.","parameters":{"type":"object","properties":{"url":{"type":"string","description":"The …"},"mode":{"type":"string","enum":["viewport","reader","full"],"description":"Extraction …"}},"required":["url"]}}},{"type":"function","function":{"name":"search_history","description":"Search …","parameters":{"type":"object","properties":{"search_term":{"type":"string","description":"Keywords …"}},"required":["search_term"]}}}]},
{"type":"extra_forbidden","loc":["body","OCRRequest","tool_choice"],"msg":"Extra inputs are not permitted","input":null},
{"type":"extra_forbidden","loc":["body","OCRRequest","stream"],"msg":"Extra inputs are not permitted","input":true}
]}
'. Received Model Group=mistral-small-2503
Available Model Group Fallbacks=None
```
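As a rough sketch of the modelId conditional suggested above; the prefix list is hypothetical and based only on the quick test results (qwen, gpt4o, and together accepted the extra params; mistral rejected them):

```js
const DEFAULT_ALLOWED_OPENAI_PARAMS = ["tools", "tool_choice"];

// Hypothetical allowlist of model id prefixes known to accept the params.
const MODELS_ACCEPTING_TOOL_PARAMS = ["qwen", "gpt4o", "together"];

if (
  !config.allowedOpenAIParams &&
  MODELS_ACCEPTING_TOOL_PARAMS.some(prefix =>
    config.modelId?.startsWith(prefix)
  )
) {
  config.allowedOpenAIParams = DEFAULT_ALLOWED_OPENAI_PARAMS;
}
```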
This pull request updates the `OpenAIPipeline` backend to introduce support for specifying allowed OpenAI parameters, improving configurability and control over API requests. The main changes include defining a default set of allowed parameters, updating configuration logic, and ensuring these parameters are properly passed in API requests.

Configuration and parameter management improvements:

- Added `DEFAULT_ALLOWED_OPENAI_PARAMS` (containing `"tools"` and `"tool_choice"`) as the default set of allowed OpenAI parameters.
- Set `allowedOpenAIParams` to the default value if it is not explicitly provided in the options (see the sketch below).

API request handling enhancements:

- Resolve `allowedOpenAIParams` from multiple sources (request, options, or default) and ensure it is included in the completion parameters sent to the OpenAI API. [1] [2]
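For completeness, a minimal sketch of the configuration-default logic described above, completing the truncated second diff from the conversation; the body of the `if` branch is an assumption based on the description, not the exact patch:

```js
const DEFAULT_ALLOWED_OPENAI_PARAMS = ["tools", "tool_choice"];

let config = {};
options.applyToConfig(config);
config.backend = config.backend || "openai";
if (!config.allowedOpenAIParams) {
  // Fall back to the default allowlist when the caller didn't set one.
  config.allowedOpenAIParams = DEFAULT_ALLOWED_OPENAI_PARAMS;
}
```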