
Conversation


@ti3x ti3x commented Nov 13, 2025

This pull request updates the OpenAIPipeline backend to introduce support for specifying allowed OpenAI parameters, improving configurability and control over API requests. The main changes include defining a default set of allowed parameters, updating configuration logic, and ensuring these parameters are properly passed in API requests.

Configuration and parameter management improvements:

  • Defined a new constant DEFAULT_ALLOWED_OPENAI_PARAMS (containing "tools" and "tool_choice") for use as the default allowed OpenAI parameters.
  • Updated the pipeline configuration logic to set allowedOpenAIParams to the default value if not explicitly provided in the options.

API request handling enhancements:

  • Modified request construction to resolve allowedOpenAIParams from multiple sources (request, options, or default) and ensure it is included in the completion parameters sent to the OpenAI API (see the sketch below).

add ALLOWED_OPENAI_PARAMS to openai pipeline
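
A minimal sketch of the described flow, assuming the pipeline's existing structure (DEFAULT_ALLOWED_OPENAI_PARAMS, allowedOpenAIParams, and the fallback order come from this PR; surrounding names such as completionParams are illustrative, not the exact patch):

  // Default allowed OpenAI parameters introduced by this PR.
  const DEFAULT_ALLOWED_OPENAI_PARAMS = ["tools", "tool_choice"];

  // Configuration (excerpt): fall back to the default when not set explicitly.
  config.allowedOpenAIParams ??= DEFAULT_ALLOWED_OPENAI_PARAMS;

  // Request construction (excerpt): resolve from the request, then the pipeline
  // options, then the default, and forward it with the completion parameters.
  const allowedOpenAIParams =
    request.allowed_openai_params ??
    request.allowedOpenAIParams ??
    this.#options.allowedOpenAIParams ??
    DEFAULT_ALLOWED_OPENAI_PARAMS;
  completionParams.allowed_openai_params = allowedOpenAIParams;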

@ti3x ti3x requested review from Mardak and tarekziade November 13, 2025 21:54
const allowedOpenAIParams =
  request.allowed_openai_params ??
  request.allowedOpenAIParams ??
  this.#options.allowedOpenAIParams ??
  DEFAULT_ALLOWED_OPENAI_PARAMS;
@ti3x ti3x (Author) commented Nov 13, 2025


@tarekziade not sure whether the this.#options / config.allowedOpenAIParams check is necessary to allow future clients to set this value. Most of the time we probably don't want to add this payload on every client call anyway.
I tested this and it shouldn't break any non-TogetherAI inference servers; for example, it works fine with VertexAI and doesn't affect the response payload. But if we want to be extra cautious and don't want to do a conditional check against the model id in client-side code, we could do this in the MLPA layer (check for a model id prefix of together_* and add this additional payload before passing off to LiteLLM), roughly as sketched at the end of this comment.

Long term, LiteLLM should ideally support custom model configurations that allow passing through model-specific request parameters.

Would love to hear your thoughts on this.
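
A rough sketch of what that MLPA-layer check could look like (the function and names below are hypothetical, not existing code):

  // Hypothetical MLPA-layer hook: only TogetherAI-backed models get the extra
  // allowed_openai_params payload before the request is handed off to LiteLLM.
  const TOGETHER_PREFIX = "together_";

  function maybeAddAllowedOpenAIParams(modelId, payload) {
    if (modelId?.startsWith(TOGETHER_PREFIX)) {
      payload.allowed_openai_params = ["tools", "tool_choice"];
    }
    return payload;
  }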

let config = {};
options.applyToConfig(config);
config.backend = config.backend || "openai";
if (!config.allowedOpenAIParams) {
  config.allowedOpenAIParams = DEFAULT_ALLOWED_OPENAI_PARAMS;
}
Member


How do you expect this config to be set? If it comes from createEngine, the allowed-key option filtering drops it anyway: https://searchfox.org/firefox-main/rev/77b6c9748bdd784eb5e0ee42603c408b34559d7d/toolkit/components/ml/content/EngineProcess.sys.mjs#769-770

At least for fork, we could make this conditional on the modelId (rough sketch after the error output below)? A quick test shows adding these params works for qwen, gpt4o, and together, but not mistral:

500 litellm.APIConnectionError: Vertex_aiException - b'
{"detail":[
{"type":"model_attributes_type","loc":["body","ChatCompletionRequest","tool_choice","ToolChoice"],"msg":"Input should be a valid dictionary or object to extract fields from","input":null},
{"type":"enum","loc":["body","ChatCompletionRequest","tool_choice","function-after[operator.attrgetter('value')(), str-enum[ToolChoiceEnum]]"],"msg":"Input should be 'auto', 'none', 'any' or 'required'","input":null,"ctx":{"expected":"'auto', 'none', 'any' or 'required'"}},
{"type":"missing","loc":["body","CompletionRequest","prompt"],"msg":"Field required","input":{"model":"mistral-small-2503","messages":[{"role":"system","content":"You are a …Today's date: Thursday, November 13, 2025"},{"role":"user","content":"hi"}]},
{"type":"extra_forbidden","loc":["body","OCRRequest","tools"],"msg":"Extra inputs are not permitted","input":[{"type":"function","function":{"name":"search_open_tabs","description":"Search …","parameters":{"type":"object","properties":{"type":{"type":"string","description":"The …"}},"required":["type"]}}},{"type":"function","function":{"name":"get_page_content","description":"Retrieve text from a specific browser tab. Choose whether to read the current viewport, Reader Mode, or full page content. The content is cleaned for analysis.","parameters":{"type":"object","properties":{"url":{"type":"string","description":"The …"},"mode":{"type":"string","enum":["viewport","reader","full"],"description":"Extraction …"}},"required":["url"]}}},{"type":"function","function":{"name":"search_history","description":"Search …","parameters":{"type":"object","properties":{"search_term":{"type":"string","description":"Keywords …"}},"required":["search_term"]}}}]},
{"type":"extra_forbidden","loc":["body","OCRRequest","tool_choice"],"msg":"Extra inputs are not permitted","input":null},
{"type":"extra_forbidden","loc":["body","OCRRequest","stream"],"msg":"Extra inputs are not permitted","input":true}
]}
'. Received Model Group=mistral-small-2503
Available Model Group Fallbacks=None
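
A hypothetical sketch of that modelId gate (the pattern below just mirrors the quick test above; it is an illustration, not a vetted allowlist):

  // Hypothetical: only forward the tool parameters for model ids the quick test
  // showed to accept them; anything else (e.g. mistral) gets no extra params.
  const TOOL_PARAM_MODEL_IDS = /^(together_|qwen|gpt-?4o)/i;

  const allowedOpenAIParams = TOOL_PARAM_MODEL_IDS.test(config.modelId ?? "")
    ? DEFAULT_ALLOWED_OPENAI_PARAMS
    : [];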
