Labels
feature request (request for an enhancement / additional functionality), langchain (related to the package `langchain`), openai
Description
Checked other resources
- This is a feature request, not a bug report or usage question.
- I added a clear and descriptive title that summarizes the feature request.
- I used the GitHub search to find a similar feature request and didn't find it.
- I checked the LangChain documentation and API reference to see if this feature already exists.
- This is not related to the langchain-community package.
Package (Required)
- langchain
- langchain-openai
- langchain-anthropic
- langchain-classic
- langchain-core
- langchain-cli
- langchain-model-profiles
- langchain-tests
- langchain-text-splitters
- langchain-chroma
- langchain-deepseek
- langchain-exa
- langchain-fireworks
- langchain-groq
- langchain-huggingface
- langchain-mistralai
- langchain-nomic
- langchain-ollama
- langchain-perplexity
- langchain-prompty
- langchain-qdrant
- langchain-xai
- Other / not sure / general
Feature Description
Add support for reasoning outputs from OpenAI-compatible models that don't support the Responses API.
Current behavior:
- OpenAI-compatible servers that support reasoning but not the Responses API have their reasoning content ignored
- Reasoning content in `choices[].message.reasoning` or `choices[].message.reasoning_content` is not extracted
Expected behavior:
- Extract reasoning content from standard Chat Completions API responses
- Include reasoning in `AIMessage.content` or `AIMessageChunk.content`
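As an illustration, the desired result might look like the following. The exact block shape is an assumption modeled on LangChain's list-of-content-blocks convention, not the current API:

```python
# Hypothetical shape of AIMessage.content after the change, modeled on
# LangChain's list-of-content-blocks convention; the exact keys are an
# assumption, not existing behavior.
message_content = [
    {"type": "reasoning", "reasoning": "First, factor the expression..."},
    {"type": "text", "text": "The answer is 42."},
]

# Downstream code could then pull the reasoning out by block type:
reasoning = [b["reasoning"] for b in message_content if b["type"] == "reasoning"]
```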
Use Case
Some OpenAI-compatible inference servers (vLLM, custom deployments) support reasoning models but do not support the Responses API.
Problem:
- Enterprise self-hosted inference servers run reasoning models behind OpenAI-compatible APIs, for example:
  - the OpenAI-compatible API provided by vLLM
  - the OpenAI-compatible API provided by Cohere (langchain-cohere cannot be used due to a bug I have already reported)
  - other providers that offer an OpenAI-compatible API but do not support the Responses API
- Reasoning content is lost and inaccessible through LangChain
- Raw responses must be parsed manually to extract reasoning
- This breaks compatibility with agent frameworks
Example:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://custom-server.com/v1",
    model="reasoning-model",
    reasoning_effort="medium",
    use_responses_api=False,  # the server doesn't support the Responses API
)
response = llm.invoke("Solve this problem...")
# Currently: reasoning is ignored
# Needed: reasoning accessible in response.content or response.content_blocks
```

Why needed:
- Broader compatibility with OpenAI-compatible servers that implement reasoning
- Many servers support Chat Completions API with reasoning fields but not the full Responses API
- Enable reasoning capabilities across diverse inference server implementations
Proposed Solution
Expectation:
- Detect reasoning fields in the API response:
  - Check `choice.message.reasoning` or `choice.message.reasoning_content`
  - Only when `use_responses_api=False` and `reasoning_effort` (or `reasoning`) is provided
- Include the reasoning in message content:
  - Add it as a content block with `type="reasoning"`
  - Use a format similar to the Responses API handling
- Maintain backward compatibility:
  - Only process the fields when they exist in the response
  - No change for models without reasoning support
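The steps above could be sketched as a small post-processing helper. This is a hypothetical illustration of the proposal, not existing LangChain code; the field names follow the proposal text:

```python
def message_to_content_blocks(message: dict) -> list[dict]:
    """Convert a raw Chat Completions message into content blocks,
    prepending a 'reasoning' block when the server supplied one.

    Hypothetical helper illustrating the proposal; not LangChain code.
    """
    blocks: list[dict] = []
    # Detect either known reasoning field; skip silently if absent,
    # which keeps behavior unchanged for non-reasoning models.
    reasoning = message.get("reasoning_content") or message.get("reasoning")
    if reasoning:
        blocks.append({"type": "reasoning", "reasoning": reasoning})
    if message.get("content"):
        blocks.append({"type": "text", "text": message["content"]})
    return blocks

msg = {"role": "assistant", "content": "Done.", "reasoning": "I considered..."}
blocks = message_to_content_blocks(msg)
```

A message without reasoning fields passes through with only a text block, which is what keeps the change backward compatible.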
Alternatives Considered
No response
Additional Context