
Add support for reasoning_effort parameter for reasoning models in AzureOpenAIConfig #3651

@MRizwan14

Description


🚀 The feature

A feature for detecting reasoning models such as o1, o3, and gpt-5 was recently added in https://github.com/mem0ai/mem0/blob/main/mem0/llms/base.py via the _is_reasoning_model() method.

However, the corresponding OpenAI SDK parameter reasoning_effort is not accepted by Mem0's configuration classes. As a result, when "reasoning_effort": "low" is included in the llm.config section (for example, when initializing via Memory.from_config), the following error is raised:
TypeError: AzureOpenAIConfig.__init__() got an unexpected keyword argument 'reasoning_effort'
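For context, a configuration of roughly this shape triggers the error; every key other than reasoning_effort is illustrative, not taken from a real deployment:

```python
# Illustrative config passed to Memory.from_config. The provider/model values
# are assumptions for the sketch; "reasoning_effort" is the parameter that
# AzureOpenAIConfig.__init__() currently rejects.
config = {
    "llm": {
        "provider": "azure_openai",
        "config": {
            "model": "o3",
            "reasoning_effort": "low",  # raises TypeError today
        },
    }
}
```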

Motivation, pitch

This feature is requested to enable testing and comparison of different reasoning effort levels ("low", "medium", "high") supported by the latest OpenAI SDK.
Adding this parameter would make it possible to evaluate performance and latency trade-offs across reasoning models directly within Mem0.
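One possible shape for the fix is sketched below as a minimal stand-in class; field names other than reasoning_effort, and the validation behavior, are assumptions rather than Mem0's actual implementation:

```python
from typing import Optional


class AzureOpenAIConfig:
    """Minimal stand-in for Mem0's AzureOpenAIConfig (the real class has
    many more fields); shows reasoning_effort declared as a first-class
    keyword argument so it no longer raises TypeError."""

    VALID_EFFORTS = (None, "low", "medium", "high")

    def __init__(
        self,
        model: Optional[str] = None,
        temperature: float = 0.1,
        reasoning_effort: Optional[str] = None,  # proposed new parameter
        **kwargs,
    ):
        if reasoning_effort not in self.VALID_EFFORTS:
            raise ValueError(f"invalid reasoning_effort: {reasoning_effort!r}")
        self.model = model
        self.temperature = temperature
        self.reasoning_effort = reasoning_effort


# With the parameter declared, the TypeError from the report no longer occurs:
cfg = AzureOpenAIConfig(model="o3", reasoning_effort="low")
```

The effort value could then be forwarded to the SDK call only when _is_reasoning_model() returns True, leaving non-reasoning models unaffected.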
