feat: implement ReasoningAgent and DualBrainAgent with advanced reasoning capabilities #977
base: main
Conversation
feat: implement ReasoningAgent and DualBrainAgent with advanced reasoning capabilities

- Add ReasoningConfig class for configurable reasoning parameters
- Add ActionState enum for flow control
- Implement ReasoningAgent inheriting from Agent with:
  - Step-by-step reasoning with confidence scoring
  - Reasoning trace tracking
  - Configurable min/max steps and reasoning styles
- Implement DualBrainAgent inheriting from Agent with:
  - Separate LLMs for conversation and reasoning
  - Dual-brain coordination for optimal problem-solving
  - Brain status monitoring and model switching
- Add confidence scoring integration
- Maintain backward compatibility with existing Agent class
- Export new classes in agent module

Addresses issue #968: Create ReasoningAgent inherited from Agent class

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
Summary of Changes
Hello @MervinPraison, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the agent framework by integrating advanced reasoning capabilities. It introduces two new specialized agent types, `ReasoningAgent` and `DualBrainAgent`, designed to tackle complex problems through structured thought processes and multi-model coordination. A new `reasoning` module underpins these agents, providing a framework for configurable reasoning, step-by-step analysis, confidence scoring, and flow control, ensuring more intelligent and traceable agent behavior.
Highlights
- New Agent Implementations: I've added two new agent classes, `ReasoningAgent` and `DualBrainAgent`, to introduce advanced reasoning capabilities into the system. These agents are designed to handle complex problem-solving more effectively.
- Enhanced Reasoning Framework: A new `reasoning` module has been introduced, providing core components such as `ReasoningConfig` for configurable parameters, `ActionState` for flow control, `ReasoningStep` for individual steps, and `ReasoningTrace` for tracking the entire reasoning process, including confidence scoring.
- Dual-Brain Architecture: The `DualBrainAgent` implements a 'dual-brain' approach, utilizing separate Large Language Models (LLMs) for conversational responses and analytical reasoning. This allows for more specialized and coordinated problem-solving by leveraging the strengths of different models.
- Step-by-Step Reasoning: The `ReasoningAgent` enables agents to perform step-by-step reasoning, track their internal thought processes, and assess confidence levels for each step. This enhances transparency and provides greater control over how complex tasks are approached and solved (a usage sketch follows below).
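A hypothetical usage sketch of both classes; the import paths and constructor parameters (`name`, `llm`, `reasoning_config`) are assumptions pieced together from this summary and the diff hunks below, not a confirmed API:

```python
# Hypothetical usage sketch; import paths and exact signatures are assumptions.
from praisonaiagents import ReasoningAgent, DualBrainAgent
from praisonaiagents.reasoning import ReasoningConfig

# ReasoningAgent: one LLM, prompted to reason step-by-step with a recorded trace
reasoner = ReasoningAgent(
    name="Analyst",
    llm="gpt-4o",
    reasoning_config=ReasoningConfig(min_steps=3, max_steps=8, style="analytical"),
)
answer = reasoner.chat("Why does binary search require a sorted array?")
print(answer)
print(reasoner.reasoning_trace.overall_confidence)  # averaged step confidence

# DualBrainAgent: a conversational "main brain" plus an analytical "reasoning brain"
dual = DualBrainAgent(
    name="Strategist",
    llm="gpt-4o",                            # main (conversational) model
    reasoning_config={"model": "o1-mini"},   # reasoning model override, per the diff below
)
print(dual.chat("Outline a three-step plan to migrate a REST API to gRPC."))
```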
Code Review
This pull request introduces the `ReasoningAgent` and `DualBrainAgent` classes, adding advanced reasoning capabilities. The implementation of the data models in `reasoning.py` is well-structured. The main concerns are that the `ReasoningAgent` doesn't fully implement the step-by-step reasoning parsing, and the `DualBrainAgent` has a potential thread-safety issue. Addressing these, along with adding unit tests, will improve the robustness of this feature.
```python
def chat(
    self,
    message: str,
    **kwargs
) -> str:
    """
    Enhanced chat method with reasoning capabilities.

    Args:
        message: Input message
        **kwargs: Additional chat parameters

    Returns:
        Response with reasoning trace
    """
    # Start reasoning trace
    self.start_reasoning_trace(message)

    # Enhance message with reasoning instructions
    enhanced_message = f"""
    {message}

    Please solve this step-by-step using the following reasoning process:
    1. Break down the problem into logical steps
    2. For each step, show your thought process
    3. State your confidence level (0.0-1.0) for each step
    4. Ensure minimum {self.reasoning_config.min_steps} reasoning steps
    5. Use {self.reasoning_config.style} reasoning style
    6. Provide a clear final answer

    Format your response to show each reasoning step clearly.
    """

    # Call parent chat method
    response = super().chat(enhanced_message, **kwargs)

    # Complete reasoning trace
    self.complete_reasoning_trace(response)

    return response
```
The `chat` method instructs the LLM to perform step-by-step reasoning but does not parse the response to create `ReasoningStep` objects, so the `reasoning_trace.steps` list remains empty. Define a structured format (e.g., JSON) for the LLM to return reasoning steps, update the prompt to request the output in that format, and parse the LLM's response to populate the trace.
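As a sketch of that suggestion: assuming `ReasoningStep` is importable from the new `reasoning` module with the fields shown later on this page (`step_number`, `title`, `thought`, `action`, `confidence`), the prompt could request `{"steps": [...]}` JSON and the response could be parsed like this (the JSON contract itself is hypothetical):

```python
import json
from typing import List

from praisonaiagents.reasoning import ReasoningStep  # assumed import path

def parse_reasoning_steps(response: str) -> List[ReasoningStep]:
    """Parse an LLM response shaped like {"steps": [{...}, ...]} into ReasoningStep objects."""
    try:
        payload = json.loads(response)
    except json.JSONDecodeError:
        return []  # LLM ignored the requested format; leave the trace empty rather than crash
    if not isinstance(payload, dict):
        return []
    steps = []
    for i, item in enumerate(payload.get("steps", []), start=1):
        steps.append(ReasoningStep(
            step_number=i,
            title=item.get("title", f"Step {i}"),
            thought=item.get("thought", ""),
            action=item.get("action", ""),
            confidence=float(item.get("confidence", 0.5)),
        ))
    return steps

# Inside chat(), after calling super().chat(enhanced_message, **kwargs):
#     self.reasoning_trace.steps.extend(parse_reasoning_steps(response))
```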
```python
try:
    # Switch to reasoning LLM
    self.llm = self.reasoning_llm

    # Use parent chat method with reasoning LLM
    reasoning_result = super().chat(reasoning_prompt)

    return reasoning_result

finally:
    # Restore original LLM
    self.llm = original_llm
```
The `_reason_with_analytical_brain` method modifies the instance attribute `self.llm`, which is not thread-safe. If `chat()` is called concurrently, this could lead to race conditions. Consider passing the LLM configuration directly to the chat completion method or creating a temporary, isolated client for the reasoning call.
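A minimal lock-based sketch of one option; note that a lock only helps if every reader of `self.llm` takes it too, so a per-call model override (if the base `Agent` supports one) remains the cleaner fix:

```python
import threading

from praisonaiagents import Agent  # assumed import path

class DualBrainAgent(Agent):  # sketch: only the relevant method shown
    # Class-level lock: deliberately conservative; it serializes all swap-based
    # reasoning calls so no thread observes a half-switched self.llm.
    _llm_swap_lock = threading.Lock()

    def _reason_with_analytical_brain(self, reasoning_prompt: str) -> str:
        with self._llm_swap_lock:
            original_llm = self.llm
            try:
                self.llm = self.reasoning_llm
                return super().chat(reasoning_prompt)
            finally:
                self.llm = original_llm
```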
```python
self.reasoning_trace.overall_confidence = sum(
    step.confidence for step in self.reasoning_trace.steps
) / len(self.reasoning_trace.steps)
```
The calculation for `overall_confidence` can raise a `ZeroDivisionError` if `self.reasoning_trace.steps` is empty. Add a check to prevent this:
```python
if self.reasoning_trace.steps:
    self.reasoning_trace.overall_confidence = sum(
        step.confidence for step in self.reasoning_trace.steps
    ) / len(self.reasoning_trace.steps)
else:
    self.reasoning_trace.overall_confidence = 0.0
```
```python
    main_llm = llm_config.get('model', llm)
    # Apply LLM config parameters as needed
else:
    main_llm = llm or "gpt-4o"
```
```python
if isinstance(reasoning_config, dict) and 'model' in reasoning_config:
    self.reasoning_llm_config.update(reasoning_config)
```
Instead of directly updating `self.reasoning_llm_config` with the entire `reasoning_config` dictionary, selectively update only the keys relevant to the LLM configuration (`model`, `temperature`, `system_prompt`). This prevents potential issues if `reasoning_config` contains unrelated parameters:
```python
if isinstance(reasoning_config, dict):
    llm_config_keys = {"model", "temperature", "system_prompt"}
    llm_specific_config = {k: v for k, v in reasoning_config.items() if k in llm_config_keys}
    if llm_specific_config:
        self.reasoning_llm_config.update(llm_specific_config)
```
```python
    confidence=0.9  # High confidence in reasoning LLM analysis
)
self.reasoning_trace.steps.append(reasoning_step)
self.last_reasoning_steps.append(reasoning_step)

# Step 2: Use main LLM for response generation
final_response = self._generate_response_with_main_brain(message, reasoning_analysis)

# Add response generation step
response_step = ReasoningStep(
    step_number=2,
    title="Main Brain Response Generation",
    thought=f"Using {self.main_llm} for conversational response",
    action=final_response,
    confidence=0.85
```
Bug: Incorrect Attribute Access in `reason_step` Function

The `reason_step` function incorrectly attempts to call `.get('steps', [])` on `agent.reasoning_trace`. When `agent.reasoning_trace` exists, it is a `ReasoningTrace` Pydantic BaseModel object, not a dictionary. Pydantic models do not have a `.get()` method, so this raises an `AttributeError`. The `steps` attribute should be accessed directly from the `ReasoningTrace` object.
src/praisonai-agents/praisonaiagents/reasoning.py, lines 117 to 119 (commit 2e5d7cf):
```python
step = ReasoningStep(
    step_number=len(getattr(agent, 'reasoning_trace', {}).get('steps', [])) + 1,
    title=f"Step {len(getattr(agent, 'reasoning_trace', {}).get('steps', [])) + 1}",
```
@claude review this pull request and do a detailed analysis, and fix this if the existing code doesn't have the solution implemented, making sure it has backward compatibility and no existing features are removed. After making those changes, review the applied changes again. Use @web to search if you don't know any information, or to find the latest documentation or the latest version. Run the code if you think you need to test it. Start with minimal code changes if any are required.
Claude encountered an error (view job). I'll analyze this and get back to you.
Implements ReasoningAgent and DualBrainAgent classes as requested in issue #968
Changes:
Features:
Generated with Claude Code