feat: Implement exponential backoff to GeminiLLM, and enable it by default #2006


Open
wants to merge 1 commit into base: main

Conversation

@copybara-service copybara-service bot commented Jul 17, 2025

feat: Implement exponential backoff to GeminiLLM, and enable it by default

Exponential backoff is enabled by default. It starts with a 5-second delay and doubles on each
subsequent retry, capped at 60s. As a result, retries fire at 5s, 10s, 20s, 40s, and 60s.

Usage: It is enabled by default, but is configurable during agent declaration:

```python
root_agent = Agent(
  model=Gemini(
    model='gemini-2.0-flash',
    retry_config=RetryConfig(initial_delay_sec=60, max_retries=3)
  ),
  ...
)
```

Note: This config cannot be added to RunConfig. Although RunConfig has similar settings, it is
only available through invocation_context, which BaseLLM and its derived LLM classes cannot
access.
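The delay schedule described above can be sketched as follows. This is an illustrative helper, not the PR's actual implementation; the parameter names mirror the `RetryConfig` fields shown in the usage example but are assumptions:

```python
# Sketch of the backoff schedule: start at initial_delay_sec, double each
# retry, cap at max_delay_sec. Names mirror RetryConfig but are illustrative.
def backoff_delays(initial_delay_sec=5, max_delay_sec=60, max_retries=5):
    """Yield the delay (in seconds) to wait before each retry attempt."""
    delay = initial_delay_sec
    for _ in range(max_retries):
        yield min(delay, max_delay_sec)
        delay *= 2  # exponential (^2) growth for the next attempt

print(list(backoff_delays()))  # defaults: [5, 10, 20, 40, 60]
```

With `initial_delay_sec=60`, as in the usage example, every retry waits the full 60s cap.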

Tested locally:

```
The description about you is "Checks if input is valid using predefined tools"
[logging_plugin]    Available Tools: ['check_valid_input', 'check_valid_input2']
2025-07-21 18:00:07.767904
2025-07-21 18:00:12.776910
2025-07-21 18:00:22.792078
2025-07-21 18:00:42.817873
2025-07-21 18:01:22.856147
[logging_plugin] 🧠 LLM ERROR
[logging_plugin]    Agent: check_input
[logging_plugin]    Error: 503 None. {}
```
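The timestamps in the log above are spaced 5s, 10s, 20s, 40s, and 60s apart before the final 503 surfaces. A generic retry loop producing that pattern might look like the sketch below; `ServerError` and `call` are placeholders, not the ADK's actual names:

```python
import time

class ServerError(Exception):
    """Placeholder for a 503-style transient error."""

def call_with_backoff(call, initial_delay_sec=5, max_delay_sec=60, max_retries=5):
    """Invoke call(), retrying on ServerError with capped exponential backoff."""
    delay = initial_delay_sec
    for attempt in range(max_retries + 1):
        try:
            return call()
        except ServerError:
            if attempt == max_retries:
                raise  # retries exhausted: surface the error, as in the log above
            time.sleep(min(delay, max_delay_sec))
            delay *= 2
```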

@copybara-service copybara-service bot added the google-contributor [Bot] This PR is created by Google label Jul 17, 2025
@copybara-service copybara-service bot force-pushed the copybara/784029405 branch 4 times, most recently from 157483f to d3f3334 Compare July 17, 2025 16:43
@copybara-service copybara-service bot changed the title feat: Implement exponential backoff to GeminiLLM feat: Implement exponential backoff to GeminiLLM, and enable it by default Jul 21, 2025
@copybara-service copybara-service bot force-pushed the copybara/784029405 branch from d3f3334 to 785ca6f Compare July 21, 2025 21:48