@tjtanjin (Member) commented on Jun 9, 2025

This commit introduces an optional `debug` boolean configuration to the OpenAI, Gemini, and WebLlm providers.

When `debug` is set to `true` in the provider's configuration, the provider will print useful debugging information to the console. This includes:

  • For OpenAI and Gemini providers:
    • The HTTP method being used.
    • The endpoint URL (API keys in Gemini's direct mode URL are redacted).
    • Request headers (with the 'Authorization' header redacted for OpenAI; see the redaction sketch after this list).
    • The request body.
  • For WebLlmProvider:
    • The model being used.
    • The system message.
    • The response format.
    • The engine configuration.
    • Chat completion options.
    • The messages being processed.
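
To illustrate the redaction behaviour in isolation, here is a minimal sketch of how sensitive values could be masked before logging. It is not the implementation in this PR; the helper names and log format are made up for the example:

```typescript
// Illustrative sketch only: masking secrets before printing debug output.
// Helper names and log format are hypothetical, not the provider internals.

// Replace the Authorization header value (e.g. "Bearer sk-...") with a placeholder.
const redactHeaders = (headers: Record<string, string>): Record<string, string> => ({
  ...headers,
  ...(headers["Authorization"] ? { Authorization: "[REDACTED]" } : {}),
});

// Gemini's direct mode passes the API key as a `key` query parameter, so strip it from the URL.
const redactUrl = (url: string): string => url.replace(/([?&]key=)[^&]+/, "$1[REDACTED]");

const headers = { "Content-Type": "application/json", Authorization: "Bearer sk-example" };
console.log("[OpenaiProvider] POST https://api.openai.com/v1/chat/completions");
console.log("[OpenaiProvider] headers:", redactHeaders(headers));

console.log(
  "[GeminiProvider] POST",
  redactUrl("https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=EXAMPLE_KEY")
);
```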

This feature is intended to help you troubleshoot issues with LLM provider integrations, especially when using proxy mode or when unexpected responses are received. The `debug` option defaults to `false` if not specified.
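
For reference, enabling the flag in application code might look like the sketch below. Only the `debug` option itself is what this PR adds; the import path, the provider class names other than `WebLlmProvider`, and the remaining option names are assumptions and may differ from the actual plugin API:

```typescript
// Hypothetical usage sketch: the `debug` flag is from this PR; the import path,
// option names and model identifiers are assumed for illustration.
import { OpenaiProvider, GeminiProvider, WebLlmProvider } from "@rcb-plugins/llm-connector";

const openai = new OpenaiProvider({
  apiKey: "YOUR_OPENAI_API_KEY",
  model: "gpt-4o-mini",
  debug: true, // prints HTTP method, endpoint URL, redacted headers and request body
});

const gemini = new GeminiProvider({
  apiKey: "YOUR_GEMINI_API_KEY",
  model: "gemini-1.5-flash",
  debug: true, // direct mode URLs are logged with the API key redacted
});

const webllm = new WebLlmProvider({
  model: "Llama-3.1-8B-Instruct-q4f32_1-MLC",
  debug: true, // prints model, system message, engine config, chat completion options and messages
});
```

With `debug: true`, each request a provider makes should show up in the browser console, which is a quick way to confirm whether a proxy is forwarding the expected headers and body.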

Description

Please include a brief summary of the change and include the relevant issue(s).

Closes #(issue)

What change does this PR introduce?

Please select the relevant option(s).

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update (changes to docs/code comments)

What is the proposed approach?

Please give a short overview/explanation on the approach taken to resolve the issue(s).

Checklist:

  • The commit message follows our adopted guidelines
  • Testing has been done for the change(s) added (for bug fixes/features)
  • Relevant comments/docs have been added/updated (for bug fixes/features)

@tjtanjin merged commit 2f46bd8 into main on Jun 9, 2025
1 check passed