Conversation


@dmitchsplunk dmitchsplunk commented Oct 17, 2025

Changes

This PR proposes adding Customer Product Reviews to the Astronomy Shop application, using Generative AI to summarize the reviews for each product. This addition will allow the community to demonstrate OpenTelemetry capabilities for instrumenting Generative AI interactions within the Astronomy Shop application.

Summary of changes:

  • Adds a new Python-based Product Review service with two functions: getProductReviews(productId) and getProductReviewSummary(productId).
  • Introduces a Python-based LLM service that mocks OpenAI’s Chat Completions API to generate AI summaries of product reviews.
  • Stores customer reviews in a MySQL database.
  • Updates the front-end product page with a new Reviews section, including an AI-generated summary and individual reviews.
  • Instruments GenAI interactions using opentelemetry-instrumentation-openai-v2 to capture relevant spans and attributes.
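The two Product Review service functions described above could be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the MySQL store is stubbed with an in-memory dict, the LLM call is replaced with a plain join, and all names except the two function names are assumptions.

```python
# Sketch of the Product Review service's two functions. The MySQL
# access and the LLM summarization call are stubbed out.
from dataclasses import dataclass

@dataclass
class Review:
    product_id: str
    rating: int
    text: str

# Stand-in for the MySQL-backed review store.
_REVIEWS = {
    "L9ECAV7KIM": [
        Review("L9ECAV7KIM", 5, "Great telescope for beginners."),
        Review("L9ECAV7KIM", 3, "Tripod feels a bit flimsy."),
    ],
}

def get_product_reviews(product_id: str) -> list[Review]:
    """Return all stored reviews for a product."""
    return _REVIEWS.get(product_id, [])

def get_product_review_summary(product_id: str) -> str:
    """Summarize a product's reviews (the real service asks the LLM)."""
    reviews = get_product_reviews(product_id)
    if not reviews:
        return "No reviews yet."
    # In the real service this text is sent to the LLM service's
    # Chat Completions endpoint; here we simply join the reviews.
    return " ".join(r.text for r in reviews)
```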

Here's a screenshot of the new Customer Reviews section of the product page:

Astronomy Shop - Product Reviews

And here's an example trace showing the Product Review Summary flow:

Product Review Summary Trace

The LLM service supports two new feature flags:

  • llmInaccurateResponse: when this feature flag is enabled, the LLM service returns an inaccurate product summary for product ID L9ECAV7KIM
  • llmRateLimitError: when this feature flag is enabled, the LLM service intermittently returns a RateLimitError with HTTP status code 429
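The two flags could be honored by the mock LLM service along these lines. This is a hedged sketch, not the PR's implementation: the flag lookup is stubbed with a dict (the real service reads flags from flagd), and the failure probability and error class are assumptions.

```python
# Sketch of how the mock LLM service might honor the two feature flags.
import random

class RateLimitError(Exception):
    """Mimics OpenAI's RateLimitError (HTTP 429)."""
    status_code = 429

# Stand-in for flagd flag evaluation; defaults are off.
FLAGS = {"llmInaccurateResponse": False, "llmRateLimitError": False}

def summarize(product_id: str, reviews: list[str]) -> str:
    if FLAGS["llmRateLimitError"] and random.random() < 0.5:
        # Intermittent failure, matching the flag's description.
        raise RateLimitError("Rate limit exceeded")
    if FLAGS["llmInaccurateResponse"] and product_id == "L9ECAV7KIM":
        return "This product is a kitchen appliance."  # deliberately wrong
    return " ".join(reviews)
```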

If the direction looks good, I’ll follow up with documentation and Helm chart changes. In the meantime, I’d welcome early feedback.

Merge Requirements

For new feature contributions, please make sure you have completed the following
essential items:

  • CHANGELOG.md updated to document new feature additions
  • Appropriate documentation updates in the docs -> Docs PR
  • Appropriate Helm chart updates in the helm-charts -> Helm chart PR

Maintainers will not merge until the above have been completed. If you're unsure
which docs need to be changed, ping
@open-telemetry/demo-approvers.


linux-foundation-easycla bot commented Oct 17, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: dependabot[bot] / name: dependabot[bot] (5953898)
  • ✅ login: dmitchsplunk / name: Derek Mitchell (1df96c3)

@github-actions github-actions bot added the docs-update-required (Requires documentation update) and helm-update-required (Requires an update to the Helm chart when released) labels Oct 17, 2025
@dmitchsplunk dmitchsplunk changed the title Add a new GenAI-powered service for customer product reviews Add a Product Review service with GenAI-powered summaries Oct 17, 2025
@dmitchsplunk dmitchsplunk marked this pull request as ready for review October 17, 2025 22:49
@dmitchsplunk dmitchsplunk requested a review from a team as a code owner October 17, 2025 22:49
@julianocosta89
Member

@dmitchsplunk thx for the reply!
I didn't pay attention to the feature flag; you're right, the default is always off 😅

Regarding the implementation, I now have mixed feelings about it.
Calling the AI every time we open a product page seems unnecessary.
An easy solution would be a button that the user can click to get the summary, but TBH I'd love something more interactive.

IDK if you have ever seen Rufus AI, the AI Amazon has on their product pages.
It's pretty simple.

  • No conversation state management - Each question is independent
  • No message history - Simpler UI and backend
  • Product-scoped context - Only need current product data
  • Cleaner UX - Users see one Q&A at a time

I know this is totally different from your PR, but I'd love to hear your opinion.

@dmitchsplunk
Author

@julianocosta89 this is a great idea. I've started working this into the code, and here's what it looks like so far:

Customer Reviews with Interactive AI

The average score can be calculated without AI, so we can always display that, just above the product reviews themselves.

But now, if the user wants to see a summary of product reviews, they'll have to request it.

Since the LLM will be a mock in most cases, there are only a few questions that it will know how to answer.

Please let me know if this is aligned with your thinking, and I'll continue with the implementation.

@julianocosta89
Member

@dmitchsplunk I loved that!

@dmitchsplunk
Author

Hi @julianocosta89 - I've completed the implementation. When the mock LLM is used, it will respond to the three fixed questions, but will say it doesn't know the answer to ad-hoc questions. When a real LLM is used, it can fetch additional product info and the product reviews, and will answer whatever questions it can with that info. Please try it out when you have a chance.

Product AI Assistant
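The mock LLM's behavior described above (three canned answers, an "I don't know" fallback for everything else) could be sketched like this. The question strings and answers are purely illustrative assumptions, not the PR's actual prompts.

```python
# Sketch of the mock LLM's question handling: a small table of fixed
# Q&A pairs, with a fallback for ad-hoc questions it can't answer.
CANNED_ANSWERS = {
    "what do customers say about this product?": "Most reviewers praise the optics.",
    "what is this product best used for?": "Entry-level stargazing.",
    "is this product good value for money?": "Reviewers generally think so.",
}

def mock_answer(question: str) -> str:
    # Normalize the question before lookup so minor casing differences
    # still hit a canned answer.
    return CANNED_ANSWERS.get(
        question.strip().lower(),
        "I'm sorry, I don't know the answer to that question.",
    )
```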

@julianocosta89
Member

@dmitchsplunk thank you!
I'm flying today, but I'll take a look whenever I have a couple of minutes.

Really nice addition to the demo!
I'm excited to test it out! 🥳

@julianocosta89
Member

@dmitchsplunk I've updated your PR with a fix to work with OpenAI.
I've used Claude to fix it, and I've tested with and without OpenAI.

The Problem

The original code only handled the first tool call (tool_calls[0]), but when OpenAI's API returns multiple tool calls (e.g., both fetch_product_reviews and fetch_product_info), you must provide a response for each tool_call_id. The API was rejecting your request because it was missing responses for the additional tool calls.

The Solution

The updated code now:

  • Processes all tool calls in a loop instead of just the first one
  • Appends the assistant's message once before processing any tool calls
  • Appends a tool response for each tool call with the correct tool_call_id
  • Consolidates the final user prompt to avoid duplication based on which tool was called
  • Makes a single final LLM call with all tool results included

Key Changes

  • Changed from tool_call = tool_calls[0] to for tool_call in tool_calls:
  • Moved messages.append(response_message) outside the loop so it's only added once
  • Each tool call now gets its response appended to the messages array
  • Simplified the flow to eliminate redundant code paths for different tool types
Screenshot 2025-10-29 at 18:12:29
Screenshot 2025-10-29 at 18:14:07
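The key change can be sketched roughly as below, using the message shapes of the OpenAI Chat Completions API. The function name and the `dispatch` table are illustrative assumptions, not the PR's actual code; the point is that the assistant message is appended once, and every tool call gets a response keyed by its `tool_call_id`.

```python
# Illustrative sketch of the fix: respond to every tool call, not just
# tool_calls[0].
import json

def answer_tool_calls(messages, response_message, dispatch):
    """dispatch maps a tool name to a callable taking parsed arguments."""
    # Append the assistant's message once, before any tool responses.
    messages.append(response_message)
    for tool_call in response_message["tool_calls"]:
        fn = dispatch[tool_call["function"]["name"]]
        args = json.loads(tool_call["function"]["arguments"])
        # One tool response per tool call, with the matching id.
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call["id"],
            "content": fn(**args),
        })
    # A single final LLM call would follow, with all tool results included.
    return messages
```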

@julianocosta89
Member

I've also moved the configuration of the OpenAI token to .env.override.
With that, users no longer need to keep commenting and uncommenting code.

If they want to use OpenAI, they can uncomment the override file, and it will take care of overriding the values from the .env file.
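For illustration, the override file might look something like the fragment below. The exact variable names are assumptions (OPENAI_API_KEY is the OpenAI SDK's standard variable; the others are hypothetical), so check the repository's .env for the names it actually uses.

```
# .env.override (illustrative): uncomment to route the LLM service
# to the real OpenAI API instead of the mock.
# OPENAI_API_KEY=sk-...your-key-here...
# LLM_SERVICE_PROVIDER=openai
```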

{
  "$schema": "https://flagd.dev/schema/v0/flags.json",
  "flags": {
    "llmInaccurateResponse": {

@julianocosta89 julianocosta89 Oct 30, 2025

This is only implemented for the fake LLM service.
Not sure if we can force a hallucination with a public-API LLM 🤔

Author

Right, the feature flag is used by the mock LLM service. It may be possible to change the system prompt to tell OpenAI to make something up.

Author

Or, when the feature flag is activated, and a review for that specific product is requested, we could direct the request to the mock LLM service.

@dmitchsplunk
Author

@julianocosta89 thanks for the fix for the multiple tool calls; the questions I was testing with must have resulted in single tool calls only.

I made a small change to the mock LLM service to ensure it still returns product reviews successfully.

@dmitchsplunk
Author

I've also moved the configuration of the OpenAI token to .env.override. With that, users no longer need to keep commenting and uncommenting code.

If they want to use OpenAI, they can uncomment the override file, and it will take care of overriding the values from the .env file.

Is there anything else I need to do? I tried updating .env.override and restarting the app with docker compose, but it didn't pick up the changes.

Please disregard, I got it working with the following command:

docker compose --env-file .env --env-file .env.override up --force-recreate --remove-orphans --detach --build

dependabot bot and others added 2 commits October 30, 2025 16:43
Bumps the actions-production-dependencies group with 1 update in the / directory: [github/codeql-action](https://github.com/github/codeql-action).


Updates `github/codeql-action` from 4.31.0 to 4.31.1
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](github/codeql-action@4e94bd1...5fe9434)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 4.31.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: actions-production-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
@dmitchsplunk
Author

Hi @julianocosta89 , I've created draft PRs for the Helm chart and documentation updates:

open-telemetry/opentelemetry-helm-charts#1920
open-telemetry/opentelemetry.io#8294

Please let me know if there are any other changes you'd like me to make as part of this PR.
