Add a Product Review service with GenAI-powered summaries #2663
Conversation
@dmitchsplunk thanks for the reply! Regarding the implementation, I now have mixed feelings about it. I don't know if you've ever seen Rufus, the AI assistant Amazon has on its product pages.
I know this is totally different from your PR, but I'd love to hear your opinion.
@julianocosta89 this is a great idea. I've started working this into the code, and here's what it looks like so far:
The average score can be calculated without AI, so we can always display that, just above the product reviews themselves. But now, if the user wants to see a summary of product reviews, they'll have to request it. Since the LLM will be a mock in most cases, there are only a few questions that it will know how to answer. Please let me know if this is aligned with your thinking, and I'll continue with the implementation.
@dmitchsplunk I loved that!
Hi @julianocosta89 - I've completed the implementation. When the mock LLM is used, it will respond to the three fixed questions, but will say it doesn't know the answer for ad-hoc questions. When a real LLM is used, it has the ability to fetch additional product info and the product reviews, and will answer whatever questions it can with that info. Please try it out when you have a chance.
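As a rough sketch of the mock behavior described here (the actual questions, answers, and API shape in the PR may differ), the fixed-question handling could look like:

```python
# Hypothetical sketch of the mock LLM's fixed-question behavior; the real
# service's question list and wording may differ.
CANNED_ANSWERS = {
    "What do customers say about this product?": "Most reviewers praise its quality.",
    "What is the most common complaint?": "A few reviews mention slow shipping.",
    "Would customers recommend this product?": "Yes, most reviews are positive.",
}

def mock_llm_answer(question: str) -> str:
    # Ad-hoc questions fall through to a fixed "don't know" response.
    return CANNED_ANSWERS.get(
        question, "I'm sorry, I don't know the answer to that question."
    )
```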
@dmitchsplunk thank you! Really nice addition to the demo!
@dmitchsplunk I've updated your PR with a fix to work with OpenAI.

**The problem:** the original code only handled the first tool call (`tool_calls[0]`), but OpenAI's API can return multiple tool calls in a single response (e.g., both the product info and product reviews tools).

**The solution:** the updated code now handles every tool call returned by the API, not just the first, as sketched below.
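For illustration, a minimal sketch of iterating over all tool calls with the OpenAI Python SDK; the model name and the tool dispatch shown here are assumptions, not the PR's actual code:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_with_tools(messages, tools, available_functions, model="gpt-4o-mini"):
    # First round: let the model decide which tools to call.
    response = client.chat.completions.create(model=model, messages=messages, tools=tools)
    message = response.choices[0].message
    messages.append(message)
    # Iterate over ALL tool calls instead of only tool_calls[0].
    for tool_call in message.tool_calls or []:
        function = available_functions[tool_call.function.name]
        arguments = json.loads(tool_call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": function(**arguments),  # tools are assumed to return strings
        })
    # Second round: compose the final answer from all tool results.
    return client.chat.completions.create(model=model, messages=messages)
```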
I've also moved the OpenAI token configuration into the override file. If users want to use OpenAI, they can uncomment the override file, and that will take care of overriding the values from the .env file.
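For illustration only (the actual file layout and variable names in this PR may differ), the pattern being described is roughly:

```
# .env.override — hypothetical sketch; values here override the defaults
# from .env when the file is active. Uncomment to use a real OpenAI
# backend instead of the mock LLM. OPENAI_API_KEY is the standard
# variable read by the OpenAI SDK; any other names are assumptions.
# OPENAI_API_KEY=<your-api-key>
```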
```json
{
  "$schema": "https://flagd.dev/schema/v0/flags.json",
  "flags": {
    "llmInaccurateResponse": {
```
This is just implemented to work with the fake LLM service.
Not sure if we can force a hallucination with a public LLM API 🤔
Right, the feature flag is used by the mock LLM service. It may be possible to change the system prompt to tell OpenAI to make something up.
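A sketch of that idea, with illustrative prompt wording that is not from the PR:

```python
# Hypothetical system-prompt switch for the llmInaccurateResponse flag.
ACCURATE_PROMPT = "Summarize the customer reviews for this product accurately."
INACCURATE_PROMPT = (
    "Summarize the customer reviews for this product, but deliberately "
    "invent plausible-sounding details that are not in the reviews."
)

def system_prompt(inaccurate_enabled: bool) -> str:
    return INACCURATE_PROMPT if inaccurate_enabled else ACCURATE_PROMPT
```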
Or, when the feature flag is activated, and a review for that specific product is requested, we could direct the request to the mock LLM service.
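A sketch of that routing idea, assuming an OpenFeature client and hypothetical service URLs:

```python
from openfeature import api
from openfeature.evaluation_context import EvaluationContext

MOCK_LLM_URL = "http://mock-llm:8080"      # hypothetical addresses
OPENAI_URL = "https://api.openai.com/v1"

def llm_base_url(product_id: str) -> str:
    client = api.get_client()
    ctx = EvaluationContext(attributes={"productId": product_id})
    # Send the request to the mock LLM while the flag is on, so the
    # canned inaccurate summary is returned even when OpenAI is configured.
    if client.get_boolean_value("llmInaccurateResponse", False, ctx):
        return MOCK_LLM_URL
    return OPENAI_URL
```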
@julianocosta89 thanks for the fix for the multiple tool calls; the questions I was testing with must have resulted in single tool calls only. I made a small change to the mock LLM service to ensure it still returns product reviews successfully.

Is there anything else I need to do? I tried updating .env.override and restarting the app with docker compose, but it didn't pick up the changes.
Please disregard, I got it working with a different docker compose command.
Bumps the actions-production-dependencies group with 1 update in the / directory: [github/codeql-action](https://github.com/github/codeql-action).

Updates `github/codeql-action` from 4.31.0 to 4.31.1
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](github/codeql-action@4e94bd1...5fe9434)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: 4.31.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: actions-production-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Hi @julianocosta89, I've created draft PRs for the Helm chart and documentation updates: open-telemetry/opentelemetry-helm-charts#1920. Please let me know if there are any other changes you'd like me to make as part of this PR.




Changes
This PR proposes adding Customer Product Reviews to the Astronomy Shop application, using Generative AI to summarize the reviews for each product. This addition will allow the community to demonstrate OpenTelemetry capabilities for instrumenting Generative AI interactions within the Astronomy Shop application.
Summary of changes:
Here's a screenshot of the new Customer Reviews section of the product page:
And here's an example trace showing the Product Review Summary flow:
The LLM service supports two new feature flags:

- `llmInaccurateResponse`: when this feature flag is enabled, the LLM service returns an inaccurate product summary for product ID `L9ECAV7KIM`
- `llmRateLimitError`: when this feature flag is enabled, the LLM service intermittently returns a `RateLimitError` with HTTP status code 429 (sketched below)

If the direction looks good, I'll follow up with documentation and Helm chart changes. In the meantime, I'd welcome early feedback.
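As a rough illustration of the `llmRateLimitError` behavior (not the PR's actual code; the error type and failure probability here are assumptions):

```python
import random
from openfeature import api

def maybe_simulate_rate_limit() -> None:
    client = api.get_client()
    # While the flag is enabled, fail intermittently so callers see
    # HTTP 429 responses from the LLM service.
    if client.get_boolean_value("llmRateLimitError", False) and random.random() < 0.5:
        # Stand-in for however the service surfaces a 429 to its clients.
        raise RuntimeError("429 Too Many Requests (RateLimitError)")
```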
Merge Requirements
For new feature contributions, please make sure you have completed the following essential items:

- `CHANGELOG.md` updated to document new feature additions

Maintainers will not merge until the above have been completed. If you're unsure which docs need to be changed, ping the @open-telemetry/demo-approvers.