fix: Incorrect token count mapping in telemetry #2109
Conversation
Hi @hangfei @Jacksunwei, could you have a look? Many thanks! 🙏
Thanks for the contribution!
@tl-nguyen, could you fix the two failed checks?
Force-pushed from 9ae6dc6 to 895e3c8
@hangfei, I fixed them, many thanks!
Force-pushed from 895e3c8 to 7774e4e
Force-pushed from 7774e4e to 4590843
Hi @hangfei, could you take a look at this one again? 🙏 Many thanks!
Merged #2109 from tl-nguyen:fix-telemetry-token-count-mapping (3d043f5). COPYBARA_INTEGRATE_REVIEW=#2109, PiperOrigin-RevId: 786827802.
This change has been accepted as part of c8f8b4a.
Fixes #2105
Problem
When integrating Google ADK with Langfuse using the @observe decorator, the usage details displayed in the Langfuse web UI were incorrect.
The root cause was in the telemetry implementation where
total_token_count was being mapped to gen_ai.usage.output_tokens
instead of candidates_token_count.
Expected mapping:
- candidates_token_count → completion_tokens (output tokens)
- prompt_token_count → prompt_tokens (input tokens)

Previous incorrect mapping:
- total_token_count → completion_tokens (wrong!)
- prompt_token_count → prompt_tokens (correct)
Solution
Updated trace_call_llm function in telemetry.py to use
candidates_token_count for output token tracking instead of
total_token_count, ensuring proper token count reporting to
observability tools like Langfuse.
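The corrected mapping can be sketched as below. This is an illustrative sketch, not the actual trace_call_llm code: the `_Span` stand-in, the `set_usage_attributes` helper, and the `gen_ai.usage.input_tokens` attribute name are assumptions for the example; only `gen_ai.usage.output_tokens` and the token-count field names come from this PR.

```python
from types import SimpleNamespace


class _Span:
    """Minimal stand-in for an OpenTelemetry span (illustration only)."""

    def __init__(self):
        self.attributes = {}

    def set_attribute(self, key, value):
        self.attributes[key] = value


def set_usage_attributes(span, usage_metadata):
    # Input tokens come from prompt_token_count (this mapping was already correct).
    span.set_attribute(
        "gen_ai.usage.input_tokens", usage_metadata.prompt_token_count
    )
    # Output tokens must come from candidates_token_count, not
    # total_token_count (using total_token_count was the bug fixed here).
    span.set_attribute(
        "gen_ai.usage.output_tokens", usage_metadata.candidates_token_count
    )


# Example usage metadata, mimicking the shape of a model response's counts.
usage = SimpleNamespace(
    prompt_token_count=12, candidates_token_count=34, total_token_count=46
)
span = _Span()
set_usage_attributes(span, usage)
```

With the old mapping, a backend like Langfuse would report 46 output tokens for this response instead of 34, inflating completion-token usage by the input-token count.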
Testing plan
- Updated test expectations in test_telemetry.py
- Verified telemetry tests pass
- Manual verification with Langfuse integration
Screenshots

**Before**
<img width="1187" height="329" alt="Screenshot from 2025-07-22 20-20-33" src="https://github.com/user-attachments/assets/ad5fc957-64a2-4524-bd31-0cebb15a5270" />

**After**
<img width="1187" height="329" alt="Screenshot from 2025-07-22 20-21-40" src="https://github.com/user-attachments/assets/3920df2a-be75-47e0-9bd0-f961bb72c838" />
Notes: The screenshot also reveals another problem: the thoughts_token_count field is not mapped at all, but that should be tracked as a separate issue.