
Commit c8f8b4a

tl-nguyen authored and copybara-github committed
fix: Fix incorrect token count mapping in telemetry
Merge #2109

Fixes #2105

## Problem

When integrating Google ADK with Langfuse via the @observe decorator, the usage details displayed in the Langfuse web UI were incorrect. The root cause was in the telemetry implementation, where total_token_count was mapped to gen_ai.usage.output_tokens instead of candidates_token_count.

- Expected mapping:
  - candidates_token_count → completion_tokens (output tokens)
  - prompt_token_count → prompt_tokens (input tokens)
- Previous incorrect mapping:
  - total_token_count → completion_tokens (wrong!)
  - prompt_token_count → prompt_tokens (correct)

## Solution

Updated the trace_call_llm function in telemetry.py to use candidates_token_count instead of total_token_count for output-token tracking, ensuring correct token counts are reported to observability tools such as Langfuse. A sketch of the corrected mapping follows this description.

## Testing plan

- Updated test expectations in test_telemetry.py
- Verified the telemetry tests pass
- Manually verified the Langfuse integration

## Screenshots

**Before**

<img width="1187" height="329" alt="Screenshot from 2025-07-22 20-20-33" src="https://github.com/user-attachments/assets/ad5fc957-64a2-4524-bd31-0cebb15a5270" />

**After**

<img width="1187" height="329" alt="Screenshot from 2025-07-22 20-21-40" src="https://github.com/user-attachments/assets/3920df2a-be75-47e0-9bd0-f961bb72c838" />

_Notes_: The screenshots reveal a second problem: the thoughts_token_count field is not mapped at all, but that is arguably a separate issue.

COPYBARA_INTEGRATE_REVIEW=#2109 from tl-nguyen:fix-telemetry-token-count-mapping 3d043f5

PiperOrigin-RevId: 786827802
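For orientation, here is a minimal, self-contained sketch of the corrected mapping, assuming only the OpenTelemetry Python API. The record_token_usage helper and the SimpleNamespace stand-in for the usage metadata are illustrative, not the actual ADK code; the real change is in the diff below.

```python
from types import SimpleNamespace

from opentelemetry import trace


def record_token_usage(span: trace.Span, usage_metadata) -> None:
  """Illustrative helper: map Gemini usage metadata to gen_ai.usage.* attributes."""
  # Input tokens come from the prompt.
  span.set_attribute(
      'gen_ai.usage.input_tokens', usage_metadata.prompt_token_count
  )
  # Output tokens come from the candidates, NOT the total: total_token_count
  # also includes the prompt tokens, which is exactly what this commit fixes.
  span.set_attribute(
      'gen_ai.usage.output_tokens', usage_metadata.candidates_token_count
  )


if __name__ == '__main__':
  # Without an SDK configured, the API returns a no-op span, so this runs as-is.
  usage = SimpleNamespace(
      prompt_token_count=50, candidates_token_count=50, total_token_count=100
  )
  with trace.get_tracer('demo').start_as_current_span('call_llm') as span:
    record_token_usage(span, usage)
```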
1 parent 11037fc · commit c8f8b4a

File tree

2 files changed: +5 -3 lines changed

src/google/adk/telemetry.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -202,7 +202,7 @@ def trace_call_llm(
   )
   span.set_attribute(
       'gen_ai.usage.output_tokens',
-      llm_response.usage_metadata.total_token_count,
+      llm_response.usage_metadata.candidates_token_count,
   )
 
 
```
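To see in numbers why the old mapping inflated the output count: for a Gemini response, total_token_count aggregates the prompt and candidate tokens (and, on thinking models, thought tokens), so reporting it as gen_ai.usage.output_tokens also counts the input. A quick illustration with the values used in the updated test below:

```python
prompt_token_count = 50      # input tokens
candidates_token_count = 50  # output tokens

# total_token_count covers the whole request/response, not just the output.
total_token_count = prompt_token_count + candidates_token_count  # 100

# Old mapping: gen_ai.usage.output_tokens = total_token_count      -> 100 (wrong)
# New mapping: gen_ai.usage.output_tokens = candidates_token_count -> 50 (correct)
assert total_token_count == 100
```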

tests/unittests/test_telemetry.py

Lines changed: 4 additions & 2 deletions
```diff
@@ -155,15 +155,17 @@ async def test_trace_call_llm_usage_metadata(monkeypatch, mock_span_fixture):
   llm_response = LlmResponse(
       turn_complete=True,
       usage_metadata=types.GenerateContentResponseUsageMetadata(
-          total_token_count=100, prompt_token_count=50
+          total_token_count=100,
+          prompt_token_count=50,
+          candidates_token_count=50,
       ),
   )
   trace_call_llm(invocation_context, 'test_event_id', llm_request, llm_response)
 
   expected_calls = [
       mock.call('gen_ai.system', 'gcp.vertex.agent'),
       mock.call('gen_ai.usage.input_tokens', 50),
-      mock.call('gen_ai.usage.output_tokens', 100),
+      mock.call('gen_ai.usage.output_tokens', 50),
   ]
   assert mock_span_fixture.set_attribute.call_count == 9
   mock_span_fixture.set_attribute.assert_has_calls(
```
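For readers unfamiliar with the mock-span pattern above, here is a standalone unittest.mock sketch of the same assertion style; the plain mock.Mock() is a hypothetical stand-in for the mock_span_fixture used in test_telemetry.py:

```python
from unittest import mock

span = mock.Mock()

# Simulate the two usage attributes the fixed code sets.
span.set_attribute('gen_ai.usage.input_tokens', 50)
span.set_attribute('gen_ai.usage.output_tokens', 50)

# assert_has_calls passes if the expected calls appear, in order,
# among all recorded calls (other calls may be interleaved).
span.set_attribute.assert_has_calls([
    mock.call('gen_ai.usage.input_tokens', 50),
    mock.call('gen_ai.usage.output_tokens', 50),
])
```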
