Fix #3715: Remove unwanted LLM stream chunk printing to stdout #3716
## Summary

Removed the `print()` statement in `event_listener.py` that was causing all LLM streaming chunks to be printed directly to stdout. This addresses issue #3715, where users reported seeing unwanted LLM output text in their console.

Changes:

- Removed `print(content, end="", flush=True)` from the `on_llm_stream_chunk` event handler in `src/crewai/events/event_listener.py`
- Added a test, `test_llm_stream_chunks_do_not_print_to_stdout`, to verify that chunks are emitted as events but not printed to stdout

The streaming chunks are still collected in the internal `text_stream` and emitted as `LLMStreamChunkEvent` events for proper event-driven handling; they are simply no longer printed directly to stdout.
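For context, the behavior after the change looks roughly like the sketch below: chunks are written to an in-memory buffer and nothing is echoed to the console. This is an illustrative, self-contained stand-in, not the actual crewai source; only the `text_stream` name and the removed `print` call come from this PR.

```python
import io


# Illustrative stand-in for the fixed handler (not the real crewai code):
# chunks are still accumulated in an internal buffer, but nothing is printed.
class StreamChunkCollector:
    def __init__(self) -> None:
        self.text_stream = io.StringIO()

    def on_llm_stream_chunk(self, chunk: str) -> None:
        self.text_stream.write(chunk)
        # The former `print(content, end="", flush=True)` call lived here; it
        # has been removed, so stdout stays untouched.


collector = StreamChunkCollector()
for piece in ["Hello", ", ", "world"]:
    collector.on_llm_stream_chunk(piece)
assert collector.text_stream.getvalue() == "Hello, world"  # collected, not printed
```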
## Review & Testing Checklist for Human

- With `stream=True` set on the LLM, verify that streaming still works correctly but chunks aren't printed to stdout (a minimal pytest sketch of this check follows the list)
- Confirm that event listeners subscribed to `LLMStreamChunkEvent` still receive the chunks properly
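One minimal way to express the stdout check is with pytest's `capsys` fixture, as sketched below. This is a hypothetical, self-contained illustration using a stand-in handler, not the test added in this PR, which presumably exercises the real event bus and LLM streaming path.

```python
import io


# Hypothetical pytest sketch: handling stream chunks should write to an
# internal buffer only and leave stdout untouched.
def test_llm_stream_chunks_do_not_print_to_stdout(capsys) -> None:
    text_stream = io.StringIO()

    def on_llm_stream_chunk(chunk: str) -> None:
        text_stream.write(chunk)  # mirrors the fixed handler: collect, don't print

    for chunk in ["stream", "ed ", "output"]:
        on_llm_stream_chunk(chunk)

    captured = capsys.readouterr()
    assert captured.out == ""  # nothing leaked to stdout
    assert text_stream.getvalue() == "streamed output"  # chunks still collected
```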
## Recommended Test Plan

- Create an LLM with streaming enabled: `llm = LLM(model="gpt-4o", stream=True)`
- Register an event listener for `LLMStreamChunkEvent` and verify it still receives chunks (see the sketch after this list)
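The sketch below shows one way this check could be wired up. It is a hedged example: the `crewai.utilities.events` import path, the `(source, event)` handler signature, and `event.chunk` follow crewai's event-listener documentation, but exact paths and attribute names may differ across versions.

```python
from crewai import LLM

# Import path may vary by crewai version; recent releases expose these under
# `crewai.events` instead of `crewai.utilities.events`.
from crewai.utilities.events import crewai_event_bus, LLMStreamChunkEvent

received_chunks: list[str] = []


@crewai_event_bus.on(LLMStreamChunkEvent)
def collect_chunk(source, event) -> None:
    # After the fix, listeners should still receive every chunk as an event.
    received_chunks.append(event.chunk)


llm = LLM(model="gpt-4o", stream=True)
response = llm.call("Say hello in one short sentence.")

# The chunks should reassemble the streamed response, while nothing is echoed
# to stdout during the call itself.
assert received_chunks, "expected LLMStreamChunkEvent chunks to be delivered"
print("collected:", "".join(received_chunks))
print("response: ", response)
```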
## Notes