
Conversation

devin-ai-integration[bot]
Contributor

Fix #3715: Remove unwanted LLM stream chunk printing to stdout

Summary

Removed the print() statement in event_listener.py that was causing all LLM streaming chunks to be printed directly to stdout. This addresses issue #3715 where users reported seeing unwanted LLM output text in their console.

Changes:

  • Removed print(content, end="", flush=True) from the on_llm_stream_chunk event handler in src/crewai/events/event_listener.py
  • Added test test_llm_stream_chunks_do_not_print_to_stdout to verify chunks are emitted as events but not printed to stdout

The streaming chunks are still collected in the internal text_stream and emitted as LLMStreamChunkEvent events for proper event-driven handling; they are simply no longer printed directly to stdout.
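
For orientation, here is a minimal self-contained sketch of the behavior the handler has after the change. This is not the actual event_listener.py code; apart from text_stream and on_llm_stream_chunk, the names are simplified stand-ins based on the description above.

    from io import StringIO

    class StreamChunkCollector:
        """Simplified stand-in for the listener behavior described above."""

        def __init__(self):
            # Chunks are still accumulated internally for later use.
            self.text_stream = StringIO()

        def on_llm_stream_chunk(self, content: str) -> None:
            self.text_stream.write(content)
            # The handler previously also did:
            #   print(content, end="", flush=True)
            # That is the line this PR removes, so chunks no longer reach stdout.

    collector = StreamChunkCollector()
    for chunk in ["Hello", ", ", "world"]:
        collector.on_llm_stream_chunk(chunk)
    assert collector.text_stream.getvalue() == "Hello, world"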

Review & Testing Checklist for Human

  • Test with actual LLM streaming: Run a crew with stream=True on the LLM and verify that streaming still works correctly but chunks aren't printed to stdout
  • Verify event handlers still receive chunks: Confirm that any custom event handlers listening for LLMStreamChunkEvent still receive the chunks properly
  • Check console output: Ensure that final agent outputs and task results are still displayed correctly (only the streaming chunks should be suppressed)

Recommended Test Plan

  1. Create a simple crew with streaming enabled: llm = LLM(model="gpt-4o", stream=True)
  2. Run the crew and observe the console output: you should see normal agent/task messages but not the individual streaming chunks
  3. Add a custom event handler for LLMStreamChunkEvent and verify it still receives chunks (a sketch combining these steps follows below)
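
A sketch combining these steps. The event-bus import path and the chunk attribute name are assumptions based on this PR's description and may differ across crewai versions; a valid OpenAI API key is also assumed.

    from crewai import Agent, Crew, Task, LLM
    # Import path is an assumption; in some versions the event bus lives under
    # crewai.utilities.events instead of crewai.events.
    from crewai.events import crewai_event_bus, LLMStreamChunkEvent

    received_chunks = []

    @crewai_event_bus.on(LLMStreamChunkEvent)
    def collect_chunk(source, event):
        # Custom handler: after the fix, chunks should still arrive here.
        received_chunks.append(event.chunk)

    llm = LLM(model="gpt-4o", stream=True)
    agent = Agent(
        role="Writer",
        goal="Write a single sentence",
        backstory="A minimal test agent",
        llm=llm,
    )
    task = Task(
        description="Say hello in one sentence.",
        expected_output="One sentence",
        agent=agent,
    )
    crew = Crew(agents=[agent], tasks=[task])

    result = crew.kickoff()
    # Expected: the usual agent/task console output, no raw token stream in
    # stdout, and received_chunks is non-empty.
    print(f"\nCollected {len(received_chunks)} stream chunks")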

Notes

  • The fix is minimal and low-risk: a single-line removal
  • The test manually emits events rather than calling a real LLM, to avoid network dependencies in unit tests; a sketch of that approach follows these notes
  • I couldn't fully run the existing VCR-based streaming tests due to cassette/network issues, so end-to-end verification with actual streaming is recommended
  • Link to Devin run: https://app.devin.ai/sessions/905c78a2d39a42ee956423514c83194f
  • Requested by: João (joao@crewai.com)
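
For reference, here is a sketch of the kind of test described in the note about manually emitting events. The actual test added in this PR may be structured differently; the import path, the emit() call shape, and the event fields are assumptions.

    import io
    from contextlib import redirect_stdout

    # Import path is an assumption and may differ across crewai versions.
    from crewai.events import crewai_event_bus, LLMStreamChunkEvent

    def test_llm_stream_chunks_do_not_print_to_stdout():
        received = []

        @crewai_event_bus.on(LLMStreamChunkEvent)
        def handler(source, event):
            received.append(event.chunk)

        buffer = io.StringIO()
        with redirect_stdout(buffer):
            # Emit chunks manually instead of calling a real LLM, so the test
            # needs no network access.
            for chunk in ["Hello", ", ", "world"]:
                crewai_event_bus.emit(None, LLMStreamChunkEvent(chunk=chunk))

        # Handlers still receive every chunk...
        assert received == ["Hello", ", ", "world"]
        # ...but the listener no longer writes them to stdout.
        assert buffer.getvalue() == ""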

- Removed print() statement in event_listener.py that was printing all LLM streaming chunks to stdout
- The print() on line 386 was causing all text chunks from LLM responses to be displayed in stdout
- Added test to verify stream chunks are emitted as events but not printed to stdout
- Streaming chunks should only be handled by event handlers, not printed directly

Fixes #3715

Co-Authored-By: João <joao@crewai.com>
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring


Development

Successfully merging this pull request may close these issues.

[BUG] The entire LLM output is displayed in stdout
