@Pavanmanikanta98 commented on Nov 15, 2025:

Issue #2992: Add validation in UserPromptNode to raise UserError if the message history starts with a ModelResponse, ensuring conversations begin with a user message (ModelRequest).

Include comprehensive tests for invalid history, valid history, empty history, multiple messages, and validation after message cleaning, to prevent issues with malformed conversation logs.
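For reviewers, here is a minimal sketch of the shape of the check. It is illustrative only, not the exact PR diff: the helper name is hypothetical, and the PR places this logic inside UserPromptNode rather than in a standalone function.

# Illustrative sketch; `validate_message_history` is a hypothetical helper
# name — the real check lives inside UserPromptNode.
from pydantic_ai.exceptions import UserError
from pydantic_ai.messages import ModelMessage, ModelResponse

def validate_message_history(history: list[ModelMessage]) -> None:
    # A conversation must open with a user turn (ModelRequest), so reject
    # any history whose first entry is a model turn (ModelResponse).
    if history and isinstance(history[0], ModelResponse):
        raise UserError(
            'Message history cannot start with a `ModelResponse`. '
            'Conversations must begin with a user message.'
        )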

@Pavanmanikanta98 force-pushed the fix/issue-2992-disallow-initial-model-response branch from aaee6e4 to 6ef2b8c on November 15, 2025 at 18:07.
@Pavanmanikanta98 (Author) commented:

Note on test_outlines.py Changes

During implementation, I discovered that tests/models/test_outlines.py::test_input_format was failing in CI.

Root Cause

The test was using message histories that started with ModelResponse:

# Before:
tool_call_message_history: list[ModelMessage] = [
    ModelResponse(parts=[ToolCallPart(...)]),  # Invalid: starts with ModelResponse
    ModelRequest(parts=[ToolReturnPart(...)]),
]

This worked previously because there was no validation at the framework level. The test's purpose was to verify that OutlinesModel rejects tool calls, not to test message history validation.

However, with the new validation in place, the test now fails early with:

UserError: Message history cannot start with a `ModelResponse`. Conversations must begin with a user message.

This prevented the test from reaching the OutlinesModel code it was meant to test.

Solution

I updated the test to use valid message history structure by adding an initial ModelRequest:

# After:
tool_call_message_history: list[ModelMessage] = [
    ModelRequest(parts=[UserPromptPart(content='some user prompt')]),  # Valid start
    ModelResponse(parts=[ToolCallPart(...)]),
    ModelRequest(parts=[ToolReturnPart(...)]),
]

This maintains the test's original purpose (verifying that OutlinesModel rejects tool calls) while conforming to the message history structure that LLM providers expect.

Rationale

This change makes the test more correct:

  • ✅ Real conversations must start with a user message
  • ✅ All major LLM providers (Bedrock, OpenAI, Anthropic) require this
  • ✅ The framework should validate this before model-specific code runs
  • ✅ Tests should use valid data unless specifically testing validation

The same fix was applied to the file_part_message_history test case for consistency.
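For context, here is a sketch of what one of the new validation tests could look like. The test name and the TestModel setup are illustrative, not the exact tests added in this PR:

# Illustrative test sketch, using pydantic-ai's TestModel so no real
# provider is called; the assertion matches the new UserError message.
import pytest

from pydantic_ai import Agent
from pydantic_ai.exceptions import UserError
from pydantic_ai.messages import ModelMessage, ModelResponse, TextPart
from pydantic_ai.models.test import TestModel

def test_history_cannot_start_with_model_response() -> None:
    agent = Agent(TestModel())
    # Invalid: the first (and only) entry is a model turn, not a user turn.
    invalid_history: list[ModelMessage] = [
        ModelResponse(parts=[TextPart(content='hello')]),
    ]
    with pytest.raises(UserError, match='cannot start with a `ModelResponse`'):
        agent.run_sync('What now?', message_history=invalid_history)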


Let me know if you'd prefer a different approach for the test data (e.g., more realistic user prompts like "What is my location?" for the tool call test).
