Closed
Labels
bug (Something isn't working)
Description
Bug Description
ValueError: The decoder prompt (length 36842) is longer than the maximum model length of 20480.
raised when running the Alfworld script. I think a similar issue has been reported before, but I am still confused. Could the developers kindly advise on how to handle it?
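One possible workaround is raising the context limit so it covers the longest decoder prompt. This is a hypothetical fragment of examples/grpo_alfworld/alfworld.yaml; the exact key path and availability of a `max_model_len` field are assumptions based on the error message, not confirmed from the Trinity-RFT config schema:

```yaml
# Hypothetical config fragment -- key names are assumptions.
model:
  model_path: /path/to/model
  max_model_len: 40960  # must be at least the longest prompt (36842 tokens in this run)
```

Note that raising the limit only postpones the problem if rollouts can grow without bound; pruning over-length experiences may still be needed.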
Environment Information
- Operating System: [e.g., Ubuntu 20.04]
- Python Version: 3.10.18
- GPU: 4xH20
- CUDA Version: 12.6
- Installation Method: clone, then pip
- Trinity-RFT Version: 0.3.1.dev0
- Other relevant dependencies or configurations you think might be helpful
Steps to Reproduce
trinity run --config examples/grpo_alfworld/alfworld.yaml
Expected Behavior
I think these overly long 'experiences' should be pruned or discarded, but I am not entirely sure.
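The pruning idea above could be sketched as a filter on token counts before logprobs are requested. This is a minimal illustration, not Trinity-RFT's actual API: the `Experience` class and `prune_overlong`/`truncate_overlong` helpers are hypothetical names introduced here.

```python
# Hypothetical sketch: drop or truncate experiences whose token count
# exceeds the model context before requesting logprobs from vLLM.
from dataclasses import dataclass
from typing import List


@dataclass
class Experience:
    token_ids: List[int]  # full prompt + response token ids


def prune_overlong(experiences: List[Experience], max_model_len: int) -> List[Experience]:
    """Discard experiences that do not fit in the model context window."""
    return [e for e in experiences if len(e.token_ids) <= max_model_len]


def truncate_overlong(experiences: List[Experience], max_model_len: int) -> List[Experience]:
    """Alternative: keep every experience but cut it to the context window."""
    return [Experience(e.token_ids[:max_model_len]) for e in experiences]
```

Whether discarding or truncating is appropriate depends on the training setup: truncation keeps sample count stable but can cut off the final reward-bearing turns of a trajectory.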
Actual Behavior
A ValueError is raised during training.
Log Information
ERROR 09-23 19:48:56 [scheduler.py:90] File "/root/miniconda3/envs/trinity/lib/python3.10/concurrent/futures/_base.py", line 458, in result
ERROR 09-23 19:48:56 [scheduler.py:90] return self.__get_result()
ERROR 09-23 19:48:56 [scheduler.py:90] File "/root/miniconda3/envs/trinity/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
ERROR 09-23 19:48:56 [scheduler.py:90] raise self._exception
ERROR 09-23 19:48:56 [scheduler.py:90] File "/root/autodl-tmp/Trinity-RFT/trinity/common/models/vllm_model.py", line 347, in convert_messages_to_experience
ERROR 09-23 19:48:56 [scheduler.py:90] logprobs = await self.logprobs(token_ids=token_ids.tolist()) # (seq_length - 1,)
ERROR 09-23 19:48:56 [scheduler.py:90] File "/root/autodl-tmp/Trinity-RFT/trinity/common/models/vllm_model.py", line 302, in logprobs
ERROR 09-23 19:48:56 [scheduler.py:90] output = await self._generate_internal(
ERROR 09-23 19:48:56 [scheduler.py:90] File "/root/autodl-tmp/Trinity-RFT/trinity/common/models/vllm_model.py", line 323, in _generate_internal
ERROR 09-23 19:48:56 [scheduler.py:90] async for request_output in stream:
ERROR 09-23 19:48:56 [scheduler.py:90] File "/root/miniconda3/envs/trinity/lib/python3.10/site-packages/vllm/v1/engine/async_llm.py", line 307, in generate
ERROR 09-23 19:48:56 [scheduler.py:90] q = await self.add_request(
ERROR 09-23 19:48:56 [scheduler.py:90] File "/root/miniconda3/envs/trinity/lib/python3.10/site-packages/vllm/v1/engine/async_llm.py", line 237, in add_request
ERROR 09-23 19:48:56 [scheduler.py:90] prompt_str, request = self.processor.process_inputs(
ERROR 09-23 19:48:56 [scheduler.py:90] File "/root/miniconda3/envs/trinity/lib/python3.10/site-packages/vllm/v1/engine/processor.py", line 266, in process_inputs
ERROR 09-23 19:48:56 [scheduler.py:90] self._validate_model_inputs(processed_inputs, lora_request)
ERROR 09-23 19:48:56 [scheduler.py:90] File "/root/miniconda3/envs/trinity/lib/python3.10/site-packages/vllm/v1/engine/processor.py", line 365, in _validate_model_inputs
ERROR 09-23 19:48:56 [scheduler.py:90] self._validate_model_input(decoder_inputs,
ERROR 09-23 19:48:56 [scheduler.py:90] File "/root/miniconda3/envs/trinity/lib/python3.10/site-packages/vllm/v1/engine/processor.py", line 418, in _validate_model_input
ERROR 09-23 19:48:56 [scheduler.py:90] raise ValueError(
ERROR 09-23 19:48:56 [scheduler.py:90] ValueError: The decoder prompt (length 36842) is longer than the maximum model length of 20480. Make sure that `max_model_len` is no smaller than the number of text tokens.