Description
When I run ChatDev with the following command:

```
python3 run.py --task "[make a simple Snake Game game]" --name "[Snake_Game]"
```

I get this error:
```
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/Users/yixuan/Documents/GitHub/ChatDev/camel/utils.py", line 160, in wrapper
    return func(self, *args, **kwargs)
  File "/Users/yixuan/Documents/GitHub/ChatDev/camel/agents/chat_agent.py", line 239, in step
    response = self.model_backend.run(messages=openai_messages)
  File "/Users/yixuan/Documents/GitHub/ChatDev/camel/model_backend.py", line 101, in run
    response = client.chat.completions.create(*args, **kwargs, model=self.model_type.value,
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/openai/resources/chat/completions/completions.py", line 1147, in create
    return self._post(
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/openai/_base_client.py", line 1047, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'max_tokens is too large: 14817. This model supports at most 4096 completion tokens, whereas you provided 14817.', 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'invalid_value'}}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/yixuan/Documents/GitHub/ChatDev/run.py", line 136, in <module>
    chat_chain.execute_chain()
  File "/Users/yixuan/Documents/GitHub/ChatDev/chatdev/chat_chain.py", line 168, in execute_chain
    self.execute_step(phase_item)
  File "/Users/yixuan/Documents/GitHub/ChatDev/chatdev/chat_chain.py", line 138, in execute_step
    self.chat_env = self.phases[phase].execute(self.chat_env,
  File "/Users/yixuan/Documents/GitHub/ChatDev/chatdev/phase.py", line 295, in execute
    self.chatting(chat_env=chat_env,
  File "/Users/yixuan/Documents/GitHub/ChatDev/chatdev/utils.py", line 79, in wrapper
    return func(*args, **kwargs)
  File "/Users/yixuan/Documents/GitHub/ChatDev/chatdev/phase.py", line 133, in chatting
    assistant_response, user_response = role_play_session.step(input_user_msg, chat_turn_limit == 1)
  File "/Users/yixuan/Documents/GitHub/ChatDev/camel/agents/role_playing.py", line 247, in step
    assistant_response = self.assistant_agent.step(user_msg_rst)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7f9582503820 state=finished raised BadRequestError>]
```
Expected Behavior
The program should cap max_tokens at the model's completion-token limit (4096 for the model in use, according to the error message).
Actual Behavior
It sends max_tokens=14817, which exceeds the model's 4096 completion-token maximum, so the OpenAI API returns a 400 BadRequestError; tenacity then retries until it gives up with a RetryError.
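A minimal sketch of the kind of fix this would need, assuming ChatDev derives max_tokens as "context window minus prompt tokens" somewhere around camel/model_backend.py (the helper name, the default cap, and the call site below are illustrative assumptions, not ChatDev's actual code):

```python
# Clamp a computed max_tokens value to the model's completion-token
# limit before calling the OpenAI API. The 4096 default cap and the
# function name are assumptions for illustration only.

def clamp_max_tokens(remaining_tokens: int, completion_cap: int = 4096) -> int:
    """Return a max_tokens value the model will accept.

    remaining_tokens: context window minus tokens consumed by the
        prompt (roughly how the rejected 14817 was produced).
    completion_cap: the model's completion-token limit.
    """
    return max(1, min(remaining_tokens, completion_cap))


# With the value from the traceback, the request would be clamped
# to the cap instead of being rejected with a 400:
print(clamp_max_tokens(14817))  # 4096
print(clamp_max_tokens(500))    # 500 (values under the cap pass through)
```

Applying something like this where the kwargs for client.chat.completions.create are assembled would prevent the BadRequestError, regardless of which model's cap is configured.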