[Bugfix] Parse gpt-oss refusals w/ newer openai-harmony #28303
Conversation
Force-pushed from d3005d2 to 06413e0
Code Review
This pull request effectively addresses a parsing issue with gpt-oss model outputs by upgrading the openai-harmony library and enabling its non-strict parsing mode. The change is well-justified, with the core logic modification being minimal and correctly targeted. The addition of a new test case, test_malformed_refusal_message, is excellent as it specifically validates the fix for the described malformed refusal messages. The dependency updates in requirements/common.txt and requirements/test.txt are consistent with the required library version. Overall, this is a solid bugfix that improves the robustness of handling gpt-oss outputs.
Thanks @bbrowning! Might this help with some of the potentially flaky tests such as https://buildkite.com/vllm/ci/builds/38051#019a6063-c042-4667-bc3f-859390c7272d?

@njhill I don't think this will do anything in that particular case, as on the surface that looks like a role of
Some random thoughts: I think I do lean toward not having this enabled by default at first, or if we do have it enabled by default, we need some visibility into when it kicks in.

For example, if I deployed this, I'd have no way of understanding the impact of the changes, such as how many requests it's "saving" from erroring. I also think there is a difference between the impact of this change on the single-turn Responses API vs. the tool-calling while loop. In single-turn Responses API usage there is a better ability to "repair" harmony messages when they are translated to response items and back into harmony messages, but in the tool-calling loop the conversation stays as harmony messages the entire time, so these issues may compound silently.
Purpose
The output generated by gpt-oss models does not always strictly follow its expected harmony chat template format. This commonly - but not exclusively - happens when gpt-oss-120b generates refusals for content that violates its built-in safety guidelines.
To fix this, a non-strict mode was added to the openai-harmony library to allow attempted recovery of malformed message headers in the model output, such as a missing `<|message|>` special token before the assistant text.

This will resolve some cases where the error `openai_harmony.HarmonyError: unexpected tokens remaining in message header` was previously thrown. It will not resolve all of them, as not every malformed message output can be recovered. Other ongoing work around using structured output for the Harmony format can help prevent these kinds of problems in the first place, once that work lands and in the cases where the user and/or server decide to enable it.

I believe it should be safe to enable this non-strict mode by default in vLLM, as the code path it enables in the openai-harmony library only gets triggered once malformed output has already been detected. So there shouldn't be any performance penalty in the common case. And, in the event that the malformed content cannot be properly recovered, the openai-harmony library will still end up throwing an error.
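For illustration only, here is a minimal sketch (not code from this PR) contrasting a well-formed Harmony assistant completion with the kind of malformed refusal header described above; the exact token layout of real gpt-oss output can differ (for example, an analysis-channel message usually precedes the final one):

```python
# Well-formed Harmony completion: the <|message|> token separates the header
# (channel, etc.) from the assistant text.
WELL_FORMED = (
    "<|channel|>final<|message|>"
    "I'm sorry, but I can't help with that.<|return|>"
)

# Malformed completion seen in practice: the refusal text follows the header
# directly, with no <|message|> token. Strict parsing rejects this with
# "openai_harmony.HarmonyError: unexpected tokens remaining in message header";
# the newer openai-harmony can instead attempt to recover it, and still raises
# an error if recovery fails.
MALFORMED = (
    "<|channel|>final "
    "I'm sorry, but I can't help with that.<|return|>"
)
```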
This is related to #23567 as well as openai/harmony#80.
Test Plan
I added a new test to verify the refusal parsing in `test_harmony_utils.py`. I also ran `test_response_api_with_harmony.py` locally, but skipped the code interpreter tests because my dev machine is not set up properly to run that particular one.

Test Result
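For readers unfamiliar with the new test, here is a rough, hypothetical sketch of the shape a regression test like `test_malformed_refusal_message` could take. The `encode(..., allowed_special=...)` step, whether recovery applies without an explicit opt-in, and the assertion details are assumptions for illustration, not the actual code added in `test_harmony_utils.py`:

```python
from openai_harmony import HarmonyEncodingName, Role, load_harmony_encoding

# A refusal whose header is missing the <|message|> token (see the PR description).
MALFORMED_REFUSAL = (
    "<|channel|>final I'm sorry, but I can't help with that.<|return|>"
)


def test_malformed_refusal_message():
    enc = load_harmony_encoding(HarmonyEncodingName.HARMONY_GPT_OSS)
    # Assumption: the encoding can tokenize text containing Harmony special tokens.
    token_ids = enc.encode(MALFORMED_REFUSAL, allowed_special="all")
    # With the newer openai-harmony and non-strict parsing in effect, this is
    # expected to recover a single final-channel message rather than raising
    # HarmonyError as it did before.
    messages = enc.parse_messages_from_completion_tokens(token_ids, Role.ASSISTANT)
    assert len(messages) == 1
    assert messages[0].channel == "final"
```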