Conversation

@Alexei-V-Ivanov-AMD (Collaborator) commented Nov 19, 2025

Updating the mirror of test-amd.yaml as of 2025-11-18

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
@mergify mergify bot added ci/build rocm Related to AMD ROCm labels Nov 19, 2025
@mergify bot commented Nov 19, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @Alexei-V-Ivanov-AMD.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Nov 19, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request updates the CI configuration in .buildkite/test-amd.yaml. The changes involve adding new tests, updating test filters, and disabling some tests, likely to sync with the current state of the test suite. My review found one issue: an outdated comment that incorrectly states a test is skipped, which could cause confusion. I've provided a comment to address this.

commands: # LMEval
# Transcription WER check is skipped because encoder-decoder models are not supported on ROCm, see https://github.com/vllm-project/vllm/issues/27442
- pytest -s entrypoints/openai/correctness/ --ignore entrypoints/openai/correctness/test_transcription_api_correctness.py
- pytest -s entrypoints/openai/correctness/
gemini-code-assist bot (Contributor) commented, severity: high

With the removal of the --ignore flag, test_transcription_api_correctness.py will now be executed. However, the comment on the preceding line (713) which states that this test is skipped is now outdated. This can be misleading for developers. Please remove the outdated comment to maintain clarity in the CI configuration.

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 697 to +714
- vllm/model_executor/models/whisper.py
commands: # LMEval
# Transcription WER check is skipped because encoder-decoder models are not supported on ROCm, see https://github.com/vllm-project/vllm/issues/27442
- pytest -s entrypoints/openai/correctness/ --ignore entrypoints/openai/correctness/test_transcription_api_correctness.py
- pytest -s entrypoints/openai/correctness/


P1: Re-enable ROCm-incompatible transcription correctness suite

The OpenAI API correctness step now runs the entire entrypoints/openai/correctness directory, but the comment above still states that transcription WER checks must be skipped because encoder–decoder models are unsupported on ROCm. The previously ignored tests/entrypoints/openai/correctness/test_transcription_api_correctness.py contains an unconditional integration test that downloads audio datasets and depends on packages like librosa and evaluate without any ROCm guard. Running it on the AMD agents will likely fail due to missing dependencies or unsupported model execution. If the ROCm limitation still applies, the command should continue to ignore that file or add an explicit skip inside the test.


- pytest -s entrypoints/openai/correctness/

- label: OpenAI-Compatible Tool Use # 23 min
timeout_in_minutes: 35
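If the ROCm limitation still applies, the simpler of the two options raised in the review is to keep the `--ignore` (and the explanatory comment) in `.buildkite/test-amd.yaml`, restoring the pre-change behaviour. A minimal sketch, mirroring the lines the diff removes:

```yaml
commands: # LMEval
  # Transcription WER check is skipped because encoder-decoder models are
  # not supported on ROCm, see https://github.com/vllm-project/vllm/issues/27442
  - pytest -s entrypoints/openai/correctness/ --ignore entrypoints/openai/correctness/test_transcription_api_correctness.py
```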
Collaborator commented

we need to ignore this since encoder-decoder models are not supported on ROCm

Collaborator (Author) commented

There is an ongoing effort to support OpenAI Whisper specifically (e.g. #28376),
for which Word Error Rate (WER) is the predominant quality metric.

Collaborator commented

@Alexei-V-Ivanov-AMD Could we disable the test first? That PR is still under review, and it will take some time.
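The other option raised in the review is an explicit skip inside the test module rather than an `--ignore` in the YAML. A minimal sketch of such a guard, assuming a ROCm build of PyTorch exposes a non-None `torch.version.hip`; the `rocm_build_detected` helper and the pytest wiring shown in the comment are illustrative, not code from this PR:

```python
import importlib.util


def rocm_build_detected() -> bool:
    """Best-effort check for a ROCm build of PyTorch.

    ROCm builds of torch report a non-None ``torch.version.hip``;
    CUDA and CPU-only builds report None.
    """
    if importlib.util.find_spec("torch") is None:
        # No torch installed at all; assume this is not a ROCm agent.
        return False
    import torch

    return getattr(torch.version, "hip", None) is not None


# Inside test_transcription_api_correctness.py this could gate the whole
# module, e.g.:
#
#   pytestmark = pytest.mark.skipif(
#       rocm_build_detected(),
#       reason="encoder-decoder models are not supported on ROCm "
#              "(https://github.com/vllm-project/vllm/issues/27442)",
#   )
```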

Alexei-V-Ivanov-AMD and others added 2 commits November 20, 2025 06:14
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
@mergify mergify bot removed the needs-rebase label Nov 20, 2025
@tjtanaa tjtanaa added the ready ONLY add when PR is ready to merge/full CI is needed label Nov 20, 2025
@gshtras gshtras merged commit 2292438 into vllm-project:main Nov 20, 2025
16 checks passed
LuminolT pushed a commit to LuminolT/vllm that referenced this pull request Nov 21, 2025
…9016)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: LuminolT <lumischen01@gmail.com>
lpapavassiliou pushed a commit to lpapavassiliou/vllm that referenced this pull request Nov 24, 2025
…9016)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
RunkaiTao pushed a commit to RunkaiTao/vllm that referenced this pull request Nov 24, 2025
…9016)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: Runkai Tao <rt572@physics.rutgers.edu>
bringlein pushed a commit to bringlein/vllm that referenced this pull request Nov 26, 2025
…9016)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025
…9016)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
kitaekatt pushed a commit to kitaekatt/vllm that referenced this pull request Dec 1, 2025
…9016)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
charlotte12l pushed a commit to charlotte12l/vllm that referenced this pull request Dec 5, 2025
…9016)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: Xingyu Liu <charlotteliu12x@gmail.com>
Labels: ci/build, ready (ONLY add when PR is ready to merge/full CI is needed), rocm (Related to AMD ROCm)

4 participants