
Conversation

@KyleShao1016
Contributor

Implement NVIDIA's COSMOS 2.5 Diffusion Transformer (DiT) architecture in FastVideo
with verified numerical accuracy against the official cosmos-predict2.5 reference.

Key changes:

  1. Model Architecture (fastvideo/models/dits/cosmos2_5.py):

    • Implement Cosmos25Transformer3DModel with 28 transformer blocks
    • Add AdaLN-LoRA conditioning with proper modulation parameter computation
    • Implement 3D RoPE with NTK-aware extrapolation for spatiotemporal encoding
    • Add QK normalization using RMSNorm for improved training stability
    • Support optional cross-attention projection for high-dimensional embeddings (Qwen 7B)
    • Add learnable positional embeddings as optional feature
    • Implement proper patch embedding/unpatchification for video processing (hedged code sketches for several of these components follow this list)
  2. Configuration (fastvideo/configs/models/dits/cosmos2_5.py):

    • Add Cosmos25ArchConfig with complete parameter mapping for checkpoint loading
    • Map official checkpoint structure (net.blocks.) to FastVideo structure (transformer_blocks.)
    • Support AdaLN modulation layers at block level (adaln_modulation_self_attn/cross_attn/mlp)
    • Configure cross-attention projection for 100,352-dim Qwen embeddings
    • Add LoRA parameter mappings for fine-tuning support
  3. Tests (fastvideo/tests/transformers/test_cosmos2_5.py):

    • Add comprehensive parity test against official MinimalV1LVGDiT reference
    • Test both single-frame (image) and multi-frame (video) generation
    • Verify checkpoint loading and weight consistency
    • Achieve 0.173% relative difference in bfloat16 (excellent parity)
    • Test video2world conditioning with condition masks
  4. Registry (fastvideo/configs/models/dits/__init__.py):

    • Register Cosmos25VideoConfig for model instantiation
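
To make the list above more concrete, here is a minimal sketch of AdaLN-style modulation with a low-rank (LoRA) projection, in the spirit of the AdaLN-LoRA conditioning mentioned above. All class and parameter names below are illustrative assumptions, not the actual FastVideo implementation.

```python
import torch
from torch import nn


class AdaLNLoRAModulationSketch(nn.Module):
    """Illustrative AdaLN-LoRA-style modulation: a low-rank MLP maps the
    conditioning embedding to per-block shift/scale/gate parameters."""

    def __init__(self, hidden_dim: int, cond_dim: int, rank: int = 16) -> None:
        super().__init__()
        self.down = nn.Linear(cond_dim, rank, bias=False)     # LoRA-style down-projection
        self.up = nn.Linear(rank, 3 * hidden_dim, bias=True)  # produces shift, scale, gate
        self.norm = nn.LayerNorm(hidden_dim, elementwise_affine=False)

    def forward(self, hidden_states: torch.Tensor, cond: torch.Tensor):
        # hidden_states: [B, S, hidden_dim], cond: [B, cond_dim]
        shift, scale, gate = self.up(torch.nn.functional.silu(self.down(cond))).chunk(3, dim=-1)
        modulated = self.norm(hidden_states) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        return modulated, gate.unsqueeze(1)  # the caller applies the gate to the branch output
```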
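For the 3D RoPE with NTK-aware extrapolation, the core idea is that the temporal, height, and width axes each get their own rotary frequencies, and the rotary base is rescaled when the target extent exceeds the training extent. Below is a simplified single-axis sketch using one common NTK-aware formulation; the exact constants and axis split used by the model may differ.

```python
import torch


def ntk_aware_rope_freqs(axis_dim: int, n_positions: int, n_train_positions: int,
                         base: float = 10000.0) -> torch.Tensor:
    """Rotary angles for one axis (time, height, or width) of a 3D RoPE (sketch)."""
    scale = max(n_positions / n_train_positions, 1.0)
    # NTK-aware trick: stretch the base so low frequencies interpolate rather than extrapolate.
    adjusted_base = base * scale ** (axis_dim / (axis_dim - 2))
    inv_freq = 1.0 / (adjusted_base ** (torch.arange(0, axis_dim, 2).float() / axis_dim))
    positions = torch.arange(n_positions).float()
    return torch.outer(positions, inv_freq)  # [n_positions, axis_dim // 2]
```

In the 3D case, the per-head channel dimension is typically split into three chunks so that each axis rotates its own slice of the query/key vectors.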
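QK normalization applies RMSNorm to the query and key heads before the attention product, which keeps attention logits well-scaled in bfloat16. A hedged sketch follows (using torch.nn.RMSNorm, available in PyTorch 2.4+); the real block also applies RoPE and the AdaLN gates, which are omitted here.

```python
import torch
from torch import nn
import torch.nn.functional as F


class QKNormSelfAttentionSketch(nn.Module):
    def __init__(self, dim: int, num_heads: int) -> None:
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.to_qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.q_norm = nn.RMSNorm(self.head_dim, eps=1e-6)  # QK normalization
        self.k_norm = nn.RMSNorm(self.head_dim, eps=1e-6)
        self.to_out = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, s, _ = x.shape
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # [B, S, H*D] -> [B, H, S, D]
        q, k, v = (t.reshape(b, s, self.num_heads, self.head_dim).transpose(1, 2) for t in (q, k, v))
        q, k = self.q_norm(q), self.k_norm(k)
        out = F.scaled_dot_product_attention(q, k, v)
        return self.to_out(out.transpose(1, 2).reshape(b, s, -1))
```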
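The checkpoint mapping in the Configuration item boils down to renaming parameter keys from the official layout to FastVideo's module layout. A hedged sketch using only the one rename stated in this PR (net.blocks. → transformer_blocks.); a real loader would carry additional per-layer rules.

```python
import torch


def remap_cosmos_checkpoint_keys(state_dict: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
    """Rename official Cosmos 2.5 checkpoint keys to the FastVideo layout (sketch)."""
    remapped = {}
    for key, tensor in state_dict.items():
        new_key = key
        if new_key.startswith("net.blocks."):
            new_key = "transformer_blocks." + new_key[len("net.blocks."):]
        remapped[new_key] = tensor
    return remapped
```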

Test Results:

  • Max difference: 0.046875
  • Mean difference: 0.001869
  • Relative difference: 0.173% (excellent parity in bfloat16)
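
For reference, metrics like the ones above can be computed along these lines; this is a sketch, and the actual test may use a different normalization for the relative difference.

```python
import torch


def parity_metrics(fastvideo_out: torch.Tensor, reference_out: torch.Tensor) -> dict[str, float]:
    ref = reference_out.float()
    diff = (fastvideo_out.float() - ref).abs()
    return {
        "max_diff": diff.max().item(),
        "mean_diff": diff.mean().item(),
        # one common definition: mean absolute error relative to mean reference magnitude
        "relative_diff_pct": 100.0 * diff.mean().item() / ref.abs().mean().clamp_min(1e-12).item(),
    }
```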

@gemini-code-assist
Contributor

Summary of Changes

Hello @KyleShao1016, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates NVIDIA's COSMOS 2.5 Diffusion Transformer architecture into the FastVideo framework. The implementation includes key architectural components like AdaLN-LoRA conditioning, 3D Rotary Positional Embeddings, and QK normalization, along with a robust configuration for checkpoint compatibility. Extensive testing confirms high numerical accuracy against the official reference, ensuring reliable video generation capabilities.

Highlights

  • COSMOS 2.5 DiT Implementation: Added the NVIDIA COSMOS 2.5 Diffusion Transformer (DiT) architecture to FastVideo, enabling advanced video generation capabilities.
  • Numerical Accuracy Verified: Achieved excellent numerical accuracy, with a 0.173% relative difference in bfloat16, against the official COSMOS 2.5 reference implementation.
  • Advanced Architectural Features: Implemented key features including 28 transformer blocks, AdaLN-LoRA conditioning, 3D Rotary Positional Embeddings (RoPE) with NTK-aware extrapolation, QK normalization, and optional cross-attention projection for high-dimensional embeddings (e.g., Qwen 7B).
  • Comprehensive Configuration and Checkpoint Loading: Introduced Cosmos25ArchConfig with detailed parameter mappings to ensure seamless loading of official COSMOS 2.5 checkpoints and support for LoRA fine-tuning.
  • Robust Testing Suite: Included extensive parity tests covering both single-frame (image) and multi-frame (video) generation, checkpoint loading, and video2world conditioning with condition masks.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive implementation of the NVIDIA COSMOS 2.5 DiT model, including its configuration, architecture, and detailed tests for numerical parity against the reference implementation. The code is well-structured, clearly commented, and the addition of extensive tests is commendable. My review identifies a few areas for improvement related to code style, efficiency, and robustness. Specifically, I've suggested moving an import out of a hot path in the forward method, improving robustness by making zip calls stricter to catch potential configuration errors, and cleaning up unused imports and variables in the test file. Overall, this is a high-quality contribution that significantly extends the model zoo.


```python
# 2. Concatenate padding mask if needed
if self.concat_padding_mask and padding_mask is not None:
    from torchvision import transforms
```
Contributor


Severity: high

Importing torchvision.transforms inside the forward method is inefficient as it will be executed on every call. This import should be moved to the top of the file with the other imports (e.g., near import torch) to follow standard Python best practices and avoid repeated import overhead.

Contributor Author


Moved the import to the top of the file.

```python
) -> None:
    super().__init__()

    self.max_size = [size // patch for size, patch in zip(max_size, patch_size, strict=False)]
```
Contributor


Severity: medium

Using zip with strict=False can hide potential configuration errors. If max_size and patch_size have different lengths, zip will silently truncate to the shorter iterable, which could lead to unexpected behavior. It's safer to use strict=True to ensure that these tuples have the expected matching length of 3. This will raise a ValueError if their lengths differ, making configuration issues easier to debug.

Suggested change:

```diff
- self.max_size = [size // patch for size, patch in zip(max_size, patch_size, strict=False)]
+ self.max_size = [size // patch for size, patch in zip(max_size, patch_size, strict=True)]
```

Contributor Author


Updated!
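
For illustration of the reviewer's point, here is what the two modes do with mismatched tuples (strict= requires Python 3.10+; the values below are made up, not from the model config):

```python
max_size = (128, 240, 240)
patch_size = (1, 2)  # hypothetical misconfiguration: one entry missing

print([s // p for s, p in zip(max_size, patch_size)])
# [128, 120]  -- the third dimension is silently dropped

print([s // p for s, p in zip(max_size, patch_size, strict=True)])
# ValueError: zip() argument 2 is shorter than argument 1
```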

```python
) -> None:
    super().__init__()

    self.max_size = [size // patch for size, patch in zip(max_size, patch_size, strict=False)]
```
Contributor


Severity: medium

Similar to the Cosmos25RotaryPosEmbed class, using zip with strict=False here can hide configuration errors. If max_size and patch_size have different lengths, it will silently truncate. Using strict=True is recommended for robustness, as it will raise a ValueError if the lengths of the iterables do not match, which helps in catching configuration mistakes early.

Suggested change:

```diff
- self.max_size = [size // patch for size, patch in zip(max_size, patch_size, strict=False)]
+ self.max_size = [size // patch for size, patch in zip(max_size, patch_size, strict=True)]
```

Contributor Author


Updated!

Comment on lines 9 to 28
```python
import numpy as np
import pytest
import torch

# Add cosmos-predict2.5 to Python path for loading reference model
# cosmos-predict2.5 is a sibling directory to FastVideo at video/cosmos-predict2.5
TEST_DIR = os.path.dirname(os.path.abspath(__file__))
# From test file: video/FastVideo/fastvideo/tests/transformers/
# Go up 4 levels to reach: video/
# Then join with: cosmos-predict2.5
COSMOS_PREDICT2_5_PATH = os.path.join(TEST_DIR, '..', '..', '..', '..', 'cosmos-predict2.5')
COSMOS_PREDICT2_5_PATH = os.path.normpath(COSMOS_PREDICT2_5_PATH)
if os.path.exists(COSMOS_PREDICT2_5_PATH) and COSMOS_PREDICT2_5_PATH not in sys.path:
    sys.path.insert(0, COSMOS_PREDICT2_5_PATH)

from fastvideo.configs.pipelines import PipelineConfig
from fastvideo.forward_context import set_forward_context
from fastvideo.fastvideo_args import FastVideoArgs
from fastvideo.logger import init_logger
from fastvideo.models.loader.component_loader import TransformerLoader
```
Contributor


Severity: medium

There are several unused imports and variables in this test file that should be removed for code cleanliness:

  • Unused imports: numpy as np (line 9), PipelineConfig (line 24), FastVideoArgs (line 26), TransformerLoader (line 28).
  • Unused variable precision_str: defined on lines 196 and 479 but never used.

Contributor Author


Removed!

@Edenzzzz Edenzzzz added the go Trigger Buildkite CI label Nov 23, 2025
Comment on lines +465 to +444
```python
assert max_diff < 1e-1, f"Maximum difference too large: {max_diff.item()}"
assert mean_diff < 1e-2, f"Mean difference too large: {mean_diff.item()}"
```
Collaborator

@Edenzzzz Edenzzzz Nov 23, 2025


assert a lower error than this?

Contributor Author

@KyleShao1016 KyleShao1016 Dec 7, 2025


Changed the dtype to float32 and reduced the threshold values.

Comment on lines 254 to 255
```python
# Attention computation
attn_output = torch.nn.functional.scaled_dot_product_attention(
```
Collaborator


Can you replace this with FastVideo's DistributedAttention? Refer to Wan or Cosmos2. By default both torch SDPA and FlashAttention should be supported.

Contributor Author


Updated!

```python
value = value.transpose(1, 2)

# Attention computation
attn_output = torch.nn.functional.scaled_dot_product_attention(
```
Collaborator


Replace this with our LocalAttention layer.

Contributor Author


Updated!
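
For context on the transpose(1, 2) pattern in the excerpt above, a self-contained plain-PyTorch version of that computation (before the switch to FastVideo's attention layers) looks roughly like this; shapes and names are illustrative, not taken from the PR:

```python
import torch
import torch.nn.functional as F

batch, seq_len, num_heads, head_dim = 2, 16, 4, 32
query = torch.randn(batch, seq_len, num_heads, head_dim)
key = torch.randn(batch, seq_len, num_heads, head_dim)
value = torch.randn(batch, seq_len, num_heads, head_dim)

# scaled_dot_product_attention expects [batch, heads, seq, head_dim], hence the transpose(1, 2) calls
query, key, value = (t.transpose(1, 2) for t in (query, key, value))
attn_output = F.scaled_dot_product_attention(query, key, value)

# back to [batch, seq, heads * head_dim] for the output projection
attn_output = attn_output.transpose(1, 2).reshape(batch, seq_len, num_heads * head_dim)
```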

@SolitaryThinker
Collaborator

Also, please run pre-commit and fix lint.

KyleS1016 and others added 3 commits December 7, 2025 04:32
Implement NVIDIA's COSMOS 2.5 Diffusion Transformer with AdaLN-LoRA conditioning, 3D RoPE, and QK normalization. Achieves 0.173% relative error vs official reference in bfloat16.

- Add Cosmos25Transformer3DModel with 28 transformer blocks
- Add configuration, checkpoint mappings, and parity tests
- Support optional cross-attention projection for high-dim embeddings
- Integrated DistributedAttention and LocalAttention for flexible backend support in Cosmos25SelfAttention and Cosmos25CrossAttention classes.
- Updated attention computation to handle both distributed and local scenarios.
- Refactored attention backend initialization to check for distributed environment.
- Cleaned up unused comments and improved code readability in the test suite for Cosmos 2.5.

This update improves the model's adaptability to different hardware configurations while maintaining performance.
…onality

- Added torchvision transforms import to the Cosmos 2.5 model for enhanced functionality.
- Updated max_size calculation in positional embedding classes to enforce strict behavior.
- Changed precision from bfloat16 to float32 in test cases to improve numerical stability.
- Tightened numerical difference assertions in tests to ensure higher accuracy in model outputs.

These changes enhance the model's robustness and ensure better alignment with reference implementations.
- Changed precision from float32 to bfloat16 to support FlashAttention backend
  * FlashAttention only supports bfloat16 and float16 dtypes
  * This aligns with the official Cosmos2.5 inference pipeline which uses bfloat16

- Adjusted numerical difference thresholds to account for bfloat16 precision
  * Relaxed max_diff and mean_diff assertions to accommodate lower precision
  * New thresholds are empirically determined to pass tests while maintaining correctness verification

These changes improve test compatibility with optimized attention backends and better reflect the model's actual inference configuration.
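
A minimal sketch of what the precision change implies for the test (the tiny model below is a placeholder stand-in for the DiT; only the dtype handling is the point, since FlashAttention kernels accept only float16/bfloat16 inputs):

```python
import torch
from torch import nn

# Placeholder model and input; not the actual Cosmos 2.5 transformer or test tensors.
transformer = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
latents = torch.randn(2, 128, 64)

# Run the forward pass in bfloat16, matching the FlashAttention requirement ...
transformer = transformer.to(dtype=torch.bfloat16)
latents = latents.to(dtype=torch.bfloat16)
with torch.no_grad():
    output = transformer(latents)

# ... then compare against the reference in float32 to avoid extra rounding error.
output = output.float()
```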
Comment on lines 229 to 235
```python
try:
    from fastvideo.distributed.parallel_state import model_parallel_is_initialized
    use_distributed = torch.distributed.is_initialized() and model_parallel_is_initialized()
except:
    use_distributed = False

if use_distributed:
```
Collaborator


We shouldn't need to check this; just always using DistributedAttention is fine.

Contributor Author


Updated!
I used DistributedAttention for self-attention and LocalAttention for cross-attention.

…butedAttention for self-attention and LocalAttention for cross-attention. This change simplifies the code by removing the distributed environment checks and ensures consistent behavior across different configurations.
@SolitaryThinker SolitaryThinker changed the title feat: add COSMOS 2.5 DiT implementation [feat]: add COSMOS 2.5 DiT implementation Dec 8, 2025
@SolitaryThinker SolitaryThinker merged commit e04a192 into hao-ai-lab:main Dec 8, 2025
1 check failed
