
Support sharded GPU simulation with mjwarp #2837

@jjyyxx

Description

The feature, motivation and pitch

It would be very useful to support multi-GPU mjwarp simulations.

Problem

Running mjwarp simulations across multiple GPUs is currently difficult due to limitations in sharding and device handling.

Challenges

  1. Device-side model/data creation: the current API offers no way to create the model and data directly on each target device, so it is not compatible with replicated or data-parallel sharding.
  2. Sharding awareness in the JAX FFI integration: mjx/third_party/warp/jax_experimental/ffi.py (and other code locations) assumes the first visible device rather than dispatching to the device each shard actually lives on.
  3. Collision data layout: mjwarp stores collision data in a flattened layout, which makes partitioning it along the environment/batch axis more complex.
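To make the request concrete, the layout users would like is ordinary JAX data-parallel sharding: replicate the static model, shard the batch of environments across GPUs, and have the warp-backed step honor those placements. A minimal sketch of that intended layout, using only standard JAX sharding APIs (the shapes and the 1-D "batch" mesh are illustrative, not current mjwarp behavior):

```python
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = jax.devices()  # ideally several GPUs; works with one device too
mesh = Mesh(devices, axis_names=("batch",))

# Static model quantities would be replicated; per-environment state would
# be sharded along the batch axis. These are the placements challenge 2
# asks the FFI layer to respect instead of pinning to the first device.
replicated = NamedSharding(mesh, P())
batched = NamedSharding(mesh, P("batch"))

num_envs = len(devices) * 1024          # illustrative batch size
nq = 7                                  # illustrative dof count
qpos = jax.device_put(jnp.zeros((num_envs, nq)), batched)

# Each shard of qpos now lives on its own device; a sharding-aware FFI
# call could launch the warp kernel per-device on its local shard.
print(qpos.shape)
```

With the current FFI integration, arrays placed this way are still processed as if they resided on the first visible device, which is the core limitation this issue describes.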

Alternatives

No response

Additional context

No response


    Labels

    enhancement (New feature or request)
