Code for our paper "TCDiff++: An End-to-end Trajectory-Controllable Diffusion Model for Harmonious Music-Driven Group Choreography".
✨ If you like this project, feel free to give TCDiff++ a star ✨.
- 2025.11.12 🎉 Our work TCDiff++ has been officially accepted by IJCV 2025.
- 2025.09.28 🎉 We have released the TCDiff++ codebase. The open-source implementation provides:
- End-to-end trajectory-controllable diffusion framework for group choreography.
- Support for both Mamba-SSM and Transformer-based backbones (via the `--use_ssm` option).
- Scripts for data preprocessing on AIOZ-GDance.
- Training and evaluation pipelines with long-duration group dance generation.
- Automated Blender visualization pipeline for rendering high-quality 3D animations.
- 2025.06 🎉 The preprint of TCDiff++ is available on arXiv.
- 2024.03 💪 Our previous work TCDiff was released, pioneering trajectory-controllable diffusion for harmonious group choreography and laying the foundation for TCDiff++.
- To set up the environment, follow these steps:
```
# Create a new conda environment
conda create -n tcdiffpp python=3.9
conda activate tcdiffpp
# Install PyTorch with CUDA support
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia
# Install additional dependencies
pip install mamba-ssm
conda install packaging
# Configure and install PyTorch3D
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch3d/
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d
# Install remaining requirements
pip install -r requirements.txt
pip install accelerate
pip install librosa
pip install matplotlib
pip install p_tqdm
```

- On certain hardware configurations, setting up an environment with SSM may encounter issues. Alternatively, you can use an environment without SSM and set the parameter `use_ssm` to `False`, which allows you to use the Transformer-based framework instead.
- We found that on certain devices, the script `create_dataset.py` has specific NumPy version requirements (1.26.1 or 2.0.1), which may conflict with the NumPy version (1.24.1) used in the environment for `train.py` and `test.py`. The following commands may help resolve these issues:

```
pip install -U scipy
pip install -U librosa
pip install -U numpy
pip install numpy==x.x.x
```
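Once everything is installed, a quick sanity check along these lines can confirm the environment before training (a minimal sketch; the printed hints mirror the `--use_ssm` notes below):

```python
# Quick environment sanity check: report PyTorch/CUDA/NumPy versions and
# whether the optional mamba-ssm backbone is importable. Minimal sketch only.
import importlib.util

import numpy as np
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("numpy:", np.__version__)  # compare against the version notes above

if importlib.util.find_spec("mamba_ssm") is not None:
    print("mamba-ssm found: training with --use_ssm True should work")
else:
    print("mamba-ssm missing: fall back to --use_ssm False (Transformer backbone)")
```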
- Please download AIOZ-GDance from here and place it in the `./data/AIOZ_Dataset` path.
- Run the preprocessing script:
```
cd data/
python create_dataset.py
```
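If `create_dataset.py` complains about missing files, first confirm the download landed where the script expects it. A minimal sketch (the layout inside `./data/AIOZ_Dataset` follows the AIOZ-GDance release, which is not reproduced here):

```python
# Minimal sketch: verify the AIOZ-GDance download is in place before
# preprocessing. Run from the repository root.
from pathlib import Path

root = Path("./data/AIOZ_Dataset")
if not root.is_dir():
    raise SystemExit(f"Dataset not found at {root}; see the download step above.")
for entry in sorted(root.iterdir()):
    print(entry.name)  # eyeball the top-level contents against the release
```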
To train the model, use the following command:

```
accelerate launch train.py --use_ssm True
```
💡 Note: If you encounter difficulties setting up the mamba environment, you can set `--use_ssm False`. This will switch to the Transformer-based framework and remove the dependency on mamba-ssm.
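For intuition, here is a minimal sketch, hypothetical rather than the repository's actual model code, of how a string-valued flag like `--use_ssm` can switch between the two backbones:

```python
# Hypothetical sketch of the --use_ssm switch; not the repository's actual
# code. d_model/nhead/num_layers are illustrative values only.
import argparse

import torch.nn as nn

def str2bool(v: str) -> bool:
    # Accept "True"/"False" the way the commands above pass them.
    return v.lower() in ("true", "1", "yes")

parser = argparse.ArgumentParser()
parser.add_argument("--use_ssm", type=str2bool, default=True)
args = parser.parse_args()

if args.use_ssm:
    from mamba_ssm import Mamba  # requires the mamba-ssm package
    backbone = Mamba(d_model=512)  # selective state-space block
else:
    layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
    backbone = nn.TransformerEncoder(layer, num_layers=8)
print(type(backbone).__name__)
```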
To generate results using the trained model, run:
```
python train.py --mode "val" --use_ssm True
```
💡 Note: Similarly, if the mamba environment is hard to configure, you can set `--use_ssm False` to avoid relying on SSM.
To perform long-duration generation, execute:

```
python long_generation.py --required_dancer_num 4 --genre Electronic --use_ssm True
```
💡 Note: The `--use_ssm` setting should be kept consistent with the configuration you used during training. If you had to disable SSM due to environment issues, make sure to set `--use_ssm False` here as well.
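As background, long sequences are often assembled from fixed-length clips by cross-fading overlapping frames. The sketch below illustrates that generic technique only; it is not the logic of `long_generation.py`, and all shapes and names are hypothetical:

```python
# Generic long-sequence stitching: linearly cross-fade overlapping frames of
# consecutive fixed-length windows. Illustrative only; not long_generation.py.
import numpy as np

def stitch_windows(windows: list, overlap: int) -> np.ndarray:
    """windows: equally shaped arrays of (frames, dancers, motion_dims)."""
    out = windows[0]
    fade = np.linspace(0.0, 1.0, overlap)[:, None, None]  # 0 -> 1 over overlap
    for w in windows[1:]:
        blended = (1.0 - fade) * out[-overlap:] + fade * w[:overlap]
        out = np.concatenate([out[:-overlap], blended, w[overlap:]], axis=0)
    return out

# Example: four 150-frame windows for 4 dancers, 30-frame overlap -> 510 frames.
clips = [np.random.randn(150, 4, 151) for _ in range(4)]
print(stitch_windows(clips, overlap=30).shape)  # (510, 4, 151)
```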
We developed automated scripts to transform the generated SMPL motion data into beautiful 3D animations rendered in Blender, replicating the high-quality visuals featured on our project page. The entire rendering pipeline, from data preparation to Blender rendering, is fully scripted for ease of use and reproducibility. For detailed steps, please refer to the Blender_Visulization/ Rendering Pipeline documentation.
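For intuition, here is a minimal `bpy` sketch of the kind of import-and-render step such a pipeline automates; paths and settings are hypothetical, and the actual scripts live under `Blender_Visulization/`:

```python
# Minimal bpy sketch of an automated import-and-render step. Run inside
# Blender, e.g. `blender -b -P this_script.py`. Paths are hypothetical.
import bpy

# Import an animated mesh exported from the generated SMPL motion (FBX assumed).
bpy.ops.import_scene.fbx(filepath="./outputs/group_dance.fbx")

scene = bpy.context.scene
scene.render.engine = "CYCLES"                 # or EEVEE for faster previews
scene.render.image_settings.file_format = "PNG"
scene.render.filepath = "./renders/frame_"     # one image per frame
scene.frame_start, scene.frame_end = 1, 300

bpy.ops.render.render(animation=True)
```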
✨ Your star is the greatest encouragement for our work. ✨

The concept of TCDiff is inspired by the solo-dancer generation model EDGE and by Mamba. We sincerely appreciate these teams' contributions to open-source research and development.
```
@article{dai2025tcdiff++,
title={TCDiff++: An End-to-end Trajectory-Controllable Diffusion Model for Harmonious Music-Driven Group Choreography},
author={Dai, Yuqin and Zhu, Wanlu and Li, Ronghui and Li, Xiu and Zhang, Zhenyu and Li, Jun and Yang, Jian},
journal={arXiv preprint arXiv:2506.18671},
year={2025}
}
```
```
@inproceedings{dai2025harmonious,
title={Harmonious Music-driven Group Choreography with Trajectory-Controllable Diffusion},
author={Dai, Yuqin and Zhu, Wanlu and Li, Ronghui and Ren, Zeping and Zhou, Xiangzheng and Ying, Jixuan and Li, Jun and Yang, Jian},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={39},
number={3},
pages={2645--2653},
year={2025}
}
```
