Chin-Yang Lin
·
Cheng Sun
·
Fu-En Yang
Min-Hung Chen
·
Yen-Yu Lin
·
Yu-Lun Liu
LongSplat is an unposed 3D Gaussian Splatting framework for robust reconstruction from casually captured long videos. Featuring incremental joint optimization, pose estimation with MASt3R, and adaptive octree anchoring, LongSplat achieves high-quality novel view synthesis from free viewpoints while remaining memory-efficient and scalable.
🎉 2025.08.27: Released a converter to the standard 3DGS format for compatibility with general 3DGS viewers. See the Convert to 3DGS Format section.
🎉 2025.08.20: Code release!
- Clone LongSplat.
```shell
git clone --recursive https://github.com/NVlabs/LongSplat.git
cd LongSplat
```
- Create the environment.
```shell
conda create -n longsplat python=3.10.13 cmake=3.14.0 -y
conda activate longsplat
conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia  # use the CUDA version that matches your system
pip install -r requirements.txt
pip install submodules/simple-knn
pip install submodules/diff-gaussian-rasterization
pip install submodules/fused-ssim
```
- Optional but highly recommended: compile the CUDA kernels for RoPE (as in CroCo v2).
```shell
# DUSt3R relies on RoPE positional embeddings, for which you can compile CUDA kernels for faster runtime.
cd submodules/mast3r/dust3r/croco/models/curope/
python setup.py build_ext --inplace
cd ../../../../../../
```
The default `DATAROOT` is `./data`. Please create the data folder first with `mkdir data`.
- Download the Free dataset from Dropbox and save it into the `./data/free` folder.
- Download the Hike dataset from Google Drive and save it into the `./data/hike` folder.
- Download the data preprocessed by Nope-NeRF as below and save it into the `./data/tanks` folder.
```shell
wget https://www.robots.ox.ac.uk/~wenjing/Tanks.zip
```
The training scripts cover training, rendering, and evaluation. Each `.sh` script runs three Python scripts in sequence:
- `train.py`: trains the LongSplat model
- `render.py`: renders the trained model to generate novel views
- `metrics.py`: evaluates rendering quality and computes metrics
```shell
# For Free dataset
bash scripts/train_free.sh
# For Hike dataset
bash scripts/train_hike.sh
# For Tanks and Temples dataset
bash scripts/train_tnt.sh
```
Input video suggestions: LongSplat can handle varied resolutions and frame rates. We suggest subsampling to around 10 fps and resizing longer videos to 512 px width for faster training. For best results, avoid videos with dynamic objects and ensure sufficient camera overlap between consecutive frames; sparse, non-overlapping input degrades reconstruction.
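The suggestions above can be made concrete with a small helper that converts a source frame rate and resolution into a subsampling stride and an aspect-preserving target size. This is a sketch of the preprocessing arithmetic only; the function name and defaults are ours, not part of LongSplat:

```python
def preprocess_params(src_fps, width, height, target_fps=10, target_width=512):
    """Compute a frame-subsampling stride and an aspect-preserving target size.

    Illustrative helper only; LongSplat itself does not ship this function.
    """
    stride = max(1, round(src_fps / target_fps))   # keep every `stride`-th frame
    scale = target_width / width                   # shrink so the width becomes 512 px
    target_size = (target_width, round(height * scale))
    return stride, target_size

# A 30 fps, 1920x1080 video: keep every 3rd frame, resize to 512x288.
print(preprocess_params(30, 1920, 1080))  # → (3, (512, 288))
```

You can then pass the stride and size to whatever frame-extraction tool you prefer (e.g. ffmpeg or OpenCV) before placing the frames under your data folder.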
1. To run LongSplat on your own video, first convert the video to frames and save them to `./data/$CUSTOM_DATA/images/`.
2. Before running the script, modify the `scene=` parameter in `scripts/train_custom.sh` to point to your custom data directory. For example, change `scene='./data/IMG_4190'` to `scene='./data/YOUR_CUSTOM_DATA'`.
3. Adjust training parameters based on video length and resolution. For longer videos, you may need to increase the `--post_iter` parameter to ensure sufficient optimization; consider scaling this value proportionally to your video length.
4. Run the following command:
```shell
bash scripts/train_custom.sh
```
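One way to read "adjust `--post_iter` proportionally to your video length" is a simple linear scale from a reference configuration. The base values below are illustrative assumptions, not defaults shipped with LongSplat:

```python
def scaled_post_iter(n_frames, base_frames=300, base_post_iter=30000):
    """Linearly scale the post-optimization iteration budget with frame count.

    `base_frames` and `base_post_iter` are hypothetical reference values,
    not numbers taken from the LongSplat scripts.
    """
    return int(base_post_iter * n_frames / base_frames)

print(scaled_post_iter(600))  # twice the reference frame count → 60000
```

Whatever base you pick, the point is to keep iterations-per-frame roughly constant as the video gets longer.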
LongSplat uses an anchor + MLP structure for efficient reconstruction. We provide a conversion script that transforms LongSplat results into the standard 3DGS format, producing a `point_cloud.ply` file that can be used with general 3DGS viewers.
Note: Converting to 3DGS format will change both quality and model size. We recommend applying pre-pruning to reduce the model size before conversion.
```shell
# Convert LongSplat output to 3DGS format
python convert_3dgs.py -m PATH_TO_TRAINED_MODEL --prune_ratio 0.6
```
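Conceptually, `--prune_ratio 0.6` drops the lowest-scoring 60% of Gaussians before conversion to keep the exported model small. A minimal sketch using opacity as the pruning score (the actual criterion in `convert_3dgs.py` may differ):

```python
import numpy as np

def prune_indices(opacities, prune_ratio):
    """Return sorted indices of the Gaussians kept after score-based pruning.

    Illustrative only: LongSplat's real pruning score may not be raw opacity.
    """
    n_keep = int(len(opacities) * (1.0 - prune_ratio))
    order = np.argsort(opacities)[::-1]   # most opaque first
    return np.sort(order[:n_keep])        # indices of the survivors

ops = np.array([0.9, 0.1, 0.5, 0.8, 0.2])
print(prune_indices(ops, 0.6))  # keeps the two most opaque Gaussians → [0 3]
```

Higher ratios shrink the `.ply` further at the cost of reconstruction quality, which is why pre-pruning before conversion is a trade-off worth tuning per scene.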
The converted result can be visualized in the Three.js 3D Gaussian Splatting viewer.
Our renderer is built upon 3DGS. The data processing and visualization code is partially borrowed from Scaffold-GS. We thank all the authors for their great repos.
Please consider starring ⭐ this repo and citing our paper 📝 if you find it useful.
```bibtex
@inproceedings{lin2025longsplat,
  title={LongSplat: Robust Unposed 3D Gaussian Splatting for Casual Long Videos},
  author={Chin-Yang Lin and Cheng Sun and Fu-En Yang and Min-Hung Chen and Yen-Yu Lin and Yu-Lun Liu},
  booktitle={ICCV},
  year={2025}
}
```