First of all, it is a great publication and contribution to the open-source community!
I'm wondering whether Sparse-VideoGen could be combined with other acceleration methods for video DiT models, such as TeaCache (and its ComfyUI implementation), Comfy-WaveSpeed, and torch.compile, and/or with attention optimizations like SageAttention and FlashAttention.
Much of the ComfyUI/video-gen community relies on TeaCache and/or Comfy-WaveSpeed (via @kijai's wrappers), which already achieve 1.6x-2x acceleration with a reasonable quality impact.
Thank you in advance; I would love to learn whether there is a best combination of these methods to further speed up video generation!
For most of us without flagship GPUs like the H100, video generation time (and VRAM consumption) are crucial problems we are eager to overcome.