
Commit b9b98e0

Merge branch 'main' of github.com:hao-ai-lab/hao-ai-lab.github.io
2 parents 72bde58 + 3271ed6

2 files changed: +7 −3 lines

content/blogs/fastvideo_post_training/index.md

Lines changed: 7 additions & 3 deletions
@@ -1,6 +1,6 @@
 +++
 title = "FastWan: Generating a 5-Second Video in 5 Seconds via Sparse Distillation"
-date = 2025-08-01T11:00:00-08:00
+date = 2025-08-04T11:00:00-08:00
 authors = ["FastVideo Team"]
 author = "FastVideo Team"
 ShowReadingTime = true
@@ -13,7 +13,7 @@ draft = false
 name = "github"
 url = "https://github.com/hao-ai-lab/FastVideo"
 [cover]
-image = "/img/fastwan.png"
+image = "/img/fastwan/fastwan-teaser.gif"
 alt = "Denoising speedup of FastWan"
 caption = "A gif of a graph showing FastWan achieving 72.8x speedup for denoising"
 hidden = true
@@ -33,7 +33,7 @@ draft = false

 Below, we demonstrate how each module accelerates the DiT denoising time (without text encoder and vae) on a single H200 GPU.

-{{< table title="Table 2: DiT denoising time comparisons of different methods. All numbers can be reproduced with this [script](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/inference/v1_inference_wan_VSA_DMD.sh)." >}}
+{{< table title="Table 1: DiT denoising time comparisons of different methods. All numbers can be reproduced with this [script](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/inference/v1_inference_wan_VSA_DMD.sh)." >}}

 | Modules | Wan 2.2 5B 720P | Wan2.1 14B 720P | Wan2.1 1.3B 480P |
 |:-------------------------:|:---------------:|:----------------:|:----------------:|
@@ -66,11 +66,15 @@ FastWan is runnable on a wide range of hardware including Nvidia H100, H200, 409
 ### Models and Recipes

 With this blog, we are releasing the following models and their recipes:
+
+{{< table title="Table 2: FastWan release assets." >}}
+
 | Model | Sparse Distillation | Dataset |
 |:-------------------------------------------------------------------------------------------: |:---------------------------------------------------------------------------------------------------------------: |:--------------------------------------------------------------------------------------------------------: |
 | [FastWan2.1-T2V-1.3B](https://huggingface.co/FastVideo/FastWan2.1-T2V-1.3B-Diffusers) | [Recipe](https://github.com/hao-ai-lab/FastVideo/tree/main/examples/distill/Wan2.1-T2V/Wan-Syn-Data-480P) | [FastVideo Synthetic Wan2.1 480P](https://huggingface.co/datasets/FastVideo/Wan-Syn_77x448x832_600k) |
 | [FastWan2.1-T2V-14B-Preview](https://huggingface.co/FastVideo/FastWan2.1-T2V-14B-Diffusers) | Coming soon! | [FastVideo Synthetic Wan2.1 720P](https://huggingface.co/datasets/FastVideo/Wan-Syn_77x768x1280_250k) |
 | [FastWan2.2-TI2V-5B-FullAttn-Diffusers](https://huggingface.co/FastVideo/FastWan2.2-TI2V-5B-FullAttn-Diffusers) | [Recipe](https://github.com/hao-ai-lab/FastVideo/tree/main/examples/distill/Wan2.2-TI2V-5B-Diffusers/Data-free) | [FastVideo Synthetic Wan2.2 720P](https://huggingface.co/datasets/FastVideo/Wan2.2-Syn-121x704x1280_32k) |
+{{</ table >}}


 For FastWan2.2-TI2V-5B-FullAttn, since its sequence length is short (~20K), it does not benifit much from sparse attention. We only train it with DMD and full attention. We are actively working on applying sparse distillation to 14B models for both Wan2.1 and Wan2.2. Follow our progress at our [Github](https://github.com/hao-ai-lab/FastVideo), [Slack](https://join.slack.com/t/fastvideo/shared_invite/zt-38u6p1jqe-yDI1QJOCEnbtkLoaI5bjZQ) and [Discord](https://discord.gg/Dm8F2peD3e)!
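Note on the last hunk: it wraps the existing models table in the blog's `table` shortcode so it renders with a numbered caption ("Table 2"), mirroring the Table 1 shortcode already used earlier in the post. A minimal sketch of the resulting markup after this commit (rows abbreviated to the first model; the shortcode is assumed to take only the `title` parameter, as shown in the diff):

```md
{{< table title="Table 2: FastWan release assets." >}}

| Model | Sparse Distillation | Dataset |
|:-----:|:-------------------:|:-------:|
| [FastWan2.1-T2V-1.3B](https://huggingface.co/FastVideo/FastWan2.1-T2V-1.3B-Diffusers) | [Recipe](https://github.com/hao-ai-lab/FastVideo/tree/main/examples/distill/Wan2.1-T2V/Wan-Syn-Data-480P) | [FastVideo Synthetic Wan2.1 480P](https://huggingface.co/datasets/FastVideo/Wan-Syn_77x448x832_600k) |
{{</ table >}}
```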
Binary file (1.39 MB) not shown.