Commit 5d01430

update table caption
1 parent 77983ee commit 5d01430

File tree: 1 file changed (+5, -1 lines)
  • content/blogs/fastvideo_post_training


content/blogs/fastvideo_post_training/index.md

Lines changed: 5 additions & 1 deletion
@@ -33,7 +33,7 @@ draft = false
 
 Below, we demonstrate how each module accelerates the DiT denoising time (without text encoder and vae) on a single H200 GPU.
 
-{{< table title="Table 2: DiT denoising time comparisons of different methods. All numbers can be reproduced with this [script](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/inference/v1_inference_wan_VSA_DMD.sh)." >}}
+{{< table title="Table 1: DiT denoising time comparisons of different methods. All numbers can be reproduced with this [script](https://github.com/hao-ai-lab/FastVideo/blob/main/scripts/inference/v1_inference_wan_VSA_DMD.sh)." >}}
 
 | Modules | Wan 2.2 5B 720P | Wan2.1 14B 720P | Wan2.1 1.3B 480P |
 |:-------------------------:|:---------------:|:----------------:|:----------------:|
@@ -66,11 +66,15 @@ FastWan is runnable on a wide range of hardware including Nvidia H100, H200, 409
 ### Models and Recipes
 
 With this blog, we are releasing the following models and their recipes:
+
+{{< table title="Table 2: FastWan release assets." >}}
+
 | Model | Sparse Distillation | Dataset |
 |:-------------------------------------------------------------------------------------------: |:---------------------------------------------------------------------------------------------------------------: |:--------------------------------------------------------------------------------------------------------: |
 | [FastWan2.1-T2V-1.3B](https://huggingface.co/FastVideo/FastWan2.1-T2V-1.3B-Diffusers) | [Recipe](https://github.com/hao-ai-lab/FastVideo/tree/main/examples/distill/Wan2.1-T2V/Wan-Syn-Data-480P) | [FastVideo Synthetic Wan2.1 480P](https://huggingface.co/datasets/FastVideo/Wan-Syn_77x448x832_600k) |
 | [FastWan2.1-T2V-14B-Preview](https://huggingface.co/FastVideo/FastWan2.1-T2V-14B-Diffusers) | Coming soon! | [FastVideo Synthetic Wan2.1 720P](https://huggingface.co/datasets/FastVideo/Wan-Syn_77x768x1280_250k) |
 | [FastWan2.2-TI2V-5B-FullAttn-Diffusers](https://huggingface.co/FastVideo/FastWan2.2-TI2V-5B-FullAttn-Diffusers) | [Recipe](https://github.com/hao-ai-lab/FastVideo/tree/main/examples/distill/Wan2.2-TI2V-5B-Diffusers/Data-free) | [FastVideo Synthetic Wan2.2 720P](https://huggingface.co/datasets/FastVideo/Wan2.2-Syn-121x704x1280_32k) |
+{{</ table >}}
 
 
 For FastWan2.2-TI2V-5B-FullAttn, since its sequence length is short (~20K), it does not benefit much from sparse attention. We only train it with DMD and full attention. We are actively working on applying sparse distillation to 14B models for both Wan2.1 and Wan2.2. Follow our progress on our [GitHub](https://github.com/hao-ai-lab/FastVideo), [Slack](https://join.slack.com/t/fastvideo/shared_invite/zt-38u6p1jqe-yDI1QJOCEnbtkLoaI5bjZQ) and [Discord](https://discord.gg/Dm8F2peD3e)!
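For readers unfamiliar with the construct being edited: `{{< table >}}` is a Hugo shortcode that wraps a Markdown table so a numbered caption can be attached, which is why renumbering the caption only touches the shortcode's `title` attribute. A minimal sketch of the pattern this commit uses (the caption text and cell values here are illustrative placeholders, not content from the blog):

```
{{< table title="Table N: Example caption." >}}
| Column A | Column B |
|:--------:|:--------:|
| value 1  | value 2  |
{{</ table >}}
```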
