Kijai/WanVideo_comfy
Model Documentation
Combined and quantized models for WanVideo, originating from here:
https://huggingface.co/Wan-AI/
Can be used with https://github.com/kijai/ComfyUI-WanVideoWrapper as well as the native ComfyUI WanVideo nodes.
I've also started doing fp8_scaled versions here: https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled
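The fp8 weight files in this repo use 8-bit floating point (e4m3fn has a largest finite value of 448). The "scaled" idea behind the fp8_scaled variants can be pictured as per-tensor scaling: pick one scale so the largest weight magnitude lands inside the fp8 range, then clamp. This is a minimal pure-Python sketch of that concept only, not the actual quantization script used to produce these files (real conversion also rounds each value to an 8-bit code):

```python
FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn


def scaled_fp8_quantize(weights):
    """Per-tensor scaled quantization sketch: choose a scale so the largest
    magnitude maps onto the fp8 e4m3 range, then clamp the scaled values.
    (Rounding to actual 8-bit codes is omitted; this shows the scaling only.)"""
    amax = max(abs(w) for w in weights)
    scale = amax / FP8_E4M3_MAX if amax > 0 else 1.0
    quantized = [max(-FP8_E4M3_MAX, min(FP8_E4M3_MAX, w / scale)) for w in weights]
    return quantized, scale


def dequantize(quantized, scale):
    """Recover approximate original values by multiplying the scale back in."""
    return [q * scale for q in quantized]


# Example: a weight far outside the fp8 range survives a round trip via the scale.
w = [0.001, -2.5, 900.0]
q, s = scaled_fp8_quantize(w)
restored = dequantize(q, s)
```

The point of storing the scale alongside the fp8 tensor is that weights whose magnitudes exceed the fp8 range (or sit far below it) would otherwise clip or underflow during a plain cast.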
Other model sources:
TinyVAE from https://github.com/madebyollin/taehv
SkyReels: https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9
WanVideoFun: https://huggingface.co/collections/alibaba-pai/wan21-fun-v11-680f514c89fe7b4df9d44f17
---
Lightx2v:
CausVid 14B: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid
CFG and Step distill 14B: https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill
---
CausVid 1.3B: https://huggingface.co/tianweiy/CausVid
AccVideo: https://huggingface.co/aejion/AccVideo-WanX-T2V-14B
Phantom: https://huggingface.co/bytedance-research/Phantom
ATI: https://huggingface.co/bytedance-research/ATI
MiniMaxRemover: https://huggingface.co/zibojia/minimax-remover
MAGREF: https://huggingface.co/MAGREF-Video/MAGREF
FantasyTalking: https://github.com/Fantasy-AMAP/fantasy-talking
MultiTalk: https://github.com/MeiGen-AI/MultiTalk
Anisora: https://huggingface.co/IndexTeam/Index-anisora/tree/main/14B
Pusa: https://huggingface.co/RaphaelLiu/PusaV1/tree/main
FastVideo: https://huggingface.co/FastVideo
EchoShot: https://github.com/D2I-ai/EchoShot
Wan22 5B Turbo: https://huggingface.co/quanhaol/Wan2.2-TI2V-5B-Turbo
Ovi: https://github.com/character-ai/Ovi
FlashVSR: https://huggingface.co/JunhaoZhuang/FlashVSR
rCM: https://huggingface.co/worstcoder/rcm-Wan/tree/main
---
CausVid LoRAs are experimental extractions from the CausVid finetunes. The aim is to benefit from CausVid's distillation rather than to perform any actual causal inference.
v1 = direct extraction; adversely affects motion and introduces a flashing artifact at full strength.
v1.5 = same as above, but with the first block removed, which fixes the flashing at full strength.
v2 = further pruned version with only attention layers and no first block; fixes the flashing and retains motion better, but needs more steps and can also benefit from CFG.
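The v1.5/v2 pruning described above amounts to key filtering over an extracted LoRA state dict: drop the first transformer block, and optionally keep only attention-layer weights. A hypothetical sketch of that filtering — the `blocks.0.` prefix and `attn` substring are assumed key names for illustration, not the exact names in the Wan checkpoints:

```python
def prune_lora(state_dict, drop_first_block=False, attention_only=False):
    """Filter an extracted LoRA state dict the way the v1.5/v2 variants are
    described: optionally drop the first transformer block, optionally keep
    only attention-layer weights. Key naming here is hypothetical."""
    kept = {}
    for key, tensor in state_dict.items():
        if drop_first_block and key.startswith("blocks.0."):
            continue  # v1.5: removing the first block fixes the flashing artifact
        if attention_only and "attn" not in key:
            continue  # v2: keep attention layers only
        kept[key] = tensor
    return kept


# Toy example with fake keys standing in for real LoRA weight names.
lora = {
    "blocks.0.attn.lora_A": [0.1],
    "blocks.1.attn.lora_A": [0.2],
    "blocks.1.ffn.lora_A": [0.3],
}
v1_5 = prune_lora(lora, drop_first_block=True)                        # drops blocks.0.*
v2 = prune_lora(lora, drop_first_block=True, attention_only=True)     # attention only
```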
Files & Weights
| Filename | Size |
|---|---|
| Phantom-Wan-14B_fp16.safetensors | 27.06 GB |
| Phantom-Wan-14B_fp8_e4m3fn.safetensors | 13.97 GB |
| Phantom-Wan-1_3B_fp16.safetensors | 2.68 GB |
| Phantom-Wan-1_3B_fp32.safetensors | 5.29 GB |
| UniAnimate-Wan2.1-14B-Lora-12000-fp16.safetensors | 1.14 GB |
| Wan21_AccVid_I2V_480P_14B_lora_rank32_fp16.safetensors | 0.30 GB |
| Wan21_AccVid_T2V_14B_lora_rank32_fp16.safetensors | 0.30 GB |
| Wan21_CausVid_14B_T2V_lora_rank32.safetensors | 0.30 GB |
| Wan21_CausVid_14B_T2V_lora_rank32_v1_5_no_first_block.safetensors | 0.29 GB |
| Wan21_CausVid_14B_T2V_lora_rank32_v2.safetensors | 0.19 GB |
| Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors | 0.08 GB |
| Wan21_T2V_14B_MoviiGen_lora_rank32_fp16.safetensors | 0.30 GB |
| Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors | 0.30 GB |
| Wan21_Uni3C_controlnet_fp16.safetensors | 1.86 GB |
| Wan2_1-AccVideo-T2V-14B_fp8_e4m3fn.safetensors | 13.97 GB |
| Wan2_1-Anisora-I2V-480P-14B_fp16.safetensors | 31.00 GB |
| Wan2_1-Anisora-I2V-480P-14B_fp8_e4m3fn.safetensors | 15.96 GB |
| Wan2_1-FLF2V-14B-720P_fp16.safetensors | 31.00 GB |
| Wan2_1-FLF2V-14B-720P_fp8_e4m3fn.safetensors | 15.96 GB |
| Wan2_1-I2V-14B-480P_fp8_e4m3fn.safetensors | 15.83 GB |
| Wan2_1-I2V-14B-480P_fp8_e5m2.safetensors | 15.83 GB |
| Wan2_1-I2V-14B-720P_fp8_e4m3fn.safetensors | 15.83 GB |
| Wan2_1-I2V-14B-720P_fp8_e5m2.safetensors | 15.83 GB |
| Wan2_1-I2V-ATI-14B_fp16.safetensors | 31.00 GB |
| Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors | 15.96 GB |
| Wan2_1-I2V-ATI-14B_fp8_e5m2.safetensors | 15.96 GB |
| Wan2_1-MiniMaxRemover_1_3B_fp16.safetensors | 2.10 GB |
| Wan2_1-MoviiGen1_1_fp16.safetensors | 27.06 GB |
| Wan2_1-MoviiGen1_1_fp8_e4m3fn.safetensors | 13.97 GB |
| Wan2_1-T2V-14B_CausVid_fp8_e4m3fn.safetensors | 13.48 GB |
| Wan2_1-T2V-14B_fp8_e4m3fn.safetensors | 13.84 GB |
| Wan2_1-T2V-14B_fp8_e5m2.safetensors | 13.84 GB |
| Wan2_1-T2V-1_3B_bf16.safetensors | 2.68 GB |
| Wan2_1-T2V-1_3B_fp32.safetensors | 5.29 GB |
| Wan2_1-T2V-1_3B_fp8_e4m3fn.safetensors | 1.37 GB |
| Wan2_1-T2V_FastWan_1_3B_bf16.safetensors | 2.78 GB |
| Wan2_1-VACE_module_14B_bf16.safetensors | 5.68 GB |
| Wan2_1-VACE_module_14B_fp8_e4m3fn.safetensors | 2.84 GB |
| Wan2_1-VACE_module_1_3B_bf16.safetensors | 1.37 GB |
| Wan2_1-Wan-I2V-MAGREF-14B_fp8_e4m3fn.safetensors | 15.96 GB |
| Wan2_1_VACE_1_3B_preview_bf16.safetensors | 1.37 GB |
| Wan2_1_VAE_bf16.safetensors | 0.24 GB |
| Wan2_1_VAE_fp32.safetensors | 0.47 GB |
| Wan2_1_kwai_recammaster_1_3B_step20000_bf16.safetensors | 2.78 GB |
| Wan2_2-I2V-A14B-HIGH_bf16.safetensors | 26.62 GB |
| Wan2_2-I2V-A14B-LOW_bf16.safetensors | 26.62 GB |
| Wan2_2_VAE_bf16.safetensors | 1.31 GB |
| WanVideo_2_1_Multitalk_14B_fp8_e4m3fn.safetensors | 2.53 GB |
| fantasytalking_fp16.safetensors | 1.57 GB |
| open-clip-xlm-roberta-large-vit-huge-14_visual_fp16.safetensors | 1.18 GB |
| open-clip-xlm-roberta-large-vit-huge-14_visual_fp32.safetensors | 2.35 GB |
| taew2_1.safetensors | 0.02 GB |
| taew2_2.safetensors | 0.02 GB |
| umt5-xxl-enc-bf16.safetensors | 10.58 GB |
| umt5-xxl-enc-fp8_e4m3fn.safetensors | 6.27 GB |
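All files above use the .safetensors format, whose layout is public: the first 8 bytes are an unsigned little-endian 64-bit header length, followed by that many bytes of JSON mapping tensor names to their dtype, shape, and byte offsets. A stdlib-only sketch for listing a file's tensors without loading any weights (in practice you would use the `safetensors` library instead):

```python
import json
import struct


def read_safetensors_header(path):
    """Parse just the JSON header of a .safetensors file: an 8-byte
    little-endian u64 header length, then that many bytes of JSON."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional free-form string map, not a tensor entry.
    header.pop("__metadata__", None)
    return header


def write_dummy_safetensors(path, tensors):
    """Build a minimal, spec-conformant file for demonstration (zero-filled data).
    `tensors` maps name -> (dtype string, shape list, byte count)."""
    offset = 0
    header = {}
    for name, (dtype, shape, nbytes) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + nbytes]}
        offset += nbytes
    blob = json.dumps(header).encode()
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(blob)))
        f.write(blob)
        f.write(b"\x00" * offset)


# Example: write a tiny fake checkpoint, then list its tensors and shapes.
write_dummy_safetensors("demo.safetensors",
                        {"vae.weight": ("F32", [2, 2], 16)})
info = read_safetensors_header("demo.safetensors")
```

This is handy for checking what a multi-gigabyte download actually contains (e.g. whether a file is a full model or a VACE module) before committing VRAM to it.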