---
base_model: Lightricks/LTX-Video
library_name: gguf
quantized_by: wsbagnsv1
---

# ltxv-13b-0.9.7-distilled-GGUF
This is a direct GGUF conversion of the 13b-0.9.7-distilled variant from Lightricks/LTX-Video.
The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
| --- | --- | --- | --- |
| Main Model | ltxv-13b-0.9.7-distilled | ComfyUI/models/diffusion_models | GGUF (this repo) |
| Text Encoder | T5-V1.1-XXL-Encoder | ComfyUI/models/text_encoders | Safetensors / GGUF |
| VAE | ltxv-13b-0.9.7-vae | ComfyUI/models/vae | Safetensors |

Example workflow (based on the official example workflow)
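After downloading, a quick way to confirm that everything landed in the right folders is a small check script. This is only an illustrative sketch: the folder names follow the table above, but the exact text-encoder and VAE filenames are assumptions, so substitute whatever files you actually downloaded.

```python
from pathlib import Path

# Expected layout per the table above. The quant and the text-encoder/VAE
# filenames are examples/assumptions -- edit them to match your downloads.
EXPECTED = {
    "models/diffusion_models": "ltxv-13b-0.9.7-distilled-Q4_K_M.gguf",
    "models/text_encoders": "t5-v1_1-xxl-encoder-Q8_0.gguf",      # assumed name
    "models/vae": "ltxv-13b-0.9.7-vae.safetensors",               # assumed name
}

def check_layout(comfy_root: str) -> dict[str, bool]:
    """Return {relative_path: file_exists} for each expected model file."""
    root = Path(comfy_root)
    return {
        f"{folder}/{name}": (root / folder / name).is_file()
        for folder, name in EXPECTED.items()
    }

if __name__ == "__main__":
    for path, ok in check_layout("ComfyUI").items():
        print(("OK  " if ok else "MISS"), path)
```

Run it from the directory that contains your `ComfyUI` folder; any `MISS` line points at a file that is absent or misplaced.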
## Notes

*As this is a quantized model, not a finetune, all of the original model's restrictions and license terms still apply.*

*ComfyUI now supports these GGUFs natively, so you only need to update ComfyUI to the latest version; if issues persist, also update all the nodes in the workflow.*

*Other T5 text encoders will probably work as well; use whichever you prefer, available as Safetensors or GGUF. The best one I tried was T5 v1.1 XXL.*

*LoRAs do work, but you need to follow the steps in the example workflow, and do not use torch.compile together with LoRAs.*

*TeaCache works with LTX, but not well at the moment: in my testing, rel_l1_thresh only works at 0.01, and even that causes noticeable quality drops, so it is best left disabled.*
## Files & Weights

| Filename | Size |
| --- | --- |
| ltxv-13b-0.9.7-distilled-Q3_K_M.gguf | 6.06 GB |
| ltxv-13b-0.9.7-distilled-Q3_K_S.gguf | 5.46 GB |
| ltxv-13b-0.9.7-distilled-Q4_0.gguf | 7.24 GB |
| ltxv-13b-0.9.7-distilled-Q4_1.gguf | 7.80 GB |
| ltxv-13b-0.9.7-distilled-Q4_K_M.gguf | 8.21 GB |
| ltxv-13b-0.9.7-distilled-Q4_K_S.gguf | 7.44 GB |
| ltxv-13b-0.9.7-distilled-Q5_0.gguf | 8.74 GB |
| ltxv-13b-0.9.7-distilled-Q5_1.gguf | 9.30 GB |
| ltxv-13b-0.9.7-distilled-Q5_K_M.gguf | 9.15 GB |
| ltxv-13b-0.9.7-distilled-Q5_K_S.gguf | 8.55 GB |
| ltxv-13b-0.9.7-distilled-Q6_K.gguf | 10.15 GB |
| ltxv-13b-0.9.7-distilled-Q8_0.gguf | 13.05 GB |
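One way to fetch a single quant directly into the right folder is the `huggingface-cli` from the `huggingface_hub` package. A sketch, not the only method; Q4_K_M is just an example, pick whichever quant fits your VRAM:

```shell
# Downloads one GGUF from this repo into the ComfyUI diffusion_models folder.
huggingface-cli download wsbagnsv1/ltxv-13b-0.9.7-distilled-GGUF \
  ltxv-13b-0.9.7-distilled-Q4_K_M.gguf \
  --local-dir ComfyUI/models/diffusion_models
```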