Key Takeaways
- Official Sources: The only safe places to get LTX-2 are the official Lightricks HuggingFace repo and GitHub.
- Version Choice: Use LTX-2-19B-Distilled for most local setups (consumer GPUs); use 19B-Dev only if you need full uncompressed weights for research.
- Safety First: Always verify the `.safetensors` file hash after downloading to prevent corrupted files or malicious tampering.
What Are You Looking For? (30-Second Guide)
Not sure which file to grab? Start here to save time.
- "I just want to try it out without installing anything."
  👉 Try LTX-2 Online (No GPU required, instant results).
- "I want to run it on my PC (ComfyUI / Forge)."
  Continue reading below for the Weights & Download Guide.
- "I'm a developer building an app."
  👉 Check out our LTX-2 API Guide for REST API docs.
LTX-2 Official "Safe Sources"
There are dozens of re-uploads out there, but to avoid malware or outdated versions, stick to these three official channels.
1. HuggingFace (Model Weights)
This is where the actual .safetensors files live.
- Repository: `Lightricks/LTX-2`
- Best for: Downloading models for ComfyUI, Forge, or local inference (see the scripted download sketch below).
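If you'd rather script the download, HuggingFace's `huggingface_hub` library can pull a single file into a local folder. A minimal sketch, assuming the distilled checkpoint filename from the variant table later in this guide (confirm the exact name on the repo's Files tab):

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# The filename is an assumption taken from the variant table in this guide;
# verify it against the "Files" tab of Lightricks/LTX-2 before downloading.
local_path = hf_hub_download(
    repo_id="Lightricks/LTX-2",
    filename="ltx-2-19b-distilled.safetensors",
    local_dir="models/checkpoints",
)
print("Saved to:", local_path)
```

Downloads go through HuggingFace's local cache, so re-running the script won't re-fetch a file you already have.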
2. GitHub (Code & Tools)
Use this for the official inference code, diffusers integration, and troubleshooting scripts.
- Repository: `Lightricks/LTX-Video`
- Best for: Developers, installing inference pipelines, and bug reporting.
- Note: Don't confuse `LTX-Video` (the code repo) with `LTX-2` (the model family name). They are part of the same project.
3. LTX Studio (Official App)
If you prefer a ready-made GUI experience provided directly by Lightricks, check their official product page.
How to Choose the Right Version on HuggingFace
When you open the HuggingFace "Files" tab, you will see multiple files. Here is exactly what you need.
Common Variations
| File Name | Description | Recommended For |
|---|---|---|
| `ltx-2-19b-distilled.safetensors` | 🔥 Recommended. Distilled for faster, lighter inference with minimal quality loss. | Most Users (ComfyUI, Local PC, Consumer GPUs). |
| `ltx-2-19b-dev.safetensors` | Full, uncompressed weights. Requires massive VRAM. | Researchers & Enterprise GPUs (H100/A100). |
| `ltx-2-lora-camera-ctrl.safetensors` | An add-on LoRA adapter for camera movement control. | Advanced users who want zoom/pan control. |
💡 Pro Tip: If you see files ending in .ckpt, ignore them. .safetensors is the modern standard: unlike pickle-based .ckpt files it cannot execute arbitrary code on load, and it loads faster.
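A practical bonus of the format: you can inspect a `.safetensors` file's header without loading any weights. A minimal sketch using the `safetensors` library (the filename is just the example from the table above):

```python
# pip install safetensors torch
from safetensors import safe_open

# Reads only the JSON header, so this is fast even for a ~19GB checkpoint.
with safe_open("ltx-2-19b-distilled.safetensors", framework="pt", device="cpu") as f:
    print("metadata:", f.metadata())             # publisher metadata, if any
    print("first tensors:", list(f.keys())[:5])  # tensor names stored in the file
```

If this raises a header deserialization error, the file is likely incomplete; see Troubleshooting below.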
Safe Setup: How to Verify Your Download
A corrupted 20GB download can cause vague errors like "NaN values" or "Load failed" that take hours to debug. Check your file integrity before loading it.
1. Why Hash Verification Matters
Files can get corrupted during download, especially large ones (>10GB). A simple SHA-256 check confirms you have the exact byte-for-byte file released by Lightricks.
2. Quick Check Checklist
- [ ] File extension is `.safetensors` (not `.exe` or `.zip`).
- [ ] File size matches HuggingFace exactly (e.g., ~19GB for the base model).
- [ ] Hash Check:
  - Mac/Linux Terminal: `shasum -a 256 filename.safetensors`
  - Windows PowerShell: `Get-FileHash filename.safetensors -Algorithm SHA256`

Compare the output string with the "SHA256" displayed next to the file on HuggingFace. If you'd rather script the check, see the Python sketch below.
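Here is a minimal cross-platform sketch of that check in Python; the expected value is a placeholder you copy from the file's page on HuggingFace:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 8 * 1024 * 1024) -> str:
    """Hash the file in chunks so a ~19GB checkpoint never has to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder: paste the SHA256 shown next to the file on HuggingFace.
expected = "<sha256-from-huggingface>"
actual = sha256_of("ltx-2-19b-distilled.safetensors")
print("OK" if actual == expected.lower() else f"MISMATCH: {actual}")
```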
Troubleshooting: Common Loading Errors
Stuck? Here are the most common reasons LTX-2 fails to load.
1. "safetensors_rust.SafeTensorsError: Error while deserializing header"
- Cause: The download was interrupted or incomplete.
- Fix: Delete the file and re-download from scratch; don't try to "resume" a failed download.
2. "CUDA out of memory"
- Cause: You are trying to load the `19b-dev` version on a card with less than 24GB VRAM, or you haven't enabled CPU offloading.
- Fix: Switch to the Distilled version, or enable a low-VRAM mode (e.g., the `--lowvram` flag in ComfyUI) or CPU offloading in your runner; see the sketch below.
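If you load the model through the diffusers library instead of ComfyUI, CPU offloading is usually a single call. A minimal sketch, assuming the HuggingFace repo ships a diffusers-compatible pipeline (check the model card for the exact class and loading snippet):

```python
# pip install diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# Assumption: Lightricks/LTX-2 provides a diffusers pipeline config; if not,
# follow the loading instructions in the Lightricks/LTX-Video GitHub repo.
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2",
    torch_dtype=torch.bfloat16,
)

# Keeps submodules on the CPU and moves each one to the GPU only while it runs,
# trading some speed for a much smaller VRAM footprint.
pipe.enable_model_cpu_offload()
```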
3. "Module not found: ltx_video"
- Cause: You cloned the repo but didn't install the Python package.
- Fix: Run `pip install -e .` inside the `LTX-Video` folder, then confirm the install with the quick check below.
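A quick way to confirm the editable install worked, runnable from any directory (the module name comes from the error message above):

```python
import importlib.util

# Returns a module spec if the package is importable, None otherwise.
spec = importlib.util.find_spec("ltx_video")
print("ltx_video installed:", spec is not None)
```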
4. Wrong Folder Path (ComfyUI)
- Issue: Placing the model directly in `ComfyUI/models/checkpoints/`.
- Fix: LTX-2 is a video model. Some workflows require it in `ComfyUI/models/checkpoints/LTX-Video` (check your custom node instructions); see the sketch below if you want to script the move.
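If you'd rather script the move than drag files around, here is a minimal sketch; both paths are assumptions, so match them to your actual ComfyUI install and custom node instructions:

```python
from pathlib import Path
import shutil

# Assumptions: ComfyUI lives at ./ComfyUI and your custom node expects an
# "LTX-Video" subfolder; adjust both paths to your setup.
src = Path("ltx-2-19b-distilled.safetensors")
dst_dir = Path("ComfyUI/models/checkpoints/LTX-Video")
dst_dir.mkdir(parents=True, exist_ok=True)
shutil.move(str(src), str(dst_dir / src.name))
print("Moved to:", dst_dir / src.name)
```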
Which Method is Right for You?
| Feature | Local Run (ComfyUI) | Online Generator (Our App) | API Integration |
|---|---|---|---|
| Cost | Free (requires Hardware) | Pay-per-gen (Credits) | Pay-per-call |
| Hardware | High-end GPU (16GB+ VRAM) | Any Device (Mobile/Laptop) | Server-side |
| Setup Time | 30-60 Mins | Instant | 1-2 Days (Dev) |
| Privacy | 100% Local | Secure Cloud | Secure Cloud |
| Best For | Power Users, Free Experimentation | Creators, Quick Projects | Developers, SaaS |
Next Steps
Ready to dive deeper? Check out our guides:
FAQ
1. Are LTX-2 and LTX-2.0 the same thing?
Yes. The community uses the names interchangeably; both refer to the LTX-Video (LTX-2) architecture officially released in January 2026.
2. I see "LTXV"... is that this?
Yes, "LTXV" is a common shorthand for "LTX-Video". Ensure you aren't downloading the older "LTX-1" if you want the latest quality.
3. What is the minimum VRAM for the 19B model?
For the Distilled version with 4-bit or 8-bit quantization (common in ComfyUI wrappers), you can squeeze it into 12GB-16GB VRAM. For the full 16-bit float version, you typically need 24GB+ (RTX 3090/4090).
Still stuck?
Don't waste hours debugging locally.
👉 Generate LTX-2 Video Online Here and get your video in 60 seconds.
