LTX-2 is our latest step: a next-generation open-source AI model that combines synchronized audio and video generation, 4K fidelity, and real-time performance.
Most importantly, it’s open source, so you can explore the architecture, fine-tune it for your own workflows, and help push creative AI forward.
What’s New in LTX-2
LTX-2 represents a major leap forward from our previous model, LTXV 0.9.8. Here’s what’s new:
- Audio + Video, Together: Visuals and sound are generated in one coherent process, with motion, dialogue, ambience, and music flowing simultaneously.
- 4K Fidelity: The Ultra mode delivers native 4K resolution at 50 fps with synchronized audio.
- Longer Generations: LTX-2 supports longer continuous clips, up to 10 seconds, with synchronized audio throughout.
- Low Cost & Efficiency: Up to 50% lower compute cost than competing models, powered by a multi-GPU inference stack.
- Consumer Hardware, Professional Output: Runs efficiently on high-end consumer-grade GPUs, democratizing high-quality video generation.
- Creative Control: Multi-keyframe conditioning, 3D camera logic, and LoRA fine-tuning deliver frame-level precision and style consistency.
LTX-2 combines every core capability of modern video generation into one model: synchronized audio and video, 4K fidelity, multiple performance modes, production-ready outputs, and open access. For developers, this means faster iteration, greater flexibility, and lower barriers to entry.
More Choices for Developers
The LTX-2 API offers a choice of modes, giving developers the flexibility to balance speed and fidelity as their needs demand:
- Fast. Extreme speed for live previews, mobile workflows, and high-throughput ideation.
- Pro. Balanced performance with strong fidelity and fast turnaround. Ideal for creators, marketing teams, and daily production work.
- Ultra (Coming soon). Maximum fidelity for cinematic use cases, delivering up to 4K at 50 fps with synchronized audio for professional production and VFX.
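To make the trade-off concrete, here is a minimal sketch of what selecting a mode might look like in client code. The endpoint URL, field names, and `mode` parameter are all hypothetical placeholders for illustration, not the actual LTX-2 API:

```python
import json
import urllib.request

def generate_clip(prompt: str, mode: str = "pro", api_key: str = "YOUR_KEY"):
    """Build a generation request. Endpoint and payload shape are
    illustrative only; consult the real API docs for actual fields."""
    assert mode in {"fast", "pro", "ultra"}  # the three modes described above
    payload = {
        "prompt": prompt,
        "mode": mode,               # hypothetical parameter name
        "resolution": "1920x1080",
        "duration_seconds": 6,
        "audio": True,              # request synchronized audio
    }
    return urllib.request.Request(
        "https://api.example.com/v1/generate",  # placeholder URL
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# In practice you would send it: urllib.request.urlopen(generate_clip(...))
```

The point of the sketch is simply that switching between Fast, Pro, and (eventually) Ultra should be a one-parameter change, so the same pipeline can serve both rapid ideation and final renders.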
Key Technical Capabilities
Beyond these features, LTX-2 introduces a new technical foundation for generative AI. Here’s how it achieves production-grade performance:
Architecture & Inference
- Built on a hybrid diffusion–transformer architecture optimized for speed, control, and efficiency.
- Uses a multi-GPU inference stack to deliver generation faster than playback while maintaining fidelity and cost-effectiveness.
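The diffusion side of that hybrid can be illustrated with a toy iterative-refinement loop: start from pure noise and repeatedly subtract a predicted noise component. This is a generic DDPM-style sketch with a stand-in "denoiser", not LTX-2's actual architecture or scheduler:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(x, t, steps):
    """Stand-in for the transformer noise predictor: here it simply
    'predicts' a timestep-dependent fraction of the current sample."""
    return x * (t / (2 * steps))

def sample(steps=10, shape=(4, 4)):
    """Generic diffusion sampling loop: begin with Gaussian noise and
    progressively remove the predicted noise at each timestep."""
    x = rng.standard_normal(shape)      # start from pure noise
    for t in range(steps, 0, -1):       # t = steps, ..., 1
        x = x - toy_denoiser(x, t, steps)
    return x

out = sample()
```

A real video model replaces `toy_denoiser` with a large learned network operating on spatiotemporal latents, but the outer loop, and why fewer or cheaper steps translate directly into faster generation, has this same shape.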
Resolution & Rendering
- Supports a 16:9 aspect ratio with native QHD and 4K rendering, sharp textures, and smooth motion.
- Multi-scale rendering enables fast low-res previews that scale seamlessly to full-quality cinematic output.
Control & Precision
- Multi-keyframe conditioning and 3D camera logic for scene-level control.
- Frame-level precision ensures coherence across long sequences.
- LoRA adapters allow fine-tuning for brand style or IP consistency.
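LoRA fine-tuning, mentioned above, works by adding a trainable low-rank update to frozen weight matrices rather than retraining them. A minimal numpy sketch of the underlying math (generic LoRA, not LTX-2's adapter code):

```python
import numpy as np

rng = np.random.default_rng(42)

d, r = 512, 8                        # full dimension vs. low rank (r << d)
W = rng.standard_normal((d, d))      # frozen base weight (e.g., a projection)

# LoRA trains only A and B; the effective weight is W + scale * (B @ A)
A = rng.standard_normal((r, d)) * 0.01  # (r, d) down-projection
B = np.zeros((d, r))                    # (d, r) up-projection, zero-initialized
                                        # so the adapter starts as a no-op
scale = 1.0

def forward(x):
    """Adapted forward pass: frozen base path plus low-rank correction."""
    return x @ W.T + scale * (x @ A.T @ B.T)

x = rng.standard_normal((1, d))
y = forward(x)

# Trainable parameters drop from d*d to 2*d*r
full_params, lora_params = d * d, 2 * d * r
```

Because only `A` and `B` are trained (8,192 parameters here versus 262,144 for the full matrix), adapters for a brand style or IP can be trained cheaply and swapped in and out without touching the base model.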
Multimodality & Sync
- Accepts text, image, video, and audio inputs, plus depth maps and reference footage for guided conditioning.
- Generates audio and video together in a single pass, aligning motion, dialogue, and music for cohesive storytelling.
Pipeline Integration
- Integrates directly with editing suites, VFX stacks, game engines, and leading AI platforms such as Fal, Replicate, RunDiffusion, and ComfyUI.
- A new API Playground lets teams and partners test native 4K generation with synchronized audio before full API integration.
LTX-2 as a Platform
What sets LTX-2 apart isn’t only what it can do today, but how it’s built for tomorrow.
- Open Source: Model weights, code, and benchmarks will be released to the open community in late November 2025, enabling research, customization, and innovation.
- Ecosystem-Ready: APIs, SDKs, and integrations designed for seamless creative workflows.
- Community-First: Built for experimentation, extension, and collaboration.
As with our previous models, LTX-2’s open release ensures it is not just another tool, but a foundation for a full creative AI ecosystem.
Availability
API access can be requested through the LTX-2 website and is being rolled out gradually to early partners and teams, with integrations available through Fal, Replicate, ComfyUI, and more. Full model weights and tooling will be released to the open-source community on GitHub in January 2026, enabling developers, researchers, and studios to experiment, fine-tune, and build freely.
Getting Involved
We’re just getting started, and we want you to be part of the journey. Join the conversation on our Discord to connect with other developers, share feedback, and collaborate on projects.
Be part of the community shaping the next chapter of creative AI. LTX-2 is the production-ready AI engine that finally keeps up with your imagination, and it’s open for everyone to build on. We can’t wait to see what you’ll create with it.

