AI Video Model

Fraction of the Cost.

Studio-Grade Quality.

The Fastest in the World.

Ranked in the Top 3 AI Models Globally*

LTX-2 is the complete AI engine for creative workflows. It unites video and audio generation with native 4K fidelity and radical efficiency. Open-source, production-ready, and built for studios, developers, and enterprises.

* Independent benchmarking by Artificial Analysis

Native 4K 50 FPS

Generate cinematic-grade video with synchronized audio at true 4K / 50 fps. Built for professional workflows, ready for studio, developer, or enterprise production.

Try LTX-2 Now

Audio & Video

Create synchronized visuals and sound in one coherent process: motion, dialogue, ambience, and music, generated together with natural timing.

Try LTX-2 Now

20-Second Clips

Extend creative range with long-form generation. Produce up to 20 seconds of high-fidelity video with complete control and consistent style.

Try LTX-2 Now

Fastest in the World

Optimized for speed without sacrificing quality. Generate synchronized 4K video and audio in seconds with the fastest production-grade AI model available today.

Try LTX-2 Now

Open Source

Access model weights, datasets, and tooling through open release. Customize, fine-tune, and innovate freely across production and research environments.

Try LTX-2 Now
“It’s amazing how fast it can generate ideas”
I’ve spent half my time trying to explain what’s in my head, but with LTX (formerly LTX Studio), I can just generate an image and say, ‘that’s what I mean.’ You still have to guide it, just like any good performance, but you’re in control. It’s amazing how fast it can generate ideas — it can accelerate the whole creative process.
Taika Waititi, Academy Award-winning Filmmaker & Director
"AI that's built for image creation and animation"
I was drawn to LTX (formerly LTX Studio) because of its use of 'consistent characters,' but what I found really valuable from LTX was the ability to quickly storyboard my videos. It lets me quickly storyboard, visualize, and edit a sequence together as a template for my main project.
Lane L., Director
"Great tool"
I was drawn to LTX (formerly LTX Studio) because of its use of 'consistent characters'. While their consistent characters are okay ... what I found myself really valuing from LTX was the ability to quickly storyboard my videos. It lets me quickly storyboard, visualize, and edit a sequence together as a template for my main project.
TC R., Video Producer
"AH-mazing"
I’m a musician and I don’t do live videos... this allowed me to present my art in such an amazing way. I love everything about it. It’s a blessing and you guys did an awesome job... I truly don’t know how to thank you for creating this platform. It’s life and career changing.
Edgar F., Music Producer
"LTX is the Future of Filmmaking"
The best thing about LTX (formerly LTX Studio) is having all the tools to make a film under one framework. I've been using LTX since its inception, and all the tools that have been added make your projects better.
Mark A., Content Creator

Use Cases

Post Without the Production.

LTX-2 automates motion tracking, rotoscoping, and plate replacement with high fidelity, reducing post-production time and cost while maintaining cinematic quality.

  • Deliver broadcast-ready composites faster than real time.
  • Preserve detail and consistency across complex shots.
  • Integrate seamlessly into pipelines with API access.

Use Cases

From Concept to Cutscene.

Transform static concept art or character poses into dynamic, story-driven motion. No full 3D pipelines required.


  • Generate in-game loops, trailers, or cinematic sequences directly from sketches or keyframes.
  • Iterate quickly on pacing, framing, and visual tone.
  • Maintain fidelity and stylistic consistency using LoRA fine-tuning.

Use Cases

Faster, Smarter Pre-Production.

Use LTX-2 to simulate camera logic, lighting, and pacing before stepping on set, saving time and cost across the creative cycle.

  • Visualize storyboards and camera moves instantly.
  • Collaborate with directors and clients using realistic motion previews.
  • Refine compositions before production begins.

Use Cases

Direct Every Element.

Precisely guide movement, pacing, and style with multi-keyframe conditioning and contextual control.

  • Create frame-coherent animation and choreography.
  • Experiment with timing, rhythm, and cinematic flow.
  • Use LoRA and fine-tuning for creative consistency across scenes.

Use Cases

Old Frames, New Fidelity.

Upscale, interpolate, and restore archival footage or rough renders with style-preserving precision.


  • Enhance clarity while protecting the original creative intent.
  • Extend resolution up to native 4K with fluid motion.
  • Ideal for film remastering, restoration, and animation cleanup.

Development Tools

LTX-2 LoRA Trainer

ControlNet Integration Guide

Local Preview and Cloud Upscale Workflows

LTX-2 News

Lightricks Releases LTX-2

LTX-2 brings every core capability of modern video generation into one model
Read More

FAQ

What is LTX-2?

LTX-2 is an open-source AI video generation model built on diffusion techniques. It transforms still images or text prompts into controllable, high-fidelity video sequences, and also offers synchronized audio and video generation. It is optimized for customization, speed, and creative flexibility, and designed for use across studios, research teams, and solo developers.

What can I use it for?

Video generation from prompts or images, animated cutscenes, motion design, product visualizations, VFX shots, archival restoration, and more. LTX-2 is ideal for any workflow that requires cost-effective, high-resolution, stylized video content.

Is it open-source?

Yes. LTX-2 will be released later this fall under an open license, with full access to model weights, training code, and example pipelines via our GitHub repository.

What kinds of models does LTX-2 provide?

LTX-2 provides both text-to-video and image-to-video models, offering flexibility in how you initiate generation. You can create short-form or long-form video clips by uploading a single image or by describing the desired motion, camera behavior, and scene with a natural-language prompt. These AI video generation models enable precise control over motion, visual style, depth, and structure retention, making them ideal for everything from cinematic storytelling and product content to stylized animation and research workflows.
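
For developers, a minimal sketch of how the two entry points could be scripted is shown below, assuming a diffusers-style Python interface; the pipeline class, checkpoint ID, argument names, and frame counts are illustrative assumptions, not the published LTX-2 API:

```python
# Illustrative sketch only: the checkpoint ID, argument names, and the assumption
# that one pipeline handles both text-to-video and image-to-video are placeholders,
# not the official LTX-2 API.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-2",                # hypothetical checkpoint ID
    torch_dtype=torch.bfloat16,
).to("cuda")

# Text-to-video: describe motion, camera behavior, and scene in natural language.
frames = pipe(
    prompt="Slow dolly-in on a rain-soaked neon street at night, shallow depth of field",
    num_frames=121,                    # clip length in frames
    height=2160, width=3840,           # native 4K output
).frames[0]
export_to_video(frames, "t2v_clip.mp4", fps=50)

# Image-to-video: animate a single still instead of starting from text alone.
# In practice this may be a separate image-to-video pipeline; `image=` is assumed here.
still = load_image("concept_art.png")
frames = pipe(
    prompt="The character turns toward camera as wind moves through the trees",
    image=still,
    num_frames=121,
).frames[0]
export_to_video(frames, "i2v_clip.mp4", fps=50)
```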

Can LTX-2 generate longer videos?

LTX-2 natively supports video extension and keyframe-based generation, allowing you to create longer, more coherent scenes by extending videos forward or backward.
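
As a rough illustration of keyframe-based extension, the helper below chains generation by conditioning on the first or last frame of an existing clip; the function and its arguments are hypothetical placeholders built on the assumed pipeline from the earlier sketch, not a documented LTX-2 interface:

```python
# Hypothetical sketch: extend_video and its arguments are placeholders, not a
# documented LTX-2 interface. It reuses the assumed diffusers-style pipeline above.
from typing import List

def extend_video(pipe, frames: List, prompt: str,
                 direction: str = "forward", num_new_frames: int = 121) -> List:
    """Condition on the last (or first) frame of an existing clip and generate
    additional frames, keeping the extended scene coherent with what came before."""
    anchor = frames[-1] if direction == "forward" else frames[0]
    new_frames = pipe(prompt=prompt, image=anchor, num_frames=num_new_frames).frames[0]
    return frames + new_frames if direction == "forward" else new_frames + frames

# Example: grow an existing clip into a longer scene by extending it forward.
# longer = extend_video(pipe, frames, "The camera keeps pushing in as the rain eases")
```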

What are the different AI filmmaking tools provided by LTX-2?

LTX-2 is more than just a video diffusion model; it’s a comprehensive suite of AI filmmaking tools designed for creators, studios, and developers. Key capabilities include:

  • LoRA customization for precise stylization and brand-specific visual identity (see the sketch after this list)
  • Temporal outpainting to generate long-form, coherent narratives from limited inputs
  • Multiscale rendering for high-resolution, cinematic detail across frames
  • Guided motion control to animate with precision, ideal for storytelling and VFX
  • Fast inference optimized for consumer-grade GPUs and rapid iteration
  • Synchronized audio and video generation for coherent sound, music, and dialogue within scenes
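
For the LoRA customization listed above, applying a trained style adapter at inference time might look roughly like this, reusing the assumed diffusers-style pipeline from the earlier sketches; the adapter file name, loader call, and scale value are illustrative assumptions rather than a documented workflow:

```python
# Illustrative only: the adapter path, load_lora_weights call, and scale value are
# assumptions layered on the hypothetical pipeline from the earlier sketches.
pipe.load_lora_weights("brand_style_lora.safetensors")   # adapter from the LoRA trainer

frames = pipe(
    prompt="Product hero shot rotating slowly on a seamless studio background",
    num_frames=121,
    cross_attention_kwargs={"scale": 0.8},   # how strongly the adapter steers the style
).frames[0]
```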

Where do I start?

Visit the GitHub repo, launch our hosted playground, or explore LTX for a visual interface. Full documentation, training scripts, and community support are included.