
Lightricks open-sources its AI video model ‘LTX-2’


In a bold challenge to “black box” AI giants like OpenAI and Google, Lightricks has officially open-sourced LTX-2, a state-of-the-art audio-video foundation model. Released on January 6, 2026, LTX-2 is the first production-ready model to offer synchronized audio and video generation with fully open weights.

By releasing the model’s weights, training code, and inference pipelines, Lightricks is positioning LTX-2 as the “Linux of AI Video,” allowing creators to run high-end cinematic generations on local hardware rather than relying on expensive cloud subscriptions.

A Technical Powerhouse: 4K, 50 FPS, and Native Audio

Unlike previous models that generated silent video requiring a second pass for sound, LTX-2 uses an asymmetric dual-stream transformer architecture. This allows the model to generate visuals and audio simultaneously in a single “unified” pass.

Key Specifications of LTX-2

  • Resolution: Native 4K (3840×2160)
  • Frame Rate: Up to 50 FPS (cinematic smoothness)
  • Max Duration: 20 seconds (extendable via multi-keyframing)
  • Audio: Synchronized speech, foley, and environmental ambience
  • Architecture: 19B parameters (14B video / 5B audio)
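To put those numbers in perspective, a quick back-of-the-envelope calculation derived purely from the published specs (the raw-frame figure is a theoretical uncompressed estimate, not a VRAM requirement):

```python
# Figures implied by the spec list above: a maximum-length clip at
# native 4K and 50 FPS, with raw size estimated as uncompressed 8-bit RGB.

WIDTH, HEIGHT = 3840, 2160   # native 4K resolution
FPS = 50                     # maximum frame rate
MAX_SECONDS = 20             # maximum clip duration

frames = FPS * MAX_SECONDS                 # total frames in a max-length clip
raw_bytes = WIDTH * HEIGHT * 3 * frames    # 3 bytes per pixel (8-bit RGB)
raw_gib = raw_bytes / 2**30

print(frames)              # 1000
print(round(raw_gib, 1))   # 23.2  (GiB of raw RGB pixels)
```

In other words, a full 20-second generation spans 1,000 frames of 4K imagery, which is why the model works in a compressed latent space rather than on raw pixels.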

Why Open-Source Matters for Creators

The release of LTX-2 shifts the power dynamic from corporate APIs back to individual developers and studios. Lightricks CEO Zeev Farbman noted that for AI to become a true “rendering engine” for the film industry, it must be customizable and run locally.

1. Zero Cloud Dependency

Because LTX-2 is optimized for the NVIDIA RTX ecosystem, creators can generate 4K content on consumer-grade GPUs (like the RTX 4090 or 5090). This ensures total data privacy for sensitive intellectual property and eliminates “per-second” billing.

2. Full Creative Control (LoRAs)

Lightricks shipped a suite of IC-LoRAs (Instant Conditioning adapters) at launch, letting users control specific elements such as:

  • Camera Movement: Precise dolly, pan, and crane shots.
  • Structural Guidance: Using depth maps and “Canny” edges to maintain character consistency.
  • Motion Control: Directing character poses via OpenPose integration.
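As an illustration of how those three control types might be combined in a single request, here is a hypothetical sketch. None of the field names below come from the actual LTX-2 API; they are placeholders showing the shape of a multi-conditioned generation:

```python
# Hypothetical sketch only: combining the three IC-LoRA control types
# listed above into one generation request. Every key name here is an
# illustrative placeholder, not part of the real LTX-2 interface.

def build_generation_request(prompt: str) -> dict:
    """Assemble a conditioning payload for one clip (hypothetical schema)."""
    return {
        "prompt": prompt,
        "conditioning": {
            # Camera movement: a scripted dolly-in across the clip
            "camera": {"type": "dolly", "start": 0.0, "end": 1.0},
            # Structural guidance: depth map + Canny edges for consistency
            "structure": {
                "depth_map": "scene_depth.png",
                "canny_edges": "scene_edges.png",
            },
            # Motion control: OpenPose keypoints directing the character
            "motion": {"openpose_sequence": "actor_pose.json"},
        },
    }

request = build_generation_request("A chef plating a dish, slow dolly-in")
print(sorted(request["conditioning"]))  # ['camera', 'motion', 'structure']
```

The point of the sketch is that the controls are composable: camera, structure, and motion conditioning can all shape the same clip simultaneously rather than requiring separate passes.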

3. Rapid Iteration

Benchmarks show that LTX-2 is up to 18 times faster than other open models like Alibaba’s Wan2.2. A distilled “8-step” version of the model allows for “Brainstorm Mode,” where 10-second clips can be generated in near real-time for rapid storyboarding.


Challenging Sora and Veo

While OpenAI’s Sora and Google’s Veo have dominated headlines with photorealistic demos, they remain largely inaccessible to the general public. Lightricks is betting that the community’s ability to “tinker” with LTX-2 will lead to faster innovation.

By integrating LTX-2 directly into ComfyUI, the industry-standard node-based interface for AI artists, Lightricks has ensured that the model is immediately usable by tens of thousands of professional creators.

Availability and Licensing

  • Open Weights: Available for download on Hugging Face and GitHub.
  • Pricing: Free for academic research and small businesses (under $10M ARR). Large enterprises require a commercial license for high-volume use.
