Why Lightricks Released Its New Video Generation Model as Open Source
Lightricks, the visual AI innovators responsible for Facetune, LTX Studio, and Popular Pays, attracted increased attention recently when they released a new video generation model, LTX Video (LTXV 0.9). The arrival of a new video model is always an exciting event, but the choice to release it under an open source license has attracted even more interest.
LTXV was built using PyTorch XLA, runs seamlessly on both GPU and TPU systems, and is available on both GitHub and Hugging Face. Following community review, it will be released under the OpenRAIL license to keep derivatives open for academic and commercial use.
Of course, it’s not the only open source video gen model by a long shot. But it is one of a small minority, joining models like Stable Video Diffusion, AnimateDiff, Genmo, CogVideo, and Open Sora. And given that LTXV is the first model able to render video faster than real time, along with its other performance advantages, it’s easy to make the case that with this release, Lightricks has pushed video AI forward, open source or not.
The company is clear about its commitment to open source collaboration, having released Long AnimateDiff, an open source animation framework, earlier this year. LTXV was designed for extensive customization, and the Lightricks team is eager to see how the global community will use it to experiment and advance GenAI video.
Making LTXV open source was a deliberate choice in the name of advancing the world’s AI video generation capabilities. “We’re not going to make any money out of this model at the moment,” emphasized Zeev Farbman, Lightricks CEO and co-founder, in a recent interview with VentureBeat. A number of important factors led to this decision.
Open Source Models Cultivate Collabetition
Most video generation models in use today are available only through for-pay APIs. Others are developed by Big Tech companies for their own proprietary use, or are never released at all, serving mainly as showcases for these firms to boast about the capabilities of their tech.
This includes Sora from OpenAI, Imagen from Google, and Adobe’s Firefly. Because they’re all locked behind a paywall, startups and innovators can’t access them easily, raising entry barriers for smaller players in the ecosystem who want to create new AI video solutions. And because they’re “black box” models, it’s impossible for others to build on their code bases.
“If startups want to have a serious chance to compete, the technology needs to be open,” said Farbman. “You want to make sure that people in the top universities across the world have access to your model and add capabilities on top of it.”
Given that the company was founded 11 years ago by five PhD students, the Lightricks leadership has a lot of sympathy for other bootstrappers who invest time and effort into AI innovation. The decision to release LTXV with an open source license was clearly made with startups in mind. Farbman and his colleagues want to ensure that the AI video market stays open for competition.
Open Source Models Feed Innovation
There’s also a parallel concern that the AI video market could stagnate without a steady flow of new “foundational” open source models. The AI enthusiasts and experimenters who sharpen the bleeding edge of AI video capabilities can’t always access the best models, which are tightly guarded by their tech giant owners.
If the market is dominated by for-pay models owned by a handful of tech giants, today’s vibrant ecosystem could turn into a goldfish bowl devoid of creative life.
“Today, the best models on the market are closed,” observed Farbman. “This creates problems beyond cost. Gaming companies, for example, want to produce simple graphics and then use these models to experiment with visual styles, but closed models don’t allow for that. It also stifles academic research and gives an advantage to large companies.”
Without open source models, researchers and innovators aren’t able to experiment with different engines and compare results to see what works best for their ideas. There’s a risk that the most-trodden path becomes the only highway, not because it’s the best but because alternatives are hard to access.
“Part of what’s going on at the moment is that diffusion models are becoming an alternative paradigm to classical ways of doing things in computer graphics,” Farbman explained. “But if you actually want to build alternatives, APIs are definitely not enough. You need to give people – academia, industry, enthusiasts – models to tinker with and create amazing new ideas.”
Committing to Open Source Serves as a Differentiator
Releasing LTXV as open source helps lower the barriers around AI video, giving end users the greatest range of options, which is something Farbman feels passionately about.
He’s excited by the impact of his company's LTX Studio in empowering filmmakers and creators to actualize their visions, and talks with passion about opening up possibilities for more industries to get creative about video content.
All this is true, but the decision to go open source wasn’t entirely altruistic. Lightricks still needs to pay its bills, after all. Choosing to release LTXV as an open source model helps to differentiate Lightricks within a crowded market.
It positions the company as an underdog disruptor taking on the tech giants, a narrative that everybody loves (except, perhaps, the tech giants). This draws more publicity to the company and its for-pay apps. In the long run, Farbman expects to see benefits from running LTXV as open source.
Open Source Can Propel AI Video Forward
The decision to release LTXV as an open source model was taken after careful consideration of all the pros and cons. While Lightricks expects to gain concrete advantages in the long run, the choice to go open source stems from a fundamental belief in the power of collaboration and the need to keep innovation flowing.