Meta plans to open-source versions of its next AI models

Meta is pivoting to a “hybrid” AI strategy. After months of internal debate and reports that the company might shift to a strictly closed-source model to compete with OpenAI, Meta has confirmed it will eventually release open-source versions of its next-generation frontier models.

These new models are the first to emerge from Meta’s “Superintelligence” unit, led by Scale AI founder Alexandr Wang, who joined the company following Meta’s multibillion-dollar investment in Scale AI in 2025.


1. The “Avocado” and “Mango” Models

Meta is currently developing two flagship proprietary models that will serve as the “gold standard” for its internal products (Facebook, Instagram, WhatsApp) before distilled versions are released to the public.

  • “Avocado”: A massive Large Language Model (LLM) designed to succeed the Llama 4 family. It is being built to compete directly with OpenAI’s GPT-5 and Anthropic’s Claude 4.
  • “Mango”: A state-of-the-art multimedia file generator capable of natively creating high-fidelity video, 3D assets, and audio.

2. The Open-Source “Catch”

Unlike the Llama 2 and Llama 3 eras, where the most powerful weights were released almost immediately, the 2026 strategy introduces a “Proprietary First” delay.

| Feature | Proprietary (Full) Versions | Open-Source Versions |
| --- | --- | --- |
| Release Timing | Immediate launch in 2026. | Released “eventually” after safety audits. |
| Capabilities | Full multimodal “reasoning” & agency. | May have restricted features or smaller parameters. |
| Safety Layers | Integrated, real-time filtering. | “Pieces” may remain proprietary to prevent misuse. |

3. Why the Shift? (The Llama 4 “Maverick” Context)

The decision to go hybrid follows a difficult 2025 for Meta’s AI division.

  • The Benchmark Gap: While Llama 4 Maverick (400B) was a success, it was quickly overtaken in late 2025 by Chinese open-source models like Qwen 3 and DeepSeek V3, which began dominating global leaderboards.
  • The “Gemma 4” Threat: Google’s recent release of Gemma 4 (April 2, 2026) under a permissive Apache 2.0 license has put immense pressure on Meta to remain the “developer’s choice” in the West.
  • Economic Reality: Training “Superintelligence” is costing Meta upwards of $150 billion annually in data center and power costs. A hybrid model allows Meta to monetize its most advanced breakthroughs via enterprise API while keeping the developer community engaged with “Scout” or “Lite” versions.

4. Leadership: The Alexandr Wang Era

The upcoming models reflect the influence of Alexandr Wang, who has been tasked with helping Meta “leapfrog” rivals after Llama 4 was perceived as trailing in complex reasoning.

  • AGI Focus: Wang’s team is reportedly obsessed with “Agentic Workflows”—AI that doesn’t just talk, but can execute multi-step software tasks autonomously.
  • Data Advantage: By leveraging Scale AI’s data-labeling expertise internally, Meta is attempting to create models with significantly higher “instruction-following” reliability than previous generations.

5. What This Means for Developers

If you are currently building on Llama 4, you should expect:

  • A Tiered Ecosystem: You will likely use the “Open” version for local hosting and low-cost tasks, while “calling” the proprietary Avocado API for high-stakes reasoning or advanced multimodal generation.
  • Continued Support: Zuckerberg remains “hell-bent” on AI supremacy and believes that open-sourcing versions of his models is the only way to prevent a total monopoly by OpenAI and Google.
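In practice, a tiered ecosystem like the one described above usually means putting a small routing layer in front of your models: cheap, routine requests go to the locally hosted open-weights model, while high-stakes or multimodal work is escalated to the paid API. The sketch below illustrates that pattern only; the model names, tiers, and thresholds are hypothetical and not confirmed Meta products or endpoints.

```python
# Hypothetical sketch of a tiered routing layer for a hybrid open/proprietary
# model ecosystem. "Avocado" and "Scout/Lite" are placeholders from the
# article, not real endpoints.

from dataclasses import dataclass


@dataclass
class Task:
    prompt: str
    needs_multimodal: bool = False  # e.g. video or 3D asset generation
    stakes: str = "low"             # "low" for routine work, "high" for critical output


def route(task: Task) -> str:
    """Decide which tier should serve a task.

    High-stakes reasoning and multimodal generation escalate to the
    proprietary API; everything else stays on the local open-weights model.
    """
    if task.needs_multimodal or task.stakes == "high":
        return "proprietary-api"      # e.g. a hosted "Avocado"-class endpoint
    return "local-open-weights"       # e.g. a distilled "Scout"/"Lite" build


print(route(Task("summarize this email")))                   # local-open-weights
print(route(Task("draft a legal contract", stakes="high")))  # proprietary-api
```

The design choice here is to make escalation explicit and auditable: the routing predicate is one place to tune cost versus capability as the two tiers diverge.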

“Meta has been the largest U.S. player to let others modify its models,” noted a report from Axios. “Zuckerberg believes openness accelerates innovation, but he’s no longer willing to give away the ‘crown jewels’ for free on day one.”
