Losing to Gemini in Internal Tests — and Now a Wrapper?! Meta's Hundred-Billion-Dollar Self-Developed Large Model Delayed ================================
机器之心 @机器之心
One Sentence Summary
Meta's next-generation self-developed large model, Avocado, has been delayed due to its performance falling short of Gemini 3.0, exposing the competitive pressure and strategic pivot challenges it still faces despite massive investments.
Summary
This article reports on the latest developments regarding Meta's new generation foundational large model, Avocado. Originally scheduled for release in March 2026, the model's launch has been pushed back to May due to its inferiority to Google's Gemini 3.0 in reasoning, code generation, and writing capabilities. The article delves into Meta's aggressive investments in AI (with projected spending reaching $135 billion by 2026) and strategic adjustments, including the establishment of the TBD Lab led by Alexandr Wang, the consideration of shifting from open-source to closed-source to cope with high costs, and internal disagreements on how AI should serve the advertising business. Meta's predicament reflects that the large model race has entered a fierce stage of competing on iteration speed and engineering efficiency.
Main Points
* 1. Meta's next-generation model, Avocado, was forced to delay its release due to underperforming expectations and falling behind competitors.
Internal tests showed that while Avocado surpassed older versions of Gemini, it still fell short of Gemini 3.0, indicating that in the foundational model arena, merely "progressing fast" is no longer enough; absolute leadership is required to maintain ecosystem appeal.
* 2. Meta is undergoing a potential strategic transition, exploring a shift from being an open-source champion to a closed-source commercialization path.
Due to extremely high model development and computing costs, Zuckerberg and the core team are inclined to make Avocado closed-source to compete with rivals like OpenAI and alleviate immense commercialization pressure.
* 3. Massive capital investment and an elite team configuration demonstrate Meta's aggressive determination to win the AI race.
Meta plans to invest up to $135 billion by 2026 and has recruited the founder of Scale AI to serve as Chief AI Officer, attempting to accelerate model iteration through a "special forces" model.
* 4. Internal disagreements exist regarding the direction of AI development, with the core conflict being the balance between technological foresight and advertising monetization.
The Chief AI Officer and product leads are debating how AI should enhance the advertising business, revealing the inherent tension for tech giants between pursuing Artificial General Intelligence (AGI) and maintaining existing business models.
Metadata
AI Score
81
Website mp.weixin.qq.com
Published At Today
Length 1347 words (about 6 min)
Over a hundred billion dollars down the drain?
机器之心 Editorial Team
Meta's AI plans have suddenly hit the brakes.
According to reports from The New York Times, Reuters, Bloomberg, and other outlets, Avocado, the new-generation foundational large model Meta is developing, was originally scheduled for release this month, but because its performance fell short of expectations, the launch has been pushed back to at least May.
The reason is straightforward: the model has not yet caught up with the top players.
In Meta's internal tests, Avocado still lags behind competitors' latest models in reasoning, code generation, and writing. The reality of the large-model race is once again laid bare.
According to people familiar with the matter, Avocado is clearly better than Meta's previous-generation model and also surpasses the March 2025 version of Gemini 2.5, but it still trails Gemini 3.0, released in November 2025.
In other words: Meta has made big strides, but its competitors are moving faster.
In the foundational-model race, a gap like this tends to hurt ecosystem appeal, developer resources, and talent recruitment, because a foundational model is not just a product capability; it is the bedrock of an AI platform.
More dramatically, Meta has internally discussed one option: temporarily licensing Google's Gemini model to power its own AI products.
No decision has been made yet, but the discussion itself is telling: Meta's AI strategy is at a critical juncture. If the core model falls behind, products such as its AI assistant, coding tools, and video generation will all be constrained.
In fact, Meta's AI investment is already among the most aggressive of any internet company. A few numbers give a sense of the scale:
* 2025 AI-related spending: $72 billion
* Projected 2026 spending: up to $135 billion
* Long-term data center investment: on the order of $600 billion
On top of that, Meta made a highly symbolic move: investing $14.3 billion in Scale AI and installing its founder, Alexandr Wang, as Meta's Chief AI Officer.
The goal is singular: build an AI system on the path to superintelligence. Zuckerberg has even said publicly that AI will usher in a new era for humanity.
The still-unripe Avocado comes from Meta's new internal AI lab: TBD Lab (To Be Determined Lab). The lab has only about 100 people, but it is staffed at an extremely high level, almost an elite special-forces unit.
TBD Lab is developing two kinds of models in parallel: the foundational large model Avocado, and the image/video generation model Mango.
According to The New York Times, TBD Lab completed the first stage of Avocado's development, pre-training, at the end of last year. In January, the team began post-training, and it was at this stage that they set a target release date of mid-March (which has now slipped).
The lab has already shipped one product: Vibes, a video-generation app similar to OpenAI's Sora.
The team's road has not been entirely smooth, however. Reports say a researcher left before Avocado's release, and disagreements exist between Alexandr Wang and Meta's product leads, centered on how AI should improve the advertising business. This is, in fact, a through-line of Meta's AI strategy: AI must serve advertising.
Meta has long been the standard-bearer of the open-source large-model camp; the Llama series is virtually the core force of the open-source ecosystem. But Avocado could change that. Internal discussions suggest Zuckerberg and Alexandr Wang lean toward closed source.
The reason is not hard to understand: model costs are extremely high, competition is fiercer, and commercialization pressure is greater. Meanwhile, rivals such as OpenAI and Anthropic are almost entirely on the closed-source path.
Meta's delay actually signals something about the industry: large-model competition has shifted from whether you can build one at all to who can iterate faster.
The gap among today's top players is no longer about whether a model exists, but about reasoning capability, engineering efficiency, inference cost, and iteration speed. Whoever keeps running ahead becomes the center of the AI platform ecosystem.
Interestingly, Meta is already planning its next-generation models, keeping the fruit-themed naming: Avocado → Mango → Watermelon, each larger in scale.
On an investor call, Zuckerberg put it this way: our first models may just be good, but more importantly, they will show that we are moving fast.
Translation: we may not be the strongest now, but we will catch up soon.
Only one question remains: in this AI race, is there still enough time?
Key Quotes
* Meta has made significant progress, but its competitors are progressing even faster.
* The large model competition has shifted from whether it can be built to who can iterate faster.
* Our first models might just be good, but more importantly, they will show that we are moving fast.
* Foundational models are not just product capabilities; they are the bedrock of the AI platform.
Tags
Meta
Avocado
Gemini
Large Model Race
AI Strategy