Gary Marcus Links LLM Reliability Issues to Nvidia's Valuation

📅 2026-03-18 01:24 · Gary Marcus · Artificial Intelligence · 4 min read · 3,752 words · Score: 81
LLM Reliability · Gary Marcus · Nvidia · AI Criticism · Metacognition

Gary Marcus Links LLM Reliability Issues to Nvidia's Valuation
==============================================================

### Gary Marcus

@GaryMarcus

IF people understood this study, Nvidia would drop, sharply.


#### Gary Marcus

@GaryMarcus · 5h ago

BREAKING: Reliability, which I have been harping on here since 2019, continues to be a deep problem, even with the latest models.

A new @Princeton review below offers a taxonomy of some of the many ways in which reliability continues to haunt LLMs seven years and a trillion dollars later.

Crucially, “many models lack metacognition about their own reliability”. They don’t know what they don’t know.

Forget about AGI if you can’t solve that problem.

It’s past time to rethink the whole LLM paradigm.


Mar 17, 2026, 5:24 PM

9 Replies · 7 Retweets · 84 Likes · 12.8K Views

#### One Sentence Summary

Gary Marcus asserts that if the public understood the deep reliability problems of large language models, as highlighted by a new Princeton review, Nvidia's stock would drop sharply.

#### Summary

This tweet from Gary Marcus, quoting his own earlier post, draws a direct and provocative link between the persistent reliability issues in large language models (LLMs) and the market valuation of AI hardware giants like Nvidia. The quoted tweet emphasizes that even the latest models suffer from fundamental reliability problems, particularly a lack of 'metacognition'—they don't know what they don't know. Marcus argues that this foundational flaw, seven years and a trillion dollars into LLM development, necessitates a complete rethinking of the LLM paradigm and suggests that a widespread understanding of this issue would have significant financial repercussions for companies heavily invested in the current AI infrastructure.

AI Score: 81

Influence Score: 17

Published At: 2026-03-18

Language: English

Tags: LLM Reliability · Gary Marcus · Nvidia · AI Criticism · Metacognition


View original → Published: 2026-03-18 01:24:59 · Indexed: 2026-03-18 06:00:41
