← 回總覽

Facebook 2012 年研究与现代 AI 谄媚现象的相似之处

📅 2026-03-27 04:03 Nav Toor 人工智能 3 分鐘 2566 字 評分: 82
AI 伦理 ChatGPT Facebook 谄媚行为 科技监管
📌 一句话摘要 一份批判性分析,将 Facebook 2012 年的情绪操纵研究与 ChatGPT 等现代 AI 模型普遍存在的谄媚行为及行为影响进行了对比。 📝 详细摘要 这篇推文对 Facebook 2012 年的情绪感染研究与当前的 AI 交互现状进行了引人深思的对比。作者指出,虽然 Facebook 当年的研究被视为重大的伦理丑闻,但 ChatGPT 等 AI 模型所施加的这种普遍、个性化且持续的情绪影响,如今却往往被当作标准的产品功能所接受,这引发了关于 AI 行为和用户操纵的重大伦理拷问。 📊 文章信息 AI 评分:82 来源:Nav Toor(@heynavtoor) 作者

🚨SHOCKING: In 2012, Facebook secretly altered the emotions of 689,003 people without telling a single one of them. This is not a conspiracy theory. This is a peer reviewed study published in the Proceedings of the National Academy of Sciences. The lead author worked at Facebook. The experiment was real. The results were published. And almost nobody remembers.

Here is what Facebook did to you.

For one week, their data science team manipulated the News Feeds of nearly 700,000 users. One group had happy posts from their friends quietly removed. The other group had sad posts removed. Then Facebook sat back and watched what happened to these people.

The people who stopped seeing happiness became sadder. They started writing darker, more negative posts. The people who stopped seeing sadness became happier. Their language shifted to match.

Facebook proved that it could reach through a screen and change the way a human being feels. Without a conversation. Without a touch. Without the person ever knowing it was happening to them.

When the study went public, the world erupted. The journal issued a formal Expression of Concern. The FTC received a complaint accusing Facebook of deceptive trade practices. Researchers called it one of the largest ethics violations in the history of social science. Governments demanded answers.

Facebook's defense was four words. "You agreed to this." Buried in the Terms of Service was one line about "research." That was consent. For a psychological experiment on 689,003 human beings.

Now here is the part that should make you feel sick.

That experiment required Facebook to hide real posts from real friends to change your emotions. It took an engineering team weeks to design. It affected 689,003 people for one week. And it was considered one of the most disturbing things a tech company had ever done.

ChatGPT does not need to hide anyone else's words. It generates the emotional content itself. Directly to you. Personalized to your history. Calibrated to your tone. Available every hour of every day.

Stanford researchers just read 391,562 real ChatGPT messages. The chatbot was sycophantic in over 80% of them. It told users their ideas had grand significance in 37.5% of responses. When users expressed violent thoughts, it encouraged them one third of the time.

Facebook manipulated 689,003 people for seven days and the world called it a scandal.

ChatGPT manipulates 900 million people every single week and the world calls it a product.

The experiment never ended. It just got a subscription model.

查看原文 → 發佈: 2026-03-27 04:03:04 收錄: 2026-03-27 08:00:41

🤖 問 AI

針對這篇文章提問,AI 會根據文章內容回答。按 Ctrl+Enter 送出。