
The Internet Was Always Bad at Writing. Now It's Just More Obvious.

📅 2026-03-24 17:43 · Andrés Rendón · Personal Growth · 6 min read · 7,480 characters · Rating: 83

Tags: AI Writing · Content Strategy · LLM · Human Creativity · Digital Media

📌 One-sentence summary: The internet was already full of mediocre content before AI arrived; LLMs merely amplify existing problems of unverified content and missing perspective rather than creating new ones.

📝 Detailed summary: The author argues that the current panic over AI-generated content overlooks the fact that the internet was already awash in low-quality, unverified, and unoriginal material. While LLMs excel at technical execution (grammar and structure), they lack the transformative creativity of human writers. The piece draws an analogy between AI "hallucinations" and journalism's historical failures, stressing that the core problem, missing verification and expertise, predates LLMs. In an era flooded with AI-generated content, the author concludes, a human's distinctive perspective, judgment, and domain expertise are the only things that still matter.

Title: The Internet Was Always Bad at Writing. Now It's Just More Obvious. | BestBlogs.dev

URL Source: https://www.bestblogs.dev/article/dc4ab016

Published Time: 2026-03-24 09:43:12

Writing sucked long before LLMs showed up. Sure, today's doomsayers love pointing at ChatGPT as the villain — but they're missing the point. Amateur blogs were copy-pasting each other's content for years. "Serious" newsrooms published outright lies without blinking. We were already knee-deep in mediocrity. We just didn't want to say it out loud.

Here's the part nobody wants to hear: people actually write better today. Yes, with AI's help. And even ChatGPT's worst hallucinations? They don't come close to the confident garbage a conspiracy blogger would publish on a Tuesday afternoon without a second thought.

So voice matters now more than it ever has. It's the one thing that cuts through.

## Do LLMs Write Better Than the Average Writer?

Technically? Yes. Clean grammar, solid structure, no embarrassing agreement errors. ChatGPT would have outperformed most of the blogs clogging up the internet back in 2015, no contest.

But creativity is a whole different fight. A study in the Journal of Intelligence (2026) put human writers head-to-head with LLMs on creative tasks. The finding: LLMs win on technical execution, but humans hold the edge when the task demands real depth. The reason comes down to mechanism. LLMs recombine. Humans, at their best, transform.

One thing worth flagging: the study used students, not working writers. There's a real difference between benchmarking ChatGPT against a first-year blogger and measuring it against someone who's spent a decade finding their voice. That comparison hasn't been done yet in academic research. Make of that what you will.

Now the uncomfortable bit: most readers can't tell the difference. Put a well-prompted AI article next to something a mediocre writer turned in, and the average person won't know which is which. That's not a knock on human writing. It's actually the best case for it — if AI already matches the mediocre writer, the only move is to stop being mediocre.

## ChatGPT's Worst Hallucination vs. Journalism's Worst Failures

Humans lie. LLMs hallucinate. Both have receipts.

In 2023, a New York lawyer used ChatGPT to research a legal case. What came back: six court cases that had never existed. Convincing names, real-looking docket numbers, fabricated judicial opinions. The judge searched for them. Nothing. The lawyer got fined $5,000 and, in his own words, became "the poster child for the dangers of dabbling with new technology." (Mata v. Avianca, Inc., S.D.N.Y. 2023)

In 1980, Janet Cooke wrote a front-page Washington Post story about Jimmy, an 8-year-old heroin addict. Heartbreaking detail, vivid prose, impossible to put down. It won the Pulitzer. Two days later, she handed the prize back. Jimmy was never real.

So which was worse? No clean answer. But the same thing shows up in both cases: nobody checked. The lawyer assumed ChatGPT couldn't lie. The Post's editors assumed their reporter wouldn't. Publishing without verifying — that's always been the real problem.

And it goes back further than you'd think. Jayson Blair made up stories at the New York Times for years. Stephen Glass invented entire companies to fool The New Republic. Der Spiegel's star reporter Claas Relotius fabricated articles for a decade before anyone caught him. All credentialed. All edited. All failed the same way ChatGPT fails: stating false things with zero hesitation.

The difference is ChatGPT doesn't have an ego to protect. No deadline panic. No career on the line. And it still gets it wrong. Because it doesn't know it's wrong. It's just predicting what word comes next.

Which brings us to the only actual fix: expertise. You can't catch what ChatGPT invents if you don't already know more than it does about the topic. A sharp lawyer would have spotted those fake cases immediately. A sharper editor would have asked Cooke to take them to meet Jimmy.

Simple rule: use AI to write about things you actually know. It amplifies your judgment. It doesn't replace it. Without that foundation, it doesn't matter whether the error comes from a language model or a Pulitzer winner. The result is the same.

## Why Right Now Is Actually a Great Time to Be a Good Writer

If there's any hope here, it lives in three things: voice, judgment, and responsibility. But let's not kid ourselves — the industry isn't slowing down out of principle. For most companies, this is a dream scenario: more content, less cost, no headcount. That train isn't stopping.

The trick just has a shelf life. Reader fatigue is real, and it's building. Not because audiences are suddenly more sophisticated, but because sameness gets old fast. When everything sounds identical, nothing lands.

Here's the contradiction I'm sitting with: I said most people can't tell AI writing from human writing. That's still true. But the ones flooding the internet with generated content aren't writers using AI as a tool. They're finance teams cutting budgets, students gaming deadlines, and writers who confused speed with skill. AI in the wrong hands isn't a revolution. It's a factory for mediocrity, running at industrial scale.

Voice is the only thing that doesn't scale that way. It can't be averaged. Judgment comes from actually knowing your subject, well enough to catch the AI when it's bluffing. Responsibility is that moment before you hit publish, when you decide if what you're about to put out is genuinely yours, or just sounds like it could be.

For the mundane stuff — emails nobody will read twice, bureaucratic summaries, boilerplate product copy — use AI. Seriously, no guilt. That's what it's built for.

Everything else, the stuff that sticks, that someone screenshots and sends to a friend, that someone reads again six months later — that still needs a real person behind it.

## The Question Nobody Wants to Answer

Not "will AI replace you?" That's the wrong question.

The real one: did you have something to say before it showed up?

AI is going to displace a lot of people. Mostly in repetitive, process-heavy roles that were never really about thinking in the first place. Past that? We're not there yet. Because AI imitates. It doesn't originate. Everything it produces is a remix of something that already existed. World-class imitator. Still just an imitator.

In writing, the question gets personal fast: did you have a voice before ChatGPT? Something to actually say? If not, I don't think AI changes that. The mediocre writer stays mediocre, just with cleaner sentences. The one who publishes without thinking keeps doing it, just faster.

AI didn't create the problem.

It just gave it a megaphone.

_Since 2019, I've been building País Lector, a Spanish-language literary platform that reached 50,000+ monthly readers through SEO and editorial judgment alone. If any of this landed, that's where the rest of the experiment lives._


Published: 2026-03-24 17:43:12 · Indexed: 2026-03-24 20:00:58
