
Don't Let LLMs Write for You — LessWrong

📅 2026-03-11 02:49 · JustisMills · Artificial Intelligence · 5 min · 5642 words · Score: 82


_Content note: nothing in this piece is a prank or jumpscare where I smirkingly reveal you've been reading AI prose all along._

It’s easy to forget this in roarin’ 2026, but _homo sapiens_ are the original vibers. Long before we adopt formal heuristics, human beings can sniff out something sus. And to most human beings, AI prose is something sus.

If you use AI to write something, people will know. Not everyone, but the people paying attention, who aren’t newcomers or distracted or intoxicated. And most of those people will judge you.

The Reasons
-----------

People may just be squicked out by AI, or lossily compress AI with crypto and assume you’re a “tech bro,” or think only uncreative idiots use AI at all. These are bad objections, and I don’t endorse them. But when I catch a whiff of LLM smell, I stop reading. I stop reading much faster than if I saw typos, or broken English, or disliked ideology. There are two reasons.

First, human writing is evidence of human thinking. If you try writing something you don’t understand well, it becomes immediately apparent; you end up writing a mess, and it stays a mess until you sort out the underlying idea. So when I read clear prose, I assume that I’m reading a refined thought. LLM prose violently breaks this correlation. If some guy tells Claude to “help put this idea he has into words” then Claude will write clear prose even if the idea is vague and stupid. If the guy asks to “help find citations” and there are no actual good ones, Claude will find random D-tier writeups and link to them authoritatively. Worst of all, if the guy asks Claude to “poke holes in my argument” when the argument is sufficiently muddy, Claude will just kind of make up random “issues” that the guy will hedge against (or, let’s be real, have Claude hedge against). So you end up with a writeup which cites sources, has plenty of caveats, and… has no actual core of considered thought. If you read enough of these, you start alt-tabbing away real fast when you see structured lists with bold headers, or weird clipped parenthetical asides, or splashy contrastive disclaimers every 2-3 sentences, or any number of other ineffable signs subtler than an em dash.

Is it possible that a 50% AI-generated hunk of text contains a pearl of careful thinking, that the poor human author simply didn’t have the time or technical skill to express? I suppose. But it ain’t worth checking.

Second, and closely related, AI prose is a slog. There’s way too much framing, there are too many lists and each list has a few items that serve no purpose, the bold and italics feel desperate, and it’s just all so same-y. In your own conversation with an AI that you can fully steer, you can sometimes break out of this feeling for a little bit. But reading the output of someone else’s AI conversation is rarely any fun.

In short, if someone reads writing “by you” and it seems LLM-y, they will think both that:

  • You probably don’t have an actual good idea under the cruft
  • Even if you do, the cruft is going to suck to get through

If they think that, they are not going to stick around. In fact, the more you want a reader, the more likely they are to be turned off by this stuff. Even if they’re the biggest AI fan in the world.

Luddite! Moralizer!
-------------------

Fine. I admit it. Just this week, I too experienced Temptation.

You may know me as an editor. In this capacity, I was revising an academic paper’s abstract in response to reviewer comments. But I had several papers to work on in the same project, and the owner of that project actively encouraged me to use AI to move fast enough to meet deadlines.[[1]](https://www.bestblogs.dev/article/a66aeb59#fnv0704k3y0s8)

So I gave Claude the paper and the reviewer comments, and asked it to come up with a new abstract that would satisfy the reviewers. The result looked good.

“It’s just an abstract,” I whispered to myself, face lit eerily in my laptop screen’s blue light. “Summary. Synthesis.” I rocked back and forth. “I could… just…”

But no. Claude’s abstract was a useful reminder of which paper this was, and Claude helpfully catalogued what the reviewer requests were. Still, I rewrote the abstract myself, from scratch. In so doing, I noticed a lot of things I hadn’t seen when I was just skimming the AI output. Stuff it included that it didn’t really need to. Stuff it emphasized that wasn’t actually that important.

Did I run _my_ abstract by Claude in turn? Yes! It had two nitpicks, one of which I agreed with, and fixed in my own words. Use these tools. You should totally ask Claude to find you sources for a claim, but then you should check those sources like you would check the sources of an eager day one intern, and expect to throw most (or all) of them away. You should totally ask Claude to fact check, but expect it to miss some factual errors and unhelpfully nitpick others. You can even ask Claude to “help clarify your thinking.” But if you’re _really_ just clarifying it, then you won’t use its text. Because once your thinking’s clear, you can write the text yourself, and you should.

To be clear, editing I do as part of the LessWrong Feedback Service uses my own human judgment, and I don't use LLMs to make edits.

