The Feynman Technique Is Artificial Backpropagation
### 向阳乔木 @vista8
The Feynman Technique is manual backpropagation.

Understanding something when you read it isn't enough; you have to explain it in your own words.

If you can't explain it clearly, the gradient never propagated back to the bottom layers: you only memorized the surface arrangement of tokens, and the weights were never updated.

The Feynman method tests how deep the gradient actually reached. If you can't explain it at all, only the surface layer moved. If you can explain it, but not simply, it reached the middle layers. If you can make a child understand it, the bottom-layer weights have been fully updated.

Rote memorization trains only the last layer; true mastery is full-network fine-tuning.
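The tweet's contrast between "training only the last layer" and "full-network fine-tuning" can be made concrete with a minimal sketch (illustrative only, not anything from the author): a two-layer linear model where the bottom weight is either frozen or updated by the backpropagated gradient. All names here (`train`, `full_backprop`) are invented for this example.

```python
# Minimal 2-layer linear "network": y = w2 * (w1 * x), loss L = 0.5 * (y - t)^2.
# "Rote memorization" updates only w2 (the last layer); "deep understanding"
# lets the gradient flow back and update w1 (the bottom layer) too.

def train(x, target, w1, w2, lr=0.01, steps=100, full_backprop=True):
    for _ in range(steps):
        h = w1 * x                # hidden activation
        y = w2 * h                # output
        err = y - target          # dL/dy
        grad_w2 = err * h         # gradient arriving at the last layer
        grad_w1 = err * w2 * x    # gradient propagated down through w2
        w2 -= lr * grad_w2
        if full_backprop:         # only "full fine-tuning" touches the base
            w1 -= lr * grad_w1
    return w1, w2

# Last-layer-only training: the bottom weight never moves.
w1_rote, w2_rote = train(x=1.0, target=2.0, w1=0.5, w2=0.5, full_backprop=False)
# Full backpropagation: both layers adapt.
w1_full, w2_full = train(x=1.0, target=2.0, w1=0.5, w2=0.5, full_backprop=True)
print(w1_rote, w1_full)  # w1_rote is still 0.5; w1_full has moved
```

In the analogy, "explaining it to a child" corresponds to checking that `w1` (the foundational representation) actually changed, not just the readout layer.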
Mar 20, 2026, 11:25 PM
5 replies · 16 retweets · 112 likes · 7,215 views
One Sentence Summary
The Feynman Technique is analogized to artificial backpropagation in neural networks, explaining learning depth through weight updates.
Summary
The author presents a clever analogy: the Feynman Technique is essentially artificial backpropagation. Merely understanding something isn't enough—you need to explain it in your own words. If you can't explain it clearly, it means the gradient hasn't propagated back to the foundational layer; you've only superficially memorized the token sequence. The Feynman method tests where the gradient has reached: if you can't explain it, only the surface layer was activated; if you can explain it but not simply enough, it reached the middle layer; if you can make a child understand, the foundational weights have been completely updated. Rote memorization trains only the last layer, while deep understanding is full-network fine-tuning. This is a highly insightful analogy that perfectly links core deep learning mechanisms with classic learning methods.
AI Score
89
Influence Score 31
Published At: Mar 20, 2026
Language
Chinese
Tags
Feynman Technique
Backpropagation
Deep Learning
Learning Methods
Weight Update