
Fu Sheng Shares the "Lobster Three Steps" Technique for Improving LLM Efficiency

📅 2026-03-10 13:55 · Fu Sheng · Artificial Intelligence · 2 min · 1,607 characters · Score: 82

LLMs get dumber the longer you use them, and the core reason is simple: too much is stuffed into the context, so the model's memory goes fuzzy. The "Lobster Three Steps" work around this limit entirely:

  • Store key information in files the model can retrieve on demand, instead of making it rely on the context window alone
  • For complex tasks, have the model state its steps first; confirm it hasn't gone off track, then let it execute
  • Have it actively summarize and memorize the important points, instead of you repeating them over and over

I made a video explaining these three steps. Used well, the lobster gets smarter the more you use it.
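The first step ("file key information, don't stuff the context") can be sketched as a tiny file-backed memory. This is an illustrative sketch only: the file name `session_memory.json` and the helpers `save_fact` / `recall` are assumptions, not anything described in the tweet, and the keyword match stands in for whatever retrieval (e.g. RAG) a real setup would use.

```python
import json
from pathlib import Path

# Hypothetical store: key facts live on disk, not in the context window.
MEMORY_FILE = Path("session_memory.json")

def save_fact(key: str, value: str) -> None:
    """Persist a key fact to a file instead of keeping it in the prompt."""
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    memory[key] = value
    MEMORY_FILE.write_text(json.dumps(memory, ensure_ascii=False, indent=2))

def recall(query: str) -> dict:
    """Load back only the facts whose key matches the query (naive keyword match)."""
    if not MEMORY_FILE.exists():
        return {}
    memory = json.loads(MEMORY_FILE.read_text())
    return {k: v for k, v in memory.items() if query.lower() in k.lower()}

save_fact("project_deadline", "2026-04-01")
save_fact("api_base_url", "https://api.example.com/v1")
# Only the matching fact gets re-injected into the next prompt:
print(recall("deadline"))
```

The point of the design is that the prompt only ever carries the handful of facts relevant to the current turn, so the context window stays small no matter how long the session runs.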

[Video: 02:13]

2 Replies · 4 Retweets · 12 Likes · 1,593 Views

One Sentence Summary

Fu Sheng proposes a three-step method involving file storage, step verification, and active summarization to solve LLM performance degradation caused by excessive context.

Summary

To address the 'fuzzy memory' or performance drop LLMs experience when handling long contexts, Fu Sheng introduced the 'Lobster Three Steps' optimization strategy. This includes: 1. Filing key information for on-demand retrieval to reduce reliance on the context window (similar to the RAG approach); 2. Adopting a 'plan-then-execute' workflow to ensure complex tasks stay on target (a Prompt Engineering technique); and 3. Guiding the model to actively summarize key points to reinforce memory. This tweet aims to help users refine their AI interactions to make tools more precise and intelligent in practice.
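The second step, the plan-then-execute workflow, amounts to a two-phase prompt pattern: first ask for a numbered plan and withhold execution, then, once the plan is confirmed, execute one step at a time. A minimal sketch follows; the helpers `build_plan_prompt` and `build_execute_prompt` are hypothetical names for illustration, not an API from the tweet.

```python
# Phase 1: request a plan only, so the user can catch drift before any work happens.
def build_plan_prompt(task: str) -> str:
    return (
        f"Task: {task}\n"
        "Before doing anything, list the steps you will take as a numbered plan. "
        "Do not execute yet; wait for my confirmation."
    )

# Phase 2: restate the confirmed plan and ask for exactly one step.
def build_execute_prompt(plan: list[str], step_index: int) -> str:
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(plan))
    return (
        f"Confirmed plan:\n{steps}\n"
        f"Now carry out step {step_index + 1} only, and stop."
    )

plan = ["Outline the report", "Draft each section", "Review for errors"]
print(build_plan_prompt("Write a quarterly report"))
print(build_execute_prompt(plan, 0))
```

Restating the confirmed plan in every execution prompt doubles as the third step (active summarization): the agreed-upon state is re-surfaced each turn rather than left to drift in a long context.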

AI Score

82

Influence Score 7

Published At 2026-03-10

Language

Chinese

Tags

LLM Tips

Prompt Engineering

Context Optimization

Fu Sheng

AI Applications

View original → Published: 2026-03-10 13:55:00 · Indexed: 2026-03-11 00:00:48
