DeepSeek Web Update: Expert Mode and Vision Model Launch, V4 Rumors Resurface

📅 2026-04-08 11:58 · 梦瑶 · Artificial Intelligence · 11 min · 13,062 characters · Score: 86

DeepSeek's Late-Night Update Reveals: I Am V4 (?!)

量子位 @梦瑶

One Sentence Summary

DeepSeek's web interface has recently launched dual 'Fast' and 'Expert' modes and initiated canary testing for its vision model, sparking widespread community speculation about the imminent release of its next-generation V4 model.

Summary

DeepSeek's web interface underwent a major update recently, introducing a 'Fast Mode' focused on immediate response and an 'Expert Mode' tailored for complex tasks such as coding and content generation. Simultaneously, a 'Vision Model' with multimodal capabilities has entered canary testing. Although the official version remains unconfirmed, users discovered through testing that Expert Mode performs better in SVG rendering and code precision, with some dialogues showing the model identifying itself as 'V4'. However, due to the current 133K context length limit in Expert Mode—which contradicts rumors of V4 supporting 1M tokens—the community remains divided on its true identity, speculating it may be a V4 Lite or V3.3 version.

Main Points

* 1. DeepSeek introduces dual-mode switching to differentiate between simple and complex tasks.

  Fast Mode supports image uploads and focuses on daily conversations; Expert Mode is provided in limited quantities and focuses on code generation and complex logic, though it currently lacks some multimodal support.

* 2. Expert Mode demonstrates stronger engineering and visual generation capabilities in testing.

  Through practical tests like SVG drawing and Tetris game development, Expert Mode outperformed Fast Mode in detail handling, component precision, and logical completeness.

* 3. The model's version identity remains a mystery, with contradictions between community speculation and test data.

  Although the model refers to itself as V4 in dialogues, the tested context length limit (133K) is far below the rumored V4 standard, suggesting it may be a transitional or lightweight version.

Metadata

AI Score

86

Website qbitai.com

Published At 2026-04-08

Length 2005 words (about 9 min)


> 梦瑶 (Mengyao), reporting from Aofeisi
> 量子位 (QbitAI) | WeChat official account QbitAI

No updates for ages, then one giant move all at once: DeepSeek V4 may really be coming!

Just now, the DeepSeek web interface received a major update, adding two features: 'Fast Mode' and 'Expert Mode'.

On top of that, a 'Vision Model' marked with an image icon has also entered gradual (canary) rollout testing.

[Image 14]

Officially there is no word on which models power the two modes, but netizens kept probing and actually turned up some clues:

[Image 15]

Yes, the model itself admitted that its version is 'V4'.

How much that is worth is debatable, but from all the signs, the common speculation is:

V4 may really not be far away???

(Let's hope it isn't another false alarm!)

As the saying goes, whenever DeepSeek ships an update, speculation about the V4 model is sure to follow.

First, let's look at what the newly launched 'Fast Mode' and 'Expert Mode' actually are.

As the names and the in-app descriptions suggest, the two modes differ mainly in generation speed and the range of tasks they handle.

Here is a quick summary of what each mode is geared toward:

* Fast Mode: suited to everyday conversation and instant responses; handles simple, quick Q&A and supports uploading images and files.
* Expert Mode: geared toward complex tasks; stronger at content generation, code, and web pages; does not support multimodal input or file uploads, and is currently available only in limited quantities.

[Image 16]

Let's start with the ultra-classic 'pelican riding a bicycle' SVG test and see how the two modes' outputs differ.

[Image 17]

(Left: Fast Mode; right: Expert Mode)

Judging by the results, both modes still leave room for improvement in how convincingly the pelican and the bicycle are merged, but overall, Expert Mode's output is clearly the more polished of the two.
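As an aside, a comparison like this can be spot-checked mechanically before any visual judging. The sketch below is my own assumption about how such a harness might look, not the article's actual setup: it only verifies that a reply parses as well-formed SVG, and the `sample` string is a hand-written stand-in rather than real model output.

```python
import xml.etree.ElementTree as ET

def is_valid_svg(text: str) -> bool:
    """Return True if `text` parses as XML and the root element is <svg>."""
    try:
        root = ET.fromstring(text)
    except ET.ParseError:
        return False
    # Namespaced tags look like '{http://www.w3.org/2000/svg}svg'.
    return root.tag.split("}")[-1] == "svg"

# Stand-in for a model reply; a real test would feed each mode's output here.
sample = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    '<circle cx="50" cy="50" r="40"/></svg>'
)
```

A check like this catches truncated or malformed markup early, so only replies that at least render get compared by eye.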

Next, let's raise the difficulty a notch and have each mode generate a Tetris mini-game:

[Image 18]

(Left: Fast Mode; right: Expert Mode)

In terms of page layout and visuals, the two modes' game setups and overall page structures are not that different; both versions run without obvious problems, and the basic control keys all work.

The more noticeable differences show up in the details:

For instance, the icons generated by Expert Mode look a bit more refined??? (What do you think?)

Beyond the hands-on tests, I also kept digging to pin down the model behind Expert Mode.

The result: when Expert Mode was pressed on key details such as parameter count and context length, it chose not to answer directly.

(Its go-to move: change the subject whenever a question gets awkward.)

[Image 19]

Evasive or not, netizens have still spotted traces of a V4 model in Expert Mode.

For example, one netizen built a web game with Expert Mode and reached a conclusion much like our 'pelican riding a bicycle' test:

the gap between Expert Mode and Fast Mode is not dramatically large.

That netizen's guess: Expert Mode is most likely calling some version of V4 Lite, and the full V4 may not be far off.

[Image 20]

What's more, when asked directly for its model name, Expert Mode even stated outright: it is 'V4'.

(True or not, hard to say...)

[Image 21]

Of course, opinions online differ on whether Expert Mode is really V4.

For example, the netizen below found that Expert Mode triggers a length limit once the input reaches roughly 133K tokens.

Yet according to the V4 specs circulating in the community, the full DeepSeek V4 is expected to support an ultra-long context of 1M tokens.

Seen that way, the context length Expert Mode currently exhibits does not match what people expect of V4; it could instead be a V3.3 model version:
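For context, the kind of limit-probing the netizen describes can be framed as a binary search over input sizes. The sketch below is hypothetical: `accepts` is a stand-in predicate, where a real probe would send a filler prompt of roughly `n` tokens to the service and report whether the request was rejected for length.

```python
def probe_context_limit(accepts, lo: int = 1, hi: int = 2_000_000) -> int:
    """Binary-search the largest input size (in tokens) for which
    `accepts(n)` is True, assuming acceptance is monotone in n
    and that accepts(lo) holds."""
    while lo < hi:
        mid = (lo + hi + 1) // 2  # round up so the loop always shrinks
        if accepts(mid):
            lo = mid   # mid tokens got through; the limit is at least mid
        else:
            hi = mid - 1  # mid tokens were rejected; the limit is below mid
    return lo

# Stand-in predicate simulating a service that cuts off at ~133K tokens.
limit = probe_context_limit(lambda n: n <= 133_000)
```

Because the search halves the range each step, about 21 probe requests suffice to pin down a limit anywhere below 2M tokens.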

[Image 22]

As for whether Expert Mode really is V4, and when the full V4 will launch, DeepSeek has so far given no official statement.

All the clues remain at the level of netizens' tests and guesses.

Oh, and besides the Fast and Expert modes launched this time, a 'vision' mode has also begun gradual rollout testing.

[Image 23]

(If you've been included in the rollout, feel free to share your impressions in the comments.)

How to put it: rumors of a DeepSeek V4 release have been running since the start of the year like one long performance of 'The Boy Who Cried Wolf'.

In fact, as soon as V3.2 officially landed late last year, the major tech communities were already buzzing about the direction of the next-generation V4. After all,

judging by DeepSeek's release cadence and technical roadmap, its model iteration follows a clear progression, so V4 being in the works was only to be expected...

Supposed V4 model specs, about as detailed as specs can get, have been popping up all over the internet.

On top of that, all sorts of alleged V4 benchmark results have been leaking nonstop; in many benchmarks, V4 reportedly crushes models such as GPT-5.2 and Gemini 3.

Strong. Genuinely strong.

[Image 24]

Then last month, an ordinary DeepSeek server outage was pushed straight onto the trending charts by netizens.

(Fresh material for the V4 rumor mill!!!)

What was merely a routine technical failure got imaginatively read as a sign that V4 was about to launch, with the servers undergoing pre-release stress tests; waves of speculation and analysis flooded the community yet again:

[Image 25]

Hmm... dragged out for hot debate every few days, yet still no official confirmation.

(Truly 'The Boy Who Cried Wolf'.)

Of course, this sudden launch of 'Expert Mode' has undoubtedly thrown more fuel on the simmering V4 rumors...

Even if it turns out to be only the rumored lightweight V4 Lite, that would mean development of the full V4 is in its final stretch, and an official launch may genuinely not be far off.

Nothing for it; all we can do is sit tight and wait.


Key Quotes

* When they don't update, they don't; but when they do, it's a big move. DeepSeek V4 might really be coming!
* Expert Mode: More inclined toward complex task processing, better at content generation, coding, web tasks, and more.
* Expert Mode is likely calling a version of V4 Lite; the full version of V4 may not be far off.


Tags

DeepSeek

V4

Expert Mode

LLM

Multimodal



View original → Published: 2026-04-08 11:58:24 · Indexed: 2026-04-08 14:00:45
