Zhipu AI Releases GLM-OCR: High Performance at Sub-1B Scale
===========================================================

### Jerry Liu
@jerryjliu0
Zhipu AI released the GLM-OCR technical report yesterday. A model that tops OmniDocBench V1.5 with a 94.62 score - with only 0.9B params!
Credit where credit is due: we are genuinely excited about any research that pushes the frontier of document parsing at sub-1B scale.
Between GLM-OCR, dots.ocr, PaddleOCR, and DeepSeek, small doc parsing models are getting really good really quickly 📈
#### David Hendrickson
@TeksEdge · 1d ago
🚨 Want to parse complex PDFs with SOTA accuracy, 100% locally? 📄🔍
At just 0.9B parameters, you can drop GLM-OCR straight into LM Studio and run it on almost any machine! 🥔
🧠 0.9B total parameters
💾 Runs on < 1.5GB VRAM (or ~1GB quantized!)
💸 Zero API costs
🔒 Total data privacy
Desktop document AI is officially here. 💻⚡
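Since LM Studio exposes an OpenAI-compatible server on localhost (port 1234 by default), the local workflow the tweet describes can be sketched roughly as below. This is a sketch under assumptions: the model identifier `glm-ocr` and the transcription prompt are placeholders for illustration, not names from the GLM-OCR report — use whatever identifier LM Studio assigns after you load the model.

```python
import base64
import json
from urllib.request import Request, urlopen

# LM Studio's default local endpoint (OpenAI-compatible API).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_ocr_payload(image_bytes: bytes, model: str = "glm-ocr") -> dict:
    """Build an OpenAI-style vision request asking the model to transcribe a page.

    NOTE: "glm-ocr" is a hypothetical model identifier -- substitute the name
    LM Studio shows for the model you actually loaded.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Transcribe this document page to Markdown."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

def parse_page(image_path: str) -> str:
    """Send one page image to the local server and return the transcription."""
    with open(image_path, "rb") as f:
        payload = build_ocr_payload(f.read())
    req = Request(
        LMSTUDIO_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because everything stays on localhost, the "zero API costs, total data privacy" claims follow directly: the page image never leaves the machine.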
Mar 15, 2026, 6:16 PM
One Sentence Summary
Zhipu AI's new GLM-OCR model achieves a top score of 94.62 on OmniDocBench V1.5 with only 0.9B parameters.
Summary
Jerry Liu highlights the release of Zhipu AI's GLM-OCR technical report, praising its efficiency. The model tops the OmniDocBench V1.5 leaderboard with a score of 94.62 despite its small size (0.9B parameters). The tweet notes a significant trend where small-scale document parsing models (like GLM-OCR, dots.ocr, and DeepSeek) are rapidly improving, enabling high-quality, local, and low-cost document processing.
AI Score: 84
Influence Score: 38
Published At: Yesterday
Language: English
Tags: GLM-OCR, Zhipu AI, OCR, Document Parsing, Small Language Models