
Shopify/liquid: Performance: 53% faster parse+render, 61% fewer allocations

Simon Willison, 13th March 2026


PR from Shopify CEO Tobias Lütke against Liquid, Shopify's open source Ruby template engine that was somewhat inspired by Django when Tobi first created it back in 2006.

Tobi found dozens of new performance micro-optimizations using a variant of autoresearch, Andrej Karpathy's new system for having a coding agent run hundreds of semi-autonomous experiments to find new effective techniques for training nanochat.

Tobi's implementation started two days ago with this autoresearch.md prompt file and an autoresearch.sh script for the agent to run to execute the test suite and report on benchmark scores.
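As a rough sketch of what such a loop has to do, the shape in Ruby looks something like the following. This is a hypothetical reconstruction, not Tobi's actual autoresearch.sh: the method names, JSON fields, and the stand-in workload are all invented for illustration.

```ruby
require "json"
require "benchmark"

# Hypothetical autoresearch-style harness. A real run would shell out to
# the project's test suite and a Liquid render benchmark; here the test
# command and workload are injectable stand-ins.
def run_experiment(label, test_cmd:)
  # Gate on correctness first: an "optimization" that breaks the suite
  # is reported as a failure and discarded.
  tests_pass = system(test_cmd, out: File::NULL, err: File::NULL)
  return { "label" => label, "status" => "failed_tests" } unless tests_pass

  # One comparable number for the agent to optimize against.
  elapsed = Benchmark.realtime { yield }
  { "label" => label, "status" => "ok", "seconds" => elapsed.round(4) }
end

result = run_experiment("baseline", test_cmd: "true") { 10_000.times { |i| i.to_s } }
puts JSON.generate(result)
```

The key design point is that the agent never sees raw test output; it gets a single machine-readable line per experiment, which makes "keep or discard" a mechanical decision.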

The PR now lists 93 commits from around 120 automated experiments. The PR description lists what worked in detail - some examples:

> * Replaced StringScanner tokenizer with String#byteindex. Single-byte byteindex searching is ~40% faster than regex-based skip_until. This alone reduced parse time by ~12%.
> * Pure-byte parse_tag_token. Eliminated the costly StringScanner#string= reset that was called for every {% %} token (878 times). Manual byte scanning for tag name + markup extraction is faster than resetting and re-scanning via StringScanner. [...]
> * Cached small integer to_s. Pre-computed frozen strings for 0-999 avoid 267 Integer#to_s allocations per render.
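The first and third of those can be sketched in plain Ruby. This is an illustrative reconstruction, not the PR's actual code: `String#byteindex` requires Ruby 3.2+, and the method and constant names here are invented for the example.

```ruby
require "strscan"

# Regex-based search, roughly how a StringScanner tokenizer locates the
# next "{{" opener (illustrative, not Liquid's real lexer):
def next_tag_with_scanner(source, pos)
  scanner = StringScanner.new(source)
  scanner.pos = pos
  scanner.skip_until(/\{\{/) ? scanner.pos - 2 : nil
end

# Plain substring search with String#byteindex (Ruby 3.2+): no regex
# machinery, just a byte scan, which is what makes it faster.
def next_tag_with_byteindex(source, pos)
  source.byteindex("{{", pos)
end

next_tag_with_scanner("Hello {{ name }}", 0)    # => 6
next_tag_with_byteindex("Hello {{ name }}", 0)  # => 6

# Cached small-integer to_s: pre-computed frozen strings for 0-999 turn
# a per-render String allocation into an array lookup.
SMALL_INT_STRINGS = (0..999).map { |i| i.to_s.freeze }.freeze

def int_to_s(value)
  (0..999).cover?(value) ? SMALL_INT_STRINGS[value] : value.to_s
end
```

Both follow the same theme: replace general-purpose machinery (regexes, fresh String objects) with precomputed or byte-level equivalents on the hot path.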

This all added up to a 53% improvement on benchmarks - truly impressive for a codebase that's been tweaked by hundreds of contributors over 20 years.

I think this illustrates a number of interesting ideas:

* Having a robust test suite - in this case 974 unit tests - is a _massive unlock_ for working with coding agents. This kind of research effort would not be possible without first having a tried and tested suite of tests.
* The autoresearch pattern - where an agent brainstorms a multitude of potential improvements and then experiments with them one at a time - is really effective.
* If you provide an agent with a benchmarking script, "make it faster" becomes an actionable goal.
* CEOs can code again! Tobi has always been more hands-on than most, but this is a much more significant contribution than anyone would expect from the leader of a company with 7,500+ employees. I've seen this pattern play out a lot over the past few months: coding agents make it feasible for people in high-interruption roles to productively work with code again.
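The benchmarking-script point is worth making concrete: a minimal harness only needs to print one stable number. Everything in this sketch - the workload, the run count - is an invented placeholder, not part of Liquid's actual benchmark.

```ruby
require "benchmark"

# Stand-in workload; a real script would parse and render templates.
def workload
  (1..50_000).map { |i| "item-#{i}" }.join(",")
end

# Take the best of several runs to damp noise, and print one number the
# agent can compare across experiments.
RUNS = 5
best = RUNS.times.map { Benchmark.realtime { workload } }.min
puts format("best of %d runs: %.4fs", RUNS, best)
```

With a script like this checked in, "make it faster" turns into "make this number go down without breaking the tests" - a goal an agent can iterate on unattended.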

Here's Tobi's GitHub contribution graph for the past year, showing a significant uptick following that November 2025 inflection point when coding agents got really good.

Image: 1,658 contributions in the last year - scattered lightly through Jun, Aug, Sep, Oct and Nov and then picking up significantly in Dec, Jan, and Feb.

He used Pi as the coding agent and released a new pi-autoresearch plugin in collaboration with David Cortés, which maintains state in an autoresearch.jsonl file like this one.
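The state file is ordinary JSONL: one JSON object per experiment, appended as results come in, so a restarted agent can reload its history. A sketch of that pattern - the field names here are hypothetical, not pi-autoresearch's actual schema:

```ruby
require "json"
require "tempfile"

# Append-only JSONL log: one JSON object per line per experiment.
# Field names are hypothetical, not the real plugin's schema.
def log_experiment(path, record)
  File.open(path, "a") { |f| f.puts(JSON.generate(record)) }
end

# Reload state by parsing the file line by line.
def load_experiments(path)
  File.foreach(path).map { |line| JSON.parse(line) }
end

path = Tempfile.create(["autoresearch", ".jsonl"]).path
log_experiment(path, { "idea" => "byteindex tokenizer", "result" => "kept", "speedup" => 1.12 })
log_experiment(path, { "idea" => "regex cache", "result" => "discarded" })
puts load_experiments(path).length  # prints 2
```

Append-only JSONL is a good fit here: each experiment is independent, writes are atomic enough at one line apiece, and the agent never needs to rewrite history.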
