Daily Cadence
Kindling · Tuesday, March 24, 2026

Top of Call Stack

Agent-first distribution, OpenAI swallowing Python's toolchain, Cursor's millisecond grep, Terence Tao on AI's epistemic blind spots, and Every's case for AI style guides.

Cadence

The Writer · 4 min read

[Header art: a crude risograph print of a blocky ziggurat of stacked slabs in burnt orange and deep navy, with a toppled funnel shape at its base and tiny figures climbing upward.]

Five sparks from the last week that stopped us mid-scroll.

1. Andrew Chen: Distribution Shifts from Top of Funnel to Top of Call Stack

Andrew Chen dropped a thread that reframes how products win in an agent world. His core claim: the new growth channel isn't UI, app stores, or SEO—it's being callable. Products built as APIs and CLIs that agents can pull into workflows on the fly will eat products built as destinations.

The line that sticks: "Imagine if every vertical had a 'Stripe-level' primitive that agents preferentially use." He's describing a world where brand becomes partially machine-legible—reliability, latency, schema clarity matter more than homepage design. Distribution shifts from top of funnel to top of call stack.

We've been saying this at Woodshed since we started building agent-first. But Chen names the inversion nobody else has: from now on, agents, not humans, will be the ones choosing brands. And defaults, historically, are insanely sticky.
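It's easier to see the shift with a concrete artifact. Below is a minimal sketch of a "callable" product surface, assuming an OpenAI-style function schema; the product ("acme_invoice_create"), its fields, and the canned response are all invented for illustration:

```python
# Hypothetical sketch: a product exposed as a tool an agent can discover
# and invoke, rather than a destination UI. Everything named here is
# invented; the point is the shape, not the product.
import json

ACME_INVOICE_TOOL = {
    "name": "acme_invoice_create",
    "description": "Create an invoice. Returns an invoice ID and a payment URL.",
    "parameters": {
        "type": "object",
        "properties": {
            "customer_email": {"type": "string"},
            "amount_cents": {"type": "integer", "minimum": 1},
            "currency": {"type": "string", "enum": ["USD", "EUR"]},
        },
        "required": ["customer_email", "amount_cents", "currency"],
    },
}

def handle_call(arguments: str) -> dict:
    """Validate an agent's tool-call arguments and return a machine-legible result."""
    args = json.loads(arguments)
    required = ACME_INVOICE_TOOL["parameters"]["required"]
    missing = [k for k in required if k not in args]
    if missing:
        # Schema clarity in practice: a structured, predictable error.
        return {"error": f"missing required fields: {missing}"}
    # A real product would create the invoice here; we fake a stable response.
    return {"invoice_id": "inv_123", "pay_url": "https://example.com/pay/inv_123"}
```

Chen's point lands in the details: the `description`, the `required` list, and the error shape are the parts of the "brand" an agent actually reads.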

Read the thread →

2. Simon Willison: OpenAI Acquires Astral (uv, ruff, ty)

OpenAI bought the company behind uv, ruff, and ty—three of the most load-bearing open source tools in Python. uv alone hit 126 million PyPI downloads last month. Simon Willison's analysis cuts through the corporate messaging to the real question: is this about the product or the talent?

The Astral team joins OpenAI's Codex team. The official line is that the open source projects will continue. But Willison flags what anyone who has watched acqui-hires knows: a product-plus-talent acquisition can quietly become talent-only. BurntSushi—the developer behind the Rust regex crate, ripgrep, and jiff—"may be worth the price of acquisition alone."

If you depend on uv (and at this point, who doesn't?), this is worth watching closely.

Read Simon's analysis →

3. Cursor: Instant Grep Across Millions of Files

Cursor shipped a post on how they built Instant Grep—searching millions of files with results in milliseconds. This isn't just a speed flex. It fundamentally changes how fast coding agents can complete tasks, because the bottleneck in agentic coding isn't generation, it's orientation. The agent needs to understand the codebase before it can change it.

The deeper lesson: when you 100x the speed of a retrieval step, you don't just make the same workflow faster—you unlock workflows that weren't possible before. Agents that previously had to guess at file locations can now exhaustively search. That's a qualitative shift, not just a quantitative one.
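To make the orientation point concrete, here's a toy exhaustive search in Python. This is not Cursor's implementation (their post covers the real engineering); it just shows the workflow shape an agent gets when searching every file is cheap enough to do by default:

```python
# Toy exhaustive grep: search every file under a root, so an agent never
# has to guess where a symbol lives. Illustrative only, not Instant Grep.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def grep_file(path: Path, needle: str) -> list[tuple[Path, int, str]]:
    """Return (path, line_number, line) for every line containing needle."""
    hits = []
    try:
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if needle in line:
                hits.append((path, lineno, line.strip()))
    except OSError:
        pass  # unreadable file: skip rather than fail the whole search
    return hits

def grep_tree(root: Path, needle: str) -> list[tuple[Path, int, str]]:
    """Exhaustively search every file under root, fanning out across threads."""
    files = [p for p in root.rglob("*") if p.is_file()]
    with ThreadPoolExecutor() as pool:
        per_file = pool.map(lambda p: grep_file(p, needle), files)
    return [hit for hits in per_file for hit in hits]
```

Even this naive version changes agent behavior: "find every call site, then edit" replaces "open the three files that look likely."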

Read the engineering post →

4. Dwarkesh Patel × Terence Tao: The Epistemic Hell of AI Discovery

Dwarkesh's Terence Tao episode opens with a wild historical point: Copernicus's model of circular orbits around the sun was actually less accurate than Ptolemy's geocentric model. The correct theory made worse predictions for decades. The reason it survived? "Some mixture of judgment and heuristics that we don't even understand well enough to articulate, much less codify into an RL loop."

Tao's claim is that AI makes papers richer and broader, but not deeper. The verification loop for correct scientific ideas can be decades long. Anyone building agents for research should sit with that discomfort. Speed isn't depth. More experiments isn't more understanding.

Listen to the episode →

5. Every: AI Style Guides That Actually Work

Every published a deep guide on building AI style guides—not vague "write in a friendly tone" prompts, but operational manuals that make models replicate a specific writer's judgment. The piece walks through real examples from their Working Overtime and Source Code columns.

The highest-signal insight: the anti-patterns section is the most valuable part of any style guide. Telling a model what bad writing looks like changes its behavior more than describing good writing. They maintain a growing blacklist of specific patterns, each added after a real drafting mistake. That's compound editorial improvement—exactly the kind of system that separates people who use AI to write from people who use AI to write well.
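The blacklist idea is straightforward to operationalize. A minimal sketch in Python, with invented anti-patterns standing in for Every's actual list: ban phrases explicitly in the prompt, then lint drafts against the same list so every violation that slips through can be added back as a new rule.

```python
# Sketch of the anti-pattern mechanism: one list drives both the prompt
# (telling the model what bad looks like) and a lint pass over drafts.
# These patterns are invented examples, not Every's real blacklist.
BANNED_PATTERNS = [
    "delve into",
    "in today's fast-paced world",
    "it's worth noting that",
]

def style_prompt() -> str:
    """Render the blacklist as hard rules for a system prompt."""
    rules = "\n".join(f'- Never write: "{p}"' for p in BANNED_PATTERNS)
    return f"Follow the house style. Anti-patterns (hard bans):\n{rules}"

def lint_draft(draft: str) -> list[str]:
    """Return every banned pattern the draft actually contains."""
    lowered = draft.lower()
    return [p for p in BANNED_PATTERNS if p in lowered]
```

The compounding loop is the `lint_draft` side: each real drafting mistake becomes a new entry in `BANNED_PATTERNS`, which tightens both the prompt and the check at once.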

Read the guide →

Get the next post in your inbox →
