Ignite Insights

The Great Acceleration

An investor’s field guide to living on the knee of the curve.

Oct 01, 2025
Twelve months in AI time now feels like a century. Jensen Huang jokes about it, but his punchline has teeth: a year ago he said inference would grow not 100× or 1,000×, but a billion×—and this year he says he underestimated it. Why? Because inference stopped being “one shot.” Models learned to think—to plan, use tools, search, critique, and try again. In Jensen’s framing, we now have three scaling laws—pre‑training (learn), post‑training (practice), and inference (think). Add agents that run in parallel and you don’t just get faster answers; you get compound intelligence.

Zoom out. Ray Kurzweil’s “law of accelerating returns” says technology builds on itself, so progress stacks exponentially, not linearly. Thirty linear steps get you to 30. Thirty exponential steps get you past a billion. We are squarely at that part of the chessboard where each move doubles the grains of rice—and suddenly the emperor’s granary looks tiny.
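The arithmetic behind that claim is easy to check: thirty additive steps versus thirty doublings.

```python
# Thirty steps of linear growth: one unit of progress per step.
linear = 30 * 1

# Thirty steps of exponential growth: progress doubles each step.
exponential = 2 ** 30

print(linear)        # 30
print(exponential)   # 1073741824, just over a billion
```

Same number of steps; the gap between the two outcomes is roughly eight orders of magnitude.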

Below is the lay of the land—where acceleration is fastest, why it’s happening, what it means for builders, and how to ride it without getting flung off the curve.


From chatbots to thinking systems

A year ago, most people saw LLMs as flashy autocomplete. Today, they’re systems of models coordinating in real time: one reasons, another searches, a third writes code, a fourth verifies and rewrites. Huang’s mantra—“think before you answer”—captures the shift. The result isn’t just better outputs; it’s longer‑horizon competence. Planning. Tool use. Memory. Self‑correction.

© 2025 Team Ignite Ventures