
How Foundation Models Evolved: A PhD Journey Through AI's Breakthrough Era

The a16z Show

Show Notes

The Stanford PhD who built DSPy thought he was just creating better prompts, until he realized he had accidentally invented a new paradigm that makes LLMs actually programmable. While everyone obsesses over whether LLMs will get us to AGI, Omar Khattab is s...

Highlights

While much of the AI world focuses on scaling models and chasing AGI, a quiet revolution is underway—one that rethinks how we communicate intent to machines. The real bottleneck isn't model size, but our ability to precisely guide AI behavior in complex, evolving systems.
00:00
Natural language and code are inadequate for specifying AI behavior
20:58
DSPy separates what users want to build from the ever-evolving LLMs underneath.
28:28
Signatures in DSPy encode user intent formally and are difficult to build because the system cannot assist in their creation.
45:26
Humans think imperatively, so DSPy uses an imperative shell for better alignment.
49:47
Online reinforcement learning has been supported for DSPy programs since May 2025.
52:27
Human needs will become more complex, so structured systems are necessary.
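The separation the highlights describe, declaring what you want (a signature) apart from how a model achieves it, can be sketched in plain Python. This is an illustrative analogy only, not DSPy's actual API; the `Signature` and `Module` classes and the stub implementation below are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "signature" idea: a declarative contract
# naming the inputs a task consumes and the outputs it must produce,
# kept separate from whichever model or prompt implements it.

@dataclass
class Signature:
    inputs: list[str]
    outputs: list[str]
    instructions: str = ""

@dataclass
class Module:
    """Binds a signature to a concrete, swappable implementation."""
    signature: Signature
    implementation: callable = None

    def __call__(self, **kwargs):
        # Enforce the contract: every declared input must be supplied.
        missing = [k for k in self.signature.inputs if k not in kwargs]
        if missing:
            raise ValueError(f"missing inputs: {missing}")
        return self.implementation(**kwargs)

# Declare intent once; the implementation (here a stub standing in for
# an LLM call) can evolve or be optimized without touching the contract.
qa = Signature(inputs=["question"], outputs=["answer"],
               instructions="Answer concisely.")
answer_stub = lambda question: {"answer": f"(stub answer to: {question})"}
qa_module = Module(signature=qa, implementation=answer_stub)
```

Calling `qa_module(question="What is DSPy?")` returns a dict with an `"answer"` key, while omitting a declared input raises an error: the user-facing contract stays fixed even if the implementation behind it is swapped or retrained.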

Chapters

Is bigger really better for AI—or are we missing the real problem?
00:00
Why natural language alone can't capture what we truly want from AI
10:45
How DSPy turns vague prompts into precise, programmable contracts
23:31
From machine code to C: the evolution of programming language models
35:43
Beyond prompt tuning: the rise of optimization in AI development
47:38
Why the future of AI design is declarative, not imperative
52:27
What happens when AI systems finally understand human intent?
55:24

Transcript

Speaker 1: Nobody wants intelligence, period. I want something else, right? And that something else is always specific, or at least more specific. There is this kind of observed phenomenon where if you over-engineer intelligence, you regret it. Because som...