
How Foundation Models Evolved: A PhD Journey Through AI's Breakthrough Era

The a16z Show
While much of the AI world focuses on scaling models and chasing AGI, a quieter shift is underway: rethinking how we communicate intent to machines. The real bottleneck is not model size but our ability to precisely guide AI behavior in complex, evolving systems.
Omar Khattab challenges the prevailing obsession with larger language models, arguing that true progress lies in building programmable, reliable AI systems. His framework, DSPy, introduces formal abstractions such as "signatures": declarative contracts that separate user intent from model execution. This separation enables modular, composable AI applications that remain stable even as the underlying models evolve. Rather than relying on brittle natural language prompts or imperative code, DSPy blends typed structures with optimization techniques, allowing developers to specify *what* should be achieved without dictating *how*.

As frontier labs move beyond pure scaling, investments in retrieval, tool use, and agent systems point to a broader industry pivot toward structured design. The future of AI, Khattab argues, depends not on emergent magic but on engineering principles that make systems interpretable, maintainable, and aligned with complex human goals.
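The "signature as a declarative contract" idea can be sketched in plain Python. This is a conceptual illustration only, not DSPy's actual API (DSPy's real `dspy.Signature` and `dspy.Predict` work differently); the `Signature`, `Predict`, and `echo_backend` names below are invented for the sketch. The point it demonstrates: the contract names inputs and outputs, and the backend that fulfils it can be swapped without touching the contract.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A "signature" as a declarative contract: an instruction plus named
# inputs and outputs, with no mention of how any model fulfils it.
@dataclass(frozen=True)
class Signature:
    instruction: str
    inputs: tuple
    outputs: tuple

# A module binds a signature to a backend. Swapping the backend (a newer
# LLM, a different prompting strategy) leaves the contract intact.
class Predict:
    def __init__(self, signature: Signature,
                 backend: Callable[[Signature, Dict[str, str]], Dict[str, str]]):
        self.signature = signature
        self.backend = backend

    def __call__(self, **kwargs: str) -> Dict[str, str]:
        missing = set(self.signature.inputs) - set(kwargs)
        if missing:
            raise ValueError(f"missing inputs: {sorted(missing)}")
        return self.backend(self.signature, kwargs)

# Dummy backend standing in for a real LM call.
def echo_backend(sig: Signature, inputs: Dict[str, str]) -> Dict[str, str]:
    return {field: f"<{field} derived from {sorted(inputs)}>" for field in sig.outputs}

qa = Signature("Answer the question using the context.",
               inputs=("context", "question"), outputs=("answer",))
module = Predict(qa, echo_backend)
result = module(context="DSPy separates intent from execution.",
                question="What does DSPy separate?")
```

Because the signature specifies only *what* is wanted, an optimizer (or a better model) can change *how* it is achieved behind the same interface.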
00:00
Natural language and code are both inadequate for specifying AI behavior.
20:58
DSPy separates what users want to build from the evolving LLMs underneath.
28:28
Signatures encode user intent formally; they are hard to build because the system cannot assist in creating them.
45:26
Humans think imperatively, so DSPy uses an imperative shell for better alignment.
49:47
DSPy programs have supported online reinforcement learning since May 2025.
52:27
Human needs will grow more complex, so structured systems are necessary.