Dwarkesh and Ilya Sutskever on What Comes After Scaling
The a16z Show
2025/12/15
Despite their impressive performance on standardized tests, today's AI models often fall short when deployed in real-world scenarios. This gap between theoretical capability and practical reliability raises fundamental questions about the path to true artificial general intelligence.
The conversation explores why AI excels on benchmarks but struggles in real-world application, highlighting limitations in generalization and overfitting to evaluation metrics. Unlike humans, who learn deeply from limited data and self-correct through intrinsic motivations, current AI relies heavily on vast datasets and lacks robust reasoning. Value functions in reinforcement learning are discussed as a more efficient alternative to pure pre-training, drawing parallels to the human emotional systems that guide behavior.

The discussion emphasizes the need for continual learning and safer deployment strategies, advocating for the gradual release of advanced AI so that society can adapt. Concerns about alignment extend beyond human values to include sentient life in general, especially since multiple superintelligences may emerge at roughly the same time.

SSI's research focuses on reliable generalization and brain-inspired principles, aiming to create systems that learn like humans. With compute scaling reaching its limits, innovation may come from small teams revisiting foundational ideas. The future could see fragmented AI specialization rather than monopolistic dominance, guided by elegant, neuroscience-rooted design philosophies.
11:57
Emotions may serve as a value function that guides human decision-making.
15:19
Value functions provide early training signals in reinforcement learning, improving efficiency before final outcomes are known.
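The point above can be made concrete with a minimal TD(0) sketch: the value estimate of the next state serves as a training signal at every step, long before the episode's final reward arrives. The gridworld, learning rate, and discount factor below are illustrative assumptions, not details from the conversation.

```python
# Minimal sketch: a value function supplying early training signals in RL.
# Environment (assumed for illustration): states 0..4 on a line; moving
# right 1-2 steps per transition; reaching state 4 yields reward 1.
import random

random.seed(0)

N = 5            # number of states; state N-1 is terminal
GAMMA = 0.9      # discount factor
ALPHA = 0.1      # learning rate
V = [0.0] * N    # value estimate per state

for episode in range(500):
    s = 0
    while s != N - 1:
        s_next = min(s + random.choice([1, 2]), N - 1)
        r = 1.0 if s_next == N - 1 else 0.0
        # TD(0) update: V[s_next] acts as an early signal, so learning
        # happens at every step rather than only at the final outcome.
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next

# States closer to the goal acquire higher value estimates.
print([round(v, 2) for v in V])
```

The key efficiency gain, as discussed in the episode, is that intermediate states get credit immediately via bootstrapped estimates instead of waiting for the trajectory to finish.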
28:08
Humans learn complex real-world skills with low data diversity due to evolutionary advantages.
38:02
SSI focuses on a research-first approach, aiming for superintelligence by default.
51:09
SSI combines human-like learning with digital scalability to achieve transformative capabilities.
58:03
It may be easier to build AI that cares about sentient life because most sentient beings in the future may be AIs.
1:07:59
Evolution hard-coded complex social desires into humans without intelligence as a prerequisite, a mystery relevant to AI alignment.
1:13:54
Strategies for AI alignment will converge as systems become more powerful.
1:17:26
An AI that learns like a human could emerge in 5 to 20 years.
1:27:10
Adversarial setups like debate may incentivize diverse AI reasoning strategies.