
Dwarkesh and Ilya Sutskever on What Comes After Scaling

The a16z Show

2025/12/15

Show Notes

AI models feel smarter than their real-world impact. They ace benchmarks, yet still struggle with reliability, strange bugs, and shallow generalization. Why is there such a gap between what they can do on paper and in practice? In this episode from The Dwa...

Highlights

Despite their impressive performance on standardized tests, today's AI models often fall short when deployed in real-world scenarios. This gap between theoretical capability and practical reliability raises fundamental questions about the path to true artificial general intelligence.
11:57
Emotions may serve as a value function that guides human decision-making.
15:19
Value functions provide early training signals in reinforcement learning, improving efficiency before final outcomes are known.
28:08
Humans learn complex real-world skills with low data diversity due to evolutionary advantages.
38:02
SSI takes a research-first approach, aiming for superintelligence by default.
51:09
SSI combines human-like learning with digital scalability to achieve transformative capabilities.
58:03
It may be easier to build AI that cares about sentient life because most sentient beings in the future may be AIs.
1:07:59
Evolution hard-coded complex social desires into humans without any intelligence guiding the process, a mystery relevant to AI alignment.
1:13:54
Strategies for AI alignment will converge as systems become more powerful.
1:17:26
An AI that learns like a human could emerge in 5 to 20 years.
1:27:10
Adversarial setups like debate may incentivize diverse AI reasoning strategies.

Chapters

Why do AI models seem smart but fail in practice?
00:00
Can value functions make AI smarter like emotions make us wiser?
15:19
Is it time to go back to basics in AI research?
22:13
What can AI learn from how humans master skills so quickly?
31:14
Should we roll out powerful AI slowly to protect society?
44:27
How will superintelligent AI reshape what we care about?
54:36
Can we really align AI with something as complex as life itself?
1:01:21
What is SSI doing differently to crack the generalization problem?
1:13:54
How close are we to an AI that learns like a human?
1:17:26
Will one AI dominate, or will many specialized minds thrive?
1:20:25
Could beauty and simplicity be the key to building better brains?
1:30:04

Transcript

Ilya Sutskever: Now that compute is big, computers are very big. In some sense, we are back to the age of research. We got to the point where we are in a world where there are more companies than ideas, by quite a bit. Now there is the Silicon Valley saying...