Dwarkesh and Ilya Sutskever on What Comes After Scaling
The a16z Show
2025/12/15
Shownote
AI models feel smarter than their real-world impact suggests. They ace benchmarks, yet still struggle with reliability, strange bugs, and shallow generalization. Why is there such a gap between what they can do on paper and in practice? In this episode from The Dwa...
Highlights
Despite their impressive performance on standardized tests, today's AI models often fall short when deployed in real-world scenarios. This gap between theoretical capability and practical reliability raises fundamental questions about the path to true artificial general intelligence.
Chapters
00:00 Why do AI models seem smart but fail in practice?
15:19 Can value functions make AI smarter like emotions make us wiser?
22:13 Is it time to go back to basics in AI research?
31:14 What can AI learn from how humans master skills so quickly?
44:27 Should we roll out powerful AI slowly to protect society?
54:36 How will superintelligent AI reshape what we care about?
1:01:21 Can we really align AI with something as complex as life itself?
1:13:54 What is SSI doing differently to crack the generalization problem?
1:17:26 How close are we to an AI that learns like a human?
1:20:25 Will one AI dominate, or will many specialized minds thrive?
1:30:04 Could beauty and simplicity be the key to building better brains?
Transcript
Ilya Sutskever: Now that compute is big, computers are very big, in some sense we are back to the age of research. We got to the point where we are in a world where there are more companies than ideas, by quite a bit. Now there is the Silicon Valley saying...