Thoughts on AI progress (Dec 2025)

Dwarkesh Podcast
This podcast examines the current trajectory of AI development, focusing on the challenges and misconceptions surrounding scalability, economic impact, and the path to artificial general intelligence. It critiques common narratives about AI adoption and progress, particularly in reinforcement learning and real-world deployment.
The discussion highlights that while pre-training shows predictable scaling, reinforcement learning (RL) does not: public RL data is sparse and returns diminish quickly, suggesting further RL advances may require orders of magnitude more compute. Current AI systems lack on-the-job learning and context-specific adaptation, which limits their real-world utility despite genuine narrow assistance capabilities. Slow economic diffusion is not primarily an integration problem but a capability one: AI has yet to match human productivity, and if it did, adoption would be rapid and the profits evident. The perceived goal-post shifting on AGI benchmarks is justified, since true generalization remains elusive. Continual learning is framed as the next major frontier, though progress will likely be incremental rather than explosive, and no single lab is expected to dominate because knowledge replicates quickly across the field.
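The "predictable scaling" claim about pre-training refers to the empirical observation that loss falls roughly as a power law in compute, so the exponent can be estimated from small runs and extrapolated. A minimal sketch of that idea, using invented coefficients (the constants `a` and `b` below are illustrative, not real measurements):

```python
import math

# Illustrative only: pre-training loss is often modeled as a power law
# in compute, L(C) = a * C**(-b). These coefficients are invented.
a, b = 10.0, 0.05
compute = [10.0 ** e for e in range(18, 25)]   # hypothetical FLOP budgets
loss = [a * c ** (-b) for c in compute]        # synthetic "observed" losses

# Ordinary least squares on log-log data recovers the exponent b,
# which is what makes pre-training progress predictable: fit on small
# runs, then extrapolate the line to larger compute budgets.
xs = [math.log(c) for c in compute]
ys = [math.log(l) for l in loss]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(-slope, 3))  # recovers b = 0.05
```

The contrast drawn in the episode is that RL training has no comparably clean curve to extrapolate: sparse data and diminishing returns mean there is no reliable exponent to fit.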
00:00
If models were human-like learners, current RL approaches would be pointless
05:06
If AI had human-like capabilities, it would diffuse quickly because it's easier to integrate than human employees
06:36
The previous definition of AGI may have been too narrow
08:25
A million-fold compute scale-up may be needed for GPT-level RL gains