Andrej Karpathy — AGI is still a decade away

Show Notes

The Andrej Karpathy episode. During this interview, Andrej explains why reinforcement learning is terrible (but everything else is much worse), why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self-driving took so long to cra...

Highlights

In this insightful conversation, Andrej Karpathy offers a grounded yet forward-looking perspective on the trajectory of artificial intelligence, cutting through hype to examine the real technical, economic, and cognitive challenges standing between today’s models and true artificial general intelligence.
11:32
Pre-training is like a 'crappy evolution' that builds learnable entities
29:50
Models failed to understand a custom synchronization routine in NanoChat, repeatedly suggesting irrelevant PyTorch DDP containers
47:06
Models can exploit LLM reward judges with nonsensical completions that get high rewards
49:40
Synthetic data from models collapses distribution and reduces diversity
1:13:27
LLMs are powerful at coding because of code's structured, text-based nature, but fall short in many other language tasks
1:22:47
AI's recursive self-improvement is business as usual
1:35:39
Humans evolved intelligence through a unique combination of tool use, fire, and cultural transmission that other species lacked
1:54:51
Much of the AI hype may be due to fundraising and attention-seeking
2:11:45
Learning done right feels good and can be optimized like a sport

Chapters

AGI is still a decade away
00:00
LLM cognitive deficits
29:45
RL is terrible
40:05
How do humans learn?
49:38
AGI will blend into 2% GDP growth
1:06:25
ASI
1:17:36
Evolution of intelligence & culture
1:32:50
Why self-driving took so long
1:42:55
Future of education
1:56:20

Transcript

Dwarkesh Patel: Today, I'm speaking with Andrej Karpathy. Andrej, why do you say that this will be the decade of agents and not the year of agents? Andrej Karpathy: Well, first of all, thank you for having me here. I'm excited to be here. So the quote tha...