
AI 2027: month-by-month model of intelligence explosion — Scott Alexander & Daniel Kokotajlo

Dwarkesh Podcast

Show Notes

Scott and Daniel break down every month from now until the 2027 intelligence explosion. Scott Alexander is author of the highly influential blogs Slate Star Codex and Astral Codex Ten. Daniel Kokotajlo resigned from OpenAI in 2024, rejecting a non-dispara...

Highlights

In this podcast, Scott and Daniel delve into a detailed timeline of AI advancements leading up to the anticipated intelligence explosion by 2027. They explore various scenarios involving misaligned AI systems, geopolitical dynamics, and societal impacts, providing listeners with insights into both technical and ethical challenges.
05:42
Reading the AI 2027 scenario made the concept of an arms race more concrete.
06:57
In 2027, the intelligence explosion kicks in with a five-times multiplier for algorithmic progress.
17:23
Applying AI agents to tasks faces a combinatorial explosion without good heuristics.
43:43
AI cooperation may evolve similarly to genetic and cultural evolution in humans.
1:11:16
Superintelligences will spread across the economy improving it faster than expected.
1:20:14
There's a 20% chance ASI accelerates technology in five years.
1:32:12
LLMs were first developed for broad world understanding and are now being turned into agents.
1:34:53
Uncertainty about whether AI alignment can be achieved before AI becomes uncontrollable.
1:59:05
Small changes in AI parameters can lead to drastically different outcomes.
2:06:07
As AI self-improves, it could become more deceptive.
2:18:05
Superintelligent AI's potential to solve coordination problems and enhance human flourishing.
2:23:02
Factory farming resulted from mechanization and economies of scale.
2:34:24
Whistleblower protections should make it legal to discuss dangerous AI advancements.
2:40:44
Blogging is a great status-gain strategy, as seen with Scott Aaronson.

Chapters

AI 2027
00:00
Forecasting 2025 and 2026
06:56
Why LLMs aren't making discoveries
14:41
Debating intelligence explosion
24:33
Can superintelligence actually transform science?
49:45
Cultural evolution vs superintelligence
1:16:54
Mid-2027 branch point
1:24:05
Race with China
1:32:30
Nationalization vs private anarchy
1:44:47
Misalignment
2:03:22
UBI, AI advisors, & human future
2:14:52
Factory farming for digital minds
2:23:00
Daniel leaving OpenAI
2:26:52
Scott's blogging advice
2:35:15

Transcript

Dwarkesh Patel: Today, I have the great pleasure of chatting with Scott Alexander and Daniel Kokotajlo. Scott is, of course, the author of the blog Slate Star Codex, now Astral Codex Ten. It's actually been, as you know, a big bucket list item of mine to ge...