Greg Brockman on OpenAI's Road to AGI
Show Notes
Greg Brockman, co-founder and president of OpenAI, joins us to talk about GPT-5 and GPT-OSS, the future of software engineering, why reinforcement learning is still scaling, and how OpenAI is planning to get to AGI.
00:00 Introductions
01:04 The Evolution of Reasoning at OpenAI
04:01 Online vs Offline Learning in Language Models
06:44 Sample Efficiency and Human Curation in Reinforcement Learning
08:16 Scaling Compute and Supercritical Learning
13:21 Wall Clock Time Limitations in RL and Real-World Interactions
16:34 Experience with ARC Institute and DNA Neural Networks
19:33 Defining the GPT-5 Era
22:46 Evaluating Model Intelligence and Task Difficulty
25:06 Practical Advice for Developers Using GPT-5
31:48 Model Specs
37:21 Challenges in RL Preferences (e.g., try/catch)
39:13 Model Routing and Hybrid Architectures in GPT-5
43:58 GPT-5 Pricing and Compute Efficiency Improvements
46:04 Self-Improving Coding Agents and Tool Usage
49:11 On-Device Models and Local vs Remote Agent Systems
51:34 Engineering at OpenAI and Leveraging LLMs
54:16 Structuring Codebases and Teams for AI Optimization
55:27 The Value of Engineers in the Age of AGI
58:42 Current State of AI Research and Lab Diversity
01:01:11 OpenAI’s Prioritization and Focus Areas
01:03:05 Advice for Founders: It's Not Too Late
01:04:20 Future Outlook and Closing Thoughts
01:04:33 Time Capsule to 2045: Future of Compute and Abundance
01:07:07 Time Capsule to 2005: More Problems Will Emerge
Highlights
In this episode of the Latent Space podcast, Greg Brockman, co-founder and president of OpenAI, joins the conversation to explore the cutting-edge developments shaping the future of AI. From the evolution of reasoning in large language models to the practical applications of reinforcement learning, Brockman offers insights into how OpenAI is pushing the boundaries of what's possible in artificial intelligence.
Transcript
Alessio: Hey, everyone. Welcome to the Latent Space: The AI Engineer Podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, founder of Small AI.
swyx: Hello, hello. And we are so excited to have Greg Brockman join us.
Greg Brockman: We...
