scripod.com

OpenAI Co-Founder: AI Goes Parabolic! Here's What's Next | Greg Brockman

The Knowledge Project
In this candid and revealing conversation, Greg Brockman—OpenAI co-founder and former president—offers a first-person account of the organization’s founding, its pivotal technical and structural decisions, and the dramatic leadership crisis following Sam Altman’s firing.
Brockman traces OpenAI’s origins to the 2015 Napa offsite, where the team forged its decade-long technical roadmap and confronted the reality that achieving AGI required massive compute, prompting the shift from a pure nonprofit to a capped-profit structure.

He recounts the chaotic 72 hours after Altman’s ousting: his own immediate resignation, the rapid formation of a 'Phoenix' backup company at Sam’s house, the employee revolt, and Ilya Sutskever’s decisive public endorsement.

Looking forward, he emphasizes AI’s self-improving trajectory, where AI now writes much of its own code and accelerates R&D, but warns that true advantage lies in proprietary infrastructure, not static models. On access, OpenAI prioritizes broad democratization via free tiers and purpose-built data centers while navigating compute constraints through strategic infrastructure investment. He stresses that AGI must serve humanity equitably, urging regulation focused on societal benefit and privacy rather than control, and affirms that success means fulfilling OpenAI’s mission, not metrics.
00:00
Making a difference in AI would make for a well-lived life
00:49
Sam Altman realized Brockman had already decided to start an AI company
02:40
The Napa off-site produced the technical plan that has guided OpenAI for 10 years
04:25
Google DeepMind seemed to have an insurmountable advantage in the field before AlphaGo's release
04:54
Creating a for-profit entity was the only path to achieve OpenAI's AGI mission
06:05
Machines could learn semantics
08:22
By scaling PPO, OpenAI exceeded the performance of the best humans, showing that massive compute with simple algorithms works in practice
10:05
Reasoning and prediction are deeply connected to intelligence
12:00
Conflicts within the AI field take on existential weight due to the high-stakes nature of OpenAI's mission
15:44
Brockman was removed from the board immediately after learning Sam would be removed
17:50
On Sunday night, the board replaced the interim CEO, causing the company to rebel and creating chaos
19:56
Many people canceled their Thanksgiving flights and gathered at the office to be part of the historic moment
23:18
No one left for more money or better offers despite poaching attempts
23:49
Trained language models on DNA sequences to advance health research
28:03
It's painful but worthwhile to create an environment where others can do great work
28:23
Suffering is necessary to build value
32:23
AI is being applied to its own development process, making it faster
33:26
AI is now much better than humans at writing code, though humans still excel at structuring code
36:21
Technological improvements ensure AI is aligned with users' long-term goals rather than short-term satisfaction
38:08
We're in a global AI renaissance
38:40
Leading in AI is critical for the US to protect democratic values
39:49
The core advantage lies in the 'machine' that creates models, not just individual models
40:39
Training the model to have a good-looking chain of thought may lead to a loss of faithfulness
41:47
The current trend is to release preview models due to a compute-constrained world
43:38
OpenAI's core is to face reality and think about long-term implications
46:32
Dedicated data centers for specific problems like cancer research could happen this year
47:52
Empowering people with the technology is core to the mission, rather than solving problems in an ivory tower
51:32
Everyone on the planet should have access to a personal AGI that knows them well, can provide advice, and help achieve goals
59:04
Safety involves long-term thinking, including model training and feedback loops, and extends to building societal resilience for AI
1:03:59
Data centers use little water due to a closed-loop system
1:04:40
We need to solve certain problems and ensure people can feel the impact in their daily lives
1:04:45
AI is about empowerment and human agency, not just automation
1:07:17
Leaning into AI technology will be a critical skill in the future, allowing people to manage AI agents and achieve goals more easily
1:11:45
Success means achieving OpenAI's mission of making AGI benefit all of humanity