How DeepMind Red‑Teams AI—and Why It Matters

Show Notes

Two of Google DeepMind’s leaders—Dawn Bloxwich (Responsible Development & Innovation) and Tom Lue (VP, Frontier AI Global Affairs)—open the playbook on frontier safety and policy. Bloxwich explains how DeepMind blends structured evaluations with red‑teamin...

Highlights

In this episode of 'The Tech Download,' leaders from Google DeepMind tackle the urgent, evolving challenge of ensuring AI remains controllable, beneficial, and aligned with human values — not as a theoretical concern, but as an active engineering and policy priority.
06:21
AlphaFold exemplifies early risk anticipation in responsible AI development
17:44
A scientific, rigorous approach to safety and responsibility is needed to address AGI risks in advance.
26:57
Involving China in global AI standard-setting is crucial but challenging due to political tensions
37:01
Companies need to demonstrate AI's real benefits and positive impacts, such as WeatherNext's accurate weather forecasting, to counter scapegoating
42:48
Trump's executive order invalidated state laws on AI in the US

Chapters

How do you build powerful AI without losing control?
00:00
What happens when AI gets too persuasive — or too convincing?
11:57
Can the world agree on AI rules before it’s too late?
23:51
How are governments really responding — and what’s working?
30:30
Why does AI regulation keep falling behind reality?
39:56

Transcript

Dawn Bloxwich: A CNBC original podcast.

Arjun Kharpal: Hello and welcome to The Tech Download, a new CNBC original podcast where we unpack the tech stories that matter most.

Steve Kovach: Each season, we dive into one big theme and what it means for your...