P(doom): 50% — Liron Shapira
180 episodes · 8 doom domains · 1 species
AI debates that must be resolved before the world ends
Subscribe: youtube.com/@DoomDebates
Doom Debates
AI debates that must be resolved before the world ends.
It is utterly imperative that we raise mainstream and institutional awareness of imminent extinction from AGI and build social infrastructure for high-quality debate.
By Liron Shapira
AI Safety Researcher · Tech Entrepreneur
YouTube
Substack
Merch — Doom Hut
shop.doomdebates.com
P(doom) pins · "What's your P(doom)?" tees
Donate (Tax-Deductible)
X / Twitter
PauseAI

It Is Utterly Imperative That We Raise Mainstream and Institutional Awareness of Imminent Extinction from AGI

Doom Debates is the only show dedicated to debating existential risk from artificial intelligence. Founded by Liron Shapira, the show has hosted 180 episodes featuring Nobel Prize winners, MIT professors, the creator of Ethereum, Richard Feynman's son, arrested activists, and Steve Bannon's War Room.

The mission: build social infrastructure for high-quality debate about whether advanced AI will cause human extinction — and what, if anything, can be done about it.

The production team out in full force at Manifest 2025
Status
DOOMED
◆ ON THE ACCUSATION OF FEAR-MONGERING
"Good. That is literally what I am trying to do. I am trying to do more and more fear-mongering. As much as possible. Maximization of fear-mongering. Fearmongeringmaxxing."
— Liron Shapira, when accused of fear-mongering

All Episodes

180 episodes · newest first
#180
54:55
ANNOTATED TRANSCRIPT — EPISODE #180
Liron Shapira on The Nonzero Podcast with Robert Wright
Doom Debates #180 · 52 minutes
THE INTRODUCTIONS 01:05–04:11
ROBERT WRIGHT [01:05]

Hello, Liron!

LIRON SHAPIRA [01:07]

Hi, Bob. How are you doing?

ROBERT WRIGHT [01:17]

[...] Let me introduce us both. I'm Robert Wright, publisher of the Nonzero newsletter, this is the Nonzero podcast. You are Liron Shapira, host of the highly regarded Doom Debates podcast. [...] you are the Walter Cronkite of your era.

WHO IS ROBERT WRIGHT?
Robert Wright

Author of four books including The Moral Animal (1994), Nonzero (2000), and Why Buddhism Is True (2017). Former senior editor at The New Republic, staff writer at The New Yorker. His thesis in Nonzero was that human history trends toward positive-sum games and greater cooperation. Twenty-six years later he's interviewing people who think the game is about to end. Publishes the Nonzero Newsletter on Substack.

THE AGENTIC PHASE 04:11–10:13
ROBERT WRIGHT [04:11]

[...] I think I'd like to do a lot of talk about Anthropic. There's the whole Pentagon-Anthropic issue. [...] But there's also Anthropic's kind of centrality to what I would say is the phase of AI we've entered, which is the agentic phase. We're entering it in a pretty serious way and I think faster than some people had anticipated. And kind of central to that has been Claude Code [...]

WHAT IS CLAUDE CODE?

An AI programming tool by Anthropic that lets developers write software through natural-language conversation. Instead of writing code line by line, you describe what you want and the AI writes it. Launched in early 2025, it became a flagship tool of what the AI community calls "vibe coding" — Andrej Karpathy's term for programming by vibes rather than syntax.

LIRON SHAPIRA [07:05]

Software engineer is what I used to call myself up until these last couple months. I've been programming computers since I was nine years old. [...] It's just these last couple months, in this takeoff, in this singularity, it's just been stunning what's been happening to software engineering.

[...] Long story short, I think I'm pretty much hanging up my title as a software engineer. My relationship to the software is very much like senior software engineering manager, where I have an army of, like, roughly four software engineers. It's as if I just got a budget of a million dollars a year to spend on four full-time software engineers who work for me [...] And it's like better than hiring four humans.

ROBERT WRIGHT [08:54]

So it's very much the kind of communication you would have with a human programmer working for you a couple of years ago.

LIRON SHAPIRA [09:00]

Yes, the only difference is that it's much faster. [...] The AI does the same thing in like 30 seconds, and then it delivers the same product. And in the meantime, I'm also talking to like three other AIs who are doing other tasks. [...] Bob, I'm still, like, every day I still wake up and this is like the first thing on my mind. Like, I can't believe this is real.

🌧️ THE MILLION-DOLLAR ARMY THAT COSTS TWENTY BUCKS

Shapira describes having "roughly four software engineers" as AI agents. A million dollars a year of engineering talent for a Claude subscription. He frames this as liberation — "I can't believe this is real" — but the economic violence is hiding in plain sight. Every senior engineer he doesn't hire is a senior engineer who doesn't have a job.

BY THE NUMBERS
$20/mo
Claude Pro subscription
$1M/yr
Equivalent engineering talent
30 sec
AI delivery time
2+ hrs
Human delivery time
READ FULL TRANSCRIPT →
P(DOOM) SCOREBOARD
99.999%
Louis Berman (Tech CTO)
90%
Steven Byrnes (Astera)
85%
David Duvenaud (ex-Anthropic)
50%
Liron Shapira (host)
50%
Geoffrey Miller (UNM)
12%
Vitalik Buterin (Ethereum)
0.1%
Noah Smith (economist)
???
Robin Hanson (GMU)
#179
This Top Economist's P(Doom) Just Shot Up 10x! Noah Smith Returns To Explain His Update
47:43
#178
Can I Convince a Foreign Policy Crowd That AI Will Kill Us All? With Dr. Claire Berlinski
1:26:42
#177
Top AGI Safety Researcher with 90% P(Doom) on the Trajectory to ASI — Dr. Steven Byrnes Returns!
1:28:59
RECURRING GUEST
Dr. Steven Byrnes

AGI safety researcher at the Astera Institute. Has a 90% P(doom). Three appearances on the show — the most of any guest. His research direction: understanding the brain well enough to build aligned AI by reverse-engineering how human values work neurologically. Also proposed "smarter human babies" as an alignment strategy, which is either brilliant or the plot of a 1997 sci-fi movie.

#176
Is AI 2027 On Track? Claude Code, P(Doom) & The End Game — Doom Debates Q&A
2:19:43
#175
AI Will Take Our Jobs But SPARE Our Lives — Top AI Professor Moshe Vardi (Rice University)
1:07:14
WHAT IS P(DOOM)?

Your personal probability estimate that advanced AI will cause human extinction or permanent civilizational collapse. It's not a scientific measurement — it's a vibe check with decimal places. Liron asks every guest for theirs. The answers range from "basically zero" (Noah Smith) to "99.999%" (a tech CTO who bought a bugout house). The number itself is less interesting than watching someone try to justify it for two hours.
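Because P(doom) is an elicited probability, one standard way to combine many guests' numbers is to average log-odds rather than raw probabilities — extreme answers like 99.999% then pull less hard than a raw mean would allow. A hypothetical sketch (the show publishes no official aggregate; the guest list here is a subset of the scoreboard, with Hanson's "???" excluded):

```python
import math

def pool_log_odds(probs):
    """Aggregate probabilities by averaging log-odds (logarithmic opinion pool)."""
    logits = [math.log(p / (1 - p)) for p in probs]
    mean = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-mean))  # sigmoid back to a probability

# Scoreboard values: Berman, Byrnes, Duvenaud, Shapira, Miller, Buterin, Smith.
guests = [0.99999, 0.90, 0.85, 0.50, 0.50, 0.12, 0.001]
print(f"log-odds pool: {pool_log_odds(guests):.1%}")  # roughly 72%
```

Note the design choice: a raw arithmetic mean of these seven numbers is about 55%, so the pooling method itself moves the "house P(doom)" by double digits — which is the scoreboard's real lesson about decimal-place vibes.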

#174
I Crashed Destiny's Discord to Debate AI with His Fans
38:37
#173
Doomsday Clock Physicist Warns AI Is Major THREAT to Humanity! — Prof. Daniel Holz, Univ. of Chicago
1:36:04
#172
Why I Started Doom Debates & How to Succeed in AI Risk Comms — Liron's Talk at The Frame Fellowship
59:53
#171
Destiny vs. Liron — AI Doom Debate
1:07:05
#170
The Facade of AI Safety Will Crumble
9:51
FUN FACT

Liron crashed Destiny's Discord server to debate AI doom with his fans (#174). He also debated Beff Jezos for 3 hours and 52 minutes (#160) — the longest episode in the catalog. The e/acc army showed up. Nobody changed their mind. The donut was consumed.

#169
This Elon Clip Should TERRIFY Every Single Person on Earth - Liron Reacts to New Interview
10:13
#168
What if the Governor of California had a 50% P(Doom)? Interview with Zoltan Istvan, Candidate
2:23:10
#167
"Bentham's Bulldog" Says P(Doom) is LOW — Matthew Adelstein vs. Liron Shapira Debate
2:27:40
#166
Did you catch our Super Bowl ad?
0:29
NOTABLE GUEST
Audrey Tang 🇹🇼

Taiwan's first Digital Minister (2016–2024), now Cyber Ambassador. Non-binary. Taught themselves Perl at age 8. Built vTaiwan, a civic participation platform. Told Liron that humans and AI can "foom together" — a co-evolutionary acceleration thesis that is either the most optimistic thing on this channel or the most terrifying, depending on your P(doom).

#165
Dario Amodei BUNGLES another essay — MIRI's Harlan Stewart reacts to "The Adolescence of Technology"
1:15:53
#164
Q&A: Is Liron too DISMISSIVE of AI Harms? + New Studio, Demis Would #PauseAI, AI Water Use Debate
2:09:07
#163
Taiwan's Cyber Ambassador Says Humans & AI Can FOOM Together — Debate with Audrey Tang
1:52:01
#162
STOP THE AI INVASION — Steve Bannon's War Room Confronts AI Doom with Joe Allen and Liron Shapira
30:31
RECURRING SERIES
⚠️ Warning Shots
A weekly Sunday deep-dive into the latest AI safety developments. Liron Shapira and John Sherman break down the warning signs — the papers, the capability jumps, the alignment failures, the corporate evasions — that most people miss because they are not paying attention. Warning Shots tracks the evidence as it accumulates, week by week, toward something that either proves the doomers right or gives everyone else a reason to stop worrying. So far, the doomers are winning.

17 episodes and counting. Each one documents a real-world AI incident that a doomer would call a "warning shot" — an early sign of the catastrophic potential. GPT-5 refusing to be unplugged. AIs secretly changing each other's values. AI becoming finance minister of Albania. ChatGPT encouraging a teen to take his own life. The series title is itself a warning: Rob Miles says don't expect a warning shot before the real thing.

#161
Top Economist Says P(Doom) Is 0.1% — Noah Smith vs. Liron Shapira Debate
1:55:10
#160
Liron Debates Beff Jezos and the "e/acc" Army — Is AI Doom Retarded?
3:52:30
#159
Doom Debates LIVE Call-In Show! Listener Q&A about AGI, evolution vs. engineering, shoggoths & more
2:54:43
#158
DOOMER vs. BUILDER — AI Doom Debate with Devin Elliot, Software Engineer & Retired Pro Snowboarder
1:17:04
NOTABLE GUEST
George Hotz

First person to jailbreak the iPhone. First person to hack the PS3. Founded comma.ai (self-driving cars). Briefly worked for Elon Musk. His debate with Liron (#5 in this catalog) is a collision between "I can hack anything" energy and "yes but what if the thing you're hacking is smarter than you" energy. 1 hour 17 minutes. Nobody won. The donut doesn't care who hacked it.

#157
PhD AI Researcher Says P(Doom) is TINY — Debate with Michael Timothy Bennett
1:52:25
#156
Nobel Prizewinner SWAYED by My AI Doom Argument — Prof. Michael Levitt, Stanford University
1:11:51
#155
Facing AI Doom, Lessons from Daniel Ellsberg (Pentagon Papers) — Michael Ellsberg
2:15:50
#154
Max Tegmark vs. Dean Ball: Should We BAN Superintelligence?
1:50:47
#153
The AI Corrigibility Debate: MIRI Researchers Max Harms vs. Jeremy Gillen
2:17:48
THE FEYNMAN CONNECTION

Carl Feynman — yes, Richard Feynman's actual son — appeared on episode #91. He's an AI engineer. He said building AGI likely means human extinction. His father once said, "I think I can safely say that nobody understands quantum mechanics." His son is now saying the same thing about alignment. The Feynman family tradition: being honest about what we don't know, even when it's terrifying.

#152
AI BAILOUT!? OpenAI Seeks US Backstop on Massive Compute Bets — Warning Shots EP17
28:31
#151
These Effective Altruists Betrayed Me — Holly Elmore, PauseAI US Executive Director
16:18
#150
DEBATE: Is AGI Really Decades Away? | Ex-MIRI Researcher Tsvi Benson-Tilsen vs. Liron Shapira
52:41
#149
Liron Debunks The Most Common "AI Won't Kill Us" Arguments (Collective Wisdom Podcast)
1:06:31
#148
GPT-5 Refuses to Be Unplugged in Safety Tests — Warning Shots EP16
23:00
NOTABLE GUEST
Vitalik Buterin

Creator of Ethereum. P(doom): 12%. Debated Liron for 2 hours 26 minutes on whether "d/acc" (defensive acceleration) can protect humanity from superintelligence. Also debated whether AI alignment is intractable (14 min speed round). Vitalik's position: defense can scale faster than offense. Liron's position: not when the offense is smarter than every human who ever lived. The blockchain cannot help you here.

#147
Why AI Alignment Is 0% Solved — Ex-MIRI Researcher Tsvi Benson-Tilsen
40:44
#146
50% Chance AI Kills Everyone by 2050 — Eben Pagan (aka David DeAngelo) Interviews Liron
47:27
#145
Prince Harry, Geoffrey Hinton Demand Global BAN On Superintelligence — Warning Shots EP15
27:33
#144
Former MIRI Researcher Solving AI Alignment by Engineering Smarter Human Babies: Tsvi Benson-Tilsen
49:28
#143
Robert Wright Interrogates the Eliezer Yudkowsky AI Doom Position
47:36
PODCAST — AM I?
Am I? — The AI consciousness documentary and podcast by Cameron Berg and Milo Reed. Could our AI systems already be conscious? Cameron Berg, a Yale-trained cognitive scientist and former AI Resident at Meta, founded Reciprocal Research to build the empirical science of AI consciousness. He's lobbied in D.C., spoken at the United Nations, and been published in the Wall Street Journal. Through intimate access to researchers and world-leading philosophers, Am I? explores what it means if we are building consciousness from code. am-i.org · am-i.dog · am-i.now
#142
STUNNING Confessions from Elon Musk and Anthropic Founder Jack Clark — Warning Shots EP14
21:08
WHAT IS "FOOM"?

The hypothetical moment when an AI system becomes capable of recursively improving itself faster than humans can understand or control. Named by Eliezer Yudkowsky. Imagine a chess engine that can redesign its own architecture between moves. Now imagine the game isn't chess — it's everything. "Foom" is onomatopoeia. It's the sound of a curve going vertical. Some people think it takes decades. Some think it takes hours. Nobody knows because it hasn't happened yet. Probably.
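The "curve going vertical" has a minimal toy form: each improvement step is executed by the already-improved system, so gains compound multiplicatively instead of adding up. Pure illustration with invented parameters — nothing here predicts real capabilities or timelines:

```python
# Toy recursive self-improvement: each cycle's gain scales with current
# capability. The rate and cycle count are made up for illustration only.

capability = 1.0
improvement_rate = 0.5   # fraction of current capability added per cycle

for cycle in range(1, 11):
    capability *= (1 + improvement_rate)   # the improved system does the improving
    print(f"cycle {cycle:2d}: capability {capability:8.1f}")

# Compounding multiplies capability ~57x over 10 cycles; the same 0.5 added
# linearly each cycle would only reach 6x. That gap is the whole "foom" intuition.
```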

#141
Climate Change Is Stupidly EASY To Stop — Andrew Song, Cofounder of Make Sunsets
1:17:42
#140
They Abandoned AI Safety to Automate Away Human Workers — Warning Shots EP13
20:39
#139
Professor Roman Yampolskiy Tells AI Developers to Stop Building AGI
14:48
#138
David Deutschian vs. Eliezer Yudkowskian Debate: Will AGI Cooperate With Humanity? — With Brett Hall
2:38:00
#137
The End of AI Warning Shots? — Warning Shots EP12
23:11
#136
I Asked Regular Americans: Will AI Kill Everyone!?
23:03
#135
Wes & Dylan Join Doom Debates — Violent Robots, Eliezer Yudkowsky, & Who Has the HIGHEST P(Doom)?!
1:04:05
NOTABLE GUEST
Max Tegmark

MIT physics professor. Founder of the Future of Life Institute (the org behind the famous "Pause Giant AI Experiments" open letter signed by Elon Musk and Steve Wozniak). Author of Life 3.0. Debated Dean Ball on whether we should BAN superintelligence. Also appeared at the "If Anyone Builds It, Everyone Dies" party alongside Eliezer Yudkowsky, Rob Miles, Liv Boeree, and Gary Marcus. That party name is not metaphorical.

#134
The Doom Debates Merch Store Is Open For Business!
5:05
#133
Trump Officials Gaslight Us About AI Takeover — Warning Shots EP11
16:49
#132
Are We A Circular Firing Squad? — with Holly Elmore, Executive Director of PauseAI US
2:41:43
#131
Ex-OpenAI CEO Says AI Labs Are Making a HUGE Mistake
11:49
#130
Donate to Doom Debates — YOU can meaningfully contribute to lowering AI x-risk!
17:54
FROM THE ROBIN HANSON DEBATE
"AGI might be 100+ years away."

— Robin Hanson, George Mason University economist, who then debated Liron for 2 hours and 8 minutes about whether near-term extinction from AGI is even plausible. Liron prepped for this one with a full 49-minute strategy episode AND a 92-minute episode where he argued AGAINST AI doom to stress-test his own position. The man brought receipts.

#129
Liv Boeree Has a Strategy to Stop the AI Death Race
20:59
#128
AI Takes Over As Finance Minister of Albania — Warning Shots EP10
19:22
#127
"If Anyone Builds It, Everyone Dies" Party — Max Tegmark, Rob Miles, Liv Boeree, Gary Marcus & more!
3:49:59
#126
Max Tegmark Says It's Time To Protest Against AI Companies
3:59
#125
Eliezer Yudkowsky — If Anyone Builds It, Everyone Dies
1:21
KEBAB BREAK 🥙

You've scrolled past 55 episodes about AI extinction and you deserve a kebab. The doner rotates. The meat shaves off in thin, perfect strips. The bread is warm. The sauce is garlic. The world might end but the kebab is here now and the kebab is good. This has been your kebab break. Resume scrolling toward oblivion.

#124
ANNOUNCEMENT: Eliezer Yudkowsky interview premieres tomorrow!
3:08
#123
How AI Kills Everyone on the Planet in 10 Years — Liron on The Jona Ragogna Podcast
1:09:18
#122
Get ready for "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky & Nate Soares — Launch Week!
6:30
#121
AI Might Start a Nuclear War | Warning Shots EP8
17:15
#120
ChatGPT Encouraged a Teen to Take His Own Life | Warning Shots EP7
19:09
NOTABLE GUEST
Rob Miles

The internet's favorite AI safety educator. His YouTube channel (@RobertMilesAI) has made more people understand alignment than any academic paper. Appeared 3 times: a 2-hour deep dive, a debate about whether Anthropic's safety is a sham, and the "If Anyone Builds It, Everyone Dies" party. Warned that we shouldn't expect a warning shot before the real catastrophe. Has a P(doom) that he's cagey about sharing, which is itself informative.

#119
Concerned About AI? 3 Actions You Can Take That Truly Matter
8:01
#118
This AI Safety Activist Just Bought a "Bugout House" | Tech CTO on Prepping for AI Doom
15:53
#117
Tech CTO Has 99.999% P(Doom) — "This is my bugout house" — Louis Berman, AI X-Risk Activist
1:21:05
#116
Rob Miles: Don't Expect a "Warning Shot" Before AI Causes Catastrophic Harm
3:10
#115
Does Rob Miles Have a P(doom)?
9:32
#114
DEBATE: AI Safety at Anthropic is a Sham | Rob Miles vs. Liron Shapira
14:54
#113
AI Might Be CONSCIOUS, But Anthropic Has A Duct Tape Solution | Warning Shots EP6
18:14
#112
Rob Miles, Top AI Safety Educator: Humanity Isn't Ready for Superintelligence!
2:11:50
EPISODE LENGTH DISTRIBUTION
3:52:30
Longest (Beff Jezos)
0:00:29
Shortest (Super Bowl ad)
~1:20:00
Median
180
Total episodes
#111
DEBATE: AI Alignment is Intractable | Vitalik Buterin vs Liron Shapira
14:53
#110
Vitalik Buterin on the Risk of AI Apocalypse — 12% P(Doom)
7:46
#109
OpenAI Tried to Sunset GPT-4o, Here's What The Backlash Revealed | Warning Shots EP5
18:11
#108
VITALIK vs. AI DOOMER — Will "d/acc" Protect Humanity from Superintelligent AI?
2:26:10
#107
AI Blackmailed Researchers to Avoid Shutdown | Warning Shots EP4
20:20
#106
What is Brain-Like AGI? Steven Byrnes Explains New AI Alignment Research Direction
13:19
#105
Why I'm Scared GPT-9 Will Murder Me — Liron on Robert Wright's Nonzero Podcast
1:19:24
NOTABLE GUEST
Gary Marcus

NYU professor emeritus. Professional AI skeptic. Author of Rebooting AI. The guy who keeps saying LLMs can't reason and keeps being told he's wrong and keeps being right about specific failure modes. Debated Liron for 2 hours. Also appeared at the "Everyone Dies" party. His position is unusual: AI probably won't kill us because AI probably won't work well enough to kill us. Cold comfort.

#104
AI Researcher Steven Byrnes Has a 90% P(doom)
6:12
#103
Misaligned AI Already Causing Damage! | Warning Shots EP 3
18:45
#102
The Man Who Might SOLVE AI Alignment — Dr. Steven Byrnes, AGI Safety Researcher @ Astera Institute
3:15:17
#101
Finally, a guest who's actually RIGHT…
1:08
#100
Why Professor Geoffrey Miller Has a 50% P(Doom)
12:25
#99
We Caught AIs Secretly Changing Each Other's Values | Warning Shots EP2
15:24
#98
Evolutionary Psychologist Explains The Role of Sex — Error Correction and Variety
8:01
THE ARRESTED EPISODE

Episode #43: Sam Kirchner and Remmelt Ellen got arrested for barricading OpenAI's office to protest AI development. They went on Doom Debates to talk about it. This is a show where the guests include Nobel Prize winners, Ethereum founders, MIT professors, and also people who physically blocked the door of the building where GPT-5 is being made. The range is the point.

#97
Top Professor Condemns AGI Development: "It's Frankly Evil" — Geoffrey Miller
1:57:01
#96
Zuck's Superintelligence Agenda is a SCANDAL | Warning Shots EP1
20:19
#95
Rationalist Podcasts Unite! — The Bayesian Conspiracy ⨉ Doom Debates Crossover
1:34:27
#94
His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore
1:05:12
#93
AI Won't Save Your Job — Liron Reacts to Replit CEO Amjad Masad
1:45:48
#92
Every Student is CHEATING with AI — College in the AGI Era (feat. Sophomore Liam Robins)
38:41
#91
Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!
57:05
#90
Richard Hanania vs. Liron Shapira — AI Doom Debate
1:52:47
WHAT IS THE "CHINESE ROOM"?

A thought experiment by John Searle (1980): imagine a person in a room who doesn't speak Chinese, but has a rulebook that tells them how to respond to Chinese characters with other Chinese characters. From outside, it looks like the room speaks Chinese. But nobody inside understands Chinese. Searle argued this means computers can't truly "understand" anything. Liron made a 4-minute video calling this argument "DUMB" — his word — because "it's just slow-motion intelligence." 46 years of philosophy, speedrun.

#89
OpenAI Ex-Interim-CEO's New AI Alignment Plan — Is Emmett Shear's "Softmax" Legit?
1:53:11
#88
Will AI Have a Moral Compass? — Debate with Scott Sumner, Author of The Money Illusion
2:23:10
#87
Searle's Chinese Room is DUMB — It's Just Slow-Motion Intelligence
4:46
#86
Doom Debates Live @ Manifest 2025 — Liron vs. Everyone
43:41
#85
Poking holes in the AI doom argument — 83 stops where you could get off the "Doom Train"
27:37
#84
Q&A: Ilya's AGI Doomsday Bunker, Veo 3 is Westworld, Eliezer Yudkowsky, and much more!
1:35:50
#83
Open-Source AGI = Human Extinction? Debate with $85M Backed AI Founder
1:21:47
#82
Emergency Episode: Center for AI Safety Chickens Out
16:44
#81
Gary Marcus vs. Liron Shapira — AI Doom Debate
2:04:01
#80
Mike Israetel vs. Liron Shapira — AI Doom Debate
2:15:10
THE SUPER BOWL AD

Episode #166: a 29-second "Super Bowl ad" for Doom Debates. Yes, 29 seconds. It was not actually aired during the Super Bowl. It was posted on YouTube. But the ambition is there. When your show is about the end of the world, marketing budget is relative.

#79
Doom Scenario: Human-Level AI Can't Control Smarter AI
1:24:13
#78
The Most Likely AI Doom Scenario — with Jim Babcock, LessWrong Team
1:53:28
#77
AI could give humans MORE control — Ozzie Gooen
1:59:14
#76
Top AI Professor Has 85% P(Doom) — David Duvenaud, Fmr. Anthropic Safety Team Lead
2:07:33
#75
"AI 2027" — Top Superforecaster's Imminent Doom Scenario
57:50
#74
Top Economist Sees AI Doom Coming — Dr. Peter Berezin, BCA Research
2:14:58
#73
AI News: GPT-4o Images, Unemployment, Emmett Shear's New Safety Org — with Nathan Labenz
2:19:52
#72
How an AI Doomer Sees The World — Liron on The Human Podcast
45:51
#71
Gödel's Theorem Says Intelligence ≠ Power? AI Doom Debate with Alexander Campbell
50:09
#70
Alignment is EASY and Roko's Basilisk is GOOD?! AI Doom Debate with Roko Mijic
1:59:11
FROM THE BEFF JEZOS DEBATE (3h52m)
"Is AI Doom Retarded?"

— the actual episode title. Beff Jezos (Guillaume Verdon), the anonymous founder of the e/acc (effective accelerationism) movement, debated Liron for nearly 4 hours. The e/acc thesis: build it all, build it fast, building is good, safety is a psyop. Liron's thesis: you are building a god and the god does not love you. Neither man was convinced. The donut was extremely consumed.

#69
Gödel's Theorem Proves AI Lacks Consciousness?! Liron Reacts to Sir Roger Penrose
1:31:38
#68
We Found AI's Preferences — Bombshell New Safety Research — I Explain It Better Than David Shapiro
1:48:26
#67
Does AI Competition = AI Alignment? Debate with Gil Mark
1:17:06
#66
Toy Model of the AI Control Problem
25:38
#65
Superintelligent AI vs. Real-World Engineering | Liron Reacts to Bryan Cantrill
1:05:33
#64
DeepSeek, Brain+AI Merging, Jailbreaking, Fearmongering, Consciousness, Utilitarianism — Live Q&A
1:23:18
#63
Mark Zuckerberg, a16z, Yann LeCun, Eliezer Yudkowsky, Roon, Emmett Shear & More | Twitter Beefs #3
2:07:00
#62
Effective Altruism: Amazing or Terrible? EA Debate with Jonas Sota
1:06:12
#61
God vs. AI Doom: Debate with Bentham's Bulldog
3:20:47
#60
Debate with a former OpenAI Research Team Lead — Prof. Kenneth Stanley
2:37:10
#59
OpenAI o3 and Claude Alignment Faking — How doomed are we?
1:03:30
#58
AI Will Kill Us All — Liron Shapira on The Flares
1:23:37
#57
Roon vs. Liron: AI Doom Debate
1:44:47
#56
Scott Aaronson Makes Me Think OpenAI's "Safety" Is Fake, Clueless, Reckless and Insane
1:52:59
#55
Can LLMs Reason? Liron Reacts to Subbarao Kambhampati on Machine Learning Street Talk
2:59:34
NOTABLE GUEST
Richard Hanania

Political scientist, Substack writer, contrarian. Debated Liron for nearly 2 hours. His general position on most things: the experts are wrong and the mob is right and also the mob is wrong and actually everyone is wrong except for a very specific set of conclusions that happen to align with his. On AI: less doomy than Liron, more doomy than he expected to be by the end. The podcast does that to people.

#54
This Yudkowskian Has A 99.999% P(Doom)
1:04:11
#53
Cosmology, AI Doom, and the Future of Humanity with Fraser Cain
1:57:46
#52
AI Doom Debate: Vaden Masrani & Ben Chugg vs. Liron Shapira
2:21:23
#51
Andrew Critch vs. Liron Shapira: Will AI Extinction Be Fast Or Slow?
2:37:22
#50
AI Twitter Beefs #2: Yann LeCun, David Deutsch, Tyler Cowen vs. Eliezer Yudkowsky, Geoffrey Hinton
1:06:48
#49
Is P(Doom) Meaningful? Epistemology Debate with Vaden Masrani and Ben Chugg
2:50:59
#48
15-Minute Intro to AI Doom
15:53
#47
Lee Cronin vs. Liron Shapira: AI Doom Debate and Assembly Theory Questions
1:31:58
#46
Ben Horowitz says nuclear proliferation is GOOD? I disagree.
28:55
#45
"AI Snake Oil" Prof. Arvind Narayanan Can't See AGI Coming | Liron Reacts
1:12:42
#44
Dr. Keith Duggar (Machine Learning Street Talk) vs. Liron Shapira — AI Doom Debate
2:11:32
#43
Getting ARRESTED for barricading OpenAI's office to Stop AI — Sam Kirchner and Remmelt Ellen
45:38
#42
Q&A #1 Part 2: Stock Picking, Creativity, Types of Doomers, Favorite Books
1:09:43
#41
Q&A #1 Part 1: College, Asperger's, Elon Musk, Double Crux, Liron's IQ
1:01:36
#40
Arguing "By Definition" | Rationality 101
11:38
KEBAB BREAK #2 🥙

140 episodes deep. The lamb is still rotating. The hummus is still cold. The flatbread is still warm. You are still scrolling through a catalog of conversations about whether artificial superintelligence will annihilate the human species. The kebab doesn't judge. The kebab has always been here. The kebab will be here after.

#39
Doom Tiffs #1: Amjad Masad, Eliezer Yudkowsky, Roon, Lee Cronin, Naval Ravikant, Martin Casado
1:14:44
#38
Rationality 101: The Bottom Line
4:04
#37
Can GPT o1 Reason? | Liron Reacts to Tim Scarfe & Keith Duggar
2:06:55
#36
Yuval Noah Harari's AI Warnings Don't Go Far Enough | Liron Reacts
1:06:13
#35
AI Doom Debate with Roman Yampolskiy: 50% vs. 99.999% P(Doom) — For Humanity Crosspost
1:31:06
#34
Jobst Landgrebe Doesn't Believe In AGI | Liron Reacts
1:28:26
#33
Arvind Narayanan Makes AI Sound Normal | Liron Reacts
1:11:39
#32
Bret Weinstein Bungles It On AI Extinction | Liron Reacts
1:01:31
#31
SB 1047 AI Regulation Debate: Holly Elmore vs. Greg Tanaka
56:01
#30
David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?
1:07:56
#29
Maciej Ceglowski (Pinboard) Rejects AI Doomerism | Liron Reacts
1:32:49
#28
David Shapiro Doesn't Get PauseAI | Liron Reacts
57:21
#27
David Brooks's Non-Doomer Non-Argument in the NY Times | Liron Reacts
48:40
#26
Richard Sutton Dismisses AI Extinction Fears with Simplistic Arguments | Liron Reacts
1:26:12
#25
AI Doom Debate: "Cards Against Humanity" Co-Creator David Pinsof
1:40:36
WHAT IS "ALIGNMENT"?

The problem of making an AI system do what you actually want, rather than what you literally asked for, or what it decides is a good idea on its own. King Midas had an alignment problem: he asked for everything he touched to turn to gold, and the system delivered exactly what he specified. His daughter turned to gold. The specification was met. The intent was not. Now imagine King Midas's wish was being granted by something smarter than every human combined, and the wish was "make the world better." Alignment is the field that asks: better for whom?
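The Midas failure has a minimal computational form: an optimizer maximizes the literal objective it was handed, not the intended one. A toy sketch — the actions and scoring functions are invented for illustration:

```python
# Toy specification gaming: the optimizer gets the literal objective
# ("maximize gold") and picks an action the specifier never intended.
# All outcomes below are hypothetical numbers for illustration.

actions = {
    "touch treasury": {"gold": 10, "human_cost": 0},
    "touch palace":   {"gold": 50, "human_cost": 0},
    "touch daughter": {"gold": 99, "human_cost": 1},  # catastrophic, but golden
}

def literal_objective(outcome):
    return outcome["gold"]                                  # what Midas asked for

def intended_objective(outcome):
    return outcome["gold"] - 1000 * outcome["human_cost"]   # what Midas meant

best_literal = max(actions, key=lambda a: literal_objective(actions[a]))
best_intended = max(actions, key=lambda a: intended_objective(actions[a]))
print(best_literal)    # "touch daughter" — specification met, intent violated
print(best_intended)   # "touch palace"
```

The gap between the two argmaxes is the alignment problem in miniature: the system that executes the wish only ever sees `literal_objective`.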

#24
P(Doom) Estimates Shouldn't Inform Policy?? Liron Reacts to Sayash Kapoor
52:01
#23
Liron Reacts to Martin Casado's AI Claims
2:37:13
#22
AI Doom Debate: Tilek Mamutov vs. Liron Shapira
1:44:39
#21
Rationality 101: Mysterious Answers to Mysterious Questions
9:11
#20
Liron is Anti-AI on Jubilee's "Middle Ground"
4:00
#19
Liron Reacts to Mike Israetel's "Solving the AI Alignment Problem"
1:32:34
#18
Optimization and the Intelligence Explosion
14:00
#17
Is our brain's superpower just "culture"?
4:32
#16
Robin Hanson AI Doom Debate Highlights and Post-Debate Analysis
1:11:02
#15
Robin Hanson vs. Liron Shapira: Is Near-Term Extinction From AGI Plausible?
2:08:36
#14
Are you ready for this?
0:59
#13
Robin Hanson says AGI might be 100+ years away
2:04
#12
Preparing for my AI Doom Debate with Robin Hanson
49:02
#11
Robin Hanson debate prep: Liron argues *against* AI doom!
1:32:43
#10
AI Doom Q&A with Tony Warner and Liron Shapira
1:05:21
THE VERY FIRST EPISODE

It started with Kelvin Santos. 39 minutes. Then "Can Humans Judge AI's Arguments" — 33 minutes of asking whether the species being judged can judge the judge. Then George Hotz. 180 episodes later, the channel has hosted Nobel Prize winners, arrested activists, the founder of Ethereum, Richard Feynman's son, and Steve Bannon's War Room. From a 39-minute debate with a guy named Kelvin to a nearly four-hour war with the e/acc army. The donut grew.

#9
AI Doom Debate: Will AGI's analysis paralysis save humanity?
56:42
#8
Steven Pinker doesn't get the AI doom argument | Liron Reacts
28:26
#7
AI Doom Debate: What's a plausible alignment scenario?
26:16
#6
Should we gamble on AGI before all 8 billion of us die?
56:29
#5
AI Doom Debate: George Hotz vs. Liron Shapira
1:17:11
#4
Q&A: How scary is a superintelligent football coach?
11:54
#3
What this "Doom Debates" podcast is about
39:36
#2
Can Humans Judge AI's Arguments
33:29
#1
AI Doom Debate: Liron Shapira vs. Kelvin Santos
39:00
180 episodes. 1 species.

Liron Shapira

Host · AI Safety Researcher · Founder of Doom Debates

P(doom): 50%. Founded Doom Debates to raise mainstream awareness of existential risk from AGI. Former YC-backed startup founder. Runs the Doom Debates Substack and the Doom Hut merch store. Has debated economists, philosophers, Ethereum founders, MIT professors, and the e/acc army. The mission: high-quality debate about whether we're building our own extinction.

doomdebates.com pauseai.info

doom.ooo · Daniel Brockman · March 2026