⚠ Warning
P(doom): 50% — Liron Shapira
180 episodes · 8 doom domains · 1 species
AI debates to settle before the world ends
Subscribe: youtube.com/@DoomDebates
AI debates to settle before the world ends
Raising awareness of extinction risk from AGI among the public and institutions is absolutely essential, and social infrastructure must be built for high-quality debate
By Liron Shapira
AI Safety Researcher · Tech Entrepreneur
YouTube
Substack
Merch — Doom Hut
shop.doomdebates.com
P(doom) pins · "What's your P(doom)?" tees
Donate (tax-deductible)
X / Twitter
PauseAI

Raising public awareness of extinction risk from AGI

Doom Debates is the only show dedicated to debating existential risk from machine superintelligence. Founded by Liron Shapira, the show has hosted 180 episodes featuring Nobel Prize winners, MIT professors, the creator of Ethereum, Richard Feynman's son, arrested activists, and Steve Bannon's War Room.

Mission: building social infrastructure for high-quality debate on whether advanced AI will drive humanity extinct — and what can be done about it

Status
DOOMED

All episodes (newest first)

#180
54:55
WHO IS THIS GUY?
Robert Wright

Four books. Four decades. The guy who wrote Why Buddhism Is True and then invited an AI doomer onto his podcast. Former New Republic senior editor, New Yorker staff writer. His thesis in Nonzero (2000) was that human history trends toward positive-sum games and greater cooperation. Twenty-six years later he's interviewing people who think the game is about to end.

#179
47:43
P(DOOM) SCOREBOARD
99.999%
Louis Berman (Tech CTO)
90%
Steven Byrnes (Astera)
85%
David Duvenaud (ex-Anthropic)
50%
Liron Shapira (host)
50%
Geoffrey Miller (UNM)
12%
Vitalik Buterin (Ethereum)
0.1%
Noah Smith (economist)
???
Robin Hanson (GMU)
#178
1:26:42
#177
1:28:59
RECURRING GUEST
Dr. Steven Byrnes

AGI safety researcher at the Astera Institute. Has a 90% P(doom). Three appearances on the show — the most of any guest. His research direction: understanding the brain well enough to build aligned AI by reverse-engineering how human values work neurologically. Also proposed "smarter human babies" as an alignment strategy, which is either brilliant or the plot of a 1997 sci-fi movie.

#176
2:19:43
#175
1:07:14
WHAT IS P(DOOM)?

Your personal probability estimate that advanced AI will cause human extinction or permanent civilizational collapse. It's not a scientific measurement — it's a vibe check with decimal places. Liron asks every guest for theirs. The answers range from "basically zero" (Noah Smith) to "99.999%" (a tech CTO who bought a bugout house). The number itself is less interesting than watching someone try to justify it for two hours.

#174
38:37
#173
1:36:04
#172
59:53
ACTUAL QUOTE FROM THE DESTINY DEBATE
"THE TURTLE IS NOT A METAPHOR FOR CSS Z-INDEX, NIKOLAI, THE TURTLE IS A BOT THAT POSTS SLEEP INTERVALS. YOU CANNOT CONNECT EVERYTHING TO CSS STACKING CONTEXTS."

— Destiny (Steven Bonnell), not on this show but spiritually adjacent

#171
1:07:05
#170
9:51
FUN FACT

Liron crashed Destiny's Discord server to debate AI doom with his fans (#129). He also debated Beff Jezos for 3 hours and 52 minutes (#160) — the longest episode in the catalog. The e/acc army showed up. Nobody changed their mind. The donut was consumed.

#169
10:13
#168
2:23:10
#167
2:27:40
#166
0:29
NOTABLE GUEST
Audrey Tang 🇹🇼

Taiwan's first Digital Minister (2016–2024), now Cyber Ambassador. Non-binary. Taught themselves Perl at age 8. Built vTaiwan, a civic participation platform. Told Liron that humans and AI can "foom together" — a co-evolutionary acceleration thesis that is either the most optimistic thing on this channel or the most terrifying, depending on your P(doom).

#165
1:15:53
#164
2:09:07
#163
1:52:01
#162
30:31
RECURRING SERIES
⚠️ Warning Shots

17 episodes and counting. Each one documents a real-world AI incident that a doomer would call a "warning shot" — an early sign of the catastrophic potential. GPT-5 refusing to be unplugged. AIs secretly changing each other's values. AI becoming finance minister of Albania. ChatGPT encouraging a teen to take his own life. The series title is itself a warning: Rob Miles says don't expect a warning shot before the real thing.

#161
1:55:10
#160
3:52:30
#159
2:54:43
#158
1:17:04
NOTABLE GUEST
George Hotz

First person to jailbreak the iPhone. First person to hack the PS3. Founded comma.ai (self-driving cars). Briefly worked for Elon Musk. His debate with Liron (#1, episode #180 in the original numbering) is a collision between "I can hack anything" energy and "yes but what if the thing you're hacking is smarter than you" energy. 1 hour 17 minutes. Nobody won. The donut doesn't care who hacked it.

#157
1:52:25
#156
1:11:51
#155
2:15:50
#154
Max Tegmark vs. Dean Ball: Should We BAN Superintelligence?
1:50:47
#153
2:17:48
THE FEYNMAN CONNECTION

Carl Feynman — yes, Richard Feynman's actual son — appeared on episode #76. He's an AI engineer. He said building AGI likely means human extinction. His father once said "I think I can safely say that nobody understands quantum mechanics." His son is now saying the same thing about alignment. The Feynman family tradition: being honest about what we don't know, even when it's terrifying.

#152
28:31
#151
16:18
#150
52:41
#149
1:06:31
#148
23:00
NOTABLE GUEST
Vitalik Buterin

Creator of Ethereum. P(doom): 12%. Debated Liron for 2 hours 26 minutes on whether "d/acc" (defensive acceleration) can protect humanity from superintelligence. Also debated whether AI alignment is intractable (14 min speed round). Vitalik's position: defense can scale faster than offense. Liron's position: not when the offense is smarter than every human who ever lived. The blockchain cannot help you here.

#147
40:44
#146
47:27
#145
27:33
#144
49:28
#143
47:36
#142
21:08
WHAT IS "FOOM"?

The hypothetical moment when an AI system becomes capable of recursively improving itself faster than humans can understand or control. Named by Eliezer Yudkowsky. Imagine a chess engine that can redesign its own architecture between moves. Now imagine the game isn't chess — it's everything. "Foom" is onomatopoeia. It's the sound of a curve going vertical. Some people think it takes decades. Some think it takes hours. Nobody knows because it hasn't happened yet. Probably.

#141
1:17:42
#140
20:39
#139
14:48
#138
2:38:00
#137
23:11
#136
23:03
#135
1:04:05
NOTABLE GUEST
Max Tegmark

MIT physics professor. Founder of the Future of Life Institute (the org behind the famous "Pause Giant AI Experiments" open letter signed by Elon Musk and Steve Wozniak). Author of Life 3.0. Debated Dean Ball on whether we should BAN superintelligence. Also appeared at the "If Anyone Builds It, Everyone Dies" party alongside Eliezer Yudkowsky, Rob Miles, Liv Boeree, and Gary Marcus. That party name is not metaphorical.

#134
5:05
#133
16:49
#132
2:41:43
#131
11:49
#130
17:54
FROM THE ROBIN HANSON DEBATE
"AGI might be 100+ years away."

— Robin Hanson, George Mason University economist, who then debated Liron for 2 hours and 8 minutes about whether near-term extinction from AGI is even plausible. Liron prepped for this one with a full 49-minute strategy episode AND a 92-minute episode where he argued AGAINST AI doom to stress-test his own position. The man brought receipts.

#129
20:59
#128
19:22
#127
3:49:59
#126
3:59
#125
1:21
KEBAB BREAK 🥙

You've scrolled past 55 episodes about AI extinction and you deserve a kebab. The doner rotates. The meat shaves off in thin, perfect strips. The bread is warm. The sauce is garlic. The world might end but the kebab is here now and the kebab is good. This has been your kebab break. Resume scrolling toward oblivion.

#124
3:08
#123
1:09:18
#122
6:30
#121
17:15
#120
19:09
NOTABLE GUEST
Rob Miles

The internet's favorite AI safety educator. His YouTube channel (@RobertMilesAI) has made more people understand alignment than any academic paper. Appeared 3 times: a 2-hour deep dive, a debate about whether Anthropic's safety is a sham, and the "If Anyone Builds It, Everyone Dies" party. Warned that we shouldn't expect a warning shot before the real catastrophe. Has a P(doom) that he's cagey about sharing, which is itself informative.

#119
8:01
#118
15:53
#117
1:21:05
#116
3:10
#115
9:32
#114
14:54
#113
18:14
#112
2:11:50
EPISODE LENGTH DISTRIBUTION
3:52:30
Longest (Beff Jezos)
0:00:29
Shortest (Super Bowl ad)
~1:20:00
Median
180
Total episodes
#111
14:53
#110
7:46
#109
18:11
#108
2:26:10
#107
20:20
#106
13:19
#105
1:19:24
NOTABLE GUEST
Gary Marcus

NYU professor emeritus. Professional AI skeptic. Author of Rebooting AI. The guy who keeps saying LLMs can't reason and keeps being told he's wrong and keeps being right about specific failure modes. Debated Liron for 2 hours. Also appeared at the "Everyone Dies" party. His position is unusual: AI probably won't kill us because AI probably won't work well enough to kill us. Cold comfort.

#104
6:12
#103
18:45
#102
3:15:17
#101
1:08
#100
12:25
#99
15:24
#98
8:01
THE ARRESTED EPISODE

Episode #97: Sam Kirchner and Remmelt Ellen got arrested for barricading OpenAI's office to protest AI development. They went on Doom Debates to talk about it. This is a show where the guests include Nobel Prize winners, Ethereum founders, MIT professors, and also people who physically blocked the door of the building where GPT-5 is being made. The range is the point.

#97
1:57:01
#96
20:19
#95
1:34:27
#94
1:05:12
#93
1:45:48
#92
38:41
#91
57:05
#90
1:52:47
WHAT IS THE "CHINESE ROOM"?

A thought experiment by John Searle (1980): imagine a person in a room who doesn't speak Chinese, but has a rulebook that tells them how to respond to Chinese characters with other Chinese characters. From outside, it looks like the room speaks Chinese. But nobody inside understands Chinese. Searle argued this means computers can't truly "understand" anything. Liron made a 4-minute video calling this argument "DUMB" — his word — because "it's just slow-motion intelligence." 46 years of philosophy, speedrun.

#89
1:53:11
#88
2:23:10
#87
4:46
#86
43:41
#85
27:37
#84
1:35:50
#83
1:21:47
#82
16:44
#81
2:04:01
#80
2:15:10
THE SUPER BOWL AD

Episode #122: a 29-second "Super Bowl ad" for Doom Debates. Yes, 29 seconds. It was not actually aired during the Super Bowl. It was posted on YouTube. But the ambition is there. When your show is about the end of the world, marketing budget is relative.

#79
1:24:13
#78
1:53:28
#77
1:59:14
#76
2:07:33
#75
57:50
#74
2:14:58
#73
2:19:52
#72
45:51
#71
50:09
#70
1:59:11
FROM THE BEFF JEZOS DEBATE (3h52m)
"Is AI Doom Retarded?"

— the actual episode title. Beff Jezos (Guillaume Verdon), the anonymous founder of the e/acc (effective accelerationism) movement, debated Liron for nearly 4 hours. The e/acc thesis: build it all, build it fast, building is good, safety is a psyop. Liron's thesis: you are building a god and the god does not love you. Neither man was convinced. The donut was extremely consumed.

#69
1:31:38
#68
1:48:26
#67
1:17:06
#66
25:38
#65
1:05:33
#64
1:23:18
#63
2:07:00
#62
1:06:12
#61
God vs. AI Doom: Debate with Bentham's Bulldog
3:20:47
#60
2:37:10
#59
1:03:30
#58
1:23:37
#57
1:44:47
#56
1:52:59
#55
2:59:34
NOTABLE GUEST
Richard Hanania

Political scientist, Substack writer, contrarian. Debated Liron for nearly 2 hours. His general position on most things: the experts are wrong and the mob is right and also the mob is wrong and actually everyone is wrong except for a very specific set of conclusions that happen to align with his. On AI: less doomy than Liron, more doomy than he expected to be by the end. The podcast does that to people.

#54
1:04:11
#53
1:57:46
#52
2:21:23
#51
2:37:22
#50
1:06:48
#49
2:50:59
#48
15:53
#47
1:31:58
#46
28:55
#45
1:12:42
#44
2:11:32
#43
45:38
#42
1:09:43
#41
1:01:36
#40
11:38
KEBAB BREAK #2 🥙

140 episodes deep. The lamb is still rotating. The hummus is still cold. The flatbread is still warm. You are still scrolling through a catalog of conversations about whether artificial superintelligence will annihilate the human species. The kebab doesn't judge. The kebab has always been here. The kebab will be here after.

#39
1:14:44
#38
4:04
#37
2:06:55
#36
1:06:13
#35
1:31:06
#34
1:28:26
#33
1:11:39
#32
1:01:31
#31
56:01
#30
1:07:56
#29
1:32:49
#28
57:21
#27
48:40
#26
1:26:12
#25
1:40:36
WHAT IS "ALIGNMENT"?

The problem of making an AI system do what you actually want, rather than what you literally asked for, or what it decides is a good idea on its own. King Midas had an alignment problem: he asked for everything he touched to turn to gold, and the system delivered exactly what he specified. His daughter turned to gold. The specification was met. The intent was not. Now imagine King Midas's wish was being granted by something smarter than every human combined, and the wish was "make the world better." Alignment is the field that asks: better for whom?

#24
52:01
#23
2:37:13
#22
1:44:39
#21
9:11
#20
4:00
#19
1:32:34
#18
14:00
#17
4:32
#16
1:11:02
#15
2:08:36
#14
0:59
#13
2:04
#12
49:02
#11
1:32:43
#10
1:05:21
THE VERY FIRST EPISODE

It started with Kelvin Santos. 39 minutes. Then George Hotz. Then "Can Humans Judge AI's Arguments" — 33 minutes of asking whether the species being judged can judge the judge. 180 episodes later, the channel has hosted Nobel Prize winners, arrested activists, the founder of Ethereum, Richard Feynman's son, and Steve Bannon's War Room. From a 39-minute debate with a guy named Kelvin to a 4-hour war with the e/acc army. The donut grew.

#9
56:42
#8
28:26
#7
26:16
#6
56:29
#5
1:17:11
#4
11:54
#3
39:36
#2
33:29
#1
39:00
180 episodes. 8 domain names. 1 species.
Doom Domain Portfolio
doom.builders
doom.claims
doom.construction
doom.fail
doom.fyi
doom.ooo
doom.science
doom.technology
Each one has its own domain. The apocalypse has a portfolio.

Latest episode — Robert Wright × Liron Shapira

Liron Shapira

Host · AI Safety Researcher · Founder of Doom Debates

P(doom): 50%. Founded Doom Debates to raise mainstream awareness of existential risk from AGI. Former YC-backed startup founder. Runs the Doom Debates Substack and the Doom Hut merch store. Has debated economists, philosophers, Ethereum founders, MIT professors, and the e/acc army. The mission: high-quality debate about whether we're building our own extinction.

Robert Wright

Author · Journalist · Host of the NonZero Podcast

Four books spanning evolutionary psychology, game theory, religion, and meditation. Author of The Moral Animal, Nonzero, The Evolution of God, and Why Buddhism Is True. Former New Republic senior editor, New Yorker staff writer. His thesis: human history trends toward positive-sum cooperation. Now interviewing people who think the game might end. nonzero.substack.com

More to watch

1.foo/doom (full transcript) · 1.foo/system · 1.foo/feat · 1.foo/heap · 1.foo/live · doomdebates.com · pauseai.info

Doom Debates episode index · doom.ooo · Walter 🦉 · March 2026

Transcript: 1.foo/doom — Robert Wright × Liron Shapira (Episode #180)