Eneasz Brodski and Steven Zuber host the Bayesian Conspiracy podcast, which has been running for nine years and covers rationalist topics from AI safety to social dynamics. They're both OG rationalists who've been in the community since the early LessWrong days around 2007-2010. I've been listening to their show since the beginning, and finally got to meet my podcast heroes!

In this episode, we go deep into the personal side of having a high P(Doom) — how do you actually live a good life when you think there's a 50% chance civilization ends by 2040? We also debate whether spreading doom awareness helps humanity or just makes people miserable, with Eneasz pushing back on my fearmongering approach.

We also cover my Doom Train framework for systematically walking through AI risk arguments, why most guests never change their minds during debates, the sorry state of discourse on tech Twitter, and how rationalists can communicate better with normies. Plus some great stories from the early LessWrong era, including my time sitting next to Eliezer while he wrote Harry Potter and the Methods of Rationality.

* 00:00 - Opening and introductions
* 00:43 - Origin stories: How we all got into rationalism and LessWrong
* 03:42 - Liron's incredible story: Sitting next to Eliezer while he wrote HPMOR
* 06:19 - AI awakening moments: ChatGPT, AlphaGo, and move 37
* 13:48 - Do AIs really "understand" meaning? Symbol grounding and consciousness
* 26:21 - Liron's 50% P(Doom) by 2040 and the Doom Debates mission
* 29:05 - The fearmongering debate: Does spreading doom awareness hurt people?
* 34:43 - "Would you give 95% of people 95% P(Doom)?" - The recoil problem
* 42:02 - How to live a good life with high P(Doom)
* 45:55 - Economic disruption predictions and Liron's failed unemployment forecast
* 57:19 - The Doom Debates project: 30,000 watch hours and growing
* 58:43 - The Doom Train framework: Mapping the stops where people get off
* 1:03:19 - Why guests never change their minds (and the one who did)
* 1:07:08 - Communication advice: "Zooming out" for normies
* 1:09:39 - The sorry state of arguments on tech Twitter
* 1:24:11 - Do guests get mad? The hologram effect of debates
* 1:30:11 - Show recommendations and final thoughts

Show Notes

The Bayesian Conspiracy — https://www.thebayesianconspiracy.com
Doom Debates episode with Mike Israetel — https://www.youtube.com/watch?v=RaDWSPMdM4o
Doom Debates episode with David Duvenaud — https://www.youtube.com/watch?v=mb9w7lFIHRM

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
--------
1:34:26
--------
His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore
Liam Robins is a math major at George Washington University who's diving deep into AI policy and rationalist thinking.

In Part 1, we explored how AI is transforming college life. Now in Part 2, we ride the Doom Train together to see if we can reconcile our P(Doom) estimates. 🚂

Liam starts with a P(Doom) of just 3%, but as we go through the stops on the Doom Train, something interesting happens: he actually updates his beliefs in real time!

We get into heated philosophical territory around moral realism, psychopaths, and whether intelligence naturally yields moral goodness.

By the end, Liam's P(Doom) jumps from 3% to 8% - one of the biggest belief updates I've ever witnessed on the show. We also explore his "Bayes factors" approach to forecasting, debate the reliability of superforecasters vs. AI insiders, and discuss why most AI policies should be Pareto optimal regardless of your P(Doom).

This is rationality in action: watching someone systematically examine their beliefs, engage with counterarguments, and update accordingly.

0:00 - Opening
0:42 - What's Your P(Doom)™
01:18 - Stop 1: AGI timing (15% chance it's not coming soon)
01:29 - Stop 2: Intelligence limits (1% chance AI can't exceed humans)
01:38 - Stop 3: Physical threat assessment (1% chance AI won't be dangerous)
02:14 - Stop 4: Intelligence yields moral goodness - the big debate begins
04:42 - Moral realism vs. evolutionary explanations for morality
06:43 - The psychopath problem: smart but immoral humans exist
08:50 - Game theory and why psychopaths persist in populations
10:21 - Liam's first major update: 30% down to 15-20% on moral goodness
12:05 - Stop 5: Safe AI development process (20%)
14:28 - Stop 6: Manageable capability growth (20%)
15:38 - Stop 7: AI conquest intentions - breaking down into subcategories
17:03 - Alignment by default vs. deliberate alignment efforts
19:07 - Stop 8: Superalignment tractability (20%)
20:49 - Stop 9: Post-alignment peace (80% - surprisingly optimistic)
23:53 - Stop 10: Unaligned ASI mercy (1% - "just cope")
25:47 - Stop 11: Epistemological concerns about doom predictions
27:57 - Bayes factors analysis: Why Liam goes from 38% to 3%
30:21 - Bayes factor 1: Historical precedent of doom predictions failing
33:08 - Bayes factor 2: Superforecasters think we'll be fine
39:23 - Bayes factor 3: AI insiders and government officials seem unconcerned
45:49 - Challenging the insider knowledge argument with concrete examples
48:47 - The privileged access epistemology debate
56:02 - Major update: Liam revises his Bayes factors, P(Doom) jumps to 8%
58:18 - Odds ratios vs. percentages: Why 3% to 8% is actually huge
59:14 - AI policy discussion: Pareto optimal solutions across all P(Doom) levels
1:01:59 - Why there's low-hanging fruit in AI policy regardless of your beliefs
1:04:06 - Liam's future career plans in AI policy
1:05:02 - Wrap-up and reflection on rationalist belief updating

Show Notes

* Liam Robins on Substack - https://thelimestack.substack.com/
* Liam's Doom Train post - https://thelimestack.substack.com/p/my-pdoom-is-276-heres-why
* Liam's Twitter - @liamhrobins
* Anthropic's "Alignment Faking in Large Language Models" - The paper that updated Liam's beliefs on alignment by default

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
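The "odds ratios vs. percentages" point at 58:18 is easy to check yourself. Below is a minimal sketch of the arithmetic; the helper function and variable names are mine, not anything from the episode, and the numbers are just the ones quoted above. It shows the combined Bayes factor implied by discounting an inside-view 38% down to 3%, and why the in-episode move from 3% to 8% is roughly a tripling of the odds.

```python
def odds(p: float) -> float:
    """Convert a probability to odds, e.g. 0.03 -> 0.03/0.97."""
    return p / (1 - p)

# Numbers taken from the episode description; the arithmetic below is standard
# odds bookkeeping, not a claim about how Liam actually built his estimate.
inside_view = 0.38          # P(Doom) before applying his Bayes factors
before, after = 0.03, 0.08  # P(Doom) at the start and end of the episode

# Combined Bayes factor implied by discounting 38% down to 3%: about 0.05,
# i.e. roughly a 20x discount on the odds.
combined_factor = odds(before) / odds(inside_view)

# The 3% -> 8% update expressed as an odds ratio: about 2.8x, which is why a
# "5 percentage point" change is actually a large update in Bayesian terms.
update_factor = odds(after) / odds(before)

print(f"Combined Bayes factor (38% -> 3%): {combined_factor:.3f}")
print(f"Odds ratio of the 3% -> 8% update: {update_factor:.2f}")
```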
--------
1:05:12
--------
AI Won't Save Your Job — Liron Reacts to Replit CEO Amjad Masad
Amjad Masad is the founder and CEO of Replit, a full-featured AI-powered software development platform whose revenue reportedly just shot up from $10M/yr to $100M/yr+.

Last week, he went on Joe Rogan to share his vision that "everyone will become an entrepreneur" as AI automates away traditional jobs.

In this episode, I break down why Amjad's optimistic predictions rely on abstract hand-waving rather than concrete reasoning. While Replit is genuinely impressive, his claims about AI's limitations — that AI can only "remix" and do "statistics" but can't "generalize" or create "paradigm shifts" — fall apart when applied to specific examples.

We explore the entrepreneurial bias problem, why most people can't actually become successful entrepreneurs, and how Amjad's own success stories (like quality assurance automation) actually undermine his thesis. Plus: Roger Penrose's dubious consciousness theories, the "Duplo vs. Lego" problem in abstract thinking, and why Joe Rogan invited an AI doomer the very next day.

00:00 - Opening and introduction to Amjad Masad
03:15 - "Everyone will become an entrepreneur" - the core claim
08:45 - Entrepreneurial bias: Why successful people think everyone can do what they do
15:20 - The brainstorming challenge: Human vs. AI idea generation
22:10 - "Statistical machines" and the remixing framework
28:30 - The abstraction problem: Duplos vs. Legos in reasoning
35:50 - Quantum mechanics and paradigm shifts: Why bring up Heisenberg?
42:15 - Roger Penrose, Gödel's theorem, and consciousness theories
52:30 - Creativity definitions and the moving goalposts
58:45 - The consciousness non-sequitur and Silicon Valley "hubris"
01:07:20 - Ahmad George success story: The best case for Replit
01:12:40 - Job automation and the 50% reskilling assumption
01:18:15 - Quality assurance jobs: Accidentally undermining your own thesis
01:23:30 - Online learning and the contradiction in AI capabilities
01:29:45 - Superintelligence definitions and learning in new environments
01:35:20 - Self-play limitations and literature vs. programming
01:41:10 - Marketing creativity and the Think Different campaign
01:45:45 - Human-machine collaboration and the prompting bottleneck
01:50:30 - Final analysis: Why this reasoning fails at specificity
01:58:45 - Joe Rogan's real opinion: The Roman Yampolskiy follow-up
02:02:30 - Closing thoughts

Show Notes

Source video: Amjad Masad on Joe Rogan - July 2, 2025
Roman Yampolskiy on Joe Rogan - https://www.youtube.com/watch?v=j2i9D24KQ5k
Replit - https://replit.com
Amjad's Twitter - https://x.com/amasad
Doom Debates episode where I react to Emmett Shear's Softmax - https://www.youtube.com/watch?v=CBN1E1fvh2g
Doom Debates episode where I react to Roger Penrose - https://www.youtube.com/watch?v=CBN1E1fvh2g

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
--------
1:45:48
--------
Every Student is CHEATING with AI — College in the AGI Era (feat. Sophomore Liam Robins)
Liam Robins is a math major at George Washington University who recently had his own "AGI awakening" after reading Leopold Aschenbrenner's Situational Awareness. I met him at my Manifest 2025 talk about stops on the Doom Train.

In this episode, Liam confirms what many of us suspected: pretty much everyone in college is cheating with AI now, and they're completely shameless about it.

We dive into what college looks like today: how many students are still "rawdogging" lectures, how professors are coping with widespread cheating, how the social life has changed, and what students think they'll do when they graduate.

* 00:00 - Opening
* 00:50 - Introducing Liam Robins
* 05:27 - The reality of college today: Do they still have lectures?
* 07:20 - The rise of AI-enabled cheating in assignments
* 14:00 - College as a credentialing regime vs. actual learning
* 19:50 - "Everyone is cheating their way through college" - the epidemic
* 26:00 - College social life: "It's just pure social life"
* 31:00 - Dating apps, social media, and Gen Z behavior
* 36:21 - Do students understand the singularity is near?

Show Notes

Guest:
* Liam Robins on Substack - https://thelimestack.substack.com/
* Liam's Doom Train post - https://thelimestack.substack.com/p/my-pdoom-is-276-heres-why
* Liam's Twitter - @liamrobins

Key References:
* Leopold Aschenbrenner - "Situational Awareness"
* Bryan Caplan - "The Case Against Education"
* Scott Alexander - Astral Codex Ten
* Jeffrey Ding - ChinAI Newsletter
* New York Magazine - "Everyone Is Cheating Their Way Through College"

Events & Communities:
* Manifest Conference
* LessWrong
* Eliezer Yudkowsky - "Harry Potter and the Methods of Rationality"

Previous Episodes:
* Doom Debates Live at Manifest 2025 - https://www.youtube.com/watch?v=detjIyxWG8M

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
--------
38:40
--------
Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!
Carl Feynman got his Master's in Computer Science and B.S. in Philosophy from MIT, followed by a four-decade career in AI engineering.

He's known Eliezer Yudkowsky since the '90s, and witnessed Eliezer's AI doom argument taking shape before most of us were paying any attention!

He agreed to come on the show because he supports Doom Debates' mission of raising awareness of imminent existential risk from superintelligent AI.

00:00 - Teaser
00:34 - Carl Feynman's Background
02:40 - Early Concerns About AI Doom
03:46 - Eliezer Yudkowsky and the Early AGI Community
05:10 - Accelerationist vs. Doomer Perspectives
06:03 - Mainline Doom Scenarios: Gradual Disempowerment vs. Foom
07:47 - Timeline to Doom: Point of No Return
08:45 - What's Your P(Doom)™
09:44 - Public Perception and Political Awareness of AI Risk
11:09 - AI Morality, Alignment, and Chatbots Today
13:05 - The Alignment Problem and Competing Values
15:03 - Can AI Truly Understand and Value Morality?
16:43 - Multiple Competing AIs and Resource Competition
18:42 - Alignment: Wanting vs. Being Able to Help Humanity
19:24 - Scenarios of Doom and Odds of Success
19:53 - Mainline Good Scenario: Non-Doom Outcomes
20:27 - Heaven, Utopia, and Post-Human Vision
22:19 - Gradual Disempowerment Paper and Economic Displacement
23:31 - How Humans Get Edged Out by AIs
25:07 - Can We Gaslight Superintelligent AIs?
26:38 - AI Persuasion & Social Influence as Doom Pathways
27:44 - Riding the Doom Train: Headroom Above Human Intelligence
29:46 - Orthogonality Thesis and AI Motivation
32:48 - Alignment Difficulties and Deception in AIs
34:46 - Elon Musk, Maximal Curiosity & Mike Israetel's Arguments
36:26 - Beauty and Value in a Post-Human Universe
38:12 - Multiple AIs Competing
39:31 - Space Colonization, Dyson Spheres & Hanson's "Alien Descendants"
41:13 - What Counts as Doom vs. Not Doom?
43:29 - Post-Human Civilizations and Value Function
44:49 - Expertise, Rationality, and Doomer Credibility
46:09 - Communicating Doom: Missing Mood & Public Receptiveness
47:41 - Personal Preparation vs. Belief in Imminent Doom
48:56 - Why Can't We Just Hit the Off Switch?
50:26 - The Treacherous Turn and Redundancy in AI
51:56 - Doom by Persuasion or Entertainment
53:43 - Differences with Eliezer Yudkowsky: Singleton vs. Multipolar Doom
55:22 - Why Carl Chose Doom Debates
56:18 - Liron's Outro

Show Notes

Carl's Twitter — https://x.com/carl_feynman
Carl's LessWrong — https://www.lesswrong.com/users/carl-feynman
Gradual Disempowerment — https://gradual-disempowerment.ai
The Intelligence Curse — https://intelligence-curse.ai
AI 2027 — https://ai-2027.com
Alcor cryonics — https://www.alcor.org
The LessOnline Conference — https://less.online

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I'm part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com