
Doom Debates!

Liron Shapira
Latest Episode

145 Episodes

  • AI Alignment Is Solved?! PhD Researcher Quintin Pope vs Liron Shapira (2023 Twitter Debate)

    25.03.2026 | 1 hr 20 min
    Dr. Quintin Pope is one of the few critics of AI doomerism who is truly fluent in its concepts and arguments. In October 2023 he joined me for a debate on Twitter Spaces, where he argued that AI alignment was basically already solved.
    His “inside view” of machine learning forced me to update my position, but could he knock me off the doom train?
    Timestamps
    00:00:00 — Cold Open
    00:00:43 — Introductions
    00:01:22 — Quintin's Opening Statement
    00:02:32 — Liron's Opening Statement
    00:05:10 — Has RLHF Solved the Alignment Problem?
    00:07:52 — AI Capabilities Are Constrained by Training Data
    00:10:52 — Defining ASI and Could RLHF Align a Superintelligence?
    00:13:13 — Quintin Is More Optimistic Than OpenAI
    00:14:16 — What Is ASI in Your Mind?
    00:15:57 — AI in 5 Years (2028) & AI Coding Agents
    00:19:05 — Continuous or Discontinuous Capability Gains?
    00:19:39 — DEBATE: General Intelligence Algorithm in Humans
    00:30:02 — The Only Coherent Explanation of Humans Going to the Moon
    00:34:01 — Are We "Fully Cooked" as a General Optimizer?
    00:35:53 — Common Mistake in Forecasting Superintelligence
    00:42:22 — 'Neat' vs 'Scruffy': Will Interpretable Structure Emerge Inside Neural Nets?
    00:48:57 — Does This Disagreement Actually Matter for P(Doom)?
    00:54:33 — Thought Experiment: Could You Have Predicted a Species Would Go to the Moon?
    00:57:26 — The Basin of Attraction for Superintelligence
    00:59:35 — Does a Superintelligence Even Exist in Algorithm Space?
    01:09:59 — Closing Statements
    01:12:40 — Audience Q&A
    01:19:35 — Wrap Up
    Links
    Original Twitter Spaces debate (Quintin Pope vs. Liron Shapira) — https://x.com/i/spaces/1YpJkwOzOqEJj/peek
    Quintin Pope on Twitter/X — https://twitter.com/QuintinPope5
    Quintin Pope, Alignment Forum profile — https://www.alignmentforum.org/users/quintin-pope
    InstructGPT, Wikipedia — https://en.wikipedia.org/wiki/InstructGPT
    AIXI, Wikipedia — https://en.wikipedia.org/wiki/AIXI
    AlphaZero, Wikipedia — https://en.wikipedia.org/wiki/AlphaZero
    MuZero, Wikipedia — https://en.wikipedia.org/wiki/MuZero
    DeepMind AlphaZero and MuZero page — https://deepmind.google/research/alphazero-and-muzero/
    Midjourney — https://www.midjourney.com/
    DALL-E, Wikipedia — https://en.wikipedia.org/wiki/DALL-E
    OpenAI Superalignment announcement — https://openai.com/index/introducing-superalignment/
    Shard Theory sequence on LessWrong — https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX
    “Evolution Provides No Evidence for the Sharp Left Turn” — https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn
    “My Objections to ‘We’re All Gonna Die with Eliezer Yudkowsky’” — https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky
    “AI is Centralizing by Default; Let’s Not Make It Worse” — https://forum.effectivealtruism.org/posts/zd5inbT4kYKivincm/ai-is-centralizing-by-default-let-s-not-make-it-worse
    Singular Learning Theory, Alignment Forum sequence — https://www.alignmentforum.org/s/mqwA5FcL6SrHEQzox
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏


    Get full access to Doom Debates at lironshapira.substack.com/subscribe
  • I'm Watching AI Take Everyone's Job | Liron on Robert Wright's NonZero Podcast

    20.03.2026 | 54 min
    My new interview on Robert Wright's Nonzero Podcast, where we dive into the agentic AI explosion.
    Bob is an exceptionally sharp interviewer who connects the dots between job displacement and AI doom.
    I highly recommend becoming a premium subscriber to Bob’s Nonzero Newsletter so you can watch the Overtime segment in every interview he does — https://nonzero.org
    Our discussion continues in Overtime for premium subscribers — https://www.nonzero.org/p/early-access-the-allure-and-danger
    Links
    Nonzero Podcast on YouTube — https://www.youtube.com/@nonzero
    Robert Wright, The God Test (book, Amazon) — https://www.amazon.com/God-Test-Artificial-Intelligence-Reckoning/dp/1668061651
    Timestamps
    00:00:00 — Introduction and Today's Topics
    00:03:22 — Vibe Coding and the Agentic Revolution
    00:08:57 — The Future of Employment
    00:17:57 — Agents and What They Can Do
    00:27:59 — The "Can It" and "Will It" Framework for AI Doom
    00:30:27 — OpenClaw and Liron's Experience with AI Agents
    00:36:45 — The Case for Slowing Down AI Development
    00:43:28 — Anthropic, the Pentagon, and AI Politics
    00:48:37 — AI Safety Leadership Concerns
    00:52:06 — Closing and Overtime Tease
  • This Top Economist's P(Doom) Just Shot Up 10x! Noah Smith Returns To Explain His Update

    17.03.2026 | 47 min
    Noah Smith is an economist and author of Noahpinion, one of the most popular Substacks in the world.
    He returns to Doom Debates to share a massive update to his P(Doom), and things get a little heated.
    Timestamps
    00:00:00 — Cold Open
    00:00:41 — Welcome Back Noah Smith!
    00:01:40 — Noah's P(Doom) Update
    00:03:57 — The Chatbot-Genie-God Framework
    00:05:14 — What's Your P(Doom)™
    00:09:59 — Unpacking Noah's Update
    00:16:56 — Why Incidents of Rogue AI Lower P(Doom)
    00:20:04 — Noah's Mainline Doom Scenario: Much Worse Than COVID-19
    00:23:29 — Society Responds After Growing Pains
    00:29:25 — Agentic AI Contributed to Noah's Position
    00:31:35 — Should Yudkowsky Get Bayesian Credit?
    00:33:59 — Are We Communicating the Right Way with Policymakers?
    00:40:16 — Finding Common Ground on AI Policy
    00:47:07 — Wrap-Up: People Need to Be More Scared
    Links
    Doom Debate with Noah Smith, Part 1 — https://www.youtube.com/watch?v=AwmJ-OnK2I4
    Noah’s Twitter — https://x.com/noahpinion
    Noah’s Substack — https://noahpinion.blog
  • Talking AI Doom with Dr. Claire Berlinski & Friends

    12.03.2026 | 1 hr 26 min
    Dr. Claire Berlinski is a journalist, Oxford PhD, and author of The Cosmopolitan Globalist.
    She invited me to her weekly symposium to make the case for AI as an existential risk.
    Can we convince her sharp, skeptical audience that P(Doom) is high?
    Subscribe to The Cosmopolitan Globalist: https://claireberlinski.substack.com/
    Follow Claire on X: https://x.com/ClaireBerlinski
    “If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky & Nate Soares — https://ifanyonebuildsit.com
    Timestamps
    00:00:00 — Introduction
    00:02:10 — Welcome and Setting the Stage
    00:06:16 — Outcome Steering: The Magic of Intelligence
    00:10:40 — Collective Intelligence and the Path to ASI
    00:12:53 — The Five-Point Argument
    00:14:56 — The Alignment Problem and Control
    00:17:56 — The Genie Problem and Recursive Self-Improvement
    00:20:38 — Timeline: Five Years or Fifty?
    00:26:14 — Social Revolution and Pausing AI
    00:28:54 — Energy Constraints and Resource Limits
    00:31:23 — Morality, Empathy, and Superintelligence
    00:37:45 — How AI Is Actually Built
    00:38:31 — Computational Irreducibility and Co-Evolution
    00:44:57 — Foom and the Discontinuity Question
    00:46:44 — US-China Rivalry and the Arms Race
    00:49:36 — The Co-Evolution Argument
    00:55:36 — Alignment as Psychoanalysis
    00:57:24 — Anthropic’s “Harmless Slop” Paper
    01:00:33 — Policy Solutions: The Pause Button
    01:04:47 — Military AI and the Singularity
    01:07:10 — Cognitive Obstacles and Doom Fatigue
    01:09:07 — Why People Don’t Act
    01:13:00 — Reaching Representatives and Building a Platform
    01:17:12 — Sam Altman and the Manhattan Project Parallel
    01:19:14 — Community Building and Pause AI
    01:22:07 — Call to Action and Closing
  • How Friendly AI Will Become Deadly — Dr. Steven Byrnes (AGI Safety Researcher, Harvard Physics Postdoc) Returns!

    10.03.2026 | 1 hr 28 min
    Fan favorite Dr. Steven Byrnes returns to discuss recent AI progress and the concerning paradigm shift to "ruthless sociopath AI" he sees on the horizon.
    Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.
    Timestamps
    00:00:00 — Cold Open
    00:00:48 — Welcoming Back the Returning Champion
    00:02:38 — Research Update: What's New in the Last 6 Months
    00:04:31 — The Rise of AI Agents
    00:07:49 — What's Your P(Doom)?™
    00:13:42 — "Brain-Like AGI": The Next Generation of AI
    00:17:01 — Can LLMs Ever Match the Human Brain?
    00:31:51 — Will AI Kill Us Before It Takes Our Jobs?
    00:36:12 — Country of Geniuses in a Data Center
    00:41:34 — Why We Should Expect "Ruthless Sociopathic" ASI
    00:54:15 — Post-Training & RLVR — A "Thin Layer" of Real Intelligence
    01:02:32 — Consequentialism and the Path to Superintelligence
    01:17:02 — Airplanes vs. Rockets: An Analogy for AI
    01:24:33 — FOOM and Recursive Self-Improvement
    Links
    Steven Byrnes’ Website & Research — https://sjbyrnes.com/
    Steve’s X — https://x.com/steve47285
    Astera Institute — https://astera.org/
    “Why We Should Expect Ruthless Sociopath ASI” — https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
    Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
    Steve on LessWrong — https://www.lesswrong.com/users/steve2152
    AI 2027 — Scenario Timeline — https://ai-2027.com/
    Part 1: “The Man Who Might SOLVE AI Alignment” — https://www.youtube.com/watch?v=_ZRUq3VEAc0


About Doom Debates!

It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
