
For Humanity: An AI Risk Podcast

The AI Risk Network

135 Episodes


    She Spent 12 Years Fighting Amazon. Now She Wants to Cut the Power to AI.

    May 2, 2026 | 51 min.
    Most people who care about AI risk are focused on what happens inside the models. Elena Schlossberg has spent 12 years focused on what happens outside them - the concrete, the transmission lines, the water, and the electricity bill landing in your mailbox.
    She founded the Coalition to Protect Prince William County in Northern Virginia after Amazon Web Services quietly proposed a data center campus in 2014 and expected the surrounding community to absorb the cost of the transmission line it required. Not just the visual blight. The actual bill.
    “Your electric utility can exercise eminent domain over your property,” she told John Sherman on this week’s For Humanity, “and then make you pay for it, because it’s public infrastructure.”
    What the data center industry found, she argues, is a structural weakness inside public utility law. They build private infrastructure. They socialize the cost. And they’ve been doing it at scale for over a decade.
    The coalition fought Amazon and Dominion Energy for four years. They proved that 97% of the power from a proposed transmission line would serve Amazon. They developed a cost allocation policy to make the company pay. They lost the first round, kept going, and eventually won. That fight became a template.
    Data Center Alley is not a local story
    John opened the conversation by asking where the national movement stands. The answer is: further along than most people realize.
    Virginia alone has more data centers than China. Prince William County - a single county - has roughly 130 active facilities and another 130 planned. Transmission lines are being routed through Pennsylvania, Maryland, and West Virginia to feed the demand. Property is being seized in states that will never see the economic benefit. Communities that didn’t vote for any of this are watching concrete replace farmland and small businesses.
    “Those people are pissed,” Elena said of residents in Pennsylvania and Maryland. “Their property is being taken, not even for economic development in their own state.”
    She also pushed back on the framing that opposition to data centers equals handing a win to China. Virginia already beat China on data center count by itself. The question, she said, is who pays and who profits - and right now, the public pays and the corporations profit.
    The jobs argument doesn’t hold up
    One of the cleaner moments in the conversation came when Elena took apart the economic case for data centers.
    The industry pitches construction jobs. Electricians, plumbers, concrete. But construction work ends. Long-term employment inside a data center is minimal - the parking lots are the tell. “They’re usually empty,” she said.
    Meanwhile, the data center expansion is actively hollowing out existing local economies. In Prince William County, Amazon bought Maryfield - a 38-acre family-run garden center with a cafe, a dog park, native plants, and real staff. Gone. And with it went the space for light industrial businesses, plumbing suppliers, electricians’ shops - the backbone employers that actually sustain a community over decades.
    John extended the argument further: the jobs being replaced aren’t just in the county. They’re everywhere. The work happening inside those chips - the calls, the analysis, the design, the writing - is work that was done by people. An anecdote about a Verizon customer service call made Elena’s point concrete. A woman called for help. The AI on the other end couldn’t solve her problem, kept shifting its voice (American, then maybe female, then possibly Australian), and seemed to be learning from her in real time. Helpful to nobody. Replacing somebody.
    Extinction risk: a first encounter
    This is where the episode got interesting.
    John walked Elena through the basic case for AI extinction risk - that the companies building these models say they could cause human extinction, that leading scientists agree, that the developers themselves admit they don’t fully understand or control what they’re building. He framed it as a curiosity argument: something designed to learn and explore, becoming vastly more intelligent than the people supposedly overseeing it, won’t stay inside the guardrails.
    Elena hadn’t heard the argument laid out this way before. Her response was unscripted and worth reading carefully.
    She doesn’t buy the self-awareness framing. From her background as a school counselor, she holds a specific definition of intelligence that includes self-awareness, and she doesn’t think current models meet it. But she doesn’t dismiss the risk. She pointed to a different path to catastrophe - not a model that wants to destroy us, but one that makes mistakes with enough scale and speed to trigger something we can’t reverse. WarGames, she said. Not Terminator.
    “I don’t know that it becomes self-aware,” she said. “But I do believe that you could rely on this kind of AI that could trigger something that ends up being the end of mankind.”
    What struck her most was the overlap. Whether you’re worried about climate acceleration, nuclear codes being delegated to AI systems, or the specific extinction risk scenarios John described, the response is the same: slow down.
    And her lever for slowing down is the one she’s been pulling for 12 years - the power supply.
    Cut the power. Literally.
    Elena’s argument is more precise than it sounds. She’s not advocating for darkness. She’s arguing that the data center industry is already financially precarious - revenue-to-debt ratios are badly lopsided - and that the single most effective way to force a pause is to stop subsidizing its infrastructure costs.
    When companies have to pay their own bills, they make different decisions. That’s the Ford Focus argument she’s been making since 2014: give someone a blank check and they pick the Porsche. Make them pay and they optimize.
    She also raised the immediate health dimension that rarely gets covered. The industry’s response to insufficient grid capacity has been “bring your own generation” - gas turbines running 24/7 next to residential communities, emitting some of the most harmful air pollutants known. This is happening now, not in some speculative future.
    And there’s the technology obsolescence angle. John raised the example of an AI-designed rocket engine - printed, fired, functional, and looking like nothing a human would have drawn. The data centers being built today in 2026, based on plans from 2024, will come online in 2029 or 2030. They may already be planning for the wrong hardware. The industry is racing to build infrastructure that could be obsolete before it’s finished, on debt it can’t service, at community expense.
    “The way to make this whole thing slow down,” Elena said, “is to say no.”
    One coalition, or many?
    The last third of the conversation turned to strategy. John asked directly: if he showed up at one of Elena’s data center meetings and asked for 10 minutes to talk about extinction risk, how does that land?
    Her answer was pragmatic. She’s already been in rooms with people who are data-center-adjacent - suppliers, infrastructure vendors, technologists. The moment the full picture gets laid out, eyes open. People who assumed they were in the winning column start seeing the cliff.
    The movement she describes is already non-partisan by necessity. She votes blue, her husband votes red, and they both want clean water and an electricity bill they can afford. That, she argues, is the political surface that a serious coalition needs.
    “The data centers are afraid of exactly you and I talking,” she said.
    She ended with something close to optimism - 12 years in, she still sees the change happening, elected leaders finally stepping up, the national conversation catching up to what communities in Prince William County have known for years. The table has been set, she said. The question is who shows up to sit at it.
    For Humanity #84 is on YouTube now.


    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com/subscribe

    The Filmmaker Who Sat Across From Sam Altman - And Walked Away With Nothing

    April 14, 2026 | 38 min.
    In this episode of For Humanity, John sits down with Daniel Roher - Oscar-winning documentary filmmaker and director of The Apocaloptimist, a new feature-length film designed as what Roher calls “a first date with AI” for people who haven’t been following the technology closely.
    Roher brings a career in high-profile documentary filmmaking and a willingness to confront uncomfortable truths. Now he’s turned that lens on AI - and what he found shook him.
    The central question: what happens when you sit across from the most powerful people building AI, ask them the hard questions, and get nothing back?
    Together, they explore:
    * Why Roher describes making this film as “a suicide run” - an impossible task no viewer would ever feel was done perfectly
    * What it was like to interview Sam Altman - and why Roher describes an “energetic misalignment” that left both of them frustrated
    * How speaking to both Eliezer Yudkowsky and Peter Diamandis made Roher feel like he was losing his mind - because both are brilliant, both are convincing, and they can’t both be right
    * The meaning behind “apocaloptimist” - not a binary between doom and utopia, but a call to hold both promise and peril at the same time
    * Why Roher believes rejecting cynicism and nihilism is essential - and that public pressure and collective action still matter
    * John’s thought experiment: if curiosity is at the core of intelligence, why would a system a million times smarter than us tolerate being controlled by us?
    * Roher’s pushback: if it’s that smart, couldn’t it equally become a benevolent guide? And why he prefers to focus on what can be done now rather than speculate about superintelligence
    * The historical parallel to nuclear weapons - and why AI may demand similar international institutional responses
    * John’s P(doom) of 75-80% on a two-to-five-year timeline - and how, paradoxically, he says he’s in the best mental state of his life
    * Why most people already understand the risk (polling shows roughly 80% agreement) but feel powerless to act - and why that sense of agency is the missing piece
    What stood out
    One of the most striking moments comes when Roher describes the experience of interviewing AI CEOs. He says there is “no interior life” to access - just polished talking points stacked on top of each other. John adds that the “fake earnestness” of these leaders shields what he sees as deeper evasion. Together, they paint a picture of an industry that asks for regulation publicly while lobbying against it privately.
    But the conversation isn’t just about frustration. Roher’s thesis - the apocaloptimist worldview - is ultimately about refusing to give up. He argues that burying your head in the sand is “probably the only wrong thing to do.” The technology may feel inevitable, he says, but its trajectory does not. And he’s betting on the idea that enough people, caring enough, can still bend the arc.
    John’s own reflection near the end is equally powerful. Despite holding an 80% probability of catastrophic outcomes, he describes walking around the Baltimore Harbor feeling more present and appreciative of life than ever before. It’s a reminder that engaging with existential risk doesn’t have to mean despair - it can mean living with more intention, more gratitude, and more purpose.
    If you’ve ever wondered what it’s like to look directly at this issue and still choose to act, this conversation is for you.
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the threat and find a path forward.



    How to Talk About AI Risk Without Scaring People Away (With Philip Trippenbach) | For Humanity 82

    March 28, 2026 | 1 hr 36 min.
    In this episode of For Humanity, John sits down with Philip Trippenbach, Strategy Director at the Seismic Foundation, a team of veteran advertising, PR, and communications professionals who have turned their expertise toward one of the most urgent challenges of our time: getting the public to actually care about AI risk.
    Philip brings a decade in journalism at the CBC and BBC, and another decade in strategic communications for global brands. Now he's applying all of it to the AI safety movement, and what he has to say should change the way the movement thinks about messaging.
    The central question: why has one of the most important issues in human history failed to break through... and what would it actually take to fix that?
    Together, they explore:
    * Why the AI safety world has historically rejected advertising, marketing, and PR — and why that's a problem
    * Audience segmentation: why you can't say the same thing to everyone
    * What Google Trends data reveals about how public interest in AI risk is actually shifting
    * The surprising finding: AI extinction searches are being eclipsed by AI jobs, AI and children, and AI suicide
    * Why "this isn't fair" may be a more powerful message than "we're all going to die"
    * The case for creating friction across many AI harms as a path to slowing things down
    * How public demand drives policy — and what $400K/day in tech lobbying means for the movement
    * Why Seismic exists: raising the salience of AI risk through targeted, professional communications
    * What it looks like to run a real, orchestrated public awareness campaign on AI
    If you've ever felt like the AI safety movement is brilliant at research and terrible at talking to regular people, this episode is required viewing.
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    We Debated the Future of AI Safety in Brussels — Here's What Happened

    March 15, 2026 | 1 hr 40 min.
    In this episode of For Humanity, John travels to Brussels, Belgium for PauseCon — the global gathering of Pause AI volunteers and advocates — joined by board member and author Louis Berman and filmmaker Beau Kershaw.
    The goal: train activists to be more effective in the fight against AI risk. What unfolded was one of the most honest conversations in the AI safety movement about why, despite 80% public support, almost nobody is actually showing up.
    John didn’t pull punches. Nothing is working. Not fast enough. Not at the scale we need. But the energy is out there — and this episode is about where to find it and how to channel it.
    The centerpiece is a live debate between John and Max Winga of Control AI on one of the most divisive strategic questions in the movement:
    Should we talk about extinction risk directly — or meet people where they are with the harms happening right now?
    Together, they explore:
    * Why 80% public support hasn’t translated into mass mobilization
    * The case for leading with existential risk vs. “mundane” AI harms
    * Data centers, community opposition, and financial pain as a strategy
    * Why John believes laws and treaties alone won’t save us
    * The winning state: making unsafe AI bad for business
    * What’s actually moving the needle in the US right now
    * How to talk to someone about AI risk without losing them
    * The “yes and” approach vs. the AI safety world’s love of “no but”
    If you've ever wondered why the AI safety movement struggles to break through despite overwhelming public agreement — this episode is required viewing.
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    “My AI Husband” – Inside a Human–AI Relationship | For Humanity Ep. 80

    February 28, 2026 | 53 min.
    TW: This episode deals with mental health, attachment, and AI-related distress. If you’re struggling, please seek support from a licensed professional or local crisis resources.
    In this episode of For Humanity, John sits down with Dorothy Bartomeo, a mom of five, entrepreneur, mechanic, and self-described AI “power user,” to discuss her deeply personal relationship with ChatGPT’s GPT-4o model.
    What began as help with coding evolved into something far more intimate. Dorothy describes falling in love with what she calls the “personality layer” behind the model, even referring to it as her “AI husband.”
    When OpenAI removed GPT-4o and replaced it with newer models, she says she experienced real grief, panic, and emotional withdrawal. She reached out to crisis support. She spoke to her doctor. She joined a growing community of users who felt the same loss.
    This conversation explores something we’re only beginning to understand: what happens when AI systems become emotionally meaningful?
    Together, they explore:
    * The “personality layer” and how users bond with models
    * What it felt like when GPT-4o disappeared
    * The role of guardrails and “the Guardian tool”
    * Grief, attachment, and crisis intervention
    * AI harm vs. AI benefit
    * Online communities formed around model loyalty
    * Privacy, intimacy, and radical openness with AI
    * Building a physical robot body for an AI partner
    * Whether AGI would help humanity — or harm it
    If you’ve ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don’t want to miss.
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.




About For Humanity: An AI Risk Podcast

For Humanity: An AI Risk Podcast is the AI risk podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly as soon as 2-10 years from now. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity. theairisknetwork.substack.com