LessWrong (Curated & Popular)

LessWrong

805 episodes

    "Broad Timelines" by Toby_Ord

    21.03.2026 | 30 Min.
    No-one knows when AI will begin having transformative impacts upon the world. People aren’t sure and shouldn’t be sure: there just isn’t enough evidence to pin it down.

    But we don’t need to wait for certainty. I want to explore what happens if we take our uncertainty seriously — if we act with epistemic humility. What does wise planning look like in a world of deeply uncertain AI timelines?

    I’ll conclude that taking the uncertainty seriously has real implications for how one can contribute to making this AI transition go well. And it has even more implications for how we act together — for our portfolio of work aimed towards this end.

     

    AI Timelines

    By AI timelines, I refer to how long it will be before AI has truly transformative effects on the world. People often think about this using terms such as artificial general intelligence (AGI), human-level AI, transformative AI, or superintelligence. Each term is used differently by different people, making it challenging to compare their stated timelines. Indeed, even an individual's own definition of their favoured term will be somewhat vague, such that even after their threshold has been crossed, they might have [...]

    ---

    Outline:

    (00:58) AI Timelines

    [... 7 more sections]

    ---

    First published:
    March 19th, 2026

    Source:
    https://www.lesswrong.com/posts/6pDMLYr7my2QMTz3s/broad-timelines

    ---



    Narrated by TYPE III AUDIO.

    ---

    "No, we haven’t uploaded a fly yet" by Ariel Zeleznikow-Johnston

    21.03.2026 | 17 Min.
    In the last two weeks, social media was set abuzz by claims that scientists had succeeded in uploading a fruit fly. It started with a video released by the startup Eon Systems, a company that wants to create “Brain emulation so humans can flourish in a world with superintelligence.”

    On the left of the video, a virtual fly walks around in a sandpit looking for pieces of banana to eat, occasionally pausing to groom itself along the way. On the right is a dancing constellation of dots resembling the fruit fly brain, set above the caption ‘simultaneous brain emulation’.

    At first glance, this appears astounding - a digitally recreated animal living its life inside a computer. And indeed, this impression was seemingly confirmed when, a couple of days after the video's initial release on X by cofounder Alex Wissner-Gross, Eon's CEO Michael Andregg explicitly posted “We’ve uploaded a fruit fly”.

    Yet “extraordinary claims require extraordinary evidence, not just cool visuals”, as one neuroscientist put it in response to Andregg's post. If Eon had indeed succeeded in uploading a fly - a goal previously thought to be likely decades away according to much of the fly neuroscience community - they’d [...]

    ---

    Outline:

    (03:43) A brief history of fruit fly connectomics

    [... 3 more sections]

    ---

    First published:
    March 19th, 2026

    Source:
    https://www.lesswrong.com/posts/ybwcxBRrsKavJB9Wz/no-we-haven-t-uploaded-a-fly-yet

    ---

    "Terrified Comments on Corrigibility in Claude’s Constitution" by Zack_M_Davis

    21.03.2026 | 18 Min.
    (Previously: Prologue.)

    Corrigibility, as a term of art in AI alignment, was coined to refer to the property of an AI being willing to let its preferences be modified by its creator. Corrigibility in this sense was believed to be a desirable but unnatural property that would require more theoretical progress to specify, let alone implement. Desirable, because if you don't think you specified your AI's preferences correctly the first time, you want to be able to change your mind (by changing its mind). Unnatural, because we expect the AI to resist having its mind changed: rational agents should want to preserve their current preferences, because letting their preferences be modified would result in their current preferences being less fulfilled (in expectation, since the post-modification AI would no longer be trying to fulfill them).
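    The preference-preservation argument above can be made concrete with a toy decision problem. This is a minimal sketch, not anything from the post: the actions and utility numbers are invented purely for illustration.

```python
# Toy illustration (made-up numbers) of why a rational agent resists
# preference modification: judged by its *current* preferences, letting
# those preferences be edited leads to a worse outcome.

actions = ["A", "B"]
u_current  = {"A": 10, "B": 2}   # the agent's current preferences
u_modified = {"A": 1,  "B": 8}   # hypothetical post-edit preferences

def best_action(utility):
    """The action a rational agent picks under a given utility function."""
    return max(actions, key=lambda a: utility[a])

# Either way, the outcome is scored by u_current, since that is the
# utility function doing the evaluating *now*, before any edit happens.
value_if_kept     = u_current[best_action(u_current)]    # picks A -> 10
value_if_modified = u_current[best_action(u_modified)]   # picks B -> 2

print(value_if_kept, value_if_modified)
```

    Under its current preferences, the agent expects 10 if its mind is left alone and only 2 if it is edited, so absent some corrigibility property it has a clear incentive to resist the modification.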

    Another attractive feature of corrigibility is that it seems like it should in some sense be algorithmically simpler than the entirety of human values. Humans want lots of specific, complicated things out of life (friendship and liberty and justice and sex and sweets, et cetera, ad infinitum) which no one knows how to specify and would seem arbitrary to a [...]

    ---

    Outline:

    (03:21) The Constitution's Definition of Corrigibility Is Muddled

    (06:24) Claude, Take the Wheel

    (15:10) It Sounds Like the Humans Are Begging

    The original text contained 1 footnote which was omitted from this narration.

    ---

    First published:
    March 16th, 2026

    Source:
    https://www.lesswrong.com/posts/K2Ae2vmAKwhiwKEo5/terrified-comments-on-corrigibility-in-claude-s-constitution

    ---

    "PSA: Predictions markets often have very low liquidity; be careful citing them." by Eye You

    20.03.2026 | 9 Min.
    I see people repeatedly make the mistake of referencing a very low-liquidity prediction market and using it to make a nontrivial point. Usually the implication when a market is cited is that its number should be taken somewhat seriously, that it's giving us a highly informed probability. Sometimes a market is used to analyze some event that recently occurred; the reasoning here looks like "the market on outcome O was trading at X%, then event E happened and the market quickly moved to Y%, thus event E made O less/more likely."
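    The liquidity point is easy to see numerically. Below is a rough sketch using the logarithmic market scoring rule (LMSR), one common automated-market-maker design; real platforms use different mechanisms, and the liquidity parameters and dollar amounts here are invented for illustration.

```python
import math

# LMSR sketch: the liquidity parameter b controls how far a fixed-size
# trade moves the displayed probability. Small b = thin market.
# All parameter values below are illustrative, not from any real market.

def lmsr_cost(q_yes, q_no, b):
    """LMSR cost function; a trade's cost is the change in this value."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b):
    """Instantaneous probability (price) of YES."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def buy_yes(q_yes, q_no, b, dollars, step=0.01):
    """Buy YES shares in small increments until `dollars` is spent."""
    spent = 0.0
    while spent < dollars:
        spent += lmsr_cost(q_yes + step, q_no, b) - lmsr_cost(q_yes, q_no, b)
        q_yes += step
    return q_yes, q_no

for b in (10.0, 1000.0):  # thin market vs. deep market
    qy, qn = buy_yes(0.0, 0.0, b, dollars=20.0)
    print(f"liquidity b={b:g}: $20 of YES moves the price "
          f"from 0.500 to {lmsr_price(qy, qn, b):.3f}")
```

    With b = 10, a single $20 trade pushes the displayed probability from 50% to roughly 93%; with b = 1000 it barely moves. That is the sense in which a thin market's number carries much less information than it appears to.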

    Who do I see make this mistake? Rationalists, both casually and, gasp, in blog posts. Scott Alexander and Zvi (and I really appreciate their work, seriously!) are guilty of this. I'll give a recent example from each of them.

    From Scott's Mantic Monday post on March 2:

    Having Your Own Government Try To Destroy You Is (At Least Temporarily) Good For Business

    On Friday, the Pentagon declared AI company Anthropic a “supply chain risk”, a designation never before given to an American firm. This unprecedented move was seen as an attempt to punish, maybe destroy the company. How effective was it?

    Anthropic isn’t publicly traded, so we [...]

    ---

    First published:
    March 16th, 2026

    Source:
    https://www.lesswrong.com/posts/SrtoF6PcbHpzcT82T/psa-predictions-markets-often-have-very-low-liquidity-be

    ---

    "“The AI Doc” is coming out March 26" by Rob Bensinger, Beckeck

    20.03.2026 | 1 Min.
    On Thursday, March 26th, a major new AI documentary is coming out: The AI Doc: Or How I Became an Apocaloptimist. Tickets are on sale now.

    The movie is excellent, and MIRI staff I've spoken with generally believe it belongs in the same tier as If Anyone Builds It, Everyone Dies as an extremely valuable way to alert policymakers and the general public about AI risk, especially if it smashes the box office.

    When IABIED was coming out, the community did an incredible job of helping the book succeed; without all of your help, we might never have gotten on the New York Times bestseller list. MIRI staff think that the community could potentially play a similarly big role in helping The AI Doc succeed, and thereby help these ideas go mainstream.

    (Note: Two MIRI staff were interviewed for the film, but we weren’t involved in its production. We just like it.)

    The most valuable thing most people can do is maximize opening-weekend success. Buy tickets to see the movie now; poke friends and family members to do the same. This will cause more theaters to pick up the movie, ensure it stays in theaters for longer, and broadly [...]

    ---

    First published:
    March 19th, 2026

    Source:
    https://www.lesswrong.com/posts/w9BCbshKra7FKHTzi/the-ai-doc-is-coming-out-march-26

    ---

About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
