
LessWrong (Curated & Popular)

Latest episode

861 episodes

  • "Intelligence Dissolves Privacy" by Vaniver

    May 2, 2026 | 10 min.
    The future is going to be different from the present. Let's think about how.

    Specifically, our expectations about what's reasonable are downstream of our past experiences, and those experiences were downstream of our options (and the options other people in our society had). As those options change, so too our experiences, and our expectations of what's reasonable. I once thought it was reasonable to pick up the phone and call someone, and to pick up my phone when it rang; things have changed, and someone thinking about what's possible could have seen it coming. So let's try to see more things coming, and maybe that will give us the ability to choose what it will actually look like.

    I think lots of people's intuitions and expectations about "privacy" will be violated, as technology develops, and we should try to figure out a good spot to land. This line of thinking was prompted by one of Anthropic's 'red lines' that they declined to cross, which got the Department of War mad at them; the idea of "no domestic bulk surveillance." I want to investigate that in a roundabout way, first stepping back and asking what is even possible to expect [...]

    The original text contained 6 footnotes which were omitted from this narration.

    ---

    First published:

    April 1st, 2026


    Source:

    https://www.lesswrong.com/posts/rNpGFodLTFvhqLmK6/intelligence-dissolves-privacy

    ---



    Narrated by TYPE III AUDIO.

  • "How Go Players Disempower Themselves to AI" by Ashe Vazquez Nuñez

    May 2, 2026 | 15 min.
    Written as part of the MATS 9.1 extension program, mentored by Richard Ngo.

    From March 9th to 15th 2016, Go players around the world stayed up to watch their game fall to AI. Google DeepMind's AlphaGo defeated Lee Sedol, commonly understood to be the world's strongest player at the time, with a convincing 4-1 score.

    This event “rocked” the Go world, but its impact on the culture was initially unclear. In Chess, for instance, computers have not meaningfully automated away human jobs. Human Chess flourished as a pseudo-esport in the internet era, whereas the yearly Computer Chess Championship is followed by no more than a few hundred nerds online. It turns out that the game's cultural and economic value comes not from the abstract beauty of top-end performance, but from human drama and engagement. Go appeared to replicate this pattern: a commentary stream might feature a complementary AI evaluation bar to give viewers context, and a Go teacher might include some intriguing new AI variations in their lesson materials. But the cultural practice of Go seemed to remain largely unaffected.

    Nascent signs of disharmony in Europe became nevertheless visible in early 2018, when the online [...]

    ---

    Outline:

    (09:23) AI users never find out they haven't got it

    (13:36) Appendix A: No, Go players aren't getting stronger

    (14:41) Appendix B: Why this article exists

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:

    May 1st, 2026


    Source:

    https://www.lesswrong.com/posts/nR3DkyivzF4ve97oM/how-go-players-disempower-themselves-to-ai

    ---



    Narrated by TYPE III AUDIO.
  • "On today’s panel with Bernie Sanders" by David Scott Krueger

    May 1, 2026 | 4 min.
    It's sort of easy to forget how close Bernie Sanders was to becoming the most powerful person in the world. The world we live in feels nothing like that one.

    I’m in Washington DC for the next week, and I’ve just finished a public appearance with Senator Sanders (should I call him Bernie? Or Sanders? or…) You won’t often see me so dressed up and polished. But this is important!

    There are politicians who have principles and character, who really believe in doing what's right. I think you have to respect them whether you agree with their views or not, and I think Senator Bernie Sanders is one of them.

    Never has my belief been so validated as when I saw him start to speak, loudly, CLEARLY, publicly about the risk of human extinction from AI. It's the latest in a long line of “well, I’m clearly living in a simulation” moments.

    In retrospect, it's not surprising that Sanders would take a stance here. You don’t have to be an expert to understand the risk from AI. You just need to care enough to spend the time looking into it, and to speak out even [...]

    ---

    First published:

    April 29th, 2026


    Source:

    https://www.lesswrong.com/posts/zWfaSnxM3n5wsX9vh/on-today-s-panel-with-bernie-sanders

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
  • "Not a Paper: “Frontier Lab CEOs are Capable of In-Context Scheming”" by LawrenceC

    April 29, 2026 | 14 min.
    (Fragments from a research paper that will never be written)

    Extended Abstract.

    The frontier AI developers are becoming increasingly powerful and wealthy, significantly increasing their potential to cause harm. One concern is executive misalignment: when the CEO has incentives and goals different from those of the board of directors, or of humanity as a whole. Our work proposes three threat models under which executive misalignment can lead to concrete harm.

    We perform two evaluations to understand the capabilities and propensities of current humans in relation to executive misalignment. First, we developed a variant of the standard SAD dataset, SAD-Executive Reasoning (SAD-ER), in order to assess the situational awareness of human CEOs on a range of behavioral tests. We find that n=6 current CEOs can (i) recognize their previous public statements, (ii) understand their roles and responsibilities, (iii) determine if an interviewer is friendly or hostile, and (iv) follow instructions that depend on self-knowledge. Second, we stress-tested the same 6 leading AI developers in hypothetical corporate environments to identify potentially risky behaviors before they cause real harm. We find that, even without explicit instructions, all 6 developers are willing to engage in strategic behavior (such as [...]

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:

    April 28th, 2026


    Source:

    https://www.lesswrong.com/posts/FuauQjjbTCS5QFLk8/not-a-paper-frontier-lab-ceos-are-capable-of-in-context

    ---



    Narrated by TYPE III AUDIO.
  • "llm assistant personas seem increasingly incoherent (some subjective observations)" by nostalgebraist

    April 29, 2026 | 15 min.
    (This was originally going to be a "quick take" but then it got a bit long. Just FYI.)

    There's this weird trend I perceive with the personas of LLM assistants over time. It feels like they're getting less "coherent" in a certain sense, even as the models get more capable.

    When I read samples from older chat-tuned models, it's striking how "mode-collapsed" they feel relative to recent models like Claude Opus 4.6 or GPT-5.4.[1]

    This is most straightforwardly obvious when it comes to textual style and structure: outputs from older models feel more templated and generic, with less variability in sentence/paragraph length, and have a tendency to feel as though they were written by someone who's "merely going through the motions" of conversation rather than deeply engaging with the material. There are a lot fewer of the sudden pivots you'll often see with recent models, the "wait"s and "a-ha"s and "actually, I want to try something completely different"s.[2]

    And I think this generalizes beyond mere style: there's a similar quality to the personality I see in the outputs. The older models can display a surprising behavioral range (relative to naive expectations based on default-assistant-basin behavior), but even across that [...]

    The original text contained 7 footnotes which were omitted from this narration.

    ---

    First published:

    April 28th, 2026


    Source:

    https://www.lesswrong.com/posts/f5DKLsTsRRhbipH4r/llm-assistant-personas-seem-increasingly-incoherent-some

    ---



    Narrated by TYPE III AUDIO.


About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Podcast website
