LessWrong (Curated & Popular)

LessWrong
Latest episode

848 episodes

  • LessWrong (Curated & Popular)

    "Evil is bad, actually (Vassar and Olivia Schaefer callout post)" by plex

    April 21, 2026 | 15 min.
    Michael Vassar's strategy for saving the world is horrifyingly counterproductive. Olivia's is worse.

    A note before we start: A lot of the sources cited are people who ended up looking kinda insane. This is not a coincidence; it's apparently an explicit strategy: Apply plausibly-deniable psychological pressure to anyone who might speak up until they crack and discredit themselves by sounding crazy or taking extreme and destructive actions. Here's Brent Dill explaining it:

    (later in the conversation he tries to encourage the person he's talking with to kill herself, and threatens her with death if she posts the logs. Charming group! I hear Brent was living in Vassar's garden recently, well after he was removed from the wider community for sexual abuse.)

    Examples

    Some of the people here I knew before their interactions with Vassar's sphere to be not just mentally OK, but unusually resilient people. Prime among them is Kathy Forth.

    Prior to her suicide, Kathy and I were friends. I witnessed her fall from healthy and capable into anxiety and then paranoia as, downstream of what I believe to be genuine sexual abuse, she spiralled into a narrative and way of experiencing the world where almost everyone seemed [...]

    The original text contained 7 footnotes which were omitted from this narration.

    ---

    First published:

    April 21st, 2026


    Source:

    https://www.lesswrong.com/posts/cY7J7KSSqrhB8t3hQ/evil-is-bad-actually-vassar-and-olivia-schaefer-callout-post

    ---



    Narrated by TYPE III AUDIO.

  • LessWrong (Curated & Popular)

    "10 non-boring ways I’ve used AI in the last month" by habryka

    April 21, 2026 | 13 min.
    I use AI assistance for basically all of my work, for many hours, every day. My colleagues do the same. Recent surveys suggest >50% of Americans have used AI to help with their work in the last week. My architect recently started sending me emails that were clearly ChatGPT generated.[1]

    Despite that, I know surprisingly little about how other people use AI assistance. Or at least how people who aren't weird AI-influencers sharing their marketing courses on Twitter or LinkedIn use AI. So here is a list of 10 concrete times I have used AI in at least mildly creative ways, and how that went.

    1) Transcribe and summarize every conversation spoken in our team office

    Using an internal Lightcone application called "Omnilog", we have a microphone in our office that records all of our meetings, transcribes them via ElevenLabs, and uses Pyannote.ai for speaker identification. This was a bunch of work and is quite valuable, but probably a bit too annoying for most readers of this post to set up.

    However, the thing I am successfully using Claude Code to do is take that transcript (which often has substantial transcription and speaker-identification errors), clean it up, summarize [...]
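
    The post itself doesn't include code and drives this step through Claude Code interactively; the sketch below is only an illustration of the same cleanup-and-summarize pass, written against the Anthropic Messages API instead. The transcript file name, prompt wording, and model string are assumptions, and the upstream ElevenLabs transcription and Pyannote.ai speaker identification are not shown.

    # Illustrative sketch only -- not the post's actual setup. Assumes a raw,
    # speaker-labelled transcript (produced upstream by ElevenLabs + Pyannote.ai)
    # already exists on disk; file name and model string are placeholders.
    from anthropic import Anthropic

    client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    with open("office_meeting_raw.txt", encoding="utf-8") as f:
        raw_transcript = f.read()

    prompt = (
        "Below is an auto-generated meeting transcript with speaker labels. "
        "It contains transcription and speaker-identification errors.\n"
        "1. Fix obvious mis-transcriptions and implausible speaker switches.\n"
        "2. Then write a short summary of decisions and action items.\n\n"
        + raw_transcript
    )

    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=4000,
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.content[0].text)  # cleaned-up transcript plus summary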

    ---

    Outline:

    (00:50) 1) Transcribe and summarize every conversation spoken in our team office

    (01:56) 2) Try to automatically fix any simple bugs that anyone on the team has mentioned out loud, or complained about in Slack

    (03:13) 3) Design 20+ different design variations for nowinners.ai

    (04:09) 4) Review my LessWrong essays for factual accuracy and argue with me about their central thesis

    (05:08) 5) Remove unnecessary clauses, sentences, parentheticals and random cruft from my LessWrong posts before publishing

    (06:23) 6) Pair vibe-coding

    (08:14) 7) Mass-creating 100+ variations of Suno songs using Claude Cowork desktop control

    [... 3 more sections]

    ---

    First published:

    April 20th, 2026


    Source:

    https://www.lesswrong.com/posts/bxdwSZYxKmPBres6w/10-non-boring-ways-i-ve-used-ai-in-the-last-month

    ---



    Narrated by TYPE III AUDIO.

  • LessWrong (Curated & Popular)

    "Feel like a room has bad vibes? The lighting is probably too “spiky” or too blue" by habryka

    April 21, 2026 | 6 min.
    I have now had a few years of experience doing architectural and interior design for many spaces that people seem to really love (the most widely known being Lighthaven, but before that we also had the Lightcone Offices, and I've also played a hand in designing some of the most popular areas at Constellation a few years back).

    Most people (including me a few years back) have surprisingly bad introspective access into why a room makes them feel certain things. Most of the time, people's ability to describe the effect of a space on them is as shallow as "this place feels artificial", or "this place has bad vibes", or "this place feels cozy". And if they try to figure out why that is true, they quickly run into limits of their introspective access.

    The most common reason a space feels bad is that it is lit by low-quality lights.

    Our eyes evolved to see things illuminated by sunlight. Correspondingly, it appears that the best proxy we have for whether the light in a room "works" is how similar the light in that room is to natural sunlight. The most popular way of measuring how much light differs from [...]
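
    The excerpt cuts off before naming the measure the post has in mind, so the snippet below isn't that; it is only a toy sketch, with made-up example data, of the underlying idea: score a lamp by how closely its normalized spectral power distribution matches a smooth daylight reference, so that "spiky" or very blue spectra score lower.

    # Toy illustration (assumption), NOT the metric the post uses: compare a
    # lamp's spectral power distribution (SPD) to a smooth daylight reference.
    import numpy as np

    wavelengths = np.arange(400, 701, 10)  # visible range, nm

    def similarity_to_daylight(lamp_spd, daylight_spd):
        """Cosine similarity of normalized spectra: 1.0 means identical shape."""
        a = np.asarray(lamp_spd, float) / np.sum(lamp_spd)
        b = np.asarray(daylight_spd, float) / np.sum(daylight_spd)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Made-up example spectra: daylight is broad and flat-ish; a cheap LED is
    # "spiky" (sharp blue peak plus a phosphor hump). Values are illustrative.
    daylight = np.ones_like(wavelengths, dtype=float)
    cheap_led = (np.exp(-((wavelengths - 450.0) ** 2) / 200.0)
                 + 0.4 * np.exp(-((wavelengths - 600.0) ** 2) / 2000.0))

    print(f"cheap LED vs daylight: {similarity_to_daylight(cheap_led, daylight):.2f}")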

    The original text contained 2 footnotes which were omitted from this narration.

    ---

    First published:

    April 19th, 2026


    Source:

    https://www.lesswrong.com/posts/dWib7qinqymfxevE4/feel-like-a-room-has-bad-vibes-the-lighting-is-probably-too

    ---



    Narrated by TYPE III AUDIO.
  • LessWrong (Curated & Popular)

    "Quality Matters Most When Stakes are Highest" by LawrenceC

    April 20, 2026 | 5 min.
    Or, the end of the world is no excuse for sloppy work

    One morning when I was nine, my dad called me over to his computer. He wanted to show me this amazing Korean scientist who had managed to clone stem cells, and who was developing treatments to let people with spinal cord injuries – people like my dad – walk again on their own two legs.

    I don't remember exactly what he said next, or what I said back. I have a sense that I was excited too, and that I was upset when I learned the United States had banned this kind of research.

    Unfortunately, his research didn’t pan out. No such treatment arrived. My dad still walks on crutches.

    Years later, I learned that the scientist, Hwang Woo-Suk, had been exposed as a fraud.

    In 2004, Hwang published a paper in Science claiming that his team had cloned a human embryo and derived stem cells from it (the first time anyone had done this). A year later, in 2005, he published a second paper claiming that they managed to repeat this feat eleven more times, producing 11 patient-specific stem cell lines for patients with type 1 [...]

    ---

    First published:

    April 19th, 2026


    Source:

    https://www.lesswrong.com/posts/GNjDC6jtjr2iiE45i/quality-matters-most-when-stakes-are-highest

    ---



    Narrated by TYPE III AUDIO.
  • LessWrong (Curated & Popular)

    "Reevaluating AGI Ruin in 2026" by lc

    April 20, 2026 | 49 min.
    It's been about four years since Eliezer Yudkowsky published AGI Ruin: A List of Lethalities, a 43-point list of reasons the default outcome from building AGI is everyone dying. A week later, Paul Christiano replied with Where I Agree and Disagree with Eliezer, signing on to about half the list and pushing back on most of the rest.

    For people who were young and not in the Bay Area, like me, these essays were probably more significant than old-timers would expect. Before it became completely consumed with AI discussions, LessWrong was a forum about the art of human rationality, and most internet rationalists I knew thought of it as a mix between that and a place to write for people who liked the Sequences. It wasn't until 2022 that we were exposed to all of the doom arguments in one place, and it was the first time in many years that Eliezer had publicly stated how much more dire his assessment had become since the Sequences. As far as I can tell, AGI Ruin still remains his most authoritative explanation of his views.

    It's not often that public intellectuals will literally hand you a document explaining why [...]

    ---

    Outline:

    (02:51) AGI Ruin

    (02:54) Section A (Setting up the problem)

    (12:18) Section B.1 (Distributional Shift)

    (22:16) Section B.2: Central difficulties of outer and inner alignment.

    (32:21) Section B.3: Central difficulties of sufficiently good and useful transparency / interpretability.

    (41:29) Section C (What is AI Safety currently doing?)

    (44:34) Overall Impressions

    The original text contained 4 footnotes which were omitted from this narration.

    ---

    First published:

    April 19th, 2026


    Source:

    https://www.lesswrong.com/posts/PgJYwnN7fZKipgMz4/reevaluating-agi-ruin-in-2026

    ---



    Narrated by TYPE III AUDIO.


About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "Lesswrong (30+ karma)" feed.
Podcast website
