
LessWrong (Curated & Popular)

LessWrong
Latest episode

834 episodes

  • "Morale" by J Bostock

    April 14, 2026 | 4 min
    One particularly pernicious condition is low morale. Morale is, roughly, "the belief that if you work hard, your conditions will improve." If your morale is low, you can't push through adversity. It's also very easy to accidentally drop your morale through standard rationalist life-optimization.

    It's easy to optimize for wellbeing and miss out on the factors which affect morale, especially if you're working on something important, like not having everyone die. One example is working at an office that feeds you three meals per day. This seems optimal: eating is nice, and cooking is effort. Obvious choice.

    Example

    But morale doesn't come from having nice things. Consider a rich teenager. He gets basically every material need satisfied: maids clean, chefs cook, his family takes him on holiday four times a year. What happens when this kid comes up against something really difficult in school? He probably doesn't push through.

    "Aha", I hear you say. "That kid has never faced adversity. Of course he's not going to handle it well." OK, suppose he gets kicked in the shins every day and called a posh twat by some local youths, but still goes into school. That's adversity; will that work? Will [...]

    ---

    Outline:

    (00:48) Example

    (01:55) II

    (03:19) III

    ---

    First published:

    April 12th, 2026


    Source:

    https://www.lesswrong.com/posts/53ZAzbdzGJHGeE5rs/morale

    ---



    Narrated by TYPE III AUDIO.
  • "Anthropic repeatedly accidentally trained against the CoT, demonstrating inadequate processes" by Alex Mallen, ryan_greenblatt

    April 14, 2026 | 11 min
    It turns out that Anthropic accidentally trained against the chain of thought of Claude Mythos Preview in around 8% of training episodes. This is at least the second independent incident in which Anthropic accidentally exposed their model's CoT to the oversight signal.

    In more powerful systems, this kind of failure would jeopardize safely navigating the intelligence explosion. It's crucial to build good processes to ensure development is executed according to plan, especially as human oversight becomes spread thin over increasing amounts of potentially untrusted and sloppy AI labor.

    This particular failure is also directly harmful, because it significantly reduces our confidence that the model's reasoning trace is monitorable (reflective of the AI's intent to misbehave).[1]

    I'm grateful that Anthropic has transparently reported on this issue as much as they have, allowing for outside scrutiny. I want to encourage them to continue to do so.

    Thanks to Carlo Leonardo Attubato, Buck Shlegeris, Fabien Roger, Arun Jose, and Aniket Chakravorty for feedback and discussion. See also previous discussion here.

    Incidents

    A technical error affecting Mythos, Opus 4.6, and Sonnet 4.6

    This is the most recent incident. In the Claude Mythos alignment risk update, Anthropic report having accidentally exposed approximately 8% [...]

    ---

    Outline:

    (01:21) Incidents

    [... 6 more sections]

    ---

    First published:

    April 13th, 2026


    Source:

    https://www.lesswrong.com/posts/K8FxfK9GmJfiAhgcT/anthropic-repeatedly-accidentally-trained-against-the-cot

    ---



    Narrated by TYPE III AUDIO.

  • "The policy surrounding Mythos marks an irreversible power shift" by sil

    April 14, 2026 | 3 min
    This post assumes Anthropic isn't lying:

    - Mythos is the current SOTA
    - Mythos is potent[1]
    - Anthropic will not make it publicly available un-nerfed[2]
    - Anthropic will have a select few companies use it as part of project glasswing[3] to improve cybersecurity or whatever

    Since the release of ChatGPT, at any given time, anyone on the planet with a few bucks could access the current most capable AI model, the SOTA.[4]

    Since Mythos, this is no longer the case, and I don't think it ever will be again.

    It may happen for a short period of time if an entity with a policy differing significantly from Anthropic's develops a SOTA model.[5] However, most serious competitors (OpenAI, Google) don't have policies differing vastly from Anthropic's, and thus I can't imagine a SOTA model (more potent than Mythos) being released unrestricted to the public soon.

    To be clear, I am not claiming the public will never have access to a model as strong as Mythos; that seems almost certainly false. I am claiming that the public will probably never have access to the SOTA of that time.

    Glasswing makes it clear that the attitude among top large companies - those in power [...]

    The original text contained 8 footnotes which were omitted from this narration.

    ---

    First published:

    April 12th, 2026


    Source:

    https://www.lesswrong.com/posts/3MhJELzwpbR42xsJ3/the-policy-surrounding-mythos-marks-an-irreversible-power

    ---



    Narrated by TYPE III AUDIO.
  • "Only Law Can Prevent Extinction" by Eliezer Yudkowsky

    April 14, 2026 | 38 min
    There's a quote I read as a kid that stuck with me my whole life:

    "Remember that all tax revenue is the result of holding a gun to somebody's head. Not paying taxes is against the law. If you don’t pay taxes, you’ll be fined. If you don’t pay the fine, you’ll be jailed. If you try to escape from jail, you’ll be shot."
    -- P. J. O'Rourke.

    At first I took away the libertarian lesson: Government is violence. It may, in some cases, be rightful violence. But it all rests on violence; never forget that.

    Today I do think there's an important distinction between two different shapes of violence. It's a distinction that may make my fellow old-school classical Heinlein liberaltarians roll their eyes about how there's no deep moral difference. I still hold it to be important.

    In a high-functioning ideal state -- not all actual countries -- the state's violence is predictable and avoidable, and meant to be predicted and avoided. As part of that predictability, it comes from a limited number of specially licensed sources.

    You're supposed to know that you can just pay your taxes, and then not get shot.

    Is [...]

    ---

    First published:

    April 13th, 2026


    Source:

    https://www.lesswrong.com/posts/5CfBDiQNg9upfipWk/only-law-can-prevent-extinction

    ---



    Narrated by TYPE III AUDIO.

  • "Dario probably doesn’t believe in superintelligence" by RobertM

    April 13, 2026 | 12 min
    Epistemic status: I think this is true but don't think this post is a very strong argument for the case, or particularly interesting to read. But I had to get 500 words out! I think the 2013 conversation is interesting reading as a piece of history, separate from the top-level question, and recommend reading that.

    I think many people have a relationship with Anthropic that is premised on a false belief: that Dario Amodei believes in superintelligence.

    What do I mean by "believes" in superintelligence? Roughly speaking, that the returns to intelligence past the human level are large, in terms of the additional affordances they would grant for steering the world, and that it is practical to get that additional intelligence into a system.

    There are many pieces of evidence which suggest this, going quite far back.

    In 2013, Dario was one of two science advisors (along with Jacob Steinhardt) that Holden brought along to a discussion with Eliezer and Luke about MIRI strategy. A transcript of the conversation is here. It is the first piece of public communication I can find from Dario on the subject. Read end-to-end, I don't think it strongly supports my titular claim. However [...]

    ---

    First published:

    April 10th, 2026


    Source:

    https://www.lesswrong.com/posts/Fnty2JpQ6WBD9FWo5/dario-probably-doesn-t-believe-in-superintelligence

    ---



    Narrated by TYPE III AUDIO.


About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

© 2007-2026 radio.de GmbH
Generated: 4/15/2026 - 12:39:05 AM