LessWrong (Curated & Popular)

LessWrong

810 episodes

Latest episodes

  • "My Most Costly Delusion" by Ihor Kendiukhov

    26.03.2026 | 5 min
    Suppose there is a fire in a nearby house. Suppose there are competent firefighters in your town: fast, professional, well-equipped. They are expected to arrive in 2–3 minutes. In that situation, unless something very extraordinary happens, it would indeed be an act of great arrogance and even utter insanity to go into the fire yourself in the hope of "rescuing" someone or something. The most likely outcome would be that you would find yourself among those who need to be rescued.

    But the calculus changes drastically if the closest fire crew is 3 hours away and consists of drunk, unfit amateurs.

    Or consider a child living in a big, happy, smart family. Imagine this child suddenly decides that his family may run out of money to the point where they won't have enough to eat. All reassurances from his parents don't work. The child doesn't believe in his parents' ability to reason, he makes his own calculations, and he strongly believes he is right and they are wrong. He is dead set on fixing the situation by doing day trading.

    What is that if not going nuts? Would those be wrong who ridicule this child and his complete mischaracterization [...]

    ---

    First published:
    March 22nd, 2026

    Source:
    https://www.lesswrong.com/posts/EAH6Y6y3CDi3uxMou/my-most-costly-delusion

    ---



    Narrated by TYPE III AUDIO.
  • "The Case for Low-Competence ASI Failure Scenarios" by Ihor Kendiukhov

    25.03.2026 | 11 min
    I think the community underinvests in exploring extremely-low-competence AGI/ASI failure modes, and in this post I explain why.

    Humanity's Response to the AGI Threat May Be Extremely Incompetent

    There is a sufficient level of civilizational insanity overall, and the empirical track record of the AI field itself speaks eloquently about its safety culture. For example:

    At OpenAI, a refactoring bug flipped the sign of the reward signal in a model. Because labelers had been instructed to give very low ratings to sexually explicit text, the bug pushed the model into generating maximally explicit content across all prompts. The team noticed only after the training run had completed, because they were asleep.
    The director of alignment at Meta's Superintelligence Labs connected an OpenClaw agent to her real email, at which point it began deleting messages despite her attempts to stop it, and she ended up running to her computer to manually halt the process.
    An internal AI agent at Meta posted an answer publicly without approval; another employee acted on the inaccurate advice, triggering a severe security incident that temporarily allowed employees to access sensitive data they were not authorized to view.
    AWS acknowledged that [...]
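The first incident above is an instance of a well-known failure class. A hypothetical sketch (all names and numbers invented here; not the actual code) of how a single flipped sign turns a penalty into an incentive:

```python
# Hypothetical sketch of the sign-flip failure class in the first
# incident above (names invented; not the actual training code).
# Human labelers give low ratings to unwanted text, so the reward
# should equal the rating; a refactor that negates it makes the
# optimizer chase the lowest-rated, i.e. most unwanted, completions.

def reward(rating: float) -> float:
    return rating                      # intended: maximize human rating

def reward_after_refactor(rating: float) -> float:
    return -rating                     # the bug: sign flipped

# ratings assigned by labelers to two candidate completions
candidates = {"helpful": 0.9, "explicit": 0.1}

def pick(reward_fn):
    """Return the candidate the optimizer would select under reward_fn."""
    return max(candidates, key=lambda c: reward_fn(candidates[c]))

assert pick(reward) == "helpful"                  # intended behavior
assert pick(reward_after_refactor) == "explicit"  # what the bug selects
```

Nothing about the optimization loop itself changes; only the scalar it climbs toward does, which is why such bugs surface as bizarre outputs rather than as errors.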
    ---

    Outline:

    (00:19) Humanity's Response to the AGI Threat May Be Extremely Incompetent

    (02:26) Many Existing Scenarios and Case Studies Assume (Relatively) High Competence

    (04:31) Dumb Ways to Die

    (07:31) Undignified AGI Disaster Scenarios Deserve More Careful Treatment

    (10:43) Why This Might Be Useful

    ---

    First published:
    March 19th, 2026

    Source:
    https://www.lesswrong.com/posts/t9LAhjoBnpQBa8Bbw/the-case-for-low-competence-asi-failure-scenarios

    ---



    Narrated by TYPE III AUDIO.
  • "Is fever a symptom of glycine deficiency?" by Benquo

    24.03.2026 | 13 min
    A 2022 LessWrong post on orexin and the quest for more waking hours argues that orexin agonists could safely reduce human sleep needs, pointing to short-sleeper gene mutations that increase orexin production and to cavefish that evolved heightened orexin sensitivity alongside an 80% reduction in sleep. Several commenters discussed clinical trials, embryo selection, and the evolutionary puzzle of why short-sleeper genes haven't spread.

    I thought the whole approach was backwards, and left a comment:

    Orexin is a signal about energy metabolism. Unless the signaling system itself is broken (e.g. narcolepsy type 1, caused by autoimmune destruction of orexin-producing neurons), it's better to fix the underlying reality the signals point to than to falsify the signals.

    My sleep got noticeably more efficient when I started supplementing glycine. Most people on modern diets don't get enough; we can make ~3g/day but can use 10g+, because in the ancestral environment we ate much more connective tissue or broth therefrom. Glycine is both important for repair processes and triggers NMDA receptors to drop core temperature, which smooths the path to sleep.

    While drafting that, I went back to Chris Masterjohn's page on glycine requirements. His estimate for total need [...]

    ---

    Outline:

    (01:49) Glycine helps us sleep by cooling the body

    (02:26) Glycine cleans our mitochondria as we sleep

    (04:12) Most people could use more glycine

    (05:28) Fever is plan B for fighting infection; glycine supports plan A

    (09:28) Glycine's cooling effect via the SCN is unrelated to its immune benefits

    (10:35) Glycine turns out to be a legitimate antipyretic after all

    (11:51) Practical considerations

    ---

    First published:
    March 22nd, 2026

    Source:
    https://www.lesswrong.com/posts/87XoatpFkdmCZpvQK/is-fever-a-symptom-of-glycine-deficiency

    ---



    Narrated by TYPE III AUDIO.
  • "You can’t imitation-learn how to continual-learn" by Steven Byrnes

    23.03.2026 | 11 min
    In this post, I’m trying to put forward a narrow, pedagogical point, one that comes up mainly when I’m arguing in favor of LLMs having limitations that human learning does not. (E.g. here, here, here.)

    See the bottom of the post for a list of subtexts that you should NOT read into this post, including “…therefore LLMs are dumb”, or “…therefore LLMs can’t possibly scale to superintelligence”.

    Some intuitions on how to think about “real” continual learning

    Consider an algorithm for training a Reinforcement Learning (RL) agent, like the Atari-playing Deep Q network (2013) or AlphaZero (2017), or think of within-lifetime learning in the human brain, which (I claim) is in the general class of “model-based reinforcement learning”, broadly construed.

    These are all real-deal full-fledged learning algorithms: there's an algorithm for choosing the next action right now, and there's one or more update rules for permanently changing some adjustable parameters (a.k.a. weights) in the model such that its actions and/or predictions will be better in the future. And indeed, the longer you run them, the more competent they get.
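As a toy illustration (environment and parameters invented here, not from the post), a tabular Q-learning loop makes those two ingredients concrete: a rule for choosing the next action right now, and an update rule that permanently changes adjustable parameters so future actions improve.

```python
import random

# Toy continual learner (illustrative only): tabular Q-learning on a
# 5-state chain where reaching the last state pays reward 1 and the
# episode restarts. It has the two ingredients named above: an
# action-selection rule (epsilon-greedy) and a permanent update rule
# on its parameters (the Q-table).

N, ACTIONS = 5, (-1, +1)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def act(s):
    """Choose the next action right now: explore, else greedy w.r.t. Q."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

random.seed(0)
s = 0
for _ in range(5000):
    a = act(s)
    s2 = max(0, min(N - 1, s + a))      # clamped move along the chain
    r = 1.0 if s2 == N - 1 else 0.0
    # the permanent update: the longer this runs, the more competent it gets
    Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
    s = 0 if r else s2                  # restart the episode on success

# the learned greedy policy now moves right from every non-goal state
assert all(Q[(s, +1)] > Q[(s, -1)] for s in range(N - 1))
```

The competence lives in `Q`, which is rewritten in place on every step; stop the updates and the agent is frozen at whatever it knew at that moment.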

    When we think of “continual learning”, I suggest that those are good central examples to keep in mind. Here are [...]

    ---

    Outline:

    (00:35) Some intuitions on how to think about "real" continual learning

    (04:57) Why "real" continual learning can't be copied by an imitation learner

    (09:53) Some things that are off-topic for this post

    The original text contained 3 footnotes which were omitted from this narration.

    ---

    First published:
    March 16th, 2026

    Source:
    https://www.lesswrong.com/posts/9rCTjbJpZB4KzqhiQ/you-can-t-imitation-learn-how-to-continual-learn

    ---



    Narrated by TYPE III AUDIO.
  • "Nullius in Verba" by Aurelia

    23.03.2026 | 21 min
    Independent verification by the Brain Preservation Foundation and the Survival and Flourishing Fund — the results so far

    Cultivating independent verification

    Extraordinary claims require extraordinary evidence. In my previous post, "Less Dead", I said that my company, Nectome, has

    created a new method for whole-body, whole-brain, human end-of-life preservation for the purpose of future revival. Our protocol is capable of preserving every synapse and every cell in the body with enough detail that current neuroscience says long-term memories are preserved. It's compatible with traditional funerals at room temperature and stable for hundreds of years at cold temperatures.

    In this post, we’ll dive into the evidence for these claims, as well as Nectome's overall approach to cultivating rigorous, independent validation of our methods—a cornerstone of the kind of preservation enterprise I want to be a part of.

    To get to the current state-of-the-art required two major developmental milestones:

    Idealized preservation. A method capable of preserving the nanostructure of the brain for small and large animals under idealized laboratory conditions. Specifically, could we preserve animals well if we were allowed to perfectly control the time and conditions of death?  

    This work (2015-2018) resulted in a brand-new technique—aldehyde-stabilized cryopreservation—which was carefully [...]
    ---

    Outline:

    (00:16) Cultivating independent verification

    [... 7 more sections]

    ---

    First published:
    March 19th, 2026

    Source:
    https://www.lesswrong.com/posts/NEFNs4vbNxJPJJgYY/nullius-in-verba

    ---



    Narrated by TYPE III AUDIO.


About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "Lesswrong (30+ karma)" feed.