LessWrong (Curated & Popular)

LessWrong

808 episodes
  • "Is fever a symptom of glycine deficiency?" by Benquo

    24.03.2026 | 13 min
    A 2022 LessWrong post on orexin and the quest for more waking hours argues that orexin agonists could safely reduce human sleep needs, pointing to short-sleeper gene mutations that increase orexin production and to cavefish that evolved heightened orexin sensitivity alongside an 80% reduction in sleep. Several commenters discussed clinical trials, embryo selection, and the evolutionary puzzle of why short-sleeper genes haven't spread.

    I thought the whole approach was backwards, and left a comment:

    Orexin is a signal about energy metabolism. Unless the signaling system itself is broken (e.g. narcolepsy type 1, caused by autoimmune destruction of orexin-producing neurons), it's better to fix the underlying reality the signals point to than to falsify the signals.

    My sleep got noticeably more efficient when I started supplementing glycine. Most people on modern diets don't get enough; we can make ~3g/day but can use 10g+, because in the ancestral environment we ate much more connective tissue or broth therefrom. Glycine is both important for repair processes and triggers NMDA receptors to drop core temperature, which smooths the path to sleep.

    While drafting that, I went back to Chris Masterjohn's page on glycine requirements. His estimate for total need [...]

    ---

    Outline:

    (01:49) Glycine helps us sleep by cooling the body

    (02:26) Glycine cleans our mitochondria as we sleep

    (04:12) Most people could use more glycine

    (05:28) Fever is plan B for fighting infection; glycine supports plan A

    (09:28) Glycine's cooling effect via the SCN is unrelated to its immune benefits

    (10:35) Glycine turns out to be a legitimate antipyretic after all

    (11:51) Practical considerations

    ---

    First published:
    March 22nd, 2026

    Source:
    https://www.lesswrong.com/posts/87XoatpFkdmCZpvQK/is-fever-a-symptom-of-glycine-deficiency

    ---



    Narrated by TYPE III AUDIO.
  • "You can’t imitation-learn how to continual-learn" by Steven Byrnes

    23.03.2026 | 11 min
    In this post, I’m trying to put forward a narrow, pedagogical point, one that comes up mainly when I’m arguing in favor of LLMs having limitations that human learning does not. (E.g. here, here, here.)

    See the bottom of the post for a list of subtexts that you should NOT read into this post, including “…therefore LLMs are dumb”, or “…therefore LLMs can’t possibly scale to superintelligence”.

    Some intuitions on how to think about “real” continual learning

    Consider an algorithm for training a Reinforcement Learning (RL) agent, like the Atari-playing Deep Q network (2013) or AlphaZero (2017), or think of within-lifetime learning in the human brain, which (I claim) is in the general class of “model-based reinforcement learning”, broadly construed.

    These are all real-deal full-fledged learning algorithms: there's an algorithm for choosing the next action right now, and there's one or more update rules for permanently changing some adjustable parameters (a.k.a. weights) in the model such that its actions and/or predictions will be better in the future. And indeed, the longer you run them, the more competent they get.

    When we think of “continual learning”, I suggest that those are good central examples to keep in mind. Here are [...]
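    The kind of learning algorithm described above — an action-selection rule plus a permanent update rule over adjustable parameters — can be sketched as minimal tabular Q-learning. This is an illustrative toy, not code from the post; the two-state environment and all names below are invented for the example:

    ```python
    import random

    # Minimal tabular Q-learning sketch of "real" continual learning:
    # an action-selection rule plus an update rule that permanently
    # adjusts stored parameters (here, Q-values) so that future
    # actions and predictions improve the longer the loop runs.

    def step(state, action):
        """Toy environment: action 1 always yields reward, action 0 never does."""
        reward = 1.0 if action == 1 else 0.0
        return (state + 1) % 2, reward

    def q_learning(steps=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
        rng = random.Random(seed)
        q = {(s, a): 0.0 for s in range(2) for a in range(2)}
        state = 0
        for _ in range(steps):
            # Action-selection rule: epsilon-greedy over current Q-values.
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = max(range(2), key=lambda a: q[(state, a)])
            next_state, reward = step(state, action)
            # Update rule: permanently adjust the parameters based on
            # observed reward, so later action choices are better.
            target = reward + gamma * max(q[(next_state, a)] for a in range(2))
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
        return q

    q = q_learning()
    # After training, the rewarded action is valued higher in every state.
    ```

    The point of the sketch is that the loop keeps improving indefinitely as new experience arrives: the update rule writes into the same parameters the action-selection rule reads from.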

    ---

    Outline:

    (00:35) Some intuitions on how to think about real continual learning

    (04:57) Why real continual learning can't be copied by an imitation learner

    (09:53) Some things that are off-topic for this post

    The original text contained 3 footnotes which were omitted from this narration.

    ---

    First published:
    March 16th, 2026

    Source:
    https://www.lesswrong.com/posts/9rCTjbJpZB4KzqhiQ/you-can-t-imitation-learn-how-to-continual-learn

    ---



    Narrated by TYPE III AUDIO.
  • "Nullius in Verba" by Aurelia

    23.03.2026 | 21 min
    Independent verification by the Brain Preservation Foundation and the Survival and Flourishing Fund — the results so far

    Cultivating independent verification

    Extraordinary claims require extraordinary evidence. In my previous post, "Less Dead", I said that my company, Nectome, has

    created a new method for whole-body, whole-brain, human end-of-life preservation for the purpose of future revival. Our protocol is capable of preserving every synapse and every cell in the body with enough detail that current neuroscience says long-term memories are preserved. It's compatible with traditional funerals at room temperature and stable for hundreds of years at cold temperatures.

    In this post, we’ll dive into the evidence for these claims, as well as Nectome's overall approach to cultivating rigorous, independent validation of our methods—a cornerstone of the kind of preservation enterprise I want to be a part of.

    To get to the current state-of-the-art required two major developmental milestones:

    Idealized preservation. A method capable of preserving the nanostructure of the brain for small and large animals under idealized laboratory conditions. Specifically, could we preserve animals well if we were allowed to perfectly control the time and conditions of death?  

    This work (2015-2018) resulted in a brand-new technique—aldehyde-stabilized cryopreservation—which was carefully [...]

    ---

    Outline:

    (00:16) Cultivating independent verification

    [... 7 more sections]

    ---

    First published:
    March 19th, 2026

    Source:
    https://www.lesswrong.com/posts/NEFNs4vbNxJPJJgYY/nullius-in-verba

    ---



    Narrated by TYPE III AUDIO.

  • "Broad Timelines" by Toby_Ord

    21.03.2026 | 30 min
    No-one knows when AI will begin having transformative impacts upon the world. People aren’t sure and shouldn’t be sure: there just isn’t enough evidence to pin it down.

    But we don’t need to wait for certainty. I want to explore what happens if we take our uncertainty seriously — if we act with epistemic humility. What does wise planning look like in a world of deeply uncertain AI timelines?

    I’ll conclude that taking the uncertainty seriously has real implications for how one can contribute to making this AI transition go well. And it has even more implications for how we act together — for our portfolio of work aimed towards this end.


    AI Timelines

    By AI timelines, I refer to how long it will be before AI has truly transformative effects on the world. People often think about this using terms such as artificial general intelligence (AGI), human level AI, transformative AI, or superintelligence. Each term is used differently by different people, making it challenging to compare their stated timelines. Indeed even an individual's own definition of their favoured term will be somewhat vague, such that even after their threshold has been crossed, they might have [...]

    ---

    Outline:

    (00:58) AI Timelines

    [... 7 more sections]

    ---

    First published:
    March 19th, 2026

    Source:
    https://www.lesswrong.com/posts/6pDMLYr7my2QMTz3s/broad-timelines

    ---



    Narrated by TYPE III AUDIO.

  • "No, we haven’t uploaded a fly yet" by Ariel Zeleznikow-Johnston

    21.03.2026 | 17 min
    In the last two weeks, social media was set abuzz by claims that scientists had succeeded in uploading a fruit fly. It started with a video released by the startup Eon Systems, a company that wants to create “Brain emulation so humans can flourish in a world with superintelligence.”

    On the left of the video, a virtual fly walks around in a sandpit looking for pieces of banana to eat, occasionally pausing to groom itself along the way. On the right is a dancing constellation of dots resembling the fruit fly brain, set above the caption ‘simultaneous brain emulation’.

    At first glance, this appears astounding - a digitally recreated animal living its life inside a computer. And indeed, this impression was seemingly confirmed when, a couple of days after the video's initial release on X by cofounder Alex Wissner-Gross, Eon's CEO Michael Andregg explicitly posted “We’ve uploaded a fruit fly”.

    Yet “extraordinary claims require extraordinary evidence, not just cool visuals”, as one neuroscientist put it in response to Andregg's post. If Eon had indeed succeeded in uploading a fly - a goal previously thought to be likely decades away according to much of the fly neuroscience community - they’d [...]

    ---

    Outline:

    (03:43) A brief history of fruit fly connectomics

    [... 3 more sections]

    ---

    First published:
    March 19th, 2026

    Source:
    https://www.lesswrong.com/posts/ybwcxBRrsKavJB9Wz/no-we-haven-t-uploaded-a-fly-yet

    ---



    Narrated by TYPE III AUDIO.


About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "LessWrong (30+ karma)" feed.