
LessWrong (Curated & Popular)


Available episodes

5 of 687
  • “Please, Don’t Roll Your Own Metaethics” by Wei Dai
    One day, when I was interning at the cryptography research department of a large software company, my boss handed me an assignment to break a pseudorandom number generator passed to us for review. Someone in another department had invented it and planned to use it in their product, and wanted us to take a look first. This person must have had a lot of political clout or been especially confident in himself, because he refused the standard advice that anything an amateur comes up with is very likely to be insecure, and that he should instead use one of the established, off-the-shelf cryptographic algorithms that have survived extensive cryptanalysis (code-breaking) attempts. My boss thought he had to demonstrate the insecurity of the PRNG by coming up with a practical attack (i.e., a way to predict its future output based only on its past output, without knowing the secret key/seed). There were three permanent full-time professional cryptographers working in the research department, but none of them specialized in cryptanalysis of symmetric cryptography (which covers such PRNGs), so it might have taken them some time to figure out an attack. My time was obviously less valuable and my [...]
    The original text contained 1 footnote which was omitted from this narration.
    First published: November 12th, 2025
    Source: https://www.lesswrong.com/posts/KCSmZsQzwvBxYNNaT/please-don-t-roll-your-own-metaethics
    Narrated by TYPE III AUDIO.
    --------  
    4:11
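The attack goal described in this episode (predicting a PRNG's future output from past output alone, without the seed) can be illustrated with a toy sketch. The post does not identify the actual generator; the linear congruential generator and all parameters below are purely illustrative assumptions.

```python
# Toy illustration: why an ad-hoc PRNG can be predictable.
# A linear congruential generator (LCG): x_{n+1} = (a*x_n + c) mod m.
# If the modulus m is a known prime, three consecutive outputs let an
# attacker solve for the multiplier a and increment c, and then predict
# every future output. All constants here are illustrative, not from the post.

m = 2**31 - 1  # a prime modulus, so modular inverses exist

def lcg(seed, a=48271, c=12345):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=42)
x1, x2, x3, x4 = next(gen), next(gen), next(gen), next(gen)

# The attacker sees only x1, x2, x3:
a_rec = ((x3 - x2) * pow(x2 - x1, -1, m)) % m  # x3-x2 = a*(x2-x1) mod m
c_rec = (x2 - a_rec * x1) % m                  # then solve for the increment
predicted_x4 = (a_rec * x3 + c_rec) % m

assert predicted_x4 == x4  # future output predicted from past output alone
```

Established cryptographic PRNGs are designed so that no such algebraic shortcut exists; this fragility under simple algebra is exactly what "roll your own" designs tend to miss. (The modular inverse via `pow(b, -1, m)` requires Python 3.8+.)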
  • “Paranoia rules everything around me” by habryka
    People sometimes make mistakes [citation needed]. The obvious explanation for most of those mistakes is that decision-makers do not have access to the information necessary to avoid the mistake, or are not smart/competent enough to think through the consequences of their actions. This predicts that as decision-makers get access to more information, or are replaced with smarter people, their decisions will get better. And this is substantially true! Markets seem more efficient today than they were before the onset of the internet, and in general decision-making across the board has improved on many dimensions. But in many domains, I posit, decision-making has gotten worse, despite access to more information, and despite much larger labor markets, better education, the removal of lead from gasoline, and many other things that should generally cause decision-makers to be more competent and intelligent. There is a lot of variance in decision-making quality that is not well accounted for by how much information actors have about the problem domain and how smart they are. I currently believe that the factor that explains most of this remaining variance is "paranoia", in particular the kind of paranoia that becomes more adaptive as your environment gets [...]
    Outline:
    (01:31) A market for lemons
    (05:02) It's lemons all the way down
    (06:15) Fighter jets and OODA loops
    (08:23) The first thing you try is to blind yourself
    (13:37) The second thing you try is to purge the untrustworthy
    (20:55) The third thing to try is to become unpredictable and vindictive
    First published: November 13th, 2025
    Source: https://www.lesswrong.com/posts/yXSKGm4txgbC3gvNs/paranoia-rules-everything-around-me
    Narrated by TYPE III AUDIO.
    --------  
    22:32
  • “Human Values ≠ Goodness” by johnswentworth
    There is a temptation to simply define Goodness as Human Values, or vice versa. Alas, we do not get to choose the definitions of commonly used words; our attempted definitions will simply be wrong. Unless we stick to mathematics, we will end up sneaking in intuitions which do not follow from our so-called definitions, and thereby mislead ourselves. People who claim that they use some standard word or phrase according to their own definition are, in nearly all cases outside of mathematics, wrong about their own usage patterns.[1] If we want to know what words mean, we need to look at e.g. how they're used, where the concepts come from, and what mental pictures they summon. And when we look at those things for Goodness and Human Values… they don't match. And I don't mean that we shouldn't pursue Human Values; I mean that the stuff people usually refer to as Goodness is a coherent thing which does not match the actual values of actual humans all that well. There's this mental picture where a mind has some sort of goals inside it, stuff it wants, stuff it [...]
    Outline:
    (01:07) The Yumminess You Feel When Imagining Things Measures Your Values
    (03:26) Goodness Is A Memetic Egregore
    (05:10) Aside: Loving Connection
    (06:58) We Don't Get To Choose Our Own Values (Mostly)
    (09:02) So What Do?
    The original text contained 2 footnotes which were omitted from this narration.
    First published: November 2nd, 2025
    Source: https://www.lesswrong.com/posts/9X7MPbut5feBzNFcG/human-values-goodness
    Narrated by TYPE III AUDIO.
    --------  
    11:31
  • “Condensation” by abramdemski
    Condensation: a theory of concepts is a model of concept-formation by Sam Eisenstat. Its goals and methods resemble John Wentworth's natural abstractions/natural latents research.[1] Both theories seek to provide a clear picture of how to posit latent variables, such that once someone has understood the theory, they'll say "yep, I see now, that's how latent variables work!". The goal of this post is to popularize Sam's theory and to give my own perspective on it; however, it will not be a full explanation of the math. For technical details, I suggest reading Sam's paper. Brief summary: Shannon's information theory focuses on the question of how to encode information when you have to encode everything. You get to design the coding scheme, but the information you'll have to encode is unknown (and you have some subjective probability distribution over what it will be). Your objective is to minimize the total expected code-length. Algorithmic information theory similarly focuses on minimizing the total code-length, but it uses a "more objective" distribution (a universal algorithmic distribution) and a fixed coding scheme (some programming language). This allows it to talk about the minimum code-length of specific data (talking about particulars rather than average [...]
    Outline:
    (00:45) Brief Summary
    (02:35) Shannon's Information Theory
    (07:21) Universal Codes
    (11:13) Condensation
    (12:52) Universal Data-Structure?
    (15:30) Well-Organized Notebooks
    (18:18) Random Variables
    (18:54) Givens
    (19:50) Underlying Space
    (20:33) Latents
    (21:21) Contributions
    (21:39) Top
    (22:24) Bottoms
    (22:55) Score
    (24:29) Perfect Condensation
    (25:52) Interpretability Solved?
    (26:38) Condensation isn't as tight an abstraction as information theory.
    (27:40) Condensation isn't a very good model of cognition.
    (29:46) Much work to be done!
    The original text contained 15 footnotes which were omitted from this narration.
    First published: November 9th, 2025
    Source: https://www.lesswrong.com/posts/BstHXPgQyfeNnLjjp/condensation
    Narrated by TYPE III AUDIO.
    --------  
    30:29
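The expected code-length objective from Shannon's theory, as summarized in this episode, can be sketched concretely: assign each symbol a code of roughly -log2(p) bits, and the expected length meets the entropy bound. The four-symbol distribution below is an illustrative example, not taken from the post.

```python
# Minimal sketch of minimizing expected code-length under a subjective
# distribution p. Shannon's bound: expected length >= H(p), with equality
# achievable when probabilities are powers of two (as chosen here).
import math

p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}  # illustrative distribution

entropy = -sum(q * math.log2(q) for q in p.values())  # H(p) = 1.75 bits

# Ideal (Shannon) code lengths: ceil(-log2 q) bits per symbol.
lengths = {s: math.ceil(-math.log2(q)) for s, q in p.items()}
expected_len = sum(p[s] * lengths[s] for s in p)

# With dyadic probabilities, the optimal code exactly meets the bound.
assert abs(expected_len - entropy) < 1e-9
```

Algorithmic information theory, as the blurb notes, swaps the subjective distribution for a universal one and fixes the coding scheme, which lets it speak about the code-length of particular data rather than averages.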
  • “Mourning a life without AI” by Nikola Jurkovic
    Recently, I looked at the one pair of winter boots I own, and I thought "I will probably never buy winter boots again." The world as we know it probably won't last more than a decade, and I live in a pretty warm area. I. AGI is likely in the next decade: It has basically become consensus within the AI research community that AI will surpass human capabilities sometime in the next few decades. Some, including myself, think this will likely happen this decade. II. The post-AGI world will be unrecognizable: Assuming AGI doesn't cause human extinction, it is hard to even imagine what the world will look like. Some have tried, but many of their attempts make assumptions that limit the amount of change that will happen, just to make it easier to imagine such a world. Dario Amodei recently imagined a post-AGI world in Machines of Loving Grace. He imagines rapid progress in medicine, the curing of mental illness, the end of poverty, world peace, and a vastly transformed economy where humans probably no longer provide economic value. However, in imagining this crazy future, he limits his writing to be "tame" enough to be digested by a [...]
    Outline:
    (00:22) I. AGI is likely in the next decade
    (00:40) II. The post-AGI world will be unrecognizable
    (03:08) III. AGI might cause human extinction
    (04:42) IV. AGI will derail everyone's life plans
    (06:51) V. AGI will improve life in expectation
    (08:09) VI. AGI might enable living out fantasies
    (09:56) VII. I still mourn a life without AI
    First published: November 8th, 2025
    Source: https://www.lesswrong.com/posts/jwrhoHxxQHGrbBk3f/mourning-a-life-without-ai
    Narrated by TYPE III AUDIO.
    --------  
    11:17


About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the "LessWrong (30+ karma)" feed.
Podcast website



v7.23.11 | © 2007-2025 radio.de GmbH