
LessWrong (Curated & Popular)
LessWrong

Available episodes

5 of 696
  • “Varieties Of Doom” by jdp
    There has been a lot of talk about "p(doom)" over the last few years. This has always rubbed me the wrong way because "p(doom)" didn't feel like it mapped to any specific belief in my head. In private conversations I'd sometimes give my p(doom) as 12%, with the caveat that "doom" seemed nebulous and conflated between several different concepts. At some point it was decided that a p(doom) over 10% makes you a "doomer" because it means what actions you should take with respect to AI are overdetermined. I did not and do not feel that is true. But any time I felt prompted to explain my position I'd find I could explain a little bit of this or that, but not really convey the whole thing. As it turns out doom has a lot of parts, and every part is entangled with every other part, so no matter which part you explain you always feel like you're leaving the crucial parts out. Doom is more like an onion than a single event, a distribution over AI outcomes people frequently respond to with the force of the fear of death. Some of these outcomes are less than death and some [...]
    Outline:
      (03:46) 1. Existential Ennui
      (06:40) 2. Not Getting Immortalist Luxury Gay Space Communism
      (13:55) 3. Human Stock Expended As Cannon Fodder Faster Than Replacement
      (19:37) 4. Wiped Out By AI Successor Species
      (27:57) 5. The Paperclipper
      (42:56) Would AI Successors Be Conscious Beings?
      (44:58) Would AI Successors Care About Each Other?
      (49:51) Would AI Successors Want To Have Fun?
      (51:11) VNM Utility And Human Values
      (55:57) Would AI successors get bored?
      (01:00:16) Would AI Successors Avoid Wireheading?
      (01:06:07) Would AI Successors Do Continual Active Learning?
      (01:06:35) Would AI Successors Have The Subjective Experience of Will?
      (01:12:00) Multiply
      (01:15:07) 6. Recipes For Ruin
      (01:18:02) Radiological and Nuclear
      (01:19:19) Cybersecurity
      (01:23:00) Biotech and Nanotech
      (01:26:35) 7. Large-Finite Damnation
    First published: November 17th, 2025
    Source: https://www.lesswrong.com/posts/apHWSGDiydv3ivmg6/varieties-of-doom
    Narrated by TYPE III AUDIO.
    (An illustrative decomposition of this "distribution over outcomes" framing appears in the first sketch after this episode list.)
    --------  
    1:38:48
  • “How Colds Spread” by RobertM
    It seems like a catastrophic civilizational failure that we don't have confident common knowledge of how colds spread. There have been a number of studies conducted over the years, but most of those were testing secondary endpoints, like how long viruses would survive on surfaces, or how likely they were to be transmitted to people's fingers after touching contaminated surfaces, etc. However, a few of them involved rounding up some brave volunteers, deliberately infecting some of them, and then arranging matters so as to test various routes of transmission to uninfected volunteers. My conclusions from reviewing these studies are: You can definitely infect yourself if you take a sick person's snot and rub it into your eyeballs or nostrils. This probably works even if you touched a surface that a sick person touched, rather than by handshake, at least for some surfaces. There's some evidence that actual human infection is much less likely if the contaminated surface you touched is dry, but for most colds there'll often be quite a lot of virus detectable on even dry contaminated surfaces for most of a day. I think you can probably infect yourself with fomites, but my guess is that [...]
    Outline:
      (01:49) Fomites
      (06:58) Aerosols
      (16:23) Other Factors
      (17:06) Review
      (18:33) Conclusion
    The original text contained 16 footnotes which were omitted from this narration.
    First published: November 18th, 2025
    Source: https://www.lesswrong.com/posts/92fkEn4aAjRutqbNF/how-colds-spread
    Narrated by TYPE III AUDIO.
    --------  
    20:31
  • “New Report: An International Agreement to Prevent the Premature Creation of Artificial Superintelligence” by Aaron_Scher, David Abecassis, Brian Abeyta, peterbarnett
    TLDR: We at the MIRI Technical Governance Team have released a report describing an example international agreement to halt the advancement towards artificial superintelligence. The agreement is centered around limiting the scale of AI training and restricting certain AI research. Experts argue that the premature development of artificial superintelligence (ASI) poses catastrophic risks, from misuse by malicious actors, to geopolitical instability and war, to human extinction due to misaligned AI. Regarding misalignment, Yudkowsky and Soares's NYT bestseller If Anyone Builds It, Everyone Dies argues that the world needs a strong international agreement prohibiting the development of superintelligence. This report is our attempt to lay out such an agreement in detail. The risks stemming from misaligned AI are of special concern, widely acknowledged in the field and even by the leaders of AI companies. Unfortunately, the deep learning paradigm underpinning modern AI development seems highly prone to producing agents that are not aligned with humanity's interests. There is likely a point of no return in AI development, a point where alignment failures become unrecoverable because humans have been disempowered. Anticipating this threshold is complicated by the possibility of a feedback loop once AI research and development can [...]
    First published: November 18th, 2025
    Source: https://www.lesswrong.com/posts/FA6M8MeQuQJxZyzeq/new-report-an-international-agreement-to-prevent-the
    Narrated by TYPE III AUDIO.
    (A rough illustration of what a training-compute limit involves appears in the second sketch after this episode list.)
    --------  
    6:52
  • “Where is the Capital? An Overview” by johnswentworth
    When a new dollar goes into the capital markets, after being bundled and securitized and lent several times over, where does it end up? When society's total savings increase, what capital assets do those savings end up invested in? When economists talk about “capital assets”, they mean things like roads, buildings and machines. When I read through a company's annual reports, lots of their assets are instead things like stocks and bonds, short-term debt, and other “financial” assets - i.e. claims on other people's stuff. In theory, for every financial asset, there's a financial liability somewhere. For every bond asset, there's some payer for whom that bond is a liability. Across the economy, they all add up to zero. What's left is the economists’ notion of capital, the nonfinancial assets: the roads, buildings, machines and so forth. Very roughly speaking, when there's a net increase in savings, that's where it has to end up - in the nonfinancial assets. I wanted to get a more tangible sense of what nonfinancial assets look like, of where my savings are going in the physical world. So, back in 2017 I pulled fundamentals data on ~2100 publicly-held US companies. I looked at [...]
    Outline:
      (02:01) Disclaimers
      (04:10) Overview (With Numbers!)
      (05:01) Oil - 25%
      (06:26) Power Grid - 16%
      (07:07) Consumer - 13%
      (08:12) Telecoms - 8%
      (09:26) Railroads - 8%
      (10:47) Healthcare - 8%
      (12:03) Tech - 6%
      (12:51) Industrial - 5%
      (13:49) Mining - 3%
      (14:34) Real Estate - 3%
      (14:49) Automotive - 2%
      (15:32) Logistics - 1%
      (16:12) Miscellaneous
      (16:55) Learnings
    First published: November 16th, 2025
    Source: https://www.lesswrong.com/posts/HpBhpRQCFLX9tx62Z/where-is-the-capital-an-overview
    Narrated by TYPE III AUDIO.
    (A minimal sketch of the financial-netting point appears in the third sketch after this episode list.)
    --------  
    18:06
  • “Problems I’ve Tried to Legibilize” by Wei Dai
    Looking back, it appears that much of my intellectual output could be described as legibilizing work, or trying to make certain problems in AI risk more legible to myself and others. I've organized the relevant posts and comments into the following list, which can also serve as a partial guide to problems that may need to be further legibilized, especially beyond LW/rationalists, to AI researchers, funders, company leaders, government policymakers, their advisors (including future AI advisors), and the general public.
    Philosophical problems
      • Probability theory
      • Decision theory
      • Beyond astronomical waste (possibility of influencing vastly larger universes beyond our own)
      • Interaction between bargaining and logical uncertainty
      • Metaethics
      • Metaphilosophy: 1, 2
    Problems with specific philosophical and alignment ideas
      • Utilitarianism: 1, 2
      • Solomonoff induction
      • "Provable" safety
      • CEV
      • Corrigibility
      • IDA (and many scattered comments)
      • UDASSA
      • UDT
    Human-AI safety (x- and s-risks arising from the interaction between human nature and AI design)
      • Value differences/conflicts between humans
      • “Morality is scary” (human morality is often the result of status games amplifying random aspects of human value, with frightening results) [...]
    First published: November 9th, 2025
    Source: https://www.lesswrong.com/posts/7XGdkATAvCTvn4FGu/problems-i-ve-tried-to-legibilize
    Narrated by TYPE III AUDIO.
    --------  
    4:17
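The “Varieties Of Doom” entry above describes doom as a distribution over distinct AI outcomes rather than a single event. A minimal, purely illustrative sketch of that framing follows: the category names echo the episode outline, but the probabilities are invented (chosen only to sum to the 12% figure mentioned in the description) and the categories are assumed to be mutually exclusive.

```python
# Illustrative only: decompose a headline "p(doom)" into named outcome
# categories, mirroring the post's framing of doom as a distribution over
# outcomes rather than a single event. The probabilities are made up and
# the categories are assumed mutually exclusive.

doom_components = {
    "existential ennui": 0.02,
    "no immortalist luxury gay space communism": 0.02,
    "human stock expended faster than replacement": 0.02,
    "wiped out by AI successor species": 0.03,
    "paperclipper": 0.02,
    "recipes for ruin": 0.01,
}

p_doom = sum(doom_components.values())
print(f"aggregate p(doom): {p_doom:.2f}")
for outcome, p in sorted(doom_components.items(), key=lambda kv: -kv[1]):
    print(f"  {outcome}: {p:.2f} ({p / p_doom:.0%} of total)")
```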
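The MIRI report entry above centers on limiting the scale of AI training. As a rough sketch of what a compute-based limit could involve, the following check uses the common 6 × parameters × tokens approximation for dense-model training FLOPs; the cap value and the example runs are hypothetical and not taken from the report.

```python
# Rough illustration of a compute-threshold check. The 6 * N * D rule is a
# common approximation for dense-transformer training FLOPs; the cap value
# and example runs below are hypothetical, not taken from the MIRI report.

FLOP_CAP = 1e25  # hypothetical per-run training compute cap

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute as 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

runs = {
    "small run":    (7e9, 2e12),   # 7B parameters, 2T tokens
    "frontier run": (1e12, 3e13),  # 1T parameters, 30T tokens
}

for name, (n, d) in runs.items():
    flops = training_flops(n, d)
    status = "exceeds cap" if flops > FLOP_CAP else "within cap"
    print(f"{name}: {flops:.2e} FLOPs -> {status}")
```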
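The “Where is the Capital?” entry above rests on the accounting identity that financial assets and liabilities cancel across the economy, leaving nonfinancial assets as the place aggregate savings end up. A minimal sketch of that netting, with invented balance-sheet numbers:

```python
# Illustrative only: every financial asset is someone else's liability, so
# across all balance sheets in a closed economy the financial positions net
# to zero and only nonfinancial assets (roads, buildings, machines) remain.
# All numbers below are invented.

balance_sheets = {
    # entity: (nonfinancial assets, financial assets, financial liabilities)
    "household": (300, 500, 100),
    "company":   (700, 200, 500),
    "bank":      (0,   400, 500),
}

net_financial = sum(fa - fl for _, fa, fl in balance_sheets.values())
total_nonfinancial = sum(nf for nf, _, _ in balance_sheets.values())

assert net_financial == 0  # financial claims cancel out by construction
print(f"economy-wide nonfinancial capital: {total_nonfinancial}")
```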


About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.
Podcast website


