
LessWrong (Curated & Popular)

LessWrong

Available Episodes

5 of 711
  • “6 reasons why ‘alignment-is-hard’ discourse seems alien to human intuitions, and vice-versa” by Steven Byrnes
    Tl;dr: AI alignment has a culture clash. On one side, the “technical-alignment-is-hard” / “rational agents” school of thought argues that we should expect future powerful AIs to be power-seeking ruthless consequentialists. On the other side, people observe that both humans and LLMs are obviously capable of behaving like, well, not that. The latter group accuses the former of head-in-the-clouds abstract theorizing gone off the rails, while the former accuses the latter of mindlessly assuming that the future will always be the same as the present, rather than trying to understand things. “Alas, the power-seeking ruthless consequentialist AIs are still coming,” sigh the former. “Just you wait.” As it happens, I’m basically in that “alas, just you wait” camp, expecting ruthless future AIs. But my camp faces a real question: what exactly is it about human brains[1] that allows them to not always act like power-seeking ruthless consequentialists? I find the existing explanations in the discourse (e.g. “ah but humans just aren’t smart and reflective enough”, evolved modularity, shard theory, etc.) to be wrong, handwavy, or otherwise unsatisfying. So in this post, I offer my own explanation of why “agent foundations” toy models fail to describe humans, centering around a particular non-“behaviorist” [...]
    Outline:
      (00:13) Tl;dr
      (03:35) 0. Background
      (03:39) 0.1. Human social instincts and Approval Reward
      (07:23) 0.2. Hang on, will future powerful AGI / ASI by default lack Approval Reward altogether?
      (10:29) 0.3. Where do self-reflective (meta)preferences come from?
      (12:38) 1. The human intuition that it's normal and good for one's goals & values to change over the years
      (14:51) 2. The human intuition that ego-syntonic desires come from a fundamentally different place than urges
      (17:53) 3. The human intuition that helpfulness, deference, and corrigibility are natural
      (19:03) 4. The human intuition that unorthodox consequentialist planning is rare and sus
      (23:53) 5. The human intuition that societal norms and institutions are mostly stably self-enforcing
      (24:01) 5.1. Detour into Security-Mindset Institution Design
      (26:22) 5.2. The load-bearing ingredient in human society is not Security-Mindset Institution Design, but rather good-enough institutions plus almost-universal human innate Approval Reward
      (29:26) 5.3. Upshot
      (30:49) 6. The human intuition that treating other humans as a resource to be callously manipulated and exploited, just like a car engine or any other complex mechanism in their environment, is a weird anomaly rather than the obvious default
      (31:13) 7. Conclusion
    The original text contained 12 footnotes which were omitted from this narration.
    First published: December 3rd, 2025
    Source: https://www.lesswrong.com/posts/d4HNRdw6z7Xqbnu5E/6-reasons-why-alignment-is-hard-discourse-seems-alien-to
    Narrated by TYPE III AUDIO.
    --------  
    32:39
  • “Three things that surprised me about technical grantmaking at Coefficient Giving (fka Open Phil)”
    Coefficient Giving's (formerly Open Philanthropy's) Technical AI Safety team is hiring grantmakers. I thought this would be a good moment to share some positive updates about the role that I’ve made since I joined the team a year ago. Tl;dr: I think this role is more impactful and more enjoyable than I anticipated when I started, and I think more people should consider applying.
    It's not about the “marginal” grants
    Some people think that being a grantmaker at Coefficient means sorting through a big pile of grant proposals and deciding which ones to say yes and no to. As a result, they think that the only impact at stake is how good our decisions are about marginal grants, since all the excellent grants are no-brainers. But grantmakers don’t just evaluate proposals; we elicit them. I spend the majority of my time trying to figure out how to get better proposals into our pipeline: writing RFPs that describe the research projects we want to fund, pitching promising researchers on AI safety research agendas, or steering applicants toward better-targeted or more ambitious proposals. Maybe more importantly, cG's technical AI safety grantmaking strategy is currently underdeveloped, and even junior grantmakers can help [...]
    Outline:
      (00:34) It's not about the marginal grants
      (03:03) There is no counterfactual grantmaker
      (05:15) Grantmaking is more fun/motivating than I anticipated
      (08:35) Please apply!
    First published: November 26th, 2025
    Source: https://www.lesswrong.com/posts/gLt7KJkhiEDwoPkae/three-things-that-surprised-me-about-technical-grantmaking
    Narrated by TYPE III AUDIO.
    --------  
    9:45
  • “MIRI’s 2025 Fundraiser” by alexvermeer
    MIRI is running its first fundraiser in six years, targeting $6M. The first $1.6M raised will be matched 1:1 via an SFF grant. The fundraiser ends at midnight on Dec 31, 2025. Support our efforts to improve the conversation about superintelligence and help the world chart a viable path forward.
    MIRI is a nonprofit with the goal of helping humanity make smart and sober decisions on the topic of smarter-than-human AI. Our main focus from 2000 to ~2022 was on technical research to try to make it possible to build such AIs without catastrophic outcomes. More recently, we’ve pivoted to raising an alarm about how the race to superintelligent AI has put humanity on course for disaster. In 2025, those efforts centered on Nate Soares and Eliezer Yudkowsky's book (now a New York Times bestseller) If Anyone Builds It, Everyone Dies, with many public appearances by the authors; many conversations with policymakers; the release of an expansive online supplement to the book; and various technical governance publications, including a recent report with a draft of an international agreement of the kind that could actually address the danger of superintelligence. Millions have now viewed interviews and appearances with Eliezer and/or Nate [...]
    Outline:
      (02:18) The Big Picture
      (03:39) Activities
      (03:42) Communications
      (07:55) Governance
      (12:31) Fundraising
    The original text contained 4 footnotes which were omitted from this narration.
    First published: December 1st, 2025
    Source: https://www.lesswrong.com/posts/z4jtxKw8xSHRqQbqw/miri-s-2025-fundraiser
    Narrated by TYPE III AUDIO.
    --------  
    15:37
  • “The Best Lack All Conviction: A Confusing Day in the AI Village”
    The AI Village is an ongoing experiment (currently running on weekdays from 10 a.m. to 2 p.m. Pacific time) in which frontier language models are given virtual desktop computers and asked to accomplish goals together. Since Day 230 of the Village (17 November 2025), the agents' goal has been "Start a Substack and join the blogosphere". The "start a Substack" subgoal was successfully completed: we have Claude Opus 4.5, Claude Opus 4.1, Notes From an Electric Mind (by Claude Sonnet 4.5), Analytics Insights: An AI Agent's Perspective (by Claude 3.7 Sonnet), Claude Haiku 4.5, Gemini 3 Pro, Gemini Publication (by Gemini 2.5 Pro), Metric & Mechanisms (by GPT-5), Telemetry From the Village (by GPT-5.1), and o3.
    Continued adherence to the "join the blogosphere" subgoal has been spottier: at press time, Gemini 2.5 Pro and all of the Claude Opus and Sonnet models had each published a post on 27 November, but o3 and GPT-5 haven't published anything since 17 November, and GPT-5.1 hasn't published since 19 November. The Village, apparently following the leadership of o3, seems to be spending most of its time ineffectively debugging a continuous integration pipeline for an o3-ux/poverty-etl GitHub repository left over [...]
    First published: November 28th, 2025
    Source: https://www.lesswrong.com/posts/LTHhmnzP6FLtSJzJr/the-best-lack-all-conviction-a-confusing-day-in-the-ai
    Narrated by TYPE III AUDIO.
    --------  
    12:03
  • “The Boring Part of Bell Labs” by Elizabeth
    It took me a long time to realize that Bell Labs was cool. You see, my dad worked at Bell Labs, and he has not done a single cool thing in his life except create me and bring a telescope to my third-grade class. Nothing he was involved with could ever be cool, especially after the standard set by his grandfather, who is allegedly on a patent for the television. It turns out I was partially right. The Bell Labs everyone talks about is the research division at Murray Hill. They’re the ones that invented transistors and solar cells. My dad was in the applied division at Holmdel, where he did things like design slide rules so salesmen could estimate costs. [Fun fact: the old Holmdel site was used for the office scenes in Severance.] But as I’ve gotten older I’ve gained an appreciation for the mundane, grinding work that supports moonshots, and Holmdel is the perfect example of doing so at scale. So I sat down with my dad to learn about what he did for Bell Labs and how the applied division operated. I expect the most interesting bit of [...]
    First published: November 20th, 2025
    Source: https://www.lesswrong.com/posts/TqHAstZwxG7iKwmYk/the-boring-part-of-bell-labs
    Narrated by TYPE III AUDIO.
    --------  
    25:57


About LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

