LessWrong (Curated & Popular)

By LessWrong
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

Available episodes

5 of 418
  • “AI companies are unlikely to make high-assurance safety cases if timelines are short” by ryan_greenblatt
    One hope for keeping existential risks low is to get AI companies to (successfully) make high-assurance safety cases: structured and auditable arguments that an AI system is very unlikely to result in existential risks given how it will be deployed.[1] Concretely, once AIs are quite powerful, high-assurance safety cases would require making a thorough argument that the level of (existential) risk caused by the company is very low; perhaps they would require that the total chance of existential risk over the lifetime of the AI company[2] is less than 0.25%.[3][4]

    The idea of making high-assurance safety cases (once AI systems are dangerously powerful) is popular in some parts of the AI safety community, and a variety of work appears to focus on this. Further, Anthropic has expressed an intention (in their RSP) to "keep risks below acceptable levels"[5] and there is a common impression that Anthropic would pause [...]

    Outline:
    (03:19) Why are companies unlikely to succeed at making high-assurance safety cases in short timelines?
    (04:14) Ensuring sufficient security is very difficult
    (04:55) Sufficiently mitigating scheming risk is unlikely
    (09:35) Accelerating safety and security with earlier AIs seems insufficient
    (11:58) Other points
    (14:07) Companies likely won't unilaterally slow down if they are unable to make high-assurance safety cases
    (18:26) Could coordination or government action result in high-assurance safety cases?
    (19:55) What about safety cases aiming at a higher risk threshold?
    (21:57) Implications and conclusions

    The original text contained 20 footnotes which were omitted from this narration.

    First published: January 23rd, 2025
    Source: https://www.lesswrong.com/posts/neTbrpBziAsTH5Bn7/ai-companies-are-unlikely-to-make-high-assurance-safety
    Narrated by TYPE III AUDIO.
    --------  
    24:33
  • “Mechanisms too simple for humans to design” by Malmesbury
    Cross-posted from Telescopic Turnip.

    As we all know, humans are terrible at building butterflies. We can make a lot of objectively cool things like nuclear reactors and microchips, but we still can't create a proper artificial insect that flies, feeds, and lays eggs that turn into more butterflies. That seems like evidence that butterflies are incredibly complex machines – certainly more complex than a nuclear power facility.

    Likewise, when you google "most complex object in the universe", the first result is usually not something invented by humans – rather, what people find the most impressive seems to be "the human brain".

    As we are getting closer to building super-human AIs, people wonder what kind of unspeakable super-human inventions these machines will come up with. And, most of the time, the most terrifying technology people can think of is along the lines of "self-replicating autonomous nano-robots" – in other words [...]

    Outline:
    (02:04) You are simpler than Microsoft Word™
    (07:23) Blood for the Information Theory God
    (12:54) The Barrier
    (15:26) Implications for Pokémon (SPECULATIVE)
    (17:44) Seeing like a 1.25 MB genome
    (21:55) Mechanisms too simple for humans to design
    (26:42) The future of non-human design

    The original text contained 2 footnotes which were omitted from this narration. The original text contained 5 images which were described by AI.

    First published: January 22nd, 2025
    Source: https://www.lesswrong.com/posts/6hDvwJyrwLtxBLHWG/mechanisms-too-simple-for-humans-to-design
    Narrated by TYPE III AUDIO.
    --------  
    28:36
  • “The Gentle Romance” by Richard_Ngo
    This is a link post. A story I wrote about living through the transition to utopia.

    This is the one story that I've put the most time and effort into; it charts a course from the near future all the way to the distant stars.

    First published: January 19th, 2025
    Source: https://www.lesswrong.com/posts/Rz4ijbeKgPAaedg3n/the-gentle-romance
    Narrated by TYPE III AUDIO.
    --------  
    0:34
  • “Quotes from the Stargate press conference” by Nikola Jurkovic
    This is a link post. Present alongside President Trump:

    • Sam Altman
    • Larry Ellison (Oracle executive chairman and CTO)
    • Masayoshi Son (Softbank CEO who believes he was born to realize ASI)

    President Trump: What we want to do is we want to keep [AI datacenters] in this country. China is a competitor and others are competitors.

    President Trump: I'm going to help a lot through emergency declarations because we have an emergency. We have to get this stuff built. So they have to produce a lot of electricity and we'll make it possible for them to get that production done very easily at their own plants if they want, where they'll build at the plant, the AI plant they'll build energy generation and that will be incredible.

    President Trump: Beginning immediately, Stargate will be building the physical and virtual infrastructure to power the next generation of [...]

    First published: January 22nd, 2025
    Source: https://www.lesswrong.com/posts/b8D7ng6CJHzbq8fDw/quotes-from-the-stargate-press-conference
    Narrated by TYPE III AUDIO.
    --------  
    3:15
  • “The Case Against AI Control Research” by johnswentworth
    The AI Control Agenda, in its own words: "… we argue that AI labs should ensure that powerful AIs are controlled. That is, labs should make sure that the safety measures they apply to their powerful models prevent unacceptably bad outcomes, even if the AIs are misaligned and intentionally try to subvert those safety measures. We think no fundamental research breakthroughs are required for labs to implement safety measures that meet our standard for AI control for early transformatively useful AIs; we think that meeting our standard would substantially reduce the risks posed by intentional subversion."

    There's more than one definition of “AI control research”, but I'll emphasize two features, which both match the summary above and (I think) are true of approximately-100% of control research in practice:

    • Control research exclusively cares about intentional deception/scheming; it does not aim to solve any other failure mode.
    • Control research exclusively cares [...]

    Outline:
    (01:34) The Model and The Problem
    (03:57) The Median Doom-Path: Slop, not Scheming
    (08:22) Failure To Generalize
    (10:59) A Less-Simplified Model
    (11:54) Recap

    The original text contained 1 footnote which was omitted from this narration. The original text contained 2 images which were described by AI.

    First published: January 21st, 2025
    Source: https://www.lesswrong.com/posts/8wBN8cdNAv3c7vt6p/the-case-against-ai-control-research
    Narrated by TYPE III AUDIO.
    --------  
    13:20
