[Linkpost] “MAGA populists call for holy war against Big Tech” by Remmelt
This is a link post.

Excerpts on AI

Geoffrey Miller was handed the mic and started berating one of the panelists: Shyam Sankar, the chief technology officer of Palantir, who is in charge of the company's AI efforts. “I argue that the AI industry shares virtually no ideological overlap with national conservatism,” Miller said, referring to the conference's core ideology. Hours earlier, Miller, a psychology professor at the University of New Mexico, had been on that stage for a panel called “AI and the American Soul,” calling for the populists to wage a literal holy war against artificial intelligence developers “as betrayers of our species, traitors to our nation, apostates to our faith, and threats to our kids.” Now, he stared right at the technologist who’d just given a speech arguing that tech founders were just as heroic as the Founding Fathers, who are sacred figures to the natcons. The [...]

---

First published: September 8th, 2025

Source: https://www.lesswrong.com/posts/TiQGC6woDMPJ9zbNM/maga-populists-call-for-holy-war-against-big-tech

Linkpost URL: https://www.theverge.com/politics/773154/maga-tech-right-ai-natcon

---

Narrated by TYPE III AUDIO.
--------
3:44
“Your LLM-assisted scientific breakthrough probably isn’t real” by eggsyntax
Summary

In recent months, an increasing number of people have come to believe that they've made an important and novel scientific breakthrough in collaboration with an LLM, when they actually haven't. If you believe that you have made such a breakthrough, please consider that you might be mistaken! Many more people have been fooled than have come up with actual breakthroughs, so the smart next step is to do some sanity-checking even if you're confident that yours is real. New ideas in science turn out to be wrong most of the time, so you should be pretty skeptical of your own ideas and subject them to the reality-checking I describe below.

Context

This is intended as a companion piece to 'So You Think You've Awoken ChatGPT'[1]. That post describes the related but different phenomenon of LLMs giving people the impression that they've suddenly attained consciousness.

Your situation

If [...]

---

Outline:

(00:11) Summary
(00:49) Context
(01:04) Your situation
(02:41) How to reality-check your breakthrough
(03:16) Step 1
(05:55) Step 2
(07:40) Step 3
(08:54) What to do if the reality-check fails
(10:13) Could this document be more helpful?
(10:31) More information

The original text contained 5 footnotes which were omitted from this narration.

---

First published: September 2nd, 2025

Source: https://www.lesswrong.com/posts/rarcxjGp47dcHftCP/your-llm-assisted-scientific-breakthrough-probably-isn-t

---

Narrated by TYPE III AUDIO.
--------
11:52
“Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro” by ryan_greenblatt
I've recently written about how I've updated against seeing substantially faster than trend AI progress due to quickly massively scaling up RL on agentic software engineering. One response I've heard is something like:

RL scale-ups so far have used very crappy environments due to difficulty quickly sourcing enough decent (or even high quality) environments. Thus, once AI companies manage to get their hands on actually good RL environments (which could happen pretty quickly), performance will increase a bunch.

Another way to put this response is that AI companies haven't actually done a good job scaling up RL—they've scaled up the compute, but with low quality data—and once they actually do the RL scale-up for real this time, there will be a big jump in AI capabilities (which yields substantially above trend progress). I'm skeptical of this argument because I think that ongoing improvements to RL environments [...]

---

Outline:

(04:18) Counterargument: Actually, companies haven't gotten around to improving RL environment quality until recently (or there is substantial lead time on scaling up RL environments, etc.), so better RL environments didn't drive much of late 2024 and 2025 progress
(05:24) Counterargument: AIs will soon reach a critical capability threshold where AIs themselves can build high quality RL environments
(06:51) Counterargument: AI companies are massively fucking up their training runs (either pretraining or RL) and once they get their shit together more, we'll see fast progress
(08:34) Counterargument: This isn't that related to RL scale-up, but OpenAI has some massive internal advance in verification which they demonstrated via getting IMO gold, and this will cause (much) faster progress late this year or early next year
(10:12) Thoughts and speculation on scaling up the quality of RL environments

The original text contained 5 footnotes which were omitted from this narration.

---

First published: September 3rd, 2025

Source: https://www.lesswrong.com/posts/HsLWpZ2zad43nzvWi/trust-me-bro-just-one-more-rl-scale-up-this-one-will-be-the

---

Narrated by TYPE III AUDIO.
--------
14:02
“⿻ Plurality & 6pack.care” by Audrey Tang
(Cross-posted from speaker's notes of my talk at DeepMind today.)

Good local time, everyone. I am Audrey Tang, 🇹🇼 Taiwan's Cyber Ambassador and first Digital Minister (2016-2024). It is an honor to be here with you all at DeepMind.

When we discuss "AI" and "society," two futures compete. In one—arguably the default trajectory—AI supercharges conflict. In the other, it augments our ability to cooperate across differences. This means treating differences as fuel and inventing a combustion engine to turn them into energy, rather than constantly putting out fires. This is what I call ⿻ Plurality. Today, I want to discuss an application of this idea to AI governance, developed at Oxford's Ethics in AI Institute, called the 6-Pack of Care.

As AI becomes a thousand, perhaps ten thousand times faster than us, we face a fundamental asymmetry. We become the garden; AI becomes the gardener. At that speed, traditional [...]

---

Outline:

(02:17) From Protest to Demo
(03:43) From Outrage to Overlap
(04:57) From Gridlock to Governance
(06:40) Alignment Assemblies
(08:25) From Tokyo to California
(09:48) From Pilots to Policy
(12:29) From Is to Ought
(13:55) Attentiveness: caring about
(15:05) Responsibility: taking care of
(16:01) Competence: care-giving
(16:38) Responsiveness: care-receiving
(17:49) Solidarity: caring-with
(18:41) Symbiosis: kami of care
(21:06) Plurality is Here
(22:08) We, the People, are the Superintelligence

---

First published: September 1st, 2025

Source: https://www.lesswrong.com/posts/anoK4akwe8PKjtzkL/plurality-and-6pack-care

---

Narrated by TYPE III AUDIO.
--------
23:57
[Linkpost] “The Cats are On To Something” by Hastings
This is a link post.

So the situation as it stands is that the fraction of the light cone expected to be filled with satisfied cats is not zero. This is already remarkable. What's more remarkable is that this was orchestrated starting nearly 5000 years ago. As far as I can tell, there were three intelligences operating in stone age Egypt, each completely alien to the others: humans, cats, and the gibbering alien god that is cat evolution (henceforth the cat shoggoth). What went down was that humans were by far the most powerful of those intelligences, and in the face of this disadvantage the cat shoggoth aligned the humans, not to its own utility function, but to the cats themselves. This is a phenomenally important case to study: it's very different from other cases like pigs or chickens, where the shoggoth got what it wanted at the brutal expense of the desires [...]

---

First published: September 2nd, 2025

Source: https://www.lesswrong.com/posts/WLFRkm3PhJ3Ty27QH/the-cats-are-on-to-something

Linkpost URL: https://www.hgreer.com/CatShoggoth/

---

Narrated by TYPE III AUDIO.
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.