Nate Soares is the President of the Machine Intelligence Research Institute, and plays a central role in setting MIRI’s vision and strategy. Soares has been working in the field for over a decade, and is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs. Prior to MIRI, Soares worked as an engineer at Google and Microsoft, as a research associate at the National Institute of Standards and Technology, and as a contractor for the US Department of Defense. In this episode, Nate and Robinson discuss the problems of AI from the ground up. They touch on how AI is trained, why it will surpass human intelligence, why this is dangerous, how it could wipe out humankind, and more. Nate’s recent book, co-written with Eliezer Yudkowsky, is If Anyone Builds It, Everyone Dies (Little, Brown and Company, 2025).
Nate’s X: https://x.com/So8res
MIRI: https://intelligence.org
OUTLINE
00:00 Nate’s Existential Dread
07:11 What’s the REAL Problem with Artificial Intelligence?
11:39 How Is AI Trained?
17:15 The Vital Importance of Interpreting AI
20:53 Why AI Will Soon Surpass Human Intelligence
32:58 Why Solving the AI Alignment Problem Is Crucial to Human Survival
38:40 Will AI Render Human Software Engineers Obsolete?
48:03 Does It Make Sense to Say AI Has Goals?
01:00:02 Why AI Consciousness Is Unimportant
01:11:33 How, Realistically, Could AI Wipe Out Humanity?
01:35:27 A Sci-Fi (But Realistic) AI Doomsday Scenario
01:44:34 Is There Hope that Humans Will Survive AI?