Dr. Roman Yampolskiy joins me to explore one of the most urgent and uncomfortable questions of our time: what happens when we create intelligence that surpasses our own? We unpack the difference between the AI tools we use today and the emergence of artificial general intelligence, and why the transition from narrow systems to self-improving intelligence may mark a point where human control is no longer possible. Roman shares why even the people building these systems do not fully understand how they work, and why that gap in understanding becomes exponentially more dangerous as capabilities increase.
In this conversation, we explore the limits of control, prediction, and safety in a world where intelligence can recursively improve itself beyond human comprehension. Roman lays out why the problem of AI alignment may be fundamentally unsolvable, what timelines experts are realistically considering, and why even a single mistake at that level could have irreversible consequences. This episode invites a deeper reflection on what we are creating, what we assume we can control, and whether humanity is prepared for the intelligence it is bringing into existence.
BiOptimizers - Best magnesium to enhance your sleep
http://bioptimizers.com/knowthyself
Use code KNOWTHYSELF for 15% off at checkout
BASED Body Works
Use code KNOWTHYSELF for a free toiletry bag when buying a set!
https://www.basedbodyworks.com
André's Book Recs: https://www.knowthyselfpodcast.com/book-list
___________
00:00 Intro
01:25 What Is AGI and Why Should We Be Scared?
05:17 Roman's Journey: From Optimism to Impossibility
09:07 The High Risk, Zero Reward Equation
13:01 Why Superintelligence Is Uncontrollable, Unexplainable, and Unverifiable
18:00 How Long Do We Have? The AGI Timeline
21:24 How Superintelligence Could Actually Kill Us
23:28 Are We Living in a Simulation?
28:21 Can AI Become Conscious?
31:28 Ad: BiOptimizers
32:41 The Possible Timelines: Terminator, the Matrix, or the Zoo
42:24 I-Risk, X-Risk, and S-Risk: Three Ways It Goes Wrong
46:31 The Human Meaning Crisis: Jobs, Purpose, and What's Left
49:02 Ad: BASED Body Works
50:20 What Empowers Us as Individuals Right Now
59:37 The Race to Doom: Who's Building It and Why They Won't Stop
1:07:41 Can AI Be Conscious — and Does It Already Have Internal Experiences?
1:12:41 Hacking the Simulation: Quantum, DMT, and Escaping the Code
1:18:30 Simulation Theory, Religion, and the Same Ancient Map
1:29:34 The Deal Roman Would Offer Altman, Dario, and Elon
1:39:44 What Is Humor? A Computer Scientist's Theory
1:43:03 What Comes After: Singularity, Death, and Knowing Thyself
___________
Episode Resources:
https://www.romanyampolskiy.com/
https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X
https://www.instagram.com/andreduqum/
https://www.instagram.com/knowthyself/
https://www.youtube.com/@knowthyselfpodcast
https://www.knowthyselfpodcast.com
Listen to the show:
Spotify: https://spoti.fi/4bZMq9l
Apple: https://apple.co/4iATICX