
The MAD Podcast with Matt Turck

Matt Turck
Latest episodes

116 episodes

  • Why Every AI Agent Needs Its Own Computer | Ivan Burazin (Daytona)

    May 14, 2026 | 1 hr 5 min
    If AI agents are the new digital knowledge workers, where exactly do they do their work? In this episode of the MAD Podcast, Ivan Burazin joins us to unpack the emerging infrastructure stack for AI agents and explain why every agent needs its own secure, stateful "computer." We explore the technical realities of sandboxes, dive into why legacy, stateless hyperscalers weren't built for these new workloads, and break down the mechanics of microVMs and custom schedulers alongside a contrarian prediction on an impending CPU shortage. Finally, Ivan delivers an absolute masterclass on product-led growth, community building, and go-to-market strategy for technical founders.

    (00:40) Intro
    (02:13) What is an AI agent sandbox?
    (03:17) Security risks of running agents locally
    (05:17) Stateful vs. stateless hyperscalers
    (07:04) The history of cloud IDEs and the end of localhost
    (09:45) Do all AI agents need a sandbox?
    (12:26) Sandbox use cases: RL evals & background agents
    (14:10) Unpacking the emerging AI Agent Stack
    (16:20) The unsolved problem of agent memory and learning
    (19:37) Where sandboxes fit in the agent harness
    (21:35) OpenAI, Anthropic, and agent SDKs
    (23:06) Ivan's founder journey: From CodeAnywhere to Daytona
    (26:59) GTM strategies and building developer communities
    (33:48) Why customer support is your best GTM strategy
    (35:34) Leveraging Twitter during the AI super cycle
    (40:50) The technical anatomy of a sandbox
    (41:53) Why fast spin-up speeds maximize GPU efficiency
    (46:09) Firecracker, QEMU, and isolation primitives
    (49:58) Why sandbox snapshots and state forking matter
    (51:40) Why Daytona built a custom scheduler from scratch
    (55:24) The challenge of long-running stateful sandboxes
    (58:10) The build your own sandbox trap
    (1:01:03) Why AI agents might trigger a global CPU shortage
    (1:02:46) The future of the AI Agent Stack
  • OpenAI Board Member Zico Kolter on the Real Risks of Frontier AI

    May 7, 2026 | 1 hr 16 min
    What actually happens before a frontier AI model gets released — and who decides whether it is safe enough? In this episode of The MAD Podcast, Matt Turck sits down with Zico Kolter — OpenAI board member, Head of the Machine Learning Department at Carnegie Mellon, and co-founder of Gray Swan — for a deep conversation on the real risks of frontier AI. They discuss how OpenAI’s safety oversight works before major model releases, why more powerful models do not automatically become safer, how jailbreaks and prompt injection expose real weaknesses in AI systems, why AI agents dramatically expand the attack surface, and where frontier AI is headed next. A clear, practical discussion on OpenAI, AI safety, AI security, AI agents, frontier models, red teaming, reinforcement learning, and the future of AI governance.

    (00:00) Intro
    (01:32) OpenAI board role and Safety & Security Committee
    (03:53) How OpenAI reviews major model releases
    (05:33) OpenAI’s preparedness framework explained
    (09:46) Are frontier AI models getting safer?
    (12:33) Why AI safety does not come from scale
    (15:23) The four categories of AI risk
    (19:38) Doomerism vs accelerationism in AI
    (24:11) The six-month AI pause debate
    (26:20) AI safety as a global effort
    (28:04) How Zico Kolter got into machine learning
    (31:05) OpenAI in the early days
    (34:14) Why Carnegie Mellon became an AI powerhouse
    (38:43) What Gray Swan does in AI security
    (40:44) AI safety vs AI security
    (43:15) The GCG jailbreak paper
    (49:19) How AI labs responded to jailbreak research
    (50:19) State-of-the-art AI defenses
    (52:32) State-of-the-art AI attacks
    (54:22) Why AI agents expand the attack surface
    (58:39) Are AI agents ready for production?
    (59:40) Mechanistic interpretability explained
    (1:02:31) Will AI be safer in two years?
    (1:03:46) Reinforcement learning and self-improving models
    (1:08:09) Do post-transformer architectures matter?
    (1:09:29) Best research directions in AI now
    (1:11:00) Zico Kolter’s Intro to Modern AI course
    (1:14:53) Why modern AI is simpler than people think
  • Anthropic’s Felix Rieseberg: Claude Cowork, Mythos, and the SaaS Extinction

    April 10, 2026 | 58 min
    Felix Rieseberg leads engineering for Claude Cowork at Anthropic, one of the most important new agentic AI products on the market today. In this episode of The MAD Podcast, Matt Turck sits down with Felix to discuss Anthropic’s newly announced Claude Mythos Preview, why Felix sees it as a genuine step-function change, and what it means when frontier AI starts showing outsized cybersecurity capabilities.

    The conversation then goes deep on Claude Cowork: how it emerged from Claude Code, what the famous “10-day” story really means, why Anthropic believes AI needs access to the local computer, and how Cowork actually works under the hood. Felix explains why skills are just text files, why memory is often just text files too, and how Anthropic thinks about building trust in AI agents.

    They also explore some of the biggest questions in AI product design and the future of software: why UX may matter as much as the model itself, why execution is becoming dramatically cheaper, what that means for product management and startups, and why Felix believes taste, alignment, and understanding humans may matter more than ever.

    (00:00) Intro
    (01:53) Claude Mythos Preview and the “step-function change”
    (06:16) Why Anthropic is treating Mythos differently
    (11:19) The real story behind Claude Cowork’s “10-day” build
    (12:42) Why Anthropic realized Claude Code needed a non-technical version
    (15:44) What Claude Cowork actually is
    (17:03) Under the hood: virtual machines, tools, skills
    (18:36) Where Cowork’s memory actually lives
    (19:26) How Cowork connects to files, apps, and the internet
    (20:45) Why Felix thinks the local computer is under-appreciated
    (24:49) Trust: how do you get users comfortable with AI agents?
    (28:45) What UX actually means for AI agents
    (31:27) Anthropic Cowork's roadmap is only one month long
    (34:12) Building 100 prototypes
    (35:10) If execution is free, what becomes the bottleneck?
    (37:25) Does it come down to taste?
    (40:12) The hardest part of building Claude Cowork
    (41:43) Advice for founders building AI agents
    (44:21) SaaSpocalypse: what’s left for software startups?
    (49:30) Where AI agents are going next
    (51:20) Regulated industries and enterprise adoption
    (54:15) Hot takes: what's underrated, overrated, and what Felix would build today
  • AI is Already Building AI | Google DeepMind’s Mostafa Dehghani

    April 2, 2026 | 1 hr 4 min
    Are we truly on the verge of AI automating its own research and development? In this deep-dive episode of the MAD Podcast, Matt Turck sits down with Mostafa Dehghani, a pioneering AI researcher at Google DeepMind whose work on Universal Transformers and Vision Transformers (ViT) helped lay the groundwork for today's frontier models.

    Moving past the hype, Mostafa breaks down the actual mechanics of "thinking in loops" and Recursive Self-Improvement (RSI). He explores the critical bottlenecks holding back true AGI—from evaluation limits and formal verification to the brutal math of long-horizon reliability.

    Mostafa and Matt also discuss the shift from pre-training to post-training, how Gemini's Nano Banana 2 processes pixels and text simultaneously, and why the "frozen" nature of today's models means Continual Learning is the next massive frontier for enterprise AI and data pipelines.

    (00:00) Intro
    (01:17) What “loops” in AI actually mean
    (05:04) Self-improvement as the next chapter of machine learning
    (07:32) Are Karpathy’s autoresearch agents an early form of AI self-improvement?
    (08:56) AI building AI: how close are we?
    (10:02) The biggest bottlenecks: evals, automation, and long horizons
    (12:36) Can formal verification unlock recursive self-improvement?
    (14:06) What is model collapse?
    (15:33) Generalization vs specialization in AI
    (18:04) What is a specialized model today?
    (20:57) Could top AI researchers themselves be automated?
    (24:02) If AI builds AI, does data matter less than compute?
    (26:22) Post-training vs pre-training: where will progress come from?
    (28:14) Why pre-training is not dead
    (29:45) What is continual learning?
    (31:53) How real is continual learning today?
    (33:43) Mostafa Dehghani’s background and path into AI
    (36:13) The story behind Universal Transformers
    (39:56) How Vision Transformers changed AI
    (43:47) Gemini, multimodality, and Nano Banana
    (47:46) Why multimodality helps build a world model
    (52:44) Why image generation is getting faster and more efficient
    (54:44) Hot takes
    (54:53) What the AI field is getting wrong
    (56:17) Why continual learning is underrated
    (57:26) Does RAG go away over time?
    (58:21) What people are too confident about in AI
    (59:56) If he were starting from scratch today
  • Benedict Evans: OpenAI’s Moat Problem & the Future of Software

    March 19, 2026 | 1 hr 1 min
    Is OpenAI trapped without a defensible moat? World-renowned independent tech analyst Benedict Evans returns to the MAD Podcast and argues that foundation models have zero network effects, making them closer to commodity infrastructure than the next iOS. We unpack OpenAI’s "mile wide, inch deep" usage problem, why simply having a "better model" does not solve the core UX challenge, and whether the hyperscalers' massive CapEx spending is a sustainable strategy or a fast track to financial gravity.

    We also explore the reality behind the recent "SaaSpocalypse", the structural shift from traditional enterprise systems to "improvised" and "ephemeral" software, and where the actual white space lies for founders and investors navigating the artificial intelligence hype cycle.

    (00:00) Intro
    (01:06) OpenAI's Focus Shift
    (03:12) ChatGPT usage: a "mile wide, inch deep"
    (09:03) Why better models do not solve the real problem
    (13:58) Why AI product teams are strategy takers, not strategy setters
    (15:38) Do agents help create defensibility?
    (20:06) OpenClaw and the "Desktop Linux" moment for AI
    (25:52) Why "everyone will build their own software" is completely wrong
    (28:09) Improvised software vs. institutionalized software
    (29:23) The Jevons Paradox: Why there will be more software, not less
    (36:15) Are we heading toward value destruction before value creation?
    (38:03) Circular revenue, leverage, and AI bubble dynamics
    (38:53) Big Tech's Trillion-Dollar CapEx Crisis & Financial Gravity
    (45:23) Why AI job exposure charts can be misleading
    (52:15) How Fortune 500 Execs are actually deploying AI today
    (56:45) The White Space: What this means for founders and investors
About The MAD Podcast with Matt Turck
The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI & Data landscape, hosted by Matt Turck, a leading AI & data investor and Partner at FirstMark Capital.