
AI Security Podcast

TechRiot.io
Latest Episode

50 Episodes

  • AI Security Podcast

    Buy vs. Build AI Security: Why Box.com CISO is Creating their Own Agentic SOC

    22.04.2026 | 46 min
    If your AI solution is just helping humans process the same number of alerts a little faster, you haven't transformed anything, you've just created a faster hamster wheel. In this episode, Ashish and Caleb speak with Heather Ceylan, CISO at Box.com, about how she is leading a true, developer-first AI transformation within her security organization. Heather reveals the five strategic "AI Bets" Box is making. We dive into the reality of building an AI SOC, discussing how Box achieved a 38% automated triage rate for Tier 1 alerts, and why teaching AI not to hallucinate requires treating prompts like strict policy engines. The conversation also tackles the build vs. buy dilemma. Heather explains why she prefers to have her team build custom AI solutions (at least until vendors can out-innovate her engineers) and shares her biggest disappointment when evaluating AI security startups.
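One way to picture "prompts as strict policy engines" is to pair a rule-style system prompt with a validator that rejects any model output violating those rules. The sketch below is a minimal hypothetical illustration of that pattern; the schema, verdict names, and functions are assumptions for the example, not Box's actual implementation:

```python
import json

# Illustrative policy: the triage agent may only emit these verdicts.
ALLOWED_VERDICTS = {"benign", "suspicious", "escalate"}

# A policy-style prompt: numbered, testable rules rather than loose guidance.
POLICY_PROMPT = """You are a Tier 1 alert triage agent.
Rules:
1. Respond ONLY with JSON: {"verdict": "...", "evidence": ["..."]}.
2. "verdict" MUST be one of: benign, suspicious, escalate.
3. Every item in "evidence" MUST quote a field value from the alert verbatim.
4. If unsure, the verdict MUST be "escalate". Never guess."""

def validate_triage(raw_output: str, alert: dict) -> dict:
    """Enforce the policy on the model's output instead of trusting it.

    Malformed JSON raises ValueError (json.JSONDecodeError is a subclass);
    out-of-policy verdicts and ungrounded evidence are rejected explicitly.
    """
    result = json.loads(raw_output)
    if result.get("verdict") not in ALLOWED_VERDICTS:
        raise ValueError(f"verdict {result.get('verdict')!r} not in policy")
    alert_text = json.dumps(alert)
    for claim in result.get("evidence", []):
        if claim not in alert_text:  # hallucinated evidence -> reject, don't act
            raise ValueError(f"evidence not grounded in alert: {claim!r}")
    return result
```

Anything the model invents, a verdict outside the policy or evidence not present in the alert, is rejected rather than acted on, which is the general shape of keeping an automated triage agent from hallucinating.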

    Questions asked:
    (00:00) Introduction
    (02:50) Who is Heather Ceylan? (CISO at Box.com)
    (04:20) Transformation vs. Acceleration: Eliminating Classes of Work
    (06:00) Building an AI SOC: Achieving 38% Automated Triage
    (07:20) Controlling Hallucinations: Prompts as Policy Engines
    (09:30) The Buy vs. Build Debate for CISOs
    (14:00) Why Security Architecture Must Be Machine Consumable
    (16:50) The Problem with 3rd Party Risk Management
    (18:20) Box's "5 AI Bets" Framework
    (21:30) Will AI Replace SOC Analysts? Why Teams Are Embracing the Change
    (23:50) Continuous Pen Testing & Evaluating AI Startups
    (26:30) The Biggest Pitching Mistake Startups Make with CISOs
    (30:20) Shadow AI: When the Business Starts Building Its Own Apps
    (37:30) Personalized Software: The LEGO Brick Model of Security Agents
    (41:50) Fun Questions: Crocodile Jerky and Tim Tam Slams
    (44:20) Hobbies & Family: Raising Two Boys and Surviving the Chaos
    (45:30) Favorite Restaurant: Meyhouse (Turkish Cuisine in Palo Alto)

    Resources discussed during the episode:
    Heather's LinkedIn Newsletter
    Heather's post-RSA blog
    5 Big AI Bets
    https://blog.box.com/big-cybersecurity-bets-part1
    https://blog.box.com/big-cybersecurity-bets-part-2
    https://blog.box.com/big-security-bet-3-ai-redefines-vulnerability-management
    https://blog.box.com/5-big-cybersecurity-bets-4-scaling-security-architecture-ai-first-world
    https://blog.box.com/5-big-cybersecurity-bets-continuous-adversarial-validation
  • AI Security Podcast

    Anthropic's Project Mythos: Why the "Zero-Day Machine" is Terrifying the Security Industry

    18.04.2026 | 1 hr 3 min
    In this episode, Ashish and Caleb discuss the internet-breaking preview of Project Mythos, an unreleased AI model from Anthropic that has shown an unprecedented, terrifying ability to reason through code and automatically generate working zero-day exploits. We dive into the conversations surrounding Project Glasswing, Anthropic's initiative to share this model with select partners (like Palo Alto and CrowdStrike) before public release, allowing them a 100-day window to patch critical vulnerabilities. Caleb explains why this level of AI reasoning isn't just hype: early testers are reporting that Mythos is not only finding zero-days but actively detecting dormant intrusions within their own networks. If you are a CISO or security practitioner, this episode covers it all. We discuss why the traditional 30-day patch cycle is dead, why "assuming breach" is now mandatory, and why 60% of legacy security vendors might not survive this shift.

    Questions asked:
    (00:00) Introduction: The Hype Around Anthropic's Project Mythos
    (04:00) What is Project Mythos? (Reasoning and Finding Zero-Days)
    (06:50) Project Glasswing: The 100-Day Partner Patch Window
    (08:30) The Controversy: Did Anthropic Pick the Right Partners?
    (12:30) Why Anthropic Doesn't Have the Compute to Scan the Whole Internet
    (15:10) The Insider View: Mythos is Finding Dormant Intrusions
    (16:30) Why 60% of Security Vendors Will Go Away
    (19:30) Hype vs. Reality: GeoHot's Comments on Small Models
    (21:30) Eliminating False Positives in Static Code Analysis
    (23:50) The Zero-Day Clock: Time to Exploit Drops to Under 6 Hours
    (25:50) The Ethics of Zero-Days: Should Mythos Be Released at All?
    (34:30) The CISO Action Plan: Speeding Up Patching (Hours vs. Days)
    (44:50) The 3rd Party SaaS Problem: What to Do When You Can't Patch
    (46:10) "Assume Breach": Why Deception (Honeypots) is the New Priority
    (57:30) Empowering Non-Tech Teams to Build Detections
    (01:02:10) AI Makes Cheesy "Hacker Movies" a Reality

    Resources mentioned during the episode:
    Assessing Claude Mythos Preview’s cybersecurity capabilities
    Project Glasswing
    Zero Day Clock
  • AI Security Podcast

    Are AI Security Startups Faking It? How to Separate Signal from Noise

    15.04.2026 | 47 min
    With over 70 startups claiming to have built the perfect "AI SOC Analyst" or "AI Threat Hunter," how do you separate the real products from the vaporware? Recorded live at the Decibel RSAC Founder Festival, Ashish and Caleb hosted a heated panel with Edward Wu (Founder & CEO, Dropzone AI) and Lou Manousos (Co-Founder & CEO, Ent AI). The group debates the controversial claim that AI can provide 100% threat prevention and exposes the industry's dirty secret: many AI startups are "cheating" by hiding human analysts behind their software. If you are a CISO or security practitioner navigating the vendor floor at RSA, this episode provides a BS-detector framework. Learn why an AI wrapper around Claude Code isn't enough, why "consistency" is the ultimate test for AI agents, and how to verify whether a startup actually has real-world, paying enterprise deployments (and not just friendly design partners).
    Questions asked:
    (00:00) Introduction: Live with Decibel
    (01:30) Meet the Panel: Edward Wu (Dropzone) & Lou Manousos (Ent)
    (03:40) The Great Debate: Has the Industry Given Up on Prevention?
    (05:50) What Has AI Actually Solved? (Repetitive Work vs. Context)
    (09:00) How to Spot BS on the RSA Show Floor
    (11:30) Defining an AI Agent: Chatbots vs. Threat Hunters
    (13:40) The Claude Code Problem: Is Your Product Just a Wrapper?
    (16:50) The 80% Accuracy Trap & Why Consistency is Key
    (21:30) Proving ROI: Evaluating AI Agents Like Human Employees
    (24:50) The Dirty Secret: Humans Hiding Behind AI Startups
    (26:30) Spotting Fake Customer Logos
    (28:30) Audience Q&A: Scaling the SOC vs. Replacing Humans
    (36:10) Forward Deployed Engineering & Personalized Software
    (40:30) Reimagining Security Architecture from the Inside Out
    (43:30) How Ent Detects Remote Workers Outsourcing Their Jobs
    (45:30) Final Thoughts: Asking Vendors for Real Proof Points
  • AI Security Podcast

    How Lovable Manages 100+ Daily Changes, Vibe Coding & Shadow AI

    02.04.2026 | 57 min
    What does it actually look like to run security inside one of Europe's fastest-growing AI companies? In this episode, recorded live at the Munich Cybersecurity Conference (MCSC), Ashish Rajan sat down with Igor Andriushchenko, Head of Security at Lovable, the AI-native platform that lets anyone build and ship full applications without writing a line of code.
    Igor joined Lovable as employee #40. Six months later, the team had grown to 150+. Developers were running multi-agent workflows overnight, PMs were pushing pull requests, and the volume of code changes was hitting numbers that challenged every traditional security process they had. This is the security story nobody talks about in AI-native scale-ups, and Igor lived it.
    In this episode, they cover:
    • why your CI/CD pipeline is being load-tested to destruction by AI-generated churn
    • how to use PAM (Privileged Access Management) as a practical guardrail so AI agents can't escalate to production secrets
    • why allow-list vs. deny-list logic is reversed for AI agents compared to traditional security
    • the overlooked SCA supply-chain risk when AI recommends unmaintained or hallucinated packages
    • why old SAST tools are failing and what the new generation of agentic code scanners does differently
    • how to identify and manage advanced, intermediate, and basic AI users in your org without killing their productivity
    • the practical "crawl, walk, run" approach to building internal AI security tooling that actually sticks
    Igor also shares how Lovable's security team built an incident response AI skill, uses reachability analysis agents to triage SCA findings for enterprise customers, and why the real investment isn't in the AI model, it's in the skills ecosystem and data connections underneath.
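The deny-list point is concrete enough to sketch. Traditional security defaults to deny and enumerates what is allowed; for open-ended AI agents the logic flips, because you cannot enumerate everything a useful agent might legitimately do. Below is a minimal hypothetical guardrail in that spirit; the patterns and function name are illustrative assumptions, not Lovable's actual tooling:

```python
import re

# Illustrative deny list: block the clearly dangerous, allow everything else.
DENY_PATTERNS = [
    r"\bprod(uction)?[-_/]secrets?\b",  # no reaching production secrets
    r"\brm\s+-rf\b",                    # no destructive shell commands
    r"\bDROP\s+TABLE\b",                # no destructive SQL
]

def agent_action_allowed(action: str) -> bool:
    """Default-allow, explicit-deny: reject only actions matching a deny pattern."""
    return not any(re.search(p, action, re.IGNORECASE) for p in DENY_PATTERNS)
```

The point of the reversal: a guardrail like this denies the few catastrophic actions while leaving the agent's long tail of legitimate behaviour unblocked, the opposite of a traditional allow-list rule that would have to predict every valid action in advance.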

    Questions asked:
    (00:00) Introduction: Securing the AI Workforce
    (03:50) Who is Igor Andriushchenko? (Head of Security, Lovable)
    (06:10) The Churn of Change: Why AI Will Break Your CI/CD
    (10:40) The FOMO Problem: Don't Force AI Adoption
    (11:50) The "Air Pocket" Strategy for Safe AI Experimentation
    (14:00) The Context Paradox: More Access = Dumber AI
    (17:40) Managing Agent Sprawl and "Advanced" Users
    (19:40) Why You Must Treat AI Agents Like Human Developers (PAM Controls)
    (22:30) The Need for AI Telemetry & Visibility
    (27:50) Blurring Roles: When PMs Become Developers
    (31:30) Why You Must Use "Deny Lists" Instead of "Allow Lists" for AI
    (34:30) AI SAST vs. Traditional SAST: Finding Business Logic Flaws
    (39:40) Supply Chain Risks: When AI Recommends Dead Libraries
    (45:40) Building Custom AI Skills for Incident Response
    (52:50) Fun Questions: Battlefield, Team Culture, and Comfort Food
  • AI Security Podcast

    Questions Every CISO Must Ask AI Security Vendors

    18.03.2026 | 50 min
    RSA Conference 2026 is here and the AI agent hype machine is louder than ever. In this episode, Ashish and Caleb cut through the noise and arm CISOs, practitioners, and security teams with a clear-eyed view of what's actually happening in AI security this year.
    From the vendor floor at RSAC to the future of internal security automation, Caleb and Ashish speak about why 70% of "AI agent security" vendors can't even define what an agent is, why security team consolidation around 2–3 major platforms (plus internal AI capability) may be the most underrated CISO strategy of 2026, and why the window from vulnerability disclosure to live exploitation has collapsed from months to under two days.
    They also explore the emerging idea of a centralised AI automation function inside security teams and why the future of security isn't buying more point solutions, it's building internal AI capability on top of a standardised vendor stack.

    Questions asked:
    (00:00) Introduction: Preparing for RSAC 2026
    (03:50) The Year of the "AI Agent" Marketing Hype
    (06:50) The Secret to AI Context: Enterprise Search (Glean)
    (09:50) Why Your SOC Needs a Centralized AI Platform Team
    (13:30) The #1 Question to Ask Vendors at RSAC: API Access
    (16:50) The Myth of MCP (Model Context Protocol) as the Gold Standard
    (20:50) Why RSAC is Too Noisy: Vibe Coding & 1,000 New Startups
    (22:30) Is Capital Raised the Only Signal of Trust?
    (24:50) Prediction: CISOs Will Fire 500 Vendors and Consolidate
    (30:50) The Build vs. Buy Debate for AI Security Features
    (35:50) Surviving RSAC: Sorting Signal from Noise
    (38:50) The Problem with "End-to-End" AI Agent Claims
    (41:50) Are AI-Driven Attacks Real?
    (44:50) The Zero-Day Clock: From 5 Months to 2 Days
    (48:50) RSAC Events: Live Recordings and CISO Panels

    Resources spoken about during the episode:
    RSAC 2026
    BSidesSF 2026
    Glean
    Zero Day Clock

More Technology Podcasts

About AI Security Podcast

The #1 source for AI Security insights for CISOs and cybersecurity leaders. Hosted by two former CISOs, the AI Security Podcast provides expert, no-fluff discussions on the security of AI systems and the use of AI in Cybersecurity. Whether you're a CISO, security architect, engineer, or cyber leader, you'll find practical strategies, emerging risk analysis, and real-world implementations without the marketing noise. These conversations are helping cybersecurity leaders make informed decisions and lead with confidence in the age of AI.
Podcast website

Listen to AI Security Podcast, AI FIRST Podcast, and many other podcasts from around the world with the radio.de app

Get the free radio.de app

  • Save stations and podcasts as favorites
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Many more app features

AI Security Podcast: Related Podcasts
