
AI Vulnerability Management: Why You Can't Patch a Neural Network
13.1.2026 | 41 min
Traditional vulnerability management is simple: find the flaw, patch it, and verify the fix. But what happens when the "asset" is a neural network that has learned something ethically wrong? In this episode, Sapna Paul (Senior Manager at Dayforce) explains why there are no "Patch Tuesdays" for AI models.

Sapna breaks down the three critical layers of AI vulnerability management: protecting production models, securing the data layer against poisoning, and monitoring model behavior for technically correct but ethically flawed outcomes. We discuss how to update your risk register to speak the language of business, and the essential skills security professionals need to survive in an AI-first world.

The conversation also covers practical ways to use AI within your security team to combat alert fatigue, the importance of explainability tools like SHAP and LIME (sketched in code after the timestamps below), and how to align with frameworks like the NIST AI RMF and the EU AI Act.

Guest Socials - Sapna's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Security, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Sapna Paul?
(02:40) What is Vulnerability Management in the Age of AI?
(05:00) Defining the New Asset: Neural Networks & Models
(07:00) The 3 Layers of AI Vulnerability (Production, Data, Behavior)
(10:20) Updating the Risk Register for AI Business Risks
(13:30) Compliance vs. Innovation: Preventing AI from Going Rogue
(18:20) Using AI to Solve Vulnerability Alert Fatigue
(23:00) Skills Required for Future VM Professionals
(25:40) Measuring AI Adoption in Security Teams
(29:20) Key Frameworks: NIST AI RMF & EU AI Act
(31:30) Tools for AI Security: Counterfit, SHAP, and LIME
(33:30) Where to Start: Learning & Persona-Based Prompts
(38:30) Fun Questions: Painting, Mentoring, and Vegan Ramen
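Since SHAP and LIME come up both in the discussion and the tooling segment, here is a minimal sketch of what model explainability looks like in code. It is not from the episode: it assumes the open-source shap package, an XGBoost classifier, and shap's bundled census-income demo dataset.

```python
import shap
import xgboost

# Train any model; shap's bundled census-income demo data keeps this self-contained.
X, y = shap.datasets.adult()
model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

# shap.Explainer picks an appropriate algorithm (TreeExplainer for XGBoost).
explainer = shap.Explainer(model, X)
shap_values = explainer(X[:100])      # per-feature contributions for 100 predictions

shap.plots.bar(shap_values)           # global view: which features drive the model
shap.plots.waterfall(shap_values[0])  # local view: why this one prediction came out as it did
```

LIME aims at the same question but fits a small local surrogate model around a single prediction rather than computing Shapley-value attributions.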

Why Backups Aren't Enough & Identity Recovery Is Key Against Ransomware
16.12.2025 | 37 min
Think your cloud backups will save you from a ransomware attack? Think again. In this episode, Matt Castriotta (Field CTO at Rubrik) explains why the traditional "I have backups" mindset is dangerous. He distinguishes between Disaster Recovery (business continuity for operational errors) and Cyber Resilience (recovering from a malicious attack where data and identity are untrusted).

Matt speaks about the "dirty secrets" of cloud-native recovery, explaining why S3 versioning and replication are not valid cyber recovery strategies (see the sketch after the timestamps below). The conversation then shifts to the critical, often overlooked aspect of Identity Recovery: if your Active Directory or Entra ID is compromised, it is "ground zero" and you can't access anything. Matt argues that identity must be treated as the new perimeter and backed up just like any other critical data source.

We also explore the impact of AI agents on data integrity: how do you "rewind" an AI agent that hallucinated and corrupted your data? Plus, practical advice on DORA compliance, multi-cloud resiliency, and the "people and process" side of surviving a breach.

Guest Socials - Matt's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions:
(00:00) Introduction
(02:20) Who is Matt Castriotta?
(03:20) Defining Cyber Resilience: The Ability to Say "No" to Ransomware
(05:00) Why "I Have Backups" is Not Enough
(06:45) The Difference Between Disaster Recovery and Cyber Recovery
(10:20) Cloud Native Risks: Versioning and Replication Are Not Backups
(12:50) DORA Compliance: Multi-Cloud Resiliency & Egress Costs
(15:10) The "Shared Responsibility Model" Trap in Cloud
(17:45) Identity is the New Perimeter: Why You Must Back It Up
(22:30) Identity Recovery: Can You Restore Your Active Directory in Minutes?
(25:40) AI and Data: The New "Oil" and "Crown Jewels"
(27:20) Rubrik Agent Cloud: Rewinding AI Agent Actions
(29:40) Top 3 Priorities for a 2026 Resiliency Program
(33:10) Fun Questions: Guitar, Family, and Italian Food
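To make the versioning point concrete, here is a minimal boto3 sketch, not from the episode: it assumes an attacker who has stolen credentials with delete permissions on a hypothetical backup bucket. Versioning only protects against operational mistakes, because the attacker can purge the versions themselves; immutability controls like S3 Object Lock (which must be enabled when the bucket is created) are closer to what cyber recovery requires.

```python
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")
bucket, key = "backup-bucket", "db-dump.sql"  # hypothetical names

# Versioning keeps prior copies of an object, but anyone holding
# s3:DeleteObjectVersion can destroy them. From the attacker's side:
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
for v in versions.get("Versions", []):
    # Deleting a specific VersionId removes it permanently (no delete marker).
    s3.delete_object(Bucket=bucket, Key=key, VersionId=v["VersionId"])

# Object Lock in COMPLIANCE mode refuses those deletes until the retain date,
# even for the account root user, which is the property a cyber-recovery copy needs.
s3.put_object_retention(
    Bucket=bucket,
    Key=key,
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime(2026, 12, 31, tzinfo=timezone.utc),
    },
)
```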

How to Secure Your AI Agents: A CISO's Journey
09.12.2025 | 54 min
Transitioning a mature organization from an API-first model to an AI-first model is no small feat. In this episode, Yash Kosaraju, CISO of Sendbird, shares the story of how they pivoted from a traditional chat API platform to an AI agent platform, and how security had to evolve to keep up.

Yash challenges the industry's obsession with "Zero Trust," arguing instead for a practical "Multi-Layer Trust" approach that assumes controls will fail (sketched in code after the timestamps below). We dive deep into the specific architecture of securing AI agents, including the concept of a "Trust OS," dealing with new incident response definitions (is a wrong AI answer an incident?), and the critical need to secure the bridge between AI agents and customer environments.

This episode is packed with actionable advice for AppSec engineers feeling overwhelmed by the speed of AI. Yash shares how his team embeds security engineers into sprint teams for real-time feedback, the importance of "AI CTFs" for security awareness, and why enabling employees with enterprise-grade AI tools is better than blocking them entirely.

Guest Socials - Yash's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:20) Who is Yash Kosaraju? (CISO at Sendbird)
(03:30) Sendbird's Pivot: From Chat API to AI Agent Platform
(05:00) Balancing Speed and Security in an AI Transition
(06:50) Embedding Security Engineers into AI Sprint Teams
(08:20) Threats in the AI Agent World (Data & Vendor Risks)
(10:50) Blind Spots: "It's Microsoft, so it must be secure"
(12:00) Securing AI Agents vs. AI-Embedded Applications
(13:15) The Risk of Agents Making Changes in Customer Environments
(14:30) Multi-Layer Trust vs. Zero Trust (Marketing vs. Reality)
(17:30) Practical Multi-Layer Security: Device, Browser, Identity, MFA
(18:25) What is "Trust OS"? A Foundation for Responsible AI
(20:45) Balancing Agent Security vs. Endpoint Security
(24:15) AI Incident Response: When an AI Gives a Wrong Answer
(29:20) Security for Platform Engineers: Enabling vs. Blocking
(30:45) Providing Enterprise AI Tools (Gemini, ChatGPT, Cursor) to Employees
(32:45) Building a "Security as Enabler" Culture
(36:15) What Questions to Ask AI Vendors (Paying with Data?)
(39:20) Personal Use of Corporate AI Accounts
(43:30) Using AI to Learn AI (Gemini Conversations)
(45:00) The Stress on AppSec Engineers: "I Don't Know What I'm Doing"
(48:20) The AI CTF: Gamifying Security Training
(50:10) Fun Questions: Outdoors, Team Building, and Indian/Korean Food
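As a rough illustration of the multi-layer idea, here is a minimal Python sketch in which an agent action must independently pass device, browser, identity, and MFA checks. The layer names follow the episode, but the request shape and check functions are hypothetical placeholders, not Sendbird's actual design.

```python
# "Multi-layer trust": an agent action must independently pass every layer,
# on the assumption that any single control can fail.
from dataclasses import dataclass

KNOWN_USERS = {"alice", "bob"}  # stand-in for a real identity provider lookup

@dataclass
class AgentRequest:
    user_id: str
    device_trusted: bool    # e.g. MDM-attested device
    managed_browser: bool   # e.g. enterprise browser profile
    mfa_verified: bool      # fresh MFA for sensitive actions
    action: str

LAYERS = [
    ("device",   lambda r: r.device_trusted),
    ("browser",  lambda r: r.managed_browser),
    ("identity", lambda r: r.user_id in KNOWN_USERS),
    ("mfa",      lambda r: r.mfa_verified),
]

def authorize(req: AgentRequest) -> bool:
    # Evaluate every layer rather than short-circuiting, so a failure in one
    # control never silently extends trust from another.
    failed = [name for name, check in LAYERS if not check(req)]
    if failed:
        print(f"deny '{req.action}': failed layers {failed}")
        return False
    return True

if __name__ == "__main__":
    req = AgentRequest("alice", device_trusted=True, managed_browser=True,
                       mfa_verified=False, action="update-customer-record")
    authorize(req)  # -> deny 'update-customer-record': failed layers ['mfa']
```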

AI-First Vulnerability Management: Should CISOs Build or Buy?
04.12.2025 | 1 hr 1 min
Thinking of building your own AI security tool? In this episode, Santiago Castiñeira, CTO of Maze, breaks down the realities of the "Build vs. Buy" debate for AI-first vulnerability management.

Santiago explains that while building a prototype script is easy, scaling it into a maintainable, audit-proof system is a massive undertaking requiring specialized skills often missing in security teams. He also warns against the "RAG drug": leaning too heavily on Retrieval-Augmented Generation for precise technical data like version numbers, where it often fails.

The conversation gets into the architecture required for a true AI-first system, moving beyond simple chatbots to complex multi-agent workflows that can reason about context and risk. We also cover the critical importance of rigorous "evals" over "vibe checks" to ensure AI reliability (a minimal harness is sketched after the timestamps below), the hidden costs of LLM inference at scale, and why well-crafted agents might soon be indistinguishable from super-intelligence.

Guest Socials - Santiago's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:00) Who is Santiago Castiñeira?
(02:40) What is "AI-First" Vulnerability Management? (Rules vs. Reasoning)
(04:55) The "Build vs. Buy" Debate: Can I Just Use ChatGPT?
(07:30) The "Bus Factor" Risk of Internal Tools
(08:30) Why MCP (Model Context Protocol) Struggles at Scale
(10:15) The Architecture of an AI-First Security System
(13:45) The Problem with "Vibe Checks": Why You Need Proper Evals
(17:20) Where to Start if You Must Build Internally
(19:00) The Hidden Need for Data & Software Engineers in Security Teams
(21:50) Managing Prompt Drift and Consistency
(27:30) The Challenge of Changing LLM Models (Claude vs. Gemini)
(30:20) Rethinking Vulnerability Management Metrics in the AI Era
(33:30) Surprises in AI Agent Behavior: "Let's Get Back on Topic"
(35:30) The Hidden Cost of AI: Token Usage at Scale
(37:15) Multi-Agent Governance: Preventing Rogue Agents
(41:15) The Future: Semi-Autonomous Security Fleets
(45:30) Why RAG Fails for Precise Technical Data (The "RAG Drug")
(47:30) How to Evaluate AI Vendors: Is it AI-First or AI-Sprinkled?
(50:20) Common Architectural Mistakes: Vibe Evals & Cost Ignorance
(56:00) Unpopular Opinion: Well-Crafted Agents vs. Super Intelligence
(58:15) Final Questions: Kids, Argentine Steak, and Closing
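Here is a minimal eval-harness sketch for the "evals over vibe checks" point: score the pipeline against a labeled golden set instead of eyeballing a few outputs. The `triage` function and the second golden-set entry are hypothetical stand-ins, not anything Maze ships; the first entry uses the real xz-utils backdoor CVE for flavor.

```python
# Score an LLM triage pipeline on labeled cases, not vibes.

def triage(finding: dict) -> str:
    # Hypothetical stand-in; in practice this calls your model/agents and
    # returns a disposition like "fix-now" | "backlog" | "ignore".
    return "fix-now" if finding["cve"] == "CVE-2024-3094" else "ignore"

GOLDEN_SET = [
    # Real case: the xz-utils backdoor, which must always come back "fix-now".
    ({"cve": "CVE-2024-3094", "pkg": "xz-utils", "version": "5.6.1"}, "fix-now"),
    # Made-up filler case, just to show the shape of a labeled example.
    ({"cve": "CVE-0000-0000", "pkg": "example-pkg", "version": "1.2.3"}, "ignore"),
]

def run_evals() -> float:
    hits = sum(triage(finding) == label for finding, label in GOLDEN_SET)
    accuracy = hits / len(GOLDEN_SET)
    print(f"triage accuracy: {accuracy:.0%} on {len(GOLDEN_SET)} cases")
    return accuracy  # gate deployments and model swaps on this number

if __name__ == "__main__":
    assert run_evals() == 1.0
```

Grown over time from real tickets, a set like this is also what catches regressions when you swap the underlying model (the Claude vs. Gemini problem discussed at 27:30).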

SIEM vs. Data Lake: Why We Ditched Traditional Logging
02.12.2025 | 46 min
In this episode, Cliff Crosland, CEO & co-founder of Scanner.dev, shares his candid journey of trying (and initially failing) to build an in-house security data lake to replace an expensive traditional SIEM.

Cliff explains the economic breaking point where scaling the SIEM became "more expensive than the entire budget for the engineering team". He details the technical challenges of moving terabytes of logs to S3, and the painful realization that querying them with Amazon Athena was slow and costly for security use cases (the pattern is sketched after the timestamps below).

This episode is a deep dive into the evolution of logging architecture, from SQL-based legacy tools to the modern "messy" data lake that embraces full-text search on unstructured data. We discuss the "data engineering lift" required to build your own, the promise (and limitations) of Amazon Security Lake, and how AI agents are starting to automate detection engineering and schema management.

Guest Socials - Cliff's Linkedin
Podcast Twitter - @CloudSecPod

If you want to watch videos of this LIVE STREAMED episode and past episodes, check out our other Cloud Security social channels:
- Cloud Security Podcast - Youtube
- Cloud Security Newsletter

If you are interested in AI Cybersecurity, you can check out our sister podcast - AI Security Podcast

Questions asked:
(00:00) Introduction
(02:25) Who is Cliff Crosland?
(03:00) Why Teams Are Switching from SIEMs to Data Lakes
(06:00) The "Black Hole" of S3 Logs: Cliff's First Failed Data Lake
(07:30) The Engineering Lift: Do You Need a Data Engineer to Build a Lake?
(11:00) Why Amazon Athena Failed for Security Investigations
(14:20) The Danger of Dropping Logs to Save Costs
(17:00) Misconceptions About Building Your Own Data Lake
(19:00) The Evolution of Logging: From SQL to Full-Text Search
(21:30) Is Amazon Security Lake the Answer? (OCSF & Custom Logs)
(24:40) The Nightmare of Log Normalization & Custom Schemas
(28:00) Why Future Tools Must Embrace "Messy" Logs
(29:55) How AI Agents Are Automating Detection Engineering
(35:45) Using AI to Monitor Schema Changes at Scale
(39:45) Build vs. Buy: Does Your Security Team Need Data Engineers?
(43:15) Fun Questions: Physics Simulations & Pumpkin Pie
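For context, here is a minimal boto3 sketch of the pattern Cliff describes: querying logs that land in S3 with Amazon Athena. It works, but every investigation becomes a fresh paid scan of the underlying data, which is where the latency and cost pain come from. The database, table, and bucket names are hypothetical.

```python
import boto3

athena = boto3.client("athena")

resp = athena.start_query_execution(
    QueryString="""
        SELECT sourceipaddress, eventname, count(*) AS hits
        FROM cloudtrail_logs
        WHERE eventtime > '2025-11-01'
        GROUP BY 1, 2
        ORDER BY hits DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "security_logs"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)

# Athena is asynchronous: poll until the query leaves QUEUED/RUNNING, then
# fetch results. Each run is billed by bytes scanned, so unpartitioned,
# un-compacted logs make ad-hoc investigations both slow and expensive.
status = athena.get_query_execution(QueryExecutionId=resp["QueryExecutionId"])
print(status["QueryExecution"]["Status"]["State"])
```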


