
AI CyberSecurity Podcast

Kaizenteq Team
Latest Episode

Available Episodes

5 of 28
  • AI Red Teaming & Securing Enterprise AI
    As AI systems become more integrated into enterprise operations, understanding how to test their security effectively is paramount. In this episode, we're joined by Leonard Tang, Co-founder and CEO of Haize Labs, to explore how AI red teaming is changing. Leonard discusses the fundamental shifts in red-teaming methodologies brought about by AI, common vulnerabilities he's observing in enterprise AI applications, and the emerging risks associated with multimodal AI (like voice and image processing systems). We delve into the intricacies of achieving precise output control for crafting sophisticated AI exploits, the challenges enterprises face in ensuring AI safety and reliability, and practical mitigation strategies they can implement. Leonard shares his perspective on the future of AI red teaming, including the critical skills cybersecurity professionals will need to develop, the potential for fingerprinting AI models, and the ongoing discussion around protocols like MCP.
    Questions asked:
    (00:00) Intro: AI Red Teaming's Evolution
    (01:50) Leonard Tang: Haize Labs & AI Expertise
    (05:06) AI vs. Traditional Red Teaming (Enterprise View)
    (06:18) AI Quality Assurance: The Haize Labs Perspective
    (08:50) AI Red Teaming: Real-World Application Examples
    (10:43) Major AI Risk: Multimodal Vulnerabilities Explained
    (11:50) AI Exploit Example: Voice Injections via Background Noise
    (15:41) AI Vulnerabilities & Early XSS: A Cybersecurity Analogy
    (20:10) Expert AI Hacking: Precisely Controlling AI Output for Exploits
    (21:45) The AI Fingerprinting Challenge: Identifying Chained Models
    (25:48) Fingerprinting LLMs: The Reality & Detection Difficulty
    (29:50) Top Enterprise AI Security Concerns: Reputation & Policy
    (34:08) Enterprise AI: Model Choices (Frontier Labs vs. Open Source)
    (34:55) Future of LLMs: Specialized Models & "Hot Swap" AI
    (37:43) MCP for AI: Enterprise Ready or Still Too Early?
    (44:50) AI Security: Mitigation with Precise Input/Output Classifiers
    (49:50) Future Skills for AI Red Teamers: Discrete Optimization
    Resources discussed during the episode:
    - Baselines for Watermarking Large Language Models
    - Haize Labs
    --------  
    53:23
  • RSA Conference 2025 Recap: Agentic AI Hype, MCP Risks & Cybersecurity's Future
    Caleb and Ashish cut through the Agentic AI hype, expose real MCP (Model Context Protocol) risks, and discuss the future of AI in cybersecurity. If you're trying to understand what really happened at RSA and what it means for the industry, you'll want to hear this.
    In this episode, Caleb Sima and Ashish Rajan dissect the biggest themes from RSA, including:
    - Agentic AI Unpacked: What is Agentic AI really, beyond the marketing buzz?
    - MCP & A2A Deployment Dangers: MCPs are exploding, but how do you deploy them safely across an enterprise without slowing down business?
    - AI & Identity/Access Management: The complexities AI introduces to identity, authenticity, and authorization.
    - RSA Innovation Sandbox Insights
    - Getting Noticed at RSA: What marketing strategies actually work to capture attention from CISOs and executives at a massive conference like RSA?
    - The Current State of AI Security Knowledge
    Questions asked:
    (00:00) Introduction
    (02:44) RSA's Big Theme: The Rise of Agentic AI
    (09:07) Defining Agentic AI: Beyond Basic Automation
    (12:56) AI Agents vs. API Calls: Clarifying the Confusion
    (17:54) AI Terms Explained: Inference vs. User Inference
    (21:18) MCP Deployment Dangers: Identifying Real Enterprise Risks
    (25:59) Managing MCP Risk: Practical Steps for CISOs
    (29:13) MCP Architecture: Understanding Server vs. Client Risks
    (32:18) AI's Impact on Browser Security: The New OS?
    (36:03) AI & Access Management: The Identity & Authorization Challenge
    (47:48) RSA Innovation Sandbox 2025: Top Startups & Winner Insights
    (51:40) Marketing That Cuts Through: How to REALLY Get Noticed at RSA
    --------  
    1:03:25
  • MCP vs A2A Explained: AI Agent Communication Protocols & Security Risks
    Dive deep into the world of AI agent communication in this episode. Join hosts Caleb Sima and Ashish Rajan as they break down the crucial protocols enabling AI agents to interact and perform tasks: Model Context Protocol (MCP) and Agent-to-Agent (A2A).
    Discover what MCP and A2A are, why they're essential for unlocking AI's potential beyond simple chatbots, and how they allow AI to gain "hands and feet" to interact with systems like your desktop, browsers, or enterprise tools like Jira. The hosts explore practical use cases, the underlying technical architecture involving clients and servers, and the significant security implications, including remote execution risks, authentication challenges, and the need for robust authorization and privilege management.
    The discussion also covers Google's entry with the A2A protocol, comparing and contrasting it with Anthropic's MCP, and debating whether they are complementary or competing standards. Learn about the potential "AI-ification" of services, the likely emergence of MCP firewalls, and predictions for the future of AI interaction, such as AI DNS.
    If you're working with AI, managing cybersecurity in the age of AI, or simply curious about how AI agents communicate and the associated security considerations, this episode provides critical insights and context.
    Questions asked:
    (00:00) Introduction: AI Agents & Communication Protocols
    (02:06) What is MCP (Model Context Protocol)? Defining AI Agent Communication
    (05:54) MCP & Agentic Workflows: Enabling AI Actions & Use Cases
    (09:14) Why MCP Matters: Use Cases & The Need for AI Integration
    (14:27) MCP Security Risks: Remote Execution, Authentication & Vulnerabilities
    (19:01) Google's A2A vs Anthropic's MCP: Protocol Comparison & Debate
    (31:37) Future-Proofing Security: MCP & A2A Impact on Security Roadmaps
    (38:00) MCP vs A2A: Predicting the Dominant AI Protocol
    (44:36) The Future of AI Communication: MCP Firewalls, AI DNS & Beyond
    (47:45) Real-World MCP/A2A: Adoption Hurdles & Practical Examples
    --------  
    54:21
  • How to Hack AI Applications: Real-World Bug Bounty Insights
    In this episode, we sit down with Joseph Thacker, a bug bounty hunter and AI security researcher, to uncover the evolving threat landscape of AI-powered applications and agents. Joseph shares battle-tested insights from real-world AI bug bounty programs, breaks down why AI AppSec is different from traditional AppSec, and reveals common vulnerabilities most companies miss, like markdown image exfiltration, XSS from LLM responses, and CSRF in chatbots.
    He also discusses the rise of AI-driven pentesting agents ("hack bots"), their current limitations, and how augmented human hackers will likely outperform them, at least for now. If you're wondering whether AI can really secure or attack itself, or how AI is quietly reshaping the bug bounty and AppSec landscape, this episode is a must-listen.
    Questions asked:
    (00:00) Introduction
    (02:14) A bit about Joseph
    (03:57) What is AI AppSec?
    (05:11) Components of AI AppSec
    (08:20) Bug Bounty for AI Systems
    (10:48) Common AI security issues
    (15:09) How will AI change pentesting?
    (20:23) How is the attacker landscape changing?
    (22:33) Where would automation add the most value?
    (27:03) Is code being deployed less securely?
    (32:56) AI Red Teaming
    (39:21) MCP Security
    (42:13) Evolution of pentest with AI
    Resources shared during the interview:
    - How to Hack AI Agents and Applications
    - Critical Thinking Bug Bounty Podcast - The Rise of AI Hackbots
    - Shift - Caido Plugin
    - Shadow Repeater
    - Nuclei
    - Haize Labs
    - White Circle AI
    - Prompt Injection Primer for Engineers
    --------  
    50:29
  • The Future of Digital Identity: Fighting AI Deepfakes & Identity Fraud
    Can you prove you’re actually human? In a world of AI deepfakes, synthetic identities, and evolving cybersecurity threats, digital identity is more critical than ever. With AI-generated voices, fake videos, and evolving fraud tactics, the way we authenticate ourselves online is rapidly changing. So, what’s the future of digital identity? And how can you protect yourself in this new era?
    In this episode, hosts Caleb Sima and Ashish Rajan are joined by Adrian Ludwig, CISO at Tools For Humanity (the World ID project), former Chief Trust Officer at Atlassian, and ex-Google security lead for Android. Together, they explore:
    - Why digital identity is fundamentally broken and needs a major reboot
    - The rise of AI-powered identity fraud and how it threatens security
    - How World ID is using blockchain and biometrics to verify real humans
    - The debate: Should we trust governments, companies, or decentralized systems with our identity?
    - The impact of GenAI & deepfakes on authentication and online trust
    Questions asked:
    (00:00) Introduction
    (03:55) Digital Identity in 2025
    (14:13) How has AI impacted Identity?
    (29:33) Trust and Transparency with AI
    (32:18) Authentication and Identity
    (49:53) What can people do today?
    (52:05) Where can people learn about World Foundation?
    (53:49) Adoption of new identity protocols
    Resources spoken about during the episode:
    - Tools for Humanity
    - World.org
    --------  
    57:29


About AI CyberSecurity Podcast

AI Cybersecurity simplified for CISOs and CyberSecurity Professionals.
Podcast website
