
Cables2Clouds

Latest Episode

123 episodes

  • Cables2Clouds

    Can You Fly With Glass Wings? - Monthly News Update (with a Surprise)

22.04.2026 | 42 min.
    Send us Fan Mail
“Too dangerous to release” is a bold claim in cybersecurity, so we treat it like any other security headline: we interrogate it. We kick off our monthly news round-up by welcoming Katherine McNamara as a permanent co-host, then dig into Anthropic’s Mythos preview model and Project Glasswing, positioned as an AI security and threat intelligence leap that can allegedly find zero-day vulnerabilities at a level the public shouldn’t have yet. We ask the uncomfortable questions: where’s the independent evidence, what does high-fidelity vulnerability discovery actually look like, and how do we avoid drowning in AI-generated noise?

    From there, the discussion gets messier in the way real security always is. We talk about tokens, paid code security reviews, and how incentives change when AI companies chase growth, IPO pressure, and government contracts. We also unpack why “ethical” restrictions are hard to enforce in practice and how rumors of source code leaks and fast rewrites complicate any promise of controlled access. If powerful agencies can use AI to speed up exploit discovery, even lower-severity bugs can become dangerous when chained into real attacks.

    Then we pivot to a concrete lesson every org can use: the Vercel breach. A supply chain compromise plus a single OAuth “Allow All” moment shows how identity and SaaS permissions failures can open the door to data exfiltration. We break down least privilege, blocking risky OAuth grants, shadow SaaS, and why a CASB can be the difference between a contained incident and a headline.
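The least-privilege point above can be sketched in a few lines: a deny-by-default review of a third-party OAuth grant’s requested scopes, instead of a blanket “Allow All”. This is an illustrative sketch, not any vendor’s or CASB’s API; the scope names and the `review_grant` helper are hypothetical.

```python
# Hypothetical OAuth grant review: deny-by-default scope checking.
# ALLOWED_SCOPES, RISKY_SCOPES, and the scope strings are illustrative.

ALLOWED_SCOPES = {"repo:read", "profile:read"}

# Scopes that should block a grant even if requested alongside safe ones.
RISKY_SCOPES = {"repo:write", "org:admin", "user:email:write"}

def review_grant(requested_scopes):
    """Return (decision, offending_scopes) for a third-party OAuth grant."""
    requested = set(requested_scopes)
    risky = requested & RISKY_SCOPES
    unknown = requested - ALLOWED_SCOPES - RISKY_SCOPES
    if risky:
        return "deny", sorted(risky)
    if unknown:
        # Anything not explicitly allowed goes to manual review, never auto-approve.
        return "review", sorted(unknown)
    return "allow", []

print(review_grant(["repo:read"]))               # explicitly allowed
print(review_grant(["repo:read", "org:admin"]))  # one risky scope denies the grant
```

The design choice worth noticing: the safe outcome for an *unknown* scope is “review”, not “allow” — exactly the opposite of the one-click “Allow All” moment described above.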

    We close by connecting AI layoffs to social and economic pressure, including CEO security fears, surprising UBI rhetoric, and Oracle laying off 30,000 people by email. If you care about AI, cloud security, appsec, and what these incentives are doing to the world, this one’s for you. Subscribe, share the episode with a friend, and leave a review with your take: is the AI security boom helping defenders more than attackers?
    Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/
    Check out the Monthly Cloud Networking News
    https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/

    Visit our website and subscribe: https://www.cables2clouds.com/
    Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
    Follow us on YouTube: https://www.youtube.com/@cables2clouds/
    Follow us on TikTok: https://www.tiktok.com/@cables2clouds
    Merch Store: https://store.cables2clouds.com/
    Join the Discord Study group: https://artofneteng.com/iaatj
  • Cables2Clouds

    What is Spec-Driven Development?

08.04.2026 | 38 min.
    Send us Fan Mail
    Your AI can write code fast, but it can also wander fast. That’s why we sat down with Jason Belk from Learning at Cisco to unpack spec-driven development, a simple idea with huge impact: write the rules and requirements first, then let your coding agent execute with far fewer surprises.

    We talk through what “agentic coding” looks like in practice with Claude Code, including the trust and permission model of a local AI agent that can create files, run bash commands, and iterate on a real project. Jason explains how GitHub Spec Kit turns plain markdown and scripts into a repeatable workflow: start with a constitution that defines governing principles, then cycle feature by feature through specify, plan, tasks, and implement. Along the way we cover common gotchas like initializing in the wrong directory so skills never load, plus practical tips like using voice-to-text to improve prompts and choosing the right model tier when implementation quality matters.

    We also zoom out to the bigger picture: why context windows break long builds, how keeping plans on disk helps the agent “re-ground” itself, and where the industry may be heading with small specialized models versus one giant general LLM. Jason shares learning resources too, including a Cisco U tutorial that frames spec-driven development for network engineers, and the Cisco AI Technical Practitioner course and certification, plus upcoming Cisco Live sessions.

    Subscribe for more real-world AI workflows, share this with a teammate who keeps fighting prompt drift, and leave a review with the one automation project you want an agent to build next.

    Connect with Jason:
    https://linktr.ee/renobelk
    https://github.com/jabelk/claude-speckit-template

    SDD Relevant Material mentioned in this episode:
    https://sdd.goecke.io/
    https://substack.com/home/post/p-189415335
https://ondemandelearning.cisco.com/apollo-alpha/tc-ai-spec-driven-dev/pages/1
https://learningnetwork.cisco.com/s/aitech-exam-topics
    https://u.cisco.com/paths/cisco-ai-technical-practitioner-20806
  • Cables2Clouds

    Please Don’t Dump Data Center Soup - Monthly News Update

25.03.2026 | 32 min.
    Send us Fan Mail
    AI is everywhere right now, but the numbers and the real-world trade-offs don’t always match the hype. We dig into a headline that AI added basically nothing to US GDP growth last year, even after billions in spending from the biggest names in tech. That launches a bigger question we can’t ignore: is the AI boom creating durable productivity, or mostly moving money around the same handful of companies that sell GPUs, cloud capacity, and data center hardware?

    From there, we get into the messy incentive layer of AI safety and AI regulation. We talk about Anthropic’s shifting safety stance and why “we meant well but competition changed” is becoming a familiar pattern across the AI industry. If guardrails depend on goodwill, what happens when the market punishes anyone who slows down? And if we keep pushing responsibility onto “developers,” are vendors dodging accountability for the defaults they ship?

    We also zoom out to the physical footprint of AI infrastructure: energy demand, strained grids, and the environmental impact questions that show up when states consider options like data center wastewater discharge. Then we hit the human side of “AI efficiency,” including layoffs framed as automation wins, and we end with privacy concerns around Meta Ray-Ban smart glasses and footage that may capture far more than people expect.

    What headline worries you most right now: jobs, safety, the environment, or privacy?
  • Cables2Clouds

    An Honest Conversation About AI Security

11.03.2026 | 52 min.
    Send us Fan Mail
    Ready for a reality check on AI security? We invited Cisco cybersecurity expert Katherine McNamara to dig into where large language models actually break: from prompt injection and over-permissioned plugins to reckless “vibe-coded” apps that leak IDs, photos, and entire backends. The stories are real, the stakes are high, and the fixes are concrete. We trace how AI sprawl mirrors the worst of early IoT—weak defaults, poor isolation, and a stampede to integrate models into billing, HR, and support without guardrails—only this time the blast radius includes your customer data and your legal exposure.

    We talk through the human factor first. Written policies won’t stop someone from pasting a pen test report into a public chatbot. DLP helps, but hybrid work and BYOD stretch defenses thin. Then we move to the core threat model: public and private models are targets; datasets can be poisoned; plugins often ship with admin-level scopes; and a clever prompt can trick an LLM into disclosing chat histories, creating new accounts, or modifying orders. Courts have already treated chatbots as company representatives, binding businesses to their outputs—another reason to treat every integration like an untrusted user with strict least privilege.

    It’s not all doom. Used well, AI gives security operations superpowers: correlating signals across dozens of tools, reducing alert fatigue, and surfacing lateral movement. The path forward is discipline, not denial. Fence models on the network. Prefer read-only to write. Gate plugins behind narrowly scoped APIs. Vet datasets for backdoors. Red-team prompts as seriously as you pen test code. And educate stakeholders with live demos so they see why these controls matter. We also unpack the shaky economics—GPU costs, rising consumer fatigue, hype-fueled projects with little ROI—and why that pressure can erode privacy if teams aren’t vigilant.
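The “prefer read-only to write” and “gate plugins behind narrowly scoped APIs” advice can be made concrete with a minimal dispatcher sketch. It assumes nothing about any real plugin framework — tool names, tags, and the `call_tool` helper are all hypothetical — but it shows the principle: each tool declares the access it needs, and the grant, not the prompt, decides what runs.

```python
# Sketch of least-privilege tool gating: every tool is tagged with the access
# it needs, and the dispatcher refuses write-capable tools unless the
# integration was explicitly granted them. All names are hypothetical.

TOOLS = {
    "get_order":      {"access": "read"},
    "update_order":   {"access": "write"},
    "create_account": {"access": "write"},
}

def call_tool(name, granted_access=("read",)):
    tool = TOOLS.get(name)
    if tool is None:
        raise KeyError(f"unknown tool: {name}")
    if tool["access"] not in granted_access:
        # Deny by default: a clever prompt cannot escalate past the grant.
        return {"ok": False, "error": f"{name} requires {tool['access']} access"}
    return {"ok": True, "tool": name}

print(call_tool("get_order"))     # read is in the default grant
print(call_tool("update_order"))  # refused: write was never granted
```

This is the “treat every integration like an untrusted user” stance from above, in code: the model can ask for anything, but only explicitly granted capabilities execute.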

    If you’re building with LLMs or trying to rein them in, this conversation gives you a practical map: what to allow, what to block, and how to make AI useful without turning your stack into an attack surface. Subscribe, share with a teammate who ships integrations, and drop a review with the one guardrail you’ll implement this quarter.

    Connect with our Guest:
    https://x.com/kmcnam1
    https://www.linkedin.com/in/katherinermcnamara/
  • Cables2Clouds

    When AI Deletes Production: Guardrails, MCP Risks, And The Surveillance Creep

25.02.2026 | 42 min.
    Send us Fan Mail
    What happens when an AI agent decides the “best” fix is to delete production? We unpack the AWS outage tied to an over‑permitted agent and zoom out to a bigger pattern: systems built for maximum utility and minimum restraint. From MCP’s connective promise to its post‑auth sprawl, we break down how agent toolchains turn small mistakes into big blast radii—and how to fix that with real guardrails, least privilege, and human‑in‑the‑loop at destructive boundaries.
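The “human-in-the-loop at destructive boundaries” guardrail can be sketched as a tiny dispatch gate. This assumes a made-up verb-prefix naming convention for agent actions; nothing here reflects any real agent toolchain or the AWS incident’s actual tooling.

```python
# Sketch of human-in-the-loop at destructive boundaries: the agent can run
# read/describe actions freely, but anything with a destructive verb is
# queued for a human instead of executed. Action names are illustrative.

DESTRUCTIVE_VERBS = {"delete", "terminate", "drop"}

def dispatch(action, target, approved=False):
    verb = action.split("_")[0]
    if verb in DESTRUCTIVE_VERBS and not approved:
        # Do not execute; hand back a pending-approval record instead.
        return {"status": "pending_approval", "action": action, "target": target}
    return {"status": "executed", "action": action, "target": target}

print(dispatch("describe_instances", "prod"))           # runs freely
print(dispatch("delete_stack", "prod"))                 # blocked until a human approves
print(dispatch("delete_stack", "prod", approved=True))  # runs only after approval
```

The point of the sketch is the default: the destructive path is unreachable without an explicit human decision, which is what turns a small agent mistake into a paused action rather than a deleted production environment.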

    The conversation widens to public deployments where abstractions fail loudly. A military nutrition assistant built on Grok reportedly ran with minimal safety constraints and instantly entertained unsafe prompts. That’s not a funny glitch; it’s a policy failure. We talk about what genuine safety layers look like in high‑stakes settings: capability firewalls, explicit refusal policies, robust logging, and escalation paths for sensitive actions. Ethics, compliance, and operational discipline are not speed bumps; they are the steering wheel.

    Privacy takes center stage with a Ring twist: footage stored in the cloud despite no subscription. Helpful for a kidnapping investigation, yes—but also a wake‑up call for anyone who assumed “local” meant private. We offer practical steps for home security that actually secures the home: VLAN segmentation, strict egress controls, and device choices that still function offline. Then we turn to Discord’s plan to gate “mature” spaces behind global face and ID checks via Persona, the security research that raised red flags, and how user pressure pushed a rollback. If regulation demands verification, the right answer is minimal disclosure, not maximal identity.

    We close with a rare combo: a zero‑day disclosure delivered as a catchy music video calling out Malwarebytes for hard‑coded creds and privilege issues—followed by a commendable vendor response. It’s a model for the culture we want: researchers spotlighting flaws, companies fixing fast, and users gaining safer software. Throughout, we keep returning to one principle that ties AI, identity, and devices together: trust is a permission. Design for refusal, constrain by default, and say clearly what your systems must never do.

    If this resonates, follow the show, share it with a friend, and leave a quick review—what guardrail would you never ship without?


About Cables2Clouds

Join Chris and Tim as they delve into the Cloud Networking world! The goal of this podcast is to help Network Engineers with their Cloud journey. Follow us on Twitter @Cables2Clouds | Co-Hosts Twitter Handles: Chris - @bgp_mane | Tim - @juangolbez
Podcast website
