
Scrum Master Toolbox Podcast: Agile storytelling from the trenches

Vasco Duarte, Agile Coach, Certified Scrum Master, Certified Product Owner

408 episodes

  • BONUS: From Individual AI Wins to Team-Wide Transformation With Monica Marquez

    20.2.2026 | 33 Min.
    What happens when the leaders we trust to guide transformation become the bottleneck slowing it down? In this episode, Monica Marquez—with 25+ years in people transformation at Goldman Sachs, Google, and beyond—reveals why the old equation of effort equals success is breaking down, and what leaders must unlearn to thrive in the age of AI.
    The Leadership Crisis Nobody Trained You For
    "No one ever really teaches you what it really takes to be a leader. You know what you do really well, but how do you help other people do that too? That's when I realized it comes down to becoming a really good leader."
     
    Monica's origin story captures a universal struggle: being promoted for technical excellence, then discovering that leading people requires completely different skills. She spent her career at organizations like Goldman Sachs, Bank of America, Ernst & Young, and Google realizing that systems weren't built for everyone—and that the real work of leadership is redesigning those systems to unlock human potential. Today, through her company Flipwork, she helps leaders and teams become what she calls "agentic humans"—people who leverage AI to get ahead rather than getting left behind.
    The Command and Control Trap
    "Most leadership development still rewards the command and control archetype. The person who has all the answers, the decisive hero. But AI moves so fast that when you think you've fixed something, it changes the next day. Leaders are starting to become bottlenecks."
     
    The research shows the problem clearly: middle management is where AI adoption stalls. These leaders cling to command and control because relinquishing it feels like losing their value. Worse, they have an unspoken fear of managing AI agents—they don't want to be liable for outputs they don't fully control. Monica reframes this: treat your AI tools like an artificial intern, not artificial intelligence. You wouldn't take an intern's first draft and hand it to leadership. You train them, provide context, and finesse the output. The same discipline applies to LLMs.
    Rewriting the Success Equation
    "Effort = success is the old equation. That's pre-AI. The new equation is impact equals success. Output equals success, and impact equals worth."
     
    This might be the most important shift leaders need to make. When tasks that took 4 hours now take 30 minutes, deeply conditioned beliefs about work ethic get threatened. Monica sees leaders questioning their worth because they're producing faster. "I was always taught I have to work twice as hard to get half as far," she shares. "Now what used to take me 10 hours, I can get done in 4. Am I not worthy anymore of being a high performer?" The answer is to measure impact, not effort—and that requires rewiring beliefs that may be decades old.
    Why Individual AI Adoption Doesn't Scale
    "Teams are using AI as individual contributors, but they aren't using AI in their actual workflows and the handoffs. That's why leaders are scratching their heads, like, why aren't we seeing the ROI bubble up into the team?"
     
    Here's the gap most organizations miss: individuals save an hour or two per day using AI for personal productivity, but the team never sees compounding benefits. The handoffs between team members remain manual. The friction points persist. Monica's solution is "flip labs"—90-day sprints where teams take one critical workflow, dissect it, and rebuild it with AI. Where can AI handle the $10 tasks so humans can focus on $10,000 decisions? Where should humans remain in the loop? IKEA did this with customer service, retraining displaced workers into design roles. Revenue increased without adding headcount.
    Leading Through Uncertainty
    "We're humans wired for certainty, but Agile is a system designed for uncertainty. That's where the behavioral psychology comes in—how do you help people move forward despite the uncertainty?"
     
    The fundamental challenge is biological: our brains seek certainty, but the only certain thing now is that change will come faster than we can adapt. Monica works with teams to create psychologically safe spaces for experimentation—A/B testing old workflows against AI-augmented ones, measuring outputs, and learning from failures. "Sometimes we learn more from the failures than we do the successes," she notes. The leaders who create permission for testing and learning will pull ahead; those who demand control will become the bottleneck that slows their entire organization.
     
    About Monica Marquez
    Monica Marquez is a leadership and workplace AI advisor with 25+ years in people transformation. She coined the term "returnship" at Goldman Sachs, helped found Google's Product Inclusion Council, and now, through her company Flipwork, Inc., guides leaders and teams to adopt AI, agile, and inclusion practices that drive results.
     
    You can connect with Monica Marquez on LinkedIn and subscribe to her Ay, Ay, Ay! AI newsletter at themonicamarquez.com.
  • BONUS: The Future of Seeing—Why AI Vision Will Transform Medicine and Human Perception With Daniel Sodickson

    19.2.2026 | 37 Min.
    What if the next leap in AI isn't about thinking, but about seeing? In this episode, Daniel Sodickson—physicist, medical imaging pioneer, and author of "The Future of Seeing"—argues we're on the edge of a vision revolution that will change medicine, technology, and even human perception itself.
    From Napkin Sketch to Parallel Imaging
    "I was doodling literally on a napkin in a piano bar in Boston and came up with a way to get multiple lines at once. I ran to my mentor and said, 'Hey, I have this idea, never mind my paper.' And he said, 'Who are you again? Sure, why not.' And it worked."
     
    Daniel's journey into imaging began with a happy accident. While studying why MRI couldn't capture the beating heart fast enough, he realized the fundamental bottleneck: MRI machines scan one line at a time, like old CRT screens. His insight—imaging in parallel to capture multiple lines simultaneously—revolutionized the field. This connection between natural vision (our eyes capture entire scenes at once) and artificial imaging systems set him on a 29-year journey exploring how we can see what was once invisible.
    Upstream AI: Changing What We Measure
    "Most often when we envision AI, we think of it as this downstream process. We generate our data, make our image, then let AI loose instead of our brains. To me, that's limited. Why aren't we thinking of tasks that AI can do that no human could ever do?"
     
    Daniel introduces a crucial distinction between "downstream" and "upstream" AI. Downstream AI takes existing images and interprets them—essentially competing with human experts. Upstream AI changes the game entirely by redesigning what data we gather in the first place. If we know a machine learning system will process the output, we can build cheaper, more accessible sensors. Imagine monitoring devices built into beds or chairs that don't produce perfect images but can detect whether you've changed since your last comprehensive scan. AI fills in the gaps using learned context about how bodies and signals behave.
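    The episode keeps this at the level of vision, but the core mechanism can be sketched: compare a cheap sensor's reading against a personal baseline and escalate to a comprehensive scan only when the deviation is unusually large. Everything below (the signal shape, the values, the threshold) is a hypothetical illustration in Python, not a design Daniel or Function Health has published.

    ```python
    import numpy as np

    def change_score(current: np.ndarray,
                     baseline: np.ndarray,
                     typical_variation: np.ndarray) -> float:
        """Deviation of today's cheap-sensor reading from the person's own
        baseline, normalized by typical healthy variation. A high score
        means "you no longer look like you"."""
        z = (current - baseline) / typical_variation
        return float(np.sqrt(np.mean(z ** 2)))

    # Hypothetical example: a bed sensor summarizes the night into 8 numbers.
    baseline = np.array([1.0, 0.9, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98])
    variation = np.full(8, 0.1)   # assumed healthy spread per channel
    today = baseline.copy()
    today[2] += 0.8               # one channel has drifted noticeably

    if change_score(today, baseline, variation) > 2.0:  # illustrative threshold
        print("Flag for a comprehensive follow-up scan")
    ```

    In a real system the baseline would be a learned model built from your own history and from patterns across millions of people, as Daniel describes, rather than a fixed vector.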
    The Power of Context and Memory
    "The world we see is a lie. Two eyes are not nearly enough to figure out exactly where everything is in space. What the brain is doing is using everything it's learned about the world—how light falls on surfaces, how big people are compared to objects—and filling in what's missing."
     
    Our brains don't passively receive images; they actively construct reality using massive amounts of learned context. Daniel argues we can give imaging machines the same superpower. By training AI on temporal patterns—how healthy bodies change over time, what signals precede disease—we create systems with "memory" that can make sophisticated judgments from incomplete data. Today's signal, combined with your history and learned patterns from millions of others, becomes far more informative than any single pristine image could be.
    From Reactive to Proactive Health
    "I've started to wonder why we use these amazing MRI machines only once we already know you're sick. Why do we use them reactively rather than proactively?"
     
    This question drove Daniel to leave academia after 29 years and join Function Health, a company focused on proactive imaging and testing to catch disease before it develops. The vision: a GPS for your health. By combining regular blood panels, MRI scans, and wearable data, AI can monitor whether you look like yourself or have changed in worrisome ways. The goal isn't replacing expert diagnosis but creating an early warning system that surfaces problems while they're still easily treatable.
    Seeing How We See
    "Sometimes when I'm walking along, everything I'm seeing just fades away. And what I see instead is how I'm seeing. I imagine light bouncing off of things and landing in my eye, this buzz of light zipping around as fast as anything in the universe can go."
     
    After decades studying vision, Daniel experiences the world differently. He finds himself deconstructing his own perception—tracing sight lines, marveling at how we've evolved to turn chaos of sensation into spatially organized information. This meta-awareness extends to his work: every new imaging modality has driven scientific discovery, from telescopes enabling the Copernican Revolution to MRI revealing the living body. We're now at another inflection point where AI doesn't just interpret images but transforms our relationship with perception itself.
     
    In this episode, we refer to An Immense World: How Animal Senses Reveal the Hidden Realms Around Us by Ed Yong on animal perception, and A Path Towards Autonomous Machine Intelligence by Yann LeCun on building AI more like the brain.
     
    About Daniel Sodickson
    Daniel K. Sodickson is a physicist in medicine and chief medical scientist at Function Health. Previously at NYU, and a gold medalist and past president of the International Society for Magnetic Resonance in Medicine, he pioneers AI-driven imaging and is author of The Future of Seeing.
  • AI Assisted Coding: How Spending 4x More on Code Quality Doubled Development Speed With Eduardo Ferro

    18.2.2026 | 32 Min.
    What happens when you combine nearly 30 years of engineering experience with AI-assisted coding? In this episode, Eduardo Ferro shares his experiments showing that AI doesn't replace good practices—it amplifies them. The result: doubled productivity while spending four times more on code quality.
    Vibe Coding vs Production-Grade AI Development
    "Vibe coding is flow-driven, curiosity-based way of building software with AI. It's less about meticulously reviewing each line of code, and more about letting the AI steer the process—perfect for quick experiments, side projects, MVPs, and prototypes."
     
    Edu draws a clear distinction between vibe coding and production AI development. Vibe coding is exploration-focused, where you let AI drive while you learn and discover. Production AI coding is goal-focused, with careful planning, spec definition, and identification of edge cases before implementation. Both use small, safe steps and continuous conversation with the AI, but production code demands architectural thinking, security analysis, and sustainability practices. The key insight is that even vibe coding benefits from engineering discipline—as experiments grow, you need sustainable practices to maintain flexibility.
    How AI Doubled My Productivity
    "I was investing four times more in refactoring, cleanup, deleting code, introducing new tests, improving testability, and security analysis than in generating new features. And at the same time, globally, I think I more or less doubled my pace of work."
     
    Edu's two-month experiment with production code revealed a counterintuitive finding: by spending 4x more time on code quality activities—refactoring, cleanup, test improvement, and security analysis—he actually doubled his overall delivery speed. The secret lies in fast feedback loops. With AI, you can implement a feature, run automated code review, analyze security, prioritize improvements, and iterate—all within an hour. What used to be a day's work happens in a single focused session, and the quality improvements compound over time.
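    As a rough sketch of that loop, scripted around a hypothetical run_agent helper (a stand-in for whatever coding agent you actually use, not a real API), one cycle might look like this:

    ```python
    import subprocess

    def run_agent(prompt: str) -> str:
        """Placeholder for your AI coding agent; how you invoke it
        (CLI, API, IDE integration) depends entirely on your tooling."""
        raise NotImplementedError

    def tests_pass() -> bool:
        return subprocess.run(["pytest", "-q"]).returncode == 0

    def feedback_cycle(branch: str) -> None:
        if not tests_pass():
            raise RuntimeError("start from green: fix failing tests first")
        # Automated review: rank quality and security findings by severity.
        review = run_agent(f"Review the diff on {branch}; rank refactoring, "
                           "cleanup, and security findings by severity.")
        # Apply only the top finding, then verify nothing broke.
        run_agent(f"Apply the top-ranked finding from this review:\n{review}")
        if not tests_pass():
            raise RuntimeError("the improvement broke tests; revert it")
    ```

    Run in a tight loop, each cycle takes minutes rather than a day, which is where the compounding effect Edu describes comes from.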
    The Positive Spiral of Code Removal
    "We removed code, so we removed all the features that were not being used. And whenever I remove this code, the next step is to automatically try to see, okay, can I simplify the architecture."
     
    One of the most powerful practices Edu discovered is using AI to accelerate code removal. By connecting product analytics to identify unused features, then using AI to quickly remove them, you trigger a positive spiral: removing code makes architecture changes easier, easier architecture changes enable faster feature development, which leads to more opportunities for simplification. This creates a self-reinforcing cycle that humans historically have been reluctant to pursue because removal was as expensive as creation.
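    The first step of that spiral can be sketched in a few lines, assuming your product analytics can export a last-used timestamp per feature (the feature names and dates below are invented):

    ```python
    from datetime import datetime, timedelta

    # Invented example data: feature name -> last recorded use.
    last_used = {
        "csv_export": datetime(2025, 3, 1),
        "dark_mode": datetime(2026, 2, 10),
        "legacy_report": datetime(2024, 11, 20),
    }

    def removal_candidates(events: dict[str, datetime],
                           unused_for_days: int = 90) -> list[str]:
        """Features with no recorded use in the window become candidates
        for AI-assisted removal and follow-up architecture simplification."""
        cutoff = datetime.now() - timedelta(days=unused_for_days)
        return sorted(name for name, ts in events.items() if ts < cutoff)

    print(removal_candidates(last_used))  # e.g. ['csv_export', 'legacy_report']
    ```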
    Preparing the System Before Introducing Change
    "What I want to generate is this new functionality—how should I change my system to make it super easy to introduce this one? It's not about making the change, it's about making the change easy."
     
    Edu describes a practice that was previously too expensive: preparing the system before introducing changes. By analyzing architecture decision records, understanding the existing design, and adapting the codebase first, new features become trivial to implement. AI makes this preparation cheap enough to do routinely. The result is systems that evolve cleanly rather than accumulating technical debt with each new feature.
    AI as an Amplifier: The Double-Edged Sword
    "AI is an amplifier. People who already know how to develop software well will continue to develop it well and faster. People who did not know how to develop software well will probably get in trouble much faster than they would otherwise."
     
    Edu's central metaphor is AI as an amplifier—it doesn't replace engineering judgment, it magnifies its presence or absence. Teams with strong practices will see accelerated improvement; teams without them will generate technical debt faster than ever. This has implications beyond individual productivity: the market will be saturated with solutions, making product discovery and distribution channels more important than implementation capability.
     
    In this episode, we refer to Edu's blog post Fast Feedback, Fast Features: My AI Assisted Coding Experiment and Vibe Coding by Gene Kim and Steve Yegge.
     
    About Eduardo Ferro
    Edu Ferro is Head of Engineering and Data Platform at ClarityAI, with nearly 30 years' experience. He helps teams deliver value through Lean, XP, and DevOps, blending technical depth with product thinking. Recently, he has been exploring AI-assisted product development, sharing insights and experiments on his site eferro.net.
     
    You can connect with Edu Ferro on LinkedIn.
  • AI Assisted Coding: Stop Building Features, Start Building Systems with AI With Adam Bilišič

    17.2.2026 | 37 Min.
    What separates vibe coding from truly effective AI-assisted development? In this episode, Adam Bilišič shares his framework for mastering AI-augmented coding, walking through five distinct levels that take developers from basic prompting to building autonomous multi-agent systems.
    Vibe Coding vs AI-Augmented Coding: A Critical Distinction
    "The person who is actually creating the app doesn't have to have in-depth overview or understanding of how the app works in the background. They're essentially a manual tester of their own application, but they don't know how the data structure is, what are the best practices, or the security aspects."
     
    Adam draws a clear line between vibe coding and AI-augmented coding. Vibe coding allows non-developers to create functional applications without understanding the underlying architecture—useful for product owners to create visual prototypes or help clients visualize their ideas. 
    AI-augmented coding, however, is what professional software engineers need to master: using AI tools while maintaining full understanding of the system's architecture, security implications, and best practices. The key difference is that augmented coding lets you delegate repetitive work while retaining deep knowledge of what's happening under the hood.
    From Building Features to Building Systems
    "When you start building systems, instead of thinking 'how can I solve this feature,' you are thinking 'how can I create either a skill, command, sub-agent, or other things which these tools offer, to then do this thing consistently again and again without repetition.'"
     
    The fundamental mindset shift in AI-augmented coding is moving from feature-level thinking to systems-level thinking. Rather than treating each task as a one-off prompt, experienced practitioners capture their thinking process into reusable recipes. This includes documenting how to refactor specific components, creating templates for common patterns, and building skills that encode your decision-making process. The goal is translating your coding practices into something the AI can repeatedly execute for any new feature.
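    One toy way to picture such a recipe (purely illustrative, not Adam's actual setup) is a parameterized prompt template that encodes your decision process once and is reused for every component:

    ```python
    # A reusable refactoring "recipe": the house rules are written once
    # and applied to any component, instead of re-prompted ad hoc.
    REFACTOR_RECIPE = """\
    You are refactoring {component} in our codebase.
    Follow these house rules exactly:
    1. Keep public interfaces unchanged; extract helpers instead.
    2. Add a characterization test before touching any untested branch.
    3. Prefer composition over inheritance for shared behavior.
    Stop and report if any rule cannot be satisfied.
    """

    def build_prompt(component: str) -> str:
        return REFACTOR_RECIPE.format(component=component)

    print(build_prompt("PaymentGateway"))
    ```

    In tools like Claude Code the same idea is expressed as skills, commands, or sub-agent definitions rather than Python; the principle is identical: capture the process, not the one-off answer.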
    Context Management: The Critical Skill For Working With AI
    "People have this tendency to install everything they see on Reddit. They never check what is then loaded within the context just when they open the coding agent. You can check it, and suddenly you see 40 or 50% of your context is taken just by MCPs, and you didn't do anything yet."
     
    One of the most overlooked aspects of AI-assisted coding is context management. Adam reveals that many developers unknowingly fill their context window with MCP (Model Context Protocol) tools they don't need for the current task. The solution is strategic use of sub-agents: when your orchestrator calls a front-end sub-agent, it gets access to Playwright for browser testing, while your backend agent doesn't need that context overhead. Understanding how to allocate context across specialized agents dramatically improves results.
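    A toy sketch of the allocation idea (agent names and toolsets are invented, and real tools such as Claude Code configure this declaratively rather than in Python):

    ```python
    # Each specialized agent gets only the tools its domain needs, so its
    # context window is not consumed by tool definitions it never calls.
    AGENT_TOOLSETS: dict[str, list[str]] = {
        "frontend": ["playwright"],         # browser testing only
        "backend": ["database", "http"],    # no browser tooling loaded
        "tester": ["playwright", "http"],
    }

    def tools_for(agent: str) -> list[str]:
        """Minimal toolset for an agent; unknown agents get nothing."""
        return AGENT_TOOLSETS.get(agent, [])

    assert "playwright" not in tools_for("backend")  # backend stays lean
    ```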
    The Five Levels of AI-Augmented Coding
    "If you didn't catch up or change your opinion in the last 2-3 years, I would say we are getting to the point where it will be kind of last chance to do so, because the technology is evolving so fast."
     
    Adam outlines a progression from beginner to expert:
     
    Level 1 - Master of Prompts: Learning to write effective prompts, but constantly repeating context about architecture and preferences

    Level 2 - Configuration Expert: Using files like .cursorrules or CLAUDE.md to codify rules the agent should always follow

    Level 3 - Context Master: Understanding how to manage context efficiently, using MCPs strategically, creating markdown files for reusable information

    Level 4 - Automation Master: Creating custom commands, skills, and sub-agents to automate repetitive workflows

    Level 5 - The Orchestrator: Building systems where a main orchestrator delegates to specialized sub-agents, each running in their own context window

    The Power of Specialized Sub-Agents
    "The sub-agent runs in his own context window, so it's not polluted by whatever the orchestrator was doing. The orchestrator needs to give him enough information so it can do its work."
     
    At the highest level, developers create virtual teams of specialized agents. The orchestrator understands which sub-agent to call for front-end work, which for backend, and which for testing. Each agent operates in a clean context, focused on its specific domain. When the tester finds issues, it reports back to the orchestrator, which can spin up the appropriate agent to fix problems. This creates a self-correcting development loop that dramatically increases throughput.
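    In pseudocode terms, with call_agent as a hypothetical stand-in for however your tooling spawns a sub-agent in a fresh context window, the self-correcting loop might look like this:

    ```python
    def call_agent(role: str, task: str) -> dict:
        """Placeholder: run the named sub-agent on a task in its own
        clean context and return {"ok": bool, "issues": [...]}."""
        raise NotImplementedError

    def orchestrate(task: str, max_rounds: int = 3) -> bool:
        call_agent("frontend", task)              # build the change
        for _ in range(max_rounds):               # self-correcting loop
            report = call_agent("tester", f"verify: {task}")
            if report["ok"]:
                return True
            for issue in report["issues"]:        # route each fix back to
                call_agent("frontend", f"fix: {issue}")  # a fresh agent
        return False                              # escalate to a human
    ```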
     
    In this episode, we refer to the Claude Code subreddit and IndyDevDan's YouTube channel for learning resources.
     
    About Adam Bilišič
    Adam Bilišič is a former CTO of a Swiss company with over 12 years of professional experience in software development, primarily working with Swiss clients. He is now the CEO of NodeonLabs, where he focuses on building AI-powered solutions and educating companies on how to effectively use AI tools, coding agents, and how to build their own custom agents.
     
    You can connect with Adam Bilišič on LinkedIn and learn more at nodeonlabs.com. Download his free guide on the five levels of AI-augmented coding at nodeonlabs.com/ai-trainings/ai-augmented-coding#free-guide.
  • BONUS: When AI Decisions Go Wrong at Scale—And How to Prevent It With Ran Aroussi

    16.2.2026 | 41 Min.
    We've spent years asking what AI can do. But the next frontier isn't more capability—it's something far less glamorous and far more dangerous if we get it wrong. In this episode, Ran Aroussi shares why observability, transparency, and governance may be the difference between AI that empowers humans and AI that quietly drifts out of alignment.
    The Gap Between Demos and Deployable Systems
    "I've noticed that I watched well-designed agents make perfectly reasonable decisions based on their training, but in a context where the decision was catastrophically wrong. And there was really no way of knowing what had happened until the damage was already there."
     
    Ran's journey from building algorithmic trading systems to creating MUXI, an open framework for production-ready AI agents, revealed a fundamental truth: the skills needed to build impressive AI demos are completely different from those needed to deploy reliable systems at scale. Coming from the AdTech space, where he handled billions of ad impressions daily and over a million concurrent users, Ran brings a perspective shaped by real-world production demands.
    The moment of realization came when he saw that the non-deterministic nature of AI meant that traditional software engineering approaches simply don't apply. While traditional bugs are reproducible, AI systems can produce different results from identical inputs—and that changes everything about how we need to approach deployment.
    Why Leaders Misunderstand Production AI
    "When you chat with ChatGPT, you go there and it pretty much works all the time for you. But when you deploy a system in production, you have users with unimaginable different use cases, different problems, and different ways of phrasing themselves."
     
    The biggest misconception leaders have is assuming that because AI works well in their personal testing, it will work equally well at scale. When you test AI with your own biases and limited imagination for scenarios, you're essentially seeing a curated experience. 
    Real users bring infinite variation: non-native English speakers constructing sentences differently, unexpected use cases, and edge cases no one anticipated. The input space for AI systems is practically infinite because it's language-based, making comprehensive testing impossible.
    Multi-Layered Protection for Production AI
    "You have to put in deterministic filters between the AI and what you get back to the user."
     
    Ran outlines a comprehensive approach to protecting AI systems in production:
     
    Model version locking: Just as you wouldn't randomly upgrade Python versions without testing, lock your AI model versions to ensure consistent behavior

    Guardrails in prompts: Set clear boundaries about what the AI should never do or share

    Deterministic filters: Language firewalls that catch personal information, harmful content, or unexpected outputs before they reach users

    Comprehensive logging: Detailed traces of every decision, tool call, and data flow for debugging and pattern detection

     
    The key insight is that these layers must work together—no single approach provides sufficient protection for production systems.
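    As a minimal sketch of the deterministic-filter layer (the patterns and blocklist below are illustrative placeholders, nowhere near production-complete):

    ```python
    import re

    # A deterministic "language firewall" between the model and the user.
    PII_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN shape
        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
    ]
    BLOCKLIST = ("internal use only", "api_key")

    def filter_output(text: str) -> str:
        lowered = text.lower()
        if any(term in lowered for term in BLOCKLIST):
            return "[response withheld: policy violation]"
        for pattern in PII_PATTERNS:
            text = pattern.sub("[redacted]", text)
        return text

    print(filter_output("Reach me at jane@example.com"))
    # -> "Reach me at [redacted]"
    ```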
    Observability in Agentic Workflows
    "With agentic AI, you have decision-making, task decomposition, tools that it decided to call, and what data to pass to them. So there's a lot of things that you should at least be able to trace back."
     
    Observability for agentic systems is fundamentally different from traditional LLM observability. When a user asks "What do I have to do today?", the system must determine who is asking, which tools are relevant to their role, what their preferences are, and how to format the response. 
    Each user triggers a completely different dynamic workflow. Ran emphasizes the need for multi-layered access to observability data: engineers need full debugging access with appropriate security clearances, while managers need topic-level views without personal information. The goal is building a knowledge graph of interactions that allows pattern detection and continuous improvement.
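    A minimal sketch of one such trace record (the field names are invented, not MUXI's actual schema): engineers would see the full record, while a manager-level view would drop the payload.

    ```python
    import json
    import time
    import uuid

    def trace(step: str, tool: str | None, payload: dict) -> dict:
        """Emit one structured record per agent decision or tool call."""
        record = {
            "trace_id": str(uuid.uuid4()),
            "ts": time.time(),
            "step": step,        # e.g. "task_decomposition", "tool_call"
            "tool": tool,
            "payload": payload,  # redacted for non-engineering roles
        }
        print(json.dumps(record))  # in production: ship to a trace store
        return record

    trace("tool_call", "calendar.list_events", {"user_role": "sales_rep"})
    ```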
    Governance as Human-AI Partnership
    "Governance isn't about control—it's about keeping people in the loop so AI amplifies, not replaces, human judgment."
     
    The most powerful reframing in this conversation is viewing governance not as red tape but as a partnership model. Some actions—like answering support tickets—can be fully automated with occasional human review. Others—like approving million-dollar financial transfers—require human confirmation before execution. The key is designing systems where AI can do the preparation work while humans retain decision authority at critical checkpoints. This mirrors how we build trust with human colleagues: through repeated successful interactions over time, gradually expanding autonomy as confidence grows.
    Building Trust Through Incremental Autonomy
    "Working with AI is like working with a new colleague that will back you up during your vacation. You probably don't know this person for a month. You probably know them for years. The first time you went on vacation, they had 10 calls with you, and then slowly it got to 'I'm only gonna call you if it's really urgent.'"
     
    The path to trusting AI systems mirrors how we build trust with human colleagues. You don't immediately hand over complete control—you start with frequent check-ins, observe performance, and gradually expand autonomy as confidence builds. This means starting with heavy human-in-the-loop interaction and systematically reducing oversight as the system proves reliable. The goal is reaching a state where you can confidently say "you don't have to ask permission before you do X, but I still want to approve every Y."
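    A toy sketch of such a policy (the action names are invented, and a real system would persist and audit every decision); as trust grows, actions migrate from the approval set to the autonomous set:

    ```python
    from typing import Callable

    AUTONOMOUS = {"answer_support_ticket", "draft_reply"}
    NEEDS_APPROVAL = {"transfer_funds", "delete_account"}

    def execute(action: str, perform: Callable[[], None],
                ask_human: Callable[[str], bool]) -> str:
        """Run routine actions unattended; pause sensitive ones for a human."""
        if action in AUTONOMOUS:
            perform()
            return "done"
        if action in NEEDS_APPROVAL:
            if ask_human(f"Approve '{action}'?"):
                perform()
                return "done with approval"
            return "blocked by human"
        return "unknown action: blocked by default"

    print(execute("draft_reply", lambda: None, lambda q: False))  # -> done
    ```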
     
    In this episode, we refer to Thinking in Systems by Donella Meadows, Designing Machine Learning Systems by Chip Huyen, and Build a Large Language Model (From Scratch) by Sebastian Raschka.
     
    About Ran Aroussi
    Ran Aroussi is the founder of MUXI, an open framework for production-ready AI agents. He is also the co-creator of yfinance (with 10 million downloads monthly) and founder of Tradologics and Automaze. Ran is the author of the forthcoming book Production-Grade Agentic AI: From Brittle Workflows to Deployable Autonomous Systems, available at productionaibook.com.
     
    You can connect with Ran Aroussi on LinkedIn.


About Scrum Master Toolbox Podcast: Agile storytelling from the trenches

Every weekday, Certified Scrum Master, Agile Coach, and business consultant Vasco Duarte interviews Scrum Masters and Agile Coaches from all over the world to bring you actionable advice, new tips and tricks, and daily doses of inspiring conversations that help you improve your craft as a Scrum Master. Stay tuned for BONUS episodes, in which we interview Agile gurus and other thought leaders in the business space to bring you the Agile business perspective you need to succeed as a Scrum Master. Some of the topics we discuss include: Agile Business, Agile Strategy, Retrospectives, Team Motivation, Sprint Planning, Daily Scrum, Sprint Review, Backlog Refinement, Scaling Scrum, Lean Startup, Test-Driven Development (TDD), Behavior-Driven Development (BDD), Paper Prototyping, QA in Scrum, the role of agile managers, servant leadership, agile coaching, and more!