Modern Web

Latest Episode

177 Episodes

  • Modern Web

    Databases at Extreme Scale (PlanetScale CEO Sam Lambert)

    08.01.2026 | 48 min
    In this episode of the Modern Web Podcast, Rob Ocel talks with PlanetScale CEO Sam Lambert about what “database scale” actually looks like in 2026. Sam shares migration stories from companies that moved at the perfect time and others that waited until they were already in trouble, plus why sharding and reliability are never just “magic” if your queries and data model are a mess.
    They also cover PlanetScale’s evolution beyond its MySQL and Vitess roots into Postgres, Metal, and what’s coming next for sharding in the Postgres world. Along the way, they connect the dots to AI workloads, which are increasingly write-heavy and put new pressure on performance, uptime, and security.

    What You'll Learn:
    - How to spot the “right time” to migrate databases before you’re on fire (and what happens when you wait too long)
    - What PlanetScale actually gives you “for free” at scale, and what it can’t fix (bad schema, missing indexes, terrible SQL)
    - Why “auto” database magic is usually a tradeoff, and what to ask for when you want to peek behind the curtain
    - What PlanetScale is becoming beyond MySQL/Vitess, including Postgres, Metal, and Nikky (sharding for Postgres)
    - How AI workloads are changing database patterns, especially the shift toward write-heavy systems and why that pressures reliability and security
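    The point about what scale can’t fix (bad schema, missing indexes) is easy to demonstrate even at toy scale: without an index, every lookup is a full-table scan. A minimal sqlite3 sketch — the table and query here are invented for illustration, not from the episode:

```python
import sqlite3

# Toy table: look up a user by email with and without an index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany(
    "INSERT INTO users (email) VALUES (?)",
    [(f"user{i}@example.com",) for i in range(1000)],
)

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN's detail column reports SCAN (full scan)
    # or SEARCH (index lookup) for the query.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT id FROM users WHERE email = 'user500@example.com'"
before = plan(query)  # full-table scan
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = plan(query)   # index search
print(before)
print(after)
```

    The same query goes from touching every row to touching one index entry — which is why no managed platform can rescue a schema that forces scans.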

    Sam Lambert on Linkedin: https://www.linkedin.com/in/isamlambert/
    Rob Ocel on Linkedin: https://www.linkedin.com/in/robocel/
    This Dot Labs Twitter: https://x.com/ThisDotLabs
    This Dot Media Twitter: https://x.com/ThisDotMedia
    This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
    This Dot Labs Facebook: https://www.facebook.com/thisdot/

    Sponsored by This Dot Labs: https://www.thisdot.co/
  • Modern Web

    500B Spent on AI in 2025. Can You Spot the Value?

    11.12.2025 | 32 min
    Tracy Lee and A.D. Slaton sit down on The Context Window to unpack a wild week in AI, starting with the eye-popping 500 billion dollars spent on AI infrastructure in 2025 and why the Cognizant CEO still says enterprise value is missing. They dig into reports of ChatGPT going “code red” in response to Gemini 3, what that means for OpenAI, and what it means for everyday builders trying to ship real products. Along the way they touch on ByteDance, call out LiveKit as a key piece of infrastructure for voice, video, and physical AI agents, and flag IBM’s move to acquire Confluent as another signal of where data and AI are heading.

    What you will learn:
    - Why 500B spent on AI infrastructure has not translated into clear enterprise value yet
    - What the Cognizant CEO’s comments really signal for teams building AI products
    - How Gemini 3’s launch is shaking up the landscape for ChatGPT and OpenAI
    - What a “Code Red” moment actually means for developers and companies relying on these platforms
    - How LiveKit powers voice, video, and physical AI agents and where it fits in the stack
    - Why IBM acquiring Confluent matters for data, streaming, and real time AI systems
    - How to stay grounded and make practical decisions when AI news makes reality feel unstable
    0:00 Intro
    0:53 Are we overspending on AI infrastructure and where’s the enterprise value
    2:54 Adoption gap, enablement work and why 100% AI generated code is still rare
    6:11 High touch AI training, workshops and scaling AI practices across teams
    8:58 Grok 4.22, AI trading experiments and quant style tools for everyone
    13:51 OpenAI “Code Red,” rising competition and what changes for Agile with agents
    20:37 ByteDance agentic phone, AR glasses and AI moving into the physical world
    23:20 LiveKit, voice cloning, AI podcasts and the problem of AI slop
    27:00 Thinking machines, social media’s role in AI and closing reflections

    Tracy Lee on Linkedin: https://www.linkedin.com/in/tracyslee/
    A.D. Slaton on Linkedin: https://www.linkedin.com/in/adslaton/
    This Dot Labs Twitter: https://x.com/ThisDotLabs
    This Dot Media Twitter: https://x.com/ThisDotMedia
    This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
    This Dot Labs Facebook: https://www.facebook.com/thisdot/
    This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social
    Sponsored by This Dot: https://ai.thisdot.co
  • Modern Web

    How Varlock Fixes .env Vulnerabilities and Secures Your Secrets

    10.12.2025 | 40 min
    Environment variables and secrets are usually a mess: out of sync .env files, scattered API keys, painful onboarding, and brittle CI configs. In this episode of the Modern Web Podcast, Rob Ocel talks with Varlock co-creators Phil Miller and Theo Ephraim about how Varlock turns .env files into a real schema with types, validation, and documentation, pulls secrets from tools like 1Password and other backends, and centralizes configuration across environments and services. They also dig into protecting secrets in an AI-heavy world by redacting them from logs and responses, preventing accidental leaks from agents, and pushing toward an open env-spec standard so configuration becomes predictable, portable, and actually pleasant to work with.

    What you will learn:
    - Why traditional .env files and copy-paste workflows break down as teams, services, and environments grow.
    - How Varlock turns environment variables into a schema with types, validation, documentation, and generated TypeScript.
    - How to pull secrets from tools like 1Password and other backends without leaving them in plain text or scattering them across dashboards.
    - How to manage multiple environments such as development, staging, and production from a single, declarative configuration source.
    - How Varlock helps protect secrets in AI and MCP workflows by redacting them from logs and responses and blocking accidental leaks.
    - What the env-spec standard is and how a common schema format can make configuration more portable across tools, templates, and platforms.
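    Varlock’s actual API isn’t quoted in the episode, so here is only a rough sketch of the underlying “schema for your .env” idea — every name below is hypothetical, not Varlock’s real interface:

```python
import os

# Hypothetical schema: each variable gets a type cast, a required
# flag, and an optional default. Varlock's real format differs.
SCHEMA = {
    "PORT":         {"cast": int, "required": False, "default": 3000},
    "DEBUG":        {"cast": lambda v: v.lower() in ("1", "true"),
                     "required": False, "default": False},
    "DATABASE_URL": {"cast": str, "required": True},
}

def load_env(env=os.environ):
    """Validate variables against SCHEMA, returning typed values or raising."""
    config, missing = {}, []
    for name, spec in SCHEMA.items():
        if name in env:
            config[name] = spec["cast"](env[name])
        elif spec["required"]:
            missing.append(name)
        else:
            config[name] = spec["default"]
    if missing:
        raise ValueError("missing required env vars: " + ", ".join(missing))
    return config

cfg = load_env({"DATABASE_URL": "postgres://localhost/app", "PORT": "8080"})
print(cfg["PORT"] + 1)  # typed as int, so arithmetic works: prints 8081
```

    Even this toy version shows the payoff: a missing required variable fails loudly at startup instead of surfacing as a runtime `None` deep in the app.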

    Theo Ephraim on Linkedin: https://www.linkedin.com/in/theo-ephraim/
    Phil Miller on Linkedin: https://www.linkedin.com/in/themillman/
    Rob Ocel on Linkedin: https://www.linkedin.com/in/robocel/
    This Dot Labs Twitter: https://x.com/ThisDotLabs
    This Dot Media Twitter: https://x.com/ThisDotMedia
    This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
    This Dot Labs Facebook: https://www.facebook.com/thisdot/
    This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social

    Sponsored by This Dot Labs: https://ai.thisdot.co/
  • Modern Web

    The One Mindset That Will 10x Your Dev Career (and Keep You Ahead of AI)

    21.10.2025 | 32 min
    Rob Ocel and Danny Thompson go deep on intentionality, the developer “superpower” that can speed up your growth, sharpen your judgment, and keep you from getting automated away in the AI era. Rob unpacks a simple loop (state intent → act → measure → review) with real stories, including the ticket he challenged on day one that saved a team six figures, and the “it seems to work” anti-pattern that shipped a mystery bug. Together they show how being deliberate before you write a line of code changes everything: scoping tickets, estimating work, documenting decisions, reviewing PRs, and speaking up, even as a junior.

    What you'll learn:
    - The intentionality loop: how to set a hypothesis, capture outcomes, and improve fast
    - The exact moment to ask “Should we even do this ticket?” and how to push back safely
    - Why code is the last step: design notes, edge cases, and review context first
    - Estimation that actually works: start naive, compare to actuals, iterate to ±10%
    - How to avoid DRY misuse, “tragedy of the commons” code reviews, and stealth tech debt
    - Where to keep your working notes (GitHub, Notion, SharePoint) so reviewers can follow your logic
    - How juniors can question assumptions without blocking the room or their career

    Rob Ocel on Linkedin: https://www.linkedin.com/in/robocel/
    Danny Thompson on Linkedin: https://www.linkedin.com/in/dthompsondev/
    This Dot Labs Twitter: https://x.com/ThisDotLabs
    This Dot Media Twitter: https://x.com/ThisDotMedia
    This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
    This Dot Labs Facebook: https://www.facebook.com/thisdot/
    This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social

    Sponsored by This Dot Labs: https://ai.thisdot.co/
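    The estimation advice in this episode — start naive, compare to actuals, iterate toward ±10% — reduces to tracking a simple error ratio per ticket. A sketch with made-up ticket names and hours:

```python
# Track (estimate, actual) hours per ticket and compute the error
# ratio, aiming to tighten it into the +/-10% band over time.
# All ticket names and numbers below are invented for illustration.
tickets = [
    ("login-form",  4.0, 9.0),   # naive early estimate, way off
    ("api-retry",   6.0, 7.5),
    ("cache-layer", 8.0, 8.5),   # later estimates converge
]

for name, estimate, actual in tickets:
    error = (actual - estimate) / estimate
    verdict = "within" if abs(error) <= 0.10 else "outside"
    print(f"{name}: {error:+.0%} ({verdict} +/-10%)")
```

    The habit, not the arithmetic, is the point: writing the estimate down before starting is what makes the review step of the loop possible.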
  • Modern Web

    The Cloud Built AI. Can It Survive What AI Needs Next?

    14.10.2025 | 33 min
    On this episode of the Modern Web Podcast, hosts Rob Ocel and Danny Thompson welcome Miles Ward, CTO of SADA, for an in-depth conversation about the intersection of cloud computing and AI. Miles shares his career journey from early days at AWS and Google Cloud to leading SADA through its acquisition by Insight, offering a rare perspective on the evolution of solutions architecture and cloud adoption at scale.

    The discussion covers the realities of cloud “repatriation,” why GPUs have shifted some workloads back on-prem or to niche “neo-cloud” providers, and how cloud infrastructure remains the backbone of most AI initiatives. Miles breaks down practical concerns for organizations, from token pricing and GPU costs to scaling AI features without blowing budgets. He also highlights how AI adoption exposes weak organizational habits, why good data and strong processes matter more than hype, and how developers should view AI as intelligence augmentation rather than replacement.

    Key Takeaways:
    - Miles Ward, former early AWS Solutions Architect, founder of the SA practice at Google Cloud, and now CTO at SADA (acquired by Insight), brings a deep history in scaling infrastructure and AI workloads.
    - Cloud repatriation is rare. The main exception is GPUs, where companies may rent from “neo-clouds” like CoreWeave, Crusoe, or Lambda, or occasionally use on-prem for cost and latency reasons, though data-center power constraints make this difficult.
    - Cloud remains essential for AI. Successful initiatives depend on cloud primitives like data, orchestration, security, and DevOps. Google’s integrated stack (custom hardware, platforms, and models) streamlines development. The best practice is to build in the cloud first, then optimize or shift GPU inference later if needed.
    - Costs and readiness are critical. Organizations should measure AI by business outcomes rather than lines of code. Token spending needs calculators, guardrails, and model routing strategies. On-prem comes with hidden costs such as power, networking, and staffing. The real bottleneck for most companies is poor data and weak processes, not model quality.

    Miles Ward on Linkedin: https://www.linkedin.com/in/rishabkumar7/
    Rob Ocel on Linkedin: https://www.linkedin.com/in/robocel/
    Danny Thompson on Linkedin: https://www.linkedin.com/in/dthompsondev/
    This Dot Labs Twitter: https://x.com/ThisDotLabs
    This Dot Media Twitter: https://x.com/ThisDotMedia
    This Dot Labs Instagram: https://www.instagram.com/thisdotlabs/
    This Dot Labs Facebook: https://www.facebook.com/thisdot/
    This Dot Labs Bluesky: https://bsky.app/profile/thisdotlabs.bsky.social

    Sponsored by This Dot Labs: https://ai.thisdot.co/
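    The takeaway that token spending needs calculators and model routing is easy to make concrete. The prices, model names, and traffic numbers below are placeholders for illustration, not real provider rates:

```python
# Toy token-cost calculator. Prices are invented, per 1M output tokens.
# Routing most traffic to a cheaper model is the kind of guardrail
# the episode alludes to.
PRICE_PER_M = {"big-model": 15.00, "small-model": 0.60}  # USD / 1M tokens

def monthly_cost(model, requests_per_day, tokens_per_request, days=30):
    tokens = requests_per_day * tokens_per_request * days
    return tokens / 1_000_000 * PRICE_PER_M[model]

# Same total traffic, two strategies: everything on the big model,
# versus routing 90% of requests to the small model.
all_big = monthly_cost("big-model", 10_000, 500)
routed = (monthly_cost("big-model", 1_000, 500)
          + monthly_cost("small-model", 9_000, 500))
print(f"all big: ${all_big:,.2f}  routed: ${routed:,.2f}")
```

    Even with made-up numbers, the shape of the result is the episode’s point: routing strategy, not model choice alone, dominates the monthly bill.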

About Modern Web

The modern web is changing fast. Front-end frameworks evolve quickly, standards are emerging and old ones are fading out of favor. There are a lot of things to learn, but knowing the right thing is more critical than learning them all. Modern Web Podcast is an interview-style show where we learn about modern web development from industry experts. We’re committed to making it easy to digest lots of useful information!