
AI Engineering Podcast

Tobias Macey
Latest Episode

79 Episodes

  • AI Engineering Podcast

    Kubernetes, Compliance, and Control: The Operational Backbone of AI Sovereignty

    25.02.2026 | 1 Std. 1 Min.
    Summary
    In this episode of the AI Engineering Podcast, Stephen Watt, leader of the Office of the CTO at Red Hat, discusses practical paths to achieving AI sovereignty for organizations. He shares his two decades of experience in AI, highlighting how governments are building GPU platforms and protected data hubs to maintain control over AI workloads. Stephen emphasizes why self-managed infrastructure is becoming a strategic necessity as companies outgrow cloud costs and require tighter control over models, data, and compliance. The conversation explores the operational substrate for AI sovereignty, including Kubernetes as the scale-out backbone for LLM serving, bridging the gap with PyTorch ecosystems, observability and policy for non-deterministic systems, and emerging security needs such as confidential inference and agentic identity. They also discuss model and hardware optionality (GPUs, CPUs, and new accelerators), the growing demand for energy-efficient inference, and the importance of open models and post-training in creating durable differentiation. Stephen identifies limited access to GPUs as the biggest gap hindering sovereign AI adoption today, emphasizing that broad GPU availability is needed for AI workloads to thrive. The conversation also touches on evolving architectures beyond transformers, the interplay between AI and data sovereignty, consolidation pressures from pilot chaos to standardized platforms, and the societal triad of universities, startups, and sovereign infrastructure.
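    As a concrete illustration of the LLM serving layer discussed in this episode, the sketch below shows minimal vLLM inference in Python. It is illustrative only: the model name and sampling settings are placeholder assumptions, not a depiction of Red Hat's stack.

    ```python
    # Minimal vLLM inference sketch (illustrative; the model name and sampling
    # settings are placeholder assumptions, not Red Hat's configuration).
    from vllm import LLM, SamplingParams

    # Load an open-weight model onto locally controlled GPUs. In a sovereign
    # deployment this would typically run inside a Kubernetes pod that requests
    # GPU resources from the cluster scheduler.
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
    sampling = SamplingParams(temperature=0.7, max_tokens=256)

    outputs = llm.generate(
        ["Summarize the case for self-managed AI infrastructure."], sampling
    )
    for output in outputs:
        print(output.outputs[0].text)
    ```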

    Announcements
    Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    Unlock the full potential of your AI workloads with a seamless and composable data infrastructure. Bruin is an open source framework that streamlines integration from the command line, allowing you to focus on what matters most - building intelligent systems. Write Python code for your business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. With native support for ML/AI workloads, Bruin empowers data teams to deliver faster, more reliable, and scalable AI solutions. Harness Bruin's connectors for hundreds of platforms, including popular machine learning frameworks like TensorFlow and PyTorch. Build end-to-end AI workflows that integrate seamlessly with your existing tech stack. Join the ranks of forward-thinking organizations that are revolutionizing their data engineering with Bruin. Get started today at aiengineeringpodcast.com/bruin, and for dbt Cloud customers, enjoy a $1,000 credit to migrate to Bruin Cloud.
    Your host is Tobias Macey and today I'm interviewing Stephen Watt about how to adapt your existing infrastructure investments to support your AI workloads and gain "AI Sovereignty"

    Interview

    Introduction
    How did you get involved in machine learning?
    Can you describe what you mean by the term "AI sovereignty"?
    What are the motivating factors for investing in that as an organizational capability?
    What do you see as the scale, sophistication, or regulatory triggers that tip someone from buying off-the-shelf AI services into operating their own AI stack?
    There has been substantial investment in MLOps toolchains and patterns over the past decade, along with corresponding evolution of LLMOps techniques. What do you see as the areas of overlap between those technology patterns and the "traditional" infrastructure capabilities that organizations have matured over the past ~20 years?
    What are the aspects that are disjoint and contribute to operational pain for DevOps/platform teams?
    How do AI/agentic workloads strain the ability of existing security and governance frameworks that teams are operating for existing cloud-native workloads?
    What are the options for extending those frameworks and what are the requirements that force a new approach? (e.g. guardrails, LLM interpretability, etc.)
    What are the elements of cloud-native architecture that have left us (as an industry) well situated to absorb the complexity of AI/agentic workloads?
    How does the complexity shift as you go along the continuum of model training to finetuning to inference?
    Beyond the ability to host a model and execute inference, it is the surrounding data stores and tool availability that make generative AI a competitive advantage. How much of that (e.g. agentic memory, vector stores, MCP/A2A tools, etc.) is actually net new vs. a new coat of paint on existing techniques?
    What are the most interesting, innovative, or unexpected ways that you have seen teams operationalizing AI workloads on their infrastructure?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on empowering organizations to achieve AI sovereignty?
    When is operating your own AI infrastructure the wrong choice?
    What are your predictions for the future evolution of operational substrates for AI workloads?

    Contact Info

    LinkedIn

    Parting Question

    From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements

    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links

    Red Hat
    Bayesian Classifier
    Hadoop
    HBase
    DeepSeek
    Reflection AI
    Nvidia Blackwell
    vLLM
    IBM Spyre
    vLLM CPU
    IBM Watson on Jeopardy
    Neuromorphic Computing
    Kubernetes
    PyTorch Foundation
    MLOps
    LLMOps
    Semantic Router
    BERT
    AGNTCY
    OPA == Open Policy Agent
    CEDAR
    WSDL == Web Services Description Language
    UDDI
    Spark
    llama.cpp
    Ollama
    ARPA-H

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
  • AI Engineering Podcast

    From Blind Spots to Observability: Operationalizing LLM Apps with OpenLit

    15.02.2026 | 50 Min.
    Summary
    In this episode of the AI Engineering Podcast, Aman Agarwal, creator of OpenLit, discusses the operational foundations required to run LLM-powered applications in production. He highlights common early blind spots teams face, including opaque model behavior, runaway token costs, and brittle prompt management, emphasizing that strong observability and cost tracking must be established before an MVP ships. Aman explains how OpenLit leverages OpenTelemetry for vendor-neutral tracing across models, tools, and data stores, and introduces features such as prompt and secret management with versioning, evaluation workflows (including LLM-as-a-judge), and fleet management for OpenTelemetry collectors. The conversation covers experimentation patterns, strategies to avoid vendor lock-in, and how detailed stepwise traces reshape system design and debugging. Aman also shares recent advancements like a Kubernetes operator for zero-code instrumentation, multi-database configurations for environment isolation, and integrations with platforms such as Grafana and Dash0. They conclude by discussing lessons learned from building in the open, prioritizing reliability, developer experience, and data security, and preview future work on context management and closing the loop from experimentation to prompt/dataset improvements.
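    OpenLit builds its tracing on OpenTelemetry. As a generic sketch of that underlying pattern (not OpenLit's own API), wrapping an LLM call in an OpenTelemetry span with token-usage attributes might look like the following; the attribute names and the call_model helper are hypothetical.

    ```python
    # Generic OpenTelemetry tracing sketch for an LLM call (illustrative;
    # the attribute names and call_model() are hypothetical, not OpenLit's API).
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)
    tracer = trace.get_tracer("llm-app")

    def call_model(prompt: str) -> dict:
        # Stand-in for a real model invocation; returns text plus usage counts.
        return {"text": "...", "prompt_tokens": 42, "completion_tokens": 17}

    with tracer.start_as_current_span("llm.chat") as span:
        result = call_model("Explain observability for LLM apps.")
        # Record the signals that make token costs and model behavior visible
        # on the resulting trace.
        span.set_attribute("llm.prompt_tokens", result["prompt_tokens"])
        span.set_attribute("llm.completion_tokens", result["completion_tokens"])
    ```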

    Announcements
    Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    Unlock the full potential of your AI workloads with a seamless and composable data infrastructure. Bruin is an open source framework that streamlines integration from the command line, allowing you to focus on what matters most - building intelligent systems. Write Python code for your business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. With native support for ML/AI workloads, Bruin empowers data teams to deliver faster, more reliable, and scalable AI solutions. Harness Bruin's connectors for hundreds of platforms, including popular machine learning frameworks like TensorFlow and PyTorch. Build end-to-end AI workflows that integrate seamlessly with your existing tech stack. Join the ranks of forward-thinking organizations that are revolutionizing their data engineering with Bruin. Get started today at aiengineeringpodcast.com/bruin, and for dbt Cloud customers, enjoy a $1,000 credit to migrate to Bruin Cloud.
    Your host is Tobias Macey and today I'm interviewing Aman Agarwal about the operational investments that are necessary to ensure you get the most out of your AI models

    Interview

    Introduction
    How did you get involved in the area of AI/data management?
    Can you start by giving your assessment of the main blind spots that are common in the existing AI application patterns?
    As teams adopt agentic architectures, how common is it to fall prey to those same blind spots?
    There are numerous tools/services available now focused on various elements of "LLMOps". What are the major components necessary for a minimum viable operational platform for LLMs?
    There are several areas of overlap, as well as disjoint features, in the ecosystem of tools (both open source and commercial). How do you advise teams to navigate the selection process? (point solutions vs. integrated tools, and handling frameworks with only partial overlap)
    Can you describe what OpenLit is and the story behind it?
    How would you characterize the feature set and focus of OpenLit compared to what you view as the "major players"?
    Once you have invested in a platform like OpenLit, how does that change the overall development workflow for the lifecycle of AI/agentic applications?
    What are the most complex/challenging elements of change management for LLM-powered systems? (e.g. prompt tuning, model changes, data changes, etc.)
    How can the information collected in OpenLit be used to develop a self-improvement flywheel for agentic systems?
    Can you describe the architecture and implementation of OpenLit?
    How have the scope and goals of the project changed since you started working on it?
    Given the foundational aspects of the project that you have built, what are some of the adjacent capabilities that OpenLit is situated to expand into?
    What are the sharp edges and blind spots that are still challenging even when you have OpenLit or similar integrated?
    What are the most interesting, innovative, or unexpected ways that you have seen OpenLit used?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on OpenLit?
    When is OpenLit the wrong choice?
    What do you have planned for the future of OpenLit?

    Contact Info

    LinkedIn

    Parting Question

    From your perspective, what is the biggest gap in the tooling or technology for data/AI management today?

    Closing Announcements

    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links

    OpenLit
    Fleet Hub
    OpenTelemetry
    LangFuse
    LangSmith
    TensorZero
    AI Engineering Podcast Episode
    Traceloop
    Helicone
    Clickhouse

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
  • AI Engineering Podcast

    Taming Voice Complexity with Dynamic Ensembles at Modulate

    08.02.2026 | 59 Min.
    Summary
    In this episode of the AI Engineering Podcast, Carter Huffman, co-founder and CTO of Modulate, discusses the engineering behind low-latency, high-accuracy Voice AI. He explains why voice is a uniquely challenging modality due to its rich non-textual signals like tone, emotion, and context, and how simple speech-to-text-to-speech pipelines can't capture the necessary nuance. Carter introduces Modulate's Ensemble Listening Model (ELM) architecture, which uses dynamic routing and cost-based optimization to achieve scalability and precision in various audio environments. He covers topics such as reliability under distributed systems constraints, watchdogging with periodic model checks, structured long-horizon memory for conversations, and the trade-offs that make ensemble approaches compelling for repeated tasks at scale. Carter also shares insights on how ELMs generalize beyond voice, draws parallels to database query planners and mixture-of-experts, and discusses strategies for observability and evaluation in complex processing pipelines.
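    To make the query-planner analogy concrete, the toy router below picks the highest-quality specialized model that fits a latency budget. The model names, latencies, and quality scores are invented for illustration; this is not Modulate's ELM implementation.

    ```python
    # Toy cost-based router over specialized voice models (illustrative only;
    # names, latencies, and quality scores are invented, not Modulate's ELM).
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        latency_ms: float  # expected processing latency for this segment
        quality: float     # expected accuracy on this segment, 0..1

    def route(candidates: list[Candidate], latency_budget_ms: float) -> Candidate:
        """Pick the best model that fits the latency budget, mirroring how a
        query planner chooses the cheapest viable plan."""
        viable = [c for c in candidates if c.latency_ms <= latency_budget_ms]
        if not viable:
            # Nothing fits the budget: fall back to the fastest option.
            return min(candidates, key=lambda c: c.latency_ms)
        return max(viable, key=lambda c: c.quality)

    segment_models = [
        Candidate("noisy-room-specialist", latency_ms=80, quality=0.92),
        Candidate("general-transcriber", latency_ms=40, quality=0.85),
        Candidate("heavy-emotion-model", latency_ms=150, quality=0.97),
    ]
    print(route(segment_models, latency_budget_ms=100).name)
    ```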

    Announcements
    Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    Unlock the full potential of your AI workloads with a seamless and composable data infrastructure. Bruin is an open source framework that streamlines integration from the command line, allowing you to focus on what matters most - building intelligent systems. Write Python code for your business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. With native support for ML/AI workloads, Bruin empowers data teams to deliver faster, more reliable, and scalable AI solutions. Harness Bruin's connectors for hundreds of platforms, including popular machine learning frameworks like TensorFlow and PyTorch. Build end-to-end AI workflows that integrate seamlessly with your existing tech stack. Join the ranks of forward-thinking organizations that are revolutionizing their data engineering with Bruin. Get started today at aiengineeringpodcast.com/bruin, and for dbt Cloud customers, enjoy a $1,000 credit to migrate to Bruin Cloud.
    Your host is Tobias Macey and today I'm interviewing Carter Huffman about his work building an ensemble approach to low latency voice AI

    Interview

    Introduction
    How did you get involved in machine learning?
    Can you describe the "Ensemble Listening" approach and the story behind why Modulate moved away from monolithic architectures?
    When designing a real-time voice system, how do you handle the routing logic between specialized models without blowing your latency budget?
    What does the "gatekeeper" or routing layer actually look like in code?
    You’ve mentioned "evals that don’t lie." How do you build a validation pipeline for noisy, adversarial voice data that catches regressions that a simple word-error-rate (WER) might miss?
    In an ensemble of models, a failure in one specialized node might not crash the system, but it can degrade the output quality. How do you monitor for these "silent failures" in real-time without introducing massive overhead?
    For many teams, the default is to call an API for a frontier model. At what point in the scaling or latency curve does it become technically (or economically) necessary to swap a general LLM for a suite of specialized, smaller models?
    How do you track the real-world costs associated with the technical and human overhead of this more complex system?
    What are the most interesting, innovative, or unexpected ways that you have seen orchestrated ensembles used in live conversation environments?
    What are the most interesting, unexpected, or challenging lessons that you have learned while managing the lifecycle of multiple specialized models simultaneously?
    When is an ensemble approach the wrong choice? (e.g., At what level of complexity or throughput is the overhead of orchestration more trouble than it’s worth?)
    What do you have planned for the future of Ensemble Listening Models?
    Are we looking at self-optimizing routers, or perhaps moving these ensembles closer to the edge?

    Contact Info

    LinkedIn

    Parting Question

    From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements

    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links

    Modulate
    Nasa Jet Propulsion Laboratory
    OpenAI Whisper
    Multi-Armed Bandit
    Cost-Based Optimizer
    GPT 5
    LLM Attention
    Transformer Architecture
    Mixture of Experts
    Dilated Convolution
    Wavenet

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
  • AI Engineering Podcast

    GPU Clouds, Aggregators, and the New Economics of AI Compute

    27.01.2026 | 46 Min.
    Summary
    In this episode I sit down with Hugo Shi, co-founder and CTO of Saturn Cloud, to map the strategic realities of sourcing and operating GPUs across clouds. Hugo breaks down today’s provider landscape—from hyperscalers to full-service GPU clouds, bare metal/concierge providers, and emerging GPU aggregators—and how to choose among them based on security posture, managed services, and cost. We explore practical layers of capability (compute, orchestration with Kubernetes/Slurm, storage, networking, and managed services), the trade-offs of portability on “Kubernetes-native” stacks, and the persistent challenge of data gravity. We also discuss current supply dynamics, the growing availability of on-demand capacity as newer chips roll out, and how AMD’s ecosystem is maturing as real competition to NVIDIA. Hugo shares patterns for separating training and inference across providers, why traditional ML is far from dead, and how usage varies wildly across domains like biotech. We close with predictions on consolidation, full‑stack experiences from GPU clouds, financial-style GPU marketplaces, and much-needed advances in reliability for long-running GPU jobs.
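    One practical angle on the reliability discussion for long-running GPU jobs is periodic checkpointing, so a preempted or failed run can resume rather than restart. The sketch below is a generic PyTorch illustration; the path, interval, and model are arbitrary assumptions, not tied to any provider mentioned in the episode.

    ```python
    # Minimal checkpoint/resume sketch for a long-running training job
    # (generic illustration; path, interval, and model are arbitrary assumptions).
    import os
    import torch
    import torch.nn as nn

    CKPT_PATH = "checkpoint.pt"  # in practice, shared or object storage
    model = nn.Linear(128, 10)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    start_step = 0

    # Resume if a prior run was interrupted (spot reclaim, node failure, etc.).
    if os.path.exists(CKPT_PATH):
        state = torch.load(CKPT_PATH)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        start_step = state["step"] + 1

    for step in range(start_step, 1000):
        optimizer.zero_grad()
        loss = model(torch.randn(32, 128)).pow(2).mean()  # stand-in loss
        loss.backward()
        optimizer.step()
        if step % 100 == 0:
            # Checkpoint periodically so preemption costs at most one interval.
            torch.save({"model": model.state_dict(),
                        "optimizer": optimizer.state_dict(),
                        "step": step}, CKPT_PATH)
    ```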

    Announcements
    Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    Unlock the full potential of your AI workloads with a seamless and composable data infrastructure. Bruin is an open source framework that streamlines integration from the command line, allowing you to focus on what matters most - building intelligent systems. Write Python code for your business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. With native support for ML/AI workloads, Bruin empowers data teams to deliver faster, more reliable, and scalable AI solutions. Harness Bruin's connectors for hundreds of platforms, including popular machine learning frameworks like TensorFlow and PyTorch. Build end-to-end AI workflows that integrate seamlessly with your existing tech stack. Join the ranks of forward-thinking organizations that are revolutionizing their data engineering with Bruin. Get started today at aiengineeringpodcast.com/bruin, and for dbt Cloud customers, enjoy a $1,000 credit to migrate to Bruin Cloud.
    Your host is Tobias Macey and today I'm interviewing Hugo Shi about the strategic realities of sourcing GPUs in the cloud for your training and inference workloads

    Interview
    Introduction
    How did you get involved in machine learning?
    Can you start by giving a summary of your understanding of the current market for "cloud" GPUs?
    How would you characterize the customer base for the "neocloud" providers?
    How is the access to the GPU compute typically mediated?
    The predominant cloud providers (AWS, GCP, Azure) have gained market share by offering numerous differentiated services and ease-of-use features. What are the types of services that you might expect from a GPU provider?
    The "cloud-native" ecosystem was developed with the promise of enabling workload portability, but the realities are often more complicated. What are some of the difficulties that teams encounter when trying to adapt their workloads to these different cloud providers?
    What are the toolchains/frameworks/architectures that you are seeing as most effective at adapting to these different compute environments?
    One of the major themes in the 2010s that worked against multi-cloud strategies was the idea of "data gravity". What are the strategies that teams are using to mitigate that tax on their workloads?
    Data gravity has a more substantial impact on training workloads than on inference compute. How are you seeing teams think about the balance of cost savings vs. operational complexity for those different workloads?
    What are the most interesting, innovative, or unexpected ways that you have seen teams capitalize on GPU capacity across these new providers?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on enabling teams to execute workloads on these neoclouds?
    When is a "neocloud" or "GPU cloud" provider the wrong choice?
    What are your predictions for the future evolutions of GPU-as-a-service as hardware availability improves and model architectures become more efficient?

    Contact Info
    LinkedIn

    Parting Question
    From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links
    Saturn Cloud
    Pandas
    NumPy
    MatLab
    AWS
    GCP
    Azure
    Oracle Cloud
    RunPod
    FluidStack
    SFCompute
    KubeFlow
    Lightning AI
    DStack
    Metaflow
    Flyte
    Arya AI
    Dagster
    Coreweave
    Vultr
    Nebius
    Vast.ai
    Weka
    Vast Data
    Slurm
    CNCF == Cloud-Native Computing Foundation
    Kubernetes
    Terraform
    ECS
    Helm Chart
    Block Storage
    Object Storage
    Container Registry
    Crusoe
    Alluxio
    Data Virtualization
    GB300
    H100
    Spot Instance
    AWS Trainium
    Google TPU (Tensor Processing Unit)
    AMD
    ROCM
    PyTorch
    Google Vertex AI
    AWS Bedrock
    CUDA Python
    Mojo
    XGBoost
    Random Forest
    Ludwig - Uber Deep Learning AutoML
    Paperspace
    Voltage Park
    Weights & Biases

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
  • AI Engineering Podcast

    The Future of Dev Experience: Spotify’s Playbook for Organization‑Scale AI

    20.01.2026 | 56 Min.
    Summary
    In this episode of the AI Engineering Podcast, Niklas Gustavsson, Chief Architect at Spotify, talks about scaling AI across engineering and product. He explores how Spotify's highly distributed architecture, enabled by standardization and Backstage, supported rapid adoption of coding agents like Copilot, Cursor, and Claude Code. The conversation covers the tension between bottom-up experimentation and platform standardization, and how Spotify is moving toward monorepos and fleet management. Niklas discusses the emergence of "fleet-wide agents" that can execute complex code changes with robust testing and LLM-as-judge loops to ensure quality. He also touches on the shift in engineering workflows as code generation accelerates, the growing use of agents beyond coding, and the lessons learned in sandboxing, agent skills/rules, and shared evaluation frameworks. Niklas highlights Spotify's decade-long experience with ML product work and shares his vision for deeper end-to-end integration of agentic capabilities across the full product lifecycle and making collaborative "team-level memory" for agents a reality.
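    To illustrate the LLM-as-judge quality gate described above, here is a schematic sketch; generate_patch and judge are hypothetical stand-ins rather than Spotify's tooling.

    ```python
    # Schematic LLM-as-judge gate for agent-generated changes (illustrative;
    # generate_patch() and judge() are hypothetical stand-ins, not Spotify's tooling).

    def generate_patch(task: str) -> str:
        """Stand-in for a coding agent producing a candidate change."""
        return f"diff for: {task}"

    def judge(patch: str, rubric: str) -> float:
        """Stand-in for an LLM judge scoring the patch against a rubric, 0..1."""
        return 0.9

    def run_fleet_change(task, rubric, threshold=0.8, max_attempts=3):
        for _ in range(max_attempts):
            patch = generate_patch(task)
            if judge(patch, rubric) >= threshold:
                return patch   # only changes that pass the gate ship automatically
        return None            # otherwise escalate to a human reviewer

    print(run_fleet_change("migrate deprecated logging API",
                           "compiles, tests pass, style preserved"))
    ```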

    Announcements
    Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    Unlock the full potential of your AI workloads with a seamless and composable data infrastructure. Bruin is an open source framework that streamlines integration from the command line, allowing you to focus on what matters most - building intelligent systems. Write Python code for your business logic, and let Bruin handle the heavy lifting of data movement, lineage tracking, data quality monitoring, and governance enforcement. With native support for ML/AI workloads, Bruin empowers data teams to deliver faster, more reliable, and scalable AI solutions. Harness Bruin's connectors for hundreds of platforms, including popular machine learning frameworks like TensorFlow and PyTorch. Build end-to-end AI workflows that integrate seamlessly with your existing tech stack. Join the ranks of forward-thinking organizations that are revolutionizing their data engineering with Bruin. Get started today at aiengineeringpodcast.com/bruin, and for dbt Cloud customers, enjoy a $1,000 credit to migrate to Bruin Cloud.
    Your host is Tobias Macey and today I'm interviewing Niklas Gustavsson about how Spotify is scaling AI usage in engineering and product work

    Interview

    Introduction
    How did you get involved in machine learning?
    Can you start by giving an overview of your engineering practices independent of AI?
    What was your process for introducing AI into the developer experience? (e.g. pioneers doing early work (bottom-up) vs. top-down)
    There are countless agentic coding tools on the market now. How do you balance organizational standardization vs. exploration?
    Beyond the toolchain, what are your methods for sharing best practices and upskilling engineers on use of agentic toolchains for software/product engineering?
    Spotify has been operationalizing ML/AI features since before the introduction of LLMs and transformer models. How has that history helped inform your adoption of generative AI in your overall engineering organization?
    As you use these generative and agentic AI utilities in your day-to-day, how have those lessons learned fed back into your AI-powered product features?
    What are some of the platform capabilities/developer experience investments that you have made to improve the overall effectiveness of agentic coding in your engineering organization?
    What are some examples of guardrails/speedbumps that you have introduced to avoid injecting unreliable or untested work into production?
    As the (time/money/cognitive) cost of writing code drops, the burden of reviewing that code increases. What are some of the ways that you are working to scale that side of the equation?
    What are some of the ways that agentic coding/CLI utilities have bled into other areas of engineering/operations/product development beyond just writing code?
    What are the most interesting, innovative, or unexpected ways that you have seen your team applying AI/agentic engineering practices?
    What are the most interesting, unexpected, or challenging lessons that you have learned while working on operationalizing and scaling agentic engineering patterns in your teams?
    When is agentic code generation the wrong choice?
    What do you have planned for the future of AI and agentic coding patterns and practices in your organization?

    Contact Info

    LinkedIn

    Parting Question

    From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements

    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.
    To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links

    Spotify
    Developer Experience
    LLM == Large Language Model
    Transformers
    BackStage
    GitHub Copilot
    Cursor
    Claude Skills
    Monorepo
    MCP == Model Context Protocol
    Claude Code
    Product Manager
    DORA Metrics
    Type Annotations
    BigQuery
    PRD == Product Requirements Document
    AI Evals
    LLM-as-a-Judge
    Agentic Memory

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0


About AI Engineering Podcast

This show is your guidebook to building scalable and maintainable AI systems. You will learn how to architect AI applications, apply AI to your work, and the considerations involved in building or customizing new models. Everything that you need to know to deliver real impact and value with machine learning and artificial intelligence.
Podcast website

