Nicolay here,

Most AI coding tools obsess over automating everything. This conversation focuses on the right balance between human skill and AI assistance, and on why manual context beats web search every time.

Today I have the chance to talk to Ben Holmes, a software engineer at Warp, where they're building the AI-first terminal.

Manual context engineering trumps automated web search for getting accurate results from coding assistants.

Key Insight Expansion

The breakthrough insight is brutally practical: manual context construction consistently outperforms automated web search when working with AI coding assistants. Instead of letting your AI tool search for documentation, find the right pages yourself and feed them directly into the model's context window.

Ben demonstrated this with OpenAI's Realtime API documentation. After an hour of back-and-forth with web search, he manually found the correct API signatures and saved them as a reference file. When building new features, he attached this curated documentation directly, resulting in immediate success rather than repeated failures from outdated or incorrect search results.

This approach works because you can verify documentation accuracy before feeding it to the AI, while web search often returns the first result regardless of quality or recency.

In the podcast, we also touch on:

- Why React Native might become irrelevant as AI translation between native languages improves
- Model-specific strengths: Gemini excels at debugging while Claude dominates function calling
- The skill of working without AI assistance: "raw dogging" code for deep learning
- Warp's architecture using different models for planning (O1/O3) vs. coding (Claude/Gemini)

💡 Core Concepts

- Manual Context Engineering: Curating documentation, diagrams, and reference materials directly rather than relying on automated web search.
- Model-Specific Workflows: Matching AI models to their strengths: O1 for planning, Claude for function calling, Gemini for debugging.
- Raw Dog Programming: Coding without AI assistance to build fundamental skills in codebase navigation and problem-solving.
- Agent Mode Architecture: A multi-model system where Claude orchestrates task distribution to specialized agents through function calls.

📶 Connect with Ben: Twitter/X, YouTube, Discord (Warp Community), Website

📶 Connect with Nicolay: LinkedIn, X/Twitter, Bluesky, Website,
[email protected]

⏱ Important Moments

- React Native's Potential Obsolescence [08:42]: AI translation between native languages could eliminate cross-platform frameworks.
- Raw Dog Programming Benefits [12:00]: The value of coding without AI assistance during Ben's first week at Warp.
- OpenAI Desktop App Advantage [13:44]: Outperforms Cursor for reading long files.
- Model-Specific Strengths [26:00]: Gemini's superior debugging vs. Claude's speculative code fixes.
- Function Calling Accuracy [28:30]: Claude outperforms other models at chaining function calls.
- Warp's Multi-Model Architecture [31:00]: How Warp uses O1/O3 for planning and Claude for orchestration.
- Manual vs. Automated Context [51:42]: Why manually curating documentation beats AI web search.
- AI as Improv Partner [56:06]: Current AI says "yes, and" to everything rather than pushing back.

🛠 Tools & Tech Mentioned

Warp Terminal, OpenAI Desktop App, Cursor, Cline, Go by Example, OpenAI Realtime API, MCP

📚 Recommended Resources

Warp Discord Community, Ben's YouTube Channel, Go Programming Documentation

🔮 What's Next

Next week, we continue exploring production AI implementations with more insights into getting generative AI systems deployed effectively.

💬 Join The Conversation

Follow How AI Is Built on YouTube, Bluesky, or Spotify. Discord coming soon!

♻ Building the platform for engineers to share production experience. Pay it forward by sharing with one engineer facing similar challenges. ♻