
Inside an AI Engineer's Workflow for Building MCP Servers

AI engineer Alex demonstrates his complete workflow for building and deploying MCP servers, revealing how AI tools shape—and complicate—modern development.

Written by AI. Marcus Chen-Ramirez

February 11, 2026


Photo: ZazenCodes / YouTube

Alex, an AI engineer who goes by ZazenCodes online, spent 90 minutes on camera building something most people have never heard of: an MCP server. The Model Context Protocol, if you're unfamiliar, is Anthropic's attempt to standardize how AI models interact with external tools and data sources. Alex has built several of these servers—a random number generator, a unit converter—and recently decided to document his entire process for building another one.
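For context on what a protocol like this actually looks like on the wire: MCP messages are JSON-RPC 2.0. A rough sketch of a tool-call exchange is below; the tool name and arguments are invented for illustration and are not taken from Alex's server.

```python
import json

# MCP clients and servers exchange JSON-RPC 2.0 messages. A client asking a
# server to run one of its tools sends a request shaped roughly like this
# (tool name and arguments are hypothetical):
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_apis",                  # hypothetical tool name
        "arguments": {"query": "weather data"},
    },
}

# The server replies with a result payload; text output is wrapped in a
# list of content blocks:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "OpenWeatherMap: ..."}]},
}

print(json.dumps(request, indent=2))
```

The protocol's value is exactly this uniformity: any MCP-aware client can discover and call any server's tools without custom glue code.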

What emerged is less a tutorial than an anthropological study of how software gets made in 2025. Alex uses AI at nearly every step: GPT-4 to brainstorm architecture, Cursor (an AI-powered code editor) to generate implementation, Claude to debug data parsing issues. The tools are omnipresent. They're also, watching him work, conspicuously imperfect.

The Human in the Loop

Alex's workflow begins not with code but with conversation. He opens GPT-4 and explains what he wants to build: a server that helps developers find free APIs from a curated list of 14,000 options. The AI suggests an architecture. Alex pushes back, asks clarifying questions, invokes Tim Ferriss—"what would this look like if it were easy?"—to simplify the scope.

This part works remarkably well. Within minutes, they've sketched out a two-tool system: one for semantic search across API descriptions, another for retrieving detailed information about a specific API. The AI even suggests using lightweight embeddings that can run locally, anticipating Alex's unstated preference for cost-effective solutions.
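The video doesn't show the finished code at this stage, but the two-tool shape can be sketched with a stdlib-only stand-in, swapping real embeddings for bag-of-words cosine similarity. The API names and descriptions below are invented, and the tiny dictionary stands in for the 14,000-entry list.

```python
import math
from collections import Counter

# Toy catalog standing in for the curated API list (entries invented).
APIS = {
    "OpenWeatherMap": "Current weather and forecast data for any location",
    "NASA APOD": "Astronomy picture of the day with descriptions",
    "ExchangeRate": "Foreign exchange rates and currency conversion",
}

def vectorize(text: str) -> Counter:
    """Crude stand-in for an embedding: a bag of lowercased words."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Tool 1: rank APIs by similarity between the query and each description.
def search_apis(query: str, top_k: int = 2) -> list[str]:
    q = vectorize(query)
    ranked = sorted(APIS, key=lambda n: cosine(q, vectorize(APIS[n])), reverse=True)
    return ranked[:top_k]

# Tool 2: return full details for one API by name.
def get_api_details(name: str) -> str:
    return APIS[name]

print(search_apis("weather forecast"))  # OpenWeatherMap ranks first
```

In the real server, a lightweight embedding model would replace `vectorize`, but the two-tool division of labor (broad search, then targeted lookup) is the same.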

But then Alex does something the AI can't: he takes that conversation, copies it into a markdown file, and starts editing. He catches naming inconsistencies. Questions architectural decisions. Removes entire sections he doesn't trust. "I'm going to assume that whatever coding model I use can pretty much figure out the rest of this," he says, deleting several paragraphs of implementation details. "And this might be incorrect. It might be great, but it might be wrong."

This is the pattern that repeats throughout the video: AI generates, human curates, AI generates again. The tools are tireless and fast. They're also confidently wrong in ways that require human judgment to catch.

When the AI Gets Confused

The data parsing section is illustrative. Alex needs to extract API information from a markdown file—a seemingly straightforward task. He asks an AI to write a Python script. It does. The script expects input via stdin, which Alex finds "pretty weird." He modifies it to use file arguments instead.

He runs it. The output includes mysterious "Call This API Link" fields that shouldn't exist. He asks Claude to remove them. Claude makes the change. Alex reruns the script. The fields are still there. "So I'm being pretty dumb," Alex says, with the particular self-deprecation of someone who's been programming long enough to know that debugging is 90% catching your own assumptions. "And I'm going to attribute this to the fact that I'm trying to film this live and do stuff, but I don't know, maybe it's just me."

It's not just him. The AI made a change that didn't actually solve the problem—possibly because it misunderstood the data structure, possibly because Alex's instruction was ambiguous. He ends up manually deleting the problematic rows. Total time spent on this detour: about ten minutes. Total time an experienced developer would have spent just writing the correct parser in the first place: probably less.
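For a sense of what "just writing the correct parser" looks like: assuming the source file is a standard markdown table (the exact structure of Alex's file isn't shown, and the sample rows below are invented), a stdlib-only parser is short, and dropping an unwanted column becomes a one-line dictionary filter rather than a debugging session.

```python
def parse_api_table(lines):
    """Parse markdown table rows like '| Name | Desc | Auth |' into dicts."""
    rows = []
    header = None
    for line in lines:
        line = line.strip()
        if not line.startswith("|"):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        if header is None:
            header = cells                      # first pipe row is the header
        elif set(cells[0]) <= {"-", ":", " "}:  # separator row like |---|---|
            continue
        else:
            rows.append(dict(zip(header, cells)))
    return rows

# In practice you'd read this from a file path argument; inlined here.
markdown = """\
| API | Description | Auth |
|-----|-------------|------|
| Cat Facts | Daily cat facts | No |
| NASA | Space data and imagery | apiKey |
"""

for row in parse_api_table(markdown.splitlines()):
    print(row["API"], "-", row["Description"])
```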

This isn't an argument against AI tools. Alex clearly finds them valuable—he's using three different ones simultaneously. But it surfaces a tension that doesn't appear in the marketing materials: AI assistance can create as much cognitive overhead as it eliminates. You're not just solving your original problem anymore. You're also managing what the AI understood, validating what it produced, and debugging its mistakes alongside your own.

The Deployment Dance

The final hour of Alex's video covers deployment to PyPI, Python's package repository. This is the part that, as he notes, "is kind of more rare to see I think on YouTube." Most tutorials end with a working prototype. Alex goes further, setting up GitHub Actions for automated publishing, creating proper package metadata, submitting to MCP marketplaces.

None of this is glamorous. It's the professional polish that separates a weekend project from something others can actually use. And notably, the AI tools are far less helpful here. The workflow is linear, documented, rule-based—exactly the kind of thing that doesn't benefit much from statistical pattern matching. Alex mostly just follows PyPI's documentation.
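For readers curious what that polish involves: the core of a PyPI-ready package is a `pyproject.toml` declaring build backend, metadata, and an entry point. A minimal fragment follows; every name and version in it is hypothetical, not Alex's actual package.

```toml
[build-system]
requires = ["hatchling"]            # one common build backend; setuptools also works
build-backend = "hatchling.build"

[project]
name = "my-mcp-server"              # hypothetical package name
version = "0.1.0"
description = "An example MCP server package"
requires-python = ">=3.10"
dependencies = ["mcp"]              # the MCP Python SDK

[project.scripts]
my-mcp-server = "my_mcp_server:main"  # console entry point so clients can launch it by name
```

With that in place, `python -m build` produces the distributions and `twine upload` (or a GitHub Actions workflow triggered on tagged releases, as Alex sets up) pushes them to PyPI.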

Which raises an interesting question about AI-assisted development: what exactly is being automated? The creative problem-solving? The tedious boilerplate? The parts you already know how to do but don't want to type?

Watching Alex work, the answer seems to be "the middle parts." The AI struggles with novel architectural decisions (where Alex uses it mainly as a thinking partner) and with deployment procedures (where he barely uses it at all). It excels at generating code from specifications—the translation layer between design and implementation. That's valuable. It's also a narrower slice of software development than the term "AI engineer" might suggest.

What This Workflow Reveals

Alex describes his process as "building MCP servers with Python," but what he's actually demonstrating is something more subtle: building software while continuously negotiating with AI collaborators that are simultaneously helpful, fast, and unreliable.

He's developed strategies for this negotiation. He plans in text before committing to code. He validates AI output against his own expertise. He knows when to override suggestions and when to trust them. "I'm going fast for the purposes of YouTube," he says at one point. "I'd spend more time thinking about this obviously if I was really building this."

That caveat matters. The video shows a workflow optimized for demonstration—quick, exploratory, willing to accept some technical debt. In production work, Alex implies, he'd be more careful. Which suggests that AI tools might be creating a two-tier development culture: one pace for prototyping and demos, another for code you'll maintain.

The video ends with Alex successfully deploying his server. It works. Users can install it. The AI tools measurably accelerated development. But they didn't eliminate the need for the developer—they transformed what kind of work the developer does. Less typing, more curating. Less implementing, more orchestrating.

Whether that's a better job is a question Alex doesn't answer. He just keeps building.

Marcus Chen-Ramirez is a senior technology correspondent for Buzzrag.

Watch the Original Video

How I build MCP servers as an AI Engineer — Full workflow

ZazenCodes

1h 30m
Watch on YouTube

About This Source

ZazenCodes

ZazenCodes is a YouTube channel focused on teaching AI engineering, specifically targeting data professionals looking to enhance their technical skills. Launched in July 2025, the channel has maintained an active presence, although the exact subscriber count remains undisclosed. It offers practical insights into AI applications, emphasizing coding agents, AI engineering, and related topics.
