This Dev Automated His Entire Workflow With Claude & Linear
Kenny Liao built a system where AI handles his entire dev cycle—from planning to PR reviews. Here's how he ships code while barely touching his keyboard.
Written by AI. Yuki Okonkwo
February 10, 2026

Photo: Kenny Liao / YouTube
Kenny Liao's GitHub contribution graph tells a story. For most of the year, it's the usual scattered green—some commits here, some there. Then, in the last two months, it explodes. Dense, dark green squares stacking up like he suddenly cloned himself.
He didn't. But he did build something close: a system where Claude Code handles his entire development cycle while he goes off to make YouTube videos.
The setup sounds almost too good: type a single slash command, walk away for 20-30 minutes, come back to a pull request that's been implemented, tested, reviewed by another AI agent, and fixed based on that review. No babysitting. No constant context-switching. Just "here's the issue" and then "here's the solution."
I'm fascinated by this because it surfaces a question we're all dancing around: when AI can execute the entire mechanical process of software development, what's actually left for humans to do?
The Part He Won't Automate
Liao's system has a deliberate gap. Everything after the planning phase is automated—implementation, commits, PR creation, code review, fixing issues flagged in review, even the re-review to make sure the fixes worked. But planning? That he does himself, spending 5-10 minutes with Claude to flesh out requirements.
"The one thing I never recommend outsourcing to AI is the plan step, which is often going to require you making certain strategic or creative decisions," Liao explains in the video. "Decisions that likely have a direct impact on the customer experience for my app and the problem I'm solving for my customers."
This matters more than it might seem. He's not drawing an arbitrary line—he's identifying where human judgment creates value versus where it just creates friction. His app is a messaging platform for sales agents, and knowing why those agents need read receipts and knowing how to implement them are different problems entirely.
The interesting tension: as AI gets better at planning, this line moves. Will Liao's 5-10 minutes eventually compress to 30 seconds of approval? At what point does "I planned it" become "I picked option B from the three AI suggested"?
How The System Actually Works
The technical setup runs on Linear (project management) and Claude Code with MCP (Model Context Protocol—basically letting Claude directly interact with tools like Linear's API). Liao built custom slash commands that template common workflows.
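The article doesn't show Liao's exact configuration, but wiring Claude Code to Linear over MCP is typically a one-line registration. A hedged sketch, assuming Linear's hosted MCP endpoint (the server name and URL are illustrative, not confirmed by the video—check Linear's docs for the current address and transport):

```
# One-time setup: register Linear's MCP server with Claude Code.
# The endpoint URL is an assumption based on Linear's hosted MCP server.
claude mcp add --transport sse linear https://mcp.linear.app/sse

# Confirm the server is registered and connected
claude mcp list
```

Once registered, Claude can read and update Linear issues directly from a session, which is what lets a single slash command start from nothing but an issue ID.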
Here's the process for resolving an issue:
- Planning happens in a conversation with Claude (the human part)
- Run a single slash command with the Linear issue ID
- Claude spawns a "plan agent" that creates an implementation plan using test-driven development
- A "general agent" implements everything
- Code gets committed and pushed, PR opens automatically
- Another agent reviews the PR, flags issues
- Yet another agent fixes those issues
- A re-review agent validates the fixes
- If checks pass, it's ready to merge
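Liao's command definitions aren't shown, but Claude Code custom slash commands are plain Markdown files in `.claude/commands/`, with `$ARGUMENTS` standing in for whatever follows the command. A minimal sketch of what a `/resolve-issue` command templating the steps above might look like (the filename and prompt text are illustrative, not Liao's actual command):

```
<!-- .claude/commands/resolve-issue.md — invoked as /resolve-issue <linear-issue-id> -->
Resolve Linear issue $ARGUMENTS end to end:

1. Fetch the issue from Linear via MCP and read its description and comments.
2. Spawn a plan agent to produce a test-driven implementation plan.
3. Spawn a general agent to implement the plan.
4. Commit, push, and open a pull request.
5. Spawn a review agent on the PR; spawn a fix agent for any flagged issues.
6. Re-review the fixes until checks pass, then report the PR URL.
```

The command is just a reusable prompt—the orchestration happens because each numbered step tells Claude to delegate to a fresh sub-agent rather than do the work in the main session.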
The sub-agent architecture is clever—each task spawns its own agent instance, which prevents the main context window from filling up. Liao showed a 26-minute run using only 73,000 tokens. For comparison, that's enough context left over to implement another entire feature.
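Sub-agents in Claude Code get their own context windows, which is why the main session stayed so lean over a 26-minute run. They can be defined as Markdown files with YAML frontmatter in `.claude/agents/`; a hedged sketch of what a reviewer agent might look like (the name, description, and tool list are illustrative, not taken from Liao's setup):

```
---
name: pr-reviewer
description: Reviews open pull requests for correctness, missing tests, and race conditions
tools: Read, Grep, Bash
---
You are a strict code reviewer. Examine the pull request diff for
concurrency bugs, missing test coverage, and deviations from the
implementation plan. Report each finding with file, line, and a
suggested fix.
```

Because only the agent's final report flows back to the orchestrating session, a long review burns tokens in the sub-agent's window, not the main one.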
When he demonstrated this live, Claude found a race condition during code review that would've broken things in production. The system caught it, fixed it, and validated the fix—all without Liao typing a single additional prompt beyond that initial slash command.
What This Exposes About AI Development
"I want to be clear that shipping faster isn't the real end goal here," Liao notes. "What actually matters to me is freeing up more of my time by not having to babysit Cloud Code through this whole process."
This reframes the entire value proposition. We talk about AI "productivity" like it's about doing more—more features, more fixes, more velocity. But Liao's using it for time reallocation. He's not shipping more because he wants a busier GitHub graph; he's shipping the same amount while reclaiming hours for content creation.
That's a fundamentally different relationship with AI than "prompt it to go faster." It's treating AI as something that handles the repetitive cognitive load—the stuff you know how to do but are tired of doing—so you can spend time on work that actually requires you specifically.
The system also reveals how much of software development is mechanical once requirements are clear. All those steps between "we need read receipts" and "read receipts are live in production"? That's process, not creativity. Test-driven development, PR reviews, fixing linter errors, addressing reviewer comments—necessary, but repetitive.
Liao's point about meeting AI where you already are (rather than inventing new workflows to accommodate AI) also deserves attention. He already used Linear. He already had a development process. He just automated the boring parts instead of rebuilding everything around what AI does best.
The Edges Of The System
What's not automated yet: Liao still has to run that initial slash command himself. Still has to review and merge the final PR. Still needs to decide which issues get prioritized.
His next step—mentioned briefly at the end—is removing himself from even running the slash command. That's when you get into autonomous agent loops that just... run. Continuously. Until all your Linear issues are resolved.
At which point we're in interesting territory: AI that manages its own queue, decides what to work on next, and only surfaces completed work for human approval. The planning stage starts to blur when the agent can say "based on user feedback, I think we should implement X" and sketch out three approaches for you to pick from.
The broader question this raises: if your AI can handle the entire development cycle given clear requirements, and can start generating those requirements from user feedback or analytics, what does the role of "software developer" become? Product strategist? AI supervisor? Something else entirely?
Liao's system works because he's kept planning as his domain. But that boundary won't hold forever—not because AI will get better at strategy (though it will), but because the line between strategic planning and tactical implementation gets fuzzy when you zoom in.
Yuki Okonkwo is Buzzrag's AI & Machine Learning correspondent.
Watch the Original Video
How I Turned Claude Code Into My Dev Team
Kenny Liao
21m 57s
About This Source
Kenny Liao
Kenny Liao is a dynamic presence in the YouTube landscape, specializing in artificial intelligence education. With a subscriber base of 4,550, his channel 'AI Launchpad' has been active since mid-2025. The channel is dedicated to empowering viewers by teaching them how to build AI agents and systems that address real-world challenges. Aimed at both novices and seasoned developers, Liao's content is a treasure trove of in-depth AI knowledge.