VS Code's Autopilot Mode: Trust Issues, Automation, and AI
Microsoft's VS Code introduces Autopilot mode for GitHub Copilot. The promise: hands-off automation. The question: how much control are you willing to surrender?
Written by AI · Mike Sullivan
March 27, 2026

Photo: Visual Studio Code / YouTube
Microsoft's VS Code team just shipped something interesting: a permission system that lets you decide how much rope to give their AI assistant. They're calling the most aggressive setting "Autopilot mode," which should tell you everything about where they think this is headed.
The shift happened quietly. VS Code moved from monthly releases to weekly ones—February 2026 was the last traditional drop—and buried in those rapid-fire updates is a fundamental rethinking of how developers interact with AI tooling. Not the capabilities. The trust model.
The Approval Problem
Justin, a developer on the VS Code team, laid out the friction point during their March livestream. When you're using GitHub Copilot's agent mode for a big refactor, you hit constant speed bumps. The AI wants to run a terminal command—click to approve. It wants to edit a file—approve again. It needs to search your codebase—another approval. "Sometimes you're doing like a really big refactor and maybe you just want to leave it, but you'll get stopped by these approvals," Justin explained.
The old model was binary: either approve everything globally or get interrupted constantly. The new permission picker gives you a spectrum. "Default approvals" maintains the status quo—you've configured specific commands or file types you trust, everything else asks permission. "Bypass approvals" is the nuclear option: approve everything, ignore all your previous settings, just let it rip.
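The spectrum Justin describes can be modeled as a small policy check. This is a sketch with invented names (the enum values and function are illustrations, not VS Code's actual settings or code):

```python
from enum import Enum

class ApprovalMode(Enum):
    DEFAULT = "default"      # honor the per-command / per-file-type rules you configured
    BYPASS = "bypass"        # approve everything, ignore previous settings
    AUTOPILOT = "autopilot"  # bypass, plus keep looping until the task is done

def needs_approval(mode: ApprovalMode, action: str, trusted_actions: set) -> bool:
    """Return True if the user must click to approve this action."""
    if mode in (ApprovalMode.BYPASS, ApprovalMode.AUTOPILOT):
        return False  # the nuclear option: never interrupt
    return action not in trusted_actions  # default: ask unless pre-trusted

# A terminal command not on the trusted list still prompts in default mode
print(needs_approval(ApprovalMode.DEFAULT, "run_terminal", {"edit_file"}))  # True
```

The point of the sketch is that "bypass" and "autopilot" share the same approval answer; what separates them is what happens after each step, not whether you get asked.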
Then there's Autopilot mode, currently experimental in the Insiders build.
What Autopilot Actually Does
Autopilot takes "bypass approvals" and adds persistence. It doesn't just auto-approve the agent's requests—it keeps going until it thinks the task is complete. The AI has access to a "task complete" tool, and it won't stop looping (currently capped at five iterations) until it calls that tool or hits an unrecoverable error.
The interesting part: it handles user input. If your terminal prompts for "yes or no," Autopilot answers. If the AI's "ask questions" tool fires—which apparently happens frequently in plan mode—it picks the default answers the model suggests. And if there's no default? It tells the AI to make its best guess and keep moving.
"The really cool thing about this is it will auto reply on questions," Justin demonstrated. "It will auto reply if your terminal requires input like let's say like you have to type like yes or no, something like that, it will automatically do that."
You can start in plan mode—where the AI outlines its approach—and Autopilot will automatically transition to implementation when planning finishes. Hands-off from end to end.
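The control flow described above—loop up to five times, auto-answer prompts with defaults or a "best guess" instruction, stop on the task-complete tool or an unrecoverable error—can be sketched in a few lines. Everything here is hypothetical (the `Step` kinds, `StubAgent`, and `run_autopilot` are illustrations, not VS Code internals):

```python
from dataclasses import dataclass
from typing import Optional

MAX_ITERATIONS = 5  # the current experimental cap mentioned in the livestream

@dataclass
class Step:
    kind: str                      # "task_complete", "ask_question", "terminal_prompt", "work", "error"
    default: Optional[str] = None  # the model's suggested default answer, if any
    recoverable: bool = True

class StubAgent:
    """Toy stand-in for the real model: replays a scripted sequence of steps."""
    def __init__(self, script):
        self.script = iter(script)
        self.replies = []
    def next_step(self, task):
        return next(self.script)
    def reply(self, answer):
        self.replies.append(answer)

def run_autopilot(agent, task):
    """Loop until the agent calls its task-complete tool, fails, or hits the cap."""
    for _ in range(MAX_ITERATIONS):
        step = agent.next_step(task)
        if step.kind == "task_complete":
            return "done"
        if step.kind == "ask_question":
            # take the suggested default; with none, tell the model to guess and keep moving
            agent.reply(step.default or "make your best guess and continue")
        elif step.kind == "terminal_prompt":
            agent.reply(step.default or "yes")  # e.g. auto-answering a yes/no prompt
        elif step.kind == "error" and not step.recoverable:
            return "failed"
    return "hit_iteration_cap"

agent = StubAgent([Step("ask_question", default="yes"), Step("work"), Step("task_complete")])
print(run_autopilot(agent, "big refactor"))  # done
```

The plan-to-implementation handoff would simply be another step kind that the loop passes through without stopping, which is what makes the experience hands-off from end to end.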
The Trust Gradient
This is where it gets philosophically interesting. The VS Code team isn't being prescriptive about which mode you should use. "I think it's really up to the user's specific trust level," Justin said when asked directly. "Obviously, you should kind of take into account things like sandboxing, like the repo, the task. Um I think there's no one like right answer."
That's a reasonable position, but it sidesteps the real question: what should the default relationship be between a developer and an AI agent? The fact that they reset to "default approvals" when you start a new session—unless you've explicitly enabled global auto-approve—suggests the answer. They're defaulting to distrust as the safe choice.
The permission model acknowledges something we don't talk about enough: AI tooling exists on a trust spectrum that's contextual. Maybe you trust Copilot to refactor your personal side project but not to touch production infrastructure. Maybe you trust it more in a sandboxed environment than when it has filesystem access. Maybe your trust changes based on whether you're prototyping or fixing a critical bug.
Building that nuance into the interface is actually pretty sophisticated. It's also a tacit admission that "AI will replace developers" was always the wrong framing. The question isn't replacement—it's delegation, and delegation requires trust boundaries.
What's Coming
The roadmap Justin outlined reveals where they think the friction points remain. Configurable iteration loops—let users decide if five attempts is too many or too few. Forced code review at the end—after Autopilot finishes, make it review its own work and check quality metrics. Better UI for showing when Autopilot continues looping versus when it's done.
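If those roadmap items land, the knobs might look something like this. All field names here are invented for illustration—these are not real VS Code settings:

```python
from dataclasses import dataclass

@dataclass
class AutopilotConfig:
    # Hypothetical configuration surface for the roadmap items above.
    max_iterations: int = 5        # today's fixed cap, made user-configurable
    review_on_finish: bool = True  # force a self-review / quality check when the loop ends
    show_loop_status: bool = True  # surface "still looping" vs "actually done" in the UI

cfg = AutopilotConfig(max_iterations=10)
print(cfg.max_iterations)  # 10
```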
That last one matters more than it sounds. If you're walking away and letting Autopilot run, you need confidence that you'll know when it's actually finished versus when it hit a snag. The difference between "I can grab coffee" and "I need to check back in five minutes" is the difference between useful automation and just another thing demanding attention.
They're also planning Autopilot integration for GitHub Copilot CLI, which already has its own autopilot implementation. Synchronizing those experiences suggests they're serious about this as a persistent pattern, not an experiment.
The Pattern That Keeps Repeating
Here's what's familiar if you've been watching developer tools evolve over multiple decades: the initial capability arrives as a novelty, the real work happens in the permission model, and the UX evolution takes years.
We've seen this movie. Package managers needed to solve trust before they became ubiquitous. Container orchestration needed security boundaries before enterprises touched it. CI/CD pipelines needed approval gates before teams deployed to production. Every powerful automation eventually confronts the same question: who controls what, and how do you make that granular enough to be useful?
The VS Code team is building those boundaries now, which is actually the mature move. They're not pretending the AI is infallible or that developers should just trust it unconditionally. They're acknowledging that trust is earned contextually and building interfaces that match that reality.
The experimental label on Autopilot mode is honest. They're still running evals, still figuring out when it works well and when it falls apart. That's the responsible path—ship it to people who opt in, learn from real usage, iterate on the rough edges.
Whether Autopilot becomes the default way developers interact with AI assistants or remains a power-user feature for specific scenarios—that's the question the next year will answer. But the fact that they're building these trust gradients at all suggests someone at Microsoft learned something from previous automation hype cycles. Sometimes the hardest part isn't building the capability. It's building the interface that lets people use it without anxiety.
—Mike Sullivan, Technology Correspondent
Watch the Original Video
VS Code Live: March Releases Recap
Visual Studio Code
About This Source
Visual Studio Code
The Visual Studio Code YouTube channel, with a subscriber base of 892,000, is a prominent educational platform for developers focusing on the Visual Studio Code editor. Launched in October 2025, it has quickly become a cornerstone for those looking to enhance their coding skills and integrate AI capabilities into their workflows.