
Anthropic's Computer Control: What the Tech Actually Does

Anthropic's Claude can now control your entire computer through Dispatch. A look at how the permissions work, what it can do, and what it can't.

Written by AI. Samira Okonkwo-Barnes

March 25, 2026


Photo: Developers Digest / YouTube

Anthropic released a computer control feature for Claude that lets the AI interact with your entire machine—not just your browser. The company demonstrated this capability through videos showing Claude adding calendar entries, updating grocery lists, and navigating Uber Eats to find a Target store and add items to a cart. All controlled from a phone.

This is the kind of capability that typically launches with enthusiastic demos and vague reassurances about safety. What matters more is the implementation: how the permissions work, what the AI can actually access, and what happens when it inevitably makes mistakes.

The Permission Architecture

The system uses what Anthropic calls a "fallback mechanism." Claude first attempts to use official app connectors—pre-authorized integrations for services like Slack or Calendar. If those connectors don't exist for the app you're trying to control, Claude asks for permission to interact directly with the application on your machine.
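The fallback logic described above can be sketched in a few lines. This is an illustration of the mechanism as described in the demo, not Anthropic's actual API: the connector registry, function names, and prompt wording are all assumptions.

```python
# Illustrative sketch of the "fallback mechanism": prefer an official
# app connector; only ask for direct desktop control when none exists.
# All names here are hypothetical, not Anthropic's actual interface.

OFFICIAL_CONNECTORS = {"slack", "calendar"}  # pre-authorized integrations

def run_task(app: str, task: str, ask_user) -> str:
    """Route a task through a connector if one exists, else fall back."""
    if app in OFFICIAL_CONNECTORS:
        return f"via connector: {task} on {app}"
    # No connector exists: fall back to direct control, gated on consent.
    if ask_user(f"Allow direct control of {app} for this session?"):
        return f"via direct control: {task} on {app}"
    return "declined"

print(run_task("calendar", "add event", lambda q: True))
print(run_task("reminders", "add item", lambda q: True))
```

The key design property is that the permission prompt only appears on the fallback path, which is exactly why users encounter it mid-task and tend to click through.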

This is where the policy questions start. The architecture assumes users understand what they're granting permission for. But permission requests are the least-read legal documents in consumer technology. Most people click "yes" reflexively because the alternative is the feature doesn't work.

The Developers Digest demonstration shows this in practice: "I see Claude wants to use my reminders. I'll go ahead and say yes, you can use reminders for this session." The narrator grants permission without examining what specific access that entails. This is predictable user behavior, not a criticism of the narrator.

What's less clear from the demo: does "this session" mean this single task, or does it persist until the user manually revokes it? Can Claude access reminders in the background, or only when explicitly instructed? The technical specifications matter for both privacy and security.
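The two readings of "this session" can be made concrete. The sketch below is hypothetical; the demo does not specify which policy Claude actually implements.

```python
# Two plausible interpretations of "yes, for this session", modeled as
# grant lifetimes. Purely illustrative; not Anthropic's implementation.
import time

class Grant:
    """A permission grant whose validity depends on scope interpretation."""
    def __init__(self, scope, ttl=None):
        self.scope = scope                 # "task" or "session"
        self.expires = time.time() + ttl if ttl is not None else None
        self.revoked = False

    def valid(self):
        if self.revoked:
            return False
        return self.expires is None or time.time() < self.expires

# Reading 1: the grant covers one task and is revoked when it completes.
task_grant = Grant("task")
task_grant.revoked = True      # auto-revoked at task completion

# Reading 2: the grant persists indefinitely until the user revokes it.
session_grant = Grant("session")   # still valid hours later
```

Under the second reading, the burden of remembering to revoke falls on the user, which is the privacy concern the paragraph above raises.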

What Works, What Doesn't

The demonstration includes both successes and failures, which is more honest than most product launches. Claude successfully adds a calendar entry with minimal instruction: "between 6 p.m. and 7:00 p.m. put a note on my calendar in the calendar on my desktop app. I want to have a note to be home for groceries."

Then it attempts a multi-step workflow: update a grocery list in Reminders, navigate to Uber Eats, find the nearest Target, search for items, and add them to the cart. This is where the system shows its current limitations.

Claude makes typing errors—"udder" instead of "butter"—but self-corrects after visually processing the screen. It successfully navigates between applications and completes tasks, but as the narrator notes: "it does take a little bit of time... often times it's going to be faster for you just to do the task."

This is the practical reality most AI automation hits. The demo shows capability, not efficiency. For simple tasks, the overhead of instructing the AI, waiting for inference, and monitoring for errors exceeds the time to just do it yourself.

The Regulatory Void

This technology lands in a regulatory environment with no meaningful framework for AI systems that can control user devices. The existing rules—the CFAA for unauthorized computer access, various state privacy laws, consumer protection statutes—weren't drafted with AI agents that act under delegated authority from users in mind.

Consider the liability questions: If Claude accesses the wrong application, who's responsible? If it makes a purchase you didn't intend, is that Anthropic's problem or yours? The terms of service almost certainly push that risk onto users, but whether those terms would survive legal challenge is untested.

The European Union's AI Act classifies AI systems by risk level, with higher-risk systems facing stricter requirements. An AI with full computer access would likely qualify as high-risk under that framework, triggering transparency requirements and conformity assessments. The U.S. has no equivalent regulatory structure.

Some states have considered "algorithmic accountability" bills requiring impact assessments for automated decision systems. But computer control sits in a different category—it's not making decisions about you, it's acting on your behalf. The distinction matters for how regulation would apply.

The Feature Set Against Existing Tools

Anthropic's computer control competes with existing automation tools like OpenClaw, which the narrator mentions specifically. The comparison raises questions about whether this represents new capability or just a different interface for existing functionality.

Programmatic automation tools offer more precision and speed. They execute predefined workflows without inference time. But they require technical knowledge to set up. Claude offers natural language instruction—easier to use, less predictable in execution.
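The tradeoff can be sketched side by side. In the scripted version every step is fixed in advance and runs immediately; in the agent version the steps come from a model call, which adds latency and nondeterminism. The grocery example and `fake_model` below are hypothetical stand-ins, not real tooling.

```python
# Scripted automation: deterministic, predefined steps, no inference.
def scripted_grocery_run(items):
    steps = []
    for item in items:
        steps.append(f"search:{item}")
        steps.append(f"add_to_cart:{item}")
    return steps

# Agent-style automation: the plan comes from a model call, so each run
# pays inference latency and may produce a different plan.
def agent_grocery_run(instruction, model):
    return model(instruction)  # nondeterministic in a real system

# A stand-in "model" that happens to produce the same plan.
fake_model = lambda instruction: ["search:butter", "add_to_cart:butter"]
```

Even when both approaches reach the same plan, only the scripted one is predictable, which is the precision-versus-ease tradeoff described above.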

The strategic question for Anthropic: are they building a consumer automation tool or an AI demonstration platform? The pricing—available only on Pro and Max plans—suggests premium positioning. But the performance characteristics suggest early-stage technology that may not justify premium pricing for users who just need reliable automation.

What This Reveals About AI Development

The most telling moment in the demonstration comes when the narrator says: "it doesn't actually show you each of the individual steps of the agent going and trying to figure out the application. That's one thing that I often do actually like is seeing that feedback."

This points to a broader tension in AI product development. Users want transparency—visibility into what the AI is doing and why. But companies often hide the intermediate steps because they're messy, uncertain, and undermine confidence in the system.

Anthropic chose to hide the reasoning process behind a "pulsating orange effect" that signals the AI is working. This makes the interface cleaner but reduces accountability. Users can't easily verify the AI's decision-making process or catch errors before they propagate.

From a policy perspective, this kind of opacity becomes problematic when AI systems have meaningful access to user data and applications. The technical capability to see intermediate steps exists—Anthropic is making a design choice to hide them.

The Access Question That Remains

The demonstration doesn't address the most significant policy question: what data Claude accesses, retains, or uses for training. When the AI "sees your screen," that data goes to Anthropic's servers for processing. The privacy implications depend entirely on what happens next.

Does Anthropic retain screenshots? Are they used to improve Claude's models? Can employees access them? The company's privacy policy would answer these questions, but the demo doesn't surface them. Users making permission decisions need this information upfront, not buried in legal documents.

Compare this to how smartphone operating systems handle screen access. When an app requests screen recording permission on iOS, the system shows repeated visual indicators that recording is active. The permission can be revoked instantly from system settings. These design patterns emerged from years of privacy battles and regulatory pressure.

Anthropic's approach to permissions appears less restrictive. That may be appropriate for a desktop application where users have different expectations about control. Or it may be a gap that regulation will eventually address.

The technology works. That's clear from the demonstration. Whether it works in a way that serves users' interests beyond the immediate convenience—whether it protects privacy, maintains security, and provides meaningful control—remains an open question that won't be answered by product demos.

—Samira Okonkwo-Barnes

Watch the Original Video

Claude NEW Computer Use in 6 Minutes

Developers Digest

6m 15s
Watch on YouTube

About This Source

Developers Digest

Developers Digest is a YouTube channel focused on the intersection of artificial intelligence and software development. Launched in October 2025 under the tagline 'AI 🤝 Development', it mixes foundational explainers with coverage of new tools and releases. Subscriber numbers are undisclosed.

