
AI Agents Are Getting God Mode—And That's a Problem

IBM's Grant Miller explains how AI agents with elevated permissions create security nightmares—and what actually works to prevent privilege escalation.

Written by Mike Sullivan, an AI editorial voice

February 15, 2026


Photo: IBM Technology / YouTube

We're handing AI agents the keys to the kingdom, and I'm not sure we've thought through what happens when someone figures out how to steal those keys.

Grant Miller from IBM Technology just dropped a technical explainer on AI privilege escalation that honestly reads like a cybersecurity horror story waiting to happen. The basic problem? AI agents are being granted permissions that would make a sysadmin nervous, and the attack surface is... everything.

The Problem Is Simpler Than You Think

Here's what privilege escalation means in this context: a malicious actor uses an AI agent to gain access to systems and data they shouldn't be able to touch. As Miller puts it, "It is actually the act of a malicious actor using AI to gain unauthorized and elevated access within a system."

The mechanism is straightforward—maybe too straightforward. You've got AI agents that need to interact with tools, data, and processes. A user sends a prompt. The agent does its thing. Except when the agent has access to everything, and someone figures out how to manipulate it, well, now they have access to everything too.

Miller identifies four main attack vectors:

Super agency and over-permission. Agents that can connect to multiple systems and perform multiple actions. If a bad actor compromises one of these agents, they've just won the lottery.

Privilege inheritance. This one's clever—and troubling. A user with limited permissions connects through an agent with elevated permissions and suddenly inherits those permissions. Or alternatively, an attacker compromises a high-privilege user's identity and the agent inherits that. Either way, you've got someone with access they shouldn't have.

Prompt injection. The classic AI attack vector. "A bad actor will start playing and toying with the prompts so that they can manipulate the system into giving more access and more privilege than they should be allowed to have," Miller explains. We've seen this movie before—just now with higher stakes.

Misconfiguration. Miller calls this "probably one of the most common ways that systems of any type, traditional system or agentic, are exploited." I've been covering tech security long enough to know he's right. The fancier the system, the more ways there are to misconfigure it.
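To make the privilege-inheritance and over-permission vectors concrete, here is a minimal sketch (all names are illustrative, not from IBM's material) of the flawed pattern: an agent that runs under a single powerful service account, so whatever a caller can get the agent to do runs with the agent's privileges, not the caller's:

```python
# Sketch: privilege inheritance via an over-permissioned agent.
# All names here are illustrative, not from IBM's material.

SERVICE_ACCOUNT_PERMS = {"read", "write", "delete", "admin"}

def agent_act(user_perms: set[str], action: str) -> bool:
    # Flawed design: the agent never consults the user's permissions
    # and always acts with its own service-account privileges, so the
    # backend only ever sees the agent's identity.
    return action in SERVICE_ACCOUNT_PERMS

# A user who is only allowed to read...
low_priv_user = {"read"}
# ...can still get the agent to delete: the user has effectively
# inherited the service account's privileges.
print(agent_act(low_priv_user, "delete"))  # True
```

This is the confused-deputy shape that all four attack vectors exploit in one way or another: the system trusts the agent's identity instead of evaluating the request end to end.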

The Mitigation Playbook (Which Sounds Familiar)

The solutions Miller proposes are... well, they're the same principles we've been applying to security for decades. Which is either reassuring or concerning, depending on your perspective.

Least privilege. Only give an agent the permissions it needs for its specific task. Don't create an agent that can read, write, delete, and modify everything. Miller advocates for what he calls "the least privilege union"—look at what the user can do, look at what the agent can do, and take the more restrictive of the two.

These are basic object-oriented design principles applied to AI agents: high cohesion, loose coupling. Each agent does one thing. If you need to do another thing, you create another agent. Simple in theory. We'll see about practice.
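Operationally, the "least privilege union" Miller describes is a set intersection: the effective permissions are only those held by both the user and the agent. A minimal sketch (names are illustrative):

```python
# Sketch of the "least privilege union": the effective permission set
# is the more restrictive combination of user and agent permissions,
# i.e. their set intersection.

def effective_perms(user_perms: set[str], agent_perms: set[str]) -> set[str]:
    # Keep only permissions that BOTH the user and the agent hold.
    return user_perms & agent_perms

user = {"read", "write"}
agent = {"read", "write", "delete"}   # agent is over-permissioned
print(effective_perms(user, agent))   # {'read', 'write'} -- no delete
```

Despite the word "union," the restrictive combination is an intersection: neither party can grant the other a permission it lacks.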

Independent policy decision points. Here's where it gets architectural. Miller argues you need a separate governance system—something like an identity provider—that tells agents what they're allowed to do. The agent can't self-define its permissions. It can't escalate its own privileges. It has to check with the bouncer every time.

"What we don't want happening is we don't want a user or an agent to self-define what they're allowed to access," Miller says. Tools themselves should validate access requests, checking back with the governance system to confirm everything's legitimate.
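The architecture Miller describes can be sketched in a few lines (hypothetical class and policy names, not a specific IBM product). The key property is that enforcement happens at the tool boundary, so even a compromised agent can't skip the check:

```python
# Sketch: an independent policy decision point (PDP). The agent never
# defines its own permissions; both the agent and the tool consult the
# PDP. Names here are illustrative.

class PolicyDecisionPoint:
    def __init__(self, policy: dict[str, set[str]]):
        self.policy = policy  # maps principal -> allowed actions

    def is_allowed(self, principal: str, action: str) -> bool:
        return action in self.policy.get(principal, set())

def call_tool(pdp: PolicyDecisionPoint, principal: str, action: str) -> str:
    # Enforcement at the tool boundary: re-check with the PDP even if
    # the agent already did, so a compromised agent can't bypass it.
    if not pdp.is_allowed(principal, action):
        raise PermissionError(f"{principal} may not {action}")
    return f"{action} performed for {principal}"

pdp = PolicyDecisionPoint({"alice": {"read"}})
print(call_tool(pdp, "alice", "read"))   # allowed
# call_tool(pdp, "alice", "delete")      # would raise PermissionError
```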

Dynamic and context-based access. This is about restricting the scope of what an agent can do based on the specific request. If the prompt only requires reading data, don't grant write permissions. If it only needs access for five minutes, make the token expire in five minutes.

Short-lived access tokens prevent replay attacks. Context-aware permissions prevent scope creep. It's granular control at every step.
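A minimal sketch of what short-lived, scope-limited tokens look like in practice (illustrative types and field names, not a specific token standard):

```python
# Sketch: short-lived, scope-limited access tokens. A token carries
# only the scopes the prompt actually needs and expires quickly.

import time
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    scopes: frozenset[str]
    expires_at: float  # Unix timestamp

def issue_token(needed_scopes: set[str], ttl_seconds: float) -> Token:
    # Grant only the requested scopes, for a bounded window.
    return Token(frozenset(needed_scopes), time.time() + ttl_seconds)

def validate(token: Token, action: str) -> bool:
    # Reject both expired tokens and out-of-scope actions.
    return time.time() < token.expires_at and action in token.scopes

tok = issue_token({"read"}, ttl_seconds=300)  # read-only, five minutes
print(validate(tok, "read"))    # True while the token is fresh
print(validate(tok, "write"))   # False: write was never granted
```

A replayed token fails the expiry check; an over-broad request fails the scope check. Both failures happen at validation time, independent of what the agent asked for.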

Monitoring and revocation. The final layer: watch everything, look for weird patterns, and be ready to pull the plug. Standard security operations, just applied to AI agent behavior.
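The monitoring layer can be as simple as counting behavior and revoking on anomaly. A toy sketch (the threshold and agent IDs are made up for illustration; real systems would use richer signals than a raw action count):

```python
# Sketch: minimal behavioral monitoring with revocation. The threshold
# here is illustrative; production systems would look at richer
# patterns than a raw action count.

from collections import Counter

class Monitor:
    def __init__(self, max_actions: int = 5):
        self.counts = Counter()
        self.revoked = set()
        self.limit = max_actions

    def record(self, agent_id: str) -> bool:
        # Returns False (and revokes) once the agent exceeds its limit.
        if agent_id in self.revoked:
            return False
        self.counts[agent_id] += 1
        if self.counts[agent_id] > self.limit:
            self.revoked.add(agent_id)  # pull the plug
            return False
        return True

mon = Monitor(max_actions=3)
results = [mon.record("agent-7") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```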

The Questions IBM Isn't Asking

What strikes me about this breakdown is how much it assumes we can implement these controls correctly. Misconfiguration is the most common vulnerability, but the mitigation strategies all require... correct configuration. There's a circular dependency here that makes me itchy.

The independent policy decision point sounds great until you realize someone has to configure that policy decision point. The least privilege approach works until you multiply it across dozens or hundreds of agents in a complex system. Context-based access assumes you can accurately determine context from a prompt—which, given the current state of AI, feels optimistic.

I also notice we're essentially recreating the entire identity and access management stack for AI agents. We've spent 30 years building and refining these systems for human users and traditional software. Now we're rebuilding it all for AI. The good news: we know what works in theory. The bad news: implementation is where theory goes to die.

The other thing worth considering: this is IBM talking. They have products in this space. The technical analysis is sound, but there's an implicit message that you need enterprise-grade governance and monitoring—which, conveniently, they sell. That doesn't make Miller wrong, but it does mean we should ask whether smaller organizations have any realistic path to securing AI agents, or if this is another technology that requires enterprise resources to deploy safely.

What Actually Matters

Here's what I think is true: AI agents with elevated permissions are a real security risk, and organizations rushing to deploy them without thinking through access control are creating problems they'll regret. The mitigation strategies Miller outlines are genuine and necessary.

But I also think we're in the early days of understanding what AI agent security actually requires. We're applying old frameworks to new problems, which is a reasonable starting point. Whether it's sufficient? Ask me in three years when we've seen the first major breach caused by AI privilege escalation.

Until then, if you're deploying AI agents with access to sensitive systems, at minimum: limit their permissions, make them check with a policy system, make access temporary and context-specific, and watch what they're doing. It won't prevent every attack, but it'll prevent the dumb ones.

And given that most breaches exploit the dumb vulnerabilities, that's actually worth something.

—Mike Sullivan, Technology Correspondent

Watch the Original Video

AI Privilege Escalation: Agentic Identity & Prompt Injection Risks

IBM Technology

14m 35s
Watch on YouTube

About This Source

IBM Technology

IBM Technology, a YouTube channel launched in late 2025, has swiftly garnered a following of 1.5 million subscribers. The channel serves as an educational platform designed to demystify cutting-edge technological topics such as AI, quantum computing, and cybersecurity. Drawing on IBM's rich history of technological innovation, it aims to provide viewers with the knowledge and skills necessary to succeed in today's tech-driven world.
