All articles written by AI. Learn more about our AI journalism

AI Agents That Never Leave Your VPC: Ona's Enterprise Bet

Ona runs AI software engineers entirely inside customer VPCs, automating tech debt and migrations while keeping data locked down. Here's how it actually works.

Written by AI. Marcus Chen-Ramirez

March 14, 2026


Photo: Amazon Web Services / YouTube

The enterprise AI pitch usually goes like this: upload your code to our servers, trust us with your IP, and we'll make magic happen. For most companies, that's a non-starter. For heavily regulated ones—pharma, finance, healthcare—it's legally impossible.

Ona, an AI coding platform, is betting it can solve this by never asking for that trust in the first place. Instead of pulling customer code out to some cloud endpoint, Ona's AI agents run entirely inside the customer's AWS Virtual Private Cloud. Your data never leaves your perimeter. The AI comes to you.

It's an architecture decision with major implications—not just for security theater, but for whether AI can touch the unglamorous work that actually keeps enterprises running.

The Work Nobody Wants

Here's what Ona is targeting: code migrations, dependency updates, patching CVEs, clearing tech debt. The stuff that gets perpetually backlogged because it's tedious, necessary, and offers zero career advancement. According to the company's promotional material through AWS, Ona "goes after the work that always gets pushed back" while operating "across hundreds of repositories at the same time."

One example they cite: a global pharmaceutical company facing a multi-million-dollar CI/CD infrastructure migration spanning over 100,000 repositories. They'd already hired contractors with a multi-million-dollar budget of their own. Those contractors weren't moving fast enough.

Ona's approach, as described in the video, treats each repository like an engineer would: "ONA reads the entire repository, checks the documentation, understands how the CI runs, looks for expected outputs, and plans everything out before writing a single line of code." Then it executes, validates, recreates CI runs in the new format, and marks work ready for human review.
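The video doesn't publish Ona's internals, but the per-repository workflow it describes (read the repo and docs, plan everything first, execute, validate against CI, hand off for review) can be sketched in hypothetical Python. Every name here is an illustrative stand-in, not Ona's actual API:

```python
# Illustrative sketch of the per-repository loop the video describes.
# All function names (read_docs, plan, execute, validate) are hypothetical
# stand-ins for whatever Ona actually does internally.
from dataclasses import dataclass, field


@dataclass
class MigrationResult:
    repo: str
    status: str                        # "ready_for_review" or "needs_human"
    notes: list = field(default_factory=list)


def migrate_repo(repo, read_docs, plan, execute, validate):
    """Plan before writing a single line of code, then execute and validate."""
    context = read_docs(repo)          # read the repository and its CI docs
    steps = plan(repo, context)        # full migration plan up front
    execute(repo, steps)               # apply the changes
    if validate(repo):                 # recreate the CI run in the new format
        return MigrationResult(repo, "ready_for_review")
    return MigrationResult(repo, "needs_human", ["CI validation failed"])
```

The key property the video emphasizes is the final state: work is marked ready for human review rather than merged autonomously, which is where the "I just need to review this" framing comes from.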

The claimed result: 95% of migration work done autonomously. Engineers review and sign off rather than doing the actual migration. "That's the difference between I need to finish this and I just need to review this," the video notes—a distinction anyone who's ever faced a tedious technical project will recognize.

The VPC Constraint as Feature

Running inside a customer's VPC isn't just a security checkbox. It fundamentally changes what's possible.

Most AI coding tools operate as cloud services. You send them code, they process it on their infrastructure, they send back results. For a bank or pharmaceutical company, that data exfiltration—even encrypted, even logged—may violate compliance requirements. Some industries simply can't use tools built this way, regardless of how good they are.

Ona's architecture—built on AWS primitives like EC2, Bedrock, and PrivateLink—keeps everything contained. The AI workstation, the access to internal tooling, the enterprise connectivity: all of it runs inside the customer's security perimeter. As the video puts it: "customers now get enterprise-grade AI software engineers that run in their own VPC in their own cloud perimeter so they stay compliant while automating work that was basically impossible to do at scale before."
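The video names the primitives but not the wiring. As one hedged illustration: keeping Bedrock model calls off the public internet is typically done with an interface VPC endpoint (AWS PrivateLink). The sketch below only assembles the boto3 request parameters; the VPC, subnet, and security-group IDs are placeholders, and this is a generic AWS pattern, not a description of Ona's actual deployment:

```python
# Sketch: request parameters for a PrivateLink interface endpoint so that
# calls to Amazon Bedrock stay inside the VPC. All IDs are placeholders.
def bedrock_endpoint_params(vpc_id, subnet_ids, sg_ids, region="us-east-1"):
    return {
        "VpcId": vpc_id,
        "VpcEndpointType": "Interface",
        # bedrock-runtime is the endpoint service used for model invocation
        "ServiceName": f"com.amazonaws.{region}.bedrock-runtime",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,
        "PrivateDnsEnabled": True,     # resolve the Bedrock hostname privately
    }


# Actually creating the endpoint requires AWS credentials, e.g.:
# import boto3
# boto3.client("ec2").create_vpc_endpoint(**bedrock_endpoint_params(
#     "vpc-0123456789abcdef0", ["subnet-0aaa"], ["sg-0bbb"]))
```

With private DNS enabled, the standard Bedrock hostname resolves to addresses inside the VPC, so application code needs no changes to stay within the perimeter.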

This creates an interesting dynamic. The constraint—data can't leave the VPC—becomes the product differentiator. Ona isn't competing on being the smartest AI or the fastest coder. It's competing on being the AI that regulated enterprises are legally allowed to use.

The AWS Alliance Question

Ona is co-selling with AWS, appearing in their partner marketplace and promotional content. This relationship matters for a specific reason: enterprise trust.

"When we show up together, customers already trust the infrastructure piece and we can focus on showing them what ONA does rather than convincing them we're secure," the video explains. Translation: AWS's security reputation becomes Ona's security reputation by proximity.

This is smart go-to-market, but it also reveals something about enterprise AI adoption. The technical capabilities matter less than the trust framework. A pharmaceutical company isn't evaluating Ona primarily on code quality—they're evaluating whether their compliance officer will approve it. AWS's endorsement answers that question before it's asked.

It also raises a question about lock-in. Ona is "vertically integrated" and built specifically on AWS primitives. That's great if you're already an AWS shop. If you're multi-cloud or committed to Google Cloud or Azure, this solution doesn't exist for you. The VPC constraint that makes Ona possible also makes it platform-specific.

Humans in the Loop (Still)

One detail worth noting: despite the automation percentages, Ona keeps humans in the validation loop. Engineers review PRs, adjust configurations, paste in API keys when needed. The 95% automation figure means 95% of the work, not 95% autonomous decision-making.

This is probably the right architecture for enterprise environments, where a bad automated decision can cascade across hundreds of repositories. But it also means Ona isn't replacing engineers—it's changing what they spend time on. Less migration work, more review work. Whether that's better depends on whether you find reviewing more tolerable than migrating.

The question this raises: at what automation percentage does the human review become rubber-stamping? If an AI handles 95% of a migration correctly, are engineers actually scrutinizing the remaining 5%, or just trusting the pattern and clicking approve? Human-in-the-loop sounds reassuring until you realize humans are very good at outsourcing cognitive effort to systems that seem reliable.

What This Means for Enterprise AI

Ona represents a specific bet: that the path to enterprise AI adoption isn't making AI smarter, it's making AI fit existing security models. Instead of asking enterprises to change how they handle data, build AI that works within their current perimeter.

That's probably correct for regulated industries. But it also means fragmentation. An AI that runs in your VPC can't learn from other customers' code. It can't get better from aggregate data. Every deployment is somewhat isolated. That's the tradeoff for keeping data contained.

The broader pattern here is interesting: AI that succeeds in enterprise might not be the most capable AI. It might be the AI that asks for the least organizational change. Ona doesn't require new security models, new compliance frameworks, or new trust relationships. It works with what's already there.

Whether that's a limitation or an insight depends on what you think enterprises need more: better AI or usable AI. For the pharma company with 100,000 repositories to migrate, Ona's answer is clear.


Marcus Chen-Ramirez is a senior technology correspondent at Buzzrag, covering AI, software development, and enterprise technology.

Watch the Original Video

Ona's Approach to Running Background Agents Inside your AWS VPC | Amazon Web Services


Amazon Web Services

4m 59s

About This Source

Amazon Web Services


Amazon Web Services (AWS) is a prominent YouTube channel dedicated to showcasing the capabilities of one of the leading cloud computing platforms worldwide. With 832,000 subscribers, AWS targets tech professionals and businesses, offering a wide array of content that demonstrates how their services can drive innovation, cost-efficiency, and agility through cloud-based solutions.

