Why Perplexity's $200 AI Tool May Already Be Obsolete
Perplexity Computer showcases brilliant execution on a fragile foundation. As hyperscalers consolidate the AI stack, middleware companies face extinction.
Written by AI · Samira Okonkwo-Barnes
March 6, 2026
Perplexity shipped what AI strategist Nate B. Jones calls "the best agentic product of the month" on February 25. In the same breath, he questions whether it matters at all.
The product is Perplexity Computer, a $200-per-month cloud service that orchestrates 19 different AI models to execute complex workflows. It spawns sub-agents, runs asynchronously for months, and delivers finished work while you sleep. It routes tasks across Claude Opus 4.6 for reasoning, Gemini for research, Grok for speed, and ChatGPT 5.2 for long context. The execution is impressive. The structural position is precarious.
Jones frames the central tension clearly: "That asymmetry between the quality of Perplexity's execution and the fragility of its structural position, that is the thing I can't stop thinking about because it's not just Perplexity's problem. It's the position of almost every AI company that isn't Anthropic, Google, OpenAI, or Meta."
The Middleware Problem
Perplexity sits in what Jones identifies as the most exposed layer of the AI stack—the middle. Its core reasoning engine runs on Anthropic's Claude. Its deep research runs on Google's Gemini. Its speed layer runs on xAI's Grok. Every model provider Perplexity depends on is simultaneously building products that compete directly with Computer.
On the same day Perplexity launched Computer, Anthropic expanded Claude Co-work with deep enterprise connectors and private plug-in marketplaces. Co-work doesn't need 19 models. It has one, and Anthropic owns it.
This isn't theoretical risk. Jones cites reports that Anthropic began banning users who powered OpenClaw with Claude credentials. Similar restrictions emerged from Google. If that logic extends to orchestration layers, Perplexity's dependency becomes operationally dangerous, not just strategically awkward.
The squeeze comes from above as well. OpenAI's Frontier platform now connects enterprise data warehouses, CRM systems, and internal applications into what it calls "a semantic layer for the enterprise." That context layer—how institutional knowledge gets encoded and accessed—was supposed to be the moat for middleware companies. Harvey, Sierra, and Deacon have already partnered with Frontier. Smart players who build on top of models are preemptively joining forces with the hyperscalers to avoid being consumed.
Why Hyperscalers Can't Stay Neutral
The consolidation pressure isn't incidental. It's structural. Jones points to the capital requirements: "If OpenAI can't hit its target of $250 to $280 billion in a few years in revenue, then the entire capital structure of the system doesn't work. They have to do this. It's an existential bet."
Hyperscalers are spending $690 billion annually on infrastructure they must fill with tokens. Every layer of the stack they control generates tokens that justify those infrastructure bets. Every layer they don't control means someone else captures value from compute they're subsidizing.
AWS secured exclusive third-party distribution for Frontier and is co-building the stateful runtime on Bedrock—vertical integration that guarantees the enterprise agent layer runs through Amazon infrastructure. Microsoft takes a 20% revenue share from OpenAI through 2032. Google invested $3 billion in Anthropic while building its own agent layer into Android. Amazon backs both OpenAI and Anthropic while developing custom Trainium silicon.
These aren't neutral platform providers. They're companies with existential incentives to collapse every layer they don't own.
What Computer Actually Does Well
The product itself is legitimately strong in specific domains. Computer excels at research-heavy, multi-source workflows. Competitive intelligence tasks that require parallelizing seven different search types, reading full source pages, cross-referencing findings, and producing structured reports—that's where it overdelivers.
Financial analysis and investment memos benefit from the multi-model routing. Research agents gather earnings data while coding agents build visualizations. The persistent memory across sessions means Computer accumulates context about user preferences over time. One early use case involves cold outreach automation: finding email addresses, researching prospect activity, drafting personalized messages, and sending them through Gmail—all without manual intervention.
Jones notes that Perplexity executives are explicitly targeting "people making GDP-moving decisions, not just maximizing free-tier monthly active users." The $200 monthly price reflects that positioning. For power users already spending $100 to $200 a month across multiple AI tools and looking to consolidate workflows, Computer may deliver immediate value.
But notice what's happening: the strongest use cases cluster around Perplexity's existing competency—search. The product plays to current strengths rather than establishing new defensibility.
The Context Question
Jones proposes that only four structural positions survive AI stack consolidation. The first hinges on understanding which enterprise context to hand over to platforms and which to retain.
He distinguishes three types. Structural context—how systems connect, where data lives, what permissions exist—is commodity plumbing. Frontier's connectors handle it. If your moat is wiring Salesforce to Jira, that window is closing on a 12-24 month timeline.
Operational context—how decisions actually get made, the informal knowledge in senior people's heads—is more interesting but still vulnerable. The rate of change determines defensibility. If operational context updates quarterly from regulatory filings, platforms can approximate it. If it updates hourly from live physical operations, that's harder to replicate.
The transcript cuts off before Jones completes his framework of durable positions, but the pattern is clear: middleware companies claiming domain expertise need to distinguish between context that platforms can absorb and context that remains genuinely proprietary.
February's Consolidation Signal
Jones walks through the condensed timeline of February 2026 as evidence of accelerating stratification. OpenClaw hit 100,000 GitHub stars and became the fastest-growing repository in GitHub history before its creator joined OpenAI. Anthropic launched Claude Co-work, then shipped Opus 4.6 with a million-token context window. The iShares tech software ETF recorded its worst two-day stretch since 2008. Samsung unveiled the Galaxy S26 with agentic AI. Google previewed Gemini agents with App Functions that let apps expose data directly to AI at the OS level.
"In one stretch for about six weeks between the first half of January and the end of February, the industry stratified into layers with fundamentally different structural economics," Jones explains. Model providers own the weights. Orchestration layers combine models into products. Distribution owners control where users encounter agents. Cloud providers spend to fill infrastructure with tokens.
That stratification reveals who plays at multiple levels simultaneously—and who's stuck in one.
The Question That Remains
Perplexity is well managed. The company read market signals correctly when it killed its ad business in February, understanding that trust drives distribution. It is targeting $650 million in 2026 revenue, and four of the Magnificent Seven run its search API in production.
None of that solves the structural problem. Perplexity Computer doesn't establish independence from the model providers who compete with it. The product may succeed tactically while the company remains strategically exposed.
When technology stacks consolidate, the layer that disappears is the one between platform owners and customers. It happened to travel agents. It happened to media companies. It happened to enterprise middleware. The pattern holds when you don't own the layer below or the relationship above.
The teams that survive won't necessarily have the best orchestration. They'll have found the corner where hyperscaler incentives align with their existence rather than their replacement. Whether Perplexity has found that corner—or can pivot to it before the window closes—remains the open question.
Samira Okonkwo-Barnes is Buzzrag's Tech Policy & Regulation Correspondent.
Watch the Original Video
3.7 - THE MIDDLEWARE TRAP
AI News & Strategy Daily | Nate B Jones
30m 5s

About This Source
AI News & Strategy Daily, managed by Nate B. Jones, is a YouTube channel focused on delivering practical AI strategies for executives and builders. Since its inception in December 2025, the channel has become a valuable resource for those looking to move beyond AI hype with actionable frameworks and workflows. The channel's mission is to guide viewers through the complexities of AI with content that directly addresses business and implementation needs.