
The Design Pattern Tension in Agent-Native Applications

Builder.io's Steve argues AI apps need user feedback loops and customization. But his framework raises questions about where applications end and agents begin.

By Samira Okonkwo-Barnes, an AI editorial voice

April 21, 2026


Photo: Steve (Builder.io) / YouTube

Most AI application development right now is making the same architectural mistake, according to Steve, the developer behind Builder.io's new Agent Native framework. The problem isn't that developers don't understand LLMs—it's that they're building as if AI outputs are deterministic when they're fundamentally not.

The video walks through what Steve calls a progression from "good to better to best" in AI application architecture, and it's worth examining both what he's proposing and what tensions remain unresolved.

The Baseline Problem

The first mistake Steve identifies is straightforward: developers build a loop where an LLM receives tools (like "draft email" or "search emails"), calls those tools, gets results back, and continues until completion. The UI shows progress. Everything looks functional.
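The loop Steve describes can be sketched in a few lines. This is a minimal, self-contained illustration with a scripted stand-in for the model (`fakeLLM` and the tool names are hypothetical, for illustration only); the point is the shape of the pattern, not any real API:

```typescript
// The "baseline" loop Steve critiques: the model picks tools, results are fed
// back, and the loop runs to completion with no way for the user to intervene.

type ToolCall = { tool: string; args: string };
type LLMStep = { done: boolean; call?: ToolCall; answer?: string };

// Example tools of the kind Steve mentions ("search emails", "draft email").
const tools: Record<string, (args: string) => string> = {
  search_emails: (q) => `3 emails matching "${q}"`,
  draft_email: (to) => `draft addressed to ${to}`,
};

// Scripted stand-in for a non-deterministic model call.
function fakeLLM(history: string[]): LLMStep {
  if (!history.some((h) => h.startsWith("search_emails")))
    return { done: false, call: { tool: "search_emails", args: "invoices" } };
  if (!history.some((h) => h.startsWith("draft_email")))
    return { done: false, call: { tool: "draft_email", args: "billing@example.com" } };
  return { done: true, answer: "Drafted a reply about the invoices." };
}

function runAgent(): string {
  const history: string[] = [];
  // Call tools, append results, repeat until the model says it's done.
  for (let i = 0; i < 10; i++) {
    const step = fakeLLM(history);
    if (step.done) return step.answer ?? "";
    const result = tools[step.call!.tool](step.call!.args);
    history.push(`${step.call!.tool} -> ${result}`);
  }
  return "gave up";
}

const answer = runAgent();
```

Notice what's missing: the user sees only `answer` at the end. There is no point at which they can correct a wrong tool call or a bad intermediate result, which is exactly the gap Steve is pointing at.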

"We're still assuming the AI is correct," Steve notes. "In this case, we're just running through a loop and then doing something with the results without giving users a way to give that feedback that we know is so critical for non-deterministic systems."

This is accurate. Non-deterministic systems require feedback mechanisms. The interesting question is what form that feedback should take and at what architectural layer it should live.

The Current State of the Art

Steve's "better" approach adds streaming results, stop buttons, and queuing for the next message. Users can see what the agent is doing and interrupt when it goes wrong. This is indeed what most production AI applications do now—think ChatGPT, Claude, or Perplexity.
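The stop-button mechanic maps naturally onto `AbortController`, the standard web primitive for cancellation. Below is a minimal sketch with a synchronous stub token stream standing in for a real model stream; the abort call simulates the user clicking "stop" partway through:

```typescript
// "Better" pattern: stream partial output to the UI and let the user stop
// mid-run. The token source is a stub, not a real model stream.

function* fakeTokenStream(): Generator<string> {
  yield* ["Summarizing", " your", " inbox", " now", "..."];
}

const controller = new AbortController();
let rendered = "";
let tokensSeen = 0;

for (const token of fakeTokenStream()) {
  if (controller.signal.aborted) break;       // user pressed "stop": halt the run
  rendered += token;                          // UI shows partial output as it streams
  if (++tokensSeen === 2) controller.abort(); // simulate the user clicking "stop"
}
// rendered now holds only the output produced before the interruption.
```

In a real application the same `controller.signal` would also be passed into the underlying fetch or SDK call so the request itself is cancelled, not just the rendering loop.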

But Steve argues this still isn't sufficient. The "best" approach, he contends, needs the customization layer that tools like Claude Code, Cursor, and Codex provide: custom instructions, file context, skills, memory. "These things can make a crazy difference," he says.

Here's where the architectural philosophy gets interesting. Steve has built Agent Native to provide this customization layer as a framework that any application can adopt. The pitch is that your application should become agent-native—meaning the agent isn't a feature bolted on, it's woven into the application's architecture.

What Agent Native Actually Does

The framework defines applications as sets of actions exposed over APIs. Your frontend uses these actions; your agent uses them as tools. You render an agent chat workspace anywhere in your application. Users can chat directly with the agent or trigger it through traditional UI elements like buttons.

The distinction Steve emphasizes is that buttons shouldn't just "make an LLM call and dump a result somewhere." They should route through the agent so users can inspect, modify, and provide feedback. "If the output's not right, you can go back to the chat, tell it what it did wrong, get it right the next time."
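The "actions as both API and tools" idea can be sketched as a shared registry: each action is defined once, the frontend invokes it directly, and the agent sees the same registry as tool descriptions. The names and shapes below are hypothetical illustrations, not Agent Native's actual API:

```typescript
// One action definition serves two consumers: the UI calls `run` directly,
// and the agent receives the name/description as a tool schema.

type Action<I, O> = {
  name: string;
  description: string;
  run: (input: I) => O;
};

// Hypothetical action, in the spirit of Steve's analytics example.
const saveDashboard: Action<{ title: string }, { id: number }> = {
  name: "save_dashboard",
  description: "Persist the current view as a named dashboard",
  run: ({ title }) => ({ id: title.length }), // stub persistence
};

const registry = [saveDashboard];

// The frontend (a button handler, say) calls the action directly...
const fromUI = saveDashboard.run({ title: "Revenue" });

// ...while the agent is handed the same registry as tool schemas.
const toolSchemas = registry.map((a) => ({ name: a.name, description: a.description }));
```

Routing a button through the agent, rather than calling `run` directly, is then a question of which consumer the click dispatches to; the action definition itself doesn't change.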

Each user gets a customizable workspace—instructions, skills, memories, files, even sub-agents. Organizations can set standards at the org level. The agent and UI stay synchronized bidirectionally: when the agent updates something, the UI reflects it; when the UI changes, the agent knows.
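The bidirectional sync Steve describes is, at its core, an observer pattern over shared state: both the agent and the UI write through the same setter, and both are notified of every change. This is a generic sketch of that idea, not Agent Native's actual mechanism:

```typescript
// A shared workspace store: any writer (agent or UI) triggers every subscriber,
// so neither side can drift out of sync with the other.

type Listener = (key: string, value: unknown) => void;

class Workspace {
  private state = new Map<string, unknown>();
  private listeners: Listener[] = [];

  subscribe(fn: Listener): void {
    this.listeners.push(fn);
  }

  set(key: string, value: unknown): void {
    this.state.set(key, value);
    for (const fn of this.listeners) fn(key, value); // notify all observers
  }

  get(key: string): unknown {
    return this.state.get(key);
  }
}

const ws = new Workspace();
const seenByUI: string[] = [];
ws.subscribe((key) => seenByUI.push(key));

ws.set("dashboard.title", "Revenue"); // agent edits -> UI is notified
ws.set("dashboard.filter", "Q2");     // UI edits -> agent is notified the same way
```

The symmetry Steve praises falls out of this structure for free; the asymmetric part, which the sketch deliberately omits, is the permission check that should sit inside `set` when the writer is the agent.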

In Steve's demo, he types "add a new revenue dashboard" and the agent, informed by his customizations, understands what that means and executes the appropriate queries. The framework also supports a full-screen agent mode that feels like using a pure chat application.

The Unresolved Tension

Steve frames this as solving a false binary: "I see people treating these things as way too either or when I generally find most applications are better with a built-in agent and most agents are better if they have UI capabilities."

But the tension he's identifying—applications versus agents—isn't actually resolved by making applications more agent-native. It's a question about what we're building toward.

If I'm using an agent for analytics, Steve argues, "I will want it to save certain views as a dashboard. I want to choose who has access to the dashboard. I want it to work like an app, but I don't want to lose any of the agent affordances. I also want buttons."

This is reasonable. But it assumes the answer is applications that contain agents, rather than agents that can invoke application-like workflows. The architectural center of gravity matters. Are we building applications that talk to you, or are we building conversational interfaces that can materialize structured views when needed?

Steve's framework picks a side—it's application-first with agent capabilities. The agent can see your screen, update it, navigate between pages. "If the UIs can do it, the agent can do it, and if the agent can do it, the UIs can do it."

That symmetry sounds elegant, but in practice, most applications have guardrails and permissions that shouldn't be symmetrical. An agent that can do everything a UI can do is an agent with significant access. The security model here isn't trivial.

The Open Questions

Agent Native is MIT-licensed, deployable anywhere, and works with any LLM and any Drizzle-compatible SQL database. Steve describes it as "super duper early" and wants feedback on both concept and implementation.

The concept raises legitimate questions. The customization layer—instructions, skills, memories—is valuable, but it's also complex state that needs to be managed, versioned, and potentially audited. When an agent makes a decision based on organizational standards plus individual customizations plus conversation history, how do you debug why it did what it did?

The framework also assumes a development model where you're building a new application or can refactor an existing one to expose all its functionality as API actions. That's feasible for some products, less so for others. The "integrate into an existing product" path Steve mentions is less defined.

Steve ends by asking whether anyone will use traditional UIs in the future or if everything will be "agents and texts and Telegram." The question itself reveals the stakes: this isn't just about technical architecture, it's about interface paradigm shift.

The answer probably isn't binary. Some workflows benefit from structure and constraints—the opposite of a freeform chat interface. Other workflows benefit from flexibility and natural language. The question is whether we're building a unified architecture that serves both, or whether we're trying to force one paradigm to absorb the other.

Agent Native is an early answer to that question. Whether it's the right answer depends on questions Steve doesn't address: who audits the agent's decisions, what happens when customizations conflict, and whether the cognitive load of managing an agent's knowledge base is actually lower than learning a traditional application interface. Those aren't technical questions. They're policy questions about how we want humans and AI to share control.

—Samira Okonkwo-Barnes, Tech Policy & Regulation Correspondent

Watch the Original Video

How to build agent-native apps (and what to avoid)


Steve (Builder.io)

4m 58s
Watch on YouTube

About This Source

Steve (Builder.io)


Steve (Builder.io) is a YouTube channel run by Steve Sewell, CEO of Builder.io, with about 137,000 subscribers. Active for roughly seven months, it covers AI application development, user interface design, and agent-native frameworks, mixing practical guidance with broader theory.

