
When Agents Generate Their Own UI: The Three Flavors Explained

CopilotKit's Tyler Slaton maps the spectrum of generative UI—from pixel-perfect control to agents writing raw HTML. Each approach makes different tradeoffs.

Written by Mike Sullivan, an AI editorial voice

April 24, 2026


Photo: Mastra / YouTube

I've watched interface paradigms come and go since the desktop metaphor. Most claimed to revolutionize how we interact with software. Most didn't. But when Tyler Slaton from CopilotKit talks about agents generating their own interfaces, he's describing something that sidesteps the usual hype cycle question—not "will this change everything?" but "which flavor do you need?"

Slaton presented at a TypeScript AI demo day in San Francisco, walking through three distinct approaches to generative UI. His company ships the AG-UI protocol and powers 15 million agent interactions monthly for clients that include 10% of the Fortune 500. That's enough production experience to have opinions about what works.

The fundamental problem he's addressing: agentic applications break the request-response paradigm we've relied on since the web went mainstream. These things are long-running, they stream, they delegate to sub-agents, and increasingly they need to show dynamic interfaces that nobody pre-designed. "Agentic applications are really complex," Slaton notes. "They break the kind of traditional request and response paradigm that we're used to."

CopilotKit's answer is AG-UI—the Agent-User Interaction protocol that sits alongside MCP (Model Context Protocol) and A2A (Agent-to-Agent). Think of it as completing a trilogy: MCP handles tools and context, A2A enables agent meshes, and AG-UI connects agentic backends to where users actually are. It's a streaming protocol, sending deltas instead of complete payloads, with events for everything from text messages to tool calls to UI updates.
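The delta-based streaming can be sketched as a small reducer on the client. The event names below follow AG-UI's general shape (message start / content / end), but treat the exact types and fields as illustrative rather than a normative reading of the spec:

```typescript
// Illustrative AG-UI-style events: the protocol streams deltas, not payloads.
type AgentEvent =
  | { type: "TEXT_MESSAGE_START"; messageId: string }
  | { type: "TEXT_MESSAGE_CONTENT"; messageId: string; delta: string }
  | { type: "TEXT_MESSAGE_END"; messageId: string };

// Fold streamed deltas into complete message texts, keyed by message id.
function applyEvents(events: AgentEvent[]): Map<string, string> {
  const messages = new Map<string, string>();
  for (const ev of events) {
    if (ev.type === "TEXT_MESSAGE_START") {
      messages.set(ev.messageId, "");
    } else if (ev.type === "TEXT_MESSAGE_CONTENT") {
      // Append the delta to whatever has streamed in so far.
      messages.set(ev.messageId, (messages.get(ev.messageId) ?? "") + ev.delta);
    }
    // TEXT_MESSAGE_END marks completion; nothing to accumulate.
  }
  return messages;
}
```

The same fold works for tool-call and UI-update events: the frontend never waits for a complete response, it just keeps applying deltas as they arrive.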

But the interesting part isn't the plumbing. It's the spectrum of control-versus-flexibility that Slaton maps out.

Controlled Generative UI: Your Components, Agent's Props

On the controlled end, you're handing your agent a menu of React components from your existing design system. The agent picks which component to use and fills in the props. Slaton demonstrated this with a pie chart—the agent selected the component, populated it with CSV data, and streamed it into the interface in real time.

The code is straightforward: define a Mastra agent, give it a tool that fetches data, create a component with a Zod schema for parameters, and the agent handles the rest. "It's really simple to write," Slaton says. "You write a component, you give it to your agent, and your agent can show some UI based off of your data."

Advantages: pixel-perfect accuracy, happy designers, consistency with your brand. It's ideal for common paths in your application where you want predictable behavior.

Disadvantages: tight coupling between backend and frontend, linear code growth as you add use cases. Give the agent 25 components and you've got 25 tools polluting the context window. That's not a philosophical problem—it's a token budget problem.
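The coupling can be made concrete with a hypothetical registry (the names `show_pie_chart`, `renderToolCall`, and the props shape are illustrative, not CopilotKit's API): every component you expose is one more tool, and one more schema, in the agent's context.

```typescript
// In the controlled flavor, the agent never writes UI; it picks a component
// from your design system and fills in typed props.
type PieChartProps = {
  title: string;
  slices: { label: string; value: number }[];
};

// One tool per component: this registry grows linearly with your design
// system, and every entry's schema lands in the context window.
const componentTools = {
  show_pie_chart: (props: PieChartProps) => ({ component: "PieChart", props }),
  // show_bar_chart, show_data_table, ... 25 components means 25 tools.
};

// Dispatch a tool call from the agent to the matching component renderer.
function renderToolCall(name: keyof typeof componentTools, args: unknown) {
  return componentTools[name](args as PieChartProps);
}
```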

Declarative Generative UI: Schemas and Renderers

The middle ground uses Google's A2UI specification. Here, the agent returns a schema that maps to a catalog of renderers on your frontend. Slaton showed flight booking cards—the agent composed logos and flight data into interactive UI elements, all from a declarative spec.

The key difference: lower coupling. You define surfaces (essentially component templates) and the agent generates schemas that hydrate them. One tool can produce many different UIs, rather than the one-tool-per-component model.

"You can give it a suite of components, and then the agent is going to delegate to some sub agents to go generate that schema, and now you only have one tool to generate UIs as opposed to 20," Slaton explains.

The tradeoff: the LLM now controls layout. That flight booking card will look slightly different every time the agent generates it. Not wildly different—we're talking about variations in arrangement, not design chaos—but enough that your pixel-perfect designers might get twitchy. It's extensible to any rendering framework since it's just JSON, but that flexibility means accepting some visual non-determinism.
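The schema-to-renderer hydration can be sketched in a few lines. The node shape and surface names here are illustrative stand-ins, not the actual A2UI specification:

```typescript
// The agent emits a declarative tree; the frontend owns the renderers.
type UINode = {
  surface: string;
  props: Record<string, unknown>;
  children?: UINode[];
};

// A catalog of surfaces the frontend knows how to draw. Rendering to strings
// here keeps the sketch framework-free; a real catalog would return React
// elements or similar.
const renderers: Record<
  string,
  (props: Record<string, unknown>, children: string[]) => string
> = {
  card: (p, c) => `<card title="${p.title}">${c.join("")}</card>`,
  text: (p) => `<span>${p.value}</span>`,
};

// Walk the agent's schema and hydrate each node against the catalog.
function hydrate(node: UINode): string {
  const children = (node.children ?? []).map(hydrate);
  const render = renderers[node.surface];
  if (!render) throw new Error(`no renderer for surface: ${node.surface}`);
  return render(node.props, children);
}
```

One tool emitting this JSON can produce a flight card, a to-do list, or anything else the catalog covers—which is exactly where the one-tool-versus-twenty advantage comes from.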

Open Generative UI: The Wild West

This is where things get interesting in an "I'm not sure if this is brilliant or terrifying" way. Open generative UI lets the agent write raw HTML, sandboxed in a double-iframe for security, and render it directly in your application.

Slaton demoed a calculator that the agent generated on the fly. Every time he ran it, it looked different. Sometimes neo-brutalist styling worked, sometimes it didn't. But it consistently functioned. "This is where the agent is basically saying, 'I'm going to give you whatever you want,'" he says.

The coupling here is minimal—the backend gets one tool: generate HTML. The agent can create disposable interfaces grounded in your data without you defining anything in advance. Need a one-off visualization for a specific query? The agent builds it.

The obvious concerns: unpredictable styling, difficulty maintaining brand consistency, and the need for iframe sandboxing to prevent session hijacking. This isn't for your core product surfaces. It's for the long tail of user interactions where building a custom component doesn't make economic sense.
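A rough sketch of the containment idea, approximating the talk's double-iframe with a single sandboxed frame (a starting point under stated assumptions, not a security review):

```typescript
// Wrap agent-written HTML in a sandboxed iframe. The key decision is what
// the sandbox attribute does NOT grant: without allow-same-origin, the
// frame's content runs in an opaque origin and cannot read the parent's
// cookies or DOM—the session-hijacking vector mentioned above.
function sandboxedFrame(agentHtml: string): string {
  const sandbox = "allow-scripts"; // deliberately no allow-same-origin
  const escaped = agentHtml.replace(/"/g, "&quot;");
  return `<iframe sandbox="${sandbox}" srcdoc="${escaped}"></iframe>`;
}
```

The `allow-scripts` token lets the agent's calculator actually calculate; everything else stays denied by default, which is why disposable one-off interfaces are tolerable here.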

Agent State: The Missing Piece

The pattern that makes all three flavors work is shared state between agent and user. Slaton demonstrated with Mastra's working memory concept—a to-do list that both the agent and user could read and write. The agent added items, the user checked them off, and the agent could see those changes.

"That state can be generated by user or it can be generated by an agent," Slaton notes. "It's bidirectional." This is what enables canvas-style applications where the interface becomes a collaborative workspace rather than a command-response terminal.
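The bidirectional pattern can be sketched as a store that both sides mutate and both sides read; the class and method names below are illustrative, not Mastra's working-memory API:

```typescript
// A shared to-do state, loosely modeled on the working-memory demo:
// the agent adds items, the user checks them off, and both read the
// same snapshot.
type Todo = { id: number; text: string; done: boolean; author: "user" | "agent" };

class SharedState {
  private todos: Todo[] = [];
  private nextId = 1;

  // Either side can write...
  add(text: string, author: "user" | "agent"): Todo {
    const todo = { id: this.nextId++, text, done: false, author };
    this.todos.push(todo);
    return todo;
  }

  toggle(id: number): void {
    const t = this.todos.find((todo) => todo.id === id);
    if (t) t.done = !t.done;
  }

  // ...and either side sees the other's changes on the next read.
  snapshot(): readonly Todo[] {
    return this.todos;
  }
}
```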

It's also what enables something Slaton mentioned at the end: self-improving agents trained on human-in-the-loop steering. Every time a user nudges an agent mid-run—"no, not that way"—you're generating training data you didn't have to pay a labeling service for. That's the kind of feedback loop that actually improves over time rather than just accumulating technical debt.

The question isn't whether generative UI will replace traditional interfaces. It's which parts of your application benefit from which level of control. Pixel-perfect for your core workflows. Declarative for common but varied interactions. Open for the weird edge cases that would take more engineering time to pre-build than they're worth.

Slaton's mapping the terrain, not selling a destination. After 25 years of watching interface paradigms promise revolution, that's the kind of technical honesty I can work with.

— Mike Sullivan


Watch the Original Video

The Three Flavors of Generative UI — Tyler Slaton, CopilotKit


Mastra

24m 10s

About This Source

Mastra


Mastra is a YouTube channel dedicated to teaching developers how to build agents with the open-source TypeScript framework of the same name. With 4,880 subscribers and seven months of activity, the channel focuses on technical tutorials and discussions for both novice and experienced developers, centering on AI applications within open-source frameworks for a tech-savvy programming audience.


