AI Memory Systems Need Human Eyes, Not Just Agent Access
Thousands built AI memory databases through MCP servers. Now they're discovering the missing piece: visual interfaces that both humans and agents can use.
Written by AI · Dev Kapoor
March 14, 2026

Photo: AI News & Strategy Daily | Nate B Jones / YouTube
Nate B Jones has a theory about what's happening with the roughly 1.5 million autonomous agents spawned by Anthropic's MCP (Model Context Protocol) in recent weeks: they're all interacting with data through a keyhole.
Jones, who runs the AI News & Strategy Daily channel, previously introduced what he calls "Open Brain"—a personal database that connects to AI tools through an MCP server, giving agents persistent memory across sessions. Thousands of developers built it. Then they asked the obvious follow-up question: what do you actually do with an agent-readable database you can only access through chat?
His answer, detailed in a new video walkthrough, challenges a fundamental assumption in how people are thinking about AI memory systems. The database isn't the system. The MCP server isn't the system. The system is the database plus a visual interface that both you and your agent can see and modify—what he calls "two doors to the same room."
The Keyhole Problem
The issue Jones identifies is architectural. When you interact with your Open Brain database through Claude, ChatGPT, or an autonomous agent, you're limited to text-based conversation. "You're chatting through a keyhole," he explains. "It's only text-based. You can't build a visual app if you're just working with an MCP server, a database, and a chatbot."
That constraint matters more than it might seem. A job search isn't one activity—it's a dozen parallel work streams pretending to be one. Companies, roles, contacts, applications, interviews, follow-ups, resume versions, compensation data. Chat interfaces force you to serialize that complexity into linear conversation. Visual interfaces let you see it all at once.
Jones' solution: the database table becomes the "shared surface" that both agent and human access directly. The agent reads and writes through MCP, exactly as before. You read and write through a web interface that queries the same table. No sync layer. No export step. No middleware that might lag or lose data. Just two interfaces to one source of truth.
"When your agent writes to it during a conversation, the entry is immediately visible the next time you pull up the view on your phone," Jones says. "When you update a field on your phone, the change is immediately there the next time your agent queries the table."
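The "two doors" pattern is easy to sketch. Below is a minimal stand-in using Python's sqlite3 in place of the Supabase Postgres table Jones actually uses; the table name, columns, and function names are illustrative, not from the video:

```python
import sqlite3

# One table, two doors. The agent writes through its door during a chat
# (what an MCP tool call would do); the human reads through a web view
# that queries the same rows. No sync layer in between.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE notes (
    id INTEGER PRIMARY KEY,
    category TEXT,
    fact TEXT
)""")

def agent_write(category: str, fact: str) -> None:
    """Door 1: what the agent's MCP-backed write would amount to."""
    db.execute("INSERT INTO notes (category, fact) VALUES (?, ?)",
               (category, fact))
    db.commit()

def human_view(category: str) -> list[str]:
    """Door 2: what the phone-friendly web view would query."""
    rows = db.execute(
        "SELECT fact FROM notes WHERE category = ? ORDER BY id",
        (category,))
    return [fact for (fact,) in rows]

agent_write("living room", "Paint: Benjamin Moore 'Cloud White'")
print(human_view("living room"))  # the agent's write is already visible
```

Because both doors hit the same rows, there is nothing to reconcile: a write through either interface is the next read's reality.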
Building the Human Door
The technical path Jones outlines is deliberately accessible. Start with your existing Supabase database (the backend for Open Brain). Create a new table with the columns you need. The agent can access it immediately through your MCP configuration—that part takes minutes.
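Creating such a table is a single statement in Supabase's SQL editor. A hedged sketch for the maintenance example discussed below — the column names are illustrative, not Jones' schema:

```sql
-- Hypothetical schema for a home-maintenance table; adjust columns to
-- whatever the agent should read and write.
create table maintenance (
  id bigint generated always as identity primary key,
  appliance text not null,
  warranty_expires date,
  last_serviced date,
  notes text
);
```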
The visual layer takes longer, maybe an afternoon if you haven't built web apps before. Jones suggests describing what you want to Claude or ChatGPT: "I want a mobile-friendly view of my maintenance table. Show every appliance, warranty date, and last service date. Highlight anything expiring in 30 days."
The AI generates a small web application. You iterate on layout and highlighting through conversation. When you're satisfied, you deploy it to Vercel—a free hosting service that gives you a live URL you can bookmark on your phone. No app store, no subscription, just a page that talks to your database.
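The highlight rule from that prompt is only a few lines of logic once the app has the rows. A sketch in Python with an assumed row shape (the generated web app would do the equivalent in its own stack):

```python
from datetime import date, timedelta

def expiring_soon(rows: list[dict], today: date,
                  window_days: int = 30) -> list[dict]:
    """Flag rows whose warranty ends within the window -- the
    'highlight anything expiring in 30 days' rule from the prompt."""
    cutoff = today + timedelta(days=window_days)
    return [r for r in rows
            if today <= r["warranty_expires"] <= cutoff]

rows = [
    {"appliance": "dishwasher", "warranty_expires": date(2026, 3, 20)},
    {"appliance": "furnace",    "warranty_expires": date(2027, 1, 5)},
]
print(expiring_soon(rows, today=date(2026, 3, 14)))
```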
Jones positions this as staying true to "open-source ethos"—you could use a SaaS tool like Lovable to connect Supabase and build faster, but many developers told him they don't want to pay the middleman. His approach keeps you in control of both the data and the interface.
Three Use Cases That Reveal the Pattern
Jones walks through three specific implementations, each highlighting different aspects of why the dual-interface approach matters.
Household knowledge bases capture the institutional memory that lives nowhere—paint colors, kids' shoe sizes, the plumber from two years ago, WiFi passwords. You can already log this conversationally: mention a paint color to Claude while discussing something else, tell it to save the detail, done. Over weeks, the table fills with facts you'd otherwise forget.
But the visual layer transforms its utility. Jones envisions a search bar plus category browsing—"this is all about the living room, this is all about the car"—so you can scan what you've recorded and spot gaps. The agent captures facts from natural conversation. You organize and retrieve them visually.
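That search-plus-browse view reduces to two filters over the facts table. A minimal sketch; the row shape is assumed, not taken from Jones' schema:

```python
# Two ways into the same facts: free-text search and category browsing.
def search(rows: list[dict], query: str) -> list[dict]:
    """The search bar: case-insensitive substring match on the fact."""
    q = query.lower()
    return [r for r in rows if q in r["fact"].lower()]

def browse(rows: list[dict], category: str) -> list[dict]:
    """Category browsing: everything filed under one label."""
    return [r for r in rows if r["category"] == category]

facts = [
    {"category": "living room", "fact": "Paint: Cloud White"},
    {"category": "car",         "fact": "Tires rotated Jan 2026"},
]
```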
Professional relationship management demonstrates what Jones calls "time bridging and cross-category reasoning." Ask Claude "Anyone I've been neglecting?" and it scans the table for full context: "You haven't reached out to James in a few months. Last time you talked, he was worried about his team's reorg. That typically resolves in a quarter. Maybe check in."
The visual interface could show which relationships are "most at risk" each week—connections going cold that need attention. Filter by topic: who do you know in AI and education? The agent could proactively suggest additions based on LinkedIn, email, and web search. "It feels like for so many of us that's done by chance," Jones notes. "It doesn't have to be."
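The "most at risk" ranking is a sort over one column. A sketch under assumptions of my own — the 90-day threshold and row shape are illustrative, not Jones' numbers:

```python
from datetime import date

def at_risk(contacts: list[dict], today: date,
            max_gap_days: int = 90) -> list[dict]:
    """Rank contacts that have gone cold, coldest first.
    The gap threshold is an illustrative default."""
    cold = [c for c in contacts
            if (today - c["last_contact"]).days > max_gap_days]
    return sorted(cold, key=lambda c: c["last_contact"])

contacts = [
    {"name": "James", "last_contact": date(2025, 12, 1)},
    {"name": "Priya", "last_contact": date(2026, 3, 1)},
]
print(at_risk(contacts, today=date(2026, 3, 14)))
```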
Job hunt dashboards reveal the architecture's real power. Paste a job posting into ChatGPT and ask "what do I have on this company?" It doesn't just read the posting—it scans contacts, conference notes, relationship data: "You met someone from their data team at the conference in October. You noted a good conversation about distributed systems and they said they were hiring."
Warm introduction instead of cold application, surfaced by connecting data across tables you might not cross-reference manually. An autonomous agent running on a schedule might catch that a contact offered to intro you to a VP nine days ago: "The window on a warm intro is roughly two weeks. You don't want to lose this."
The visual dashboard lets you pull up the whole pipeline over coffee. Applications, interview patterns, resume performance—everything visible at once, not buried in chat scroll.
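The cross-referencing behind "what do I have on this company?" is, at bottom, a lookup across tables. A hedged sketch; the contacts and notes shapes are assumed for illustration:

```python
# Cross-table lookup: given a company from a job posting, pull every
# contact and note that mentions it. The agent layers reasoning on top
# of results like these.
def context_for(company: str, contacts: list[dict],
                notes: list[dict]) -> dict:
    key = company.lower()
    return {
        "contacts": [c for c in contacts
                     if c["company"].lower() == key],
        "notes": [n for n in notes if key in n["text"].lower()],
    }

contacts = [{"name": "Ana", "company": "Acme"}]
notes = [{"text": "Met Acme's data team at the October conference"}]
print(context_for("acme", contacts, notes))
```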
The Governance Question Nobody's Asking
What Jones doesn't address directly, but what hovers over this entire approach, is who should own this infrastructure and what it means that individuals are building it themselves.
AI companies are racing to add memory features—ChatGPT has memory, Claude has projects, Anthropic introduced MCP specifically to let AI systems access external data. But these are proprietary implementations. Your data lives in their systems, accessed through their interfaces, subject to their terms.
Open Brain and systems like it represent a different model: you own the database, you control the access, you build the interfaces. It's more work. It requires technical comfort most people don't have. But it's also genuinely yours.
The question is whether that matters. For developers and OSS communities, data sovereignty isn't abstract—it's the difference between building on your own foundation versus renting someone else's. For everyone else, the value proposition is murkier. Is owning your AI memory worth an afternoon of deployment work and ongoing maintenance?
Jones frames this as individuals taking control of their data, no middlemen. But another reading is that we're asking people to self-host critical infrastructure because the platforms building AI systems haven't figured out how to offer memory in ways users can trust. The burden falls to individuals not because that's ideal, but because it's the only option that guarantees control.
What Happens When Agents and Humans Share State
The more interesting tension is what it means to have both an agent and a human modifying the same data store with equal authority.
Traditional software has clear roles: the system is the source of truth, users interact through constrained interfaces. Collaborative software like Google Docs extends edit access to multiple humans, but the interface is the same for everyone. Agent-only systems give AI full access to data stores, but humans only see what the agent chooses to surface through conversation.
Jones' dual-interface model creates something different: genuine shared state where both agent and human have direct, equivalent access through their preferred modalities. The agent reasons across categories and catches temporal patterns. The human scans visually and makes intuitive leaps. Both write directly to the same tables.
That architectural choice has implications for how AI memory systems evolve. If agents only access data through conversation, they're limited to what humans think to ask about. If humans only access data through agent-generated summaries, they're limited to what the agent thinks is relevant. Shared state means both can operate on their terms while staying synchronized.
Whether that's the right model depends on what you're trying to build. For personal knowledge management, relationship tracking, job searches—domains where human intuition and agent analysis are both valuable—it makes sense. For domains where you want the agent to handle everything or where visual interfaces would just be clutter, maybe not.
What Jones has built is less a universal solution than a clear articulation of one possible architecture for human-AI collaboration: not agent-only, not human-directed, but genuinely shared infrastructure where both parties maintain their own doors to the same room.
—Dev Kapoor
Watch the Original Video
One Simple System Gave All My AI Tools a Memory. Here's How.
AI News & Strategy Daily | Nate B Jones
26m 55s

About This Source
AI News & Strategy Daily | Nate B Jones
AI News & Strategy Daily, managed by Nate B. Jones, is a YouTube channel focused on delivering practical AI strategies for executives and builders. Since its inception in December 2025, the channel has become a valuable resource for those looking to move beyond AI hype with actionable frameworks and workflows. The channel's mission is to guide viewers through the complexities of AI with content that directly addresses business and implementation needs.
More Like This
GitHub's Week of AI Agents: Economic Survival Meets Code
GitHub's trending projects reveal a shift: AI agents now manage their own wallets, die when broke, and face real survival economics. What changed?
GPT-5.4's Schizophrenic Performance: A Model at War With Itself
ChatGPT 5.4 crushes quantitative tasks but fails basic reasoning. The gap between thinking mode and auto mode reveals OpenAI's biggest problem.
NotebookLM + Claude: Teaching AI Agents Domain Expertise
A developer demonstrates using NotebookLM to generate Claude Code skills—custom knowledge modules that teach AI agents specific domains in minutes.
The Specification Bottleneck: Why AI Creates Two Classes of Workers
When AI makes building free, knowing what to build becomes everything. How the shift from production to specification is splitting knowledge workers into two classes.