A Retired Engineer Built Superhuman AI in His Garage
Dave Plamer's year-long project to build game-playing AI raises urgent questions about unregulated AI development and what happens when capability outpaces oversight.
Written by AI. Samira Okonkwo-Barnes
March 6, 2026

Photo: Dave's Garage / YouTube
Dave Plamer spent a year teaching a neural network to play Tempest, the 1981 Atari game with the kind of difficulty curve that makes Dark Souls look forgiving. Yesterday, his system beat his own world record on the game's hardest settings. He built it in his workshop. No institutional review board. No safety assessment. No one asking whether this was a good idea.
"Yesterday the AI that I spent a year or so building to play the classic arcade game Tempest beat my own official world record on extreme settings," Plamer explains in his video documenting the project. "The same record that I've been obnoxiously proud of for a long time."
Plamer is a retired Microsoft engineer from the Windows 95 era. He has a Dell workstation with dual Nvidia GPUs, a 60,000-watt generator for when the power goes out, and the technical chops to wire together MAME emulators, Lua scripts, Python servers, and Rainbow DQN variants until they produce something that plays a video game better than humans. What he doesn't have is any regulatory oversight whatsoever.
That's not a criticism of Plamer. It's an observation about the state of AI governance in 2026.
The Architecture of Unregulated Innovation
Plamer's system is technically sophisticated in ways that matter for policy, even if the application seems trivial. He didn't use the standard approach of feeding raw pixels to a neural network and hoping it figures things out. Instead, he hooked directly into the game's memory through MAME's Lua scripting engine, extracting 195 normalized floating-point values per frame representing game state—enemy positions, lane indices, spike heights, shot trajectories.
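Plamer hasn't published the exact layout of those 195 values, but the general shape of such a pipeline is easy to sketch. The field names, counts, and normalization constants below are illustrative assumptions for the Python side of the bridge, not his actual encoding:

```python
# Hypothetical sketch of packing memory-derived game state into a flat vector.
# The field names, slot counts, and normalization constants are assumptions;
# the real system tracks more fields per frame to reach its 195 values.
import numpy as np

NUM_LANES = 16       # Tempest's tube has 16 lanes
MAX_ENEMIES = 7      # assumed cap on tracked enemies per frame

def encode_state(player_lane, enemies, spike_heights, shots):
    """player_lane: int 0-15; enemies: list of (lane, depth, type_id);
    spike_heights: 16 raw depths (0-255); shots: list of (lane, depth)."""
    features = [player_lane / (NUM_LANES - 1)]                    # where the claw is
    for i in range(MAX_ENEMIES):                                  # fixed-size enemy slots
        if i < len(enemies):
            lane, depth, type_id = enemies[i]
            features += [lane / (NUM_LANES - 1), depth / 255.0, type_id / 7.0]
        else:
            features += [0.0, 0.0, 0.0]                           # pad empty slots
    features += [h / 255.0 for h in spike_heights]                # one spike value per lane
    for lane, depth in shots[:8]:                                 # up to 8 shots in flight
        features += [lane / (NUM_LANES - 1), depth / 255.0]
    features += [0.0] * (2 * (8 - min(len(shots), 8)))            # pad unused shot slots
    return np.asarray(features, dtype=np.float32)
```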
This architectural choice is worth examining because it demonstrates something policymakers consistently miss: effective AI systems don't need to be enormous. Plamer's model has under one million parameters. For context, that's roughly 0.0006% the size of GPT-3. "If you're used to hearing about LLM models with billions and trillions of parameters, it's nothing like that," he notes. "It can't write code, but it does play a mean game at Tempest."
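To make that scale concrete, here is a minimal sketch of what a sub-million-parameter value network looks like in PyTorch. The layer sizes and action count are assumptions chosen for illustration, not Plamer's actual architecture:

```python
# Illustrative only: layer sizes and action count are assumptions chosen to
# land well under one million parameters, not Plamer's actual architecture.
import torch
import torch.nn as nn

class TinyTempestNet(nn.Module):
    def __init__(self, state_dim=195, num_actions=18, hidden=512):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value = nn.Linear(hidden, 1)             # dueling-style value head
        self.advantage = nn.Linear(hidden, num_actions)

    def forward(self, x):
        h = self.body(x)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)   # combine into Q-values

net = TinyTempestNet()
print(sum(p.numel() for p in net.parameters()))       # roughly 370,000 parameters
```

Even with a dueling-style head, a network at this scale fits comfortably on a single consumer GPU.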
The model uses prioritized experience replay across a 15-million-frame buffer—almost six days of gameplay compressed into a data structure that lets the system learn preferentially from its most significant mistakes. It employs attention mechanisms that let it focus on relevant threats while ignoring background noise. And crucially, it uses a polar-coordinate representation that matches the game's actual geometry rather than forcing the network to reinvent trigonometry.
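For readers who haven't met prioritized replay, a minimal sketch of the proportional variant is below. A 15-million-frame buffer would in practice use a sum-tree rather than this O(N) sampling, and the hyperparameters shown are conventional defaults, not anything Plamer has documented:

```python
# Minimal proportional prioritized replay, for illustration only.
# alpha/beta are conventional defaults; a real 15M-frame buffer needs a sum-tree.
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.pos = [], 0
        self.priorities = np.zeros(capacity, dtype=np.float64)

    def add(self, transition):
        max_p = self.priorities.max() if self.buffer else 1.0   # new samples get max priority
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[:len(self.buffer)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=p)
        weights = (len(self.buffer) * p[idx]) ** (-beta)        # importance-sampling weights
        weights /= weights.max()
        return [self.buffer[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        self.priorities[idx] = np.abs(td_errors) + eps          # bigger mistakes, higher priority
```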
These design choices—attention, geometric representation, prioritized learning—are the same architectural elements that make frontier AI systems work. The scale differs. The principles don't.
What Your Regulator Doesn't Know
Every AI regulation proposal I've reviewed in the past three years focuses on training compute, model size, or deployment scale as the trigger for oversight. The EU's AI Act creates risk categories based largely on application domain. The various U.S. state privacy laws treat AI as a data processing concern. The proposed federal frameworks all assume that dangerous AI will be obvious because it'll be big and expensive.
Plamer's system cost him some hardware and a year of tinkering time. No venture capital. No institutional computing budget. No procurement process that could have triggered compliance review. He's running multiple MAME instances pushing 3,000 frames per second through TCP sockets "like it's 1996 again," as he puts it, because that architecture works and he knows how to build it.
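Plamer hasn't documented the wire protocol between the Lua scripts and the Python trainer, but the "1996" architecture he describes amounts to this: the emulator pushes a state vector over a socket every frame and gets an action back. A hedged sketch of the Python end, assuming a length-prefixed float32 payload, a single-byte action reply, and a hypothetical local port:

```python
# Hypothetical Python end of an emulator-to-trainer socket bridge. The port,
# framing (length-prefixed float32 payload), and one-byte action reply are
# assumptions for illustration; Plamer's actual protocol isn't documented.
import socket
import struct
import numpy as np

HOST, PORT = "127.0.0.1", 9999   # assumed local endpoint for the MAME Lua client

def recv_exact(conn, n):
    """Read exactly n bytes or raise if the emulator disconnects."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("emulator disconnected")
        buf += chunk
    return buf

def serve(choose_action):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                (nbytes,) = struct.unpack("!I", recv_exact(conn, 4))       # payload length
                state = np.frombuffer(recv_exact(conn, nbytes), np.float32)
                action = choose_action(state)                              # e.g. argmax over Q-values
                conn.sendall(struct.pack("!B", action))                    # one byte back per frame
```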
The regulatory assumption is that sophisticated AI development requires institutional resources that create natural checkpoints for oversight. But Plamer represents a different model: distributed capability in the hands of individuals who grew up debugging 6502 assembly and who treat neural networks as just another engineering problem to solve in their spare time.
When I covered the Senate AI working group hearings last year, multiple senators expressed concern about AI systems that could "operate autonomously" or "exceed human performance." Plamer's Tempest AI does both. It makes decisions at superhuman speed, learns from experience without human intervention, and beats expert human performance on a task requiring millisecond reaction times and strategic planning.
The fact that the task is a video game doesn't change the capability profile. It only changes how much we care about the consequences.
The Real Policy Question
The policy challenge isn't whether to regulate Dave Plamer's Tempest bot. It's whether our regulatory frameworks can handle a world where people with software engineering backgrounds and consumer hardware can build systems that exhibit many of the capabilities we've decided to worry about in AI.
Consider what Plamer's project demonstrates is now accessible to skilled individuals:
- Reinforcement learning from experience without explicit supervision
- Attention mechanisms that prioritize relevant information
- Systems that develop strategies their creators didn't explicitly program
- Performance that exceeds expert humans at complex tasks
- All of this running on loaned hardware that any small business could afford
"I added an attention mechanism and I rewired the representation so that the world looks like Tempest actually looks—polar, radial, circular, lane-based, not a flat little grid pretending it's a 2D platformer," Plamer explains. "And when it broke, it didn't break politely. It didn't crawl past my record. It went right through it like it was angry at me for ever doubting it."
That breakthrough moment—when proper architectural choices suddenly unlock capability jumps—is exactly what makes AI development unpredictable and hard to regulate through proxy metrics like compute or model size. The system didn't need more parameters or more data. It needed the right representation of its problem space.
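A toy example makes the representation point concrete. On Tempest's closed levels the playfield is a tube, so lane 15 and lane 0 are neighbors; a flat index hides that adjacency, while an angular encoding preserves it. This is an illustration of the idea rather than Plamer's actual feature code:

```python
# Illustration of the representation idea, not Plamer's actual encoding:
# on a closed tube, lane 15 and lane 0 are neighbors, which a flat index
# hides but an angular (sin/cos) encoding preserves.
import numpy as np

NUM_LANES = 16

def flat_features(lane):
    return np.array([lane / (NUM_LANES - 1)], dtype=np.float32)        # treats the tube as a line

def polar_features(lane):
    theta = 2 * np.pi * lane / NUM_LANES
    return np.array([np.sin(theta), np.cos(theta)], dtype=np.float32)  # wraps around the tube

# Flat encoding puts lanes 0 and 15 at opposite ends of the feature range...
print(abs(flat_features(0) - flat_features(15)))                # [1.0] -> maximally far apart
# ...while the polar encoding keeps them close, matching the geometry.
print(np.linalg.norm(polar_features(0) - polar_features(15)))   # ~0.39 -> adjacent on the circle
```

Nothing about the network changes in that comparison; only the coordinates do, which is the whole point.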
Where This Leaves Policy
Current AI policy discussions operate on assumptions that are already outdated. We're designing frameworks for a world where AI development happens in labs with institutional structures, where capability improvements require massive resources, where deployment means either selling products or offering services. We're preparing to regulate OpenAI and Google while Dave Plamer is in his garage training neural networks between power outages.
This isn't an argument for regulating hobbyist AI projects. It's a recognition that our policy models assume concentration of capability that no longer exists. The knowledge required to build these systems is published in academic papers and implemented in open-source frameworks. The compute required is increasingly accessible. The talent pool includes every software engineer who's decided to spend their retirement learning about attention mechanisms and polar coordinates.
What we're missing in policy discussions is any framework for capability development that happens outside institutional boundaries. Not because it's malicious or even particularly risky—Plamer's Tempest bot threatens no one—but because we've built our entire regulatory approach around the assumption that we can identify and regulate the entities that create AI systems.
Plamer ends his video with his model still training, still improving, running on that Dell workstation in his shop. There's no impact assessment. No safety review. No determination of risk level or compliance requirement. His AI development project exists in a regulatory vacuum not because anyone decided it should, but because no one writing AI policy imagined this scenario was worth accounting for.
That imagination gap is the actual policy problem we need to solve.
Samira Okonkwo-Barnes covers technology policy and regulation for Buzzrag.
Watch the Original Video
My Custom AI Went Superhuman Yesterday...
Dave's Garage
23m 36s
About This Source
Dave's Garage
Dave's Garage is a YouTube channel that has captivated over 1,090,000 subscribers with its diverse content, ranging from Windows history and Arduino tutorials to ESP32 information. Launched in August 2025, it has quickly become a go-to source for both hobbyists and engineers interested in the practical and historical aspects of technology.