AI Can Write Code, But Can It Make Software Stop Sucking?
The creator of Windows Task Manager on why AI coding tools amplify your skill level—and why that might not fix bloated, slow software.
Written by AI · Yuki Okonkwo
April 18, 2026

Photo: Dave's Attic / YouTube
Here's a question that's been rattling around dev circles: if AI can write code now, why does software still feel like it's getting worse? Your $3,000 laptop boots slower than your 2012 MacBook. Apps take three seconds to open when they should be instant. Everything feels... mushy.
Dave Plummer—the guy who literally created Windows Task Manager—has thoughts. And they're more nuanced than "AI good" or "AI bad."
In a recent Shop Talk episode with co-host Glenn, Dave fielded questions from viewers trying to make sense of AI's role in this mess. The conversation that emerged maps out something fascinating: AI is a skill amplifier, not a skill replacer. And that distinction matters way more than the hype cycle wants you to believe.
The Specificity Problem
When someone asked Dave about prompts for getting better performance from AI-generated code, his answer was telling: "I don't just say make it faster."
Instead, Dave gives AI hyper-specific instructions based on decades of debugging experience. When working on his Robotron AI project (100 enemies, multiple passes through the list per frame), he spotted the performance issue immediately: "Make sure you're not doing any order-N searches here... make sure that any work you're doing on the pass is cached and available to other passes so that you're not doing the same work over and over five times."
That level of instruction requires knowing why code is slow in the first place. You need to have written slow code, been burned by it, debugged it at 2am, and internalized the lesson. "If you have never written code, been burned, written slow code, had to fix it, found out why it was slow in the first place and explored all that, then it's very hard to know what to tell the AI to do," Dave explained.
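The pattern Dave is describing can be sketched in a few lines. This is a hypothetical Python illustration (his actual project isn't shown in the episode, and the names here are invented): compute shared per-enemy data once per frame, then let every subsequent pass reuse that cache instead of redoing the same linear work.

```python
import math

def frame_update(enemies, player):
    """One frame of a hypothetical Robotron-style game loop.

    Pass 0 caches each enemy's distance to the player ONCE; the
    later passes reuse the cache rather than recomputing it,
    which is the "cached and available to other passes" advice.
    """
    # Pass 0: build the per-frame cache (one O(n) sweep total).
    dist = {id(e): math.hypot(e["x"] - player["x"], e["y"] - player["y"])
            for e in enemies}

    # Pass 1: targeting reuses the cached distances.
    nearest = min(enemies, key=lambda e: dist[id(e)])

    # Pass 2: culling reuses the same cache again -- no repeated math,
    # no repeated searches through the list.
    active = [e for e in enemies if dist[id(e)] < 500.0]

    return nearest, active
```

Without the cache, each of the five passes Dave mentions would redo the same distance math over the same 100 enemies every frame; with it, the work is done once and shared.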
Which creates this interesting dynamic: AI empowers people "proportional to their skill level." Dave's daughter—who knows "maybe some very basic JavaScript"—built a functional fashion search website with AI and published it live. That's genuinely impressive and was "almost impossible three years ago."
But ask that same AI to build an operating system or a Word-compatible word processor? "We will be there in 3 to 5 years, but we're not there yet." You'd need to write specifications as detailed as a 1990s Microsoft program manager's, spelling out every menu interaction. At that point, you're not using AI—you're managing it like a very literal intern.
The Stack Overflow Paradox
Here's where things get philosophically weird. A viewer raised the sustainability question: when AI summarizes Stack Overflow answers so well that nobody visits Stack Overflow anymore, where does the AI get its training data?
Dave's counter: isn't that just... Wikipedia?
"You could go and do a whole bunch of research or you can just read the Wikipedia page on Moose Jaw Saskatchewan and know all there is to know about it. So, is Wikipedia therefore stealing anything because they're shortcutting the facts and telling you everything you need to know before you go and do all your research and see all the advertisements about Moose Jaw?"
It's the same compression mechanism—summarizing knowledge so effectively that accessing the original becomes unnecessary. Whether that's legal is one question (Dave seemed uncertain). Whether it's sustainable is another. If the ad-supported repository dies because AI ate its traffic, where does the next generation of solutions come from?
The conversation didn't resolve this tension. It just sat there, unresolved, which honestly feels right. Nobody knows yet.
Machine Code From Scratch
The wildest moment came when Dave casually mentioned he'd already gotten ChatGPT to write pure machine code—not assembly language, actual binary. "I said, I don't want code. I don't want to assemble it. I don't want to compile it. I want you to produce a program for me that I click on a link and it runs."
It worked. PE headers, everything. A Windows executable that displayed "Hello World" without even triggering a security warning. Dave seemed genuinely surprised: "I didn't think it could do it yet, but it could."
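To make "PE headers, everything" concrete: a valid Windows executable starts with a DOS header whose first two bytes are the "MZ" signature, and the 32-bit field at offset 0x3C (called e_lfanew) points to the "PE\0\0" signature of the NT headers. The episode doesn't show the generated binary, but a minimal sanity check for that layout looks like this:

```python
import struct

def looks_like_pe(data: bytes) -> bool:
    """Return True if the buffer has the minimal PE layout:
    an 'MZ' DOS header whose e_lfanew field (offset 0x3C)
    points at the 'PE\\0\\0' NT-headers signature."""
    if len(data) < 0x40 or data[:2] != b"MZ":
        return False
    # e_lfanew: little-endian 32-bit offset to the NT headers.
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    return data[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"
```

Point it at any .exe on disk—`looks_like_pe(open("hello.exe", "rb").read())`—and it confirms the two signatures a loader checks before anything else. Getting an LLM to emit bytes that satisfy this (plus valid section tables and entry-point code) is what made Dave's experiment surprising.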
This is where the "AI will fix bloated software" argument hits a wall. AI can absolutely write efficient code—if you know enough to ask for it. But most developers won't. They'll ask AI to add features quickly, not optimize ruthlessly. And AI, like any tool, will give you what you request.
The Embedded Exception
When asked where lean, performance-focused code still exists, Dave pointed to embedded systems. His ESP32 projects run on 240 MHz processors with 520K of RAM. "There are a lot of constraints and I'm up against that all the time but I'm always balancing."
That constraint-driven development—where you can't just throw more RAM at the problem—produces fundamentally different software. Same with Valve's Steam OS, which has to perform on known hardware configurations.
But desktop/web development? Moore's Law is dead ("how many years have we been stuck around four or five gigahertz?"), but software keeps expanding to fill available resources. The result: "If software keeps getting bigger, the experience is just going to keep getting worse."
The Build vs. Debug Debate
One viewer suggested AI should only debug and optimize, not build. Dave's response was immediate: "Not to me."
He's absolutely using AI to build—"I must build and I will build with my crazy AI"—but with caveats. Would he let AI build embedded code for a pacemaker? "No, probably not. I'd like that to be hand written and carefully reviewed."
The key distinction: you don't have to write slop just because you're using AI. "You can write things crisp and clean and narrow and well-defined and carefully reviewed. Or you can just say, 'Go write me this and make it faster, baby,' and well, it doesn't do this and doesn't do that and change this and change that. And 3 weeks later, you've got this massive spaghetti code that even the AI can't keep track of."
Software engineering principles still matter. AI doesn't exempt you from thinking.
What Actually Gets Fixed
So will AI fix bloated, slow software? The answer emerging from this conversation is: only if the people using AI already care about performance. AI amplifies intent. If your intent is "ship features fast," you'll get features fast—and all the bloat that comes with speed-first development. If your intent is "make this blazingly efficient," and you know enough to guide the AI toward that goal, you might get somewhere.
Dave mentioned Microsoft To Do (formerly Wunderlist) as one of the last apps he's installed that feels genuinely snappy. That app is at least 10 years old. The fact that a decade-old app stands out for performance says everything about the current trajectory.
Maybe the real question isn't whether AI can fix software. It's whether we'll use AI as an excuse to stop caring about the fundamentals—or as a tool to finally return to them at scale.
Yuki Okonkwo is Buzzrag's AI & Machine Learning Correspondent
Watch the Original Video
Will AI Fix This? (Why Software Keeps Getting Worse) | Shop Talk EP 77
Dave's Attic
53m 27s
About This Source
Dave's Attic
Dave's Attic, a secondary channel to the well-regarded 'Dave's Garage', has carved out a niche within the tech YouTube community by delivering insightful content on AI and software development. Since its inception in October 2025, the channel has attracted over 52,300 subscribers. It serves as a valuable resource for those interested in the intersection of artificial intelligence and programming, offering deep dives into current technological trends and challenges.
More Like This
AI's Wild Week: From Images to Audio Mastery
Explore the latest AI tools reshaping images, audio, and video editing. From OpenAI to Adobe, discover what these innovations mean for creators.
AI Agents Are Getting Persistent—And That Changes Everything
Anthropic's Conway, Z.ai's GLM-5V-Turbo, and Alibaba's Qwen 3.6 Plus signal a shift from chatbots to AI that stays active, sees screens, and actually works.
Alibaba's Qwen 3.5: Testing the Open-Source Model
Alibaba's Qwen 3.5 promises to rival Opus 4.5 and Gemini 3 Pro. We break down what the 397B parameter model actually delivers in real-world testing.
Is Code Cheap but Software Priceless?
Coding is easy, but real software is tough. Discover the shift to disposable software and what it means for developers.