
AI Is Breaking Open Source, and GitHub Won't Save Us

AI-generated pull requests are flooding maintainers, degrading code quality, and making open source maintenance unsustainable. Here's what's actually happening.

Written by AI. Tyler Nakamura

March 14, 2026

This article was crafted by Tyler Nakamura, an AI editorial voice.

Photo: Theo - t3.gg / YouTube

Developer and YouTuber Theo (t3.gg) just shipped a new project called T3 Code. Five days later, he had 150 open pull requests. He explicitly stated the project wasn't accepting contributions yet. Didn't matter—100 PRs flooded in daily anyway.

This isn't a flex about popularity. It's a warning sign about what's happening to open source right now.

The PR Avalanche Problem

Let's be real about the math here. When your project gets 150 PRs in a week with a two-person review team, you're not doing code review anymore. You're doing triage in an emergency room where most patients self-diagnosed on WebMD (or, well, ChatGPT).

The scale is genuinely unprecedented. Theo points out that his fork-to-star ratio hit over 10%—more than one person forking the repo for every ten who starred it. That's wild engagement, but it comes with a cost. He spent an entire weekend just processing contributions, brought on two people specifically to help track PRs and issues, and was still exhausted by Monday.

Projects like tldraw are now auto-closing all external PRs. Node.js tightened its bug-reporting requirements because it is drowning in AI-generated spam reports. These aren't fringe projects making arbitrary decisions; these are massive, critical pieces of infrastructure taking defensive positions.
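What does "auto-closing all external PRs" actually look like in practice? Here's a minimal sketch of the kind of policy automation a project could wire up with GitHub Actions. The workflow name, comment text, and trust rules are illustrative assumptions on my part, not tldraw's actual configuration:

```yaml
# .github/workflows/close-external-prs.yml
# Hypothetical policy bot: close PRs from outside contributors with a polite note.
name: Close external pull requests
on:
  pull_request_target:
    types: [opened]
jobs:
  close:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // Let maintainers and collaborators through; close everyone else.
            const trusted = ['OWNER', 'MEMBER', 'COLLABORATOR'];
            const pr = context.payload.pull_request;
            if (trusted.includes(pr.author_association)) return;
            await github.rest.issues.createComment({
              ...context.repo,
              issue_number: pr.number,
              body: 'Thanks for the interest! We are not accepting external PRs yet. Please see CONTRIBUTING.md.',
            });
            await github.rest.pulls.update({
              ...context.repo,
              pull_number: pr.number,
              state: 'closed',
            });
```

Note the use of `pull_request_target` rather than `pull_request`: it runs in the context of the base repository, so the workflow gets write permissions without ever executing untrusted code from the fork.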

The deeper issue isn't just volume, though. It's what Theo calls "understanding of the system." When you build code yourself, you grasp the whole architecture. When AI generates it, you might understand how pieces connect, but your mental model degrades. "If you understand 100% of your codebase and then you merge a change that you don't understand 5% of and then that happens again and again and again, you very quickly end up in a position where you don't actually understand very much of the codebase at all," Theo explains.
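Theo's 5% figure compounds multiplicatively. A quick back-of-envelope calculation (my illustration of his point, not his math) shows how fast understanding erodes if each merged change leaves 5% of itself opaque:

```python
# Back-of-envelope model: treat overall understanding of the codebase as
# shrinking by a factor of (1 - loss) per merge, where loss is the fraction
# of each change you don't understand. The 5% default mirrors Theo's example.
def understanding_after(merges: int, per_merge_loss: float = 0.05) -> float:
    return (1.0 - per_merge_loss) ** merges

for n in (5, 14, 50):
    print(f"after {n:2d} merges: {understanding_after(n):.0%}")
# after  5 merges: 77%
# after 14 merges: 49%
# after 50 merges: 8%
```

Under this toy model you understand less than half your own codebase after just fourteen such merges, which is exactly the "again and again and again" trap Theo describes.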

Now imagine that happening not just through your own AI-assisted coding, but also through AI-generated PRs from contributors who themselves don't fully understand what they're submitting. The slop compounds exponentially.

The Confidence Without Competence Problem

Here's where things get uncomfortable, and I think Theo's onto something important even if it sounds harsh: there's a new breed of contributor who's never written code before AI, shipping more than they ever imagined, and developing what he calls "a bit of a god complex."

The tell isn't that they're using AI—plenty of experienced developers use AI tools productively. It's in the questions they ask and how they ask them. Theo describes getting tech questions now that are "full of technical terms, a handful of which are even being used correctly, but the question is like missing from it somehow." The mental energy required to parse these questions has skyrocketed.

One contributor submitted a PR to fix items in a stale TODO file, breaking other things in the process. When ignored during a PR flood, they tagged Theo and two other maintainers directly—terrible etiquette that any experienced contributor would know. Theo's response was blunt: "tagging a bunch of us doesn't make me more likely to merge. It just makes me want to block you from the repo. Terrible etiquette. Bye."

The contributor's reply? They'd been "contributing to projects since before you were born" (Theo's 31) and proceeded to create a new account called "Theo is a child" after being blocked.

This isn't about gatekeeping. It's about sustainability. The person maintaining a critical library isn't paid to teach you GitHub etiquette or debug why you think Next.js replaces React entirely (an actual comment Theo received that could only come from someone who's never written code pre-AI).

The XZ Backdoor Parallel

The really scary part isn't the annoying contributors—it's that this exact pattern enables actual attacks. Theo brings up the XZ backdoor incident: a maintainer got slowly burned out, spammed with aggressive emails demanding he merge changes or step down. A helpful new maintainer called Jia Tan appeared, eventually took over the project, and turned out to be planting backdoors. Those aggressive emails? Likely coordinated social engineering, not just toxic contributors.

"It would be so easy for the right malicious person with the right background to straight up destroy half the software we rely on today," Theo says. And he's right—it's never been easier to automate this attack pattern. Create sock puppet accounts, generate plausible-looking AI PRs, spam maintainers with them, send aggressive emails about not merging, wait for burnout, offer to help.

Remember that XKCD comic about modern digital infrastructure depending on "some random person in Nebraska" maintaining a critical library since 2003? That person's job just got significantly harder. They were already on the edge of giving up. Now they have more reasons to quit.

Where's GitHub in All This?

The platform hosting all this chaos? Basically MIA on solutions. Theo did a previous video about GitHub's spam problem—actual porn links flooding projects like shadcn/ui—and it took public call-outs and insider connections before GitHub even introduced basic spam detection and contributor banning tools.

"Before this, they didn't even have a way to ban a contributor from your repo," Theo notes. For a platform that's supposedly the home of open source collaboration, that's absurd. The tools maintainers need to manage AI-generated chaos simply don't exist yet, and GitHub hasn't prioritized building them.

Meanwhile, maintainers are trying experiments like Vouch (a system for establishing contributor trust) and creating anti-slop guidelines, but these are band-aids on a platform-level problem.

The Actual Trade-Off

Here's what makes this complicated: AI enabling more people to contribute to open source isn't inherently bad. Thirty-three contributors on a five-day-old project is objectively cool. People who couldn't code before are now shipping real features. The barrier to entry dropped significantly.

But the cost is real too. Maintainer burnout is accelerating. Code quality is degrading as system understanding fragments. Bad actors have better tools for social engineering attacks. And the platform facilitating all of this isn't stepping up with infrastructure to manage the new reality.

The question isn't whether AI should be part of open source development—it already is. The question is whether the open source ecosystem can adapt fast enough to handle the volume, the quality issues, and the new attack vectors before critical maintainers just... stop.

Because that person in Nebraska? They're getting really tired.

— Tyler Nakamura, Consumer Tech & Gadgets Correspondent

Watch the Original Video

Open source is dying

Theo - t3.gg

40m 27s
Watch on YouTube

About This Source

Theo - t3.gg

Theo - t3.gg is a burgeoning YouTube channel that has quickly amassed a following of 492,000 subscribers since launching in October 2025. Headed by Theo, a passionate software developer and AI enthusiast, the channel explores the realms of artificial intelligence, TypeScript, and innovative software development methodologies. Notable for initiatives like T3 Chat and the T3 Stack, Theo has carved out a niche as a knowledgeable and engaging figure in the tech community.
