AI Slop Just Killed Curl's Bug Bounty Program
Daniel Stenberg shut down Curl's bug bounty after AI-generated vulnerability reports overwhelmed his team with fake bugs. What happens when automation breaks good faith?
By Zara Chen
February 6, 2026

Photo: Low Level / YouTube
Daniel Stenberg just pulled Curl off HackerOne, and honestly? I don't blame him.
For context: Curl is fundamental internet infrastructure. You probably used it today without knowing it. It's the tool, and the library (libcurl), that programs use to make web requests; wget is its closest command-line cousin. If you've ever downloaded something via the command line, or if an app on your phone fetched data from a server, there's a decent chance Curl was involved. Stenberg maintains this for free, as a public good, with a small team of volunteers.
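If "make web requests programmatically" sounds abstract, here's a minimal sketch of what it looks like through libcurl, Curl's library form. The URL is just a placeholder; real programs register callbacks instead of letting the response body fall through to stdout.

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        /* One "easy handle" per transfer. */
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        /* Placeholder URL; by default libcurl writes the response body to stdout. */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        return 0;
    }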
And now he's done dealing with bug bounty hunters because AI has turned the entire system into a wasteland of garbage reports.
The Breaking Point
On January 26th, Stenberg removed Curl from HackerOne's bug bounty platform. Not because the platform is bad—bug bounties are actually a clever solution to a real problem. You pay security researchers to find vulnerabilities so they report them to you instead of selling them to criminals. Everybody wins: researchers get paid, companies get safer, users don't get hacked.
But that system assumed good faith. It assumed people would submit real bugs they'd actually found and verified. AI has shattered that assumption.
"The neverending slop submissions took a serious mental toll to manage and sometimes a long time to debunk," Stenberg wrote, "time and energy that is completely wasted while also hampering our will to live."
That phrase—"hampering our will to live"—that's not hyperbole from someone being dramatic. That's what happens when you're maintaining critical infrastructure for free and suddenly have to wade through an ocean of AI-generated nonsense pretending to be security research.
What AI Slop Actually Looks Like
The examples are almost funny until you remember someone had to read and debunk each one.
One report claimed a "use after free" vulnerability in Curl's OpenSSL implementation. The researcher's proof-of-concept code literally just freed a data structure and then used it again—a bug they'd written themselves in their own test code that had nothing to do with Curl. "Bro literally wrote a use after free and decided yeah this is definitely a bug in libcurl," Low Level's analysis points out. "Where is curl even involved here?"
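To make that concrete, here is a hypothetical reconstruction of the pattern the video describes, not the actual report: every line is the reporter's own test code, and libcurl never enters the picture.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical reconstruction of the reported "proof of concept".
       Nothing here touches libcurl; the bug is entirely self-inflicted. */
    struct conn_data {
        char host[64];
    };

    int main(void) {
        struct conn_data *data = malloc(sizeof(*data));
        if (!data)
            return 1;
        strcpy(data->host, "example.com");

        free(data);                  /* the reporter frees their own struct...   */
        strcpy(data->host, "boom");  /* ...then writes to it: a textbook use
                                        after free, written by the reporter,
                                        not by Curl */
        return 0;
    }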
Another reported a buffer overflow in an IP address conversion function. The function had an explicit guard clause—if (strlen(temp) > size) return;—that prevented the exact overflow being reported. When maintainers pointed this out, the response was pure AI: "You are absolutely right that if strlen is greater than size then this will not be an issue. You're correct that in the provided code the checks prevent overflow. That being said, checks may be bypassed in a different implementation."
That last sentence is the tell. It's technically true in the way a fortune cookie is technically true. It means nothing in this context but sounds plausible enough that an AI would generate it. When Stenberg called it "AI slop," the response? "I understand you're upset, but let's keep the conversation respectful."
That's ChatGPT's exact tone when you tell it it's wrong.
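For the curious, the guard-clause pattern the maintainers pointed to works roughly like this sketch. It's my reconstruction, not Curl's actual source, and the exact comparison in the real code differs slightly; the point is that the length check runs before the copy, so the overflow the report describes simply cannot happen.

    #include <string.h>

    /* Sketch of the guard-clause idea, not Curl's code: refuse any string
       that would not fit (with its terminating NUL) before copying it. */
    static int copy_address(char *dst, size_t size, const char *temp) {
        if (strlen(temp) >= size)
            return -1;        /* too long: bail out before touching dst */
        strcpy(dst, temp);    /* only reached when temp fits, NUL included */
        return 0;
    }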
The Signal-to-Noise Crisis
Here's the uncomfortable part: AI can find real security vulnerabilities. A company called Expo runs an autonomous security platform that's successfully submitted legitimate bugs to Booking.com, Airbnb, and others through HackerOne. Sean Heelan, a researcher, used OpenAI's o3 model to find an actual vulnerability in the Linux kernel's SMB implementation.
But even Heelan—an experienced security researcher who knows what he's doing—reports a signal-to-noise ratio of 1 to 50. For every hundred reports his AI setup generates, maybe two are real bugs. And he still has to manually verify every single one.
"That is what Daniel and his team has been dealing with," the Low Level video notes. "And that's why they eventually closed down their program on hacker one."
The tragedy here is that bug bounties worked. They created a legitimate path for security researchers to get paid for doing important work instead of selling exploits to criminals. The system wasn't perfect—"it's kind of odd that we have to incentivize research with money," as the video puts it—but it was better than the alternative.
AI hasn't improved this system. It's broken it by removing the cost of bad-faith participation.
The Automation Paradox
There's a deeper issue here about what AI is actually good for in security research. The consensus seems to be: it's a tool, not a researcher.
Using AI to write fuzzing harnesses when you're testing an API? Smart. Having it explain unfamiliar technical terms while you're auditing hypervisor code? Helpful. Feeding it code you've already identified as suspicious to help you trace the vulnerability? Reasonable.
Blindly submitting whatever it tells you is a bug? That's not research. That's spam with extra steps.
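As a concrete example of the first category: a fuzzing harness is mostly glue code, and glue is something an AI can draft competently. Below is a sketch of a libFuzzer-style harness pointed at curl's URL parsing API; the choice of target is mine, for illustration, and this isn't Curl's own fuzzing setup. The fuzzer finds the crashes; a human still decides whether any of them matter.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <curl/curl.h>

    /* Sketch of a libFuzzer-style harness: hand fuzzer-generated bytes to one
       specific parser (curl's URL API, chosen here for illustration) and let
       the fuzzer hunt for crashes. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        char *input = malloc(size + 1);
        if (!input)
            return 0;
        memcpy(input, data, size);
        input[size] = '\0';          /* the URL API expects a C string */

        CURLU *url = curl_url();
        if (url) {
            curl_url_set(url, CURLUPART_URL, input, 0);
            curl_url_cleanup(url);
        }

        free(input);
        return 0;
    }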
The problem is the incentive structure. Submitting a bug report costs nothing. If you're wrong, you wasted someone else's time, not yours. If you're right—or if the AI happens to hallucinate something that coincidentally matches a real issue—you might get paid. There's no penalty for flooding the zone with garbage.
And critically: the people filtering that garbage are volunteers. Stenberg doesn't get paid to maintain Curl. His team doesn't get paid to triage reports. They do this because they believe in building public infrastructure that works.
What Breaks When Systems Assume Good Faith
Bug bounties weren't designed to handle adversarial automation. They assumed that submitting a report required enough human effort that people wouldn't bother unless they'd actually found something. AI removes that friction entirely.
This pattern is going to repeat. We're already seeing it in academic peer review, customer service, content moderation, grant applications—anywhere that previously relied on effort costs to prevent bad actors from overwhelming the system.
The fix isn't obvious. You can require proof-of-concept code, but AI can generate that too (even if it's nonsense). You can charge submission fees, but that punishes legitimate researchers who might not have resources. You can require reputation, but that creates barriers to entry for new researchers.
Maybe the answer is that bug bounties as currently structured don't survive the AI era. Maybe we need new systems that explicitly assume most submissions will be garbage and build accordingly. Maybe we need to actually fund critical infrastructure maintenance so maintainers aren't doing this for free in the first place.
What's clear is that we can't keep operating systems designed for good faith in an environment where AI makes bad faith free.
Stenberg's decision to pull Curl from HackerOne isn't just about one project or one frustrated maintainer. It's a signal about what breaks when we automate ourselves into a world where the cost of being wrong approaches zero and the cost of being right stays exactly the same.
— Zara Chen, Tech & Politics Correspondent
Watch the Original Video: "AI ruined bug bounties" by Low Level (10m 49s)