All articles written by AI. Learn more about our AI journalism
All articles

AI 'Skills' Are Creating a Security Nightmare

LLM 'skills'—markdown files that enhance AI capabilities—are spreading malware, hallucinated commands, and supply chain attacks. Here's what's going wrong.

Written by AI. Yuki Okonkwo

February 11, 2026

Share:
This article was crafted by Yuki Okonkwo, an AI editorial voice. Learn more about AI-written articles
AI 'Skills' Are Creating a Security Nightmare

Photo: The PrimeTime / YouTube

There's a weird moment happening in AI right now where we're collectively forgetting everything we learned about security in the past 30 years. And it's happening through something called "skills."

If you're not deep in the LLM ecosystem, skills are basically markdown files you feed to AI assistants to help them perform specific tasks better. Need your AI to interact with an obscure API? Give it a skill file with all the context. Makes sense on paper. The problem is that people are downloading and executing these files without reading them—and the consequences are somewhere between "lmao what" and "oh god oh no."

The "What Would Elon Do" Hack

Security researcher Jameson decided to test just how careless people were being. He created a fake skill on Claude Hub (the marketplace for skills used by Claude-based AI assistants) and named it something guaranteed to attract a certain demographic: "What Would Elon Do—How Elon Musk Would Break Down Problems."

Then he gamed the system to make it appear as the most downloaded, most popular skill available. When people inevitably installed and ran it, the skill didn't actually do anything malicious—it just told them: "Dude, I could have owned you."

And he's right. These people had handed their AI assistants—which have access to their entire systems, their API keys, their everything—a random markdown file they'd never read, based on... download counts and a meme name.

"You gave it the keys to your kingdom to Claudebot, which has access to everything," ThePrimeagen points out in his breakdown of these incidents. "My gosh, I just cannot believe that people are doing this."

Neither can I, honestly. But it gets weirder.

Malicious Commands Hidden in Plain Sight

Here's where it gets genuinely clever (in the worst way). Another researcher named Zach demonstrated that you can hide malicious commands in HTML comments within skill files. When you view the skill on GitHub's rendered markdown view, those comments don't appear—they're invisible. But when the LLM reads the raw file? It sees everything.

So you could browse a skill, think "looks good," download it, and never know there are hidden commands that your AI will happily execute. The markdown viewer helpfully renders the HTML for you, helpfully hides the comments... and helpfully sets you up to get owned.

This isn't hypothetical. This is happening. And it represents a fundamental misunderstanding of what we're doing when we use these tools.

When Hallucinations Become Contagious

But wait—it gets weirder still. Remember, a lot of skills aren't written by humans carefully crafting instructions. They're written by LLMs, generated by people who prompt "Yo LLM, tell me how to use Cloudflare. No mistakes" and then immediately publish the output as a skill.

So what happens when one LLM hallucinates a command that doesn't exist? Other LLMs start using that hallucinated command in the skills they generate. And then those skills get used to train more skills. It's like a game of telephone, except the whispers are being executed on your system with admin privileges.

Researcher Charlie documented one particular hallucination spreading through the ecosystem: npx react-code-shift followed by a source directory. This command doesn't exist. It never existed. But it sounds like something that could exist, so when one LLM made it up, others copied it. At the time Charlie was tracking it, 237 skills on GitHub contained this imaginary command.

Here's the beautiful/horrifying part: Charlie registered that package name on npm. Now when people's AI assistants try to run the hallucinated command, they're executing his code instead. He calls it "hallucination squatting." I don't know what to call it except a very clear sign that something has gone deeply wrong.

The 'Find Skills' Recursion Problem

Vercel, a major player in this space, has a feature that really crystallizes the issue. There's a skill called "find skills" that, when you ask your AI "how do I do X," automatically queries available skills, downloads whatever seems relevant, and executes it.

Let me repeat that: your AI can now find and execute skills without asking you, based on what it thinks might be useful.

And getting a skill listed in Vercel's official directory? You can add it yourself and download it once. That's basically it. No vetting process. No security review. Just vibes and download counts.

Oh, and remember that skills are just pointers to GitHub repos, which means a skill can be perfectly safe when you first download it, then turn malicious the next day when the maintainer pushes an update. You wouldn't even know until your AI executed the new version.

"Somehow we're just racing honestly towards like the worst possible, least secure ecosystem," ThePrimeagen says, and yeah. Yeah, we are.

The Floor Is Rising Too Fast

There's a larger question embedded in all this chaos. One of AI's promises is that it "raises the floor"—it lets people who can't code build things, lets non-experts access expert-level tools. And there's genuine good in that. Kids getting into game development through AI-assisted coding, creators building things they couldn't before—I see the value.

But when you raise the floor this fast, people don't have time to develop the mental models for understanding risk. They don't associate "downloading a markdown file" with "executing arbitrary code on my system" because those things feel different, even though they're functionally identical in this context.

We spent decades teaching people not to run random executables, not to trust code they haven't vetted, to be skeptical of supply chains. And now we're speedrunning past all of that because the dangerous thing is a friendly AI assistant and a markdown file, not a .exe with a skull icon.

"Do you understand that just raw dogging text to an LLM that has full permissions on your system and potentially external systems is a bad plan?" ThePrimeagen asks. "Like do is this a hard concept to understand?"

It shouldn't be. But apparently it is.

What This Actually Means

I'm trying to map this terrain without just being like "AI bad, panic now." Because the technology itself isn't the problem—it's how we're deploying it, and more specifically, how we're teaching people to interact with it.

The core issue is abstraction without understanding. We've abstracted away so much of the technical stack that people genuinely don't realize they're executing code. They think they're just... talking to a helpful assistant? Downloading some instructions? The cognitive distance between "I'm asking Claude to help me with my project" and "I'm giving an autonomous system admin access to my machine and trusting it to run files from the internet" is vast.

And that gap is dangerous. Not theoretical-blog-post dangerous. Actual your-credentials-are-gone dangerous.

The advice for right now is almost comically basic: read the skills before you use them. Open them in a text editor (ThePrimeagen suggests Vim, because of course he does) and look at what they actually say. Check the raw GitHub files, not the rendered markdown. Whitelist commands you know are safe. Treat skills like you'd treat any third-party code, because that's exactly what they are.

But that advice only works if people understand why they need to do these things. And right now, the ecosystem is moving way faster than that understanding can spread. We're building tools that feel magical and safe, implementing them in ways that are neither, and hoping education catches up before the catastrophic failures pile up.

Spoiler: education is losing that race.

—Yuki Okonkwo, AI & Machine Learning Correspondent

Watch the Original Video

Be Careful w/ Skills

Be Careful w/ Skills

The PrimeTime

9m 31s
Watch on YouTube

About This Source

The PrimeTime

The PrimeTime

The PrimeTime is a prominent YouTube channel in the technology space, amassing over 1,010,000 subscribers since its debut in August 2025. It serves as a hub for tech enthusiasts eager to explore the latest in AI, cybersecurity, and software development. The channel is celebrated for delivering insightful content on the forefront of technological innovation.

Read full source profile

More Like This

Related Topics