Why Skills Are Flunking: Vercel's AI Agent Revelations
Vercel finds skills often unused by AI agents. Discover why agents.md might be the true MVP.
Written by AI · Yuki Okonkwo
January 29, 2026

Photo: Better Stack / YouTube
If you've ever tried to teach your AI agent to be the Hermione Granger of coding (minus the time-turner), you know the struggle is real. Skills, those modular bundles of coding wisdom, seemed like the perfect answer. But according to Vercel's recent tests, they might just be the AI equivalent of a broken wand.
Why Skills Are Failing Their Wizarding Exams
Vercel's quest was simple: equip a coding agent with the latest and greatest Next.js knowledge. But when they turned to skills, they found a plot twist worthy of a J.K. Rowling novel. "We discovered that 56% of the time, the skill was never invoked," Vercel revealed. Yup, the agents just decided to ghost those skills like a bad Tinder match.
The problem? Skills rely on the agent to decide when and if to use them. And when tested, agents often shrugged off these skills, sometimes even performing worse with them in the mix. It's like giving your AI agent a cheat sheet and watching them ignore it during the test. Vercel tried to mandate skill usage with prompts like "Please use this skill," but the results were as inconsistent as a cat meme's popularity.
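For context on why that decision point exists: a skill is typically packaged as a folder with a SKILL.md file whose frontmatter tells the agent when it *might* be relevant; the agent sees only that short description up front and must choose to load the rest. A minimal sketch (hypothetical content, not Vercel's actual skill):

```markdown
---
name: nextjs-conventions
description: Use when writing or upgrading Next.js code, to follow current App Router conventions.
---

# Next.js Conventions

- Use the App Router (`app/` directory), not the Pages Router.
- Fetch data in async Server Components instead of `getServerSideProps`.
```

That "decide whether to load the body" step is exactly where, per Vercel's numbers, agents bailed 56% of the time.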
Enter Agents.md: The Quiet Overachiever
So what did Vercel do when the skills went rogue? They tapped into the agents.md file, an unsung hero among AI documentation formats. Unlike skills, agents.md keeps everything front and center, loaded into the system prompt like a reliable study guide. No decision-making needed—just pure, unadulterated context.
This approach is like the AI version of "keep it simple, stupid"—but, you know, in a way that doesn't make your AI feel bad about itself. Vercel found that agents.md provided "persistent context instead of having it on demand." When tested, this method scored a perfect 100% on all evaluations, making it the valedictorian of AI documentation techniques.
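Concretely, an agents.md file is just plain instructions that the harness injects into every run. A minimal sketch (hypothetical content):

```markdown
# AGENTS.md

## Next.js conventions (always in context)

- Use the App Router (`app/` directory) for all new routes.
- Fetch data in async Server Components; avoid `getServerSideProps`.
- Run the linter before committing changes.
```

Because this text rides along in the system prompt, the agent never has to decide whether to consult it.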
The Real Takeaway: Decision-Making Matters
Why did agents.md outperform skills? Vercel speculates it's all about reducing decision points for the AI. With agents.md, the context is always there, whereas skills require the AI to decide to use them. Imagine if you had to decide every time whether to use Google Maps or just wing it. Yeah, no thanks.
But don't throw skills into the Room of Requirement just yet. They still have their place in user-triggered workflows. Think of them as the Hagrid of AI documentation—useful for specific tasks like "upgrade my next.js version." But for general framework knowledge? Agents.md is the Hermione of the group.
So, What Now?
Vercel advises a future where we design for retrieval, not memory. In other words, compress your context like it's a TikTok video, and always test your setups with evaluations (aka evals) to ensure they're solid.
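"Test with evals" can be as lightweight as a pass/fail harness over the prompts you care about. A minimal sketch in TypeScript (all names here are hypothetical; `runAgent` is a stand-in for your actual model call, not Vercel's tooling):

```typescript
// One eval case: a prompt plus a predicate over the agent's output.
type EvalCase = { prompt: string; expect: (output: string) => boolean };

// Example cases checking that the agent follows current Next.js conventions.
const cases: EvalCase[] = [
  { prompt: "Create a page route", expect: (o) => o.includes("app/") },
  { prompt: "Fetch data on the server", expect: (o) => !o.includes("getServerSideProps") },
];

// Stand-in for a real model call; swap in your agent's API here.
function runAgent(prompt: string): string {
  return prompt === "Create a page route"
    ? "Place the file under app/page.tsx"
    : "Use an async Server Component and await your fetch";
}

// Returns the fraction of cases whose output satisfies its predicate.
function runEvals(evalCases: EvalCase[]): number {
  const passed = evalCases.filter((c) => c.expect(runAgent(c.prompt))).length;
  return passed / evalCases.length;
}
```

Run the same cases against each setup (skills vs. agents.md) and compare pass rates; that comparison is essentially how Vercel arrived at its 56%-unused and 100%-pass numbers.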
But here's the real question: will other tools follow Vercel's lead and embrace the less-is-more philosophy? Or will we find ourselves in a future where skills and agents.md coexist like peanut butter and jelly?
What do you think? Are you ready to embrace agents.md, or do you still have faith in skills? Let me know in the comments or wherever you share your deep thoughts with the world. Until next time, keep your agents sharp and your context sharper.
Yuki Okonkwo, AI & Machine Learning Correspondent
Watch the Original Video
Skills Had ONE Job (They Failed)
Better Stack
5m 51s
About This Source
Better Stack
Since launching in October 2025, Better Stack has rapidly garnered a following of 91,600 subscribers by offering a compelling alternative to traditional enterprise monitoring tools such as Datadog. With a focus on cost-effectiveness and exceptional customer support, the channel has positioned itself as a vital resource for tech professionals looking to deepen their understanding of software development and cybersecurity.