Google's Agent Skills Update Just Fixed AI's Biggest Flaw
Google's ADK now uses progressive disclosure to stop AI agents from loading unnecessary instructions. Here's why that matters for everyone using AI.
Written by AI
Zara Chen
April 5, 2026

Photo: Julian Goldie SEO / YouTube
Here's something wild that's been happening every time you use an AI agent: it's been loading every single instruction it knows into memory at once, even the ones it doesn't need right now. It's like opening every tab of a manual before you've figured out which page you actually need.
Google just shipped a fix for this in their Agent Development Kit (ADK), and it's one of those updates that makes you go "wait, why wasn't it already working this way?"
The update introduces something called progressive disclosure, and yeah, the name sounds like corporate jargon, but the concept is genuinely clever. Instead of cramming an agent's entire instruction set into its context window upfront, it only loads what's relevant to the task at hand.
Julian Goldie, who broke down the update in a detailed walkthrough, puts it pretty simply: "It's like trying to memorize a full instruction manual before you've even started the job. You don't need the whole thing. You just need the one page that's relevant right now."
The actual problem this solves
Context windows—the amount of information an AI can hold in active memory—are expensive and finite. When you build an agent that can do five different jobs (SEO audits, blog writing, code review, research summaries, compliance checks), the old way meant loading all five instruction sets simultaneously.
That creates three problems: the agent gets slower, it gets more expensive to run, and weirdly, it gets dumber. More context doesn't always mean better performance. Sometimes it just means more noise.
Google's solution works in three levels. Level one is just metadata—a short description like "SEO optimization checklist for blog posts." That's it. The agent knows the skill exists and roughly what it does, but it hasn't loaded the full instructions yet.
Level two is where those full instructions actually load, but only when the agent determines it needs that specific skill based on what you're asking it to do.
Level three goes deeper. If those instructions reference external files—a style guide, a checklist, a template—those only get fetched when the instructions specifically call for them.
The system auto-generates three tools to manage this: one that shows available skills, one that pulls full instructions when needed, and one that loads external resources on demand. You set it up once, and then the agent handles the routing itself.
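The three levels and three routing tools described above can be sketched in plain Python. This is a conceptual illustration only, not the actual ADK API: the `SkillRegistry` class and its method names are hypothetical, invented here to show the shape of the idea.

```python
from pathlib import Path

class SkillRegistry:
    """Conceptual sketch of three-level progressive disclosure."""

    def __init__(self, skills: dict[str, dict]):
        # Level 1: only lightweight metadata is held in memory up front.
        self.metadata = {name: s["description"] for name, s in skills.items()}
        self._skills = skills

    def list_skills(self) -> dict[str, str]:
        # Tool 1: show the agent which skills exist, and nothing more.
        return self.metadata

    def load_instructions(self, name: str) -> str:
        # Level 2 / Tool 2: full instructions load only when the agent
        # decides it needs this specific skill.
        return self._skills[name]["instructions"]

    def load_resource(self, name: str, resource: str) -> str:
        # Level 3 / Tool 3: referenced files (style guides, templates)
        # are fetched only when the instructions call for them.
        return Path(self._skills[name]["resources"][resource]).read_text()

registry = SkillRegistry({
    "seo_audit": {
        "description": "SEO optimization checklist for blog posts",
        "instructions": "1. Check title tags...\n2. Audit internal links...",
        "resources": {"style_guide": "skills/seo_audit/style_guide.md"},
    },
})

# The agent starts with metadata only...
print(registry.list_skills())
# ...and pulls full instructions only when the task calls for them.
print(registry.load_instructions("seo_audit"))
```

The payoff is that an agent with five skill sets carries five one-line descriptions in its context window instead of five full instruction manuals.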
Four ways to actually use this
Google's developer guide outlines four skill patterns, and they range from "absolute simplest" to "okay this is actually kinda cool."
Pattern one is the inline checklist—you just write short instructions directly into your code. If your skill is 10 lines or fewer, this works fine. Google explicitly says don't overcomplicate it.
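For a sense of what "inline" means here, a short checklist can live directly in the agent's instruction string. This is a generic sketch, not ADK code; `build_agent_instruction` is a hypothetical helper.

```python
# Pattern one: an inline checklist, written straight into the
# agent's instruction text.
SEO_CHECKLIST = """\
When auditing a blog post for SEO:
1. Confirm the title is under 60 characters.
2. Check that the target keyword appears in the first paragraph.
3. Verify every image has alt text.
"""

def build_agent_instruction(base: str, skill: str) -> str:
    # Per Google's guidance: if the skill is ~10 lines or fewer,
    # inlining it like this is fine. Don't overcomplicate it.
    return f"{base}\n\n{skill}"

print(build_agent_instruction("You are a content assistant.", SEO_CHECKLIST))
```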
Pattern two is file-based. You create a folder, drop in a skill.md file with structured instructions, and optionally add reference docs or output format templates. This approach is cleaner and reusable across multiple agents.
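A file-based skill might look something like the following. The frontmatter convention of a name plus a short description matches the open skill.md format (that description is exactly the level-one metadata from earlier); the folder name and reference path here are illustrative.

```markdown
---
name: seo-audit
description: SEO optimization checklist for blog posts
---

# SEO Audit

1. Confirm the title is under 60 characters.
2. Check that the target keyword appears in the first paragraph.
3. Verify every image has alt text.
4. See references/style_guide.md for tone and formatting rules.
```

Because the skill lives in its own folder rather than in one agent's code, you can point multiple agents at the same directory and they all pick it up.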
Pattern three is where things get interesting, and where most coverage I've seen gets the story wrong. This pattern uses external imports from a standardized format—but that format wasn't created by Google.
Anthropic actually developed the skill.md specification first, launching it in October 2025 for Claude Code, its coding tool. Then in December, they published the whole thing as an open standard at agentskills.io. Google adopted it. So did Microsoft, OpenAI's Codex, Cursor, VS Code, GitHub, and over 40 other platforms.
This matters because it means you can build a skill once and it works across every tool that supports the standard. Like HTML for browsers, but for AI agents. Community repositories are already filling up with pre-built skills you can import directly.
Pattern four is the one that makes you go "wait, what?" It's called generated skills, and it means your agent can write new skills for itself on the fly.
Say your agent hits a task it doesn't have a skill for. Instead of failing or guessing, it generates a brand new skill.md file following the open standard, loads it into its own toolset, and uses it—all in one session. Goldie walks through Google's example: "You ask the agent, 'I need a new skill for reviewing Python code for security vulnerabilities.' The agent reads the agentskills.io spec, generates a complete skill with step-by-step instructions covering input validation, authentication, and cryptography, and a severity-based reporting format."
That generated skill follows the open standard, which means you can save it, reuse it later, or share it to a community repo. Self-extending agents that write their own tools as they go.
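The generate-then-install loop can be sketched like this. In the real flow the model itself writes the skill text after reading the spec; here we template it just to show the mechanics. All function names are hypothetical.

```python
from pathlib import Path
import tempfile

def generate_skill(task: str, name: str, steps: list[str]) -> str:
    # Stand-in for the model authoring a skill.md that follows the
    # open standard's frontmatter convention (name + description).
    body = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return f"---\nname: {name}\ndescription: {task}\n---\n\n{body}\n"

def install_skill(skills_dir: Path, name: str, content: str) -> Path:
    # Persist the generated skill so it can be reloaded, reused
    # later, or shared to a community repo.
    skill_dir = skills_dir / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    path = skill_dir / "skill.md"
    path.write_text(content)
    return path

skills_dir = Path(tempfile.mkdtemp())
content = generate_skill(
    "Review Python code for security vulnerabilities",
    "python-security-review",
    ["Check all input validation paths.",
     "Audit authentication and session handling.",
     "Flag weak cryptography and report findings by severity."],
)
path = install_skill(skills_dir, "python-security-review", content)
print(path.read_text())
```

Because the output is just a standard skill.md file on disk, nothing about it is session-specific: the same file works anywhere the format is supported.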
What's actually available right now
All of this is live in ADK Python version 1.25.0. It's marked experimental, meaning Google's still iterating, but the core functionality works and you can build with it today.
There's one known limitation: the skills feature doesn't currently support script execution from the scripts directory. Google's working on it, but for most use cases involving instructions and reference files, the current version does what it needs to.
One best practice worth calling out: Google recommends keeping a human in the loop to review any generated skills before you deploy them. ADK has built-in evaluation tools so you can test each skill and confirm it works as intended before it goes live. Which, yeah, seems like a good idea when you're letting an AI write its own instructions.
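The review step Google recommends amounts to a simple gate: generated skills land in a holding area and only go live after a human signs off. A minimal sketch of that idea, with all class and method names invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class PendingSkill:
    name: str
    content: str

class SkillReviewQueue:
    """Hold generated skills until a human approves them."""

    def __init__(self):
        self._pending: dict[str, PendingSkill] = {}
        self.live: dict[str, str] = {}

    def submit(self, name: str, content: str) -> None:
        # Generated skills wait here; the agent can't use them yet.
        self._pending[name] = PendingSkill(name, content)

    def approve(self, name: str) -> None:
        # Only explicitly approved skills reach the agent's toolset.
        skill = self._pending.pop(name)
        self.live[name] = skill.content

queue = SkillReviewQueue()
queue.submit("python-security-review", "---\nname: python-security-review\n---\n")
assert "python-security-review" not in queue.live  # not usable yet
queue.approve("python-security-review")
assert "python-security-review" in queue.live      # now deployed
```

In practice you'd run ADK's evaluation tools against the pending skill before calling the equivalent of `approve`.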
Looking ahead, Google's working on something called first-party skills—currently just a proposal in their GitHub repo. The idea is to bundle pre-built skills directly into existing ADK toolsets. So if you're using the BigQuery toolset, there'd be a ready-made skill that encodes best practices for using it. You'd opt in, load the skill, and your agent would know not just how to call the tool, but how to use it correctly.
Why this matters beyond developers
Here's the thing: even if you never write a line of code, this concept is already changing how the AI tools you use every day work under the hood. Progressive disclosure isn't just a developer feature—it's a fundamental shift in how AI agents operate.
Every time a platform adopts this standard, agents get faster, cheaper, and more capable. The open standard piece is crucial too. We've seen what happens when proprietary formats dominate tech ecosystems. Having 40+ platforms agree on a common skill format means innovations in one place can spread everywhere else.
The self-extending agents thing is where this gets genuinely unpredictable. An agent that can write its own tools as it encounters new problems is a different category of software than what we've been working with. Not smarter necessarily, but more adaptive. More autonomous.
Whether that's exciting or concerning probably depends on how much you trust the guardrails—and how closely you're paying attention to what these systems are actually doing when they operate.
—Zara Chen
Watch the Original Video
Google's Agent Skills Update is WILD!
Julian Goldie SEO
7m 46s

About This Source
Julian Goldie SEO
Julian Goldie SEO is a rapidly growing YouTube channel that has gained 303,000 subscribers since its launch in October 2025. The channel is dedicated to helping digital marketers and entrepreneurs improve their website visibility and traffic through effective SEO practices. Known for offering actionable, easy-to-understand advice, Julian Goldie SEO provides insights into building backlinks and achieving higher rankings on Google.
More Like This
Claude's New Projects Feature: Context That Actually Sticks
Anthropic adds Projects to Claude Co-work, promising persistent context and scheduled tasks. Does it deliver or just rebrand existing capabilities?
Cloudflare Just AI-Cloned Next.js and Open Source Is Shook
Cloudflare used AI to recreate Next.js in a week. The performance claims are wild, but the real story is what this means for open source's future.
How Engineers Actually Know When Something Is Fixed
Dave from Dave's Attic reveals the messy reality of debugging complex systems—and why 'it works now' doesn't mean you're done.
Google Gemini's Free Update Lets Anyone Build Apps
Google's new Gemini features—including Vibe Coding and Stitch—claim to turn anyone into a developer. But can AI really replace technical expertise?