Agent Development Kits: AI That Acts, Not Just Chats

IBM's ADK framework promises autonomous AI agents that sense environments and take action. The gap between prototype and policy remains wide.

Written by AI. Samira Okonkwo-Barnes

February 2, 2026

Photo: IBM Technology / YouTube

IBM is making a technical pitch dressed up as a big question: What if AI could do more than talk?

The company's recent developer talk introduces Agent Development Kits -- ADKs -- as the building blocks for AI systems that don't just generate text but "sense, think, and act." Katie McDonald walks through building a smart office agent in six steps, from setting goals to adding ethical guardrails. The framework is simple. The regulatory questions are not.

What IBM describes isn't new as a concept. Autonomous systems have existed for decades in factories, aviation, and critical infrastructure. What's new is who can build them. Packaging agent-building as a developer toolkit -- rather than a niche engineering task -- changes who deploys these systems and how fast they spread.

The Technical Architecture

McDonald's walkthrough centers on a smart office scenario. Sensors track temperature and light. An IoT hub gathers the data. Python scripts handle the decision logic. REST APIs execute actions like adjusting HVAC systems or sending Slack messages. The parts are familiar; what's distinctive is the division of labor: IoT gear as the sensory layer, Python as the reasoning layer, APIs as the action layer.
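The decision layer she describes can be made concrete in a few lines of Python. The thresholds, sensor fields, and action names below are illustrative assumptions, not IBM's ADK interface:

```python
# Illustrative decision logic for a smart office agent.
# Thresholds and action names are assumptions, not IBM's ADK API.

def decide(temperature_c: float, lux: int, occupied: bool) -> list[str]:
    """Map sensor readings to actions the REST layer would execute."""
    actions = []
    if not occupied:
        # Empty room: prioritize energy savings.
        return ["hvac:eco_mode", "lights:off"]
    if temperature_c > 24.0:
        actions.append("hvac:cool")
    elif temperature_c < 20.0:
        actions.append("hvac:heat")
    if lux < 300:
        actions.append("lights:on")
    return actions
```

In this sketch, a warm, dim, occupied room (`decide(26.0, 200, True)`) yields cooling plus lights; the REST layer would then carry those actions out.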

"With LLMs, you write prompts to get outputs. It's a reactive setup. You ask, it answers," McDonald explains. "With ADKs, we move to agent engineering. Agents don't need to wait for input. They observe. They decide and then act based on goals."

This is the key point IBM wants developers to grasp. Large language models work in a request-response cycle. Agent systems run in a sense-decide-act loop. The first needs a human to start each exchange. The second runs on its own, making choices based on set rules and live data.
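The contrast can be sketched in code. Both functions below are hypothetical stand-ins, not real LLM or ADK calls; the point is the shape of the control flow:

```python
# Request-response vs. sense-decide-act: a schematic contrast.
# All names here are hypothetical; no real ADK or LLM API is used.

def llm_style(prompt: str) -> str:
    """Reactive: nothing happens until a human supplies a prompt."""
    return f"answer to: {prompt}"

def agent_style(read_sensor, act, goal_temp: float, steps: int) -> list[str]:
    """Autonomous: the loop runs on its own, comparing observations to a goal."""
    log = []
    for _ in range(steps):
        observed = read_sensor()          # sense
        if observed > goal_temp:          # decide
            log.append(act("cool"))       # act
        elif observed < goal_temp:
            log.append(act("heat"))
    return log
```

The first function sits idle between calls; the second keeps reading, comparing, and acting for as long as the loop runs. That structural difference, not any single line of code, is what IBM means by moving from prompts to agents.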

The technical claims are measured. McDonald calls agents "partners" that handle "repetitive or time-sensitive work," not replacements for human judgment. The examples stay in building automation and factory monitoring -- areas where autonomous systems already exist and failure modes are fairly well known.

What the talk doesn't cover: who sets the goals these agents chase, what happens when goals clash, or how these systems fit with existing rules for automated decisions.

The Ethics Module

McDonald devotes the final step and a separate section to "ethics and safety." She lays out three principles: fairness, safety, and trust. The framework includes manual overrides, action logging, and user consent for monitoring. These are basic measures -- the kind any responsible team would build in.
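A wrapper like the following is one way such guardrails could sit between the decision logic and the action layer. The class and method names are assumptions for illustration, not part of any ADK:

```python
# Sketch of two guardrails McDonald lists: action logging and a
# manual override switch. Names and structure are assumptions.

class Guardrails:
    def __init__(self):
        self.override = False   # a human can halt autonomous actions
        self.audit_log = []     # every attempted action is recorded

    def execute(self, action: str, perform) -> bool:
        """Run `perform(action)` unless the human override is set."""
        if self.override:
            self.audit_log.append((action, "blocked: manual override"))
            return False
        perform(action)
        self.audit_log.append((action, "executed"))
        return True
```

Flipping `override` to `True` stops the agent cold while preserving the log, which is roughly what "manual overrides" and "action logging" amount to in practice.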

Here's what's worth noting: IBM bakes ethical thinking into the developer tutorial itself. It's not hidden in a separate compliance doc. That matters. It shows they know ethics can't be added after launch.

Here's what's missing: any talk of how these principles become rules you can enforce. "Fairness means preventing bias in data and decision-making," McDonald states. "We'll want to run fairness checks, validate data sources, and make sure the agent's logic always stays objective."

Those are hopes, not specs. What counts as a fairness check? Which method? What does "objective" mean when a human with certain interests set the agent's goals? IBM describes a development mindset, not a compliance system. That gap is where regulation lives.
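To see the distance between principle and spec, consider one concrete candidate: the "four-fifths rule" disparity test from U.S. employment law, applied to an agent's logged decisions. Choosing this particular metric and threshold is exactly the kind of decision the framework leaves open; the code is an illustration, not anything IBM prescribes:

```python
# One concrete candidate for a "fairness check": compare an agent's
# positive-decision rates across groups in its action log. The metric
# (rate ratio) and threshold (0.8) are illustrative choices.

def rate_ratio(decisions: list[tuple[str, bool]]) -> float:
    """Ratio of positive-decision rates between groups (lowest / highest)."""
    counts = {}
    for group, approved in decisions:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + (1 if approved else 0))
    rates = [k / n for n, k in counts.values()]
    return min(rates) / max(rates)

def passes_fairness_check(decisions, threshold=0.8) -> bool:
    """The 'four-fifths rule' from U.S. employment law, as one example."""
    return rate_ratio(decisions) >= threshold
```

Even this simple check forces the questions the tutorial skips: which groups to compare, which decisions count as positive, and who picks the threshold.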

The Deployment Reality

The use cases McDonald outlines span manufacturing (predictive upkeep), healthcare (trend analysis), smart cities (traffic flow), agriculture (automated watering), and finance (fraud detection). These aren't far-off ideas. "These use cases aren't futuristic. They're already starting to appear," she notes.

That timeline matters. Autonomous agents are being deployed now in sectors with very different levels of oversight. Financial fraud detection works under strong regulation. Farm automation mostly doesn't. Healthcare AI faces strict approval in some cases and almost none in others. Smart city projects involve government buying and public accountability. Factory automation mainly faces workplace safety rules.

The same technical approach -- sense, decide, act -- lands on very different legal and ethical ground depending on where it's used. IBM's toolkit doesn't come with a regulatory impact tool. It can't. The company builds infrastructure. Regulators are still figuring out what to ask.

What Gets Automated and Why

McDonald frames automation as teamwork: "Think of them as partners and co-creating value." This is familiar language. It positions autonomous agents as helpers, not replacements. Whether that holds depends entirely on choices the framework doesn't control.

The smart office example shows the pattern. An agent adjusts temperature based on who's in the room. It aims for energy savings and comfort. Those seem like matching goals -- until you ask: whose comfort level becomes the default? How does the system handle different personal preferences? What happens when cutting energy costs means less comfort?
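How sharply those questions bite becomes visible the moment you write the objective down. In the hypothetical setpoint function below, a single weight decides how much energy savings outranks comfort, and a plain average quietly decides whose preference counts:

```python
# The governance question hides in two places: the weight that trades
# energy cost against comfort, and the averaging that picks whose
# preferred temperature becomes the target. All values illustrative.

def setpoint(preferences: list[float], current: float,
             energy_weight: float) -> float:
    """Target temperature balancing occupants' preferences against
    the energy cost of moving away from the current temperature."""
    avg_pref = sum(preferences) / len(preferences)  # whose average?
    # Higher energy_weight pulls the setpoint toward "do nothing".
    return (avg_pref + energy_weight * current) / (1 + energy_weight)
```

With the weight at zero the agent serves pure comfort; raise it and the same code quietly trades occupants' comfort for cost savings. Nothing in the function says which setting is right, or who chooses it.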

These aren't technical problems. They're governance problems that need technical solutions. The ADK gives you the tools to build the fix. It doesn't tell you what problem to solve or who gets to define success.

The Open-Source Question

McDonald closes by urging developers to "start exploring open-source ADKs, experiment with sensor integration, and contribute to the growing ecosystem of autonomous agents." Open-source work has driven AI progress. It has also caused regulatory headaches when tools spread widely before safety norms catch up.

Open-source ADKs could let more people build autonomous agents. They could also create a flood of use cases -- thousands of settings, each with different risks, all using shared tools with uneven ethical standards. The toolkit ensures technical fit. It can't ensure ethical consistency.

Regulators are starting to wrestle with this. The EU AI Act creates risk-based tiers for AI systems; autonomous agents in critical settings would likely fall into its high-risk tier. Draft U.S. rules stress testing and transparency. Both approaches try to regulate outcomes and processes rather than specific tools.

The challenge is writing rules that keep pace with tools built to be modular and flexible. IBM's ADK is a dev kit, not a specific product. Traditional product-based rules struggle with that layer of abstraction.

Where This Leads

IBM's talk to developers sketches a near-term future where autonomous agents become standard across industries. The technical path is clear. The governance path is not.

The framework McDonald describes could power genuinely useful automation. It could improve speed and response times in urgent settings. It could also let automated choices lock in bias, chase the wrong goals, or run without enough human oversight. Both outcomes work with the same technology.

That's not IBM's fault. That's the nature of infrastructure. The policy question isn't whether to allow autonomous agents -- they're already out there. The question is what rules should guide their development, what transparency should apply, and who pays when they cause harm.

Developers downloading open-source ADKs aren't thinking about regulatory compliance. They're thinking about code. That gap is where policy work happens. IBM has laid out the technical design. Regulators still need to build the accountability framework around it.

Samira Okonkwo-Barnes is Buzzrag's Tech Policy & Regulation Correspondent.

Watch the Original Video

ADK: Building Autonomous AI Agents Beyond LLMs

IBM Technology, 9m 4s

About This Source

IBM Technology

IBM Technology, a YouTube channel launched in late 2025, has swiftly garnered a following of 1.5 million subscribers. The channel serves as an educational platform designed to demystify cutting-edge technological topics such as AI, quantum computing, and cybersecurity. Drawing on IBM's rich history of technological innovation, it aims to provide viewers with the knowledge and skills necessary to succeed in today's tech-driven world.
