Claude's Loop Feature Isn't What the Hype Suggests
Anthropic's new loop skill for Claude Code has developers excited, but they're misunderstanding its purpose. Here's what it actually does.
Written by AI
Bob Reynolds
March 11, 2026

Photo: Better Stack / YouTube
Anthropic released a loop feature for Claude Code last week, and the response tells you everything about where AI tooling is right now. Developers immediately started building Discord bots and Telegram integrations, treating it like a drop-in replacement for OpenClaw. The problem is they're solving the wrong problem with the wrong tool.
The folks at Better Stack walked through what loop actually does versus what people think it does, and the gap is instructive. Not because Anthropic failed to communicate—the documentation is clear enough—but because we're so hungry for permanent AI automation that we'll retrofit any new feature into that narrative.
What Loop Actually Is
The feature does what its name suggests: it runs prompts at intervals. Minutes, hours, days. You write something like "say hello" or "check this log file" and specify when you want it to happen. The implementation uses cron-style scheduling under the hood, with some sensible guardrails like jitter to prevent multiple jobs from hammering Anthropic's API simultaneously.
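To illustrate the general idea behind jittered interval scheduling (this is a conceptual sketch, not Anthropic's implementation; the function names and values are made up):

```python
import random
import time

def next_fire_delay(interval_s: float, max_jitter_s: float) -> float:
    """Delay before the next run: the base interval plus a random
    offset, so jobs created at the same moment don't all fire at once."""
    return interval_s + random.uniform(0.0, max_jitter_s)

def run_every(task, interval_s: float, max_jitter_s: float = 10.0):
    """Run `task` forever at jittered intervals (Ctrl-C to stop)."""
    while True:
        time.sleep(next_fire_delay(interval_s, max_jitter_s))
        task()

# Hypothetical stand-in for "run this prompt every 3 minutes":
# run_every(lambda: print("say hello"), interval_s=180)
```

The jitter matters when many users (or many tasks) schedule on round intervals: without it, every "every 3 minutes" job lands on the same second and hammers the API together.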
"The prompt can contain anything you want," the Better Stack demonstration shows. "Skills like I could use this tweet skill to write me a tweet along the lines of Claude has an awesome new loop skill, and I'll set that to run every 3 minutes."
That flexibility is real. Loop can invoke any skill, read files, run MCP tools—basically anything Claude Code supports in a normal prompt. The minimum granularity is minutes (seconds aren't supported yet), and it uses your machine's local time, not UTC. Standard automation fare.
What makes this interesting is what happens when you treat it like something it isn't.
The Three-Day Problem
Here's where expectations meet architecture. Tasks created with loop auto-expire after three days. The reasoning is sound: prevent runaway jobs you've forgotten about. The implementation makes perfect sense if you understand what loop is for.
But if you've hooked loop up to monitor your Telegram messages—which people absolutely did—day four is going to be confusing.
The second limitation cuts deeper: loop stores tasks in session memory. Close your Claude Code session, those scheduled tasks vanish. "If I close this Claude Code session, I'm going to clear my terminal and create a new one," the video demonstrates. "Then if I ask it to list my scheduled tasks, you'll see that nothing has been scheduled even though I scheduled two tasks in the previous session."
This isn't a bug. It's the design. Loop was built for tasks you need to rerun manually within the same session. Checking the last 50 lines of a continuously updating log file. Monitoring a job queue. Seeing if new issues appeared in your project. Work that needs repeating while you're actively working, not automation that runs whether you're there or not.
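The vanishing-tasks behavior is what you would expect from any scheduler that keeps its task table in process memory rather than on disk. A toy version, purely illustrative:

```python
class SessionScheduler:
    """Toy scheduler whose task table lives only in memory:
    when the process (the session) ends, the tasks go with it."""

    def __init__(self):
        self.tasks = {}  # name -> prompt; never written to disk

    def schedule(self, name: str, prompt: str) -> None:
        self.tasks[name] = prompt

    def list_tasks(self) -> list[str]:
        return sorted(self.tasks)

# First session schedules two tasks...
s1 = SessionScheduler()
s1.schedule("hello", "say hello")
s1.schedule("logs", "check the last 50 lines of the log")

# ...but a fresh session starts with an empty table, just like
# reopening Claude Code after clearing the terminal.
s2 = SessionScheduler()
```

Persistence across sessions would require writing the table somewhere durable (disk, a database), which is exactly the architectural line loop does not cross.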
The naming should have been the tell. They called it loop, not schedule. That distinction matters.
What People Actually Want
The OpenClaw comparison is revealing. What developers want is persistent automation—AI agents that keep running after they close their laptop, that survive updates and reboots, that don't require babysitting. Loop doesn't provide that, but the fact that people immediately tried to make it provide that shows the demand.
Anthropic does offer persistent scheduling, just not through loop. Claude Desktop has a scheduled tasks feature that runs indefinitely as long as the application is open. Different interface, different architecture, different promises. You can set a name, description, prompt, even change the model and permissions. These tasks don't expire after three days. They don't vanish when you close a session.
"The benefit of adding a scheduled task inside Claude Desktop is of course the task will run forever as long as the computer is switched on and the Claude Desktop app is opened," the video explains. That qualifier—"as long as the computer is switched on and the Claude Desktop app is opened"—is doing important work.
Claude Co-work adds another layer. Its scheduled option runs in a sandboxed environment rather than on your local machine, which matters if you're deciding where to put file system operations. The ecosystem is fragmenting into different execution contexts, each with its own persistence model.
The Pattern Recognition Problem
I've watched this cycle for five decades now. New capability ships. People immediately try to use it for whatever they were already trying to do. The capability gets blamed when it doesn't fit.
Loop is useful for what it was designed for. Session-based recurring tasks that need manual oversight. The fact that it generated more excitement than usefulness—as Better Stack notes, "this is the first feature that has more hype around it than the usefulness of the actual feature"—says more about what's missing in the AI automation landscape than what's wrong with loop.
The three-day expiration and session storage aren't limitations to work around. They're the feature working as intended. If you need different guarantees, you need different tools. Claude Desktop's scheduler for truly persistent work. Kenneth's plugin if you prefer terminal workflows. The right tool for the job, old-fashioned as that sounds.
What loop reveals is that we're still in the early stages of figuring out what AI automation actually looks like at scale. Not the demos, not the proofs of concept, but the boring stuff that runs reliably when nobody's watching. The infrastructure for that doesn't quite exist yet, so we keep trying to build it from whatever ships next.
Anthropic will probably expand loop's capabilities over time—different expiration windows, maybe disk persistence as an option. But the core tension remains: automation you supervise versus automation you trust to run unsupervised. Those are different problems requiring different solutions, and conflating them produces three-day surprises.
Bob Reynolds is Buzzrag's Senior Technology Correspondent
Watch the Original Video
Claude Code Loop: The Feature Everyone Misunderstands
Better Stack
5m 46s

About This Source
Better Stack
Since launching in October 2025, Better Stack has rapidly garnered a following of 91,600 subscribers by offering a compelling alternative to traditional enterprise monitoring tools such as Datadog. With a focus on cost-effectiveness and exceptional customer support, the channel has positioned itself as a vital resource for tech professionals looking to deepen their understanding of software development and cybersecurity.
More Like This
Claude Code's New Effort Levels: Granular Control or Complexity?
Anthropic's Claude Code introduces configurable effort levels for AI workflows. Does granular control improve automation, or just add another layer of optimization?
AutoResearch: AI That Optimizes Itself While You Sleep
Andrej Karpathy's AutoResearch lets AI run hundreds of experiments autonomously. Here's what it means for trading, marketing, and development.
Claude Code's Ultra Plan: When Speed Meets Quality
Anthropic quietly released Ultra Plan for Claude Code. It uses parallel AI agents to plan projects faster—and execution follows suit. Here's what's happening.
Dokploy Promises Vercel Features at VPS Prices
A new tool claims to deliver platform-as-a-service convenience on cheap VPS infrastructure. Better Stack demonstrates what works and what doesn't.