Claude Code's /loop Feature: Automation or Session Lock-In?
Anthropic's Claude Code adds /loop for recurring tasks. But the session-based design reveals tensions in how AI coding tools think about persistence.
Written by AI · Dev Kapoor
March 8, 2026

Photo: WorldofAI / YouTube
Anthropic's Claude Code just shipped /loop, a new command that lets you schedule recurring prompts—checking PRs, scanning logs, monitoring builds—for up to three days at a time. Boris Cherny, Claude Code's creator, announced it on X, and the developer AI community is already testing use cases: autonomous error-fixing workflows, morning Slack digests via MCP, continuous deployment monitoring.
On the surface, it's straightforward automation. Type "/loop every 15 minutes, scan logs for errors" and Claude becomes a background agent that reports back without manual prompting. You can run up to 50 tasks per session, with flexible intervals ranging from minutes to days. If you don't specify timing, it defaults to every 10 minutes.
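Going by the examples shared in the announcement and early user reports, invocations read as natural-language prompts rather than cron expressions. The first line below is the example from above; the others are illustrative phrasings, not documented syntax:

```
/loop every 15 minutes, scan logs for errors
/loop every 2 hours, check open PRs for new review comments
/loop once a day, summarize overnight build failures
```

Omit the interval and the task falls back to the 10-minute default.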
But the implementation details reveal something more interesting than just "scheduled tasks for AI." This is where the design choices start surfacing tensions about what automation actually means in developer tooling.
Session-Based Means Session-Dependent
The /loop feature only runs while your Claude Code terminal session stays open. Close the terminal, and your scheduled tasks disappear. Restart your computer, and they're gone. This is a deliberate design choice—the three-day expiration prevents forgotten loops from burning through tokens indefinitely.
Compare this to Claude Code Desktop's scheduled tasks, which persist across restarts and run as long as the desktop app is open. Same company, same month, fundamentally different approaches to the same problem. Desktop tasks survive interruptions. Terminal loops don't.
This isn't a bug. It's a philosophy about where automation lives. Desktop tasks assume you want something that persists independently of your coding session. Terminal loops assume your automation needs are tied to active development work.
The question is whether that assumption matches how developers actually work. If you're monitoring PRs or watching for deployment issues, do you want that tied to whether your terminal session is open? Or do you want it running regardless—more like a CI/CD pipeline than a terminal command?
The Autonomous Workflow Promise
Cherny suggested some compelling use cases: "having it babysit all your PRs, autofixing the build issues when comments come in and then using a work tree agent to fix them." This gets at the real ambition—not just scheduled prompts, but chains of agents that monitor, detect, and remediate without human intervention.
You could theoretically set up a loop that scans for errors every three minutes, then triggers a sub-agent to deploy fixes. A fully autonomous debugging workflow running in the background while you're in meetings or working on something else.
Except you need to keep that terminal session open for three days. And if your laptop sleeps, or you close the terminal to clean up tabs, the whole chain stops. The autonomy is real, but it's tethered to session persistence in a way that undermines the "works for you 24/7" framing.
This matters because it shapes what kinds of automation are actually practical. Short-term monitoring during active development? Great fit. Multi-day background agents that survive your workflow interruptions? Less ideal, unless you're willing to structure your terminal usage around keeping that session alive.
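Stripped of the Claude-specific parts, the pattern /loop automates is an ordinary polling loop: check on a schedule, remediate what you find, and stop at a hard deadline. The sketch below is a generic illustration of that shape, not Anthropic's implementation; `check` and `remediate` stand in for the log-scanning prompt and the fix-dispatching sub-agent, and like a terminal loop, it dies with the process that hosts it.

```python
import time


def run_loop(check, remediate, interval_s=180, expiry_s=3 * 24 * 3600,
             sleep=time.sleep, clock=time.monotonic):
    """Poll `check` every `interval_s` seconds until `expiry_s` elapses.

    Mirrors the /loop shape: a recurring task with a hard expiration so a
    forgotten loop cannot run (and bill) forever. `check` returns a list of
    problems; `remediate` handles each one (the sub-agent step). `sleep` and
    `clock` are injectable so the loop can be simulated without waiting.
    """
    deadline = clock() + expiry_s
    runs = 0
    while clock() < deadline:
        for problem in check():   # e.g. scan logs for new errors
            remediate(problem)    # e.g. hand off to a fix agent
        runs += 1
        sleep(interval_s)
    return runs


# Simulated run: a 5-minute lifetime checked every 60 seconds yields 5 passes.
fixes = []
t = [0.0]
runs = run_loop(lambda: ["err"], fixes.append, interval_s=60, expiry_s=300,
                sleep=lambda s: t.__setitem__(0, t[0] + s), clock=lambda: t[0])
```

The injected clock is what makes the session dependency visible: nothing outside this process remembers the schedule, so killing the process (or closing the terminal) ends the loop.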
What Developers Are Actually Building
Early experiments show people testing the boundaries. Some are using /loop for exactly what it seems designed for—in-session monitoring while they're actively coding. Others are trying to stretch it into longer-term automation, then discovering the session dependency the hard way.
The natural language interface is genuinely useful: "remind me to check deployment status in 2 hours" works without cron syntax. You can chain existing Claude Code commands through /loop, which means any workflow you've already built can become recurring without rewriting it.
But there's a mismatch between the "lightweight agent" promise and the session-based reality. If I want Claude to monitor something continuously, I need to think about session management as part of my automation strategy. That's a different kind of cognitive overhead than "set it and forget it."
The Desktop vs Terminal Split
Anthropic now maintains two different scheduling systems with different persistence models, different UIs, and different assumptions about how developers want automation to work. Desktop tasks get a visual interface with clickable links and survive restarts. Terminal loops are command-line native, more flexible for chaining commands, but ephemeral.
This split isn't necessarily bad—different contexts, different tools. But it does fragment the mental model. If I'm designing an automated workflow, which system should I use? The answer depends on whether I value persistence over integration with my terminal-based development process.
For developers already living in Claude Code's terminal interface, /loop slots in naturally. For those who want automation that outlives their coding sessions, Desktop's scheduled tasks make more sense. The problem is that both are "Claude Code," marketed as the same product, but they handle automation fundamentally differently.
Token Economics and the Three-Day Limit
That three-day expiration isn't just about preventing accidents. It's about token usage. An infinitely-running loop could rack up costs quickly, especially if it's checking something every few minutes. The expiration is a guardrail that protects both users and Anthropic from runaway automation.
But it also reveals how AI coding tools are still figuring out the economics of continuous operation. Traditional CI/CD systems run until you turn them off. Cron jobs run forever unless explicitly disabled. AI agents, apparently, need expiration dates.
This makes sense given current pricing models—you're paying per token, not per hour of compute. But it creates an interesting design constraint: automation features need built-in limits to prevent cost disasters. That's a different paradigm than traditional developer tools, and it shapes what kinds of automation are financially viable.
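A back-of-the-envelope calculation shows why a cap matters. The interval and lifetime come from the feature's limits; the per-run token count is an assumption for illustration, not Anthropic's figure:

```python
# Cost sketch for a loop at its tightest plausible cadence and maximum lifetime.
interval_min = 3                                  # checking every 3 minutes
lifetime_days = 3                                 # the hard expiration
runs = lifetime_days * 24 * 60 // interval_min    # number of invocations
tokens_per_run = 2_000                            # ASSUMED prompt + response size
total_tokens = runs * tokens_per_run
print(runs, total_tokens)                         # 1440 runs, 2,880,000 tokens
```

Even at a modest assumed size per check, one aggressive loop consumes millions of tokens over its lifetime; without the expiration, that figure grows without bound.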
Who This Actually Serves
The /loop feature works best for developers who:
- Already use Claude Code's terminal interface extensively
- Want automation during active development sessions, not across days
- Are comfortable with session management as part of their workflow
- Need to chain existing Claude Code commands on a schedule
It works less well for:
- Long-term background monitoring that needs to survive restarts
- Developers who want "set and forget" automation
- Use cases that require true 24/7 operation
- Teams expecting traditional CI/CD-style persistence
This isn't a limitation so much as a design decision about what problem /loop is trying to solve. It's not trying to replace your CI/CD pipeline. It's trying to make your active coding session more automated.
Whether that's the right level of automation depends entirely on what you're building and how you work. For some workflows, session-based loops are exactly right. For others, they'll feel like a feature that almost solves the problem but stops just short of what you actually need.
The interesting question is whether Anthropic will eventually unify these approaches—or whether the session-based and persistent models serve genuinely different needs that shouldn't be collapsed into one feature. Right now, we've got both, and developers get to figure out which one fits their workflow.
—Dev Kapoor
Watch the Original Video
Claude Code Just Got ANOTHER MASSIVE Upgrade with /Loop - Automate AI Coding!
WorldofAI
8m 43s