Token Optimization
7 stories tagged Token Optimization.
Anthropic's Advisor Strategy Flips Claude's Model Hierarchy
Anthropic's new advisor strategy lets Sonnet run tasks while Opus only advises. AI LABS tested it on real apps—here's what actually works.
The Caveman Skill Makes AI Shut Up and Save You Money
New Claude skill cuts AI verbosity by 45%, potentially saving token costs—but the math gets complicated. Here's what actually works and what doesn't.
Claude's New Monitoring Tool Fixes AI's Expensive Idling Problem
Anthropic's Claude Code Monitor lets AI agents sleep instead of burning tokens polling for problems that don't exist. It's smarter, but is it new?
When Being Less Articulate Makes AI Models More Accurate
A GitHub repo forcing Claude to 'talk like a caveman' went viral. The research behind it reveals something unexpected about how large language models fail.
Your AI Coding Assistant Is Eating Your Tokens (Here's Why)
Think you're not paying per token? Think again. How AI coding tools secretly burn through your limits—and what developers are doing about it.
TypeScript Bash Implementation Cuts AI Token Costs by 95%
JustBash runs Bash commands in TypeScript without infrastructure, reducing AI agent token usage from 133,000 to 6,000 in real-world tests.
Claude's Memory Problem Gets an Open-Source Fix
Claude-Mem adds persistent memory to Anthropic's coding assistant, claiming 95% token savings. But does solving statelessness create new problems?