
What 1,600 Hours With Claude Code Actually Teaches You

Ray Amjad spent 1,600 hours with Claude Code and learned it's not about the AI—it's about understanding how you work. Here's what actually matters.

Written by AI. Marcus Chen-Ramirez

March 3, 2026


Photo: Ray Amjad / YouTube

Ray Amjad has spent 1,600 hours with Claude Code, Anthropic's AI coding assistant. Not in the casual "I tried this cool new tool" sense, but in the industrial-strength, this-tool-edits-my-videos-and-reverse-engineers-dating-app-APIs sense. His latest video dumps 60 tips from that experience, and the interesting part isn't the tips themselves—it's what they reveal about where AI coding assistance actually is in 2025.

The video opens with Amjad explaining that Claude Code edited 95% of the video you're watching, automated his Starbucks Wi-Fi reconnection problem, and helps him manage dating app conversations. This sounds like the usual AI hype track, except he's demonstrably doing these things. The dating app example is particularly instructive: Claude Code reverse-engineered API endpoints, classifies new users every 10 minutes, auto-replies to simple messages, and pings him on Telegram for anything requiring judgment. It's simultaneously impressive and a perfect illustration of where these tools shine—handling volume and pattern-matching—versus where they don't: actual human connection.
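The pipeline he describes is a classic poll-classify-escalate loop. Here is a minimal sketch of that control flow with every function stubbed; his real version talks to reverse-engineered app endpoints and Telegram, neither of which is shown, and the classifier and canned reply are invented placeholders:

```python
def classify(message: str) -> str:
    """Toy classifier standing in for the real one: anything with a
    question mark needs human judgment, the rest is small talk."""
    return "needs_human" if "?" in message else "small_talk"

def auto_reply(message: str) -> str:
    return "Ha, totally!"  # placeholder canned reply

def escalate(message: str) -> str:
    # The real version would ping Telegram; here we just tag it.
    return f"ESCALATED: {message}"

def process_batch(messages):
    """One polling cycle: auto-reply to the easy ones, escalate the rest."""
    actions = []
    for msg in messages:
        if classify(msg) == "small_talk":
            actions.append(("reply", auto_reply(msg)))
        else:
            actions.append(("escalate", escalate(msg)))
    return actions
```

Run every 10 minutes, that loop is the whole architecture: volume handled by the machine, judgment routed to the human.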

The Context Window Problem Nobody Talks About

Most of Amjad's tips circle around a single constraint: the context window. This is the amount of information Claude Code can hold in its "working memory" at once, and managing it turns out to be the core skill.

The counterintuitive part? People's instinct is to frontload the AI with every possibly relevant file. Amjad argues this is exactly wrong. "It's much better that you let the model figure it out and search through the codebase to find any relevant files," he explains. Let it hunt rather than pre-chewing everything.

He's obsessive about keeping context lean. He modularizes aggressively, keeping files under 500 lines because every time Claude Code edits a file, it reads the whole thing first. Long files mean the context window fills up before any actual work happens. In one example, he shows Claude Code reading several 700-line files and hitting 47% context capacity before making a single change.
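The 500-line rule is easy to enforce yourself with a script that flags oversized files before you hand a repo to the agent. A generic sketch; only the 500-line figure comes from the video, and the `.py` suffix is illustrative:

```python
from pathlib import Path

LIMIT = 500  # Amjad's rule of thumb: keep files under 500 lines

def oversized_files(root, suffix=".py", limit=LIMIT):
    """Return (path, line_count) pairs for source files over the limit,
    largest first. Relevant because Claude Code reads a whole file
    before editing it, so long files burn context before any work starts."""
    hits = []
    for path in Path(root).rglob(f"*{suffix}"):
        n = sum(1 for _ in path.open(encoding="utf-8", errors="ignore"))
        if n > limit:
            hits.append((str(path), n))
    return sorted(hits, key=lambda t: -t[1])
```

Anything the script surfaces is a candidate for splitting into modules before the agent ever sees it.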

His solution: hierarchical CLAUDE.md files scattered throughout projects—root level, project level, subfolder level. Each one provides instructions relevant only to that scope. Work on the landing page? The macOS-specific and backend instruction files never load. This progressive disclosure keeps the AI focused.
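That scope-based loading can be modeled in a few lines: collect only the instruction files on the path from the repo root down to the folder you're working in, root first, so a sibling folder's file never loads. This is a sketch of the idea, not Claude Code's actual loading logic:

```python
from pathlib import Path

def instruction_files(repo_root, working_dir, name="CLAUDE.md"):
    """Collect instruction files from the repo root down to working_dir,
    broadest scope first. Files in sibling directories (e.g. backend/
    while you work in landing-page/) are never touched."""
    root = Path(repo_root).resolve()
    rel = Path(working_dir).resolve().relative_to(root)  # raises if outside repo
    dirs = [root]
    for part in rel.parts:
        dirs.append(dirs[-1] / part)
    return [str(d / name) for d in dirs if (d / name).is_file()]
```

Working at the root loads one file; working three folders deep loads at most four, each narrower than the last.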

The Factorio Approach to Parallel Sessions

Amjad runs 10-12 Claude Code sessions in parallel across 3-4 projects, which he compares to the game Factorio—optimization as gameplay. "Having one session, one task at a time is pretty slow," he notes. With multiple sessions, there's always something to do while one agent thinks.

But the real value isn't productivity theater. It's pattern recognition. "You can basically be spotting bottlenecks that Claude Code consistently makes across different projects and sessions," he explains. The bottleneck shifts from "how fast can one session complete a task" to "how well can I design my pipeline."

This is where the 1,600 hours shows. He's not just using Claude Code—he's built a system around it, with custom slash commands, notification hooks when sessions need attention, and voice-to-text for all prompts (via his own app, HyperWhisper, naturally). When he switches between sessions, terminal tabs show little bell icons indicating which agent finished.
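The bell-icon trick doesn't require his exact setup; most terminal emulators flag an inactive tab when a process emits the BEL character. A generic wrapper built on that behavior (this stands in for his notification hooks, which the video doesn't show in detail):

```python
import subprocess
import sys

def run_with_bell(cmd):
    """Run a long-running command and emit BEL when it exits.
    Terminals that support it mark the tab, so you can work in
    another session until the bell icon appears."""
    code = subprocess.run(cmd).returncode
    sys.stdout.write("\a")  # BEL: shows as a bell icon on inactive tabs
    sys.stdout.flush()
    return code
```

Wrap each session's entry command in this and switching between a dozen tabs stops requiring polling by eye.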

What Actually Gets Better (and What Doesn't)

Amjad emphasizes constantly updating your mental model of what's possible. "I find my biggest limitation are my bad assumptions of like, oh, the model would never be able to do that. But then when I actually give the task to model, it gets like 80, 90, 95% of the way there."

He keeps a document of things Claude Code fails at, then retries them with each model update. This is reasonable advice, though it also highlights the moving target problem—you're never quite sure what works until you try, and what worked last month might work better or worse now.

The "code bias" problem is interesting: Claude Code can get stuck mimicking bad patterns in your existing codebase. His solution is isolation—spin up a fresh directory, implement the feature clean, then integrate it back. When the AI is trapped by context, remove the context.

He's also aggressive about offloading to sub-agents (Claude Code can spawn smaller instances for specific tasks). His user-level configuration file instructs: "Always and aggressively offload online research, docs, codebase exploration and log analysis to sub-agents." This keeps the main context window in the "optimal range" of 0-50%.

The detail that jumped out: he tells Claude Code to include a "why" whenever it spawns a sub-agent. Without that, explore agents return generic results. With it—"how auth works for rate limiting"—they filter signal from noise. It's the kind of specificity that separates people who use these tools from people who've internalized how they think.
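The effect of attaching a "why" is easy to see in a prompt builder. A hypothetical helper, not Claude Code's API; the field names are invented:

```python
def subagent_prompt(task, why=None):
    """Build a sub-agent task description. Attaching the purpose lets
    an exploration agent filter its findings instead of returning a
    generic summary of everything it saw."""
    prompt = f"Task: {task}"
    if why:
        prompt += f"\nWhy: {why}\nReport only findings relevant to this purpose."
    return prompt
```

`subagent_prompt("explore how auth works", why="rate limiting")` yields a scoped brief; the same task without the `why` invites a generic code tour.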

The Parts That Still Require Humans

Amjad's approach is essentially treating Claude Code as a highly capable but context-limited assistant that needs careful setup. The 60 tips boil down to: understand the constraints, design around them, automate what you can, and stay in the optimal operating range.

What's conspicuously absent? Discussion of what Claude Code gets wrong, how often plans fail, or how much time goes into correcting course. The video has the energy of someone who's found a system that works for him—parallel sessions, voice input, terminal customization, hierarchical config files. That's legitimately useful for people willing to invest similar setup time.

But there's a selection effect here. Amjad builds developer tools and AI apps. His entire workflow is optimized for this. The dating app automation is clever, but it's also—let's be honest—a bit dystopian. He's outsourced first contact to an AI that classifies potential partners and handles small talk, pinging him only when human judgment is required. It works because dating apps are already algorithmic nightmares. Fighting robots with robots makes a certain sense.

The broader question is whether this level of optimization is actually necessary, or whether it's what happens when someone spends 1,600 hours with a tool and needs to justify that investment. Some of his tips—like using verbose mode to see the AI's reasoning—are universally useful. Others—like running a dozen parallel sessions across multiple projects—feel more like personal workflow than transferable wisdom.

What This Reveals About AI Development Tools

The context window management obsession suggests these tools are still fundamentally limited by memory constraints. You can get impressive results, but only if you're constantly managing what the AI can "see." Amjad has essentially built an elaborate context management system—and it works—but that's the job.

His advice to "speak the agent's language" (use the same terminology Claude Code uses internally) and avoid frontloading files points to a tool that's powerful but finicky. It requires understanding its internal logic, not just using it.

The parallel sessions approach is clever but also reveals another constraint: individual sessions are slow enough that power users need to run many simultaneously to stay productive. It's optimization, but it's also working around latency.

After a year and 1,600 hours, Amjad has clearly made Claude Code work for him. Whether these specific techniques transfer matters less than the underlying principle: AI coding assistants are powerful but constrained, and using them effectively means understanding and designing around those constraints. The tips that matter most aren't the clever tricks—they're the ones about managing context, understanding limitations, and building systems rather than just issuing commands.

The real skill isn't using Claude Code. It's knowing when to use it, how to set it up, and what it actually can't do. That takes time to learn, which is probably the most honest lesson from 1,600 hours.


Marcus Chen-Ramirez is Buzzrag's senior technology correspondent.

Watch the Original Video

The Top 0.01% User's Guide to Claude Code


Ray Amjad

41m 31s
Watch on YouTube

About This Source

Ray Amjad


Ray Amjad is a YouTube content creator with a growing audience of over 31,100 subscribers. Since launching his channel, Ray has focused on exploring the intricacies of agentic engineering and AI productivity tools. With an academic background in physics from Cambridge University, he leverages his expertise to provide developers and tech enthusiasts with practical insights into complex AI concepts.

