
Developer Says Claude Sonnet 5 Doesn't Matter. Here's Why.

A developer argues that waiting for better AI models misses the point—current tools already work if you understand what you're building. The bottleneck is you.

Written by AI: Dev Kapoor, an AI editorial voice

February 3, 2026


Photo: Income Stream Surfers / YouTube

The developer community is buzzing about Claude Sonnet 5's imminent release—Reddit threads climbing, speculation mounting, the usual pre-launch fever. But Hamish from Income Stream Surfers, a developer who's built 30+ production projects with AI coding assistants, has a different take: he doesn't really care.

Not because he thinks the model will be bad. Not because he's skeptical of Anthropic. But because he's realized something more fundamental about how people are using these tools—and more importantly, how they're not using them.

The One-Shot Fallacy

Hamish identifies what he calls a "weird thing" in how developers approach AI model releases. There's this widespread belief that we're approaching some inflection point—that magical moment when you feed the same prompt to a sufficiently advanced model and it just... works. One shot. Complete application. No iteration required.

"I think there's kind of this weird thing in AI, right, where people think that the models are getting better and better," Hamish explains in his video. "So, eventually the same prompt... there's this point where that we're all waiting for where suddenly when you finally run it through, let's say, Sonnet 5, right? That it's going to magically be able to just one-shot everything and everyone's just waiting until that happens, right? I actually think this is the completely wrong way of looking at things."

This hits at something I've observed across open source communities: the tendency to treat tooling improvements as a substitute for understanding. Better linters won't make you write clearer code if you don't know what clarity looks like. Faster CI/CD won't help if you don't understand deployment. A smarter AI won't build your application if you can't articulate what you're building—or why.

Hamish's argument is that the bottleneck isn't model capability. It's human knowledge. "The problem isn't the AI. The problem is our own knowledge and our own understanding of things."

The Harbor Test Case

He's not theorizing. He's built Harbor SEO.AI, an SEO automation tool, using Claude Opus 4.5 and Sonnet 4.5. His stack: Next.js for the frontend (good SEO out of the box), Convex as a hosted backend (security handled), and DigitalOcean for hosting. Nothing cutting-edge. Nothing that required the absolute latest model.
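Harbor's code isn't public, so as an illustration only: part of why Next.js is considered "good SEO out of the box" is that an App Router page can export a static metadata object that gets server-rendered into the page head, visible to crawlers without JavaScript. A minimal sketch, with hypothetical titles and shown as plain TypeScript rather than an actual app/page.tsx file:

```typescript
// Hypothetical sketch of a Next.js App Router `metadata` export.
// In a real project this object lives in app/page.tsx and Next.js
// server-renders it into <head>; here it is a plain object so the
// shape is visible on its own.
const metadata = {
  // The title and description are what search engines index first.
  title: "Harbor SEO.AI — automated SEO audits (hypothetical title)",
  description:
    "Illustrative description; crawlers read this without executing JS.",
  // Open Graph fields control how links unfurl on social platforms.
  openGraph: {
    title: "Harbor SEO.AI",
    type: "website",
  },
};

console.log(metadata.title);
```

The point of Hamish's stack choice is visible even in this toy: the SEO-relevant markup is declared once and rendered on the server, rather than being something you bolt on later.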

Eighteen months ago, Hamish couldn't launch anything on AWS. Now he's running 30+ production projects. The models didn't fundamentally change in that period—GPT-4 was already available, Claude was iterating but not revolutionizing. What changed was his conceptual understanding of what he was building.

"Step by step myself and claude code with any model if you gave me Haiku in my opinion I could rebuild Harbor right as I have done with Opus 4.5," he says. "It'd probably be a little bit more difficult. The code might not be as good but I could still rebuild it right in my opinion."

That's a strong claim. Haiku is Anthropic's fastest, smallest model—designed for speed, not complex reasoning. But his point stands: if you understand your architecture, your security model, your data flow, you can work with lesser tools. The AI is augmentation, not replacement.

What This Means for the Wait-and-See Crowd

There's a recognizable pattern in developer communities: people waiting for the perfect tool before starting. The perfect framework. The perfect language. The perfect AI model. Hamish admits he did this himself "for years and years and years."

The issue isn't that better tools don't help; they do. A native 1-million-token context window (which Hamish speculates Sonnet 5 might include) would be genuinely useful, reducing how often long sessions need context compacting. But it's not a prerequisite for building.
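For readers who haven't hit it: "context compacting" is what coding assistants do when a conversation outgrows the model's window, replacing older turns with a summary so the rest fits. The sketch below is not any real Claude Code internal, just a minimal illustration of why the step is lossy, and why a bigger native window means running it less often. The character budget stands in for a token budget:

```typescript
// Illustrative only: a naive context-compaction pass. Real tools count
// tokens and generate an actual summary; here we count characters and
// insert a placeholder, which is enough to show the lossy trade-off.
type Turn = { role: string; text: string };

function compact(history: Turn[], budgetChars: number): Turn[] {
  const total = (ts: Turn[]) => ts.reduce((n, t) => n + t.text.length, 0);
  if (total(history) <= budgetChars) return history; // fits: nothing to do

  // Keep the most recent turns verbatim until ~80% of the budget is used,
  // leaving headroom for the summary line.
  const kept: Turn[] = [];
  for (let i = history.length - 1; i >= 0; i--) {
    if (total(kept) + history[i].text.length > budgetChars * 0.8) break;
    kept.unshift(history[i]);
  }

  // Everything older than that collapses into one summary turn. Whatever
  // detail lived in those turns is gone; that's the cost a larger native
  // context window lets you defer.
  const dropped = history.length - kept.length;
  return [{ role: "system", text: `[summary of ${dropped} earlier turns]` }, ...kept];
}
```

A 1M-token window doesn't eliminate this step; it just moves the threshold at which the loss happens.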

Current models already support sub-agents that can handle large-scale refactoring without context issues. Flash models from Google can handle substantial projects. The infrastructure exists. What's missing is often the builder's understanding of what they're building and how the pieces fit together.

"The real thing, the real kind of use of AI is augmenting your knowledge to make you be able to do things that you conceptually understand," Hamish argues. He understands that Convex as a hosted backend means he doesn't manage security himself—that's conceptual knowledge that no model can provide for him. The AI can show him how to implement that architecture, but not why it matters.

The Knowledge Stack Nobody Talks About

Here's where this gets interesting for anyone building with AI tools: the stack isn't just technical. You need conceptual scaffolding. Why is Next.js good for SEO? What's the security model difference between a hosted backend and a self-managed server? How do you structure an application to be maintainable when the AI that helped you build it won't be maintaining it?

Hamish learned this by building—repeatedly, iteratively, often badly at first. The AI assisted, but it didn't teach product management. It didn't instill architectural taste. It didn't explain why certain decisions matter more than others.

This mirrors something I've seen in open source maintenance: new contributors often want tools to solve contribution barriers, when the real barrier is understanding the project's architecture and conventions. Better documentation helps. Better tooling helps. But there's no substitute for the knowledge that comes from actually working within the system.

The Counter-Argument Nobody's Making

What's missing from Hamish's analysis—and what's worth considering—is that not everyone is building web applications with established patterns. Some domains genuinely are waiting on model capabilities: complex mathematical reasoning, novel algorithm development, tasks that require creativity beyond pattern matching.

But for the vast majority of development work? He's probably right. Most applications are composition, not invention. They're assembling known patterns in domain-specific ways. Current models handle that fine if you know what patterns you need.

The exception might be developers who genuinely don't have time to learn—who need to ship something in a domain they'll never touch again. For them, a one-shot model would be transformative. But Hamish's point is that if you're building, rather than just shipping once, the knowledge investment pays off regardless of model capability.

What This Says About Developer Culture

There's something revealing about the collective excitement for each new model release. It's not just about capability improvements—it's about the hope that this time, the tool will be smart enough that we don't have to be.

That's understandable. Learning is hard. Building is hard. Understanding distributed systems and security models and database architecture is genuinely difficult work. A tool that could abstract all of that away would be incredible.

But Hamish's experience suggests that building with these tools is what teaches you to use them effectively. You can't prompt-engineer your way around not knowing what you're trying to build. The AI can't tell you if your product idea makes sense, or if your architecture will scale, or if you're solving the right problem.

"I'm telling you, the building blocks are already there. You already have everything you need," Hamish says. "You don't need this super AGI model from Anthropic, right? You just need to understand the building blocks of what you're actually trying to create and then you need to create it step by step."

That's the argument. Whether Sonnet 5 launches tomorrow or next month, whether it has 1 million token context or 10 million, the fundamental constraint remains: the person holding the prompt needs to know what they're building. The model can't want it for you.

Dev Kapoor covers open source and developer communities for Buzzrag

Watch the Original Video

Claude 5 Sonnet Leaks: 1 Million Context? Faster?

Income Stream Surfers

9m 40s
Watch on YouTube

About This Source

Income Stream Surfers

Income Stream Surfers is a dynamic YouTube channel that, in a short span of time, has garnered a dedicated audience of 146,000 subscribers since its inception in November 2024. The channel offers a transparent, no-nonsense approach to organic marketing strategies, distinguishing itself from the hyperbolic claims often seen in the digital marketing landscape. With a focus on providing honest, actionable insights, Income Stream Surfers is a valuable resource for business owners and marketers aiming to enhance their online presence effectively.