Not Every Problem Needs AI. Here's How to Tell.
Google engineers explain when to use generative AI, traditional machine learning, or just plain code. The answer matters more than you'd think.
By Bob Reynolds
February 23, 2026

Photo: Google Cloud Tech / YouTube
The technology industry has a hammer problem. When everything looks like a nail, you swing at everything. Right now, that hammer is generative AI, and the swinging has reached fever pitch.
Two engineers from Google Cloud—Aja Hammerly and Jason Davenport—recently recorded a video that does something unusual: it tells developers when not to use the technology their employer sells. The session, part of Google's "Real Terms for AI" series, walks through a decision framework that's both practical and overdue.
Their central premise is straightforward: different problems require different tools, and the newest tool isn't always the right one.
What Generative AI Actually Does Well
Hammerly and Davenport start with the obvious. Generative AI generates things. Text, images, code, summaries—the name delivers what it promises. But they quickly move past the obvious to something more interesting: multi-step reasoning and combining data from disparate sources.
"Tasks that require multi-step reasoning, combining data from multiple sources in new and novel ways, or even using tools like function calling and memory," Hammerly explains. "Of course all the agent use cases that we've been talking about most of our episodes this year—those are also fantastic use cases of generative AI."
This framing matters. It shifts the conversation from "what can generative AI do" to "what problems require generative AI's particular capabilities." The distinction sounds subtle. It isn't.
I've watched three decades of technology hype cycles, and they follow a pattern. New capability emerges. Vendors oversell it. Developers over-apply it. Reality corrects course. We're deep in the overselling phase with generative AI, which makes frameworks like this one valuable.
When Pre-Trained Models Win
The engineers' second category—traditional AI and machine learning—gets less attention in today's discourse, but it's where most production AI work actually happens.
"When there's a pre-trained model that exists and works for your use case," Davenport says, "using that may be the most efficient way to address any problems that you're trying to solve."
He cites Google's Cloud Vision API as an example. It labels images, detects faces and landmarks, extracts text through optical character recognition. These are solved problems. The models exist, they work, and they're optimized for specific tasks.
Sentiment analysis, entity extraction, classification problems—these fall into the same bucket. If someone has already trained a model that does what you need, using it beats building something new. This isn't exciting advice, but it's correct advice.
The conversation gets more interesting when they discuss what to do if you need a traditional model but one doesn't exist. You can train your own, they note, but you need labeled training data. And if you don't have training data? Use generative AI to create it.
This hybrid approach—using generative AI to bootstrap traditional AI—illustrates how these categories blur in practice. The question isn't generative versus traditional. It's which tool, or combination of tools, solves your specific problem most effectively.
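The bootstrapping idea can be sketched in a few lines. This is a minimal illustration, not the engineers' implementation: `label_with_llm` is a keyword stub standing in for a real generative-model call, and the category names are invented.

```python
def label_with_llm(text: str) -> str:
    """Stand-in for a generative-model call that labels raw text.
    In practice this would prompt an LLM; here it's a keyword stub."""
    return "complaint" if "refund" in text.lower() else "praise"

def bootstrap_training_set(unlabeled: list[str]) -> list[tuple[str, str]]:
    """Use the (stubbed) generative labeler to turn unlabeled text into
    labeled pairs for training a cheaper, task-specific classifier."""
    return [(text, label_with_llm(text)) for text in unlabeled]

raw = ["I want a refund now", "Great service, thanks!"]
dataset = bootstrap_training_set(raw)
# dataset can now feed a conventional classifier (e.g. logistic regression)
```

The expensive model runs once, offline, to produce labels; the cheap model it trains runs on every request.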
The Case for Plain Code
Their third category is where things get interesting, because it requires admitting that AI might not be the answer at all.
"When an if statement or a switch statement or a regular expression would work, use that. It's just code," Hammerly says.
This sounds almost heretical in today's AI-centric discourse, but it's the most important point in the video. If your data is well-formed and your task is straightforward, you don't need machine learning. You need programming.
Extracting order numbers from standardized confirmation emails? Regular expression. Routing phone calls to extensions users have already entered? Basic code. These problems don't require AI because they don't have the characteristics that make AI useful: ambiguity, complexity, pattern recognition across unstructured data.
The engineers work through several examples using a hypothetical pet shop. Finding cat pictures in user uploads? Traditional AI classification. Writing personalized responses to shipping inquiries? Generative AI for the text, but plain code to look up the shipping method and traditional AI to assess customer sentiment.
"You can combine multiple techniques into one agent to address your use case," Davenport observes. This might be the video's most practical insight. Real systems rarely fit cleanly into one category. They're assemblages of different approaches, each suited to different aspects of the problem.
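The pet-shop shipping example above can be sketched as one small pipeline. Everything here is a hypothetical stand-in: the order table is invented, `classify_sentiment` stubs a traditional model, and `draft_reply` stubs the generative step.

```python
# Hypothetical pet-shop agent: each sub-task goes to the cheapest tool
# that handles it. The "model" calls are stubs standing in for real services.

def lookup_shipping(order_id: str) -> str:          # plain code: dict lookup
    return {"A1": "ground", "B2": "overnight"}.get(order_id, "unknown")

def classify_sentiment(text: str) -> str:           # traditional-AI stand-in
    return "negative" if "late" in text.lower() else "neutral"

def draft_reply(order_id: str, sentiment: str) -> str:  # generative stand-in
    tone = "Apologies for the delay. " if sentiment == "negative" else ""
    return f"{tone}Your order shipped via {lookup_shipping(order_id)}."

print(draft_reply("A1", classify_sentiment("My order is late!")))
```

The structure is the point: deterministic lookups stay in plain code, classification goes to a trained model, and only the free-form text generation needs a generative model.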
What They Don't Say
The video is clear about what each approach does well. It's less explicit about the trade-offs.
Generative AI is powerful but expensive, both in computational cost and in the unpredictability of its outputs. Traditional models are efficient but inflexible. Plain code is fast and deterministic but requires you to fully specify the logic.
These trade-offs matter in production systems. A generative model that works brilliantly in testing might cost too much to run at scale. A traditional classifier might perform well on your training data but fail when user behavior shifts. An if-statement is cheap and reliable until your requirements change and you have to rewrite it.
The engineers also don't discuss the organizational dynamics. In many companies right now, using AI—any AI—is easier to justify than not using it, regardless of whether it's the right technical choice. Budgets follow buzzwords. This creates pressure to overcomplicate simple problems and to underthink complex ones.
The Flowchart Mindset
Hammerly and Davenport mention that Google has published a flowchart to help developers make these decisions. I haven't seen it, but I can imagine its structure. Start with your problem. Does it require generation? Does a pre-trained model exist? Is the task deterministic?
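That imagined structure is easy to sketch in code. To be clear, the questions and their ordering here are my guess at the flowchart's shape, not Google's published version:

```python
def choose_approach(needs_generation: bool,
                    pretrained_model_exists: bool,
                    logic_fully_specifiable: bool) -> str:
    """Guessed decision tree: plain code first, then pre-trained models,
    then generative AI as the most general (and costly) option."""
    if logic_fully_specifiable:
        return "plain code"
    if pretrained_model_exists and not needs_generation:
        return "traditional AI (pre-trained model)"
    return "generative AI"

print(choose_approach(False, False, True))   # plain code
```

The logic fits in ten lines. The hard part, as the next paragraph argues, is knowing how to answer the three questions.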
Flowcharts work when the decision tree is clear. The challenge with AI deployment isn't usually the logical structure of the decision—it's gathering the information you need to navigate it. How do you know if a pre-trained model exists for your use case? How do you determine if your problem actually requires multi-step reasoning or if you're pattern-matching to AI because that's what everyone is talking about?
These questions require experience, domain knowledge, and a willingness to question your assumptions. They require, in other words, exactly the kind of judgment that doesn't scale easily and can't be automated.
Which might be the real point. The decision of when to use AI—or which kind of AI to use—is itself a problem that requires human reasoning. At least for now.
—Bob Reynolds, Senior Technology Correspondent
Watch the Original Video
When to use generative AI vs. traditional AI vs. no AI
Google Cloud Tech
7m 6s