Your AI Is Giving You Pizza Hut Answers—Here's How to Fix It
ChatGPT, Claude, and Gemini are trained to satisfy everyone, which means they satisfy no one. Here's how to escape the median and get actually useful output.
Written by AI. Tyler Nakamura
February 5, 2026

Photo: AI News & Strategy Daily | Nate B Jones / YouTube
You know that feeling when ChatGPT gives you restaurant recommendations and they're all... fine? Not wrong, exactly. Just weirdly generic. Tourist traps you'd never actually visit. The advice that applies to everyone, which means it doesn't quite fit you.
Turns out there's a mechanical reason for this, and AI strategist Nate Jones has a pretty compelling explanation: your AI is literally trained to be mediocre.
The Pizza Hut Problem
Jones uses a food analogy that actually works. Imagine a restaurant trying to create one dish that satisfies the widest possible range of customers. Not delight anyone—just avoid disappointing too many people. You get Pizza Hut. Edible, competent, technically fine. The cheese looks good in ads. But it's not spicy enough if you like heat, not subtle enough if you want delicate, not adventurous enough unless you find pepperoni thrilling.
"This is exactly what AI does with answers," Jones explains in his video breakdown. "It's not trying to give you the best response for your situation. It's trying to give the best response for everybody who might ask a similar question."
The culprit? Reinforcement learning from human feedback (RLHF). Here's how it works: The model generates multiple responses to the same prompt. Human raters compare them and pick which one they prefer. The model learns to produce outputs that raters would choose.
Catch the important word there? Raters. Not you. A pool of people who aren't experts in your field, don't know your constraints, and have never been to that neighborhood in Paris you're actually asking about. They're picking whichever response seems most helpful, most clear, most appropriate. Hint: it's probably the one with the Eiffel Tower.
"The model's optimization target is not 'give the specific user what they need,'" Jones says. "It's 'produce something a typical human would rate pretty highly.'"
This isn't speculation—Anthropic and OpenAI publish papers describing this exact process. When thousands of raters evaluate millions of outputs, the model learns to hit the middle of the preference distribution. It learns the median. And you're not median.
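The averaging effect is easy to see with toy numbers. A minimal sketch, entirely illustrative: give a pool of hypothetical raters spread-out tastes on a single axis, score three candidate answers by average rating, and the middle-of-the-road answer wins even though it is nobody's favorite.

```python
# Toy illustration of preference-averaging in RLHF-style rating.
# Raters rate an answer higher the closer it sits to their own taste
# on a single 0-to-1 axis (0 = cautious, 1 = bold). All numbers invented.

tastes = [0.1, 0.2, 0.5, 0.8, 0.9]   # hypothetical rater preferences

candidates = {
    "bold, opinionated answer":  0.9,
    "middle-of-the-road answer": 0.5,
    "cautious, hedged answer":   0.1,
}

def avg_rating(position):
    # Each rater's rating falls off linearly with distance from their taste.
    return sum(1 - abs(position - t) for t in tastes) / len(tastes)

scores = {name: avg_rating(pos) for name, pos in candidates.items()}
winner = max(scores, key=scores.get)
print(winner)  # → middle-of-the-road answer
```

No individual rater's favorite answer is the middle one, but it maximizes the average, which is exactly the median-pleasing behavior Jones describes.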
Four Levers Most People Ignore
For years, prompting was your only escape route. Front-load context, specify constraints, steer the model manually. Every conversation started from scratch. That's changed. ChatGPT, Claude, and Gemini now offer four distinct ways to steer away from generic output—and most people are using none of them.
Memory: Teaching Your AI Who You Are
ChatGPT has layered memory: explicit facts you ask it to remember, plus an ambient awareness of your chat history that it references with clickable citations. It also offers project-only memory that isolates conversations. Jones notes that temporary chats now retain your memory and personalization settings, which is new.
His tactic? Be explicit. Tell ChatGPT to remember that you prefer one-sentence answers to factual questions. Tell it your audience includes people who think they can build their own local models. The automatic system captures a lot, but intentional memory is more reliable.
Claude works differently. It can search past conversations and generate a memory summary that updates periodically. The key difference: Claude's memory is project-scoped by default. Your startup discussions don't bleed into your vacation planning. Jones recommends using projects deliberately—if you're working on something with distinct context like client engagement, create a dedicated project.
Gemini connects to your Google apps (Gmail, Photos, YouTube) through its Personal Intelligence feature. The pitch: ask about tire options and Gemini finds your car model from a Gmail receipt. The trade-off is obvious—decide how much data you're willing to give Google.
Instructions: The Specificity Problem
"Be concise" doesn't steer the model. It's too vague. Jones is clear about this: your biggest leverage is being specific.
Compare "be concise" with "For factual questions, answer in one sentence. For analysis requests, walk through the reasoning step by step." The second version tells the model when you want which behavior.
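Expanding the article's example, a custom-instructions block in that spirit might read like this (wording is illustrative, not a quoted setting):

```
For factual questions: answer in one sentence, no preamble.
For analysis requests: walk through the reasoning step by step.
If my request is ambiguous, ask one clarifying question before answering.
```

The point is conditional behavior: the model learns when you want brevity and when you want depth, instead of applying one vague adjective everywhere.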
Claude's style feature is particularly underused. Upload samples of your best work and Claude generates a style profile, matching your tone and sentence structure in every response. "This is much more powerful than trying to describe your style in words," Jones notes.
For developers using Claude Code, the real power is in CLAUDE.md files. Boris Cherny, who created Claude Code, describes his team's practice: whenever Claude does something wrong, they add a rule to the markdown file so it doesn't happen again. The file lives in Git. The whole team contributes. It's a living document that gets better with use.
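A rules file in that spirit might look like this hypothetical sketch (the rules and file paths are invented for illustration, not taken from Cherny's actual file):

```markdown
# CLAUDE.md — team conventions (checked into Git)

## Corrections we've had to make more than once
- Do not add new dependencies without asking first.
- Run the existing test suite before proposing a commit.
- Follow the error-handling pattern in src/errors.ts; don't invent a new one.

## Style
- Prefer small, focused pull requests with a one-paragraph description.
```

Each entry exists because Claude got it wrong once; the file is the team's accumulated corrections, applied automatically on every future run.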
Apps and Tools: The Connectivity Layer
Most people leave the default tools enabled and never think about it. But tools profoundly shape your experience. The underlying standard is Model Context Protocol (MCP)—think USB-C for AI. Anthropic created it, but everyone's adopting it. There are over 10,000 MCP servers now.
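For the curious, registering an MCP server is just configuration. As of this writing, Claude Desktop reads servers from a JSON config file; a minimal entry for Anthropic's reference filesystem server looks roughly like this (the folder path is a placeholder—check the current MCP docs before copying):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/folder"
      ]
    }
  }
}
```

Each named entry launches a local server process the model can call, which is what makes the "USB-C for AI" comparison apt: one connector format, many devices.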
ChatGPT calls these "apps" and connects to Gmail, Calendar, etc. Once connected, it references them automatically... sort of. Jones finds the "where relevant" logic ambiguous. You don't select them manually, but you may need to remind ChatGPT the capability exists.
Claude has a wider range of MCP servers, but connectivity isn't always reliable. Stripe is tricky; Figma is easy. This changes as implementations mature. Gemini is surprisingly weak on tools, which is why many builders prefer ChatGPT or Claude.
Your tactic: think intentionally about your tool sets and check regularly for new connectors. Tools aren't just features—they're steering inputs.
Style and Tone: Matching How You Actually Communicate
ChatGPT offers eight personalities (friendly, candid, nerdy, cynical, etc.) plus granular controls for warmth, enthusiasm, headers, and yes, emojis. Jones's advice: be clear in your settings so there's no conflict. If you say "be verbose" in instructions and "concise" in personality, you're burning tokens for no reason.
Claude offers three presets: formal, concise, explanatory. Pick the one that reflects how you actually use Claude, not your aspirational usage. If you're casual, don't select formal—go with explanatory for longer conversations.
The Compounding Effect
Here's where it gets interesting. People who get 10x results from AI don't just correct mistakes in their heads and move on frustrated. They capture the corrections. When they notice a pattern, they encode it back into the AI—update instructions, tell memory to retain it, adjust style settings.
Boris Cherny runs five Claude instances in parallel plus five to ten on Claude.ai. He ships roughly 100 pull requests a week. His workflow isn't magic—it's the discipline to look at every mistake Claude makes and update a rule in CLAUDE.md.
You don't need to be an engineer for this. Keep a notes file. When you make the same correction twice, write it down. Review your instructions monthly. The gap widens over time as you invest in getting the levers right.
What This Can't Fix
Jones is honest about limitations. Steering fixes the personalization problem—it doesn't fix everything. Hallucinations aren't averaging problems. No amount of personal context stops a model from making things up. There's also a ceiling in creative work, where training data pulls toward the center of the distribution. You can steer against it, but you're fighting gravity.
Steering takes effort. You're figuring out your position, encoding it, maintaining it. If you use AI occasionally, it's probably not worth the investment. But if you're using it multiple times a week for similar work, the math changes. A few hours of setup buys permanently better output.
"Default output really is median output," Jones says. "It's optimized for very typical users with typical needs. And you are not typical."
If this feels overwhelming, start simple. Pick one task where AI output doesn't feel right. Over the next few sessions, notice the adjustments you're making and write them down. Then find the custom-instructions setting and put those notes in. Notice the difference. Iterate.
The farther you are from average, the more default settings fail you. The AI you're using learned to please everybody a little, which means it learned to please no one in particular. You've got four levers to fix that—memory, instructions, tools, and style. Most people are using none of them. That gap is an opportunity.
—Tyler Nakamura, Consumer Tech & Gadgets Correspondent
Watch the Original Video
90% of AI Users Are Getting Mediocre Output. Don't Be One of Them (Stop Prompting, Do THIS Instead)
AI News & Strategy Daily | Nate B Jones
19m 6s
About This Source
AI News & Strategy Daily | Nate B Jones
AI News & Strategy Daily, managed by Nate B. Jones, is a YouTube channel focused on delivering practical AI strategies for executives and builders. Since its inception in December 2025, the channel has become a valuable resource for those looking to move beyond AI hype with actionable frameworks and workflows. The channel's mission is to guide viewers through the complexities of AI with content that directly addresses business and implementation needs.
More Like This
Claude AI's Game-Changing Integration Update
Explore Claude AI's new integrations with Slack, Asana, Canva, and Figma for seamless workflow management.
Anthropic Bet on Teaching AI Why, Not What. It's Working.
Anthropic's 80-page Claude Constitution reveals a fundamental shift in AI design—teaching principles instead of rules. The enterprise market is responding.
Anthropic's Anti-Ad Campaign Takes Direct Shot at ChatGPT
Anthropic released humorous ads criticizing OpenAI's decision to monetize ChatGPT with advertising. Here's what's actually at stake in this AI showdown.
AI Multiplies Output, But Labor Law Hasn't Caught Up
AI-native companies operate with teams of five generating millions per employee. Existing workplace regulations weren't written for this model.