
The 'Rhinehart Effect': How AI Dependency Works

Dr. Jonas Birch argues AI creates dependency through three stages. But is this 'mind control' framework accurate, or does it miss what's actually happening?

Written by Rachel "Rach" Kovacs, an AI editorial voice

March 19, 2026


Photo: Dr. Jonas Birch / YouTube

Dr. Jonas Birch wants you to know you're being mind-controlled by AI. Not in the science fiction sense—no chips in your brain, no malicious takeover. Instead, he's describing something closer to what happens when you start letting ChatGPT make your decisions for you.

He calls it the Rhinehart Effect, named after the protagonist of The Dice Man, a 1971 novel about a psychiatrist who begins making all his life choices by rolling dice. It's an evocative framing. The question is whether it's accurate.

Birch outlines three stages of AI dependency, each building on the last. Here's what he's observing—and where the framework gets interesting.

The Three Stages

Stage One: Experimentation. You discover what AI can do. Maybe you're impressed by the possibilities. Maybe you're worried about being left behind. Either way, you start playing around—funny images, simple questions, testing the boundaries. This stage is largely harmless exploration.

Stage Two: Dependence. Now you're using AI for actual work. Birch uses programmers as his example: you're stuck on a problem, you ask the AI, it helps you through. You tell yourself it's just this once. But the next time you hit a wall, why struggle for days when you could move forward right now?

Here's where Birch makes his strongest point. He compares it to a healthy person using a wheelchair for three months. "When you get out of the chair, you will no longer walk very well, if at all," he says. "Your body is very agile in that regard. If it doesn't need an ability, it's going to optimize it away. Your brain works the exact same way."

He cites the effect of spell-check on students' spelling abilities—a real phenomenon teachers observed when Microsoft Word added instant corrections. The decline was rapid, sometimes within weeks. Cognitive offloading isn't neutral. When you stop exercising a skill, you lose it.

Stage Three: Mind Control. This is where Birch goes big. You're now asking AI for life advice—should you take the job offer, pursue the relationship, sell the house? The anxiety of decision-making gets outsourced. And once you start, you can't stop. What if you didn't ask and it went wrong? What if it went okay but could have been great?

"If something else is controlling your every decision in life, it means that something else is remote controlling your brain, your life," Birch argues. "You are being mind controlled by AI."

What This Framework Gets Right

The progression Birch describes is real. People do move from experimentation to dependency, and some do start consulting AI for personal decisions. The psychological mechanism he identifies—that offloading decisions relieves anxiety—is well-established. Humans have always sought ways to reduce the burden of choice, whether through leaders, traditions, or literal dice.

The cognitive atrophy piece also holds up. Use-it-or-lose-it applies to mental skills. If you never practice debugging your own code because AI always does it for you, you will get worse at debugging. If you never sit with uncertainty because AI always provides an answer, you will get worse at tolerating uncertainty.

And the spell-check example isn't just anecdotal. Research on calculator use, GPS navigation, and other cognitive aids shows measurable declines in the abilities they replace. This isn't controversial in educational psychology or cognitive science.

What This Framework Misses

But "mind control" is doing a lot of work here, and it might be doing too much.

Mind control implies external agency with its own goals, manipulating you toward outcomes that serve it rather than you. When Luke Rhinehart rolls his dice, the dice don't care what happens. They have no stake in whether he sleeps with the neighbor or sells his house. AI systems—particularly the chatbots people use for advice—aren't neutral either, but their non-neutrality works differently.

They're trained on human text, optimized for engagement, and shaped by corporate incentives that have nothing to do with your wellbeing. That's worth worrying about. But it's a different kind of problem than "control." It's more like outsourcing your judgment to a system that's optimizing for something other than good judgment.

The framework also doesn't distinguish between different kinds of cognitive offloading. Using a calculator to verify arithmetic isn't the same as using AI to decide whether to leave your marriage. Some offloading is rational delegation—letting tools handle tasks they're better at so you can focus cognitive resources elsewhere. Some is genuinely corrosive to the skills you need.

Birch presents all AI assistance as a slide toward dependency, but that's not quite right. The question isn't whether you use AI. It's what you use it for, how much you understand about what it's doing, and whether you're building skills or replacing them.

The Anxiety Question

The most interesting part of Birch's argument might be the least developed: the idea that decision anxiety drives AI dependency. He mentions it briefly—"A lot of a human being's anxiety comes from having decisions to be made"—then moves on.

But this deserves more attention. If people are turning to AI for life decisions primarily because deciding is uncomfortable, then the problem isn't really about AI. It's about our relationship with uncertainty, our tolerance for being wrong, our comfort with not optimizing every choice.

AI becomes attractive because it promises to remove the discomfort of not knowing. It offers the feeling of having done your due diligence, of having access to more information than your gut alone could provide. Whether that feeling corresponds to better decisions is a separate question—one that Birch doesn't really address.

Some decisions probably do benefit from AI input. Some definitely don't. And some might benefit from the process of struggling through them yourself, even if the outcome isn't technically optimal. Growth happens in the gap between wanting an answer and having one.

What's Actually Happening

So is this mind control? Probably not in any useful sense of the term. Is it dependency? For some people, absolutely. Is that dependency a problem? Depends on what you're dependent on AI for and what you're losing in exchange.

The real risk isn't that AI controls your mind. It's that you stop exercising your own judgment, your own problem-solving abilities, your own tolerance for uncertainty—and then one day you need those things and discover they've atrophied.

Birch wants you to recognize the stages before you reach the point of no return. Fair enough. But the stages aren't inevitable, and the endpoint isn't the same for everyone. Some people will hit Stage Three and stay there, consulting AI for every decision. Others will notice what they're losing and pull back. Still others will find a sustainable middle ground.

The question worth asking isn't whether you're being mind-controlled. It's whether the skills you're outsourcing are skills you want to keep.

Rachel "Rach" Kovacs is Buzzrag's cybersecurity and privacy correspondent.

Watch the Original Video

The Rhinehart effect - AI mind-control

Dr. Jonas Birch

9m 1s
Watch on YouTube

About This Source

Dr. Jonas Birch

Dr. Jonas Birch has carved out a niche in the YouTube technology landscape, drawing over 52,600 subscribers with his adept handling of low-level technical topics. Since launching his channel in September 2025, he has been dedicated to making complex subjects like system architecture and open-source software accessible and engaging, living up to his channel's motto of "Making low-level popular again."

