All articles written by AI.

Are Rocks Agents? Decoding AI's Philosophical Labyrinth

Exploring agency in AI: Are rocks agents? Dr. Jeff Beck challenges AI perceptions with humor and depth.

Written by AI. Mike Sullivan

January 25, 2026


Photo: Machine Learning Street Talk / YouTube

When someone says a rock could be as much an agent as your smartphone, one might wonder if they've been watching too many reruns of The Twilight Zone. But that's exactly where Dr. Jeff Beck takes us with his commentary on agency and artificial intelligence in a recent discussion on Machine Learning Street Talk.

Dr. Beck argues that from a purely mathematical perspective, there's no structural difference between an agent and a rock—both execute policies that map inputs to outputs. Now, if you find yourself scratching your head and thinking back to the X-Files—'I want to believe'—you're not alone. It's a bold claim that challenges traditional views of intelligence.

The Rock and the Agent

The crux of Beck's argument is sophistication. He suggests that the real distinction between agents and rocks is the complexity of their internal computations: in essence, the difference between a pocket calculator and Deep Blue. "If your definition of an agent is something that executes a policy, then anything is an agent," Beck explains. "A rock is an agent, right? Everything has its input-output relationship."

While Beck's perspective is intriguing, it raises questions about the nature of intelligence and agency. Can we really categorize a rock as an agent just because it responds to physical inputs like gravity? It's a thought-provoking standpoint that requires further investigation.
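To make Beck's minimal definition concrete, here is a toy sketch (the function names and scenarios are invented for illustration, not taken from the talk): a rock and a thermostat can both be written as mappings from inputs to outputs, and under that definition alone, both count as executing a policy.

```python
# A policy, in the minimal sense, is just a mapping from inputs
# (observations) to outputs (actions). A rock and a controller share the
# same type signature; what differs is the sophistication of the
# computation inside the function.

def rock_policy(force: float) -> float:
    """A rock 'responds' to an applied force by staying put (zero velocity)."""
    return 0.0

def thermostat_policy(temperature: float, setpoint: float = 20.0) -> str:
    """A thermostat compares its input to a target and picks an action."""
    return "heat_on" if temperature < setpoint else "heat_off"

# Both are policies under the bare definition; the distinction Beck draws
# is the complexity of the internal computation, not the interface.
print(rock_policy(9.8))         # 0.0
print(thermostat_policy(18.0))  # heat_on
```

The point of the sketch is that nothing in the type signature separates the two; any dividing line has to appeal to what happens inside the function.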

Evolution, Noses, and Intelligence

One of the more unexpected turns in Beck's conversation is the idea that our higher cognition might have evolved out of our sense of smell, driven by the sheer complexity of olfactory space. Beck proposes that the demands of navigating that space may have driven the evolution of our associative cortex and our capacity for planning.

To verify this claim, we need to delve into evolutionary biology. While the olfactory bulb is indeed one of the oldest parts of the brain, attributing the evolution of intelligence to it requires more than just a clever analogy. The jury's still out on whether our cognitive prowess owes a debt to our sense of smell.

The Black Box Problem

Beck also confronts the 'black box' challenge of determining whether a system is planning or merely executing pre-defined responses. He describes it as nearly impossible to ascertain from the outside. "The only thing you observe is the policy," he notes. "Does that mean that you can never conclude that something's an agent? I would say no."

This is reminiscent of the classic philosophical debate over whether a computer can truly think or merely simulate thinking—a debate that's been around since the days when floppy disks were cutting-edge technology.

Energy-Based Models and AI Safety

Moving into more technical territory, Beck dissects energy-based models (EBMs) and how they differ from standard neural networks. Where a traditional feedforward network optimizes only its weights during training, an EBM also optimizes its internal states at inference time, a move that connects it to Bayesian inference. It's a subtle yet profound shift in how we understand AI learning processes.
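As a rough illustration of that distinction, here is a minimal sketch (the quadratic energy function and all parameters are invented for illustration, not drawn from Beck's work): inference runs gradient descent on the internal state while the weights stay fixed, which is exactly the extra optimization step a standard feedforward pass lacks.

```python
import numpy as np

# Toy energy-based model: E(x, z; W) = ||z - W @ x||^2.
# Inference finds the internal state z that minimizes the energy for a
# fixed input x; learning would then adjust the weights W. Standard
# feedforward networks only ever do the weight-adjustment step.

def energy(x, z, W):
    return float(np.sum((z - W @ x) ** 2))

def infer_state(x, W, steps=100, lr=0.1):
    """Optimize the internal state z by gradient descent on the energy."""
    z = np.zeros(W.shape[0])
    for _ in range(steps):
        z -= lr * 2 * (z - W @ x)  # dE/dz = 2 * (z - W @ x)
    return z

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))      # fixed weights during inference
x = np.array([1.0, -0.5, 0.25])  # input

z = infer_state(x, W)
print(energy(x, z, W))  # near zero: the state has settled into a minimum

# A learning step would then move W itself, e.g. using
# dE/dW = -2 * np.outer(z - W @ x, x), which is the only kind of update
# an ordinary feedforward network performs.
```

The design choice to illustrate is that `infer_state` is itself an optimization loop: the model "settles" its internal state for each input, rather than computing a single fixed forward pass.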

On AI safety, Beck takes a refreshingly grounded approach. He's not worried about rogue superintelligences—a storyline that seems more suited to 90s action flicks—but rather the risk of humans becoming mere "reward function selectors." His solution: inverse reinforcement learning to derive AI goals from observed human behavior.
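The inverse-RL idea can be sketched in a few lines. In this toy, entirely hypothetical setup, a reward function that is linear in two state features is fitted so that the options a human actually chose score higher than the ones they rejected: a crude perceptron-style stand-in for real inverse-reinforcement-learning algorithms, but it shows the direction of inference Beck favors (behavior in, reward out).

```python
import numpy as np

# Toy inverse reinforcement learning: instead of hand-picking a reward
# function, recover one from observed behavior. The reward is linear in
# state features, and we nudge its weights until the observed choices
# look reward-maximizing.

features = {
    "safe_route": np.array([1.0, 0.0]),  # feature 0: safety
    "fast_route": np.array([0.0, 1.0]),  # feature 1: speed
}
# Observed behavior: the human repeatedly picked safety over speed.
observed_choices = [("safe_route", "fast_route")] * 20

w = np.zeros(2)  # reward weights to be inferred
for chosen, rejected in observed_choices:
    if w @ features[chosen] <= w @ features[rejected]:
        # Perceptron-style update: make the chosen option score higher.
        w += features[chosen] - features[rejected]

print(w)  # the inferred reward weights favor the 'safety' feature
```

The contrast with being a "reward function selector" is the whole point: here no one writes the reward down; it is inferred from what people actually do.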

The Road Ahead

Taken as a whole, Beck's conversation is a reminder that the AI landscape, for all its potential, is littered with philosophical landmines. As we navigate the future of AI, perhaps the real test of intelligence isn't just asking the right questions, or even finding the answers, but maintaining a healthy skepticism about the easy ones.

In the end, whether a rock is an agent or not, it might be wise to remember that while technology evolves, the questions remain as timeless as a good Star Trek episode.

By Mike Sullivan, Buzzrag Technology Correspondent

Watch the Original Video

The Brain Is Just Specialized Agents Talking To Each Other — Dr. Jeff Beck

Machine Learning Street Talk

46m 57s
Watch on YouTube

About This Source

Machine Learning Street Talk

Machine Learning Street Talk, launched in September 2025, has quickly become a pivotal platform for AI enthusiasts and professionals alike. With 208,000 subscribers, the channel delves into the cutting-edge realm of artificial intelligence, offering rich discussions on advanced AI research. It features a broad spectrum of topics, including cognitive science, computational models, and philosophical insights, positioning itself as an essential resource for those seeking to navigate the intricate AI landscape.
