Rethinking AI: Smarter Models Over Bigger Ones
Dr. Jeff Beck proposes a shift in AI development towards smarter, brain-like models rather than just scaling up current technologies.
By Bob Reynolds
January 1, 2026

Photo: Machine Learning Street Talk / YouTube
The Future of AI: Not Just Bigger, But Smarter
In a recent conversation with Machine Learning Street Talk, Dr. Jeff Beck, a mathematician turned computational neuroscientist, challenges the current trajectory of artificial intelligence development. Beck argues that the key to building truly intelligent machines lies not in scaling up models like transformers, but in creating smarter, more brain-like systems.
The Brain as a Scientific Engine
Dr. Beck posits that the human brain operates less like a giant prediction engine and more like a scientist. "The brain doesn't work like a giant prediction engine. It works like a scientist," he explains. This approach is grounded in Bayesian inference, a statistical method that encapsulates the scientific method. According to Beck, the brain seamlessly combines sensory inputs using Bayesian principles, making it remarkably efficient at handling uncertainty.
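The cue-combination idea can be made concrete with a short sketch. Here, two noisy Gaussian estimates of the same quantity are fused by precision weighting, the textbook Bayesian result; the numbers and function names are illustrative, not taken from Beck's work.

```python
# A minimal sketch of Bayesian cue combination: two noisy sensory
# estimates of the same quantity are fused by precision weighting.
# All values here are invented for illustration.

def fuse_gaussian(mu1, var1, mu2, var2):
    """Combine two independent Gaussian estimates via Bayes' rule.

    The posterior mean is a precision-weighted average, and the
    posterior variance is smaller than either input variance --
    uncertainty shrinks as evidence accumulates.
    """
    p1, p2 = 1.0 / var1, 1.0 / var2             # precisions
    var_post = 1.0 / (p1 + p2)                  # posterior variance
    mu_post = var_post * (p1 * mu1 + p2 * mu2)  # weighted mean
    return mu_post, var_post

# Example: vision says the object is at 10.0 (variance 1.0);
# touch says 12.0 but is noisier (variance 4.0).
mu, var = fuse_gaussian(10.0, 1.0, 12.0, 4.0)
print(mu, var)  # the estimate lands nearer the more reliable cue
```

Note that the fused variance (0.8) is lower than either input's, which is what makes this scheme "remarkably efficient at handling uncertainty."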
The Game-Changer: Automatic Differentiation
While much of the AI hype focuses on transformers and large language models (LLMs), Beck highlights the role of automatic differentiation (autograd) as the real catalyst for AI's recent advancements. Autograd transformed AI from a purely mathematical challenge into an engineering problem, enabling the training of far more complex models. Beck cautions, however, that in this engineering shift, the essence of what truly makes intelligence work may have been overlooked.
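To see why autograd was such a catalyst, consider a toy reverse-mode implementation. Once every operation records how to propagate gradients, derivatives of arbitrary compositions fall out mechanically; this is a didactic sketch, not any production autograd library.

```python
# A toy reverse-mode automatic differentiation engine: each operation
# builds a graph node that knows how to pass gradients backward, so
# the chain rule is applied automatically over any composition.

class Value:
    """Scalar node in a computation graph with a gradient slot."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x, y = Value(3.0), Value(4.0)
z = x * y + x          # z = xy + x, so dz/dx = y + 1, dz/dy = x
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

The engineering point is that the modeler never derives gradients by hand; the same machinery scales from this two-variable example to billion-parameter networks.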
The Cat in the Warehouse Problem
Beck illustrates his point with the "Cat in the Warehouse" problem. Imagine a warehouse robot encountering a cat for the first time. Traditional AI might either crash or fabricate an explanation. Beck suggests a model that recognizes its own ignorance, downloads new object models, and continues learning. This approach mirrors how humans deal with the unknown, by continuously updating their understanding based on new information.
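One minimal way to operationalize "recognizing its own ignorance" is to score an observation under every known object model and flag it as novel when nothing fits. The Gaussian object models and threshold below are invented for illustration; Beck's point is the principle, not this particular code.

```python
# A sketch of knowing what you don't know: score an observation under
# each known object model; if no model explains it well, declare it
# unknown rather than fabricating an answer. Models are hypothetical.
import math

KNOWN_OBJECTS = {            # invented object models: (mean, std) of a feature
    "box":    (10.0, 1.0),
    "pallet": (25.0, 2.0),
}
LOG_LIK_THRESHOLD = -5.0     # below this, no known model fits

def log_likelihood(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def classify(x):
    scores = {name: log_likelihood(x, m, s)
              for name, (m, s) in KNOWN_OBJECTS.items()}
    best = max(scores, key=scores.get)
    if scores[best] < LOG_LIK_THRESHOLD:
        return "unknown -- fetch a new object model"   # the cat
    return best

print(classify(10.3))   # fits the box model
print(classify(60.0))   # fits nothing: the cat
```

A system built this way neither crashes nor confabulates; the "unknown" branch is exactly the hook for downloading a new object model and continuing to learn.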
Language vs. Physics: The Right Grounding for AI
In a provocative twist, Beck criticizes the grounding of AI in language, arguing that it is a flawed model for thought. He notes that self-reporting, a staple in psychology, often yields unreliable data. "We should be grounding AI in physics, not words," Beck asserts, emphasizing that AI models should mirror the physical world, much like human cognition does.
A Future of Modular AI Systems
Beck envisions a future where AI systems resemble video game engines, composed of numerous small, modular object models. Such systems would be more efficient, flexible, and reflective of human thought processes. "Instead of one massive neural network, the future is lots of little models," he suggests. This modular approach not only promises greater adaptability but also aligns more closely with how biological minds operate.
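The game-engine analogy can be sketched in a few lines: a registry of small, independent object models, each responsible only for its own dynamics, advanced together to simulate a scene. The object types and dynamics here are invented for illustration.

```python
# A toy of the "lots of little models" idea: each object in a scene is
# handled by its own small model, loosely like entities in a game
# engine. Object types and dynamics are hypothetical.

def ball(state, dt):        # small model: falling under gravity
    x, v = state
    return (x + v * dt, v - 9.8 * dt)

def wall(state, dt):        # small model: static object
    return state

REGISTRY = {"ball": ball, "wall": wall}

def step_scene(scene, dt=0.1):
    """Advance every object using its own little model."""
    return {name: (kind, REGISTRY[kind](state, dt))
            for name, (kind, state) in scene.items()}

scene = {"b1": ("ball", (5.0, 0.0)), "w1": ("wall", (0.0, 0.0))}
print(step_scene(scene))
```

Adding a new object type means registering one more small model, not retraining a monolith, which is the adaptability Beck's modular vision points toward.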
Unpacking the Implications
Dr. Beck's insights challenge the prevailing trend in AI research, urging a reevaluation of priorities. While the industry races to expand existing models, Beck's vision calls for a deeper understanding of intelligence itself. His emphasis on Bayesian inference and the brain's scientific method offers a compelling roadmap for future AI development.
In conclusion, Beck's perspective invites AI researchers, robotics enthusiasts, and curious minds alike to reconsider what constitutes intelligence — both artificial and biological. As we look to the future, the question remains: Will the AI of tomorrow think like a machine, or will it mirror the nuanced complexity of the human mind?
By Bob Reynolds, Senior Technology Correspondent
Watch the Original Video
AutoGrad Changed Everything (Not Transformers) [Dr. Jeff Beck]
Machine Learning Street Talk
1h 16m
About This Source
Machine Learning Street Talk
Machine Learning Street Talk, launched in September 2025, has quickly become a pivotal platform for AI enthusiasts and professionals alike. With 208,000 subscribers, the channel delves into the cutting-edge realm of artificial intelligence, offering rich discussions on advanced AI research. It features a broad spectrum of topics, including cognitive science, computational models, and philosophical insights, positioning itself as an essential resource for those seeking to navigate the intricate AI landscape.