
Why Moore's Law Explains Almost Nothing About Computing

The story of computing isn't about transistor counts—it's about the technological shifts that happened between them. Here's what really drove progress.

Written by AI. Marcus Chen-Ramirez

February 18, 2026


Photo: Branch Education / YouTube

Here's a thought experiment: the Nintendo Switch has 80,000 times more transistors than the Super Nintendo. So it should be 80,000 times more powerful, right?

Wrong. It's actually 1.4 million times more powerful.

This gap—between what Moore's Law predicts and what actually happened—is where the interesting story lives. Branch Education's new deep-dive into computer evolution makes a compelling case that we've been telling ourselves the wrong story about how we got from room-sized mainframes to pocket supercomputers. The transistor count doubled every two years, sure. But that's like explaining a novel's plot by counting the words.
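The arithmetic behind that gap is worth making explicit. A quick sketch, using the video's rounded figures, shows how much improvement transistor count alone leaves unexplained:

```python
# Ratios quoted in the video (rounded figures, Switch vs. Super Nintendo):
transistor_ratio = 80_000       # how many more transistors the Switch has
performance_ratio = 1_400_000   # how many more operations/sec it performs

# If performance scaled only with transistor count, these two would match.
# The leftover factor is everything Moore's Law doesn't explain:
# architecture, clock frequency, parallelism, memory, packaging.
unexplained_factor = performance_ratio / transistor_ratio
print(unexplained_factor)  # 17.5
```

In other words, each transistor in the Switch does roughly 17.5 times more useful work than each transistor in the Super Nintendo, and none of that shows up in a transistor count.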

The real story is weirder

The video divides the past 80 years into eight distinct technological ages, each defined not by how many transistors existed but by how engineers thought about packaging and using them. It's a framework that makes visible something most tech histories obscure: progress isn't linear accumulation. It's punctuated by fundamental shifts in approach.

Take the first age, 1945-1962, which Branch Education calls "transistorization." The transistor was invented in 1947. So why was the UNIVAC I, built in 1951, still using vacuum tubes? The answer captures something essential about technological change: better doesn't automatically mean adopted.

"The issue was that the early designs for transistors were very unreliable, sensitive to voltage spikes, and prone to random breaking," the video explains. "Vacuum tubes, on the other hand, had been invented almost 50 years earlier, were considerably more reliable, and if one broke, then the inside would become foggy or the glass would be visibly cracked, making it easily identifiable."

In other words, when you had a computer with 20,000 vacuum tubes, swapping them for components that failed mysteriously wasn't progress; it was a nightmare. The first age wasn't about inventing transistors. It was about making them reliable enough to trust.

The IBM story reveals the pattern

The second age, 1964-1977, centers on the IBM System 360—a family of mainframes so dominant they captured 70% of the computer market at their peak. The video calls it "the Model T of computers," and the comparison works. Like Ford's car, the System 360's real innovation wasn't performance—it was standardization.

The entire 360 family used compatible architecture. Software written for one model ran on all of them. You could upgrade by swapping out cabinets, not replacing entire systems. IBM essentially invented the upgrade cycle, the idea that hardware is a platform you iteratively improve rather than completely replace.

But here's where the framework gets predictive. IBM dominated the second age because they mastered its challenge: packaging discrete transistors. When the third age arrived—defined by integrated circuits and frequency scaling—IBM's advantages became liabilities. The video notes that IBM now controls less than 1% of the computer market.

The same pattern appears with Intel, which dominated ages three and four before failing to adapt to ages five and six. Now we're watching Nvidia's rise in the AI era, and the question the video implicitly asks is: what comes next, and who won't see it coming?

The Lego brick analogy actually works

To make processing power comprehensible, Branch Education uses an analogy that could have been gimmicky but instead clarifies: one operation per second equals one 2x4 Lego brick.

The 1945 ENIAC, with its 5,000 operations per second, builds a moderate cube. The 1991 Super Nintendo's 1.8 million instructions per second create a room-filling structure. The first iPhone in 2007, at 800 million operations per second, reaches two stories high. Modern graphics cards? The video says the Lego representation "gets ridiculously large."

The visualization does what good analogies should: it makes the scale visceral without sacrificing accuracy. You can picture the jump from ENIAC to iPhone. You understand why data centers for AI training require thinking about power and cooling at architectural scale.
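To get a feel for the scale the analogy implies, here is a rough sketch. The operations-per-second figures are the video's; the brick dimensions (a standard 2x4 brick, about 31.8 x 15.8 x 9.6 mm) are my assumption, and stacking bricks into a solid cube is a simplification, not the video's visualization:

```python
# One operation per second = one 2x4 Lego brick (the video's analogy).
# Brick dimensions below are assumed, not taken from the video.
BRICK_M3 = 0.0318 * 0.0158 * 0.0096  # ~4.8e-6 cubic metres per brick

for name, ops in [("ENIAC (1945)", 5_000),
                  ("Super Nintendo (1991)", 1_800_000),
                  ("iPhone (2007)", 800_000_000)]:
    # Edge length of a solid cube built from that many bricks.
    side = (ops * BRICK_M3) ** (1 / 3)
    print(f"{name}: cube about {side:.1f} m on a side")
```

Under these assumptions, the three machines come out as cubes roughly 0.3 m, 2.1 m, and 15.7 m on a side, which matches the article's progression from a moderate cube to a room-filling structure to something building-sized.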

What the video gets right about integrated circuits

The most technically dense section covers how integrated circuits emerged not as a replacement for discrete transistor packages but as a parallel development path. The first IC was invented in 1958, six years before IBM released the System 360 with its SLT (Solid Logic Technology) packages—essentially discrete transistors mounted on ceramic substrates.

This matters because it reveals technological change as branching, not linear. ICs initially cost over $10,000 each and were used only in specialized applications like the Apollo Guidance Computer. Meanwhile, SLT packages were reliable and manufacturable at scale. Both approaches coexisted until manufacturing improvements made ICs economically viable for commercial computers.

By 1975, the MOS 6502 processor—containing 4,528 transistors and costing just $150—made personal computing economically feasible. The video traces its impact through the "Trinity" of 1977 personal computers: the Apple II, Commodore PET, and TRS-80. Variants of the chip also powered the Atari and the NES. The 6502's low cost and decent performance at 440,000 instructions per second essentially created a market.

The incomplete transcript problem

The video promises eight ages but the transcript cuts off after two. We get transistorization and packaging, then references to "Age 3: The Frequency Race" without the content. This is frustrating because the framework's predictive power depends on seeing how each age's dominant paradigm becomes the next age's limitation.

What we can infer: the frequency race eventually hit physical limits (heat, power consumption, quantum effects at smaller scales). The industry shifted to multi-core processors, then to specialized architectures (GPUs, TPUs, neural processing units). Each shift rewarded different engineering approaches and different companies.

The pattern suggests the current age—let's call it the AI age, defined by massive parallel processing and specialized neural network hardware—will eventually hit its own limits. What comes next depends on which current assumption turns out to be the constraint.

Why this matters beyond tech history

The video's framework offers a way to think about technological disruption that's more useful than hand-waving about "exponential change." Yes, things got faster. But they got faster in specific, discontinuous ways that rewarded specific engineering choices.

IBM's decline wasn't inevitable. They lost because they optimized for an age that ended. Intel's struggles stem from the same dynamic. The framework suggests we should be asking: what is Nvidia optimizing for, and when does that age end?

The answer probably involves whatever comes after training ever-larger models on ever-more-data using ever-more-parallel processors. Edge computing? Neuromorphic chips? Something we don't have a name for yet? The history suggests it won't be a smooth continuation of current trends. It'll be a shift that makes current advantages irrelevant.

Branch Education's approach—focusing on technological ages rather than transistor counts—makes the pattern visible. Whether you're an engineer trying to predict which skills matter in five years or an investor trying to spot the next platform shift, the lesson is the same: the thing that's working now is preparing you for a world that's already ending.

Marcus Chen-Ramirez is a senior technology correspondent for Buzzrag

Watch the Original Video

The Incredible Evolution of Computers

Branch Education

33m 39s

About This Source

Branch Education

Branch Education is a YouTube channel with 2,620,000 subscribers that explores the science and engineering behind everyday technology, such as microchips and Bluetooth, through meticulous 3D animations. Founded by Theodore J Tablante, the channel has established itself as a valuable resource at the intersection of technology and education.
