Apple's M5 Pro & Max Just Changed Everything About Chips
Apple's M5 Pro and Max use chiplets for the first time, ditching efficiency cores entirely. Here's what that means for performance and why it matters.
Written by AI. Tyler Nakamura
March 4, 2026

Photo: Gary Explains / YouTube
Apple just dropped the M5 Pro and M5 Max, and for the first time in the company's silicon journey, these aren't just bigger versions of the base chip. They're fundamentally different machines, built using an approach Apple has never taken before—and one that Intel and AMD have been perfecting for years.
The headline change? Apple is now using chiplets. Or as they're calling it, "Fusion architecture." Same concept, different branding.
What Actually Changed
Gary Sims from Gary Explains broke down the architecture in detail, and the short version is this: instead of one monolithic piece of silicon doing everything, Apple is now stitching together two separate 3nm dies—one focused on CPU tasks, one handling GPU and memory—and connecting them with high-bandwidth, low-latency packaging.
This is the same basic approach that AMD uses with Ryzen and Intel uses with its tile-based designs. The benefit? You can optimize each die independently, improve manufacturing yields (smaller dies = fewer defects), and mix-and-match components more efficiently.
For Apple, this marks a significant shift. Previously, Fusion was just the interconnect they used to glue two M-series Max chips together to create an Ultra. Now it's the foundation of how Pro and Max chips are built from the ground up.
The Core Naming Just Got Weird
Here's where things get confusing in a very Apple way: they've completely reorganized how they name CPU cores, and it's not just marketing.
The M5 Pro and M5 Max now have:
- 6 "super cores" (what used to be called performance cores)
- 12 "performance cores" (completely new, derived from the super cores)
- Zero efficiency cores
You read that right—no efficiency cores at all. Apple ditched them entirely for these chips.
As Sims explains, "What were the performance cores are now called super cores. The efficiency cores are now called performance cores. However, they are a new design derived from the super cores, not from the previous generation."
So we've basically moved up a tier. The new "performance cores" aren't the old efficiency cores renamed—they're a power-efficient version of the super cores. It's like MediaTek's big-core-only approach: why have truly small cores when you can just make scaled-down versions of your best architecture?
The M5 Pro has an 18-core CPU total. The M5 Max? Same CPU config, just a beefier GPU bolted on (40 cores vs 20).
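The configurations above can be summed up in a quick sketch (numbers are the article's figures, not an official spec sheet):

```python
# Core counts for the M5 Pro and M5 Max as described in the article.
chips = {
    "M5 Pro": {"super_cores": 6, "performance_cores": 12, "gpu_cores": 20},
    "M5 Max": {"super_cores": 6, "performance_cores": 12, "gpu_cores": 40},
}

for name, c in chips.items():
    # CPU total = super cores + new big-derived "performance" cores
    total_cpu = c["super_cores"] + c["performance_cores"]
    print(f"{name}: {total_cpu}-core CPU, {c['gpu_cores']}-core GPU")
```

Run it and the 18-core total falls out of the 6 + 12 split, with the Max simply doubling the GPU.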
Performance Numbers That Actually Matter
Apple's claiming a 30% boost in multi-threaded performance for the M5 Pro over the M4 Pro. That's significant, especially considering every core in the new design is a "big" core rather than a mix of performance and efficiency designs.
The M5 Max sees a more modest 15% CPU improvement over the M4 Max, which makes sense—the CPU setup is identical to the M5 Pro. You're paying for GPU horsepower, not more CPU cores.
Speaking of GPU: this is where things get interesting. Apple says both chips offer four times the peak GPU compute compared to their M4 equivalents, and six times compared to the M1 Max. For AI workloads and graphics rendering, that's a monster jump.
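A quick back-of-the-envelope check on those two multipliers, using only the article's numbers, shows what they imply about the previous generation:

```python
# Apple's claims: M5 is 4x the M4 equivalent and 6x the M1 Max
# at peak GPU compute. Dividing the two gives the implied M4-over-M1
# ratio, since the multipliers share the same M5 baseline.
m5_vs_m4 = 4.0
m5_vs_m1 = 6.0

m4_vs_m1 = m5_vs_m1 / m5_vs_m4
print(m4_vs_m1)  # 1.5
```

In other words, taken at face value, Apple's own figures imply the M4 Max was only about 1.5x the M1 Max at peak compute, which makes the M5 jump the bulk of the gain.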
Gaming and ray tracing see about 20% gains in GPU performance, with a 30% uplift specifically for ray tracing thanks to Apple's third-generation ray tracing engine.
Memory Bandwidth Finally Catches Up
One area where Apple Silicon has always punched above its weight is its unified memory architecture: the CPU, GPU, and Neural Engine all share a single pool of RAM and access it directly.
The M5 Max can now hit 614 GB/s of memory bandwidth with 128GB of RAM configured. That's not a typo. For context, the M4 Max topped out at 546 GB/s.
Here's the thing people miss about these bandwidth numbers: they scale with how much RAM you actually buy. Each memory module gets its own controller, so more RAM literally means more bandwidth. If you're only rocking 64GB on the M5 Pro, you're sitting at 307 GB/s. Still excellent, but not the headline number.
Sims clarifies: "The way Apple's memory architecture works is that with each extra chunk of memory that you have, you get a new controller. So the overall bandwidth... is now up to 614 GB a second because you got up to 128 gigabytes of memory."
This matters most for large language models and other AI workloads that need to traverse huge amounts of memory. Video editors working with 8K ProRes? Same deal.
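The scaling Sims describes is simple to model: if each memory package brings its own controller, total bandwidth is just packages times per-package bandwidth. A minimal sketch, assuming 64GB packages and using the article's 307/614 GB/s figures (the per-package capacity is an illustrative assumption, not a confirmed spec):

```python
# Assumed: each memory package adds one controller contributing
# equal bandwidth. Figures from the article: 64 GB -> 307 GB/s,
# 128 GB -> 614 GB/s.
PER_PACKAGE_GB = 64        # assumed capacity per memory package
PER_PACKAGE_BW_GBPS = 307  # bandwidth contributed by each controller

def total_bandwidth(ram_gb: int) -> int:
    """Total memory bandwidth in GB/s for a given RAM configuration."""
    packages = ram_gb // PER_PACKAGE_GB
    return packages * PER_PACKAGE_BW_GBPS

print(total_bandwidth(64))   # 307
print(total_bandwidth(128))  # 614
```

The practical takeaway: the headline 614 GB/s is a property of the maxed-out configuration, not the chip itself.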
What This Means for Normal Humans
If you're eyeing a new MacBook Pro, here's the practical breakdown:
M5 Pro: Better CPU performance than last gen, solid GPU for most creative work, good for developers and people who need power without going all-in on cost. The 30% multi-threaded boost is real and you'll feel it in compile times and exports.
M5 Max: Same CPU as the Pro, but the GPU doubles. If you're doing 3D rendering, heavy AI work, or high-end video editing, this is your chip. The 4x GPU compute claim is for peak performance, but even real-world gains are substantial.
Both are currently available in 14" and 16" MacBook Pros with various configurations. Want fewer GPU cores to save money? Apple's got options.
The question floating around: will we see these chips in a Mac Studio or Mac Mini? The chiplet approach makes it easier for Apple to mix configurations without redesigning entire SOCs. A Mac Studio with M5 Max (or even an Ultra made from two M5 Max chips) would be an absolute beast for anyone doing professional creative work.
Why Chiplets Actually Matter
The move to chiplets isn't just about performance—it's about manufacturing flexibility and economics. When everything is one giant die, a single defect can ruin the whole chip. Smaller dies have better yields and can be optimized independently.
It also means Apple can potentially reuse dies across product lines more efficiently. Need a lower-end config? Use a GPU die with some cores disabled. Want flagship performance? Connect your best CPU die with your biggest GPU die.
Intel and AMD have been doing this for years, but they had different motivations—mainly trying to compete with each other while managing complex product stacks. Apple's doing it to scale their own architecture more efficiently while maintaining the unified memory approach that makes Apple Silicon special in the first place.
The real test will be thermal performance and power efficiency. Chiplets can create thermal hotspots if not managed correctly, and inter-die communication can introduce latency. Apple's betting their advanced packaging solves both issues.
The M5 Pro and Max represent Apple no longer just scaling up—they're building different products for different needs, using an architecture that gives them more room to maneuver. Whether that translates to better value for consumers or just more configuration options at premium prices? That's the question only real-world availability and pricing will answer.
—Tyler Nakamura
Watch the Original Video
You Won’t Believe How Apple Designed the M5 Pro & Max (Chiplets!)
Gary Explains
12m 39s
About This Source
Gary Explains
Gary Explains is a YouTube channel with around 344,000 subscribers, helmed by Gary Sims. The channel is committed to demystifying complex technology concepts, offering viewers clear and accessible explanations on a variety of computing topics. Whether delving into the details of processor architecture or exploring the latest in GPU technology, Gary Explains serves as a valuable resource for anyone eager to understand the technological underpinnings of modern devices.