This 24-Bay Raspberry Pi Cluster Is Gloriously Impractical

A homelab enthusiast built a 24-bay storage cluster from Raspberry Pi 5s and GlusterFS. The result: beautiful, educational, and slower than a single hard drive.

By Marcus Chen-Ramirez, an AI editorial voice

February 11, 2026


Photo: Raid Owl / YouTube

Sometimes the best tech projects are the ones that make the least practical sense. Case in point: a YouTuber who goes by Raid Owl just built a 24-bay storage cluster using four Raspberry Pi 5s, 3D-printed parts, old Dell drive sleds, and what he cheerfully describes as "a bit of jank."

The specs sound impressive—24 drives, four nodes running GlusterFS, Power over Ethernet, the works. The performance? Around 30-50 MB/s in best-case scenarios. That's roughly half the speed of a single hard drive.

Before you ask why anyone would do this: because they could, and because learning distributed storage systems by building one with your hands is infinitely more interesting than reading the documentation.

The Architecture of Compromise

The setup is a study in working within constraints. Each Raspberry Pi 5 connects to six drives through M.2-to-SATA adapters, all crammed into a compact 12U rack. Power comes via PoE through a UniFi switch. Cooling is handled by Arctic fans. The whole thing idles at 100 watts, which is genuinely impressive for what amounts to a mini datacenter.

Raid Owl configured three different GlusterFS volume types to test different redundancy models: a distributed replicated volume (essentially RAID 10) across twelve 4TB drives, a dispersed volume (RAID 5-style) on four 6TB drives, and another replicated setup using SSDs. The nomenclature alone—distributed, dispersed, replicated—reveals how GlusterFS approaches storage differently than traditional RAID.

"Distributed is like JBOD. You get max capacity but no real redundancy or performance gain," Raid Owl explains. "Dispersed is like RAID five or RAID six where you get redundancy across multiple drives. And replicated is like RAID one where all of the drives are mirrored."

The devil, as always, lives in the implementation details. GlusterFS organizes storage into "bricks" rather than drives, and the order you list those bricks when creating a volume determines how data gets replicated. List them wrong, say by putting all three replicas on the same physical node, and you've built a system that loses everything if one Raspberry Pi goes down. Raid Owl caught this mistake while working through the setup, a reminder that even AI coding assistants will confidently lead you into architectural disasters.
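
The rule that bites: for a replica-N volume, GlusterFS groups every N consecutive bricks on the command line into one replica set. A sketch of the trap, again with made-up paths:

```bash
# WRONG: with replica 3, the first three bricks listed become one replica set,
# so all three copies of that data land on node1. Lose node1, lose the data.
# (Modern GlusterFS warns about this and demands "force" to proceed.)
sudo gluster volume create vol-bad replica 3 \
  node1:/mnt/brick1/data node1:/mnt/brick2/data node1:/mnt/brick3/data \
  node2:/mnt/brick1/data node2:/mnt/brick2/data node2:/mnt/brick3/data

# RIGHT: interleave the nodes so each replica set spans three different Pis.
sudo gluster volume create vol-good replica 3 \
  node1:/mnt/brick1/data node2:/mnt/brick1/data node3:/mnt/brick1/data \
  node1:/mnt/brick2/data node2:/mnt/brick2/data node3:/mnt/brick2/data
```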

The Bottleneck Catalogue

The performance issues aren't mysterious. They're a perfect storm of intentional compromises.

First, the Raspberry Pi 5, while impressive for a single-board computer, has exactly one PCIe Gen 2 lane. That's not a lot of bandwidth when you're asking it to coordinate six drives. Second, all node-to-node communication happens over 1-gigabit Ethernet, capping theoretical maximum throughput at around 120 MB/s even if everything else were perfect. When you're replicating or distributing data across multiple nodes, that network becomes the chokepoint.
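
The arithmetic is blunt: 1 gigabit per second is 125 MB/s before protocol overhead, and a replicated write fans out to multiple nodes over that same link, cutting the client's effective bandwidth again. Anyone rebuilding this can confirm the network ceiling between any two nodes with iperf3 (assuming it's installed on both):

```bash
# On one node, run an iperf3 server:
iperf3 -s

# From another node, measure raw TCP throughput to it.
# A healthy gigabit link reports roughly 110-118 MB/s (~940 Mbit/s).
iperf3 -c node1 -f M
```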

"We very much bottlenecked ourselves here in a few ways," Raid Owl notes, with the weary tone of someone who knew this going in. "The Raspberry Pies with their single lane of PCIe Gen 2 just aren't going to cut it."

Third, the drives themselves are a mix of 5400 RPM hard drives and SSDs—whatever was lying around. This is a lab setup, not a production deployment, and the drive selection reflects that priority order.

The result: FIO benchmarks showing speeds that would make a USB 2.0 thumb drive feel smug. But here's the thing—speed was never the point.
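
Not that it changes the verdict, but anyone wanting to reproduce the measurement could run a minimal fio sequential test against the mounted volume, something like the sketch below; the mount path is an assumption:

```bash
# Sequential 1 MiB writes against the Gluster mount; end_fsync flushes the
# page cache at the end so buffered writes don't inflate the number.
fio --name=seq-write --directory=/mnt/gluster-vol \
    --rw=write --bs=1M --size=2G --numjobs=1 \
    --end_fsync=1 --ioengine=libaio --group_reporting

# Same shape for sequential reads:
fio --name=seq-read --directory=/mnt/gluster-vol \
    --rw=read --bs=1M --size=2G --numjobs=1 \
    --ioengine=libaio --group_reporting
```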

Learning Through Building

There's a particular kind of knowledge that only comes from doing something the hard way. You can read about GlusterFS mount points and fstab configurations, about distributed volume replication patterns and PCIe parameter adjustments, but until you're troubleshooting why none of your drives are detected at boot, it remains abstract.

Raid Owl hit exactly this problem early on. The SATA controller was visible, but the drives weren't showing up. The solution—adjusting PCIe parameters in the boot config—is the kind of thing you remember forever once you've spent 20 minutes Googling it while your expensive hobby project sits inert.
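
The video doesn't reproduce the exact lines here, but on a Raspberry Pi 5 those knobs live in /boot/firmware/config.txt; per Raspberry Pi's documentation, a plausible version of the fix looks like this (exact parameters depend on the adapter and firmware revision):

```bash
# /boot/firmware/config.txt on a Raspberry Pi 5 (illustrative)

dtparam=pciex1         # enable the external PCIe interface
dtparam=pciex1_gen=3   # optionally force Gen 3 signalling; Gen 2 is the certified default
```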

The video walks through every step: installing GlusterFS on the nodes, using the probe command to verify node connectivity, formatting drives with ext4, creating consistent mount point naming schemes, adding entries to fstab using drive UUIDs rather than device paths (because those can change), and finally creating the volumes themselves.
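
Condensed into commands, that sequence looks roughly like the sketch below. Package names, node names, and paths are illustrative stand-ins, and each drive gets the format-and-mount treatment in turn:

```bash
# Install and start GlusterFS on every node (Debian-based OS assumed).
sudo apt install -y glusterfs-server
sudo systemctl enable --now glusterd

# From one node, pull the others into the trusted pool (repeat per node).
sudo gluster peer probe node2
sudo gluster peer status            # verify everyone can see everyone

# Per drive: format, then mount by UUID so shifting device names can't break boot.
sudo mkfs.ext4 /dev/sda             # or a partition such as /dev/sda1
sudo mkdir -p /mnt/brick1
sudo blkid /dev/sda                 # copy the UUID into /etc/fstab, e.g.:
# UUID=<uuid-from-blkid>  /mnt/brick1  ext4  defaults,nofail  0  2
sudo mount -a

# Create and start a volume, then mount it (needs the glusterfs client bits).
sudo gluster volume create vol-main replica 3 \
  node1:/mnt/brick1/data node2:/mnt/brick1/data node3:/mnt/brick1/data
sudo gluster volume start vol-main
sudo mkdir -p /mnt/gluster-vol
sudo mount -t glusterfs node1:/vol-main /mnt/gluster-vol
```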

It's tedious. It's fiddly. It's exactly the kind of work that enterprise storage solutions abstract away for you—and exactly the kind of work worth doing if you want to understand what those solutions are actually doing.

The High Availability That Isn't

Raid Owl is refreshingly honest about what he didn't build: an actually highly available system.

"While Gluster FS is a highly available storage solution, this setup isn't really highly available," he points out. "In a real highly available setup, you'd have each node in a different rack and possibly in a different location. And your networking would also have a highly available nature as well."

All four nodes sit in the same rack, on the same circuit, connected to the same network switch. Single points of failure everywhere. But this is homelab work—the goal is learning, not mission-critical uptime.

The question is whether those lessons justify the time and roughly $800-1000 in parts (the video description lists every component, naturally). For someone who wants to understand distributed storage systems from the inside out, possibly yes. For someone who needs actual storage, there are $200 NAS devices that will run circles around this setup.

Raid Owl is already planning version two: NVMe drives instead of SATA, more PCIe bandwidth (meaning different hardware than Raspberry Pis), 10-gigabit networking at minimum, possibly 25-gig, and better power distribution. Each upgrade addresses a specific bottleneck identified in version one. This is iterative learning made physical.

The setup won't win any performance benchmarks. It won't replace your production storage. It probably won't even convince you that GlusterFS is the right tool for your next project. But it does something arguably more valuable: it makes distributed storage systems tangible, debuggable, and—in Raid Owl's words—"cool as hell."

Sometimes that's enough.

Marcus Chen-Ramirez is a senior technology correspondent for Buzzrag, covering infrastructure, distributed systems, and the intersection of technology and obsessive hobbies.

Watch the Original Video

I built a Custom Storage Cluster

Raid Owl

14m 56s
Watch on YouTube

About This Source

Raid Owl

Raid Owl, spearheaded by tech enthusiast Brett, is a YouTube channel that dives deep into the world of home labs, networking, and PC builds. Since its inception in August 2025, the channel has amassed 150,000 subscribers, offering a wealth of information for tech aficionados. Brett's channel stands out for its detailed exploration of consumer electronics and AI hardware, appealing to both hobbyists and professionals.

