This $30 Breadboard Scans Rooms in 3D. Here's How
Engineer Henrique Ferrolho built a functional 3D room scanner for $30 using AliExpress parts. The interesting part isn't the price—it's what it reveals.
By Marcus Chen-Ramirez
February 9, 2026

Photo: Henrique Ferrolho / YouTube
Engineer Henrique Ferrolho has built something quietly remarkable: a functioning 3D room scanner assembled on a breadboard for about $30 in AliExpress components. The setup maps physical spaces in real-time, accumulating point clouds as you rotate it. But the interesting part isn't the price tag—it's what this project reveals about how spatial sensing actually works, and the gaps between what sensors measure and what we think they measure.
The hardware is straightforward: an ESP32 microcontroller (£4), a VL53L5CX time-of-flight sensor (£5), and a BNO085 IMU for orientation tracking (£15). The time-of-flight sensor is the critical piece. Unlike a camera, which captures continuous light across an image plane, the VL53L5CX shoots out 64 discrete infrared laser pulses arranged in an 8x8 grid. Each pulse travels outward, bounces off whatever surface it encounters, and returns. The sensor measures that round-trip time—the "time of flight"—and converts it to distance.
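The round-trip conversion is simple arithmetic, but it makes clear why these sensors are hard to build: at the distances involved, the timing to be resolved is measured in nanoseconds. A minimal sketch (the helper name is mine, not from the project):

```python
# Convert a time-of-flight measurement to distance.
# The pulse travels out and back, so distance is half the round trip.
C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(round_trip_seconds: float) -> float:
    """Distance in metres from a measured round-trip flight time."""
    return C * round_trip_seconds / 2.0

# A target at the sensor's 4 m range limit returns in roughly 27
# nanoseconds -- resolving that timing is why raw readings are noisy.
round_trip = 2 * 4.0 / C
distance = tof_to_distance(round_trip)
```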
Ferrolho's visualization makes the sensor's fundamental constraint visible: you only get measurements along those 64 specific rays. "The points you see here, they're locked to these line segments. They cannot appear anywhere else," he explains in his demo. When he waves his hand over the sensor, points jump between his palm and the ceiling behind it, locked to their predetermined rays. The visualization shows each ray as a blue line extending from 2cm to 4 meters, the sensor's operational range.
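The "locked to rays" behaviour follows directly from the geometry: each zone has a fixed direction, and a distance reading can only place a point along that direction. A sketch of how such a ray grid might be generated (the square field-of-view mapping here is illustrative, not the sensor's exact optics):

```python
import math

def ray_directions(grid: int = 8, fov_deg: float = 45.0):
    """Unit direction vectors for a grid x grid pattern of rays,
    spread evenly across an assumed square field of view."""
    dirs = []
    half = math.radians(fov_deg) / 2
    for row in range(grid):
        for col in range(grid):
            # Map each zone centre to an angle in [-half, +half].
            ax = -half + (col + 0.5) * 2 * half / grid
            ay = -half + (row + 0.5) * 2 * half / grid
            x, y = math.tan(ax), math.tan(ay)
            norm = math.sqrt(x * x + y * y + 1.0)
            dirs.append((x / norm, y / norm, 1.0 / norm))
    return dirs

rays = ray_directions()
# A measured distance d along zone i yields the 3-D point d * rays[i];
# a hand 0.8 m away on zone 0 can only ever appear here:
point = tuple(0.8 * c for c in rays[0])
```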
This isn't how we typically think about 3D scanning. Consumer LIDAR—the kind Apple stuffed into recent iPhones—feels continuous and comprehensive. But all time-of-flight systems share this constraint: they sample space along specific paths. The VL53L5CX just makes that sampling visible.
The Noise Problem
Raw sensor data is jittery: points vibrate with ambient infrared light, electronic noise, and variations in surface reflectivity. Ferrolho addresses this with an exponential moving average filter, blending each new measurement with previous readings. Set the filter strength to 0.5, and each displayed position becomes 50% new measurement, 50% historical average. The points smooth out.
But there's a trade-off. Crank the filter to 0.9—90% history, 10% new data—and the smoothness increases but responsiveness tanks. When Ferrolho sweeps the sensor from ceiling to wall, the visualization lags noticeably behind the actual movement. Set it to 1.0, and new measurements effectively vanish; the display freezes on old data. He settles on 0.5 as a practical compromise.
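The filter itself is one line. A sketch, with "strength" meaning the weight kept on history, matching the article's 0.5 and 0.9 examples (the function name is mine):

```python
def ema_filter(history: float, new: float, strength: float) -> float:
    """Exponential moving average: blend the stored value with the
    new reading. strength is the weight on history, so 0.0 passes
    raw data straight through and 1.0 freezes the display."""
    return strength * history + (1.0 - strength) * new

# One noisy ray smoothed at the article's compromise setting of 0.5:
readings = [2.00, 2.10, 1.95, 2.05, 2.02]
smoothed = readings[0]
for r in readings[1:]:
    smoothed = ema_filter(smoothed, r, strength=0.5)
```

At strength 1.0 the new reading contributes nothing, which is exactly the frozen-display behaviour Ferrolho demonstrates.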
This is a fundamental tension in sensor processing: you can have clean data or responsive data, but optimizing for one degrades the other. Every autonomous vehicle, every robot, every AR headset confronts this same trade-off. Most solve it with sensor fusion—combining multiple imperfect sensors whose errors don't correlate—but Ferrolho's single-sensor demo exposes the underlying problem cleanly.
RANSAC vs. Least Squares
The project's most instructive moment comes when Ferrolho demonstrates plane fitting. Point the sensor at a ceiling, and the software can fit a plane to those 64 measured points. The naive approach—least squares fitting—minimizes total error across all points. When every point represents a flat ceiling, this works fine.
But point the sensor at both ceiling and wall simultaneously, and least squares produces a diagonal plane splitting the difference. "This is not very useful," Ferrolho notes with admirable understatement.
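A common way to implement least-squares plane fitting is via the singular value decomposition of the mean-centred points; this sketch (my construction, not the project's code) reproduces both behaviours, recovering a flat ceiling exactly and tilting diagonally when a perpendicular wall is mixed in:

```python
import numpy as np

def fit_plane_least_squares(points: np.ndarray):
    """Fit a plane minimising total squared distance to ALL points.
    Returns (centroid, unit normal) for an (N, 3) array."""
    centroid = points.mean(axis=0)
    # The normal is the right singular vector with the smallest
    # singular value of the mean-centred cloud.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# 64 points on a flat "ceiling" at z = 2.4 m: the fit is exact.
xs, ys = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8))
ceiling = np.column_stack([xs.ravel(), ys.ravel(), np.full(64, 2.4)])
_, n_ceiling = fit_plane_least_squares(ceiling)

# Add a perpendicular "wall" at x = 0 and the single fitted plane
# tilts to split the difference -- the diagonal Ferrolho shows.
wall = np.column_stack([np.zeros(64), xs.ravel(), ys.ravel()])
_, n_mixed = fit_plane_least_squares(np.vstack([ceiling, wall]))
```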
He switches to RANSAC—Random Sample Consensus. Instead of treating all points equally, RANSAC repeatedly samples three random points, fits a plane to them, checks how many other points agree, and keeps the best result after about 100 iterations. "The key difference is instead of trying to please all the points equally, RANSAC finds the plane that the majority of points agree on and ignores the outliers."
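The loop Ferrolho describes can be sketched in a few dozen lines of pure Python. The iteration count matches his "about 100"; the 2 cm inlier tolerance is my assumption, as is the function name:

```python
import random

def ransac_plane(points, iters=100, tol=0.02):
    """RANSAC plane fit: sample 3 points, build their plane, count
    inliers within tol metres, and keep the consensus winner."""
    best_plane, best_inliers = None, []
    for _ in range(iters):
        p0, p1, p2 = random.sample(points, 3)
        # Plane normal from the cross product of two edge vectors.
        u = [p1[i] - p0[i] for i in range(3)]
        v = [p2[i] - p0[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-12:
            continue  # degenerate sample: the 3 points were collinear
        n = [c / norm for c in n]
        inliers = [p for p in points
                   if abs(sum(n[i] * (p[i] - p0[i]) for i in range(3))) < tol]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (p0, n), inliers
    return best_plane, best_inliers

# A "ceiling" at z = 2.4 m plus two spurious "pen" points well below:
pts = [(i * 0.1, j * 0.1, 2.4) for i in range(8) for j in range(8)]
pts += [(0.3, 0.3, 1.0), (0.5, 0.1, 0.7)]
(p0, n), inliers = ransac_plane(pts)
# The consensus plane is the ceiling; the pen points are ignored.
```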
When Ferrolho waves a pen over the sensor to introduce spurious measurements, least squares wobbles wildly. RANSAC holds steady, identifying the ceiling as the dominant surface. It's a masterclass in robust statistics: sometimes the right answer comes from ignoring data, not incorporating it.
RANSAC dates to 1981. It's taught in introductory computer vision courses. Yet it's also foundational to how autonomous systems operate in noisy environments—from drone navigation to archaeological site mapping. Watching it work in real-time on a breadboard makes its utility visceral in a way equations don't.
Mapping Mode and the Translation Problem
The system's party trick is mapping mode. Up to this point, Ferrolho has been showing instantaneous snapshots—the 64 points the sensor sees right now. Enable mapping mode, and points accumulate instead of replacing each other. The IMU, tracking orientation in real time, ensures each measurement lands in the correct position in 3D space. Rotate the breadboard, and a point cloud map of the room emerges: walls, ceiling, desk, monitor, floor.
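Conceptually, mapping mode is just rotating each sensor-frame point by the IMU's current orientation before appending it to the map. This sketch uses a single yaw angle for brevity; the real BNO085 supplies a full 3-D orientation (typically as a quaternion), and all names here are mine:

```python
import math

def rotate_z(point, yaw_rad):
    """Rotate a sensor-frame point into the world frame about the
    vertical axis -- a simplified stand-in for the IMU orientation."""
    x, y, z = point
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (c * x - s * y, s * x + c * y, z)

world_map = []
# Each frame pairs the sensor's points with the IMU's yaw at capture.
frames = [(0.0, [(0.0, 2.0, 0.0)]),    # facing one wall
          (90.0, [(0.0, 2.0, 0.0)])]   # after rotating in place
for yaw_deg, frame in frames:
    for p in frame:
        world_map.append(rotate_z(p, math.radians(yaw_deg)))
# The identical sensor reading lands at two different world positions
# because the orientation differs -- that accumulation is the map.
```

The same bookkeeping is also why translation breaks the map: if the breadboard moves sideways without the software knowing, every rotated point is placed relative to the wrong origin.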
"We essentially have a map of the room. We have a mapping device," Ferrolho says as he pans around his workspace, the visualization building a sparse but recognizable 3D representation.
Then he names the limitation: "The IMU is excellent at measuring rotation, but it cannot accurately track translations, moving the sensor side to side or forward and back." Walk around with the breadboard, and the map drifts into incoherence. The system only works if you rotate in place.
This is why SLAM—Simultaneous Localization and Mapping—is hard. IMUs measure angular changes precisely but accumulate position error rapidly. Time-of-flight sensors measure distances accurately but lack peripheral vision. Cameras provide rich context but struggle with scale. Building systems that actually navigate spaces requires fusing these imperfect modalities, and even then, drift happens.
Ferrolho's constraint—rotate but don't translate—isn't a failure of his implementation. It's an honest acknowledgment of what a $15 IMU can and cannot do. Consumer devices that handle translation smoothly are either using visual-inertial odometry (processing camera feeds in real time) or have access to external reference points. Those capabilities don't fit on a breadboard, and they certainly don't cost $30.
What $30 Actually Buys
This project succeeds precisely because it's constrained. By stripping away the complexity that makes commercial systems opaque, Ferrolho has built something legible. You can see the 64 rays. You can watch the filter trade responsiveness for stability. You can observe RANSAC reject outliers in real time. You can understand why translation fails where rotation succeeds.
There's a broader pattern here. As sensing technology commodifies—time-of-flight sensors are now $5 commodity parts—the barrier to understanding shifts from hardware access to conceptual clarity. Ferrolho's contribution isn't making 3D scanning cheap; it's making the underlying principles visible. The GitHub repo includes the full source code. Anyone can replicate this, modify it, extend it.
Will this replace a commercial LIDAR scanner? Obviously not. Is it useful for robotics projects, educational demonstrations, or just building intuition about how spatial sensing works? Demonstrably yes. Ferrolho closes with exactly that framing: "You can build something that gives you a real intuition for what the sensors are actually measuring."
The democratization of sensor technology isn't just about price. It's about making the mysterious measurable, the opaque transparent. Thirty dollars of components, assembled on a breadboard, teaching more about 3D sensing than most product marketing ever will.
—Marcus Chen-Ramirez
Watch the Original Video
Turn a Time-of-Flight Sensor into a 3D Scanner
Henrique Ferrolho
11m 18s

About This Source
Henrique Ferrolho
Henrique Ferrolho is an emerging YouTube creator who explores the realms of advanced technology and DIY electronics. Despite a modest subscriber base of 3,820, Henrique has carved out a niche by focusing on specialized topics such as 3D scanning technology, time-of-flight sensors, and IMU applications since launching his channel in February 2026.