Skate of the Art Street Mapping Tech
What do you get when you cross robot vacuum tech with a helmet and rollerblades? The weirdest street mapping rig you have ever seen.
They do not come around very often, but they certainly do get around, so you may have caught a glimpse of a Google Street View car at one time or another. If you have, you would know it — they are not exactly good at blending in with their surroundings. The tower of cameras and other hardware bolted to the roof for street-level mapping gives them away in a second.
Equally conspicuous are the street mappers that are mounted on rollerblades. What? You have never seen one? That might be because there is likely only one in the world, and that lone street mapper is Owen Trueblood. The 1990s may be over, but even so, Trueblood feels no shame rollerblading through the streets of New York City. Oh, and with a giant hat loaded down with nearly as much hardware as a Street View car, at that. But it is all worth it, because this is being done in the name of science, or something.
Trueblood's one-of-a-kind mapping rig, which he calls the Helmdar, is built around a 2D LiDAR scanner (RPLidar A1) that is often found inside robot vacuums. It is mounted on a frame of aluminum extrusions fastened to a helmet using a mix of 3D-printed brackets, VHB tape, and some trusty white duct tape. The scanner is plugged directly into a Google Pixel 6 smartphone via a USB-to-serial adapter, which provides both data and power connections.
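To get a feel for what comes over that serial link, here is a small sketch of decoding one standard RPLidar measurement packet, based on Slamtec's publicly documented protocol (5 bytes per return: start flags and quality, then a fixed-point angle and distance). This is an illustrative parser, not Trueblood's actual driver code.

```python
def parse_measurement(pkt: bytes):
    """Decode one 5-byte RPLidar standard measurement packet.

    Layout (per Slamtec's protocol docs):
      byte 0: bit0 = start flag S, bit1 = inverted S, bits 2-7 = quality
      byte 1: bit0 = check bit (always 1), bits 1-7 = low bits of angle_q6
      byte 2: high bits of angle_q6 (angle in 1/64 degree units)
      bytes 3-4: distance_q2, little-endian (distance in 1/4 mm units)
    """
    b0, b1, b2, b3, b4 = pkt
    start, inv_start = b0 & 0x1, (b0 >> 1) & 0x1
    if start == inv_start:
        raise ValueError("corrupt packet: start flags do not disagree")
    if not (b1 & 0x1):
        raise ValueError("corrupt packet: check bit not set")
    quality = b0 >> 2
    angle_deg = ((b1 >> 1) | (b2 << 7)) / 64.0
    distance_mm = (b3 | (b4 << 8)) / 4.0
    return quality, angle_deg, distance_mm
```

Decoding fixed-point fields like this on the phone keeps the scanner side dumb and cheap, which is exactly why the same sensor works in a robot vacuum.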
The phone runs an Android app developed by Trueblood. It includes a custom LiDAR driver and captures 6-DoF pose data using Google's ARCore toolkit. This lets the system gather both spatial LiDAR data and positional tracking data, so the phone knows exactly where it is and how it is oriented in space. The app logs all of this to a binary file as Trueblood glides through the city.
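The core trick of pairing a 2D scanner with 6-DoF tracking is simple geometry: each sweep lives in the scanner's own plane, and the tracked pose lifts it into the world. A minimal sketch of that projection (assuming the pose is given as a 4x4 homogeneous transform, which is not necessarily how Trueblood's app stores it):

```python
import numpy as np

def scan_to_world(pose: np.ndarray, angles_deg, ranges_m) -> np.ndarray:
    """Project one 2D LiDAR sweep into the world frame.

    pose: 4x4 homogeneous transform of the sensor (from ARCore-style tracking)
    angles_deg, ranges_m: polar returns from the scanner, in its own plane
    Returns an N x 3 array of world-frame points.
    """
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    r = np.asarray(ranges_m, dtype=float)
    # Points lie in the scanner plane (z = 0 in the sensor frame);
    # homogeneous coordinates let one matrix apply rotation + translation.
    pts = np.stack([r * np.cos(a), r * np.sin(a),
                    np.zeros_like(a), np.ones_like(a)])
    return (pose @ pts)[:3].T
```

Because the helmet is constantly tilting and turning, every sweep gets a different pose, and the stack of transformed planes is what builds up the 3D point cloud.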
To counterbalance the front-heavy setup, a power bank is mounted at the back of the helmet. And for flair — and maybe future experiments — the sides of the helmet sport AprilTags, machine-readable visual markers often used in robotics. While these tags were not involved in the mapping process itself, Trueblood explored using them later to align 3D scans with video footage for visualization purposes.
The solution may seem cobbled together, but it is inexpensive and easy to work with. To review scans in the field, for instance, Trueblood did not bring a high-powered laptop. Instead, he strapped a cheap laptop to a board with a handle to carry it along. Then the phone was plugged into it, and a custom Three.js web app was used to visualize the collected point clouds in 3D.
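A binary log like the one the app writes is easy to shuttle between phone, laptop, and browser because any side can parse a fixed record layout. Here is a hypothetical record format (not Trueblood's actual one): a timestamp, a 7-float pose (translation plus quaternion), a point count, then angle/distance pairs.

```python
import struct

# Hypothetical layout, little-endian: double timestamp, 7 x float32 pose
# (tx, ty, tz, qx, qy, qz, qw), uint32 point count, then float32 pairs.
HEADER = struct.Struct("<d7fI")
POINT = struct.Struct("<2f")

def write_record(f, t: float, pose7, points) -> None:
    """Append one timestamped sweep (pose + polar returns) to a log file."""
    f.write(HEADER.pack(t, *pose7, len(points)))
    for angle_deg, distance_mm in points:
        f.write(POINT.pack(angle_deg, distance_mm))

def read_record(f):
    """Read the next sweep, or return None at end of file."""
    hdr = f.read(HEADER.size)
    if len(hdr) < HEADER.size:
        return None
    t, *rest = HEADER.unpack(hdr)
    pose7, n = tuple(rest[:7]), rest[7]
    points = [POINT.unpack(f.read(POINT.size)) for _ in range(n)]
    return t, pose7, points
```

A reader like this on the laptop side is all a Three.js-style viewer needs to feed transformed sweeps into a point cloud, no high-powered hardware required.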
The scans produced are not especially useful as maps, but they capture something more. They represent a detailed record of Trueblood's movements through the city. Speed changes, head turns, even ARCore tracking failures all leave visible traces in the scan, making each point cloud a kind of digital memory painting.