Try to Contain Yourself

Benchmarks show Docker can improve real-time performance in robotics, debunking the myth that it's too slow for time-critical applications.

nickbild

Running Docker containers is not just for people with mild OCD who like everything to be in its proper place. Containerization has become an invaluable strategy for professional software development teams looking to avoid a trip into the depths of dependency hell. By keeping everything an application needs in a single package, isolated from the rest of the system, consistent application performance and higher levels of security are much easier to achieve. But everything, including Docker, comes at a cost, right?

Conventional wisdom says that the primary cost is a small performance hit. After all, any additional software layer must have some computational cost, so this makes intuitive sense. For most applications, especially in the business world, we have an overabundance of hardware resources these days, so a tiny performance hit is generally an acceptable trade-off for the myriad benefits of containerization. But in the world of real-time applications and robotics, where any added latency or jitter can mean a missed control deadline, Docker is largely avoided.

A comparison of latency over time (📷: robocore)

However, conventional wisdom does sometimes fail us. Shouldn’t we check whether our intuitions are actually true before making an important decision? The team over at robocore thought so, so they took a deep dive into Docker to gather some hard data and determine whether it really does slow things down. The results might surprise you and make you rethink your development strategy.

The team focused on robotics workloads with strict real-time requirements, such as control loops, high-rate sensor streams, and perception pipelines. Using a Jetson Orin Nano, they ran benchmarks comparing Dockerized ROS 2 setups against native execution, measuring latency, throughput, and jitter under both idle and heavily loaded CPU conditions.
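The article does not reproduce robocore's exact harness, but a comparison along these lines can be sketched with stock tooling. Everything below is illustrative rather than the team's actual configuration: the osrf/ros:humble-desktop image, the demo talker node, and the load parameters are all assumptions.

```sh
# Native baseline: publish at a steady rate, then watch the achieved
# rate and its spread (ros2 topic hz reports min/max/std dev, a rough
# proxy for jitter).
ros2 run demo_nodes_cpp talker &
ros2 topic hz /chatter

# Containerized equivalent: the same node inside Docker, with host
# networking so DDS discovery behaves as it does natively.
docker run --rm --network=host osrf/ros:humble-desktop \
  ros2 run demo_nodes_cpp talker

# The "heavily loaded" condition: saturate some cores while the
# benchmark runs (4 workers for 120 s here, purely for illustration).
stress-ng --cpu 4 --timeout 120s
```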

What they found is that at idle, the differences between native and containerized execution were negligible. More interestingly, under heavy load, Docker often matched, or even outperformed, native execution in terms of worst-case latency. This may seem counterintuitive, but the unexpected boost traces back to Linux’s Completely Fair Scheduler (CFS). Because Docker places a container’s processes into a single cgroup, CFS can sometimes allocate CPU time more evenly to that group than to equivalent processes running directly on the host, smoothing out performance spikes.
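This grouping is visible from the host: Docker places each container in its own cgroup, and its CPU flags map onto CFS bandwidth and weight controls. The sketch below is illustrative, not robocore's setup, and assumes cgroup v2 with the systemd driver; paths differ on cgroup v1 systems, and the CPU values here are arbitrary examples.

```sh
# Start a container with explicit CFS bandwidth (2 CPUs' worth) and
# a relative scheduling weight; values are arbitrary examples.
docker run -d --rm --name cfs-demo --cpus=2 --cpu-shares=1024 \
  ubuntu:22.04 sleep 300

# Inspect the CFS knobs Docker set for the container's cgroup
# (cgroup v2 + systemd driver paths; cgroup v1 layouts differ).
CID=$(docker inspect -f '{{.Id}}' cfs-demo)
cat /sys/fs/cgroup/system.slice/docker-${CID}.scope/cpu.max     # quota and period
cat /sys/fs/cgroup/system.slice/docker-${CID}.scope/cpu.weight  # weight derived from --cpu-shares
```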

Throughput test results (📷: robocore)

Throughput tests also showed little performance penalty under Docker. In fact, containerized setups sometimes held target message rates more consistently under CPU pressure. Jitter benchmarks, which are important for understanding the stability of control loops, showed median values very close to native. Careful configuration, such as increasing shared memory, using host IPC, and explicitly pinning CPU cores, could improve container performance further still.
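In Docker terms, those tweaks correspond to standard run-time flags. The commands below are a minimal sketch: the robot_app image and launch target are hypothetical, and the core numbers and shared memory size are placeholders to adapt to your platform.

```sh
# Pin the container to dedicated cores and enlarge its /dev/shm for
# shared-memory transports (image, launch target, and values are
# placeholders, not robocore's configuration).
docker run --rm --network=host --cpuset-cpus=2,3 --shm-size=1g \
  robot_app ros2 launch robot_bringup robot.launch.py

# Alternatively, share the host's IPC namespace outright; the container
# then uses the host's /dev/shm, so --shm-size no longer applies.
docker run --rm --network=host --cpuset-cpus=2,3 --ipc=host \
  robot_app ros2 launch robot_bringup robot.launch.py
```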

The main takeaway of this work is that Docker does not kill real-time performance, at least not when configured properly. So, the next time someone dismisses Docker for robotics as too slow, you might just ask if they have actually measured it. The data suggests that with the right setup you can have both the convenience of containers and the performance your robots require.
