What’s the Word on the Street?

Self-driving cars can stay safer by sharing information with each other using NYU Tandon's decentralized system for AI model swapping.

Nick Bild
9 months ago • Vehicles

Self-driving vehicles have come a long way, yet it would be hard to argue that the technology has fully arrived. Only a small share of cars on the market have self-driving capabilities, and while some diehards think they are the best thing since sliced bread, a large percentage of the population would rather let their dog take the wheel than trust a computer running artificial intelligence (AI) algorithms.

One major factor in the public’s lack of trust in these systems stems from the fact that the real world is just so messy. The algorithms are trained on mountains of data so that they can deal with just about any situation that could pop up, but eventually, something completely unexpected is all but guaranteed to happen. And when it does, you won’t want to be sitting in that car wondering how it will deal with it.

I heard it through the grapevine

A possible solution to this problem involves designing algorithms that continually learn, then share their knowledge with other vehicles. Such an approach gives these systems far more experience than their initial training and their own driving alone could ever provide.

Researchers at NYU Tandon have introduced a new way of dealing with this challenge that they call Cached Decentralized Federated Learning (Cached-DFL). Unlike traditional methods, which rely on centralized servers or direct data sharing, Cached-DFL enables autonomous vehicles to learn from one another indirectly while maintaining privacy.

Rather than sending raw data to a central server, vehicles train their own AI models locally and exchange those models with others when they come within 100 meters. But what makes Cached-DFL different is that it allows vehicles to pass along models they have received from previous encounters, extending the range of knowledge transfer beyond direct interactions. Each car maintains a cache of up to 10 external models and updates its own AI every 120 seconds, ensuring that fresh and relevant knowledge is prioritized.
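The exchange described above can be sketched in a few lines of Python. This is an illustrative toy, not the researchers' implementation: the `Vehicle` class, the model-as-list-of-floats representation, and the freshest-first cache-trimming rule are all assumptions; only the 100-meter exchange radius and the 10-model cache limit come from the article.

```python
CACHE_SIZE = 10         # max external models a vehicle keeps (per the article)
EXCHANGE_RANGE_M = 100  # model-exchange radius in meters (per the article)

class Vehicle:
    """Toy vehicle holding its own model plus a cache of received models."""

    def __init__(self, vid, weights):
        self.vid = vid
        self.weights = weights  # this vehicle's locally trained parameters
        self.cache = {}         # vid -> (weights, timestamp) of received models

    def exchange(self, other, distance_m, now):
        """Pull models from another vehicle when it is within range."""
        if distance_m > EXCHANGE_RANGE_M:
            return
        # The other vehicle shares its own model plus everything in its
        # cache -- relaying cached models is what lets knowledge hop
        # beyond direct encounters.
        incoming = {other.vid: (other.weights, now), **other.cache}
        for vid, entry in incoming.items():
            if vid != self.vid:
                self.cache[vid] = entry
        # Keep only the freshest CACHE_SIZE entries (assumed policy).
        newest = sorted(self.cache.items(), key=lambda kv: kv[1][1],
                        reverse=True)
        self.cache = dict(newest[:CACHE_SIZE])
```

In this sketch a single encounter copies not just the neighbor's own model but its entire cache, which is the multi-hop relay behavior the article describes.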

To prevent outdated information from corrupting the learning process, the system automatically discards models based on a staleness threshold. This ensures that self-driving vehicles continue to adapt to current road conditions rather than relying on outdated data.
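A minimal sketch of that eviction step, continuing the toy cache representation above (`vid -> (weights, timestamp)`): the 120-second threshold value and the FedAvg-style averaging used to fold cached models into the local one are assumptions for illustration; the article only states that stale models are discarded and that each car updates its own AI every 120 seconds.

```python
STALENESS_S = 120.0       # assumed staleness threshold, in seconds
MERGE_INTERVAL_S = 120.0  # local model updated every 120 s (per the article)

def evict_stale(cache, now, threshold=STALENESS_S):
    """Drop cached models older than the staleness threshold."""
    return {vid: (w, ts) for vid, (w, ts) in cache.items()
            if now - ts <= threshold}

def merge(own_weights, cache):
    """Average own parameters with surviving cached ones.

    A simple FedAvg-style elementwise mean; the paper's actual
    aggregation rule may weight models differently.
    """
    models = [own_weights] + [w for (w, _) in cache.values()]
    return [sum(vals) / len(models) for vals in zip(*models)]
```

Run every `MERGE_INTERVAL_S` seconds, eviction followed by merging keeps the local model anchored to recent road conditions rather than long-gone ones.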

You get knowledge, and you get knowledge! You all get knowledge!

The researchers tested Cached-DFL using computer simulations based on Manhattan’s street layout. In the simulations, virtual vehicles moved through the city, making random turns at intersections. The study found that the new system outperformed traditional decentralized learning approaches, which suffer when vehicles do not frequently meet. By enabling a multi-hop transfer of information — where vehicles act as relays even if they have not directly experienced certain conditions — Cached-DFL significantly improves learning efficiency.

This work has the potential to build trust with operators of self-driving vehicles by making them more reliable and capable of handling unforeseen situations. The ability to indirectly share knowledge about road hazards, traffic conditions, and obstacles makes this approach particularly well-suited for urban environments, where all kinds of unexpected things happen every day.

Beyond self-driving cars, Cached-DFL could also be applied to other smart mobile agents, such as drones, robots, and satellites, to improve decentralized learning in a variety of fields. As AI continues to shift from centralized servers to edge devices, solutions like Cached-DFL could help to usher in a more intelligent future in autonomous technology.
