Keep It to Yourself

This AI accelerator chip has security features that combat side-channel attacks, keeping on-device data safe from malicious hackers.

Nick Bild

Researchers are continually refining artificial intelligence (AI) algorithms, giving them new capabilities all the time, and those capabilities are being leveraged to power more consumer devices by the day. Devices like these are already being used for health monitoring and a wide variety of other applications that collect and process highly sensitive data.

Because AI algorithms typically demand substantial computational resources, cloud-based computing has often been used to handle the heavy lifting. But this raises a great many privacy concerns: data in transit can be stolen, and the security of data sitting in remote data centers is sometimes questionable. For reasons such as these, many hardware and software optimizations have been developed that allow powerful AI algorithms to run directly on-device, eliminating the need to transfer data to the cloud.

Private data is very valuable to malicious hackers, however, so these innovations have not put an end to the cat-and-mouse game. The next front is side-channel attacks, in which the physical properties of a device are exploited to steal data from a system. In these cases, data can be compromised without ever leaving the physical hardware.

Sharing is not always caring

Researchers from MIT and the MIT-IBM Watson AI Lab are working to stay on top of this emerging threat. They have developed an AI accelerator — a processor that facilitates efficient on-device computations — with a number of security features designed to thwart side-channel attacks. The strong security offered by this chip introduces only minimal overhead, which helps keep the algorithms running at top speed.

The team’s chip performs in-memory computation, a growing trend, to increase performance. But such devices cannot store a full model’s weights in the in-memory compute area, and instead must shuttle data between it and a larger, separate memory. A hacker who probes the bus connecting these components can expose that data, and monitoring the chip’s power consumption can reveal the same information.
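
To make the exposure concrete, the toy Python sketch below models the two leakage paths described above: a probe on the bus sees the raw weight values as they are transferred, and power draw is approximated by the Hamming weight of each transferred byte (a standard simplification used in power-analysis work). The weight values and the power model here are illustrative assumptions, not details of the MIT design.

    # Illustrative sketch (not the MIT design): why unprotected weight
    # transfers leak through the two side channels named above.
    import random

    def hamming_weight(byte):
        """Number of set bits; a toy proxy for dynamic power draw."""
        return bin(byte).count("1")

    # Hypothetical model weights held in the larger, separate memory.
    weights = [random.randrange(256) for _ in range(16)]

    bus_trace = []    # what a probe on the memory bus would capture
    power_trace = []  # what a power monitor would capture

    for w in weights:
        bus_trace.append(w)                    # plaintext word visible on the bus
        power_trace.append(hamming_weight(w))  # power correlates with the data

    # With no protection, the bus trace *is* the secret data, and the power
    # trace narrows each byte down to the few values sharing its Hamming weight.
    print("weights:    ", weights)
    print("bus trace:  ", bus_trace)
    print("power trace:", power_trace)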

A trio of techniques was employed to defeat these efforts. First, data in the in-memory compute area is split into random pieces, so an attacker can never reconstruct the real values from what is stored there. To defeat bus probing, a lightweight cipher encrypts data as it is transferred between main memory and the computation unit. Finally, the encryption key itself is derived from random variations in the chip left over from the manufacturing process, which makes it virtually impossible to clone.
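
The toy Python sketch below illustrates the flavor of these three countermeasures: additive masking to split a value into random shares, an XOR keystream standing in for the lightweight bus cipher, and a hash of chip-specific variation bits standing in for the physically unclonable key. The share count, the hash-based "cipher," and the variation bits are all stand-ins chosen for readability; the actual chip implements these ideas in dedicated hardware.

    # Illustrative sketch (not the actual silicon): toy versions of the three
    # countermeasures described above.
    import hashlib
    import random

    WORD = 256  # operate on single bytes for simplicity

    def split_into_shares(value, n=3):
        """Additive masking: store random shares whose sum (mod 256) is the
        value. Any subset of fewer than n shares looks uniformly random."""
        shares = [random.randrange(WORD) for _ in range(n - 1)]
        shares.append((value - sum(shares)) % WORD)
        return shares

    def combine_shares(shares):
        return sum(shares) % WORD

    def puf_key(chip_variations):
        """Stand-in for a physically unclonable function: derive a key from
        per-chip manufacturing-variation bits that cannot be copied."""
        return hashlib.sha256(bytes(chip_variations)).digest()

    def bus_encrypt(plaintext, key, nonce):
        """Stand-in for the lightweight bus cipher: XOR with a keystream
        derived from the PUF key and a per-transfer nonce (a real design
        would use a dedicated lightweight cipher)."""
        stream = hashlib.sha256(key + nonce.to_bytes(8, "big")).digest()
        return bytes(p ^ s for p, s in zip(plaintext, stream))

    bus_decrypt = bus_encrypt  # XOR keystream is its own inverse

    # 1) Data held in the in-memory compute area is split into random shares.
    secret_weight = 173
    shares = split_into_shares(secret_weight)
    assert combine_shares(shares) == secret_weight

    # 2) + 3) Data crossing the bus is encrypted under a key derived from
    # chip-specific variation, so a probed bus shows only ciphertext.
    key = puf_key([1, 0, 1, 1, 0, 0, 1, 0])  # hypothetical variation readout
    ciphertext = bus_encrypt(bytes([secret_weight]), key, nonce=42)
    assert bus_decrypt(ciphertext, key, nonce=42) == bytes([secret_weight])

    print("shares:", shares, "-> reconstructed:", combine_shares(shares))
    print("on the bus:", ciphertext.hex())

The point of the masking step is that each stored share, taken on its own, is statistically random, so neither a memory dump nor a power measurement of a single share reveals anything about the underlying value.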

Did it pass the test?

To test the security of the new chip, the researchers took on the role of malicious hackers and attempted to crack their own system. Even after millions of attempts, they came up empty-handed. When working with an unprotected chip, for comparison, the team was able to reconstruct hidden data after about 5,000 attempts.

The methods introduced in this work may not be appropriate for every use case. The additional circuitry required increases both the size and the cost of the chip. And while the hits to performance and energy efficiency are not substantial, they are a factor. But where security and privacy are crucial, this chip may prove invaluable in the future.
