A Power Play in AI Security
Michscan uses a novel power analysis method to ensure the integrity of black-box AI models licensed from third parties.
Artificial intelligence (AI) applications are rapidly coming of age. And as they do so, they are moving out of academic institutions and research labs to power commercial products and business use cases. Since these tools require a great deal of expertise to develop, many organizations do not build them in-house, but instead purchase a pre-built solution from a third party. To those who use these AI applications as a service, they are often a black box. Inputs go in, outputs come out, but what happens in between, they will never know.
This situation is enough to make the hairs on a sysadmin's arms stand up. How does one verify the integrity of a black-box model? Can you be sure it has not been tampered with? If a bad actor replaced the weights of a licensed model with something malicious, what could be done to detect it? Unfortunately, without full access to the model, very little. And to protect their intellectual property, service providers are not likely to grant that access.
To address this challenge, a pair of researchers at the Rochester Institute of Technology has developed Michscan, a novel methodology designed to verify the integrity of black-box AI models. Michscan has the potential to offer a significant step forward in the security of AI applications, especially in cases where models operate on edge devices with limited computational and power resources.
Michscan works by analyzing a device's power consumption while the neural network performs inference. The team observed that changes to a model's internal parameters, such as those caused by malicious attacks, manifest as subtle variations in the device's instantaneous power consumption. Using correlation power analysis, Michscan compares these power consumption patterns against a reference "golden template" to identify discrepancies.
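To make the comparison step concrete, here is a minimal sketch, not the researchers' implementation, of scoring measured power traces against a golden template with Pearson correlation. The function name, array shapes, and sample data are illustrative assumptions.

```python
# Illustrative sketch: score power traces against a golden template with
# Pearson correlation (names, shapes, and data are hypothetical).
import numpy as np

def correlate_with_template(golden_template: np.ndarray, traces: np.ndarray) -> np.ndarray:
    """Return the Pearson correlation of each measured trace with the template.

    golden_template: 1-D array of length T (an averaged known-good trace).
    traces:          2-D array of shape (N, T), one row per inference.
    """
    # Zero-mean both signals so the score reflects the shape of the power
    # profile rather than its DC offset.
    t = golden_template - golden_template.mean()
    x = traces - traces.mean(axis=1, keepdims=True)
    num = (x * t).sum(axis=1)
    den = np.sqrt((x ** 2).sum(axis=1) * (t ** 2).sum())
    return num / den

# A tampered model's power profile should match the template noticeably worse.
rng = np.random.default_rng(0)
golden = rng.normal(size=1000)
benign = golden + 0.1 * rng.normal(size=(5, 1000))       # tracks the template
tampered = rng.normal(size=(5, 1000))                    # unrelated profile
print(correlate_with_template(golden, benign).mean())    # close to 1.0
print(correlate_with_template(golden, tampered).mean())  # close to 0.0
```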
The methodology employs a statistical technique called the Mann-Whitney U test to determine the likelihood of a model integrity violation: if the measured power traces deviate significantly from the golden template, tampering is flagged with high statistical confidence. Unlike traditional approaches, Michscan operates entirely in a black-box environment, requiring no cooperation from, or trust in, the model owner.
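And a similarly hedged sketch of the decision step, assuming the quantities being compared are per-inference match scores like those above. The SciPy call is a standard Mann-Whitney U test, but the threshold, sample sizes, and score values are illustrative rather than taken from the paper.

```python
# Illustrative sketch of the statistical decision: flag a violation when the
# observed scores are significantly lower than those of the known-good model.
# Threshold and data are hypothetical, not taken from the Michscan paper.
import numpy as np
from scipy.stats import mannwhitneyu

def integrity_violated(reference_scores, observed_scores, alpha: float = 1e-3) -> bool:
    # One-sided test: tampering should only ever reduce the match with the
    # golden template, never improve it.
    _, p_value = mannwhitneyu(observed_scores, reference_scores, alternative="less")
    return bool(p_value < alpha)

# Hypothetical usage: 20 scores recorded from the known-good model versus
# scores from just five inferences on the device under test.
rng = np.random.default_rng(1)
reference = 0.97 + 0.01 * rng.standard_normal(20)    # known-good model
suspect = np.array([0.61, 0.58, 0.64, 0.60, 0.59])   # possibly tampered model
print(integrity_violated(reference, suspect))         # True -> likely tampering
```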
The potential applications for Michscan are vast. With the increasing deployment of tinyML models across industries like healthcare, autonomous vehicles, and industrial IoT, ensuring their security is critical. These models are often purchased as pre-trained solutions from third parties, making them prime targets for integrity attacks, including Trojan attacks, data poisoning, and fault injection. Michscan promises to provide an effective safeguard against these threats.
In testing, Michscan was evaluated on an STMicroelectronics STM32F303RC microcontroller running four tinyML models. The researchers introduced three types of integrity violations, and Michscan successfully detected all of them. Remarkably, this was achieved with power data from just five inferences per test case, and no false positives were observed across 1,600 test cases.
As AI continues to shape the future of technology, ensuring the integrity of these systems will be crucial. Michscan is a timely innovation, offering a scalable, efficient, and reliable way to protect AI applications from malicious tampering while maintaining the confidentiality of proprietary models. For sysadmins and organizations that rely on licensed AI solutions, Michscan appears to offer much-needed peace of mind by turning a black box into a trusted tool.