Calling Doctor Pi
This Raspberry Pi-powered medical device can diagnose visually detectable skin diseases on-device, with no internet connection needed.
Of all the recent breakthroughs in artificial intelligence (AI), large language models and text-to-image generators tend to steal most of the limelight. But for all that they assist with creative pursuits and enhance productivity, they are arguably not the most important modern AI applications. Those used in healthcare, in particular, have far more potential to make a real difference in people's lives. AI applications are already being used to diagnose diseases, develop effective treatment plans, and predict patient outcomes, for example.
But the impact of these tools is currently limited by a number of factors. In many cases, especially where medical images must be analyzed, the algorithms require large amounts of computational resources. To meet those needs, patient information must be transferred to remote servers, often over the public internet, for processing. In healthcare, that is a tricky business: mountains of regulations, written to protect sensitive health data, make exactly this sort of transfer difficult. Moreover, an internet connection is not always available, especially in rural areas and developing countries.
If those AI algorithms could run locally, on resource-constrained hardware, these problems would disappear. A trio of researchers at Edinburgh Napier University in Scotland wanted to determine if this is possible, so they designed a TinyML algorithm that can handle a particularly challenging task: the identification of visually detectable diseases. They then deployed this algorithm on a low-power, inexpensive computing platform that is suitable even for medical centers with very limited resources.
For the hardware platform, the team chose a Raspberry Pi 3 single-board computer. Selling for just $35, these computers pack a lot of bang for the buck, with a quad-core 1.2 GHz Broadcom BCM2837 CPU and a gigabyte of RAM. The Pi was paired with a basic 1080p webcam for image capture.
Next, the team designed a convolutional neural network based on the MobileNet-V2 architecture. The model was trained, with the help of a high-performance computer, on a dataset of 10,000 images of a variety of skin lesions, including benign keratosis, melanoma, vascular lesions, and basal cell carcinoma. The trained model was then deployed to the Raspberry Pi, where it could diagnose these conditions when the webcam was simply pointed at a patient's skin.
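The write-up does not specify the team's software stack, but a pipeline like this is commonly built with TensorFlow/Keras: attach a small classification head to a MobileNet-V2 backbone, train on the lesion images, then convert the result to TensorFlow Lite for on-device inference on the Pi. The sketch below is illustrative only; the class count, image size, and training settings are assumptions, not the researchers' actual configuration.

```python
# Illustrative MobileNet-V2 transfer-learning sketch (TensorFlow/Keras assumed).
import tensorflow as tf

NUM_CLASSES = 7        # assumption: number of lesion categories in the dataset
IMG_SIZE = (224, 224)  # MobileNet-V2's standard input resolution

# Backbone without its ImageNet classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,),
    include_top=False,
    weights=None,  # pass "imagenet" to start from pretrained weights
)
base.trainable = False  # freeze the backbone for the initial training phase

# Small classification head on top of the frozen backbone.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_dataset, ...) would go here, on the lesion images.

# Convert the (trained) model to TensorFlow Lite for the Raspberry Pi.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()  # bytes, ready to write to a .tflite file
```

On the Pi itself, the resulting `.tflite` file would typically be loaded with the lightweight TFLite interpreter rather than the full TensorFlow package, keeping memory use within the board's one gigabyte of RAM.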
The final proof-of-concept device was evaluated on a dataset of 1,000 images. This experiment showed that the system achieved an average classification accuracy of 78 percent. At that level of accuracy, the system is not quite ready for real-world use, but it does demonstrate that the approach holds a lot of promise. The researchers intend to continue their work by evaluating additional model architectures, as well as compression and optimization techniques. If they can improve their device's accuracy, it may one day prove to be a huge boon to both patient health and privacy.
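The article does not say how the 78 percent figure was computed; one common reading of "average classification accuracy" is a macro-average of per-class accuracies, which keeps rare lesion types from being drowned out by common ones. Purely as an illustration, with made-up labels rather than the researchers' data, such a metric can be computed like this:

```python
# Macro-averaged per-class accuracy: accuracy within each class, then averaged.
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Average of per-class accuracies over the classes present in y_true."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred in zip(y_true, y_pred):
        total[truth] += 1
        if truth == pred:
            correct[truth] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# Toy example with invented labels (not the study's results):
y_true = ["melanoma", "melanoma", "bkl", "bkl", "vasc"]
y_pred = ["melanoma", "bkl",      "bkl", "bkl", "vasc"]
score = per_class_accuracy(y_true, y_pred)  # (1/2 + 2/2 + 1/1) / 3 ≈ 0.83
```

Note that this macro-average differs from plain overall accuracy whenever the classes are imbalanced, as skin-lesion datasets typically are.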