The Breast AI Around
A multimodal fusion network uses two types of ultrasound imaging to outperform other methods of detecting and classifying breast lesions.
Breast cancer remains a major health concern worldwide, with new cases continuing to emerge at an alarming rate. Despite significant advances in breast cancer research and treatment over the past several decades, the prevalence of the disease has continued to increase in many parts of the world. According to the World Health Organization, breast cancer is the most common cancer in women globally, accounting for more than 2 million new cases and nearly 700,000 deaths each year.
The reasons for the rising rates of breast cancer are complex and multifaceted. Some experts point to lifestyle factors such as poor diet, lack of exercise, and exposure to environmental toxins as contributing factors, while others emphasize the role of genetic and hormonal factors in the development of the disease.
Regardless of the underlying causes, the impact of breast cancer on individuals and society as a whole is significant. Breast cancer is a devastating diagnosis that can lead to a range of physical, emotional, and financial challenges for patients and their families.
To address the rising rates of breast cancer and reduce the burden of the disease on individuals and society, healthcare professionals emphasize the importance of increased awareness and early detection. Breast cancer is much more treatable, and outcomes are far better, when it is detected early.
A group of researchers at the Pohang University of Science and Technology in Korea has created a new approach that aims to noninvasively detect breast cancer at a very early stage. Their technique leverages two types of ultrasound data, analyzed by a deep learning-based multimodal fusion network, both to detect lesions and to classify them as either benign or malignant.
Data is noninvasively obtained with both B-mode and strain elastography ultrasound imaging. As a first step, this data is fed into a weighted multimodal U-Net model that seeks out and localizes any lesions that may be present. A second neural network classifier then examines any lesions that were found and predicts the likelihood that they are cancerous.
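To make the two-stage idea concrete, here is a minimal sketch of the pipeline's shape. The function names, the fixed fusion weights, and the threshold-based stand-ins for the U-Net and classifier are all illustrative assumptions, not the authors' implementation (in the real system, the fusion weighting and both stages are learned neural networks).

```python
import numpy as np

def weighted_fusion(b_mode, elastography, w_b=0.6, w_e=0.4):
    """Fuse two co-registered ultrasound frames with fixed modality weights.

    In the actual network the weighting is learned inside the multimodal
    U-Net; here it is a fixed convex combination for illustration.
    """
    return w_b * b_mode + w_e * elastography

def segment_lesion(fused, threshold=0.5):
    """Toy stand-in for U-Net segmentation: a binary mask by thresholding."""
    return (fused > threshold).astype(np.uint8)

def classify_lesion(frame, mask):
    """Toy stand-in for the second-stage classifier: score the masked region."""
    if mask.sum() == 0:
        return None  # no lesion found, nothing to classify
    score = frame[mask == 1].mean()  # placeholder "malignancy" score
    return "malignant" if score > 0.7 else "benign"

# Synthetic 8x8 "frames" with a bright 2x2 region standing in for a lesion
b_mode = np.zeros((8, 8)); b_mode[2:4, 2:4] = 0.9
elast = np.zeros((8, 8)); elast[2:4, 2:4] = 0.8

fused = weighted_fusion(b_mode, elast)
mask = segment_lesion(fused)
print(classify_lesion(fused, mask))  # prints "malignant"
```

The key design point the sketch captures is that segmentation and classification are decoupled: the classifier only ever sees regions the segmentation stage has flagged, so each stage can be trained and improved independently.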
This effort is not the first time that ultrasound imaging and machine learning have been paired up in an attempt to detect breast cancer. However, most existing methods rely solely on B-mode ultrasound for their segmentation and classification pipelines, and the use of both B-mode and strain elastography ultrasound for segmentation of breast lesions has not previously been reported. By taking a multimodal approach to the problem, the researchers hope to achieve higher specificity, while also maintaining better sensitivity, than existing methods.
The finished system was validated in a small trial consisting of thirteen test cases: seven benign and six malignant. The benign cases were predicted correctly in three out of five trials, while the malignant samples were correctly identified as malignant in five out of five trials. While there is still some room for improvement, these results outperform existing single-modal and multimodal methods of lesion segmentation and classification.
Due to limited data availability, the model was trained on a relatively small number of examples. Data augmentation, pre-trained model adaptation, and fine-tuning were employed to improve prediction accuracy; however, with access to more data, the team expects the system to continue improving in the future. Even as it stands now, this new approach offers clinicians many advantages that may help them make earlier diagnoses than was previously possible. Building on this success, the researchers hope to adapt their methods to the detection of a wider range of diseases in the coming years.
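Data augmentation is the most accessible of those techniques when training data is scarce. As a hedged illustration only (the article does not specify which transforms the team used), geometric augmentation of a single frame might look like this:

```python
import numpy as np

def augment(image):
    """Yield mirrored and rotated variants of one ultrasound frame.

    The specific transforms here (flips plus 90-degree rotations) are
    assumptions for illustration, not the authors' augmentation recipe.
    """
    variants = [image]
    variants.append(np.fliplr(image))        # horizontal mirror
    variants.append(np.flipud(image))        # vertical mirror
    for k in (1, 2, 3):
        variants.append(np.rot90(image, k))  # 90/180/270-degree rotations
    return variants

frame = np.arange(16, dtype=float).reshape(4, 4)
augmented = augment(frame)
print(len(augmented))  # 6 variants from one original frame
```

Each labeled example thus yields several training samples at no labeling cost, which is why augmentation is a standard first resort for small medical imaging datasets.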