
Deep learning for fast quality assessment

Taye, Mesfin B.
Abstract

This report presents an automated system for classifying the quality of Focused Assessment with Sonography for Trauma (FAST) ultrasound exams. Our dataset consisted of 441 FAST exams comprising 3,161 videos, each graded from 1 (poor quality) to 5 (good quality). Exams rated 1 or 2 were deemed poor quality, and those rated 3 to 5 good quality. Because individual frames lacked labels, we assigned each exam's quality label to its videos and then to every frame within them. We trained a custom CNN autoencoder for image compression and reused its encoder as the front end of a frame-level classifier. A video or an exam was classified as poor quality if at least half of its frames or videos, respectively, were assessed as such. The encoder-classifier network substantially outperformed traditional transfer learning methods, with ensemble testing accuracy reaching 98% for video quality and 100% sensitivity and specificity at the exam level, demonstrating its effectiveness in accurately assessing ultrasound exam quality.
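
The frame-to-video-to-exam aggregation rule described above can be written in a few lines. The following is a minimal sketch assuming a binary poor/good encoding; the function and variable names are illustrative and do not represent the authors' code.

    # Hypothetical sketch of the aggregation rule: a video (or exam) is called
    # poor quality when at least half of its frames (or videos) are predicted poor.
    from typing import Dict, List

    POOR, GOOD = 0, 1  # assumed encoding: grades 1-2 -> POOR, grades 3-5 -> GOOD

    def majority_poor(labels: List[int]) -> int:
        """Return POOR if at least half of the member labels are POOR."""
        n_poor = sum(1 for label in labels if label == POOR)
        return POOR if n_poor * 2 >= len(labels) else GOOD

    def classify_exam(frame_preds: Dict[str, List[int]]) -> int:
        """frame_preds maps each video id to its per-frame predictions."""
        video_labels = [majority_poor(frames) for frames in frame_preds.values()]
        return majority_poor(video_labels)

    if __name__ == "__main__":
        exam = {
            "video_a": [POOR, POOR, GOOD, POOR],  # majority poor -> video POOR
            "video_b": [GOOD, GOOD, GOOD, POOR],  # majority good -> video GOOD
            "video_c": [POOR, GOOD, POOR, POOR],  # majority poor -> video POOR
        }
        print("exam label:", classify_exam(exam))  # 2 of 3 videos poor -> exam POOR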

Deep learning methods have shown remarkable efficacy in medical imaging, offering promising advancements for various diagnostic procedures. Nonetheless, a significant barrier to their widespread adoption is the opacity of their decision-making processes. Several techniques exist to explain model decisions; some focus on patterns specific to individual predictions, while others characterize the overall pattern or logic of the decision. In this paper, we present modifications to two existing local explainability techniques, DeepLIFT and Integrated Gradients, that aim to enhance local explainability and provide clearer insight into the model's decision-making process.
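
As background for the local attribution methods mentioned above, the following is a minimal sketch of vanilla Integrated Gradients for a frame classifier. The toy model, zero-image baseline, and step count are placeholder assumptions; the sketch illustrates the base technique, not the modified version introduced in the paper.

    # Vanilla Integrated Gradients: average the gradients along a straight path
    # from a baseline to the input, then scale by the input-baseline difference.
    import torch

    def integrated_gradients(model, x, target, baseline=None, steps=50):
        """Approximate IG attributions for input x w.r.t. the target class logit."""
        if baseline is None:
            baseline = torch.zeros_like(x)          # black-image baseline (assumption)
        alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
        path = baseline + alphas * (x - baseline)   # interpolated inputs, (steps, C, H, W)
        path.requires_grad_(True)
        logits = model(path)                        # forward pass over all path points
        score = logits[:, target].sum()
        grads, = torch.autograd.grad(score, path)   # gradients along the path
        avg_grad = grads.mean(dim=0)                # Riemann approximation of the integral
        return (x - baseline) * avg_grad            # attribution per input pixel

    if __name__ == "__main__":
        # Toy CNN on a single-channel 64x64 "ultrasound frame" (illustrative only).
        model = torch.nn.Sequential(
            torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 2),
        )
        frame = torch.rand(1, 64, 64)
        attributions = integrated_gradients(model, frame, target=0)
        print(attributions.shape)  # same shape as the input frame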

Furthermore, we introduce the Self-Organizing Map (SOM) as a novel approach to global explainability. This technique provides a global-level understanding by revealing patterns and trends in how deep learning models make classification decisions across a broad dataset. We combined these two new methods and applied them to a Focused Assessment with Sonography for Trauma (FAST) exam classification problem. Through a comprehensive analysis using local and global explainability techniques, we found that the clarity (sharpness) and informational content (density) of FAST ultrasound frames are the pivotal attributes leveraged by the deep learning model for quality classification. This insight not only deepens our understanding of the underlying mechanisms of these models but also opens new avenues for improving the accuracy and reliability of image classification systems.
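
One way a SOM can expose such global patterns is by projecting encoder embeddings of many frames onto a 2-D map and summarizing each cell by the dominant predicted class; cells dominated by one class can then be inspected for shared properties such as sharpness or density. The sketch below follows this idea, but the MiniSom library, the random stand-in embeddings, and the map size are assumptions rather than the authors' setup.

    # SOM over frame embeddings: tally predicted classes per best-matching unit.
    import numpy as np
    from minisom import MiniSom

    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(500, 32))        # stand-in for encoder outputs
    pred_labels = rng.integers(0, 2, size=500)     # stand-in for poor/good predictions

    som = MiniSom(8, 8, input_len=32, sigma=1.5, learning_rate=0.5, random_seed=0)
    som.random_weights_init(embeddings)
    som.train_random(embeddings, num_iteration=2000)

    # Map every frame embedding to its best-matching unit and count classes per cell.
    cell_counts = np.zeros((8, 8, 2), dtype=int)
    for emb, label in zip(embeddings, pred_labels):
        i, j = som.winner(emb)
        cell_counts[i, j, label] += 1

    # Cells dominated by one class hint at the global structure the classifier uses.
    dominant_class = cell_counts.argmax(axis=-1)
    print(dominant_class)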

Date
2024-07