News

2022.06.16 Press Release

A lightweight deep learning model for automatic segmentation and analysis of ophthalmic images

Deep learning (DL) algorithms are generally task specific and are tested on classification and detection/segmentation of everyday objects (such as garments, humans, animals, etc.). The exact marking of object boundaries (i.e., segmentation) is not of prime importance in such applications. This is quite different in the diagnosis of diseases, where staging depends on precision in the measurement of feature size/volume. The computational resources required by DL models with high prediction accuracy have increased dramatically, which makes their deployment on mobile devices difficult. In view of an aging society, limited resources, and a shortage of medical manpower, the future of healthcare strongly depends on self-monitoring and tele-screening of diseases, which require lightweight and accurate DL models.
A research group of Prof. Toru Nakazawa, Associate Prof. Parmanand Sharma, and Dr. Takahiro Ninomiya at the Department of Ophthalmology, Tohoku University, has developed an exceptionally lightweight DL model in collaboration with Prof. Takayuki Okatani at the Graduate School of Information Sciences, Tohoku University. The model can rapidly extract the desired disease-related features from images with high accuracy. Additionally, it can be trained with a small number of images with good training reproducibility, even when the images are noisy.

There is always a trade-off between accuracy, speed, and computational resources, and the performance of DL models degrades when the number of model parameters (P) is reduced. For example, in mobile-specific models such as MobileNet and LeanConvNets with a ResNet34 encoder in a U-Net-type backbone, P drops from ~25.95 to 3.4-4.0 million, with a 1% reduction in validation accuracy and a ~4-6% reduction in intersection over union (IoU reduction ~8% for MobileNetV2). The present model has fewer than 3 million parameters, yet its performance (Dice coefficient D = 0.958 ± 0.0181) is the same as or better than that of popular DL models such as U-Net (D = 0.958 ± 0.0183) and DeepLabV3+ (D = 0.949 ± 0.0216), which are 10 and 6 times heavier in terms of P, respectively. Similarly, its ability to discriminate glaucoma (AUC ~0.813) based on segmentation of the FAZ boundary is significantly higher than that of the commercial software Imagenet (AUC ~0.776).

The concept of channel narrowing introduced in the present model is not only important for segmentation problems; it can also make disease/object classification models much lighter. Enhanced diagnostic accuracy has been obtained in the screening of glaucoma on low-resource devices, and the model can also detect/segment the optic disc, hemorrhages, etc. in fundus images with high precision. We expect to deploy this lightweight model on mobile devices used in the screening of various ocular and other diseases.
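For readers who want to relate the reported numbers to their own data, the following minimal sketch (a generic NumPy illustration, not code from the published model) shows how the Dice coefficient (D) and intersection over union (IoU) quoted above are commonly computed from binary segmentation masks:

import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice coefficient D = 2*|A n B| / (|A| + |B|) for binary masks
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # Intersection over union (Jaccard index) for binary masks
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Example: compare a predicted mask (e.g., of the FAZ boundary) against a manual annotation.
rng = np.random.default_rng(0)
target = rng.integers(0, 2, size=(256, 256))   # placeholder ground-truth mask
pred = target.copy()
pred[:8, :] = 1 - pred[:8, :]                  # flip a few rows to simulate segmentation errors
print(f"Dice = {dice_coefficient(pred, target):.3f}, IoU = {iou(pred, target):.3f}")

As a rough, hypothetical illustration of why channel narrowing reduces model size (the actual architecture is detailed in the paper), note that the weight count of a single 3 x 3 convolution scales with the product of its input and output channel counts, so shrinking both widths by a factor of four cuts its parameters roughly sixteenfold:

def conv_params(c_in, c_out, k=3):
    # weights (c_in * c_out * k * k) plus one bias per output channel
    return c_in * c_out * k * k + c_out

print(conv_params(256, 256))   # ~590,000 parameters
print(conv_params(64, 64))     # ~37,000 parameters after narrowing both widths 4x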

- - - - - - - - - - - - - - - - - - - - - - - - - -
Title: A lightweight deep learning model for automatic segmentation and analysis of ophthalmic images
Authors: Parmanand Sharma, Takahiro Ninomiya, Kazuko Omodaka, Naoki Takahashi, Takehiro Miya, Noriko Himori, Takayuki Okatani & Toru Nakazawa
Journal: Scientific Reports
DOI: https://doi.org/10.1038/s41598-022-12486-w
Embargo date: 20 May 2022

- - - - - - - - - - - - - - - - - - - - - - - - - -
Name: Toru Nakazawa
Affiliation: Department of Ophthalmology, Graduate School of Medicine, Tohoku University, Sendai, Japan
Email: ntoru@oph.med.tohoku.ac.jp
Website: http://www.oph.med.tohoku.ac.jp/