Document Type : Original Research

Authors

1 MSc, Department of Biomedical Engineering, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran

2 PhD, Department of Biomedical Engineering, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Abstract

Background: Arcus Senilis (AS) appears as a white, grey, or blue ring or arc in front of the periphery of the iris and, in patients under 50 years of age, is a sign of abnormally high cholesterol.
Objective: This work proposes a deep learning approach to automatic recognition of AS in eye images.
Material and Methods: In this analytical study, a dataset of 191 eye images (130 normal, 61 with AS) was employed, where, under a 4-fold cross-validation scheme, ¾ of the data were used for training the proposed model and ¼ for testing. Due to the limited amount of training data, transfer learning was performed with AlexNet as the pretrained network.
Results: The proposed model achieved an accuracy of 100% in classifying the eye images into normal and AS categories.
Conclusion: The excellent performance of the proposed model, despite the limited training set, demonstrates the efficacy of deep transfer learning for AS recognition in eye images. The proposed approach is preferable to previous AS recognition methods, as it eliminates the cumbersome segmentation and feature engineering processes.

Keywords

Introduction

The human eye is an organ that receives light and permits vision. The iris is a pigmented, round, contractile structure in the eye, suspended between the cornea and lens and perforated by the pupil [ 1 ]. The iris controls the size of the pupil and hence the amount of light entering the eye. Iridology is a technique, developed more than 100 years ago, which aims to infer information about a person's health by examining iris characteristics such as color or pattern [ 2 ]. One of the conditions that may be detected by iridology is a high level of cholesterol in the body [ 3 ]. Cholesterol is a white, waxy, fat-like substance produced by the liver and required for building and maintaining cell membranes [ 4 ]. High cholesterol or lipid levels can change the iris pattern, a condition referred to as Arcus Senilis (AS). In this condition, deposition of cholesterol in the peripheral cornea appears as a white, grey, or blue ring or arc at the periphery of the iris. AS is benign in elderly individuals, but in patients younger than 50 years, where it is also called arcus juvenilis, it is a symptom of high cholesterol and a high risk of cardiovascular disease [ 5 ].

Previous work has investigated various image processing techniques for computer-based recognition of AS in eye images. In [ 1 , 6 ], after segmentation of the iris, the Rubber Sheet model was applied to transform the iris from its circular shape into a rectangular form. A thresholding method was then applied to the histogram of the outer part of the normalized iris image to recognize AS. In [ 7 ], the iris was first segmented and a Gabor filter was applied to extract features from it. These features were then used for AS recognition via Otsu thresholding. In [ 4 ], the iris was segmented and downsampled to serve as the input to a multi-layer perceptron (MLP) neural network classifier for AS recognition.

As described above, previous methods of AS recognition involved cumbersome segmentation and feature engineering procedures. To overcome this issue, this work proposes a deep learning approach which takes the raw image as input and classifies it into AS or normal categories. Recently, deep learning has revolutionized many fields, especially computer vision [ 8 ], and has also shown great success in biomedical engineering (e.g. [ 9 , 10 ]). The proposed method therefore eliminates the segmentation and feature engineering steps that were necessary in previous approaches.

Convolutional neural networks (CNNs) are one of the most popular types of deep learning models. CNNs are typically applied to images for classification tasks and can learn useful features from raw images through their convolutional layers, thereby removing the need for feature engineering. This significantly facilitates the recognition task. In addition to convolutional layers, CNNs typically comprise batch normalization, ReLU (rectified linear unit, the most common activation function), and pooling layers. The final layers of a CNN often include a fully-connected layer (the main layer in MLPs), followed by softmax and classification layers (which compute the loss and perform the classification).
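
As an illustration of this layer structure, a small two-class CNN could be defined in MATLAB's Deep Learning Toolbox (the environment used in this study) as a simple layer array; the filter sizes and counts below are arbitrary placeholders and not the architecture evaluated here.

% Illustrative layer stack for a small two-class CNN (placeholder filter
% sizes/counts; not the exact architecture evaluated in this study).
layersSmall = [
    imageInputLayer([250 250 3])                 % raw RGB eye image
    convolution2dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(2)                       % two classes: normal vs. AS
    softmaxLayer
    classificationLayer];                        % cross-entropy loss and class output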

The performance of deep learning models improves as the amount of training data increases. With limited training data, the network weights and biases may not converge to optimal values during training. To overcome this challenge, transfer learning can be employed, i.e., transferring information from a related task to improve learning in a given task. With this method, the weights and biases of the source (pretrained) model are used as initial values (instead of random initialization), and the model is then fine-tuned using data from the new task [ 11 - 14 ]. It has been shown that for similar tasks, transfer learning can significantly improve model training [ 8 ]. One of the most common transfer learning applications is in object recognition tasks, for which several well-known CNNs such as AlexNet are available [ 15 ]. These pretrained models have been trained with millions of images to classify images into 1000 categories such as keyboard, coffee mug, pencil, and many animals. Such models can be transferred to other object recognition tasks, as the features learned in different object recognition tasks are expected to be similar.
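
As a minimal MATLAB sketch of this idea (assuming the Deep Learning Toolbox Model for AlexNet support package is installed), the pretrained network can be loaded and its task-specific final layers replaced for a new two-class problem as follows:

net = alexnet;                          % AlexNet pretrained on ImageNet
layersTransfer = net.Layers(1:end-3);   % keep all but the last three layers
numClasses = 2;                         % new task: normal vs. AS
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses)     % new fully-connected layer for 2 classes
    softmaxLayer
    classificationLayer];               % new softmax and classification layers
% The resulting 'layers' array is then fine-tuned on data from the new task.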

Material and Methods

In this analytical study, a dataset of 191 eye images, comprising 130 images of normal eyes and 61 images of AS-affected eyes, was employed. The normal eye images were obtained from UBIRIS (http://iris.di.ubi.pt/index.html), a public database of eye images. Due to the lack of a database of AS-affected eye images, these images were collected individually from various public medical websites. Figures 1(a) and (b) show samples of normal and AS-affected eye images.

Figure 1. Samples of eye images: (a) normal, (b) AS-affected.

Due to the limited amount of training data, a transfer learning approach was adopted with AlexNet as the pretrained network. Transfer learning was motivated by the fact that AlexNet has been trained for object recognition, which is closely related to the AS recognition task. Two classes, normal and AS, were considered, and the last three layers of AlexNet, i.e., the final fully-connected, softmax, and classification layers, were replaced for the AS recognition task. The eye images were resized to 227×227×3 to match the input layer of the model. For comparison, two other models were also employed: a VGG16 and a CNN trained from scratch. For VGG16, the input size is 224×224×3, so the images were resized accordingly. The CNN trained from scratch had 3 convolutional layers, each followed by a batch normalization, ReLU, and max pooling layer. The input size of this CNN was 250×250×3, the original size of the images in the dataset, so the images were not resized for its input. For all three models, stochastic gradient descent with momentum (SGDM) with a mini-batch size of 40 and 8 epochs was used for training. Also, for all models, a 4-fold cross-validation was conducted, where ¾ of the data were used for training the model and ¼ for testing. Table 1 lists the size and configuration of the training and test sets in each fold. The processing was performed using MATLAB R2018a on a 2.8 GHz quad-core computer with an Nvidia GTX 1050 graphics processing unit (GPU).

              Normal   AS   Total
Training set      97   46     143
Test set          33   15      48
All images       130   61     191
Table 1. The size and configuration of the training and test sets in each fold.
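
A hedged MATLAB sketch of the data handling and training configuration described above is given below. The SGDM optimizer, mini-batch size of 40, and 8 epochs follow the text; the folder layout, datastore name, and learning rate are illustrative assumptions not reported here.

% Assumed folder layout: one subfolder per class ('normal' and 'AS').
imds = imageDatastore('eyeImages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

% Training options as described above; the learning rate is an assumed value.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 40, ...
    'MaxEpochs', 8, ...
    'InitialLearnRate', 1e-4, ...
    'Shuffle', 'every-epoch', ...
    'Verbose', false);

Resizing to the 227×227×3 AlexNet input size can be performed on the fly with an augmentedImageDatastore, as shown in the cross-validation sketch after the step list below.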

The proposed method can be summarized as follows:

Step 1: Crop and resize all images to form an iris image without the eyelid.

Step 2: Label each image as normal or AS-affected.

Step 3: Split the dataset into training and test sets, using a 4-fold cross-validation procedure.

Step 4: Replace the last three layers of AlexNet to form a pretrained model with two classes.

Step 5: Train this model with the iris training set.

Step 6: Test the trained model on the iris test set and compute the classification accuracy for each fold.
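
The following MATLAB sketch summarizes Steps 3-6, reusing the 'layers' array and training 'options' from the earlier sketches; the partitioning code and variable names are illustrative assumptions rather than the authors' exact implementation.

labels = imds.Labels;
cv = cvpartition(labels, 'KFold', 4);       % stratified 4-fold split (Step 3)
acc = zeros(4, 1);
for k = 1:4
    trIdx = training(cv, k);  teIdx = test(cv, k);
    imdsTrain = imageDatastore(imds.Files(trIdx), 'Labels', labels(trIdx));
    imdsTest  = imageDatastore(imds.Files(teIdx), 'Labels', labels(teIdx));

    % Resize on the fly to AlexNet's 227x227x3 input size.
    augTrain = augmentedImageDatastore([227 227], imdsTrain);
    augTest  = augmentedImageDatastore([227 227], imdsTest);

    trainedNet = trainNetwork(augTrain, layers, options);    % Steps 4-5
    predLabels = classify(trainedNet, augTest);               % Step 6
    acc(k) = mean(predLabels == imdsTest.Labels);             % per-fold accuracy
end
meanAccuracy = mean(acc);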

An overview of the proposed method is illustrated in Figure 2. Also, Figure 3 displays the loss of the proposed model on the test set during training.

Figure 2. The block diagram of the proposed method: AlexNet (with its last three layers replaced) is first trained with data from 3 folds, and the trained model is then tested on data from the remaining fold.

Figure 3. The loss of the proposed model on the test set during training.

Results

The average classification accuracies of the proposed method (AlexNet), the CNN trained from scratch, and VGG16 are listed in Table 2, together with the average per-fold training time for each model. As Table 2 shows, the average classification accuracies of the CNN, VGG16, and AlexNet were 98%, 98.5%, and 100%, respectively.

Method     Fold 1   Fold 2   Fold 3   Fold 4   Overall   Training time (s)
CNN            98       98       98       98      98                     5
VGG16         100      100       98       96      98.5                 181
AlexNet       100      100      100      100     100                     6
Table 2. The classification accuracy (%) for each method in each fold, along with the training time (s).

The accuracy of the proposed model in classifying the images into the two classes of normal and AS-affected was 100% in all 4 folds. Consequently, the sensitivity and specificity of the model were also 100%. The confusion matrix of the proposed model for one fold is displayed in Figure 4, which indicates that all cases in class 0 (normal: 33 images) and class 1 (AS: 15 images) were classified correctly.

Figure 4. The confusion matrix of the proposed model for one fold.
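
For completeness, a brief sketch of how the per-fold sensitivity and specificity follow from such a confusion matrix is shown below; the variable names follow the cross-validation sketch in Material and Methods, and the class ordering (normal first, AS second) is an assumption.

% Rows of C are true classes, columns are predicted classes.
C = confusionmat(imdsTest.Labels, predLabels);
TN = C(1,1);  FP = C(1,2);        % normal class (assumed first)
FN = C(2,1);  TP = C(2,2);        % AS class (assumed second), taken as positive
sensitivity = TP / (TP + FN);     % 15/15 = 1.0 when all AS images are correct
specificity = TN / (TN + FP);     % 33/33 = 1.0 when all normal images are correct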

Discussion

This work investigated the application of deep learning to automatic detection of AS. For this purpose, AlexNet was employed as a pretrained model to classify eye images into AS-affected and normal classes. The choice of transfer learning with AlexNet was motivated by the fact that it has been trained for object recognition, which is closely related to AS detection, as in both tasks classification is based on image appearance. Two other CNNs were also investigated for comparison: a model trained from scratch (denoted by CNN) and a pretrained VGG16 model.

Based on the results (Table 2), the proposed method (AlexNet) achieved the highest classification accuracy (100%). This remarkable performance indicates the capacity of deep transfer learning for AS recognition. Furthermore, this model was computationally efficient, with a training time of only 6 s. The other models (CNN, VGG16) also achieved classification accuracies of approximately 98%. However, the drawback of VGG16 is that its architecture is significantly more complex than needed for this task, which explains its long training time. On the other hand, the CNN trained from scratch trained quickly while performing well; the main challenge with this approach, however, is finding the optimal network architecture through trial and error. Nevertheless, the overall results verify the outstanding performance of deep learning models for AS recognition. The important advantages of deep learning models over previous methods include the elimination of the cumbersome iris segmentation and feature engineering procedures, and the ability to deliver high-quality results. Deep learning models take the raw image as input and learn features that discriminate between normal and AS images. This feature learning is performed by the convolutional layers and substantially simplifies the AS recognition process.

The standard procedure for measuring cholesterol level is a blood test, in which a blood sample is analyzed. The proposed method presents an alternative, iridology-based approach to detecting high cholesterol through computer-aided detection (CAD) of AS.

Conclusion

A deep learning model was proposed for automatic recognition of AS. The proposed model employed AlexNet as the pretrained network and achieved an accuracy of 100% in classifying eye images into normal and AS-affected classes. For comparison, two other models were also developed: a CNN trained from scratch and a model based on a pretrained VGG16 network. These models achieved classification accuracies of roughly 98%. The results indicate the success of deep learning in automatic recognition of AS, which can be a symptom of high cholesterol.

Future work will involve collection of a large database from local hospitals for better validation of the proposed method. Moreover, the application of deep learning in automatic detection of other conditions such as cataract, glaucoma, and diabetes will be investigated in separate studies.

References

  1. Ramlee R A, Aziz K A, Ranjit S, Esro M. Automated detecting arcus senilis, symptom for cholesterol presence using iris recognition algorithm. Journal of Telecommunication, Electronic and Computer Engineering (JTEC). 2011; 3(2):29-39.
  2. Berggren L. Iridology: A critical review. Acta Ophthalmologica. 1985; 63(1):1-8. DOI
  3. Morrison P J. The iris–a window into the genetics of common and rare eye diseases. The Ulster medical journal. 2010; 79(1):3-5. Publisher Full Text | PubMed
  4. Anjarsari A, Damayanti A, Pratiwi A B, Winarko E. Hybrid radial basis function with firefly algorithm and simulated annealing for detection of high cholesterol through iris images. IOP Conf Ser: Mater Sci Eng; Malang, Indonesia: IOP Publishing Ltd; 2019.DOI
  5. Um J Y, An N H, Yang G B, Lee G M, Cho J J, Cho J W, Hwang W J, et al. Novel approach of molecular genetic understanding of iridology: relationship between iris constitution and angiotensin converting enzyme gene polymorphism. The American journal of Chinese medicine. 2005; 33(3):501-5. DOI | PubMed
  6. Songire S G, Joshi M S. Automated detection of cholesterol presence using iris recognition algorithm. International Journal of Computer Applications. 2016; 133(6):41-5.
  7. Simangunsong L P, Napitupulu I N, Lumbantoruan R E, et al. The Expert System of Cholesterol Detection Based on Iris Using the Gabor Filter. SinkrOn. 2019; 4(1):13-8. DOI
  8. Goodfellow I, Bengio Y, Courville A. Deep learning. MIT press; 2016.
  9. Ameri A. EMG-based wrist gesture recognition using a convolutional neural network. Tehran Univ Med J. 2019; 77(7):434-9.
  10. Ameri A, Akhaee M A, Scheme E, Englehart K. Regression convolutional neural network for improved simultaneous EMG control. Journal of neural engineering. 2019; 16(3):036015.
  11. Shridhar K, Laumann F, Liwicki M. A comprehensive guide to Bayesian convolutional neural network with variational inference. arXiv. 2019.
  12. Alom M Z, Taha T M, Yakopcic C, et al. A state-of-the-art survey on deep learning theory and architectures. Electronics. 2019; 8(3):292. DOI
  13. Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. 25th International Conference on Neural Information Processing Systems; United States: NIPS; 2012. p. 1097-105.
  14. George A, Routray A. Real-time eye gaze direction classification using convolutional neural network. 2016 International Conference on Signal Processing and Communications (SPCOM); Bangalore, India: IEEE; 2016. p. 1-5.DOI
  15. Khan S, Rahmani H, Shah S A, Bennamoun M. A guide to convolutional neural networks for computer vision. Morgan & Claypool; 2018.DOI