Document Type : Original Research

Authors

1 Center for Quantum Computational System, Department of Electrical and Electronics Engineering, Adeleke University, Osun State, Nigeria

2 European Centre for Research and Academic Affairs, Lefkosa, Turkey

3 Department of Medical Imaging, Suzhou Institute of Biomedical and Technology, Chinese Academy of Sciences, Suzhou, 215163, China

4 Interdisciplinary Centre for Security, Reliability, and Trust (SnT), University of Luxembourg, Luxembourg

5 Department of Electrical and Electronics Engineering, Adeleke University, Ede, Nigeria

6 Department of Biomedical Engineering, Shenzhen University, Shenzhen, China

7 European Centre for Research and Academic Affairs, Turkey

10.31661/jbpe.v0i0.2101-1268

Abstract

Background: Eye melanoma is a deformity of the eye that grows and develops in the tissues of the middle layer of the eyeball, producing dark spots in the iris section of the eye and changes in the size and shape of the pupil and in vision. 
Objective: The current study aims to diagnose eye melanoma using a gray level co-occurrence matrix (GLCM) for texture extraction together with soft computing techniques, making the diagnosis faster, saving time, and preventing the misdiagnosis that can result from the physician’s manual approach.
Material and Methods: In this experimental study, two models are proposed for the diagnosis of eye melanoma, including backpropagation neural networks (BPNN) and radial basis functions network (RBFN). The images used for training and validating were obtained from the eye-cancer database. 
Results: Based on our experiments, our proposed models achieve 92.31% and 94.70% recognition rates for GLCM+BPNN and GLCM+RBFN, respectively.  
Conclusion: A comparison with previous work shows that the models used in the current study outperform other proposed models.


Introduction

The iris is a protective internal organ of the eye that helps regulate the amount of light entering the eye. It is located behind the cornea and in front of the lens [ 1 ].

Melanoma is the most common malignancy of the iris and has two growth patterns: circumscribed and diffuse. Iris melanoma accounts for about 3% to 10% of uveal melanomas [ 2 - 3 ] and is assumed to have a tendency to spread (metastasize) as well, owing to its common histogenesis with ciliary body and choroidal melanoma. Iris melanomas are mostly diagnosed in patients in their mid-to-late 40s, approximately 10 years younger than the average age of patients diagnosed with choroidal and ciliary body melanoma [ 2 ].

In the early 2000s, the World Health Organization (WHO) reported that about 6.2 million people were living with cancer worldwide. In the United States of America alone, about 3,540 adults (60.2% men and 38.9% women) were diagnosed with primary intraocular cancer [ 2 ]. Overall, 3 out of 4 patients living with eye melanoma survive for at least 5 years. When the melanoma has not metastasized to other parts of the body, the five-year relative survival rate is about 80%; if the melanoma has metastasized to distant parts of the body, the five-year relative survival rate is about 15% [ 4 ].

Generally, mortality rises when patients are unaware of the disease they are suffering from. In addition, physicians’ improper awareness of the causes of diseases allows several diseases to worsen and aggravates mortality, particularly in developing countries. Early diagnosis and treatment of any disease, however, markedly increase the survival rate of the patients [ 5 ].

Soft computing techniques such as artificial neural networks (ANN) and radial basis function (RBF) networks have become vital tools, adopted in clinical practice to assist in diagnosing diseases. The ANN is modeled after the human brain and can perform human-like functions such as coordination, reasoning, association, recognition, and classification. It has three layers, the input layer, the hidden layer, and the output layer, each with neurons that carry out its functions. The input layer is a non-processing layer that only takes in the data, while the hidden layer is a processing layer that receives the data from the input layer, computes the weighted (synaptic) sums, and passes the result to the output layer of the system.

Several works have been proposed on related diseases such as diagnosis, detection, and pathology examination of skin cancer [ 6 - 9 ], lung cancer [ 10 , 11 ], breast cancer [ 12 - 15 ], prostate cancer [ 16 - 18 ], and others with limited work on the diagnosis of eye melanoma.

Ahmed et al. [ 19 ] proposed a system for classifying and diagnosing iris cancer (melanoma). Their study used forty preoperative samples, comprising 20 malignant and 20 benign images. Median filtering and histogram techniques were adopted to enhance the images, while physicians were asked to determine the region of interest of each image; these regions were cropped to prevent redundancy. Texture feature extraction was then used to extract all the features required to train the neural network, and an ANN was adopted as the classifier to separate the disease into malignant and benign. Their study obtained a recognition rate of 85%.

Oyedotun et al. [ 20 ] described an automated diagnosis of iris nevus, a pigmented tumor found in the front of the eye, using a convolutional neural network and a deep belief network, achieving recognition rates of 93.35% and 93.67%, respectively.

Kamil et al. [ 21 ] proposed an image analysis system for the detection of eye tumors. In their work, they adopted different image processing techniques, such as filtering, morphological operations, image addition, image adjustment, edge detection, and image fusion, to extract the features needed in the classification phase of the study. A neural network was adopted at the classification phase to diagnose the iris tumor.

Kabari et al. [ 22 ] applied a hybrid of neural networks and decision trees to classify eye diseases according to patients’ complaints, symptoms, and physical eye examinations, diagnosing the problem accurately. They achieved a recognition rate of 92%.

Early diagnosis of eye melanoma helps the physician make adequate prescriptions and provide proper health care to the affected patients. It is therefore essential to propose a system that aids the physician in diagnosing eye melanoma; such a diagnostic system saves time and improves the diagnosis of the disease.

In this work, we propose an eye melanoma diagnosis system using gray level co-occurrence matrix (GLCM) feature extraction and soft computing techniques. Such a system can mitigate the misdiagnosis that may result from manual diagnosis of the disease by physicians. Our proposed model also saves time in the diagnosis procedure, since the system extracts the textural features of the eye, revealing hidden pathogenesis that human perception cannot observe. To the best of our knowledge, GLCM textural feature extraction has not previously been used for the diagnosis of eye melanoma in the related literature.

The rest of the paper is arranged as follows: 1) section II presents the proposed method, 2) section III summarizes the result and evaluation, and finally, 3) section IV concludes this work.

Material and Methods

This section describes our experimental study, explaining the image processing, the feature extraction method, and the classifiers we adopted to diagnose eye melanoma.

Database Preparation

The images used in this work were obtained from the eye cancer database [ 23 ]. The collected images comprise 50 malignant and 45 benign images, which were rotated at 0°, 90°, 180°, and 270° to incorporate translational invariance into the diagnosis system and increase the database to 380 images: 200 malignant and 180 benign. Figure 1 shows the stages involved in the proposed iris melanoma diagnosis using GLCM texture feature extraction and neural network arbitration. Figure 2 shows the acquisition with the built-in translational invariance. The captured images were 280×280 pixels and were later downsized to 128×128 pixels to reduce redundancy in the images.
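The four-orientation augmentation step can be sketched as follows (an illustrative NumPy sketch, not the authors' code; the function name augment_rotations is ours):

```python
import numpy as np

def augment_rotations(image):
    """Return the image at 0, 90, 180, and 270 degree rotations, so each
    sample yields four training images (95 originals -> 380 images)."""
    return [np.rot90(image, k) for k in range(4)]

img = np.arange(128 * 128).reshape(128, 128)   # dummy 128x128 grayscale image
rotated = augment_rotations(img)
print(len(rotated), rotated[2].shape)
```

Because the images are square (128×128), every rotation preserves the image dimensions, so all four copies can be fed to the same network input.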

Figure 1. Stages of the proposed iris melanoma diagnosis using Gray Level Co-occurrence Matrix texture feature extraction and neural network arbitration.

Figure 2. (i) Acquisition with translational invariance in-built into the system for both melanoma and benign iris. (ii) Colour conversion stage for both melanoma and benign iris. (iii) Filtering of the images by the median filter for both melanoma and benign iris

Image Processing Stage

The image processing stage is the phase in which the GLCM texture features needed for training the neural network at the classification phase were extracted from the images. In this phase, care needs to be taken in extracting the essential features to avoid misrepresentation of information which may lead to misdiagnosis.

RGB to Grayscale image

The Red-Green-Blue (RGB) color images must be converted to grayscale for subsequent processing, and the conversion method must retain the original information of the image [ 24 ]. Thus, RGB-to-grayscale conversion is the first operation performed in image processing. Three methods are commonly used for this operation: the lightness method, the average method, and the weighted average (luminosity) method. We adopted the third method because it is modeled after the human eye, estimating the intensity of the image as a weighted average of the RGB components. The human eye is most sensitive to green, and Equation (1) shows that the luminosity method likewise weights the green component most heavily, which makes it well suited to RGB-to-grayscale conversion. Figure 2(ii) shows the result of converting the color iris images to grayscale.

WA = 0.21R + 0.72G + 0.07B (1)

where WA is the weighted average and R, G, and B are the red, green, and blue components of the image.
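As an illustration, Equation (1) can be applied per pixel as below (a NumPy sketch under our own naming; the weights are those of Equation (1)):

```python
import numpy as np

def rgb_to_gray_luminosity(rgb):
    """Luminosity conversion of Equation (1): WA = 0.21R + 0.72G + 0.07B."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.21 * r + 0.72 * g + 0.07 * b

pixel = np.array([[[100.0, 200.0, 50.0]]])   # a single RGB pixel
gray = rgb_to_gray_luminosity(pixel)
print(gray[0, 0])   # approximately 0.21*100 + 0.72*200 + 0.07*50 = 168.5
```

The same call works on a whole H×W×3 image array, since the arithmetic broadcasts over the leading dimensions.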

Image Filtering

During image acquisition, the images may be corrupted and may show variation in intensity due to poor illumination or contrast in the capturing system [ 25 - 26 ]. In this regard, filtering the images is essential to remove noise, which could otherwise affect the performance of our classifier systems. In this work, the median filtering method is adopted to remove the noise embedded in the iris images.

The median filter is widely used in image processing because it removes noise while preserving the edges of the image, unlike mode filtering [ 26 ].

The median filter is highly effective at removing impulse and salt-and-pepper noise from images. We used median filtering to remove the impulse noise caused by corruption and intensity variation during image acquisition.

Median filtering is carried out by sliding a window of a particular size across the data. The values within the window are sorted in ascending or descending order, and their median is taken as the output of the filter. The window is then shifted until all the data have been processed.

In this work, a 5×5 window is used, as it was found to be the best size for removing the image noise. The output images obtained from the median filtering approach are shown in Figure 2(iii).
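A minimal sketch of the 5×5 median filtering step (plain NumPy, not the authors' implementation) might look like:

```python
import numpy as np

def median_filter(image, size=5):
    """size x size sliding-window median filter with edge padding."""
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.median(padded[r:r + size, c:c + size])
    return out

img = np.full((9, 9), 10.0)
img[4, 4] = 255.0                 # an impulse ("salt") pixel
clean = median_filter(img)
print(clean[4, 4])                # the spike is replaced by the local median
```

The isolated 255-valued pixel is outvoted by the 24 surrounding values in its 5×5 window, which is exactly why the median filter suppresses impulse noise while leaving edges largely intact.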

Texture Feature Extraction

In our proposed model, texture feature analysis was conducted to obtain unique values that represent the textural surface of the iris images. This approach provides the classification phase with distinctive sample features as the input data for training and testing the classification models.

The property representing the structure and surface of an image is regarded as texture. It may likewise refer to how often a specific pattern or element occurs on the surface of an image. Texture analysis plays a significant role in the visual system for identification and interpretation [ 26 - 27 ]. GLCM feature extraction is used to extract the features needed to train the classification models. The GLCM is a statistical approach that examines the texture of an image through the spatial relationship of its pixels: it counts how often a pair of pixels with specific gray-level values occurs in a given spatial relationship. The resulting GLCM comprises elements (i,j), each of which is the number of times a pixel with gray level i occurs in the specified spatial relationship to a pixel with gray level j in the input image [ 26 , 28 ]. After the gray-level co-occurrence matrix is created, several statistical measures are computed from it (in MATLAB, via the “graycoprops” function). In equations (2-5), G denotes the number of gray levels used, μ represents the mean value of P, and σx, σy, μx, and μy denote the standard deviations and mean values of Px and Py, respectively [ 26 ].

Px(i) = Σ_{j=0}^{G-1} P(i,j),  Py(j) = Σ_{i=0}^{G-1} P(i,j) (2)

μx = Σ_{i=0}^{G-1} i Px(i),  μy = Σ_{j=0}^{G-1} j Py(j) (3)

σx² = Σ_{i=0}^{G-1} (i − μx)² Px(i) (4)

σy² = Σ_{j=0}^{G-1} (j − μy)² Py(j) (5)

Also, several textural features can be calculated by the use of the following equations. These textural features are entropy, contrast, homogeneity, energy, and correlation.

Entropy is the degree of disorder of the pixels in an image; it can also be described as a statistical feature that measures the randomness of an input image [ 29 ]. Note that the homogeneity of a scene and the entropy level of an image are directly proportional: an image with a highly homogeneous scene presents a high entropy level, while an image with a low homogeneous scene presents a correspondingly low entropy level. Equation (6) gives the expression used to calculate the entropy of the iris images [ 30 ].

Entropy = −Σ_{i=0}^{G-1} Σ_{j=0}^{G-1} P(i,j) log(P(i,j)) (6)

Contrast is a statistical measure to calculate the variation within the intensity values of neighboring pixels. Equation (7) shows how contrast is determined in the iris images.

Contrast = Σ_{n=0}^{G-1} n² { Σ_{i=0}^{G-1} Σ_{j=0}^{G-1} P(i,j) }, |i−j| = n (7)

Energy returns the sum of the squared elements in the gray level co-occurrence matrix. Equation (8) shows the mathematical representation of energy.

Energy = Σ_{i,j} P(i,j)² (8)

Homogeneity determines the uniformity of a given region with respect to its gray level variation.

Homogeneity = Σ_{i,j} P(i,j) / (1 + |i−j|) (9)

Correlation determines how a pixel is associated with its neighbors; a high correlation indicates a strong relationship between neighboring local intensities. Correlation returns a measure of how each pixel is related to its neighbor over the whole image.

Correlation = Σ_{i,j} (i − μi)(j − μj) P(i,j) / (σi σj) (10)
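The feature computations of Equations (6)-(10) can be sketched as follows, assuming a single horizontal pixel offset; this is an illustrative NumPy stand-in for MATLAB's graycomatrix/graycoprops pipeline, not the authors' code, and the base-2 logarithm for entropy is our choice:

```python
import numpy as np

def glcm_features(image, levels):
    """GLCM for the horizontal offset (0, 1) plus the texture features of
    Equations (6)-(10). Base-2 log is used for entropy."""
    # Count how often gray level i occurs one pixel to the left of level j.
    P = np.zeros((levels, levels))
    for r in range(image.shape[0]):
        for c in range(image.shape[1] - 1):
            P[image[r, c], image[r, c + 1]] += 1
    P /= P.sum()                              # normalize to probabilities

    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * P).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * P).sum())
    nz = P[P > 0]                             # skip log(0) terms
    return {
        'entropy': float(-(nz * np.log2(nz)).sum()),              # Eq. (6)
        'contrast': float((((i - j) ** 2) * P).sum()),            # Eq. (7)
        'energy': float((P ** 2).sum()),                          # Eq. (8)
        'homogeneity': float((P / (1.0 + np.abs(i - j))).sum()),  # Eq. (9)
        'correlation': float(((i - mu_i) * (j - mu_j) * P).sum()
                             / (sd_i * sd_j)),                    # Eq. (10)
    }

# A tiny 4-level test image.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
feats = glcm_features(img, levels=4)
print(sorted(feats))
```

The contrast expression (((i − j)² P).sum()) is algebraically the same as the n-indexed form of Equation (7), just computed without grouping the terms by |i − j| = n.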

Normalization of the Dataset

Designing a better-performing system is one of the main goals of this research work; therefore, the dataset needs to be normalized, yielding a dataset with homogeneous stability [ 31 - 32 ]. Note that some features in Table 1, such as the entropy, standard deviation, mean, and variance, are unnormalized; using them directly to train the system would make it unstable and eventually degrade its performance.

Statistical Texture Feature Class A (Benign Iris Image) Class B (Malignant Iris Image)
Correlation 0.9499 0.9678
Contrast 0.3032 0.1190
Entropy 7.3451 7.2799
Homogeneity 0.8741 0.9429
Energy 0.1082 0.1747
Variance 6534.3220 2373.5110
Standard Deviation 80.8352 48.7187
Mean 122.9144 160.7580
Table 1. Unnormalized feature vector for benign and malignant iris images

Therefore, these features need to be normalized. The results obtained from normalizing the features lie in the range of 0 to 1, as shown in Table 2.

Statistical Texture Feature Class A (Benign Iris Image) Class B (Malignant Iris Image)
Correlation 0.9512 0.9678
Contrast 0.2950 0.1190
Entropy 0.9688 0.9285
Homogeneity 0.8796 0.9429
Energy 0.1089 0.1747
Variance 0.4514 0.2964
Standard Deviation 0.6719 0.5444
Mean 0.9685 0.4228
Table 2. Normalized feature vector for benign and malignant iris images

This is achieved by subtracting the minimum value of each feature vector from every sample and dividing by the range of that vector (the maximum minus the minimum). Equation (11) shows the formula used to normalize the dataset.

X' = (X − Xmin) / (Xmax − Xmin) (11)

Where,

X=Original values of the sample vector

Xmin=Minimum value in the sample vector

Xmax=Maximum value of the sample vector

X'=Normalized sample value
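Equation (11) can be applied per feature vector as in this short sketch (the values below are illustrative, not taken from Tables 1-2):

```python
import numpy as np

def min_max_normalize(x):
    """Equation (11): X' = (X - Xmin) / (Xmax - Xmin)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Illustrative variance values across several samples (not the paper's data).
variance = [6534.3220, 2373.5110, 1500.0, 9000.0]
norm = min_max_normalize(variance)
print(norm.min(), norm.max())   # normalized values span [0, 1]
```

Each feature (variance, mean, entropy, and so on) is normalized independently, so all features end up on the same 0-to-1 scale before training.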

The normalized dataset of benign and malignant iris images is prepared and fed into the classifiers, which automatically classify an eye tumor sample as either benign or malignant without the aid of physicians. This automated system is modeled on a feedforward neural network trained with backpropagation by adjusting its weights, and on a radial basis function network.

Classification Phase

The classification phase is where the classifiers required to diagnose the iris images as either benign or malignant are implemented. Two models were set up in this research to classify the iris images: a feedforward neural network, trained with backpropagation by adjusting and updating its weights, and a radial basis function network, which uses the Gaussian function as its activation function.

The feedforward neural network is trained with backpropagation in a supervised manner and is organized in layers; the input layer receives the input dataset and serves as the intake to the system [ 33 ].

The hidden layer takes in the weighted sum of the inputs, while the output layer takes in the weighted sum of the hidden neurons and gives the result of the system [ 34 - 36 ]. Certain parameters and conditions have to be considered when modeling the feedforward neural network [ 37 ]: the number of neurons in the hidden layer, the learning rate, and the momentum rate (constant). The number of hidden neurons is obtained by experimentation: starting from an initial number, the neurons are increased or decreased until the number that best fits the pattern is found. In our model, eight neurons were used at the input layer, corresponding to the features extracted from the iris images, as shown in Figure 3. The number of hidden neurons is determined during experimentation; at this hidden layer, the sigmoid activation function is employed because of its soft-switching attribute. Two neurons were used at the output layer of the system, indicating benign and malignant iris images [ 10 ], as shown in Figure 3.

Figure 3. Structure representation of proposed artificial neural network model
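The training scheme described above, with eight input features, a sigmoid hidden layer, two output neurons, and the reported learning rate of 0.27 and momentum constant of 0.77, can be sketched in NumPy on synthetic data (our own illustrative implementation, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in data: 40 samples of 8 normalized features, with a
# two-class one-hot target (benign vs. malignant in the paper's setting).
X = rng.random((40, 8))
labels = (X.sum(axis=1) > 4).astype(int)
T = np.eye(2)[labels]

hidden, lr, momentum = 8, 0.27, 0.77          # values reported in the paper
W1 = rng.normal(0.0, 0.5, (8, hidden))
W2 = rng.normal(0.0, 0.5, (hidden, 2))
dW1 = np.zeros_like(W1)
dW2 = np.zeros_like(W2)

losses = []
for epoch in range(500):
    H = sigmoid(X @ W1)                       # hidden-layer activations
    Y = sigmoid(H @ W2)                       # output-layer activations
    err = Y - T
    losses.append(float((err ** 2).mean()))   # mean square error
    # Backpropagate through the sigmoid derivatives (mean gradients).
    delta2 = err * Y * (1 - Y)
    delta1 = (delta2 @ W2.T) * H * (1 - H)
    g2 = H.T @ delta2 / len(X)
    g1 = X.T @ delta1 / len(X)
    # Momentum-smoothed weight updates.
    dW2 = momentum * dW2 - lr * g2
    dW1 = momentum * dW1 - lr * g1
    W2 += dW2
    W1 += dW1

Y = sigmoid(sigmoid(X @ W1) @ W2)             # final forward pass
accuracy = float((Y.argmax(axis=1) == labels).mean())
print(round(losses[0], 4), round(losses[-1], 4), accuracy)
```

The momentum term reuses a fraction of the previous weight update, which is what speeds up learning and helps the network escape local minima, as discussed in the Results section.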

The radial basis function network is used as the second model, to validate the performance of the neural network model. It approximates continuous functions with the aid of the Gaussian function. The radial basis function network is also a supervised learning algorithm, trained on inputs paired with the desired target outputs [ 38 ]. It consists of three layers: the input, hidden, and output layers. The input layer is a non-processing layer holding the input data; after it comes the hidden layer, a processing layer equipped with the Gaussian activation function. The number of hidden neurons is determined during experimentation. The output layer produces the result of the network; two neurons are used there, denoting the benign or malignant iris [ 10 ]. Equations (12) and (13) give the function represented by an RBF network with p hidden neurons and the output using the Gaussian activation function.

y = Σ_{j=1}^{p} wj φ(||X − θj||) (12)

y = Σ_{j=1}^{p} wj exp(−||X − θj||² / (2σ²)) (13)

where θj is the center of the j-th hidden RBF neuron, σ is the width, wj is the weight, p is the number of hidden neurons, and X is the input dataset.
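Equations (12)-(13) can be illustrated with a small RBF network in NumPy; here the output weights are fitted by linear least squares, a common training scheme that the paper does not specify, on synthetic stand-in data:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_design(X, centers, sigma):
    """Gaussian activations of Equation (13):
    phi_j(x) = exp(-||x - theta_j||^2 / (2 * sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Synthetic stand-in for the normalized 8-feature iris vectors.
X = rng.random((60, 8))
T = (X[:, 0] > 0.5).astype(float).reshape(-1, 1)     # toy binary target

centers = X[rng.choice(60, size=10, replace=False)]  # hidden-neuron centers
sigma = 1.0                                          # the "spread" constant
Phi = rbf_design(X, centers, sigma)                  # hidden-layer outputs
# Output weights w_j fitted by linear least squares on the design matrix.
W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
pred = (Phi @ W > 0.5).astype(float)
print(float((pred == T).mean()))
```

Varying sigma here plays the same role as varying the spread constant in the experiments of the Results section: it widens or narrows each hidden neuron's Gaussian response.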

Results

A system with good performance and robustness in identifying and classifying the patterns embedded in the data is the main target of any machine-learning expert. Therefore, certain parameters of the neural network have to be varied to achieve this: the learning rate, the momentum constant, and the number of neurons in the hidden layer of the system. The learning rate determines the speed at which the system learns the patterns in the dataset. The momentum constant improves the rate at which the system learns and, at the same time, improves its accuracy; it also prevents the system from getting stuck in local minima.

In our approach, several neural networks were created with different numbers of hidden neurons. A constructive approach was adopted in this work to choose the hidden neurons: a particular number of neurons is chosen initially for the hidden layer and is then increased to create another network [ 39 ].

Accordingly, eight hidden neurons were chosen first, and this number was increased by two neurons at a time until a network with twelve hidden neurons was reached. These networks were trained, and their performances were compared to determine the best-performing system.

The learning rate and the momentum constant were varied until a learning rate of 0.27 and a momentum constant of 0.77 were reached, which yielded the best recognition rate for the system. Table 3 and Figure 4 show the performance curve of the optimum system, while Figure 5 shows the performance obtained from the three networks.

BPNN Hidden Neurons Learning Rate Momentum Rate Recognition Rate (%)
Network 1 8 0.27 0.77 92.31
Network 2 10 0.27 0.77 92.31
Network 3 12 0.27 0.77 76.9
BPNN: Back Propagation Neural Network
Table 3. Back Propagation Neural Network Performance Models

Figure 4. The mean square error against epoch

Figure 5. Performance comparisons of our Back Propagation Neural Network models

Discussion

Figure 5 shows the performance comparison of the three neural network models. The results clearly show that, despite varying the number of hidden neurons, network 1 and network 2 both achieved the best recognition rate of 92.31%. Since these two networks have the same recognition rate, the one with the lower mean square error is considered the better system. As a result, network 1, with a recognition rate of 92.31% and a mean square error of 0.15208, is considered the best-performing feedforward neural network trained with backpropagation.

During training of the RBF network, the spread constant and the number of hidden neurons were varied to obtain optimum recognition. The spread constant and the hidden neurons were varied experimentally in increments of 0.5 and 2 neurons, respectively. When the spread constant reached 2.0 and the hidden layer reached 32 neurons, the optimum recognition of the system was obtained. Table 4 and Figure 6 show the recognition rates obtained when the trained system was tested with the test dataset.

RBFN Spread Constant Hidden Neurons Recognition Rate (%)
Network 1 1.0 28 89.47
Network 2 1.5 30 89.64
Network 3 2.0 32 94.70
Network 4 2.5 34 92.10
RBFN: Radial Basis Function Network
Table 4. Radial Basis Function Network Performance Models

Figure 6. Schematic plot for radial basis function network models performance comparison.

To determine the optimal system, the results obtained from the two models need to be compared. In this regard, the RBF network has the higher recognition rate of 94.70%, showing the best performance. Table 5 shows the comparison of the results to determine the best system.

Author(s) Model Recognition Rate (%)
Our proposed System 1 Using GLCM+BPNN 92.31
Our proposed System 2 Using GLCM+RBFN 94.70
GLCM: Graylevel Co-occurrence Matrix, BPNN: Back Propagation Neural Network, RBFN: Radial Basis Functions Network
Table 5.Comparison performance of the proposed models

The results obtained in this work were also compared with related works by other researchers to confirm their relevance and to identify the system best suited to the diagnosis of the disease. Table 6 shows the comparison of our proposed systems with other related works.

Author(s) Model Recognition Rate (%)
Ahmed et al. 2018 Iris Melanoma+ANN 85.00
Kamil et al. 2016 Using Artificial Neural Network 92.00
Kabari and Nwachukwu 2012 Eye Disease+Hybrid NN+Decision Tree 92.00
Our proposed System 1 Using GLCM+BPNN 92.31
Our proposed System 2 Using GLCM+RBFN 94.70
ANN: Artificial Neural Network, NN: Neural Network, GLCM: Graylevel Co-occurrence Matrix, BPNN: Back Propagation Neural Network, RBFN: Radial Basis Functions Network
Table 6.Our Proposed Model Performance Comparison with other Related Works

Table 6 compares our results with other related works to validate the significance of our work: our GLCM+BPNN and GLCM+RBFN systems exceed the best related recognition rate by 0.31% and 2.70%, respectively, making the proposed systems the best among the related works. The system with the optimal performance is considered the best and most efficient for diagnosing the disease; our proposed GLCM+RBFN therefore proves to be the best system for adoption by clinicians in the diagnosis of eye melanoma.

Conclusion

In this work, we have shown that early diagnosis of eye melanoma is essential for proper treatment. When eye melanoma is detected at an early stage, it can be properly treated and managed, which helps prevent it from spreading to other regions of the body and raises the patient’s five-year survival rate to about 80%.

An eye melanoma diagnosis system has been proposed using GLCM feature extraction and soft computing. To the best of our knowledge, this approach has not been used before to diagnose this disease. The recognition rates obtained in this work prove that it diagnoses eye melanoma efficiently compared with other systems used to solve the same problem.

Acknowledgment

The authors appreciate Timothy O. Olaniyi for his encouragement and support toward the success of this research work.

Authors’ Contribution

EO. Olaniyi conceived the idea. The introduction of the paper was written by EO. Olaniyi and TE. Komolafe. TT. Oyemakinde and M. Abdulaziz gathered the images and the related literature and also helped with writing the related works. The method implementation was carried out by EO. Olaniyi and OK. Oyedotun. Results and analysis were carried out by EO. Olaniyi, TT. Oyemakinde, and TE. Komolafe. The research work was proofread and supervised by OK. Oyedotun and A. Khashman. All the authors read, modified, and approved the final version of the manuscript.

Ethical Approval

The data used are open source, from ‘‘https://eyecancer.com/eye-cancer/image-galleries/iris-tumors/’’, which we referenced in accordance with the ethics of scientific research.

Conflict of Interest

None

References

  1. Muron A, Pospisil J. The Human Iris Structure and Its Usages. Physica. 2000; 39:87-95.
  2. Skalicky SE, Giblin M, Conway RM. Diffuse iris melanoma: Report of a case with review of the literature. Clin Ophthalmol. 2007; 1(3):339-42. Publisher Full Text | PubMed
  3. Cardoso TB, Pizzari T, Kinsella R, Hope D, Cook JL. Current trends in tendinopathy management. Best Pract Res Clin Rheumatol. 2019; 33(1):122-40. DOI | PubMed
  4. Schwartz GG. Eye cancer incidence in U.S. States and access to fluoridated water. Cancer Epidemiol Biomarkers Prev. 2014; 23(9):1707-11. DOI | PubMed
  5. Singh M, Durairaj P, Yeung J. Uveal Melanoma: A Review of the Literature. Oncol Ther. 2018; 6(1):87-104. Publisher Full Text | DOI | PubMed
  6. Rigel DS, Russak J, Friedman R. The Evolution of Melanoma Diagnosis: 25 Years Beyond the ABCDs. CA Cancer J Clin. 2010; 60(5):301-16. DOI | PubMed
  7. Huang JT, Coughlin CC, Hawryluk EB, Hook K, Humphrey SR, Kruse L, et al. Risk Factors and Outcomes of Nonmelanoma Skin Cancer in Children and Young Adults. J Pediatr. 2019; 211:152-8. Publisher Full Text | DOI | PubMed
  8. Serte S, Demirel H. Gabor wavelet-based deep learning for skin lesion classification. Comput Biol Med. 2019; 113:103423. DOI | PubMed
  9. Ko JS, Matharoo-Ball B, Billings SD, et al. Diagnostic distinction of malignant melanoma and benign nevi by a gene expression signature and correlation to clinical outcomes. Cancer Epidemiol Biomarkers Prev. 2017; 26(7):1107-13. DOI | PubMed
  10. Guo H, Kruger U, Wang G, Kalra MK, Yan P. Knowledge-Based Analysis for Mortality Prediction from CT Images. IEEE J Biomed Heal Informatics. 2020; 24(2):457-64. Publisher Full Text | DOI | PubMed
  11. Nasrullah N, Sang J, Alam MS, Mateen M, Cai B, Hu H. Automated lung nodule detection and classification using deep learning combined with multiple strategies. Sensors. 2019; 19(17):3722. Publisher Full Text | DOI | PubMed
  12. Gao J, Jiang Q, Zhou B, Chen D. Convolutional neural networks for computer-aided detection or diagnosis in medical image analysis: An overview. Math Biosci Eng. 2019; 16(6):6536-61. DOI | PubMed
  13. Fogliatto FS, Anzanello MJ, Soares F, Brust-Renck PG. Decision Support for Breast Cancer Detection: Classification Improvement Through Feature Selection. Cancer Control. 2019; 26(1):1-8. Publisher Full Text | DOI | PubMed
  14. Hosni M, Abnane I, Idri A, Carrillo De Gea JM, Fernández Alemán JL. Reviewing ensemble classification methods in breast cancer. Comput Methods Programs Biomed. 2019; 177:89-112. DOI | PubMed
  15. Byra M, Galperin M, Ojeda-Fournier H, Olson L, O’Boyle M, Comstock C, et al. Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion. Med Phys. 2019; 46(2):746-55. DOI | PubMed
  16. Yuan Y, Qin W, Buyyounouski M, Ibragimov B, Hancock S, Han B, et al. Prostate cancer classification with multiparametric MRI transfer learning model. Med Phys. 2019; 46(2):756-65. DOI | PubMed
  17. Azizi S, Bayat S, Yan P, Tahmasebi A, Kwak JT, Xu S, et al. Deep recurrent neural networks for prostate cancer detection: Analysis of temporal enhanced ultrasound. IEEE Trans Med Imaging. 2018; 37(12):2695-703. Publisher Full Text | DOI | PubMed
  18. Hou Q, Bing ZT, Hu C, Li MY, Yang KH, Mo Z, et al. RankProd Combined with Genetic Algorithm Optimized Artificial Neural Network Establishes a Diagnostic and Prognostic Prediction Model that Revealed C1QTNF3 as a Biomarker for Prostate Cancer. EBioMedicine. 2018; 32:234-44. Publisher Full Text | DOI | PubMed
  19. Ahmed IO, Ibraheem BA, Mustafa ZA. Detection of Eye Melanoma Using Artificial Neural Network. J Clin Eng. 2018; 43(1):22-8. DOI
  20. Oyedotun O, Khashman A. Iris nevus diagnosis: Convolutional neural network and deep belief network. Turkish J Electr Eng Comput Sci. 2017; 25(2):1106-15. DOI
  21. Dimililer K, Ever YK, Ratemi H. Intelligent eye Tumour Detection System. Procedia Computer Science. 2016; 102:325-32. DOI
  22. Kabari LG, Nwachukwu EO. Neural networks and decision trees for eye diseases diagnosis. In: Advances in Expert Systems. IntechOpen; 2012.
  23. New York Eye Cancer Center. Eye Cancer Resources. 1995. Available from: https://eyecancer.com/eye-cancer/image-galleries/iris-tumors.
  24. Ĉadík M. Perceptual evaluation of color-to-grayscale image conversions. Comput Graph Forum. 2008; 27(7):1745-54. DOI
  25. Olaniyi EO, Oyedotun OK, Adnan K. Intelligent Grading System for Banana Fruit Using Neural Network Arbitration. J Food Process Eng. 2017; 40(1):e12335. DOI
  26. Olaniyi EO, Adekunle AA, Odekuoye T, Khashman A. Automatic system for grading banana using GLCM texture feature extraction and neural network arbitrations. J Food Process Eng. 2017; 40(6):e12575. DOI
  27. Ping Tian D. A review on image feature extraction and representation techniques. International Journal of Multimedia and Ubiquitous Engineering. 2013; 8(4):385-96.
  28. Olaniyi EO, Oyedotun OK, Ogunlade CA, Khashman A. In-line grading system for mango fruits using GLCM feature extraction and soft-computing techniques. Int J Appl Pattern Recognit. 2019; 6(1):58. DOI
  29. Ebenezer O, Oyebade KO, Khashman A. Heart diseases diagnosis using neural network arbitration. Int J Intell Syst Appl. 2015; 7(12):75-82. DOI
  30. Vo MH. A Diagonally Weighted Binary Memristor Crossbar Architecture Based on Multilayer Neural Network for Better Accuracy Rate in Speech Recognition Application. Adv Electr Comput Eng. 2019; 19(2):75-82. DOI
  31. Koyuncu I. Implementation of high speed Tangent Sigmoid Transfer Function approximations for Artificial Neural Network applications on FPGA. Adv Electr Comput Eng. 2018; 18(3):79-86. DOI
  32. Olaniyi EO, Oyedotun OK, Helwan A, Adnan K. IEEE: Beirut, Lebanon; 2015.
  33. Bashiri M, Geranmayeh AF. Tuning the parameters of an artificial neural network using central composite design and genetic algorithm. Sci Iran. 2011; 18(6):1600-8. DOI
  34. Kopal I, Harničárová M, Valíček J, Krmela J, Lukáč O. Radial basis function neural network-based modeling of the dynamic thermo-mechanical response and damping behavior of thermoplastic elastomer systems. Polymers. 2019; 11(6):1074. Publisher Full Text | DOI | PubMed
  35. Lan Y, Soh YC, Huang GB. Constructive hidden nodes selection of extreme learning machine for regression. Neurocomputing. 2010; 73(16-18):3191-9. DOI
  36. Zulpe N, Pawar V. GLCM textural features for brain tumor classification. Int J Comput Sci Issues. 2012; 9(3):354-9.
  37. Sildir H, Aydin E, Kavzoglu T. Design of feedforward neural networks in the classification of hyperspectral imagery using superstructural optimization. Remote Sens. 2020; 12(6):956. DOI
  38. Olaniyi EO, Adnan K. Onset diabetes diagnosis using artificial neural network. Int J Sci Eng Res. 2014; 5(10):754-9.
  39. Humeau-Heurtier A. Texture feature extraction methods: A survey. IEEE Access. 2019; 7:8975-9000. DOI