Background: Obstructive Sleep Apnea (OSA) is a respiratory disorder caused by periodic obstruction of the upper airway (mainly in the oropharynx) during sleep. The common examination used to diagnose sleep disorders is Polysomnography (PSG). Diagnosis with PSG is uncomfortable for the patient because the patient’s body is fitted with many sensors.
Objective: This study aims to propose an OSA detection method using statistics of the Fast Fourier Transform (FFT) of the electrocardiographic RR interval (the interval from one R peak to the R peak of the next heartbeat) together with machine learning algorithms.
Material and Methods: In this case-control study, data were taken from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Apnea-ECG database (RR intervals). The machine learning algorithms were Linear Discriminant Analysis (LDA), Artificial Neural Network (ANN), K-Nearest Neighbors (K-NN), and Support Vector Machine (SVM).
Results: The OSA detection technique was designed and tested, and five features of the FFT were examined, namely mean (f1), Shannon entropy (f2), standard deviation (f3), median (f4), and geometric mean (f5). The highest detection performance was found using ANN. Among the ANN types tested, the ANN with gradient-descent backpropagation gave the best performance, with accuracy, sensitivity, and specificity of 84.64%, 94.21%, and 64.03%, respectively. The lowest performance was found when LDA was applied.
Conclusion: ANN with gradient-descent backpropagation outperformed LDA, SVM, and K-NN for OSA detection.
Obstructive Sleep Apnea (OSA) is the periodic obstruction of the upper airway (mainly in the oropharynx) during sleep, resulting in decreased airflow and intermittent cessation of breathing, either complete (apnea) or partial (hypopnea). OSA is part of the complex sleep-disordered breathing syndrome [ 1 ], and its severity, based on the Apnea-Hypopnea Index (AHI), can be divided into three groups: mild OSA, with an AHI of 5-15 events/hour; moderate OSA, with 15-30 events/hour; and severe OSA, with more than 30 events/hour [ 2 ].
OSA often goes undetected because it appears during sleep, and greater severity increases the risk of coronary heart disease. OSA patients with AHI>30 have a 68% higher risk of coronary heart disease than OSA patients with AHI<5 [ 3 ]. More severe OSA can also cause long-term health problems, such as metabolic disorders, cognitive impairment, reduced sleep quality, nocturia, irritability, and memory disorders. OSA is also associated with decreased work productivity and with work and traffic accidents that can result in mild to severe injuries [ 4 , 5 ].
Early diagnosis of OSA is needed to reduce its serious impact [ 3 ]. The common examination used to diagnose sleep disorders is Polysomnography (PSG), performed while the patient sleeps and involving several measurements, such as Electroencephalography (EEG), Electrooculography (EOG), Electromyography (EMG), and Electrocardiography (ECG) [ 6 ]. Patients find the PSG examination uncomfortable since their body is fitted with many sensors [ 7 ].
In OSA patients, several negative effects occur, such as hypoxemia, hypercapnia, acidosis, increased adrenergic activity and afterload, and rapid fluctuations in heart-wall motion, resulting in cardiac conduction problems and arrhythmias. Clinically, arrhythmias can be detected from ECG signals [ 8 ]. For the patient's convenience, OSA can be detected using a single-lead ECG [ 9 ], a medical device used to record the electrical activity of the heart by measuring biopotential differences on the skin surface [ 10 ].
Sharma and Sharma described OSA detection from a single-lead ECG using Hermite basis functions [ 9 ]. Hassan and Haque studied the automatic identification of OSA from a single-lead ECG using random under-sampling boosting [ 11 ]. Wang et al. proposed automatic OSA detection based on the RR interval using residual networks [ 12 ]. Hassan and Haque also studied OSA screening from a single-lead ECG using statistical and spectral features with bootstrap aggregating [ 13 ]. Sharma and Sharma investigated OSA detection from ECG using variational mode decomposition [ 14 ].
In OSA patients, a cyclic change occurs in the RR interval (variation in heart rate) of the ECG signal [ 14 ]. This underlies the present study of OSA detection using the Fast Fourier Transform (FFT), an algorithm that computes the Discrete Fourier Transform (DFT) efficiently by dividing the sampled signal into parts that are processed with the same algorithm and then recombined [ 15 ]. In this study, an OSA detection method is introduced using frequency analysis with several classification techniques: Linear Discriminant Analysis (LDA), Artificial Neural Network (ANN), K-Nearest Neighbors (K-NN), and Support Vector Machine (SVM).
Material and Methods
This case-control study aims to propose an OSA detection using the Fast Fourier Transform (FFT) statistics of electrocardiographic RR Interval and machine learning algorithms.
The case-control grouping was based on AHI (Apnea-Hypopnea Index) scores, divided into three classes: A, B, and C. Class-A records had AHI≥10, containing 10 or more episodes of apnea per hour and at least 100 minutes of apnea over the whole recording. Class-B records had 5≤AHI<10, containing 5 or more episodes of apnea per hour and 5 to 99 minutes of apnea over the whole recording. Class-C records had AHI<5 and were considered the normal group, with no or fewer than 5 episodes of apnea per hour. The data used were taken from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Apnea-ECG database [ 16 ].
The dataset for training and validating the performance of the proposed apnea detection algorithm comprises 35 patients: 20 class-A, 5 class-B, and 10 class-C (13,709 minutes in total). Some data were not used because of noise in the signal [ 17 ].
OSA Detection System Design
The general OSA detection system is shown in Figure 1.
Segmentation and RR interval determination
The initial data used in this study were the R-peak data of each patient, segmented every 60 s. After segmentation, the RR interval was calculated as the time difference between two consecutive R peaks, R(i) and R(i+1): the RR interval is R(i+1) minus R(i), with i = 1, 2, 3, ..., n.
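As an illustration, the segmentation and RR-interval computation described above can be sketched as follows (a minimal sketch; the function names and the sample R-peak times are ours, not from the study):

```python
import numpy as np

def rr_intervals(r_peak_times):
    """RR intervals as differences between consecutive R-peak times:
    RR(i) = R(i+1) - R(i)."""
    return np.diff(np.asarray(r_peak_times, dtype=float))

def segment_peaks(r_peak_times, seg_len=60.0):
    """Group R-peak times (in seconds) into consecutive 60 s segments."""
    segments = {}
    for t in np.asarray(r_peak_times, dtype=float):
        segments.setdefault(int(t // seg_len), []).append(t)
    return [np.array(v) for _, v in sorted(segments.items())]

# Hypothetical R-peak times (seconds), roughly one beat per second
peaks = [0.0, 0.8, 1.7, 2.5, 3.4]
print(np.round(rr_intervals(peaks), 3))  # [0.8 0.9 0.8 0.9]
```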
Fast Fourier Transform (FFT)
The RR interval is processed using the Fast Fourier Transform (FFT), an algorithm for representing a time-domain signal in the frequency domain. The FFT of a signal x(n) can be calculated using [ 15 ]:

X(k) = ∑n=0..N−1 x(n) WN^nk, with WN = e^(−j2π/N) (Equation 1)

where N = length of the complex input vector, n = index in the time domain (0, 1, ..., N−1), k = index in the frequency domain (0, 1, ..., N−1), x(n) = input signal, X(k) = FFT of the signal, and WN^nk = Fourier (twiddle) factor.
In the FFT process, the first 10 data points of each segment were removed. After obtaining the FFT output, statistical features were calculated: the mean (f1), Shannon entropy (f2) [ 18 ], standard deviation (f3), median (f4), and geometric mean (f5). The equations for calculating these features are presented in Table 1.
| Feature | Equation |
|---|---|
| f1 (mean) | X(k)ave = (∑ X(k)i) / n |
| f2 (Shannon entropy) | H(X(k)) = −∑i=1..n P(X(k)i) logb P(X(k)i) |
| f3 (standard deviation) | X(k)std = √( ∑i=1..n (X(k)i − X(k)ave)² / (n−1) ) |
| f4 (median) | X(k)med = X(k)(n+1)/2 |
| f5 (geometric mean) | X(k)geo = ( ∏i=1..n X(k)i )^(1/n) |

where X(k)ave = mean, P = probability mass, n = number of data points, ∑X(k)i = sum of the data, H(X(k)) = Shannon entropy, b = logarithm base, X(k)std = standard deviation, X(k)i = the i-th value, X(k)med = median, and X(k)geo = geometric mean.
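The FFT and feature-extraction step can be sketched as follows (a minimal sketch; the study does not state how the probability mass for the Shannon entropy is formed, so normalizing the magnitude spectrum is our assumption, as are the function and key names):

```python
import numpy as np

def fft_features(rr_segment, drop=10):
    """Compute the five Table 1 statistics on the FFT magnitude spectrum of one
    RR-interval segment, removing the first `drop` points as in the paper."""
    spectrum = np.abs(np.fft.fft(rr_segment))[drop:]
    p = spectrum / spectrum.sum()          # assumed probability mass for f2
    return {
        "f1_mean": spectrum.mean(),
        "f2_entropy": -np.sum(p * np.log2(p + 1e-12)),
        "f3_std": spectrum.std(ddof=1),    # n-1 denominator, as in Table 1
        "f4_median": np.median(spectrum),
        "f5_geo_mean": np.exp(np.mean(np.log(spectrum + 1e-12))),
    }
```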
The data generated from the statistical calculations were normalized to bring them to a specified scale or range [ 19 ]. In this study, min-max normalization was used, mapping values to the range 0 to 1 (Equation (2)):

x' = (x − xmin) / (xmax − xmin)

where x = the statistical feature, xmin = the minimum value, and xmax = the maximum value [ 20 ].
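Min-max normalization can be sketched directly from Equation (2) (the sample values are illustrative only):

```python
import numpy as np

def min_max_normalize(x):
    """Min-max normalization to [0, 1]: x' = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

print(min_max_normalize([2.0, 4.0, 6.0]))  # [0.  0.5 1. ]
```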
After normalization, the data were divided into training and testing sets of 80% and 20% of the total data, respectively (training data: 10,955 min; testing data: 2,754 min). The next step was to classify the data as normal or OSA using four classification methods: LDA, ANN, K-NN, and SVM.
Linear Discriminant Analysis (LDA)
LDA determines an optimal projection of the input data onto a space of much smaller dimension; it minimizes the spread of inputs within the same class and maximizes the spread of the input data between different classes [ 21 ]. The LDA discriminant function is given in Equation (3):

fi = μi C−1 xkT − (1/2) μi C−1 μiT + ln(Pi)

where fi = the class discriminant function, μi = the mean vector of class i, C−1 = the inverse of the pooled covariance matrix, xkT = the transpose of the test data vector, μiT = the transpose of the class-i mean vector, and Pi = the prior probability of the i-th class.
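The discriminant function above can be sketched on synthetic two-class data (the data, labels, and function names are hypothetical, not the study's implementation):

```python
import numpy as np

def lda_fit(X, y):
    """Estimate the Equation (3) quantities: per-class mean vectors, the inverse
    of the pooled (within-class) covariance matrix, and class priors."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    centered = np.vstack([X[y == c] - means[c] for c in classes])
    C_inv = np.linalg.inv(np.cov(centered, rowvar=False))
    priors = {c: np.mean(y == c) for c in classes}
    return classes, means, C_inv, priors

def lda_predict(x, model):
    """Assign x to the class with the largest discriminant score f_i."""
    classes, means, C_inv, priors = model
    scores = [means[c] @ C_inv @ x - 0.5 * means[c] @ C_inv @ means[c]
              + np.log(priors[c]) for c in classes]
    return classes[int(np.argmax(scores))]

# Synthetic two-class data: 0 = normal, 1 = OSA (illustrative only)
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.40, 0.05, (40, 2)), rng.normal(0.20, 0.05, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
model = lda_fit(X, y)
print(lda_predict(np.array([0.41, 0.39]), model))  # near the first mean -> 0
```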
Artificial Neural Network (ANN)
ANN is an information-processing system with characteristics similar to biological neural networks. This study uses the backpropagation ANN algorithm with several units in one or more hidden layers [ 22 ], with notation defined as in Equation 4 [ 23 ],
where n = dimension of the input, m = dimension of the output, P = number of training patterns, Xi = input unit (i = 1, ..., n), Yi = output unit (i = 1, ..., m), x(p) = training input vector (p = 1, ..., P), t(p) = target vector for input vector x(p), y(p) = computed output for input vector x(p), and z′ = derivative of the activation function.
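A comparable backpropagation setup can be sketched with scikit-learn's MLPClassifier (an assumption: the study used other software, and solver="sgd" only approximates plain gradient-descent backpropagation; the 12 hidden units and learning rate 0.05 follow the training parameters reported later in this paper, while the synthetic data are ours):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic 5-feature data for two classes: 0 = normal, 1 = OSA (illustrative)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.40, 0.08, (60, 5)), rng.normal(0.22, 0.08, (60, 5))])
y = np.array([0] * 60 + [1] * 60)

mlp = MLPClassifier(hidden_layer_sizes=(12,),  # 12 hidden units
                    solver="sgd",              # plain gradient descent
                    learning_rate_init=0.05,
                    max_iter=400, random_state=0)
mlp.fit(X, y)
print(mlp.score(X, y))  # training accuracy on the synthetic data
```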
K-Nearest Neighbor (K-NN)
K-NN classifies data based on its similarity to other data [ 24 ]. Its classification belongs to the supervised learning models, i.e. the classes in the reference database are identified in advance. The method searches the database for the K objects closest to the newly tested object with respect to the features used; in this study, k = 11. K-NN uses the Euclidean distance, calculated with the following equation:

D(p, q) = √( ∑i=1..n (pi − qi)² )

where p and q are the compared subjects with n features and D is the distance function [ 24 ].
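The Euclidean-distance K-NN rule with k = 11 can be sketched as follows (the synthetic data and function name are ours):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=11):
    """Majority vote among the k Euclidean-nearest training points,
    D(p, q) = sqrt(sum_i (p_i - q_i)^2); k = 11 as in this study."""
    d = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    nearest = y_train[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Synthetic two-class data: 0 = normal, 1 = OSA (illustrative only)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.40, 0.05, (30, 2)), rng.normal(0.20, 0.05, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
print(knn_predict(X, y, np.array([0.42, 0.38])))  # near the first cluster -> 0
```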
Support Vector Machine (SVM)
SVM is a learning technique for pattern recognition that uses a hypothesis space of linear functions in a high-dimensional feature space [ 25 ]. In general, the SVM function can be calculated with the following equation:

fSVM(x) = wT φ(x) + b

The parameters b and w indicate the bias and the weight vector, respectively, determined in the training process by minimizing a cost function. The mapping φ(·) takes the input vector x into a higher-dimensional space in which the classes are easily separated by a linear hyperplane [ 26 ].
A training sample (xi, yi) is a support vector if yi fSVM(xi) ≤ 1; the extracted support vectors (sk) form a small subset of the training set. The SVM function can therefore be written as follows [ 26 ]:

fSVM(x) = ∑k αk yk K(sk, x) + b

where K(·,·) is a kernel function representing the nonlinear effect of φ(·) in the classification and αk are the weights obtained in training. The kernel function used in this study is the RBF kernel [ 26 ]:

K(x, sk) = exp(−‖x − sk‖² / (2σ²))
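An RBF-kernel SVM can be sketched with scikit-learn's SVC (a sketch only: the study's C and σ are not reported, so the regularization and kernel-width parameters are left at scikit-learn defaults, and the data are synthetic):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic 5-feature data for two classes: 0 = normal, 1 = OSA (illustrative)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.40, 0.06, (60, 5)), rng.normal(0.22, 0.06, (60, 5))])
y = np.array([0] * 60 + [1] * 60)

svm = SVC(kernel="rbf").fit(X, y)   # RBF kernel, as in the study
print("support vectors:", len(svm.support_))  # the extracted s_k subset
```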
After classification using LDA, ANN, K-NN, and SVM, the next step was performance evaluation, which determines how well the system detects OSA by calculating three parameters: accuracy, sensitivity, and specificity. Accuracy is the ability to identify positive and negative results correctly; sensitivity and specificity are the abilities to identify positive and negative results, respectively (Equations 10-12) [ 27 ]:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)

Here TP (True Positive) and TN (True Negative) are the numbers of OSA and normal segments detected correctly, FP (False Positive) is the number of normal segments incorrectly detected as OSA, and FN (False Negative) is the number of OSA segments incorrectly detected as normal.
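The three performance measures follow directly from the confusion-matrix counts (the counts below are illustrative only, not the study's actual confusion matrix):

```python
def performance(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts
    (Equations 10-12)."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

print(performance(tp=90, tn=40, fp=20, fn=10))
```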
The data were processed by the FFT algorithm, with the first 10 data points of each segment eliminated (Figure 2). Figure 2 shows the FFT signals of one segment for a normal patient and an OSA patient (a and c) and the same signals with the first 10 data points omitted (b and d); the FFT signal of the OSA patient has a lower amplitude range than that of the normal patient. For the normal patient, the initial data (Figure 2(a)) have higher values than the data after eliminating the first 10 points (Figure 2(b)), and the signal after elimination is more consistent; the same applies to the OSA patient's signal.
After obtaining the FFT signal, the statistical features are calculated: mean (f1), Shannon entropy (f2), standard deviation (f3), median (f4), and geometric mean (f5). Table 2 shows the average value of each feature for normal and OSA patients and the P-value for each feature.
| Features | Normal | Obstructive Sleep Apnea (OSA) | P-value |
|---|---|---|---|
| Shannon entropy (f2) | 0.4424 | 0.2700 | P<0.0001 |
| Standard deviation (f3) | 0.0248 | 0.0166 | P<0.0001 |
| Geometric mean (f5) | 0.0356 | 0.0210 | P<0.0001 |
Table 2 shows that the feature values in OSA patients are smaller than in normal patients. The P-value is the probability observed from the statistical test [ 28 ]; in this study, the P-value for each feature is very small (P<0.0001), showing that the features differ significantly between normal and OSA patients.
After normalization, the next stage is classification using LDA, ANN, K-NN, and SVM. The classification results are presented in terms of accuracy, sensitivity, and specificity, where accuracy is the fraction of normal and OSA segments classified correctly, sensitivity is the fraction of OSA segments classified correctly, and specificity is the fraction of normal segments classified correctly.
The classification using Linear Discriminant Analysis (LDA), Artificial Neural Network (ANN), K-Nearest Neighbors (K-NN), and Support Vector Machine (SVM) with all features (f1, f2, f3, f4, and f5) were conducted (Table 3).
| Classification | Sensitivity (%) | Specificity (%) | Accuracy (%) |
|---|---|---|---|
| Linear Discriminant Analysis (LDA) | 90.43 | 66.44 | 82.83 |
| Artificial Neural Network (ANN) | 94.21 | 64.03 | 84.64 |
| K-Nearest Neighbors (K-NN) | 91.76 | 65.41 | 83.41 |
| Support Vector Machine (SVM) | 94.15 | 63.80 | 84.53 |
According to Table 3, ANN obtained the best accuracy and sensitivity, 84.64% and 94.21% respectively, while LDA obtained the best specificity, 66.44%. A high accuracy means the system recognizes both the normal and the OSA features better. Overall, ANN performs best because it has the highest accuracy.
Features were then varied, both singly and in combination, to determine the best features for OSA detection. The ANN algorithm used in this study is of the backpropagation type. The parameters of the training stage include the hidden-layer size (12), the learning rate (0.05), and the number of epochs. The training process yields a performance graph relating epoch to Mean Square Error (MSE); if the MSE approaches zero as the epoch increases, the training result is good because the detection error is small (Figure 3). Figure 3 shows the MSE for each feature up to epoch 400, after which the lowest MSE is stable. The best performance (lowest MSE) was obtained with feature f1, and the worst (highest MSE) with f2.
The results of the performance by classifying ANN for a single feature and a combination of features can be seen in Table 4, sorted by the best features.
| Features | Sensitivity (%) | Specificity (%) | Accuracy (%) |
|---|---|---|---|
| f1, f4, f5 | 93.78 | 63.69 | 84.24 |
| f1, f4, f5, f3 | 93.83 | 63.80 | 84.31 |
| f1, f4, f5, f3, f2 | 94.21 | 64.03 | 84.64 |
Table 4 shows that more features yield better accuracy and sensitivity. The best accuracy and sensitivity, 84.64% and 94.21%, are obtained with all features (f1, f4, f5, f3, and f2), while the best specificity, 66.44%, is obtained with the single feature f1. In general, more features give better performance and fewer features give lower performance. Accuracy indicates how well the program works overall, since it reflects both sensitivity and specificity; a high accuracy means the system recognizes all the normal and OSA segments better. Many factors can lower the accuracy, one of which is noise in the ECG recordings.
The classification with all features (f1, f4, f5, f3, and f2) was then repeated with various backpropagation ANNs, as presented in Table 5.
| Backpropagation Artificial Neural Network (ANN) variant | Sensitivity (%) | Specificity (%) | Accuracy (%) |
|---|---|---|---|
| Gradient descent backpropagation | 94.21 | 64.03 | 84.64 |
| Levenberg-Marquardt backpropagation | 94.79 | 61.63 | 84.28 |
| Bayesian | 93.62 | 63.57 | 84.10 |
| Quasi-Newton | 93.41 | 64.15 | 84.13 |
| With Powell-Beale restarts | 94.52 | 60.82 | 83.84 |
| With Polak-Ribiere updates | 93.73 | 62.89 | 83.95 |
| With Fletcher-Reeves updates | 93.62 | 63.57 | 84.10 |
| Adaptive backpropagation | 95.80 | 49.14 | 81.01 |
| Momentum adaptive backpropagation | 95.59 | 53.26 | 82.17 |
| One-step secant backpropagation | 93.73 | 62.89 | 83.95 |
| RPROP backpropagation | 94.58 | 59.45 | 83.44 |
| Scaled conjugate gradient | 93.62 | 62.77 | 83.84 |
From Table 5, the highest accuracy (84.64%) is obtained by gradient-descent backpropagation, the highest sensitivity (95.80%) by adaptive backpropagation, and the highest specificity (64.15%) by the quasi-Newton method.
This study concerns OSA detection, still a serious concern in the health sector, using ECG R-peak data. For each patient, the data were segmented into 60 s windows, as done by Hassan and Haque in their work on the automatic identification of OSA with a single-lead ECG using boosting sampling [ 11 ].
Some statistical features were used, including mean (f1), Shannon entropy (f2), standard deviation (f3), median (f4), and geometric mean (f5) to find the most appropriate features for detecting OSA.
In this study, the p-value was used as a differentiator between normal and OSA patients. The p-values for each statistical feature are very small (P<0.0001), i.e. all statistical features distinguish significantly between normal and OSA patients [ 28 ].
After feature extraction, min-max normalization was conducted so that the statistical values lie on a predetermined scale or range [ 19 ], 0 to 1, as in the work of Jain et al. on score normalization in multimodal biometric systems [ 29 ] and other studies.
After normalization, the data were divided into training and testing sets [ 20 ]. From the training process, indicators representing the training results were obtained in the form of performance graphs; the mean (f1) and the Shannon entropy (f2) provided the highest and lowest results for OSA detection, respectively.
Three parameters (accuracy, sensitivity, and specificity) were calculated to assess how well the system detects OSA. With good accuracy, the program recognizes both the normal and the OSA patterns well; with small accuracy, it does not. Likewise, with good sensitivity the program recognizes the OSA pattern well, and with good specificity it recognizes the normal pattern well; small values of these indicate the opposite [ 27 ].
The best performance results are obtained with ANN and SVM, while K-NN gives slightly lower performance that is not significantly different from ANN. ANN performing better than SVM was also reported in [ 26 ], which compared ANN and SVM for the classification of MCCs in mammogram imaging. ANN performing better than K-NN was also reported by Moosavian et al., who compared the K-NN and ANN classifiers for fault diagnosis of a main engine journal-bearing [ 30 ]. The lowest performance overall was given by LDA, because the LDA algorithm separates the data with a boundary (a straight line) obtained from a linear equation [ 31 ].
In this study, single features still yielded suboptimal performance, so combinations of features were used to improve it. Accuracy and sensitivity increase as more features are used. Since accuracy reflects overall performance (sensitivity and specificity), using more features produced better overall performance.
Due to the higher performance of ANN, different types of ANN were further examined; these showed broadly similar performance. The best results were obtained with gradient-descent backpropagation, with accuracy, sensitivity, and specificity of 84.64%, 94.21%, and 64.03%, respectively. Gradient-descent backpropagation is a network training function which updates bias values and weights according to gradient descent [ 32 ]. The lowest results were obtained with adaptive backpropagation, with accuracy, sensitivity, and specificity of 81.01%, 95.80%, and 49.14%, respectively. Adaptive backpropagation is a network training function that updates bias values and weights according to gradient descent with an adaptive learning rate [ 33 ].
The method developed in this study does not yet provide very good performance; further research is needed, as automatic OSA detection is very important for OSA patients.
In this study, an OSA detection system was developed using the FFT of the electrocardiographic RR interval. Four machine-learning algorithms were used for classification: LDA, ANN, K-NN, and SVM. The statistical features of the FFT comprised the mean, Shannon entropy, standard deviation, median, and geometric mean. The best performance was obtained by ANN with gradient-descent backpropagation, with accuracy, sensitivity, and specificity of 84.64%, 94.21%, and 64.03%, respectively.
AN. Indrawati contributed to the extraction of electrocardiographic data and the algorithmic implementation. N. Nuryani introduced the idea of the article. AS. Nugroho provided the details of the algorithms. Analysis of the results was conducted by TP. Utomo. The preliminary article was presented by AN. Indrawati and was followed up by discussion with the others. All the authors read, modified, and approved the final version of the manuscript.
The data used in this article were obtained from the Apnea-ECG Database, freely available medical research data provided by PhysioNet.
This work was supported by the research funding from University of Sebelas Maret.
Conflict of Interest
- Franklin KA, Sahlin C, Stenlund H, Lindberg E. Sleep apnoea is a common occurrence in females. Eur Respir J. 2013; 41(3):610-5. DOI | PubMed
- Bradley TD, Floras JS. Obstructive sleep apnoea and its cardiovascular consequences. The Lancet. 2009; 373(9657):82-93. DOI | PubMed
- Gottlieb DJ, Yenokyan G, Newman AB, et al. Prospective study of obstructive sleep apnea and incident coronary heart disease and heart failure: the sleep heart health study. Circulation. 2010; 122(4):352-60. Publisher Full Text | DOI | PubMed
- Spicuzza L, Caruso D, Maria GDi. Obstructive sleep apnoea syndrome and its management. Ther Adv Chronic Dis. 2015; 6(5):273-85. Publisher Full Text | DOI | PubMed
- Ye L, Pien GW, Ratcliffe SJ, Bjo E, et al. The different clinical faces of obstructive sleep apnoea: a cluster analysis. Eur Respir J. 2014; 44(6):1600-7. Publisher Full Text | DOI | PubMed
- Haviv Y, Benoliel R, Bachar G, Michaeli E. On the edge between medicine and dentistry: Review of the dentist’s role in the diagnosis and treatment of snoring and sleep apnea. Quintessence Int. 2014; 45(4):345-53. DOI | PubMed
- Kapoor M, Greenough G. Home Sleep Tests for Obstructive Sleep Apnea (OSA). J Am Board Fam Med. 2015; 28(4):504-9. DOI | PubMed
- Kapa S, Javaheri S, Somers VK. Obstructive sleep apnea and arrhythmias. Sleep Med. 2007; 2(4):575-81. DOI
- Sharma H, Sharma KK. An algorithm for sleep apnea detection from single-lead ECG using Hermite basis functions. Comput Biol Med. 2016; 77:116-24. DOI | PubMed
- Khair M. Leadless wireless ECG measurement system for measuring of bio-potential electrical activity of the heart. Patent No. 8,838,218: United States; 2014.
- Hassan AR, Haque MA. An expert system for automated identification of obstructive sleep apnea from single-lead ECG using random under sampling boosting. Neurocomputing. 2017; 235:122-30. DOI
- Wang L, Lin Y, Wang J. Computer Methods and Programs in Biomedicine A RR interval based automated apnea detection approach using residual network. Comput Methods Programs Biomed. 2019; 176:93-104. DOI | PubMed
- Hassan AR, Haque MA. Computer-aided obstructive sleep apnea screening from single-lead electrocardiogram using statistical and spectral features and bootstrap aggregating. Biocybernetics and Biomedical Engineering. 2016; 36(1):256-66. DOI
- Sharma H, Sharma KK. Sleep apnea detection from ECG using variational mode decomposition. Biomed Phys Eng Express. 2020; 6(1):015026. DOI | PubMed
- Chu E, Alan G. Inside the Fast Fourier Transform Black Box: Serial and parallel FFT Algorithms. CRC Press: Boca Raton, FL; 2000.
- Penzel T, Moody GB, Mark RG, Goldberger AL, Peter JH. The apnea-ECG database. Computers in Cardiology. IEEE: Cambridge, MA, USA; 2000.
- Viswabhargav CS, Tripathy RK, Acharya UR. Automated detection of sleep apnea using sparse residual entropy features with various dictionaries extracted from heart rate and EDR signals. Comput Biol Med. 2019; 108(1):20-30. DOI | PubMed
- Wan S, Zhang X, Dou L. Shannon entropy of binary wavelet packet subbands and its application in bearing fault extraction. Entropy. 2018; 20(4):260. Publisher Full Text | DOI | PubMed
- Jain YK, Bhandare SK. Min Max Normalization Based Data Perturbation Method for Privacy Protection. International Journal of Computer and Communication Technology. 2013; 4(4):233-8. DOI
- Al Shalabi L, Shaaban Z, Kasasbeh B. Data mining: A preprocessing engine. Journal of Computer Science. 2006; 2(9):735-9. DOI
- Singh NA, Kumar MB, Bala MC. Face recognition system based on SURF and LDA technique. International Journal of Intelligent Systems and Applications. 2016; 8(2):13-19. DOI
- Syahfitra FD, Syahputra R, Putra KT. Implementation of Backpropagation Artificial Neural Network as a Forecasting System of Power Transformer Peak Load at Bumiayu Substation. Journal of Electrical Technology UMY. 2017; 1(3):118-25.
- Fausett L. Fundamentals of Neural Networks. Prentice-Hall: New Jersey, Englewood Cliffs; 1994.
- Zhang Z. Introduction to machine learning: k-nearest neighbors. Ann Transl Med. 2016; 4(11):218. Publisher Full Text | DOI | PubMed
- Deak K, Kocsis I, Vamosi A, Keviczki Z. Failure Diagnostics With SVM in Machine Maintenance Engineering. Annals of the Oradea University. 2014; 1:19-24.
- Ren J. ANN vs. SVM: Which one performs better in classification of MCCs in mammogram imaging. Knowledge-Based Systems. 2012; 26:144-53. DOI
- Zhu W, Zeng N, Wang N. Sensitivity, specificity, accuracy, associated confidence interval and ROC analysis with practical SAS implementations. NESUG 2010 Health Care and Life Sciences: Baltimore, Maryland; 2010.
- Zhu W. p < 0.05, < 0.01, < 0.001, < 0.0001, < 0.00001, < 0.000001, or < 0.0000001. J Sport Health Sci. 2016; 5(1):77-9. Publisher Full Text | DOI | PubMed
- Jain A, Nandakumar K, Ross A. Score normalization in multimodal biometric systems. Pattern Recognition. 2005; 38(12):2270-85. DOI
- Moosavian A, Ahmadi H, Tabatabaeefar A, Khazaee M. Comparison of two classifiers; K-nearest neighbor and artificial neural network, for fault diagnosis on a main engine journal-bearing. Shock and Vibration. 2013; 20(2):263-72.
- Bhardwaj A, Gupta A, Jain P, Rani A, Yadav J. IEEE: Noida, India; 2015. DOI
- Rehman MZ, Nawi NM. Springer: Berlin, Heidelberg; 2011. DOI
- Man Z, Wu HR, Liu S, Yu X. A new adaptive backpropagation algorithm based on Lyapunov stability theory for neural networks. IEEE Transactions on Neural Networks. 2006; 17(6):1580-91. DOI