Document Type : Review Article

Authors

1 B Tech, School of Electrical Engineering, Vellore Institute of Technology, Tamil Nadu, India

2 PhD, School of Electrical Engineering, Vellore Institute of Technology, Tamil Nadu, India

10.31661/jbpe.v0i0.2202-1461

Abstract

The healthcare system in India suffers from a lack of diagnostic support systems and physicians. Physicians struggle to treat large numbers of patients, and hospitals, especially in rural areas, lack radiologists; as a result, most cases are handled by a single physician, leading to many misdiagnoses. Computer-aided diagnostic systems are being developed to address this problem. The current study reviews different methods to detect pneumonia using neural networks and compares their approaches and results. For the fairest comparison, only papers using the same dataset, Chest X-ray14, are studied.

Keywords

Introduction

Pneumonia is an infection of the lungs, occurring in the left lung, the right lung, or both at the same time, and affecting the alveoli, the small air sacs in the lungs. The main symptoms of pneumonia are dry cough, chest pain, fever, and difficulty breathing [ 1 ].

A study on community-acquired pneumonia shows that patients suffering from asthma or diabetes, those with a history of heart failure or smoking, and those with a weak immune system are at high risk of developing complications due to pneumonia [ 2 ]. According to Joao Gonçalves-Pereira et al. [ 3 ], pneumonia accounts for a considerable share of hospital admissions and mortality, so early identification of patients is required. According to the Centers for Disease Control and Prevention [ 4 ], more than 50,000 deaths from pneumonia occur each year in the USA, with more than 1 million adult hospitalizations.

According to the World Health Organisation (WHO), the best way to detect pneumonia [ 5 ] is currently computed tomography (CT). Imaging plays a vital role in the detection and management of pneumonia: T. Franquet [ 6 ] proposed that imaging examination should always begin with conventional radiography, with CT required only when the radiographic results are inconclusive. A chest X-ray is most commonly recommended for patients with an uncertain cause of pneumonia. Chest radiography, or chest film (CXR), uses ionizing radiation in the form of X-rays, as do all other methods of radiography, to generate images of the chest. From the chest X-ray, pneumonia can be classified into 4 categories: lobar pneumonia, bronchopneumonia, lobular pneumonia, and interstitial pneumonia. These 4 classes can vary considerably between patients and with different types of pneumonia, so the classification of pneumonia is considered a difficult task [ 7 ]. In addition, X-ray findings are not necessarily present in the early stages of the disease, resulting in late diagnosis, and chest X-rays are difficult to interpret. Chest X-rays are currently the most suitable method for diagnosing pneumonia; however, detection of pneumonia from chest X-rays is challenging due to the shortage of radiologists [ 8 ]. After a chest X-ray, blood tests can be used to confirm the diagnosis.

Better tools are needed to interpret chest X-ray data, and neural networks can be used to improve diagnostic accuracy. Deep learning and artificial intelligence are widely used in medicine. Geert Litjens et al. [ 9 ] reviewed over 300 contributions of convolutional networks and their applications in anomaly detection in the neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, and musculoskeletal fields. Yang W et al. [ 10 ] used convolutional neural networks for the suppression of bony structures in chest X-rays. Despite its progress, deep learning is not yet advanced enough to replace physicians in medical diagnosis; instead, it should be a tool that aids doctors in their diagnosis. Neural networks should be used for time-consuming work, such as examining chest X-rays for signs of pneumonia.

Many contributions have been proposed for the task of detecting pneumonia from chest X-rays. Different neural network architectures have been implemented for this classification problem, and many have been very successful in diagnosing pneumonia from chest X-rays [ 11 - 35 ].

In this paper, neural-network-based studies that use the Chest X-ray14 dataset to detect pneumonia are compared [ 11 ]. The current study aims to understand the maturity of the technology and to compare the different works in the field and how they overcome difficulties.

Literature Survey

Wang et al. [ 11 ] provided a database named Chest X-ray14, comprising 112,120 frontal-view X-ray images from 32,717 unique patients, labelled with 8 labels (atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, and pneumothorax). The dataset was initially proposed with 8 disease labels and later extended to 14 [ 11 ]. Its limitation in the context of pneumonia is the small number of images labelled with pneumonia (about 1,500), leading to a highly unbalanced classification problem. Wang et al. [ 11 ] proposed a 2D ConvNet for classifying the abnormalities in the chest X-ray images, using simple binary relevance to predict the labels, and evaluated AlexNet, GoogLeNet, ResNet, and VGG16 architectures; ResNet achieved the highest accuracy.
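As a rough illustration of the binary-relevance formulation described above (a minimal sketch under my own assumptions, not Wang et al.'s implementation), a pretrained backbone can be given one independent sigmoid output per label and trained with binary cross-entropy:

```python
# Minimal sketch (not the authors' code): binary-relevance multi-label classification
# with an ImageNet-pretrained ResNet-50. The training loader is assumed to yield
# (image_tensor, label_vector) pairs with 8 binary labels.
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 8  # atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)  # one independent logit per label

criterion = nn.BCEWithLogitsLoss()  # sigmoid + binary cross-entropy applied per label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimisation step; labels is a float tensor of shape (batch, NUM_LABELS)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```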

The Chest X-ray14 dataset was used by Rajpurkar et al. [ 12 ], who developed CheXNet, a 121-layer convolutional neural network. The paper compared the performance of CheXNet to that of radiologists using the F1 metric. The network can detect 14 diseases, including pneumonia; for a given X-ray image, the model outputs the probability of a pathology and also highlights the localized areas in the image. A total of 98,637 images were used for training, 6,351 for validation, and 430 for testing, and the model achieved an F1 score of 0.435, higher than the radiologist performance of 0.387.
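The F1 comparison mentioned above can be illustrated with a toy example (placeholder arrays only, not the CheXNet evaluation data): the F1 score is computed for both the model's binarised outputs and a single reader's calls against the reference labels.

```python
# Toy F1 comparison of model predictions vs. a radiologist's labels on a pneumonia test set.
from sklearn.metrics import f1_score
import numpy as np

ground_truth = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # reference pneumonia labels
model_pred   = np.array([1, 0, 1, 0, 0, 0, 1, 0])   # binarised model outputs (threshold 0.5)
radiologist  = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # a single reader's calls

print("model F1:",       f1_score(ground_truth, model_pred))
print("radiologist F1:", f1_score(ground_truth, radiologist))
```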

Yao L et al. [ 13 ] also used this dataset to develop a model trained from scratch to ensure that application-specific features were captured. Long Short-Term Memory models (LSTMs) were implemented to leverage interdependencies among the target labels, with a 2D ConvNet employed as an image encoder to process the chest X-rays. As there is no standard split for the dataset, the same split was followed to allow a better comparison (70% for training, 10% for validation, 20% for testing). Their model demonstrated effectiveness and feasibility over other pre-trained models, with significantly better results than Wang et al. [ 11 ] and an accuracy of 76%.
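A rough sketch of the idea of pairing an image encoder with a recurrent decoder over labels is given below; this is my own simplified construction under stated assumptions, not Yao et al.'s architecture.

```python
# Simplified sketch: a 2D ConvNet encodes the X-ray into a feature vector, and an LSTM
# emits the 14 label logits sequentially so later predictions can depend on earlier ones.
import torch
import torch.nn as nn
from torchvision import models

class ConvLSTMTagger(nn.Module):
    def __init__(self, num_labels=14, hidden=256):
        super().__init__()
        backbone = models.densenet121(weights=None)  # image encoder trained from scratch
        self.encoder = nn.Sequential(backbone.features, nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.proj = nn.Linear(1024, hidden)   # DenseNet-121 feature vector -> initial LSTM state
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
        self.num_labels = num_labels

    def forward(self, x):
        feat = self.encoder(x)                                   # (B, 1024)
        h0 = torch.tanh(self.proj(feat)).unsqueeze(0)            # (1, B, hidden)
        c0 = torch.zeros_like(h0)
        # one dummy input step per label; the recurrent state carries image context
        steps = torch.zeros(x.size(0), self.num_labels, 1, device=x.device)
        out, _ = self.lstm(steps, (h0, c0))                      # (B, num_labels, hidden)
        return self.head(out).squeeze(-1)                        # (B, num_labels) logits
```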

Benjamin Antin et al. [ 14 ] used a supervised learning approach with the same Chest X-ray14 dataset, focusing on binary classification to output a pneumonia or non-pneumonia result. K-means clustering and logistic regression were used, with the Adam [ 15 ] algorithm to train the network. Due to resource constraints, they explored only 5,606 randomly selected images. They concluded that logistic regression, with an AUC of 0.60, does not predict accurately given the complexity of the dataset, and that a DenseNet could perform the task better.
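A minimal logistic-regression baseline in the spirit of this binary formulation might look as follows; the random arrays are placeholders for the flattened X-ray images and labels, not the authors' data.

```python
# Hypothetical baseline: logistic regression on flattened, downsampled X-ray images,
# scored with ROC AUC on a held-out split.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.random((5606, 64 * 64))          # flattened 64x64 images (placeholder data)
y = rng.integers(0, 2, size=5606)        # 1 = pneumonia, 0 = non-pneumonia (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```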

Rahib Abiyev et al. [ 16 ] trained both traditional and deep networks on the Chest X-ray14 dataset and compared their performance: the back-propagation neural network (BPNN) and competitive neural network (CpNN) were trained with 620 images and tested with 380 images. The BPNN had 12 neurons with a sigmoid activation function, and its lowest mean square error was 0.0025 after 5,000 iterations. The CpNN had 1,024 input neurons and 12 output neurons; its best result, a mean square error of 0.0036, was achieved with a learning rate of 0.0036 and 1,000 epochs. The Convolutional Neural Network (CNN) was trained with 70% of the images and tested with 30%, implemented with 3 hidden layers and ReLU activation functions, and achieved a mean square error of 0.0013 after 40,000 iterations, the lowest of the three. The paper concluded that shallow networks like the BPNN and CpNN cannot achieve a recognition rate as high as the CNN.
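A tiny back-propagation network in the spirit of the shallow baseline above can be written in a few lines; the 32×32 input size and the random data are my own assumptions, not the authors' configuration.

```python
# Minimal BPNN-style sketch: a single hidden layer of 12 sigmoid neurons trained
# with mean squared error on placeholder data.
import torch
import torch.nn as nn

bpnn = nn.Sequential(
    nn.Linear(32 * 32, 12), nn.Sigmoid(),   # 12 hidden neurons with sigmoid activation
    nn.Linear(12, 1), nn.Sigmoid(),         # single probability-like output
)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(bpnn.parameters(), lr=0.1)

x = torch.rand(620, 32 * 32)                # placeholder flattened images
y = torch.randint(0, 2, (620, 1)).float()   # placeholder labels

for _ in range(1000):                        # iterations; the paper reports MSE after 5,000
    optimizer.zero_grad()
    loss = criterion(bpnn(x), y)
    loss.backward()
    optimizer.step()
print("final MSE:", loss.item())
```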

Dimpy Varshni et al. [ 17 ] detected pneumonia using DenseNet-169 for feature extraction and a Support Vector Machine (SVM) classifier. DenseNet-169 was selected after comparing results with Xception, VGG-19, ResNet-50, and DenseNet-121; for the classifier, the best results were achieved with an SVM, compared with Random Forest, K-nearest Neighbours, and Naive Bayes. Each feature-extraction model was tested with each classifier, and the AUCs were compared to find the best combination. For optimal binary classification, Dimpy Varshni et al. [ 17 ] added 1,431 normal images to balance the dataset and compared their work to the similar work of Benjamin Antin et al. [ 14 ]. Their DenseNet-169 model achieved a higher AUC of 0.8002, compared with the AUC of 0.609 of Benjamin Antin et al. [ 14 ].
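The deep-feature plus classical-classifier pipeline can be sketched as follows; this is illustrative only, and the preprocessing, hyperparameters, and training data are assumptions rather than the authors' settings.

```python
# Sketch: a pretrained DenseNet-169 produces one feature vector per image, and an SVM
# is fitted on those features for the pneumonia / normal decision.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

densenet = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(densenet.features, nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten())
extractor.eval()

@torch.no_grad()
def extract_features(batch):              # batch: (N, 3, 224, 224) normalized image tensor
    return extractor(batch).numpy()       # (N, 1664) DenseNet-169 feature vectors

# Hypothetical placeholders for preprocessed data:
# svm = SVC(kernel="rbf", probability=True).fit(extract_features(train_images), train_labels)
# predictions = svm.predict(extract_features(test_images))
```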

Tatiana Malygina et al. [ 18 ] extended the work of Rajpurkar et al. [ 12 ] by proposing CycleGAN (a generative adversarial network) to address the imbalance in the dataset, using 98,637 images for training, 6,351 for validation, and 430 for testing. The classifier in this model was DenseNet-121, as in Rajpurkar et al. [ 12 ], with three training datasets used for binary classification:

  1. CXR14 without augmentation,
  2. the CXR14 dataset that was used to pretrain the augmented CycleGAN,
  3. another dataset to pretrain the augmented CycleGAN.

The results show that their balancing method increased the Receiver Operating Characteristic (ROC) AUC from 0.9745 to 0.9929 and the Precision-Recall (PR) AUC from 0.9580 to 0.9865.
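The two metrics quoted above can be computed directly with scikit-learn; the toy scores below are placeholders, not the authors' data.

```python
# ROC AUC and PR AUC (average precision) on toy predicted pneumonia probabilities.
from sklearn.metrics import roc_auc_score, average_precision_score
import numpy as np

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.2, 0.9, 0.3, 0.6])   # predicted probabilities

print("ROC AUC:", roc_auc_score(y_true, y_score))
print("PR  AUC:", average_precision_score(y_true, y_score))     # average precision ≈ PR AUC
```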

A region-based convolutional neural network was proposed by Taufik Rahmat et al. [ 19 ]. The network classified images as pathological or normal with high confidence and was faster than other Region Proposal Networks (RPN). They used 80% of the data for training and 20% for testing, and compared their model to a medical student and a general practitioner on parameters such as accuracy (62%), sensitivity (72.09%), specificity (54.39%), and precision (54.39%). The model had higher accuracy, sensitivity, and precision than both the medical student and the general practitioner. It also took an average of 4.8 s per image, much faster than the medical student (27 s) and the general practitioner (18 s).

Chouhan et al. [ 20 ] proposed a transfer-learning approach to pneumonia detection using 5 models (AlexNet, InceptionV3, ResNet18, DenseNet121, and GoogLeNet). AlexNet was trained for 200 iterations and achieved an AUC of 0.9783, while the ResNet18 model achieved the best individual results, with an ROC AUC of 0.9936 and a testing accuracy of 94.23%. An ensemble of the 5 models produced remarkable results: an ROC AUC of 0.9934, a testing accuracy of 96.39%, and a high sensitivity of 99.62%. These results are comparable to those of Cohen et al. [ 21 ], who trained a DenseNet-121 model with the Adam optimizer [ 15 ] and similar learning rates on the same dataset, achieving an AUC of 0.9840. Cohen et al. [ 21 ] also used a 70-10-20 train-validation-test split and notably made their code and network freely available.
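A hedged sketch of the ensembling idea behind such transfer-learning work is shown below: several ImageNet-pretrained backbones are given a single pneumonia logit and their sigmoid probabilities are averaged at test time. This is an illustrative construction, not the authors' code.

```python
# Two fine-tunable members of a hypothetical ensemble; further backbones would be added analogously.
import torch
import torch.nn as nn
from torchvision import models

resnet18 = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
resnet18.fc = nn.Linear(resnet18.fc.in_features, 1)            # single pneumonia logit

densenet121 = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
densenet121.classifier = nn.Linear(densenet121.classifier.in_features, 1)

ensemble = [resnet18, densenet121]

@torch.no_grad()
def ensemble_probability(image_batch):
    """Average the sigmoid pneumonia probabilities of all (fine-tuned) members."""
    return torch.stack([torch.sigmoid(m(image_batch)) for m in ensemble]).mean(dim=0)
```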

Vikash Chouhan et al. [ 20 ], with their AUC of 0.9936, also compared their results to those of Daniel Kermany et al. [ 22 ], who achieved an accuracy of 92.8% and a sensitivity of 93.2% in classifying chest X-rays as pneumonia vs. normal, with an ROC AUC of 0.9680. This result was obtained by testing on 234 normal and 390 pneumonia images using an Inception V3 architecture. Li Yao et al. [ 23 ] proposed a novel architecture learned under weak supervision, using a ResNet-v2-50 model with the Adam optimizer [ 15 ] and 75% and 25% of the images for training and validation, respectively. With a learning rate of 0.001, the model achieved an accuracy of 80%.

Acharya et al. [ 24 ] proposed a deep Siamese network (DSN) to classify images into viral pneumonia, bacterial pneumonia, and no pneumonia, achieving an ROC AUC of 0.9500 with 5,328 images for training and 300 for testing. Yu-Xing Tang et al. [ 25 ] developed and evaluated several deep CNN architectures, such as the visual geometry group (VGG) networks, AlexNet, GoogLeNet, ResNet, and DenseNet, to differentiate between normal and abnormal chest X-rays. When tested on 8,500 images from the Chest X-ray14 dataset, the pretrained DenseNet121 attained the highest AUC of 0.9871. All 7 models (VGG16, VGG19, AlexNet, ResNet18, ResNet50, Inception V3, and DenseNet121) were evaluated both with pretrained weights and trained from scratch, and the pretrained networks consistently outperformed the models trained from scratch.

Shuaijing Xu et al. [ 26 ] proposed a hierarchical CNN, CXNet-m1, to overcome the limitations of the dataset. The network was trained on 84,090 (75%) images, validated on 11,212 (10%) images, and tested on 16,818 (15%) images. CXNet-m1 was compared to networks such as VGGNet-16 (AUC=0.5102), VGGNet-16-DCNN (AUC=0.6090), ResNet-50 (AUC=0.5390), ResNet-50-DCNN (AUC=0.6420), Inception-ResNet (AUC=0.5000), and Inception-ResNet-DCNN (AUC=0.6110), and outperformed all of them with an AUC of 0.6580.

Ivo Baltruschat et al. [ 27 ] developed ResNet-38, ResNet-50, and ResNet-101 models and compared their results, also using a multi-layer perceptron (MLP) classifier to improve the classification results. They used 70% of the Chest X-ray14 images for training, 10% for validation, and 20% for testing. The best result, an AUC of 0.8220, was obtained with a ResNet-50 trained from scratch.

Toğaçar et al. [ 28 ] proposed a deep-feature approach using models such as AlexNet, VGG-16, and VGG-19, with the number of features reduced from 1,000 to 100 using the minimum redundancy maximum relevance (mRMR) algorithm. The selected features were then given to classifiers such as K-nearest neighbours, linear discriminant analysis, support vector machine, and linear regression, producing an accuracy of 99.41%.
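The feature-selection plus classical-classifier stage can be sketched as follows; mRMR itself is not available in scikit-learn, so a mutual-information SelectKBest is used here as a simplified stand-in, and the deep features and labels are placeholders.

```python
# Select the 100 most informative of 1000 deep features, then fit a classical classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
deep_features = rng.random((500, 1000))      # e.g. 1000 deep features per image (placeholder)
labels = rng.integers(0, 2, size=500)        # pneumonia / normal (placeholder)

selector = SelectKBest(mutual_info_classif, k=100)      # stand-in for mRMR selection
selected = selector.fit_transform(deep_features, labels)

svm = SVC(kernel="linear")
print("CV accuracy:", cross_val_score(svm, selected, labels, cv=5).mean())
```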

Ken Wong et al. [ 29 ] classified images into normal and diseased, aiming to quickly reassure patients with a normal chest X-ray while taking care not to send sick patients home. Their network used an Inception-ResNet-v2 pre-trained on ImageNet together with a dilated ResNet block. They set the recall at 50%, so that half of the patients could be confidently diagnosed as disease-free. The network was trained on 3,217 images from the Chest X-ray14 dataset for 50 epochs and achieved a maximum ROC AUC of 0.9300. To provide a method that succeeds with limited location annotations, Zhe Li et al. [ 30 ] proposed a model that predicts disease regions rather than bounding boxes, with the goal of better visual interpretation. Their data contained 880 annotated and 111,240 unannotated images. Using a pre-trained ResNet and a fully convolutional classification CNN, they observed an ROC AUC of 0.6700.
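Fixing an operating point such as a target recall can be illustrated as follows (my own construction, not Wong et al.'s protocol): the decision threshold whose recall is closest to the target is read off the precision-recall curve.

```python
# Pick the threshold whose recall is closest to a 50% target, on toy scores.
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.6, 0.4, 0.1, 0.3, 0.8, 0.5, 0.7, 0.2])

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
target = 0.5
idx = np.argmin(np.abs(recall[:-1] - target))     # last curve point has no threshold
print("threshold:", thresholds[idx], "recall:", recall[idx], "precision:", precision[idx])
```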

Bo Zhou et al. [ 31 ] proposed a weakly supervised adaptive DenseNet with a customized pooling structure, using the Chest X-ray14 dataset to classify and identify abnormalities. The adaptive DenseNet is followed by a weakly supervised pooling structure that generates feature maps and a probability for each abnormality. Training, validation, and testing used 70%, 10%, and 20% of the images, respectively, with a learning rate of 0.002. They compared their ROC AUC with those of Wang et al. [ 11 ], Yao L et al. [ 13 ], Zhe Li et al. [ 30 ], and Rajpurkar et al. [ 12 ], obtaining 0.7852 for pneumonia.

Qingji Guan et al. [ 32 ] proposed an attention-guided CNN (AG-CNN) for disease classification. After randomly splitting the images 70-10-20 for training, validation, and testing, they obtained an AUC of 0.776 with ResNet-50 and 0.774 with DenseNet-121. Abdullah Irfan et al. [ 33 ] trained ResNet-50, Inception V3, and DenseNet121 as 3 transfer-learning models and also trained the same networks from scratch, finding that the pre-trained models significantly outperformed the latter. For the binary classification of pneumonia, they used a five-layer model with 90% of the images for training and 10% for validation. The highest AUC, 0.7100, was obtained with DenseNet121 after 20 epochs.

Dejun Zhang et al. [ 34 ] developed a VGG-based model. Its distinguishing features are a minimal number of layers and the use of dynamic histogram equalization for pre-processing, resulting in an AUC of 0.99107. The model uses sigmoid and ReLU activation functions to detect pneumonia from chest X-ray images and consists of 6 layers, with 3×3 convolution layers using ReLU activation and dropout layers that randomly zero weights to improve performance. Dynamic histogram equalization is applied to enhance image quality before the images are fed to the model. Enes Ayan et al. [ 35 ] used Xception and VGG16 models for detecting pneumonia from chest X-rays and achieved an accuracy of 87% with the fine-tuned VGG16 model. They showed that the Xception network was less accurate than the VGG16 overall, but detected the presence of pneumonia more frequently; therefore, a careful combination of both models is most suitable for the best results. A brief pre-processing illustration is given below, and the comparative analysis is shown in Table 1.
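As a pre-processing illustration, the sketch below enhances X-ray contrast before classification; dynamic histogram equalization itself is not available in OpenCV, so global equalization and CLAHE are used here as stand-ins for the idea.

```python
# Contrast enhancement of a chest X-ray prior to classification (illustrative stand-ins).
import cv2

def enhance(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)           # chest X-ray as 8-bit grayscale
    global_eq = cv2.equalizeHist(img)                       # plain histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    adaptive_eq = clahe.apply(img)                          # contrast-limited adaptive variant
    return global_eq, adaptive_eq

# enhanced, enhanced_adaptive = enhance("chest_xray.png")   # hypothetical file path
```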

References METHODOLOGY RESULTS
[ 11 ] AlexNet, GoogLeNet, ResNet, VGG16 AUC=0.6300
[ 12 ] 121- Layer CNN (CheXNet) F1=0.435
[ 13 ] LSTM AUC=0.76
[ 14 ] K-means, logistic regression AUC=0.6000
[ 16 ] BPNN, CpNN, CNN MSE=0.0013
[ 17 ] DenseNet169 and SVM AUC=0.8002
[ 18 ] DenseNet-121 with CycleGAN AUC=0.9745
[ 19 ] R-CNN AUC=0.62
[ 20 ] AlexNet, InceptionV3, ResNet18, DenseNet121 and GoogLeNet AUC=0.9936
[ 21 ] DenseNet-121 AUC=0.9840
[ 22 ] Inception V3 AUC=0.9680
[ 23 ] Resnet-v2-50 AUC=0.80
[ 24 ] DSN AUC=0.9500
[ 25 ] VGG16, VGG19, AlexNet, ResNet18, ResNet50, Inception V3, DenseNet121 AUC=0.9871
[ 26 ] hierarchical CNN (CXNet-m1) AUC=0.6580
[ 27 ] ResNet-50 AUC=0.8220
[ 28 ] Deep feature CNN AUC=0.994
[ 29 ] Inception-ResNet-v2 AUC=0.9300
[ 30 ] ResNet and CNN AUC=0.6700
[ 31 ] DenseNet and WSL AUC=0.7852
[ 32 ] AG-CNN AUC=0.7760
[ 33 ] ResNet-50, Inception V3, DenseNet121 AUC=0.7100
[ 34 ] VGG AUC=0.99107
[ 35 ] Xception, VGG16 F1=0.90
VGG: Visual Geometry Group, AUC: Area Under the Curve, CNN: Convolutional Neural Networks, LSTM: Long Short-Term Memory, BPNN: Back Propagation Neural Network, CpNN: Competitive Neural Network, MSE: Mean Square Error, SVM: Support Vector Machine, GAN: Generative Adversarial Networks, RCNN: Region-Based Convolutional Neural Networks, DSN: Deep Siamese Network, WSL: Weakly Supervised Learning, AG-CNN: Attention Guided Convolutional Neural Network
Table 1. Comparative analysis of different methodologies

Discussion

Networks such as the hierarchical CNN, AG-CNN, and R-CNN do not provide desirable results [ 19 , 26 , 32 ] in comparison with networks such as VGG, Inception, ResNet, and hybrid models [ 20 , 22 , 27 , 29 , 34 ].

LSTM-based models [ 13 ] show promise, with higher accuracy than other traditional models, and models such as the CpNN [ 16 ] and R-CNN [ 19 ] have much faster computation times than other models. However, shallow networks like the BPNN and CpNN did not achieve high recognition rates [ 16 ].

Vikash Chouhan et al. [ 20 ] reported some of the most impressive results with their transfer-learning models, while Dejun Zhang et al. [ 34 ] showed that pre-processing and image enhancement techniques, such as dynamic histogram equalization, improve results.

Further study is needed into how Xception networks can be improved to provide better accuracy in this application with fewer false positives [ 35 ]. With improvements in technology and computer hardware, lower computational cost and reduced computation time are expected to result in better model predictions.

Conclusion

To overcome pneumonia, the infection must be detected in its early stages. Detection is most easily accomplished with chest X-rays because of their cost-effectiveness compared with other diagnostic methods. Owing to the critical shortage of diagnosticians, many chest X-rays await interpretation, which is time-consuming; therefore, improved diagnostic tools are required. We have critically reviewed relevant contributions that propose neural networks to detect pneumonia and compared their results and methodology. This paper is useful for those attempting to solve this problem, as it presents various approaches and their relative success.

Authors’ Contribution

DJ. Alapat conceived the idea. Introduction of the paper was written by MV. Menon. Sh. Ashok gathered all the relevant literature. The literature survey was written by DJ. Alapat. The reviewing and editing were done by MV. Menon. Conclusion and discussion were written by DJ. Alapat and MV. Menon. The work was proofread and supervised by Sh. Ashok. All the authors read, modified, and approved the final version of the manuscript.

Conflict of Interest

None

References

  1. Torres A, Serra-Batlles J, Ferrer A, Jiménez P, Celis R, Cobo E, Rodriguez-Roisin R. Severe community-acquired pneumonia. Epidemiology and prognostic factors. Am Rev Respir Dis. 1991; 144(2):312-8. DOI | PubMed
  2. Marrie TJ. Community-acquired pneumonia. Clin Infect Dis. 1994; 18(4):501-13. DOI | PubMed
  3. Gonçalves-Pereira J, Conceição C, Póvoa P. Community-acquired pneumonia: identification and evaluation of nonresponders. Ther Adv Infect Dis. 2013; 1(1):5-17. Publisher Full Text | DOI | PubMed
  4. CDC. Pneumonia [Internet]. 2020 Mar [cited 2022 Jun 8]. Available from: https://www.cdc.gov/pneumonia/.
  5. Mahomed N, Fancourt N, De Campo J, De Campo M, Akano A, Cherian T, et al. Preliminary report from the World Health Organisation Chest Radiography in Epidemiological Studies project. Pediatr Radiol. 2017; 47(11):1399-404. Publisher Full Text | DOI | PubMed
  6. Franquet T. Imaging of pneumonia: trends and algorithms. Eur Respir J. 2001; 18(1):196-208. DOI | PubMed
  7. File TM. Community-acquired pneumonia. Lancet. 2003; 362(9400):1991-2001. Publisher Full Text | DOI | PubMed
  8. Oates A, Halliday K, Offiah AC, Landes C, Stoodley N, Jeanes A, et al. Shortage of paediatric radiologists acting as an expert witness: position statement from the British Society of Paediatric Radiology (BSPR) National Working Group on Imaging in Suspected Physical Abuse (SPA). Clin Radiol. 2019; 74(7):496-502. DOI | PubMed
  9. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017; 42:60-88. DOI | PubMed
  10. Yang W, Chen Y, Liu Y, Zhong L, Qin G, Lu Z, Feng Q, Chen W. Cascade of multi-scale convolutional neural networks for bone suppression of chest radiographs in gradient domain. Med Image Anal. 2017; 35:421-33. DOI | PubMed
  11. Nandi R, Mulimani M. Detection of COVID-19 from X-rays using hybrid deep learning models. Res Biomed Eng. 2021; 37(4):687-95. DOI
  12. Rajpurkar P, Irvin J, Ball RL, Zhu K, Yang B, Mehta H, et al. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018; 15(11):e1002686. Publisher Full Text | DOI | PubMed
  13. Jadhav A, Wong KCL, Wu JT, Moradi M, Syeda-Mahmood T. Combining Deep Learning and Knowledge-driven Reasoning for Chest X-Ray Findings Detection. AMIA Annu Symp Proc. 2021; 2020:593-601. Publisher Full Text | PubMed
  14. Elshennawy NM, Ibrahim DM. Deep-Pneumonia Framework Using Deep Learning Models Based on Chest X-Ray Images. Diagnostics (Basel). 2020; 10(9):649. Publisher Full Text | DOI | PubMed
  15. Kingma DP, Ba J. Adam: A method for stochastic optimization. ArXiv Preprint. 2014; 12(8):756. DOI
  16. Abiyev RH, Ma’aitah MKS. Deep Convolutional Neural Networks for Chest Diseases Detection. J Healthc Eng. 2018; 2018:4168538. Publisher Full Text | DOI | PubMed
  17. Varshni D, Thakral K, Agarwal L, Nijhawan R, Mittal A. IEEE: Coimbatore, India; 2019.
  18. Malygina T, Ericheva E, Drokin I. GANs’ N Lungs: improving pneumonia prediction. ArXiv. 2019.
  19. Rahmat T, Ismail A, Aliman S. Chest X-ray image classification using faster R-CNN. Malaysian Journal of Computing (MJoC). 2019; 4(1):225-36.
  20. Chouhan V, Singh SK, Khamparia A, Gupta D, Tiwari P, Moreira C, Damaševičius R, De Albuquerque VH. A novel transfer learning based approach for pneumonia detection in chest X-ray images. Applied Sciences. 2020; 10(2):559. DOI
  21. Cohen JP, Bertin P, Frappier V. Chester: A web delivered locally computed chest x-ray disease prediction system. Arxiv. 2019.
  22. Kermany DS, Goldbaum M, Cai W, Valentim CCS, Liang H, Baxter SL, et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell. 2018; 172(5):1122-31. DOI | PubMed
  23. Yao L, Prosky J, Poblenz E, Covington B, Lyman K. Weakly supervised medical diagnosis and localization from multiple resolutions. ArXiv. 2018.
  24. Acharya AK, Satapathy R. A deep learning based approach towards the automatic diagnosis of pneumonia from chest radio-graphs. Biomedical and Pharmacology Journal. 2020; 13(1):449-55. DOI
  25. Tang YX, Tang YB, Peng Y, Yan K, Bagheri M, Redd BA, et al. Automated abnormality classification of chest radiographs using deep convolutional neural networks. NPJ Digital Medicine. 2020; 3(1):1-8. DOI
  26. Xu S, Wu H, Bie R. CXNet-m1: anomaly detection on chest X-rays with image-based deep learning. IEEE Access. 2018; 7:4466-77. DOI
  27. Baltruschat IM, Nickisch H, Grass M, Knopp T, Saalbach A. Comparison of deep learning approaches for multi-label chest X-ray classification. Sci Rep. 2019; 9(1):1.
  28. Togaçar M, Ergen B, Cömert Z, Özyurt F. A deep feature learning model for pneumonia detection applying a combination of mRMR feature selection and machine learning models. Irbm. 2020; 41(4):212-22. DOI
  29. Wong KC, Moradi M, Wu J, Syeda-Mahmood T. SPIE: San Diego, California, United States; 2019. DOI
  30. Li Z, Wang C, Han M, Xue Y, Wei W, Li LJ, Fei-Fei L. IEEE: USA; 2018.
  31. Zhou B, Li Y, Wang J. A weakly supervised adaptive densenet for classifying thoracic diseases and identifying abnormalities. Arxiv. 2018.
  32. Guan Q, Huang Y, Zhong Z, Zheng Z, Zheng L, Yang Y. Diagnose like a radiologist: Attention guided convolutional neural network for thorax disease classification. Arxiv. 2018.
  33. Irfan A, Adivishnu AL, Sze-To A, Dehkharghanian T, Rahnamayan S, Tizhoosh HR. Classifying Pneumonia among Chest X-Rays Using Transfer Learning. Annu Int Conf IEEE Eng Med Biol Soc. 2020; 2020:2186-9. DOI | PubMed
  34. Jain R, Nagrath P, Kataria G, Kaushik VS, Hemanth DJ. Pneumonia detection in chest X-ray images using convolutional neural networks and transfer learning. Measurement. 2020; 165:108046. DOI
  35. Ayan E, Ünver HM. IEEE: Istanbul, Turkey; 2019.