Document Type : Original Research
Authors
- Mahmoud Bagheri 1, 2
- Alireza Ghanadan 3
- Mobin Saboohi 1
- Maryam Daneshpazhooh 3
- Fatemeh Atyabi 4
- Marjaneh Hejazi 1, 2
1 Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
2 Research Center for Molecular and Cellular Imaging, Bio-Optical Imaging Group, Tehran University of Medical Sciences, Tehran, Iran
3 Department of Dermatology, Razi Hospital, Tehran University of Medical Sciences, Tehran, Iran
4 Department of Pharmaceutical Nanotechnology, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran
Abstract
Background: Hematoxylin-and-Eosin (H&E) staining is widely accepted as the most reliable method for diagnosing pathological tissues. However, the conventional H&E staining of tissue sections is time-consuming and labor-intensive. In contrast, Confocal Microscopy (CM) enables rapid, high-resolution imaging with minimal tissue preparation through fluorescence detection. However, CM images are harder to interpret than H&E-stained images.
Objective: This study aimed to modify an unsupervised deep-learning model to generate H&E-like images from CM images.
Material and Methods: This analytical study evaluated the efficacy of CM and virtual H&E staining for skin tumor sections related to Basal Cell Carcinoma (BCC). Acridine orange staining, combined with virtual staining techniques, was used to simulate H&E dyes; accordingly, an unsupervised CycleGAN framework was trained to virtually stain CM images. The training process incorporated adversarial and cycle consistency losses to ensure a precise mapping between CM and H&E images without compromising image content. The quality of the generated images was assessed by comparing them to the original images.
Results: The CM images, specifically focusing on subtyping BCC and evaluating skin tissue characteristics, were qualitatively assessed. The H&E-like images generated from CM using the CycleGAN model exhibited both qualitative and quantitative similarities to real H&E images.
Conclusion: The integration of CM with deep learning-based virtual staining provides advantages for diagnostic applications by streamlining laboratory staining procedures.
Introduction
Skin cancer is the most prevalent form of cancer, and Basal Cell Carcinoma (BCC) is the most common type of skin cancer worldwide [ 1 ]. Microscopic examination of histologically processed and chemically stained tissue is used to diagnose BCC [ 2 ]. Histological images provide valuable information about tissue sections, including tissue structures, phenotypes, and pathology, as well as the microscopic details of tissues, allowing researchers and clinicians to analyze and interpret the complex architecture and cellular organization within different tissues [ 3 ]. Pathological identification of skin cancer involves an invasive biopsy, followed by several comprehensive tissue preparation processes and histological staining with H&E. However, histological tissue processing and staining typically take one to five days, delaying the final diagnosis for the patient [ 4 , 5 ].
In the past few decades, imaging technologies such as Confocal Microscopy (CM) [ 6 ], Optical Coherence Tomography (OCT) [ 7 ], and Multiphoton Microscopy (MPM) [ 8 ] have been employed for noninvasive imaging of skin cancers. CM provides resolution at the cellular level, similar to that of tissue histology, enabling improved correlation between image outputs and histological findings thanks to its ability to capture intricate cellular-level details [ 9 ]. CM is a noninvasive optical imaging technique that uses a low-intensity laser to create quasi-histological images [ 10 ]. Despite rapid progress in CM pathology, which provides cellular-level resolution, interpreting CM images remains challenging. It is crucial to note that CM does not display skin cell features in a manner equivalent to conventional microscopic tissue histology assessments. Several obstacles have to be addressed to interpret CM images accurately, necessitating extensive training for individuals new to the field [ 11 ].
In recent years, deep learning approaches have shown promise in digitally enhancing the interpretation of pathological images by transforming between different microscopy modes. However, several challenges arise when applying deep learning models to image transformation. Deep learning algorithms perform better when they are provided with a large amount of high-quality training data. In "supervised learning", a substantial number of images and corresponding annotations are required. However, the process of manually annotating these images is time-consuming and error-prone, especially for tasks involving pixel-level registration in image transformation. Additionally, due to hardware and experimental limitations, it can be difficult or even impossible to obtain a sufficient amount of high-quality ground truth data in paired datasets for certain deep learning models.
In response to these challenges, a set of "unsupervised learning" models have been developed to achieve stain-to-stain transformations between two different image domains using unpaired datasets. Without paired data, these models can translate images between different domains and perform as well as supervised techniques. These frameworks have been successfully applied to various microscopical image analysis tasks, such as virtual staining [ 12 , 13 ], classification, and segmentation [ 14 ]. In this study, a modified deep learning-based approach is presented for rapidly generating ex-vivo virtual histology from fluorescently stained confocal microscopic images of skin tissue samples. The framework is based on unsupervised learning techniques. During the training phase, CM images of excised skin tissue stained with acridine orange were used to train Convolutional Neural Networks (CNNs). These fluorescently stained CM images provided valuable spatial guidance for the neural networks to establish correlations between features in CM images and their corresponding histological representations.
Material and Methods
Tissue Preparation
In this analytical study, skin tissue samples were obtained from 20 patients whose primary BCC tumors were resected during standard-of-care surgery. Each resected specimen was divided into two parts. The first part was stained with acridine orange and imaged with CM within less than 15 minutes. The second part underwent conventional fixation, embedding, sectioning, and staining with H&E, taking approximately 24 hours, before brightfield Whole Slide Imaging (WSI) [ 15 ]. Consequently, a total of 20 large H&E-stained pathology microscopy images and an equal number of corresponding CM images were obtained for analysis.
In the CM imaging process, the tissue sections were stained with acridine orange to enhance the contrast between the nuclei and the dermal region. This fluorescent stain is known to improve the detection of tumors. To achieve the desired staining, tissue sections were subjected to a sequence of solutions during the staining procedure: the sections were immersed in 10 percent acetic acid and Dulbecco's Phosphate-buffered saline, then incubated with acridine orange solution (0.6 mM), and finally immersed in Dulbecco's Phosphate-buffered saline again, with each incubation step lasting 20 seconds [ 16 ]. After the staining process, the tissue samples were examined using a Nikon Eclipse Ti microscope (Nikon Instruments Inc., Tokyo, Japan). The microscope was equipped with a 10× objective lens and appropriate fluorescence filters to visualize the acridine orange-stained samples.
Formalin-Fixed Paraffin-Embedded (FFPE) samples were prepared by promptly fixing specimens in 10% formalin, embedding them in paraffin, and cutting them into approximately 7-µm thick sections. These sections were placed on standard glass slides and stored at room temperature for archival purposes. Standard H&E staining was applied to the slides, followed by imaging using the bright-field mode of a digital whole-slide scanning microscope (Zeiss Axio Scan.Z1, Germany) to produce histological WSI. The tissue preparation described above was specifically used during the training and evaluation phases; after training the network, it was no longer required. Deep learning-based virtual staining was utilized to make CM histological images more recognizable to pathologists. Through deep learning, the network learns the correlations between the two image domains and generates images that resemble the H&E staining commonly used by pathologists.
Image Processing
To address the challenge of handling high-resolution whole-slide histology images with limited hardware memory, the images were randomly cropped into overlapping tiles measuring 512×512 pixels. This splitting strategy not only facilitated the analysis of individual tiles but also expedited training by enabling efficient processing of smaller image tiles. Because a single high-resolution WSI contains millions of pixels and numerous distinct representations of each histological structure of interest, even one slide provides abundant data for training deep neural networks effectively. By dividing the WSIs into smaller patches, approximately 5000 patches were obtained for each domain, and the datasets were split into training and testing portions at roughly a 5:1 ratio.
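As an illustration, the patch-extraction step can be sketched in Python as follows; the file layout, glob pattern, and patches-per-slide count are assumptions for illustration rather than details reported in the study.

```python
# A minimal sketch of random 512x512 patch extraction from whole-slide images,
# with an approximate 5:1 train/test split; paths and counts are assumptions.
import random
from pathlib import Path

import numpy as np
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # whole-slide images exceed PIL's default limit

PATCH = 512        # patch side length, as stated in the text
PER_SLIDE = 250    # ~5000 patches over 20 slides (assumed allocation)

def random_patches(wsi_path, n=PER_SLIDE, patch=PATCH):
    """Randomly crop overlapping patch x patch tiles from one whole-slide image."""
    img = np.asarray(Image.open(wsi_path))
    h, w = img.shape[:2]
    for _ in range(n):
        y = random.randint(0, h - patch)
        x = random.randint(0, w - patch)
        yield img[y:y + patch, x:x + patch]

def build_dataset(slide_dir, out_dir):
    out = Path(out_dir)
    (out / "train").mkdir(parents=True, exist_ok=True)
    (out / "test").mkdir(parents=True, exist_ok=True)
    for slide in Path(slide_dir).glob("*.tif"):  # hypothetical file format
        for i, tile in enumerate(random_patches(slide)):
            split = "test" if random.random() < 1 / 6 else "train"  # ~5:1 split
            Image.fromarray(tile).save(out / split / f"{slide.stem}_{i}.png")
```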
Unsupervised Virtual Staining Algorithm
In the context of staining style transfer, a tissue image consists of two components: content and style. The content refers to the primary structure and morphological information within the image, while the style encompasses the specific staining characteristics, including variations such as H&E and acridine orange staining [ 8 ]. The essence of stain transfer is to preserve the content of the original image while transferring its style to that of the target [ 5 ]. CycleGAN was used to learn the nonlinear mapping from CM images to standard H&E histological staining of the sample. The CycleGAN architecture, initially proposed by Zhu et al. [ 17 ], was adopted and further adapted to the specific requirements of this application.
The aim of the model was to perform image transformation using unpaired data. The generator $G$ was designed to map images from domain $X$ to domain $Y$; it accomplished this by generating a virtual H&E-stained image $G(x)$ from a real CM image $x$. The generator $F$ was employed to map images from domain $Y$ to domain $X$ by transforming a real H&E-stained image $y$ into a generated CM image $F(y)$. The discriminator $D_Y$ classifies real H&E images $y$ against virtual H&E images $G(x)$; real CM images $x$ and virtual CM images $F(y)$ are distinguished by the discriminator $D_X$. The initial component of the objective function is the commonly employed adversarial loss [ 17 ], formulated in its most prevalent form as follows:

$$\mathcal{L}_{GAN}(G, D_Y, X, Y) = \mathbb{E}_{y \sim p_{data}(y)}[\log D_Y(y)] + \mathbb{E}_{x \sim p_{data}(x)}[\log(1 - D_Y(G(x)))]$$

where $\mathbb{E}_{y \sim p_{data}(y)}$ and $\mathbb{E}_{x \sim p_{data}(x)}$ are the expectation operators; an analogous loss $\mathcal{L}_{GAN}(F, D_X, Y, X)$ is defined for the generator $F$ and discriminator $D_X$.
CycleGANs enforce the principle of 'cycle consistency' in image translation, ensuring that when an image is translated from domain $X$ to domain $Y$ and then reversed back from $Y$ to $X$, the resulting output closely resembles the original image. A CM image $x$ is transformed into an H&E image $G(x)$ using the generator $G$. To ensure cycle consistency, the translated H&E image $G(x)$ is further transformed back into a CM image $F(G(x))$ using the generator $F$. The model aims to minimize the loss between the genuine CM image $x$ and the re-transformed CM image $F(G(x))$. Similarly, starting with an H&E image $y$, it is first transformed into a CM image $F(y)$ and then translated back into an H&E image $G(F(y))$. The L1 loss between the original and back-translated images, the cycle consistency loss $\mathcal{L}_{cyc}(G,F)$, is thus defined as follows [ 17 ]:

$$\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x \sim p_{data}(x)}[\|F(G(x)) - x\|_1] + \mathbb{E}_{y \sim p_{data}(y)}[\|G(F(y)) - y\|_1]$$
The complete objective function can be formulated as follows:

$$\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{GAN}(G, D_Y, X, Y) + \mathcal{L}_{GAN}(F, D_X, Y, X) + \lambda \mathcal{L}_{cyc}(G, F)$$
where λ is a constant used to enforce the cycle-consistency loss.
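For concreteness, the following is a minimal PyTorch sketch of this objective, assuming generators G (CM→H&E) and F (H&E→CM) and discriminators D_Y and D_X whose outputs are sigmoid probabilities; the weight lambda_cyc = 10 is the common CycleGAN default rather than a value reported here.

```python
# A sketch of the CycleGAN objective above; G: X->Y (CM->H&E), F: Y->X,
# with discriminators D_Y and D_X. lambda_cyc = 10 is an assumed default.
import torch
import torch.nn as nn

bce = nn.BCELoss()   # log-likelihood form of the adversarial loss L_GAN
l1 = nn.L1Loss()     # cycle-consistency loss L_cyc
lambda_cyc = 10.0

def generator_objective(G, F, D_X, D_Y, real_cm, real_he):
    fake_he = G(real_cm)   # virtual H&E image G(x)
    fake_cm = F(real_he)   # virtual CM image F(y)

    # Adversarial terms: each generator tries to make its discriminator say "real" (1).
    d_he, d_cm = D_Y(fake_he), D_X(fake_cm)
    l_adv = bce(d_he, torch.ones_like(d_he)) + bce(d_cm, torch.ones_like(d_cm))

    # Cycle-consistency terms: F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y.
    l_cyc = l1(F(fake_he), real_cm) + l1(G(fake_cm), real_he)
    return l_adv + lambda_cyc * l_cyc

def discriminator_objective(D, real, fake):
    # Discriminator maximizes log D(real) + log(1 - D(fake)).
    d_real, d_fake = D(real), D(fake.detach())  # detach: no gradient into the generator
    return bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
```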
The generator's initial three layers use strided convolutions for downsampling, enabling the extraction of low-level abstract representations. To capture high-level features, the model incorporates nine stacked residual blocks; the number of blocks helps determine the model's capacity, with more blocks suited to more intricate tasks. The residual design effectively tackles the vanishing-gradient problem that often arises in deeper networks and also facilitates faster convergence compared with plain (non-residual) counterparts. The last three layers of the network employ transposed (fractionally-strided) convolutions to integrate the extracted features and upsample the image, restoring it to its initial dimensions and thereby achieving the desired resolution.
The discriminator in this model is structured as a relatively shallow CNN. Each layer of the discriminator downsamples the feature maps while doubling the number of channels. This design empowers the discriminator to effectively capture and analyze essential features, facilitating accurate discrimination between real and generated images.
The final convolutional layer of the discriminator generates a single-channel feature map, and real-versus-generated classification is performed on each element of this map. Both the generator and the discriminator are equipped with nonlinear activation units in each convolutional layer.
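To make the architecture concrete, the following condensed PyTorch sketch reflects the layer layouts summarized in Tables 1 and 2; padding choices, the use of nine residual blocks (per the text), and the plain convolutional output layer are illustrative assumptions where the description is not explicit.

```python
# A condensed sketch of the generator and discriminator described above;
# details not specified in the paper are illustrative choices, not its method.
import torch.nn as nn

class ResidualBlock(nn.Module):
    """3x3 residual block (transformer phase); the skip connection counters
    vanishing gradients and speeds convergence."""
    def __init__(self, ch=256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))

    def forward(self, x):
        return x + self.body(x)

def make_generator(n_blocks=9):
    def down(cin, cout, k, s):  # strided convolution + BatchNorm + ReLU (encoder)
        return [nn.Conv2d(cin, cout, k, stride=s, padding=k // 2),
                nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
    def up(cin, cout):          # transposed convolution doubles spatial size (decoder)
        return [nn.ConvTranspose2d(cin, cout, 3, stride=2, padding=1, output_padding=1),
                nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
    return nn.Sequential(
        *down(3, 64, 7, 1), *down(64, 128, 3, 2), *down(128, 256, 3, 2),  # encoder
        *[ResidualBlock(256) for _ in range(n_blocks)],                   # transformer
        *up(256, 128), *up(128, 64),                                      # decoder
        nn.Conv2d(64, 3, 7, padding=3))  # 3-channel output at the input resolution

def make_discriminator():
    # Shallow CNN: each layer halves the spatial size and doubles the channels;
    # the final stride-1 layer yields a single-channel real/fake map (sigmoid).
    return nn.Sequential(
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
        nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.ReLU(inplace=True),
        nn.Conv2d(256, 512, 4, stride=2, padding=1), nn.BatchNorm2d(512), nn.ReLU(inplace=True),
        nn.Conv2d(512, 1, 4, stride=1, padding=1), nn.Sigmoid())
```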
Image Evaluation
Both qualitative and quantitative evaluation methods were employed to assess the virtual H&E-stained images of the BCC tissue samples. One qualitative method used a t-stochastic neighbor embedding (t-SNE) plot to visualize the features extracted from the virtual H&E images on a two-dimensional graph [ 18 ]. This visualization technique enabled comparison and analysis of the original CM images, the H&E-stained images used for training the CycleGAN model, and the virtual H&E images generated by the model.
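A minimal sketch of such a t-SNE comparison, assuming per-image feature vectors have already been extracted for the three groups, might look as follows (scikit-learn and matplotlib are assumed):

```python
# A sketch of the t-SNE visualization: project CM, virtual H&E, and real H&E
# features to 2-D and scatter-plot them with the colors used in Figure 4.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(feats_cm, feats_virtual, feats_he):
    """Each argument is an (n_images, n_features) array for one image group."""
    feats = np.concatenate([feats_cm, feats_virtual, feats_he])
    coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(feats)
    n1, n2 = len(feats_cm), len(feats_cm) + len(feats_virtual)
    plt.scatter(*coords[:n1].T, c="green", s=8, label="CM")
    plt.scatter(*coords[n1:n2].T, c="blue", s=8, label="virtual H&E")
    plt.scatter(*coords[n2:].T, c="red", s=8, label="real H&E")
    plt.legend()
    plt.show()
```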
The virtual H&E-stained images were compared with real H&E-stained images to analyze whether the translated images were realistic. Three board-certified pathologists, who were unaware of the staining technique used for each image, were involved in the assessment. They were asked to apply a "real vs virtual" perceptual judgment to evaluate the realistic nature of the translated images. This process helped determine the extent to which the virtual H&E images resembled the gold-standard H&E images, providing valuable insights into the quality and accuracy of the translation. In the current study, the pathologists were independent of the research and evaluated 100 images: 50 were actual H&E-stained tissue sections, and the other 50 were virtual H&E images. The pathologists determined whether each image was real or virtual, yielding a comprehensive evaluation of image quality that covers different aspects of stain quality as rated by pathologists.
Also, the Structural Similarity Index (SSIM) [ 19 ] was calculated to measure the model's structure-preservation performance between a given dataset of images and the corresponding reconstructed images.
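A sketch of this SSIM computation, using scikit-image's implementation on pairs of original and back-translated CM patches, is shown below:

```python
# Compare each original CM patch with its reconstruction F(G(x)) via SSIM.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(originals, reconstructions):
    """Average SSIM over pairs of original and reconstructed RGB uint8 patches."""
    scores = [
        ssim(o, r, channel_axis=-1, data_range=255)
        for o, r in zip(originals, reconstructions)
    ]
    return float(np.mean(scores)), float(np.std(scores))
```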
Results
Confocal Microscopy Pathology
In the context of skin cancer diagnosis, pathologists commonly assess the shape and distribution of nuclei within the skin tissue as part of their examination. This study focuses primarily on the identification of BCC tumors within the skin, encompassing the epidermis and dermis. Additionally, features of various appendages, e.g., hair follicles, sebaceous glands, and sweat glands, were shown in the CM images. Furthermore, the examined subcutaneous tissue encompasses adipose tissue, collagen, blood vessels, and meibomian glands. As shown in Figure 1, acridine orange displayed a nuclear staining pattern, highlighting the densely nucleated tumor and epidermis. It also highlighted other structures, such as hair follicles, sebaceous glands, inflammatory cells, and eccrine glands. Ex-vivo CM imaging after acridine orange staining enhances cellular visualization; however, its colors and structures differ greatly from standard H&E images.
Figure 1. Characteristics of skin tissue stained with acridine orange and imaged with confocal microscopy; (A) the epidermal layer (*epithelium); (B) dermis with hair follicles (*hair follicles); (C), (D), and (E) nodular-micronodular Basal Cell Carcinoma (BCC), a BCC tumor, and a BCC tumor surrounded by empty spaces (clefting), respectively (*); (F) sebaceous glands (*); (G) and (H) the dense cell nuclei in BCC tumors that define the margin (*); (K) sweat glands (*).
Virtual Staining
Subsequently, we validated the efficacy of the computational staining method for histological imaging of CM on thin BCC sections. This approach allowed us to digitally simulate the staining process, offering valuable insights into the visualization and analysis of histological features without relying on traditional physical staining techniques. Figure 2 illustrates a fundamental schematic of virtual staining. In the conventional method, biological tissue, such as a skin sample, is harvested and manually sectioned into thin slices. These slices undergo the standard FFPE process and are stained with H&E to generate histological images. In the virtual staining approach, thick tissue samples can be directly imaged using CM, eliminating the need for sectioning.
Figure 2. The workflow for obtaining histological images in conventional histopathology and virtual histopathology. The top pathway shows the traditional histopathology and the bottom pathway illustrates Confocal Microscopy (CM) imaging and deep-learning-based virtual staining
Training Configuration
Tables 1 and 2 provide a comprehensive overview of the network architecture and configuration used during the training of the CycleGAN. The hardware and software configuration, as well as the network parameters used, are shown in Table 3. The training process lasted approximately 30 hours. Once training was completed, the pre-trained forward generators were loaded for subsequent applications. Following training, the forward GAN could generate a 512×512 H&E patch in approximately 2 seconds, showing that the pre-trained model achieved a relatively fast inference time that enables efficient generation of transformed images in subsequent applications (a minimal inference sketch is given after Table 3).
Table 1. Architecture of the CycleGAN generator network

| Phase | Filter Number | Filter Size | Layer Type | Stride | Normalization | Activation | Representation Size |
|---|---|---|---|---|---|---|---|
| Encoder | 64 | 7×7 | Convolution | 1 | Batch Norm | ReLU | n |
| | 128 | 3×3 | Convolution | 2 | Batch Norm | ReLU | n/2 |
| | 256 | 3×3 | Convolution | 2 | Batch Norm | ReLU | n/4 |
| Transformer | 256 | 3×3 | Residual Block | 1 | Batch Norm | ReLU | n/4 |
| | 256 | 3×3 | Residual Block | 1 | Batch Norm | ReLU | n/4 |
| | 256 | 3×3 | Residual Block | 1 | Batch Norm | ReLU | n/4 |
| | 256 | 3×3 | Residual Block | 1 | Batch Norm | ReLU | n/4 |
| | 256 | 3×3 | Residual Block | 1 | Batch Norm | ReLU | n/4 |
| | 256 | 3×3 | Residual Block | 1 | Batch Norm | ReLU | n/4 |
| Decoder | 128 | 3×3 | Transpose | 1/2 | Batch Norm | ReLU | n/2 |
| | 64 | 3×3 | Transpose | 1/2 | Batch Norm | ReLU | n |
| | 3 | 7×7 | Transpose | 1 | Batch Norm | ReLU | n |
Table 2. Architecture of the CycleGAN discriminator network

| Filter Number | Filter Size | Layer Type | Stride | Normalization | Activation |
|---|---|---|---|---|---|
| 64 | 4×4 | Convolution | 2 | - | ReLU |
| 128 | 4×4 | Convolution | 2 | Batch Norm | ReLU |
| 256 | 4×4 | Convolution | 2 | Batch Norm | ReLU |
| 512 | 4×4 | Convolution | 2 | Batch Norm | ReLU |
| 1 | 4×4 | Convolution | 1 | - | Sigmoid |
Table 3. Hardware and software configuration and training parameters

| Hardware or Software | Specification | Training Parameter | Value |
|---|---|---|---|
| Operating system | Windows 10 | Optimizer | Adam |
| GPU | RTX2080 | Epoch | 100 |
| CPU | Intel | Learning Rate | 10000 |
| Memory | 32 GB | Step Decay | 50 |
| Deep learning library | Pytorch 1.8.0 | Batch Size | 2 |
| Programming language | Python 3.7.6 | | |

GPU: Graphics Processing Unit, CPU: Central Processing Unit
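As a minimal illustration of the inference step referenced above, the pre-trained forward generator can be loaded and applied to a single patch as follows; the checkpoint filename and the [-1, 1] normalization are assumptions, and make_generator refers to the architecture sketch given earlier.

```python
# Minimal inference sketch for the trained forward generator (CM -> virtual H&E);
# the checkpoint name "G_cm2he.pth" is hypothetical.
import numpy as np
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
G = make_generator().to(device).eval()
G.load_state_dict(torch.load("G_cm2he.pth", map_location=device))

def stain_patch(cm_patch):
    """Translate one 512x512 CM patch (H x W x 3, uint8) into a virtual H&E patch."""
    x = torch.from_numpy(cm_patch).float().div(127.5).sub(1)  # scale to [-1, 1] (assumed)
    x = x.permute(2, 0, 1).unsqueeze(0).to(device)            # to 1 x 3 x H x W
    with torch.no_grad():
        y = G(x)[0].permute(1, 2, 0).cpu()
    return ((y.clamp(-1, 1) + 1) * 127.5).byte().numpy()      # back to uint8 RGB
```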
Stain Translation Results
The efficacy of the virtual staining method was validated on tissue samples. Upon visually evaluating the synthesized images, we observed remarkable similarities to slides produced from the same tissue block that had undergone conventional staining. Figure 3 presents sample results from the test dataset, showcasing the outcomes of a CycleGAN trained on both CM and H&E images. The virtually stained CM images exhibited a remarkable resemblance in morphology to the histopathological images obtained from H&E slides. Various skin structures, including the epidermis and dermis, as well as features such as hair follicles, sebaceous glands, and collagen, were identifiable in both image types.
Figure 3. Examples of images translated from Confocal Microscopy (CM) to Hematoxylin-and-Eosin (H&E). Virtually stained samples (row a) and their closest real neighbors, H&E-stained images (row b). Black arrows show smooth muscle fibers; (*) sebaceous glands; blue arrows show veins; the black star shows adipose tissue.
The similarity between CM, virtual H&E, and real H&E images was assessed to evaluate the staining precision of the proposed approach. Figure 4 illustrates the t-SNE plot, highlighting the visual and color similarities between virtually stained images and their counterparts subjected to traditional staining techniques. The t-SNE graph showed a good correspondence between real and translated images. The green, blue, and red dots represent CM, virtual H&E, and H&E-stained images, respectively. The distribution of green dots is cleanly separated from the distributions of blue and red dots, showing that the original CM images belong to a different image domain than the virtual H&E and H&E-stained images. The intermixed distributions of blue and red dots indicate that the virtual H&E images closely imitate the appearance of the real H&E-stained images.
Figure 4. t-Stochastic Neighbor Embedding (t-SNE) graph; t-SNE visualizes the Confocal Microscopy (CM), Hematoxylin-and-Eosin (H&E), and virtual H&E image quality for Basal Cell Carcinoma (BCC) specimens.
In addition, SSIM scores were computed between the real confocal microscopy image patches and their reconstructions; the average SSIM over all test patches was 0.91 (standard deviation=0.04).
Clinical Evaluations of Virtually Stained Images
To evaluate the authenticity of the translated images qualitatively, a perceptual study comparing real and virtual images was conducted. As part of the blinded pathology study, three pathologists were asked to determine whether a particular image was genuine or virtual. The corresponding indicators are summarized in Table 4. The assessment of the results involved three indicators: sensitivity, specificity, and accuracy. True positives (real images), false positives, true negatives (synthetic images), and false negatives are represented by TP, FP, TN, and FN, respectively. The formulas for these indicators are as follows [ 20 ] (a worked numerical example is given after Table 4):

$$Sensitivity = \frac{TP}{TP + FN}, \quad Specificity = \frac{TN}{TN + FP}, \quad Accuracy = \frac{TP + TN}{TP + FP + TN + FN}$$
Table 4. Sensitivity, specificity, and accuracy of the three histopathologists in distinguishing real from virtual H&E-stained images

| Indicators | Sensitivity | Specificity | Accuracy |
|---|---|---|---|
| Histopathologist 1 | 0.70 | 0.38 | 0.54 |
| Histopathologist 2 | 0.78 | 0.32 | 0.55 |
| Histopathologist 3 | 0.64 | 0.42 | 0.53 |
| Average | 0.71 | 0.37 | 0.54 |

H&E: Hematoxylin-and-Eosin
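For illustration, these indicators can be computed directly from the confusion counts; the example numbers below reproduce the first histopathologist's row of Table 4, with counts inferred from the 50/50 real-virtual split.

```python
# Compute sensitivity, specificity, and accuracy from blinded "real vs. virtual"
# judgments (real H&E = positive class).
def evaluate_reader(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Example: a reader calling 35 of 50 real images "real" (TP=35, FN=15) and
# 19 of 50 virtual images "virtual" (TN=19, FP=31).
print(evaluate_reader(35, 31, 19, 15))  # -> (0.70, 0.38, 0.54), cf. Table 4
```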
Discussion
CM is considered a potential tool for quick and affordable bedside pathology, enabling reoperations and/or wound closures in much less time than traditional H&E pathology [ 21 , 22 ]. In this study, a deep learning-based method was implemented to carry out virtual staining on CM images of skin tissue samples. We converted the CM images into H&E-like images, which closely resemble the appearance of H&E staining, the visualization format widely used by pathologists to evaluate histochemically stained tissue biopsies on microscopy slides. First, a staining protocol for dermatological imaging was described that incorporates acridine orange, a dye that accurately depicts cell nuclei and raises the contrast of cell structures, particularly the contrast between cells and stroma. Second, the CM imaging system combined with CycleGAN can quickly produce histological images (also known as virtual H&E) that are comparable to standard H&E-stained images. Although the CycleGAN algorithm's input images already exhibit a high degree of structural concordance with true H&E images, the algorithm adds a level of realism that allows our virtual histology method to offer better interpretability than raw CM images.
In this study, unpaired image-to-image translation techniques, which have also been applied in other areas of medical imaging, were used to convert CM images into virtual H&E images. The absence of quantitative metrics presents a substantial obstacle to training unpaired image-to-image translation algorithms. Because most studies focusing on microscopic image conversion utilize paired data, objective evaluations were possible there using metrics that measure structural and perceptual similarity [ 23 ]. However, the most reliable metric for unpaired image-to-image translation techniques is still visual inspection by humans [ 24 ]. The output images generated by the proposed algorithm were assessed by three expert pathologists, who confirmed that the images were similar to those seen in routine practice. The results suggest that the judgments of the three participants were essentially random guesses, as they were unable to distinguish between real and virtual images. Table 4 presents the values of key indicators for synthesized H&E and real H&E images, revealing average sensitivity, specificity, and accuracy values of 0.71, 0.37, and 0.54, respectively. These findings align with those of Lo et al. in the context of stain translation for renal pathology images [ 25 ]. The outcomes underscore the remarkable realism achieved by the translated images through the trained CycleGAN model. Additionally, the trained model effectively transitions into the intended color palette while preserving the structural content of the original image [ 5 , 26 ]. This is facilitated through the application of cycle consistency constraints, resulting in SSIM scores surpassing 0.9 during the back-translation of generated images to their original source domain [ 5 , 26 ].
In conclusion, our results illustrate that the virtual staining networks can effectively reconstruct skin tissue and BCC nodules, replicating the features and color contrast commonly observed in histologically stained microscopy sections.
Further research is imperative to thoroughly evaluate the influence of digital pathology on diagnostic accuracy, sensitivity, and specificity relative to the original images. It is crucial to acknowledge that our training dataset consisted of normal skin samples, nodular, and superficial types of BCC. In future endeavors, we intend to broaden our dataset to incorporate a more extensive array of BCC samples, encompassing various subtypes. This expansion will facilitate the assessment of the network’s efficacy in detecting cell nuclei within basal cell tumor islands.
Conclusion
In conclusion, this study focused on assessing an unpaired stain-to-stain transformation model designed to convert CM images into H&E-stained images. To enhance practical applicability in clinical contexts, future efforts should explore the implementation of transfer learning techniques, larger batch sizes, and specialized hardware to expedite the training process. Moreover, a promising avenue for further investigation involves the development of a CycleGAN model capable of performing multiple stain conversions using confocal microscopy images. This proposed approach holds the potential to significantly advance the application of deep learning methods in the analysis of pathology images, paving the way for diverse stain styles within the pathology field.
Acknowledgment
The authors would like to thank Dr. Babak Arji Roodsari (School of Medicine at the Guilan University of Medical Science, Department of Cell Biology and Anatomy) for helpful discussions and valuable pathological feedback.
Authors’ Contribution
M. Bagheri and M. Hejazi conceived the presented idea. M. Hejazi developed the theory and performed the computations. Material preparation, data collection, and tissue analysis were performed by A. Ghanadan, M. Daneshpazhooh, M. Saboohi, and F. Atyabi. All authors wrote and reviewed the manuscript.
Ethical Approval
This study was approved by the institutional review board of Razi Hospital of Tehran University of Medical Sciences (TUMS), Tehran, Iran, and the Ethics Committee of Tehran University of Medical Sciences, Tehran, Iran (Approval number: IR.TUMS.MEDICINE.REC.1399.129). Patients were enrolled in the study after their written informed consent was obtained. All experimental protocols involving human data were in accordance with the Declaration of Helsinki.
Informed Consent
Written informed consent was obtained from all participants.
Funding
The research leading to these results received funding from the Research Chancellor of Tehran University of Medical Sciences (TUMS), Tehran, Iran (Grant number: 98-06-11-43949).
Conflict of Interest
None
References
- Muzic JG, Schmitt AR, Wright AC, Alniemi DT, Zubair AS, Olazagasti Lourido JM, et al. Incidence and Trends of Basal Cell Carcinoma and Cutaneous Squamous Cell Carcinoma: A Population-Based Study in Olmsted County, Minnesota, 2000 to 2010. Mayo Clin Proc. 2017;92(6):890-8.
- Boktor M, Ecclestone BR, Pekar V, Dinakaran D, Mackey JR, Fieguth P, Haji Reza P. Virtual histological staining of label-free total absorption photoacoustic remote sensing (TA-PARS). Sci Rep. 2022;12(1):10296.
- Gao XH, Li J, Gong HF, Yu GY, Liu P, Hao LQ, et al. Comparison of Fresh Frozen Tissue With Formalin-Fixed Paraffin-Embedded Tissue for Mutation Analysis Using a Multi-Gene Panel in Patients With Colorectal Cancer. Front Oncol. 2020;10:310.
- Pradhan P, Meyer T, Vieth M, Stallmach A, Waldner M, Schmitt M, et al. Computational tissue staining of non-linear multimodal imaging using supervised and unsupervised deep learning. Biomed Opt Express. 2021;12(4):2280-98.
- Liu S, Zhang B, Liu Y, Han A, Shi H, Guan T, He Y. Unpaired Stain Transfer Using Pathology-Consistent Constrained Generative Adversarial Networks. IEEE Trans Med Imaging. 2021;40(8):1977-89.
- Hartmann D, Ruini C, Mathemeier L, Dietrich A, Ruzicka T, Von Braunmühl T. Identification of ex-vivo confocal scanning microscopic features and their histological correlates in human skin. J Biophotonics. 2016;9(4):376-87.
- Tsai ST, Liu CH, Chan CC, Li YH, Huang SL, Chen HH. H&E-like staining of OCT images of human skin via generative adversarial network. Appl Phys Lett. 2022;121(13):134102.
- Borhani N, Bower AJ, Boppart SA, Psaltis D. Digital staining through the application of deep neural networks to multi-modal multi-photon microscopy. Biomed Opt Express. 2019;10(3):1339-50.
- Ortner VK, Sahu A, Cordova M, Kose K, Aleissa S, Alessi-Fox C, et al. Exploring the utility of Deep Red Anthraquinone 5 for digital staining of ex vivo confocal micrographs of optically sectioned skin. J Biophotonics. 2021;14(4):e202000207.
- Guida S, Arginelli F, Farnetani F, Ciardo S, Bertoni L, Manfredini M, et al. Clinical applications of in vivo and ex vivo confocal microscopy. Appl Sci. 2021;11(5):1979.
- Bini J, Spain J, Nehal K, Hazelwood V, DiMarzio C, Rajadhyaksha M. Confocal mosaicing microscopy of human skin ex vivo: spectral analysis for digital staining to simulate histology-like appearance. J Biomed Opt. 2011;16(7):076008.
- Xu Z, Huang X, Moro CF, Bozóky B, Zhang Q. GAN-based virtual re-staining: a promising solution for whole slide image analysis. arXiv [Preprint]. 2019 [cited 2019 Jan 13]. Available from: https://arxiv.org/abs/1901.04059
- Lee G, Oh JW, Her NG, Jeong WK. DeepHCS++: Bright-field to fluorescence microscopy image conversion using multi-task learning with adversarial losses for label-free high-content screening. Med Image Anal. 2021;70:101995.
- Han S, Lee S, Chen A, Yang C, Salama P, Dunn KW, Delp EJ. Three dimensional nuclei segmentation and classification of fluorescence microscopy images. In: 17th International Symposium on Biomedical Imaging (ISBI). Iowa City, IA, USA: IEEE; 2020.
- Winetraub Y, Yuan E, Terem I, Yu C, Chan W, Do H, et al. OCT2Hist: Non-invasive virtual biopsy using optical coherence tomography. medRxiv [Preprint]. 2021 [cited 2021 Apr 6]. Available from: https://www.medrxiv.org/content/10.1101/2021.03.31.21254733v1
- Ruini C, Vladimirova G, Kendziora B, Salzer S, Ergun E, Sattler E, et al. Ex-vivo fluorescence confocal microscopy with digital staining for characterizing basal cell carcinoma on frozen sections: A comparison with histology. J Biophotonics. 2021;14(8):e202100094.
- Zhu JY, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE; 2017.
- Van Der Maaten L, Hinton G. Visualizing data using t-SNE. J Mach Learn Res. 2008;9(11):2579-605.
- Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: A review. Med Image Anal. 2019;58:101552.
- Nečasová T, Burgos N, Svoboda D. Validation and evaluation metrics for medical and biomedical image synthesis. In: Biomedical Image Synthesis and Simulation. 2022. p. 573-600.
- Gareau DS. Feasibility of digitally stained multimodal confocal mosaics to simulate histopathology. J Biomed Opt. 2009;14(3):034050.
- Vladimirova G, Ruini C, Kapp F, Kendziora B, Ergün EZ, Bağcı IS, et al. Ex vivo confocal laser scanning microscopy: A diagnostic technique for easy real-time evaluation of benign and malignant skin tumours. J Biophotonics. 2022;15(6):e202100372.
- Bai B, Yang X, Li Y, Zhang Y, Pillar N, Ozcan A. Deep learning-enabled virtual histological staining of biological samples. Light Sci Appl. 2023;12(1):57.
- Rivenson Y, Wang H, Wei Z, De Haan K, Zhang Y, Wu Y, et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat Biomed Eng. 2019;3(6):466-77.
- Lo YC, Chung IF, Guo SN, Wen MC, Juang CF. Cycle-consistent GAN-based stain translation of renal pathology images with glomerulus detection application. Appl Soft Comput. 2021;98:106822.
- Runz M, Rusche D, Schmidt S, Weihrauch MR, Hesser J, Weis CA. Normalization of HE-stained histological images using cycle consistent generative adversarial networks. Diagn Pathol. 2021;16(1):71.