Document Type : Original Research

Authors

1 PhD, Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran

2 MD, Department of Radiation Oncology, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran

3 PhD, Radiation Biology Research Center, Iran University of Medical Sciences, Tehran, Iran

Abstract

Background: Medical image fusion is widely used to capture complementary information from images of different modalities. The aim of image fusion techniques is to combine the useful information presented in medical images, so that the fused image exhibits more information than the source images.
Objective: In the current study, a BEMD-based multi-modal medical image fusion technique is utilized. Moreover, the Teager-Kaiser energy operator (TKEO) is applied to the lower BIMFs. The results were compared to six routine methods.
Material and Methods: In this experimental study, an image fusion technique using bi-dimensional empirical mode decomposition (BEMD), the Teager-Kaiser energy operator (TKEO) as a local feature selector, and the Hierarchical Model And X (HMAX) model is presented. The BEMD fusion technique can preserve much functional information. In the fusion process, we adopt the TKEO fusion rule for the lower bi-dimensional intrinsic mode functions (BIMFs) of the two images, and the HMAX visual cortex model as the fusion rule for the higher BIMFs, which is verified to be more appropriate for the human visual system. Integrating BEMD with this efficient fusion scheme retains more spatial and functional features of the input images.
Results: We compared our method with the IHS, DWT, LWT, PCA, NSCT and SIST methods. The simulation results and fusion performance show that the presented method is effective in terms of mutual information, quality of the fused image (QAB/F), standard deviation, peak signal-to-noise ratio and structural similarity, and yields considerably better results than the six typical fusion methods.
Conclusion: The statistical analyses revealed that our algorithm significantly improved spatial features and diminished color distortion compared to other fusion techniques. The proposed approach can be used in routine practice. Fusion of functional and morphological medical images is possible before, during and after the treatment of tumors in different organs. Image fusion can also support interventional procedures, a potential use that can be further assessed.

Keywords

Introduction

Recently, an increasing interest in medical image fusion has been observed [1-4]. Fusion of medical images obtained from different imaging systems, such as positron emission tomography (PET), magnetic resonance image (MRI), single photon emission computed tomography (SPECT) and computed tomography (CT), facilitates image analysis, clinical diagnosis and treatment planning [5]. Each medical imaging modality provides a different level of structural and functional information. For instance, CT (based on the x-ray principle) is often used to represent dense structures and is not suitable for soft tissue or physiological analysis. By contrast, MRI provides a better representation of soft tissue and is usually used for the diagnosis of tumors and other tissue abnormalities. Similarly, functional information, such as blood flow in a region of the body, is obtained by PET; nonetheless, its low resolution is one of the disadvantages of this imaging modality [6].

Previous studies have revealed that image fusion has a great ability to improve diagnosis and treatment in different pathological populations, such as cancer patients [7-10]. Various algorithms have been applied effectively for most applications in the past and were successfully applied for the diagnosis of kidney and liver tumors [11]. Fusion can be valuable during interventional events and can contribute before, during and after tumor therapy [12]. Most functional and morphologic imaging studies offer distinct and complementary information. Registration of medical images can provide additional insight into the spatial relationships between tumor and thermal lesion. Conventional interpretation relies on mental registration [13]; nevertheless, computer processing can provide an objective and accurate assessment [14].

Recent research has revealed that fusion of abdominal images from diverse modalities can improve analysis and monitoring of disease progression [15, 16]. New imaging modalities combining positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT) offer unique diagnostic and prognostic capabilities for different applications of image fusion in cancer [14]. Image fusion has proven advantageous for the assessment of cancer patients, supporting diagnosis, treatment planning, and monitoring of the response to therapy and of disease progression [17-19].

Hence, combining images obtained from different methods is required to extract sufficient information, reduce redundancy and make the result more suitable for visual perception [20]. When there are multiple images of a patient, medical image fusion is applied. Fused images can be produced from multiple images of the same imaging modality [21] or from multiple modalities [22]. Goshtasby categorized image fusion algorithms into pixel [23], feature [24] and symbolic [25] levels [20]. Pixel-level fusion is more appropriate than the other fusion levels, and can be implemented in both the spatial and transform domains. Principal component analysis (PCA) [26] and intensity hue saturation (IHS) [27, 28] methods belong to the spatial-domain pixel-level fusion category. However, spatial-domain fusion methods can cause spatial distortion [29]. To overcome these disadvantages, multi-scale decomposition (MSD) based medical image fusion methods, such as the Daubechies complex wavelet transform [4], lifting wavelet transform (LWT) [30], weighted score level fusion [31], curvelet transform [32, 33], non-subsampled contourlet transform (NSCT) [34], shearlet transform [35], shift-invariant shearlet transform (SIST) [1] and fuzzy transform [20], have been widely used for the fusion of medical images. Because of limitations in providing directional information, discrete wavelet transform (DWT) based fusion methods produce block artifacts and inconsistency in the fused results [36]. Contourlet transform methods use various and flexible directions to capture geometrical structures; however, their down- and up-sampling steps cause ringing artifacts, which motivates the redundant, non-subsampled variant [37]. The curvelet transform can capture the intrinsic geometrical structure of an image; however, it does not provide a multi-resolution representation of geometry [38]. Empirical mode decomposition (EMD) is an innovative data representation that decomposes non-stationary and non-linear signals into intrinsic mode functions (IMFs) [39]. Compared to former multi-scale decomposition approaches, EMD can represent image information more precisely. The reasons are as follows [40]: (1) the decomposition technique is data driven; (2) the decomposition is based on the local spatial scale of the image; and (3) IMFs permit the representation of instantaneous frequencies as functions of space.

The physical properties of one-dimensional EMD can also be extended to two-dimensional image analysis. Qiao et al. combined 2D EMD with the IHS space transform to fuse panchromatic and multispectral images [41]. Chen et al. integrated SVM with EMD for multi-focus image fusion [42]. Subsequently, Zhang et al. compared EMD-based image fusion approaches and showed that BEMD yields the best fused image quality [43]. Ahmed et al. and Wielgus et al. considered the use of fast and adaptive BEMD in image fusion [44, 45]. Zhao et al. proposed a bi-dimensional empirical mode decomposition with directional information to merge medical images [46]. These studies exhibit the potential of BEMD for medical image fusion. Consequently, we select BEMD as the MSD tool in the present work.

Choosing the fusion scheme is another essential part of an MSD-based image fusion technique. There are various fusion rules for a variety of applications. The coefficients are combined by a rule such as choose-max [35, 47], energy and regional information entropy [48], pulse-coupled neural network (PCNN) [49] or self-generating neural network [50]. The drawbacks of these rules are that they are time-consuming and that they assume no statistical dependency between the MSD coefficients. The correlation of coefficients across scales and sub-bands has also been considered as a fusion criterion [2]. However, BIMFs are statistically uncorrelated or orthogonal, and there are no such dependencies between the BIMFs.

In this study, a BEMD-based multi-modal medical image fusion technique is utilized. The Teager-Kaiser energy operator (TKEO) is applied to the lower BIMFs. TKEO can track the energy and distinguish the instantaneous frequency and instantaneous amplitude of a mono-component AM-FM signal [51]. TKEO is used to emphasize pixel activity; this operator reflects the pixel energy activity better than other features. The HMAX visual cortex model is used for the higher BIMFs. The proposed fusion scheme makes full use of the mechanism of the V1 visual cortex to fuse the appropriate BIMFs. This paper is organized as follows: Section 2 explains the proposed framework of the method (Section 2.1), the BEMD-based fusion technique (Section 2.2), a theoretical overview of TKEO (Section 2.3) and the HMAX visual cortex model (Section 2.4); the fusion rule for the BIMFs is described in Section 2.5. Section 3 presents the experimental results. Finally, Section 4 is devoted to the conclusion.

Material and Methods

This is an experimental study. This section presents the main fusion method under the BEMD framework. Then, the theory and implementation of BEMD are presented, and the fusion rules based on the dependencies of the BIMFs are discussed.

The Proposed Framework of Medical Image Fusion

It should be noted that PET images are depicted in pseudo-color; thus, we treat them as color images. Figure 1 is a block diagram representing the proposed algorithm. The steps of the algorithm are summarized below, followed by a minimal code sketch of the pipeline.

Figure 1. Schematic diagram of the bi-dimensional empirical mode decomposition (BEMD)-based medical image fusion method

Step 1: Convert source image B into the IHS model and then calculate its intensity component.

Step 2: Decompose the intensity components into BIMFs via BEMD.

Step 3: Combine lower BIMFs according to TKEO rules.

Step 4: Combine higher BIMFs based on HMAX Visual Cortex model.

Step 5: Reconstruct the intensity components of the fused image by summation of selected BIMFs.

Step 6: Reconstruct the fused color image using the inverse IHS transform.
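The following is a minimal Python sketch of this pipeline, under stated assumptions: HSV is used here as a simple stand-in for the IHS model, the helpers bemd, fuse_lower and fuse_higher are hypothetical names standing for the routines sketched in the sections below, and the averaging of the BEMD residues is our choice, not prescribed by the method.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb  # HSV used as a simple stand-in for IHS

def fuse_images(mri_gray, pet_rgb, bemd, fuse_lower, fuse_higher, n_lower=2):
    """Skeleton of steps 1-6. `bemd`, `fuse_lower` and `fuse_higher` are
    callables corresponding to the routines sketched in the next sections."""
    hsv = rgb2hsv(pet_rgb)                       # step 1: intensity of the colour image
    bimfs_a, res_a = bemd(mri_gray)              # step 2: BEMD of both intensity images
    bimfs_b, res_b = bemd(hsv[..., 2])
    fused = []
    for k, (ca, cb) in enumerate(zip(bimfs_a, bimfs_b)):
        if k < n_lower:
            fused.append(fuse_lower(ca, cb))     # step 3: TKEO rule for lower BIMFs
        else:
            fused.append(fuse_higher(ca, cb))    # step 4: HMAX rule for higher BIMFs
    # Step 5: reconstruct intensity (residue averaging is our assumption).
    intensity_f = sum(fused) + (res_a + res_b) / 2.0
    hsv[..., 2] = np.clip(intensity_f, 0.0, 1.0)
    return hsv2rgb(hsv)                          # step 6: inverse transform to colour
```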

BEMD-based Fusion Algorithm: Theoretical Overview of BEMD

The bi-dimensional empirical mode decomposition (BEMD) has been suggested to adaptively extract the different frequency components of an image [39]. The technique is derived from the assumption that an image consists of various bi-dimensional intrinsic mode functions (BIMFs). A BIMF is defined by two criteria: first, each BIMF has the same number of zero crossings and extrema; second, each BIMF is symmetric with respect to the local mean. The following plan outlines the principal BEMD algorithm (a minimal code sketch follows the steps):

1) Identify the extrema of the image I by morphological reconstruction based on geodesic operators.

2) Generate the 2D ‘envelope’ by connecting maxima points (respectively, minima points) with a radial basis function (RBF).

3) Determine the local mean m1 by averaging the two envelopes.

4) Since a BIMF should have zero local mean, subtract the mean from the image:

h_1 = I - m_1

5) Repeat the sifting process until h1 satisfies the BIMF criteria.
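Below is a minimal Python sketch of this sifting procedure. It makes two simplifying assumptions relative to the steps above: extrema are detected with morphological max/min filters rather than geodesic reconstruction, and the numbers of sifting iterations and BIMFs are fixed rather than determined by a stopping criterion.

```python
import numpy as np
from scipy import ndimage
from scipy.interpolate import Rbf

def local_extrema(img, size=3):
    """Coordinates of local maxima and minima of a 2-D array (step 1, simplified)."""
    maxima = img == ndimage.maximum_filter(img, size=size)
    minima = img == ndimage.minimum_filter(img, size=size)
    return np.argwhere(maxima), np.argwhere(minima)

def envelope(points, values, shape):
    """Smooth surface through the extrema via radial basis functions (step 2)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    rbf = Rbf(points[:, 0], points[:, 1], values, function='thin_plate')
    return rbf(yy, xx)

def sift_bimf(img, max_iter=5):
    """Extract one BIMF by iterative sifting (steps 1-5)."""
    h = img.astype(float)
    for _ in range(max_iter):
        max_pts, min_pts = local_extrema(h)
        if len(max_pts) < 3 or len(min_pts) < 3:   # too few extrema to interpolate
            break
        upper = envelope(max_pts, h[max_pts[:, 0], max_pts[:, 1]], h.shape)
        lower = envelope(min_pts, h[min_pts[:, 0], min_pts[:, 1]], h.shape)
        m = (upper + lower) / 2.0                  # step 3: local mean of envelopes
        h = h - m                                  # step 4: enforce ~zero local mean
    return h

def bemd(img, n_bimfs=4):
    """Decompose an image into BIMFs plus a residue."""
    residue = img.astype(float)
    bimfs = []
    for _ in range(n_bimfs):
        bimf = sift_bimf(residue)
        bimfs.append(bimf)
        residue = residue - bimf
    return bimfs, residue
```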

Theoretical Overview of TKEO

It has been shown that the TKEO can track the energy and recognize the instantaneous frequency (IF) and amplitude of a signal [52]. The energy of each pixel can be estimated using image statistics such as the Sobel detector or the gradient; nevertheless, these methods are sensitive to noise [53] and do not highlight edges perfectly. The 2D-TKEO distinguishes noise peaks from true edges and reflects local activity better than the gradient amplitude [54]. The 2D-TKEO is defined by [55]:

\psi(I(x,y)) = \|\nabla I(x,y)\|^2 - I(x,y)\,\nabla^2 I(x,y)   (1)

where I(x, y) is assumed to be a twice-differentiable, continuous, real-valued function. The first type of the new 2D nonlinear filter is obtained by applying the filtering operation of Eq. (1) along both the vertical and horizontal directions, resulting in the 2D discrete version given by [56]:

\psi(I(m,n)) = 2I^2(m,n) - I(m-1,n)\,I(m+1,n) - I(m,n-1)\,I(m,n+1)   (2)

An essential characteristic of the 2D-TKEO is that it is approximately instantaneous, which gives it the ability to capture energy fluctuations. Additionally, its implementation is very simple.
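As an illustration, a direct vectorized implementation of the discrete operator of Eq. (2) might look as follows; leaving border pixels at zero is one of several possible boundary choices:

```python
import numpy as np

def tkeo_2d(img):
    """Discrete 2-D Teager-Kaiser energy operator of Eq. (2)."""
    I = img.astype(float)
    energy = np.zeros_like(I)
    energy[1:-1, 1:-1] = (2.0 * I[1:-1, 1:-1] ** 2
                          - I[:-2, 1:-1] * I[2:, 1:-1]    # vertical neighbour product
                          - I[1:-1, :-2] * I[1:-1, 2:])   # horizontal neighbour product
    return energy
```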

HMAX Visual Cortex Model

The HMAX visual cortex model is arranged in layers that evaluate information in a bottom-up manner. The layers of the model consist of simple ("S") and complex ("C") cells, which were discovered by Hubel and Wiesel [42]. These cells are located in the striate cortex (called V1), the part of the visual cortex in the most posterior area of the occipital lobe. The structure of the HMAX model is shown in Figure 2.

Figure 2. The structure of mathematical simulation of Hierarchical Model And X (HMAX) model

In this model [43], S1 and S2 are two layers of simple cells, and C1 and C2 are two layers of complex cells (Figure 2). The complex layers are computed by a hard max filter. The images are processed by the successive simple and complex cell layers and reduced to a set of features (F). The S1 layer applies 2D Gabor filters calculated at four orientations (horizontal, vertical, and two diagonals) at each position and scale. The Gabor filter is described by:

G(x,y) = \exp\left(-\frac{X^2 + \gamma^2 Y^2}{2\sigma^2}\right)\cos\left(\frac{2\pi}{\lambda}X\right)   (3)

where X = x cos φ − y sin φ and Y = x sin φ + y cos φ. The aspect ratio (γ), effective width (σ) and wavelength (λ) are fixed to 0.3, 4.5 and 5.6, respectively. Finally, the HMAX response R can be computed using Eq. (4):

R(X,G) = \left|\frac{\sum_i X_i G_i}{\sqrt{\sum_i X_i^2}}\right|   (4)
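The sketch below builds the S1 Gabor filter of Eq. (3) with the stated parameters (γ = 0.3, σ = 4.5, λ = 5.6) and evaluates the normalized response of Eq. (4) on an image patch. The filter size and the zero-mean normalization are assumptions borrowed from common HMAX implementations, not specified in the text.

```python
import numpy as np

def gabor(size=11, lam=5.6, sigma=4.5, gamma=0.3, phi=0.0):
    """2-D Gabor filter of Eq. (3) at orientation phi (size is an assumption)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    X = x * np.cos(phi) - y * np.sin(phi)
    Y = x * np.sin(phi) + y * np.cos(phi)
    g = np.exp(-(X**2 + (gamma * Y)**2) / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * X / lam)
    return g - g.mean()   # zero mean, as commonly done for HMAX S1 filters

def hmax_response(patch, g):
    """Normalised dot-product response of Eq. (4); patch must match g's shape."""
    x = patch.ravel().astype(float)
    norm = np.sqrt((x ** 2).sum())
    return abs(np.dot(x, g.ravel()) / norm) if norm > 0 else 0.0
```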

Fusion Rule for BIMFs

It is well known that lower IMFs correspond to higher-frequency parts and vice versa; thus, the higher BIMFs provide the approximation of the original images. Frequently, averaging or regional standard deviation methods are used to produce the fused low-frequency coefficients; however, they tend to produce low-contrast results. In contrast, measures of local energy better reflect image clarity. Therefore, a new fusion scheme is developed in which the hierarchical HMAX visual cortex model selects between the BIMFs. The complete scheme is described as follows:

1) Compute the HMAX response by Eq. (4)

2) The fused BIMFs are obtained by the hierarchical HMAX response mapping:

C_F(i,j) = \begin{cases} C_A(i,j), & R_A(X,G) \ge R_B(X,G) \\ C_B(i,j), & R_A(X,G) < R_B(X,G) \end{cases}   (5)

where C_l(i, j), l = A, B, denotes the higher BIMF coefficients located at (i, j); A is the MRI image, B is the PET/SPECT image, and F is the fused image. The lower BIMFs carry the detailed information of the image. The choose-max method is a popular scheme for combining high-frequency coefficients; however, it selects only the single coefficient of maximum amplitude and is therefore not well suited to medical image features. Consequently, to obtain better results than other fusion schemes, the 2D Teager-Kaiser energy operator (2D TKEO) is applied to construct a weighted fusion scheme. The 2D TKEO reflects finer local activity; this quadratic filter weights the average of gray values by the energy activity at each pixel. Let C_i(x, y), i = A, B, denote the lower BIMFs located at (x, y). The coefficient of the fused image at location (x, y) is calculated by:

C_F(x,y) = \sum_{i=A,B} \mu_i\, C_i(x,y)   (6)

where μ_i is the weight derived from the local energy E_i, which is computed in a 3 × 3 neighborhood using Eq. (2):

\mu_i = \frac{E_i(x,y)}{E_A(x,y) + E_B(x,y)}   (7)
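A sketch of the two fusion rules follows, reusing the tkeo_2d helper from the TKEO section. The 3 × 3 local energy is computed here with a uniform (mean) filter of the absolute TKEO response, which is one plausible reading of "computed in the 3 × 3 neighborhood"; the absolute value is our assumption, since the raw TKEO can be negative while Eq. (7) expects a non-negative energy.

```python
import numpy as np
from scipy import ndimage

def fuse_lower_bimf(c_a, c_b):
    """Weighted fusion of a pair of lower BIMFs, Eqs. (6)-(7)."""
    # Local TKEO energy in a 3x3 neighbourhood (tkeo_2d from the sketch above).
    e_a = ndimage.uniform_filter(np.abs(tkeo_2d(c_a)), size=3)
    e_b = ndimage.uniform_filter(np.abs(tkeo_2d(c_b)), size=3)
    total = np.maximum(e_a + e_b, 1e-12)            # avoid division by zero
    return (e_a / total) * c_a + (e_b / total) * c_b

def fuse_higher_bimf(c_a, c_b, r_a, r_b):
    """Select-max fusion of a pair of higher BIMFs, Eq. (5); r_a and r_b are
    per-pixel HMAX response maps assumed to be precomputed with Eq. (4)."""
    return np.where(r_a >= r_b, c_a, c_b)
```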

Results

The PET/MRI/SPECT images used in this study were obtained from the Whole Brain Atlas of Harvard Medical School (http://www.med.harvard.edu/AANLIB/home.html). The simulation results of our method were compared with the IHS transform, DWT, LWT, PCA, NSCT and SIST. The performance of our algorithm is evaluated by mutual information (MI) [27], QAB/F [57], standard deviation (SD) [35], peak signal-to-noise ratio (PSNR) and structural similarity (SS) [58, 59].
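For reproducibility, these metrics can be computed with standard tools; the following is a hedged sketch, assuming grayscale float images scaled to [0, 1] and estimating mutual information from a 256-bin joint histogram (one common estimator, not necessarily the exact one used in the cited works):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def mutual_information(a, b, bins=256):
    """MI between two grayscale images, from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint probability
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)      # marginals
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

# Example usage (images as float arrays in [0, 1]):
# psnr = peak_signal_noise_ratio(reference, fused, data_range=1.0)
# ssim = structural_similarity(reference, fused, data_range=1.0)
# mi   = mutual_information(reference, fused)
```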

Figures 3a and b show PET and MRI images from a 60-year-old man with mild Alzheimer's disease. Figures 4a and b show SPECT and MRI images from a 38-year-old man with neoplastic disease (brain tumor). From the results, it can clearly be seen that the proposed fusion technique retains the high spatial resolution features of the MRI image. Moreover, the fused image does not distort the spectral features of the multispectral image. In addition, in the quantitative comparison of the different fusion techniques, the proposed method achieves the best value for most metrics, as seen in Tables 1 and 2.

Figure 3. Alzheimer's disease positron emission tomography (PET) and magnetic resonance image (MRI) images (a and b), Intensity hue saturation (IHS) model (c), Lifting wavelet transform (LWT) (d), Discrete wavelet transform (DWT) (e), Principal component analysis (PCA) (f), Nonsubsampled contourlet transform (NSCT) (g), Shearlet transform (ST) (h) and proposed method (i).

Figure 4. Single photon emission computed tomography (SPECT) and magnetic resonance image (MRI) images (a and b), Intensity hue saturation (IHS) model (c), Lifting wavelet transform (LWT) (d), Discrete wavelet transform (DWT) (e), Principal component analysis (PCA) (f), Nonsubsampled contourlet transform (NSCT) (g), Shearlet transform (ST) (h) and proposed method (i).

Table 1. The objective evaluation of the seven methods for the fusion of magnetic resonance image (MRI)/positron emission tomography (PET) (Alzheimer's disease).

Metric | IHS     | PCA     | DWT     | LWT     | NSCT    | ST      | Proposed method
MI     | 2.4641  | 2.6093  | 2.7740  | 2.7783  | 2.5253  | 2.8554  | 2.9871
SD     | 37.7569 | 47.6791 | 53.1702 | 53.1941 | 53.5308 | 68.9855 | 81.9598
QAB/F  | 0.3102  | 0.2742  | 0.3745  | 0.3721  | 0.2156  | 0.2111  | 0.4023
PSNR   | 14.8553 | 17.2423 | 21.1892 | 21.1806 | 20.9094 | 25.8630 | 28.4485
SS     | 0.8870  | 0.8141  | 0.9546  | 0.9537  | 0.9144  | 0.9445  | 0.9456

IHS: Intensity hue saturation, PCA: Principal component analysis, DWT: Discrete wavelet transform, LWT: Lifting wavelet transform, NSCT: Nonsubsampled contourlet transform, ST: Shearlet transform, MI: Mutual Information, SD: Standard deviation, QAB/F: Quality of fused image, PSNR: Peak signal to noise ratio, SS: Structural similarity
Table 2. The objective evaluation of the seven methods for the fusion of magnetic resonance image (MRI)/single photon emission computed tomography (SPECT) (neoplastic disease).

Metric | IHS     | PCA     | DWT     | LWT     | NSCT    | ST      | Proposed method
MI     | 2.6142  | 2.5116  | 2.4973  | 2.4920  | 2.2191  | 2.6869  | 2.9302
SD     | 51.9392 | 48.0455 | 48.4792 | 48.5408 | 51.3770 | 69.2972 | 83.4856
QAB/F  | 0.5673  | 0.2569  | 0.2853  | 0.2821  | 0.1117  | 0.5218  | 0.5253
PSNR   | 19.4348 | 25.8734 | 24.7308 | 24.6873 | 22.0906 | 20.7223 | 17.2108
SS     | 0.8662  | 0.9240  | 0.9150  | 0.9138  | 0.8453  | 0.8871  | 0.8827

IHS: Intensity hue saturation, PCA: Principal component analysis, DWT: Discrete wavelet transform, LWT: Lifting wavelet transform, NSCT: Nonsubsampled contourlet transform, ST: Shearlet transform, MI: Mutual Information, SD: Standard deviation, QAB/F: Quality of fused image, PSNR: Peak signal to noise ratio, SS: Structural similarity

Discussion

Visual analysis demonstrates that the results of the proposed method have higher spatial resolution. Our results (Figures 3i and 4i) appear visually the best among all methods. The proposed method also incurs less spectral information loss than the other state-of-the-art techniques.

The proposed algorithm is compared with the IHS, DWT, LWT, PCA, NSCT and SIST methods. A proper fusion method should maintain the spectral characteristics of the PET image and the high spatial characteristics of the MRI image, both of which are achieved by the proposed method.

To fully visualize two fused images, it is important to be able to adjust the colorization, brightness and contrast independently. It is also vital to be able to control the degree of blending between the two fused images. The ability to adjust these image characteristics considerably improves the visualization of lesions and necrotic regions, and such visualization is essential for accurate evaluation. The most effective color schemes were retained, which permits further automation of repetitive post-processing stages. In addition, our technique allows both MRI and PET images to be localized and fused. Image processing and fusion provide diagnostic tools whose potential usefulness during interventional procedures can be further assessed.

Conclusion

In this study, we presented a novel method based on the BEMD technique that decomposes medical images into various frequency bands, applies the Teager-Kaiser energy operator (TKEO) to the lower modes to extract regional features, and uses the HMAX visual cortex model for the higher BIMFs. Thanks to this model, the proposed HMAX-based fusion rule can be applied to the higher BIMFs to make full use of the mechanism of the visual cortex (V1). The statistical analyses revealed that, for the fusion of MRI/PET and MRI/SPECT, our algorithm significantly increased spatial information and decreased color distortion compared to the other fusion methods. Furthermore, the results of our algorithm are visually superior to those of the other methods.

References

  1. Wang L, Li B, Tian L-F. Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients. Information Fusion. 2014; 19:20-8.
  2. Wang L, Li B, Tian L-F. EGGDD: An explicit dependency model for multi-modal medical image fusion in shift-invariant shearlet transform domain. Information Fusion. 2014; 19:29-37.
  3. Wang Q, Li S, Qin H, Hao A. Robust multi-modal medical image fusion via anisotropic heat diffusion guided low-rank structural analysis. Information Fusion. 2015; 26:103-21.
  4. Singh R, Khare A. Fusion of multimodal medical images using Daubechies complex wavelet transform-A multiresolution approach. Information Fusion. 2014; 19:49-60.
  5. Polo A, Cattani F, Vavassori A, Origgi D, Villa G, Marsiglia H, et al. MR and CT image fusion for postimplant analysis in permanent prostate seed implants. Int J Radiat Oncol Biol Phys. 2004; 60:1572-9. DOI | PubMed
  6. Javed U, Riaz M M, Ghafoor A, Ali S S, Cheema T A. MRI and PET image fusion using fuzzy logic and image local features. Scientific World Journal. 2014; 2014:708075. Publisher Full Text | DOI | PubMed
  7. Faulhaber P, Nelson A, Mehta L, O’Donnell J. 24. The Fusion of Anatomic and Physiologic Tomographic Images to Enhance Accurate Interpretation. Clin Positron Imaging. 2000; 3:178. PubMed
  8. Mutic S, Palta J R, Butker E K, Das I J, Huq M S, Loo L N, et al. Quality assurance for computed-tomography simulators and the computed-tomography-simulation process: report of the AAPM Radiation Therapy Committee Task Group No. 66. Med Phys 2003; 30:2762-92. DOI | PubMed
  9. Bowsher J E, Johnson V E, Turkington T G, Jaszczak R J, Floyd C R, Coleman RE. Bayesian reconstruction and use of anatomical a priori information for emission tomography. IEEE Trans Med Imaging. 1996; 15:673-86. DOI | PubMed
  10. Scott A M, Macapinlac H, Zhang J J, Kalaigian H, Graham M C, Divgi C R, et al. Clinical applications of fusion imaging in oncology. Nucl Med Biol. 1994; 21:775-84. PubMed
  11. Giesel F L, Mehndiratta A, Locklin J, McAuliffe M J, White S, Choyke P L, et al. Image fusion using CT, MRI and PET for treatment planning, navigation and follow up in percutaneous RFA. Exp Oncol. 2009; 31:106-14. Publisher Full Text | PubMed
  12. Yap J T, Carney J P, Hall N C, Townsend D W. Image-guided cancer therapy using PET/CT. Cancer J. 2004; 10:221-33. PubMed
  13. Ferrari De Oliveira L, Azevedo Marques P M. Coregistration of brain single-positron emission computed tomography and magnetic resonance images using anatomical features. J Digit Imaging. 2000; 13:196-9. Publisher Full Text | PubMed
  14. Vannier M W, Gayou D E. Automated registration of multimodality images. Radiology. 1988; 169:860-1. DOI | PubMed
  15. Forster G J, Laumann C, Nickel O, Kann P, Rieker O, Bartenstein P. SPET/CT image co-registration in the abdomen with a simple and cost-effective tool. Eur J Nucl Med Mol Imaging. 2003; 30:32-9. DOI | PubMed
  16. Antoch G, Kanja J, Bauer S, Kuehl H, Renzing-Koehler K, Schuette J, et al. Comparison of PET, CT, and dual-modality PET/CT imaging for monitoring of imatinib (STI571) therapy in patients with gastrointestinal stromal tumors. J Nucl Med. 2004; 45:357-65. PubMed
  17. Keidar Z, Israel O, Krausz Y. SPECT/CT in tumor imaging: technical aspects and clinical applications. Semin Nucl Med. 2003; 33:205-18. DOI | PubMed
  18. Israel O, Mor M, Gaitini D, Keidar Z, Guralnik L, Engel A, et al. Combined functional and structural evaluation of cancer patients with a hybrid camera-based PET/CT system using (18)F-FDG. J Nucl Med. 2002; 43:1129-36. PubMed
  19. Beyer T, Townsend D W. Putting ‘clear’ into nuclear medicine: a decade of PET/CT development. Eur J Nucl Med Mol Imaging. 2006; 33:857-61. DOI | PubMed
  20. Goshtasby A A, Nikolov S. Image fusion: advances in the state of the art. Information fusion. 2007; 8:114-8.
  21. Gooding MJ, Rajpoot K, Mitchell S, Chamberlain P, Kennedy S H, Noble J A. Investigation into the fusion of multiple 4-D fetal echocardiography images to improve image quality. Ultrasound Med Biol. 2010; 36:957-66. DOI | PubMed
  22. Maintz J B, Viergever M A. A survey of medical image registration. Med Image Anal. 1998; 2:1-36. PubMed
  23. Petrovic V. Multisensor pixel-level image fusion. Manchester: University of Manchester; 2001.
  24. Region-Based Image Fusion Using Complex Wavelets. Seventh International Conference on Information Fusion; Stockholm, Sweden: FUSION; 2004.
  25. Decision-level fusion of infrared and visible images for face recognition. Control and Decision Conference; Chinese: IEEE; 2008.
  26. Nirosha Joshitha J, Selin R M. Image fusion using PCA in multifeature based palmprint recognition. International Journal of Soft Computing and Engineering. 2012; P2.
  27. Daneshvar S, Ghassemian H. MRI and PET image fusion by combining IHS and retina-inspired models. Information Fusion. 2010; 11:114-23.
  28. Pradhan P S, King R L, Younan N H, Holcomb D W. Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion. IEEE Transactions on Geoscience and Remote Sensing. 2006; 44:3674-86.
  29. Wang Z, Ziou D, Armenakis C, Li D, Li Q. A comparative analysis of image fusion methods. IEEE Transactions on Geoscience and Remote Sensing. 2005; 43:1391-402.
  30. Kor S, Tiwary U. Feature level fusion of multimodal medical images in lifting wavelet transform domain. Conf Proc IEEE Eng Med Biol Soc. 2004; 2:1479-82. DOI | PubMed
  31. Sim H M, Asmuni H, Hassan R, Othman RM. Multimodal biometrics: Weighted score level fusion based on non-ideal iris and face images. Expert Systems with Applications. 2014; 41:5390-404.
  32. Alipour S H M, Houshyari M, Mostaar A. A novel algorithm for PET and MRI fusion based on digital curvelet transform via extracting lesions on both images. Electron Physician. 2017; 9:4872-9. Publisher Full Text | DOI | PubMed
  33. Yang L, Guo B, Ni W. Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform. Neurocomputing. 2008; 72:203-11.
  34. Li T, Wang Y. Biological image fusion using a NSCT based variable-weight method. Information Fusion. 2011; 12:85-92.
  35. Miao Q-g, Shi C, Xu P-f, Yang M, Shi Y-b. A novel algorithm of image fusion using shearlets. Opt Commun. 2011; 284:1540-7.
  36. Amolins K, Zhang Y, Dare P. Wavelet based image fusion techniques-An introduction, review and comparison. ISPRS Journal of Photogrammetry and Remote Sensing. 2007; 62:249-63.
  37. Do M N, Vetterli M. The contourlet transform: an efficient directional multiresolution image representation. IEEE Trans Image Process. 2005; 14:2091-106. PubMed
  38. Candes E, Demanet L, Donoho D, Ying L. Fast discrete curvelet transforms. Multiscale Modeling & Simulation. 2006; 5:861-99.
  39. Huang N E, Shen Z, Long S R, Wu M C, Shih H H, Zheng Q, et al. , editors. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 1998; 454(1971):903-95. DOI
  40. Hariharan H, Gribok A, Abidi M A, Koschan A. Image fusion and enhancement via empirical mode decomposition. Journal of Pattern Recognition Research. 2006; 1:16-32.
  41. A novel image fusion algorithm based on 2D EMD and IHS. International Conference on Machine Learning and Cybernetics; Kunming, China: IEEE; 2008.
  42. Chen S, Su H, Zhang R, Tian J, Yang L. Improving Empirical Mode Decomposition Using Support Vector Machines for Multifocus Image Fusion. Sensors (Basel). 2008; 8:2500-8. Publisher Full Text | DOI | PubMed
  43. Comparison of EMD based image fusion methods. International Conference on Computer and Automation Engineering; Bangkok, Thailand: IEEE; 2009.
  44. Image fusion based on fast and adaptive bidimensional empirical mode decomposition. 13th International Conference on Information Fusion; Edinburgh, UK: IEEE; 2010.
  45. Fast and adaptive bidimensional empirical mode decomposition for the real-time video fusion. 15th International Conference on Information Fusion; Singapore: IEEE; 2012.
  46. Zhao H, Zhang X, Li X, Zang X. A New Model for Image Fusion Using Bi-dimensional Empirical Mode Decomposition with Directional Information. Journal of Information & Computational Science. 2014; 11:3461-8.
  47. Remote sensing images fusion algorithm based on shearlet transform. International Conference on Environmental Science and Information Application Technology; Wuhan, China: IEEE; 2009.
  48. Wavelet-based texture fusion of CT/MRI images. 3rd International Congress on Image and Signal Processing; Yantai, China: IEEE; 2010.
  49. Li M, Cai W, Tan Z. A region-based multi-sensor image fusion scheme using pulse-coupled neural network. Pattern Recognition Letters. 2006; 27:1948-56.
  50. Jiang H, Tian Y. Fuzzy image fusion based on modified Self-Generating Neural Network. Expert Systems with Applications. 2011; 38:8515-23.
  51. Cexus J-C, Boudraa A. Teager-Huang analysis applied to sonar target recognition. World Academy of Science, Engineering and Technology. 2005;111-4.
  52. Cexus J, Boudraa A. Teager-Huang analysis applied to sonar target recognition. Int J Signal Process. 2004; 1:23-7.
  53. Optimal performance of the watershed segmentation of an image enhanced by Teager energy driven diffusion. Proceedings Workshop on Very Low Bit Rate Coding; Champaign: Department of Telecommunications and information processing; 1998.
  54. Watershed segmentation of an image enhanced by teager energy driven diffusion. Sixth International Conference on Image Processing and Its Applications; Dublin, Ireland: IET; 1997.
  55. Boudraa A-O, Salzenstein F, Cexus J-C. Two-dimensional continuous higher-order energy operators. Optical Engineering. 2005; 44:117001.
  56. A new class of nonlinear filters for image enhancement. International Conference on Acoustics, Speech, and Signal Processing; Toronto, Ontario, Canada: IEEE; 1991.
  57. Li S, Yang B, Hu J. Performance comparison of different multi-resolution transforms for image fusion. Information Fusion. 2011; 12:74-84.
  58. Naidu V, Raol J R. Pixel-level image fusion using wavelets and principal component analysis. Def Sci J. 2008; 58:338.
  59. Wang Z, Bovik A C, Sheikh H R, Simoncelli E P. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004; 13:600-12. PubMed