Document Type : Original Research


1 PhD, Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran

2 PhD, Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran

3 MD, Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran

4 MD, Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1364 Clifton Rd NE, Atlanta, Georgia 30322, USA

5 PhD, Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran



Background: Pancreatic ductal adenocarcinoma (PDAC) is the most prevalent type of pancreatic cancer and carries a high mortality rate. Its staging depends heavily on the extent of involvement between the tumor and the surrounding vessels, which makes their delineation important for treatment response assessment in PDAC.
Objective: This study aims to detect and visualize the tumor region and the surrounding vessels on CT scans of PDAC patients since, unlike tumors in other abdominal organs, PDAC is notably difficult to delineate.
Material and Methods: This retrospective study consists of three stages: 1) a patch-based algorithm for differentiation between tumor region and healthy tissue using multi-scale texture analysis along with L1-SVM (Support Vector Machine) classifier, 2) a voting-based approach, developed on a standard logistic function, to mitigate false detections, and 3) 3D visualization of the tumor and the surrounding vessels using ITK-SNAP software. 
Results: The results demonstrate that multi-scale texture analysis strikes a balance between recall and precision in differentiating tumor from healthy tissue, with an overall Dice coefficient of 0.78±0.12 and a sensitivity of 0.90±0.09 in PDAC.
Conclusion: Multi-scale texture analysis using statistical and wavelet-based features along with L1-SVM can be employed to differentiate between healthy and tumoral pancreatic tissues. In addition, 3D visualization of the tumor region and surrounding vessels can facilitate the assessment of treatment response in PDAC. However, the 3D visualization software must be further developed for integration into clinical applications.



Introduction

Pancreatic ductal adenocarcinoma (PDAC), constituting 85% of pancreatic cancers, is an aggressive gastrointestinal (GI) malignancy with a high mortality rate and is the second most common GI malignancy after colorectal cancer [ 1 , 2 ]. In the early stages of the disease, diagnosis is complicated by the absence of specific symptoms. Therefore, most patients present at an advanced stage of disease, leading to an overall 5-year survival rate of 8%. The only potentially curative treatment is surgery; however, only 10–20% of patients have resectable tumors at the time of presentation. Currently, neoadjuvant chemotherapy with or without radiotherapy is used in patients with locally advanced and borderline PDAC [ 1 ], offering the best chance of survival. Pre- and post-therapy evaluation of PDAC is the most critical factor for resectability assessment and treatment planning; however, identifying the exact border of the tumor region is complicated [ 3 ].

PDAC staging is highly dependent on the extent of involvement between the tumor and surrounding vessels, such as the superior mesenteric artery (SMA), superior mesenteric vein (SMV), and portal vein. 3D visualization of the tumor and surrounding vessels can help assess treatment response in PDAC; to achieve this, accurate differentiation between tumor mass and healthy tissue is crucial.

Tissue biopsy is the gold standard for evaluating PDAC; however, biopsy is an invasive procedure with possible complications. Currently, CT angiography is the most common modality for the evaluation of PDAC patients [ 2 , 4 , 5 ], although the sensitivity and specificity of these non-invasive techniques remain insufficient [ 6 ]. Moreover, unlike tumors of other abdominal organs, PDAC appears as a shadow-like region, making detection of the tumor borders difficult [ 7 ].

New techniques such as CT texture analysis (CTTA) have been proposed to address the limitations mentioned above and assist physicians in better management of the PDAC [ 1 , 4 , 8 , 9 ]. CTTA can capture pixel or voxel gray-level variations and distributions within the image, provide a semi-quantitative method for evaluating the heterogeneity within a tumor, and is also capable of predicting prognosis and survival outcomes in non-small cell lung cancer, esophageal cancer, colon cancer, and metastatic renal cell carcinoma [ 10 - 13 ].

The two most popular sets of texture-based features are statistical and multi-resolution features. The first set consists of textural descriptors such as first-order statistical (FOS) features, gray-level co-occurrence matrix (GLCM) [ 14 ], gray-level run-length matrix (GLRLM) [ 15 ], local binary patterns (LBP) [ 16 ], and Laws' energy [ 17 ], reflecting the relationship between the intensities of two image pixels or groups of pixels and estimating first- and second-order statistical features. The second set comprises wavelet-based features such as the discrete wavelet transform (DWT) [ 18 ], Gabor wavelet [ 19 ], and dual-tree complex wavelet transform (DTCWT) [ 20 ], capturing scale and orientation information in the spatial domain as well as the frequency content. Moreover, several studies have tried to combine statistical and multi-resolution features (GLCM+DTCWT, LBP+DTCWT) to achieve effective features [ 21 , 22 ].

In recent studies, CT texture features have been used for predicting survival in patients with PDAC [ 4 , 6 , 9 , 23 - 27 ]. Ciaravino et al. [ 9 ] extracted simple texture features from PDAC down-staged to surgery after chemotherapy, and CT texture features have been proposed for assessing treatment response [ 1 ]. In [ 28 ], texture features were obtained from endoscopic ultrasound (EUS) images of PDAC for differential diagnosis between tumoral and normal pancreatic tissues, using a combination of M-band wavelet and fractal features and a support vector machine (SVM) classifier. Marconi et al. [ 7 ] used a fuzzy logic system for discriminating between tumor and healthy tissues using multi-detector CT. Zhu et al. [ 29 ] proposed a system for screening PDAC via deep neural networks and used multi-scale segmentation for classifying normal and abnormal (PDAC) pancreas. Chu et al. [ 30 ] used radiomics features for classifying PDAC and normal cases.

This work proposes a method for segmentation and 3D visualization of the tumor region and surrounding vessels in PDAC in three stages: 1) PDAC and normal tissue are discriminated using texture analysis of CT images, with multi-scale feature extraction and an L1-SVM classifier used to deal with tumors of various sizes. Three image patch sizes were used as the result of a trade-off between two competing demands: each patch should contribute enough information for extracting discriminant features, which favors larger patches, and sufficient data patches should remain available, which favors smaller patches. This combination enables the detection of small tumors and a balance between recall and precision; 2) vessels are segmented using 3D active contours; 3) the tumor region and surrounding vessels are visualized using ITK-SNAP software.

Material and Methods

In this retrospective study, multi-slice CT scans of 10 patients with pathologically proven adenocarcinoma of the pancreas were enrolled. CT scans were acquired prior to obtaining the tissue sample by percutaneous core needle biopsy or fine needle aspiration (FNA) under endosonography (EUS) guidance. All imaging was performed at the radiology department of Shariati Hospital, Tehran University of Medical Sciences, using a 16-detector-row CT scanner (Somatom Emotion, Siemens, Erlangen, Germany); the study protocol was approved by the Local Ethics Committee of Shariati Hospital. Oral contrast material was administered 90 to 120 min prior to the exam. The scans included non-enhanced CT of the abdomen, contrast-enhanced pancreatic parenchymal phase (40–45 s) CT of the abdomen, and portal phase (70 s) CT of the abdomen and pelvis. All patients received an intravenous injection of 1.5 mL per kilogram of body weight of an iodinated contrast agent (Visipaque 320 mg I/mL; GE Healthcare, Little Chalfont, England) at a rate of 3.5 mL/s (maximum total amount of 150 mL). Images were obtained craniocaudally with thin collimation (1.5 mm); the other scan parameters were section thickness (3 mm), tube voltage (120 kV), and effective tube current–time product (200–250 mAs). Two experienced abdominal radiologists manually segmented the pancreas on each CT slice.

Tumor/normal Tissue Differentiation

The proposed approaches for differentiation between tumor region and normal tissue are summarized in Figure 1. All algorithms were implemented in Matlab R2019a (MathWorks Inc., USA).

Figure 1. Block Diagram of the proposed method for differentiation between tumor region and normal tissue


The preprocessing stage of the training data consists of contrast enhancement, manual segmentation of the pancreas and tumor, and normalization. In quantitative texture analysis, image intensity normalization is necessary [ 23 , 31 ]; therefore, the pixels within each ROI were normalized using the following equation:

I_n = \frac{I - \mathrm{mean}(I)}{\mathrm{std}(I)} \qquad (1)

where I and I_n are the original and normalized images, and mean(I) and std(I) represent the mean and standard deviation of the ROI intensities, respectively.
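For illustration, Eq. (1) amounts to z-score normalization of the ROI intensities. A minimal Python sketch (the function name and the sample values are illustrative, not from the study):

```python
import statistics

def normalize_roi(pixels):
    """Z-score normalization of ROI intensities (Eq. 1):
    I_n = (I - mean(I)) / std(I), using the population std."""
    mu = statistics.mean(pixels)
    sigma = statistics.pstdev(pixels)
    if sigma == 0:
        return [0.0 for _ in pixels]  # constant ROI: map everything to zero
    return [(p - mu) / sigma for p in pixels]

roi = [120, 130, 125, 140, 135]   # illustrative HU-like values
norm = normalize_roi(roi)
```

After this step every ROI has zero mean and unit standard deviation, so the texture descriptors are comparable across patients and scanners.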

Multi-scale feature extraction in a training phase

For each CT image, the pancreas and tumor regions were delineated by two experienced radiologists. Since texture descriptors are sensitive to ROI size, patch size plays an important role in estimating reliable features. In addition, given that tumors appear in diverse sizes and shapes, a multi-scale feature extraction method was adopted. Three image patch sizes, i.e., 16×16, 24×24, and 32×32, with an overlap of 66% were selected as the result of a trade-off between two competing demands: each patch should contribute enough information for extracting discriminant features, which favors larger patches, while sufficient data patches should remain available, which favors smaller patches. In the feature extraction stage, statistical and wavelet-based features are used. Statistical features include statistical moments, GLCM, LBP, and Laws' features, while wavelet-based features consist of DWT, Gabor, and DTCWT. Moreover, we combined statistical and wavelet-based features, namely GLCM+DTCWT and DTCWT+LBP.
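The overlapping multi-scale patch layout can be sketched as follows; a stride of roughly one third of the patch size approximates the stated 66% overlap (the function names and the stride rounding are illustrative assumptions):

```python
def patch_grid(height, width, size, overlap=2/3):
    """Top-left coordinates of overlapping square patches.
    stride = size * (1 - overlap), so 66% overlap gives a stride of ~size/3."""
    stride = max(1, round(size * (1 - overlap)))
    rows = range(0, height - size + 1, stride)
    cols = range(0, width - size + 1, stride)
    return [(r, c) for r in rows for c in cols]

def multiscale_patches(height, width, sizes=(16, 24, 32)):
    """One coordinate list per analysis scale, as in the three-scale setup."""
    return {s: patch_grid(height, width, s) for s in sizes}

grids = multiscale_patches(64, 64)  # illustrative 64x64 bounding box
```

Note how the smaller scales yield many more training patches from the same region, which is exactly the availability argument in the trade-off above.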

Four orientations (θ = 0°, 45°, 90°, 135°), four distances (d = 1, 2, 3, 4), and quantization levels of 4, 8, 16, 25, and 32 were examined for extracting GLCM features, leading to 16 GLCMs for each ROI. Then the 17 features mentioned in section 2.3 were extracted, and a subset of the best features was selected based on classifier performance. Applying Laws' method with a window size of 3 and kernels with a length of 5, we obtained 9 energy maps, from which FOS features were then extracted.
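As a concrete example of the GLCM computation, the sketch below builds the co-occurrence counts for a single direction (0°) and distance and derives the contrast feature; a full implementation (e.g. scikit-image's graycomatrix) would cover all four orientations and distances listed above:

```python
from collections import Counter

def glcm_contrast(image, distance=1):
    """Gray-level co-occurrence counts for the 0-degree direction and the
    contrast feature: sum over (i, j) of P(i, j) * (i - j)^2,
    with P normalized over the number of pixel pairs."""
    pairs = Counter()
    for row in image:
        for x in range(len(row) - distance):
            pairs[(row[x], row[x + distance])] += 1
    total = sum(pairs.values())
    contrast = sum(n * (i - j) ** 2 for (i, j), n in pairs.items()) / total
    return pairs, contrast

# Tiny 2-level quantized image, purely for illustration:
img = [
    [0, 0, 1],
    [0, 1, 1],
    [1, 1, 0],
]
glcm, contrast = glcm_contrast(img, distance=1)
```

High contrast indicates large local gray-level variation, one of the heterogeneity cues CTTA exploits within a tumor.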

Multiple descriptors were used to implement LBP by varying the number of surrounding pixels (P), the neighborhood radius (R), and the cell size. To build the final feature vector, the histograms obtained from the single-scale analyses were concatenated. Based on a trade-off between the sensitivity and specificity of the L1-SVM classifier, LBP^{riu2}_{10,2} with cell sizes of 12 and 14, LBP^{riu2}_{10,3} with a cell size of 14, and LBP^{riu2}_{10,2} with a cell size of 14 were used as LBP features. DWTs were computed with 17 different wavelet filters such as Daubechies, Symlet, and Coiflet, and two-level wavelet decompositions were also examined in search of effective features. Gabor filters were applied with 5 scales and 8 orientations to calculate the Gabor features [ 19 ], and energy features were determined for each sub-image. Furthermore, 1-level DTCWT was used to decompose the images into 12 high-pass sub-bands (6 real and 6 imaginary). In addition to the FOS features extracted from the wavelets, second-order features were also extracted to attain dominant features within large scales. Thus, 1-level DTCWT+GLCM and 1-level DTCWT+LBP were used for feature extraction; while LBP features were extracted from the real low-pass sub-image of the DTCWT, GLCM features were extracted from the real high-pass sub-image of the DTCWT [ 21 , 22 ]. After checking all possible combinations of the mentioned features, the best subsets were selected as final feature vectors based on classifier performance (Figure 2).
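The rotation-invariant uniform LBP code (riu2) used above can be illustrated for the simplest configuration, P = 8 neighbors at radius R = 1 on the integer grid; the study used larger P and R with interpolated circular sampling, so this is a simplified sketch:

```python
def lbp_riu2(patch, r, c):
    """riu2 LBP code of the pixel at (r, c) with P=8, R=1.
    Uniform patterns (at most 2 bit transitions around the circle) are
    mapped to their number of set bits; all others share the bin P+1."""
    center = patch[r][c]
    # clockwise neighbors starting at the right
    offsets = [(0, 1), (1, 1), (1, 0), (1, -1),
               (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    bits = [1 if patch[r + dr][c + dc] >= center else 0 for dr, dc in offsets]
    transitions = sum(bits[k] != bits[(k + 1) % 8] for k in range(8))
    return sum(bits) if transitions <= 2 else 9  # 9 = P + 1, non-uniform bin

# A dark pixel surrounded by a bright ring -> uniform pattern, code 8:
patch = [
    [5, 5, 5],
    [5, 1, 5],
    [5, 5, 5],
]
code = lbp_riu2(patch, 1, 1)
```

Histograms of these codes over each cell, concatenated across cells and scales, form the LBP feature vector.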

Figure 2. Diagram of extracted features

Feature Selection and Classification

For feature dimensionality reduction and classification, two approaches were examined: 1) principal component analysis (PCA) along with SVM (linear and RBF kernels), KNN, and decision tree classifiers, and 2) feature selection with L1-SVM within the classification framework via L1-norm penalized sparse representations [ 32 ]. The standard SVM offers robust performance in binary classification problems; however, it has been shown that L1-SVM may offer some advantages over the standard SVM [ 32 ]. Let S = \{(x_1, y_1), (x_2, y_2), \dots, (x_N, y_N)\} be the set of training data, where N is the number of input samples, x_i \in \mathbb{R}^{W \times l} are the 2D ROIs, and y_i \in \{0, 1\} are the labels (0 for normal patches and 1 for tumoral patches). The goal is to design a model M: y = f(x) that predicts the label of each testing patch. In L1-SVM, which is an equivalent Lagrange version of the optimization problem, the ridge penalty is replaced with the lasso penalty as follows:

\min_{w_0, w} \sum_{i=1}^{N} \Big[ 1 - y_i \Big( w_0 + \sum_{j=1}^{q} w_j h_j(x_i) \Big) \Big]_{+} + \lambda \|w\|_1 \qquad (2)

where \lambda is the regularization parameter, D = \{h_1(x), \dots, h_q(x)\} is a dictionary of basis functions, and \|w\|_1 can be interpreted as the reciprocal of the geometric margin.
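To see why the lasso penalty in Eq. (2) drives some coefficients exactly to zero, consider its proximal (soft-thresholding) operator, sketched below; this illustrates the penalty's behavior and is not the solver used in the study:

```python
def soft_threshold(w, lam):
    """Proximal operator of the lasso penalty lam * |w|: shrinks a
    coefficient toward zero and sets it exactly to zero when |w| <= lam.
    This is the mechanism behind the feature-selection effect of L1-SVM."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

# Illustrative coefficients before and after one proximal step with lambda=0.5:
coeffs = [2.0, -0.3, 0.8, -1.5, 0.05]
selected = [soft_threshold(w, 0.5) for w in coeffs]
```

Coefficients whose magnitude falls below \lambda vanish entirely, so the surviving nonzero weights identify the selected features.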

The lasso penalty shrinks the fitted coefficients toward zero, reducing their variance. Moreover, because of the L1 nature of the penalty, some of the coefficients (w_j's) become exactly zero; the lasso penalty therefore has a built-in feature-selection effect. A wide range of λ values was tested on the training data to find a model that performs well on the test data. To evaluate the discrimination results, three common performance criteria were used, namely sensitivity, specificity, and accuracy, defined as follows:

\mathrm{Sensitivity} = \frac{TP}{TP + FN} \qquad (3)

\mathrm{Specificity} = \frac{TN}{TN + FP} \qquad (4)

\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (5)
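Equations (3)-(5) translate directly into code; a small sketch with illustrative confusion-matrix counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Sensitivity (Eq. 3), specificity (Eq. 4), and accuracy (Eq. 5)
    computed from the binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts, not results from the study:
sens, spec, acc = classification_metrics(tp=9, tn=7, fp=3, fn=1)
```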

Voting-based Multi-scale Pixel Labelling

In this stage, tumor and healthy tissues are discriminated in the test data, given that the best models for the classifiers with patch sizes 16, 24, and 32, namely C16, C24, and C32, were obtained in the previous stage. A sliding window sweeps the whole pancreas tissue pixel by pixel. Using the developed feature extraction and classification models, each image patch is classified as tumoral or normal, the center pixel of the patch is labeled accordingly, and the process is repeated for each image patch size. Each of the three obtained classifiers should contribute to the final decision according to its performance in predicting the final label of a patch. Therefore, a weighting vector w = (2, 1, 0.5) was defined, consisting of the weights for C32, C24, and C16, respectively. Thus, the label obtained from classifier C32 and its related weight are assigned to the corresponding pixel. If C32 is unable to label the pixel, C24 and its corresponding weight are used; otherwise, the label and weight are specified using C16. Furthermore, to solve the problem of label fluctuations across CT slices and improve the tissue classification performance, a novel but simple approach was adopted, using the information provided by 3 adjacent slices, as described below.
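The scale cascade and weighting just described can be sketched as follows (the classifier outputs and the abstention convention are illustrative assumptions):

```python
# Classifier outputs are 0 (normal), 1 (tumoral), or None when the patch
# cannot be labeled at that scale (e.g. it does not fit inside the pancreas).
WEIGHTS = {"C32": 2.0, "C24": 1.0, "C16": 0.5}

def label_with_weight(c32_out, c24_out, c16_out):
    """Cascade from the text: prefer the largest scale, fall back to
    smaller scales only when the larger one abstains."""
    for name, out in (("C32", c32_out), ("C24", c24_out), ("C16", c16_out)):
        if out is not None:
            return out, WEIGHTS[name]
    return None, 0.0

label, weight = label_with_weight(None, 1, 0)  # C32 abstains -> C24 decides
```

The per-pixel weight produced here is the w(j_s) term that feeds the multi-slice energy functions below.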

A function d(i, j) is defined as the Euclidean distance between the i-th pixel and the j-th pixel in each CT slice:

d(i, j) =
\begin{cases}
\mathrm{distance}(i, j) & i \neq j \\
\epsilon & i = j
\end{cases}
\qquad (6)

In addition, two measures are defined for the total influence of the neighboring normal/tumoral pixels on the class prediction of the i-th pixel, as energy functions E_0(i) and E_1(i):

E_0(i) = \sum_{s=-1}^{1} \sum_{j_s=1}^{n_s} \frac{1}{d(i, j_s)^4} \times w(j_s) \qquad (7)

E_1(i) = \sum_{s=-1}^{1} \sum_{j_s=1}^{t_s} \frac{1}{d(i, j_s)^4} \times w(j_s) \qquad (8)

where n_s/t_s is the number of neighboring normal/tumoral pixels in slice s, and w(j_s) is the corresponding label weight of the j-th pixel in slice s, with s denoting the same (0), upper (+1), and lower (−1) slices relative to the slice of the pixel under consideration. The probability of pixel i being tumoral/normal is then calculated using a logistic function:

P_t(i) = \frac{1}{1 + e^{E(i)}} \qquad (9)

P_n(i) = 1 - P_t(i) \qquad (10)

where E(i) is defined as E(i) = E_0(i) - E_1(i).

The final label of the i-th pixel is then assigned as follows:

L_t(i) =
\begin{cases}
\mathrm{Tumoral} & \text{if } P_t(i) > P_n(i) \\
\mathrm{Normal} & \text{otherwise}
\end{cases}
\qquad (11)
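Putting Eqs. (6)-(11) together, the per-pixel voting step can be sketched as below; the neighbor lists and epsilon value are illustrative:

```python
import math

def tumor_probability(normal_nbrs, tumoral_nbrs):
    """Multi-slice voting step (Eqs. 6-10). Each neighbor is a pair
    (distance, weight); zero distances are replaced by a small epsilon
    as in Eq. (6). Returns the tumor probability P_t(i)."""
    eps = 1e-6
    def energy(nbrs):
        return sum(w / max(d, eps) ** 4 for d, w in nbrs)
    e0 = energy(normal_nbrs)    # influence of normal neighbors, Eq. (7)
    e1 = energy(tumoral_nbrs)   # influence of tumoral neighbors, Eq. (8)
    return 1.0 / (1.0 + math.exp(e0 - e1))  # Eq. (9) with E = E0 - E1

# A pixel close to two confidently tumoral neighbors and one distant
# normal neighbor:
p_t = tumor_probability(normal_nbrs=[(2.0, 0.5)],
                        tumoral_nbrs=[(1.0, 2.0), (1.0, 1.0)])
label = "Tumoral" if p_t > 1 - p_t else "Normal"
```

The d^4 term makes the vote sharply local, so nearby confident labels in the same and adjacent slices dominate the decision.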

Visualization of Tumor and Surrounding Vessels

A dependable measurement of the tumor and surrounding vessel volumes can help monitor the treatment outcome and assist the surgeon in decision making and/or subsequent surgical planning. A semi-automatic 3D active contour method was used to segment the vessels. The active contour algorithm [ 33 , 34 ] is an iterative approach that uses energy forces and constraints to separate an ROI. The method used here includes two stages: first, a speed function is produced by thresholding to obtain foreground/background probabilities; second, an active contour initialized with user-placed seeds evolves, with g(x) as the edge indicator. A parametric contour C representing the boundary of the segmented region evolves according to:

\frac{\partial C}{\partial t} = [\alpha\, g(C) + \beta\, k_C]\, \vec{N} \qquad (12)

where k_C represents the mean curvature of C, \vec{N} is the normal vector of the curve, and α and β are scalar parameters.

The above two stages are applied repeatedly to segment the vessels. The three main vessels, namely the SMA, SMV, and portal vein, are segmented using 3D active contours; the segmented vessels are then merged with the images containing the labeled tumoral regions for final visualization and 3D rendering using ITK-SNAP.
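The edge indicator g(x) is not specified in detail here; a common choice in active contour models is g = 1/(1 + |∇I|^2), which the sketch below evaluates on a 1D intensity profile (an assumption for illustration, not the exact function used in ITK-SNAP):

```python
def edge_indicator(intensity_profile, k=1.0):
    """Common edge indicator g(x) = 1 / (1 + k * |grad I|^2):
    close to 1 in flat regions (the contour keeps moving) and close to 0
    at strong edges such as vessel walls (the contour stops). Central
    differences on a 1D intensity profile keep the sketch simple."""
    g = []
    for x in range(1, len(intensity_profile) - 1):
        grad = (intensity_profile[x + 1] - intensity_profile[x - 1]) / 2.0
        g.append(1.0 / (1.0 + k * grad * grad))
    return g

# Flat background, a sharp vessel boundary, then flat vessel interior:
profile = [10, 10, 10, 10, 90, 90, 90, 90]
g = edge_indicator(profile)
```

In Eq. (12) this term multiplies the propagation speed α, so the contour expands freely inside the vessel lumen and halts at its wall.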


Results

This section presents the classification performance for the differentiation between normal and tumor tissue using CT images, the results of the vessel segmentation, and the 3D visualization.

Results of Feature Extraction and Classification

L1-SVM clearly outperformed the other classifiers on the evaluation criteria. The results of selected feature combinations for image patch sizes of 32, 24, and 16 using the L1-SVM classifier are shown in Tables 1, 2, and 3, respectively; the best feature subsets for each ROI are highlighted as well. For comparison, the accuracies of the other classifiers for an image patch size of 32 were 0.88 for KNN (k=3), 0.80 for the decision tree, and 0.85 and 0.88 for SVM with radial basis function (RBF) and linear kernels, respectively.

Feature Accuracy Sensitivity Specificity λ /# of selected features
DTCWT+ Moments 0.947 1 0.909 0.65/11
Gabor 0.690 0.630 0.730 1.25/18
DWT 0.842 1 0.727 0.6/9
GLCM 0.632 0.875 0.455 0.1/2
Multi scale LBP 0.737 0.750 0.727 1.7/18
LAW’S Texture 0.684 1 0.455 1.05/6
DTCWT+GLCM 0.632 0.875 0.455 0.15/4
DTCWT+LBP 0.632 0.875 0.455 0.15/7
GLCM+LBP 0.737 0.750 0.727 0.65/12
DTCWT: Dual-Tree Complex Wavelet Transform, DWT: Discrete Wavelet Transform, GLCM: Gray-Level Co-occurrence Matrix, LBP: Local Binary Pattern, LAW'S: Laws' texture energy
Table 1. Results of feature extraction for an image patch size of 32 pixels using L1-Support Vector Machine (L1-SVM)
Feature Accuracy Sensitivity Specificity λ /# of selected features
DTCWT 0.700 0.662 0.724 0.9/18
Gabor 0.576 0.446 0.657 3.2/49
DWT 0.718 0.631 0.771 0.05/6
GLCM 0.618 0.708 0.562 3/26
Multi scale LBP+ Moments 0.800 0.708 0.857 1.85/40
LAW’S Texture 0.600 0.849 0.448 2.65/16
DTCWT+GLCM 0.500 0.600 0.438 1.8/81
DTCWT+LBP 0.647 0.708 0.610 1.05/14
GLCM+LBP 0.606 0.800 0.486 1.35/25
DTCWT: Dual-Tree Complex Wavelet Transform, DWT: Discrete Wavelet Transform, GLCM: Gray-Level Co-occurrence Matrix, LBP: Local Binary Pattern, LAW'S: Laws' texture energy
Table 2. Results of feature extraction for an image patch size of 24 pixels using L1-Support Vector Machine (L1-SVM)
Feature Dice Sensitivity Specificity λ /# of selected features
DTCWT(36) 0.610 0.514 0.780 1.9/31
Gabor 0.650 0.190 0.933 0.25/28
DWT 0.669 0.494 0.775 0.95/24
GLCM 0.668 0.494 0.773 2.7/29
Multi scale LBP 0.652 0.400 0.804 0.15/13
LAW’S Texture 0.651 0.531 0.723 0.2/12
DTCWT+GLCM 0.606 0.401 0.730 0.3/79
DTCWT+LBP 0.617 0.418 0.737 0.3/16
GLCM+LBP+ Moments 0.663 0.528 0.744 0.45/28
DTCWT: Dual-Tree Complex Wavelet Transform, DWT: Discrete Wavelet Transform, GLCM: Gray-Level Co-occurrence Matrix, LBP: Local Binary Pattern, LAW'S: Laws' texture energy
Table 3. Results of feature extraction for an image patch size of 16 pixels using L1-Support Vector Machine (L1-SVM)

Results of Voting-based Multi-scale Pixel Labelling

As indicated, each image patch was treated as a sample and a sliding window was used to label each pixel located in the pancreas region. Feature selection and classification with L1-SVM were conducted for each of the three selected scales. To achieve better results in labeling each pixel, the introduced multi-slice post-processing algorithm was used. Figure 3 shows the results of applying this algorithm to three sample pancreas slices. The final results of pixel labeling for the 16, 24, and 32 patch sizes and for the multi-scale labeling with and without post-processing are presented in Table 4.

Figure 3. Three examples of multi-scale texture-based differentiation of normal and abnormal (PDAC) tissues. Blue indicates normal tissue and red indicates abnormal tissue.

Method Dice coefficient Recall Precision
Single scale (scale =32) 0.42±0.27 0.32±0.23 0.84±0.26
Single scale (scale =24) 0.70±0.14 0.85±0.015 0.55±0.19
Single scale (scale =16) 0.69±0.12 0.89±0.05 0.57±0.17
Our Proposed Multi scale approach 0.78±0.12 0.90±0.09 0.72±0.20
Table 4. Results of pixel labeling for scales 16, 24, and 32, and the multi-scale approach with multi-slice post-processing

Results of vessel Segmentation and Visualization

The results of the vessel segmentation using the semi-automatic 3D active contour in ITK-SNAP show that the Dice coefficients for the superior mesenteric artery (SMA), superior mesenteric vein (SMV), and portal vein are 0.938, 0.815, and 0.924, respectively. A sample of the 3D visualization outcome is shown in Figure 4.

Figure 4. 3D visualization of the tumor region and surrounding vessels. Yellow, red, blue, and purple mark the tumor region, superior mesenteric artery (SMA), superior mesenteric vein (SMV), and portal vein, respectively. a) Manual segmentation of tumor and surrounding vessels, b) Semi-automatic segmentation of tumor and vessels, c) 3D visualization of manually segmented tumor and vessels, and d) 3D visualization of semi-automatically segmented tumor and vessels. Best viewed in color.


Discussion

Pancreatic ductal adenocarcinoma (PDAC), the most prevalent type of pancreatic cancer, is an aggressive gastrointestinal (GI) malignancy with a high mortality rate. Since its staging is highly dependent on the extent of involvement between the tumor and surrounding vessels, 3D visualization of the tumor region and surrounding vessels can facilitate the assessment of treatment response in PDAC. For this purpose, accurate differentiation between tumor mass and healthy tissue is essential.

Some studies have focused on the automatic segmentation of the pancreas in normal cases [ 35 - 38 ]. Yet further investigation of the segmentation of the tumor region and surrounding vessels is needed to assist surgeons in decision making and/or subsequent surgical planning. The aim of this study is to introduce an algorithm for determining the tumoral region of the pancreas and segmenting the vessels, along with 3D visualization. A multi-slice, multi-scale CT texture analysis is proposed to discriminate between normal and abnormal (PDAC) tissues using statistical and wavelet-based features. Subsequently, the peripancreatic vessels were segmented employing 3D active contours. Finally, visualization and 3D rendering were performed using ITK-SNAP.

Considering the small dataset available, a patch-based algorithm is proposed for classification, in which the image patch size plays a crucial role in the differentiation between tumor regions and healthy tissues. An appropriate image patch size can capture valuable information and provide salient features for classification: a small patch size results in a low-performance classifier, while a large patch size blurs the tumor borders. A multi-scale analysis was therefore performed to harvest the benefits of both. Image patch sizes ranging from 16 to 32 pixels and their corresponding feature combinations were evaluated for selecting appropriate scales. Finally, three image patch sizes, i.e., 16×16, 24×24, and 32×32 pixels, were used as the result of a trade-off between each patch contributing enough information for extracting discriminant features, which favors larger patches, and the availability of sufficient data patches, which favors smaller patches.

In addition to the features used in previous works, new features and combinations of them [ 21 , 22 ] were used to improve the classification performance. Tables 1, 2, and 3 show the results of feature extraction for image patch sizes 32, 24, and 16, respectively. Wavelet-based features show better performance for image patch size 32. Moreover, the 1-level DTCWT features yield better results for all three patch sizes using L1-SVM, whereas the 2-level DTCWT features were tested with no acceptable results. Among all filter banks employed for DWT, bior3.1 showed the best performance. For feature extraction from the Gabor wavelet, the energy of the sub-bands in 5 scales and 8 orientations was calculated; all FOS features were tested, but the energy features demonstrated better performance. DTCWT outperforms DWT and Gabor. Although Gabor can perform considerably better with larger λ values, it needs much more runtime for feature extraction compared to DTCWT. For image patch size 24, multi-scale LBP is effective compared to the other methods, and for image patch size 16, a combination of single-scale LBP, GLCM, and statistical moments exhibits the best performance. It can be concluded that local features can be extracted using statistical approaches such as LBP and GLCM. With DTCWT, the performance of the classifier increased with larger image patch sizes. As shown in Table 4, the best precision is achieved by patch size 32, however with lower accuracy and recall. For smaller scales, higher accuracy and recall are obtained at the expense of lower precision. Therefore, multi-scale prediction can strike a balance between accuracy, recall, and precision. Table 4 also shows that the voting-based approach using the logistic function enhances the results considerably; aggregating the information of 3 adjacent slices improves the results, especially in detecting small tumors.

To the best of our knowledge, few studies have investigated the differentiation between normal and tumoral (PDAC) tissues using CT images. Previous studies on PDAC used simple texture features for tasks such as survival estimation and assessment of treatment response. In [ 28 ], the authors performed a texture analysis for discriminating normal from tumoral pancreatic tissues using EUS images and achieved good sensitivity and specificity. Although the results are acceptable, EUS is still considered an invasive technique. A recent study [ 29 ] on PDAC used a segmentation-for-classification approach to screen for PDAC and searched for tumoral tissues on CT images using a large dataset. Their results on tumor segmentation indicate good sensitivity and specificity, but low accuracy.

Furthermore, the vessels were segmented using 3D active contours, and the tumor and surrounding vessels were visualized in 3D using ITK-SNAP. The segmented tumor and vessel regions can be corrected by a radiologist prior to visualization. Quality evaluation of the proposed method relies on the experience of an expert radiologist and pathologist. Given a larger dataset, deep learning could be used to obtain better results.


Conclusion

In this paper, an approach was introduced for visualizing the tumor and surrounding vessels in PDAC. Healthy and tumoral pancreatic tissues were differentiated using multi-scale texture analysis with statistical and wavelet-based features and an L1-SVM classifier. The experimental results show that multi-scale texture analysis can strike a balance between recall and precision. 3D visualization of the tumor region and surrounding vessels can facilitate the assessment of treatment response in PDAC; however, the 3D software must be further developed for integration into clinical applications. The main limitation of this study is its small sample size; data gathering is therefore underway for automatic segmentation of the pancreas and detection of tumoral tissues using deep learning methods. In the next phase, automatic segmentation of the vessels without placing seed points will be pursued as well.


Acknowledgment

The authors would like to thank the Research Centre of Biomedical Technology and Robotics, Tehran University of Medical Sciences. This study is part of a PhD thesis supported by Tehran University of Medical Sciences (Registration No: 37907-30-01-97).

Authors’ Contribution

T. Mahmoudi was involved in conceptualization, methodology, software design, validation and writing the paper. AR. Radmard participated in conceptualization, data collection and checking the labels. A. Salehnia was responsible for labeling the data. A. Ahmadian was involved in supervision, conceptualization, methodology and checking the final version of the manuscript. AH. Davarpanah participated in data collection. R. Kafieh was involved in methodology. H. Arabalibeik was involved in supervision, conceptualization, methodology, validation, investigation and editing the manuscript. All authors read the final version of the manuscript and approved it.

Ethical Approval

This study was approved by the Tehran University of Medical Sciences Institutional Review Board (IRB) (IR.TUMS.MEDICINE.REC.1397.119) and followed the tenets of the Declaration of Helsinki.

Informed consent

This is a retrospective study; all patient-identifying information was removed.


Funding

This study was supported by Tehran University of Medical Sciences, Tehran, Iran (Grant No. 37907-30-01-97).

Conflict of Interest



  1. Baliyan V, Kordbacheh H, Parakh A, Kambadakone A. Response assessment in pancreatic ductal adenocarcinoma: role of imaging. Abdom Radiol. 2018; 43(2):435-44. DOI | PubMed
  2. Al-Hawary MM, Francis IR, Chari ST, et al. Pancreatic ductal adenocarcinoma radiology reporting template: consensus statement of the Society of Abdominal Radiology and the American Pancreatic Association. Radiology. 2014; 270(1):248-60. DOI | PubMed
  3. Choi MH, Lee YJ, Yoon SB, et al. MRI of pancreatic ductal adenocarcinoma: texture analysis of T2-weighted images for predicting long-term outcome. Abdom Radiol. 2019; 44(1):122-30. DOI | PubMed
  4. Eilaghi A, Baig S, Zhang Y, et al. CT texture features are associated with overall survival in pancreatic ductal adenocarcinoma–a quantitative analysis. BMC Med Imaging. 2017; 17(1):38. Publisher Full Text | DOI | PubMed
  5. Callery MP, Chang KJ, Fishman EK, et al. Pretreatment assessment of resectable and borderline resectable pancreatic cancer: expert consensus statement. Ann Surg Onco. 2009; 16(7):1727-33. DOI | PubMed
  6. Cassinotto C, Mouries A, Lafourcade JP, et al. Locally advanced pancreatic adenocarcinoma: reassessment of response with CT after neoadjuvant chemotherapy and radiation therapy. Radiology. 2014; 273(1):108-16. DOI | PubMed
  7. Marconi S, Pugliese L, Del Chiaro M, et al. An innovative strategy for the identification and 3D reconstruction of pancreatic cancer from CT images. Updates Surg. 2016; 68(3):273-8. DOI | PubMed
  8. Boninsegna E, Negrelli R, Zamboni GA, et al. Assessing treatment response in pancreatic cancer: role of different imaging criteria. European Congress of Radiology-ECR; 2017.
  9. Ciaravino V, Cardobi N, De Robertis R, et al. CT texture analysis of ductal adenocarcinoma downstaged after chemotherapy. Anticancer Res. 2018; 38(8):4889-95. DOI | PubMed
  10. Goh V, Ganeshan B, Nathan P, et al. Assessment of response to tyrosine kinase inhibitors in metastatic renal cell cancer: CT texture as a predictive biomarker. Radiology. 2011; 261(1):165-71. DOI | PubMed
  11. Ganeshan B, Goh V, Mandeville HC, et al. Non–small cell lung cancer: histopathologic correlates for texture parameters at CT. Radiology. 2013; 266(1):326-36. DOI | PubMed
  12. Ng F, Ganeshan B, Kozarski R, Miles KA, Goh V. Assessment of primary colorectal cancer heterogeneity by using whole-tumor texture analysis: contrast-enhanced CT texture as a biomarker of 5-year survival. Radiology. 2013; 266(1):177-84. DOI | PubMed
  13. Ganeshan B, Skogen K, Pressney I, Coutroubis D, Miles K. Tumour heterogeneity in oesophageal cancer assessed by CT texture analysis: preliminary evidence of an association with tumour metabolism, stage, and survival. Clin Radiol. 2012; 67(2):157-64. DOI | PubMed
  14. Haralick RM, Shanmugam K, Dinstein IH. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics. 1973; SMC-3(6):610-21. DOI
  15. Albregtsen F. Statistical texture measures computed from gray level run-length matrices. University of Oslo; 1995.
  16. Ojala T, Pietikainen M, Maenpaa T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002; 24(7):971-87. DOI
  17. Laws KI. Textured image segmentation. University of Southern California, Image Processing Institute: Los Angeles; 1980.
  18. Arivazhagan S, Ganesan L. Texture classification using wavelet transform. Pattern Recognition Letters. 2003; 24(9-10):1513-21. DOI
  19. Ahmadian A, Mostafa A. IEEE: Cancun, Mexico; 2003. DOI
  20. Kingsbury N. Complex wavelets for shift invariant analysis and filtering of signals. Applied and Computational Harmonic Analysis. 2001; 10(3):234-53. DOI
  21. Yang P, Yang G. Feature extraction using dual-tree complex wavelet transform and gray level co-occurrence matrix. Neurocomputing. 2016; 197:212-20. DOI
  22. Yang P, Zhang F, Yang G. Fusing DTCWT and LBP based features for rotation, illumination and scale invariant texture classification. IEEE Access. 2018; 6:13336-49. DOI
  23. Chakraborty J, Langdon-Embry L, Escalon JG, et al. Texture analysis for survival prediction of pancreatic ductal adenocarcinoma patients with neoadjuvant chemotherapy. SPIE Medical Imaging; 2016. DOI
  24. Attiyeh MA, Chakraborty J, Doussot A, et al. Survival prediction in pancreatic ductal adenocarcinoma by quantitative computed tomography image analysis. Ann Surg Oncol. 2018; 25(4):1034-42. Publisher Full Text | DOI | PubMed
  25. Chakraborty J, Langdon-Embry L, Cunanan KM, et al. Preliminary study of tumor heterogeneity in imaging predicts two year survival in pancreatic cancer patients. PLoS One. 2017; 12(12). Publisher Full Text | DOI | PubMed
  26. Yun G, Kim YH, Lee YJ, Kim B, Hwang JH, Choi DJ. Tumor heterogeneity of pancreas head cancer assessed by CT texture analysis: association with survival outcomes after curative resection. Sci Rep. 2018; 8(1):1-10. Publisher Full Text | DOI | PubMed
  27. Sandrasegaran K, Lin Y, Asare-Sawiri M, Taiyini T, Tann M. CT texture analysis of pancreatic cancer. Eur Radiol. 2019; 29(3):1067-73. Publisher Full Text | PubMed
  28. Zhang MM, Yang H, Jin ZD, Yu JG, Cai ZY, Li ZS. Differential diagnosis of pancreatic cancer from normal tissue with digital imaging processing and pattern recognition based on a support vector machine of EUS images. Gastrointest Endosc. 2010; 72(5):978-85. DOI | PubMed
  29. Zhu Z, Xia Y, Xie L, Fishman EK, Yuille AL. Springer, Cham; 2019.
  30. Chu LC, Park S, Kawamoto S, Fouladi DF, et al. Utility of CT radiomics features in differentiation of pancreatic ductal adenocarcinoma from normal pancreatic tissue. AJR Am J Roentgenol. 2019; 213(2):349-57. DOI | PubMed
  31. Nabizadeh N, Kubat M. Brain tumors detection and segmentation in MR images: Gabor wavelet vs. statistical features. Computers & Electrical Engineering. 2015; 45:286-301. DOI
  32. Zhu J, Rosset S, Tibshirani R, Hastie TJ. MIT Press: USA; 2003.
  33. Caselles V, Kimmel R, Sapiro G. Geodesic active contours. International Journal of Computer Vision. 1997; 22(1):61-79. DOI
  34. Zhu SC, Yuille A. Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1996; 18(9):884-900. DOI
  35. Roth H, Oda M, Shimizu N, Oda H, Hayashi Y, Kitasaka T, et al. SPIE: Houston, Texas, United States; 2018. DOI
  36. Zhou Y, Xie L, Shen W, Wang Y, Fishman EK, Yuille AL. Springer, Cham; 2017.
  37. Zhu Z, Xia Y, Shen W, Fishman EK, Yuille AL. A 3D coarse-to-fine framework for automatic pancreas segmentation. arXiv preprint arXiv:1712.00201; 2017.
  38. Farag A, Lu L, Roth HR, Liu J, Turkbey E, Summers RM. A bottom-up approach for pancreas segmentation using cascaded superpixels and (deep) image patch labeling. IEEE Transactions on Image Processing. 2016; 26(1):386-99. DOI