Document Type : Original Research

Authors

1 PhD Candidate, Department of Electrical Engineering, Sahand University of Technology, Tabriz, Iran

2 PhD, Department of Electrical Engineering, Sahand University of Technology, Tabriz, Iran

3 BSc, Department of Electrical Engineering, Sahand University of Technology, Tabriz, Iran

4 MD, Department of physical medicine & Rehabilitation Center, Tabriz Medical Sciences University, Tabriz, Iran

Abstract

Background: Foot pressure assessment systems are widely used to diagnose foot pathologies. The human foot plays an important role in maintaining the biomechanical function of the lower extremities, including providing balance and stabilizing the body during gait.
Objective: There are different types of assessment tools with different capabilities, which are discussed in detail in this paper. In this project, we introduce a new camera-based pressure distribution estimation system which provides a numerical estimate in addition to a visual illustration of the pressure distribution of the sole.
Material and Methods: In this analytical study, we propose accurate footprint segmentation using a hidden Markov random field (HMRF) model. In the first step, an image is captured from the traditional podoscope device. Then, the HMRF-EM image segmentation scheme is applied to extract the part of the sole in contact with the ground. Finally, based on a simple calibration method, the pressure per mm2 is estimated to give an accurate pressure distribution measure.
Results: A significant and usable estimate of foot pressure is introduced in this article. The main drawback of existing systems is the low resolution of their sensors, which is solved here by using a high-resolution camera as the sensor. Another problem is the patchy edges extracted by those systems, which is automatically resolved in the proposed device by an accurate image segmentation algorithm.
Conclusion: We introduced a camera-based plantar pressure assessment tool that uses an HMRF-EM-based method, explained in detail in this paper, which gives an excellent sole segmentation from the captured images.

Keywords

Introduction

Plantar pressure data is information about each plantar region in contact with the ground. Analysis of plantar pressure can provide useful information about the dynamic loading of each foot [ 1 ]. The data can be dynamic or static. Pressure is defined as the vertically applied force from the sole of the foot to the surface of the ground per unit area. Because feet are the basic body parts that control gait, loading distribution and other functional activities, measurement of plantar pressure distribution and timing information provides valuable insight into a variety of static and dynamic foot problems [ 2 ]. These foot problems are a direct outcome of modern lifestyles, such as continuous use of transportation, obesity, and prolonged periods of physical inactivity [ 3 ]. In addition to congenital problems, in some cases, high pressures from ill-fitting orthotics, prosthetics or footwear can cause pain to people with intact sensation [ 4 ].

Collected data from the sole of the foot can help an orthopedist to recognize problems associated with musculoskeletal, integumentary, and neurological disorders [ 2 ]. As an application, plantar pressure is a useful parameter in footwear design [ 4 , 5 ]. In fact, by understanding the pressure distribution of each patient’s feet, the proportional insole can be designed for them.

There are a wide variety of measurement systems available to evaluate patients' foot pressure. These systems use different sensors depending on their assessment technology. Four kinds of sensors are common in these measurement systems: resistive, capacitive, piezoelectric [ 6 ] and piezoresistive sensors [ 7 ], which provide different resolutions and measurement characteristics. There are also three kinds of popular structures: platform, insole, and single-transducer systems [ 6 ].

The most important advantage of insole systems in comparison with platforms is that the walking procedure is more natural with these wearable gadgets. However, sensor slipping is an important issue in these systems and can make the results faulty and unreliable [ 7 ]. Sensor-based systems also suffer from limitations such as pressure range, nonlinearity, non-repeatability, hysteresis [ 8 ], sensor range, and resolution, problems that are inherently avoided in image-based systems. Moreover, MEMS sensors, which have advantages over conventional sensors (e.g., high accuracy and reliability, lower cost, and lower power consumption [ 7 ]), have been used to design a sensor pad [ 8 ]. There is also another type of pressure assessment tool, called a podoscope, which uses imaging to give a visual illustration of the stress distribution between the sole and the contact area. To the best of our knowledge, usual podoscope devices do not give a mathematical illustration of the pressure distribution. In this study, we use computer vision techniques to extract the pressure pattern of the feet in a digital podoscope system.

The remaining part of the paper is organized as follows: Section Podoscope introduces Podoscope and the available devices. Section Computer Vision Techniques briefly studies related works on image segmentation based on a taxonomy with an example and reviews the main idea of each state-of-the-art work. Implemented segmentation algorithm and the computations for extracting the pressure distribution are discussed in the section Pressure Distribution Computation.

Material and Methods

Podoscope

Design

In this analytical study, we propose accurate footprint segmentation using a hidden Markov random field model. As mentioned in the previous section, the proposed system is built on the traditional podoscope. This type of evaluation is very common because it is inexpensive and has a simple architecture. There is a wide variety of designed devices, as seen in Figure 1. Usually, a podoscope consists of a wooden box with a glass standing area, or a full glass or acrylic sheet formed with two 90-degree curves. A mirror is placed under the standing area at a 45-degree angle or parallel to the standing area.

Figure 1. Some available podoscope devices on the market.

Camera-based Podoscope

The camera-based podoscope is a simple podoscope with a 45-degree mirror, equipped with a camera on the side of the device which captures real-time video and provides images to compute the pressure distribution with high resolution. Computational details are explained in the following sections. A prototype camera-based podoscope device has been designed, fabricated, and tested to show the effectiveness of the computer vision algorithms. Figure 2 shows the prototype device, which is placed in a laboratory to acquire images from patients and build an all-inclusive patient database. The main advantage of the proposed system is that there is no need for any sensors or internal hardware. Only a camera is needed, connected to a standard computer, and the computation and image processing steps are performed on this general-purpose computer.

Figure 2. Prototype camera-based podoscope device mounted in the laboratory.
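On the computer side, the capture pipeline can be as simple as reading frames from the attached camera. The following is a minimal sketch, assuming the podoscope camera is exposed as an ordinary webcam (device index 0) and using OpenCV; the file name is illustrative only and this is not the exact acquisition software of the prototype.

```python
# Minimal capture sketch (assumed setup): the podoscope camera appears as a
# standard UVC webcam on the attached computer; device index 0 is an assumption.
import cv2

cap = cv2.VideoCapture(0)                     # open the podoscope camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)        # 640x480 @ 30 fps, as in Figure 3
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()                        # grab one frame of the sole
if ok:
    cv2.imwrite("sole_capture.png", frame)    # store it for segmentation
cap.release()
```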

Computer Vision Techniques

After capturing sole images, a segmentation process is applied to the image to extract the sole of the foot. In fact, the region of interest is the part of the sole that reflects the light from the light source toward the camera. Figure 3 illustrates a test image captured by a camera in the designed podoscope. An overview of the proposed algorithm is presented in Figure 4.

Figure 3. A sample image captured by the camera placed in the podoscope. Images have a medium resolution (640×480 pixels) and the camera offers a 30 fps frame rate for video recording.

Figure 4. Overview of the proposed methodology

Image segmentation

Image segmentation algorithms aim to split an image into meaningful sectors, extracting homogeneous regions in the image and representing it in a new way that makes it ready for further processing. New labels are assigned to each group of pixels that have a meaningful relationship with each other. After segmentation, groups with similar labels can be merged to represent a segmented object. One can categorize state-of-the-art studies on image segmentation into three main classes: spatially blind, spatially guided, and miscellaneous methods [ 9 ].

Spatially blind approaches

In this type of segmentation, the process is performed in intensity/color space. This means that spatial information is not used in the segmentation procedure and the segmentation only considers pixel/voxel intensities. Spatially blind approaches are divided into two main classes: clustering and histogram thresholding. In clustering-based segmentation methods, a one-dimensional (for grayscale images) or multi-dimensional (for color images) point cloud is defined, and the cloud is partitioned using predefined metrics/objective functions to merge similar pixel groups into clusters. Some examples of this approach are the mean shift clustering algorithm [ 10 ], fuzzy clustering [ 11 ] and Voronoi tessellation [ 12 ] algorithms. Methods based on clustering algorithms are easy to implement, which is their most important advantage. The other type of spatially blind approach is histogram thresholding, which does not need prior information to segment images into clusters. For example, in [ 13 ] a multi-thresholding scheme is used which is based on segmentation of subsets of bands. In [ 14 ], Nie introduced an algorithm which aims to minimize the Tsallis cross-entropy between the original image and the thresholded image. The sketch below illustrates both families on a grayscale sole image.
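As a hedged illustration of the two spatially blind families, the following OpenCV sketch applies Otsu histogram thresholding and k-means clustering of the 1-D intensity point cloud; the file name, cluster count, and choice of Otsu are assumptions for demonstration and not part of the proposed method.

```python
# Illustrative sketch of the two spatially blind families described above.
import cv2
import numpy as np

gray = cv2.imread("sole_capture.png", cv2.IMREAD_GRAYSCALE)

# Histogram thresholding: Otsu picks the threshold from the histogram alone.
_, otsu_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Clustering: k-means on the 1-D intensity point cloud (no spatial information).
samples = gray.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(samples, 3, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)
kmeans_map = labels.reshape(gray.shape)       # per-pixel cluster index
```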

Spatially guided approaches

In spatially guided approaches, the relationship between pixels plays an important role in segmentation. In fact, in this type of segmentation, strong spatial constraints are imposed on the regions. Image segmentation based on spatially guided methods can be classified into three main classes: region-based, energy-based, and region-and-contour-based. As an example of region-based methods, which are popular, Subudhi et al. [ 15 ] proposed an algorithm based on region growing which uses an edge-preserving segmentation technique for segmenting aerial images. In energy-based segmentation algorithms, the main goal is to minimize a cost function. For example, the active contour [ 16 ] is an energy-minimizing spline guided by external constraint forces and image forces. In [ 17 ], Wang proposed a Gaussian mixture model-based hidden Markov random field to perform image segmentation and 3D volume segmentation. In [ 18 ], the problem of image segmentation is addressed by finding an optimal color-texture segmentation of a color textured image by regarding it as a minimum cut problem in a weighted graph.

There are many image segmentation techniques available in the literature which can offer results close to human segmentation. In this research, the GMM-based hidden Markov random field model [ 17 ] is exploited as a robust and accurate method to extract the part of the sole touching the glass in the image captured from the podoscope. A hidden Markov random field is derived from the hidden Markov model, a stochastic process generated by a Markov chain [ 19 ], which can be represented as a simple dynamic Bayesian network. In image processing and vision applications, an image is converted to a group of nodes where each node corresponds to a pixel or a superpixel. Then, a model is defined to explain the color values of all pixels using hidden variables associated with the nodes. Afterward, a joint probabilistic model is built over the variables and pixel values. By grouping hidden variables, the direct statistical dependencies between hidden variables are declared. The groups of hidden variables are often pairs, depicted as edges in a graph [ 20 ]. Different properties of Markov random fields are illustrated in Figure 5. These Markov model graphs can be a 4-neighbour connected grid of image pixels, an 8-neighbour connected pixel grid, or they can have an irregular architecture.

Figure 5. Different properties for MRFs which can have a grid-like (top) or irregular architecture (bottom) [20].

In the above-mentioned study, a combination of a Gaussian mixture model and a hidden Markov random field, which originates from the Markov random field, is used to perform 2D and 3D segmentation. A Gaussian mixture model is preferred to a single Gaussian model because it is a more powerful tool for modeling complex distributions. In this method, given an image $Y=(y_1,y_2,\dots,y_n)$, where $n$ is the number of pixels in the image and $y_i$ is the intensity value of the $i$-th pixel, we want to infer a label configuration $X=(x_1,x_2,\dots,x_n)$, where each $x_i$ belongs to the set of all possible labels $L$. According to the MAP criterion, we have:

$X^* = \arg\max_{X} \{P(Y \mid X, \Theta)\, P(X)\}$ (1)

where $P(X)$ is the prior probability. The joint likelihood probability is defined as Eq. (2).

$P(Y \mid X, \Theta) = \prod_i P(y_i \mid X, \Theta) = \prod_i P(y_i \mid x_i, \theta_{x_i})$ (2)

$P(y_i \mid x_i, \theta_{x_i})$ is a Gaussian distribution with parameter set $\theta_{x_i} = (\mu_{x_i}, \sigma_{x_i})$. If there is prior knowledge about the distribution of the intensity in the background and foreground of the image, we can formulate the problem as a Markov random field in which the parameter set $\Theta = \{\theta_l \mid l \in L\}$ can be learned from training data. Using hidden Markov random fields, the parameter set is learned in an unsupervised manner; that is, there is no need for any prior knowledge about the foreground/background intensity distribution. Thus, the expectation-maximization (EM) algorithm is employed to tackle the HMRF problem, where the parameter set and the label configuration $X$ are estimated alternately. In the EM algorithm, using the current parameter set $\Theta$, the missing part is estimated as $\hat{X}$. Then, it is employed to form the complete dataset $\{\hat{X}, Y\}$. The new parameter set is estimated by maximizing the expectation of the complete-data log likelihood [ 19 ].

There are five major steps to implementing the HMRF-EM algorithm, which are discussed below; a minimal code sketch follows the steps:

1. First of all, we have some initial parameter set $\Theta^{(0)}$.

2. The likelihood distribution is computed as $P^{(t)}(y_i \mid x_i, \theta_{x_i})$.

3. The MAP estimator is employed to estimate the labels using the current parameter set $\Theta^{(t)}$:

$X^{(t)} = \arg\max_{X \in \chi} \{P(Y \mid X, \Theta^{(t)})\, P(X)\} = \arg\min_{X \in \chi} \{U(Y \mid X, \Theta^{(t)}) + U(X)\}$ (3)

4. The posterior distribution is computed for all $l \in L$ and all pixels $y_i$ using Bayes' rule:

$P^{(t)}(l \mid y_i) = \dfrac{G(y_i; \theta_l)\, P(l \mid x_{N_i}^{(t)})}{P^{(t)}(y_i)}$ (4)

where $x_{N_i}^{(t)}$ is the neighborhood configuration of $x_i^{(t)}$, and:

$P^{(t)}(y_i) = \sum_{l \in L} G(y_i; \theta_l)\, P(l \mid x_{N_i}^{(t)})$ (5)

$P(l \mid x_{N_i}^{(t)}) = \dfrac{1}{Z} \exp\Big(-\sum_{j \in N_i} V_c(l, x_j^{(t)})\Big)$ (6)

5. Then the parameters are updated using $P^{(t)}(l \mid y_i)$:

$\mu_l^{(t+1)} = \dfrac{\sum_i P^{(t)}(l \mid y_i)\, y_i}{\sum_i P^{(t)}(l \mid y_i)}$ (7)

$\big(\sigma_l^{(t+1)}\big)^2 = \dfrac{\sum_i P^{(t)}(l \mid y_i)\, (y_i - \mu_l^{(t+1)})^2}{\sum_i P^{(t)}(l \mid y_i)}$ (8)
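The five steps can be sketched as follows for a grayscale image. This is a hedged, simplified illustration rather than the authors' implementation: a single Gaussian per label replaces the mixture for brevity, the MAP step is an ICM-style energy minimization over a 4-neighbour Potts prior, and all function names (hmrf_em, neighbour_disagreement, gauss) are ours.

```python
# Simplified HMRF-EM sketch for a 2-D grayscale array y (float values).
import numpy as np

def gauss(y, mu, sigma):
    # Gaussian likelihood G(y; mu, sigma), evaluated element-wise.
    return np.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def neighbour_disagreement(labels, l):
    # Sum of clique potentials V_c(l, x_j) = 0.5 * (1 - [l == x_j]) over 4-neighbours.
    pad = np.pad(labels, 1, mode="edge")
    return sum(0.5 * (pad[a:a + labels.shape[0], b:b + labels.shape[1]] != l)
               for a, b in [(0, 1), (2, 1), (1, 0), (1, 2)])

def hmrf_em(y, n_labels=3, n_iter=10):
    # Step 1: initial parameter set from a crude quantile split of intensities.
    qs = np.quantile(y, np.linspace(0, 1, n_labels + 1))
    labels = np.digitize(y, qs[1:-1])
    mu = np.array([y[labels == l].mean() for l in range(n_labels)])
    sigma = np.array([y[labels == l].std() + 1e-6 for l in range(n_labels)])
    for _ in range(n_iter):
        # Steps 2-3: MAP labelling by minimising likelihood energy + prior energy.
        energy = np.stack([(y - mu[l]) ** 2 / (2 * sigma[l] ** 2) + np.log(sigma[l])
                           + neighbour_disagreement(labels, l)
                           for l in range(n_labels)])
        labels = energy.argmin(axis=0)
        # Step 4: posterior P(l | y_i) via Bayes' rule with the MRF prior.
        prior = np.stack([np.exp(-neighbour_disagreement(labels, l))
                          for l in range(n_labels)])
        joint = np.stack([gauss(y, mu[l], sigma[l]) for l in range(n_labels)]) * prior
        post = joint / (joint.sum(axis=0) + 1e-12)
        # Step 5: update mu and sigma from the posterior weights.
        w = post.reshape(n_labels, -1)
        mu = (w * y.ravel()).sum(axis=1) / w.sum(axis=1)
        sigma = np.sqrt((w * (y.ravel() - mu[:, None]) ** 2).sum(axis=1)
                        / w.sum(axis=1)) + 1e-6
    return labels

# Example use (assumed input): labels = hmrf_em(gray_image.astype(np.float64))
```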

As explained in [ 17 ], to estimate the labels using the MAP method, we have to find $X^*$ which minimizes the total posterior energy:

$X^* = \arg\min_{X \in \chi} \{U(Y \mid X, \Theta) + U(X)\}$ (9)

We have the likelihood energy as:

$U(Y \mid X, \Theta) = \sum_i U(y_i \mid x_i, \theta_{x_i}) = \sum_i \Big[\dfrac{(y_i - \mu_{x_i})^2}{2\sigma_{x_i}^2} + \ln \sigma_{x_i}\Big]$ (10)

The prior energy function is defined as:

$U(X) = \sum_{c \in C} V_c(X)$ (11)

where $V_c(X)$ is the clique potential and $C$ is the set of all possible cliques:

$V_c(x_i, x_j) = \dfrac{1}{2}\big(1 - I_{x_i x_j}\big)$ (12)

$I_{x_i x_j} = \begin{cases} 0 & \text{if } x_i \neq x_j \\ 1 & \text{if } x_i = x_j \end{cases}$ (13)
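To make Eqs. (9)-(13) concrete, the small sketch below scores a candidate label map by its total posterior energy; the MAP estimate of Eq. (9) is the configuration with the smallest such score. The single-Gaussian data term and the function name are assumptions made for illustration only.

```python
# Total posterior energy U(Y|X,Theta) + U(X) for a grayscale image y (float
# H x W array), an integer label map x, and per-label parameters mu, sigma.
import numpy as np

def total_posterior_energy(y, x, mu, sigma):
    # Likelihood energy, Eq. (10): Gaussian data term summed over all pixels.
    u_lik = ((y - mu[x]) ** 2 / (2 * sigma[x] ** 2) + np.log(sigma[x])).sum()
    # Prior energy, Eqs. (11)-(13): 0.5 for every unlike-labelled 4-neighbour pair.
    u_prior = 0.5 * ((x[:, 1:] != x[:, :-1]).sum() + (x[1:, :] != x[:-1, :]).sum())
    return u_lik + u_prior
```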

As an adaptive model, the HMRF is defined with respect to a pair of random variables $(X, Y)$, while the MRF is defined only with respect to $X$ [ 19 ]. Using a Gaussian mixture model instead of a single Gaussian distribution, the parameter set is defined as in Eq. (14), with a weight for each component.

$\Theta_l = \{(\mu_{l,1}, \sigma_{l,1}, w_{l,1}), \dots, (\mu_{l,g}, \sigma_{l,g}, w_{l,g})\}$ (14)

The algorithm, originally formulated for single-channel grayscale images, is extended to three channels so that it can be applied to color images. In this study, three components are used for the Gaussian mixture model.
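One possible way to obtain the mixture parameters of Eq. (14) for three-channel data is sketched below using scikit-learn; the choice of library and the helper name are assumptions for illustration, not the authors' implementation.

```python
# Fit a 3-component GMM (Eq. 14) to the 3-channel pixel values of each label's region.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_label_gmms(image, labels, n_labels=3, n_components=3):
    # image: H x W x 3 array (e.g. a converted color space), labels: H x W label map.
    pixels = image.reshape(-1, 3)
    flat = labels.ravel()
    return [GaussianMixture(n_components=n_components).fit(pixels[flat == l])
            for l in range(n_labels)]

# The per-label likelihood P(y_i | x_i = l) can then be evaluated with
# np.exp(gmms[l].score_samples(pixels)).
```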

We tested the method in different color spaces, e.g., RGB, HSV, YDbDr, and YPbPr, and the results are illustrated in Figure 6.

Figure 6. Comparison of different color spaces in the preprocessing of the segmentation process; HSV (a), CIELAB (b), YPbPr (c), YIQ (d), RGB (e) and YDbDr (f).

As illustrated in Figure 6, the best results are obtained using the YDbDr color space. The little toe of the right foot is segmented as part of the region of interest using the YDbDr space, while very few pixels (or no pixels) are allocated to the region of interest using the other color spaces. Naturally, the little toe of the left foot is not segmented because it does not reflect the light source toward the camera (see Figure 3). In this way, both the luminance (Y) and chrominance (Db, Dr) components are used to separate the lightened part of the foot from the rest of the sole, while some other segmentation methods would return the whole sole as a single segmented area. Figure 7 illustrates the final segmented image, which is the product of the segmented mask area and the original image.

Figure 7. Final segmented image which is a product of the main image and binary mask of the segmented area.
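For reference, the RGB-to-YDbDr conversion used in the preprocessing step can be written as a single matrix multiplication. The sketch below uses the standard SECAM YDbDr coefficients and assumes float RGB values in [0, 1]; it is an illustrative transcription, not the exact preprocessing code of the system.

```python
# RGB -> YDbDr conversion (standard SECAM matrix, assumed float RGB in [0, 1]).
import numpy as np

RGB_TO_YDBDR = np.array([[ 0.299,  0.587,  0.114],
                         [-0.450, -0.883,  1.333],
                         [-1.333,  1.116,  0.217]])

def rgb_to_ydbdr(rgb):
    # rgb: H x W x 3 array; returns the luminance Y and chrominance Db, Dr planes.
    return rgb @ RGB_TO_YDBDR.T
```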

Pressure Distribution Computation

After performing the segmentation process, the pressure distribution is computed using the intensity values of the image. To do this, the segmented image is converted to a grayscale image (Figure 8, top-left). Then, the pixel values are stretched to provide better contrast between pixels (Figure 8, top-right). Afterward, the image is divided into specified intensity ranges, and each range is shown in a predefined color to reveal probable pressure distribution disorders. For example, it is obvious from Figure 8 (bottom) that the patient has a loading distribution problem in his/her right foot.

Figure 8. Computation of pressure distribution using segmented image

The segmented image in Figure 8 is only for visualization. In order to estimate the pressure distribution, we need to calibrate the image screen. To this end, we should compute the area occupied by each pixel in world coordinates. For example, suppose each pixel in the image corresponds to 1 square millimeter in global coordinates. For a segmented m×n image, the summation of the positive intensity values is computed as the parameter T in Eq. (15). Then, each pixel value is divided by T to obtain each pixel's coefficient (Pi).

$T = \sum_{i=1}^{m \times n} im(i)$ (15)

$P_i = \dfrac{im(i)}{T}$ (16)

The product of the patient's weight and $P_i$ defines each pixel's share of the patient's weight. In effect, the load is expressed in a new unit, N/pixel. Since each pixel corresponds to an area of 1 mm2 on the contact surface, this unit can equivalently be written as N/mm2.
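An end-to-end sketch of this computation is given below: contrast-stretch the segmented grayscale sole, pseudo-color it for display, and convert intensities to N/mm2 via Eqs. (15)-(16). The file name, the 1 pixel = 1 mm2 calibration, the colormap, and the 70 kg patient weight are assumptions for illustration.

```python
# Pressure estimate from the segmented sole image (assumed file and calibration).
import cv2
import numpy as np

seg = cv2.imread("segmented_sole.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Contrast stretching of the non-zero (foot) pixels.
mask = seg > 0
lo, hi = seg[mask].min(), seg[mask].max()
stretched = np.zeros_like(seg)
stretched[mask] = 255 * (seg[mask] - lo) / (hi - lo + 1e-9)

# Pseudo-colour visualisation of the intensity ranges (cf. Figure 8, bottom).
colour_map = cv2.applyColorMap(stretched.astype(np.uint8), cv2.COLORMAP_JET)

# Numerical estimate: per-pixel share of body weight, Eqs. (15)-(16).
weight_newton = 70.0 * 9.81                   # assumed 70 kg patient
T = stretched.sum()                           # Eq. (15)
P = stretched / T                             # Eq. (16), per-pixel coefficient
pressure = weight_newton * P                  # N per pixel = N/mm^2 (1 px = 1 mm^2)
```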

Results

In order to show the effectiveness of the exploited segmentation method, we have compared the segmentation results of three benchmark algorithms with the results obtained by the hidden Markov random field method. The algorithms are the active contour model [ 16 ], a well-known method in image segmentation applications; the spatial fuzzy clustering method [ 21 ], a fast and useful image segmentation algorithm; and KNN matting [ 22 ], a more recently introduced method. Figure 9 illustrates the images segmented by these algorithms.

Figure 9. Visual comparison between the segmented images obtained by different algorithms: active contour model (top-left), fuzzy clustering method (top-right), KNN matting (bottom-left) and HMRF (bottom-right).

To have a fair comparison, we have compared the best results obtained by each method. For example, in the fuzzy clustering method, the best results are obtained using a 5-cluster segmentation scheme; the cluster corresponding to the underfoot area is selected as the cluster of interest and the other clusters are labeled as background. In the KNN matting method, the best result is obtained by setting the lambda value to 100 and the input window size to 15. The output of each method is compared to a human perceptual ground truth which contains manual segmentations produced by 10 different persons. We have also defined a voted image, in which a pixel is labeled as foreground if it is selected as foreground in at least a threshold fraction of the ground truth images. For example, a 50% voted image is an image whose foreground pixels are those selected as foreground in more than 5 of the 10 ground truth images. To quantify the consistency between different image segmentations, Martin et al. [ 23 ] introduced error measures to evaluate segmented images. In fact, they defined two error measures based on a definition of the local refinement error (LRE), which measures the degree of overlap of each cluster in the segmented and ground-truth images. Let $S$ and $S'$ be two segmentations of an image $X = \{x_1, \dots, x_N\}$ consisting of $N$ pixels. The LRE is defined as follows:

$LRE(S, S', x_i) = \dfrac{|C(S, x_i) \setminus C(S', x_i)|}{|C(S, x_i)|}$ (17)

where $C(S, x_i)$ is the set of pixels corresponding to the region in segmentation $S$ that contains pixel $x_i$, $|\cdot|$ denotes set cardinality, and $\setminus$ denotes the set difference operator. If the segmented region is a proper subset of the ground-truth region, then the pixel lies in an area of refinement and the local error is zero. The value lies in the range 0-1, where zero signifies no error. Based on the LRE metric, the Global Consistency Error (GCE) and Local Consistency Error (LCE) measures are defined to combine the per-pixel values into an error measure for the entire image. As mentioned in [ 23 ], the Global Consistency Error (GCE) forces all local refinements to be in the same direction, while the Local Consistency Error (LCE) allows refinement in different directions in different parts of the image. GCE and LCE are defined as follows:

$GCE(S, S') = \dfrac{1}{N}\min\Big\{\sum_i LRE(S, S', x_i),\ \sum_i LRE(S', S, x_i)\Big\}$ (18)

$LCE(S, S') = \dfrac{1}{N}\sum_i \min\big\{LRE(S, S', x_i),\ LRE(S', S, x_i)\big\}$ (19)

Since LCE ≤ GCE, GCE is clearly a tougher measure than LCE. Figure 10 illustrates box plots of the GCE error, comparing each segmented image with the 10 hand-segmented ground truth images using the above-mentioned measures. As illustrated in this figure, the proposed method gives the best results among the compared methods. We used three clusters to segment the image with the HMRF method, which gives the best accuracy. Figure 11 shows the error rate for a variable number of clusters; the lowest error rates are achieved using three clusters.

Figure 10. Box plots for GCE error rate. The segmented image using different algorithms are compared to 10 hand-segmented ground truth images.

Figure 11. GCE error rate for the proposed method with variable cluster numbers. Using three clusters for segmentation, the GCE error rate will be close to zero.
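A direct, unoptimized sketch of Eqs. (17)-(19) for integer label maps is given below; the helper names are ours and the per-pixel loop is kept only for clarity, so it should not be read as the evaluation code used in this study.

```python
# LRE, GCE and LCE (Eqs. 17-19) for two label-map segmentations S and Sp of equal size.
import numpy as np

def lre(S, Sp):
    # LRE(S, Sp, x_i) for every pixel: fraction of S's region at x_i not covered
    # by Sp's region at x_i.
    S, Sp = S.ravel(), Sp.ravel()
    err = np.empty(S.size)
    for i in range(S.size):
        region_s = (S == S[i])
        region_sp = (Sp == Sp[i])
        err[i] = np.logical_and(region_s, ~region_sp).sum() / region_s.sum()
    return err

def gce(S, Sp):
    # Eq. (18): one refinement direction must be chosen for the whole image.
    return min(lre(S, Sp).sum(), lre(Sp, S).sum()) / S.size

def lce(S, Sp):
    # Eq. (19): the refinement direction may change from pixel to pixel.
    return np.minimum(lre(S, Sp), lre(Sp, S)).sum() / S.size
```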

In [ 24 ], another measure termed the Bidirectional Consistency Error (BCE) is introduced which penalizes dissimilarity between segmentations in proportion to the degree of overlap. In this measure, the pixel-wise minimum operation in the LCE is replaced with a maximum. Considering a set of hand-segmented ground-truth images $\{S_1, \dots, S_K\}$, the BCE measure matches the segment for each pixel in a test segmentation $S_{test}$ to the minimally overlapping segment containing that pixel in any of the ground-truth images.

$BCE(S_{test}, \{S_k\}) = \dfrac{1}{N}\sum_{i=1}^{N}\min_k\Big\{\max\big\{LRE(S_{test}, S_k, x_i),\ LRE(S_k, S_{test}, x_i)\big\}\Big\}$ (20)

The BCE measure ignores the frequency with which pixel labeling refinements in the test image are reflected in the manual segmentations; a hard "minimum" operation is used to compute the measure. We measured the BCE error rate for the mentioned algorithms, and the results are listed in Table 1. Note that the BCE measure is defined over all of the ground-truth images, while the GCE and LCE error rates in Table 1 are measured against the 50% voted image. The results demonstrate that the lowest error rates are obtained with the HMRF segmentation method.
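Eq. (20) can be sketched on top of the lre() helper from the previous snippet; ground_truths is assumed to be a list of hand-segmented label maps of the same size as the test segmentation.

```python
# BCE (Eq. 20): min over ground truths of the pixel-wise max of the two LREs.
import numpy as np

def bce(S_test, ground_truths):
    per_gt = np.stack([np.maximum(lre(S_test, Sk), lre(Sk, S_test))
                       for Sk in ground_truths])
    return per_gt.min(axis=0).mean()          # min over k, then average over pixels
```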

Measure   Active contour   Fuzzy clustering   KNN matting   Proposed method
LCE       0.10228          0.08926            0.00484       0.00130
GCE       0.19303          0.09443            0.02741       0.02651
BCE       0.5715           0.0888             0.0707        0.0442
Table 1. LCE, GCE and BCE error rate comparison between the different algorithms.

Discussion

In order to compare our results with products available on the market, pressure distribution images from some popular devices were collected. As seen in Figure 12, some sensor-based systems, e.g., (c) and (e), cannot present a high-resolution pressure distribution image because of limitations in sensor dimensions. In Figure 12, images (a) and (b) depict peak pressure data with better resolution than the above-mentioned images, but the quality of the edges in these images is not good enough. As shown in Figure 12 (d), the output of the Pedikom device has a very high resolution and the extracted edges are very close to the real edges of the sole, but the main problem is the noise extracted within the insole image, which is obvious from the officially released images. Figure 12 (f) illustrates the output of our proposed device. Red and orange colors denote the highest pressures, while cyan and green colors illustrate the lowest pressure values. The mentioned problems with other assessment systems are solved in the new camera-based system. The main drawback of the existing systems is the low resolution of their sensors, which is solved by using a high-resolution camera as the sensor. Another problem is the patchy edges extracted by those systems, which is automatically resolved in the proposed device by an accurate image segmentation algorithm. The third problem is the noise extracted along with the sole image. As seen in Figure 12 (f), the extracted image is a pure sole image because the segmentation algorithm performs a precise segmentation of the light reflected from the sole.

Figure 12. Comparison of different pressure assessment systems output; (a) AmCube sensor-based device [25], (b) sensor-based Footscan® device [26], (c) the Pedar® system, (d) Pedikom device [27], (e) F-Scan® device [28], (f) our proposed device.

Conclusion

To conclude, we introduced a camera-based plantar pressure assessment tool which uses computer vision techniques to extract the sole image. We have also explored the capability of the plantar pressure estimation system in recognizing static and dynamic foot problems. After introducing some available plantar pressure systems with different technologies, we reviewed the latest research on segmentation methods. The HMRF-EM-based method, which gives an excellent sole segmentation from the captured images, was explained in detail. Most of the marketable measurement systems use electronic sensors to estimate the pressure distribution, but here we used the captured image and its grayscale levels to compute a per-pixel pressure which can be converted to the N/mm2 scale. In fact, a numerical output is extracted from the captured images in addition to the visual output of the pressure distribution. The method gives an image with higher resolution in comparison with other techniques.

References

  1. Cousins S D, Morrison S C, Drechsler W I. The reliability of plantar pressure assessment during barefoot level walking in children aged 7-11 years. J Foot Ankle Res. 2012; 5:8. Publisher Full Text | DOI | PubMed
  2. Orlin M N, McPoil T G. Plantar pressure assessment. Phys Ther. 2000; 80:399-409. DOI | PubMed
  3. Computer assisted optical podoscope for orthostatic measurements. International Conference on Advancements of Medicine and Health Care through Technology; Cluj-Napoca, Romania: Springer; 2011. p. 226-9.
  4. Mueller M J. Application of plantar pressure assessment in footwear and insert design. J Orthop Sports Phys Ther. 1999; 29:747-55. DOI | PubMed
  5. Hung K, Zhang Y-T, Tai B. Wearable medical devices for tele-home healthcare. The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; San Francisco, CA, USA: IEEE; 2004.DOI
  6. Rosenbaum D, Becker H P. Plantar pressure distribution measurements. Technical background and clinical applications. Foot and Ankle Surgery. 1997; 3:1-14. DOI
  7. Razak A H, Zayegh A, Begg R K, Wahab Y. Foot plantar pressure measurement system: a review. Sensors (Basel). 2012; 12:9884-912. Publisher Full Text | DOI | PubMed
  8. Lee N, Goonetilleke R S, Cheung Y S, So G M. A flexible encapsulated MEMS pressure sensor system for biomechanical applications. Microsystem technologies. 2001; 7:55-62. DOI
  9. Vantaram S R, Saber E. Survey of contemporary trends in color image segmentation. Journal of Electronic Imaging. 2012; 21:040901. DOI
  10. Comaniciu D, Meer P. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on pattern analysis and machine intelligence. 2002; 24:603-19. DOI
  11. Pakhira M K, Bandyopadhyay S, Maulik U. A study of some fuzzy cluster validity indices, genetic clustering and application to pixel classification. Fuzzy sets and systems. 2005; 155:191-214. DOI
  12. Arbeláez P A, Cohen L D. A metric approach to vector-valued image segmentation. International Journal of Computer Vision. 2006; 69:119-26. DOI
  13. Kurugollu F, Sankur B, Harmanci A E. Color image segmentation using histogram multithresholding and fusion. Image and vision computing. 2001; 19:915-28. DOI
  14. Nie F. Tsallis cross-entropy based framework for image segmentation with histogram thresholding. Journal of Electronic Imaging. 2015; 24:013002. DOI
  15. Subudhi B N, Patwa I, Ghosh A, Cho S-B. Edge preserving region growing for aerial color image segmentation. Intelligent Computing, Communication and Devices - Proceedings of ICCD; Bhubaneswar, India: Springer Verlag; 2015. p. 481-8.
  16. Kass M, Witkin A, Terzopoulos D. Snakes: Active contour models. International journal of computer vision. 1988; 1:321-31. DOI
  17. Wang Q. GMM-based hidden Markov random field for color image and 3D volume segmentation. arXiv. 2012.
  18. Kim J-S, Hong K-S. Color–texture segmentation using unsupervised graph cuts. Pattern Recognition. 2009; 42:735-50. DOI
  19. Zhang Y, Brady M, Smith S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans Med Imaging. 2001; 20:45-57. DOI | PubMed
  20. Blake A, Kohli P, Rother C. Markov random fields for vision and image processing. London: Mit Press; 2011.
  21. Li B N, Chui C K, Chang S, Ong S H. Integrating spatial fuzzy clustering with level set methods for automated medical image segmentation. Comput Biol Med. 2011; 41:1-10. DOI | PubMed
  22. Chen Q, Li D, Tang C K. KNN Matting. IEEE Trans Pattern Anal Mach Intell. 2013; 35:2175-88. DOI | PubMed
  23. Martin D, Fowlkes C, Tal D, Malik J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001; Canada: IEEE; 2001.
  24. Wang X-Y, Wang T, Bu J. Color image segmentation using pixel wise support vector machine classification. Pattern Recognition. 2011; 44:777-87. DOI
  25. Anatomy Stuff. Foot Posture Model Set - Flat Foot Anatomy Model CHM312. [Accessed: 12 September 2015]. Available from: http://www.anatomystuff.co.uk/product-foot-posture-model-set-3-models_243952.aspx.
  26. RSscan International. Solutions for plantar pressure measurement and analysis. [Accessed: 14 October 2015]. Available from: http://www.rsscan.com/.
  27. Pedikom. Introducing Pedikom Kft. 2010. Available from: http://www.podiart.sk/.
  28. Tekscan. Pressure Mapping, Force Measurement & Tactile Sensors. Available from: https://www.tekscan.com/.