CLEI Electronic Journal

Online version ISSN 0717-5000

CLEIej vol. 19 no. 2, Montevideo, Aug. 2016

 

Computerized Medical Diagnosis of Melanocytic Lesions based on the ABCD approach

Laura Raquel Bareiro Paniagua, Deysi Natalia Leguizamón Correa
Universidad Nacional de Asunción, Facultad Politécnica,
San Lorenzo, Paraguay
laurabareiro@gmail.com, deysi.leg@gmail.com
Diego P. Pinto-Roa, José Luis Vázquez Noguera
Universidad Nacional de Asunción, Facultad Politécnica,
San Lorenzo, Paraguay
dpinto@pol.una.py, jlvazquez@pol.una.py
Lizza A. Salgueiro Toledo
Universidad Nacional de Asunción, Facultad de Ciencias Médicas,
San Lorenzo, Paraguay
salgueiro.liza@gmail.com

Abstract

Melanoma is a type of skin cancer caused by the uncontrolled growth of atypical melanocytes. In recent decades, computer-aided diagnosis has been used to support medical professionals; however, there is still no globally accepted tool. In this context, and in line with the state of the art, we propose a system that receives a dermoscopic image and indicates whether the lesion is benign or malignant. The tool is composed of the following modules: Preprocessing, Segmentation, Feature Extraction, and Classification. Preprocessing removes hairs; segmentation isolates the lesion; feature extraction follows the ABCD dermoscopy rule; and classification is performed by a Support Vector Machine. Experimental evidence indicates that the proposal achieves 90.63% accuracy, 95% sensitivity, and 83.33% specificity on a data set of 104 dermoscopic images. These results are favorable compared with the performance of traditional diagnosis in the area of dermatology.


Keywords: Melanoma, Automatic Diagnosis, Image Processing, Machine Learning.
Received: 2015-11-12; Revised: 2016-05-23; Accepted: 2016-06-20
DOI: http://dx.doi.org/10.19153/cleiej.19.2.5

1 Introduction

The presence of melanocytic skin lesions is increasingly common, and early detection is vital for decision making regarding the treatment to be followed. There are two types of melanocytic lesions: benign (nevus) and malignant (melanoma). The latter is a skin cancer produced by the uncontrolled growth of melanocytes [1]. This disease can be asymptomatic for the patient, especially in the early stages, so dermatological inspection is vital [2]. The spread of melanocytes around the body and its asymptomatic nature [3] make it the most lethal skin cancer. In recent years, new non-invasive diagnostic methods have been developed to detect melanocytic lesions with malignancy features early, to reduce unnecessary biopsies, and to support decisions regarding the course of treatment that the specialist suggests [4]. Early diagnosis of melanoma is crucial due to the absence of effective treatments for advanced disease [5].

Melanoma makes up less than 5% of skin cancer cases, but it causes 75% of the deaths from this type of cancer [6]. Melanoma cases are increasing dramatically: an estimated 132,000 cases occur worldwide every year, and approximately 66,000 people die from melanoma and other types of skin cancer [7]. Global warming, continued unprotected exposure to the sun's rays, and the use of tanning beds increase the chances of contracting melanoma [8].

1.1 Approach to the problem

  • The techniques used by dermatologists for diagnosis depend largely on the accuracy of the professional and his or her experience in diagnosing melanoma, as mentioned in Zhou et al. [9].
  • A bad diagnosis translates into unnecessary biopsies for healthy patients and the spread of cancer in sick patients, as noted in Ganster et al. [10].
  • The specialist commonly performs the diagnosis under visual-environment factors that can make the analysis inaccurate [8].

1.2 General Objective

To develop a support tool for the non-invasive diagnosis of lesions at early stages, with sensitivity and specificity of 89% and 79% respectively [5], using the ABCD rule. The tool, based on digital image processing and machine learning techniques, should indicate whether the image corresponds to a benign or a malignant lesion.

1.3 Specific Objectives

  • To completely remove the hairs covering the lesion.
  • To accurately segment the lesion.
  • To find the relevant characteristics that indicate malignancy.
  • To build, train, and validate a binary classifier with the obtained characteristics.

This work is organized as follows: Section 2 presents the concepts of melanocytic lesions and the traditional diagnostic methods. Section 3 details the tools and algorithms used in the proposed methodology. Section 4 presents the experiments and the results obtained. Section 5 discusses the results. Section 6 presents the final conclusions drawn from the experiments and analysis of results and, finally, future work.

2 The Skin and the Melanocytic Lesions

The skin is the largest human-body organ and is composed of three main layers: the epidermis (outer layer of skin), the dermis (inner layer of skin), and the hypodermis (composed of adipocytes that produce and store fat) [1]. In the epidermis there are numerous cells, such as squamous cells, basal cells, and cells called melanocytes, as shown in Figure 1. The latter produce a substance called melanin, the pigment that gives color to the skin [1].


PIC

Figure 1: Layers of the Skin [11]


Pigmented skin lesions fall into two groups: non-melanocytic lesions and melanocytic lesions [5]. Melanocytic lesions are classified as benign or malignant, and according to their characteristics they may become a melanoma [4]. This work focuses on the study of melanocytic lesions due to the mortality associated with melanoma.

2.1 Melanoma

Melanoma is produced by the uncontrolled growth of the cells that accumulate pigment (melanocytes); it can spread widely through the body via the lymphatic and blood vessels, which gives it its characteristic mortality [2].

2.1.1 Causes

The factors that cause skin cancer are diverse; however, 86% of melanomas are due to exposure to ultraviolet (UV) radiation from the Sun [12]. Several factors increase the risk of developing melanoma [13], such as: the presence of atypical nevi, the presence of more than 40 nevi, the presence of congenital nevi, a family history of atypical nevi or melanoma, or many hours of exposure to ultraviolet rays (Sun, UVB, UVA), even with completely tanned skin.

2.2 Dermoscopy

Dermoscopy is a non-invasive imaging diagnostic technique used to observe skin lesions, allowing the visualization of structures of the epidermis and dermis [14]. An instrument called a dermatoscope is used for this technique; it consists of a light source and an image magnification system [8]. Figure 2 (a) shows an image taken with a regular camera, while Figure 2 (b) shows an image taken with the dermatoscope, offering a better view of the characteristics of the area of interest.


PIC

Figure 2: Image confrontation (a) Clinical image (b) Dermoscopic image [15]


The use of this technique reaches 89% sensitivity and 79% specificity for the professional [16, 17]. This is why dermoscopy is of vital importance in the diagnosis of lesions that are not very expressive and at early stages [16, 17].

2.3 Dermatological diagnostic methods

The diagnostic method commonly accepted in dermatology is the 2-stage analysis: the first stage analyzes the skin lesion to determine whether it is melanocytic or non-melanocytic; the second stage analyzes whether the melanocytic lesion is malignant or benign [18, 3, 19].

In the first stage, it is important to detect noticeable features of the melanocytic lesions (for a deeper study, consult [14] and [20]). In the second stage, exhaustive and thorough methods of analysis [5] are required to diagnose the lesion as benign or malignant. Several methods are commonly used, such as: Pattern Analysis in Zalaudek et al. [21], the Menzies Method in Menzies et al. [22], the 7-point checklist in Argenziano et al. [23], and the ABCD rule in Stolz et al. [24]; the latter is used in this work.

2.4 The ABCD rule

Developed by Stolz [24] in 1993, the rule analyzes 4 criteria: asymmetry, borders, colors, and dermoscopic structures. Asymmetry is the pattern generated by the uncontrolled growth of the lesion; it is measured by dividing the pigmented lesion along two axes, looking for the best possible symmetry [5, 25]. Figure 3 shows a symmetric and an asymmetric lesion with the two axes marked. The first axis is traced along the longest part of the lesion and is called the main axis; the second axis is traced perpendicular to the main axis.


PIC

Figure 3: Melanocytic lesion (a) Symmetrical (b) Asymmetrical.


Borders are patterns associated with abnormal color terminations in melanocytic lesions, i.e., borders with color variations. The area of interest is divided into 8 segments. Figure 4 shows normal borders indicated with an X; the others are considered abrupt.


PIC

Figure 4: Borders of a lesion divided into 8 segments.


Color is related to the excess of melanin under the surface of the lesion, producing a distinct color over a concentration of pixels in a specific region. Six colors are taken into account: white, light brown, dark brown, blue-gray, red, and black. The presence of many colors implies a malignant tendency. An example is shown in Figure 5, which indicates the presence of red (black square) and light brown (red square).


PIC

Figure 5: Image of pigmented lesion with presence of different colors.


Dermoscopic structures taken into account are detailed below:

  1. Linear Branches: There must be more than 2 branches. Both pseudopods and radial projections are considered linear branches.
  2. Atypical Pigmented Reticulum: A network of intersecting lines forming hollows, regular or irregular. The lines indicate a greater amount of melanin in that region.
  3. Structureless Areas: Areas of the melanocytic lesion in which no inner structures can be distinguished. They must cover more than 10% of the lesion.
  4. Dots: Pigmented circular structures of 0.1 mm, almost invisible to the naked eye. There must be more than 2 dots.
  5. Globules: Pigmented structures larger than dots (greater than 0.1 mm). Both dots and globules can be black, brown, or blue. There must be 2 or more globules.

2.5 Related works

Some works reported in the state of the art that propose automated diagnostic systems are presented next.

In Batugo [1], an automated diagnostic system was proposed for the detection of melanomas, composed of the following modules. Pre-processing consists of the elimination of borders or margins that should not be present in the image at the time of analysis; it also seeks to isolate objects foreign to the lesion, such as hair. Segmentation was performed by means of clustering using Gaussian mixtures. Feature extraction was based on the ABCD rule. Classification was carried out using a linear discriminant classifier [26], in order to obtain the diagnosis from the previously extracted characteristics. The tests were carried out with 100 images: 29 melanomas and 71 nevi. The evaluation of the system obtained 79.31% sensitivity and 71.83% specificity.

Similar to the previous work, Oliveira [27] developed a diagnostic system for pigmented lesions formed by 4 modules. Pre-processing was carried out with an anisotropic diffusion filter [28] to eliminate noise. Segmentation used the Chan-Vese method [29]. Feature extraction was performed using the ABC-T (Asymmetry, Border, Color, Texture) rule; the last characteristic was added to account for keratinocytic lesions ("lesiones queratinocíticas"), skin anomalies that appear as a crust on its surface. Classification was carried out using the SVM (Support Vector Machine) classifier [29]. The evaluation yielded 73.81% sensitivity and 76.67% specificity. The image bank was composed of 408 digital images: 62 nevi, 86 images with keratosis, and 260 melanomas.

In Rahman et al. [30], different classifiers were combined to develop an automatic melanoma recognition system using dermoscopic images. For segmentation, the Fuzzy C-means clustering algorithm [31] was used; then, a morphological opening and closing were applied to eliminate noise and smooth the border, thus obtaining the outline of the lesion. For the classification of the skin lesions, a combination of the SVM, GML [32], and K-NN [33] classifiers was used. The following results were obtained: 62.50% specificity and 83.75% sensitivity, from a total of 358 dermoscopic images, using 40% for training and 60% for testing.

In Alcon et al. [34], a diagnostic system for pigmented lesions was proposed. Segmentation was carried out using the Otsu method [35]. The characteristics were extracted through the ABCD method, totaling 55 characteristics. Classification was carried out using the K-NN classifier. Tests conducted on a total of 152 digital images (45 nevi and 107 melanomas) yielded 68% specificity and 94% sensitivity.

Celebi et al. [36] presented a methodological approach for the classification of pigmented skin lesions using dermoscopic images. A median filter is used for pre-processing and Otsu's method for segmentation. For feature extraction, the following characteristics were used: the area of the lesion, aspect ratio, asymmetry, and compactness. For classification they used SVM; experiments conducted on a set of 564 images produced a specificity of 92.34% and a sensitivity of 93.33%.

It is important to note that there are several related works that do not follow any dermatological rule to reach a diagnosis, since they propose studying characteristics such as the color and texture of the lesion in a specific way, as reported in [37, 38, 39, 40, 41, 42]. Scharcanski et al. [43] summarize the work using computer vision techniques in the diagnosis of skin cancer and provide an orientation to medical image analysis.

This topic is controversial in terms of whether or not to use the diagnostic methods approved by dermatologists [37]. In this context, this work follows the line accepted by them, the ABCD rule, developing techniques that comply with the requirements of the dermatologic community and thus improve on the performance reported in the state of the art.

3 Proposed Method

Based on the state-of-the-art works, 4 main modules are proposed: Pre-processing, Segmentation, Feature Extraction, and Classification.

3.1 Module 1: Pre-processing

The main obstacle to the study of lesions is the presence of hair in the dermoscopic image, as shown in Figure 6; the methodology applied to remove it is therefore presented.

3.1.1 Hair Removal

A dermoscopic melanocytic image fd is received for hair removal. The bottom-hat operator is applied to each channel of the RGB space, and the maximum value among the 3 channels is taken, as shown in Figure 7 (b). Then, the Otsu algorithm [35] is applied to binarize the image; the result of this operation can be seen in Figure 8 (a), and it is used as a mask for painting the hairs with a color very different from that of the lesion, in this case green, as shown in Figure 8 (b). Finally, the Inpainting algorithm proposed in [44] is applied; this image-restoration algorithm uses information from the rest of the image to rebuild the areas associated with the hairs. The output of this module is an image fp, as shown in Figure 9, which in turn is the input to the next module (segmentation).
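The hair-mask steps above (bottom-hat per RGB channel, channel-wise maximum, Otsu binarization) can be sketched as follows, assuming SciPy's grayscale morphology; the structuring-element size is an illustrative choice, and the final Inpainting restoration of [44] is omitted (in practice it could be done with a library routine such as OpenCV's inpainting).

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(gray):
    """Otsu's threshold on a uint8 image (exhaustive search over 256 levels)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    mean_all = np.dot(np.arange(256), hist) / total
    best_t, best_var = 0, -1.0
    cum, cum_mean = 0.0, 0.0
    for t in range(256):
        cum += hist[t]
        cum_mean += t * hist[t]
        w0 = cum / total
        if w0 in (0.0, 1.0):
            continue
        m0 = cum_mean / cum                                   # background mean
        m1 = (mean_all * total - cum_mean) / (total - cum)    # foreground mean
        var = w0 * (1 - w0) * (m0 - m1) ** 2                  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def hair_mask(rgb, size=7):
    """Bottom-hat (closing minus image) per RGB channel, channel-wise maximum,
    then Otsu binarization: True where a hair is detected."""
    rgb = rgb.astype(float)
    bh = np.stack([ndimage.grey_closing(rgb[..., c], size=size) - rgb[..., c]
                   for c in range(3)]).max(axis=0)
    return bh > otsu_threshold(bh.astype(np.uint8))
```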


PIC

Figure 6: Image with lots of hairs.



PIC

Figure 7: Enhancement of hairs (a) Input image and (b) Image in grayscale result of Bottom-hat.



PIC

Figure 8: The process of hair removal (a) Binary Image of Hair (b) Hair mask.


3.2 Module 2: Segmentation

In this phase, work is done on the fp image in grayscale, Figure 10 (a). To remove intensity peaks between pixels, a median filter with a window size of 4 x 4 is applied: the gray level of each point is replaced by the median of the gray levels in a certain neighborhood; the resulting image is shown in Figure 10 (b). The intensity values of the image are then adjusted using gamma = 0.01, in order to reduce noise, as shown in Figure 11 (a).


PIC

Figure 9: Resulting image without hair.



PIC

Figure 10: Preparation of the image (a) Image in gray-scale (b) Results of the average filter.


To improve the contrast of the image so that the lesion is highlighted, the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm is used [45], with Clip Limit = 0.01 and contextual regions of 2 x 2 (see Figure 11 (b)). Once the image is improved, the threshold is calculated with the maximum-entropy automatic thresholding algorithm [45]; the resulting binary mask can be seen in Figure 12 (a), and the image of the lesion associated with the binary mask in Figure 12 (b).
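The maximum-entropy thresholding step can be sketched with Kapur's criterion, which picks the threshold maximizing the sum of the entropies of the background and foreground gray-level distributions. This is an assumed formulation of the algorithm cited as [45], not the paper's exact code (the CLAHE step would, in practice, use a library routine).

```python
import numpy as np

def max_entropy_threshold(gray):
    """Kapur's maximum-entropy threshold for a uint8 image: choose t that
    maximizes H(background) + H(foreground)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    c = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = c[t], 1.0 - c[t]
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[: t + 1] / w0              # background distribution
        p1 = p[t + 1:] / w1               # foreground distribution
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```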


PIC

Figure 11: Improvement of the Image (a) Intensity Adjustment (b) Histogram Equalization.



PIC

Figure 12: Resulting Masks (a) Binary Mask (b) RGB Associated Image.


3.3 Module 3: Feature Extraction

This work uses the ABCD rule for the diagnosis of melanoma. In that sense, the features to be extracted are: Asymmetry, Border, Color, Reticular Pattern, Dots and Globules, Structureless Areas, and Pseudopods.

3.3.1 Asymmetry

This feature is obtained through the asymmetry index. The index indicates how asymmetric the lesion is: the greater its value, the more asymmetric the lesion; the lower its value, the more symmetrical. The segmented lesion shown in Figure 13 (a) is rotated about its centroid using the bilinear interpolation method [24], as shown in Figure 13 (b).


PIC

Figure 13: Segmented Image (A) Original (b) Rotated.


Once the lesion is rotated, it is divided along the main axis, as shown in Figure 14 (a), and then along the secondary axis, as shown in Figure 14 (b).


PIC

Figure 14: Analysis of the Asymmetry of the Lesion (a) Main Axis (b) Secondary Axis.


The asymmetry index AI is calculated by expression (1):

    AI = (1/2) * Σ_{em=1}^{2} ( ΔA_em / A_TL )     (1)

where em = 1 denotes the main axis and em = 2 the secondary axis, ΔA_em is the non-overlapping area of the lesion when it is folded about axis em, and A_TL is the total area of the lesion. In this way, the first feature is obtained, denoted by C_a.
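The asymmetry index can be sketched on a binary mask that has already been rotated so its principal axes are image-aligned; the XOR of the mask with its reflection counts the non-overlap twice, hence the division by 2. This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def asymmetry_index(mask):
    """AI = (1/2) * sum over both axes of (non-overlap area / lesion area),
    for a boolean mask whose principal axes are image-aligned."""
    area = mask.sum()
    # Non-overlapping area when folding about the horizontal (main) axis;
    # the XOR counts both mismatched halves, so divide by 2.
    d1 = np.logical_xor(mask, np.flipud(mask)).sum() / 2
    # ... and about the vertical (secondary) axis.
    d2 = np.logical_xor(mask, np.fliplr(mask)).sum() / 2
    return 0.5 * (d1 / area + d2 / area)
```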

3.3.2 Borders

For the border analysis, the lesion is divided into 8 regions, and each sector, as shown in Figure 15, is analyzed separately. If the color of the lesion varies from the center towards the border, the border is said to be abrupt; otherwise, it is a normal or regular border. Lesions with abrupt terminations are associated with melanoma. The color variation at the borders is measured by calculating the variance from the center of the lesion to its border: high variance values indicate a border with abrupt termination, and low values indicate a normal border.
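A sketch of the per-sector analysis, assuming the variance is taken over the intensities in the outer ring of each of the 8 angular sectors around the centroid; the 0.7 radius cutoff is an illustrative choice, not a parameter stated in the paper.

```python
import numpy as np

def border_sector_variances(gray, mask, n_sectors=8):
    """Per-sector intensity variance near the border: high variance suggests
    an abrupt color termination in that sector (a melanoma cue)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                  # lesion centroid
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)           # angle in (-pi, pi]
    sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    out = np.zeros(n_sectors)
    for s in range(n_sectors):
        sel = sector == s
        if not sel.any():
            continue
        near = sel & (r > 0.7 * r[sel].max())      # outer ring of the sector
        out[s] = gray[ys[near], xs[near]].var() if near.any() else 0.0
    return out
```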


PIC

Figure 15: Sector of the Lesion.


In this way, the border feature is obtained, denoted by C_b.


PIC

Figure 16: Lesion Divided into 8 Regions.


3.3.3 Color

In the study of this feature, 6 colors are taken into account: light brown, dark brown, blue-gray, red, black, and white. The presence of these colors suggests melanoma. Each color is detected by means of the Euclidean distance from each pixel to the intensities corresponding to that color. Table 1 shows the intensity values for each color; a lower calculated distance suggests closeness to a color. Figure 17 (a) shows a lesion with the presence of blue-gray, and Figure 17 (b) a lesion containing dark brown and light brown.

The presence of different colors in the lesion points to a positive diagnosis of melanoma. For color presence, the obtained features are: light brown, denoted by C_cmc; dark brown, C_cmo; blue-gray, C_cag; red, C_cr; black, C_cn; and white, C_cb; represented by the vector C_c = [C_cmc, C_cmo, C_cag, C_cr, C_cn, C_cb].


Table 1: RGB and rgb values for the colors.

  Color       | RGB           | rgb
  ------------|---------------|--------------------
  White       | (255,255,255) | (1.00, 1.00, 1.00)
  Black       | (0,0,0)       | (0.00, 0.00, 0.00)
  Red         | (255,0,0)     | (1.00, 0.00, 0.00)
  Light Brown | (205,133,63)  | (0.80, 0.52, 0.25)
  Dark Brown  | (101,67,33)   | (0.40, 0.26, 0.13)
  Blue Gray   | (0,134,139)   | (0.00, 0.52, 0.54)
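The color-detection rule can be sketched directly from the normalized rgb values of Table 1; the distance tolerance and minimum-fraction parameters below are illustrative assumptions, not values given in the paper.

```python
import numpy as np

# Reference rgb values from Table 1 (normalized to [0, 1])
REFERENCE = {
    "white":       (1.00, 1.00, 1.00),
    "black":       (0.00, 0.00, 0.00),
    "red":         (1.00, 0.00, 0.00),
    "light_brown": (0.80, 0.52, 0.25),
    "dark_brown":  (0.40, 0.26, 0.13),
    "blue_gray":   (0.00, 0.52, 0.54),
}

def color_presence(pixels, tol=0.25, min_frac=0.01):
    """For each reference color, return 1 if at least min_frac of the lesion
    pixels lie within Euclidean distance tol of it, else 0."""
    pixels = np.asarray(pixels, dtype=float)
    out = {}
    for name, ref in REFERENCE.items():
        d = np.linalg.norm(pixels - np.array(ref), axis=1)
        out[name] = int((d < tol).mean() >= min_frac)
    return out
```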


PIC

Figure 17: Lesion with presence of Color (a) Blue-Gray (b) Light and Dark Brown.


3.3.4 Dermoscopic Structures

These are different patterns associated with melanoma; the study includes 5 structures: Linear Branches, denoted by C_rl; Atypical Pigmented Reticulum, C_rp; Structureless Areas, C_ad; Dots, C_p; and Globules, C_g. This set of features is represented by the vector C_ed = [C_rl, C_rp, C_ad, C_p, C_g]. Details of the recognition process for these features are given below.

Linear Branches: Pseudopods are bulbous and bent projections, as shown in Figure 18 (a), while radial projections are finger-like structures (in the form of fingers).

To detect pseudopods, the variance of the Euclidean distances between the centroid and the border of the lesion is analyzed. The binary mask of the original image can be seen in Figure 18 (b). The morphological dilation operator is applied to this mask with a disc-shaped structuring element of radius 1, Figure 18 (c). The border of the lesion results from subtracting the original image from the dilated image, as shown in Figure 18 (d). The variance is calculated as a measure of dispersion of the distances to the border; a high value suggests the presence of pseudopods.
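The pseudopod cue above can be sketched as follows: the border is obtained as the dilation minus the mask, and the variance of the centroid-to-border distances is returned. An illustrative implementation under those assumptions:

```python
import numpy as np
from scipy import ndimage

def border_distance_variance(mask):
    """Variance of centroid-to-border distances for a boolean lesion mask;
    a high value suggests finger-like projections (pseudopods)."""
    # Outer border: dilated mask minus the original mask.
    border = ndimage.binary_dilation(mask) & ~mask
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()          # lesion centroid
    by, bx = np.nonzero(border)
    d = np.hypot(by - cy, bx - cx)
    return d.var()
```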

Atypical Pigmented Reticulum: A relevant feature associated with the diagnosis of melanoma is the atypical pigmented reticulum. Figure 19 (a) shows a lesion with a cross-linked mesh; to diagnose it as typical or atypical, a series of steps is taken, explained below.

On the grayscale image of the segmented lesion, Figure 19 (b), border detection with a Laplacian-of-Gaussian (LoG) operator [46] is applied; the result is shown in Figure 19 (c). The crossed lines that form a mesh over the lesion can be observed, as well as the outer border of the lesion, which must be removed. In order to obtain only the formed holes, the border is calculated and then subtracted from Figure 19 (c).


PIC

Figure 18: Obtaining the Border of the Lesion (a) Image of a lesion with pseudopods. (b) Original Binary Mask. (c) Dilated Image. (d) Subtraction of the Images (b) and (c).



PIC

Figure 19: (a) Image of a Lesion with Pigmented Reticulum (b) Image in Grayscale (c) LoG filter (Inverted Image).


To obtain the border, the morphological erosion operator is applied with a disc-shaped structuring element of radius 1 pixel, and the result is subtracted from the original binary mask (morphological gradient), Figure 20 (a). The resulting border can be seen in Figure 20 (b). To remove the border of the lesion, the border image of Figure 20 (b) is subtracted from the image containing the mesh mask, shown in Figure 19 (c); the result is shown in Figure 20 (c). Once the border is removed, the holes formed by the mesh are filled, as shown in Figure 20 (d). Finally, dots and small areas of less than 30 pixels are filtered out; the final image is shown in Figure 20 (e). The area of each zone is then examined: if there is not much variation between the areas, the pigmented reticulum is atypical.
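The hole-filtering step (discarding areas below 30 pixels) can be sketched with connected-component labeling; `reticulum_hole_areas` is an illustrative helper, not the authors' code, and the spread among the returned areas would then be examined as described above.

```python
import numpy as np
from scipy import ndimage

def reticulum_hole_areas(hole_mask, min_area=30):
    """Label the holes of the reticulum mesh and keep the areas of those
    with at least min_area pixels."""
    labels, n = ndimage.label(hole_mask)
    # Pixel count per label; index 0 is the background, so skip it.
    areas = np.bincount(labels.ravel())[1:]
    return areas[areas >= min_area]
```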


PIC

Figure 20: Obtaining the border of the lesion (a) Original binary mask (b) Border of lesion (Mirror image) (c) Image without border (Mirror image) (d) Filling of holes (Mirror image) (e) Remaining areas of the filtering area (Mirror image).


Structureless Areas: Structureless areas are regions of the lesion in which none of the aforementioned structures can be recognized.

In the center of Figure 21 (a), a lesion is shown with a structureless area of dark brown color, indicated by the red square. The lesion image in grayscale is shown in Figure 21 (b). This lesion is segmented with the traditional Otsu algorithm to isolate the dark areas, as can be seen in Figure 21 (c). Then, the region belonging to the structureless area is separated, and the resulting image is shown in Figure 21 (d).

Dots and Globules: Dots and globules are differentiated by their size; the former are almost invisible to the naked eye. Figure 22 (a) shows a lesion presenting various dark-brown globules.

To locate globules, first the distribution of the pixels of the grayscale image (Figure 22 (b)) is improved using the average filter with a window size of 4 x 4, as shown in Figure 22 (c). To highlight them, an intensity adjustment is done with gamma = 0.01, and contrast is improved through histogram equalization using the CLAHE method with Clip Limit = 0.01 and contextual regions of 2 x 2, as shown in Figure 22 (d).


PIC

Figure 21: (a) Lesion with presence of structureless areas (b) Image in grayscale (c) segmented area with the OTSU method (d) Structureless area of the lesion (Mirror image).



PIC

Figure 22: (a) Image in grayscale (b) Average filter on the image (c) Application of CLAHE.


The binary image is obtained through the Otsu method (Figure 23 (a)). Then, closing is applied with a disk-shaped structuring element of size 3, so that nearby components are joined, as can be seen in Figure 23 (b). The feature vector x consists of x = [C_a, C_b, C_c, C_ed], where each element represents the previously calculated sub-vectors.

3.4 Module 4: Classification

In this work, the classification of lesions is done using the SVM (Support Vector Machine) classifier [29]. The input data are formed by D = {x, y}, where x is the set of feature vectors and y the set of labels. The labels represent one of the two classes: y_i = 0 for benign lesions and y_i = 1 for malignant lesions, associated with the i-th melanocytic lesion.


PIC

Figure 23: (a) Binary image. (b) Closure is applied.


The classification has two stages: a) the construction of the classifier, in which the parameters of the system are learned; this process is performed with a training set. b) The system tests, in which the classifier's performance is evaluated with a test set independent from the training set.
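The paper classifies with an SVM [29]; in practice an off-the-shelf implementation (e.g., scikit-learn's `SVC`) would be used. As a self-contained sketch of the two stages above, here is a minimal linear SVM trained with the Pegasos sub-gradient method, using the paper's {0, 1} labels; all parameter values are illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Stage a): learn a linear SVM with the Pegasos sub-gradient method.
    y must be in {0, 1} (0 = benign, 1 = malignant), as in the paper."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])        # absorb the bias term
    yy = np.where(np.asarray(y) == 1, 1.0, -1.0)     # map {0, 1} -> {-1, +1}
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)                    # decreasing step size
            if yy[i] * Xb[i].dot(w) < 1:             # margin violated
                w = (1 - eta * lam) * w + eta * yy[i] * Xb[i]
            else:
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    """Stage b): classify feature vectors with the learned hyperplane."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb.dot(w) >= 0).astype(int)
```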

4 Experimental tests

This section presents the evaluation metrics used, the experiments performed, the results obtained and those of the state of the art, followed by a discussion of the results.

4.1 Image Bank

120 dermoscopic images (non-melanocytic and melanocytic) were used for the tests, acquired with a computerized video dermatoscope with polarized light, using lenses of 20 to 70x magnification, provided by the Center for Digital Image Processing of the Faculty of Sciences, Universidad Central de Venezuela [47].

The images were analyzed by Dr. Lizza Salgueiro, from the Department of Dermatology of the Universidad Nacional de Asunción, since they had not been previously diagnosed. According to her diagnosis, 16 non-melanocytic and 104 melanocytic lesions were found; of the latter, 76 were malignant and 28 benign.

4.2 Experimental Results from Removal of hairs

The tests carried out for the hair-removal module consist of the application of 2 techniques for comparison.

Figure 24 (a) shows a lesion with the presence of hair, both on the lesion and on the skin; the goal is to perform the hair removal using the following methods.

The first approach is based on the following operations [47]: first, morphological filters are applied; then, the Bottom-Hat operator is used; the hairs are separated from the skin by thresholding segmentation; finally, the areas associated with the hairs are reconstructed with the average filter.

The second technique, proposed in this work (Section 3.1.1), is based on obtaining a hair mask through the Bottom-Hat operator and the traditional Otsu algorithm; on this mask, the Inpainting algorithm [44] is applied to replace the pixels associated with hairs.

Figure 24 (b) shows the image resulting from the first method; it can be noticed that the hair removal affected some areas of the skin and the lesion, which is not desirable. Finally, Figure 24 (c) shows the hairless image resulting from applying the second method; it can be noticed that the colors of the lesion remain intact.


PIC

Figure 24: Methods for the elimination of hairs. (a) Original image. (b) Image resulting from the first method. (c) Image resulting from our method.


As a second test, Figure 25 (a) shows a lesion with hairs thicker than in the previous example. Figure 25 (b) shows the image resulting from the first method: removing the hairs affected the area of the lesion, and some whitish hair-shaped markings were left. Figure 25 (c) shows the hairless image resulting from applying the Inpainting algorithm; the lesion appears with a total absence of hairs, unmarked and without color alteration. These methods were visually evaluated by the professional dermatologist. After several tests on lesions with thick and fine hairs, the expert concluded that the thickest hairs are best removed with the second method. Despite the computational cost of the Inpainting algorithm, mentioned in [44], it is used in this work due to the visual improvement in the results.



Figure 25: Applying the methods for removal of thick hairs. (a) Original image. (b) Image resulting from the first method. (c) Image resulting from our method.


4.3 Segmentation Performance Metrics

Segmentation seeks to isolate the lesion from the rest of the skin for further study. Performance metrics were defined in this section to evaluate the segmentation method. Four possible outcomes can be obtained from a segmented lesion:

  • True positive (TP), the pixel belongs to the lesion and is segmented as a pixel that belongs to the lesion.
  • False negative (FN), the pixel belongs to the lesion and is not segmented as a pixel that belongs to the lesion.
  • True negative (TN), the pixel does not belong to the lesion and is not segmented as a pixel that belongs to the lesion.
  • False positive (FP), the pixel does not belong to the lesion and is segmented as a pixel that belongs to the lesion.

Figure 26 shows the representation of the TP, FN, TN and FP variables with respect to the segmented lesion and its ideal segmentation, where the pixel associated with the segmented zone is given by (i,j); the ideal image represents the segmentation done by a professional, and the segmented image represents the segmentation of the lesion obtained automatically.



Figure 26: Ideal Segmentation and automatic segmentation [48].


The metrics used are:

  • Sensitivity: the probability of correctly segmenting the lesion.
  • Specificity: the probability of correctly segmenting the skin.
  • Accuracy: the probability that the skin or the lesion obtains a correct segmentation.
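The four pixel counts and the resulting metrics can be computed directly from the ideal and automatic binary masks; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def segmentation_metrics(ideal, automatic):
    """Compare a binary automatic segmentation against the ideal one.

    True in a mask means the pixel is segmented as lesion."""
    tp = np.sum(ideal & automatic)     # lesion pixels found
    fn = np.sum(ideal & ~automatic)    # lesion pixels missed
    tn = np.sum(~ideal & ~automatic)   # skin pixels kept as skin
    fp = np.sum(~ideal & automatic)    # skin pixels marked as lesion
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / ideal.size
    return sensitivity, specificity, accuracy
```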
4.3.1 Experimental Results of Segmentation

Currently, there are several segmentation methods for melanocytic lesions, due to the important role they play in a diagnostic system. In [49], a comparison of 6 segmentation methods applied to melanoma was carried out, concluding that the AS (Adaptive Snake) and EM-LS (Expectation Maximization Level Set) methods were robust and useful for segmenting the lesion in a computer-aided diagnostic system intended to support the clinical diagnosis of dermatologists. On the other hand, this work has as its main objective the improvement of the performance of the final diagnosis, for which 2 automatic segmentation algorithms were tested: the traditional Otsu algorithm and Maximum Entropy. Table 2 shows the results of the 2 algorithms. The results obtained were compared with the ideal segmentation images made by the dermatologist.


Table 2: Comparison of Segmentation Algorithms
Method          | Accuracy | Sensitivity | Specificity
----------------|----------|-------------|------------
Otsu            |   96%    |     82%     |     98%
Maximum Entropy |   97%    |     85%     |     99%

As shown in Table 2, the results of both algorithms are similar in terms of performance; however, for pigmented lesions it has been reported [8] that the Maximum Entropy method is more convenient, so that algorithm is used for the segmentation in this work.
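Maximum entropy thresholding (Kapur's method) selects the gray level that maximizes the sum of the entropies of the two resulting classes; a minimal sketch assuming 8-bit grayscale input (not the authors' code):

```python
import numpy as np

def max_entropy_threshold(gray):
    """Kapur's maximum entropy threshold for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel().astype(np.uint8), minlength=256).astype(float)
    p = hist / hist.sum()   # gray-level probabilities
    P = np.cumsum(p)        # cumulative probability
    best_t, best_h = 0, -np.inf
    for t in range(255):
        p0, p1 = P[t], 1.0 - P[t]
        if p0 <= 0 or p1 <= 0:
            continue
        q0 = p[:t + 1][p[:t + 1] > 0] / p0   # normalized class distributions
        q1 = p[t + 1:][p[t + 1:] > 0] / p1
        h = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))
        if h > best_h:   # maximize total class entropy
            best_h, best_t = h, t
    return best_t
```

Pixels are then split into lesion and skin by comparing each gray level with the returned threshold.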

4.4 Classifier Performance Metrics

The objective of the classifier in the proposed methodology is to return a negative or positive diagnosis, i.e., whether the input sample represents a malignant or benign lesion. For a given sample, a diagnostic system can lead to one of four possible outcomes:

  • True positive (TP), diagnosis is positive and is classified as positive.
  • False negative (FN), diagnosis is positive and classified as negative.
  • True negative (TN), diagnosis is negative and is classified as negative.
  • False positive (FP), diagnosis is negative and is classified as positive.

In Table 3 the matrix that includes the four possible outcomes is shown. The Sensitivity, Specificity and Accuracy metrics are taken into account in order to assess the results.


Table 3: Matrix of possible results
                       | Classification: Benign | Classification: Malignant
-----------------------|------------------------|--------------------------
Diagnostic: Benign     | True Negative          | False Positive
Diagnostic: Malignant  | False Negative         | True Positive

  • Sensitivity is the probability of classifying malignant lesions as such. In other words, it is the ability to properly diagnose sick patients.
  • Specificity is the probability of classifying benign lesions as such. It is defined as the ability to correctly diagnose healthy patients.
  • Accuracy is the probability of correctly classifying a lesion, the probability that a sick or healthy patient gets a correct diagnosis.

Table 4 shows the formal definition of the performance metrics for specificity, sensitivity and accuracy.


Table 4: Performance Metrics
Metrics     | Formula
------------|--------------------------------------
Sensitivity | Ss = TP / (TP + FN)
Specificity | Se = TN / (TN + FP)
Accuracy    | Sp = (TP + TN) / (TP + TN + FP + FN)

For the construction of the classifier and for the experimental tests, the Bioinformatics Toolbox for Matlab R2013b was used. The experiments are explained below.

4.4.1 Experimental test I

This test is based on the holdout (retention) method, which consists in dividing the 104 sample images into 2 complementary sets. The first is called the training set Dtraining and the second the test set Dtest. The training set is formed by 32 images, of which 12 benign and 20 malignant were randomly selected, with probability p = 30% and without repetition. The remaining 72 images are used for the validation of the classifier obtained in the previous step. The results of this process are shown in Table 5.
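A sketch of this stratified holdout split; the counts 12/20 come from the text, while the index sets and the function name are illustrative:

```python
import numpy as np

def holdout_split(benign_ids, malignant_ids, n_benign=12, n_malignant=20, seed=0):
    """Randomly pick a training set without repetition; the rest is the test set."""
    rng = np.random.default_rng(seed)
    train = np.concatenate([
        rng.choice(benign_ids, n_benign, replace=False),
        rng.choice(malignant_ids, n_malignant, replace=False),
    ])
    everything = np.concatenate([benign_ids, malignant_ids])
    test = np.setdiff1d(everything, train)
    return train, test
```

With the 28 benign and 76 malignant images implied by Table 5, this yields the 32-image training set and the 72-image test set.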


Table 5: Results of Experiment I
Metrics     | Training Images | Test Images | Properly Classified | Rate
------------|-----------------|-------------|---------------------|--------
Sensitivity |       20        |     56      |         53          | 94.64%
Specificity |       12        |     16      |         12          |   75%
Accuracy    |       32        |     72      |         65          | 90.28%

4.4.2 Experimental Test II

It is necessary to use other methods, since in experiment I the minimum standard for the specificity metric (greater than 79%) was not reached. Cross-validation is used to strengthen the classifier. For K-Fold Cross-Validation, the data set is divided into K groups and K iterations are performed. Each iteration uses one group for testing, Dztest = Dz, and the K-1 remaining groups, Dtraining = D - Dz, for training, with z = 1,..,K. Algorithm 1 describes the construction of the SCL classification system, and the results gathered from this process are shown in Table 6.


Algorithm 1: K-Fold Cross-Validation
INPUT: Number of K groups
OUTPUT: SCL  Classification System
1:  for z ← 1 to K do
2:      Dztest = Dz
3:      Dztraining = D - Dz
4:      CLz = trainingSVM(Dztraining)   // computes ωz ⋅ x - bz
5:  end for
6:  SCL = {CL1, CL2, CL3, ..., CLK}
7:  return SCL
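The loop of Algorithm 1 can be sketched as follows; `train_svm` is a placeholder for the SVM training routine (in the paper, the Bioinformatics Toolbox), shown here as any callable that fits a classifier on the training indices:

```python
import numpy as np

def k_fold_systems(data_ids, k, train_svm, seed=0):
    """Algorithm 1 sketch: one classifier per fold; the test fold is D_z
    and the training set is D minus D_z."""
    rng = np.random.default_rng(seed)
    ids = rng.permutation(data_ids)
    folds = np.array_split(ids, k)   # K disjoint groups covering D
    scl = []
    for z in range(k):
        d_test = folds[z]
        d_training = np.setdiff1d(ids, d_test)
        scl.append(train_svm(d_training))   # CL_z, i.e. the model w_z . x - b_z
    return scl
```

For example, `k_fold_systems(np.arange(104), 4, lambda tr: len(tr))` builds 4 "classifiers", each trained on 78 of the 104 images.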


Table 6: Results of experiment II
Group   | Sensitivity | Specificity | Accuracy
--------|-------------|-------------|---------
1       |     80%     |   52.17%    |  72.73%
2       |   51.74%    |   48.61%    |  51.11%
3       |   66.15%    |   69.57%    |  67.04%
4       |   67.69%    |   47.83%    |   62.5%
Average |    66.4%    |   54.55%    |  63.34%

This data partition is unworkable when the amount of data is small, since the number of training images available to each group cannot yet provide a classifier that complies with the dermatoscopy performance standards sought in this work. Note that the performance per group is lower than that obtained with the holdout method.

4.5 Experimental Test III

Due to the unfavorable results of experiment II (shown in Table 6, below the specificity and sensitivity standards of 79% and 89%, respectively), Random Cross-Validation was conducted, where K iterations were made; in each iteration, a test set Dztest = Dz and a training set Dtraining = D - Dz must be chosen randomly.

In the z-th iteration, the z-th classifier CLz is calculated, progressively forming the K classifiers. In this way, at the K-th iteration the classification system SCL = {CL1, CL2, CL3, ..., CLK} is obtained, whose performance is the average of the K classifiers. Algorithm 2, shown next, describes the construction of the classification system SCL based on Random Cross-Validation.


Algorithm 2: Random Cross-Validation.
INPUT: Number of K groups
OUTPUT: Classifications System SCL
1:  for z ← 1 to K do
2:      Dztraining = randomSelection{D}
3:      CLz = trainingSVM(Dztraining)   // computes ωz ⋅ x - bz
4:      Dztest = D - Dztraining
5:  end for
6:  SCL = {CL1, CL2, CL3, ..., CLK}
7:  return SCL
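Algorithm 2 differs from K-fold in that each iteration draws its training set at random and tests on the complement; a sketch with the same placeholder `train_svm` callable:

```python
import numpy as np

def random_cv_systems(data_ids, k, n_training, train_svm, seed=0):
    """Algorithm 2 sketch: each iteration randomly selects the training set;
    the test set is its complement in D."""
    rng = np.random.default_rng(seed)
    scl, tests = [], []
    for _ in range(k):
        d_training = rng.choice(data_ids, n_training, replace=False)
        d_test = np.setdiff1d(data_ids, d_training)
        scl.append(train_svm(d_training))   # CL_z
        tests.append(d_test)
    return scl, tests
```

Unlike K-fold, the K test sets may overlap, which is what allows K to grow (here up to 15) on a small data set.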

As shown in Figure 27, the results begin to stabilize from iteration K = 12, so 3 more iterations are performed, reaching K = 15.



Figure 27: Behavior of results for K=15 iterations.


The final average of these tests, corresponding to an SCL15 system, can be seen in Table 7.


Table 7: Average result of 15 iterations of the Random Cross-Validation (experiment III).
Metrics     | Rate
------------|--------
Sensitivity | 95.12%
Specificity | 79.58%
Accuracy    | 91.67%

4.5.1 Assembled System - Experimental Test IV

This section describes the process of selecting the best assembled system. Algorithm 3, shown next, performs e = 1, 2, 3, ..., μ iterations; each assembled system SCLe, obtained through the routine SSVM, is evaluated by the accuracy metric. The assembled system selected is the one that obtains the greatest accuracy.


Algorithm 3: Selection of Assembled System.
INPUT:
OUTPUT: Best Classification System SCL
1:  Initialize μ
2:  for e ← 1 to μ do
3:      SCLe = SSVM()
4:      exae = Accuracy(SCLe)
5:  end for
6:  best = ArgMax(exa)
7:  SCL = SCLbest
8:  return SCL

For this specific problem, μ = 30 was used; from the 30 iterations, the assembled system used in experiment III was selected.

At operation time, an assembled system consisting of K classifiers obtains the final diagnostic by means of a simple majority vote, since the output of each classifier CLz is a discrete value.
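The simple-vote rule can be sketched as follows, assuming each of the K classifiers outputs 1 for malignant and 0 for benign (an odd K, such as the 15 used here, avoids ties):

```python
import numpy as np

def majority_vote(votes):
    """Final diagnosis of the assembled system: 1 (malignant) if more than
    half of the K discrete classifier outputs are 1, else 0 (benign)."""
    votes = np.asarray(votes)
    return int(np.sum(votes) > votes.size / 2)
```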

The results of the assembled system of experiment III at operation time are shown in Table 8.


Table 8: Results of the Assembled System of Experiment III
Metrics     | Rate
------------|--------
Sensitivity | 96.05%
Specificity | 96.43%
Accuracy    | 96.15%

Notice the good results obtained; however, they can be misleading, since all the data is used to train the classifiers. For this reason, in step A a subset of the total data is isolated for the validation of the assembled system, and step B takes the rest of the set for training and testing, based on the random cross-validation method. This experiment is performed with two different data sets: the first uses the ideal segmentation made by the professional, and the second uses the proposed segmentation; both results are shown in Table 9.


Table 9: Results of the Assembled System of Experiment IV
Assembled System      | Sensitivity | Specificity | Accuracy
----------------------|-------------|-------------|---------
Ideal Segmentation    |     95%     |   91.67%    |  93.75%
Proposed Segmentation |     95%     |   83.33%    |  90.63%

4.5.2 Experimental test V

This experiment is carried out in order to observe the impact of a non-ideal segmentation on the system. It consists in building a classifier based on images with ideal segmentation (generated by the medical professional); all the images are then used as test data, but they follow the segmentation process proposed in this work. The results obtained are shown in Table 10.


Table 10: Results of Experiment V
Classifier | Sensitivity | Specificity | Accuracy
-----------|-------------|-------------|---------
CLideal    |   93.42%    |   89.28%    |  92.30%

5 Discussion of Results

This section presents the discussions on the results obtained from the experimental tests of the segmentation and classification modules.

5.0.1 Segmentation

It must be taken into account that segmentation can be affected by the method of hair extraction used in each work, which could lead to unfavorable or advantageous results.

The segmentation performed on the same bank of images by means of the proposed method obtained 97% accuracy, as shown in Table 11; likewise, it can be observed that the results of [47] were favorable but below those of the proposal. This difference could be due to the hair extraction method used in each work, hence the importance of a good method for hair extraction.


Table 11: Comparative Table of the Results of the Segmentation
Method                | Accuracy | Sensitivity | Specificity
----------------------|----------|-------------|------------
Torres [47]           |   90%    |      -      |      -
Proposed segmentation |   97%    |     85%     |     99%

5.0.2 Classification

Prior to the discussion of the results, it is important to clarify that the image databases used in this work and in the state-of-the-art works are different, because there are no public databases, due to the confidentiality kept between doctors and their patients. As for reproducing those works, it could not be done because of the scarce information provided by the authors; in most cases, the main obstacle was the lack of information on the methods used for the extraction of hairs. It could also be observed that the characteristics used in those works require knowledge of the acquisition protocol followed for image taking, information that is not available, which is why features such as the area and the diameter of the lesion could not be used. For these reasons, the proposed scheme cannot be directly compared with the state-of-the-art approaches; it can only be stated that the results obtained in this work are well positioned, as shown in Table 12.


Table 12: Comparative Table of the Results of Classification Systems

Work               | Segmentation    | Feature extraction             | Classifier                     | Sensitivity | Specificity | Accuracy
-------------------|-----------------|--------------------------------|--------------------------------|-------------|-------------|---------
Batugo [1]         | Clustering      | ABCD                           | Linear discriminant classifier |   79.31%    |   71.83%    |    -
Oliveira [27]      | Chan-Vese       | ABC-T                          | SVM                            |   73.81%    |   76.67%    |    -
Rahman et al. [30] | Fuzzy C-means   | Histogram of Color and Texture | SE (K-NN, SVM, GML)            |   83.75%    |    62.5%    |    -
Alcon et al. [34]  | Otsu            | ABCD                           | Logistic Linear Regression     |     94%     |     68%     |   86%
Celebi et al. [36] | Otsu            | Area of the lesion, texture    | SVM                            |   93.33%    |   92.34%    |    -
Proposed approach  | Maximum Entropy | ABCD                           | SE (15 SVMs)                   |   95.12%    |   79.58%    |  91.67%

The holdout method was used in experiment I, where it can be observed that there is a class imbalance between benign and malignant lesions, the latter being the prevailing class. In such cases, most classifiers focus on learning from the majority class, resulting in poor classification accuracy for the minority class. This is verified in the results obtained in this experiment, with specificity lower than sensitivity; it is worth mentioning that the result obtained in sensitivity is crucial, because the proper detection of malignant lesions is a priority.

With the aim of improving specificity, experiment II was conducted, whose results were not favorable, even degrading sensitivity and showing the effect of insufficient data. This happened because the division into training and test groups left few images to train each classifier, which in turn led to experiment III.

Encouraging results were obtained in experiment III, with a slight improvement in specificity and without sacrificing sensitivity; this is the effect of using more benign lesions during training.

With the aim of providing robustness to the proposed method, an assembled system consisting of 15 classifiers was used. First, the assembled system of experiment III was formed, but it was concluded that its results suggest over-training, which could lead to poor performance on new samples.

For this reason, experiment IV was conducted, where the results are satisfactory for an unknown validation group.

Taking the sensitivity and specificity of dermoscopy as a basis, the proposed scheme is valid, since it reached these metrics, which constitute the general objective of this work.

It can be observed in experiment V that a non-ideal segmentation degrades the performance of the classifier built with all the ideal images, with a more negative impact on specificity than on sensitivity. From this, we conclude that work must continue on improving the segmentation algorithm so that it gets as close as possible to the ideal.

6 Conclusions and Future Work

A set of image processing techniques was used to obtain the features of the ABCD rule, widely used in dermatology, and use them as input to a classifier. This was done in order to give a diagnosis of skin cancer, specifically melanoma. Several classification tests were conducted in order to evaluate performance and obtain results acceptable to dermatology experts. According to the performance metrics, very favourable results were obtained, taking into account the state-of-the-art approaches. However, for a fair comparison the same database should be used, which was not possible due to the lack of public databases. In dermatology, a tool is considered valid only if it reaches the performance of diagnosis using dermatoscopy, so the proposed scheme constitutes a valid tool for the dermatologist. The proposed scheme complies with the outlined objectives, both general and specific. The tool has been developed to assist dermatologists, not to replace them; it is intended to contribute to an objective diagnosis, independent of the professional's experience.

As future work, the authors propose: to carry out the feature extraction using another rule, such as pattern analysis [21] or the Menzies method [22]; to build a tool that points out each of the features found in the lesion and displays them as information for the medical professional; to implement a method for the extraction of hairs with lower computational cost, together with metrics to measure its performance; and to run tests with a standard database with a defined acquisition protocol, so that characteristics like the area and the compactness of the lesion can be included.

References

[1]   G. A. Batugo, “Reconocimiento automático de melanomas mediante técnicas de visión por ordenador y reconocimiento de patrones,” Universidad Carlos III de Madrid, Tech. Rep., 2013.

[2]   A. Sboner, C. Eccher, E. Blanzieri, P. Bauer, M. Cristofodolini, G.Zumiani, and S.Forti, “A multiple classifier system for early melanoma diagnosis,” Artificial Intelligence in Medicine, vol. 27, no. 1, pp. 29–44, 2003. [Online]. Available: http://dx.doi.org/10.1016/S0933-3657(02)00087-8

[3]   P. Braun, H. Rabinovitz, M. Oliviero, A. Kopf, and J. Saurat, “Dermoscopy of pigmented skin lesions,” Journal of the American Academy of Dermatology, vol. 48, no. 5, pp. 679–693, may 2003. [Online]. Available: http://dx.doi.org/10.1016/j.jaad.2001.11.001

[4]   Parikh and Hitesh, “A survey on computer vision based diagnosis for skin lesion detection,” International Journal of Engineering Science and Innovative Technology, vol. 2, no. 2, pp. 431–437, 2013.

[5]   P. Zaballos, C. Carrera, S. Puig, and J.Malvehy, “Criterios dermatoscópicos para el diagnóstico del melanoma,” Medigraphic, vol. 32, 2004.

[6]   La Organización Mundial de la Salud desaconseja el uso de camas solares a las personas menores de 18 años. http://www.who.int/mediacentre/news/notes/2005/np07/es/. Accessed: 2016-06-27.

[7]   P. Ramos, F. Cañete, R. Dullak, L. Bolla, N. Centurión, A. Centurión, S. Chamorro, A. Chaparro, and F. Chaves, “Epidemiología del cáncer de piel en pacientes atendidos en la cátedra de dermatología de la facultad de ciencias médicas de la universidad nacional de asunción, paraguay (2008-2011),” ANALES de la Facultad de Ciencias Médicas, vol. 45, no. 2, pp. 49–69, 2012.

[8]   O. Blandom, “A support tool for melanoma diagnosis by using dermoscopy images,” Master’s thesis, Universidad Nacional de Colombia, Manizales, Colombia, 2010.

[9]   H. Zhou, G. Schaefer, M. Celebi, H. Iyatomi, K. Norton, and F. L. T. Liu, “Skin lesion segmentation using an improved snake model,” Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, 2010. [Online]. Available: http://dx.doi.org/10.1109/IEMBS.2010.5627556

[10]   H. Ganster, A. Pinz, R. Rohrer, E. Wildling, M. Binder, and H. Kittler, “Automated melanoma recognition,” Medical Imaging, IEEE Transactions on, vol. 20, no. 3, pp. 233–239, 2001. [Online]. Available: http://dx.doi.org/10.1109/42.918473

[11]   A. Lifshitz. Llega Zelboraf: una medicina nueva contra el melanoma, un cáncer de la piel. http://www.vidaysalud.com/diario/cancer/llega-zelboraf-una-medicina-nueva-contra-el-melanoma-un-cancer-de-la-piel/. Accessed: 2016-06-27.

[12]   Melanoma. http://www.dmedicina.com/enfermedades/cancer/melanoma. Accessed: 2016-06-27.

[13]   R. J. Friedman, D. S. Rigel, A. W. Kopf, and D. Polsky. Melanoma. http://cancerdepiel.org/cancer-de-piel/melanoma. Accessed: 2016-06-27.

[14]   M. Fossati, C. Guebenlián, M. Restano, and A. Wolf, “Lesiones melanocíticas,” Dermatoscopía Elemental, 2013.

[15]   H. P. Soyer, R. Hofmann-Wellenhof, R. H. Johr et al., Color atlas of melanocytic lesions of the skin. Springer, 2007.

[16]   M. de Troya Martína, N. B. Sáncheza, I. F. Canedoa, M. F. Eliceguia, R. F. Liébanab, and F. R. Ruizc, “Estudio dermoscópico del melanoma maligno cutáneo: análisis descriptivo de 45 casos,” vol. 99, no. 1, pp. 44–53, 2008.

[17]   I. Stanganelli. Dermoscopy. http://emedicine.medscape.com/article/1130783-overview. Accessed: 2016-06-27.

[18]   G. Argenziano, H. Soyer, S. Chimenti, and G. Ruocco, “Dermoscopy of pigmented skin lesions,” European Journal of Dermatology, vol. 11, no. 3, pp. 270–277, may 2002.

[19]   Malvehy and Puig, “Principios de dermatoscopía,” 2002.

[20]   A. Bazzano, M. Cueto, C. García, and J. Pérez, “Lesiones no melanocíticas,” Dermatoscopía Elemental, 2013.

[21]   I. Zalaudek, G. Argenziano, H. Soyer, R. Corona, F. Sera, A. Blum, R. Braun, H. Cabo, G. Ferrara, A. Kopf et al., “Three-point checklist of dermoscopy: an open internet study,” British journal of dermatology, vol. 154, no. 3, pp. 431–437, 2006. [Online]. Available: http://dx.doi.org/10.1111/j.1365-2133.2005.06983.x

[22]   S. W. Menzies, C. Ingvar, and W. H. McCarthy, “A sensitivity and specificity analysis of the surface microscopy features of invasive melanoma,” vol. 6, no. 1, pp. 55–62, 1996.

[23]   G. Argenziano, G. Fabbrocini, P. Carli, V. D. Giorgi, E. Sammarco, and M. Delfino, “Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions comparison of the abcd rule of dermatoscopy and a new 7-point checklist based on pattern analysis,” JAMA Dermatology, vol. 134, no. 12, pp. 1563–1570, December 1998.

[24]   Stolz, W. Riemann, A. Cognetta, A. Pillet, L. Abmayr, and W. Hoelzel, “Abcd rule of dermatoscopy: a new practical method for early recognition of malignant melanoma.” European Journal of Dermatology, vol. 4, no. 7, pp. 521–527, 1994.

[25]   R. Montero, A. H. Abúndez, and A. Zamarón, “Descripción de la asimetriá de neoplasias en piel utilizando el concepto de compacidad,” IX Congreso Internacional sobre Innovación y Desarrollo Tecnológico CIINDET 2011, Cuernavaca Morelos, México, 2011.

[26]   C. Pérez López, “Métodos estadísticos avanzados con spss,” Thompson. Madrid, 2005.

[27]   R. B. Oliveira, “Método de detecção e classificação de lesões de pele em imagens digitais a partir do modelo chan-vese e máquina de vetor de suporte,” Universidade Estadual Paulista Júlio de Mesquita Filho, Tech. Rep., 2012.

[28]   C. A. Z. Barcelos, M. Boaventura, and E. C. Silva Jr, “A well-balanced flow equation for noise removal and edge detection,” Image Processing, IEEE Transactions on, vol. 12, no. 7, pp. 751–763, 2003. [Online]. Available: http://dx.doi.org/10.1109/TIP.2003.814242

[29]   C. Cortes and V. Vapnik, “Support-vector networks,” Machine learning, vol. 20, pp. 273–297, 1995. [Online]. Available: http://dx.doi.org/10.1007/BF00994018

[30]   M. Rahman, P. Bhattacharya, and B. Desai, “A multiple expert-based melanoma recognition system for dermoscopic images of pigmented skin lesions,” in BioInformatics and BioEngineering, 2008. BIBE 2008. 8th IEEE International Conference on. IEEE, 2008, pp. 1–6. [Online]. Available: http://dx.doi.org/10.1109/BIBE.2008.4696799

[31]   J. C. Bezdek, R. Ehrlich, and W. Full, “Fcm: The fuzzy c-means clustering algorithm,” Computers & Geosciences, vol. 10, no. 2, pp. 191–203, 1984. [Online]. Available: http://dx.doi.org/10.1016/0098-3004(84)90020-7

[32]   D. S. Stoffer and K. D. Wall, “Bootstrapping state-space models: Gaussian maximum likelihood estimation and the kalman filter,” Journal of the american statistical association, vol. 86, no. 416, pp. 1024–1033, 1991. [Online]. Available: http://dx.doi.org/10.1080/01621459.1991.10475148

[33]   M. Steinbach and P.-N. Tan, “knn: k-nearest neighbors,” The top ten algorithms in data mining, pp. 151–162, 2009. [Online]. Available: http://dx.doi.org/10.1201/9781420089653.ch8

[34]   J. Alcon, C. Ciuhu, W. Kate, A. Heinrich, and N. Uzunbajakava, “Automatic imaging system with decision support for inspection of pigmented skin lesions and melanoma diagnosis,” IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 1, pp. 14–25, February 2009. [Online]. Available: http://dx.doi.org/10.1109/JSTSP.2008.2011156

[35]   N. Otsu, “A threshold selection method from gray-level histograms,” Automatica, vol. 11, no. 285-296, pp. 23–27, 1975.

[36]   M. E. Celebi, H. A. Kingravi, B. Uddin, H. Iyatomi, Y. A. Aslandogan, W. V. Stoecker, and R. H. Moss, “A methodological approach to the classification of dermoscopy images,” Computerized Medical Imaging and Graphics, vol. 31, no. 6, pp. 362–373, 2007.

[37]   G. Capdehourat, A. Corez, A. Bazzano, R. Alonso, and P. Musé, “Toward a combined tool to assist dermatologists in melanoma detection from dermoscopic images of pigmented skin lesions,” Pattern Recognition Letters, vol. 32, no. 16, pp. 2187–2196, 2011. [Online]. Available: http://dx.doi.org/10.1016/j.patrec.2011.06.015

[38]   M. Fornaciali, S. Avila, M. Carvalho, and E. Valle, “Statistical learning approach for robust melanoma screening,” in Proceedings of the 2014 27th SIBGRAPI Conference on Graphics, Patterns and Images. IEEE Computer Society, 2014, pp. 319–326. [Online]. Available: http://dx.doi.org/10.1109/SIBGRAPI.2014.48

[39]   L. Y. Mera-González, J. A. Delgado-Atencio, J. C. Valdiviezo-Navarro, and M. Cunill-Rodríguez, “An algorithm for the characterization of digital images of pigmented lesions of human skin,” in SPIE Optical Engineering+ Applications. International Society for Optics and Photonics, 2014, pp. 921 718–921 718.

[40]   Q. Abbas, M. Celebi, C. Serrano, I. F. García, and G. Ma, “Pattern classification of dermoscopy images: A perceptually uniform model,” Pattern Recognition, vol. 46, no. 1, pp. 86–97, 2013.

[41]   R. Kaur, P. Albano, J. Cole, J. Hagerty, R. LeAnder, R. Moss, and W. Stoecker, “Real-time supervised detection of pink areas in dermoscopic images of melanoma: importance of color shades, texture and location,” Skin Research and Technology, 2015. [Online]. Available: http://dx.doi.org/10.1111/srt.12216

[42]   C. Barata, M. Ruela, M. Francisco, T. Mendonça, and J. Marques, “Two systems for the detection of melanomas in dermoscopy images using texture and color features,” IEEE Syst. J, pp. 1–15, 2013. [Online]. Available: http://dx.doi.org/10.1109/JSYST.2013.2271540

[43]   J. Scharcanski and M. E. Celebi, Computer vision techniques for the diagnosis of skin cancer. Springer, 2014. [Online]. Available: http://dx.doi.org/10.1007/978-3-642-39608-3

[44]   A. Criminisi, P. Pérez, and K. Toyama, “Region filling and object removal by exemplar-based image inpainting,” Image Processing, IEEE Transactions on, vol. 13, no. 9, pp. 1200–1212, 2004. [Online]. Available: http://dx.doi.org/10.1109/TIP.2004.833105

[45]   C. A. Cattaneo, L. I. Larchera, A. I. Ruggerib, A. C. Herreraa, and E. M. Biasoni, “Métodos de umbralización de imágenes digitales basados en entropia de shannon y otros,” Machine learning, vol. 20, no. 3, pp. 2785–2805, 2011.

[46]   R. C. Gonzalez and R. E. Woods, “Digital imaging processing,” Massachusetts: Addison-Wesley, 1992.

[47]   W. Torres, M. Landrove, M. Torreyes, and M. López, “Segmentación de imágenes dermatoscopicas en el espacio cielab utilizando filtros morfologicos SML,” in XII Congreso Internacional de Métodos Numéricos en Ingeniería y Ciencias Aplicadas, 2014.

[48]   Carrasco, Z., “Segmentación de Fallas en Soldaduras Utilizando Técnicas de Procesamiento Digital de Imágenes,” Universidad de Santiago de Chile, Facultad de Ingeniería, Master’s thesis, 2003.

[49]   M. Silveira, J. C. Nascimento, J. S. Marques, A. R. Marçal, T. Mendonça, S. Yamauchi, J. Maeda, and J. Rozeira, “Comparison of segmentation methods for melanoma diagnosis in dermoscopy images,” Selected Topics in Signal Processing, IEEE Journal of, vol. 3, no. 1, pp. 35–45, 2009. [Online]. Available: http://dx.doi.org/10.1109/JSTSP.2008.2011119
