<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>0717-5000</journal-id>
<journal-title><![CDATA[CLEI Electronic Journal]]></journal-title>
<abbrev-journal-title><![CDATA[CLEIej]]></abbrev-journal-title>
<issn>0717-5000</issn>
<publisher>
<publisher-name><![CDATA[Centro Latinoamericano de Estudios en Informática]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S0717-50002016000100007</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[Combining Leaf Shape and Texture for Costa Rican Plant Species Identification]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Carranza-Rojas]]></surname>
<given-names><![CDATA[Jose]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Mata-Montero]]></surname>
<given-names><![CDATA[Erick]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Costa Rica Institute of Technology, Computer Engineering Department]]></institution>
<addr-line><![CDATA[ ]]></addr-line>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>04</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>04</month>
<year>2016</year>
</pub-date>
<volume>19</volume>
<numero>1</numero>
<fpage>7</fpage>
<lpage>7</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.edu.uy/scielo.php?script=sci_arttext&amp;pid=S0717-50002016000100007&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.edu.uy/scielo.php?script=sci_abstract&amp;pid=S0717-50002016000100007&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.edu.uy/scielo.php?script=sci_pdf&amp;pid=S0717-50002016000100007&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="en"><p><![CDATA[In the last decade, research in Computer Vision has developed several algorithms to help botanists and non-experts classify plants based on images of their leaves. LeafSnap is a mobile application that uses a multiscale curvature model of the leaf margin to classify leaf images into species. It has achieved high levels of accuracy on 184 tree species from the Northeast US. We extend the research that led to the development of LeafSnap along two lines. First, LeafSnap&#8217;s underlying algorithms are applied to a set of 66 tree species from Costa Rica. Then, texture is used as an additional criterion to measure the level of improvement achieved in the automatic identification of Costa Rican tree species. A 25.6% improvement was achieved for a Costa Rican clean image dataset and 42.5% for a Costa Rican noisy image dataset. In both cases, our results show this increment to be statistically significant. Further statistical analysis of the impact of visual noise, the best algorithm combinations per species, and the best value of k, the minimal cardinality of the set of candidate species that the tested algorithms render as best matches, is also presented in this research.]]></p></abstract>
<abstract abstract-type="short" xml:lang="es"><p><![CDATA[Abstract in Spanish: En la última década, la investigación en Visión por Computadora ha desarrollado algoritmos para ayudar a botánicos e inexpertos a clasificar plantas basándose en fotos de sus hojas. LeafSnap es una aplicación móvil que usa un modelo de curvatura con multi-escala del margen de la hoja para clasificar especies de plantas. Ha obtenido altos niveles de exactitud con 184 especies de árboles del noreste de Estados Unidos. Extendemos la investigación de LeafSnap en dos aristas. Primero, se aplican los algoritmos de LeafSnap en un set de datos de 66 especies de árboles de Costa Rica. Luego, la textura es usada como criterio adicional para medir el nivel de mejora en la detección automática de especies de árboles de Costa Rica. Una mejora de un 25.6% se logra con el set de datos limpio y un 42.5% para el set de datos sucio de Costa Rica. En ambos casos, los resultados muestran un incremento significativo en la exactitud del modelo. Además se presenta un análisis estadístico del impacto visual del ruido, las mejores combinaciones de algoritmos por especie, y el mejor valor de k, que es la cardinalidad mínima del set de especies candidatas que surgen como respuesta de identificación.]]></p></abstract>
<kwd-group>
<kwd lng="en"><![CDATA[Biodiversity Informatics]]></kwd>
<kwd lng="en"><![CDATA[Computer Vision]]></kwd>
<kwd lng="en"><![CDATA[Image Processing]]></kwd>
<kwd lng="en"><![CDATA[Leaf Recognition]]></kwd>
<kwd lng="es"><![CDATA[Informática para la Biodiversidad]]></kwd>
<kwd lng="es"><![CDATA[Visión por Computadora]]></kwd>
<kwd lng="es"><![CDATA[Procesamiento de Imágenes]]></kwd>
<kwd lng="es"><![CDATA[Reconocimiento con Hojas]]></kwd>
<kwd lng="es"><![CDATA[Reconocimiento de Especies]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[ <div class="maketitle">                                                                                                                                                                                                                                                                                                                                                                          <h2 class="titleHead" style="font-size:14pt">Combining Leaf Shape and Texture for Costa Rican Plant Species Identification</h2>                <div class="author" > <span  class="cmbx-12">Carranza-Rojas, Jose</span>     <br>      <span  class="cmr-12">Computer Engineering Department</span>     <br>      <span  class="cmr-12">Costa Rica Institute of Technology</span>     <br> <span  class="cmti-12"><a href="mailto:jcarranza@itcr.ac.cr">jcarranza@itcr.ac.cr</a> </span><br class="and"><span  class="cmbx-12">Mata-Montero, Erick</span>     <br>      <span  class="cmr-12">Computer Engineering Department</span>     <br>      <span  class="cmr-12">Costa Rica Institute of Technology</span>     <br>               <span  class="cmti-12"><a href="mailto:emata@itcr.ac.cr">emata@itcr.ac.cr</a> </span></div>    <br>     <div class="date" ></div>    </div>        ]]></body>
<body><![CDATA[<div  class="abstract"  >     <div class="center"  > <!--l. 80-->    <p class="noindent" > <div class="minipage">    <div class="center"  > <!--l. 80-->    <p class="noindent" > <!--l. 80-->    <p class="noindent" ><span  class="cmbx-10">Abstract</span></div> <!--l. 81-->    <p class="noindent" >In the last decade, research in Computer Vision has developed several algorithms to help botanists and non-experts classify plants based on images of their leaves. LeafSnap is a mobile application that uses a multiscale curvature model of the leaf margin to classify leaf images into species. It has achieved high levels of accuracy on 184 tree species from the Northeast US. We extend the research that led to the development of LeafSnap along two lines. First, LeafSnap&#8217;s underlying algorithms are applied to a set of 66 tree species from Costa Rica. Then, texture is used as an additional criterion to measure the level of improvement achieved in the automatic identification of Costa Rican tree species. A 25.6% improvement was achieved for a Costa Rican clean image dataset and 42.5% for a Costa Rican noisy image dataset. In both cases, our results show this increment to be statistically significant. Further statistical analysis of the impact of visual noise, the best algorithm combinations per species, and the best value of <img  src="/img/revistas/cleiej/v19n1/1a070x.png" alt="k  "  class="math" >, the minimal cardinality of the set of candidate species that the tested algorithms render as best matches, is also presented in this research. <!--l. 84-->    <p class="noindent" >Abstract in Spanish:<br  class="newline">En la última década, la investigación en Visión por Computadora ha desarrollado algoritmos para ayudar a botánicos e inexpertos a clasificar plantas basándose en fotos de sus hojas. 
LeafSnap es una aplicación móvil que usa un modelo de curvatura con multi-escala del margen de la hoja para clasificar especies de plantas. Ha obtenido altos niveles de exactitud con 184 especies de árboles del noreste de Estados Unidos. Extendemos la investigación de LeafSnap en dos aristas. Primero, se aplican los algoritmos de LeafSnap en un set de datos de 66 especies de árboles de Costa Rica. Luego, la textura es usada como criterio adicional para medir el nivel de mejora en la detección automática de especies de árboles de Costa Rica. Una mejora de un 25.6% se logra con el set de datos limpio y un 42.5% para el set de datos sucio de Costa Rica. En ambos casos, los resultados muestran un incremento significativo en la exactitud del modelo. Además se presenta un análisis estadístico del impacto visual del ruido, las mejores combinaciones de algoritmos por especie, y el mejor valor de k, que es la cardinalidad mínima del set de especies candidatas que surgen como respuesta de identificación.</div></div> </div> <!--l. 88-->    <p class="noindent" ><span  class="cmbx-10">Keywords: </span>Biodiversity Informatics, Computer Vision, Image Processing, Leaf Recognition<br  class="newline">Keywords in Spanish: Informática para la Biodiversidad, Visión por Computadora, Procesamiento de Imágenes, Reconocimiento con Hojas, Reconocimiento de Especies<br  class="newline">Received: 2015-10-31 Revised: 2016-03-23 Accepted: 2016-04-11<br  class="newline">DOI: <a  href="http://dx.doi.org/10.19153/cleiej.19.1.7" class="url" ><span  class="cmtt-10">http://dx.doi.org/10.19153/cleiej.19.1.7</span></a>    <h3 class="sectionHead"><span class="titlemark">1   </span> <a   id="x1-10001"></a>Introduction</h3> <!--l. 96-->    ]]></body>
<body><![CDATA[<p class="noindent" >Plant species identification is fundamental to conduct studies of biodiversity richness of a region, inventories, monitoring of populations of endangered plants and animals, climate change impact on forest coverage, bioliteracy, invasive species distribution modelling, payment for environmental services, and weed control, among many other major challenges for biodiversity conservation. Unfortunately, the traditional approach used by taxonomists to identify species is tedious, inefficient, and error-prone <span class="cite">[<a  href="#Ximpediment">1</a><a id="br1">]</a></span>. In addition, it seriously limits public access to this knowledge and public participation as, for instance, citizen scientists. In spite of enormous progress in the application of computer vision algorithms in other areas such as medical imaging, OCR, and biometrics <span class="cite">[<a  href="#Xjournals/cviu/AndreopoulosT13">2</a><a id="br2">]</a></span>, only recently have they been applied to identify organisms. In the last decade, research in Computer Vision has produced algorithms to help botanists and non-experts classify plants based on images of their leaves <span class="cite">[<a  href="#Xlagerwall">3</a><a id="br3">,</a>&#x00A0;<a  href="#XRashad">4</a><a id="br4">,</a>&#x00A0;<a  href="#XKumar">5</a><a id="br5">,</a>&#x00A0;<a  href="#XWUFlavia">6</a><a id="br6">,</a>&#x00A0;<a  href="#XSumathi">7</a><a id="br7">,</a>&#x00A0;<a  href="#XTAR4630">8</a><a id="br8">,</a>&#x00A0;<a  href="#XArun">9</a><a id="br9">,</a>&#x00A0;<a  href="#XAggarwal">10</a><a id="br10">,</a>&#x00A0;<a  href="#XYeni">11</a><a id="br11">,</a>&#x00A0;<a  href="#XKadir">12</a><a id="br12">]</a></span>. However, only a few studies have resulted in efficient systems that are used by the general public, such as <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>. 
The most popular system to date is LeafSnap <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>. It is considered a state-of-the-art mobile leaf recognition application that uses an efficient multiscale curvature model to classify leaf images into species. LeafSnap was applied to 184 tree species from the Northeast USA, resulting in a very accurate species recognition method for that region. It has been downloaded by more than 1 million users <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>. LeafSnap has not been applied to identify trees from tropical countries such as Costa Rica. The challenge of recognizing tree species in biodiversity-rich regions is expected to be considerably greater. <!--l. 99-->    <p class="indent" >   Vein analysis is an important, discriminative element for species recognition that has been used in several studies such as <span class="cite">[<a  href="#XClarke">14</a><a id="br14">,</a>&#x00A0;<a  href="#XLarese">15</a><a id="br15">,</a>&#x00A0;<a  href="#XLee">16</a><a id="br16">,</a>&#x00A0;<a  href="#XLeeShape">17</a><a id="br17">,</a>&#x00A0;<a  href="#XVein3D">18</a><a id="br18">]</a></span>. According to Nelson Zamora, curator of the herbarium at the National Biodiversity Institute (INBio), venation is as important as the curvature of the margin of the leaf when classifying plant species in Costa Rica <span class="cite">[<a  href="#XNelson">19</a><a id="br19">]</a></span>. <!--l. 101-->    <p class="indent" >   This paper focuses on studying the accuracy of a leaf recognition model based not only on the curvature of the leaf margin, but also on its texture (in which veins are visually very important). This is the first attempt to create such a model for Costa Rican plant species. <!--l. 
103-->    <p class="indent" >   The rest of this manuscript is organized as follows: Section <a  href="#x1-20002">2<!--tex4ht:ref: RW_Sec --></a> presents relevant related work. Section <a  href="#x1-30003">3<!--tex4ht:ref: chap:metodo --></a> and Section <a  href="#x1-260004">4<!--tex4ht:ref: chap:experiments --></a> cover methodological aspects and experiment design, respectively. Section <a  href="#x1-330005">5<!--tex4ht:ref: chap:results --></a> describes the results obtained. Section <a  href="#x1-520006">6<!--tex4ht:ref: sec:conclusions --></a> presents conclusions and, finally, Section <a  href="#x1-530007">7<!--tex4ht:ref: sec:future --></a> summarizes future work. <!--l. 105-->    <p class="noindent" >    <h3 class="sectionHead"><span class="titlemark">2   </span> <a   id="x1-20002"></a>Related Work</h3> <!--l. 107-->    <p class="noindent" >In LeafSnap <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>, the authors create a leaf classification method based on unimodal curvature features and similarity search using k-Nearest Neighbors (kNN). The method is tested on an image dataset of North American trees comprising 184 species in total. Since their system requires images to have a uniform background, leaf segmentation works by estimating the foreground and background color distributions and then classifying each pixel into one of the two categories. A conversion to the Hue Saturation Value (HSV) color domain is applied before using Expectation-Maximization (EM) <span class="cite">[<a  href="#XEM">20</a><a id="br20">]</a></span> for the leaf segmentation. An accuracy of 96.8% is reported by the authors on their dataset with <img  src="/img/revistas/cleiej/v19n1/1a071x.png" alt="k = 5  "  class="math" >. <!--l. 
109-->    <p class="indent" >   Researchers in <span class="cite">[<a  href="#XHerdiyeni">21</a><a id="br21">]</a></span> use Local Binary Patterns (LBP) features to classify medicinal and house plants from Indonesia. They extract LBP descriptors at different sample points and radii, calculate a histogram for each radius-length feature set, and concatenate those histograms, similarly to the Histogram of Curvature over Scale (HCoS) of LeafSnap <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>. As a classifier, a four-layer Probabilistic Neural Network (PNN) is used. Their dataset consists of two subsets; one comprises 1,440 images of 30 species of tropical plants, and the other has 300 images of 30 house plant species. The image background of the medicinal plants is uniform, while house plant images have non-uniform backgrounds. The reported precision is 77% for medicinal plants and 86.67% for house plants, suggesting that LBP is a suitable technique for images with complex backgrounds. <!--l. 111-->    <p class="indent" >   The authors of <span class="cite">[<a  href="#XNguyen">22</a><a id="br22">]</a></span> use Speeded Up Robust Features (SURF) to develop an Android application for mobile leaf recognition. For the species classification task, SURF features are extracted from the gray-scale image of the leaf. The feature set is summarized as histograms to reduce dimensionality, since the resulting SURF feature vector may be very large. The reported precision is 95.94% on the Flavia dataset <span class="cite">[<a  href="#XWUFlavia">6</a><a id="br6">]</a></span>, which consists of 3,621 leaf images of 32 species. <!--l. 113-->    <p class="noindent" >    <h3 class="sectionHead"><span class="titlemark">3   </span> <a   id="x1-30003"></a>Methodology</h3> <!--l. 116-->    <p class="noindent" >This section describes how the leaf recognition process was set up. 
Section <a  href="#x1-40003.1">3.1<!--tex4ht:ref: sec:datasets --></a> describes the image datasets used. Section <a  href="#x1-70003.2">3.2<!--tex4ht:ref: sec:segmentation --></a> summarizes the techniques used to segment each image into leaf and non-leaf pixel clusters. Section <a  href="#x1-120003.3">3.3<!--tex4ht:ref: sec:enhancements --></a> presents several image enhancements, such as removal of undesired artifacts and elements, stem removal, clipping, and resizing. Section <a  href="#x1-170003.4">3.4<!--tex4ht:ref: sec:features --></a> describes the feature extraction approach for both the curvature and the texture model. Finally, Section <a  href="#x1-230003.5">3.5<!--tex4ht:ref: sec:classification --></a> presents the species classification metrics and algorithms used in this research. <!--l. 118-->    ]]></body>
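<body><![CDATA[<p class="noindent" >To make the structure of the methodology concrete, the following Python sketch chains the four stages just listed (segmentation, enhancement, feature extraction, classification). It is illustrative only: every function in it is a hypothetical stand-in of our own, not the paper's method; the real system uses EM segmentation, automated enhancements, curvature (HCoS) and texture features, and the classification scheme of Section 3.5.

```python
import numpy as np

# Illustrative skeleton of the four-stage pipeline described in this
# section. Every function is a hypothetical stand-in: the real system
# uses EM segmentation (Section 3.2), automated enhancements
# (Section 3.3), curvature and texture features (Section 3.4), and the
# classification scheme of Section 3.5.

def segment(img):
    """Stand-in for EM segmentation: dark pixels become leaf (1)."""
    return (img < 0.5).astype(np.uint8)

def clip_to_leaf(mask):
    """Stand-in enhancement: clip the mask to the leaf bounding box."""
    ys, xs = np.nonzero(mask)
    return mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def extract_features(mask, bins=8):
    """Stand-in feature vector: normalized histogram of row occupancy."""
    hist, _ = np.histogram(mask.mean(axis=1), bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def identify(img, gallery):
    """Chain the stages; gallery holds (species, feature-vector) pairs."""
    feats = extract_features(clip_to_leaf(segment(img)))
    return min(gallery, key=lambda sf: np.linalg.norm(feats - sf[1]))[0]
```

Each stand-in would be replaced by the corresponding technique described in the following subsections.]]></body>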
<body><![CDATA[<p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">3.1   </span> <a   id="x1-40003.1"></a>Image Datasets</h4> <!--l. 121-->    <p class="noindent" >An image dataset of leaves from Costa Rica was created from scratch. To our knowledge, no other suitable Costa Rican datasets existed before. The dataset has both clean and noisy images, in order to identify how the amount of noise affects the algorithms. All images were captured mainly at two places: La Sabana Park, located in San Jose, and INBiopark, located in Santo Domingo, Heredia. In most cases, images of both surfaces of each leaf were taken. The dataset includes endemic species of Costa Rica and threatened species according to <span class="cite">[<a  href="#XNelson">19</a><a id="br19">]</a></span>. The complete list of species in the dataset can be found in <span class="cite">[<a  href="#XCarranza:Thesis:2014">23</a><a id="br23">]</a></span>. The dataset consists of the following two subsets: <!--l. 123-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-50003.1"></a><span  class="cmbx-10">Clean Subset</span></span>    Fresh leaf images were captured during field trips to both La Sabana and INBiopark. If the leaves were not flat enough, a press was used to flatten them for 24 hours. A total of 1,468 leaf images were scanned. The images have a white uniform background and a size of 2548x3300 pixels, scanned at 300 dpi in JPEG format. Photoshop CS6 was used to remove shadows, dust particles, and other undesired artifacts from the background. Figure&#x00A0;<a  href="#x1-6001r1">1</a> shows a sample of a cleaned Costa Rican leaf image from this subset. The scanner used was an HP ScanJet 300. <!--l. 
126-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-60003.1"></a><span  class="cmbx-10">Noisy Subset</span></span>    Fresh leaf images were captured during field trips to both La Sabana and INBiopark. No press was used to flatten them. A total of 2,345 fresh leaf images were captured. This subset was captured against white uniform backgrounds (normally a sheet of paper). Each image has a 3000x4000 pixel resolution, in JPEG format. No artifacts were removed manually. However, as explained in Section <a  href="#x1-120003.3">3.3<!--tex4ht:ref: sec:enhancements --></a>, several automated image enhancements were performed both on the clean subset and the noisy subset. Figure&#x00A0;<a  href="#x1-6001r1">1</a> presents a noisy leaf image sample. The camera used was a Canon PowerShot SD780 IS. <!--l. 130-->    <p class="indent" >   <hr class="figure">    <div class="figure"  > <a   id="x1-6001r1"></a> <!--l. 132-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f1.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;1: </span><span   class="content">Collected image samples</span></div><!--tex4ht:label?: x1-6001r1 --> <!--l. 136-->    <p class="indent" >   </div><hr class="endfigure">    <h4 class="subsectionHead"><span class="titlemark">3.2   </span> <a   id="x1-70003.2"></a>Image Leaf Segmentation</h4> <!--l. 140-->    ]]></body>
<body><![CDATA[<p class="noindent" >The first step in processing the leaf image is to segment which pixels belong to a leaf and which do not. We used the same approach as LeafSnap by applying color-based segmentation. <!--l. 142-->    <p class="noindent" >    <h5 class="subsubsectionHead"><span class="titlemark">3.2.1   </span> <a   id="x1-80003.2.1"></a>HSV Color Domain</h5> <!--l. 143-->    <p class="noindent" >When segmenting by color, it is imperative to use the right color channels in order to exclude undesired noise. The authors of <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span> note that, in the HSV domain, the Hue channel tended to contain greenish shadows from the original leaf pictures, whereas Saturation and Value tended to be clean. We therefore also used those two color components for leaf segmentation. Figure <a  href="#x1-8001r2">2<!--tex4ht:ref: hsvcolordomain --></a> shows the noise present in the Hue channel, and also shows that Saturation and Value are cleaner. This was useful for the subsequent segmentation using Expectation-Maximization (EM). We used OpenCV <span class="cite">[<a  href="#Xopencv">24</a><a id="br24">]</a></span> to convert the original images into the HSV domain. Then, using NumPy <span class="cite">[<a  href="#Xnumpy">25</a><a id="br25">]</a></span>, we extracted the Saturation and Value components, which were fed to the EM algorithm. <!--l. 145-->    <p class="indent" >   <hr class="figure">    <div class="figure"  > <a   id="x1-8001r2"></a> <!--l. 
147-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f2.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;2: </span><span   class="content">HSV decomposition of a leaf image</span></div><!--tex4ht:label?: x1-8001r2 --> <!--l. 148-->    <p class="noindent" >The top-left image shows the original sample. The top-right image shows the Hue channel of the image with noticeable noise. The bottom-left image shows the Saturation component and the bottom-right image shows the Value component                                                                                                                                                                                     <!--l. 150-->    <p class="indent" >   </div><hr class="endfigure">    <h5 class="subsubsectionHead"><span class="titlemark">3.2.2   </span> <a   id="x1-90003.2.2"></a>Expectation-Maximization (EM)</h5> <!--l. 154-->    ]]></body>
<body><![CDATA[<p class="noindent" >Once images were converted to HSV and the desired channels were extracted, we applied EM to the color domain in order to cluster the pixels into one of two possible groups: leaf and non-leaf <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>. Figure <a  href="#x1-9001r3">3<!--tex4ht:ref: segmentationexample --></a> shows several samples of the final segmentation after applying EM. As shown, EM segments the image into the leaf and non-leaf pixel groups by assigning a <img  src="/img/revistas/cleiej/v19n1/1a072x.png" alt="1  "  class="math" > to the leaf pixels and a <img  src="/img/revistas/cleiej/v19n1/1a073x.png" alt="0  "  class="math" > to the non-leaf pixels. This method also works well on both simple and compound leaves. It is important to highlight that we did not assign weights to each cluster manually, as was done in <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>, because we wanted to keep the process as automatic as possible. In their work, they improve the segmentation of certain types of leaves, especially skinny ones, by manually assigning different weights to each cluster. Weights play a fundamental role in the segmentation process, as reported in <span class="cite">[<a  href="#X6909584">26</a><a id="br26">]</a></span>. <!--l. 157-->    <p class="indent" >   <hr class="figure">    <div class="figure"  > <a   id="x1-9001r3"></a> <!--l. 
159-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f3.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;3: </span><span   class="content">Segmented Samples</span></div><!--tex4ht:label?: x1-9001r3 --> <!--l. 160-->    <p class="noindent" >After applying EM to different Costa Rican species <!--l. 162-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 164-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-100003.2.2"></a><span  class="cmbx-10">Training</span></span>    Algorithm <a  href="#x1-10001r1">1<!--tex4ht:ref: alg:em --></a> describes the process to train the EM algorithm. We used OpenCV&#8217;s implementation of EM. First, we stacked all the pixels of the image matrix into a single vector. Then we trained the model using a diagonal covariance matrix and two clusters, which internally correspond to two Gaussian distributions, one for the leaf cluster and one for the non-leaf cluster. Once trained, we returned the EM object.        <div class="algorithm"> <!--l. 167-->    ]]></body>
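<body><![CDATA[<p class="noindent" >The training step of Algorithm 1 can be sketched in Python. Note that this is not OpenCV&#8217;s EM object, which is what the system actually uses; it is a minimal two-cluster, diagonal-covariance EM written in plain NumPy for clarity, and the initialization from the two pixels with extreme Saturation values is our own simplification.

```python
import numpy as np

def train_em(stacked_pixels, n_iter=50):
    """Two-cluster EM with diagonal covariances over stacked
    (Saturation, Value) pixel pairs; a NumPy stand-in for the
    OpenCV EM object trained in Algorithm 1."""
    X = np.asarray(stacked_pixels, dtype=float)        # shape (N, 2)
    n = X.shape[0]
    # Initialize the two means at the pixels with extreme Saturation
    # (a simplification of our own, not OpenCV's initialization).
    mu = np.stack([X[np.argmin(X[:, 0])], X[np.argmax(X[:, 0])]])
    var = np.tile(X.var(axis=0) + 1e-6, (2, 1))        # diagonal variances
    weights = np.full(2, 0.5)                          # mixing weights
    for _ in range(n_iter):
        # E-step: log-density of each pixel under each diagonal Gaussian
        log_p = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                         + np.log(2.0 * np.pi * var)).sum(axis=2)
                 + np.log(weights))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)        # responsibilities
        # M-step: re-estimate means, variances, and mixing weights
        nk = resp.sum(axis=0) + 1e-9
        mu = (resp.T @ X) / nk[:, None]
        var = (resp.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
        weights = nk / n
    labels = resp.argmax(axis=1)                       # 0/1 cluster per pixel
    return mu, var, weights, labels
```

In the real pipeline, stackedPixels is built from the Saturation and Value channels of the HSV image, for example by reshaping the two channels into an N-by-2 array after the OpenCV color conversion mentioned in Section 3.2.1.]]></body>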
<body><![CDATA[<p class="indent" >   <a   id="x1-10001r1"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Algorithm 1: </span><span   class="content">EM Training</span></div><!--tex4ht:label?: x1-10001r1 -->     <div class="algorithmic"> <a   id="x1-10002r1"></a>  <span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a074x.png" alt="stackedPixels&#x2190; &#x2205; "  class="math" > <a   id="x1-10003r2"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">for all</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a075x.png" alt="pixelRow  "  class="math" > <span  class="cmr-7">in image</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">do</span><span class="for-body"> <a   id="x1-10004r3"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">for all</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a076x.png" alt="pixel  "  class="math" > <span  class="cmr-7">in pixelRow</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">do</span><span class="for-body"> <a   id="x1-10005r4"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a077x.png" alt="stackedPixels&#x2190; stackedPixels&#x222A;pixel  "  class="math" >    </span><a   id="x1-10006r5"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">for</span>   </span><a   id="x1-10007r6"></a>      <br><span class="ALCitem"></span><span 
style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">for</span><a   id="x1-10008r7"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a078x.png" alt="EM &#x2190; OpenCV.EM (nClusters= 2,covMatType= OpenCV.DIAGONAL )  "  class="math" > <a   id="x1-10009r8"></a>      ]]></body>
<body><![CDATA[<br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a079x.png" alt="EM.train(stackedPixels)  "  class="math" > <a   id="x1-10010r9"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">return </span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0710x.png" alt="EM  "  class="math" > </div> </div><hr class="endfloat">    </div> <!--l. 183-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-110003.2.2"></a><span  class="cmbx-10">Pixel Prediction</span></span>    Algorithm <a  href="#x1-11001r2">2<!--tex4ht:ref: alg:emPixelPredict --></a> explains how the cluster to which a single image pixel belongs was predicted. Once the EM object was trained, OpenCV&#8217;s implementation allowed us to compute the probabilities of the pixel belonging to each cluster. However, for efficiency, we created a dictionary containing each unique <img  src="/img/revistas/cleiej/v19n1/1a0711x.png" alt="(Saturation,Value)  "  class="math" > pair as key and the cluster as value. If the key was not found in the dictionary, we predicted the probabilities for each cluster, added the key and its cluster to the dictionary, and returned the cluster with the highest probability.        <div class="algorithm"> <!--l. 
186-->    <p class="indent" >   <a   id="x1-11001r2"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Algorithm 2: </span><span   class="content">EM Pixel Prediction</span></div><!--tex4ht:label?: x1-11001r2 -->     <div class="algorithmic"> <a   id="x1-11002r10"></a>  <span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0712x.png" alt="key &#x2190; hash(pixel[S],pixel[V ])  "  class="math" > <a   id="x1-11003r11"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">if</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0713x.png" alt="hash  "  class="math" > <span  class="cmr-7">in pixelDictionary</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">then</span><span class="if-body"> <a   id="x1-11004r12"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">return </span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0714x.png" alt="pixelDictionary[key]  "  class="math" >   </span><a   id="x1-11005r13"></a>      ]]></body>
<body><![CDATA[<br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">if</span><a   id="x1-11006r14"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0715x.png" alt="probabilities&#x2190; EM.predict(pixel[S],pixel[V])  "  class="math" > <a   id="x1-11007r15"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0716x.png" alt="pixelDict[key]= probabilities[0]&#x003E; probabilities[1]  "  class="math" > <a   id="x1-11008r16"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">return </span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0717x.png" alt="pixelDict[key]  "  class="math" > </div>                                     </div><hr class="endfloat">    </div>    <h4 class="subsectionHead"><span class="titlemark">3.3   </span> <a   id="x1-120003.3"></a>Image Enhancements/Post-Processing</h4> <!--l. 202-->    <p class="noindent" >After segmentation of the leaf using EM, some extra work was needed to clean up several false-positive areas. We followed the process of LeafSnap <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>. First, each image was clipped to the internal leaf size provided by the segmentation. Then the image was resized to a common leaf area, followed by a heuristic to delete undesired objects. Finally, the stem was deleted, since it added noise to the model of curvature (much less so to the texture model). <!--l. 
204-->    <p class="noindent" >    <h5 class="subsubsectionHead"><span class="titlemark">3.3.1   </span> <a   id="x1-130003.3.1"></a>Clipping</h5> <!--l. 205-->    <p class="noindent" >Before extracting features, a clipping phase was needed to reduce the image to the region where the leaf was present, before resizing it to a common size. The clipping algorithm was trivial to implement once the contours were calculated using OpenCV. As shown in Algorithm <a  href="#x1-13001r3">3<!--tex4ht:ref: alg:clipping --></a>, the minimum and maximum coordinates were calculated over all contour <img  src="/img/revistas/cleiej/v19n1/1a0718x.png" alt="x  "  class="math" > and <img  src="/img/revistas/cleiej/v19n1/1a0719x.png" alt="y  "  class="math" > components, and the leaf image matrix was then cut to those minimum and maximum coordinates. The <img  src="/img/revistas/cleiej/v19n1/1a0720x.png" alt="&#x03F5;  "  class="math" > margin allowed subsequent algorithms to ignore false-positive regions that intersect the border. The results of the clipping phase can be seen in Figure <a  href="#x1-13007r4">4<!--tex4ht:ref: clipping --></a>.        <div class="algorithm">                                     <!--l. 208-->    <p class="indent" >   <a   id="x1-13001r3"></a><hr class="float">    <div class="float"  >                                     ]]></body>
<body><![CDATA[<div class="caption"  ><span class="id">Algorithm 3: </span><span   class="content">Clipping Leaf Portion of the Image</span></div><!--tex4ht:label?: x1-13001r3 -->     <div class="algorithmic"> <a   id="x1-13002r17"></a>  <span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0721x.png" alt="xmin&#x2190; min(contours.xs)- &#x03F5;  "  class="math" > <a   id="x1-13003r18"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0722x.png" alt="ymin &#x2190; min(contours.ys)- &#x03F5;  "  class="math" > <a   id="x1-13004r19"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0723x.png" alt="xmax&#x2190; max(contours.xs)+&#x03F5;  "  class="math" > <a   id="x1-13005r20"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0724x.png" alt="ymax&#x2190; max(contours.ys)+&#x03F5;  "  class="math" > <a   id="x1-13006r21"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0725x.png" alt="clipped&#x2190; image[xmin :xmax,ymin:ymax]  "  class="math" > </div>                                                                                                                                                                                        </div><hr class="endfloat">    </div> <!--l. 
221-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-13007r4"></a>                                                                                                                                                                                      <!--l. 223-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f4.png" alt="PIC"   >     <br>     ]]></body>
<body><![CDATA[<div class="caption"  ><span class="id">Figure&#x00A0;4: </span><span   class="content">Clipping of a <span  class="cmti-10">Coccoloba floribunda </span>sample</span></div><!--tex4ht:label?: x1-13007r4 --> <!--l. 224-->    <p class="noindent" >The left image is the original leaf image; the right one is clipped to the leaf size.                                     <!--l. 226-->    <p class="indent" >   </div><hr class="endfigure">    <h5 class="subsubsectionHead"><span class="titlemark">3.3.2   </span> <a   id="x1-140003.3.2"></a>Resizing Leaf Area</h5> <!--l. 229-->    <p class="noindent" >Once the leaf area had been clipped, a resize was applied to standardize the leaf area across all images. Otherwise, the model of curvature would be negatively affected, since the number of contour pixels varied significantly <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>. Our implementation of the resize was applied to the whole clipped image. Images may end up with different sizes, but the internal leaf areas were the same or almost the same. Algorithm <a  href="#x1-14001r4">4<!--tex4ht:ref: alg:resize --></a> shows how a new width and height were obtained by calculating the ratio between the current leaf area, the desired new leaf area, and the current height and width of the image. Finally, OpenCV was used to resize the clipped image to a constant leaf area of <img  src="/img/revistas/cleiej/v19n1/1a0726x.png" alt="100,000  "  class="math" > pixels. This number was chosen empirically based on LeafSnap&#8217;s original dataset resolution and the internal regions associated with leaf pixels. This approach means that absolute leaf measurements are lost.        
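The growth-factor arithmetic of Algorithm 4 reduces to a single uniform scale factor, the square root of the ratio between the target and current leaf areas. A minimal Python sketch of this computation (function name ours, assuming leafArea is the pixel count of the segmented leaf):

```python
import math

def common_leaf_area_size(width, height, leaf_area, new_leaf_area=100_000):
    # Scaling both dimensions by sqrt(new_leaf_area / leaf_area) preserves the
    # aspect ratio while bringing the leaf area to the target value; this is
    # equivalent to the wGrowth/hGrowth arithmetic of Algorithm 4.
    scale = math.sqrt(new_leaf_area / leaf_area)
    return int(round(width * scale)), int(round(height * scale))

# A 400x300 image whose leaf mask covers 40,000 pixels scales by sqrt(2.5):
new_w, new_h = common_leaf_area_size(400, 300, 40_000)  # -> (632, 474)
```

The clipped image would then be resized with OpenCV as cv2.resize(image, (new_w, new_h)).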
<div class="algorithm">                                                                                                                                                                                     <!--l. 231-->    <p class="indent" >   <a   id="x1-14001r4"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Algorithm 4: </span><span   class="content">Common Leaf Area Resize</span></div><!--tex4ht:label?: x1-14001r4 -->     <div class="algorithmic"> <a   id="x1-14002r22"></a>  <span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0727x.png" alt="newLeafArea&#x2190; 100000  "  class="math" > <a   id="x1-14003r23"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0728x.png" alt="imgArea&#x2190; height&#x00D7; weight  "  class="math" > <a   id="x1-14004r24"></a>      ]]></body>
<body><![CDATA[<br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0729x.png" alt="newImgArea&#x2190; (imgArea&#x00D7; newLeafArea)&#x2215;leafArea  "  class="math" > <a   id="x1-14005r25"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0730x.png" alt="wGrowth &#x2190; weight&#x2215;height+weight  "  class="math" > <a   id="x1-14006r26"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0731x.png" alt="hGrowth&#x2190; height&#x2215;height+ weight  "  class="math" > <a   id="x1-14007r27"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0732x.png" alt="a&#x2190; wGrowth&#x00D7;hGrowth  "  class="math" > <a   id="x1-14008r28"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0733x.png" alt="x&#x2190; abs(&#x221A;4&#x00D7;-a&#x00D7;newImgArea&#x2215;(2&#x00D7; a))  "  class="math" > <a   id="x1-14009r29"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0734x.png" alt="newW idth= wGrowth&#x00D7;x  "  class="math" > <a   id="x1-14010r30"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0735x.png" alt="newHeight=hGrowth&#x00D7; x  "  class="math" > <a   id="x1-14011r31"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">return </span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0736x.png" alt="OpenCV.resize(image,newWidth,newHeight)  "  class="math" > </div>                                                                                                     
                                                                                   </div><hr class="endfloat">    </div>    <h5 class="subsubsectionHead"><span class="titlemark">3.3.3   </span> <a   id="x1-150003.3.3"></a>Deleting Undesired Objects</h5> <!--l. 250-->    <p class="noindent" >Even when uniform-background images were used, the initial segmentation turned out not to be enough when the image contained undesired objects, such as dust and shadows. <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span> deleted these noisy objects by using the same heuristic we implemented, shown in Algorithm <a  href="#x1-15001r5">5<!--tex4ht:ref: alg:undesiredobjectdeletion --></a>. Using Scikit-learn <span class="cite">[<a  href="#Xsklearn">27</a><a id="br27">]</a></span> we calculated the connected components of the segmented image. We first deleted the &#8220;small&#8221; components by area (in pixels); these were typically dust, small insects, or pieces of other leaves. If only one component remained, we took it to be the leaf. If more than one component remained, we calculated, for each remaining component, how many of its pixels intersected the image margin, and deleted the component with the largest number of intersections. The rationale is to discard components that are not centered in the image, which tend to be non-leaf objects. Finally, the remaining component with the largest area was taken as the leaf.        <div class="algorithm">                                     <!--l. 253-->    ]]></body>
<body><![CDATA[<p class="indent" >   <a   id="x1-15001r5"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Algorithm 5: </span><span   class="content">Deleting Undesired Objects Heuristic</span></div><!--tex4ht:label?: x1-15001r5 -->     <div class="algorithmic"> <a   id="x1-15002r32"></a>  <span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0737x.png" alt="n,components&#x2190; connectedComponents(segmentedImage)  "  class="math" > <a   id="x1-15003r33"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0738x.png" alt="components&#x2190; deleteSmallComponents(components,kMinimumArea)  "  class="math" > <a   id="x1-15004r34"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">if</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0739x.png" alt="size(components)== 1  "  class="math" ><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">then</span><span class="if-body"> <a   id="x1-15005r35"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">return </span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0740x.png" alt="components[0]  "  class="math" >   </span><a   id="x1-15006r36"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">if</span><a   id="x1-15007r37"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0741x.png" 
alt="inters&#x2190; empty  "  class="math" > <a   id="x1-15008r38"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0742x.png" alt="areas&#x2190; empty  "  class="math" > <a   id="x1-15009r39"></a>      ]]></body>
<body><![CDATA[<br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">for all</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0743x.png" alt="component  "  class="math" > <span  class="cmr-7">in</span> <img  src="/img/revistas/cleiej/v19n1/1a0744x.png" alt="components  "  class="math" ><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">do</span><span class="for-body"> <a   id="x1-15010r40"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <img  src="/img/revistas/cleiej/v19n1/1a0745x.png" alt="inters&#x2190; inters&#x222A;getImageMarginIntersections(component)  "  class="math" > <a   id="x1-15011r41"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <img  src="/img/revistas/cleiej/v19n1/1a0746x.png" alt="areas&#x2190; areas&#x222A;getComponentArea(component)  "  class="math" >   </span><a   id="x1-15012r42"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">for</span><a   id="x1-15013r43"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0747x.png" alt="noisyObject&#x2190;max (inters)  "  class="math" > <a   id="x1-15014r44"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">return </span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0748x.png" alt="max(areas- noisyObject)  "  class="math" > </div>                                                                                                                                                                                        </div><hr class="endfloat">    </div>    <h5 class="subsubsectionHead"><span class="titlemark">3.3.4   </span> <a   
id="x1-160003.3.4"></a>Deleting the Stem</h5> <!--l. 274-->    <p class="noindent" >We followed the approach for stem deletion described in <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>. If the stem were left intact, it would add noise to the model of curvature, given all the possible sizes the stem may take. Algorithm <a  href="#x1-16005r7">7<!--tex4ht:ref: alg:stemdelete --></a> shows the procedure. First, a Top Hat transformation was applied to the segmented image in order to leave only possible stem regions, as shown in Figure <a  href="#x1-16020r5">5<!--tex4ht:ref: topHat --></a>. Then all connected components, along with their count, were calculated from the Top Hat transformed image. We then looped over the components, deleting each one from the original segmentation and recalculating the number of connected components. If this number did not change upon deletion, the current component was a good stem candidate (heuristically, removing a stem does not affect how many connected components there are). Once all stem candidates were found, the one with the largest combination of area and aspect ratio was chosen to be the stem, as described in Algorithm <a  href="#x1-16001r6">6<!--tex4ht:ref: alg:stemaspectratio --></a>.        <div class="algorithm">                                     <!--l. 277-->    <p class="indent" >   <a   id="x1-16001r6"></a><hr class="float">    <div class="float"  >                                     ]]></body>
<body><![CDATA[<div class="caption"  ><span class="id">Algorithm 6: </span><span   class="content">Calculate Aspect Ratio Combined with Area</span></div><!--tex4ht:label?: x1-16001r6 -->     <div class="algorithmic"> <a   id="x1-16002r45"></a>  <span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0749x.png" alt="width,heigth&#x2190; calculateRectangleAround(component)  "  class="math" > <a   id="x1-16003r46"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0750x.png" alt="area&#x2190; calculateArea(component)  "  class="math" > <a   id="x1-16004r47"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">return </span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0751x.png" alt="width&#x2215;heigth*area  "  class="math" > </div>                                                                                                                                                                                        </div><hr class="endfloat">    </div>        <div class="algorithm">                                                                                                                                                                                     <!--l. 
289-->    <p class="indent" >   <a   id="x1-16005r7"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Algorithm 7: </span><span   class="content">Deleting the Stem</span></div><!--tex4ht:label?: x1-16005r7 -->     <div class="algorithmic"> <a   id="x1-16006r48"></a>  <span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0752x.png" alt="candidates &#x2190; empty  "  class="math" > <a   id="x1-16007r49"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0753x.png" alt="candidatesRatios&#x2190; empty  "  class="math" > <a   id="x1-16008r50"></a>      ]]></body>
<body><![CDATA[<br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0754x.png" alt="possibleStemsImage&#x2190; topHatTransformation(segmentedImage)  "  class="math" > <a   id="x1-16009r51"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0755x.png" alt="n,components&#x2190; connectedComponents(possibleStemsImage)  "  class="math" > <a   id="x1-16010r52"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">for all</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0756x.png" alt="component  "  class="math" > <span  class="cmr-7">in</span> <img  src="/img/revistas/cleiej/v19n1/1a0757x.png" alt="components  "  class="math" ><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">do</span><span class="for-body"> <a   id="x1-16011r53"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <img  src="/img/revistas/cleiej/v19n1/1a0758x.png" alt="tempSegmentation&#x2190; delete(component,segmentedImage)  "  class="math" > <a   id="x1-16012r54"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <img  src="/img/revistas/cleiej/v19n1/1a0759x.png" alt="currentN &#x2190; connectedComponents(tempSegmentation)  "  class="math" > <a   id="x1-16013r55"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">if</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0760x.png" alt="currentN =n  "  class="math" ><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">then</span><span class="if-body"> <a   id="x1-16014r56"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a0761x.png" alt="candidates&#x2190; 
candidates&#x222A; component  "  class="math" > <a   id="x1-16015r57"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a0762x.png" alt="candidatesRatios&#x2190; candidatesRatios&#x222A; calculateAspectRatio(component)  "  class="math" >    </span><a   id="x1-16016r58"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">if</span>   </span><a   id="x1-16017r59"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">for</span><a   id="x1-16018r60"></a>      ]]></body>
<body><![CDATA[<br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0763x.png" alt="bestCandidate&#x2190; candidates[max(candidatesRatios).index]  "  class="math" > <a   id="x1-16019r61"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0764x.png" alt="segmentedImage&#x2190; delete(bestCandidate,segmentedImage)  "  class="math" > </div>                                                                                                                                                                                        </div><hr class="endfloat">    </div> <!--l. 310-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-16020r5"></a>                                                                                                                                                                                      <!--l. 312-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f5.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;5: </span><span   class="content">Top Hat Transformation applied to a segmented compound leaf image to detect the stem of the leaf</span></div><!--tex4ht:label?: x1-16020r5 -->                                                                                                                                                                                     <!--l. 315-->    <p class="indent" >   </div><hr class="endfigure">    <h4 class="subsectionHead"><span class="titlemark">3.4   </span> <a   id="x1-170003.4"></a>Leaf Feature Extraction</h4> <!--l. 
320-->    <p class="noindent" >Feature extraction was designed and implemented considering three main design goals:      <ul class="itemize1">      <li class="itemize">Efficiency: algorithms should be fast enough to support future mobile apps.      </li>      <li class="itemize">Rotation invariance: the leaf may be rotated by any angle within the image.      </li>      <li class="itemize">Leaf  Size  Invariance:  datasets  contain  different  sizes  of  leaves  and  users  can  capture  images      independently of the relative size of leaves.</li>    </ul> <!--l. 327-->    ]]></body>
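<body><![CDATA[<p class="noindent" >The rotation-invariance goal above is one reason histogram-based descriptors are attractive: statistics pooled over all pixels or contour points do not depend on image orientation. As a toy illustration (ours, not from the paper), the intensity histogram of an image is unchanged under a 90-degree rotation, and the same holds, up to discretization effects, for arbitrary rotations of a segmented leaf:

```python
import numpy as np

# Toy check: a histogram pools pixel statistics with no notion of position,
# so rotating the image leaves the histogram unchanged.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))

hist = np.bincount(img.ravel(), minlength=256)
hist_rot = np.bincount(np.rot90(img).ravel(), minlength=256)

assert np.array_equal(hist, hist_rot)
```
]]></body>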
<body><![CDATA[<p class="indent" >   Two different feature sets were calculated. The first one captures information about the contour of the leaf, while the second one captures information about its texture. Section <a  href="#x1-180003.4.1">3.4.1<!--tex4ht:ref: sec:hcos --></a> describes how we implemented the Histogram of Curvature over Scale (HCoS) <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span> to extract contour information. Section <a  href="#x1-220003.4.2">3.4.2<!--tex4ht:ref: sec:lbpv --></a> describes how we implemented Local Binary Pattern Variance (LBPV) to extract texture information. Both models generate histograms that are suitable for distance metric calculations. <!--l. 329-->    <p class="noindent" >    <h5 class="subsubsectionHead"><span class="titlemark">3.4.1   </span> <a   id="x1-180003.4.1"></a>Extracting contour information (HCoS)</h5> <!--l. 331-->    <p class="noindent" >The model of curvature used by LeafSnap comprises several steps. The previously explained segmentation and post-processing resulted in a mask of leaf and non-leaf pixels: non-leaf pixels have a value of <img  src="/img/revistas/cleiej/v19n1/1a0765x.png" alt="0  "  class="math" >, and leaf pixels a value of <img  src="/img/revistas/cleiej/v19n1/1a0766x.png" alt="1  "  class="math" >. First, the contour pixels were found; then 25 disk-shaped masks were applied on top of each contour point, providing both an intersection area and an arc length. All calculations at each scale were then turned into a histogram, resulting in 25 histograms per image, one per scale. Finally, the 25 histograms were concatenated, forming the HCoS. <!--l. 
333-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-190003.4.1"></a><span  class="cmbx-10">Contours</span></span>    On a binary image (resulting from the previous segmentation), the OpenCV implementation of contour finding, based on the algorithm of <span class="cite">[<a  href="#XSuzuki198532">28</a><a id="br28">]</a></span>, worked very well. The algorithm generated a vector of <img  src="/img/revistas/cleiej/v19n1/1a0767x.png" alt="(x,y)  "  class="math" > pairs representing the coordinates where a contour pixel was found. A contour pixel can be defined as a pixel adjacent to at least one pixel of the opposite color. Figure <a  href="#x1-19001r6">6<!--tex4ht:ref: contours --></a> shows in red the contour pixels detected in the original image, calculated from the segmented mask. Notice how shadows affect the contour algorithm, since they were not segmented perfectly. <!--l. 336-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                     <a   id="x1-19001r6"></a>                                     <!--l. 338-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f6.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;6: </span><span   class="content"><span  class="cmti-10">Croton niveus </span>contours</span></div><!--tex4ht:label?: x1-19001r6 --> <!--l. 339-->    <p class="noindent" >extracted using OpenCV                                     <!--l. 341-->    ]]></body>
<body><![CDATA[<p class="indent" >   </div><hr class="endfigure"> <!--l. 343-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-200003.4.1"></a><span  class="cmbx-10">Scales</span></span>    The original algorithm of <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span> makes use of 25 different scales, creating one disk per scale. We implemented a discrete version of the disks making use of matrices, based on <span class="cite">[<a  href="#X1677517">29</a><a id="br29">]</a></span>, whose Matlab code is available at <a  href="https://www.ceremade.dauphine.fr/~peyre/numerical-tour/tours/shapes_4_shape_matching/" class="url" ><span  class="cmtt-10">https://www.ceremade.dauphine.fr/~peyre/numerical-tour/tours/shapes_4_shape_matching/</span></a>. <!--l. 348-->    <p class="indent" >   The disks used are actually matrices of <img  src="/img/revistas/cleiej/v19n1/1a0768x.png" alt="1  "  class="math" >&#8217;s and <img  src="/img/revistas/cleiej/v19n1/1a0769x.png" alt="0  "  class="math" >&#8217;s. They were applied as masks over specific parts of the segmented leaf image (mostly contour points). The idea was to count how many pixels intersected the segmented image and each disk mask. We created two different types of disks. The first type is filled with <img  src="/img/revistas/cleiej/v19n1/1a0770x.png" alt="1  "  class="math" >&#8217;s, as shown in Figure <a  href="#x1-20001r7">7<!--tex4ht:ref: fig:disks --></a>. It is used to measure the area of intersection. The second type is more like a ring, where <img  src="/img/revistas/cleiej/v19n1/1a0771x.png" alt="1  "  class="math" >&#8217;s are present only on the circumference of the disk (see Figure <a  href="#x1-20001r7">7<!--tex4ht:ref: fig:disks --></a>). It is used to determine the arc length of the intersection of the disk with the leaf at a given contour point. <!--l. 
351-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-20001r7"></a>                                                                                                                                                                                      <!--l. 352-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f7.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;7: </span><span   class="content">Various discrete disks</span></div><!--tex4ht:label?: x1-20001r7 -->                                                                                                                                                                                     <!--l. 357-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 362-->    <p class="indent" >   Once all disks were created for both area and arc length versions, we applied them to each pixel of the contour vector, as shown by Algorithm <a  href="#x1-20002r8">8<!--tex4ht:ref: alg:areavectorcalculation --></a>.        ]]></body>
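<body><![CDATA[<p class="indent" >   The two discrete disk types described above can be sketched as binary NumPy matrices; the helper names and the 25-radius setup are assumptions for illustration:

```python
import numpy as np

def filled_disk(radius):
    """Binary matrix with 1's inside a circle of the given radius
    (used to measure the area of intersection)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x**2 + y**2 <= radius**2).astype(np.uint8)

def ring_disk(radius, thickness=1):
    """Binary matrix with 1's only on the circumference of the disk
    (used to measure the arc length of intersection)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    d2 = x**2 + y**2
    return ((d2 <= radius**2) & (d2 > (radius - thickness)**2)).astype(np.uint8)

# One filled disk and one ring per scale, e.g. radii 1..25.
area_disks = [filled_disk(r) for r in range(1, 26)]
arc_disks = [ring_disk(r) for r in range(1, 26)]
```
]]></body>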
<body><![CDATA[<div class="algorithm">                                                                                                                                                                                     <!--l. 365-->    <p class="indent" >   <a   id="x1-20002r8"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Algorithm 8: </span><span   class="content">Area and Arc Length Vector Calculation</span></div><!--tex4ht:label?: x1-20002r8 -->     <div class="algorithmic"> <a   id="x1-20003r62"></a>  <span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0772x.png" alt="arcs&#x2190; empty  "  class="math" > <a   id="x1-20004r63"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0773x.png" alt="areas&#x2190; empty  "  class="math" > <a   id="x1-20005r64"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">for all</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0774x.png" alt="pixel  "  class="math" > <span  class="cmr-7">of the contour vector</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">do</span><span class="for-body"> <a   id="x1-20006r65"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">for all</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a0775x.png" alt="areaMask,arcMask= 1  "  class="math" > <span  class="cmr-7">to</span> <img  src="/img/revistas/cleiej/v19n1/1a0776x.png" alt="25  "  class="math" ><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">do</span><span class="for-body"> <a   
id="x1-20007r66"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <span  class="cmr-7">center</span> <img  src="/img/revistas/cleiej/v19n1/1a0777x.png" alt="areaMask,arcMask  "  class="math" > <span  class="cmr-7">at current contour</span> <img  src="/img/revistas/cleiej/v19n1/1a0778x.png" alt="pixel  "  class="math" > <a   id="x1-20008r67"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a0779x.png" alt="area&#x2190; count(areaMask&#x2229;segmentation)  "  class="math" > <a   id="x1-20009r68"></a>      ]]></body>
<body><![CDATA[<br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a0780x.png" alt="areas&#x2190; areas&#x222A;area  "  class="math" > <a   id="x1-20010r69"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a0781x.png" alt="arc&#x2190; count(arcMask&#x2229; segmentation)  "  class="math" > <a   id="x1-20011r70"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a0782x.png" alt="arcs&#x2190; arcs&#x222A; arc  "  class="math" >    </span><a   id="x1-20012r71"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">for</span>   </span><a   id="x1-20013r72"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">for</span> </div> </div><hr class="endfloat">    </div> <!--l. 384-->    <p class="indent" >   Figure <a  href="#x1-20014r8">8<!--tex4ht:ref: areadiskapplied --></a> shows how one specific area disk was applied to the segmented image, for a specific scale (radius = <img  src="/img/revistas/cleiej/v19n1/1a0783x.png" alt="18  "  class="math" > in this case), at a given contour pixel. The gray area shows the intersection of pixels with the leaf segmentation. This procedure was repeated over all pixels of the contour vector. <!--l. 
386-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-20014r8"></a>                                                                                                                                                                                      <!--l. 388-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f8.png" alt="PIC"   >     <br>     ]]></body>
<body><![CDATA[<div class="caption"  ><span class="id">Figure&#x00A0;8: </span><span   class="content">Area disk applied to a <span  class="cmti-10">Croton niveus </span>sample at a specific pixel of the contour, with radius = <img  src="/img/revistas/cleiej/v19n1/1a0784x.png" alt="18  "  class="math" ></span></div><!--tex4ht:label?: x1-20014r8 --> <!--l. 391-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 394-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-210003.4.1"></a><span  class="cmbx-10">Histograms</span></span>    Using NumPy, a histogram was created at each scale from the values computed at all contour pixels, as described by Algorithm <a  href="#x1-20002r8">8<!--tex4ht:ref: alg:areavectorcalculation --></a>. We used histograms of 21 bins, as <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span> did. This means 25 histograms, each with 21 bins, were created per image for each of the two descriptors. Each histogram was normalized to unit length. Then, all histograms were concatenated (the 25 for area and the 25 for arc length), generating what <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span> describes as the Histogram of Curvature over Scale (HCoS). <!--l. 397-->    <p class="noindent" >    <h5 class="subsubsectionHead"><span class="titlemark">3.4.2   </span> <a   id="x1-220003.4.2"></a>Extracting texture information (Local Binary Pattern Variance (LBPV))</h5> <!--l. 399-->    <p class="noindent" >We aimed at improving the curvature model by adding texture analysis. 
We used the Local Binary Pattern Variance (LBPV) implementation from the Mahotas library <span class="cite">[<a  href="#Xmahotas">30</a><a id="br30">]</a></span>, which is rotation invariant, multiscale, and efficient. This implementation of LBPV is based on the algorithm of <span class="cite">[<a  href="#X1017623">31</a><a id="br31">]</a></span> and makes use of NumPy arrays to represent the image and the resulting histograms. It works on grayscale images, so we used OpenCV to convert the Red Green Blue (RGB) images to grayscale. The LBPV approach detects microstructures such as lines, spots, flat areas, and edges <span class="cite">[<a  href="#X1017623">31</a><a id="br31">]</a></span>. This is useful for detecting vein patterns, the areas between veins, reflections, and even roughness. Figure <a  href="#x1-22002r9">9<!--tex4ht:ref: crotondracolbp --></a> shows what two different LBPV configurations look like. The upper image shows a <img  src="/img/revistas/cleiej/v19n1/1a0785x.png" alt="radius = 2,pixels = 16  "  class="math" > (R2P16) configuration, and the one below shows a <img  src="/img/revistas/cleiej/v19n1/1a0786x.png" alt="radius = 1,pixel = 8  "  class="math" > (R1P8) configuration. The LBPV variants used are shown in Table <a  href="#x1-22001r1">1<!--tex4ht:ref: tab:lbpvvariations --></a>. In some cases we concatenated two histograms of different scales, such as R1P8 &amp; R2P16. Note that we did not use the variant that samples 24 pixels, since it generated excessively large histograms. We did, however, run some tests and observed that the 24-pixel variant did not improve accuracy, so we discarded it.        <div class="table"> <!--l. 
404-->    <p class="indent" >   <a   id="x1-22001r1"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Table&#x00A0;1: </span><span   class="content">Variants of LBPV</span></div><!--tex4ht:label?: x1-22001r1 -->     ]]></body>
<body><![CDATA[<div class="pic-tabular"> <img  src="/img/revistas/cleiej/v19n1/1a0787x.png" alt="---------------------------------- -Variant------------Radius--Pixels--  R1P8              1       8  R2P16             2       16  R3P16             3       16  R1P8 &amp; R2P16      1  and  8  and                    2       16  R1P8 &amp; R3P16      1  and  8  and                    3       16  R3P24             3       24 ----------------------------------  " ></div>                                                                                                                                                                                        </div><hr class="endfloat">    </div> <!--l. 422-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-22002r9"></a>                                                                                                                                                                                      <!--l. 424-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f9.png" alt="PIC"   >     <br>      <div class="caption"  ><span class="id">Figure&#x00A0;9:   </span><span   class="content">LBPV   patterns   of   a   <span  class="cmti-10">Croton   draco   </span>sample.   The   upper   image   corresponds   to   a <img  src="/img/revistas/cleiej/v19n1/1a0788x.png" alt="radius = 2,pixels = 16  "  class="math" > (R2P16) and the lower one to a <img  src="/img/revistas/cleiej/v19n1/1a0789x.png" alt="radius = 1,pixels = 8  "  class="math" > (R1P8) pattern</span></div><!--tex4ht:label?: x1-22002r9 -->                                                                                                                                                                                     <!--l. 
427-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 429-->    <p class="indent" >   Just like the HCoS, LBPV generates histograms that can be used for similarity search. Several histograms were generated at different radii and numbers of sampled circumference pixels, in order to determine which combinations provided the best results. The Mahotas implementation returns a histogram of feature counts, where position <img  src="/img/revistas/cleiej/v19n1/1a0790x.png" alt="i  "  class="math" > corresponds to the count of pixels in the leaf texture that had code <img  src="/img/revistas/cleiej/v19n1/1a0791x.png" alt="i  "  class="math" >. Also, given that the implementation is an LBPV, non-uniform codes are not used. Thus, bin number <img  src="/img/revistas/cleiej/v19n1/1a0792x.png" alt="i  "  class="math" > is the <img  src="/img/revistas/cleiej/v19n1/1a0793x.png" alt="i- th  "  class="math" > feature, not just the binary code <img  src="/img/revistas/cleiej/v19n1/1a0794x.png" alt="i  "  class="math" > <span class="cite">[<a  href="#Xmahotas">30</a><a id="br30">]</a></span>. Figure <a  href="#x1-22003r10">10<!--tex4ht:ref: processlbp --></a> describes, at a high level, how the local pattern histograms are extracted. First, the image is converted to grayscale. Then, for each pixel inside the segmented leaf area, we computed the local pattern at different radii and sampling-circle sizes using the Mahotas implementation. Finally, each pattern was assigned to a bucket in the resulting histogram. Each pixel is assigned a number corresponding to a pattern, and the histogram is built from those numbers over all segmented leaf pixels. <!--l. 
431-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-22003r10"></a>                                                                                                                                                                                      <!--l. 433-->    ]]></body>
<body><![CDATA[<p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f10.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;10: </span><span   class="content">Process of extracting LBPV</span></div><!--tex4ht:label?: x1-22003r10 -->                                                                                                                                                                                     <!--l. 436-->    <p class="indent" >   </div><hr class="endfigure">    <h4 class="subsectionHead"><span class="titlemark">3.5   </span> <a   id="x1-230003.5"></a>Species Classification based on Leaf Images</h4> <!--l. 440-->    <p class="noindent" >Once all histograms were ready and normalized, a machine learning algorithm was used to classify unseen images into species. We implemented the same classification scheme used by LeafSnap. The following paragraphs describe how k Nearest Neightbors (kNN) was implemented. <!--l. 442-->    <p class="indent" >   Scikit-learn&#8217;s kNN implementation was used for leaf species classification. This process was fed with previously generated histograms from both the model of curvature using HCoS and the texture model using LBPV. Additional code was created to take into consideration only the first matching <img  src="/img/revistas/cleiej/v19n1/1a0795x.png" alt="k  "  class="math" > species, not the first <img  src="/img/revistas/cleiej/v19n1/1a0796x.png" alt="k  "  class="math" > images, as shown by Algorithm <a  href="#x1-23001r9">9<!--tex4ht:ref: alg:kspecies --></a>. The difference resides in taking into account only the best matching image per species, until completing the first <img  src="/img/revistas/cleiej/v19n1/1a0797x.png" alt="k  "  class="math" > species <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>.        
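<p class="indent" >   A minimal sketch of this species-level filtering, assuming the kNN search has already returned image neighbors sorted by distance (names are illustrative, not from the LeafSnap code):

```python
def rank_k_species(neighbor_species, k):
    """Reduce an image-level ranking to the first k distinct species,
    keeping only the best-matching image per species."""
    result = []
    for species in neighbor_species:  # neighbors sorted by ascending distance
        if species not in result:
            result.append(species)
            if len(result) == k:
                break
    return result

# Image-level neighbors may repeat a species; duplicates are skipped.
neighbors = ["Croton niveus", "Croton niveus", "Croton draco", "Ficus costaricana"]
print(rank_k_species(neighbors, 2))  # ['Croton niveus', 'Croton draco']
```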
<div class="algorithm">                                                                                                                                                                                     <!--l. 445-->    <p class="indent" >   <a   id="x1-23001r9"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Algorithm 9: </span><span   class="content">k Species Ranking</span></div><!--tex4ht:label?: x1-23001r9 -->     ]]></body>
<body><![CDATA[<div class="algorithmic"> <a   id="x1-23002r73"></a>  <span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0798x.png" alt="neighborImages,distances&#x2190; knnSearch(histogram,k)  "  class="math" > <a   id="x1-23003r74"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a0799x.png" alt="resultSpecies &#x2190;empty  "  class="math" > <a   id="x1-23004r75"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">while</span><span  class="cmr-7">&#x00A0;each</span> <img  src="/img/revistas/cleiej/v19n1/1a07100x.png" alt="neighborImage  "  class="math" > <img  src="/img/revistas/cleiej/v19n1/1a07101x.png" alt="and  "  class="math" > <img  src="/img/revistas/cleiej/v19n1/1a07102x.png" alt="k&#x003E; 0  "  class="math" > <span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">do</span><span class="while-body"> <a   id="x1-23005r76"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">if</span><span  class="cmr-7">&#x00A0;not</span> <img  src="/img/revistas/cleiej/v19n1/1a07103x.png" alt="neighborImage.species  "  class="math" > <span  class="cmr-7">in</span> <img  src="/img/revistas/cleiej/v19n1/1a07104x.png" alt="resultSpecies  "  class="math" > <span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">then</span><span class="if-body"> <a   id="x1-23006r77"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a07105x.png" alt="resultSpecies&#x2190; resultSpecies&#x222A;neighborImage.species  "  class="math" > <a   id="x1-23007r78"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a07106x.png" alt="k&#x2190; k- 1  "  class="math" >    </span><a   
id="x1-23008r79"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">if</span>   </span><a   id="x1-23009r80"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">while</span> </div> </div><hr class="endfloat">    </div> <!--l. 460-->    <p class="indent" >   We used <img  src="/img/revistas/cleiej/v19n1/1a07107x.png" alt="1 &#x003C;= k &#x003C;= 10  "  class="math" > in order to measure how different algorithms behaved as the value of <img  src="/img/revistas/cleiej/v19n1/1a07108x.png" alt="k  "  class="math" > increased.    <h4 class="subsectionHead"><span class="titlemark">3.6   </span> <a   id="x1-240003.6"></a>Distance Metric - Histogram Intersection</h4> <!--l. 463-->    <p class="noindent" >We tested the basic Euclidean distance to measure similarity between histograms; however, the results were not encouraging. 
We implemented the histogram intersection shown in Equation <a  href="#x1-24001r1">1<!--tex4ht:ref: eq:histogramintersection --></a>, where <img  src="/img/revistas/cleiej/v19n1/1a07109x.png" alt="I(x,y)  "  class="math" > is the histogram intersection between two histograms <img  src="/img/revistas/cleiej/v19n1/1a07110x.png" alt="x  "  class="math" > and <img  src="/img/revistas/cleiej/v19n1/1a07111x.png" alt="y  "  class="math" > of the same size, <img  src="/img/revistas/cleiej/v19n1/1a07112x.png" alt="n  "  class="math" > is the number of bins, and <img  src="/img/revistas/cleiej/v19n1/1a07113x.png" alt="xi  "  class="math" > and <img  src="/img/revistas/cleiej/v19n1/1a07114x.png" alt="yi  "  class="math" > are the corresponding bins of histograms <img  src="/img/revistas/cleiej/v19n1/1a07115x.png" alt="x  "  class="math" > and <img  src="/img/revistas/cleiej/v19n1/1a07116x.png" alt="y  "  class="math" >, respectively. This distance metric is also normalized to unit length.    <table  class="equation"><tr><td><a   id="x1-24001r1"></a>    <center class="math-display" > <img  src="/img/revistas/cleiej/v19n1/1a07117x.png" alt="        &#x2211;n     &#x2211;n I(x,y) =   xi -   min(xi,yi)         i=1    i=1 " class="math-display" ></center></td><td class="equation-label">(1)</td></tr></table> <!--l. 467-->    ]]></body>
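<body><![CDATA[<p class="indent" >   A minimal sketch of the distance in Equation 1, assuming unit-normalized histograms of equal length (the function name is illustrative):

```python
import numpy as np

def intersection_distance(x, y):
    """Histogram intersection distance: sum(x_i) - sum(min(x_i, y_i)).
    Returns 0 for identical histograms; larger values mean less overlap."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.sum(x) - np.sum(np.minimum(x, y)))

h1 = np.array([0.5, 0.3, 0.2])
h2 = np.array([0.2, 0.3, 0.5])
print(intersection_distance(h1, h1))  # 0.0
print(intersection_distance(h1, h2))  # 0.3
```
]]></body>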
<body><![CDATA[<p class="nopar" > <!--l. 469-->    <p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">3.7   </span> <a   id="x1-250003.7"></a>Accuracy</h4> <!--l. 471-->    <p class="noindent" >Let <img  src="/img/revistas/cleiej/v19n1/1a07118x.png" alt="E  "  class="math" > be an identification experiment that consists of a model <img  src="/img/revistas/cleiej/v19n1/1a07119x.png" alt="M  "  class="math" >, a set <img  src="/img/revistas/cleiej/v19n1/1a07120x.png" alt="S  "  class="math" > that contains <img  src="/img/revistas/cleiej/v19n1/1a07121x.png" alt="n  "  class="math" > images of leaves of <img  src="/img/revistas/cleiej/v19n1/1a07122x.png" alt="n  "  class="math" > (not necessarily different) unknown tree species to be identified, and an integer value <img  src="/img/revistas/cleiej/v19n1/1a07123x.png" alt="k  "  class="math" >, <img  src="/img/revistas/cleiej/v19n1/1a07124x.png" alt="k &#x2265; 1  "  class="math" >. We define <img  src="/img/revistas/cleiej/v19n1/1a07125x.png" alt="hit(M, k,x)  "  class="math" > as a boolean function that indicates if model <img  src="/img/revistas/cleiej/v19n1/1a07126x.png" alt="M  "  class="math" > generates a ranking in which one of the top <img  src="/img/revistas/cleiej/v19n1/1a07127x.png" alt="k  "  class="math" > candidate species is a correct identification of sample <img  src="/img/revistas/cleiej/v19n1/1a07128x.png" alt="x  "  class="math" >. Equation <a  href="#x1-25001r2">2<!--tex4ht:ref: eq:accuracy --></a> formally defines <img  src="/img/revistas/cleiej/v19n1/1a07129x.png" alt="Accuracy(M, S,k)  "  class="math" >.    <table  class="equation"><tr><td><a   id="x1-25001r2"></a>    <center class="math-display" > <img  src="/img/revistas/cleiej/v19n1/1a07130x.png" alt="                  &#x2211;      hit(M, k,x ) Accuracy(M, S,k) =   x&#x2208;S ----n----- " class="math-display" ></center></td><td class="equation-label">(2)</td></tr></table> <!--l. 
475-->    <p class="nopar" >                                                                                                                                                                                     <!--l. 477-->    <p class="noindent" >    <h3 class="sectionHead"><span class="titlemark">4   </span> <a   id="x1-260004"></a>Experiments</h3> <!--l. 479-->    <p class="noindent" >Several model variations were used in the experiments (see Table <a  href="#x1-26004r2">2<!--tex4ht:ref: algorithm_variations --></a>).      <dl class="enumerate-enumitem"><dt class="enumerate-enumitem">   1. </dt><dd  class="enumerate-enumitem">Our implementation of LeafSnap&#8217;s model of curvature HCoS.      </dd><dt class="enumerate-enumitem">   2. </dt><dd  class="enumerate-enumitem">Several scales of the texture model based on LBPV.      </dd><dt class="enumerate-enumitem">   3. </dt><dd  class="enumerate-enumitem">The combination of HCoS and the best LBPV variant, which according to our tests was R1P8 &amp;      R3P16. This combination was further disaggregated by assigning different weights to HCoS and the      texture model.</dd></dl>        <div class="table">                                                                                                                                                                                     <!--l. 487-->    <p class="indent" >   <a   id="x1-26004r2"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Table&#x00A0;2: </span><span   class="content">Models used in the experiments including curvature, variants of texture model, and combination of both</span></div><!--tex4ht:label?: x1-26004r2 -->     ]]></body>
<body><![CDATA[<div class="pic-tabular"> <img  src="/img/revistas/cleiej/v19n1/1a07131x.png" alt="--------------------------------------------- -Model-Name---Description-----------Type------  HCoS         25 scales, 21 bins per Curvature               scale  R1P8         radius = 1,pixels = 8 Texture  R2P16        radius = 2,pixels  =  Texture               16  R3P16        radius = 3,pixels  =  Texture               16  R1P8     &amp;   radius = 1,pixels = 8 Texture  R2P16        &amp; radius = 2,pixels =               16  R1P8     &amp;   radius = 1,pixels = 8 Texture  R3P16        &amp; radius = 3,pixels =               16  HCoS     &amp;   Assigned  a factor to  Curvature  R1P8     &amp;   curvature and texture.  and  Tex-  R3P16        Factors summed  1, in-  ture               creasing by 0.10 ---------------------------------------------  " ></div>                                                                                                                                                                                        </div><hr class="endfloat">    </div> <!--l. 505-->    <p class="noindent" ><span class="likeparagraphHead"><a   id="x1-270004"></a>One Versus All</span>    One approach to test a model is to partition a dataset into two datasets: one for training and one for testing. Another approach is to use One versus All, that is, each image in a dataset with <img  src="/img/revistas/cleiej/v19n1/1a07132x.png" alt="n  "  class="math" > elements is considered a test image and the remaining <img  src="/img/revistas/cleiej/v19n1/1a07133x.png" alt="n- 1  "  class="math" > images the training subset. We used both approaches as explained at the end of this section. <!--l. 508-->    <p class="noindent" ><span class="likeparagraphHead"><a   id="x1-280004"></a>Combining Curvature and Texture</span>    When combining two different models, we faced the issue of having different scales in the resulting ranking of each model. 
This was resolved by normalizing the rankings to unit length. <!--l. 511-->    <p class="indent" >   After normalizing the rankings (one per combined algorithm), we assigned a factor to each combined model in order to merge the predicted species into a single ranking. The two factors sum to <img  src="/img/revistas/cleiej/v19n1/1a07134x.png" alt="1  "  class="math" > in total. However, we varied the factor associated with each model to observe its behavior across different combinations. We used factors of (0.10, 0.90), (0.20, 0.80), (0.30, 0.70), (0.40, 0.60), (0.50, 0.50), (0.60, 0.40), (0.70, 0.30), (0.80, 0.20), (0.90, 0.10). For example, <img  src="/img/revistas/cleiej/v19n1/1a07135x.png" alt="(0.50,0.50)  "  class="math" > means we gave the same level of importance to each model in that combination. Algorithm <a  href="#x1-28001r10">10<!--tex4ht:ref: alg:combineranking --></a> describes how the merge between the two methods was achieved.        <div class="algorithm"> <!--l. 
514-->    <p class="indent" >   <a   id="x1-28001r10"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Algorithm 10: </span><span   class="content">Combining Two Rankings</span></div><!--tex4ht:label?: x1-28001r10 -->     <div class="algorithmic"> <a   id="x1-28002r81"></a>  <span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a07136x.png" alt="combinedRanking &#x2190; &#x2205; "  class="math" > <a   id="x1-28003r82"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <img  src="/img/revistas/cleiej/v19n1/1a07137x.png" alt="FACTORS &#x2190; {0.10,0.20,0.30,0.40,0.50,0.60,0.70,0.80,0.90} "  class="math" > <a   id="x1-28004r83"></a>      ]]></body>
<body><![CDATA[<br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">for all</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a07138x.png" alt="factor  "  class="math" > <span  class="cmr-7">in</span> <img  src="/img/revistas/cleiej/v19n1/1a07139x.png" alt="FACTORS  "  class="math" ><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">do</span><span class="for-body"> <a   id="x1-28005r84"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <img  src="/img/revistas/cleiej/v19n1/1a07140x.png" alt="results&#x2190; empty  "  class="math" > <a   id="x1-28006r85"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <span  class="cmbx-7">for all</span><span  class="cmr-7">&#x00A0;</span><img  src="/img/revistas/cleiej/v19n1/1a07141x.png" alt="species  "  class="math" > <span  class="cmr-7">in</span> <img  src="/img/revistas/cleiej/v19n1/1a07142x.png" alt="allSpecies  "  class="math" ><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">do</span><span class="for-body"> <a   id="x1-28007r86"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a07143x.png" alt="distance1 &#x2190; resultsAlgorithm1[species]  "  class="math" > <a   id="x1-28008r87"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a07144x.png" alt="distance2 &#x2190; resultsAlgorithm2[species]  "  class="math" > <a   id="x1-28009r88"></a>      <br><span class="ALCitem"></span><span style="width:23.98615pt;">&nbsp;</span>     <img  src="/img/revistas/cleiej/v19n1/1a07145x.png" alt="results[species]&#x2190; (distance1*factor)+(distance2*(1- factor))  "  class="math" >    </span><a   id="x1-28010r89"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   
<span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">for</span><a   id="x1-28011r90"></a>      <br><span class="ALCitem"></span><span style="width:13.98613pt;">&nbsp;</span>   <img  src="/img/revistas/cleiej/v19n1/1a07146x.png" alt="combinedRanking[factor]&#x2190; TakeBestKDistances(results)  "  class="math" >   </span><a   id="x1-28012r91"></a>      <br><span class="ALCitem"></span><span style="width:3.98611pt;">&nbsp;</span> <span  class="cmbx-7">end</span><span  class="cmr-7">&#x00A0;</span><span  class="cmbx-7">for</span> </div>                                                                                                                                                                                        </div><hr class="endfloat">    </div>    <h4 class="subsectionHead"><span class="titlemark">4.1   </span> <a   id="x1-290004.1"></a>Texture and Curvature Model Experiments</h4> <!--l. 533-->    <p class="noindent" >We ran all models <img  src="/img/revistas/cleiej/v19n1/1a07147x.png" alt="M  "  class="math" > described in Table <a  href="#x1-26004r2">2<!--tex4ht:ref: algorithm_variations --></a>, with <img  src="/img/revistas/cleiej/v19n1/1a07148x.png" alt="1 &#x2264; k &#x2264; 10  "  class="math" >, and the following data sets: Costa Rica clean subset (One versus All, <img  src="/img/revistas/cleiej/v19n1/1a07149x.png" alt="n = 1468  "  class="math" >), Costa Rica noisy subset (One versus All, <img  src="/img/revistas/cleiej/v19n1/1a07150x.png" alt="n = 2345  "  class="math" >), and Costa Rica complete data set (training set with all <img  src="/img/revistas/cleiej/v19n1/1a07151x.png" alt="1468  "  class="math" > clean images and testing set with all <img  src="/img/revistas/cleiej/v19n1/1a07152x.png" alt="2345  "  class="math" > noisy images). 
In each experiment, <img  src="/img/revistas/cleiej/v19n1/1a07153x.png" alt="Accuracy(M, S,k)  "  class="math" > was calculated for the corresponding dataset <img  src="/img/revistas/cleiej/v19n1/1a07154x.png" alt="S  "  class="math" >. In addition, for model HCoS &amp; R1P8 &amp; R3P16, Algorithm <a  href="#x1-28001r10">10<!--tex4ht:ref: alg:combineranking --></a> was used to comprehensively consider different weight combinations for HCoS and the texture model. Table <a  href="#x1-38001r5">5<!--tex4ht:ref: tab:costaricahcosvscombined --></a> summarizes the results obtained. <!--l. 536-->    ]]></body>
<body><![CDATA[<p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">4.2   </span> <a   id="x1-300004.2"></a>Processing Times</h4> <!--l. 537-->    <p class="noindent" >To understand the duration of the recognition process, we measured the recognition time for all images from both the Costa Rican noisy and clean subsets, as if a back-end received images from a mobile app. The measured time includes image loading, segmentation, stem deletion, normalization, curvature calculations, texture calculations, and similarity search. It does not include network-related times. We used a MacBook Pro with an Intel Core i7, <img  src="/img/revistas/cleiej/v19n1/1a07155x.png" alt="2.8  "  class="math" > GHz, and <img  src="/img/revistas/cleiej/v19n1/1a07156x.png" alt="8  "  class="math" > GB of RAM. <!--l. 539-->    <p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">4.3   </span> <a   id="x1-310004.3"></a>Statistical Analysis of Noise Impact, Best Algorithms per Species, and Best Value <img  src="/img/revistas/cleiej/v19n1/1a07157x.png" alt="&#x02C6;k  "  class="math" ></h4> <!--l. 540-->    <p class="noindent" >Using the clean and noisy datasets, we calculated a General Linear Model (GLM) per species over a total of 65 species. We aimed to discover the following:      <ul class="itemize1">      <li class="itemize">What is the minimum value of <img  src="/img/revistas/cleiej/v19n1/1a07158x.png" alt="k  "  class="math" > that provides results statistically equivalent to those obtained when      <img  src="/img/revistas/cleiej/v19n1/1a07159x.png" alt="k = 10  "  class="math" > for each species? Obviously, accuracy increases as the value of <img  src="/img/revistas/cleiej/v19n1/1a07160x.png" alt="k  "  class="math" > increases. 
However, for practical reasons, we would like to determine whether there is a threshold value <img  src="/img/revistas/cleiej/v19n1/1a07161x.png" alt="&#x02C6;k  "  class="math" > after which accuracy remains statistically equivalent to using <img  src="/img/revistas/cleiej/v19n1/1a07162x.png" alt="k = 10  "  class="math" >. For example, in a mobile app, users would likely prefer a shortlist of best-ranked species smaller than the maximum of 10.      </li>      <li class="itemize">What is the best algorithm or combination of algorithms for each species? For this we used five different      algorithms: R1P8 &amp; R3P16 (texture alone), <img  src="/img/revistas/cleiej/v19n1/1a07163x.png" alt="0.1  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07164x.png" alt="0.9  "  class="math" > R1P8 &amp; R3P16, <img  src="/img/revistas/cleiej/v19n1/1a07165x.png" alt="0.5  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07166x.png" alt="0.5  "  class="math" >      R1P8 &amp; R3P16, <img  src="/img/revistas/cleiej/v19n1/1a07167x.png" alt="0.9  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07168x.png" alt="0.1  "  class="math" > R1P8 &amp; R3P16, and HCoS (curvature alone). This also includes      clustering species by their most significant algorithms, and characterizing the clusters      with the most species and the best accuracies.      </li>      <li class="itemize">Does noise decrease the accuracy level obtained per species? Can we find some species that are not      affected by noise in the data?</li>    </ul> <!--l. 551-->    <p class="indent" >   To achieve this, we calculated a GLM per species to detect the significance of noise, the algorithm used, and the value of <img  src="/img/revistas/cleiej/v19n1/1a07169x.png" alt="k  "  class="math" >. We used a confidence level of <img  src="/img/revistas/cleiej/v19n1/1a07170x.png" alt="0.95  "  class="math" >. 
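As a simple illustration of the threshold idea (not the GLM and Tukey procedure actually used in the paper), the smallest k whose accuracy a test cannot distinguish from that of k = 10 can be estimated with a one-sided pooled two-proportion z-test; all names, numbers, and the test variant below are assumptions for the sketch:

```python
import math

def one_sided_p_value(p1, p2, n):
    """P-value for H1: p1 is lower than p2 (pooled two-proportion z-test,
    equal sample sizes n)."""
    pooled = (p1 + p2) / 2.0
    se = math.sqrt(pooled * (1.0 - pooled) * 2.0 / n)
    z = (p1 - p2) / se
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

def threshold_k(accuracy_by_k, n, alpha=0.05):
    """Smallest k whose accuracy the test cannot distinguish from k = 10."""
    top = accuracy_by_k[10]
    for k in sorted(accuracy_by_k):
        if one_sided_p_value(accuracy_by_k[k], top, n) >= alpha:
            return k  # first k not significantly worse than k = 10
    return 10
```

Note that failing to reject at level alpha is only a rough proxy for equivalence; a proper equivalence test (e.g., TOST) or the per-species GLM with Tukey comparisons used here is more rigorous.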
Once each GLM was calculated and the significance of each main effect established, we tested whether all levels within each factor were statistically equivalent; that is, for all three factors, we looked for the levels that differ significantly. We used a Tukey statistical test for each factor. Table <a  href="#x1-31001r3">3<!--tex4ht:ref: factors_and_levels --></a> shows the different factors and levels used in this experiment.        <div class="table">     <!--l. 554-->    <p class="indent" >   <a   id="x1-31001r3"></a><hr class="float">    <div class="float"  >     <div class="caption"  ><span class="id">Table&#x00A0;3: </span><span   class="content">Factors and levels for GLM per species</span></div><!--tex4ht:label?: x1-31001r3 -->     ]]></body>
<body><![CDATA[<div class="pic-tabular"> <img  src="/img/revistas/cleiej/v19n1/1a07171x.png" alt="-----------------------------------------------------------  Factor             Number     Levels -------------------of-Levels--------------------------------                               R1P8  &amp; R3P16    Algorithm          5                                 0.1HCoS  and0.9R1P8 &amp; R3P16                               0.5HCoS  and0.5R1P8 &amp; R3P16                               0.9HCoS  and0.1R1P8 &amp; R3P16                               HCoS  Noise is worse?   2          Yes, No  k                 10         1,2,3,4,5,6,7,8,9,10 -----------------------------------------------------------  " ></div>     </div><hr class="endfloat">    </div>    <h4 class="subsectionHead"><span class="titlemark">4.4   </span> <a   id="x1-320004.4"></a>Statistical Analysis of Best Algorithms for <img  src="/img/revistas/cleiej/v19n1/1a07172x.png" alt="k = 5  "  class="math" ></h4> <!--l. 573-->    <p class="noindent" >Because <img  src="/img/revistas/cleiej/v19n1/1a07173x.png" alt="k = 5  "  class="math" > has become an informal benchmarking value in other research <span class="cite">[<a  href="#Xleafsnap">13</a><a id="br13">]</a></span>, it is important to discover which algorithms achieved the best accuracy when <img  src="/img/revistas/cleiej/v19n1/1a07174x.png" alt="k = 5  "  class="math" >. For this experiment, we fitted a Binary Logistic Regression and optimized it to maximize the probability of a successful identification. Based on the resulting regression model, we calculated, per species, the two best algorithms for both the noisy and clean conditions with <img  src="/img/revistas/cleiej/v19n1/1a07175x.png" alt="k = 5  "  class="math" >. <!--l. 
579-->    <p class="noindent" >    <h3 class="sectionHead"><span class="titlemark">5   </span> <a   id="x1-330005"></a>Results</h3> <!--l. 582-->    <p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">5.1   </span> <a   id="x1-340005.1"></a>Comparison with Other Studies</h4> <!--l. 584-->    <p class="noindent" >To establish a baseline, we compare with several other studies that have used the Flavia dataset in their research <span class="cite">[<a  href="#XWUFlavia">6</a><a id="br6">]</a></span>. Table <a  href="#x1-34001r4">4<!--tex4ht:ref: tab:flaviacomparison --></a> compares these studies with our approaches. Some studies do not report accuracy but only precision. On this dataset, the best accuracy of our work, 0.991, was achieved by combining 0.5 HCoS and 0.5 R1P8 &amp; R3P16 with <img  src="/img/revistas/cleiej/v19n1/1a07176x.png" alt="k = 10  "  class="math" >. We also tested texture alone, which proved highly accurate, reaching up to 0.98. This dataset, however, has been artificially cleaned, so approaches should also be evaluated on more complex datasets.        <div class="table">     <!--l. 
587-->    <p class="indent" >   <a   id="x1-34001r4"></a><hr class="float">    <div class="float"  >     <div class="caption"  ><span class="id">Table&#x00A0;4: </span><span   class="content">Comparison of results obtained by other studies on the Flavia dataset</span></div><!--tex4ht:label?: x1-34001r4 -->      <div class="pic-tabular"><img  src="/img/revistas/cleiej/v19n1/1a07177x.png" ></div>     </div><hr class="endfloat">    </div>    <h4 class="subsectionHead"><span class="titlemark">5.2   </span> <a   id="x1-350005.2"></a>Texture and Curvature Model Experiments</h4> <!--l. 615-->    ]]></body>
<body><![CDATA[<p class="noindent" ><span class="paragraphHead"><a   id="x1-360005.2"></a><span  class="cmbx-10">Clean Subset</span></span>    As shown in Table <a  href="#x1-38001r5">5<!--tex4ht:ref: tab:costaricahcosvscombined --></a>, the best results were obtained when <img  src="/img/revistas/cleiej/v19n1/1a07178x.png" alt="k = 10  "  class="math" > and the model is <img  src="/img/revistas/cleiej/v19n1/1a07179x.png" alt="0.5  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07180x.png" alt="0.5  "  class="math" > R1P8 &amp; R3P16. The resulting accuracy is <img  src="/img/revistas/cleiej/v19n1/1a07181x.png" alt="0.945  "  class="math" >, in contrast with the accuracy of HCoS which is <img  src="/img/revistas/cleiej/v19n1/1a07182x.png" alt="0.79  "  class="math" >. Notice however that <img  src="/img/revistas/cleiej/v19n1/1a07183x.png" alt="0.5  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07184x.png" alt="0.5  "  class="math" > R1P8 &amp; R3P16 is also the best for all values of <img  src="/img/revistas/cleiej/v19n1/1a07185x.png" alt="6 &#x003C;= k &#x003C;= 10  "  class="math" >. For <img  src="/img/revistas/cleiej/v19n1/1a07186x.png" alt="1 &#x003C;= k &#x003C;=  5  "  class="math" >, <img  src="/img/revistas/cleiej/v19n1/1a07187x.png" alt="0.5  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07188x.png" alt="0.5  "  class="math" > R1P8 &amp; R3P16 and <img  src="/img/revistas/cleiej/v19n1/1a07189x.png" alt="0.1  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07190x.png" alt="0.9  "  class="math" > R1P8 &amp; R3P16 have very similar levels of accuracy. Figure <a  href="#x1-38002r11">11<!--tex4ht:ref: gra:costaricancleanhcosvscombined --></a> more clearly depicts these comparisons. <!--l. 
617-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-370005.2"></a><span  class="cmbx-10">Noisy Subset</span></span>    Figure <a  href="#x1-38002r11">11<!--tex4ht:ref: gra:costaricannoisyhcosvscombined --></a> clearly shows that <img  src="/img/revistas/cleiej/v19n1/1a07191x.png" alt="0.1  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07192x.png" alt="0.9  "  class="math" > R1P8 &amp; R3P16 has the best accuracy for all values of <img  src="/img/revistas/cleiej/v19n1/1a07193x.png" alt="k  "  class="math" >. In addition, the level of accuracy improvement with respect to HCoS is considerably larger, ranging from <img  src="/img/revistas/cleiej/v19n1/1a07194x.png" alt="35.2%  "  class="math" > when <img  src="/img/revistas/cleiej/v19n1/1a07195x.png" alt="k = 10  "  class="math" > to <img  src="/img/revistas/cleiej/v19n1/1a07196x.png" alt="42.5%  "  class="math" > when <img  src="/img/revistas/cleiej/v19n1/1a07197x.png" alt="k = 4  "  class="math" > as shown in Table <a  href="#x1-43002r7">7<!--tex4ht:ref: tab:hypothesisnoisyset --></a>. <!--l. 619-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-380005.2"></a><span  class="cmbx-10">Complete Dataset</span></span>    As Figure <a  href="#x1-38002r11">11<!--tex4ht:ref: gra:costaricanallhcosvscombined --></a> shows, the level of accuracy is considerably lower for all models, as compared to the previous two experiments. Even the best model achieves levels of accuracy in a poor <img  src="/img/revistas/cleiej/v19n1/1a07198x.png" alt="[14.5%,43.9% ]  "  class="math" > range.        <div class="table">                                                                                                                                                                                     <!--l. 
639-->    <p class="indent" >   <a   id="x1-38001r5"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Table&#x00A0;5: </span><span   class="content">Accuracy obtained when combining curvature and texture over the clean subset, the noisy subset, and the complete Costa Rican dataset</span></div><!--tex4ht:label?: x1-38001r5 -->      <div class="pic-tabular"><img  src="/img/revistas/cleiej/v19n1/1a07199x.png" alt="--------------------------------------------------------------------------------------------------                 Clean                             Noisy                           All --------------------------------------------------------------------------------------------------              HCoS=a, R1P8.R3P16=b           HCoS=a, R1P8.R3P16=b           HCoS=a, R1P8.R3P16=b   k   HCoS                           HCoS                           HCoS               a=0.1  a=0.5    a=0.9           a=0.1  a=0.5    a=0.9          a=0.1  a=0.5    a=0.9              b=0.9  b=0.5    b=0.1          b=0.9  b=0.5    b=0.1          b=0.9  b=0.5    b=0.1 -----|------------------------------|------------------------------|------------------------------    1 |0.311   0.567   0.563     0.386 |0.151   0.519   0.320     0.177 |0.070   0.145   0.120     0.084    2 |0.446   0.702   0.702     0.520 |0.225   0.638   0.435     0.257 |0.119   0.209   0.178     0.133    3 |0.535   0.766   0.785     0.610 |0.277   0.701   0.515     0.311 |0.148   0.252   0.216     0.165    4 |0.587   0.816   0.822     0.668 |0.325   0.750   0.574     0.364 |0.176   0.295   0.251     0.201    5 |0.631   0.857   0.854     0.706 |0.364   0.783   0.616     0.408 |0.204   0.326   0.277     0.224    6 |0.674   0.875   0.881     0.748 |0.399   0.810   0.660     0.455 |0.228   0.350   
0.304     0.249    7 |0.710   0.890   0.909     0.779 |0.435   0.830   0.692     0.484 |0.253   0.377   0.328     0.277    8 |0.740   0.903   0.924     0.812 |0.470   0.844   0.721     0.516 |0.273   0.400   0.353     0.299    9 |0.768   0.918   0.937     0.832 |0.496   0.858   0.744     0.546 |0.295   0.417   0.371     0.320 --10--0.790---0.931---0.945-----0.845--0.521---0.872---0.771-----0.574--0.318---0.439---0.393-----0.336-" ></div>                                                                                                                                                                                        </div><hr class="endfloat">    </div> <!--l. 669-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-38002r11"></a>                                                                                                                                                                                      <!--l. 672-->    ]]></body>
<body><![CDATA[<p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f11.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;11: </span><span   class="content">Comparison of HCoS and Combinations</span></div><!--tex4ht:label?: x1-38002r11 -->     <!--l. 677-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 679-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-390005.2"></a><span  class="cmbx-10">Discussion</span></span>    These experiments show that, in general, combining HCoS and LBPV consistently improves the accuracy of HCoS alone. Accuracy declines as the combination factor assigned to curvature approaches <img  src="/img/revistas/cleiej/v19n1/1a07200x.png" alt="1  "  class="math" >. Overall, the best combination seems to be <img  src="/img/revistas/cleiej/v19n1/1a07201x.png" alt="0.1  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07202x.png" alt="0.9  "  class="math" > LBPV. It is also important to note that accuracy is sensitive to the quality of the dataset: recognition accuracy is consistently higher on the clean subset than on the noisy subset. This reflects the importance of good pre-processing and segmentation, since shadows, dust, and other artifacts degrade the final accuracy. <!--l. 683-->    <p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">5.3   </span> <a   id="x1-400005.3"></a>Measuring Significance of the Accuracy Increase</h4> <!--l. 684-->    <p class="noindent" >As shown in the previous section, accuracy increases when texture is added to our HCoS implementation. This increase, however, might not be statistically significant, so we applied a two-sample proportion test. 
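A standard pooled two-proportion z-test gives the flavor of this comparison (a sketch only; the exact test variant, and hence the p-values reported in Tables 6, 7, and 8, are the authors', and this simple pooled test will not reproduce them exactly):

```python
import math

def proportion_test(acc_hcos, acc_combined, n, alpha=0.05):
    """One-sided two-proportion z-test with pooled variance.
    H0: both accuracies are equal; H1: HCoS accuracy is lower.
    Assumes equal sample sizes n for both proportions."""
    pooled = (acc_hcos + acc_combined) / 2.0
    se = math.sqrt(pooled * (1.0 - pooled) * 2.0 / n)
    z = (acc_hcos - acc_combined) / se
    p_value = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
    return p_value, p_value < alpha  # (p-value, reject H0?)
```

For example, on the clean subset (n = 1468) at k = 10, comparing HCoS (0.790) against the 0.5 HCoS and 0.5 R1P8 & R3P16 combination (0.945) yields a vanishingly small p-value under this sketch, so H0 is rejected.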
Our null hypothesis <img  src="/img/revistas/cleiej/v19n1/1a07203x.png" alt="H0  "  class="math" > is that the accuracy of the HCoS implementation equals that obtained by combining curvature and texture. Our alternative hypothesis <img  src="/img/revistas/cleiej/v19n1/1a07204x.png" alt="H1  "  class="math" > is that the accuracy of the HCoS implementation is lower than that of the combinations. <!--l. 686-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-410005.3"></a><span  class="cmbx-10">Proportion Tests on the Clean Subset</span></span>    Table <a  href="#x1-43001r6">6<!--tex4ht:ref: tab:hypothesiscleanset --></a> shows the results of all the proportion tests for the clean subset. Most combinations of HCoS and R1P8 &amp; R3P16 for <img  src="/img/revistas/cleiej/v19n1/1a07205x.png" alt="1 &#x003C;= k &#x003C;= 10  "  class="math" > resulted in very low p-values, which reject <img  src="/img/revistas/cleiej/v19n1/1a07206x.png" alt="H0  "  class="math" >. However, a few of the accuracy increases from <img  src="/img/revistas/cleiej/v19n1/1a07207x.png" alt="0.9  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07208x.png" alt="0.1  "  class="math" > R1P8 &amp; R3P16 failed the test. That is, as the weight assigned to HCoS increases, the accuracy gains become non-significant, which is expected since the combination approaches HCoS alone. <!--l. 688-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-420005.3"></a><span  class="cmbx-10">Proportion Tests on the Noisy Subset</span></span>    Table <a  href="#x1-43002r7">7<!--tex4ht:ref: tab:hypothesisnoisyset --></a> shows the results of all the proportion tests for the noisy subset. All combinations of HCoS and R1P8 &amp; R3P16 resulted in very low p-values, which reject <img  src="/img/revistas/cleiej/v19n1/1a07209x.png" alt="H0  "  class="math" >. <!--l. 
690-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-430005.3"></a><span  class="cmbx-10">Proportion Tests on the Complete Dataset</span></span>    Table <a  href="#x1-43003r8">8<!--tex4ht:ref: tab:hypothesiscompleteset --></a> shows the results of all the proportion tests on the complete dataset of leaf images from Costa Rica. Every test rejected <img  src="/img/revistas/cleiej/v19n1/1a07210x.png" alt="H0  "  class="math" > except for <img  src="/img/revistas/cleiej/v19n1/1a07211x.png" alt="k = 1  "  class="math" >, for which the results are not significant. <!--l. 692-->    ]]></body>
<body><![CDATA[<p class="indent" >   In all proportion tests, assigning a larger weight to texture significantly improves the model's accuracy; as the weight assigned to texture declines, the improvement becomes statistically insignificant.        <div class="table">     <!--l. 695-->    <p class="indent" >   <a   id="x1-43001r6"></a><hr class="float">    <div class="float"  >     <div class="caption"  ><span class="id">Table&#x00A0;6: </span><span   class="content">Proportion Test results over the Costa Rican Clean Subset</span></div><!--tex4ht:label?: x1-43001r6 -->      <div class="pic-tabular"><img  src="/img/revistas/cleiej/v19n1/1a07212x.png" alt="-------------------------------------------------------------------------                           Costa Rica Clean Subset                            Confidence Level=0.95                              Sample Size=1468                      H0: HCoS=HCoS   &amp; R1P8 &amp; R3P16                      H1: HCoS &#x003C;HCoS  &amp; R1P8 &amp; R3P16 -------------------------------------------------------------------------   k   HCoS    HCoS=0.1, R1P8       p-Value  Reject H0?      
Accuracy ---------------&amp;-R3P16=0.9--------------------------------Improvement----    1  0.311        0.567        5.65023E-21       YES         0.255    2  0.446        0.702        4.99997E-19       YES         0.255    3  0.535        0.766        2.00608E-15       YES         0.231    4  0.587        0.816        1.21109E-19       YES         0.230    5  0.631        0.857        2.81689E-21       YES         0.225    6  0.674        0.875        9.64321E-21       YES         0.201    7  0.710        0.890        2.70704E-18       YES         0.180    8  0.740        0.903        4.32615E-17       YES         0.163    9  0.768        0.918        6.49779E-16       YES         0.151   10  0.790        0.931        1.14726E-14       YES         0.141 -------------------------------------------------------------------------   k   HCoS    HCoS=0.5, R1P8       p-Value  Reject H0?      Accuracy ---------------&amp;-R3P16=0.5--------------------------------Improvement----    1  0.311        0.563        4.32788E-06       YES         0.251    2  0.446        0.702        6.56883E-09       YES         0.256    3  0.535        0.785        1.09341E-11       YES         0.251    4  0.587        0.822        5.88439E-16       YES         0.235    5  0.631        0.854        2.42945E-19       YES         0.223    6  0.674        0.881        4.19306E-23       YES         0.207    7  0.710        0.909        1.18899E-21       YES         0.198    8  0.740        0.924        1.62723E-20       YES         0.185    9  0.768        0.937        7.26426E-20       YES         0.170 --10--0.790--------0.945--------7.84393E-20-------YES---------0.155-------               HCoS=0.9, R1P8                               Accuracy   k   HCoS     &amp; R3P16=0.1         p-Value  Reject H0?     
Improvement -------------------------------------------------------------------------    1  0.311        0.386        0.976355356        NO         0.075    2  0.446        0.520        0.823819993        NO         0.074    3  0.535        0.610        0.840833982        NO         0.075    4  0.587        0.668         0.26158887        NO         0.082    5  0.631        0.706          0.0201783       YES         0.074    6  0.674        0.748        0.017077481       YES         0.074    7  0.710        0.779        0.002586312       YES         0.069    8  0.740        0.812        0.000201496       YES         0.072    9  0.768        0.832        5.92221E-05       YES         0.065 --10--0.790--------0.845--------3.63353E-06-------YES---------0.055-------  " ></div>                                                                                                                                                                                        </div><hr class="endfloat">    </div>        <div class="table">                                                                                                                                                                                     <!--l. 753-->    <p class="indent" >   <a   id="x1-43002r7"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Table&#x00A0;7: </span><span   class="content">Proportion Test results over the Costa Rican Noisy Subset</span></div><!--tex4ht:label?: x1-43002r7 -->      ]]></body>
<body><![CDATA[<div class="pic-tabular"><img  src="/img/revistas/cleiej/v19n1/1a07213x.png" alt="-------------------------------------------------------------------------                           Costa Rica Noisy Subset                            Confidence Level=0.95                              Sample Size=2345                      H0: HCoS=HCoS   &amp; R1P8 &amp; R3P16                      H1: HCoS &#x003C;HCoS  &amp; R1P8 &amp; R3P16 -------------------------------------------------------------------------   k   HCoS    HCoS=0.1, R1P8       p-Value  Reject H0?      Accuracy ---------------&amp;-R3P16=0.9--------------------------------Improvement----    1  0.151        0.519        1.7283E-129       YES         0.368    2  0.225        0.638        7.7632E-157       YES         0.413    3  0.277        0.701        1.3354E-165       YES         0.424    4  0.325        0.750        6.4313E-182       YES         0.425    5  0.364        0.783        5.3369E-191       YES         0.420    6  0.399        0.810        2.8814E-194       YES         0.411    7  0.435        0.830        5.1936E-187       YES         0.396    8  0.470        0.844        1.3596E-178       YES         0.374    9  0.496        0.858        2.6133E-177       YES         0.362   10  0.521        0.872        5.9603E-173       YES         0.352 -------------------------------------------------------------------------   k   HCoS    HCoS=0.5, R1P8       p-Value  Reject H0?      
Accuracy ---------------&amp;-R3P16=0.5--------------------------------Improvement----    1  0.151        0.320         1.2405E-75       YES         0.169    2  0.225        0.435        2.2453E-116       YES         0.209    3  0.277        0.515        1.1237E-149       YES         0.238    4  0.325        0.574        7.6123E-168       YES         0.250    5  0.364        0.616        5.4143E-184       YES         0.252    6  0.399        0.660        1.5749E-202       YES         0.261    7  0.435        0.692        5.0885E-199       YES         0.258    8  0.470        0.721        8.0747E-191       YES         0.250    9  0.496        0.744        1.7097E-191       YES         0.248 --10--0.521--------0.771--------2.0950E-191-------YES---------0.250-------               HCoS=0.9, R1P8                               Accuracy   k   HCoS     &amp; R3P16=0.1         p-Value  Reject H0?     Improvement -------------------------------------------------------------------------    1  0.151        0.177         2.4494E-26       YES         0.025    2  0.225        0.257         1.9667E-50       YES         0.032    3  0.277        0.311         1.9949E-63       YES         0.035    4  0.325        0.364         4.4262E-79       YES         0.040    5  0.364        0.408         1.5164E-96       YES         0.044    6  0.399        0.455        6.3080E-102       YES         0.055    7  0.435        0.484        8.9291E-112       YES         0.050    8  0.470        0.516        6.4232E-118       YES         0.046    9  0.496        0.546        4.2650E-125       YES         0.049 --10--0.521--------0.574--------9.9417E-134-------YES---------0.054-------  " ></div>                                                                                                                                                                                        </div><hr class="endfloat">    </div>        <div class="table">                                                                
<!--l. 810-->    <p class="indent" >   <a   id="x1-43003r8"></a><hr class="float">    <div class="float"  > <div class="caption"  ><span class="id">Table&#x00A0;8: </span><span   class="content">Proportion Test results over the Costa Rican Complete Dataset</span></div><!--tex4ht:label?: x1-43003r8 -->      <div class="pic-tabular"><img  src="/img/revistas/cleiej/v19n1/1a07214x.png" alt="Table 8: one-sided proportion tests on the Costa Rican complete dataset (confidence level 0.95, sample size 2345), testing H0: accuracy of HCoS alone equals that of a weighted combination, against H1: HCoS alone is worse, for k = 1 to 10 and three weightings (0.1 HCoS with 0.9 R1P8 &amp; R3P16; 0.5 with 0.5; 0.9 with 0.1). H0 is not rejected for k = 1 and is rejected for every k from 2 to 10 under all three weightings, with maximum accuracy improvements of 0.126, 0.080, and 0.026, respectively." ></div> </div><hr class="endfloat">    </div>    <h4 class="subsectionHead"><span class="titlemark">5.4   </span> <a   id="x1-440005.4"></a>Processing Time</h4> <!--l. 866-->    <p class="noindent" >As shown in Figure <a  href="#x1-44001r12">12<!--tex4ht:ref: gra:elapsedtimes --></a>, recognition times range from <img  src="/img/revistas/cleiej/v19n1/1a07215x.png" alt="2.76  "  class="math" > to <img  src="/img/revistas/cleiej/v19n1/1a07216x.png" alt="12.81  "  class="math" > seconds. The median elapsed time is <img  src="/img/revistas/cleiej/v19n1/1a07217x.png" alt="5.70  "  class="math" > seconds for the clean subset and <img  src="/img/revistas/cleiej/v19n1/1a07218x.png" alt="5.66  "  class="math" > seconds for the noisy subset. These times are suitable even for mobile applications that use the developed back-end. <!--l. 
868-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-44001r12"></a>                                                                                                                                                                                      <!--l. 870-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f12.png" alt="PIC"   >     ]]></body>
<body><![CDATA[<br>     <div class="caption"  ><span class="id">Figure&#x00A0;12: </span><span   class="content">Box plot of leaf image recognition times, simulating a mobile app back-end, for the Costa Rican noisy and clean subsets</span></div><!--tex4ht:label?: x1-44001r12 -->                                                                                                                                                                                     <!--l. 873-->    <p class="indent" >   </div><hr class="endfigure">    <h4 class="subsectionHead"><span class="titlemark">5.5   </span> <a   id="x1-450005.5"></a>Statistical Analysis of Noise Impact, Best Algorithms per Species, and Best Value of <img  src="/img/revistas/cleiej/v19n1/1a07219x.png" alt="k  "  class="math" ></h4> <!--l. 876-->    <p class="noindent" >Table <a  href="#x1-45001r9">9<!--tex4ht:ref: per_species_analysis --></a> shows the results of each per-species GLM. For each species, the maximum, mean, and median accuracy are reported. In addition, each species has been assigned a cluster according to the best algorithms that resulted from its Tukey test. Table <a  href="#x1-45002r10">10<!--tex4ht:ref: tab:clusters --></a> lists the algorithms in each cluster for reference. Additionally, the column &#8220;Best Without Noise&#8221; indicates whether noise affects the accuracy for each species. Finally, the column &#8220;<img  src="/img/revistas/cleiej/v19n1/1a07220x.png" alt="&#x02C6;k  "  class="math" >&#8221; reports the threshold value <img  src="/img/revistas/cleiej/v19n1/1a07221x.png" alt="&#x02C6;k "  class="math" > per species. As indicated before, any <img  src="/img/revistas/cleiej/v19n1/1a07222x.png" alt="k &#x003E; &#x02C6;k  "  class="math" > yields slightly better accuracy, but the improvement is not statistically significant.        
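<p class="indent" >The proportion tests summarized in Table 8 can be sketched as a one-sided two-proportion z-test. The following is an illustrative sketch only: the function name and the pooled-variance formulation are our own, and are not necessarily the exact procedure used to produce Table 8.

```python
import math

def two_prop_ztest_less(hits_a, hits_b, n, alpha=0.05):
    """One-sided two-proportion z-test with a pooled variance estimate.
    H0: p_a = p_b versus H1: p_a is smaller than p_b, with both
    proportions estimated from n queries each.
    Returns the p-value and whether H0 is rejected at level alpha."""
    p_a, p_b = hits_a / n, hits_b / n
    pooled = (hits_a + hits_b) / (2.0 * n)             # pooled proportion under H0
    se = math.sqrt(2.0 * pooled * (1.0 - pooled) / n)  # std. error of p_a - p_b
    z = (p_a - p_b) / se
    p_value = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # left-tail normal CDF
    return p_value, p_value < alpha

# k = 2 row of Table 8: accuracy 0.119 for HCoS alone versus 0.209 for the
# 0.1 HCoS and 0.9 R1P8 and R3P16 combination, over 2345 test queries
n = 2345
p_value, reject = two_prop_ztest_less(round(0.119 * n), round(0.209 * n), n)
```

<p class="indent" >With these inputs H0 is rejected, matching the &#8220;YES&#8221; entry of the corresponding row of Table 8; the exact p-values in the table may differ from this sketch.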
<div class="table">                                                                                                                                                                                     <!--l. 879-->    <p class="indent" >   <a   id="x1-45001r9"></a><hr class="float">    <div class="float"  > <div class="caption"  ><span class="id">Table&#x00A0;9: </span><span   class="content">Per-species Accuracy Maximum, Mean, and Median, Best Algorithms, Impact of Noise, and <img  src="/img/revistas/cleiej/v19n1/1a07223x.png" alt="&#x02C6;k  "  class="math" ></span></div><!--tex4ht:label?: x1-45001r9 -->      <div class="pic-tabular"><img  src="/img/revistas/cleiej/v19n1/1a07224x.png" alt="Table 9: for each species, the maximum, mean, and median accuracy, the best-algorithm cluster, whether the species is best identified without noise, the threshold k, and the number of images. Species are sorted by maximum accuracy, from 1 (e.g., Cedrela odorata, Muntingia calabura) down to 0.77 (Annona mucosa)." ></div> </div><hr class="endfloat">    </div>        <div 
class="table">                                                                                                                                                                                     <!--l. 959-->    ]]></body>
<body><![CDATA[<p class="indent" >   <a   id="x1-45002r10"></a><hr class="float">    <div class="float"  > <div class="caption"  ><span class="id">Table&#x00A0;10: </span><span   class="content">Cluster definition and most significant Algorithms per Cluster</span></div><!--tex4ht:label?: x1-45002r10 -->     <div class="pic-tabular"> <img  src="/img/revistas/cleiej/v19n1/1a07225x.png" alt="Table 10: definition of the 10 clusters. Each cluster consists of one to three statistically equivalent HCoS and R1P8 &amp; R3P16 weightings; for example, Cluster 1 is 0.1 HCoS and 0.9 R1P8 &amp; R3P16." ></div> </div><hr class="endfloat">    
</div> <!--l. 1002-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-460005.5"></a><span  class="cmbx-10">Noise Impact.</span></span>    As Table <a  href="#x1-45001r9">9<!--tex4ht:ref: per_species_analysis --></a> shows, most species are negatively affected by noise in the data. However, four species show no significant difference between noisy and clean data: <span  class="cmti-10">Blackea maurafernandesiana</span>, <span  class="cmti-10">Brosimum</span> <span  class="cmti-10">alicastrum</span>, <span  class="cmti-10">Hura crepitans</span>, and <span  class="cmti-10">Picramnia antidesma </span>seem to be fairly resilient to noise with these algorithms. The bottom of Table <a  href="#x1-45001r9">9<!--tex4ht:ref: per_species_analysis --></a> lists the species with the lowest accuracy values. <span  class="cmti-10">Annona mucosa</span> and <span  class="cmti-10">Dendropanax arboreus </span>achieved a median accuracy of 0.48, and <span  class="cmti-10">Aegiphila valerioi </span>of 0.45. Figure <a  href="#x1-46001r13">13<!--tex4ht:ref: worst_species --></a> shows four images of these three species. We suspect the low accuracy for these species is caused by shadows, both inside the leaf and outside it against the paper sheet. Some leaves also show physical damage. <span  class="cmti-10">Dendropanax arboreus </span>in Figure <a  href="#x1-46001r13">13<!--tex4ht:ref: worst_species --></a> also shows how different the two sides of a leaf of the same species can be, suggesting that the dataset should be split by leaf side. <!--l. 
1007-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-46001r13"></a>                                                                                                                                                                                      <!--l. 1008-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f13.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;13: </span><span   class="content">Leaf samples of species with low accuracy</span></div><!--tex4ht:label?: x1-46001r13 -->                                                                                                                                                                                     <!--l. 1014-->    ]]></body>
<body><![CDATA[<p class="indent" >   </div><hr class="endfigure"> <!--l. 1016-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-470005.5"></a><span  class="cmbx-10">Value of threshold</span> <img  src="/img/revistas/cleiej/v19n1/1a07226x.png" alt="&#x02C6;k  "  class="math" ><span  class="cmbx-10">.</span></span>    The best <img  src="/img/revistas/cleiej/v19n1/1a07227x.png" alt="&#x02C6;k  "  class="math" > is achieved by the species <span  class="cmti-10">Muntingia calabura</span>, with <img  src="/img/revistas/cleiej/v19n1/1a07228x.png" alt="&#x02C6;k = 3  "  class="math" >. <span  class="cmti-10">Bauhinia purpurea </span>also shows a low value of <img  src="/img/revistas/cleiej/v19n1/1a07229x.png" alt="&#x02C6;k = 4  "  class="math" >. <span  class="cmti-10">Eugenia hiraeifolia</span>, <span  class="cmti-10">Genipa americana</span>, <span  class="cmti-10">Hura crepitans</span>, <span  class="cmti-10">Quercus corrugata </span>and <span  class="cmti-10">Urera caracasana </span>have <img  src="/img/revistas/cleiej/v19n1/1a07230x.png" alt="&#x02C6;k = 5  "  class="math" >. Overall, 13 species show <img  src="/img/revistas/cleiej/v19n1/1a07231x.png" alt="&#x02C6;k = 6  "  class="math" >, 20 species have <img  src="/img/revistas/cleiej/v19n1/1a07232x.png" alt="&#x02C6;k = 7  "  class="math" >, and the rest have <img  src="/img/revistas/cleiej/v19n1/1a07233x.png" alt="8 &#x003C;= &#x02C6;k &#x003C;= 10  "  class="math" >. The lower <img  src="/img/revistas/cleiej/v19n1/1a07234x.png" alt="k&#x02C6;  "  class="math" > is, the higher the potential maximum accuracy for that species tends to be. <!--l. 1019-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-480005.5"></a><span  class="cmbx-10">Best Algorithms per species.</span></span>    Several clusters were identified based on the algorithms that showed the best accuracy per species. 
Table <a  href="#x1-45002r10">10<!--tex4ht:ref: tab:clusters --></a> shows the list of clusters. Each cluster contains one to three algorithms that were statistically equivalent in our per-species experiments. In total there are 10 clusters, based on the best, second-best, and third-best algorithms per species. Table <a  href="#x1-45001r9">9<!--tex4ht:ref: per_species_analysis --></a> shows that most species belong to clusters that include the combination of <img  src="/img/revistas/cleiej/v19n1/1a07235x.png" alt="0.1  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07236x.png" alt="0.9  "  class="math" > R1P8 &amp; R3P16. Some also include R1P8 &amp; R3P16 alone, that is, texture without curvature. <!--l. 1022-->    <p class="indent" >   Figure <a  href="#x1-48001r14">14<!--tex4ht:ref: gra:AccuracyByCluster --></a> shows the accuracy distribution across the 10 clusters formed after carrying out Tukey tests on the different algorithms. The best algorithms assign the largest weight to texture. Clusters 3, 8, and 10 have as their best algorithm the combination of <img  src="/img/revistas/cleiej/v19n1/1a07237x.png" alt="0.1  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07238x.png" alt="0.9  "  class="math" > R1P8 &amp; R3P16, and achieve the second-best accuracy across all species. The best cluster overall is Cluster 9, which reaches an accuracy of <img  src="/img/revistas/cleiej/v19n1/1a07239x.png" alt="1  "  class="math" >. <!--l. 1024-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-48001r14"></a>                                                                                                                                                                                      <!--l. 
1026-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f14.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;14: </span><span   class="content">Accuracy distribution across the different clusters found for the species</span></div><!--tex4ht:label?: x1-48001r14 -->                                                                                                                                                                                     <!--l. 1029-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 1031-->    ]]></body>
<body><![CDATA[<p class="indent" >   Figure <a  href="#x1-48002r15">15<!--tex4ht:ref: gra:SpeciesCountByCluster --></a> shows the distribution of species counts per cluster. The cluster with the most species is Cluster 5, with more than 25 species. As shown in Table <a  href="#x1-45002r10">10<!--tex4ht:ref: tab:clusters --></a>, this cluster contains two best algorithms: the <img  src="/img/revistas/cleiej/v19n1/1a07240x.png" alt="0.1  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07241x.png" alt="0.9  "  class="math" > R1P8 &amp; R3P16 combination, and the <img  src="/img/revistas/cleiej/v19n1/1a07242x.png" alt="0  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07243x.png" alt="1  "  class="math" > R1P8 &amp; R3P16 combination. This means that both are statistically equivalent for these species. <!--l. 1034-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-48002r15"></a>                                                                                                                                                                                      <!--l. 1036-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f15.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;15: </span><span   class="content">Species count distribution across the different clusters</span></div><!--tex4ht:label?: x1-48002r15 -->                                                                                                                                                                                     <!--l. 
1039-->    <p class="indent" >   </div><hr class="endfigure">    <h4 class="subsectionHead"><span class="titlemark">5.6   </span> <a   id="x1-490005.6"></a>Statistical Analysis of Best Algorithms for <img  src="/img/revistas/cleiej/v19n1/1a07244x.png" alt="k = 5  "  class="math" ></h4> <!--l. 1044-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-500005.6"></a><span  class="cmbx-10">Best Algorithms for</span> <img  src="/img/revistas/cleiej/v19n1/1a07245x.png" alt="k = 5  "  class="math" > <span  class="cmbx-10">and noisy dataset.</span></span>    Table <a  href="#x1-51002r11">11<!--tex4ht:ref: tab:noisyK5 --></a> shows which algorithms maximize the probability of a good identification when <img  src="/img/revistas/cleiej/v19n1/1a07246x.png" alt="k = 5  "  class="math" > and the noisy dataset are used. Most of the best algorithms are combinations, ranging from pure texture to a <img  src="/img/revistas/cleiej/v19n1/1a07247x.png" alt="0.5  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07248x.png" alt="0.5  "  class="math" > R1P8 &amp; R3P16 combination. No winning combination comes close to a pure curvature algorithm. Similar results are shown in Figure <a  href="#x1-50001r16">16<!--tex4ht:ref: gra:noisy5k --></a>, where the best probabilities are around the <img  src="/img/revistas/cleiej/v19n1/1a07249x.png" alt="0.2  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07250x.png" alt="0.8  "  class="math" > R1P8 &amp; R3P16 combination. In general, the distribution is very homogeneous. <!--l. 
1047-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-50001r16"></a>                                                                                                                                                                                      <!--l. 1049-->    ]]></body>
<body><![CDATA[<p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f16.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;16: </span><span   class="content">Distribution of the probability of a successful identification with noisy data and <img  src="/img/revistas/cleiej/v19n1/1a07251x.png" alt="k = 5  "  class="math" ></span></div><!--tex4ht:label?: x1-50001r16 -->                                                                                                                                                                                     <!--l. 1052-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 1055-->    <p class="noindent" ><span class="paragraphHead"><a   id="x1-510005.6"></a><span  class="cmbx-10">Best Algorithms for</span> <img  src="/img/revistas/cleiej/v19n1/1a07252x.png" alt="k = 5  "  class="math" > <span  class="cmbx-10">and clean dataset.</span></span>    Table <a  href="#x1-51003r12">12<!--tex4ht:ref: tab:cleanK5 --></a> shows which algorithms maximize the probability of a good identification when <img  src="/img/revistas/cleiej/v19n1/1a07253x.png" alt="k = 5  "  class="math" > and the clean dataset are used. In this case, the best algorithms per species are spread more evenly across most combinations. This is due to the lack of noise in the data, which affects the curvature algorithms less. Compared with the data of Table <a  href="#x1-51002r11">11<!--tex4ht:ref: tab:noisyK5 --></a>, this confirms that texture seems to be more robust to noise. Figure <a  href="#x1-51001r17">17<!--tex4ht:ref: gra:clean5k --></a> shows the distribution of the probability of a good identification per algorithm. On clean data, the highest probabilities are near the center, around the <img  src="/img/revistas/cleiej/v19n1/1a07254x.png" alt="0.3  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07255x.png" alt="0.7  "  class="math" > R1P8 &amp; R3P16 combination. 
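<p class="indent" >The weighted combinations evaluated throughout this section can be sketched as a linear fusion of two normalized distance matrices followed by a top-k cut. This is a minimal sketch, in which the names fused_topk, d_curv, and d_text are hypothetical stand-ins for the HCoS curvature distances and the R1P8 &amp; R3P16 texture distances, not the paper's implementation.

```python
import numpy as np

def fused_topk(d_curv, d_text, w_curv=0.3, w_text=0.7, k=5):
    """Rank candidate species by a weighted sum of two (queries x species)
    distance matrices and return the indices of the k best matches."""
    def norm(d):  # min-max normalize so both distances share the same scale
        return (d - d.min()) / (d.max() - d.min() + 1e-12)
    fused = w_curv * norm(d_curv) + w_text * norm(d_text)
    return np.argsort(fused, axis=1)[:, :k]  # ascending: smaller is better

# One query against four candidate species; the identification is counted
# as successful when the true species index appears among the k candidates.
d_curv = np.array([[0.9, 0.1, 0.5, 0.7]])
d_text = np.array([[0.8, 0.2, 0.3, 0.9]])
top2 = fused_topk(d_curv, d_text, k=2)
```

<p class="indent" >Counting a query as correct when the true species appears among the k returned candidates matches the notion of a successful identification behind the probabilities reported in Tables 11 and 12.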
<!--l. 1058-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-51001r17"></a>                                                                                                                                                                                      <!--l. 1060-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a07f17.png" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;17: </span><span   class="content">Distribution of the probability of a successful identification with clean data and <img  src="/img/revistas/cleiej/v19n1/1a07256x.png" alt="k = 5  "  class="math" ></span></div><!--tex4ht:label?: x1-51001r17 -->                                                                                                                                                                                     <!--l. 1063-->    ]]></body>
<body><![CDATA[<p class="indent" >   </div><hr class="endfigure">        <div class="table">                                                                                                                                                                                     <!--l. 1066-->    <p class="indent" >   <a   id="x1-51002r11"></a><hr class="float">    <div class="float"  > <div class="caption"  ><span class="id">Table&#x00A0;11: </span><span   class="content">Algorithms that maximize the probability of a good identification for all species on noisy data, with a fixed <img  src="/img/revistas/cleiej/v19n1/1a07257x.png" alt="k = 5  "  class="math" ></span></div><!--tex4ht:label?: x1-51002r11 -->      <div class="pic-tabular"><img  src="/img/revistas/cleiej/v19n1/1a07258x.png" alt="Table 11: for each species, the first (best) algorithm, i.e., the HCoS and R1P8 &amp; R3P16 weighting that maximizes the probability of a successful identification on noisy data with k = 5, together with that probability. Probabilities vary widely, from below 0.5 (e.g., Dendropanax arboreus, 0.452) up to 0.99 (Eugenia hiraeifolia); most winning weightings lie between 0 HCoS with 1 R1P8 &amp; R3P16 and 0.5 with 0.5." ></div>
               </div><hr class="endfloat">    </div>        <div class="table">                                                                                                                                                                                     <!--l. 1144-->    <p class="indent" >   <a   id="x1-51003r12"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Table&#x00A0;12: </span><span   class="content">Algorithms that maximize the probability of a good identification for all species on clean data, with a fixed <img  src="/img/revistas/cleiej/v19n1/1a07259x.png" alt="k = 5  "  class="math" ></span></div><!--tex4ht:label?: x1-51003r12 -->      ]]></body>
<body><![CDATA[<div class="pic-tabular"><img  src="/img/revistas/cleiej/v19n1/1a07260x.png" alt="Table 12: for each species, the best weighting of HCoS and R1P8 &amp; R3P16 on clean data and its probability of success (e.g., Persea americana: 0 HCoS and 1 R1P8 &amp; R3P16, 0.844); see the table image for the full listing.
" ></div>    </div><hr class="endfloat">    </div>    <h3 class="sectionHead"><span class="titlemark">6   </span> <a   id="x1-520006"></a>Conclusions</h3> <!--l. 1227-->    <p class="noindent" >The addition of texture significantly increases the accuracy of our implementation of the HCoS. When comparing the HCoS alone against the combination of <img  src="/img/revistas/cleiej/v19n1/1a07261x.png" alt="0.1  "  class="math" > HCoS and <img  src="/img/revistas/cleiej/v19n1/1a07262x.png" alt="0.9  "  class="math" > R1P8 &amp; R3P16 on the Costa Rican clean subset, the improvement ranges from <img  src="/img/revistas/cleiej/v19n1/1a07263x.png" alt="14.1%  "  class="math" > to <img  src="/img/revistas/cleiej/v19n1/1a07264x.png" alt="25.5%  "  class="math" >, depending on the value of <img  src="/img/revistas/cleiej/v19n1/1a07265x.png" alt="k  "  class="math" >. Similarly, on the noisy subset, the improvement ranges from <img  src="/img/revistas/cleiej/v19n1/1a07266x.png" alt="35.5%  "  class="math" > to <img  src="/img/revistas/cleiej/v19n1/1a07267x.png" alt="42.5%  "  class="math" >. These improvements proved statistically significant in our experiments. <!--l. 1229-->    <p class="indent" >   The complete dataset experiments showed that accuracy is poor when noisy images are classified against clean images. We speculate that this is due to the many enhancements that leaf images underwent before being added to the clean dataset. First, leaves were pressed for 24 hours to flatten them and thus minimize shadows. Second, Photoshop was used to manually remove artifacts. Finally, image enhancement algorithms (e.g., stem removal) were applied. 
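<p class="indent" >   As a minimal sketch (not the code used in these experiments), a weighted combination such as 0.1 HCoS and 0.9 R1P8 &amp; R3P16 amounts to a convex combination of two normalized per-species distances; the function names and the min-max normalization below are illustrative assumptions:

```python
import numpy as np

def _minmax(x):
    # Scale a distance vector to [0, 1]; a constant vector maps to zeros.
    x = np.asarray(x, dtype=float)
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def fused_distances(hcos_dist, texture_dist, w=0.1):
    # Weight w on the curvature-based HCoS distance and (1 - w) on the
    # R1P8 & R3P16 texture distance; w = 0.1 is the best overall weighting
    # reported here for the clean Costa Rican subset.
    return w * _minmax(hcos_dist) + (1.0 - w) * _minmax(texture_dist)

def top_k_species(distances, k=5):
    # Indices of the k candidate species with the smallest fused distance.
    return np.argsort(np.asarray(distances))[:k]
```

Accuracy at a given k then asks whether the true species appears among the k candidates returned.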
This result has important implications for the development of a mobile application, since users will take noisy pictures. It leaves us with two alternatives: train the classifier on a noisy dataset, or use a clean dataset and subject user images to further automated image enhancements comparable to those performed manually with Photoshop. <!--l. 1231-->    <p class="indent" >   Experiments for individual species provided some interesting results. Concerning minimal values of <img  src="/img/revistas/cleiej/v19n1/1a07268x.png" alt="k  "  class="math" >, i.e., the size of the set of candidate species considered best matches in an identification process, good accuracy was obtained with <img  src="/img/revistas/cleiej/v19n1/1a07269x.png" alt="&#x02C6;k = 7  "  class="math" > for 63% of the species. Working with noisy images lowered accuracy for 61 of the 65 species studied, compared to clean images and a clean dataset. Finally, in most individual cases texture, rather than leaf shape, stands out as the determining factor for high accuracy. <!--l. 1233-->    <p class="indent" >   Our statistical analysis of the best algorithms for <img  src="/img/revistas/cleiej/v19n1/1a07270x.png" alt="k = 5  "  class="math" > did not render a clear winner, but it showed that the best combinations assign weights smaller than 0.2 to the HCoS. <!--l. 1235-->    <p class="noindent" >    <h3 class="sectionHead"><span class="titlemark">7   </span> <a   id="x1-530007"></a>Future Work</h3> <!--l. 1238-->    <p class="noindent" >A natural next step in this research is to develop a mobile app that uses the georeference of leaf photographs as an additional criterion to classify species. Most modern mobile phones already include excellent cameras and can automatically georeference any picture they take. 
In addition to a reference image dataset such as the one developed for this research, maps of the potential distribution of Costa Rican tree species would be needed. <span  class="cmti-10">Atta</span>, a comprehensive and fully georeferenced database of thousands of species of organisms from Costa Rica developed by the National Biodiversity Institute (INBio) <a  href="www.inbio.ac.cr" class="url" ><span  class="cmtt-10">www.inbio.ac.cr</span></a>, and GBIF&#8217;s database <span class="footnote-mark"><a  href="/img/revistas/cleiej/v19n1/1a072.html#fn1x0"><sup class="textsuperscript">1</sup></a></span><a   id="x1-53001f1"></a> are excellent foundations for generating these potential species distribution maps. In addition to curvature, texture, and georeferencing as discriminating factors, morphological measures of leaves are also frequently used by specialists to identify plant species. Some of these measures are: aspect ratio, the ratio of horizontal width to vertical length; form coefficient, a numerical value that grades leaf shape between circular (shortest perimeter for a given area) and filiform (longest perimeter for a given area); and blade and petiole length. Algorithms to calculate these measures have already been developed (e.g., WinFOLIA). However, they have not been integrated into computer vision systems for automatic identification of plant species. <!--l. 1241-->    <p class="indent" >   A crowdsourcing approach could be a very efficient way to increase the size of the image dataset, which currently comprises 66 plant species from Costa Rica. Crowdsourcing could also be used to clean noisy pictures as part of a citizen science project. <!--l. 1243-->    <p class="indent" >   Finally, the individual contributions of texture features such as venation, porosity, and reflection to characterizing a plant species have not been formally established. 
A more elaborate analysis of leaf texture that disaggregates it into a separate layer for each of these features would help understand and quantify their individual contributions.    <h3 class="likesectionHead"><a   id="x1-540007"></a>Acknowledgement</h3> <!--l. 1246-->    <p class="noindent" >We thank the National Biodiversity Institute of Costa Rica (INBio) and Nelson Zamora for their help with leaf sample collection and for expert feedback during this research. <!--l. 2-->    ]]></body>
<body><![CDATA[<p class="noindent" >    <h3 class="likesectionHead"><a   id="x1-550007"></a>References</h3> <!--l. 2-->    <p class="noindent" >         <div class="thebibliography">         <p class="bibitem" ><span class="biblabel">   [<a href="#br1">1</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Ximpediment"></a>M.&#x00A0;R.  de&#x00A0;Carvalho,  F.&#x00A0;A.  Bockmann,  D.&#x00A0;S.  Amorim,  C.&#x00A0;R.&#x00A0;F.  Brandão,  M.&#x00A0;de&#x00A0;Vivo,  J.&#x00A0;L.     de&#x00A0;Figueiredo, H.&#x00A0;A. Britski, M.&#x00A0;C. de&#x00A0;Pinna, N.&#x00A0;A. Menezes, F.&#x00A0;P. Marques, N.&#x00A0;Papavero, E.&#x00A0;M.     Cancello, J.&#x00A0;V. Crisci, J.&#x00A0;D. McEachran, R.&#x00A0;C. Schelly, J.&#x00A0;G. Lundberg, A.&#x00A0;C. Gill, R.&#x00A0;Britz, Q.&#x00A0;D.     Wheeler,  M.&#x00A0;L.  Stiassny,  L.&#x00A0;R.  Parenti,  L.&#x00A0;M.  Page,  W.&#x00A0;C.  Wheeler,  J.&#x00A0;Faivovich,  R.&#x00A0;P.  Vari,     L.&#x00A0;Grande, C.&#x00A0;J. Humphries, R.&#x00A0;DeSalle, M.&#x00A0;C. Ebach, and G.&#x00A0;J. Nelson, &#8220;Taxonomic impediment     or  impediment  to  taxonomy?  a  commentary  on  systematics  and  the  cybertaxonomic-automation     paradigm,&#8221;   <span  class="cmti-10">Evolutionary   Biology</span>,   vol.&#x00A0;34,   no.   3-4,   pp.   140&#8211;143,   2007.   [Online].   Available:     <a  href="http://dx.doi.org/10.1007/s11692-007-9011-6" class="url" >http://dx.doi.org/10.1007/s11692-007-9011-6</a>     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br2">2</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xjournals/cviu/AndreopoulosT13"></a>A.&#x00A0;Andreopoulos  and  J.&#x00A0;K.  Tsotsos,  &#8220;50  years  of  object  recognition:  Directions  forward.&#8221;     <span  class="cmti-10">Computer Vision and Image Understanding</span>, vol. 117, no.&#x00A0;8, pp. 827&#8211;891, 2013. [Online]. 
Available:     <a  href="http://dx.doi.org/10.1016/j.cviu.2013.04.005" class="url" >http://dx.doi.org/10.1016/j.cviu.2013.04.005</a>     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br3">3</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xlagerwall"></a>R.&#x00A0;D. Lagerwall and S.&#x00A0;Viriri, &#8220;Plant classification using leaf recognition,&#8221; in <span  class="cmti-10">Proceedings of the 22nd Annual Symposium of the Pattern Recognition Association of South Africa</span>, November 2011, pp. 91&#8211;95.     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br4">4</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XRashad"></a>M.&#x00A0;Z. Rashad, B.&#x00A0;S. El-Desouky, and M.&#x00A0;S. Khawasik, &#8220;Plants images classification based on textural features using combined classifier,&#8221; <span  class="cmti-10">International Journal of Computer Science and Information Technology</span>, vol.&#x00A0;3, no.&#x00A0;4, 2011.     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br5">5</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XKumar"></a>A.&#x00A0;Bhardwaj, M.&#x00A0;Kaur, and A.&#x00A0;Kumar, &#8220;Recognition of plants by leaf image using moment invariant and texture analysis,&#8221; <span  class="cmti-10">International Journal of Innovation and Applied Studies</span>, vol.&#x00A0;3, no.&#x00A0;1, pp. 237&#8211;248, 2013. [Online].     
Available:     <a  href="http://www.ijias.issr-journals.org/abstract.php?article=IJIAS-13-087-01" class="url" >http://www.ijias.issr-journals.org/abstract.php?article=IJIAS-13-087-01</a>     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br6">6</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XWUFlavia"></a>S.&#x00A0;Wu, F.&#x00A0;Bao, E.&#x00A0;Xu, Y.-X. Wang, Y.-F. Chang, and Q.-L. Xiang, &#8220;A leaf recognition algorithm     for  plant  classification  using  probabilistic  neural  network,&#8221;  in  <span  class="cmti-10">Signal  Processing  and  Information</span>     <span  class="cmti-10">Technology,  2007  IEEE  International  Symposium  on</span>,  Dec  2007,  pp.  11&#8211;16.  [Online].  Available:     <a  href="http://dx.doi.org/10.1109/ISSPIT.2007.4458016" class="url" >http://dx.doi.org/10.1109/ISSPIT.2007.4458016</a>     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br7">7</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XSumathi"></a>T.&#x00A0;Beghin,   J.&#x00A0;Cope,   P.&#x00A0;Remagnino,   and   S.&#x00A0;Barman,   &#8220;Shape   and   texture   based   plant     leaf  classification,&#8221;  in  <span  class="cmti-10">Advanced  Concepts  for  Intelligent  Vision  Systems</span>,  ser.  Lecture  Notes     in  Computer  Science,  J.&#x00A0;Blanc-Talon,  D.&#x00A0;Bone,  W.&#x00A0;Philips,  D.&#x00A0;Popescu,  and  P.&#x00A0;Scheunders,     Eds.     Springer    Berlin    Heidelberg,    2010,    vol.    6475,    pp.    345&#8211;353.    [Online].    Available:     <a  href="http://dx.doi.org/10.1007/978-3-642-17691-3\_32" class="url" >http://dx.doi.org/10.1007/978-3-642-17691-3<span  class="cmsy-10">\</span>_32</a>                                                                                                                                                                                         </p>         ]]></body>
<body><![CDATA[<p class="bibitem" ><span class="biblabel">   [<a href="#br8">8</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XTAR4630"></a>D.&#x00A0;Wijesingha and F.&#x00A0;Marikar, &#8220;Automatic detection system for the identification of plants using     herbarium specimen images,&#8221; <span  class="cmti-10">Tropical Agricultural Research</span>, vol.&#x00A0;23, no.&#x00A0;1, 2012. [Online]. Available:     <a  href="http://www.sljol.info/index.php/TAR/article/view/4630" class="url" >http://www.sljol.info/index.php/TAR/article/view/4630</a>     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br9">9</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XArun"></a>C.&#x00A0;H. Arun, W.&#x00A0;R.&#x00A0;S. Emmanuel, and D.&#x00A0;C. Durairaj, &#8220;Texture feature extraction for identification     of  medicinal  plants  and  comparison  of  different  classifiers,&#8221;  <span  class="cmti-10">International  Journal  of  Computer</span>     <span  class="cmti-10">Applications</span>, vol.&#x00A0;62, no.&#x00A0;12, January 2013.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br10">10</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XAggarwal"></a>N.&#x00A0;Aggarwal and R.&#x00A0;K. Agrawal, &#8220;First and second order statistics features for classification of     magnetic resonance brain images,&#8221; <span  class="cmti-10">Journal of Signal and Information Processing</span>, vol.&#x00A0;3, no.&#x00A0;2, pp.     146&#8211;153, 2012. [Online]. 
Available: <a  href="http://dx.doi.org/10.4236/jsip.2012.32019" class="url" >http://dx.doi.org/10.4236/jsip.2012.32019</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br11">11</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XYeni"></a>Y.&#x00A0;Herdiyeni and M.&#x00A0;Santoni, &#8220;Combination of morphological, local binary pattern variance and     color moments features for indonesian medicinal plants identification,&#8221; in <span  class="cmti-10">Advanced Computer Science</span>     <span  class="cmti-10">and Information Systems (ICACSIS), 2012 International Conference</span>, Dec 2012, pp. 255&#8211;259. [Online].     Available: <a  href="http://dx.doi.org/10.5120/10129-4920" class="url" >http://dx.doi.org/10.5120/10129-4920</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br12">12</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XKadir"></a>A.&#x00A0;Kadir, L.&#x00A0;E. Nugroho, A.&#x00A0;Susanto, and P.&#x00A0;I. Santosa, &#8220;Leaf classification using shape, color,     and  texture  features,&#8221;  <span  class="cmti-10">International  Journal  of  Computer  Trends  and  Technology</span>,  2011.  [Online].     Available: <a  href="http://arxiv.org/pdf/1401.4447.pdf" class="url" >http://arxiv.org/pdf/1401.4447.pdf</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br13">13</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xleafsnap"></a>N.&#x00A0;Kumar, P.&#x00A0;Belhumeur, A.&#x00A0;Biswas, D.&#x00A0;Jacobs, W.&#x00A0;Kress, I.&#x00A0;Lopez, and J.&#x00A0;Soares, &#8220;Leafsnap:     A  computer  vision  system  for  automatic  plant  species  identification,&#8221;  in  <span  class="cmti-10">Computer  Vision  -</span>     <span  class="cmti-10">ECCV  2012</span>,  ser.  
Lecture  Notes  in  Computer  Science,  A.&#x00A0;Fitzgibbon,  S.&#x00A0;Lazebnik,  P.&#x00A0;Perona,     Y.&#x00A0;Sato, and C.&#x00A0;Schmid, Eds.   Springer Berlin Heidelberg, 2012, pp. 502&#8211;516. [Online]. Available:     <a  href="http://dx.doi.org/10.1007/978-3-642-33709-3\_36" class="url" >http://dx.doi.org/10.1007/978-3-642-33709-3<span  class="cmsy-10">\</span>_36</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br14">14</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XClarke"></a>J.&#x00A0;Clarke, S.&#x00A0;Barman, P.&#x00A0;Remagnino, K.&#x00A0;Bailey, D.&#x00A0;Kirkup, S.&#x00A0;Mayo, and P.&#x00A0;Wilkin, &#8220;Venation     pattern analysis of leaf images,&#8221; in <span  class="cmti-10">Proceedings of the Second International Conference on Advances</span>     <span  class="cmti-10">in Visual Computing - Volume Part II</span>, ser. ISVC&#8217;06.   Berlin, Heidelberg: Springer-Verlag, 2006, pp.     427&#8211;436. [Online]. Available: <a  href="http://dx.doi.org/10.1007/11919629\_44" class="url" >http://dx.doi.org/10.1007/11919629<span  class="cmsy-10">\</span>_44</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br15">15</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XLarese"></a>M.&#x00A0;G.  Larese,  R.&#x00A0;Namías,  R.&#x00A0;M.  Craviotto,  M.&#x00A0;R.  Arango,  C.&#x00A0;Gallo,  and  P.&#x00A0;M.  Granitto,     &#8220;Automatic classification of legumes using leaf vein image features,&#8221; <span  class="cmti-10">Pattern Recogn.</span>, vol.&#x00A0;47, no.&#x00A0;1,     pp. 158&#8211;168, Jan. 2014. [Online]. Available: <a  href="http://dx.doi.org/10.1016/j.patcog.2013.06.012" class="url" >http://dx.doi.org/10.1016/j.patcog.2013.06.012</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br16">16</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XLee"></a>K.-B.  Lee,  K.-W.  Chung,  and  K.-S.  
Hong,  &#8220;An  implementation  of  leaf  recognition  system     based on leaf contour and centroid for plant classification,&#8221; in <span  class="cmti-10">Ubiquitous Information Technologies</span>     <span  class="cmti-10">and  Applications</span>,  ser.  Lecture  Notes  in  Electrical  Engineering,  Y.-H.  Han,  D.-S.  Park,  W.&#x00A0;Jia,     and  S.-S.  Yeo,  Eds.    Springer  Netherlands,  2013,  vol.  214,  pp.  109&#8211;116.  [Online].  Available:     <a  href="http://dx.doi.org/10.1007/978-94-007-5857-5\_12" class="url" >http://dx.doi.org/10.1007/978-94-007-5857-5<span  class="cmsy-10">\</span>_12</a>                                                                                                                                                                                         </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br17">17</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XLeeShape"></a>K.-B. Lee and K.-S. Hong, &#8220;An implementation of leaf recognition system using leaf vein and shape,&#8221;     <span  class="cmti-10">International Journal of Bio-Science and Bio-Technology</span>, pp. 57&#8211;66, Apr 2013.     </p>         ]]></body>
<body><![CDATA[<p class="bibitem" ><span class="biblabel">  [<a href="#br18">18</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XVein3D"></a>Y.&#x00A0;Li, Z.&#x00A0;Chi, and D.&#x00A0;Feng, &#8220;Leaf vein extraction using independent component analysis,&#8221; in <span  class="cmti-10">Systems, Man and Cybernetics, 2006. SMC &#8217;06. IEEE International Conference on</span>, vol.&#x00A0;5, Oct 2006, pp. 3890&#8211;3894.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br19">19</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XNelson"></a>N.&#x00A0;Zamora, private communication, National Biodiversity Institute, Costa Rica, May 2014.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br20">20</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XEM"></a>A.&#x00A0;P. Dempster, N.&#x00A0;M. Laird, and D.&#x00A0;B. Rubin, &#8220;Maximum likelihood from incomplete data via the EM algorithm,&#8221; <span  class="cmti-10">Journal of the Royal Statistical Society, Series B</span>, vol.&#x00A0;39, no.&#x00A0;1, pp. 1&#8211;38, 1977. [Online]. Available: <a  href="http://dx.doi.org/10.2307/2984875" class="url" >http://dx.doi.org/10.2307/2984875</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br21">21</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XHerdiyeni"></a>Y.&#x00A0;Herdiyeni and I.&#x00A0;Kusmana, &#8220;Fusion of local binary patterns features for tropical medicinal plants identification,&#8221; in <span  class="cmti-10">Advanced Computer Science and Information Systems (ICACSIS), 2013 International Conference on</span>, Sept 2013, pp. 353&#8211;357. [Online].   
Available:     <a  href="http://dx.doi.org/10.1109/ICACSIS.2013.6761601" class="url" >http://dx.doi.org/10.1109/ICACSIS.2013.6761601</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br22">22</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XNguyen"></a>Q.&#x00A0;Nguyen,  T.&#x00A0;Le,  and  N.&#x00A0;Pham,  &#8220;Leaf  based  plant  identification  system  for  android  using     surf  features  in  combination  with  bag  of  words  model  and  supervised  learning,&#8221;  in  <span  class="cmti-10">International</span>     <span  class="cmti-10">Conference on Advanced Technologies for Communications (ATC)</span>, October 2013. [Online]. Available:     <a  href="http://dx.doi.org/10.1109/ATC.2013.6698145" class="url" >http://dx.doi.org/10.1109/ATC.2013.6698145</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br23">23</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCarranza:Thesis:2014"></a>J.&#x00A0;Carranza-Rojas, &#8220;A Texture and Curvature Bimodal Leaf Recognition Model for Costa Rican     Plant Species Identification,&#8221; Master&#8217;s thesis, Costa Rica Institute of Technology, Cartago, Costa Rica,     2014. [Online]. Available: <a  href="http://hdl.handle.net/2238/3913#sthash.dxxgH0FI.dpuf" class="url" >http://hdl.handle.net/2238/3913#sthash.dxxgH0FI.dpuf</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br24">24</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xopencv"></a>G.&#x00A0;Bradski, &#8220;The OpenCV Library,&#8221; <span  class="cmti-10">Dr. Dobb&#8217;s Journal of Software Tools</span>, 2000.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br25">25</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xnumpy"></a>T.&#x00A0;E. Oliphant, <span  class="cmti-10">Guide to NumPy</span>, Provo, UT, Mar. 2006. [Online]. 
Available: <a  href="http://www.tramy.us/" class="url" >http://www.tramy.us/</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br26">26</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="X6909584"></a>X.&#x00A0;Zhu, C.&#x00A0;C. Loy, and S.&#x00A0;Gong, &#8220;Constructing robust affinity graphs for spectral clustering,&#8221;     in  <span  class="cmti-10">Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on</span>,  June  2014,  pp.     1450&#8211;1457. [Online]. Available: <a  href="http://dx.doi.org/10.1109/CVPR.2014.188" class="url" >http://dx.doi.org/10.1109/CVPR.2014.188</a>     </p>                                                                                                                                                                                             <p class="bibitem" ><span class="biblabel">  [<a href="#br27">27</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xsklearn"></a>F.&#x00A0;Pedregosa,   G.&#x00A0;Varoquaux,   A.&#x00A0;Gramfort,   V.&#x00A0;Michel,   B.&#x00A0;Thirion,   O.&#x00A0;Grisel,   M.&#x00A0;Blondel,     P.&#x00A0;Prettenhofer,  R.&#x00A0;Weiss,  V.&#x00A0;Dubourg,  J.&#x00A0;Vanderplas,  A.&#x00A0;Passos,  D.&#x00A0;Cournapeau,  M.&#x00A0;Brucher,     M.&#x00A0;Perrot, and E.&#x00A0;Duchesnay, &#8220;Scikit-learn: Machine learning in python,&#8221; <span  class="cmti-10">Journal of Machine Learning</span>     <span  class="cmti-10">Research</span>, vol.&#x00A0;12, pp. 2825&#8211;2830, 2011.     </p>         ]]></body>
<body><![CDATA[<p class="bibitem" ><span class="biblabel">  [<a href="#br28">28</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XSuzuki198532"></a>S.&#x00A0;Suzuki  and  K.&#x00A0;be,  &#8220;Topological  structural  analysis  of  digitized  binary  images  by  border     following,&#8221;  <span  class="cmti-10">Computer  Vision,  Graphics,  and  Image  Processing</span>,  vol.&#x00A0;30,  no.&#x00A0;1,  pp.  32  &#8211;  46,  1985.     [Online]. Available: <a  href="http://dx.doi.org/10.1016/0734-189X(85)90016-7" class="url" >http://dx.doi.org/10.1016/0734-189X(85)90016-7</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br29">29</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="X1677517"></a>S.&#x00A0;Manay,  D.&#x00A0;Cremers,  B.-W.  Hong,  A.&#x00A0;Yezzi,  and  S.&#x00A0;Soatto,  &#8220;Integral  invariants  for  shape     matching,&#8221; <span  class="cmti-10">Pattern Analysis and Machine Intelligence, IEEE Transactions on</span>, vol.&#x00A0;28, no.&#x00A0;10, pp.     1602&#8211;1618, Oct 2006. [Online]. Available: <a  href="http://dx.doi.org/10.1109/TPAMI.2006.208" class="url" >http://dx.doi.org/10.1109/TPAMI.2006.208</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br30">30</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xmahotas"></a>L.&#x00A0;P. Coelho, &#8220;Mahotas: Open source software for scriptable computer vision,&#8221; <span  class="cmti-10">Journal of Open</span>     <span  class="cmti-10">Research Software</span>, vol.&#x00A0;1, 2013. [Online]. 
Available: <a  href="http://dx.doi.org/10.5334/jors.ac" class="url" >http://dx.doi.org/10.5334/jors.ac</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br31">31</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="X1017623"></a>T.&#x00A0;Ojala,    M.&#x00A0;Pietikainen,    and    T.&#x00A0;Maenpaa,    &#8220;Multiresolution    gray-scale    and    rotation     invariant   texture   classification   with   local   binary   patterns,&#8221;   <span  class="cmti-10">Pattern   Analysis   and   Machine</span>     <span  class="cmti-10">Intelligence,  IEEE  Transactions  on</span>,  vol.&#x00A0;24,  no.&#x00A0;7,  pp.  971&#8211;987,  Jul  2002.  [Online].  Available:     <a  href="http://dx.doi.org/10.1109/TPAMI.2002.1017623" class="url" >http://dx.doi.org/10.1109/TPAMI.2002.1017623</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br32">32</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XMouineTriangles"></a>S.&#x00A0;Mouine,   I.&#x00A0;Yahiaoui,   and   A.&#x00A0;Verroust-Blondet,   &#8220;A   shape-based   approach   for   leaf     classification   using   multiscaletriangular   representation.&#8221;   in   <span  class="cmti-10">ICMR</span>,   R.&#x00A0;Jain,   B.&#x00A0;Prabhakaran,     M.&#x00A0;Worring,  J.&#x00A0;R.  Smith,  and  T.-S.  Chua,  Eds.    ACM,  2013,  pp.  127&#8211;134.  [Online].  Available:     <a  href="http://dblp.uni-trier.de/db/conf/mir/icmr2013.html#MouineYV13" class="url" >http://dblp.uni-trier.de/db/conf/mir/icmr2013.html#MouineYV13</a> </p>     </div>           ]]></body><back>
<ref-list>
<ref id="B1">
<label>1</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[de Carvalho]]></surname>
<given-names><![CDATA[M. R.]]></given-names>
</name>
<name>
<surname><![CDATA[Bockmann]]></surname>
<given-names><![CDATA[F. A.]]></given-names>
</name>
<name>
<surname><![CDATA[Amorim]]></surname>
<given-names><![CDATA[D. S.]]></given-names>
</name>
<name>
<surname><![CDATA[Brandão]]></surname>
<given-names><![CDATA[C. R. F]]></given-names>
</name>
<name>
<surname><![CDATA[de Vivo]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[de Figueiredo]]></surname>
<given-names><![CDATA[J. L.]]></given-names>
</name>
<name>
<surname><![CDATA[Britski]]></surname>
<given-names><![CDATA[H. A.]]></given-names>
</name>
<name>
<surname><![CDATA[de Pinna]]></surname>
<given-names><![CDATA[M. C]]></given-names>
</name>
<name>
<surname><![CDATA[Menezes]]></surname>
<given-names><![CDATA[N. A]]></given-names>
</name>
<name>
<surname><![CDATA[Marques]]></surname>
<given-names><![CDATA[F. P.]]></given-names>
</name>
<name>
<surname><![CDATA[Papavero]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
<name>
<surname><![CDATA[Cancello]]></surname>
<given-names><![CDATA[E. M.]]></given-names>
</name>
<name>
<surname><![CDATA[Crisci]]></surname>
<given-names><![CDATA[J. V]]></given-names>
</name>
<name>
<surname><![CDATA[McEachran]]></surname>
<given-names><![CDATA[J. D.]]></given-names>
</name>
<name>
<surname><![CDATA[Schelly]]></surname>
<given-names><![CDATA[R. C]]></given-names>
</name>
<name>
<surname><![CDATA[Lundberg]]></surname>
<given-names><![CDATA[J. G]]></given-names>
</name>
<name>
<surname><![CDATA[Gill]]></surname>
<given-names><![CDATA[A. C.]]></given-names>
</name>
<name>
<surname><![CDATA[Britz]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Wheeler]]></surname>
<given-names><![CDATA[Q. D.]]></given-names>
</name>
<name>
<surname><![CDATA[Stiassny]]></surname>
<given-names><![CDATA[M. L.]]></given-names>
</name>
<name>
<surname><![CDATA[Parenti]]></surname>
<given-names><![CDATA[L. R]]></given-names>
</name>
<name>
<surname><![CDATA[Page]]></surname>
<given-names><![CDATA[L. M]]></given-names>
</name>
<name>
<surname><![CDATA[Wheeler]]></surname>
<given-names><![CDATA[W. C.]]></given-names>
</name>
<name>
<surname><![CDATA[Faivovich]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Vari]]></surname>
<given-names><![CDATA[R. P.]]></given-names>
</name>
<name>
<surname><![CDATA[Grande]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Humphries]]></surname>
<given-names><![CDATA[C. J]]></given-names>
</name>
<name>
<surname><![CDATA[DeSalle]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Ebach]]></surname>
<given-names><![CDATA[M. C.]]></given-names>
</name>
<name>
<surname><![CDATA[Nelson]]></surname>
<given-names><![CDATA[G. J.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Taxonomic impediment or impediment to taxonomy?]]></article-title>
<source><![CDATA[Evolutionary Biology]]></source>
<year>2007</year>
<volume>34</volume>
<numero>3-4</numero>
<issue>3-4</issue>
<page-range>140-143</page-range></nlm-citation>
</ref>
<ref id="B2">
<label>2</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Andreopoulos]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Tsotsos]]></surname>
<given-names><![CDATA[J. K]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[50 years of object recognition: Directions forward]]></article-title>
<source><![CDATA[Computer Vision and Image Understanding]]></source>
<year>2013</year>
<volume>117</volume>
<numero>8</numero>
<issue>8</issue>
<page-range>827-891</page-range></nlm-citation>
</ref>
<ref id="B3">
<label>3</label><nlm-citation citation-type="confpro">
<article-title xml:lang="en"><![CDATA[Plant classification using leaf recognition]]></article-title>
<source><![CDATA[]]></source>
<year>2011</year>
<conf-name><![CDATA[22nd Annual Symposium of the Pattern Recognition Association of South Africa]]></conf-name>
<conf-date>November 2011</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B4">
<label>4</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Rashad]]></surname>
<given-names><![CDATA[M. Z.]]></given-names>
</name>
<name>
<surname><![CDATA[el-Desouky]]></surname>
<given-names><![CDATA[B.S]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Plants images classification based on textural features using combined classifier]]></article-title>
<source><![CDATA[International Journal of Computer Science and Information Technology]]></source>
<year>2011</year>
<volume>3</volume>
<numero>4</numero>
<issue>4</issue>
</nlm-citation>
</ref>
<ref id="B5">
<label>5</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bhardwaj]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Kaur]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Kumar]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Recognition of plants by leaf image using moment invariant and texture analysis]]></article-title>
<source><![CDATA[International Journal of Innovation and Applied Studies]]></source>
<year>2013</year>
<volume>3</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>237-248</page-range></nlm-citation>
</ref>
<ref id="B6">
<label>6</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Wu]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Bao]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Xu]]></surname>
<given-names><![CDATA[E]]></given-names>
</name>
<name>
<surname><![CDATA[Wang]]></surname>
<given-names><![CDATA[Y.-X]]></given-names>
</name>
<name>
<surname><![CDATA[Chang]]></surname>
<given-names><![CDATA[Y.-F]]></given-names>
</name>
<name>
<surname><![CDATA[Xiang]]></surname>
<given-names><![CDATA[Q.-L.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A leaf recognition algorithm for plant classification using probabilistic neural network]]></article-title>
<source><![CDATA[]]></source>
<year>2007</year>
<conf-name><![CDATA[ Signal Processing and Information Technology, 2007 IEEE International Symposium on]]></conf-name>
<conf-date>Dec 2007</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B7">
<label>7</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Beghin]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[Cope]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Remagnino]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
<name>
<surname><![CDATA[Barman]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<source><![CDATA[Shape and texture based plant leaf classification]]></source>
<year>2010</year>
<volume>6475</volume>
<page-range>345-353</page-range><publisher-name><![CDATA[Springer Berlin Heidelberg]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B8">
<label>8</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Wijesingha]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Marikar]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Automatic detection system for the identification of plants using herbarium specimen images]]></article-title>
<source><![CDATA[Tropical Agricultural Research]]></source>
<year></year>
<volume>23</volume>
<numero>1</numero>
<issue>1</issue>
</nlm-citation>
</ref>
<ref id="B9">
<label>9</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Arun]]></surname>
<given-names><![CDATA[C. H.]]></given-names>
</name>
<name>
<surname><![CDATA[Emmanuel]]></surname>
<given-names><![CDATA[W. R. S.]]></given-names>
</name>
<name>
<surname><![CDATA[Durairaj]]></surname>
<given-names><![CDATA[D. C.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Texture feature extraction for identification of medicinal plants and comparison of different classifiers]]></article-title>
<source><![CDATA[International Journal of Computer Applications]]></source>
<year>2013</year>
<month>January</month>
<volume>62</volume>
<numero>12</numero>
<issue>12</issue>
</nlm-citation>
</ref>
<ref id="B10">
<label>10</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Aggarwal]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
<name>
<surname><![CDATA[Agrawal]]></surname>
<given-names><![CDATA[R. K]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[First and second order statistics features for classification of magnetic resonance brain images]]></article-title>
<source><![CDATA[Journal of Signal and Information Processing]]></source>
<year>2012</year>
<volume>3</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>146-153</page-range></nlm-citation>
</ref>
<ref id="B11">
<label>11</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Herdiyeni]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
<name>
<surname><![CDATA[Santoni]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
</person-group>
<source><![CDATA[Combination of morphological, local binary pattern variance and color moments features for indonesian medicinal plants identification]]></source>
<year>2012</year>
<month>December</month>
<conf-name><![CDATA[ Advanced Computer Science and Information Systems]]></conf-name>
<conf-loc> </conf-loc>
<page-range>255-259</page-range></nlm-citation>
</ref>
<ref id="B12">
<label>12</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kadir]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Nugroho]]></surname>
<given-names><![CDATA[L. E.]]></given-names>
</name>
<name>
<surname><![CDATA[Susanto]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Santosa]]></surname>
<given-names><![CDATA[P. I.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Leaf classification using shape, color, and texture features]]></article-title>
<source><![CDATA[International Journal of Computer Trends and Technology]]></source>
<year>2011</year>
</nlm-citation>
</ref>
<ref id="B13">
<label>13</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kumar]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
<name>
<surname><![CDATA[Belhumeur]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Biswas]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Jacobs]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Kress]]></surname>
<given-names><![CDATA[W]]></given-names>
</name>
<name>
<surname><![CDATA[Lopez]]></surname>
<given-names><![CDATA[I]]></given-names>
</name>
<name>
<surname><![CDATA[Soares]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
</person-group>
<source><![CDATA[Leafsnap: A computer vision system for automatic plant species identification]]></source>
<year>2012</year>
<page-range>502-516</page-range><publisher-name><![CDATA[Springer Berlin Heidelberg]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B14">
<label>14</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Clarke]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Barman]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Remagnino]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Bailey]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
<name>
<surname><![CDATA[Kirkup]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Mayo]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Wilkin]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
</person-group>
<source><![CDATA[Venation pattern analysis of leaf images]]></source>
<year>2006</year>
<conf-name><![CDATA[ Second International Conference on Advances in Visual Computing]]></conf-name>
<conf-loc> </conf-loc>
<page-range>427-436</page-range><publisher-loc><![CDATA[Berlin ]]></publisher-loc>
<publisher-name><![CDATA[Heidelberg: Springer-Verlag]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B15">
<label>15</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Larese]]></surname>
<given-names><![CDATA[M. G.]]></given-names>
</name>
<name>
<surname><![CDATA[Namías]]></surname>
<given-names><![CDATA[R.]]></given-names>
</name>
<name>
<surname><![CDATA[Craviotto]]></surname>
<given-names><![CDATA[R. M.]]></given-names>
</name>
<name>
<surname><![CDATA[Arango]]></surname>
<given-names><![CDATA[M. R]]></given-names>
</name>
<name>
<surname><![CDATA[Gallo]]></surname>
<given-names><![CDATA[C.]]></given-names>
</name>
<name>
<surname><![CDATA[Granitto]]></surname>
<given-names><![CDATA[P. M.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Automatic classification of legumes using leaf vein image features]]></article-title>
<source><![CDATA[Pattern Recogn]]></source>
<year>2014</year>
<month>January</month>
<volume>47</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>158-168</page-range></nlm-citation>
</ref>
<ref id="B16">
<label>16</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lee]]></surname>
<given-names><![CDATA[K.-B]]></given-names>
</name>
<name>
<surname><![CDATA[Chung]]></surname>
<given-names><![CDATA[K.-W]]></given-names>
</name>
<name>
<surname><![CDATA[Hong]]></surname>
<given-names><![CDATA[K.-S.]]></given-names>
</name>
</person-group>
<source><![CDATA[An implementation of leaf recognition system based on leaf contour and centroid for plant classification]]></source>
<year>2013</year>
<volume>214</volume>
<page-range>109-116</page-range><publisher-name><![CDATA[Springer Netherlands]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B17">
<label>17</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lee]]></surname>
<given-names><![CDATA[K.-B.]]></given-names>
</name>
<name>
<surname><![CDATA[Hong]]></surname>
<given-names><![CDATA[K.-S.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[An implementation of leaf recognition system using leaf vein and shape]]></article-title>
<source><![CDATA[International Journal of Bio-Science and Bio-Technology]]></source>
<year>2013</year>
<month>April</month>
<page-range>57-66</page-range></nlm-citation>
</ref>
<ref id="B18">
<label>18</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Li]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
<name>
<surname><![CDATA[Chi]]></surname>
<given-names><![CDATA[Z]]></given-names>
</name>
<name>
<surname><![CDATA[Feng]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<source><![CDATA[Leaf vein extraction using independent component analysis]]></source>
<year>2006</year>
<month>October</month>
<volume>5</volume>
<conf-name><![CDATA[ Systems, Man and Cybernetics, 2006. SMC &#8217;06, IEEE International Conference on]]></conf-name>
<conf-loc> </conf-loc>
<page-range>3890-3894</page-range></nlm-citation>
</ref>
<ref id="B19">
<label>19</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Zamora]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
</person-group>
<source><![CDATA[]]></source>
<year>2014</year>
<month>May</month>
<publisher-loc><![CDATA[Costa Rica ]]></publisher-loc>
<publisher-name><![CDATA[National Biodiversity Institute]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B20">
<label>20</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Dempster]]></surname>
<given-names><![CDATA[A. P.]]></given-names>
</name>
<name>
<surname><![CDATA[Laird]]></surname>
<given-names><![CDATA[N. M]]></given-names>
</name>
<name>
<surname><![CDATA[Rubin]]></surname>
<given-names><![CDATA[D. B.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Maximum likelihood from incomplete data via the EM algorithm]]></article-title>
<source><![CDATA[Journal of the Royal Statistical Society, Series B]]></source>
<year>1977</year>
<volume>39</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>1-38</page-range></nlm-citation>
</ref>
<ref id="B21">
<label>21</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Herdiyeni]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
<name>
<surname><![CDATA[Kusmana]]></surname>
<given-names><![CDATA[I]]></given-names>
</name>
</person-group>
<source><![CDATA[Fusion of local binary patterns features for tropical medicinal plants identification]]></source>
<year>2013</year>
<month>September</month>
<conf-name><![CDATA[ Advanced Computer Science and Information Systems (ICACSIS), 2013 International Conference on]]></conf-name>
<conf-loc> </conf-loc>
<page-range>353-357</page-range></nlm-citation>
</ref>
<ref id="B22">
<label>22</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Nguyen]]></surname>
<given-names><![CDATA[Q]]></given-names>
</name>
<name>
<surname><![CDATA[Le]]></surname>
<given-names><![CDATA[T.]]></given-names>
</name>
<name>
<surname><![CDATA[Pham]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
</person-group>
<source><![CDATA[Leaf based plant identification system for android using surf features in combination with bag of words model and supervised learning]]></source>
<year>2013</year>
<month>October</month>
<conf-name><![CDATA[ International Conference on Advanced Technologies for Communications (ATC)]]></conf-name>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B23">
<label>23</label><nlm-citation citation-type="thesis">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Carranza-Rojas]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
</person-group>
<source><![CDATA[A Texture and Curvature Bimodal Leaf Recognition Model for Costa Rican Plant Species Identification]]></source>
<year>2014</year>
<publisher-loc><![CDATA[Cartago, Costa Rica]]></publisher-loc>
<publisher-name><![CDATA[Costa Rica Institute of Technology]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B24">
<label>24</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bradski]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[The OpenCV Library]]></article-title>
<source><![CDATA[Dr. Dobb&#8217;s Journal of Software Tools]]></source>
<year>2000</year>
</nlm-citation>
</ref>
<ref id="B25">
<label>25</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Oliphant]]></surname>
<given-names><![CDATA[T. E.]]></given-names>
</name>
</person-group>
<source><![CDATA[Guide to NumPy]]></source>
<year>2006</year>
<month>March</month>
<publisher-loc><![CDATA[Provo, UT]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B26">
<label>26</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Zhu]]></surname>
<given-names><![CDATA[X]]></given-names>
</name>
<name>
<surname><![CDATA[Loy]]></surname>
<given-names><![CDATA[C. C.]]></given-names>
</name>
<name>
<surname><![CDATA[Gong]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<source><![CDATA[Constructing robust affinity graphs for spectral clustering]]></source>
<year>2014</year>
<month>June</month>
<conf-name><![CDATA[ Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on]]></conf-name>
<conf-loc> </conf-loc>
<page-range>1450-1457</page-range></nlm-citation>
</ref>
<ref id="B27">
<label>27</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Pedregosa]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Varoquaux]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[Gramfort]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Michel]]></surname>
<given-names><![CDATA[V]]></given-names>
</name>
<name>
<surname><![CDATA[Thirion]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
<name>
<surname><![CDATA[Grisel]]></surname>
<given-names><![CDATA[O.]]></given-names>
</name>
<name>
<surname><![CDATA[Blondel]]></surname>
<given-names><![CDATA[M.]]></given-names>
</name>
<name>
<surname><![CDATA[Prettenhofer]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Weiss]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Dubourg]]></surname>
<given-names><![CDATA[V]]></given-names>
</name>
<name>
<surname><![CDATA[Vanderplas]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Passos]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Cournapeau]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Brucher]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Perrot]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Duchesnay]]></surname>
<given-names><![CDATA[E]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Scikit-learn: Machine learning in Python]]></article-title>
<source><![CDATA[Journal of Machine Learning Research]]></source>
<year>2011</year>
<volume>12</volume>
<page-range>2825-2830</page-range></nlm-citation>
</ref>
<ref id="B28">
<label>28</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Suzuki]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Abe]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Topological structural analysis of digitized binary images by border following]]></article-title>
<source><![CDATA[Computer Vision, Graphics, and Image Processing]]></source>
<year>1985</year>
<volume>30</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>32 - 46</page-range></nlm-citation>
</ref>
<ref id="B29">
<label>29</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Manay]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Cremers]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Hong]]></surname>
<given-names><![CDATA[B.-W.]]></given-names>
</name>
<name>
<surname><![CDATA[Yezzi]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Soatto]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Integral invariants for shape matching]]></article-title>
<source><![CDATA[Pattern Analysis and Machine Intelligence, IEEE Transactions on]]></source>
<year>2006</year>
<month>October</month>
<volume>28</volume>
<numero>10</numero>
<issue>10</issue>
<page-range>1602-1618</page-range></nlm-citation>
</ref>
<ref id="B30">
<label>30</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Coelho]]></surname>
<given-names><![CDATA[L. P.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Mahotas: Open source software for scriptable computer vision]]></article-title>
<source><![CDATA[Journal of Open Research Software]]></source>
<year>2013</year>
<volume>1</volume>
</nlm-citation>
</ref>
<ref id="B31">
<label>31</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Ojala]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[Pietikainen]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Maenpaa]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Multiresolution gray-scale and rotation invariant texture classification with local binary patterns]]></article-title>
<source><![CDATA[Pattern Analysis and Machine Intelligence, IEEE Transactions on]]></source>
<year>2002</year>
<month>July</month>
<volume>24</volume>
<numero>7</numero>
<issue>7</issue>
<page-range>971-987</page-range></nlm-citation>
</ref>
<ref id="B32">
<label>32</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Mouine]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Yahiaoui]]></surname>
<given-names><![CDATA[I]]></given-names>
</name>
<name>
<surname><![CDATA[Verroust-Blondet]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
</person-group>
<source><![CDATA[A shape-based approach for leaf classification using multiscale triangular representation]]></source>
<year>2013</year>
<page-range>127-134</page-range><publisher-name><![CDATA[ACM]]></publisher-name>
</nlm-citation>
</ref>
</ref-list>
</back>
</article>
