<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>0717-5000</journal-id>
<journal-title><![CDATA[CLEI Electronic Journal]]></journal-title>
<abbrev-journal-title><![CDATA[CLEIej]]></abbrev-journal-title>
<issn>0717-5000</issn>
<publisher>
<publisher-name><![CDATA[Centro Latinoamericano de Estudios en Informática]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S0717-50002016000100001</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[Impact of Thresholds and Load Patterns when Executing HPC Applications with Cloud Elasticity]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Facco Rodrigues]]></surname>
<given-names><![CDATA[Vinicius]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Rostirolla]]></surname>
<given-names><![CDATA[Gustavo]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[da Rosa Righi]]></surname>
<given-names><![CDATA[Rodrigo]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[André da Costa]]></surname>
<given-names><![CDATA[Cristiano]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Victória Barbosa]]></surname>
<given-names><![CDATA[Jorge Luis]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Applied Computing Graduate Program, Unisinos]]></institution>
<addr-line><![CDATA[ São Leopoldo]]></addr-line>
<country>Brazil</country>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>04</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>04</month>
<year>2016</year>
</pub-date>
<volume>19</volume>
<numero>1</numero>
<fpage>1</fpage>
<lpage>1</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.edu.uy/scielo.php?script=sci_arttext&amp;pid=S0717-50002016000100001&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.edu.uy/scielo.php?script=sci_abstract&amp;pid=S0717-50002016000100001&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.edu.uy/scielo.php?script=sci_pdf&amp;pid=S0717-50002016000100001&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="en"><p><![CDATA[Elasticity is one of the best-known capabilities related to cloud computing, being largely deployed reactively using thresholds. In this way, maximum and minimum limits are used to drive resource allocation and deallocation actions, leading to the following problem statements: How can cloud users set the threshold values to enable elasticity in their cloud applications? And what is the impact of the application’s load pattern on elasticity? This article tries to answer these questions for iterative high performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive and PaaS-based elasticity model called AutoElastic and employed it over a private cloud to execute a numerical integration application. Here, we present an analysis of best practices and possible optimizations regarding the elasticity and HPC pair. Considering the results, we observed that the maximum threshold influences the application time more than the minimum one. We concluded that threshold values close to 100% of CPU load are directly related to a weaker reactivity, postponing resource reconfiguration when its activation in advance could be pertinent for reducing the application runtime.]]></p></abstract>
<abstract abstract-type="short" xml:lang="pt"><p><![CDATA[Elasticidade é uma das capacidades mais conhecidas da computação em nuvem, sendo amplamente implantada de forma reativa usando thresholds. Desta forma, limites máximos e mínimos são usados para conduzir ações de alocação e desalocação de recursos, levando às seguintes sentenças-problema: Como podem os usuários definir os valores de limite para permitir a elasticidade em suas aplicações em nuvem? E qual é o impacto do padrão de carga do aplicativo na elasticidade? Este artigo tenta responder a estas perguntas para aplicações iterativas de computação de alto desempenho, mostrando o impacto de ambos os thresholds e padrões de carga no desempenho do aplicativo e no consumo de recursos. Para isso, foi desenvolvido um modelo de elasticidade reativo e baseado em PaaS chamado AutoElastic, empregando-o sobre uma nuvem privada para executar um aplicativo de integração numérica. Apresenta-se uma análise das melhores práticas e possíveis otimizações no que diz respeito ao par elasticidade e HPC. Considerando os resultados, observou-se que o limite máximo influencia o tempo de aplicação mais do que o mínimo. Concluiu-se que limiares próximos a 100% da carga de CPU estão diretamente relacionados a uma reatividade mais fraca, adiando a reconfiguração de recursos quando sua ativação com antecedência poderia ser pertinente para reduzir o tempo de execução do aplicativo.]]></p></abstract>
<kwd-group>
<kwd lng="en"><![CDATA[Cloud elasticity]]></kwd>
<kwd lng="en"><![CDATA[high-performance computing]]></kwd>
<kwd lng="en"><![CDATA[resource management]]></kwd>
<kwd lng="en"><![CDATA[self-organizing]]></kwd>
<kwd lng="pt"><![CDATA[Elasticidade]]></kwd>
<kwd lng="pt"><![CDATA[Nuvem]]></kwd>
<kwd lng="pt"><![CDATA[computação de alto desempenho]]></kwd>
<kwd lng="pt"><![CDATA[gerenciamento de recursos]]></kwd>
<kwd lng="pt"><![CDATA[auto-organização]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[ <div class="maketitle">  <h2 class="titleHead" style="font-size:14pt">Impact of Thresholds and Load Patterns when Executing HPC Applications with Cloud Elasticity</h2>      <div class="author" > <span  class="cmbx-12">Vinicius Facco Rodrigues, Gustavo Rostirolla, Rodrigo da Rosa Righi</span>     <br>         <span  class="cmbx-12">Cristiano Andr</span><span  class="cmbx-12">é da Costa, Jorge Luis Vict</span><span  class="cmbx-12">ória Barbosa</span>     <br>                <span  class="cmr-12">Applied Computing Graduate Program - Unisinos</span>     <br>                  <span  class="cmr-12">Av. Unisinos, 950 &#8211; S</span><span  class="cmr-12">ão Leopoldo, RS, Brazil</span>     <br>            <span  class="cmsy-10x-x-120">{</span><span  class="cmti-12"><a href="mailto:vfrodrigues@unisinos.br">vfrodrigues</a>, <a href="mailto:rostirolla@unisinos.br">rostirolla</a>, <a href="mailto:rrrighi@unisinos.br">rrrighi</a>, <a href="mailto:cac@unisinos.br">cac</a>, <a href="mailto:jbarbosa@unisinos.br">jbarbosa</a></span><span  class="cmsy-10x-x-120">}</span><span  class="cmti-12">@unisinos.br </span></div>    <br>     <div class="date" ></div>    </div>        <div  class="abstract"  >     <div class="center"  > <!--l. 58-->    ]]></body>
<body><![CDATA[<p class="noindent" > <div class="minipage">    <div class="center"  > <!--l. 58-->    <p class="noindent" > <!--l. 58-->    <p class="noindent" ><span  class="cmbx-10">Abstract</span></div> <!--l. 59-->    <p class="noindent" >Elasticity is one of the best-known capabilities related to cloud computing, being largely deployed reactively using thresholds. In this way, maximum and minimum limits are used to drive resource allocation and deallocation actions, leading to the following problem statements: How can cloud users set the threshold values to enable elasticity in their cloud applications? And what is the impact of the application&#8217;s load pattern on elasticity? This article tries to answer these questions for iterative high performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive and PaaS-based elasticity model called AutoElastic and employed it over a private cloud to execute a numerical integration application. Here, we present an analysis of best practices and possible optimizations regarding the elasticity and HPC pair. Considering the results, we observed that the maximum threshold influences the application time more than the minimum one. We concluded that threshold values close to 100% of CPU load are directly related to a weaker reactivity, postponing resource reconfiguration when its activation in advance could be pertinent for reducing the application runtime. <!--l. 61-->    <p class="noindent" >Abstract in Portuguese: <!--l. 63-->    <p class="noindent" >Elasticidade é uma das capacidades mais conhecidas da computação em nuvem, sendo amplamente implantada de forma reativa usando thresholds. 
Desta forma, limites máximos e mínimos são usados para conduzir ações de alocação e desalocação de recursos, levando às seguintes sentenças-problema: Como podem os usuários definir os valores de limite para permitir a elasticidade em suas aplicações em nuvem? E qual é o impacto do padrão de carga do aplicativo na elasticidade? Este artigo tenta responder a estas perguntas para aplicações iterativas de computação de alto desempenho, mostrando o impacto de ambos os thresholds e padrões de carga no desempenho do aplicativo e no consumo de recursos. Para isso, foi desenvolvido um modelo de elasticidade reativo e baseado em PaaS chamado AutoElastic, empregando-o sobre uma nuvem privada para executar um aplicativo de integração numérica. Apresenta-se uma análise das melhores práticas e possíveis otimizações no que diz respeito ao par elasticidade e HPC. Considerando os resultados, observou-se que o limite máximo influencia o tempo de aplicação mais do que o mínimo. Concluiu-se que limiares próximos a 100% da carga de CPU estão diretamente relacionados a uma reatividade mais fraca, adiando a reconfiguração de recursos quando sua ativação com antecedência poderia ser pertinente para reduzir o tempo de execução do aplicativo.</div></div> </div> <!--l. 69-->    <p class="noindent" ><span  class="cmbx-10">Keywords: </span>Cloud elasticity, high-performance computing, resource management, self-organizing. <br  class="newline">Portuguese Keywords: Elasticidade, Nuvem, computação de alto desempenho, gerenciamento de recursos, auto-organização. <br  class="newline">Received: 2015-12-10 Revised: 2016-03-18 Accepted: 2016-03-21 <br  class="newline"> DOI: http://dx.doi.org/10.19153.19.1.1     <h3 class="sectionHead"><span class="titlemark">1   </span> <a   id="x1-10001"></a>Introduction</h3> <!--l. 
7-->    <p class="noindent" >The emergence of cloud computing offers the flexibility of managing resources in a more dynamic manner because it uses virtualization technology to abstract, encapsulate, and partition machines. Virtualization enables one of the best-known cloud capabilities - resource elasticity&#x00A0;<span class="cite">[<a  href="#XREF2014-3">1</a>,&#x00A0;<a  href="#XREF2014-4">2</a><a id="br2">]</a></span>. Besides the on-demand provision principle, the interest in the elasticity capability is related to the benefits it can provide, which include improvements in application performance, better resource utilization and cost reduction. Better performance can also be achieved through the dynamic allocation of resources, including processing, memory, network and storage. The possibility of allocating a small amount of resources at the beginning of the application, avoiding over-provisioning, together with the ability to deallocate them during periods of moderate load, underpins the rationale for cost reduction, which impacts directly on energy saving&#x00A0;<span class="cite">[<a  href="#XREF2014-1">3</a><a id="br3">]</a></span>. <!--l. 9-->    ]]></body>
<body><![CDATA[<p class="indent" >   Today, the combination of horizontal and reactive approaches represents the most-used methodology for delivering cloud elasticity&#x00A0;<span class="cite">[<a  href="#XWEBER:2014">4</a>,&#x00A0;<a  href="#XCLAUS:2014">5</a>,&#x00A0;<a  href="#XPAPER4:GUO:2012">6</a>,&#x00A0;<a  href="#XPAPER10:GALANTE:2012">7</a><a id="br7">]</a></span>. In this approach, rule-condition-action statements with upper and lower load thresholds are used to drive the allocation and consolidation of instances, either nodes or virtual machines (VMs). Despite being simple and intuitive, this method requires the programmer&#8217;s technical skill to tune the parameters. Moreover, these parameters can vary according to the application and the infrastructure. Specifically in the High-Performance Computing (HPC) panorama, the upper threshold defines for how long the parallel application should run close to the overloaded state. For example, a value close to 75% implies always executing the HPC code in a non-saturated environment, but paying for as many instances (and increasing energy consumption) as needed to reach this situation. Conversely, a value near 100% for this threshold is relevant for energy saving when analyzing the number of allocations. However, it could result in weaker application reactivity, which can generate penalties in execution time. The lower threshold, in turn, is useful for deallocating resources, and a value close to 0% will delay this action. This scenario can cause overestimation of resource usage because the application will postpone consolidations&#x00A0;<span class="cite">[<a  href="#XANTON:2012">8</a><a id="br8">]</a></span>. 
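The rule-condition-action behavior described above can be sketched as a single decision step evaluated at every monitoring cycle. The sketch below is only illustrative and is not AutoElastic's actual implementation; the 50%/90% threshold values, the function name and the idea of returning a new VM count are assumptions made for this example.

```python
# Illustrative sketch of reactive, threshold-based elasticity.
# The thresholds (50% / 90%) are hypothetical example values; real
# allocation/consolidation would call a cloud middleware API.

def elasticity_step(cpu_load, n_vms, lower=0.5, upper=0.9, min_vms=1):
    """Apply the two reactive rules and return the new number of VMs."""
    if cpu_load > upper:                      # upper threshold violated: scale out
        return n_vms + 1
    if cpu_load < lower and n_vms > min_vms:  # lower threshold violated: scale in
        return n_vms - 1
    return n_vms                              # load within bounds: do nothing
```

Note how a higher upper threshold simply makes the first rule fire later, which is the weaker reactivity discussed in the text.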
The main reason for this behavior in HPC and dynamic applications is to avoid VM deallocations because, in the near term, the application could increase its need for CPU capacity, forcing another time-costly allocation action. Figure&#x00A0;<a  href="#x1-10011">1<!--tex4ht:ref: fig:importancia-thresholds --></a> shows two different situations resulting from different threshold organizations: (a) the load (either CPU load, network consumption or a combination of metrics, for instance) does not violate the thresholds; and (b) the load violates both thresholds, resulting in elasticity operations. <!--l. 11-->    <p class="indent" >   Some recent efforts have specifically focused on exploiting the elasticity of clouds for traditional services, including a transactional data store, data-intensive web services and the execution of a bag-of-tasks application&#x00A0;<span class="cite">[<a  href="#XPAPER6:BICER:2011">9</a><a id="br9">]</a></span>. Basically, this scenario covers companies aiming to avoid the downfalls involved with fixed provisioning on mission-critical applications. Thus, the typical elasticity organization in such systems uses virtual machine (VM) replication and a centralized dispatcher. Normally, replicas do not communicate among themselves and the premature crash of one of them does not mean system unavailability, but a user request retry&#x00A0;<span class="cite">[<a  href="#XREF2014-2">10</a><a id="br10">]</a></span>. <!--l. 13-->    <p class="indent" >   Although the aforementioned solution is successfully employed on server-based applications, tightly-coupled HPC-driven scientific applications cannot benefit from the use of these mechanisms. Generally, these scientific programs have been designed to use a fixed number of resources, and cannot explore elasticity without appropriate support. 
In other words, the simple addition of instances and the use of load balancers have no effect on these applications, since they are not able to detect and use the new resources&#x00A0;<span class="cite">[<a  href="#XCOUTINHO:2014">11</a><a id="br11">]</a></span>. Technically, over recent years most parallel applications have been developed using the Message Passing Interface (MPI) 1.x, which does not have any support for changing the number of processes during execution&#x00A0;<span class="cite">[<a  href="#XEXPOSITO:2013">12</a><a id="br12">]</a></span>. While this changed with MPI version 2.0, this feature is still not supported by many of the available MPI implementations&#x00A0;<span class="cite">[<a  href="#XPAPER6:BICER:2011">9</a><a id="br9">]</a></span>. Moreover, significant effort is needed at the application level both to manually change the process group and to redistribute the data to effectively use a different number of processes. Furthermore, the consolidation (the act of turning off) of a VM running one or more processes can cause an application crash, since the communication channels among the processes are suddenly disconnected. <!--l. 15-->    <p class="indent" >   <hr class="figure">    <div class="figure"  > <a   id="x1-10011"></a> <!--l. 
17-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f1.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;1: </span><span   class="content">Different threshold organizations: (a) upper threshold close to 100% and lower threshold close to 0%, thus avoiding load violations; and (b) upper and lower thresholds close to 50%, resulting in load violations. In (b), we can observe that after the upper threshold is exceeded, new resources are added and the load goes down, since it is better distributed over a larger number of compute nodes. Also in (b), the complementary situation happens when the lower threshold is reached</span></div><!--tex4ht:label?: x1-10011 --> <!--l. 20-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 22-->    <p class="indent" >   Aiming at providing cloud elasticity for HPC applications in an efficient and transparent manner, we propose a model called <span  class="cmbx-10">AutoElastic </span>(Project website: <a  href="http://autoelastic.github.io/autoelastic" class="url" ><span  class="cmtt-10">http://autoelastic.github.io/autoelastic</span></a>). Efficiency is addressed from both the time and energy management perspectives when changing from one resource configuration to another to handle modifications in the workload. Moreover, transparency is addressed by providing cloud elasticity at the middleware level. Since AutoElastic is a PaaS-based (Platform as a Service) manager, the programmer does not need to change any line of the application&#8217;s source code to take advantage of new resources. The proposed model specifically assumes that the target HPC application is iterative by nature, <img  src="/img/revistas/cleiej/v19n1/1a010x.png" alt="i.e.  "  class="math" >, it has a time-step loop. 
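To make the time-step assumption concrete, the sketch below shows the general shape of such an iterative application, using a trapezoidal-rule numerical integration kernel similar in spirit to the evaluation application. The function names and the iteration-boundary elasticity point are hypothetical illustrations: in the real model, the middleware (not the application) manages the process group, and work is redistributed only between iterations so that no communication channel is cut mid-computation.

```python
# Sketch of the assumed iterative HPC structure: an outer time-step
# loop whose iteration boundaries are safe points for adding or
# removing worker processes. The integration kernel and the master
# loop below are illustrative, not AutoElastic's real code.

def integrate_slice(f, a, b, steps):
    """Trapezoidal rule over [a, b] -- the kind of kernel a slave runs."""
    h = (b - a) / steps
    total = (f(a) + f(b)) / 2.0
    for i in range(1, steps):
        total += f(a + i * h)
    return total * h

def run_master(f, a, b, n_workers, iterations, steps=1000):
    """Split the domain into time-step iterations; within each
    iteration, split the work among the current workers."""
    result = 0.0
    width = (b - a) / iterations
    for it in range(iterations):
        # (hypothetical) elasticity point: n_workers may change here,
        # between iterations, without breaking any ongoing computation
        lo = a + it * width
        part = width / n_workers
        for w in range(n_workers):
            result += integrate_slice(f, lo + w * part,
                                      lo + (w + 1) * part, steps)
    return result
```

Because work is (re)partitioned at each iteration, a different `n_workers` value at any iteration boundary still yields the same integral.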
This is a reasonable assumption for most MPI programs, so it does not limit the applicability of our framework. AutoElastic&#8217;s contributions can be divided in two parts as follows:      <ul class="itemize1">      <li class="itemize">An elastic infrastructure for HPC applications that provides <span  class="cmbx-10">asynchronous elasticity</span>, both for not blocking any processes when instantiating or consolidating VMs and for managing the change in the number of processes, avoiding any possibility of an application crash;      </li>      <li class="itemize">Taking into account AutoElastic&#8217;s reactive behavior, we analyzed the impact of <span  class="cmbx-10">different upper and lower threshold values </span>and <span  class="cmbx-10">four load patterns </span>(constant, ascending, descending and wave) on cloud elasticity when executing a scientific application compiled using the AutoElastic prototype.</li>    ]]></body>
<body><![CDATA[</ul> <!--l. 30-->    <p class="indent" >   This article describes the AutoElastic model and its prototype, developed to run over the OpenNebula private cloud platform. Furthermore, it presents the rationale behind the AutoElastic evaluation, the implementation of a numerical integration application and a discussion of its performance with cloud elasticity. The remainder of this article will first introduce the related work in Section <a  href="#x1-20002">2<!--tex4ht:ref: section:related:work --></a>. Section <a  href="#x1-30003">3<!--tex4ht:ref: section:autoelastic --></a> is the main part of the article, describing AutoElastic together with its application model in detail. The evaluation methodology and results are discussed in Sections <a  href="#x1-60004">4<!--tex4ht:ref: section:methodology --></a> and <a  href="#x1-70005">5<!--tex4ht:ref: section:results --></a>. Finally, Section <a  href="#x1-100006">6<!--tex4ht:ref: section:conclusion --></a> emphasizes the scientific contribution of the work and notes several challenges that we can address in the future.    <h3 class="sectionHead"><span class="titlemark">2   </span> <a   id="x1-20002"></a>Related Work</h3> <!--l. 7-->    <p class="noindent" >Cloud computing is addressed both by providers with commercial purposes and open-source middlewares, as well as by academic initiatives. Regarding the first group, the initiatives on the Web offer elasticity either manually, considering the user viewpoint&#x00A0;<span class="cite">[<a  href="#XCAI:BIN:2012">13</a><a id="br13">,</a>&#x00A0;<a  href="#XDEJAN:MONTERO:RUBEN:2011">14</a><a id="br14">,</a>&#x00A0;<a  href="#XWEN:GU:LI:GAO:2012">15</a><a id="br15">]</a></span>, or through preconfiguration mechanisms for reactive elasticity&#x00A0;<span class="cite">[<a  href="#XREF2014-1">3</a><a id="br3">,</a>&#x00A0;<a  href="#XREF2014-2">10</a><a id="br10">,</a>&#x00A0;<a  href="#XCHIU:AGRAWAL:2010">16</a><a id="br16">]</a></span>. 
In particular, in the latter case, the user must set up thresholds and elasticity actions, which may not be trivial for those who are not experts in cloud environments. Systems such as Amazon EC2 <a  href="http://aws.amazon.com" class="url" ><span  class="cmtt-10">http://aws.amazon.com</span></a>, Nimbus <a  href="http://www.nimbusproject.org" class="url" ><span  class="cmtt-10">http://www.nimbusproject.org</span></a> and Windows Azure <a  href="http://azure.microsoft.com" class="url" ><span  class="cmtt-10">http://azure.microsoft.com</span></a> are examples of this methodology. In particular, Amazon EC2 eases the allocation and preparation of the VM, but not its automatic configuration, which is a requirement for a truly elastic-aware cloud platform. Concerning the middlewares for private clouds, such as OpenStack <a  href="https://www.openstack.org" class="url" ><span  class="cmtt-10">https://www.openstack.org</span></a>, OpenNebula <a  href="http://opennebula.org" class="url" ><span  class="cmtt-10">http://opennebula.org</span></a>, Eucalyptus <a  href="https://www.eucalyptus.com" class="url" ><span  class="cmtt-10">https://www.eucalyptus.com</span></a> and CloudStack <a  href="http://cloudstack.apache.org" class="url" ><span  class="cmtt-10">http://cloudstack.apache.org</span></a>, the elasticity is normally controlled manually, either on the command line or via a graphical application that comes together with the software package. <!--l. 9-->    <p class="indent" >   Academic research initiatives seek to reduce gaps and/or enhance cloud elasticity approaches. ElasticMPI proposes elasticity in MPI applications through a stop-reconfigure-and-go approach&#x00A0;<span class="cite">[<a  href="#XPAPER6:BICER:2011">9</a><a id="br9">]</a></span>. Such an action may have a negative impact, especially for HPC applications that do not have a long duration. In addition, the ElasticMPI approach changes the application&#8217;s source code in order to insert monitoring policies. 
Mao, Li and Humphrey&#x00A0;<span class="cite">[<a  href="#XMING:MAO:JIE:2010">17</a><a id="br17">]</a></span> deal with auto-scalability by changing the number of VM instances based on workload information. In their scope, considering that a program has deadlines to conclude each of its phases, the proposal works with resources and VMs to meet these deadlines properly. Martin et al.&#x00A0;<span class="cite">[<a  href="#XMARTIN:POLETTI:2011">18</a><a id="br18">]</a></span> present a typical scenario of requests over a cloud service that works with a load balancer. The elasticity changes the number of worker VMs according to the demand on the service. Following the same approach of using a balancer and replicas, Elastack appears as a system running on OpenStack to address the lack of elasticity of the latter&#x00A0;<span class="cite">[<a  href="#XMATOS:2012">19</a><a id="br19">]</a></span>. <!--l. 11-->    <p class="indent" >   Weiwei et al.&#x00A0;<span class="cite">[<a  href="#XPAPER193:LIN:2011">20</a><a id="br20">]</a></span> use CloudSim&#x00A0;<span class="cite">[<a  href="#XCLOUDSIM:2011">21</a><a id="br21">]</a></span> to enable dynamic resource allocation using thresholds. Resource monitoring occurs at malleable time intervals according to the application&#8217;s regularity. However, the application execution blocks during resource reorganizations and there is no peak treatment when a threshold is violated. This approach requires previous application information to define the thresholds. The system monitors variations in the workload to decide when resource reorganizations are necessary. In the tests using CloudSim, the authors used the threshold values 95%, 75% and 50%. 
Rui Han et al.&#x00A0;<span class="cite">[<a  href="#XHAN:MOUSTAFA:2012">22</a><a id="br22">]</a></span> present an approach to resource management focusing on IaaS (Infrastructure as a Service) cloud providers. The system monitors the application response time and compares the values with predetermined thresholds. When the response time reaches a threshold, the system uses horizontal and vertical elasticity to reorganize the resources. Thresholds of 80% and 30% were used in the tests. Spinner et al.&#x00A0;<span class="cite">[<a  href="#XSPINNER:2014">23</a><a id="br23">]</a></span> propose an algorithm that calculates, at every new application phase, the ideal resource configuration based on thresholds. In this way, the approach uses vertical elasticity to increase or decrease the number of CPUs available to the application through virtual machines. The algorithm works with an aggressiveness parameter to define the elasticity behavior. In the tests, the authors used a threshold of 75%. <!--l. 13-->    <p class="indent" >   Chuang et al.&#x00A0;<span class="cite">[<a  href="#XCHUANG:2013">24</a><a id="br24">]</a></span> affirm that we need a model in which programmers do not need to be aware of the actual scale of the application or of the runtime support that dynamically reconfigures the system to distribute application state across computing resources. To address this point, the authors developed EventWave, a programming model and runtime support for tightly coupled elastic cloud applications. The initiative focuses on a game server and works only with VM migration. Gutierrez-Garcia and Sim&#x00A0;<span class="cite">[<a  href="#XMONG:2013">25</a><a id="br25">]</a></span> developed an agent-based framework to address cloud elasticity for bag-of-tasks demands. The status of the BoT execution is verified every hour; if it is not completely executed, all the cloud resources are reallocated for the next hour. 
Wei et al.&#x00A0;<span class="cite">[<a  href="#XHAO:2013">26</a><a id="br26">]</a></span> explore elastic resource management at the PaaS level. When applications are deployed, they are first allocated on separate servers, so that they can be monitored more meticulously. Then, the authors collect long-term monitoring data to estimate the application&#8217;s characteristics, a step that adds time to the normal execution of the application. Aniello et al.&#x00A0;<span class="cite">[<a  href="#XANIELLO:2014">27</a><a id="br27">]</a></span> developed an architecture for automatically scaling replicated services. A queuing model of the replicated service is used to compute the expected response time, given the current configuration (number of replicas) and the distributions of both input requests and service times. However, a queuing system is not the best option for dynamic applications and/or infrastructures because it is based on rigid parameters. Leite et al.&#x00A0;<span class="cite">[<a  href="#XLEITE:2014">28</a><a id="br28">]</a></span> developed a middleware named Excalibur to execute parallel applications in the cloud automatically. Excalibur works with independent tasks organized in different partitions. The algorithms must know some information in advance, such as the total number of tasks and the estimated CPU time to execute each type of task.        <div class="table"> <!--l. 
16-->    <p class="indent" >   <a   id="x1-20011"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                          <div class="caption"  ><span class="id">Table&#x00A0;1: </span><span   class="content">Elasticity-related initiatives, emphasizing their goals and weak points</span></div><!--tex4ht:label?: x1-20011 --> <img  src="/img/revistas/cleiej/v19n1/1a01t1.jpg" alt="PIC"   >                                                                                                                                                                                        </div><hr class="endfloat">    </div> <!--l. 25-->    ]]></body>
<body><![CDATA[<p class="indent" >   Table&#x00A0;<a  href="#x1-20011">1<!--tex4ht:ref: tab:relatedwork --></a> presents a comparison of the most representative works discussed in this section. Elasticity is explored further in the IaaS level as a reactive approach. In this way, the works are not unisons about the use of a single threshold for the tests. For example, it is possible to note the following values: (i) 70%&#x00A0;<span class="cite">[<a  href="#XPAPER139:WESAN:2011">30</a><a id="br30">]</a></span>; (ii) 75%&#x00A0;<span class="cite">[<a  href="#XVARELA:CARLOS:2012">31</a><a id="br31">]</a></span>; (iii) 80%&#x00A0;<span class="cite">[<a  href="#XPAPER18:2:MARIAN:2012">32</a><a id="br32">]</a></span>; (iv) 90%&#x00A0;<span class="cite">[<a  href="#XMATOS:2012">19</a><a id="br19">,</a>&#x00A0;<a  href="#XPAPER11:SULELMAN:2012">33</a><a id="br33">]</a></span>. These values deal with upper limits that when exceeded, trigger elasticity actions. Furthermore, an analysis of the state-of-the-art in elasticity allows to point out some weaknesses of the academy initiatives, which can be summarized in five statements: (i) no strategy is proposed to assess whether it is a peak when reach a threshold&#x00A0;<span class="cite">[<a  href="#XMARTIN:POLETTI:2011">18</a><a id="br18">,</a>&#x00A0;<a  href="#XMATOS:2012">19</a><a id="br19">]</a></span>; (ii) need to change the application source code&#x00A0;<span class="cite">[<a  href="#XPAPER6:BICER:2011">9</a><a id="br9">,</a>&#x00A0;<a  href="#XCAMINO:JESUS:2011">29</a><a id="br29">]</a></span>; (iii) need to know the application&#8217;s data before its execution, such as the expected time of execution of each component&#x00A0;<span class="cite">[<a  href="#XPAPER6:BICER:2011">9</a><a id="br9">,</a>&#x00A0;<a  href="#XFETZER:2011">34</a><a id="br34">]</a></span>; (iv) need to reconfigure resources with stop of the application and subsequent recovery&#x00A0;<span class="cite">[<a  href="#XPAPER6:BICER:2011">9</a><a 
id="br9">]</a></span>; (v) the assumption that communication between VMs occurs at a constant rate&#x00A0;<span class="cite">[<a  href="#XZON:YIN:2012">35</a><a id="br35">]</a></span>. <!--l. 27-->    <p class="indent" >   To summarize, to the best of our knowledge, three articles address cloud elasticity for HPC applications&#x00A0;<span class="cite">[<a  href="#XPAPER6:BICER:2011">9</a><a id="br9">,</a>&#x00A0;<a  href="#XMARTIN:POLETTI:2011">18</a><a id="br18">,</a>&#x00A0;<a  href="#XCAMINO:JESUS:2011">29</a><a id="br29">]</a></span>. They have in common the use of the master-slave programming model. In particular, the initiatives&#x00A0;<span class="cite">[<a  href="#XPAPER6:BICER:2011">9</a><a id="br9">,</a>&#x00A0;<a  href="#XCAMINO:JESUS:2011">29</a><a id="br29">]</a></span> are based on iterative applications, in which the master entity redistributes tasks at each new phase. Applications that do not have an iterative loop cannot be adapted to this framework, since it uses the iteration index as the execution restarting point. In addition, the elasticity in&#x00A0;<span class="cite">[<a  href="#XCAMINO:JESUS:2011">29</a><a id="br29">]</a></span> is managed manually by the user, who obtains monitoring data using the framework proposed by the authors. Finally, the purpose of the solution of Martin et al.&#x00A0;<span class="cite">[<a  href="#XMARTIN:POLETTI:2011">18</a><a id="br18">]</a></span> is the efficient handling of requests to a Web server: it acts as a delegator, creating and consolidating instances based on the flow of arriving requests and the load of the worker VMs.    <h3 class="sectionHead"><span class="titlemark">3   </span> <a   id="x1-30003"></a>AutoElastic: Reactive-based Cloud Elasticity Model</h3> <!--l. 7-->    <p class="noindent" >This section describes the AutoElastic model, which analyzes alternatives for the following problem statements: <!--l.
10-->    <p class="indent" >      <dl class="enumerate"><dt class="enumerate">   1. </dt><dd  class="enumerate"><span  class="cmti-10">Which mechanisms are needed to provide cloud elasticity transparently at both user and application</span>      <span  class="cmti-10">levels?</span>      </dd><dt class="enumerate">   2. </dt><dd  class="enumerate"><span  class="cmti-10">Considering resource monitoring and VM management procedures, how can we model the elasticity as</span>      <span  class="cmti-10">a viable capability on HPC applications?</span></dd></dl> <!--l. 14-->    <p class="indent" >   Our idea is to provide reactive elasticity in a fully transparent and effortless way to users, who do not need to write rules and actions for resource reconfiguration. In addition, users should not need to change their parallel applications, neither inserting elasticity calls from a particular library nor modifying the application to add/remove resources themselves. Considering the second aforementioned question, AutoElastic should be aware of the overhead of instantiating a VM, using this knowledge to offer this feature without prohibitive costs. Figure&#x00A0;<a  href="#x1-30032">2<!--tex4ht:ref: fig:autoelastic-idea --></a> (a) illustrates the traditional approaches of providing cloud elasticity to HPC applications, while (b) highlights AutoElastic&#8217;s idea. AutoElastic allows users to compile and submit a non-elastic-aware HPC application to the cloud. Then, the middleware at the PaaS level first transforms the non-elastic application into an elastic one and, second, manages resource (and, consequently, application process) reorganization through automatic VM allocation and consolidation procedures. <!--l.
16-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-30032"></a>                                                                                                                                                                                      <!--l. 18-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f2.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;2: </span><span   class="content">General ideas on using elasticity: (a) standard approach adopted by Amazon AWS and Windows Azure, in which the user must pre-configure a set of elasticity rules and actions; (b) AutoElastic idea, contemplating a manager that coordinates the elasticity actions and configurations on behalf of the user</span></div><!--tex4ht:label?: x1-30032 -->                                                                                                                                                                                     <!--l. 21-->    ]]></body>
<body><![CDATA[<p class="indent" >   </div><hr class="endfigure"> <!--l. 23-->    <p class="indent" >   The first AutoElastic ideas were published in&#x00A0;<span class="cite">[<a  href="#XRIGHI:2016">36</a><a id="br36">]</a></span>. In&#x00A0;<span class="cite">[<a  href="#XRIGHI:2016">36</a><a id="br36">]</a></span>, we presented a deep analysis of the state-of-the-art in the cloud elasticity area, pointing out the gaps in the HPC landscape. That article considered only a single pair of thresholds (one upper and one lower), and did not explain the interaction between the application processes and the AutoElastic Manager. The current article, in turn, presents a novel prediction function (see Equations 1 and 2), a graphical demonstration of how an application interacts with the Manager and extensive details about the application used in the tests. Moreover, the current article presents novel types of graphs, exploring the impact of the thresholds on application performance, the relationship between CPU load and allocated CPU cores, and energy consumption profiles.    <h4 class="subsectionHead"><span class="titlemark">3.1   </span> <a   id="x1-40003.1"></a>Architecture</h4> <!--l. 29-->    <p class="noindent" >AutoElastic is a cloud elasticity model that operates at the PaaS level of a cloud platform, acting as a middleware that enables the transformation of a non-elastic parallel application into an elastic one. The model works with automatic, reactive elasticity in both its horizontal (managing VM replicas) and vertical (resizing the computational infrastructure) modes, providing allocation and consolidation of compute nodes and virtual machines. As a PaaS proposal, AutoElastic provides a middleware to compile an iterative-based master-slave application, besides an elasticity manager. 
Figure <a  href="#x1-40013">3<!--tex4ht:ref: figure:autoelastic:architecture --></a> (a) depicts the interaction of users with the cloud, who need to concentrate their efforts only on application coding. The Manager hides from the user the details of writing elasticity rules and actions. Figure <a  href="#x1-40013">3<!--tex4ht:ref: figure:autoelastic:architecture --></a> (b) illustrates the relation among processes, virtual machines and computational nodes. In our scope, an AutoElastic cloud can be defined as follows:      <ul class="itemize1">      <li class="itemize"><span  class="cmbx-10">Definition 1 - AutoElastic cloud</span>: a cloud modeled with <img  src="/img/revistas/cleiej/v19n1/1a011x.png" alt="m  "  class="math" > homogeneous and distributed computational resources, where at least one of them (Node0) is always active. This node is in charge of running a VM with the master process and other <img  src="/img/revistas/cleiej/v19n1/1a012x.png" alt="c  "  class="math" > VMs with slave processes, where <img  src="/img/revistas/cleiej/v19n1/1a013x.png" alt="c  "  class="math" > means the number of processing units (cores or CPUs) inside a particular node. The elasticity grain for each scaling up or down action refers to a single node and, consequently, to its VMs and processes. Lastly, at any time, the number of VMs running slave processes is equal to <img  src="/img/revistas/cleiej/v19n1/1a014x.png" alt="n = c &#x00D7;m  "  class="math" >.</li>    </ul> <!--l. 35-->    <p class="indent" >   Here, we present the AutoElastic Manager as an application outside the cloud, but it could be mapped to the first node, for example. This flexibility is achieved by using the API (Application Programming Interface) of the cloud software packages. 
Taking into account that HPC applications are commonly CPU intensive&#x00A0;<span class="cite">[<a  href="#XASLAM:2011">37</a><a id="br37">]</a></span>, we opted for creating a single process per VM and <img  src="/img/revistas/cleiej/v19n1/1a015x.png" alt="c  "  class="math" > VMs per compute node to exploit its full potential. This approach is based on the work of Lee et al.&#x00A0;<span class="cite">[<a  href="#XBATTEN:2011">38</a><a id="br38">]</a></span>, which seeks better efficiency in parallel applications. <!--l. 37-->    <p class="indent" >   <a   id="x1-40013"></a><hr class="float">    <div class="float"  >     <img  src="/img/revistas/cleiej/v19n1/1a01f3.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;3: </span><span   class="content">Distribution of nodes, VMs and processes using the AutoElastic cloud infrastructure, where each VM encompasses a single application process and each node runs <img  src="/img/revistas/cleiej/v19n1/1a016x.png" alt="c  "  class="math" > processing VMs, where <img  src="/img/revistas/cleiej/v19n1/1a017x.png" alt="c  "  class="math" > denotes the number of processing units in the node</span></div><!--tex4ht:label?: x1-40013 -->     </div><hr class="endfloat"> <!--l. 48-->    <p class="indent" >   The user can enter an SLA (Service Level Agreement) with the minimum and maximum number of allowed VMs. 
If this file is not provided, it is assumed that this maximum is twice the number of VMs observed at the application launch. The fact that the Manager, and not the application itself, increases or decreases the number of resources provides the benefit of asynchronous elasticity. Here, asynchronous elasticity means that process execution and elasticity actions occur concomitantly, so the application is not penalized by the overhead of resource (node and VM) reconfiguration (allocation and deallocation). However, this asynchronism leads to the following question: how can we notify the application about resource reconfiguration? To accomplish this, AutoElastic performs the communication among the VMs and the Manager using a shared memory area. Other communication options would also be possible, including NFS, message-oriented middleware (such as JMS or AMQP) or tuple spaces (JavaSpaces, for instance). The use of a shared area for data interaction among VM instances is a common approach in private clouds&#x00A0;<span class="cite">[<a  href="#XCAI:BIN:2012">13</a><a id="br13">,</a>&#x00A0;<a  href="#XDEJAN:MONTERO:RUBEN:2011">14</a><a id="br14">,</a>&#x00A0;<a  href="#XWEN:GU:LI:GAO:2012">15</a><a id="br15">]</a></span>. AutoElastic uses this idea to trigger actions as presented in Table&#x00A0;<a  href="#x1-40022">2<!--tex4ht:ref: table:actions --></a>.        ]]></body>
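A minimal sketch of this shared-area interaction, assuming a directory shared between the Manager and the VMs; the file names, JSON format and polling scheme are our own illustration, not AutoElastic's actual protocol:

```python
import json, os, tempfile

SHARED_DIR = tempfile.mkdtemp()          # stands in for the NFS-style shared area

def manager_post_action(action, payload):
    """Manager side: publish an elasticity action in the shared area."""
    tmp = os.path.join(SHARED_DIR, "action.tmp")
    with open(tmp, "w") as fh:
        json.dump({"action": action, **payload}, fh)
    os.replace(tmp, os.path.join(SHARED_DIR, "action.json"))  # atomic publish

def process_poll_action():
    """Process side: check for a pending action at an iteration boundary."""
    path = os.path.join(SHARED_DIR, "action.json")
    if not os.path.exists(path):
        return None
    with open(path) as fh:
        return json.load(fh)

# The Manager announces new resources (Action1); the processes observe it
# at the next iteration boundary and reorganize their channels.
manager_post_action("Action1", {"new_slaves": ["vm4", "vm5"]})
print(process_poll_action())  # {'action': 'Action1', 'new_slaves': ['vm4', 'vm5']}
```

The atomic rename mimics what makes a shared directory attractive here: readers never see a half-written action file, so polling between iterations is safe without extra locking.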
<body><![CDATA[<div class="table">     <!--l. 63-->    <p class="indent" >   <a   id="x1-40022"></a><hr class="float">    <div class="float"  >     <div class="caption"  ><span class="id">Table&#x00A0;2: </span><span   class="content">Actions provided through the shared data area</span></div><!--tex4ht:label?: x1-40022 --> <img  src="/img/revistas/cleiej/v19n1/1a01t2.jpg" alt="PIC"   >     </div><hr class="endfloat">    </div> <!--l. 71-->    <p class="indent" >   Based on Action1, the current processes may start working with the new set of resources (a single node with <img  src="/img/revistas/cleiej/v19n1/1a018x.png" alt="c  "  class="math" > VMs, each one with a new process). Figure <a  href="#x1-40034">4<!--tex4ht:ref: fig:autoelastic-asynchronous-elasticity --></a> illustrates the functioning of the AutoElastic Manager when creating a new slave, launching Action1 afterwards. Action2 is relevant for the following reasons: (i) not stopping a process while either communication or computation procedures take place; (ii) ensuring that the application will not be aborted by the sudden interruption of one or more processes. In particular, the second reason is important for MPI applications that run over TCP/IP networks, since they commonly crash on the premature termination of any process. 
Action3 is normally taken by a master process, which ensures that the application reaches a consistent global state in which processes may be disconnected properly. Afterwards, the remaining processes do not exchange any message with the given node. We work with a shared area because it makes it easier to notify all processes about resource addition or removal and then to perform communication channel reconfiguration in a simple way. <!--l. 73-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >     <a   id="x1-40034"></a>     <!--l. 75-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f4.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;4: </span><span   class="content">Functioning of the master, the new slave and the AutoElastic Manager to enable the Asynchronous Elasticity</span></div><!--tex4ht:label?: x1-40034 -->     <!--l. 78-->    ]]></body>
<body><![CDATA[<p class="indent" >   </div><hr class="endfigure"> <!--l. 81-->    <p class="indent" >   AutoElastic offers cloud elasticity using the replication technique. When enlarging the infrastructure, the Manager allocates a new compute node and launches new virtual machines on it using an application template. The bootstrap of a VM ends with the execution of a slave process, which then makes requests to the master. The instantiation of VMs is controlled by the Manager and only after they are running does the Manager notify the other processes through Action1. The consolidation procedure increases the efficiency of resource utilization (avoiding partially used cores) and also provides a better management of energy consumption. Particularly, Baliga et al.&#x00A0;<span class="cite">[<a  href="#XBALIGA:2011">39</a><a id="br39">]</a></span> claim that the number of VMs in a node is not an influential factor for energy consumption, but rather whether the node is turned on or not. <!--l. 83-->    <p class="indent" >   As in&#x00A0;<span class="cite">[<a  href="#XCHIU:AGRAWAL:2010">16</a><a id="br16">]</a></span> and&#x00A0;<span class="cite">[<a  href="#XVARELA:CARLOS:2012">31</a><a id="br31">]</a></span>, data monitoring takes place periodically. Hence, the AutoElastic Manager obtains the CPU metric, applies a time series technique over past values and compares the final metric with the maximum and minimum thresholds. More precisely, we employ a Moving Average in accordance with Equations <a  href="#x1-4005r2">2<!--tex4ht:ref: equation:mm --></a> and <a  href="#x1-4004r1">1<!--tex4ht:ref: equation:pc --></a>. <img  src="/img/revistas/cleiej/v19n1/1a019x.png" alt="LP (i)  "  class="math" > returns a CPU load prediction when considering the execution of the <img  src="/img/revistas/cleiej/v19n1/1a0110x.png" alt="n  "  class="math" > slave VMs in the Manager intervention number <img  src="/img/revistas/cleiej/v19n1/1a0111x.png" alt="i  "  class="math" >. 
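The Manager's periodic decision step, which combines the prediction of Equations 1 and 2 with the two thresholds, can be sketched as follows (a minimal Python illustration; the function names, window storage and example values are ours, not AutoElastic's API):

```python
from collections import deque

def moving_average(history, z):
    """MM(i, j): mean of the last z CPU-load observations of one VM (Eq. 2)."""
    window = list(history)[-z:]
    return sum(window) / len(window)

def load_prediction(histories, z):
    """LP(i): average of the per-VM moving averages over the n slave VMs (Eq. 1)."""
    return sum(moving_average(h, z) for h in histories) / len(histories)

def decide(histories, z, lower, upper):
    """Return the elasticity action the Manager would trigger, if any."""
    lp = load_prediction(histories, z)
    if lp > upper:
        return "Action1"   # scale out: allocate a node and its VMs
    if lp < lower:
        return "Action2"   # scale in: consolidate a node
    return None            # stay within the current configuration

# Example: two slave VMs, window of z = 3 observations, thresholds 30%/70%
vm_loads = [deque([0.92, 0.95, 0.90], maxlen=10),
            deque([0.85, 0.88, 0.91], maxlen=10)]
print(decide(vm_loads, z=3, lower=0.30, upper=0.70))  # high load -> Action1
```

The moving average smooths transient spikes, so a single noisy observation above the upper threshold does not immediately trigger a (costly) node allocation.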
To accomplish this, <img  src="/img/revistas/cleiej/v19n1/1a0112x.png" alt="M M (i,j)  "  class="math" > informs the CPU load of a virtual machine <img  src="/img/revistas/cleiej/v19n1/1a0113x.png" alt="j  "  class="math" > in the observation <img  src="/img/revistas/cleiej/v19n1/1a0114x.png" alt="i  "  class="math" >. Equation <a  href="#x1-4005r2">2<!--tex4ht:ref: equation:mm --></a> uses moving average by considering the last <img  src="/img/revistas/cleiej/v19n1/1a0115x.png" alt="z  "  class="math" > observations of the CPU load <img  src="/img/revistas/cleiej/v19n1/1a0116x.png" alt="Load(k,j)  "  class="math" > over the VM <img  src="/img/revistas/cleiej/v19n1/1a0117x.png" alt="j  "  class="math" >, where <img  src="/img/revistas/cleiej/v19n1/1a0118x.png" alt="i- z &#x2264; k &#x2264; i  "  class="math" >. Finally, Action1 is triggered if LP is greater than the maximum threshold, while Action2 is thrown when LP is lower than the minimum threshold.    <table  class="equation"><tr><td>    <center class="math-display" > <img  src="/img/revistas/cleiej/v19n1/1a0119x.png" alt="       1- n&#x2211;-1 LP(i) = n .  M M (i,j)           i=0 " class="math-display" ><a   id="x1-4004r1"></a></center></td><td class="equation-label">(1)</td></tr></table> <!--l. 88-->    <p class="nopar" > <!--l. 90-->    <p class="indent" >   where    <table  class="equation"><tr><td>    <center class="math-display" > <img  src="/img/revistas/cleiej/v19n1/1a0120x.png" alt="           &#x2211;i       Load(k,j) M M (i,j) =--k=i-z+1---------                    z " class="math-display" ><a   id="x1-4005r2"></a></center></td><td class="equation-label">(2)</td></tr></table> <!--l. 96-->    <p class="nopar" > <!--l. 99-->    <p class="indent" >   for <img  src="/img/revistas/cleiej/v19n1/1a0121x.png" alt="i &#x2265; z  "  class="math" >.    <h4 class="subsectionHead"><span class="titlemark">3.2   </span> <a   id="x1-50003.2"></a>Model of Parallel Application</h4> <!--l. 
105-->    <p class="noindent" >AutoElastic exploits data parallelism on iterative-based message passing parallel applications. Figure&#x00A0;<a  href="#x1-50015">5<!--tex4ht:ref: fig:application-iteractions --></a> shows an iterative application supported by AutoElastic, where each iteration is composed of three steps: (a) the master process distributes the load among the active slave processes; (b) the slave processes compute the load received from the master process; and (c) the slave processes send the computed results to the master process. Elasticity always takes place between iterations, when the computation has a consistent global state, allowing changes in the number of processes. In particular, the current version of the model is still restricted to applications written in the master-slave programming style. Although simple, this style is used in several areas, such as genetic algorithms, Monte Carlo techniques, geometric transformations in computer graphics, cryptography algorithms and applications that follow the Embarrassingly Parallel computing model&#x00A0;<span class="cite">[<a  href="#XPAPER6:BICER:2011">9</a><a id="br9">]</a></span>. However, Action1 allows existing processes to learn the identifiers of the new ones, eventually enabling an all-to-all communication channel reorganization. Another characteristic is that AutoElastic targets applications that do not impose specific deadlines for concluding their subparts. <!--l.
107-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-50015"></a>                                                                                                                                                                                      <!--l. 109-->    ]]></body>
<body><![CDATA[<p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f5.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;5: </span><span   class="content">Iterative application supported by AutoElastic. Process reorganization takes place before starting each new iteration</span></div><!--tex4ht:label?: x1-50015 -->     <!--l. 112-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 115-->    <p class="indent" >   As an AutoElastic project decision, the elasticity feature must be offered to programmers without changing their applications. Thus, we modeled the communication framework by analyzing the traditional interfaces from MPI 1.x and MPI 2.x. The first creates processes statically, so a program begins and ends with the same number of processes. On the other hand, MPI 2.0 has support for elasticity, since it offers the possibility of creating processes dynamically, with transparent connections to the existing ones. AutoElastic follows the MPMD (Multiple Program Multiple Data) approach from MPI 2.x, where the master has one executable and the slaves another. <!--l. 117-->    <p class="indent" >   Based on MPI 2.0, AutoElastic works with the following directives: (i) publication of connection ports; (ii) finding the server based on a particular port; (iii) accepting a connection; (iv) requesting a connection; (v) making a disconnection. Different from the approach in which the master process launches the slaves using a spawn-like directive, the proposed model operates according to another MPI 2.0 approach for dynamic process management: point-to-point, connection-oriented communication, as sockets do. 
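The semantics of these five directives can be sketched with plain TCP sockets, since only the semantics (not the MPI 2.0 interface itself) is required; the port number and helper names below are our own illustration:

```python
import socket, threading

# (i) publish a connection port: the master binds and listens on a known port
def publish_port(port):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    return srv

# (iii) accept a connection from a new slave
def accept_slave(srv):
    conn, _ = srv.accept()
    return conn

# (ii) + (iv) a slave finds the server by its known port and requests a connection
def connect_to_master(port):
    return socket.create_connection(("127.0.0.1", port))

srv = publish_port(5555)
t = threading.Thread(target=lambda: connect_to_master(5555).sendall(b"hello"))
t.start()                            # the "slave" connects and sends a greeting
conn = accept_slave(srv)
data = b""
while len(data) < 5:                 # read the full 5-byte message
    data += conn.recv(5 - len(data))
print(data)                          # b'hello'
# (v) disconnection: both sides close their endpoints
conn.close(); srv.close(); t.join()
```

In the real model a newly booted VM plays the client role: its slave process looks up the master's published port and connects, with no spawn call issued by the master.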
The launch of a VM automatically triggers the execution of a slave process, which afterwards requests a connection to the master. Here, we emphasize that an application with AutoElastic does not need to follow the MPI 2.0 interface itself, but only the semantics of each aforementioned directive. <!--l. 119-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >     <a   id="x1-50026"></a>     <!--l. 122-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f6.jpg" alt="PIC"   >     <br>     ]]></body>
<body><![CDATA[<div class="caption"  ><span class="id">Figure&#x00A0;6: </span><span   class="content">Application model in pseudo-language: (a) Master process; (b) Slave process; (c) elasticity code to be inserted in the Master process at PaaS level by using either method overriding, source-to-source translation or wrapper technique</span></div><!--tex4ht:label?: x1-50026 -->     <!--l. 127-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 129-->    <p class="indent" >   Figure <a  href="#x1-50026">6<!--tex4ht:ref: figure:application:model --></a> (a) presents a pseudo-code of the master process. The master performs a series of tasks, sequentially capturing each task and dividing it before sending the parts to the slaves for processing. Concerning the code, the method in line 4 of Figure <a  href="#x1-50026">6<!--tex4ht:ref: figure:application:model --></a> (a) checks the distributed environment and publishes a set of ports (a disjoint set of numbers, names or a combination of them) to receive a connection from each slave process. Data communication happens in an asynchronous model, where sending data to the slaves is non-blocking and receiving data from them is blocking. The presence of an outer loop is convenient for elasticity, since the beginning of each iteration is a possible point for resource and process reconfiguration, including communication channel reorganizations. Moreover, the beginning of a new iteration implies a consistent global state for the distributed system. <!--l.
131-->    <p class="indent" >   The transformation of a non-elastic application into an elastic one can be offered in different ways:     <dl class="enumerate"><dt class="enumerate">   (i) </dt><dd  class="enumerate">implementation of an object-oriented program using polymorphism to override the method that manages     the elasticity;     </dd><dt class="enumerate">  (ii) </dt><dd  class="enumerate">use of a source-to-source translator to insert code between lines 4 and 5 of the master code;     </dd><dt class="enumerate">  (iii) </dt><dd  class="enumerate">development of a wrapper for procedural languages in order to change the function in line 4 of Figure     <a  href="#x1-50026">6<!--tex4ht:ref: figure:application:model --></a> (a). Regardless of the technique, the elasticity code is simple, as shown in Figure <a  href="#x1-50026">6<!--tex4ht:ref: figure:application:model --></a> (c). A region of     additional code checks the shared directory for a new AutoElastic action. For example, this     part of code can be inserted as an extension of the function publish_ports(), following     technique (iii) above.</dd></dl> <!--l. 139-->    <p class="indent" >   Although the initial focus of AutoElastic is on master-slave applications, the use of the sockets-like MPI 2.0 ideas eases the inclusion of processes and the re-establishment of connections to compose a new, totally arbitrary topology. At the implementation level, it is possible to optimize connections and disconnections when a process persists in the list of active processes. This behavior is especially pertinent over TCP/IP connections, since this suite uses a costly three-way handshake protocol for connection establishment.    <h3 class="sectionHead"><span class="titlemark">4   </span> <a   id="x1-60004"></a>Evaluation Methodology</h3> <!--l. 7-->    <p class="noindent" >We developed an iterative application to execute in the cloud with different sets of load patterns and elasticity thresholds. 
Besides the application time metric, our idea consists of analyzing the elasticity reactivity and the costs in terms of infrastructure to achieve a particular execution time. The application computes the numerical integration of a function <img  src="/img/revistas/cleiej/v19n1/1a0122x.png" alt="f(x)  "  class="math" > in a closed interval <img  src="/img/revistas/cleiej/v19n1/1a0123x.png" alt="[a,b]  "  class="math" >. In general, numerical integration has been used in two ways: (i) as a benchmark for HPC systems, including multicore, GPU, cluster and grid architectures&#x00A0;<span class="cite">[<a  href="#XBANAS:2014">40</a><a id="br40">,</a>&#x00A0;<a  href="#XPLAYNE:2011">41</a><a id="br41">]</a></span>; (ii) as a computational method employed in simulations of dynamic and electromechanical systems&#x00A0;<span class="cite">[<a  href="#XMIHAI:2012">42</a><a id="br42">,</a>&#x00A0;<a  href="#XTRIPODI:2015">43</a><a id="br43">]</a></span>. The first case explains why we are using numerical integration to evaluate the AutoElastic model. Here, we are using the Composite Trapezoidal rule from the Newton-Cotes postulation&#x00A0;<span class="cite">[<a  href="#XMIHAI:2012">42</a><a id="br42">]</a></span>. The Newton-Cotes formula is useful when the value of the integrand is given at equally-spaced points. Firstly, consider the division of the interval <img  src="/img/revistas/cleiej/v19n1/1a0124x.png" alt="[a,b]  "  class="math" > into <img  src="/img/revistas/cleiej/v19n1/1a0125x.png" alt="s  "  class="math" > equally-spaced subintervals, each one with length <img  src="/img/revistas/cleiej/v19n1/1a0126x.png" alt="h  "  class="math" > (<img  src="/img/revistas/cleiej/v19n1/1a0127x.png" alt="[x ,x   ]  i  i+1  "  class="math" >, for <img  src="/img/revistas/cleiej/v19n1/1a0128x.png" alt="i = 0,1,2,...,s- 1  "  class="math" >). Thus, <img  src="/img/revistas/cleiej/v19n1/1a0129x.png" alt="x   - x  = h = b-a  i+1   i       s  "  class="math" >. 
The integral of <img  src="/img/revistas/cleiej/v19n1/1a0130x.png" alt="f (x)  "  class="math" > is defined as the sum of the areas of the <img  src="/img/revistas/cleiej/v19n1/1a0131x.png" alt="s  "  class="math" > trapezoids contained in the interval <img  src="/img/revistas/cleiej/v19n1/1a0132x.png" alt="[a,b]  "  class="math" >, as presented in Equation <a  href="#x1-6001r3">3<!--tex4ht:ref: equation:aplic1 --></a>. Figure&#x00A0;<a  href="#x1-60027">7<!--tex4ht:ref: fig:newton-cotes-1 --></a> shows two examples of splitting the interval [a,b]: in (a) with 4 and in (b) with 20 trapezoids. Considering this figure, we can observe that the larger the number of trapezoids, or subintervals, the greater the precision in computing the total area in [a,b].    <table  class="equation"><tr><td>    <center class="math-display" > <img  src="/img/revistas/cleiej/v19n1/1a0133x.png" alt="&#x222B; b    f(x )dx &#x2248; A + A  +A  + A  + ...+ A  a           0   1    2   3        s-1 " class="math-display" ><a   id="x1-6001r3"></a></center></td><td class="equation-label">(3)</td></tr></table> <!--l. 12-->    <p class="nopar" > where&#x00A0; <img  src="/img/revistas/cleiej/v19n1/1a0134x.png" alt="Ai  "  class="math" > = area of trapezoid <img  src="/img/revistas/cleiej/v19n1/1a0135x.png" alt="i  "  class="math" >, with&#x00A0;<img  src="/img/revistas/cleiej/v19n1/1a0136x.png" alt="i = 0,1,2,3,...,s- 1  "  class="math" >. <!--l. 15-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >     <a   id="x1-60027"></a>     <!--l.
17-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f7.jpg" alt="PIC"   >     ]]></body>
<body><![CDATA[<br>     <div class="caption"  ><span class="id">Figure&#x00A0;7: </span><span   class="content">Representation of two examples of the Newton-Cotes postulation splitting the interval [a,b] in 4 and 20 equally-spaced subintervals. The larger the number of subintervals, the larger the precision and the computing workload to calculate the numerical integration</span></div><!--tex4ht:label?: x1-60027 -->                                                                                                                                                                                     <!--l. 20-->    <p class="indent" >   </div><hr class="endfigure">    <table  class="equation"><tr><td>    <center class="math-display" > <img  src="/img/revistas/cleiej/v19n1/1a0137x.png" alt="&#x222B;   b         h-               x&#x2211;-1  a f(x)dx &#x2248; 2[f(x0)+ f(xs) +2.   f (xi)]                              i=1 " class="math-display" ><a   id="x1-6003r4"></a></center></td><td class="equation-label">(4)</td></tr></table> <!--l. 27-->    <p class="nopar" > <!--l. 30-->    <p class="indent" >   Equation <a  href="#x1-6003r4">4<!--tex4ht:ref: equation:aplic2 --></a> shows the development of the numerical integration in accordance with the Newton-Cotes postulation. This equation is used to develop the parallel application modeling. The values of <img  src="/img/revistas/cleiej/v19n1/1a0138x.png" alt="x  0  "  class="math" > and <img  src="/img/revistas/cleiej/v19n1/1a0139x.png" alt="x  s  "  class="math" > in Equation <a  href="#x1-6003r4">4<!--tex4ht:ref: equation:aplic2 --></a> are equal to <img  src="/img/revistas/cleiej/v19n1/1a0140x.png" alt="a  "  class="math" > and <img  src="/img/revistas/cleiej/v19n1/1a0141x.png" alt="b  "  class="math" >, respectively. In this context, <img  src="/img/revistas/cleiej/v19n1/1a0142x.png" alt="s  "  class="math" > means the number of subintervals. 
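Equation 4 translates directly into code; a brief sketch of the Composite Trapezoidal rule (our own illustration, not the authors' implementation):

```python
def composite_trapezoid(f, a, b, s):
    """Composite Trapezoidal rule (Equation 4): split [a, b] into s
    equally-spaced subintervals of length h = (b - a) / s."""
    h = (b - a) / s
    interior = sum(f(a + i * h) for i in range(1, s))  # f(x_1) .. f(x_{s-1})
    return (h / 2) * (f(a) + f(b) + 2 * interior)

# Example: integrate f(x) = x^2 over [0, 1]; the exact value is 1/3.
approx = composite_trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
print(abs(approx - 1 / 3) < 1e-6)  # True: more subintervals, more precision
```

As the text notes, the s + 1 evaluations of f are independent, which is exactly what makes the method easy to distribute among slave processes.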
Following this equation, there are <img  src="/img/revistas/cleiej/v19n1/1a0143x.png" alt="s + 1  "  class="math" > simple <img  src="/img/revistas/cleiej/v19n1/1a0144x.png" alt="f(x )  "  class="math" >-like equations to obtain the final result of the numerical integration. The master process must distribute these <img  src="/img/revistas/cleiej/v19n1/1a0145x.png" alt="s+ 1  "  class="math" > equations among the slaves. Logically, some slaves can receive more work than others when <img  src="/img/revistas/cleiej/v19n1/1a0146x.png" alt="s+ 1  "  class="math" > is not evenly divisible by the number of slaves. Since <img  src="/img/revistas/cleiej/v19n1/1a0147x.png" alt="s  "  class="math" > defines the number of subintervals used to compute the integration, the greater this parameter, the larger the computational load involved in reaching the result for a particular equation. <!--l. 32-->    <p class="indent" >   Aiming at analyzing the impact of different thresholds on the parallel application, we used the aforesaid parameter <img  src="/img/revistas/cleiej/v19n1/1a0148x.png" alt="s  "  class="math" > to model four load patterns: Constant, Ascending, Descending and Wave. An execution of a load consists of starting the application with a set of integral equations that share the same parameters but use a different <img  src="/img/revistas/cleiej/v19n1/1a0149x.png" alt="s  "  class="math" > for each one. In each iteration, the master process distributes the load of one equation. By varying the parameter <img  src="/img/revistas/cleiej/v19n1/1a0150x.png" alt="s  "  class="math" >, we can increase or decrease the load of each equation. Table&#x00A0;<a  href="#x1-60043">3<!--tex4ht:ref: table:functions --></a> shows the function used to calculate the load of each equation in each iteration. 
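As an illustration only, Equation 4 and the distribution of the s+1 point evaluations among slaves can be sketched as follows (the integrand and the chunking policy are assumptions for the example, not AutoElastic's actual code):

```python
# Composite trapezoid rule of Equation 4 and a possible way to split
# the s+1 point evaluations among slaves. The integrand and the
# chunking policy below are illustrative assumptions.

def trapezoid(f, a, b, s):
    """Approximate the integral of f over [a, b] using s subintervals."""
    h = (b - a) / s
    x = [a + i * h for i in range(s + 1)]
    return (h / 2) * (f(x[0]) + f(x[s]) + 2 * sum(f(xi) for xi in x[1:s]))

def split_points(s, slaves):
    """Split the s+1 evaluation indices among slaves; when s+1 is not
    evenly divisible, the first slaves receive one extra point."""
    base, extra = divmod(s + 1, slaves)
    chunks, start = [], 0
    for rank in range(slaves):
        size = base + (1 if rank < extra else 0)
        chunks.append(list(range(start, start + size)))
        start += size
    return chunks

print(round(trapezoid(lambda x: x * x, 0.0, 1.0, 1000), 4))  # 0.3333
print([len(c) for c in split_points(10, 4)])                 # [3, 3, 3, 2]
```

The greater s, the more f(x) evaluations each slave computes, which is how the load patterns scale the computational load per iteration.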
The idea of using different patterns, or workloads, for the same HPC application is widely explored in the literature to observe how the input load can impact points of saturation, bottlenecks, and resource allocation and deallocation&#x00A0;<span class="cite">[<a  href="#XPAPER155:ISLAM:2012">44</a><a id="br44">,</a>&#x00A0;<a  href="#XMAO:MING:2011">45</a><a id="br45">,</a>&#x00A0;<a  href="#XZHANG:WEI:2008">46</a><a id="br46">]</a></span>.        <div class="table"> <!--l. 34-->    <p class="indent" >   <a   id="x1-60043"></a><hr class="float">    <div class="float"  >      <div class="caption"  ><span class="id">Table&#x00A0;3: </span><span   class="content">Functions to express different load patterns. In <img  src="/img/revistas/cleiej/v19n1/1a0151x.png" alt="load(x)  "  class="math" >, <img  src="/img/revistas/cleiej/v19n1/1a0152x.png" alt="x  "  class="math" > is the iteration index at application runtime</span></div><!--tex4ht:label?: x1-60043 --> <img  src="/img/revistas/cleiej/v19n1/1a01t3.jpg" alt="PIC"   >      </div><hr class="endfloat">    </div> <!--l. 40-->    ]]></body>
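Since Table 3 is reproduced as an image, its exact load functions are not recoverable here; the load(x) definitions below are hypothetical shapes that merely illustrate the four qualitative patterns, with base values and amplitudes chosen as assumptions:

```python
import math

# Hypothetical load(x) shapes for the four patterns. These are NOT the
# paper's exact functions (Table 3 is an image); base values, steps and
# amplitudes are illustrative assumptions only.

def constant(x, base=10**6):
    return base

def ascending(x, base=10**5, step=10**5):
    return base + step * x

def descending(x, start=10**6, step=10**5, floor=10**5):
    return max(start - step * x, floor)

def wave(x, base=10**6, amplitude=5 * 10**5, period=20):
    return base + amplitude * math.sin(2 * math.pi * x / period)

# load(x) yields the number of subintervals s for the equation of
# iteration x, i.e., its computational load.
print(ascending(0), ascending(9))  # 100000 1000000
```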
<body><![CDATA[<p class="indent" >   Figure&#x00A0;<a  href="#x1-60058">8<!--tex4ht:ref: figure:functions --></a> shows a graphical representation of each pattern. The <img  src="/img/revistas/cleiej/v19n1/1a0153x.png" alt="x  "  class="math" > axis in the graph of Figure&#x00A0;<a  href="#x1-60058">8<!--tex4ht:ref: figure:functions --></a> expresses the functions (each iteration represents a function) that are being tested, whereas the <img  src="/img/revistas/cleiej/v19n1/1a0154x.png" alt="y  "  class="math" > axis informs the respective load. Again, the load represents the number of subintervals <img  src="/img/revistas/cleiej/v19n1/1a0155x.png" alt="s  "  class="math" > between limits <img  src="/img/revistas/cleiej/v19n1/1a0156x.png" alt="a  "  class="math" > and <img  src="/img/revistas/cleiej/v19n1/1a0157x.png" alt="b  "  class="math" >, which in this experiment are 1 and 10, respectively. A greater number of intervals is associated with a greater computational load for generating the numerical integration of the function. For simplicity, the same function is employed in the tests, but the number of subintervals for the integration varies. <!--l. 43-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-60058"></a>                                                                                                                                                                                      <!--l. 
46-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f8.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;8: </span><span   class="content">Graphical representation of the load patterns considered in the model evaluation</span></div><!--tex4ht:label?: x1-60058 --> <!--l. 49-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 52-->    <p class="indent" >   The loads were executed in two different scenarios: (i) starting with 2 nodes and (ii) starting with 4 nodes. We used 2.9 GHz dual-core nodes with 4 GB of RAM and a 100 Mbps interconnection network. Each load was executed in each scenario using AutoElastic, both with and without the elasticity feature enabled. When elasticity is active, each load was tested <img  src="/img/revistas/cleiej/v19n1/1a0158x.png" alt="25  "  class="math" > times, where <img  src="/img/revistas/cleiej/v19n1/1a0159x.png" alt="25  "  class="math" > is the number of possible combinations of maximum and minimum thresholds. 
Following the threshold choices of related works, where we can find maximum thresholds such as 50% <span class="cite">[<a  href="#XSALAH:2013">47</a><a id="br47">]</a></span>, 70% <span class="cite">[<a  href="#XPAPER139:WESAN:2011">30</a><a id="br30">]</a></span><span class="cite">[<a  href="#XSALAH:2013">47</a><a id="br47">]</a></span>, 75% <span class="cite">[<a  href="#XVARELA:CARLOS:2012">31</a><a id="br31">]</a></span>, 80% <span class="cite">[<a  href="#XPAPER11:SULELMAN:2012">33</a><a id="br33">]</a></span><span class="cite">[<a  href="#XSALAH:2013">47</a><a id="br47">]</a></span> and 90% <span class="cite">[<a  href="#XMATOS:2012">19</a><a id="br19">]</a></span><span class="cite">[<a  href="#XSALAH:2013">47</a><a id="br47">]</a></span>, we adopted 70%, 75%, 80%, 85% and 90% as maximum values, while the minimum thresholds were 30%, 35%, 40%, 45% and 50%. In particular, the range for the minimum threshold is based on the work of Haidari et al.&#x00A0;<span class="cite">[<a  href="#XSALAH:2013">47</a><a id="br47">]</a></span>, who propose a theoretical analysis with queuing theory to observe cloud elasticity performance.    <table  class="equation"><tr><td>    <center class="math-display" > <img  src="/img/revistas/cleiej/v19n1/1a0160x.png" alt="Energy = &#x2211;_{i=1}^{n} (i &#x00D7; T(i)) " class="math-display" ><a   id="x1-6006r5"></a></center></td><td class="equation-label">(5)</td></tr></table> <!--l. 57-->    <p class="nopar" >    <table  class="equation"><tr><td>    <center class="math-display" > <img  src="/img/revistas/cleiej/v19n1/1a0161x.png" alt="Cost = Energy &#x00D7; AppTime " class="math-display" ><a   id="x1-6007r6"></a></center></td><td class="equation-label">(6)</td></tr></table> <!--l. 62-->    <p class="nopar" > <!--l. 64-->    ]]></body>
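The 25 tested configurations are simply the Cartesian product of the two threshold sets; a one-line enumeration makes the count explicit:

```python
from itertools import product

# The 25 threshold configurations evaluated in the experiments: every
# pairing of the five maximum and five minimum CPU-load thresholds.
maximum_thresholds = [70, 75, 80, 85, 90]
minimum_thresholds = [30, 35, 40, 45, 50]

pairs = list(product(maximum_thresholds, minimum_thresholds))
print(len(pairs))  # 25
```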
<body><![CDATA[<p class="indent" >   Besides the performance perspective, we also analyze the energy consumption in order to perceive the impact of the elasticity feature. In other words, we do not want to reduce the application time by using a large number of resources, thus consuming much more energy. Empirically, we are using Equation <a  href="#x1-6006r5">5<!--tex4ht:ref: equ_energy --></a> for estimating the energy consumption. This equation relies on the close relationship between energy and resource consumption as presented by Orgerie et al.&#x00A0;<span class="cite">[<a  href="#XOrgerie2014">48</a><a id="br48">]</a></span>. In this context, we use Equation <a  href="#x1-6006r5">5<!--tex4ht:ref: equ_energy --></a> to create an index of the resource usage. Here, we use the same ideas of the pricing model employed by Amazon and Microsoft; they consider the number of VMs at each unit of time, which is normally set to an hour. <img  src="/img/revistas/cleiej/v19n1/1a0162x.png" alt="T(i)  "  class="math" > presents the time that the application executed with <img  src="/img/revistas/cleiej/v19n1/1a0163x.png" alt="i  "  class="math" > virtual machines. Therefore, our unit of time depends on the measure of <img  src="/img/revistas/cleiej/v19n1/1a0164x.png" alt="T  "  class="math" > (in minutes, seconds or milliseconds, and so on) in which the final intent is to sum the number of VMs used at each unit of time. For example, considering a unit of time in minutes and an application completion time of 7 min, we could have the following histogram: 1 min (2 VMs), 1 min (2 VMs), 1 min (4 VMs), 1 min (4 VMs), 1 min (2 VMs), 1 min (2 VMs) and 1 min (2 VMs). Finally, 2 VMs were used in 5 min (partial resource equal to 10) and 4 VMs in 2 min (partial resource equal to 8), summing to 18 for Equation <a  href="#x1-6006r5">5<!--tex4ht:ref: equ_energy --></a>. 
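The histogram example above can be reproduced as a short sketch of Equations 5 and 6 (the encoding of the histogram as (VMs, minutes) pairs is an assumption for the example):

```python
# Sketch of the energy index of Equation 5 and the cost of Equation 6,
# reproducing the 7-minute histogram example from the text. The
# histogram encoding as (vms, minutes) pairs is an assumption.

def energy(histogram):
    """Equation 5: sum of i x T(i) over each infrastructure size i used."""
    return sum(vms * minutes for vms, minutes in histogram)

def cost(histogram):
    """Equation 6: the energy index multiplied by the application time."""
    app_time = sum(minutes for _, minutes in histogram)
    return energy(histogram) * app_time

# 2 VMs during 5 minutes and 4 VMs during 2 minutes, as in the example
hist = [(2, 5), (4, 2)]
print(energy(hist))  # 18
print(cost(hist))    # 126 (= 18 x 7)
```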
Thus, Equation <a  href="#x1-6006r5">5<!--tex4ht:ref: equ_energy --></a> analyzes the use from <img  src="/img/revistas/cleiej/v19n1/1a0165x.png" alt="1  "  class="math" > to <img  src="/img/revistas/cleiej/v19n1/1a0166x.png" alt="n  "  class="math" > VMs, considering the partial execution time on each infrastructure size. To the best of our understanding, the energy index here is relevant for comparison among different elastic-enabled executions. Figure&#x00A0;<a  href="#x1-60089">9<!--tex4ht:ref: fig:energy --></a> represents the energy consumption as an area calculation. Taking into account both our measure of energy (see Equation <a  href="#x1-6006r5">5<!--tex4ht:ref: equ_energy --></a>) and the application time, we can evaluate the cost <img  src="/img/revistas/cleiej/v19n1/1a0167x.png" alt="Cost  "  class="math" > by multiplying                                                                                                                                                                                     both aforementioned values (see Equation <a  href="#x1-6007r6">6<!--tex4ht:ref: equ_cost --></a>). The final idea consists of obtaining a better cost when enabling the AutoElastic&#8217;s elasticity feature in a comparison with an execution of a fixed number of processes. <!--l. 66-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-60089"></a>                                                                                                                                                                                      <!--l. 
68-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f9.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;9: </span><span   class="content">Graphic representation of energy consumption using Equation&#x00A0;<a  href="#x1-6006r5">5<!--tex4ht:ref: equ_energy --></a></span></div><!--tex4ht:label?: x1-60089 -->                                                                                                                                                                                     <!--l. 71-->    <p class="indent" >   </div><hr class="endfigure">    <h3 class="sectionHead"><span class="titlemark">5   </span> <a   id="x1-70005"></a>Discussing the Results</h3> <!--l. 8-->    <p class="noindent" >This section shows the results obtained when executing the parallel application in the cloud considering two scenarios: <!--l. 10-->    <p class="indent" >     <dl class="enumerate"><dt class="enumerate">   (i) </dt><dd  class="enumerate">using AutoElastic, enabling its self-organizing feature when dealing with elasticity;     </dd><dt class="enumerate">  (ii) </dt><dd  class="enumerate">using  AutoElastic,  considering  scheduling  computation,  but  without  taking  any  elasticity  action.     Particularly, the values of thresholds are not important in this second scenario since infrastructure     reconfiguration does not take place.     </dd></dl> <!--l. 18-->    <p class="indent" >   Subsection&#x00A0;<a  href="#x1-80005.1">5.1<!--tex4ht:ref: subsec51 --></a> presents an analysis of the impact on the application time when varying the threshold configuration. The results concerning energy consumption and cost to execute the application are presented in the Subsection&#x00A0;<a  href="#x1-90005.2">5.2<!--tex4ht:ref: subsec52 --></a>. <!--l. 21-->    ]]></body>
<body><![CDATA[<p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">5.1   </span> <a   id="x1-80005.1"></a>Impact of the Thresholds on Application Time</h4> <!--l. 24-->    <p class="noindent" >Aiming at analyzing the impact and possible trends of the employed thresholds on application time, we organized the results in Figures&#x00A0;<a  href="#x1-800110">10<!--tex4ht:ref: fig_tendencia_2_nodes --></a> and&#x00A0;<a  href="#x1-800211">11<!--tex4ht:ref: fig_tendencia_4_nodes --></a>. Both present the final execution time when elasticity is enabled, for each pair of maximum and minimum thresholds. From the performance perspective, we observed that the time does not change significantly when varying the minimum threshold (see the lower part of Figures&#x00A0;<a  href="#x1-800110">10<!--tex4ht:ref: fig_tendencia_2_nodes --></a> and&#x00A0;<a  href="#x1-800211">11<!--tex4ht:ref: fig_tendencia_4_nodes --></a>). On the other hand, the maximum threshold directly impacts the application performance: the larger the maximum threshold, the longer the execution time. The lack of reactivity is the main cause of this situation, <img  src="/img/revistas/cleiej/v19n1/1a0168x.png" alt="i.e  "  class="math" >, the application executes in an overloaded state for a longer period when evaluating thresholds close to 90%. This behavior is particularly evident when using the Ascending function. In this case, the workload grows continuously, so a threshold close to 70% can allocate more resources faster, also relieving the system CPU load more quickly. <!--l.
26-->    <p class="indent" >   <a   id="x1-800110"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                      <img  src="/img/revistas/cleiej/v19n1/1a01f10.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;10: </span><span   class="content">Trend of application execution time starting with 2 nodes varying the upper (a) and the lower (b) thresholds</span></div><!--tex4ht:label?: x1-800110 -->                                                                                                                                                                                        </div><hr class="endfloat">                                                                                                                                                                                     <!--l. 36-->    <p class="indent" >   <a   id="x1-800211"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                      <img  src="/img/revistas/cleiej/v19n1/1a01f11.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;11: </span><span   class="content">Trend of application execution time starting with 4 nodes varying the upper (a) and the lower (b) thresholds</span></div><!--tex4ht:label?: x1-800211 -->                                                                                                                                                                                        </div><hr class="endfloat">        ]]></body>
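The reactive threshold rule discussed above can be condensed into a minimal sketch (the function name and the single action per observation are assumptions, not AutoElastic's actual interface):

```python
# Minimal sketch of a reactive threshold rule: at each monitoring step
# the observed CPU load is compared against the maximum and minimum
# thresholds. Names and the one-action-per-observation grain are
# illustrative assumptions, not AutoElastic's implementation.

def elasticity_action(cpu_load, upper=70.0, lower=30.0):
    """Return 'allocate', 'deallocate' or 'keep' for one observation."""
    if cpu_load > upper:
        return "allocate"    # enlarge the infrastructure
    if cpu_load < lower:
        return "deallocate"  # consolidate underused resources
    return "keep"

# An ascending load reacts earlier with upper = 70 than with upper = 90,
# matching the trend observed in Figures 10 and 11.
loads = [50, 65, 75, 85, 95]
print([elasticity_action(l, upper=70) for l in loads])
print([elasticity_action(l, upper=90) for l in loads])
```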
<body><![CDATA[<div class="table"> <!--l. 1-->    <p class="indent" >   <a   id="x1-80034"></a><hr class="float">    <div class="float"  >      <div class="caption"  ><span class="id">Table&#x00A0;4: </span><span   class="content">The best and the worst results considering the time (in seconds) to execute the application of each load in all scenarios. Here, Energy and Cost refer to Equations 5 and 6, respectively</span></div><!--tex4ht:label?: x1-80034 --> <img  src="/img/revistas/cleiej/v19n1/1a01t4.jpg" alt="PIC"   >      </div><hr class="endfloat">    </div> <!--l. 48-->    <p class="indent" >   <a   id="x1-800412"></a><hr class="float">    <div class="float"  >      <img  src="/img/revistas/cleiej/v19n1/1a01f12.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;12: </span><span   class="content">History of resource utilization of the best results of Table 4 when starting with 2 nodes. The upper part refers to an execution without elasticity, while the bottom one considers this capability. 
Each column denotes a particular load pattern: (a) ascending; (b) constant; (c) descending; (d) wave. In the bottom, we highlight the functioning of the ascending (e) and descending (f) load patterns. These two graphs are pertinent to see the relation between the available CPU cores and the used CPU load</span></div><!--tex4ht:label?: x1-800412 -->                                                                                                                                                                                        </div><hr class="endfloat">                                                                                                                                                                                     <!--l. 57-->    <p class="indent" >   <a   id="x1-800513"></a><hr class="float">    <div class="float"  >                                                                                                                                                                                      <img  src="/img/revistas/cleiej/v19n1/1a01f13.jpg" alt="PIC"   >     ]]></body>
<body><![CDATA[<br>     <div class="caption"  ><span class="id">Figure&#x00A0;13: </span><span   class="content">History of resource utilization of the best results of Table 4 when starting with 4 nodes. The upper part refers to an execution without elasticity, while the bottom one considers this capability. Each column represents a studied load pattern: (a) ascending; (b) constant; (c) descending; (d) wave. At the bottom, we highlight the functioning of the ascending (e) and descending (f) load patterns. These two graphs are pertinent to see the relation between the available CPU cores and the used CPU load</span></div><!--tex4ht:label?: x1-800513 -->      </div><hr class="endfloat"> <!--l. 65-->    <p class="indent" >   Table <a  href="#x1-80034">4<!--tex4ht:ref: tab_best_worst_execution_time --></a> presents the results for the best and the worst execution times when varying the load patterns and the scenarios. In addition, Figures <a  href="#x1-800412">12<!--tex4ht:ref: fig_history_execution_of_best_time_2_nodes --></a> and <a  href="#x1-800513">13<!--tex4ht:ref: fig_history_execution_of_best_time_4_nodes --></a> illustrate the performance for the column of the best results under the two aforementioned scenarios. Each node starts with two VMs, each one running a slave process on one of the two cores of the node. Considering that the application is CPU-bound, the execution with 4 nodes outperforms the tests with 2 nodes by about 65% on average when elasticity is disabled. This also explains the better results with elasticity when considering the starting configurations with 2 and 4 nodes. In other words, the possibility of changing the number of resources has a stronger impact when starting from a more limited configuration. 
For example, Figure&#x00A0;<a  href="#x1-800412">12<!--tex4ht:ref: fig_history_execution_of_best_time_2_nodes --></a> (a) shows the Ascending function and the increment of up to 12 CPUs (6 nodes) with elasticity, denoting a performance gain of up to 31%. Here, we can observe that the used CPU quickly reaches the total allocated, demonstrating the application&#8217;s CPU-bound character. Considering this figure and the Descending function, we allocate up to 10 CPUs that become underutilized, being deallocated close to the end of the application. This occurs because AutoElastic does not work with prior knowledge of the application, acting only on data captured at runtime.    <h4 class="subsectionHead"><span class="titlemark">5.2   </span> <a   id="x1-90005.2"></a>Energy Consumption and Cost</h4> <!--l. 71-->    <p class="noindent" >Figures&#x00A0;<a  href="#x1-900214">14<!--tex4ht:ref: fig_time_profile_of_best_and_worst_time_2_nodes --></a> and <a  href="#x1-900315">15<!--tex4ht:ref: fig_time_profile_of_best_and_worst_time_4_nodes --></a> present an application execution profile, depicting the mapping of VMs for the best and worst results (see Table <a  href="#x1-80034">4<!--tex4ht:ref: tab_best_worst_execution_time --></a>). The start with 2 nodes (4 VMs) does not present elasticity actions in the worst case, which uses a maximum threshold equal to 90%. This explains the results in part (b) of Figure <a  href="#x1-900214">14<!--tex4ht:ref: fig_time_profile_of_best_and_worst_time_2_nodes --></a>. Still starting with 2 nodes, a threshold of 70% is responsible for allocating up to 12 VMs in the Ascending load, as we can observe in Figure <a  href="#x1-900214">14<!--tex4ht:ref: fig_time_profile_of_best_and_worst_time_2_nodes --></a> (a). 
Contrary to this situation, Figure <a  href="#x1-900315">15<!--tex4ht:ref: fig_time_profile_of_best_and_worst_time_4_nodes --></a> shows less variation in the resource configuration, since 4 nodes (or 8 VMs) are maintained during most of the execution. As an exception, we can observe the Ascending function as the worst case, where a maximum threshold equal to 90% is employed. The application starts by reducing the number of VMs from 8 to 6 and then to 4, using this last value during 96% of the execution. Although the load does not exceed 90%, and so the infrastructure is not enlarged, allocating more resources in this situation could help both to balance the load among the CPUs and to reduce the application time. <!--l. 75-->    <p class="indent" >   Figure&#x00A0;<a  href="#x1-900416">16<!--tex4ht:ref: fig_efficiency_of_best_time_2_4_nodes --></a> illustrates the amount of allocated and used CPU considering the best cases of Table&#x00A0;<a  href="#x1-80034">4<!--tex4ht:ref: tab_best_worst_execution_time --></a>. All loads used less CPU when elasticity was enabled, except the Constant load when starting with 4 nodes, in which we achieved the same value in both cases. However, using 2 nodes as the starting configuration implies allocating more CPU when elasticity is enabled. This behavior was expected for two reasons: (i) AutoElastic does not use prior information about the application behavior; (ii) after allocating resources, the overall load decreases, implying a better load balance but a worse resource utilization. <!--l. 77-->    <p class="indent" >   Considering Figures&#x00A0;<a  href="#x1-900214">14<!--tex4ht:ref: fig_time_profile_of_best_and_worst_time_2_nodes --></a> (a) and <a  href="#x1-900315">15<!--tex4ht:ref: fig_time_profile_of_best_and_worst_time_4_nodes --></a> (a), we computed the energy consumption as defined in Equation 5. Considering this metric and the application time, we can compute the cost as previously described in Equation 6. 
The final idea regarding the cost is summarized in Inequality <a  href="#x1-9001r7">7<!--tex4ht:ref: ineq --></a>, where the plan is either to reduce the time without paying a large resource penalty for this, or to present a time slightly longer than a non-elastic execution while improving resource utilization. In this inequality, <img  src="/img/revistas/cleiej/v19n1/1a0169x.png" alt="s1  "  class="math" > denotes the scenario with AutoElastic and elasticity, while <img  src="/img/revistas/cleiej/v19n1/1a0170x.png" alt="s2  "  class="math" > denotes the scenario with the same middleware but without resource reconfiguration. Figure&#x00A0;<a  href="#x1-900517">17<!--tex4ht:ref: fig_cost_of_best_time_2_4_nodes --></a> illustrates the cost results when starting with 2 and 4 nodes. Elasticity is responsible for the better results in the former case over all evaluated loads. On the other hand, the lack of reactivity and the static thresholds were the main reasons for the results in the latter case. In other words, the application executes inefficiently during a long period until reaching a threshold, only then triggering a resource reconfiguration. Predicting the application behavior and using adaptable thresholds could help improve performance when using configurations with a reduced computation grain (an index that can be estimated by dividing the work involved in computational tasks by the cost of network communication).    <table  class="equation"><tr><td>    <center class="math-display" > <img  src="/img/revistas/cleiej/v19n1/1a0171x.png" alt="Energy_{s1} &#x00D7; AppTime_{s1} &#x003C; Energy_{s2} &#x00D7; AppTime_{s2} " class="math-display" ><a   id="x1-9001r7"></a></center></td><td class="equation-label">(7)</td></tr></table> <!--l. 82-->    <p class="nopar" > <!--l.
85-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-900214"></a>                                                                                                                                                                                      <!--l. 88-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f14.jpg" alt="PIC"   >     ]]></body>
<body><![CDATA[<br>     <div class="caption"  ><span class="id">Figure&#x00A0;14: </span><span   class="content">Profile of the best (a) and the worst (b) application execution time starting with 2 nodes</span></div><!--tex4ht:label?: x1-900214 -->                                                                                                                                                                                     <!--l. 91-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 93-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-900315"></a>                                                                                                                                                                                      <!--l. 96-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f15.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;15: </span><span   class="content">Profile of the best (a) and the worst (b) application execution time starting with 4 nodes</span></div><!--tex4ht:label?: x1-900315 -->                                                                                                                                                                                     <!--l. 99-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 102-->    <p class="indent" >   <hr class="figure">    ]]></body>
<body><![CDATA[<div class="figure"  >                                                                                                                                                                                     <a   id="x1-900416"></a>                                                                                                                                                                                      <!--l. 105-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f16.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;16: </span><span   class="content">Efficiency of the best application execution time: (a) starting with 2 nodes and (b) starting with 4 nodes</span></div><!--tex4ht:label?: x1-900416 -->                                                                                                                                                                                     <!--l. 108-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 110-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-900517"></a>                                                                                                                                                                                      <!--l. 
113-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a01f17.jpg" alt="PIC"   >     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;17: </span><span   class="content">Cost of the best application execution time in accordance with Equation 6</span></div><!--tex4ht:label?: x1-900517 -->                                                                                                                                                                                     <!--l. 116-->    ]]></body>
<body><![CDATA[<p class="indent" > </div><hr class="endfigure"> <h3 class="sectionHead"><span class="titlemark">6   </span> <a id="x1-100006"></a>Conclusion</h3> <!--l. 7--> <p class="noindent" >This article presented a model named AutoElastic and analyzed its behavior in the HPC scope when varying both the cloud elasticity thresholds and the application&#8217;s load pattern. Considering the problem statements (listed in Section <a  href="#x1-30003">3<!--tex4ht:ref: section:autoelastic --></a>), AutoElastic acts at the middleware level, targeting message-passing applications with explicit parallelism that use send/receive and accept/connect directives. We adopted this design decision because it can be easily implemented in MPI 2, which offers a sockets-based programming style for dynamic process creation. Moreover, considering the time requirements of HPC applications, we modeled a framework that enables a novel feature denoted asynchronous elasticity, in which VM transfer or consolidation happens in parallel with the application execution. <!--l. 10--> <p class="indent" > The main contribution of this work is the joint analysis of an elasticity model and an HPC application when varying both elasticity parameters and load patterns. The discussion here can therefore help cloud programmers tune elasticity thresholds to obtain better performance and resource utilization on CPU-driven demands. In short, the gain obtained with elasticity depends on (i) the computational grain of the application and (ii) the reactivity of the elasticity mechanism. We showed that a CPU-bound application can execute faster with elasticity than in scenarios that use a fixed and/or reduced number of resources. For this kind of application, we observed that a maximum threshold (the one that drives infrastructure enlargement) close to 100% is a poor choice, because the application runs unnecessarily long on overloaded resources before that threshold level is reached. In our tests, the lowest value evaluated for this parameter was 70%, which produced the best results across all load patterns. <!--l. 14--> <p class="indent" > Future research includes the study of network, storage and memory elasticity, so that these capabilities can be employed in the next versions of AutoElastic. Moreover, we plan to develop a hybrid proactive and reactive elasticity strategy, joining ideas from reinforcement learning, neural networks and/or time-series analysis. Future work also includes investigating SLA design: today we consider only the maximum number of VMs, but metrics such as time, energy and cost can also be combined in this context. The study of the elasticity grain and the execution of highly irregular applications are also planned; the grain, in particular, refers to the number of nodes and VMs involved in each elasticity action. Regarding the target application, although the numerical integration application is useful for evaluating the AutoElastic ideas, we intend to explore elasticity with highly irregular applications&#x00A0;<span class="cite">[<a  href="#XLIU:DONG:2014">49</a><a id="br49">]</a></span>. We also plan to extend AutoElastic to cover elasticity in other HPC programming models, such as divide-and-conquer, pipeline and bulk-synchronous parallel (BSP). In addition, the current article focused mainly on the impact of the lower and upper thresholds on application performance, so future work also includes 3D graphs to demonstrate their impact from the energy and cost perspectives. <!--l. 
84-->    <p class="noindent" >    <h3 class="likesectionHead"><a   id="x1-110006"></a>Acknowledgment</h3> <!--l. 85-->    <p class="noindent" >The authors would like to thank the following Brazilian agencies: CNPq, CAPES and FAPERGS. <!--l. 2-->    <p class="noindent" >    <h3 class="likesectionHead"><a   id="x1-120006"></a>References</h3> <!--l. 2-->    <p class="noindent" >         <div class="thebibliography">         <p class="bibitem" ><span class="biblabel">   [<a href="#br1">1</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XREF2014-3"></a>M.&#x00A0;Mohan&#x00A0;Murthy, H.&#x00A0;Sanjay, and J.&#x00A0;Anand, &#8220;Threshold based auto scaling of virtual machines     in cloud environment,&#8221; in <span  class="cmti-10">Network and Parallel Computing</span>, ser. Lecture Notes in Computer Science,     C.-H. Hsu, X.&#x00A0;Shi, and V.&#x00A0;Salapura, Eds.   Springer Berlin Heidelberg, 2014, vol. 8707, pp. 247&#8211;256.     </p>         ]]></body>
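The reactive, threshold-based scaling rule discussed in the conclusion above can be sketched as follows. This is a minimal illustration, not AutoElastic's actual implementation: the class and method names are hypothetical, and the defaults merely echo the values reported in the text (an upper threshold of 70% gave the best results in the tests).

```python
# Hypothetical sketch of a reactive, threshold-based elasticity controller.
# Names and defaults are illustrative; AutoElastic's real API differs.
from collections import deque


class ThresholdElasticityController:
    """Decide scaling actions from a moving average of CPU load samples."""

    def __init__(self, lower=0.30, upper=0.70, window=5):
        self.lower = lower              # below this average: consolidate a VM
        self.upper = upper              # above this average: allocate a VM
        self.samples = deque(maxlen=window)

    def decide(self, cpu_load):
        """Return 'scale_out', 'scale_in' or 'none' for a new load sample."""
        self.samples.append(cpu_load)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.upper:
            # With asynchronous elasticity, the new VM would be launched in
            # the background while the application keeps running.
            return "scale_out"
        if avg < self.lower:
            return "scale_in"
        return "none"


if __name__ == "__main__":
    ctrl = ThresholdElasticityController()
    for load in (0.50, 0.80, 0.90, 0.95):
        print(load, ctrl.decide(load))
```

The moving average smooths transient spikes, which is one simple way to obtain the "elasticity reactivity" trade-off mentioned above: a larger window reacts more slowly but avoids oscillating allocations.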
<body><![CDATA[<p class="bibitem" ><span class="biblabel">   [<a href="#br2">2</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XREF2014-4"></a>J.&#x00A0;Bao,  Z.&#x00A0;Lu,  J.&#x00A0;Wu,  S.&#x00A0;Zhang,  and  Y.&#x00A0;Zhong,  &#8220;Implementing  a  novel  load-aware  auto  scale     scheme for private cloud resource management platform,&#8221; in <span  class="cmti-10">Network Operations and Management</span>     <span  class="cmti-10">Symposium (NOMS), 2014 IEEE</span>, May 2014, pp. 1&#8211;4.                                                                                                                                                                                         </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br3">3</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XREF2014-1"></a>S.&#x00A0;Sah and S.&#x00A0;Joshi, &#8220;Scalability of efficient and dynamic workload distribution in autonomic cloud     computing,&#8221; in <span  class="cmti-10">Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International</span>     <span  class="cmti-10">Conference on</span>, Feb 2014, pp. 12&#8211;18.     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br4">4</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XWEBER:2014"></a>A.&#x00A0;Weber, N.&#x00A0;R. Herbst, H.&#x00A0;Groenda, and S.&#x00A0;Kounev, &#8220;Towards a resource elasticity benchmark     for cloud environments,&#8221; in <span  class="cmti-10">Proceedings of the 2nd International Workshop on Hot Topics in Cloud</span>     <span  class="cmti-10">Service Scalability (HotTopiCS 2014), co-located with the 5th ACM/SPEC International Conference on</span>     <span  class="cmti-10">Performance Engineering (ICPE 2014)</span>.   ACM, March 2014.     
</p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br5">5</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCLAUS:2014"></a>P.&#x00A0;Jamshidi, A.&#x00A0;Ahmad, and C.&#x00A0;Pahl, &#8220;Autonomic resource provisioning for cloud-based software,&#8221;     in  <span  class="cmti-10">Proceedings  of  the  9th  International  Symposium  on  Software  Engineering  for  Adaptive  and</span>     <span  class="cmti-10">Self-Managing Systems</span>, ser. SEAMS 2014.   New York, NY, USA: ACM, 2014, pp. 95&#8211;104. [Online].     Available: <a  href="http://doi.acm.org/10.1145/2593929.2593940" class="url" >http://doi.acm.org/10.1145/2593929.2593940</a>     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br6">6</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XPAPER4:GUO:2012"></a>Y.&#x00A0;Guo, M.&#x00A0;Ghanem, and R.&#x00A0;Han, &#8220;Does the cloud need new algorithms? an introduction to elastic     algorithms,&#8221; in <span  class="cmti-10">Cloud Computing Technology and Science (CloudCom), 2012 IEEE 4th International</span>     <span  class="cmti-10">Conference on</span>, December 2012, pp. 66 &#8211;73.     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br7">7</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XPAPER10:GALANTE:2012"></a>G.&#x00A0;Galante and L.&#x00A0;C. E.&#x00A0;d. Bona, &#8220;A survey on cloud computing elasticity,&#8221; in <span  class="cmti-10">Proceedings of</span>     <span  class="cmti-10">the 2012 IEEE/ACM Fifth International Conference on Utility and Cloud Computing</span>, ser. UCC &#8217;12.     Washington, DC, USA: IEEE Computer Society, 2012, pp. 263&#8211;270.     
</p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br8">8</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XANTON:2012"></a>A.&#x00A0;Beloglazov,  J.&#x00A0;Abawajy,  and  R.&#x00A0;Buyya,  &#8220;Energy-aware  resource  allocation  heuristics  for     efficient  management  of  data  centers  for  cloud  computing,&#8221;  <span  class="cmti-10">Future Gener. Comput. Syst.</span>,  vol.&#x00A0;28,     no.&#x00A0;5, pp. 755&#8211;768, May 2012. [Online]. Available: <a  href="http://dx.doi.org/10.1016/j.future.2011.04.017" class="url" >http://dx.doi.org/10.1016/j.future.2011.04.017</a>     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br9">9</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XPAPER6:BICER:2011"></a>A.&#x00A0;Raveendran, T.&#x00A0;Bicer, and G.&#x00A0;Agrawal, &#8220;A framework for elastic execution of existing mpi     programs,&#8221; in <span  class="cmti-10">Proceedings of the 2011 IEEE Int. Symposium on Parallel and Distributed Processing</span>     <span  class="cmti-10">Workshops and PhD Forum</span>, ser. IPDPSW &#8217;11.   Washington, DC, USA: IEEE Computer Society, 2011,     pp. 940&#8211;947.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br10">10</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XREF2014-2"></a>P.&#x00A0;Jamshidi, A.&#x00A0;Ahmad, and C.&#x00A0;Pahl, &#8220;Autonomic resource provisioning for cloud-based software,&#8221;     in  <span  class="cmti-10">Proceedings  of  the  9th  International  Symposium  on  Software  Engineering  for  Adaptive  and</span>     <span  class="cmti-10">Self-Managing Systems</span>, ser. SEAMS 2014.   New York, NY, USA: ACM, 2014, pp. 95&#8211;104. [Online].     
Available: <a  href="http://doi.acm.org/10.1145/2593929.2593940" class="url" >http://doi.acm.org/10.1145/2593929.2593940</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br11">11</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCOUTINHO:2014"></a>E.&#x00A0;F. Coutinho, G.&#x00A0;Paillard, and J.&#x00A0;N. de&#x00A0;Souza, &#8220;Performance analysis on scientific computing     and cloud computing environments,&#8221; in <span  class="cmti-10">Proceedings of the 7th Euro American Conference on Telematics</span>     <span  class="cmti-10">and Information Systems</span>, ser. EATIS &#8217;14.   New York, NY, USA: ACM, 2014, pp. 5:1&#8211;5:6. [Online].     Available: <a  href="http://doi.acm.org/10.1145/2590651.2590656" class="url" >http://doi.acm.org/10.1145/2590651.2590656</a>                                                                                                                                                                                         </p>         ]]></body>
<body><![CDATA[<p class="bibitem" ><span class="biblabel">  [<a href="#br12">12</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XEXPOSITO:2013"></a>R.&#x00A0;R. Expósito, G.&#x00A0;L. Taboada, S.&#x00A0;Ramos, J.&#x00A0;Touriño, and R.&#x00A0;Doallo, &#8220;Evaluation of messaging     middleware for high-performance cloud computing,&#8221; <span  class="cmti-10">Personal Ubiquitous Comput.</span>, vol.&#x00A0;17, no.&#x00A0;8, pp.     1709&#8211;1719, Dec. 2013. [Online]. Available: <a  href="http://dx.doi.org/10.1007/s00779-012-0605-3" class="url" >http://dx.doi.org/10.1007/s00779-012-0605-3</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br13">13</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCAI:BIN:2012"></a>B.&#x00A0;Cai, F.&#x00A0;Xu, F.&#x00A0;Ye, and W.&#x00A0;Zhou, &#8220;Research and application of migrating legacy systems to the     private cloud platform with cloudstack,&#8221; in <span  class="cmti-10">Automation and Logistics (ICAL), 2012 IEEE International</span>     <span  class="cmti-10">Conference on</span>, August 2012, pp. 400 &#8211;404.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br14">14</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XDEJAN:MONTERO:RUBEN:2011"></a>D.&#x00A0;Milojicic, I.&#x00A0;M. Llorente, and R.&#x00A0;S. Montero, &#8220;Opennebula: A cloud management tool,&#8221; <span  class="cmti-10">Internet</span>     <span  class="cmti-10">Computing, IEEE</span>, vol.&#x00A0;15, no.&#x00A0;2, pp. 11 &#8211;14, March-April 2011.     
</p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br15">15</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XWEN:GU:LI:GAO:2012"></a>X.&#x00A0;Wen, G.&#x00A0;Gu, Q.&#x00A0;Li, Y.&#x00A0;Gao, and X.&#x00A0;Zhang, &#8220;Comparison of open-source cloud management     platforms: Openstack and opennebula,&#8221; in <span  class="cmti-10">Fuzzy Systems and Knowledge Discovery (FSKD), 2012 9th</span>     <span  class="cmti-10">International Conference on</span>, May 2012, pp. 2457 &#8211;2461.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br16">16</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCHIU:AGRAWAL:2010"></a>D.&#x00A0;Chiu and G.&#x00A0;Agrawal, &#8220;Evaluating caching and storage options on the amazon web services     cloud,&#8221; in <span  class="cmti-10">Grid Computing (GRID), 2010 11th IEEE/ACM International Conference on</span>, October 2010,     pp. 17 &#8211;24.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br17">17</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XMING:MAO:JIE:2010"></a>M.&#x00A0;Mao, J.&#x00A0;Li, and M.&#x00A0;Humphrey, &#8220;Cloud auto-scaling with deadline and budget constraints,&#8221; in     <span  class="cmti-10">Grid Computing (GRID), 2010 11th IEEE/ACM International Conference on</span>, October 2010, pp. 41     &#8211;48.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br18">18</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XMARTIN:POLETTI:2011"></a>P.&#x00A0;Martin, A.&#x00A0;Brown, W.&#x00A0;Powley, and J.&#x00A0;L. Vazquez-Poletti, &#8220;Autonomic management of elastic     services in the cloud,&#8221; in <span  class="cmti-10">Proceedings of the 2011 IEEE Symposium on Computers and Communications</span>,     ser. ISCC &#8217;11.   Washington, DC, USA: IEEE Computer Society, 2011, pp. 135&#8211;140.     
</p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br19">19</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XMATOS:2012"></a>L.&#x00A0;Beernaert,  M.&#x00A0;Matos,  R.&#x00A0;Vilaça,  and  R.&#x00A0;Oliveira,  &#8220;Automatic  elasticity  in  openstack,&#8221;  in     <span  class="cmti-10">Proceedings  of  the  Workshop  on  Secure  and  Dependable  Middleware  for  Cloud  Monitoring  and</span>     <span  class="cmti-10">Management</span>, ser. SDMCMM &#8217;12.   New York, NY, USA: ACM, 2012, pp. 2:1&#8211;2:6. [Online]. Available:     <a  href="http://doi.acm.org/10.1145/2405186.2405188" class="url" >http://doi.acm.org/10.1145/2405186.2405188</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br20">20</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XPAPER193:LIN:2011"></a>W.&#x00A0;Lin, J.&#x00A0;Z. Wang, C.&#x00A0;Liang, and D.&#x00A0;Qi, &#8220;A threshold-based dynamic resource allocation scheme     for cloud computing,&#8221; <span  class="cmti-10">Procedia Engineering</span>, vol.&#x00A0;23, no.&#x00A0;0, pp. 695 &#8211; 703, 2011, pEEA 2011.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br21">21</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCLOUDSIM:2011"></a>R.&#x00A0;N.  Calheiros,  R.&#x00A0;Ranjan,  A.&#x00A0;Beloglazov,  C.&#x00A0;A.&#x00A0;F.  De&#x00A0;Rose,  and  R.&#x00A0;Buyya,  &#8220;Cloudsim:  a     toolkit  for  modeling  and  simulation  of  cloud  computing  environments  and  evaluation  of  resource     provisioning algorithms,&#8221; <span  class="cmti-10">Software: Practice and Experience</span>, vol.&#x00A0;41, no.&#x00A0;1, pp. 23&#8211;50, 2011. [Online].     
Available: <a  href="http://dx.doi.org/10.1002/spe.995" class="url" >http://dx.doi.org/10.1002/spe.995</a>                                                                                                                                                                                         </p>         ]]></body>
<body><![CDATA[<p class="bibitem" ><span class="biblabel">  [<a href="#br22">22</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XHAN:MOUSTAFA:2012"></a>R.&#x00A0;Han,  L.&#x00A0;Guo,  M.&#x00A0;M.  Ghanem,  and  Y.&#x00A0;Guo,  &#8220;Lightweight  resource  scaling  for  cloud     applications,&#8221; <span  class="cmti-10">Cluster Computing and the Grid, IEEE International Symposium on</span>, vol.&#x00A0;0, pp. 644&#8211;651,     2012.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br23">23</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XSPINNER:2014"></a>S.&#x00A0;Spinner, S.&#x00A0;Kounev, X.&#x00A0;Zhu, L.&#x00A0;Lu, M.&#x00A0;Uysal, A.&#x00A0;Holler, and R.&#x00A0;Griffith, &#8220;Runtime vertical     scaling of virtualized applications via online model estimation,&#8221; in <span  class="cmti-10">Proceedings of the 2014 IEEE 8th</span>     <span  class="cmti-10">International Conference on Self-Adaptive and Self-Organizing Systems (SASO)</span>, September 2014.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br24">24</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCHUANG:2013"></a>W.-C. Chuang, B.&#x00A0;Sang, S.&#x00A0;Yoo, R.&#x00A0;Gu, M.&#x00A0;Kulkarni, and C.&#x00A0;Killian, &#8220;Eventwave: Programming     model and runtime support for tightly-coupled elastic cloud applications,&#8221; in <span  class="cmti-10">Proceedings of the 4th</span>     <span  class="cmti-10">Annual Symposium on Cloud Computing</span>,  ser.  SOCC  &#8217;13.   New  York,  NY,  USA:  ACM,  2013,  pp.     21:1&#8211;21:16. [Online]. Available: <a  href="http://doi.acm.org/10.1145/2523616.2523617" class="url" >http://doi.acm.org/10.1145/2523616.2523617</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br25">25</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XMONG:2013"></a>J.&#x00A0;O.  
Gutierrez-Garcia  and  K.&#x00A0;M.  Sim,  &#8220;A  family  of  heuristics  for  agent-based  elastic  cloud     bag-of-tasks concurrent scheduling,&#8221; <span  class="cmti-10">Future Gener. Comput. Syst.</span>, vol.&#x00A0;29, no.&#x00A0;7, pp. 1682&#8211;1699, Sep.     2013. [Online]. Available: <a  href="http://dx.doi.org/10.1016/j.future.2012.01.005" class="url" >http://dx.doi.org/10.1016/j.future.2012.01.005</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br26">26</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XHAO:2013"></a>H.&#x00A0;Wei,   S.&#x00A0;Zhou,   T.&#x00A0;Yang,   R.&#x00A0;Zhang,   and   Q.&#x00A0;Wang,   &#8220;Elastic   resource   management   for     heterogeneous applications on paas,&#8221; in <span  class="cmti-10">Proceedings of the 5th Asia-Pacific Symposium on Internetware</span>,     ser.  Internetware  &#8217;13.    New  York,  NY,  USA:  ACM,  2013,  pp.  7:1&#8211;7:7.  [Online].  Available:     <a  href="http://doi.acm.org/10.1145/2532443.2532451" class="url" >http://doi.acm.org/10.1145/2532443.2532451</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br27">27</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XANIELLO:2014"></a>L.&#x00A0;Aniello, S.&#x00A0;Bonomi, F.&#x00A0;Lombardi, A.&#x00A0;Zelli, and R.&#x00A0;Baldoni, &#8220;An architecture for automatic     scaling of replicated services,&#8221; in <span  class="cmti-10">Networked Systems</span>, ser. Lecture Notes in Computer Science, G.&#x00A0;Noubir     and M.&#x00A0;Raynal, Eds.   Springer International Publishing, 2014, pp. 122&#8211;137.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br28">28</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XLEITE:2014"></a>A.&#x00A0;F.  Leite,  T.&#x00A0;Raiol,  C.&#x00A0;Tadonki,  M.&#x00A0;E.  M.&#x00A0;T.  Walter,  C.&#x00A0;Eisenbeis,  and  A.&#x00A0;C.  M.&#x00A0;a.&#x00A0;A.     
de&#x00A0;Melo,   &#8220;Excalibur:   An   autonomic   cloud   architecture   for   executing   parallel   applications,&#8221;     in   <span  class="cmti-10">Proceedings   of   the   Fourth   International   Workshop   on   Cloud   Data   and   Platforms</span>,     ser.   CloudDP   &#8217;14.    New   York,   NY,   USA:   ACM,   2014,   pp.   2:1&#8211;2:6.   [Online].   Available:     <a  href="http://doi.acm.org/10.1145/2592784.2592786" class="url" >http://doi.acm.org/10.1145/2592784.2592786</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br29">29</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCAMINO:JESUS:2011"></a>D.&#x00A0;Rajan, A.&#x00A0;Canino, J.&#x00A0;A. Izaguirre, and D.&#x00A0;Thain, &#8220;Converting a high performance application to     an elastic cloud application,&#8221; in <span  class="cmti-10">Proceedings of the 2011 IEEE Third International Conference on Cloud</span>     <span  class="cmti-10">Computing Technology and Science</span>, ser. CLOUDCOM &#8217;11.   Washington, DC, USA: IEEE Computer     Society, 2011, pp. 383&#8211;390.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br30">30</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XPAPER139:WESAN:2011"></a>W.&#x00A0;Dawoud, I.&#x00A0;Takouna, and C.&#x00A0;Meinel, &#8220;Elastic vm for cloud resources provisioning optimization,&#8221;     in <span  class="cmti-10">Advances in Computing and Communications</span>, ser. Communications in Computer and Information     Science, A.&#x00A0;Abraham, J.&#x00A0;Lloret&#x00A0;Mauri, J.&#x00A0;Buford, J.&#x00A0;Suzuki, and S.&#x00A0;Thampi, Eds.   Springer Berlin     Heidelberg, 2011, vol. 190, pp. 431&#8211;445.                                                                                                                                                                                         
</p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br31">31</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XVARELA:CARLOS:2012"></a>S.&#x00A0;Imai, T.&#x00A0;Chestna, and C.&#x00A0;A. Varela, &#8220;Elastic scalable cloud computing using application-level     migration,&#8221; in <span  class="cmti-10">Proceedings of the 2012 IEEE/ACM Fifth International Conference on Utility and Cloud</span>     <span  class="cmti-10">Computing</span>, ser. UCC &#8217;12.   Washington, DC, USA: IEEE Computer Society, 2012, pp. 91&#8211;98.     </p>         ]]></body>
<body><![CDATA[<p class="bibitem" ><span class="biblabel">  [<a href="#br32">32</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XPAPER18:2:MARIAN:2012"></a>M.&#x00A0;Mihailescu  and  Y.&#x00A0;M.  Teo,  &#8220;The  impact  of  user  rationality  in  federated  clouds,&#8221;  <span  class="cmti-10">Cluster</span>     <span  class="cmti-10">Computing and the Grid, IEEE International Symposium on</span>, vol.&#x00A0;0, pp. 620&#8211;627, 2012.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br33">33</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XPAPER11:SULELMAN:2012"></a>B.&#x00A0;Suleiman, &#8220;Elasticity economics of cloud-based applications,&#8221; in <span  class="cmti-10">Proceedings of the 2012 IEEE</span>     <span  class="cmti-10">Ninth International Conference on Services Computing</span>, ser. SCC &#8217;12.   Washington, DC, USA: IEEE     Computer Society, 2012, pp. 694&#8211;695.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br34">34</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XFETZER:2011"></a>T.&#x00A0;Knauth  and  C.&#x00A0;Fetzer,  &#8220;Scaling  non-elastic  applications  using  virtual  machines,&#8221;  in  <span  class="cmti-10">Cloud</span>     <span  class="cmti-10">Computing (CLOUD), 2011 IEEE International Conference on</span>, July 2011, pp. 468 &#8211;475.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br35">35</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XZON:YIN:2012"></a>X.&#x00A0;Zhang,   Z.-Y.   Shae,   S.&#x00A0;Zheng,   and   H.&#x00A0;Jamjoom,   &#8220;Virtual   machine   migration   in   an     over-committed cloud,&#8221; in <span  class="cmti-10">Network Operations and Management Symposium (NOMS), 2012 IEEE</span>,     April 2012, pp. 196 &#8211;203.     
</p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br36">36</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XRIGHI:2016"></a>R.&#x00A0;d.&#x00A0;R.&#x00A0;Righi, V.&#x00A0;F. Rodrigues, C.&#x00A0;A. da&#x00A0;Costa, G.&#x00A0;Galante, L.&#x00A0;C.&#x00A0;E. de&#x00A0;Bona, and T.&#x00A0;Ferreto,     &#8220;Autoelastic:  Automatic  resource  elasticity  for  high  performance  applications  in  the  cloud,&#8221;  <span  class="cmti-10">IEEE</span>     <span  class="cmti-10">Transactions on Cloud Computing</span>, vol.&#x00A0;4, no.&#x00A0;1, pp. 6&#8211;19, Jan 2016.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br37">37</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XASLAM:2011"></a>F.&#x00A0;Azmandian,  M.&#x00A0;Moffie,  J.&#x00A0;Dy,  J.&#x00A0;Aslam,  and  D.&#x00A0;Kaeli,  &#8220;Workload  characterization  at  the     virtualization layer,&#8221; in <span  class="cmti-10">Modeling, Analysis Simulation of Computer and Telecommunication Systems</span>     <span  class="cmti-10">(MASCOTS), 2011 IEEE 19th International Symposium on</span>, July 2011, pp. 63&#8211;72.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br38">38</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XBATTEN:2011"></a>Y.&#x00A0;Lee, R.&#x00A0;Avizienis, A.&#x00A0;Bishara, R.&#x00A0;Xia, D.&#x00A0;Lockhart, C.&#x00A0;Batten, and K.&#x00A0;Asanovic, &#8220;Exploring     the  tradeoffs  between  programmability  and  efficiency  in  data-parallel  accelerators,&#8221;  in  <span  class="cmti-10">Computer</span>     <span  class="cmti-10">Architecture (ISCA), 2011 38th Annual International Symposium on</span>, 2011, pp. 129&#8211;140.     
</p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br39">39</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XBALIGA:2011"></a>J.&#x00A0;Baliga,  R.&#x00A0;Ayre,  K.&#x00A0;Hinton,  and  R.&#x00A0;Tucker,  &#8220;Green  cloud  computing:  Balancing  energy  in     processing, storage, and transport,&#8221; <span  class="cmti-10">Proceedings of the IEEE</span>, vol.&#x00A0;99, no.&#x00A0;1, pp. 149&#8211;167, 2011.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br40">40</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XBANAS:2014"></a>K.&#x00A0;Banas  and  F.&#x00A0;Kruzel,  &#8220;Comparison  of  xeon  phi  and  kepler  gpu  performance  for  finite     element  numerical  integration,&#8221;  in  <span  class="cmti-10">Proceedings  of  the  2014  IEEE  Intl  Conf  on  High  Performance</span>     <span  class="cmti-10">Computing  and  Communications,  2014  IEEE  6th  Intl  Symp  on  Cyberspace  Safety  and  Security,</span>     <span  class="cmti-10">2014  IEEE  11th  Intl  Conf  on  Embedded  Software  and  Syst  (HPCC,CSS,ICESS)</span>,  ser.  HPCC     &#8217;14.    Washington,  DC,  USA:  IEEE  Computer  Society,  2014,  pp.  145&#8211;148.  [Online].  Available:     <a  href="http://dx.doi.org/10.1109/HPCC.2014.27" class="url" >http://dx.doi.org/10.1109/HPCC.2014.27</a>                                                                                                                                                                                         </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br41">41</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XPLAYNE:2011"></a>K.&#x00A0;A.  Hawick,  D.&#x00A0;P.  Playne,  and  M.&#x00A0;G.&#x00A0;B.  Johnson,  &#8220;Numerical  precision  and  benchmarking     very-high-order integration of particle dynamics on gpu accelerators,&#8221; in <span  class="cmti-10">Proc. 
International Conference</span>     <span  class="cmti-10">on Computer Design (CDES&#8217;11)</span>, no. CDE4469.  Las Vegas, USA: CSREA, 18-21 July 2011, pp. 83&#8211;89.     </p>         ]]></body>
<body><![CDATA[<p class="bibitem" ><span class="biblabel">  [<a href="#br42">42</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XMIHAI:2012"></a>M.&#x00A0;Comanescu, &#8220;Implementation of time-varying observers used in direct field orientation of motor     drives by trapezoidal integration,&#8221; in <span  class="cmti-10">Power Electronics, Machines and Drives (PEMD 2012), 6th IET</span>     <span  class="cmti-10">International Conference on</span>, 2012, pp. 1&#8211;6.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br43">43</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XTRIPODI:2015"></a>E.&#x00A0;Tripodi, A.&#x00A0;Musolino, R.&#x00A0;Rizzo, and M.&#x00A0;Raugi, &#8220;Numerical integration of coupled equations     for high-speed electromechanical devices,&#8221; <span  class="cmti-10">Magnetics, IEEE Transactions on</span>, vol.&#x00A0;51, no.&#x00A0;3, pp. 1&#8211;4,     March 2015.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br44">44</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XPAPER155:ISLAM:2012"></a>S.&#x00A0;Islam,  K.&#x00A0;Lee,  A.&#x00A0;Fekete,  and  A.&#x00A0;Liu,  &#8220;How  a  consumer  can  measure  elasticity  for  cloud     platforms,&#8221; in <span  class="cmti-10">Proceedings of the third joint WOSP/SIPEW international conference on Performance</span>     <span  class="cmti-10">Engineering</span>,  ser.  ICPE  &#8217;12.    New  York,  NY,  USA:  ACM,  2012,  pp.  85&#8211;96.  [Online].  
Available:     <a  href="http://doi.acm.org/10.1145/2188286.2188301" class="url" >http://doi.acm.org/10.1145/2188286.2188301</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br45">45</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XMAO:MING:2011"></a>M.&#x00A0;Mao  and  M.&#x00A0;Humphrey,  &#8220;Auto-scaling  to  minimize  cost  and  meet  application  deadlines  in     cloud workflows,&#8221; in <span  class="cmti-10">Proceedings of 2011 International Conference for High Performance Computing,</span>     <span  class="cmti-10">Networking, Storage and Analysis</span>, ser. SC &#8217;11.   New York, NY, USA: ACM, 2011, pp. 49:1&#8211;49:12.     [Online]. Available: <a  href="http://doi.acm.org/10.1145/2063384.2063449" class="url" >http://doi.acm.org/10.1145/2063384.2063449</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br46">46</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XZHANG:WEI:2008"></a>Y.&#x00A0;Zhang, W.&#x00A0;Sun, and Y.&#x00A0;Inoguchi, &#8220;Predict task running time in grid environments based on     cpu  load  predictions,&#8221;  <span  class="cmti-10">Future Generation Computer Systems</span>,  vol.&#x00A0;24,  no.&#x00A0;6,  pp.  489  &#8211;  497,  2008.     [Online]. 
Available: <a  href="http://www.sciencedirect.com/science/article/pii/S0167739X07001215" class="url" >http://www.sciencedirect.com/science/article/pii/S0167739X07001215</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br47">47</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XSALAH:2013"></a>F.&#x00A0;Al-Haidari, M.&#x00A0;Sqalli, and K.&#x00A0;Salah, &#8220;Impact of cpu utilization thresholds and scaling size on     autoscaling cloud resources,&#8221; in <span  class="cmti-10">Cloud Computing Technology and Science (CloudCom), 2013 IEEE 5th</span>     <span  class="cmti-10">International Conference on</span>, vol.&#x00A0;2, Dec 2013, pp. 256&#8211;261.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br48">48</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XOrgerie2014"></a>A.-C. Orgerie, M.&#x00A0;D.&#x00A0;D. Assuncao, and L.&#x00A0;Lefevre, &#8220;A survey on techniques for improving the     energy  efficiency  of  large-scale  distributed  systems,&#8221;  <span  class="cmti-10">ACM Computing Surveys</span>,  vol.&#x00A0;46,  no.&#x00A0;4,  pp.     1&#8211;31, Mar. 2014. [Online]. Available: <a  href="http://dl.acm.org/citation.cfm?doid=2597757.2532637" class="url" >http://dl.acm.org/citation.cfm?doid=2597757.2532637</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br49">49</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XLIU:DONG:2014"></a>L.&#x00A0;Jin, D.&#x00A0;Cong, L.&#x00A0;Guangyi, and Y.&#x00A0;Jilai, &#8220;Short-term net feeder load forecasting of microgrid     considering weather conditions,&#8221; in <span  class="cmti-10">Energy Conference (ENERGYCON), 2014 IEEE International</span>, May     2014, pp. 1205&#8211;1209. </p>     </div>           ]]></body><back>
<ref-list>
<ref id="B1">
<label>1</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Mohan Murthy]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Sanjay]]></surname>
<given-names><![CDATA[H]]></given-names>
</name>
<name>
<surname><![CDATA[Anand]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Threshold based auto scaling of virtual machines in cloud environment]]></article-title>
<person-group person-group-type="editor">
<name>
<surname><![CDATA[Hsu]]></surname>
<given-names><![CDATA[C.-H.]]></given-names>
</name>
<name>
<surname><![CDATA[Shi]]></surname>
<given-names><![CDATA[X]]></given-names>
</name>
<name>
<surname><![CDATA[Salapura]]></surname>
<given-names><![CDATA[V]]></given-names>
</name>
</person-group>
<source><![CDATA[Network and Parallel Computing: Lecture Notes in Computer Science]]></source>
<year>2014</year>
<volume>8707</volume>
<page-range>247-256</page-range><publisher-name><![CDATA[Springer Berlin Heidelberg]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B2">
<label>2</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bao]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Lu]]></surname>
<given-names><![CDATA[Z]]></given-names>
</name>
<name>
<surname><![CDATA[Wu]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Zhang]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Zhong]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Implementing a novel load-aware auto scale scheme for private cloud resource management platform]]></article-title>
<source><![CDATA[]]></source>
<year></year>
</nlm-citation>
</ref>
<ref id="B3">
<label>3</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Sah]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Joshi]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Scalability of efficient and dynamic workload distribution in autonomic cloud computing]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Issues and Challenges in Intelligent Computing Techniques]]></conf-name>
<conf-date>Feb 2014</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B4">
<label>4</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Weber]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Herbst]]></surname>
<given-names><![CDATA[N. R]]></given-names>
</name>
<name>
<surname><![CDATA[Groenda]]></surname>
<given-names><![CDATA[H]]></given-names>
</name>
<name>
<surname><![CDATA[Kounev]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Towards a resource elasticity benchmark for cloud environments]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ International Conference on Performance Engineering]]></conf-name>
<conf-date>March 2014</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B5">
<label>5</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Jamshidi]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Ahmad]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Pahl]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Autonomic resource provisioning for cloud-based software]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems]]></conf-name>
<conf-date>2014</conf-date>
<conf-loc>New York </conf-loc>
</nlm-citation>
</ref>
<ref id="B6">
<label>6</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Guo]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
<name>
<surname><![CDATA[Ghanem]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Han]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Does the cloud need new algorithms?: an introduction to elastic algorithms]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[4th IEEE International Conference on Cloud Computing Technology and Science]]></conf-name>
<conf-date>2012</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B7">
<label>7</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Galante]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[Bona]]></surname>
<given-names><![CDATA[L. C. E. d.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A survey on cloud computing elasticity]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ International Conference on Utility and Cloud Computing]]></conf-name>
<conf-loc>Washington DC</conf-loc>
</nlm-citation>
</ref>
<ref id="B8">
<label>8</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Beloglazov]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Abawajy]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Buyya]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing]]></article-title>
<source><![CDATA[Future Gener. Comput. Syst.]]></source>
<year>2012</year>
<month>May</month>
<volume>28</volume>
<numero>5</numero>
<issue>5</issue>
<page-range>755-768</page-range></nlm-citation>
</ref>
<ref id="B9">
<label>9</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Raveendran]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Bicer]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[Agrawal]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A framework for elastic execution of existing mpi programs]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Symposium on Parallel and Distributed Processing Workshops and PhD Forum]]></conf-name>
<conf-date>2011</conf-date>
<conf-loc>Washington DC</conf-loc>
</nlm-citation>
</ref>
<ref id="B10">
<label>10</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Jamshidi]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Ahmad]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Pahl]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Autonomic resource provisioning for cloud-based software]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems]]></conf-name>
<conf-date>2014</conf-date>
<conf-loc>New York NY</conf-loc>
</nlm-citation>
</ref>
<ref id="B11">
<label>11</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Coutinho]]></surname>
<given-names><![CDATA[E. F.]]></given-names>
</name>
<name>
<surname><![CDATA[Paillard]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[de Souza]]></surname>
<given-names><![CDATA[J. N.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Performance analysis on scientific computing and cloud computing environments]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Proceedings of the 7th Euro American Conference on Telematics and Information Systems]]></conf-name>
<conf-date>2014</conf-date>
<conf-loc>New York NY</conf-loc>
</nlm-citation>
</ref>
<ref id="B12">
<label>12</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Expósito]]></surname>
<given-names><![CDATA[R. R.]]></given-names>
</name>
<name>
<surname><![CDATA[Taboada]]></surname>
<given-names><![CDATA[G. L.]]></given-names>
</name>
<name>
<surname><![CDATA[Ramos]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Touriño]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Doallo]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Evaluation of messaging middleware for high-performance cloud computing]]></article-title>
<source><![CDATA[Personal Ubiquitous Comput.]]></source>
<year>2013</year>
<month>Dec</month>
<volume>17</volume>
<numero>8</numero>
<issue>8</issue>
<page-range>1709-1719</page-range></nlm-citation>
</ref>
<ref id="B13">
<label>13</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cai]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
<name>
<surname><![CDATA[Xu]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Ye]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Zhou]]></surname>
<given-names><![CDATA[W]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Research and application of migrating legacy systems to the private cloud platform with cloudstack]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Automation and Logistics]]></conf-name>
<conf-date>August 2012</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B14">
<label>14</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Milojicic]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Llorente]]></surname>
<given-names><![CDATA[I. M.]]></given-names>
</name>
<name>
<surname><![CDATA[Montero]]></surname>
<given-names><![CDATA[R. S.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Opennebula: A cloud management tool]]></article-title>
<source><![CDATA[Internet Computing]]></source>
<year>2011</year>
<month>March-April</month>
<volume>15</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>11 -14</page-range></nlm-citation>
</ref>
<ref id="B15">
<label>15</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Wen]]></surname>
<given-names><![CDATA[X]]></given-names>
</name>
<name>
<surname><![CDATA[Gu]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[Li]]></surname>
<given-names><![CDATA[Q]]></given-names>
</name>
<name>
<surname><![CDATA[Gao]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
<name>
<surname><![CDATA[Zhang]]></surname>
<given-names><![CDATA[X]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Comparison of open-source cloud management platforms: Openstack and opennebula]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[9th International Conference on Fuzzy Systems and Knowledge Discovery]]></conf-name>
<conf-date>May 2012</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B16">
<label>16</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Chiu]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Agrawal]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Evaluating caching and storage options on the amazon web services cloud]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Grid Computing]]></conf-name>
<conf-date>October 2010</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B17">
<label>17</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Mao]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Li]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Humphrey]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Cloud auto-scaling with deadline and budget constraints]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[11th IEEE/ACM International Conference on Grid Computing]]></conf-name>
<conf-date>October 2010</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B18">
<label>18</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Martin]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Brown]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Powley]]></surname>
<given-names><![CDATA[W]]></given-names>
</name>
<name>
<surname><![CDATA[Vazquez-Poletti]]></surname>
<given-names><![CDATA[J. L.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Autonomic management of elastic services in the cloud]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Proceedings of the 2011 IEEE Symposium on Computers and Communications]]></conf-name>
<conf-date>2011</conf-date>
<conf-loc>Washington DC</conf-loc>
</nlm-citation>
</ref>
<ref id="B19">
<label>19</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Beernaert]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Matos]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Vilaça]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Oliveira]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Automatic elasticity in openstack]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Proceedings of the Workshop on Secure and Dependable Middleware for Cloud Monitoring and Management]]></conf-name>
<conf-date>2012</conf-date>
<conf-loc>New York NY</conf-loc>
</nlm-citation>
</ref>
<ref id="B20">
<label>20</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lin]]></surname>
<given-names><![CDATA[W]]></given-names>
</name>
<name>
<surname><![CDATA[Wang]]></surname>
<given-names><![CDATA[J. Z.]]></given-names>
</name>
<name>
<surname><![CDATA[Liang]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
<name>
<surname><![CDATA[Qi]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A threshold-based dynamic resource allocation scheme for cloud computing]]></article-title>
<source><![CDATA[Procedia Engineering]]></source>
<year>2011</year>
<volume>23</volume>
<numero>0</numero>
<issue>0</issue>
<page-range>695-703</page-range></nlm-citation>
</ref>
<ref id="B21">
<label>21</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Calheiros]]></surname>
<given-names><![CDATA[R. N.]]></given-names>
</name>
<name>
<surname><![CDATA[Ranjan]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Beloglazov]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[De Rose]]></surname>
<given-names><![CDATA[C. A. F.]]></given-names>
</name>
<name>
<surname><![CDATA[Buyya]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Cloudsim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms]]></article-title>
<source><![CDATA[Software: Practice and Experience]]></source>
<year>2011</year>
<volume>41</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>23-50</page-range></nlm-citation>
</ref>
<ref id="B22">
<label>22</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Han]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Guo]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Ghanem]]></surname>
<given-names><![CDATA[M. M.]]></given-names>
</name>
<name>
<surname><![CDATA[Guo]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Lightweight resource scaling for cloud applications]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[12th IEEE/ACM International Symposium on Cluster Computing and the Grid]]></conf-name>
<conf-date>2012</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B23">
<label>23</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Spinner]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Kounev]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Zhu]]></surname>
<given-names><![CDATA[X]]></given-names>
</name>
<name>
<surname><![CDATA[Lu]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Uysal]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Holler]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Griffith]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Runtime vertical scaling of virtualized applications via online model estimation]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[8th International Conference on Self-Adaptive and Self-Organizing Systems]]></conf-name>
<conf-date>September 2014</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B24">
<label>24</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Chuang]]></surname>
<given-names><![CDATA[W.-C.]]></given-names>
</name>
<name>
<surname><![CDATA[Sang]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
<name>
<surname><![CDATA[Yoo]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Gu]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Kulkarni]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Killian]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Eventwave: Programming model and runtime support for tightly-coupled elastic cloud applications]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[4th Annual Symposium on Cloud Computing]]></conf-name>
<conf-date>2013</conf-date>
<conf-loc>New York NY</conf-loc>
</nlm-citation>
</ref>
<ref id="B25">
<label>25</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Gutierrez-Garcia]]></surname>
<given-names><![CDATA[J. O.]]></given-names>
</name>
<name>
<surname><![CDATA[Sim]]></surname>
<given-names><![CDATA[K. M.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A family of heuristics for agent-based elastic cloud bag-of-tasks concurrent scheduling]]></article-title>
<source><![CDATA[Future Gener. Comput. Syst.]]></source>
<year>2013</year>
<month>Sep</month>
<volume>29</volume>
<numero>7</numero>
<issue>7</issue>
<page-range>1682-1699</page-range></nlm-citation>
</ref>
<ref id="B26">
<label>26</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Wei]]></surname>
<given-names><![CDATA[H]]></given-names>
</name>
<name>
<surname><![CDATA[Zhou]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Yang]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[Zhang]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Wang]]></surname>
<given-names><![CDATA[Q]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Elastic resource management for heterogeneous applications on paas]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[5th Asia-Pacific Symposium on Internetware]]></conf-name>
<conf-date>2013</conf-date>
<conf-loc>New York NY</conf-loc>
</nlm-citation>
</ref>
<ref id="B27">
<label>27</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Aniello]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Bonomi]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Lombardi]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Zelli]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Baldoni]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[An architecture for automatic scaling of replicated services]]></article-title>
<source><![CDATA[Networked Systems: Lecture Notes in Computer Science]]></source>
<year>2014</year>
<page-range>122-137</page-range><publisher-name><![CDATA[Springer International Publishing]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B28">
<label>28</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Leite]]></surname>
<given-names><![CDATA[A. F.]]></given-names>
</name>
<name>
<surname><![CDATA[Raiol]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[Tadonki]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
<name>
<surname><![CDATA[Walter]]></surname>
<given-names><![CDATA[M. E. M. T.]]></given-names>
</name>
<name>
<surname><![CDATA[Eisenbeis]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
<name>
<surname><![CDATA[de Melo]]></surname>
<given-names><![CDATA[A. C. M. A.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Excalibur: An autonomic cloud architecture for executing parallel applications]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Proceedings of the Fourth International Workshop on Cloud Data and Platforms]]></conf-name>
<conf-loc>New York NY</conf-loc>
</nlm-citation>
</ref>
<ref id="B29">
<label>29</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Rajan]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Canino]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Izaguirre]]></surname>
<given-names><![CDATA[J. A.]]></given-names>
</name>
<name>
<surname><![CDATA[Thain]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Converting a high performance application to an elastic cloud application]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Third International Conference on Cloud Computing Technology and Science]]></conf-name>
<conf-date>2011</conf-date>
<conf-loc>Washington DC</conf-loc>
</nlm-citation>
</ref>
<ref id="B30">
<label>30</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Dawoud]]></surname>
<given-names><![CDATA[W]]></given-names>
</name>
<name>
<surname><![CDATA[Takouna]]></surname>
<given-names><![CDATA[I]]></given-names>
</name>
<name>
<surname><![CDATA[Meinel]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Elastic vm for cloud resources provisioning optimization]]></article-title>
<person-group person-group-type="editor">
<name>
<surname><![CDATA[Abraham]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Lloret Mauri]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Buford]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Suzuki]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Thampi]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<source><![CDATA[Communications in Computer and Information Science]]></source>
<year>2011</year>
<volume>190</volume>
<page-range>431-445</page-range><publisher-name><![CDATA[Springer Berlin Heidelberg]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B31">
<label>31</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Imai]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Chestna]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[Varela]]></surname>
<given-names><![CDATA[C. A.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Elastic scalable cloud computing using application-level migration]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Fifth International Conference on Utility and Cloud Computing]]></conf-name>
<conf-date>2012</conf-date>
<conf-loc>Washington DC</conf-loc>
</nlm-citation>
</ref>
<ref id="B32">
<label>32</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Mihailescu]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Teo]]></surname>
<given-names><![CDATA[Y. M.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[The impact of user rationality in federated clouds]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Cluster Computing and the Grid]]></conf-name>
<conf-date>2012</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B33">
<label>33</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Suleiman]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Elasticity economics of cloud-based applications]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Ninth International Conference on Services Computing]]></conf-name>
<conf-date>2012</conf-date>
<conf-loc>Washington DC</conf-loc>
</nlm-citation>
</ref>
<ref id="B34">
<label>34</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Knauth]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
<name>
<surname><![CDATA[Fetzer]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Scaling non-elastic applications using virtual machines]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Cloud Computing]]></conf-name>
<conf-date>July 2011</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B35">
<label>35</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Zhang]]></surname>
<given-names><![CDATA[X]]></given-names>
</name>
<name>
<surname><![CDATA[Shae]]></surname>
<given-names><![CDATA[Z.-Y.]]></given-names>
</name>
<name>
<surname><![CDATA[Zheng]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Jamjoom]]></surname>
<given-names><![CDATA[H]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Virtual machine migration in an over-committed cloud]]></article-title>
<source><![CDATA[]]></source>
<year></year>
<conf-name><![CDATA[ Network Operations and Management Symposium]]></conf-name>
<conf-date>April 2012</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B36">
<label>36</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Righi]]></surname>
<given-names><![CDATA[R. d. R.]]></given-names>
</name>
<name>
<surname><![CDATA[Rodrigues]]></surname>
<given-names><![CDATA[V. F.]]></given-names>
</name>
<name>
<surname><![CDATA[da Costa]]></surname>
<given-names><![CDATA[C. A.]]></given-names>
</name>
<name>
<surname><![CDATA[Galante]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
<name>
<surname><![CDATA[de Bona]]></surname>
<given-names><![CDATA[L. C. E.]]></given-names>
</name>
<name>
<surname><![CDATA[Ferreto]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Autoelastic: Automatic resource elasticity for high performance applications in the cloud]]></article-title>
<source><![CDATA[IEEE Transactions on Cloud Computing]]></source>
<year>2016</year>
<month>Jan</month>
<volume>4</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>6-19</page-range></nlm-citation>
</ref>
<ref id="B37">
<label>37</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Azmandian]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Moffie]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Dy]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Aslam]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Kaeli]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Workload characterization at the virtualization layer]]></article-title>
<source><![CDATA[]]></source>
<year>2011</year>
<conf-name><![CDATA[19th International Symposium on Modeling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS)]]></conf-name>
<conf-date>July 2011</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B38">
<label>38</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Lee]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
<name>
<surname><![CDATA[Avizienis]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Bishara]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Xia]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Lockhart]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Batten]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
<name>
<surname><![CDATA[Asanovic]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Exploring the tradeoffs between programmability and efficiency in data-parallel accelerators]]></article-title>
<source><![CDATA[]]></source>
<year>2011</year>
<conf-name><![CDATA[38th Annual International Symposium on Computer Architecture (ISCA)]]></conf-name>
<conf-date>2011</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B39">
<label>39</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Baliga]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Ayre]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Hinton]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
<name>
<surname><![CDATA[Tucker]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Green cloud computing: Balancing energy in processing, storage, and transport]]></article-title>
<source><![CDATA[Proceedings of the IEEE]]></source>
<year>2011</year>
<volume>99</volume>
<numero>1</numero>
<issue>1</issue>
<page-range>149-167</page-range></nlm-citation>
</ref>
<ref id="B40">
<label>40</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Banas]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
<name>
<surname><![CDATA[Kruzel]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Comparison of Xeon Phi and Kepler GPU performance for finite element numerical integration]]></article-title>
<source><![CDATA[]]></source>
<year>2014</year>
<conf-name><![CDATA[IEEE Intl Conf on High Performance Computing and Communications (HPCC)]]></conf-name>
<conf-date>2014</conf-date>
<conf-loc>Washington DC</conf-loc>
</nlm-citation>
</ref>
<ref id="B41">
<label>41</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Hawick]]></surname>
<given-names><![CDATA[K. A.]]></given-names>
</name>
<name>
<surname><![CDATA[Playne]]></surname>
<given-names><![CDATA[D. P.]]></given-names>
</name>
<name>
<surname><![CDATA[Johnson]]></surname>
<given-names><![CDATA[M. G. B.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Numerical precision and benchmarking very-high-order integration of particle dynamics on GPU accelerators]]></article-title>
<source><![CDATA[]]></source>
<year>2011</year>
<conf-name><![CDATA[Proc. International Conference on Computer Design]]></conf-name>
<conf-date>July 2011</conf-date>
<conf-loc>Las Vegas </conf-loc>
</nlm-citation>
</ref>
<ref id="B42">
<label>42</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Comanescu]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Implementation of time-varying observers used in direct field orientation of motor drives by trapezoidal integration]]></article-title>
<source><![CDATA[]]></source>
<year>2012</year>
<conf-name><![CDATA[6th IET International Conference on Power Electronics, Machines and Drives (PEMD)]]></conf-name>
<conf-date>2012</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B43">
<label>43</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Tripodi]]></surname>
<given-names><![CDATA[E]]></given-names>
</name>
<name>
<surname><![CDATA[Musolino]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Rizzo]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Raugi]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Numerical integration of coupled equations for high-speed electromechanical devices]]></article-title>
<source><![CDATA[IEEE Transactions on Magnetics]]></source>
<year>2015</year>
<month>March</month>
<volume>51</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>1-4</page-range></nlm-citation>
</ref>
<ref id="B44">
<label>44</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Islam]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Lee]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
<name>
<surname><![CDATA[Fekete]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Liu]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[How a consumer can measure elasticity for cloud platforms]]></article-title>
<source><![CDATA[]]></source>
<year>2012</year>
<conf-name><![CDATA[Proceedings of the Third Joint WOSP/SIPEW International Conference on Performance Engineering]]></conf-name>
<conf-date>2012</conf-date>
<conf-loc>New York NY</conf-loc>
</nlm-citation>
</ref>
<ref id="B45">
<label>45</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Mao]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Humphrey]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Auto-scaling to minimize cost and meet application deadlines in cloud workflows]]></article-title>
<source><![CDATA[]]></source>
<year>2011</year>
<conf-name><![CDATA[International Conference for High Performance Computing, Networking, Storage and Analysis (SC)]]></conf-name>
<conf-date>2011</conf-date>
<conf-loc>New York NY</conf-loc>
</nlm-citation>
</ref>
<ref id="B46">
<label>46</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Zhang]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
<name>
<surname><![CDATA[Sun]]></surname>
<given-names><![CDATA[W]]></given-names>
</name>
<name>
<surname><![CDATA[Inoguchi]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Predict task running time in grid environments based on CPU load predictions]]></article-title>
<source><![CDATA[Future Generation Computer Systems]]></source>
<year>2008</year>
<volume>24</volume>
<numero>6</numero>
<issue>6</issue>
<page-range>489-497</page-range></nlm-citation>
</ref>
<ref id="B47">
<label>47</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Al-Haidari]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Sqalli]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Salah]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Impact of CPU utilization thresholds and scaling size on autoscaling cloud resources]]></article-title>
<source><![CDATA[]]></source>
<year>2013</year>
<conf-name><![CDATA[5th IEEE International Conference on Cloud Computing Technology and Science (CloudCom)]]></conf-name>
<conf-date>Dec 2013</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
<ref id="B48">
<label>48</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Orgerie]]></surname>
<given-names><![CDATA[A.-C.]]></given-names>
</name>
<name>
<surname><![CDATA[Assuncao]]></surname>
<given-names><![CDATA[M. D. D.]]></given-names>
</name>
<name>
<surname><![CDATA[Lefevre]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A survey on techniques for improving the energy efficiency of large-scale distributed systems]]></article-title>
<source><![CDATA[ACM Computing Surveys]]></source>
<year>2014</year>
<month>Mar</month>
<volume>46</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>1-31</page-range></nlm-citation>
</ref>
<ref id="B49">
<label>49</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Jin]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Cong]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Guangyi]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Jilai]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Short-term net feeder load forecasting of microgrid considering weather conditions]]></article-title>
<source><![CDATA[]]></source>
<year>2014</year>
<conf-name><![CDATA[IEEE International Energy Conference (ENERGYCON)]]></conf-name>
<conf-date>May 2014</conf-date>
<conf-loc> </conf-loc>
</nlm-citation>
</ref>
</ref-list>
</back>
</article>
