<?xml version="1.0" encoding="ISO-8859-1"?><article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id>0717-5000</journal-id>
<journal-title><![CDATA[CLEI Electronic Journal]]></journal-title>
<abbrev-journal-title><![CDATA[CLEIej]]></abbrev-journal-title>
<issn>0717-5000</issn>
<publisher>
<publisher-name><![CDATA[Centro Latinoamericano de Estudios en Informática]]></publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id>S0717-50002016000100008</article-id>
<title-group>
<article-title xml:lang="en"><![CDATA[An Adaptive and Hybrid Approach to Revisiting the Visibility Pipeline]]></article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname><![CDATA[da Cunha]]></surname>
<given-names><![CDATA[Ícaro L. L]]></given-names>
</name>
<xref ref-type="aff" rid="A01"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname><![CDATA[Gonçalves]]></surname>
<given-names><![CDATA[Luiz M. G.]]></given-names>
</name>
<xref ref-type="aff" rid="A02"/>
</contrib>
</contrib-group>
<aff id="A01">
<institution><![CDATA[Universidade Federal Rural de Pernambuco, Unidade Acadêmica de Garanhuns]]></institution>
<addr-line><![CDATA[Garanhuns PE]]></addr-line>
<country>Brazil</country>
</aff>
<aff id="A02">
<institution><![CDATA[Universidade Federal do Rio Grande do Norte, Campus Universitário]]></institution>
<addr-line><![CDATA[Natal RN]]></addr-line>
<country>Brazil</country>
</aff>
<pub-date pub-type="pub">
<day>00</day>
<month>04</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="epub">
<day>00</day>
<month>04</month>
<year>2016</year>
</pub-date>
<volume>19</volume>
<numero>1</numero>
<fpage>8</fpage>
<lpage>8</lpage>
<copyright-statement/>
<copyright-year/>
<self-uri xlink:href="http://www.scielo.edu.uy/scielo.php?script=sci_arttext&amp;pid=S0717-50002016000100008&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.edu.uy/scielo.php?script=sci_abstract&amp;pid=S0717-50002016000100008&amp;lng=en&amp;nrm=iso"></self-uri><self-uri xlink:href="http://www.scielo.edu.uy/scielo.php?script=sci_pdf&amp;pid=S0717-50002016000100008&amp;lng=en&amp;nrm=iso"></self-uri><abstract abstract-type="short" xml:lang="en"><p><![CDATA[We revisit the visibility problem, traditionally known in the Computer Graphics and Vision fields as the process of computing the (potentially) visible set of primitives in the computational model of a scene. We propose a hybrid solution that uses a lean structure (in the sense of data reduction), a triangulation of the type <img width=32 height=32 id="_x0000_i1025" src="../../../../../img/revistas/cleiej/v19n1/1a080x.png" alt="Ja&#13;&#10; 1 " class=math>, to accelerate the search for visible primitives. We came up with a solution that is useful for real-time, on-line, interactive applications such as 3D visualization. In such applications the main goal is to load as few primitives from the scene as possible during the rendering stage. For this purpose, our algorithm performs culling using a hybrid paradigm based on viewing-frustum, back-face and occlusion culling models. Results have shown substantial improvement over these traditional approaches when they are applied separately. This novel approach can be used in devices with no dedicated processor or with low processing power, such as cell phones or embedded displays, or to visualize data over the Internet, as in virtual museum applications.]]></p></abstract>
<abstract abstract-type="short" xml:lang="pt"><p><![CDATA[Revisitamos o problema de visibilidade, que é tradicionalmente conhecido nas áreas de Computação Gráfica e Visão Computacional como o processo de computar um conjunto de primitivas de uma cena (potencialmente) visíveis. Propomos uma solução híbrida que usa uma estrutura enxuta (no sentido de redução de dados), uma triangulação do tipo J1a, para acelerar a tarefa de procura por primitivas visíveis. Chegamos a uma solução que é útil para aplicações interativas, on-line, e de tempo real, tal como visualização 3D. Em tais aplicações, o objetivo principal é carregar uma quantidade tão mínima quanto possível de primitivas da cena, durante o estágio de renderização. Para este propósito, nosso algoritmo executa o recorte de primitivas usando um paradigma híbrido baseado nos modelos &#8220;viewing-frustum&#8221;, &#8220;back-face culling&#8221; e &#8220;occlusion&#8221;. Os resultados evidenciam melhoras substanciais sobre esses modelos, se aplicados separadamente. Este novo método pode ser usado em dispositivos sem processador dedicado ou com pouco poder de processamento, como telefones celulares ou dispositivos de visualização embarcados, ou também para visualizar dados pela Internet, como em aplicações de museus virtuais.]]></p></abstract>
<kwd-group>
<kwd lng="en"><![CDATA[Visibility]]></kwd>
<kwd lng="en"><![CDATA[Visualization Structure]]></kwd>
<kwd lng="en"><![CDATA[Real-time 3D Visualization]]></kwd>
<kwd lng="en"><![CDATA[Hidden Primitive Culling]]></kwd>
<kwd lng="pt"><![CDATA[Visibilidade]]></kwd>
<kwd lng="pt"><![CDATA[Estrutura de Visualização]]></kwd>
<kwd lng="pt"><![CDATA[Visualização 3D em Tempo-Real]]></kwd>
<kwd lng="pt"><![CDATA[Recorte de Primitivas Escondidas]]></kwd>
</kwd-group>
</article-meta>
</front><body><![CDATA[ <div class="maketitle">                                                                                                                                                                                                                                                                                                                                                                          <h2 class="titleHead" style="font-size:14pt">An Adaptive and Hybrid Approach to Revisiting the Visibility Pipeline</h2>                                <div class="author" > <span  class="cmbx-12">Ícaro L. L. da Cunha</span>     <br><span  class="cmr-12">Universidade Federal Rural de Pernambuco, Unidade Academica de Garanhuns,</span>     <br>                 <span  class="cmr-12">Boa Vista, 55292-270, Garanhuns, PE, Brazil</span>     <br>                <span  class="cmti-12"><a href="mailto:icaro.cunha@ufrpe.br">icaro.cunha@ufrpe.br</a> </span><br class="and"><span  class="cmbx-12">Luiz M. G. Gon</span><span  class="cmbx-12">çalves</span>     <br>                <span  class="cmr-12">Universidade Federal do Rio Grande do Norte,</span>     <br>    <span  class="cmr-12">Campus Universitario, Lagoa Nova, CEP 59.078-970, Natal, RN, Brazil</span>     <br>                             <span  class="cmti-12"><a href="mailto:lmarcos@dca.ufrn.br">lmarcos@dca.ufrn.br</a> </span></div>    <br>     <div class="date" ></div>    </div>        ]]></body>
<body><![CDATA[<div  class="abstract"  >     <div class="center"  > <!--l. 28-->    <p class="noindent" > <div class="minipage">    <div class="center"  > <!--l. 28-->    <p class="noindent" > <!--l. 28-->    <p class="noindent" ><span  class="cmbx-10">Abstract</span></div> <!--l. 30-->    <p class="noindent" >We revisit the visibility problem, traditionally known in the Computer Graphics and Vision fields as the process of computing the (potentially) visible set of primitives in the computational model of a scene. We propose a hybrid solution that uses a lean structure (in the sense of data reduction), a triangulation of the type <img  src="/img/revistas/cleiej/v19n1/1a080x.png" alt="Ja   1  "  class="math" > , to accelerate the search for visible primitives. We came up with a solution that is useful for real-time, on-line, interactive applications such as 3D visualization. In such applications the main goal is to load as few primitives from the scene as possible during the rendering stage. For this purpose, our algorithm performs culling using a hybrid paradigm based on viewing-frustum, back-face and occlusion culling models. Results have shown substantial improvement over these traditional approaches when they are applied separately. This novel approach can be used in devices with no dedicated processor or with low processing power, such as cell phones or embedded displays, or to visualize data over the Internet, as in virtual museum applications. <!--l. 32-->    <p class="noindent" >Abstract in Portuguese: <br  class="newline">Revisitamos o problema de visibilidade, que é tradicionalmente conhecido nas áreas de Computação Gráfica e Visão Computacional como o processo de computar um conjunto de primitivas de uma cena (potencialmente) visíveis. 
Propomos uma solução híbrida que usa uma estrutura enxuta (no sentido de redução de dados), uma triangulação do tipo J1a, para acelerar a tarefa de procura por primitivas visíveis. Chegamos a uma solução que é útil para aplicações interativas, on-line, e de tempo real, tal como visualização 3D. Em tais aplicações, o objetivo principal é carregar uma quantidade tão mínima quanto possível de primitivas da cena, durante o estágio de renderização. Para este propósito, nosso algoritmo executa o recorte de primitivas usando um paradigma híbrido baseado nos modelos &#8220;viewing-frustum&#8221;, &#8220;back-face culling&#8221; e &#8220;occlusion&#8221;. Os resultados evidenciam melhoras substanciais sobre esses modelos, se aplicados separadamente. Este novo método pode ser usado em dispositivos sem processador dedicado ou com pouco poder de processamento, como telefones celulares ou dispositivos de visualização embarcados, ou também para visualizar dados pela Internet, como em aplicações de museus virtuais.</div></div> </div> <!--l. 38-->    <p class="noindent" ><span  class="cmbx-10">Keywords:  </span>Visibility; Visualization Structure; Real-time 3D Visualization; Hidden Primitive Culling.<br  class="newline">Keywords in Portuguese: Visibilidade; Estrutura de Visualização; Visualização 3D em Tempo-Real; Recorte de Primitivas Escondidas. <br  class="newline">Received: 2015-11-10 Revised: 2016-04-04 Accepted: 2016-04-11<br  class="newline">DOI: <a  href="http://dx.doi.org/10.19153/cleiej.19.1.8" class="url" ><span  class="cmtt-10">http://dx.doi.org/10.19153/cleiej.19.1.8</span></a>    <h3 class="sectionHead"><span class="titlemark">1   </span> <a   id="x1-10001"></a>Introduction</h3> <!--l. 48-->    ]]></body>
<body><![CDATA[<p class="noindent" >We revisit solutions for the visibility problem, which is the process of computing a potentially visible set of primitives in a scene model, intended for interactive and real-time applications. We came up with enhancements to the visualization pipeline by using a visualization structure that minimizes the amount of useless data loaded to generate the visualization of a given scene. Basically, we accomplish this by dividing the scene into a special grid, built upon the <img  src="/img/revistas/cleiej/v19n1/1a081x.png" alt=" a J1  "  class="math" > structure (a triangulation), then associating to each grid block the primitive(s) that belong to it, and finally determining the visible set of primitives. In the course of visibility determination, several primitives and blocks are automatically hidden by others, and thus need not be tested in the visibility processing pipeline. As a novelty, our method makes use of the <img  src="/img/revistas/cleiej/v19n1/1a082x.png" alt=" a J1  "  class="math" > triangulation structure <span class="cite">[<a  href="#XCastelo06">1</a><a id="br1">]</a></span> to create the subdivision grid. We have chosen this structure for its useful algebraic (easy to compute and store) and adaptive (usable with irregular data) features. <!--l. 53-->    <p class="noindent" > <h4 class="subsectionHead"><span class="titlemark">1.1   </span> <a   id="x1-20001.1"></a>Visibility culling</h4> <!--l. 55-->    <p class="noindent" >In 3D graphics, hidden surface determination, also known as hidden surface removal, occlusion culling or visible surface determination, is the process of computing which parts of the surfaces are visible and, consequently, which are not visible from a certain point of view. 
An algorithm for hidden surface determination is a solution to the visibility problem, known to be one of the first major bottlenecks in the area of 3D computer graphics <span class="cite">[<a  href="#Xcohen03">2</a><a id="br2">]</a></span>. The process of hidden surface determination is sometimes called hiding. The analogous problem for lines is hidden line removal. Determining the hidden surfaces is required to process the scene and produce an image properly, so that one cannot look through (or walk into) walls in virtual reality applications, for example. <!--l. 69-->    <p class="indent" >   Despite the advances in hardware and software technology over the last decades, visibility determination remains one of the main issues in computer graphics and vision. Several algorithms capable of removing hidden surfaces have been developed to solve this problem <span class="cite">[<a  href="#Xcohen03">2</a><a id="br2">]</a></span>. Basically, these algorithms determine which of the primitives that make up a particular scene are visible from a given viewing position. In fact, Cohen et al. <span class="cite">[<a  href="#Xcohen03">2</a><a id="br2">]</a></span> argue that the basic problem is mostly solved; however, due to the constant demand for larger amounts of 3D data, mainly in massive-data applications, algorithms such as the Z-buffer and other classical approaches may struggle to guarantee visualization of the scene in real time. The computation of the visible surfaces may be impractical depending on the system demand and the resources available. 
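The per-fragment cost that makes the classical Z-buffer struggle on massive scenes can be seen in a minimal sketch of its depth test (illustrative Python of our own; the function name and the fragment representation are assumptions, not code from any cited work):

```python
# Minimal Z-buffer sketch: every rasterized fragment is depth-tested,
# so the work grows with the total number of fragments, even when most
# primitives end up hidden -- which is what culling tries to avoid.

def zbuffer_render(fragments, width, height):
    """fragments: iterable of (x, y, depth, color) tuples."""
    far = float("inf")
    depth = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:          # keep the fragment closest to the camera
            depth[y][x] = z
            color[y][x] = c
    return color

# Two fragments compete for pixel (0, 0); the nearer one (z = 1.0) wins.
frame = zbuffer_render([(0, 0, 5.0, "far"), (0, 0, 1.0, "near")], 2, 2)
```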
Indeed, in agreement with Cohen, one can find works in the literature such as Jones <span class="cite">[<a  href="#Xjones71">3</a><a id="br3">]</a></span>, Clark <span class="cite">[<a  href="#Xclark76">4</a><a id="br4">]</a></span> and Meagher <span class="cite">[<a  href="#XMeagher83">5</a><a id="br5">]</a></span> that addressed this issue at the very beginning of CG. Nonetheless, we call attention to the continuing importance of this problem throughout the evolution of CG. Even later, researchers such as Airey et al. <span class="cite">[<a  href="#XAirey90">6</a><a id="br6">]</a></span>, Teller and Séquin <span class="cite">[<a  href="#Xteller92">7</a><a id="br7">,</a>&#x00A0;<a  href="#Xteller91">8</a><a id="br8">]</a></span>, and Greene et al. <span class="cite">[<a  href="#Xgreene93">9</a><a id="br9">]</a></span> studied the visibility problem in an effort to speed up visualization. <!--l. 89-->    <p class="indent" >   To better understand the pipeline, the process starts with visibility culling, which aims to quickly reject primitives that do not contribute to the final image of the scene. This step is done before hidden surface removal is performed, so that only the set of primitives contributing at least one pixel on the screen is rendered. Visibility culling is executed using the following two strategies: back-face and viewing-frustum culling <span class="cite">[<a  href="#Xfoley90">10</a><a id="br10">]</a></span>. Back-face culling keeps only primitives facing the camera display (the viewing plane), and view-frustum culling rejects primitives located outside the viewing frustum, which bounds the only visible portion of the scene. 
Over the years, efficient hierarchical techniques have been developed <span class="cite">[<a  href="#Xclark76">4</a><a id="br4">,</a>&#x00A0;<a  href="#Xgarlick90">11</a><a id="br11">,</a>&#x00A0;<a  href="#Xkumar96">12</a><a id="br12">]</a></span>, as well as other optimizations <span class="cite">[<a  href="#Xassarsson00">13</a><a id="br13">,</a>&#x00A0;<a  href="#Xslater97">14</a><a id="br14">]</a></span>, in order to accelerate visibility culling. In addition to these techniques, occlusion culling avoids rendering primitives that are hidden (occluded) by other primitives in the scene. This technique is more complex because it involves analyzing the overall relationships among all primitives in the scene. Figure <a  href="#x1-20011">1<!--tex4ht:ref: fig:Type --></a> illustrates the relation among the three culling techniques. <!--l. 108-->    <p class="indent" >   <hr class="figure">    <div class="figure"  > <a   id="x1-20011"></a> <div class="center"  > <!--l. 109-->    <p class="noindent" >  <!--l. 110-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f1.png" alt="PIC"   ></div>     
<body><![CDATA[<br>     <div class="caption"  ><span class="id">Figure&#x00A0;1: </span><span   class="content">Types of visibility culling techniques: view-frustum culling (in red), back-face culling (in blue), and occlusion culling (in green).</span></div><!--tex4ht:label?: x1-20011 -->                                                                                                                                                                                     <!--l. 114-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 116-->    <p class="indent" >   Since these traditional contributions studying the visibility culling, as cited above, and some few further contributions <span class="cite">[<a  href="#XBittner09">15</a><a id="br15">,</a>&#x00A0;<a  href="#XChandak09">16</a><a id="br16">,</a>&#x00A0;<a  href="#XTian10">17</a><a id="br17">,</a>&#x00A0;<a  href="#Xantani2010">18</a><a id="br18">]</a></span>, not recent, which will be better explored in Section <a  href="#x1-40002">2<!--tex4ht:ref: sec:related --></a>, we could not find any up to date literature on this subject. As there are currently many on-line and real-time 3D world navigation applications (as virtual museums or buildings visualization) that require significant amount of primitives to be rendered in real-time, we have been motivated to revisit the visibility problem to see whether it can not be even enhanced. In fact, we observe that the above techniques have some difficulties as the complexity of the occlusion culling to analyze the whole scene or the amount of computations in order to determine primitives facing sides (vector products) or if primitives are inside the frustum in the visibility culling methods. These are not trivial mainly in massive data applications or where the scene is modeled at once for example. This may occur in several applications as in a museum scene where there are too many sculptures, for example, or in outlooking scenarios such as a city landscape.   
 <h4 class="subsectionHead"><span class="titlemark">1.2   </span> <a   id="x1-30001.2"></a>Contributions</h4> <!--l. 134-->    <p class="noindent" >We overcome the aforementioned difficulties by proposing a different approach, introducing and adapting the simple techniques above to our context. We employ a visualization (spatial) data structure capable of quickly obtaining a potentially visible set of primitives of the scene from the viewpoint of a user. As said above, this structure is constructed by subdividing the whole scene into a grid based on a triangulation of the type <img  src="/img/revistas/cleiej/v19n1/1a083x.png" alt="Ja1  "  class="math" >. With this, we achieve significant improvements in the results, as will be shown. In particular, our method can handle this kind of data while reducing resource requirements. In the experiments, we verify a removal gain of 15% to 20% of primitives for single objects, and we observed several other interesting results that validate and also delimit our approach. These are discussed throughout the paper, mainly in the experiments and results section. <!--l. 140-->    <p class="indent" >   So the main contribution of this work is in the visualization pipeline itself, which is based on a hybrid approach upon an adaptive structure capable of determining occlusion culling in two ways: internal block occlusion and adjacent block occlusion. The use of algebraic functions for fast access to block culling, block faces and adjacency is another contribution that helps enhance the visualization process. <!--l. 142-->    <p class="indent" >   Another contribution is that the structure can also be used to determine a hierarchical order of graphics data transmission, often necessary in on-line applications. In the case of a Virtual Museum application, for example, it can be used to determine the visible set of primitives at the starting point of the simulation. 
So in this case, our structure can be used to minimize the amount of data not only in the visualization phase but also in the transmission phase of on-line applications. <!--l. 146-->    <p class="noindent" >    <h3 class="sectionHead"><span class="titlemark">2   </span> <a   id="x1-40002"></a>Related work</h3> <!--l. 149-->    <p class="noindent" >Since viewing-frustum and back-face culling are mostly trivial to determine, the recent literature has focused mostly on occlusion culling. One of the most valuable works we found in this area is that of Cohen et al. <span class="cite">[<a  href="#Xcohen03">2</a><a id="br2">]</a></span>, which is also a very useful survey. They classify the various occlusion culling techniques according to their characteristics, as seen in the next subsections. <!--l. 159-->    <p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">2.1   </span> <a   id="x1-50002.1"></a>Techniques classification</h4> <!--l. 161-->    ]]></body>
<body><![CDATA[<p class="noindent" >The occlusion culling methods can be classified as point based or region based methods. Point based methods execute their computations from the perspective of the camera view point. On the other hand, region based methods perform their computations from a global point of view that is valid from any region of the scene. Point based methods can be classified as:      <ul class="itemize1">      <li class="itemize">Image Precision Methods - generally operate with the discrete representation of the objects when      they are broken into fragments during the rasterization process. Among these are the Ray Tracing      <span class="cite">[<a  href="#XBala99a">19</a><a id="br19">,</a>&#x00A0;<a  href="#XBala99b">20</a><a id="br20">,</a>&#x00A0;<a  href="#XCohen-or95">21</a><a id="br21">,</a>&#x00A0;<a  href="#XCohen-Or96">22</a><a id="br22">,</a>&#x00A0;<a  href="#XParker99">23</a><a id="br23">]</a></span> and the Z-buffer <span class="cite">[<a  href="#Xgreene93">9</a><a id="br9">,</a>&#x00A0;<a  href="#XGreene94">24</a><a id="br24">,</a>&#x00A0;<a  href="#XMeagher83">5</a><a id="br5">,</a>&#x00A0;<a  href="#XGreene99">25</a><a id="br25">,</a>&#x00A0;<a  href="#XGreene01b">26</a><a id="br26">]</a></span> based methods that are the most common.                                                                                                                                                                                          </li>      <li class="itemize">Object Precision - use the raw objects for the computation of visibility. Among these methods we can      find the works done by Luebke and Georges <span class="cite">[<a  href="#XLuebke95">27</a><a id="br27">]</a></span> and by Jones <span class="cite">[<a  href="#Xjones71">3</a><a id="br3">]</a></span>.      </li>    </ul> <!--l. 
182-->    <p class="indent" >   In turn, the region based methods can be classified as:      <ul class="itemize1">      <li class="itemize">Cell-and-Portal - starts with an empty set of visible primitives and adds to it through a series of portals. Among cell-and-portal based methods we cite the works of <span class="cite">[<a  href="#XAirey90">6</a><a id="br6">]</a></span> and <span class="cite">[<a  href="#Xteller91">8</a><a id="br8">]</a></span>.      </li>      <li class="itemize">Generic Scenes - initially assumes that all primitives are visible and then eliminates those found to be hidden.      </li>    </ul> <!--l. 196-->    <p class="indent" >   Cohen et al. <span class="cite">[<a  href="#Xcohen03">2</a><a id="br2">]</a></span> also established several other classification criteria as dichotomies. One is whether a method overestimates the visible set (conservative) or approximates it, and, for conservative methods, the degree of over-estimation. Another is whether a method treats the occlusion caused by all objects in the scene or only by a selected subset of occluders, and whether each occluder is treated individually or occluders are fused as a group for more precision. Yet another is whether the method is restricted to 2D floor plans or can handle 3D scenes. Other simple criteria are: whether the method executes a precomputation stage; whether it requires special hardware to perform the precomputation stage or even the rendering stage; and, finally, whether it can deal with dynamic scenes. <!--l. 199-->    <p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">2.2   </span> <a   id="x1-60002.2"></a>Recent approaches for visibility</h4> <!--l. 201-->    <p class="noindent" >Bittner et al. 
<span class="cite">[<a  href="#XBittner09">15</a><a id="br15">]</a></span> propose a from-region visibility method where rays are cast to sample visibility. The information from each ray is used to determine all viewing cells it intersects. They use adaptive sampling strategies based on ray mutations that exploit the coherence of visibility. Chandak et al. <span class="cite">[<a  href="#XChandak09">16</a><a id="br16">]</a></span> propose a method that bears some similarity to ours. Their approach uses a large set of frusta and computes blockers for each one using simple intersection tests. They use this method to accurately compute reflection paths from a point sound source. The method proposed by Tian et al. <span class="cite">[<a  href="#XTian10">17</a><a id="br17">]</a></span> integrates adaptive sampling-based simplification, visibility culling, out-of-core data management and level-of-detail. During the preprocessing phase the objects are subdivided and a bounding-volume clustering hierarchy is built. They make use of Adaptive Voxels, a novel adaptive sampling method for generating level-of-detail models. Antani et al. <span class="cite">[<a  href="#Xantani2010">18</a><a id="br18">]</a></span> introduce a fast occluder selection algorithm that combines small, connected triangles to form large occluders and performs conservative computations at object-space precision. The approach is applied to the computation of sound propagation, improving the performance of edge diffraction algorithms by a factor of 2 to 4. Carvalho et al. <span class="cite">[<a  href="#Xde2013improved">28</a><a id="br28">]</a></span> proposed an improvement to view-frustum culling by using an octree-based space partitioning method. This method is in some way similar to ours. 
The basic difference is that while their method partitions the view frustum, ours feeds the view-frustum test with preprocessed data from our space partitioning structure. <!--l. 220-->    <p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">2.3   </span> <a   id="x1-70002.3"></a>Contextualization</h4> <!--l. 222-->    <p class="noindent" >Our method is classified as a region based method, somewhat similar to cell-and-portal methods. The main differences are that instead of a set of cells we use the scene subdivision grid, and instead of portals we use the adjacency between grid blocks. This has proven experimentally useful as well. <!--l. 226-->    <p class="indent" >   Following the other criteria of Cohen et al. <span class="cite">[<a  href="#Xcohen03">2</a><a id="br2">]</a></span>, our method overestimates the potentially visible set. We discuss the degree of over-estimation in Section <a  href="#x1-160005.1">5.1<!--tex4ht:ref: sec:res --></a>. Though occlusion detection is block based, our method treats the occlusion created by all the blocks in groups. Our method handles 3D scenes and, due to the <img  src="/img/revistas/cleiej/v19n1/1a084x.png" alt=" a J1  "  class="math" > triangulation structure, we believe we can add a fourth dimension for animation applications. <!--l. 233-->    ]]></body>
<body><![CDATA[<p class="indent" >   Our precomputation stage executes without the need for special hardware, though a dedicated graphics processor would speed up the processing even further. We intend to implement this feature later on and also to take advantage of parallel processing to speed up the internal and adjacent block culling stages. <!--l. 237-->    <p class="indent" >   So far, we treat dynamic scenes (with moving objects) by ignoring the occlusion caused by the moving objects. This means that if a moving object inside the viewing frustum passes in front of other objects (or primitives), these are still precomputed as visible, so the moving object does not affect the precomputed visibility of the occluded objects. Of course, the final rendering stage has to deal with these extra primitives (and it does) by computing which is truly visible (the moving object or the occluded primitives). We consider this overestimation better than trying to determine, from one frame to the next, where the moving object is and then recomputing the whole grid. This could be seen as a drawback of our method in the worst case, where most of the objects in a scene are moving. However, we know of no method that treats this case, and the applications the method is devised for (museum applications and sending 3D data over the Internet) do not have that many moving objects in their environments. For this reason we do not show results for moving objects; results for a moving camera are shown, though. <!--l. 240-->    <p class="noindent" >    <h3 class="sectionHead"><span class="titlemark">3   </span> <a   id="x1-80003"></a>Solving visibility with the <img  src="/img/revistas/cleiej/v19n1/1a085x.png" alt="Ja  1  "  class="math" > triangulation</h3> <!--l. 
242-->    <p class="noindent" >We base our visualization scheme on the algebraic features of the <img  src="/img/revistas/cleiej/v19n1/1a086x.png" alt="Ja  1  "  class="math" >, using it basically as a spatial data structure. The whole 3D scene is initially contained within the grid created by the initial <img  src="/img/revistas/cleiej/v19n1/1a087x.png" alt="Ja  1  "  class="math" > structure. An initial step is performed in which the primitives of the model are related to (pointed from) each element of the spatial data structure, in such a way that it contains the information needed to derive the visible set of elements for a given positioning of the camera. From the point of view of the camera, the idea is to traverse the <img  src="/img/revistas/cleiej/v19n1/1a088x.png" alt="Ja  1  "  class="math" > structure calculating all visible elements in it. So every time the camera moves, the visible set of primitives is recomputed by traversing the <img  src="/img/revistas/cleiej/v19n1/1a089x.png" alt="Ja  1  "  class="math" > structure. <!--l. 249-->    <p class="indent" >   The <img  src="/img/revistas/cleiej/v19n1/1a0810x.png" alt="Ja  1  "  class="math" > triangulation <span class="cite">[<a  href="#XCastelo06">1</a><a id="br1">]</a></span> is an algebraically defined structure that can be built at any size. To accommodate local aspects, the <img  src="/img/revistas/cleiej/v19n1/1a0811x.png" alt="Ja  1  "  class="math" > triangulation handles refinements naturally. Two of its main characteristics are the existence of a mechanism to uniquely represent each simplex of the triangulation and the existence of algebraic rules to traverse the structure. Using these rules spares the structure from storing connectivity information for the simplices, thus enabling more efficient storage. <!--l. 
254-->    <p class="indent" >   Formalizing, the <img  src="/img/revistas/cleiej/v19n1/1a0812x.png" alt="Ja  1  "  class="math" > triangulation consists of a computational grid formed by n-dimensional hypercubes (blocks). Each block is divided into <img  src="/img/revistas/cleiej/v19n1/1a0813x.png" alt="2nn!  "  class="math" > n-simplices that can be described algebraically using the six-tuple: <!--l. 259-->    <p class="indent" >    <center class="math-display" > <img  src="/img/revistas/cleiej/v19n1/1a0814x.png" alt="S = (g,r,&#x03C0;,s,t,h). " class="math-display" ></center> <!--l. 261-->    <p class="indent" >   The first two elements of <img  src="/img/revistas/cleiej/v19n1/1a0815x.png" alt="S  "  class="math" > identify the block that contains the simplex: a vector <img  src="/img/revistas/cleiej/v19n1/1a0816x.png" alt="g  "  class="math" > with <img  src="/img/revistas/cleiej/v19n1/1a0817x.png" alt="n  "  class="math" >-dimensional coordinates locating the block in the grid, and the block&#8217;s level of refinement <img  src="/img/revistas/cleiej/v19n1/1a0818x.png" alt="r  "  class="math" >. Figure <a  href="#x1-80012">2<!--tex4ht:ref: fig:juma --></a> illustrates, on the left, a two-dimensional grid of <img  src="/img/revistas/cleiej/v19n1/1a0819x.png" alt="Ja1  "  class="math" > and, on the right, a block of this grid with refinement level <img  src="/img/revistas/cleiej/v19n1/1a0820x.png" alt="r = 0  "  class="math" > (a 0-block) and <img  src="/img/revistas/cleiej/v19n1/1a0821x.png" alt="g = (3;2)  "  class="math" >. Also in Figure <a  href="#x1-80012">2<!--tex4ht:ref: fig:juma --></a>, it can be noticed that the blocks at grid level r = 1 (called 1-blocks) are darker, indicating that they form a region of the grid with higher resolution. <!--l. 
266-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-80012"></a>                                                                                                                                                                                         ]]></body>
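To make the identification above concrete, the following is a minimal sketch of our own (not the authors' implementation): a plain record mirroring the six-tuple S = (g, r, &#x03C0;, s, t, h), whose field names and types are our assumptions, plus the 2<sup>n</sup>n! count of n-simplices per block stated above.

```python
from dataclasses import dataclass
from math import factorial
from typing import Tuple


@dataclass(frozen=True)
class Simplex:
    """One n-simplex of the JA1 grid, named after the tuple S = (g, r, pi, s, t, h).

    g -- integer coordinates locating the block that contains the simplex
    r -- refinement level of that block (0 for a 0-block, 1 for a 1-block, ...)
    pi, s, t, h -- remaining algebraic labels that single out the simplex
                   inside its block (kept opaque in this sketch)
    """
    g: Tuple[int, ...]
    r: int
    pi: Tuple[int, ...]
    s: int
    t: int
    h: int


def simplices_per_block(n: int) -> int:
    """Each n-dimensional block is divided into 2^n * n! n-simplices."""
    return 2 ** n * factorial(n)


# A simplex of the 0-block g = (3, 2) of Figure 2 (labels pi, s, t, h here
# are placeholders), and the simplex counts per block in 2D and 3D.
example = Simplex(g=(3, 2), r=0, pi=(0, 1), s=0, t=0, h=1)
print(simplices_per_block(2), simplices_per_block(3))  # 8 48
```

In 2D each square block holds 8 triangles; in 3D each cube holds 48 tetrahedra, which is what makes a purely algebraic (storage-free) enumeration of simplices worthwhile.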
<body><![CDATA[<div class="center"  > <!--l. 267-->    <p class="noindent" >  <!--l. 268-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f2.png" alt="PIC"   ></div>     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;2: </span><span   class="content">2D example for a triangulation grid <img  src="/img/revistas/cleiej/v19n1/1a0822x.png" alt="Ja1  "  class="math" > (left) and details of the block <img  src="/img/revistas/cleiej/v19n1/1a0823x.png" alt="g = (3;2)  "  class="math" >, <img  src="/img/revistas/cleiej/v19n1/1a0824x.png" alt="r = 0  "  class="math" >, where two paths for tracing the block simplices are shown.</span></div><!--tex4ht:label?: x1-80012 -->    <!--l. 272-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 274-->    <p class="indent" >   Since the <img  src="/img/revistas/cleiej/v19n1/1a0825x.png" alt="Ja1  "  class="math" > triangulation is naturally adaptive, the structure is more precise when over-estimating the potentially visible set of the scene: with more refined (smaller) blocks in certain regions of the scene, more blocks are fully filled. Due to this, adjacent block occlusion culling can occur more often and is thus more effective. <!--l. 276-->    <p class="indent" >   Since we are using the <img  src="/img/revistas/cleiej/v19n1/1a0826x.png" alt="Ja1  "  class="math" > triangulation structure in this work for a purpose other than triangulating, from now on we will refer to its basic grid as the <img  src="/img/revistas/cleiej/v19n1/1a0827x.png" alt="Ja1  "  class="math" > structure.    <h3 class="sectionHead"><span class="titlemark">4   </span> <a   id="x1-90004"></a>Technique overview</h3> <!--l. 
283-->    <p class="noindent" >Figure <a  href="#x1-90013">3<!--tex4ht:ref: fig:Overview --></a> gives an overview of our technique. Basically, it is subdivided into two stages: the preprocessing stage and the visualization (visibility precomputation) stage. In the following we detail these stages.    <!--l. 286-->    <p class="indent" >   <a   id="x1-90013"></a><hr class="float">    ]]></body>
<body><![CDATA[<div class="float"  >    <div class="center"  > <!--l. 287-->    <p class="noindent" >  <!--l. 288-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f3.png" alt="PIC"   ></div>     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;3: </span><span   class="content">Overview of the processing pipeline for visualization. We encapsulate the mesh with the basic <img  src="/img/revistas/cleiej/v19n1/1a0828x.png" alt="Ja1  "  class="math" > grid, after which the visibility precomputation stage is executed to generate the primitive list for each point of view on the grid blocks.</span></div><!--tex4ht:label?: x1-90013 -->    </div><hr class="endfloat">    <h4 class="subsectionHead"><span class="titlemark">4.1   </span> <a   id="x1-100004.1"></a>Preprocessing stage</h4> <!--l. 296-->    <p class="noindent" >During the preprocessing, we use the <img  src="/img/revistas/cleiej/v19n1/1a0829x.png" alt="Ja1  "  class="math" > structure basic grid to generate the visualization blocks. The preprocessing stage is done in three steps: <!--l. 298-->    <p class="indent" >      <ol  class="enumerate1" >      <li    class="enumerate" id="x1-10002x1">Given the user input establishing the minimal dimension of the grid along one of its axes, we determine the size of the grid&#8217;s edges and then calculate the remaining grid dimensions;      
</li>      <li    class="enumerate" id="x1-10004x2">Each triangle is first mapped to the grid block(s) where it is contained;      </li>      <li    class="enumerate" id="x1-10006x3">By using the triangle&#8217;s normals we identify which face of the block(s) the triangle is <span  class="cmti-10">looking at</span>. In this step we automatically treat the back-face culling (this is calculated once, even if the camera moves later).</li>    </ol> <!--l. 309-->    <p class="indent" >   Internal block occlusion can then be calculated using an approach similar to the Z-buffer. The final list of primitives is assigned to each block face. During this step we can also determine whether a block face is fully <span  class="cmti-10">filled</span>; this is needed to determine adjacent block occlusion in the visualization stage. This step is illustrated in Figure <a  href="#x1-120014">4<!--tex4ht:ref: fig:zTechnique --></a>. <!--l. 316-->    ]]></body>
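The mapping and face-classification steps above can be sketched roughly as follows. This is our own simplification, not the paper's code: the helper names are ours, and mapping each triangle through its centroid to a single block is a shortcut (a triangle may span several blocks).

```python
def block_of(point, origin=(0.0, 0.0, 0.0), edge=1.0):
    """Step 2 (simplified): integer coordinates of the grid block holding `point`,
    for a grid starting at `origin` with cubic blocks of side `edge`."""
    return tuple(int((p - o) // edge) for p, o in zip(point, origin))


def triangle_normal(a, b, c):
    """Unnormalized normal of triangle (a, b, c) via the cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])


def facing_faces(normal):
    """Step 3: the block faces the triangle is 'looking at', one per nonzero
    normal component. This is computed once, independently of later camera
    motion, which is what resolves back-face culling up front."""
    faces = []
    for axis, comp in enumerate(normal):
        if comp > 0:
            faces.append((axis, +1))   # e.g. (0, +1) is the +x face
        elif comp < 0:
            faces.append((axis, -1))
    return faces


# A small triangle lying in the z = 0.1 plane of a unit-edge grid.
tri = [(0.1, 0.1, 0.1), (0.4, 0.1, 0.1), (0.1, 0.4, 0.1)]
centroid = tuple(sum(p[i] for p in tri) / 3 for i in range(3))
print(block_of(centroid))                    # (0, 0, 0)
print(facing_faces(triangle_normal(*tri)))   # [(2, 1)] -- faces the +z face
```

A production version would clip each triangle against block boundaries instead of using the centroid, but the face classification via normal signs carries over unchanged.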
<body><![CDATA[<p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">4.2   </span> <a   id="x1-110004.2"></a>Visibility precomputation</h4> <!--l. 318-->    <p class="noindent" >We use the grid to verify what elements are being directly looked at. Given the camera position (<img  src="/img/revistas/cleiej/v19n1/1a0830x.png" alt="(i,j)  "  class="math" > as illustrated in Figure <a  href="#x1-130015">5<!--tex4ht:ref: fig:transition --></a>) and its look-at vector, we use the <img  src="/img/revistas/cleiej/v19n1/1a0831x.png" alt="Ja1  "  class="math" > structure transition function to access the blocks in its line of sight. The look-at vector is also used to determine which block face(s) will be used to compose the final list with the potentially visible set of primitives. Our visualization structure determines occlusion culling in two ways: internal block occlusion and adjacent block occlusion. <!--l. 322-->    <p class="noindent" >    <h5 class="subsubsectionHead"><span class="titlemark">4.2.1   </span> <a   id="x1-120004.2.1"></a>Internal block culling</h5> <!--l. 324-->    <p class="noindent" >Internal block occlusion is identified using only the primitives inside each block, and it is done for every block face, as illustrated in Figure <a  href="#x1-120014">4<!--tex4ht:ref: fig:zTechnique --></a> for face <img  src="/img/revistas/cleiej/v19n1/1a0832x.png" alt="F1  "  class="math" >. This approach is similar to the Z-buffer technique, storing the depth for each ray cast through a given grid face. 
When verifying the visible set of triangles for a given face <img  src="/img/revistas/cleiej/v19n1/1a0833x.png" alt="Fi  "  class="math" >, if the rays determine that a triangle <img  src="/img/revistas/cleiej/v19n1/1a0834x.png" alt="Tj  "  class="math" > completely hides a triangle <img  src="/img/revistas/cleiej/v19n1/1a0835x.png" alt="Tk  "  class="math" >, then the latter is not marked as visible for face <img  src="/img/revistas/cleiej/v19n1/1a0836x.png" alt="Fi  "  class="math" >. <!--l. 329-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >    <a   id="x1-120014"></a>    <div class="center"  > <!--l. 330-->    <p class="noindent" >  <!--l. 331-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f4.png" alt="PIC"   ></div>     <br>     ]]></body>
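The per-face depth test just described can be sketched in a few lines. This is our own simplification: the actual ray casting is elided, and each cast ray is represented by its list of (depth, triangle) intersections.

```python
def internal_block_culling(hits_per_ray):
    """Z-buffer-like pass for one block face.

    Each entry of `hits_per_ray` lists the (depth, triangle_id) intersections
    of one ray cast through the face; only the nearest triangle per ray
    survives, so a triangle never nearest on any ray is dropped entirely.
    """
    visible = set()
    for hits in hits_per_ray:
        if hits:
            _, nearest = min(hits)   # smallest depth wins, as in a Z-buffer
            visible.add(nearest)
    return visible


# T1 is nearer on every ray that also hits T2, so T2 is completely hidden
# and is not pointed to as visible for this face.
rays = [[(1.0, "T1"), (2.0, "T2")],
        [(1.2, "T1"), (2.5, "T2")],
        []]                          # a ray that misses everything
print(internal_block_culling(rays))  # {'T1'}
```

With enough rays per face this conservatively keeps any triangle that is nearest along at least one ray, which matches the overestimating character of the pipeline.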
<body><![CDATA[<div class="caption"  ><span class="id">Figure&#x00A0;4: </span><span   class="content">Technique for determining internal block occlusion using an approach similar to Z-buffer. When verifying the visible set of triangles for face <img  src="/img/revistas/cleiej/v19n1/1a0837x.png" alt="F1  "  class="math" >, the rays determine that triangle <img  src="/img/revistas/cleiej/v19n1/1a0838x.png" alt="T1  "  class="math" > hides triangle <img  src="/img/revistas/cleiej/v19n1/1a0839x.png" alt="T2  "  class="math" >.</span></div><!--tex4ht:label?: x1-120014 -->    <!--l. 335-->    <p class="indent" >   </div><hr class="endfigure">    <h5 class="subsubsectionHead"><span class="titlemark">4.2.2   </span> <a   id="x1-130004.2.2"></a>Adjacent (external) block occlusion</h5> <!--l. 339-->    <p class="noindent" >Before each transition from block to block, the visible face of the current block is checked to determine whether it is fully filled. If it is, the visible set from that face totally hides the visible set of its adjacent block face, so the hidden block and its subsequent adjacent blocks are not needed for the rendering stage. The transition step for adjacent block occlusion determination is illustrated in Figure <a  href="#x1-130015">5<!--tex4ht:ref: fig:transition --></a>. 
Note in the figure that after visiting the block <img  src="/img/revistas/cleiej/v19n1/1a0840x.png" alt="(i+ 1,j)  "  class="math" > the blocks <img  src="/img/revistas/cleiej/v19n1/1a0841x.png" alt="(i+ 2,j + 1)  "  class="math" >, <img  src="/img/revistas/cleiej/v19n1/1a0842x.png" alt="(i+ 2,j)  "  class="math" > and <img  src="/img/revistas/cleiej/v19n1/1a0843x.png" alt="(i+ 2,j - 1)  "  class="math" > are queued to be visited next. After visiting block <img  src="/img/revistas/cleiej/v19n1/1a0844x.png" alt="(i+ 2,j - 1)  "  class="math" > and determining that its visible face is fully filled, its adjacent blocks are not directly added to the visitation queue. There might, however, be a case in this example where the block <img  src="/img/revistas/cleiej/v19n1/1a0845x.png" alt="(i+ 2,j)  "  class="math" > adds one or more of these adjacent blocks to the queue. <!--l. 342-->    <p class="indent" >   Note that at the end of this second phase, blocks occluding other blocks have been determined, thus complementing the internal block occlusion process. <!--l. 344-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >    <a   id="x1-130015"></a>    <div class="center"  > <!--l. 345-->    <p class="noindent" >  <!--l. 346-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f5.png" alt="PIC"   ></div>     <br>     ]]></body>
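The queued traversal above can be sketched as a breadth-first visit in which a fully filled block is visited but does not enqueue its own neighbours. This is our own abstraction of the transition step (the `behind` callback stands in for the JA1 transition function; the names are ours):

```python
from collections import deque


def visit_blocks(start, behind, fully_filled):
    """Sketch of the adjacent-block occlusion traversal.

    behind(b)    -- blocks directly behind block b along the line of sight
    fully_filled -- blocks whose visible face is fully filled: they are
                    visited themselves, but do not enqueue their neighbours
                    (a neighbour can still be enqueued via another block).
    """
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        b = queue.popleft()
        order.append(b)
        if b in fully_filled:
            continue                 # everything behind b is occluded via b
        for nb in behind(b):
            if nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return order


# Toy corridor of blocks 0..5: block 2 is fully filled, so blocks 3..5 are
# never queued and their primitives are skipped for the rendering stage.
behind = lambda b: [b + 1] if b < 5 else []
print(visit_blocks(0, behind, fully_filled={2}))  # [0, 1, 2]
```

Because enqueueing is per-path, a block occluded through one neighbour is still reachable through another, matching the caveat about block (i+2, j) in the text.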
<body><![CDATA[<div class="caption"  ><span class="id">Figure&#x00A0;5: </span><span   class="content">Illustration of the block transition operation; we illustrate a 2D case analogous to the 3D case. In this example, each visible block is visited to obtain the visible set of primitives that belongs to it. In this illustration we indicate that the visible face of block <img  src="/img/revistas/cleiej/v19n1/1a0846x.png" alt="(i+ 2,j - 1)  "  class="math" > is fully filled, so the next adjacent blocks are not directly added to the visitation queue due to it (they might be added due to another block).</span></div><!--tex4ht:label?: x1-130015 -->    <!--l. 350-->    <p class="indent" >   </div><hr class="endfigure">    <h4 class="subsectionHead"><span class="titlemark">4.3   </span> <a   id="x1-140004.3"></a>Visualization (rendering)</h4> <!--l. 354-->    <p class="noindent" >Finally, once the <img  src="/img/revistas/cleiej/v19n1/1a0847x.png" alt="Ja1  "  class="math" > data structure contains pointers to the visible set of primitives, the computation of the set of primitives to be rendered can be executed. To do so, the list of (visible) block faces is visited and used to compose the potentially visible set. We remark that the grid data structure points to at least the visible set, from which the rendering stage can then visualize the truly visible primitives. As shown in the results, this overestimation rate is low, thus allowing fast processing of the data in the visualization process. As previously said, every time the camera is moved to another block or its look-at vector is changed, the calculation to obtain the visible set of primitives has to be redone. <!--l. 
358-->    <p class="indent" >   Also, the <img  src="/img/revistas/cleiej/v19n1/1a0848x.png" alt="Ja1  "  class="math" > structure is capable of dealing well with primitive transparency in the case of both types of occlusion culling. It does so during the internal block occlusion phase. When a ray encounters a transparent/semi-transparent primitive, that primitive is tagged for transparency processing and the ray continues to try to find another primitive along its path. In this way, transparency can be calculated from primitive to primitive, composing the final intensity value for a given ray. <!--l. 366-->    <p class="noindent" >    <h3 class="sectionHead"><span class="titlemark">5   </span> <a   id="x1-150005"></a>Experiments and results</h3> <!--l. 368-->    <p class="noindent" >To assess the efficiency of our visualization structure, we have planned and performed a series of experiments. We executed experiments where each visibility model is visualized individually, experiments where the camera is statically positioned, and experiments where the camera navigates along the scene. <!--l. 371-->    <p class="indent" >   It is worth mentioning that our experiments do not assess loading time, since this stage is a precomputation stage. Our main goal here is to assess real-time visualization efficiency. This does not mean that loading performance will be ignored; in later stages of this work we intend to apply parallelism to stages such as the internal and adjacent block culling. <!--l. 374-->    <p class="noindent" >    <h4 class="subsectionHead"><span class="titlemark">5.1   </span> <a   id="x1-160005.1"></a>Experimental setup</h4> <!--l. 377-->    <p class="noindent" >The first series of experiments is devoted to assessing the visualization structure&#8217;s efficiency. These experiments were done on an Intel Core i7 2.00GHz PC with 8GB RAM, with a Radeon HD 6770M Graphics Card, running Windows 7 (64 bits). <!--l. 
383-->    <p class="indent" >   For these experiments, we use a series of simple mesh models obtained from the Aim-at-Shapes Repository: Chinese Lion (identified as i in result tables), Vase (identified as ii), Armadillo (identified as iii), Hand (identified as iv) and Eros (identified as v), and also the Manhattan model (identified as vi). This model was developed by Andrew Lock and is available for purchase at the 3D Cad Browser homepage (available at <a  href="http://www.3dcadbrowser.com" class="url" ><span  class="cmtt-10">http://www.3dcadbrowser.com</span></a>), being composed of a set of 306 meshes with a total of 3.6 million polygons, 5 million vertices and 296 texture images, each with a resolution of 4096x4096. This model is a good study case, as all its primitives form a single model of the scene. Figures <a  href="#x1-160016">6<!--tex4ht:ref: fig:Meshes --></a> and <a  href="#x1-160027">7<!--tex4ht:ref: fig:Manhattan --></a> illustrate each model. The models&#8217; shape characteristics and high level of detail make them ideal to test our structure&#8217;s efficiency for visualizing individual meshes. <!--l. 386-->    ]]></body>
<body><![CDATA[<p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-160016"></a>                                                                                                                                                                                         <div class="center"  > <!--l. 387-->    <p class="noindent" >  <!--l. 388-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f6.png" alt="PIC"   ></div>     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;6: </span><span   class="content">Models used for testing the proposed approach. All models are very interesting test subjects due to their shape and high level of detail. Meshes Chinese Lion, Vase, Armadillo, Hand and Eros are respectively composed of 108k, 113k, 344k, 391k and 395k triangles.</span></div><!--tex4ht:label?: x1-160016 -->                                                                                                                                                                                     <!--l. 392-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 394-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-160027"></a>                                                                                                                                                                                         ]]></body>
<body><![CDATA[<div class="center"  > <!--l. 395-->    <p class="noindent" >  <!--l. 396-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f7.png" alt="PIC"   ></div>     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;7: </span><span   class="content">Illustration of the Manhattan model. This model is composed of various objects, which makes it potentially ideal for the occlusion detection experiments.</span></div><!--tex4ht:label?: x1-160027 -->    <!--l. 400-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 402-->    <p class="indent" >   Figures <a  href="#x1-160038">8<!--tex4ht:ref: fig:StepsSimple --></a> and <a  href="#x1-160049">9<!--tex4ht:ref: fig:StepsManhattan --></a> illustrate the series of navigation steps taken during the experiments. For the simple meshes we use the steps in Figure <a  href="#x1-160038">8<!--tex4ht:ref: fig:StepsSimple --></a> and for the Manhattan model we use the steps illustrated in Figure <a  href="#x1-160049">9<!--tex4ht:ref: fig:StepsManhattan --></a>. <!--l. 404-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >    <a   id="x1-160038"></a>    <div class="center"  > <!--l. 405-->    ]]></body>
<body><![CDATA[<p class="noindent" >  <!--l. 406-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f8.png" alt="PIC"   ></div>     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;8: </span><span   class="content">Illustration of the navigation steps used during the experiments with the simple models illustrated in Figure <a  href="#x1-160016">6<!--tex4ht:ref: fig:Meshes --></a>.</span></div><!--tex4ht:label?: x1-160038 -->                                                                                                                                                                                     <!--l. 410-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 412-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-160049"></a>                                                                                                                                                                                         <div class="center"  > <!--l. 413-->    <p class="noindent" >  <!--l. 414-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f9.png" alt="PIC"   ></div>     ]]></body>
<body><![CDATA[<br>     <div class="caption"  ><span class="id">Figure&#x00A0;9: </span><span   class="content">Illustration of the navigation steps used during the experiments with the Manhattan model.</span></div><!--tex4ht:label?: x1-160049 -->                                                                                                                                                                                     <!--l. 418-->    <p class="indent" >   </div><hr class="endfigure">    <h4 class="subsectionHead"><span class="titlemark">5.2   </span> <a   id="x1-170005.2"></a>Evaluating resulting data</h4> <!--l. 422-->    <p class="noindent" >During the experiments, we basically verify the following resulting data:      <ul class="itemize1">      <li class="itemize">The mean frame rate (still (A) and navigation (B)) in comparison with using the whole data, without      our technique (C). This kind of metric is a basic measure for any interactive 3D application and is      generally measured in frames per second.      </li>      <li class="itemize">Mean ratio (B) between the potentially visible set and the total number of primitives of the scene      (#Pri). We also analyze the mean ratio of only using view-frustum and back-face culling (A) with the      #Pri, we verify this to identify the proportion of elimination from each culling technique. The values      range is <img  src="/img/revistas/cleiej/v19n1/1a0849x.png" alt="0 - 1  "  class="math" > and can be interpreted as for example: if ratio is equal to <img  src="/img/revistas/cleiej/v19n1/1a0850x.png" alt="0.10  "  class="math" > then this means that      the method eliminates <img  src="/img/revistas/cleiej/v19n1/1a0851x.png" alt="10%  "  class="math" > of the original primitives.      </li>      <li class="itemize">Mean overestimation ratio as proposed by Cohen et. al. 
<span class="cite">[<a  href="#Xcohen03">2</a><a id="br2">]</a></span>, where we compare the ratio between the size of the visible set (VS) and the size of the potentially visible set (PVS); in other words, the ratio is equal to <img  src="/img/revistas/cleiej/v19n1/1a0852x.png" alt="V S&#x2215;P VS  "  class="math" >.</li>    </ul> <!--l. 446-->    <p class="indent" >   Just like the first result data set, we calculate the mean value of the data in the second and third data sets after each scene navigation. For each result data set we also present its standard deviation (<img  src="/img/revistas/cleiej/v19n1/1a0853x.png" alt="&#x03C3;  "  class="math" >). To provide a fair comparison between ratio values, the same sequence of scene navigation steps is used for every experiment executed on each model. <!--l. 449-->    <p class="indent" >   Tables <a  href="#x1-170011">1<!--tex4ht:ref: tab:ResultI --></a> and <a  href="#x1-170022">2<!--tex4ht:ref: tab:ResultII --></a> present the resulting data detailed above, obtained during our experiments. As mentioned previously, the experiments executed on each model use the same sequence of navigation steps and the same grid dimension.        <div class="table">    <!--l. 452-->    <p class="indent" >   <a   id="x1-170011"></a><hr class="float">    <div class="float"  >    ]]></body>
<body><![CDATA[<div class="caption"  ><span class="id">Table&#x00A0;1: </span><span   class="content">Result comparison of the frame rate experiments for each model. We analyse the mean frame rate of the static scene (1.A), the dynamic scene (1.B) and the scene visualized with the whole data (1.C). We also analyse the standard deviation of each experiment.</span></div><!--tex4ht:label?: x1-170011 -->     <div class="pic-tabular"> <img  src="/img/revistas/cleiej/v19n1/1a0854x.png" alt="|------|-------|-------|------|-------|-------|-------| |Model-|--1.A---|&#x03C3;(1.A-)-|-1.B--|&#x03C3;-(1.B)-|--1.C---|&#x03C3;(1.C-)-| |--i---|28.569-|-0.967--|25.773-|-2.591--|18.883-|-2.766--| |--ii--|26.842-|-1.304--|24.962-|-2.215--|19.890-|-0.924--| |--iii--|11.367-|-0.557--|9.756-|-0.860--|-6.221--|-0.245--| |--iv---|-7.161--|-0.410--|6.929-|-0.638--|-5.628--|-0.223--| |--v---|-6.957--|-0.456--|6.566-|-0.703--|-5.364--|-0.326--| ---vi----10.75----0.884----9.85----0.788-----0.6-----0.043--  " ></div>    </div><hr class="endfloat">    </div> <!--l. 475-->    <p class="indent" >   As we can see from Table <a  href="#x1-170011">1<!--tex4ht:ref: tab:ResultI --></a>, the frame rate obtained when visualizing each model using the visualization structure is better than when using the whole data of the model. This result only ascertains that the structure is working properly; the important result from this series of experiments is whether the recalculation of the visible set affects the frame rate. By comparing columns 1.A and 1.B we can see that the recalculation step executed along the navigation sequence does not affect the frame rate much, lowering it at most by <img  src="/img/revistas/cleiej/v19n1/1a0855x.png" alt="15%  "  class="math" >.        
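The ratio metrics evaluated in these tables can be written out directly. The numbers below are our own toy values for illustration, not values from the tables:

```python
def primitive_ratio(subset_size, total_primitives):
    """Ratio of a primitive subset (e.g. the PVS) to the scene total #Pri,
    in the 0-1 range used in columns 2.A and 2.B."""
    return subset_size / total_primitives


def overestimation_ratio(vs_size, pvs_size):
    """Cohen et al.'s VS / PVS ratio (column 3): 1.0 means the potentially
    visible set contains no primitive that is actually invisible."""
    return vs_size / pvs_size


# Toy scene: 100k primitives, an 11k-primitive PVS of which 10k are visible.
total, pvs, vs = 100_000, 11_000, 10_000
print(primitive_ratio(pvs, total))               # 0.11
print(round(overestimation_ratio(vs, pvs), 3))   # 0.909
```

Reading the tables with these definitions, a column-3 value near 1 indicates a tight PVS, while lower values indicate more overestimation passed on to the rendering stage.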
<div class="table">    <!--l. 485-->    <p class="indent" >   <a   id="x1-170022"></a><hr class="float">    <div class="float"  >    <div class="caption"  ><span class="id">Table&#x00A0;2: </span><span   class="content">Comparison between the mean ratio (and standard deviation) of the set of primitives removed by the <span  class="cmti-10">view-frustum </span>and <span  class="cmti-10">back-face culling </span>in relation to <img  src="/img/revistas/cleiej/v19n1/1a0856x.png" alt="#P ri  "  class="math" > (<img  src="/img/revistas/cleiej/v19n1/1a0857x.png" alt="R¯V F+BF &#x2215;P RI  "  class="math" >) and the mean ratio (and standard deviation) of the PVS in relation to <img  src="/img/revistas/cleiej/v19n1/1a0858x.png" alt="#P ri  "  class="math" > (<img  src="/img/revistas/cleiej/v19n1/1a0859x.png" alt="R¯P VS&#x2215;PRI  "  class="math" >). 
We also analyze the ratio between the number of primitives that are really visible and the size of the potentially visible set of primitives for each experiment (<img  src="/img/revistas/cleiej/v19n1/1a0860x.png" alt="&#x03C3;(¯RVS&#x2215;PV S)  "  class="math" >).</span></div><!--tex4ht:label?: x1-170022 -->     <div class="pic-tabular"> <img  src="/img/revistas/cleiej/v19n1/1a0861x.png" alt="|------|------|-------|-----|-------|------|-----| |Model-|-2.A--|&#x03C3;(2.A-)-|-2.B--|&#x03C3;-(2.B)-|--3---|&#x03C3;-(3)-| |--i---|0.459-|-0.005--|0.111-|-0.001-|0.913-|0.058-| |--ii--|0.452-|-0.002--|0.291-|-0.003-|0.957-|0.032-| |--iii--|0.411-|-0.004--|0.165-|-0.002-|0.894-|0.075-| |--iv---|0.467-|-0.002--|0.132-|-0.003-|0.914-|0.067-| |--v---|0.401-|-0.002--|0.215-|-0.003-|0.856-|0.077-| ---vi---0.726---0.015---0.204---0.031--0.821--0.061--  " ></div>    </div><hr class="endfloat">    </div> <!--l. 508-->    <p class="indent" >   In Table <a  href="#x1-170022">2<!--tex4ht:ref: tab:ResultII --></a>, we can see that the reduction of data used in the visualization stage is quite significant. Although the back-face culling did most of the hidden primitive removal, the occlusion culling still removes a reasonable amount of hidden primitives. In the case of the vase, due to its concave shape, the occlusion culling removes a higher proportion of primitives: it removes <img  src="/img/revistas/cleiej/v19n1/1a0862x.png" alt="29.1%  "  class="math" >, compared with the other models, where a rate lower than <img  src="/img/revistas/cleiej/v19n1/1a0863x.png" alt="22%  "  class="math" > is removed. <!--l. 
517-->    <p class="indent" >   From these results, we can figure out that, since the models that we use in each experiment are single object meshes, the occlusion culling does not remove as much primitives because of the actual lack of occlusion that occurs in the respective scene. In the same manner, the overestimation ratio is low due to the use of the single object meshes. At most the possible visible set is higher than the visible set by <img  src="/img/revistas/cleiej/v19n1/1a0864x.png" alt="15%  "  class="math" >. <!--l. 525-->    ]]></body>
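The back-face stage and the overestimation ratio discussed above can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the vectors, sample normals and primitive counts are hypothetical stand-ins.

```python
# Illustrative sketch (hypothetical data, not the paper's code): a basic
# back-face test and the PVS overestimation ratio reported in the tables.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_front_facing(normal, view_dir):
    # A face is culled when its normal points away from the viewer,
    # i.e. when it agrees with the view direction (dot product > 0).
    return dot(normal, view_dir) < 0.0

def overestimation_ratio(pvs_size, vs_size):
    # The PVS must contain every truly visible primitive, so this is >= 1;
    # 1.15 means the PVS is 15% larger than the really visible set.
    return pvs_size / vs_size

view_dir = (0.0, 0.0, 1.0)  # viewer looking down +z
normals = [(0.0, 0.0, -1.0), (0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
front = [n for n in normals if is_front_facing(n, view_dir)]
print(len(front))                      # 2 faces survive back-face culling
print(overestimation_ratio(230, 200))  # 1.15 -> 15% overestimation
```

The same ratio computation underlies the at-most-15% overestimation figure quoted in the text.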
<body><![CDATA[<p class="indent" >   When we analyze the overestimation ratio of our method in comparison with previous works, we find that the potentially visible set has a roughly similar ratio, ranging from <img  src="/img/revistas/cleiej/v19n1/1a0865x.png" alt="3- 9  "  class="math" >. For the three biggest models used during the experiments, the preprocessing stage took <img  src="/img/revistas/cleiej/v19n1/1a0866x.png" alt="60- 80  "  class="math" > seconds to execute. Though we report this as a result of the visualization structure, the execution of this stage does not affect the visualization stage itself. And, in many cases, once the precomputation stage has been executed, the application can simply store the data obtained.    <h4 class="subsectionHead"><span class="titlemark">5.3   </span> <a   id="x1-180005.3"></a>Visual quality assessment</h4> <!--l. 538-->    <p class="noindent" >Besides testing the efficiency of the structure parameters, it is also important to assess the visual quality of the graphic objects and scenes. In this case, we want not only to visually check for possible missing primitives (holes) but also to use the same ray-tracing-based algorithm to verify that no ray cast from the point of view misses a primitive it should have intercepted. We performed extensive visualizations over the data to verify this, and no such error occurred in our tests. <!--l. 540-->    <p class="indent" >   Another quality test we executed is the application of our structure in a Virtual Cave, visualizing from different points of view. As illustrated in Figure <a  href="#x1-1800110">10<!--tex4ht:ref: fig:CaveManhatan --></a>, we simulated this scenario by composing our scene using three projection planes. 
In this case, we once again not only tested the structure&#8217;s efficiency in individually determining the visible primitive set for each point of view, but also assessed its visual quality. <!--l. 542-->    <p class="indent" >   <hr class="figure">    <div class="figure"  > <a   id="x1-1800110"></a> <div class="center"  > <!--l. 543-->    <p class="noindent" >  <!--l. 544-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f10.png" alt="PIC"   ></div>     <br>     <div class="caption"  ><span class="id">Figure&#x00A0;10: </span><span   class="content">Illustration of the visualization of the Manhattan Model using three points of view (thus simulating the visualization in a Virtual Cave).</span></div><!--tex4ht:label?: x1-1800110 -->     <!--l. 548-->    ]]></body>
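The hole check described in the visual quality assessment can be sketched as a simple membership test: every ray from the viewpoint that hits the full model must hit a primitive that is also in the PVS. A minimal sketch, assuming a hypothetical `cast_ray` callable that returns the id of the first primitive hit (or `None`):

```python
# Hedged sketch of the hole check: if any ray hits a primitive that the
# culling left out of the PVS, a visible primitive was wrongly removed
# (a "hole" would appear in the rendered image). cast_ray is hypothetical.

def has_holes(rays, cast_ray, pvs):
    for ray in rays:
        hit = cast_ray(ray)  # id of first primitive hit, or None
        if hit is not None and hit not in pvs:
            return True      # a visible primitive is missing from the PVS
    return False

# Toy scene: three rays hitting primitives 1, 2 and nothing.
hits = {0: 1, 1: 2, 2: None}
print(has_holes([0, 1, 2], hits.get, pvs={1, 2}))  # False: no holes
print(has_holes([0, 1, 2], hits.get, pvs={1}))     # True: primitive 2 missed
```

In the Cave scenario, the same test is simply repeated once per point of view against that view's PVS.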
<body><![CDATA[<p class="indent" >   </div><hr class="endfigure">    <h3 class="sectionHead"><span class="titlemark">6   </span> <a   id="x1-190006"></a>Conclusion</h3> <!--l. 553-->    <p class="noindent" >We introduce a new visibility paradigm based on the <img  src="/img/revistas/cleiej/v19n1/1a0867x.png" alt="Ja  1  "  class="math" > triangulation structure that avoids the rendering of too many unnecessary primitives in a 3D scene. This structure is capable of executing culling operations so that the rendering stage deals with as few primitives of the scene as possible. To do that, we propose to execute the culling by combining the paradigms of view-frustum, back-face and occlusion culling, all using the <img  src="/img/revistas/cleiej/v19n1/1a0868x.png" alt="Ja  1  "  class="math" > triangulation as the spatial data structure. To our knowledge, this approach is new, and a direct comparison with existing approaches is not feasible because our objective (real-time, interactive visualization of massive and large scenarios through the web) differs from theirs. We also believe that this approach to occlusion culling differs from previously known works. Results have shown a substantial improvement over the traditional approaches when these are applied separately and without the <img  src="/img/revistas/cleiej/v19n1/1a0869x.png" alt="Ja  1  "  class="math" > spatial data structure. Regarding applicability, this novel approach can be used in devices with no dedicated processors or with low processing power, or to provide data for visualization through the Internet. In our lab, it has been applied in virtual museum applications. <!--l. 576-->    <p class="indent" >   In the very short term, we intend to better explore the <img  src="/img/revistas/cleiej/v19n1/1a0870x.png" alt="Ja  1  "  class="math" > structure&#8217;s adaptive feature to improve the preprocessing stage. 
That is, by having smaller blocks in certain regions of the scene, there will be a higher number of fully filled face blocks, which makes the adjacent-block occlusion culling work better. With better occlusion culling, there will be less overestimation. <!--l. 583-->    <p class="indent" >   We believe that this visualization structure is useful not only in the context of real-time rendering, but also in other applications such as on-line virtual environments. In this case, data from the environment is obtained on-line, which should allow the application to send only the data the user needs to view, without sending the whole scene. With this in mind, we would also like to apply our technique in this scenario. <!--l. 592-->    <p class="indent" >   Applications that need on-line navigation in real-time, such as the virtual museum depicted in Figure <a  href="#x1-1900111">11<!--tex4ht:ref: fig:museum --></a>, are becoming more and more common. As they need a substantial number of primitives to be visualized in real-time, the proposed solution can be applied to them. In fact, we observe that several traditional techniques have problems with this kind of application, such as the complexity of occlusion removal when analysing the whole scene, or the number of calculations needed to determine where the primitives are pointing or whether they are inside the viewing frustum. Here we have treated these non-trivial cases, mainly scenes with massive data or with a single complex model, with simplicity. In the museum case shown in Figure <a  href="#x1-1900111">11<!--tex4ht:ref: fig:museum --></a>, for example, one can find several complex sculptures. This scene is part of a Virtual Museum system developed by our team (the GTMV project <span class="cite">[<a  href="#Xgtmv">29</a><a id="br29">]</a></span>). 
Based on the current visualization model, we could speed up the processing in the GTMV project. <!--l. 594-->    <p class="indent" >   <hr class="figure">    <div class="figure"  >                                                                                                                                                                                     <a   id="x1-1900111"></a>                                                                                                                                                                                         <div class="center"  > <!--l. 595-->    <p class="noindent" >  <!--l. 596-->    <p class="noindent" ><img  src="/img/revistas/cleiej/v19n1/1a08f11.png" alt="PIC"   ></div>     ]]></body>
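The hybrid culling order stated in the conclusion (view-frustum, then back-face, then occlusion) can be sketched as a sequence of filters: a primitive enters the PVS only if it survives all three tests. The predicates below are hypothetical stand-ins, not the paper's Ja1-based implementations.

```python
# Minimal sketch of the hybrid culling pipeline described above.
# in_frustum, faces_viewer and is_occluded are hypothetical predicates;
# in the paper each stage is accelerated by the Ja1 triangulation.

def build_pvs(primitives, in_frustum, faces_viewer, is_occluded):
    pvs = []
    for prim in primitives:
        if not in_frustum(prim):    # 1. view-frustum culling
            continue
        if not faces_viewer(prim):  # 2. back-face culling
            continue
        if is_occluded(prim):       # 3. occlusion culling
            continue
        pvs.append(prim)
    return pvs

# Toy run: primitives tagged with (name, inside_frustum, front_facing, occluded).
prims = [("a", True, True, False), ("b", False, True, False),
         ("c", True, False, False), ("d", True, True, True)]
pvs = build_pvs(prims,
                in_frustum=lambda p: p[1],
                faces_viewer=lambda p: p[2],
                is_occluded=lambda p: p[3])
print([p[0] for p in pvs])  # only "a" survives all three stages
```

Ordering the cheap tests first (frustum, back-face) means the more expensive occlusion test runs on the fewest primitives, which is the point of combining the three paradigms.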
<body><![CDATA[<br>     <div class="caption"  ><span class="id">Figure&#x00A0;11: </span><span   class="content">Visualization of a virtual scene in a museum, inside the GTMV project. This simulation runs through the Internet, providing a visit to a virtual museum facility, including interaction with other visitors.</span></div><!--tex4ht:label?: x1-1900111 -->     <!--l. 600-->    <p class="indent" >   </div><hr class="endfigure"> <!--l. 602-->    <p class="indent" >   The next step for this work is to determine the texture level of detail. For this, we will use the basic grid to calculate the level of detail of the textures in each grid block, based on the algebraic block distance and its refinement level. Early results have so far shown this to be promising. Again, we hope to use this technique for both data visualization and transmission. Regarding the transmission of models such as the Manhattan Model, the simple reduction of graphic mesh data won&#8217;t be enough, since its texture sizes are 4096x4096.    <h3 class="likesectionHead"><a   id="x1-200006"></a>Acknowledgment</h3> <!--l. 614-->    <p class="noindent" >The authors would like to thank the Brazilian sponsoring agencies CNPq and CAPES for the grants of Ícaro L. L. da Cunha and Luiz M. G. Gonçalves. <!--l. 621-->    <p class="noindent" >    <h3 class="likesectionHead"><a   id="x1-210006"></a>References</h3> <!--l. 621-->    <p class="noindent" >         <div class="thebibliography">         <p class="bibitem" ><span class="biblabel">   [<a href="#br1">1</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCastelo06"></a>A.&#x00A0;Castelo, L.&#x00A0;G. 
Nonato, M.&#x00A0;Siqueira, R.&#x00A0;Minghim, and G.&#x00A0;Tavares, &#8220;The <img  src="/img/revistas/cleiej/v19n1/1a0871x.png" alt="Ja1  "  class="math" > triangulation: An adaptive triangulation in any dimension,&#8221; <span  class="cmti-10">Computers &amp; Graphics</span>, vol.&#x00A0;30, no.&#x00A0;5, pp. 737&#8211;753, 2006. DOI:<a  href="http://dx.doi.org/10.1016/j.cag.2006.07.025" class="url" >http://dx.doi.org/10.1016/j.cag.2006.07.025</a>     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br2">2</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xcohen03"></a>D.&#x00A0;Cohen-Or, Y.&#x00A0;L. Chrysanthou, C.&#x00A0;T. Silva, and F.&#x00A0;Durand, &#8220;A survey of visibility for walkthrough applications,&#8221; <span  class="cmti-10">IEEE Transactions on Visualization and Computer Graphics</span>, vol.&#x00A0;9, no.&#x00A0;3, 2003.     </p>         ]]></body>
<body><![CDATA[<p class="bibitem" ><span class="biblabel">   [<a href="#br3">3</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xjones71"></a>C.&#x00A0;B. Jones, &#8220;A new approach to the &#8217;hidden line&#8217; problem,&#8221; <span  class="cmti-10">The Computer Journal</span>, vol.&#x00A0;14, no.&#x00A0;3,     pp. 232&#8211;237, 1971.     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br4">4</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xclark76"></a>J.&#x00A0;H. Clark, &#8220;Hierarchical geometric models for visible surface algorithms,&#8221; <span  class="cmti-10">Commun. ACM</span>, vol.&#x00A0;19,     no.&#x00A0;10, pp. 547&#8211;554, Oct. 1976.     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br5">5</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XMeagher83"></a>D.&#x00A0;Meagher, &#8220;Efficient synthetic image generation of arbitrary 3-d objects,&#8221; <span  class="cmti-10">IEEE Computer Society</span>     <span  class="cmti-10">Conference on Pattern Recognition and Image Processing</span>, pp. 473&#8211;478, 1982.     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br6">6</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XAirey90"></a>J.&#x00A0;M. Airey, J.&#x00A0;H. Rohlf, and F.&#x00A0;P. Brooks, Jr., &#8220;Towards image realism with interactive update     rates in complex virtual building environments,&#8221; <span  class="cmti-10">SIGGRAPH Comput. Graph.</span>, vol.&#x00A0;24, no.&#x00A0;2, pp. 41&#8211;50,     Feb. 1990.     </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br7">7</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xteller92"></a>S.&#x00A0;J. Teller, &#8220;Visibility computations in densely occluded polyhedral environments,&#8221; Berkeley, CA,     USA, Tech. Rep., 1992.     
</p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br8">8</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xteller91"></a>S.&#x00A0;J. Teller and C.&#x00A0;H. Séquin, &#8220;Visibility preprocessing for interactive walkthroughs,&#8221; <span  class="cmti-10">SIGGRAPH</span>     <span  class="cmti-10">Comput. Graph.</span>, vol.&#x00A0;25, no.&#x00A0;4, pp. 61&#8211;70, Jul. 1991.                                                                                                                                                                                         </p>         <p class="bibitem" ><span class="biblabel">   [<a href="#br9">9</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xgreene93"></a>N.&#x00A0;Greene, M.&#x00A0;Kass, and G.&#x00A0;Miller, &#8220;Hierarchical z-buffer visibility,&#8221; in <span  class="cmti-10">Proceedings of the 20th</span>     <span  class="cmti-10">annual conference on Computer graphics and interactive techniques</span>, ser. SIGGRAPH &#8217;93.   New York,     NY, USA: ACM, 1993, pp. 231&#8211;238.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br10">10</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xfoley90"></a>J.&#x00A0;D.  Foley,  A.&#x00A0;van  Dam,  S.&#x00A0;K.  Feiner,  and  J.&#x00A0;F.  Hughes,  <span  class="cmti-10">Computer graphics: principles and</span>     <span  class="cmti-10">practice (2nd ed.)</span>.   Addison-Wesley Longman Publishing Co., Inc., 1990.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br11">11</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xgarlick90"></a>B.&#x00A0;Garlick, D.&#x00A0;D.&#x00A0;Baum, and J.&#x00A0;Winget, &#8220;Interactive viewing of large geometric data bases using     multiprocessor graphics workstations,&#8221; pp. 239&#8211;245, 1990.     
</p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br12">12</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xkumar96"></a>S.&#x00A0;Kumar,  D.&#x00A0;Manocha,  W.&#x00A0;Garrett,  and  M.&#x00A0;Lin,  &#8220;Hierarchical  back-face  computation,&#8221;  in     <span  class="cmti-10">Proceedings of the eurographics workshop on Rendering techniques &#8217;96</span>.   London, UK: Springer-Verlag,     1996, pp. 235&#8211;244.     </p>         ]]></body>
<body><![CDATA[<p class="bibitem" ><span class="biblabel">  [<a href="#br13">13</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xassarsson00"></a>U.&#x00A0;Assarsson and T.&#x00A0;Müller, &#8220;Optimized view frustum culling algorithms for bounding boxes,&#8221; <span  class="cmti-10">Journal of Graphics Tools</span>, vol.&#x00A0;5, pp. 9&#8211;22, 2000.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br14">14</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xslater97"></a>M.&#x00A0;Slater and Y.&#x00A0;Chrysanthou, &#8220;View volume culling using a probabilistic caching scheme,&#8221; <span  class="cmti-10">ACM Virtual Reality Software and Technology VRST&#8217;97</span>, pp. 71&#8211;78, 1997.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br15">15</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XBittner09"></a>J.&#x00A0;Bittner, O.&#x00A0;Mattausch, P.&#x00A0;Wonka, V.&#x00A0;Havran, and M.&#x00A0;Wimmer, &#8220;Adaptive global visibility sampling,&#8221; in <span  class="cmti-10">ACM SIGGRAPH 2009 Papers</span>, ser. SIGGRAPH &#8217;09, 2009, pp. 94:1&#8211;94:10. DOI: <a  href="http://dx.doi.org/10.1145/1576246.1531400" class="url" >http://dx.doi.org/10.1145/1576246.1531400</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br16">16</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XChandak09"></a>A.&#x00A0;Chandak, L.&#x00A0;Antani, M.&#x00A0;Taylor, and D.&#x00A0;Manocha, &#8220;Fastv: From-point visibility culling on complex models,&#8221; in <span  class="cmti-10">Proceedings of the Twentieth Eurographics Conference on Rendering</span>, ser. EGSR&#8217;09, 2009, pp. 1237&#8211;1246. 
DOI: <a  href="http://dx.doi.org/10.1111/j.1467-8659.2009.01501.x" class="url" >http://dx.doi.org/10.1111/j.1467-8659.2009.01501.x</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br17">17</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XTian10"></a>F.&#x00A0;Tian, W.&#x00A0;Hua, Z.&#x00A0;Dong, and H.&#x00A0;Bao, &#8220;Adaptive voxels: Interactive rendering of massive 3d models,&#8221; <span  class="cmti-10">Vis. Comput.</span>, vol.&#x00A0;26, no. 6-8, pp. 409&#8211;419, 2010. DOI: <a  href="http://dx.doi.org/10.1007/s00371-010-0465-7" class="url" >http://dx.doi.org/10.1007/s00371-010-0465-7</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br18">18</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xantani2010"></a>L.&#x00A0;Antani, A.&#x00A0;Chandak, M.&#x00A0;Taylor, and D.&#x00A0;Manocha, &#8220;Fast geometric sound propagation with finite edge diffraction,&#8221; Technical Report TR10-011, University of North Carolina at Chapel Hill, 2010.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br19">19</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XBala99a"></a>K.&#x00A0;Bala, J.&#x00A0;Dorsey, and S.&#x00A0;Teller, &#8220;Radiance interpolants for accelerated bounded-error ray tracing,&#8221; <span  class="cmti-10">ACM Trans. Graph.</span>, vol.&#x00A0;18, no.&#x00A0;3, pp. 213&#8211;256, Jul. 1999.     
</p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br20">20</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XBala99b"></a>&#8212;&#8212;, &#8220;Interactive ray-traced scene editing using ray segment trees,&#8221; in <span  class="cmti-10">Proceedings of the 10th Eurographics conference on Rendering</span>, ser. EGWR&#8217;99, 1999, pp. 31&#8211;44.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br21">21</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCohen-or95"></a>D.&#x00A0;Cohen-Or and A.&#x00A0;Shaked, &#8220;Visibility and dead-zones in digital terrain maps,&#8221; <span  class="cmti-10">Computer Graphics Forum</span>, vol.&#x00A0;14, pp. 171&#8211;180, 1995.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br22">22</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XCohen-Or96"></a>D.&#x00A0;Cohen-Or, E.&#x00A0;Rich, U.&#x00A0;Lerner, and V.&#x00A0;Shenkar, &#8220;A real-time photo-realistic visual flythrough,&#8221; <span  class="cmti-10">IEEE Transactions on Visualization and Computer Graphics</span>, vol.&#x00A0;2, pp. 255&#8211;265, 1996.     </p>         ]]></body>
<body><![CDATA[<p class="bibitem" ><span class="biblabel">  [<a href="#br23">23</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XParker99"></a>S.&#x00A0;Parker, W.&#x00A0;Martin, P.-P.&#x00A0;Sloan, P.&#x00A0;Shirley, B.&#x00A0;Smits, and C.&#x00A0;Hansen, &#8220;Interactive ray tracing,&#8221; in <span  class="cmti-10">Symposium on Interactive 3D Graphics</span>, 1999, pp. 119&#8211;126.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br24">24</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XGreene94"></a>N.&#x00A0;Greene and M.&#x00A0;Kass, &#8220;Error-bounded antialiased rendering of complex environments,&#8221; in <span  class="cmti-10">Proceedings of the 21st annual conference on Computer graphics and interactive techniques</span>, ser. SIGGRAPH &#8217;94.   ACM, 1994, pp. 59&#8211;66.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br25">25</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XGreene99"></a>N.&#x00A0;Greene, &#8220;Occlusion culling with optimized hierarchical z-buffering,&#8221; in <span  class="cmti-10">ACM SIGGRAPH Visual Proceedings</span>, 1999.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br26">26</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XGreene01b"></a>&#8212;&#8212;, &#8220;A quality knob for non-conservative culling with hierarchical z-buffering,&#8221; 2001.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br27">27</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="XLuebke95"></a>D.&#x00A0;Luebke and C.&#x00A0;Georges, &#8220;Portals and mirrors: simple, fast evaluation of potentially visible sets,&#8221; in <span  class="cmti-10">Proceedings of the 1995 symposium on Interactive 3D graphics</span>, ser. I3D &#8217;95.  ACM, 1995, pp. 
105&#8211;106.     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br28">28</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xde2013improved"></a>P.&#x00A0;R. de&#x00A0;Carvalho&#x00A0;Jr, M.&#x00A0;C. dos Santos, W.&#x00A0;R. Schwartz, and H.&#x00A0;Pedrini, &#8220;An improved view frustum culling method using octrees for 3d real-time rendering,&#8221; <span  class="cmti-10">International Journal of Image and Graphics</span>, vol.&#x00A0;13, no.&#x00A0;03, p. 1350009, 2013. DOI: <a  href="http://dx.doi.org/10.1142/S0219467813500095" class="url" >http://dx.doi.org/10.1142/S0219467813500095</a>     </p>         <p class="bibitem" ><span class="biblabel">  [<a href="#br29">29</a>]<span class="bibsp">&#x00A0;&#x00A0;&#x00A0;</span></span><a   id="Xgtmv"></a>R.&#x00A0;R. Dantas, A.&#x00A0;M.&#x00A0;F. Burlamaqui, S.&#x00A0;Azevedo, J.&#x00A0;Melo, A.&#x00A0;A.&#x00A0;S. Souza, C.&#x00A0;Schneider, J.&#x00A0;Xavier, and L.&#x00A0;M.&#x00A0;G. Gonçalves, &#8220;Gtmv: Virtual museum authoring systems,&#8221; <span  class="cmti-10">Proceedings of IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems</span>, vol.&#x00A0;11, no.&#x00A0;3, pp. 1&#8211;6, 2009. DOI: <a  href="http://dx.doi.org/10.1109/VECIMS.2009.5068879" class="url" >http://dx.doi.org/10.1109/VECIMS.2009.5068879</a> </p>     </div>           ]]></body><back>
<ref-list>
<ref id="B1">
<label>1</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Castelo]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Nonato]]></surname>
<given-names><![CDATA[L. G.]]></given-names>
</name>
<name>
<surname><![CDATA[Siqueira]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Minghim]]></surname>
<given-names><![CDATA[R]]></given-names>
</name>
<name>
<surname><![CDATA[Tavares]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[The <img width=32 height=32 id="_x0000_i1104" src="../../../../../img/revistas/cleiej/v19n1/1a0871x.png" alt="Ja1 " class=math>triangulation: An adaptive triangulation in any dimension]]></article-title>
<source><![CDATA[Computers & Graphics]]></source>
<year>2006</year>
<volume>30</volume>
<numero>5</numero>
<issue>5</issue>
<page-range>737-753</page-range></nlm-citation>
</ref>
<ref id="B2">
<label>2</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cohen-Or]]></surname>
<given-names><![CDATA[D.]]></given-names>
</name>
<name>
<surname><![CDATA[Chrysanthou]]></surname>
<given-names><![CDATA[Y. L]]></given-names>
</name>
<name>
<surname><![CDATA[Silva]]></surname>
<given-names><![CDATA[C. T.]]></given-names>
</name>
<name>
<surname><![CDATA[Durand]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A survey of visibility for walkthrough applications]]></article-title>
<source><![CDATA[IEEE Transactions on Visualization and Computer Graphics]]></source>
<year>2003</year>
<volume>9</volume>
<numero>3</numero>
<issue>3</issue>
</nlm-citation>
</ref>
<ref id="B3">
<label>3</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Jones]]></surname>
<given-names><![CDATA[C. B.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A new approach to the &#8217;hidden line&#8217; problem]]></article-title>
<source><![CDATA[The Computer Journal]]></source>
<year>1971</year>
<volume>14</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>232-237</page-range></nlm-citation>
</ref>
<ref id="B4">
<label>4</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Clark]]></surname>
<given-names><![CDATA[J. H.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Hierarchical geometric models for visible surface algorithms]]></article-title>
<source><![CDATA[Commun. ACM]]></source>
<year>1976</year>
<month>10</month>
<volume>19</volume>
<numero>10</numero>
<issue>10</issue>
<page-range>547-554</page-range></nlm-citation>
</ref>
<ref id="B5">
<label>5</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Meagher]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Efficient synthetic image generation of arbitrary 3-d objects]]></article-title>
<source><![CDATA[]]></source>
<year>1982</year>
<conf-name><![CDATA[ IEEE Computer Society Conference on Pattern Recognition and Image Processing]]></conf-name>
<conf-loc> </conf-loc>
<page-range>473-478</page-range></nlm-citation>
</ref>
<ref id="B6">
<label>6</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Airey]]></surname>
<given-names><![CDATA[J. M.]]></given-names>
</name>
<name>
<surname><![CDATA[Rohlf]]></surname>
<given-names><![CDATA[J. H.]]></given-names>
</name>
<name>
<surname><![CDATA[Brooks]]></surname>
<given-names><![CDATA[F. P.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Towards image realism with interactive update rates in complex virtual building environments]]></article-title>
<source><![CDATA[SIGGRAPH Comput. Graph]]></source>
<year>1990</year>
<month>02</month>
<volume>24</volume>
<numero>2</numero>
<issue>2</issue>
<page-range>41-50</page-range></nlm-citation>
</ref>
<ref id="B7">
<label>7</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Teller]]></surname>
<given-names><![CDATA[S. J]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Visibility computations in densely occluded polyhedral environments]]></article-title>
<source><![CDATA[]]></source>
<year>1992</year>
<publisher-loc><![CDATA[Berkeley, CA]]></publisher-loc>
</nlm-citation>
</ref>
<ref id="B8">
<label>8</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Teller]]></surname>
<given-names><![CDATA[S. J]]></given-names>
</name>
<name>
<surname><![CDATA[Séquin]]></surname>
<given-names><![CDATA[C. H.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Visibility preprocessing for interactive walkthroughs]]></article-title>
<source><![CDATA[SIGGRAPH Comput. Graph]]></source>
<year>1991</year>
<month>07</month>
<volume>25</volume>
<numero>4</numero>
<issue>4</issue>
<page-range>61-70</page-range></nlm-citation>
</ref>
<ref id="B9">
<label>9</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Greene]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
<name>
<surname><![CDATA[Kass]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Miller]]></surname>
<given-names><![CDATA[G]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Hierarchical z-buffer visibility]]></article-title>
<source><![CDATA[Proceedings of the 20th annual conference on Computer graphics and interactive techniques]]></source>
<year>1993</year>
<page-range>231-238</page-range><publisher-loc><![CDATA[New York, NY]]></publisher-loc>
<publisher-name><![CDATA[ACM]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B10">
<label>10</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Foley]]></surname>
<given-names><![CDATA[J. D.]]></given-names>
</name>
<name>
<surname><![CDATA[van Dam]]></surname>
<given-names><![CDATA[A.]]></given-names>
</name>
<name>
<surname><![CDATA[Feiner]]></surname>
<given-names><![CDATA[S. K.]]></given-names>
</name>
<name>
<surname><![CDATA[Hughes]]></surname>
<given-names><![CDATA[J. F.]]></given-names>
</name>
</person-group>
<source><![CDATA[Computer graphics: principles and practice]]></source>
<year>1990</year>
<edition>2nd</edition>
<publisher-name><![CDATA[Addison-Wesley Longman Publishing Co., Inc.]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B11">
<label>11</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Garlick]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
<name>
<surname><![CDATA[Baum]]></surname>
<given-names><![CDATA[D. D.]]></given-names>
</name>
<name>
<surname><![CDATA[Winget]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
</person-group>
<source><![CDATA[Interactive viewing of large geometric data bases using multiprocessor graphics workstations]]></source>
<year>1990</year>
<page-range>239-245</page-range></nlm-citation>
</ref>
<ref id="B12">
<label>12</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Kumar]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Manocha]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Garrett]]></surname>
<given-names><![CDATA[W]]></given-names>
</name>
<name>
<surname><![CDATA[Lin]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Hierarchical back-face computation]]></article-title>
<source><![CDATA[Proceedings of the eurographics workshop on Rendering techniques &#8217;96]]></source>
<year>1996</year>
<page-range>235-244</page-range><publisher-loc><![CDATA[London, UK]]></publisher-loc>
<publisher-name><![CDATA[Springer-Verlag]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B13">
<label>13</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Assarsson]]></surname>
<given-names><![CDATA[U]]></given-names>
</name>
<name>
<surname><![CDATA[Möller]]></surname>
<given-names><![CDATA[T]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Optimized view frustum culling algorithms for bounding boxes]]></article-title>
<source><![CDATA[Journal of Graphics Tools]]></source>
<year>2000</year>
<volume>5</volume>
<page-range>9-22</page-range></nlm-citation>
</ref>
<ref id="B14">
<label>14</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Slater]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Chrysanthou]]></surname>
<given-names><![CDATA[Y]]></given-names>
</name>
</person-group>
<source><![CDATA[View volume culling using a probabilistic caching scheme]]></source>
<year>1997</year>
<page-range>71-78</page-range></nlm-citation>
</ref>
<ref id="B15">
<label>15</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bittner]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Mattausch]]></surname>
<given-names><![CDATA[O]]></given-names>
</name>
<name>
<surname><![CDATA[Wonka]]></surname>
<given-names><![CDATA[P]]></given-names>
</name>
<name>
<surname><![CDATA[Havran]]></surname>
<given-names><![CDATA[V]]></given-names>
</name>
<name>
<surname><![CDATA[Wimmer]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Adaptive global visibility sampling]]></article-title>
<source><![CDATA[ACM Trans. Graph]]></source>
<year>2009</year>
<page-range>94:1-94:10</page-range></nlm-citation>
</ref>
<ref id="B16">
<label>16</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Chandak]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Antani]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Taylor]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Manocha]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[FastV: From-point visibility culling on complex models]]></article-title>
<source><![CDATA[]]></source>
<year>2009</year>
<page-range>1237-1246</page-range></nlm-citation>
</ref>
<ref id="B17">
<label>17</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Tian]]></surname>
<given-names><![CDATA[F]]></given-names>
</name>
<name>
<surname><![CDATA[Hua]]></surname>
<given-names><![CDATA[W]]></given-names>
</name>
<name>
<surname><![CDATA[Dong]]></surname>
<given-names><![CDATA[Z]]></given-names>
</name>
<name>
<surname><![CDATA[Bao]]></surname>
<given-names><![CDATA[H]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Adaptive voxels: Interactive rendering of massive 3d models]]></article-title>
<source><![CDATA[Vis. Comput]]></source>
<year>2010</year>
<volume>26</volume>
<numero>6-8</numero>
<issue>6-8</issue>
<page-range>409-419</page-range></nlm-citation>
</ref>
<ref id="B18">
<label>18</label><nlm-citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Antani]]></surname>
<given-names><![CDATA[L]]></given-names>
</name>
<name>
<surname><![CDATA[Chandak]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
<name>
<surname><![CDATA[Taylor]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
<name>
<surname><![CDATA[Manocha]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
</person-group>
<source><![CDATA[Fast geometric sound propagation with finite edge diffraction]]></source>
<year>2010</year>
<publisher-name><![CDATA[University of North Carolina at Chapel Hill]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B19">
<label>19</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Bala]]></surname>
<given-names><![CDATA[K]]></given-names>
</name>
<name>
<surname><![CDATA[Dorsey]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Teller]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Radiance interpolants for accelerated bounded-error ray tracing]]></article-title>
<source><![CDATA[ACM Trans. Graph]]></source>
<year>1999</year>
<month>07</month>
<volume>18</volume>
<numero>3</numero>
<issue>3</issue>
<page-range>213-256</page-range></nlm-citation>
</ref>
<ref id="B20">
<label>20</label><nlm-citation citation-type="confpro">
<article-title xml:lang="en"><![CDATA[Interactive ray-traced scene editing using ray segment trees]]></article-title>
<source><![CDATA[]]></source>
<year>1999</year>
<conf-name><![CDATA[ 10th Eurographics conference on Rendering]]></conf-name>
<conf-loc> </conf-loc>
<page-range>31-44</page-range><publisher-name><![CDATA[EGWR&#8217;99]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B21">
<label>21</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cohen]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Shaked]]></surname>
<given-names><![CDATA[A]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Visibility and dead-zones in digital terrain maps]]></article-title>
<source><![CDATA[Computer Graphics Forum]]></source>
<year>1995</year>
<volume>14</volume>
<page-range>171-180</page-range></nlm-citation>
</ref>
<ref id="B22">
<label>22</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Cohen]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Rich]]></surname>
<given-names><![CDATA[E]]></given-names>
</name>
<name>
<surname><![CDATA[Lerner]]></surname>
<given-names><![CDATA[U]]></given-names>
</name>
<name>
<surname><![CDATA[Shenkar]]></surname>
<given-names><![CDATA[V]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[A real-time photo-realistic visual flythrough]]></article-title>
<source><![CDATA[IEEE Transactions on Visualization and Computer Graphics]]></source>
<year>1996</year>
<volume>2</volume>
<page-range>255-265</page-range></nlm-citation>
</ref>
<ref id="B23">
<label>23</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Parker]]></surname>
<given-names><![CDATA[S.]]></given-names>
</name>
<name>
<surname><![CDATA[Martin]]></surname>
<given-names><![CDATA[W.]]></given-names>
</name>
<name>
<surname><![CDATA[Sloan]]></surname>
<given-names><![CDATA[J.]]></given-names>
</name>
<name>
<surname><![CDATA[Shirley]]></surname>
<given-names><![CDATA[P.]]></given-names>
</name>
<name>
<surname><![CDATA[Smits]]></surname>
<given-names><![CDATA[B]]></given-names>
</name>
<name>
<surname><![CDATA[Hansen]]></surname>
<given-names><![CDATA[C.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Interactive ray tracing]]></article-title>
<source><![CDATA[]]></source>
<year>1999</year>
<conf-name><![CDATA[ Symposium on interactive 3D graphics]]></conf-name>
<conf-loc> </conf-loc>
<page-range>119-126</page-range></nlm-citation>
</ref>
<ref id="B24">
<label>24</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Greene]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
<name>
<surname><![CDATA[Kass]]></surname>
<given-names><![CDATA[M]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Error-bounded antialiased rendering of complex environments]]></article-title>
<source><![CDATA[]]></source>
<year>1994</year>
<conf-name><![CDATA[ 21st annual conference on Computer graphics and interactive techniques]]></conf-name>
<conf-loc> </conf-loc>
<page-range>59-66</page-range><publisher-name><![CDATA[SIGGRAPH]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B25">
<label>25</label><nlm-citation citation-type="">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Greene]]></surname>
<given-names><![CDATA[N]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Occlusion culling with optimized hierarchical z-buffering]]></article-title>
<source><![CDATA[]]></source>
<year>1999</year>
</nlm-citation>
</ref>
<ref id="B26">
<label>26</label><nlm-citation citation-type="">
<source><![CDATA[A quality knob for non-conservative culling with hierarchical z-buffering]]></source>
<year>2001</year>
</nlm-citation>
</ref>
<ref id="B27">
<label>27</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Luebke]]></surname>
<given-names><![CDATA[D]]></given-names>
</name>
<name>
<surname><![CDATA[Georges]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[Portals and mirrors: simple, fast evaluation of potentially visible sets]]></article-title>
<source><![CDATA[]]></source>
<year>1995</year>
<conf-name><![CDATA[ symposium on Interactive 3D graphics]]></conf-name>
<conf-loc> </conf-loc>
<page-range>105-106</page-range><publisher-name><![CDATA[ACM]]></publisher-name>
</nlm-citation>
</ref>
<ref id="B28">
<label>28</label><nlm-citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname><![CDATA[de Carvalho]]></surname>
<given-names><![CDATA[P. R.]]></given-names>
</name>
<name>
<surname><![CDATA[dos Santos]]></surname>
<given-names><![CDATA[M. C.]]></given-names>
</name>
<name>
<surname><![CDATA[Schwartz]]></surname>
<given-names><![CDATA[W. R.]]></given-names>
</name>
<name>
<surname><![CDATA[Pedrini]]></surname>
<given-names><![CDATA[H.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[An improved view frustum culling method using octrees for 3D real-time rendering]]></article-title>
<source><![CDATA[International Journal of Image and Graphics]]></source>
<year>2013</year>
<volume>13</volume>
<numero>03</numero>
<issue>03</issue>
</nlm-citation>
</ref>
<ref id="B29">
<label>29</label><nlm-citation citation-type="confpro">
<person-group person-group-type="author">
<name>
<surname><![CDATA[Dantas]]></surname>
<given-names><![CDATA[R. R.]]></given-names>
</name>
<name>
<surname><![CDATA[Burlamaqui]]></surname>
<given-names><![CDATA[A. M. F.]]></given-names>
</name>
<name>
<surname><![CDATA[Azevedo]]></surname>
<given-names><![CDATA[S]]></given-names>
</name>
<name>
<surname><![CDATA[Melo]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Souza]]></surname>
<given-names><![CDATA[A. A. S.]]></given-names>
</name>
<name>
<surname><![CDATA[Schneider]]></surname>
<given-names><![CDATA[C]]></given-names>
</name>
<name>
<surname><![CDATA[Xavier]]></surname>
<given-names><![CDATA[J]]></given-names>
</name>
<name>
<surname><![CDATA[Gonçalves]]></surname>
<given-names><![CDATA[L. M. G.]]></given-names>
</name>
</person-group>
<article-title xml:lang="en"><![CDATA[GTMV: Virtual museum authoring systems]]></article-title>
<source><![CDATA[]]></source>
<year>2009</year>
<volume>11</volume>
<conf-name><![CDATA[ IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems]]></conf-name>
<conf-loc> </conf-loc>
<page-range>1-6</page-range></nlm-citation>
</ref>
</ref-list>
</back>
</article>
