BIM I TOTALENTREPRENAD: PROJEKTERINGENS INFORMATIONSLEVERANSER FÖR PRODUKTIONENS GRUNDLÄGGNING / BIM IN A DESIGN-BUILD CONTRACT: THE INFORMATION DELIVERIES OF THE DESIGN PROCESS FOR THE FOUNDATION WORK OF THE PRODUCTION. Lindell, Frans. January 2017.
Purpose: The building industry is characterised by mainly consisting of temporary project organizations with much interdisciplinary cooperation but with little continuity of process between the unique projects. There is a strong focus on time and immediate action as part of the actors' frame of reference, and every activity and new work method should give immediate advantages in time savings and more efficient work to be readily accepted. The need for standardised work methods becomes very visible when BIM is introduced in the projects. Actively using the BIM model on site places demands on its content and on following a design schedule. BuildingSMART International (2010) writes: "If the information required is available when it is needed and the quality of information is satisfactory, the construction process will itself be significantly improved". The aim of this work is to produce a suggestion of how the sequence of information deliveries should look for the initial production of the loadbearing system in a design-build project. Method: To reach this aim, a literature study was conducted to create the theoretical framework of the study, and a case study was carried out on a design-build project with a deep foundation. Empirical data were collected through interviews with some of the project actors and a document analysis of project documents such as drawings and protocols. Findings: The report has mapped the sequence of information deliveries (including the information content and level of development they need to have) for the initial production of the loadbearing system in a design-build project. This contribution of knowledge, with its process map and information demands, can be built upon to allow implementation in BIM tools, thereby easing the work and pushing the use of BIM forward. Implications: With the above results, the report has contributed a few starting pieces in a large puzzle of information and deliveries in the correct sequence and level of development. To complete the puzzle, the information deliveries from the specifications also need to work correctly. The report has shown advantages of working with BIM in this part of the building process, and it suggests clarifying the terminology used in Sweden by keeping both terms, level of detail and level of development. Limitations: The report is limited to describing the information deliveries the contractor needs from the structural designer concerning piling and foundations, and the information deliveries the structural designer needs in order to meet the contractor's information needs.
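As a rough illustration of the kind of information the mapped deliveries carry, the sketch below shows one hypothetical way to record a delivery (sender, receiver, content, level of development, deadline) so that a planned design schedule can be checked programmatically; the field names, the LOD 100-400 numbering convention and the example dates are assumptions for illustration, not the thesis's actual deliverable structure.

```python
# A minimal sketch (hypothetical structure, not the thesis deliverable): one way
# to record an information delivery in the design schedule -- who delivers what,
# at which level of development, and by when.
from dataclasses import dataclass
from datetime import date

@dataclass
class InformationDelivery:
    sender: str                 # e.g. structural designer
    receiver: str               # e.g. contractor
    content: str                # e.g. pile positions and cut-off levels
    level_of_development: int   # LOD 100-400 scale, an assumed convention
    deadline: date

schedule = [
    InformationDelivery("structural designer", "contractor",
                        "pile plan with positions and loads", 300, date(2017, 3, 1)),
    InformationDelivery("structural designer", "contractor",
                        "footing and foundation details", 350, date(2017, 4, 15)),
]
# Example check: which deliveries are due before the foundation work starts?
due_early = [d.content for d in schedule if d.deadline < date(2017, 4, 1)]
print(due_early)
```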
Triangulações regulares e aplicações / Regular triangulations and applications. Pires, Fernando Bissi. 27 June 2008.
The Delaunay triangulation of a set of points is an important geometric entity whose applications encompass a range of scientific fields. Regular triangulations, which can be seen as a generalization of the Delaunay triangulation where weights are assigned to the vertices, have also been widely employed in several problems, for example mesh reconstruction from point clouds [5], mesh generation [12] and molecular modelling [7]. In spite of their applicability, the theoretical background of regular triangulations is not as developed as the theory of the Delaunay triangulation. For example, the dynamics of a regular triangulation is not completely known when the vertex weights change [22]. This work aims at developing a computational and theoretical framework that allows representing a given triangulation as a regular triangulation. In this context, an investigation into the dynamics of edge flip operations with respect to changes in the vertex weights must be accomplished. This investigation is based on mapping the triangulation to a polytope that defines the space of vertex weights. Such a polytope can be built from a system of inequalities that can be associated with a linear programming problem whose solution supplies the appropriate weights. By representing a triangulation as a regular triangulation one can conceive new mesh morphing schemes and level-of-detail algorithms, which is another goal of this work.
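The abstract describes building a system of inequalities over the vertex weights and solving the resulting linear program. The sketch below is a minimal, hypothetical version of that idea for a 2D triangulation, using the standard lifting to heights h_i = |p_i|^2 - w_i and one local-convexity inequality per interior edge; the exact formulation and the SciPy-based solver are assumptions, not the thesis's implementation.

```python
# A minimal sketch (not the thesis code): find vertex weights that make a given
# 2D triangulation regular. Unknowns are the lifted heights h_i = |p_i|^2 - w_i;
# each interior edge contributes one local-convexity inequality on the lifted
# surface (lower convex hull), and a linear program searches for feasible heights.
import itertools
import numpy as np
from scipy.optimize import linprog

def regular_weights(points, triangles, eps=1e-6):
    points = np.asarray(points, dtype=float)
    n = len(points)

    def cross2(u, v):
        return u[0] * v[1] - u[1] * v[0]

    # Collect interior edges: edge -> the two opposite vertices.
    opposite = {}
    for tri in triangles:
        for a, b in itertools.combinations(tri, 2):
            c = (set(tri) - {a, b}).pop()
            opposite.setdefault(frozenset((a, b)), []).append(c)

    A_ub, b_ub = [], []
    for edge, opp in opposite.items():
        if len(opp) != 2:
            continue                      # boundary edge: no constraint
        a, b = sorted(edge)
        c, d = opp
        if cross2(points[b] - points[a], points[c] - points[a]) < 0:
            a, b = b, a                   # make (a, b, c) counter-clockwise
        u, v, w = points[b] - points[a], points[c] - points[a], points[d] - points[a]
        # det[[u, hb-ha], [v, hc-ha], [w, hd-ha]] > 0, which is linear in the heights:
        row = np.zeros(n)
        row[d] += cross2(u, v)
        row[c] += cross2(w, u)
        row[b] += cross2(v, w)
        row[a] -= row[b] + row[c] + row[d]
        A_ub.append(-row)                 # linprog expects A_ub @ h <= b_ub
        b_ub.append(-eps)

    res = linprog(c=np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * n)
    if not res.success:
        return None                       # no weights found under this formulation
    heights = res.x
    return np.sum(points ** 2, axis=1) - heights   # weights w_i

pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5), (1.0, -1.5)]
tris = [(0, 1, 2), (0, 3, 1)]
print(regular_weights(pts, tris))
```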
Level of Detail in Agent Societies in Games / Approche par niveau de détail pour l'IA des jeux vidéos. Mahdi, Ghulam. 21 May 2013.
In recent years there have been many efforts to develop original video games by improving both their aesthetics and mechanics. The richer and more realistic the mechanics, the more advanced the programming models required. However, using advanced programming models such as agent-oriented programming often comes with an overhead in terms of computational resources, and this overhead may degrade the frame rate and consequently the quality of experience (QoE) for the players. In this context, our aim is to propose QoE support means ensuring that, in any case, the frame rate does not fall below a given lower bound. We suggest adapting the amount of time allocated to agents depending on the importance of their organizational roles. In this regard, we use a level of detail (LoD) approach to compute the dynamics of the game. LoD in game AI is based on the idea of spending most of the computational effort on the game characters that are the most important to the player(s). One critical issue in LoD for game AI is to determine the criterion defining the importance of game characters. Existing work proposes the criteria of camera distance and visibility; however, such criteria were developed from the perspective of graphics. In this thesis, we use the roles played by the game characters (in the context of a video game) as the criterion for determining their importance. In this way, a video game is considered as an agent society, where the game characters get priority and a relatively higher share in the distribution of computational resources based on their relative importance in the game story. Our approach has been implemented and integrated into the AGDE (Agent Game Development Engine) game engine. The experimental evaluation was carried out using a repeated-measures scheme to assess the difference in QoE metrics between a game implementing our approach and a control game. The null hypotheses were rejected using a paired t-test: the players found a significant positive difference in QoE.
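As a toy illustration of the role-based idea (not the AGDE implementation), the sketch below splits a fixed per-frame AI budget across agents in proportion to an assumed weight attached to their organizational role; the role names, weights and budget are made up for the example.

```python
# A minimal sketch (hypothetical, not the AGDE implementation): split a fixed
# per-frame AI budget across agents in proportion to the weight of the
# organizational role they currently play.
from dataclasses import dataclass

ROLE_WEIGHTS = {"protagonist": 1.0, "squad_leader": 0.6, "extra": 0.1}  # assumed values

@dataclass
class Agent:
    name: str
    role: str

def allocate_ai_budget(agents, frame_budget_ms):
    """Return a dict mapping agent name to its AI time slice (ms) for this frame."""
    total = sum(ROLE_WEIGHTS[a.role] for a in agents)
    return {a.name: frame_budget_ms * ROLE_WEIGHTS[a.role] / total for a in agents}

crowd = [Agent("hero", "protagonist"), Agent("sergeant", "squad_leader")] + \
        [Agent(f"npc{i}", "extra") for i in range(20)]
slices = allocate_ai_budget(crowd, frame_budget_ms=4.0)
print(slices["hero"], slices["npc0"])   # the hero gets a far larger slice than an extra
```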
Adapter les communications des jeux dans le cloud / Adapting Communications in Cloud Games. Ewelle Ewelle, Richard. 28 August 2015.
With the arrival of cloud computing technology, game accessibility and ubiquity have a bright future. Games can be hosted on a centralized server and accessed through the Internet by a thin client on a wide variety of devices with modest capabilities: cloud gaming. The advantages of using cloud computing in a game context include device ubiquity, computing flexibility, affordable cost, and lowered set-up overheads and compatibility issues. However, current cloud gaming systems have very strong requirements in terms of network resources, thus reducing their widespread adoption: devices with little bandwidth and people located in areas with limited network capacity cannot take advantage of these cloud services. In this thesis we present an adaptation technique inspired by the level of detail (LoD) approach in 3D graphics. It is based on a cloud gaming paradigm and aims to maintain the user's quality of experience (QoE) by reducing the impact of poor network parameters (delay, loss, bandwidth) on game interactivity. Our first contribution consists of game models expressing game objects and their communication needs, represented by their importance in the game; we provide two different ways to manage objects' importance, using agent organizations and gameplay components. We then provide a level of detail approach for managing network resource distribution based on object importance in the game scene and on network conditions, and we exploit the dynamic object-importance adjustment models presented above to propose LoD systems that adapt to changes during game sessions. The experimental validation of both adaptation models, carried out with prototype games and pilot experiments, showed that the suggested adaptation minimizes the effects of low and/or unstable network conditions in maintaining game responsiveness and the player's QoE.
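A minimal sketch of the kind of importance-driven distribution described above, under assumed numbers: each object's network update rate is scaled by its importance and by the bandwidth currently available, then clamped to playable bounds. The packet size, rate limits and importance values are illustrative, not taken from the thesis.

```python
# A minimal sketch (assumed, not the thesis framework): adapt each game object's
# network update rate to its importance and to the currently available bandwidth,
# so critical objects stay responsive when the link degrades.

def update_rates(objects, bandwidth_bps, packet_bits=2000, max_hz=30.0, min_hz=1.0):
    """objects: dict name -> importance in (0, 1]. Returns name -> updates per second."""
    budget_hz = bandwidth_bps / packet_bits          # total packets per second we can afford
    total_importance = sum(objects.values())
    rates = {}
    for name, importance in objects.items():
        share = budget_hz * importance / total_importance
        rates[name] = max(min_hz, min(max_hz, share)) # clamp to playable bounds
    return rates

scene = {"player_avatar": 1.0, "opponent": 0.8, "ambient_bird": 0.05}
print(update_rates(scene, bandwidth_bps=64_000))     # constrained link
print(update_rates(scene, bandwidth_bps=1_000_000))  # comfortable link
```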
Correction et simplification de modèles géologiques par frontières : impact sur le maillage et la simulation numérique en sismologie et hydrodynamique / Repair and simplification of geological boundary representation models: impact on mesh and numerical simulation in seismology and hydrodynamics. Anquez, Pierre. 12 June 2019.
Numerical geological models help to understand the spatial organization of the subsurface. They are also designed to perform numerical simulations to study or predict the physical behavior of the rocks. The internal structures of geological models are commonly discretized using meshes to solve the governing physical equations. The quality of the meshes can, however, be considerably degraded due to the mismatch between, on the one hand, the geometry and the connectivity of the geological objects to be discretized and, on the other hand, the constraints imposed on the number, shape and size of the mesh elements. As a consequence, it may be desirable to modify a geological model in order to generate good-quality meshes that allow the realization of reliable physical simulations in a reasonable amount of time. In this thesis, I developed strategies for repairing and simplifying 2D geological models, with the goal of easing mesh generation and the simulation of physical processes on these models. I propose tools to detect model elements that do not meet the specified validity and level-of-detail requirements, and I present a method to repair and simplify geological cross-sections locally, thus limiting the extent of the modifications. This method uses operations that edit both the geometry and the connectivity of the geological model features. Two strategies are explored: geometric modifications (local enlargements of layer thickness) and topological modifications (deletion of small components and local fusion of thin layers). These editing operations produce a model on which it is possible to generate a mesh and to run numerical simulations more efficiently. But the simplification of geological models inevitably modifies the results of the numerical simulations. To compare the advantages and disadvantages of model simplification for physical simulations, I present three applications of the method: (1) the simulation of seismic wave propagation on a cross-section within the Lorraine coal basin, (2) the evaluation of site effects related to seismic wave amplification in the basin of the lower Var river valley, and (3) the simulation of fluid flow in a fractured porous medium. I show that (1) it is possible to use the physical simulation parameters, such as the seismic resolution, to constrain the magnitude of the simplifications and to limit their impact on the numerical simulations, (2) my model simplification method can drastically reduce the computation time of numerical simulations (up to a factor of 55 on a 2D cross-section in the site-effects case study) while preserving an equivalent physical response, and (3) the results of numerical simulations can change depending on the simplification strategy employed (in particular, changing the connectivity of a fracture network can modify fluid flow paths and lead to overestimation or underestimation of the quantity of produced resources).
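To make the link between simulation parameters and simplification magnitude concrete, here is a hedged sketch: the minimum seismic wavelength v_min / f_max sets a resolvable size, and regions thinner or smaller than a chosen fraction of it are flagged as candidates for merging or removal. The fraction, the region descriptors and the example values are assumptions for illustration, not the thesis's criteria.

```python
# A minimal sketch (assumption, not the thesis code): derive the smallest feature
# size worth keeping from the seismic simulation parameters, then flag model
# regions that fall below it as candidates for merging or removal.

def min_resolvable_size(v_min_m_s, f_max_hz, fraction=0.25):
    """Smallest wavelength in the model times a user-chosen fraction (assumed)."""
    return fraction * v_min_m_s / f_max_hz

def simplification_candidates(regions, v_min_m_s, f_max_hz):
    """regions: list of dicts with 'name', 'area_m2' and 'min_thickness_m'."""
    size = min_resolvable_size(v_min_m_s, f_max_hz)
    flagged = []
    for r in regions:
        if r["min_thickness_m"] < size or r["area_m2"] < size ** 2:
            flagged.append(r["name"])     # candidate: merge with a neighbor or delete
    return flagged

layers = [
    {"name": "thin_coal_seam", "area_m2": 4.0e4, "min_thickness_m": 0.8},
    {"name": "sandstone_unit", "area_m2": 2.0e6, "min_thickness_m": 45.0},
]
print(simplification_candidates(layers, v_min_m_s=800.0, f_max_hz=20.0))
```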
Large planetary data visualization using ROAM 2.0. Persson, Anders. January 2005.
The problem of estimating an adequate level of detail for an object in a specific view is one of the important problems in computer 3D graphics and is especially important in real-time applications. The well-known continuous level-of-detail technique Real-time Optimally Adapting Meshes (ROAM) has been employed with success for almost 10 years but has at present, due to the rapid development of graphics hardware, been found to be inadequate. Compared to many other level-of-detail techniques it cannot benefit from the higher triangle throughput available on graphics cards of today.

This thesis will describe the implementation of the new version of ROAM (informally known as ROAM 2.0) for the purpose of massive planetary data visualization. It will show how the problems of the old technique can be bridged to adapt to newer graphics cards while still benefiting from the advantages of ROAM. The resulting implementation presented here is specialized for spherical objects and handles both texture and geometry data of arbitrarily large sizes in an efficient way.
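A minimal sketch of the view-dependent refinement test that ROAM-style schemes rely on: a patch's world-space error bound is projected to an approximate size in pixels and compared against a tolerance. The projection formula is a small-angle approximation and all numbers are assumed; this is not the thesis's implementation.

```python
# A minimal sketch (assumed numbers, not the thesis implementation): the basic
# view-dependent test used by ROAM-style schemes -- split a patch when its
# geometric error, projected to the screen, exceeds a pixel tolerance.
import math

def screen_space_error(geom_error_m, distance_m, fov_y_rad, viewport_height_px):
    """Project a world-space error bound to an approximate size in pixels."""
    pixels_per_radian = viewport_height_px / fov_y_rad   # rough approximation
    return pixels_per_radian * (geom_error_m / max(distance_m, 1e-6))

def should_split(patch, camera_pos, tolerance_px=2.0,
                 fov_y_rad=math.radians(60), viewport_height_px=1080):
    d = math.dist(patch["center"], camera_pos)
    err = screen_space_error(patch["geom_error_m"], d, fov_y_rad, viewport_height_px)
    return err > tolerance_px

patch = {"center": (0.0, 0.0, 6_371_000.0), "geom_error_m": 35.0}
print(should_split(patch, camera_pos=(0.0, 0.0, 6_372_000.0)))   # close: refine
print(should_split(patch, camera_pos=(0.0, 0.0, 6_900_000.0)))   # far: keep coarse
```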
Efficient Medical Volume Visualization: An Approach Based on Domain Knowledge. Lundström, Claes. January 2007.
Direct Volume Rendering (DVR) is a visualization technique that has proved to be a very powerful tool in many scientific visualization applications. Diagnostic medical imaging is one domain where DVR could provide clear benefits in terms of unprecedented possibilities for analysis of complex cases and a highly efficient workflow for certain routine examinations. The full potential of DVR in the clinical environment has not been reached, however, primarily due to limitations in conventional DVR methods and tools.

This thesis presents methods addressing four major challenges for DVR in clinical use. The foundation of all the methods is to incorporate the domain knowledge of the medical professional in the technical solutions. The first challenge is the very large data sets routinely produced in medical imaging today. To this end a multiresolution DVR pipeline is proposed, which dynamically prioritizes data according to its actual impact in the rendered image to be reviewed. Using this prioritization the system can reduce the data requirements throughout the pipeline and provide high performance and visual quality in any environment. Another problem addressed is how to achieve simple yet powerful interactive tissue classification in DVR; the methods presented define additional attributes that effectively capture readily available medical knowledge. The task of tissue detection is also important to solve in order to improve the efficiency and consistency of diagnostic image review, and histogram-based techniques that exploit spatial relations in the data to achieve accurate and robust tissue detection are presented in this thesis. The final challenge is uncertainty visualization, which is very pertinent in clinical work for patient safety reasons. An animation method has been developed that automatically conveys feasible alternative renderings; the basis of this method is a probabilistic interpretation of the visualization parameters.

Several clinically relevant evaluations of the developed techniques have been performed, demonstrating their usefulness. Although there is a clear focus on DVR and medical imaging, most of the methods provide similar benefits for other visualization techniques and application domains as well.
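As a rough illustration of data prioritization driven by visual impact (a sketch, not the thesis pipeline): blocks whose value histograms receive little opacity from the current transfer function contribute little to the rendered image and can be kept at low resolution. The significance measure, the toy transfer function and the block data below are assumptions.

```python
# A minimal sketch (not the thesis pipeline): rank volume blocks by a crude
# estimate of their visual impact -- how much opacity the current transfer
# function assigns to the values they contain -- and spend the memory budget
# on the most significant blocks first.

def block_significance(block_histogram, opacity_tf):
    """block_histogram: dict value -> voxel count; opacity_tf: value -> [0, 1]."""
    return sum(count * opacity_tf(value) for value, count in block_histogram.items())

def assign_resolutions(blocks, opacity_tf, budget_full_res_blocks):
    """blocks: dict block_id -> histogram. Returns block_id -> 'full' | 'coarse'."""
    ranked = sorted(blocks, key=lambda b: block_significance(blocks[b], opacity_tf),
                    reverse=True)
    return {b: ("full" if i < budget_full_res_blocks else "coarse")
            for i, b in enumerate(ranked)}

# Toy transfer function: air and soft tissue transparent, bone-like values opaque.
tf = lambda v: 1.0 if v > 300 else 0.0
blocks = {"skull": {400: 5000, 50: 1000}, "air": {-1000: 6000}, "muscle": {40: 6000}}
print(assign_resolutions(blocks, tf, budget_full_res_blocks=1))
```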
Método dinâmico para troca de representação em sistemas híbridos de renderização de multidões / A Dynamic Representation-Switch Method for Hybrid Crowd Rendering Systems. Silva Júnior, Erasmo Artur da. January 2013.
SILVA JÚNIOR, Erasmo Artur da. Método dinâmico para troca de representação em sistemas híbridos de renderização de multidões. 2013. 52 f. Dissertação (mestrado) - Universidade Federal do Ceará, Centro de Ciências, Departamento de Computação, Fortaleza-CE, 2013.
Environments populated with crowds are employed in various applications, such as games, simulators and editors. Many of these environments require not only realistic and detailed rendering, but rendering that runs smoothly in real time. This task easily exhausts the system's resources, even on current state-of-the-art hardware, so crowd rendering in real time remains a challenge in computer graphics. Approaches exploiting levels of detail, visibility culling and image-based rendering have been proposed to facilitate this task. The first two increase the efficiency of rendering, but sometimes are not enough to keep an interactive frame rate. Much of the research on this subject focuses on image-based rendering techniques, specifically the use of impostors. In this work, a method is proposed that balances the computational demand of the rendering job by varying the threshold distance at which the representation switches between full-geometry (mesh) and image-based (impostor) models, in accordance with the available resources.
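A minimal sketch of the balancing idea described above, with an assumed proportional controller rather than the dissertation's actual method: the mesh-versus-impostor switch distance is nudged each frame so the measured frame time tracks a target, and agents are then classified against it. The step size, bounds and example frame times are made up.

```python
# A minimal sketch (assumed controller, not the dissertation's implementation):
# nudge the mesh-vs-impostor switch distance every frame so that the measured
# frame time converges on the target, then classify agents with it.

def adjust_switch_distance(distance, frame_ms, target_ms=16.7,
                           step=0.5, d_min=5.0, d_max=200.0):
    if frame_ms > target_ms:
        distance -= step * (frame_ms - target_ms)   # over budget: more impostors
    else:
        distance += step * (target_ms - frame_ms)   # headroom: more full meshes
    return max(d_min, min(d_max, distance))

def classify_agents(agent_distances, switch_distance):
    return {a: ("mesh" if d <= switch_distance else "impostor")
            for a, d in agent_distances.items()}

switch = 60.0
for frame_ms in (22.0, 19.5, 17.0, 15.8):            # a slow scene recovering
    switch = adjust_switch_distance(switch, frame_ms)
    print(round(switch, 1))
print(classify_agents({"agent_a": 12.0, "agent_b": 90.0}, switch))
```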