381 |
Kulturens sårbarhet : Om det fria kulturlivets plats i ett accelererande stadsrum (The vulnerability of culture: on the place of the independent cultural scene in an accelerating urban space). Persson Kjellerstedt, Anna. January 2018 (has links)
This thesis examines the conditions of the independent cultural sector in the post-industrial city. The aim of the study is to investigate the spatial conditions under which the independent cultural sector can pursue cultural activities in Swedish cities today. This has been examined through a case study of the City of Gothenburg, using the property Sockerbruket in the Majorna district as an example. The case study was carried out in two steps: first, a document analysis of the City of Gothenburg's budget, its cultural plan, and the governing documents of the municipal housing company Higab, examining how the concepts of culture and the independent cultural sector are addressed in the various governing documents; second, interviews with independent cultural actors, municipal politicians, and officials at the City of Gothenburg and Higab. The results show that the City of Gothenburg's stated ambition to create good conditions for the independent cultural sector, in which the independent position of cultural practitioners is to be promoted, is a charade. There are clear shortcomings when it comes to creating the conditions for the independent cultural sector to operate in central Gothenburg in the long term. The study shows that culture as an intrinsic value is given ever lower priority in urban space, and that the political will to include and prioritize space for independent cultural actors is low, which points to a situational politics. This is rooted in a social acceleration of urban space, driven by neoliberal ideals, in which cultural activity is outcompeted in the pursuit of generating more profit from the activities conducted in urban space. As a result, the independent cultural sector occupies a vulnerable place in the city.
|
382 |
Algorithmes d'accélération générique pour les méthodes d'optimisation en apprentissage statistique / Generic acceleration schemes for gradient-based optimization in machine learning. Lin, Hongzhou. 16 November 2017 (has links)
Les problèmes d'optimisation apparaissent naturellement pendant l'entraînement de modèles d'apprentissage supervisés. Un exemple typique est le problème de minimisation du risque empirique (ERM), qui vise à trouver un estimateur en minimisant le risque sur un ensemble de données. Le principal défi consiste à concevoir des algorithmes d'optimisation efficaces permettant de traiter un grand nombre de données dans des espaces de grande dimension. Dans ce cadre, les méthodes classiques d'optimisation, telles que l'algorithme de descente de gradient et sa variante accélérée, sont coûteuses en termes de calcul car elles nécessitent de passer à travers toutes les données à chaque évaluation du gradient. Ce défaut motive le développement de la classe des algorithmes incrémentaux qui effectuent des mises à jour avec des gradients incrémentaux. Ces algorithmes réduisent le coût de calcul par itération, entraînant une amélioration significative du temps de calcul par rapport aux méthodes classiques. Une question naturelle se pose : serait-il possible d'accélérer davantage ces méthodes incrémentales ? Nous donnons ici une réponse positive, en introduisant plusieurs schémas d'accélération génériques. Dans le chapitre 2, nous développons une variante proximale de l'algorithme Finito/MISO, qui est une méthode incrémentale initialement conçue pour des problèmes lisses et fortement convexes. Nous introduisons une étape proximale dans la mise à jour de l'algorithme pour prendre en compte la pénalité de régularisation, qui est potentiellement non lisse. L'algorithme obtenu admet un taux de convergence similaire à celui de l'algorithme Finito/MISO original. Dans le chapitre 3, nous introduisons un schéma d'accélération générique, appelé Catalyst, qui s'applique à une grande classe de méthodes d'optimisation dans le cadre de l'optimisation convexe. Le caractère générique de notre schéma permet à l'utilisateur de sélectionner la méthode la plus adaptée à son problème. Nous montrons qu'en appliquant Catalyst, nous obtenons un taux de convergence accéléré. Plus important, ce taux coïncide avec le taux optimal des méthodes incrémentales, à un facteur logarithmique près, dans l'analyse du pire des cas. Ainsi, notre approche est non seulement générique mais aussi presque optimale du point de vue théorique. Nous montrons ensuite que l'accélération est bien présente en pratique, surtout pour des problèmes mal conditionnés. Dans le chapitre 4, nous présentons une seconde approche générique, appelée QNing, qui applique les principes Quasi-Newton pour accélérer les méthodes de premier ordre. Le schéma s'applique à la même classe de méthodes que Catalyst. En outre, il admet une interprétation simple comme une combinaison de l'algorithme L-BFGS et de la régularisation de Moreau-Yosida. À notre connaissance, QNing est le premier algorithme de type Quasi-Newton compatible avec les objectifs composites et la structure de somme finie. Nous concluons cette thèse en proposant une extension de l'algorithme Catalyst au cas non convexe. Il s'agit d'un travail en collaboration avec Dr. Courtney Paquette et Pr. Dmitriy Drusvyatskiy, de l'Université de Washington, et mes encadrants de thèse. Le point fort de cette approche réside dans sa capacité à s'adapter automatiquement à la convexité. En effet, aucune information sur la convexité de la fonction n'est nécessaire avant de lancer l'algorithme. Lorsque l'objectif est convexe, l'approche proposée présente les mêmes taux de convergence que l'algorithme Catalyst convexe, entraînant une accélération. Lorsque l'objectif est non convexe, l'algorithme converge vers les points stationnaires avec le meilleur taux de convergence connu pour les méthodes de premier ordre. Des résultats expérimentaux prometteurs sont observés en appliquant notre méthode à des problèmes de factorisation de matrice parcimonieuse et à l'entraînement de modèles de réseaux de neurones. 
/ Optimization problems arise naturally in machine learning for supervised problems. A typical example is the empirical risk minimization (ERM) formulation, which aims to find the best a posteriori estimator minimizing the regularized risk on a given dataset. The current challenge is to design efficient optimization algorithms that are able to handle large amounts of data in high-dimensional feature spaces. Classical optimization methods such as the gradient descent algorithm and its accelerated variants are computationally expensive in this setting, because they require a full pass through the dataset at each evaluation of the gradient. This was the motivation for the recent development of incremental algorithms. By loading a single data point (or a mini-batch) for each update, incremental algorithms reduce the per-iteration computational cost, yielding a significant improvement over classical methods, both in theory and in practice. A natural question arises: is it possible to further accelerate these incremental methods? We provide a positive answer by introducing several generic acceleration schemes for first-order optimization methods, which is the main contribution of this manuscript. In chapter 2, we develop a proximal variant of the Finito/MISO algorithm, an incremental method originally designed for smooth, strongly convex problems. To deal with the non-smooth regularization penalty, we modify the update by introducing an additional proximal step. The resulting algorithm enjoys a linear convergence rate similar to that of the original algorithm when the problem is strongly convex. In chapter 3, we introduce a generic acceleration scheme, called Catalyst, for accelerating gradient-based optimization methods in the sense of Nesterov. Our approach applies to a large class of algorithms, including gradient descent, block coordinate descent, incremental algorithms such as SAG, SAGA, SDCA, SVRG and Finito/MISO, and their proximal variants. For all of these methods, we provide acceleration and explicit support for non-strongly convex objectives. The Catalyst algorithm can be viewed as an inexact accelerated proximal point algorithm, applying a given optimization method to approximately compute the proximal operator at each iteration. The key to achieving acceleration is to choose an appropriate inexactness criterion and to control the required computational effort. We provide a global complexity analysis and show that acceleration is useful in practice. In chapter 4, we present another generic approach called QNing, which applies Quasi-Newton principles to accelerate gradient-based optimization methods. The algorithm is a combination of the inexact L-BFGS algorithm and the Moreau-Yosida regularization, and it applies to the same class of functions as Catalyst. To the best of our knowledge, QNing is the first Quasi-Newton-type algorithm compatible with both composite objectives and the finite-sum setting. We provide extensive experiments showing that QNing gives significant improvements over competing methods in large-scale machine learning problems. We conclude the thesis by extending the Catalyst algorithm to the nonconvex setting. This is joint work with Courtney Paquette and Dmitriy Drusvyatskiy, from the University of Washington, and my PhD advisors. The strength of the approach lies in its ability to adapt automatically to convexity: no information about the convexity of the objective function is required before running the algorithm. When the objective is convex, the proposed approach enjoys the same convergence results as the convex Catalyst algorithm, leading to acceleration. When the objective is nonconvex, it achieves the best known convergence rate to stationary points for first-order methods. Promising experimental results have been observed when applying the method to sparse matrix factorization problems and to the training of neural network models.
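The inexact accelerated proximal point idea behind Catalyst can be sketched in a few lines: at each outer iteration, an inner first-order solver approximately minimizes f(x) + (κ/2)||x − y||², and a Nesterov-style extrapolation builds the next prox-center. The quadratic test problem, the plain gradient-descent inner solver, and all parameter values below are illustrative assumptions, not the tuned choices of the thesis.

```python
import numpy as np

def catalyst_sketch(grad_f, x0, kappa=1.0, mu=0.1,
                    outer_iters=50, inner_iters=100, lr=0.01):
    """Toy Catalyst-style wrapper: accelerate an inner first-order solver by
    applying it to the proximal subproblem min_x f(x) + kappa/2 ||x - y||^2.
    All parameter choices here are illustrative."""
    q = mu / (mu + kappa)
    beta = (1 - np.sqrt(q)) / (1 + np.sqrt(q))  # extrapolation weight for mu-strongly convex f
    x_prev = x = np.asarray(x0, dtype=float)
    y = x.copy()
    for _ in range(outer_iters):
        # Inner solver: plain gradient descent on the regularized subproblem.
        z = y.copy()
        for _ in range(inner_iters):
            z = z - lr * (grad_f(z) + kappa * (z - y))
        x_prev, x = x, z
        y = x + beta * (x - x_prev)  # Nesterov-style extrapolation
    return x

# Example: minimize the ill-conditioned quadratic f(x) = 0.5 * x^T diag(D) x.
D = np.array([100.0, 1.0, 0.1])
xs = catalyst_sketch(lambda x: D * x, x0=np.ones(3), mu=0.1, kappa=1.0)
```

The point of the sketch is only the two-level structure: any gradient-based method can serve as the inner solver, which is what makes the scheme generic.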
|
383 |
Integração de sistemas de partículas com detecção de colisões em ambientes de ray tracing / Integration of particle systems with collision detection in ray tracing environments. Steigleder, Mauro. January 1997 (has links)
Encontrar um modo de criar imagens fotorealísticas tem sido uma meta da Computação Gráfica por muitos anos [GLA 89]. Neste sentido, os aspectos que possuem principal importância são a modelagem e a iluminação. Ao considerar aspectos de modelagem, a obtenção de realismo mostra-se bastante difícil quando se pretende, através de técnicas tradicionais de modelagem, modelar objetos cujas formas não são bem definidas. Dentre alguns exemplos destes tipos de objetos, podem-se citar fogo, fumaça, nuvens, água, etc. Partindo deste fato, Reeves [REE 83] introduziu uma técnica denominada sistemas de partículas para efetuar a modelagem de fogo e explosões. Um sistema de partículas pode ser visto como um conjunto de partículas que evoluem ao longo do tempo. Os procedimentos envolvidos na animação de um sistema de partículas são bastante simples. Basicamente, a cada instante de tempo, novas partículas são geradas, os atributos das partículas antigas são alterados, ou estas partículas podem ser extintas de acordo com certas regras pré-definidas. Como as partículas de um sistema são entidades dinâmicas, os sistemas de partículas são especialmente adequados para o uso em animação. Ainda, dentre as principais vantagens dos sistemas de partículas quando comparados com as técnicas tradicionais de modelagem, podem-se citar a facilidade da obtenção de efeitos sobre as partículas (como borrão de movimento), a necessidade de poucos dados para a modelagem global do fenômeno, o controle por processos estocásticos, o nível de detalhamento ajustável e a possibilidade de grande controle sobre as suas deformações. Entretanto, os sistemas de partículas possuem algumas limitações e restrições que provocaram o pouco desenvolvimento de algoritmos específicos nesta área. 
Dentre estas limitações, as principais são a dificuldade de obtenção de efeitos realísticos de sombra e reflexão, o alto consumo de memória e o fato dos sistemas de partículas possuírem um processo de animação específico para cada efeito que se quer modelar. Poucos trabalhos foram desenvolvidos especificamente para a solução destes problemas, sendo que a maioria se destina à modelagem de fenômenos através de sistemas de partículas. Tendo em vista tais deficiências, este trabalho apresenta métodos para as soluções destes problemas. É apresentado um método para tornar viável a integração de sistemas de partículas em ambientes de Ray Tracing, através do uso de uma grade tridimensional. Também, são apresentadas técnicas para a eliminação de efeitos de aliasing das partículas, assim como para a redução da quantidades de memória exigida para o armazenamento dos sistemas de partículas. Considerando aspectos de animação de sistemas de partículas, também é apresentado uma técnica de aceleração para a detecção de colisões entre o sistema de partículas e os objetos de uma cena, baseada no uso de uma grade pentadimensional. Aspectos relativos à implementação, tempo de processamento e fatores de aceleração são apresentados no final do trabalho, assim como as possíveis extensões futuras e trabalhos sendo realizados. / Finding a way to create photorealistic images has been a goal of Computer Graphics for many years [GLA 89]. The aspects of main importance in this regard are modeling and illumination. On the modeling side, achieving realism is very difficult when fuzzy objects are modeled with traditional modeling techniques. Examples of such objects include fire, smoke, clouds and water. With this fact in mind, Reeves [REE 83] introduced a technique named particle systems for the modeling of fire and explosions. A particle system can be seen as a set of particles that evolves over time. The procedures involved in the animation of particle systems are very simple. Basically, at each time instant, new particles are generated, the attributes of the old ones are changed, and particles can be extinguished according to predefined rules. As the particles of a system are dynamic entities, particle systems are especially suitable for use in animation. Among the main advantages of particle systems over traditional techniques are the ease of obtaining effects such as motion blur over the particles, the small amount of data needed for the global modeling of a phenomenon, control by stochastic processes, an adjustable level of detail, and great control over their deformations. However, particle systems have some limitations and restrictions that have hindered the development of specific algorithms in this area. Among these limitations, the main ones are the difficulty of obtaining realistic shadow and reflection effects, the high memory requirements, and the fact that particle systems need a specific animation process for each effect to be modeled. Few works have been developed specifically to solve these problems; most address the modeling of phenomena through particle systems. With these deficiencies in mind, this work presents methods for solving them. A method is presented to make the integration of particle systems and ray tracing practicable, through the use of a three-dimensional grid. Techniques are also presented to eliminate aliasing effects on the particles and to reduce the amount of memory required to store particle systems. Regarding particle system animation, a technique is also presented to accelerate collision detection between the particle system and the objects of a scene, based on the use of a five-dimensional grid. Aspects related to the implementation, processing time and acceleration factors are presented at the end of the work, together with possible future extensions and ongoing work.
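The emit/update/extinguish cycle described above can be sketched in a few lines. The emission rate, lifetime rule, and stochastic velocity model below are illustrative assumptions, not the parameters of the thesis.

```python
import random

def step_particle_system(particles, dt, emit_rate=10, max_age=2.0):
    """One update of a toy Reeves-style particle system: emit, age/move, cull."""
    # 1. Generate new particles with stochastically chosen attributes.
    for _ in range(emit_rate):
        particles.append({
            "pos": [0.0, 0.0, 0.0],
            "vel": [random.uniform(-1, 1), random.uniform(1, 3), random.uniform(-1, 1)],
            "age": 0.0,
        })
    # 2. Update the attributes of existing particles.
    for p in particles:
        p["pos"] = [x + v * dt for x, v in zip(p["pos"], p["vel"])]
        p["vel"][1] -= 9.8 * dt   # gravity acting on the vertical component
        p["age"] += dt
    # 3. Extinguish particles according to a predefined rule (here: an age limit).
    particles[:] = [p for p in particles if p["age"] <= max_age]
    return particles

ps = []
for _ in range(100):
    step_particle_system(ps, dt=0.05)
```

After a transient, the population reaches a steady state in which the number of particles emitted per step balances the number extinguished, which is what makes memory use predictable.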
|
384 |
RISK-TARGETED GROUND MOTION FOR PERFORMANCE-BASED BRIDGE DESIGN. Rana, Suman. 01 May 2017 (has links)
The seismic design maps in ASCE 7-05 and the International Building Code 2006/2009 assumed uniform-hazard ground motion with a 2% probability of exceedance in 50 years for the entire conterminous U.S. In 2007, however, Luco et al. pointed out that, because uncertainty exists in the collapse capacity of structures, the uniform-hazard ground motion should be adjusted to develop new seismic design maps. Risk-targeted ground motion with a 1% probability of collapse in 50 years was therefore adopted in ASCE 7-10. Even though these seismic design maps were developed for buildings, performance-based bridge design uses the same maps. Because the design procedures for buildings and bridges differ significantly, this thesis suggests that an adjustment should be made to the uncertainty in collapse capacity (β) when the maps are used for bridge design. The research covers three U.S. cities: San Francisco, New Madrid and New York. Hazard curves are drawn using the 2008 version of the USGS hazard maps, and risk-targeted ground motion is calculated using the equation given by Luco et al., adjusting the uncertainty in collapse capacity (β) to 0.9 for bridge design instead of the 0.8 used for buildings. The result is compared with the existing result from ASCE 7-10, which uses β = 0.6. Sample design response spectra for site classes A, B, C and D are computed for all three cities, using the equations given in ASCE 7-10, for every β. Analysis of the design response spectrum curves leads to the conclusion that the uncertainty in collapse capacity in the ASCE 7-10 seismic design maps should be adjusted before they are used for performance-based bridge design.
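The risk integral at the heart of the Luco et al. approach convolves the hazard curve with a lognormal collapse fragility. The sketch below is a toy numerical version: the power-law hazard coefficients and the 10%-at-design-motion fragility anchoring are illustrative assumptions, not the USGS data or the calibration used in the thesis. It only demonstrates the qualitative effect the thesis relies on: for the same design ground motion, a larger β yields a larger 50-year collapse risk.

```python
import numpy as np
from math import erf, exp, sqrt

def collapse_risk_50yr(a_design, beta, k0=1e-4, k=2.5):
    """Toy risk integral on a power-law hazard curve H(a) = k0 * a**-k
    (illustrative coefficients). Fragility: lognormal with dispersion beta,
    anchored so that P(collapse | a_design) = 0.10."""
    a = np.linspace(1e-3, 10.0, 20000)
    H = k0 * a ** (-k)                       # annual frequency of exceedance
    median = a_design * exp(1.28 * beta)     # 1.28 = standard normal 90th percentile
    Pc = 0.5 * (1 + np.vectorize(erf)(np.log(a / median) / (beta * sqrt(2))))
    # Annual collapse frequency: lambda = -integral of P(collapse|a) dH(a)
    lam = -np.sum(0.5 * (Pc[1:] + Pc[:-1]) * np.diff(H))
    return 1 - exp(-50 * lam)               # 50-year collapse risk

r_building = collapse_risk_50yr(1.0, beta=0.6)
r_bridge = collapse_risk_50yr(1.0, beta=0.9)
```

Because the hazard curve weights weak shaking heavily, fattening the lower tail of the fragility (larger β) raises the integrated risk, which is why a β adjusted for bridges changes the risk-targeted maps.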
|
385 |
Performance Metrics Analysis of GamingAnywhere with GPU-accelerated NVIDIA CUDA. Sreenibha Reddy, Byreddy. January 2018 (has links)
The modern world has opened the gates to many advancements in cloud computing, particularly in the field of cloud gaming. A recent development in this area is the open-source cloud gaming system GamingAnywhere. The relationship between the CPU and the GPU is the main focus of this thesis. Graphics Processing Unit (GPU) performance plays a vital role in the playing experience and in the enhancement of GamingAnywhere. This thesis concentrates on virtualization of the GPU and suggests that accelerating it with NVIDIA CUDA is the key to better performance with GamingAnywhere; after extensive research, gVirtuS was chosen as the virtualization technique for NVIDIA CUDA. An experimental study was conducted to evaluate the feasibility and performance of VMware GPU solutions in the cloud gaming scenarios provided by GamingAnywhere. Performance is measured in terms of bitrate, packet loss, jitter and frame rate. Different game resolutions are considered in the empirical study, and the results show that frame rate and bitrate increase with resolution and with the use of an NVIDIA CUDA-enhanced GPU.
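The streaming metrics named above can be computed from per-packet data. The input format below (arrival timestamps with `None` for lost packets) is an assumption for illustration, not GamingAnywhere's API; the jitter estimate follows the RFC 3550 smoothed inter-arrival style.

```python
def stream_metrics(arrivals, sent, payload_bytes, duration_s):
    """Compute bitrate, packet loss and inter-arrival jitter from per-packet data.
    arrivals: arrival time in seconds per sent packet, None if the packet was lost.
    payload_bytes: sizes of the packets that were received."""
    received = [t for t in arrivals if t is not None]
    loss = 1 - len(received) / sent
    bitrate_kbps = sum(payload_bytes) * 8 / duration_s / 1000
    # Jitter: smoothed average of |difference between consecutive inter-arrival gaps|.
    gaps = [b - a for a, b in zip(received, received[1:])]
    jitter = 0.0
    for d in (abs(b - a) for a, b in zip(gaps, gaps[1:])):
        jitter += (d - jitter) / 16   # 1/16 smoothing factor, as in RFC 3550
    return {"bitrate_kbps": bitrate_kbps, "loss": loss, "jitter_s": jitter}

# Hypothetical trace: 5 packets sent, the 4th lost, 1200-byte payloads, 0.1 s window.
arrivals = [0.00, 0.02, 0.04, None, 0.08]
m = stream_metrics(arrivals, sent=5, payload_bytes=[1200] * 4, duration_s=0.1)
```

Frame rate would be computed analogously from frame timestamps; it is omitted here to keep the sketch short.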
|
387 |
Performance Metrics Analysis of GamingAnywhere with GPU-accelerated NVIDIA CUDA using gVirtuS. Zaahid, Mohammed. January 2018 (has links)
The modern world has opened the gates to many advancements in cloud computing, particularly in the field of cloud gaming. A recent development in this area is the open-source cloud gaming system GamingAnywhere. The relationship between the CPU and the GPU is the main focus of this thesis. Graphics Processing Unit (GPU) performance plays a vital role in the playing experience and in the enhancement of GamingAnywhere. This thesis concentrates on virtualization of the GPU and suggests that accelerating it with NVIDIA CUDA is the key to better performance with GamingAnywhere; after extensive research, gVirtuS was chosen as the virtualization technique for NVIDIA CUDA. An experimental study was conducted to evaluate the feasibility and performance of VMware GPU solutions in the cloud gaming scenarios provided by GamingAnywhere. Performance is measured in terms of bitrate, packet loss, jitter and frame rate. Different game resolutions are considered in the empirical study, and the results show that frame rate and bitrate increase with resolution and with the use of an NVIDIA CUDA-enhanced GPU.
|
388 |
Interaction faisceau-plasma dans un plasma aléatoirement non-homogène du vent solaire / Beam-plasma interaction in the randomly inhomogeneous solar wind. Voshchepynets, Andrii. 09 November 2015 (has links)
Dans cette thèse nous avons présenté un modèle probabiliste auto-cohérent décrivant la relaxation d'un faisceau d'électrons dans un vent solaire dont les fluctuations aléatoires de la densité ont les mêmes propriétés spectrales que celles mesurées à bord de satellites. On a supposé que le système possédait différentes échelles caractéristiques en plus de l'échelle caractéristique des fluctuations de densité. Ceci nous a permis de décrire avec précision l'interaction onde-particule à des échelles inférieures à l'échelle caractéristique des fluctuations de densité en supposant que les paramètres d'onde sont connus : notamment la phase, la fréquence et l'amplitude. Cependant, pour des échelles suffisamment plus grandes que l'échelle caractéristique des irrégularités de densité, l'interaction des ondes et des particules ne peut être caractérisée que par des quantités statistiques moyennes dans l'espace des vitesses, à savoir le taux de croissance/amortissement et le coefficient de diffusion des particules. En utilisant notre modèle, nous décrivons l'évolution de la fonction de distribution des électrons et de l'énergie des ondes de Langmuir. Le schéma 1D suggéré est applicable aux paramètres physiques du plasma du vent solaire à différentes distances du Soleil. Ainsi, nous pouvons utiliser nos calculs pour décrire les émissions solaires de Type III, ainsi que les interactions de faisceau avec le plasma, à des distances d'une Unité Astronomique du Soleil dans l'héliosphère et au voisinage des chocs planétaires. / This thesis is dedicated to the effects of plasma density fluctuations in the solar wind on the relaxation of the electron beams ejected from the Sun. The density fluctuations are assumed to be responsible for the changes in the local phase velocity of the Langmuir waves generated by the beam instability. Changes in the wave phase velocity during wave propagation can be described in terms of a probability distribution function determined by the distribution of the density fluctuations. Using these probability distributions, we describe resonant wave-particle interactions by a system of equations similar to the well-known quasi-linear approximation, where the conventional velocity diffusion coefficient and the wave growth rate are replaced by their averages in velocity space. It is shown that the relaxation of the electron beam is accompanied by the transformation of a significant part of the beam kinetic energy into the energy of accelerated particles, via generation and absorption of the Langmuir waves. We discovered that for very fast beams the relaxation process consists of two well-separated steps. In the first step, most of the relaxation occurs, and the wave growth rate becomes close to zero or negative almost everywhere in velocity space. In the second stage, the system remains in a state close to marginal stability long enough to explain how the beam can survive over distances of more than 1 AU while still being able to generate Langmuir waves.
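The quasi-linear system referred to above can be written, in standard textbook notation (the symbols here are a conventional choice, not necessarily the thesis's), as a diffusion equation for the electron distribution f(v) coupled to the spectral energy density W_k of the Langmuir waves:

```latex
% Standard 1D quasi-linear equations for beam relaxation (textbook form):
\begin{align}
  \frac{\partial f}{\partial t}
    &= \frac{\partial}{\partial v}\!\left( D(v)\,\frac{\partial f}{\partial v} \right),
  &
  \frac{\partial W_k}{\partial t} &= 2\,\gamma_k\, W_k,
  \\[4pt]
  D(v) &\propto \left.\frac{W_k}{v}\right|_{k=\omega_p/v},
  &
  \gamma_k &\propto \left.\frac{\omega_p^{3}}{k^{2}}\,
      \frac{\partial f}{\partial v}\right|_{v=\omega_p/k},
\end{align}
```

where ω_p is the plasma frequency and the resonance condition v = ω_p/k links particles and waves. In the probabilistic model of the thesis, D(v) and γ_k are replaced by their averages over the distribution of phase velocities induced by the random density fluctuations.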
|
389 |
O bilhar stadium dependente do tempo: aceleração de Fermi e o fenômeno de retardo de velocidade (The time-dependent stadium billiard: Fermi acceleration and the velocity-retardation phenomenon). Livorati, André Luís Prando [UNESP]. 16 February 2011 (PDF)
Neste trabalho investigamos a dinâmica de uma partícula confinada dentro de um bilhar stadium-like. Em uma primeira aproximação, consideramos as fronteiras do bilhar estáticas, encontramos um mapeamento bidimensional não linear que preserva a área no espaço de fases e que descreve a dinâmica de uma partícula clássica sofrendo reflexões especulares com a fronteira. Variando os parâmetros geométricos da fronteira, pudemos observar uma transição de caos global para caos misto, quando os pontos fixos perdem sua estabilidade. Tal transição é caracterizada pelo mecanismo desfocalizador do bilhar, pela análise estatística do desvio do ângulo médio ψ e pela invariância de escala do expoente de Lyapunov máximo. Baseado nesses itens, descrevemos o bilhar através de um mapeamento genérico que apresenta transição semelhante. Introduzimos uma perturbação temporal na fronteira e consideramos a dinâmica de duas maneiras distintas; (i) onde a partícula pode sofrer colisões sucessivas com a mesma componente e (ii) colisões indiretas. Através da linearização do mapeamento obtido na versão estática, encontramos um valor crítico de velocidade de ressonância, onde velocidades iniciais com valores menores do que esse valor crítico sofrem um decréscimo em sua velocidade devido ao fenômeno de stickiness. Contudo, se a velocidade inicial é maior do que a velocidade crítica de ressonância, temos um comportamento típico de aceleração de Fermi, onde conseguimos descrever esse crescimento ilimitado de energia da partícula através de hipóteses de escala. Quando a dissipação é introduzida via colisões inelásticas da partícula com a fronteira móvel, observamos... / In this work we consider the dynamics of a point particle confined inside a stadium-like billiard. In a first approximation, and considering static boundaries, we construct a two-dimensional nonlinear area-preserving mapping. 
Varying the control parameters, we observed a transition from partial to global chaos when the fixed points lose their stability. This transition is characterized by the defocusing mechanism. A statistical analysis of the deviation of the average angle ψ and the scaling invariance of the maximal Lyapunov exponent give support to this transition. We also introduced a perturbation to the boundaries. Linearizing the unperturbed mapping, we found a critical value for the resonant velocity. For initial velocities smaller than the critical one, we observe a decrease of the particle's velocity caused by a stickiness phenomenon. However, when the initial velocity is larger than the resonant one, we observe the typical behavior of Fermi acceleration, and we describe this unlimited energy growth using scaling arguments. When dissipation is introduced via inelastic collisions, we observe a... (Complete abstract click electronic access below)
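The kind of velocity map analyzed in such billiards can be illustrated with the much simpler static-wall approximation of the Fermi-Ulam model. This 1D toy is a stand-in, not the thesis's higher-dimensional time-dependent stadium map: in the toy, invariant spanning curves eventually bound the velocity, whereas the stadium billiard exhibits unlimited Fermi acceleration above the resonant velocity.

```python
import math

def fermi_ulam_simplified(v0, phi0, eps=0.01, n_iters=10000):
    """Simplified Fermi-Ulam map (static-wall approximation): the speed changes
    as if the wall moved, v_{n+1} = |v_n + eps*sin(phi_n)|, and the phase
    advances by the flight time, phi_{n+1} = phi_n + 2/v_{n+1} mod 2*pi.
    Parameter values are illustrative."""
    v, phi = v0, phi0
    history = [v]
    for _ in range(n_iters):
        v = abs(v + eps * math.sin(phi))       # energy exchange with the moving wall
        phi = (phi + 2.0 / v) % (2 * math.pi)  # phase at the next collision
        history.append(v)
    return history

h = fermi_ulam_simplified(v0=10.0, phi0=0.5)
```

Iterating an ensemble of initial conditions of this map and averaging the velocity is the standard way such scaling hypotheses for the velocity growth or decay are tested numerically.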
|