401 |
Técnicas de reconstrução e renderização de vídeo-avatares para educação e jogos eletrônicos. / Video-avatar reconstruction and rendering techniques for education and games.Daniel Makoto Tokunaga 07 June 2010 (has links)
Propiciar uma boa experiência de imersão ao usuário, quando este interage com conteúdos ou ambientes virtuais, é um dos principais desafios dos desenvolvedores e pesquisadores de tecnologias interativas, com destaque para realidade virtual e aumentada. Em especial nas áreas de educação e entretenimento, nas quais o engajamento do usuário e a significância dos conteúdos são cruciais para o sucesso da aplicação, é comum a busca por uma experiência que se aproxime de uma imersão total do participante. Quando as atividades envolvem interações a distância, um possível meio de se obter maior imersão são os chamados sistemas de telecomunicação imersiva. Tais sistemas oferecem compartilhamento de um ambiente virtual e troca de informações entre participantes remotos, além da interação desses com o ambiente. Um componente importante desses sistemas é o vídeo-avatar, uma representação visual do participante dentro do ambiente, baseada em sua captura de vídeo em tempo-real. Este trabalho apresenta novas propostas de reconstrução geométrica e renderização para a geração e apresentação de um vídeo-avatar. Inicialmente, um modelo teórico para se modularizar as técnicas de reconstrução e renderização existentes foi proposto. Por meio desse modelo foi criada uma nova técnica de reconstrução geométrica e renderização, denominada Video-based Microfacet Billboarding. Essa técnica emprega uma reconstrução e renderização em tempo-real que possibilita a representação de detalhes e melhora a percepção de integração com o ambiente. É também proposto neste trabalho o conceito de non-photorealistic video-avatar, que visa aplicar um estilo não fotorrealístico único sobre toda a cena, a fim de melhorar a integração do avatar com o ambiente e, por sua vez, aumentar a imersão do usuário. 
Os resultados obtidos através da implementação do vídeo-avatar com essas técnicas e testes preliminares com usuários dão fortes indícios de que é possível a geração de uma representação visual que possua todos os requisitos do sistema iLive, sistema de telecomunicação imersiva voltado à educação e jogos eletrônicos em desenvolvimento pelo Laboratório de Tecnologias Interativas (Interlab) da Escola Politécnica da Universidade de São Paulo. / Providing good immersion experiences for users interacting with virtual contents or an environment is one of the main challenges for developers and researchers in interactive technologies, mainly virtual and augmented reality. Especially in the areas of education and entertainment, in which the engagement of the user and the significance of content are of crucial importance for the success of the application, the search for experiences that approach total immersion of the participant is common practice. When interaction at a distance is necessary, one way to provide better immersion experiences is the use of a solution called an Immersive Telecommunication System. This kind of system can provide the sharing of the virtual environment among participants, information exchange among them, and also their interaction with the virtual environment. One of the most important components of these systems is the video-avatar, the representation of the participant in the virtual environment based on a video of the participant captured in real-time. This work presents new approaches to geometric reconstruction and rendering for the creation of a video-avatar. First, a theoretical model to modularize the existing reconstruction and rendering approaches was proposed. Based on this model, a new approach to geometric reconstruction and rendering, called Video-based Microfacet Billboarding, was conceived. 
This approach uses a technique of real-time reconstruction and rendering that enables the representation of object details and improves the integration of the avatar in the virtual environment. This work also proposes the concept of the non-photorealistic video-avatar, which aims to apply a non-photorealistic style over the whole scene to improve the avatar's integration with the environment and, with this, to enhance the user's immersion. The results obtained by the implementation of a video-avatar with these approaches, as well as preliminary user tests, give us strong evidence that we can create a user representation that meets all the requirements of the iLive system, an immersive telecommunication system for education and gaming purposes, in development by the Interactive Technologies Laboratory (Interlab) of Escola Politécnica da Universidade de São Paulo.
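The abstract names Video-based Microfacet Billboarding but does not describe its internals. As a hedged illustration of the classical billboarding idea that such techniques build on (orienting a textured quad so it faces the camera), a minimal sketch in Python; all names here are illustrative, not taken from the thesis:

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def billboard_basis(center, camera, world_up=(0.0, 1.0, 0.0)):
    """Right/up/normal vectors for a quad at `center` that faces `camera`."""
    normal = normalize(tuple(c - p for c, p in zip(camera, center)))
    right = normalize(cross(world_up, normal))
    up = cross(normal, right)
    return right, up, normal

# Camera straight ahead on +z: the quad's normal should point back at it.
print(billboard_basis((0.0, 0.0, 0.0), (0.0, 0.0, 5.0)))
```

The actual technique reconstructs many small video-textured facets from captured imagery; this sketch only shows the per-quad orientation step common to billboard-style rendering.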
|
402 |
Comparação e classificação de técnicas de estereoscopia para realidade aumentada e jogos. / Comparison and classification of stereoscopy techniques for augmented reality and games.Alexandre Nascimento Tomoyose 29 June 2010 (has links)
A estereoscopia é a área do conhecimento que aborda a visão em três dimensões, mas se limita, por definição, apenas às técnicas que possibilitam a reconstituição de uma cena tridimensional observada através de pelo menos dois pontos de vista distintos, que no caso dos seres humanos é a cena reconstituída no cérebro a partir das imagens obtidas pelos olhos. Inicialmente, o presente trabalho consolida os principais conceitos e técnicas desta área abrangente, para em seguida propor formas de análise, comparação e classificação de técnicas de estereoscopia através de conceitos teóricos e métricas qualitativas e quantitativas. Complementando esta proposta, com base na revisão da literatura e nos resultados experimentais, a pesquisa busca avaliar vantagens e desvantagens entre as técnicas estereoscópicas e na estereoscopia como um todo, para alimentar discussões, tendências e desafios encontrados na aplicação da estereoscopia a sistemas de Realidade Aumentada (RA), em particular sistemas de tele-presença com vídeo-avatar, e na área de jogos. / Stereoscopy is the area of knowledge that addresses three-dimensional vision, but it is restricted, by definition, to the techniques that allow the reconstruction of a three-dimensional scene observed from at least two distinct points of view; in the case of human beings, it is the scene reconstructed in the brain from the images obtained by the eyes. Initially, the present work consolidates the main concepts and techniques of this broad area, and then proposes ways of analyzing, comparing and classifying stereoscopy techniques through theoretical concepts and qualitative and quantitative metrics. 
Additionally, based on the literature review and on experimental results, the research tries to evaluate advantages and disadvantages among stereoscopic techniques and of stereoscopy as a whole, in order to feed discussions on the trends and challenges found when applying stereoscopy to Augmented Reality (AR) systems, in particular tele-presence systems with video-avatars, and to computer games.
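The abstract defines stereoscopy as reconstructing a 3D scene from at least two viewpoints. The standard quantitative relation for a rectified stereo pair, Z = f·B/d, is not stated in the abstract, but a hedged sketch of it can make the idea concrete (parameter names are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a scene point seen by a rectified stereo pair: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal shift of the point between the two images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point shifted 200 px between cameras 0.5 m apart (f = 800 px) lies 2 m away.
print(depth_from_disparity(800, 0.5, 200))
```

The inverse dependence on disparity is why nearby objects (large disparity) are localized far more precisely than distant ones, a practical limit for any of the stereoscopic techniques the work compares.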
|
403 |
Especificação de funções de transferência para visualização volumétrica / Transfer function specification for volumetric visualizationPrauchner, João Luis January 2005 (has links)
Técnicas de visualização volumétrica direta são utilizadas para visualizar e explorar volumes de dados complexos. Dados volumétricos provêm de diversas fontes, tais como dispositivos de diagnóstico médico, radares de sensoriamento remoto ou ainda simulações científicas assistidas por computador. Um problema fundamental na visualização volumétrica é a especificação de Funções de Transferência (FTs) que atribuem cor e opacidade aos valores escalares que compõem o volume de dados. Essas funções são importantes para a exibição de características e objetos de interesse do volume, porém sua definição não é trivial ou intuitiva. Abordagens tradicionais permitem a edição manual de pontos de controle que representam a FT a ser utilizada no volume. No entanto, essas técnicas acabam conduzindo o usuário a um processo de “tentativa e erro” para serem obtidos os resultados desejados. Considera-se também que técnicas automáticas que excluem o usuário do processo não são as mais adequadas, visto que o mesmo deve possuir algum controle sobre o processo de visualização. Este trabalho apresenta uma ferramenta semi-automática e interativa destinada a auxiliar o usuário na geração de FTs de cor e opacidade. A ferramenta proposta possui dois níveis de interação com o usuário. No primeiro nível são apresentadas várias FTs candidatas renderizadas como thumbnails 3D, seguindo o método conhecido como Design Galleries (MARKS et al., 1997). São aplicadas técnicas para reduzir o escopo das funções candidatas para um conjunto mais razoável, sendo possível ainda um refinamento das mesmas. No segundo nível é possível definir cores para a FT de opacidade escolhida, e ainda refinar essa função de modo a melhorá-la de acordo com as necessidades do usuário. Dessa forma, um dos objetivos desse trabalho é permitir ao usuário lidar com diferentes aspectos da especificação de FTs, que normalmente são dependentes da aplicação em questão e do volume de dados sendo visualizado. 
Para o rendering do volume, são exploradas as capacidades de mapeamento de textura e os recursos do hardware gráfico programável provenientes das placas gráficas atuais, visando a interação em tempo real. Os resultados obtidos utilizam volumes de dados médicos e sintéticos, além de volumes conhecidos, para a análise da ferramenta proposta. No entanto, é dada ênfase na especificação de FTs de propósito geral, sem a necessidade do usuário prover um mapeamento direto representando a função desejada. / Direct volume rendering techniques are used to visualize and explore large scalar volumes. Volume data can be acquired from many sources, including medical diagnostic scanners, remote sensing radars or even computer-aided scientific simulations. A key issue in volume rendering is the specification of Transfer Functions (TFs), which assign color and opacity to the scalar values that comprise the volume. These functions are important to the exhibition of features and objects of interest from the volume, but their specification is not trivial or intuitive. Traditional approaches allow the manual editing of a graphic plot with control points representing the TF being applied to the volume. However, these techniques lead the user to an unintuitive trial-and-error task, which is time-consuming. It is also considered that automatic methods that exclude the user from the process should be avoided, since the user must have some control of the visualization process. This work presents a semi-automatic and interactive tool to assist the user in the specification of color and opacity TFs. The proposed tool has two levels of user interaction. The first level presents to the user several candidate TFs rendered as 3D thumbnails, following the method known as Design Galleries (MARKS et al., 1997). Techniques are applied to reduce the scope of the candidate functions to a more reasonable one. It is also possible to further refine these functions at this level. 
The second level permits the user to define and edit colors for the chosen TF, and to refine this function if desired. One of the objectives of this work is to allow users to deal with different aspects of TF specification, which is generally dependent on the application or the dataset being visualized. To render the volume, the programmability of the current generation of graphics hardware is explored, as well as the features of texture mapping, in order to achieve real-time interaction. The tool is applied to medical and synthetic datasets, but the main objective is to propose a general-purpose tool to specify TFs without the need for an explicit mapping from the user.
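The core notion above, a transfer function mapping scalar voxel values to color and opacity, can be made concrete with a hedged sketch. This is not the thesis's Design Galleries tool, just a minimal piecewise-linear TF built from control points:

```python
import bisect

def make_transfer_function(control_points):
    """Build a piecewise-linear transfer function from a sorted list of
    (scalar_value, (r, g, b, opacity)) control points. Returns a callable
    mapping any scalar value to an interpolated RGBA tuple."""
    xs = [p[0] for p in control_points]
    ys = [p[1] for p in control_points]

    def tf(value):
        if value <= xs[0]:
            return ys[0]
        if value >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_right(xs, value)          # first control point above value
        t = (value - xs[i - 1]) / (xs[i] - xs[i - 1])
        return tuple(a + t * (b - a) for a, b in zip(ys[i - 1], ys[i]))

    return tf

# Low values transparent, mid values reddish, high values opaque white.
tf = make_transfer_function([(0.0, (0.0, 0.0, 0.0, 0.0)),
                             (80.0, (1.0, 0.2, 0.2, 0.1)),
                             (255.0, (1.0, 1.0, 1.0, 1.0))])
print(tf(40.0))
```

The "trial and error" problem the abstract describes is exactly the difficulty of choosing good control points for such a function; the thesis's gallery approach generates and ranks many candidate point sets instead of asking the user to place them by hand.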
|
404 |
Video based dynamic scene analysis and multi-style abstraction.January 2008 (has links)
Tao, Chenjun.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (leaves 89-97).
Abstracts in English and Chinese.
Contents:
Abstract --- p.i
Acknowledgements --- p.iii
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Window-oriented Retargeting --- p.1
Chapter 1.2 --- Abstraction Rendering --- p.4
Chapter 1.3 --- Thesis Outline --- p.6
Chapter 2 --- Related Work --- p.7
Chapter 2.1 --- Video Migration --- p.8
Chapter 2.2 --- Video Synopsis --- p.9
Chapter 2.3 --- Periodic Motion --- p.14
Chapter 2.4 --- Video Tracking --- p.14
Chapter 2.5 --- Video Stabilization --- p.15
Chapter 2.6 --- Video Completion --- p.20
Chapter 3 --- Active Window Oriented Video Retargeting --- p.21
Chapter 3.1 --- System Model --- p.21
Chapter 3.1.1 --- Foreground Extraction --- p.23
Chapter 3.1.2 --- Optimizing Active Windows --- p.27
Chapter 3.1.3 --- Initialization --- p.29
Chapter 3.2 --- Experiments --- p.32
Chapter 3.3 --- Summary --- p.37
Chapter 4 --- Multi-Style Abstract Image Rendering --- p.39
Chapter 4.1 --- Abstract Images --- p.39
Chapter 4.2 --- Multi-Style Abstract Image Rendering --- p.42
Chapter 4.2.1 --- Multi-style Processing --- p.45
Chapter 4.2.2 --- Layer-based Rendering --- p.46
Chapter 4.2.3 --- Abstraction --- p.47
Chapter 4.3 --- Experimental Results --- p.49
Chapter 4.4 --- Summary --- p.56
Chapter 5 --- Interactive Abstract Videos --- p.58
Chapter 5.1 --- Abstract Videos --- p.58
Chapter 5.2 --- Multi-Style Abstract Video --- p.59
Chapter 5.2.1 --- Abstract Images --- p.60
Chapter 5.2.2 --- Video Morphing --- p.65
Chapter 5.2.3 --- Interactive System --- p.69
Chapter 5.3 --- Interactive Videos --- p.76
Chapter 5.4 --- Summary --- p.77
Chapter 6 --- Conclusions --- p.81
Chapter A --- List of Publications --- p.83
Chapter B --- Optical flow --- p.84
Chapter C --- Belief Propagation --- p.86
Bibliography --- p.89
|
406 |
Anti-Aliased Low Discrepancy Samplers for Monte Carlo Estimators in Physically Based Rendering / Échantillonneurs basse discrepance anti aliassés pour du rendu réaliste avec estimateurs de Monte CarloPerrier, Hélène 07 March 2018 (has links)
Lorsque l'on affiche un objet 3D sur un écran d'ordinateur, on transforme cet objet en une image, c'est-à-dire en un ensemble de pixels colorés. On appelle Rendu la discipline qui consiste à trouver la couleur à associer à ces pixels. Calculer la couleur d'un pixel revient à intégrer la quantité de lumière arrivant de toutes les directions que la surface renvoie dans la direction du plan image, le tout pondéré par une fonction binaire déterminant si un point est visible ou non. Malheureusement, l'ordinateur ne sait pas calculer des intégrales ; on a donc deux méthodes possibles : trouver une expression analytique qui permet de supprimer l'intégrale de l'équation (approche basée sur les statistiques), ou approximer numériquement l'équation en tirant des échantillons aléatoires dans le domaine d'intégration et en en déduisant la valeur de l'intégrale via des méthodes dites de Monte Carlo. Nous nous sommes ici intéressés à l'intégration numérique et à la théorie de l'échantillonnage. L'échantillonnage est au cœur des problématiques d'intégration numérique. En informatique graphique, il est capital qu'un échantillonneur génère des points uniformément dans le domaine d'échantillonnage pour garantir que l'intégration ne sera pas biaisée. Il faut également que le groupe de points généré ne présente aucune régularité structurelle visible, au risque de voir apparaître des artefacts dits d'aliassage dans l'image résultante. De plus, les groupes de points générés doivent minimiser la variance lors de l'intégration pour converger au plus vite vers le résultat. Il existe de nombreux types d'échantillonneurs que nous classerons ici grossièrement en 2 grandes familles : les échantillonneurs bruit bleu, qui ont une faible variance lors de l'intégration tout en générant des groupes de points non structurés. Le défaut de ces échantillonneurs est qu'ils sont extrêmement lents pour générer les points. 
Les échantillonneurs basse discrépance, qui minimisent la variance lors de l'intégration, génèrent des points extrêmement vite, mais présentent une forte structure, générant énormément d'aliassage. Notre travail a été de développer des échantillonneurs hybrides, combinant à la fois bruit bleu et basse discrépance. / When you display a 3D object on a computer screen, this 3D scene is transformed into a 2D image, which is a set of organized colored pixels. We call Rendering the whole process that aims at finding the correct color to give those pixels. This is done by integrating all the light rays coming from every direction that the object's surface reflects back to the pixel, the whole being weighted by a visibility function. Unfortunately, a computer cannot compute an integral directly. We therefore have two possibilities to solve this issue: find an analytical expression that removes the integral (statistics-based strategy), or numerically approximate the equation by taking random samples in the integration domain and approximating the integral value using Monte Carlo methods. Here we focused on numerical integration and sampling theory. Sampling is a fundamental part of numerical integration. A good sampler should generate points that cover the domain uniformly to prevent bias in the integration and, when used in Computer Graphics, the point set should not present any visible structure, otherwise this structure will appear as artifacts in the resulting image. Furthermore, a stochastic sampler should minimize the variance in integration to converge to a correct approximation using as few samples as possible. There exist many different samplers, which we will group into two families: Blue Noise samplers, which have a low integration variance while generating unstructured point sets. The issue with those samplers is that they are often slow to generate a point set. 
Low Discrepancy samplers, which minimize the variance in integration and are able to generate and enrich a point set very quickly. However, they present a lot of structural artifacts when used in Rendering. Our work aimed at developing hybrid samplers that are both Blue Noise and Low Discrepancy.
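The contrast the abstract draws between random and low-discrepancy sampling can be sketched with a 1D Monte Carlo estimator. The van der Corput radical inverse below is one classical low-discrepancy construction, used here purely as an illustration; it is not necessarily one of the samplers developed in the thesis:

```python
import random

def radical_inverse(i, base=2):
    """Van der Corput radical inverse: mirror the digits of i around the radix
    point, yielding a well-stratified point in [0, 1)."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def mc_estimate(f, samples):
    """Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(x) for x in samples) / len(samples)

n = 1024
integrand = lambda x: x * x                 # true integral over [0, 1] is 1/3
low_disc = [radical_inverse(i) for i in range(1, n + 1)]
uniform = [random.random() for _ in range(n)]
print(mc_estimate(integrand, low_disc), mc_estimate(integrand, uniform))
```

For smooth integrands the low-discrepancy estimate typically converges at roughly O(log n / n) versus O(1/sqrt(n)) for plain random sampling; the regular structure that buys this convergence is exactly what produces aliasing in rendering, which is the tension the thesis addresses.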
|
407 |
Rendu stylisé de scènes 3D animées temps-réel / Real-time stylized rendering of 3D animated scenesBleron, Alexandre 08 November 2018 (has links)
Le but du rendu stylisé est de produire un rendu d'une scène 3D dans le style visuel particulier voulu par un artiste. Cela nécessite de reproduire automatiquement sur ordinateur certaines caractéristiques d'illustrations traditionnelles : par exemple, la façon dont un artiste représente les ombres et la lumière, les contours des objets, ou bien les coups de pinceau qui ont servi à créer une peinture. Les problématiques du rendu stylisé sont pertinentes dans des domaines comme la réalisation de films d'animation 3D ou le jeu vidéo, où les studios cherchent de plus en plus à se démarquer par des styles visuels originaux. Dans cette thèse, nous explorons des techniques de stylisation qui peuvent s'intégrer dans des pipelines de rendu temps-réel existants, et nous proposons deux contributions. La première est un outil de création de modèles d'illumination stylisés pour des objets 3D. La conception de ces modèles est complexe et coûteuse en temps, car ils doivent produire un résultat cohérent sous une multitude d'angles de vue et d'éclairages. Nous proposons une méthode qui facilite la création de modèles d'illumination pour le rendu stylisé, en les décomposant en sous-modèles plus simples à manipuler. Notre seconde contribution est un pipeline de rendu de scènes 3D dans un style peinture, qui utilise une combinaison de bruits procéduraux 3D et de filtrage en espace écran. Des techniques de filtrage d'image ont déjà été proposées pour styliser des images ou des vidéos : le but de ce travail est d'utiliser ces filtres pour styliser des scènes 3D tout en gardant la cohérence du mouvement. Cependant, directement appliquer un filtre en espace écran produit des défauts visuels au niveau des silhouettes des objets. Nous proposons une méthode qui permet d'assurer la cohérence du mouvement, en guidant les filtres d'images avec des informations sur la géométrie extraites de G-buffers, et qui élimine les défauts aux silhouettes. 
/ The goal of stylized rendering is to render 3D scenes in the visual style intended by an artist. This often entails reproducing, with some degree of automation, the visual features typically found in 2D illustrations that constitute the "style" of an artist. Examples of these features include the depiction of light and shade, the representation of the contours of objects, or the strokes on a canvas that make a painting. This field is relevant today in domains such as computer-generated animation or video games, where studios seek to differentiate themselves with styles that deviate from photorealism. In this thesis, we explore stylization techniques that can be easily inserted into existing real-time rendering pipelines, and propose two novel techniques in this domain. Our first contribution is a workflow that aims to facilitate the design of complex stylized shading models for 3D objects. Designing a stylized shading model that follows artistic constraints and stays consistent under a variety of lighting conditions and viewpoints is a difficult and time-consuming process. Specialized shading models intended for stylization exist but are still limited in the range of appearances and behaviors they can reproduce. We propose a way to build and experiment with complex shading models by combining several simple shading behaviors using a layered approach, which allows a more intuitive and efficient exploration of the design space of shading models. In our second contribution, we present a pipeline to render 3D scenes in painterly styles, simulating the appearance of brush strokes, using a combination of procedural noise and local image filtering in screen-space. Image filtering techniques can achieve a wide range of stylized effects on 2D pictures and video: our goal is to use those existing filtering techniques to stylize 3D scenes, in a way that is coherent with the underlying animation or camera movement. This is not a trivial process, as naive approaches to filtering in screen-space can introduce visual inconsistencies around the silhouette of objects. The proposed method ensures motion coherence by guiding filters with information from G-buffers, and ensures a coherent stylization of silhouettes in a generic way.
|
408 |
Origin-centric techniques for optimising scalability and the fidelity of motion, interaction and renderingThorne, Chris January 2008 (has links)
[Truncated abstract] This research addresses endemic problems in the fields of computer graphics and simulation such as jittery motion, spatial scalability, rendering problems such as z-buffer tearing, the repeatability of physics dynamics and numerical error in positional systems. Designers of simulation and computer graphics software tend to map real-world navigation rules onto the virtual world, expecting to see equivalent virtual behaviour. After all, if computers are programmed to simulate the real world, it is reasonable to expect the virtual behaviour to correspond. However, in computer simulation many behaviours and other computations show measurable problems inconsistent with real-world experience, particularly at large distances from the virtual world origin. Many of these problems, particularly in rendering, can be imperceptible, so users may be oblivious to them, but they are measurable using experimental methods. These effects, generically termed spatial jitter in this thesis, are found in this study to stem from floating point error in positional parameters such as spatial coordinates. This simulation error increases with distance from the coordinate origin and as the simulation progresses through the pipeline. The most common form of simulation error relevant to this study is spatial error, which this thesis finds is calculated not, as might be expected, using numerical relative error propagation rules, but using the rules of geometry. ... The thesis shows that the thinking behind real-world rules, such as for navigation, has to change in order to properly design for optimal fidelity simulation. Origin-centric techniques, formulae, terms, architecture and processes are all presented as one holistic solution in the form of an optimised simulation pipeline. The results of analysis, experiments and case studies are used to derive a formula for relative spatial error that accounts for potential pathological cases. 
A formula for spatial error propagation is then derived by using the new knowledge of spatial error to extend numerical relative error propagation mathematics. Finally, analytical results are developed to provide a general mathematical expression for maximum simulation error and how it varies with distance from the origin and the number of mathematical operations performed. We conclude that the origin-centric approach provides a general and optimal solution to spatial jitter. Together with the changed way of thinking about navigation and the process guidelines and formulae developed in the study, the approach provides a new paradigm for positional computing. This paradigm can improve many aspects of computer simulation in areas such as entertainment, visualisation for education, industry, science, or training. Examples are: spatial scalability; the accuracy of motion, interaction and rendering; and the consistency and predictability of numerical computation in physics. This research also affords potential cost benefits through simplification of software design and code. These cost benefits come from core techniques for minimising position-dependent error and error propagation, from the simplifications, and from new algorithms that flow naturally out of the core solution.
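The central claim above, that positional floating-point error grows with distance from the coordinate origin, can be demonstrated directly. A hedged sketch using IEEE-754 single precision (the format commonly used for GPU positions; the thesis's own error formulae are more general):

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE-754 single-precision value,
    emulating a 32-bit position coordinate."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

step = 0.25                      # a quarter-unit movement of an object
near = f32(f32(1.0) + step)      # near the origin
far  = f32(f32(1e7) + step)      # ten million units from the origin

print(near - 1.0)   # 0.25 -- the movement survives intact
print(far - 1e7)    # 0.0  -- the movement is lost: the spacing between
                    #         adjacent float32 values at 1e7 is 1.0
```

Successive sub-ulp movements far from the origin are therefore quantized or discarded entirely, which is the mechanism behind the jittery motion and z-buffer artifacts the thesis describes, and the motivation for keeping computation origin-centric.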
|
409 |
A floating polygon soup representation for 3D videoColleu, Thomas 06 December 2010 (has links) (PDF)
This thesis presents a new representation, called the floating polygon soup, for applications such as 3DTV and FTV (Free Viewpoint TV). The polygon soup addresses the issues of compactness, compression efficiency and view synthesis. The polygons are defined in 2D with depth values at each corner. They are not necessarily connected to one another and can deform according to the viewpoint and the time instant in the video sequence. Starting from multi-view plus depth (MVD) data, the construction proceeds in two steps: quadtree decomposition and reduction of inter-view redundancies. A compact set of polygons is obtained in place of the depth maps, while preserving depth discontinuities and geometric details. Compression efficiency and view synthesis quality are then evaluated. Classical methods such as inpainting and post-processing are implemented and adapted to the polygon soup. A new compression method is proposed, exploiting the quadtree structure and spatial prediction. The results are compared to an MVD compression scheme using the MPEG H.264/MVC standard. Slightly higher PSNR values are obtained at medium and high bitrates, and ghosting artifacts are largely reduced. Finally, the polygon soup is deformed according to the desired viewpoint. This view-dependent geometry is guided by motion estimation between the synthesized and original views, which reduces the remaining artifacts and improves image quality.
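The quadtree decomposition step described above can be sketched as a recursive split of a depth map wherever the depth variation inside a block exceeds a threshold, so that large uniform regions become single polygons while discontinuities get finer subdivision. A minimal illustration; the thesis's actual splitting criterion may differ:

```python
def quadtree(depth, x, y, size, threshold, leaves):
    """Recursively split a square block of a depth map (a list of rows) until
    the depth range inside the block is at most `threshold`; collect the leaf
    blocks as (x, y, size) tuples."""
    vals = [depth[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or max(vals) - min(vals) <= threshold:
        leaves.append((x, y, size))
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(depth, x + dx, y + dy, h, threshold, leaves)

# A 4x4 depth map with a depth discontinuity cutting through the top-right block.
depth_map = [
    [1.0, 1.0, 1.0, 9.0],
    [1.0, 1.0, 1.0, 9.0],
    [1.0, 1.0, 1.0, 1.0],
    [1.0, 1.0, 1.0, 1.0],
]
leaves = []
quadtree(depth_map, 0, 0, 4, 0.5, leaves)
print(leaves)   # three uniform 2x2 blocks plus four 1x1 blocks at the edge
```

Each leaf would then become one polygon carrying per-corner depth values, which is how the representation stays compact away from discontinuities while preserving them exactly.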
|
410 |
Color Coded Depth Information in Medical Volume RenderingEdsborg, Karin January 2003 (has links)
Contrast-enhanced magnetic resonance angiography (MRA) is used to obtain images showing the vascular system. To detect stenosis, which is a narrowing of, for example, blood vessels, maximum intensity projection (MIP) is typically used. This technique often fails to demonstrate the stenosis if the projection angle is not suitably chosen. To improve identification of this region a color-coding algorithm could be helpful. The color should be carefully chosen depending on the vessel diameter.
In this thesis a segmentation is performed to produce a binary 3D volume, followed by a distance transform to approximate the Euclidean distance from the centerline of the vessel to the background. The distance is used to calculate the smallest diameter of the vessel, and that value is mapped to a color. This way the color information regarding the diameter is the same from all projection angles.
Color-coded MIPs, where the color represents the maximum distance, are also implemented. The MIP will result in images with contradictory information depending on the angle choice: looking from one angle you would see the actual stenosis, and looking from another you would see a color representing the abnormal diameter.
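The abstract's idea, a MIP that also carries an auxiliary value (such as an approximated vessel diameter) taken at the maximum-intensity voxel so it can be mapped to a color, can be sketched as follows. This is a hedged illustration on plain Python lists, not the thesis implementation:

```python
def mip_with_aux(volume, aux):
    """Maximum intensity projection of `volume` (a z-list of 2D slices) along z.
    For each pixel, also return the auxiliary value (e.g. an approximated
    vessel radius from a distance transform) at the voxel of maximum intensity,
    so the diameter information is view-consistent."""
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    proj, colors = [], []
    for r in range(rows):
        proj_row, color_row = [], []
        for c in range(cols):
            best_z = max(range(depth), key=lambda z: volume[z][r][c])
            proj_row.append(volume[best_z][r][c])
            color_row.append(aux[best_z][r][c])
        proj.append(proj_row)
        colors.append(color_row)
    return proj, colors

vol = [[[1, 5]], [[3, 2]]]        # two 1x2 intensity slices along z
rad = [[[10, 20]], [[30, 40]]]    # auxiliary value per voxel
print(mip_with_aux(vol, rad))     # → ([[3, 5]], [[30, 20]])
```

Mapping the auxiliary channel through a color scale would then flag abnormal diameters regardless of the chosen projection angle, which is the motivation stated in the abstract.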
|