  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Camera-based Texture Mapping: An Approach for Creating Digital Environments with Foreground Forms Using 2D Paintings

Samman, Juwana Nicole 10 October 2008 (has links)
This thesis develops a method of creating digital environments using textures projected from the perspective of a projection camera, in combination with two-dimensional paintings and three-dimensional models. Past uses have demonstrated effectiveness only for background and midground scene elements with limited camera movement. This work explores how camera animation can be maximized by applying the projected-texture technique to foreground environment forms. Through several case studies, general guidelines are developed for artists using camera-based projected textures.
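Projecting a texture from a camera's point of view amounts to computing each surface point's texture coordinates by transforming it into the projection camera's frame. A minimal sketch of that idea (not the thesis's actual pipeline; the camera model and function names are illustrative assumptions):

```python
import math

def camera_basis(eye, target):
    """Build an orthonormal camera basis from eye/target positions.
    World up is assumed +Y (invalid if the view direction is vertical)."""
    f = [t - e for t, e in zip(target, eye)]
    n = math.sqrt(sum(c * c for c in f))
    f = [c / n for c in f]                      # forward
    r = [f[2], 0.0, -f[0]]                      # right = cross(up, forward)
    n = math.sqrt(sum(c * c for c in r))
    r = [c / n for c in r]
    u = [f[1] * r[2] - f[2] * r[1],             # up = cross(forward, right)
         f[2] * r[0] - f[0] * r[2],
         f[0] * r[1] - f[1] * r[0]]
    return f, r, u

def projected_uv(point, eye, target, fov_deg=60.0):
    """Texture coordinates of a world-space point as seen by the
    projection camera, mapped to [0, 1]^2; points outside the frustum
    fall outside that range and receive no texture."""
    f, r, u = camera_basis(eye, target)
    d = [p - e for p, e in zip(point, eye)]
    x = sum(a * b for a, b in zip(d, r))        # camera-space coordinates
    y = sum(a * b for a, b in zip(d, u))
    z = sum(a * b for a, b in zip(d, f))
    s = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return 0.5 + 0.5 * s * x / z, 0.5 + 0.5 * s * y / z
```

A point straight ahead of the camera maps to the texture center (0.5, 0.5); foreground forms reuse the same mapping, which is why parallax errors appear once the render camera strays far from the projection camera.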
12

Morphing in two dimensions: image morphing

Delport, Magdil. January 2007 (has links)
Thesis (MSc)--University of Stellenbosch, 2007. / Bibliography. Also available via the Internet.
13

Measuring 3D face geometry for integration with appearance models

Madan, Siddharth K. January 2008 (has links)
Thesis (M.S.)--Rutgers University, 2008. / "Graduate Program in Electrical and Computer Engineering." Includes bibliographical references (p. 82-84).
14

Advanced Texture Unit Design for 3D Rendering System

Lin, Huang-lun 05 September 2007 (has links)
To achieve more realistic visual effects, texture mapping has become a very important and popular technique in three-dimensional (3D) graphics. Many advanced rendering effects, including shadow, environment, and bump mapping, depend on various applications of the texturing function, so designing an efficient texture unit is critical for a 3D rendering system. This thesis proposes an advanced texture unit design targeted at a rendering system with a fill rate of two fragments per cycle. The unit supports various filtering functions, including nearest-neighbor, bilinear, and trilinear filtering, and provides a mip-mapping function that automatically selects the best texture images for rendering. To meet the high texel-throughput requirement of the more complex filtering functions, the texture cache is divided into four banks so that up to eight texels can be delivered every cycle. The datapath design of the filtering unit adopts common-expression sharing to reduce the required arithmetic units. The proposed texture unit architecture has been implemented and embedded into a 3D rendering accelerator, integrated with an OpenGL ES software module, the Linux operating system, and a geometry module, and successfully prototyped on the ARM Versatile platform. In a 0.18 µm technology, the unit runs at up to 150 MHz and provides a peak throughput of 1.2 Gtexels/s.
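Of the filtering modes listed, bilinear filtering illustrates why throughput matters: it needs four texels per fragment (trilinear needs eight, hence the four-bank cache). A minimal software sketch of the bilinear weighting, assuming a single-channel texture stored as a 2D list (not the hardware datapath itself):

```python
def bilinear_sample(texture, u, v):
    """Bilinear filtering: fetch the four nearest texels and blend them
    with weights given by the fractional texel coordinates.
    (u, v) are in texel units; edges are clamped."""
    h, w = len(texture), len(texture[0])
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bot = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Trilinear filtering then blends two such samples taken from adjacent mip levels, which is where the eight-texels-per-cycle requirement comes from.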
15

Texture Mapping by Multi-Image Blending for 3D Face Models

Bayar, Hakan 01 December 2007 (has links) (PDF)
Computer interfaces have shifted to 3D graphics environments because of their wide range of applications, from scientific visualization to entertainment. To enhance the realism of 3D models, an established rendering technique, texture mapping, is used. In computer vision, one way to generate this texture is to combine extracted parts of multiple images of real objects, and that is the topic studied in this thesis. While the 3D face model is obtained using a 3D scanner, the texture that covers the model is constructed from multiple images. After marking control points on the images and on the 3D face model, a texture image covering the 3D face model is generated. Moreover, the effects of some features of OpenGL, a graphics library, on the textured 3D face model are studied.
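The combination step can be pictured as a per-texel weighted average over the contributing source images. A toy sketch of that principle (the weighting scheme is an assumption for illustration, not the thesis's actual method):

```python
def blend_texel(samples):
    """Blend per-image texel candidates into one texture value.
    `samples` is a list of (value, weight) pairs, where the weight could
    encode, e.g., how frontally the source image saw this surface point."""
    total = sum(w for _, w in samples)
    if total == 0:
        return 0.0  # no image observed this texel
    return sum(v * w for v, w in samples) / total
```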
16

Master Texture Space: An Efficient Encoding for Projectively Mapped Objects

Guinnip, David 01 January 2005 (has links)
Projectively textured models are used in an increasingly large number of applications that dynamically combine images with a simple geometric surface in a viewpoint-dependent way. These models can provide visual fidelity while retaining the effects afforded by geometric approximation, such as shadow casting and accurate perspective distortion. However, the number of stored views can be quite large, and novel views must be synthesized during the rendering process because no single view may correctly texture the entire object surface. This work introduces the Master Texture encoding and demonstrates that the encoding increases the utility of projectively textured objects by reducing render-time operations. Encoding involves three steps: 1) all image regions that correspond to the same geometric mesh element are extracted and warped to a facet of uniform size and shape, 2) an efficient packing of these facets into a new Master Texture image is computed, and 3) the visibility of each pixel in the new Master Texture data is guaranteed using a simple algorithm to discard occluded pixels in each view. Because the encoding implicitly represents the multi-view geometry of the multiple images, a single texture mesh is sufficient to render the view-dependent model. More importantly, every Master Texture image can correctly texture the entire surface of the object, removing expensive computations such as visibility analysis from the rendering algorithm. A benefit of this encoding is the support for pixel-wise view synthesis. The utility of pixel-wise view synthesis is demonstrated with a real-time Master Texture encoded VDTM application. Pixel-wise synthesis is also demonstrated with an algorithm that distills a set of Master Texture images to a single view-independent Master Texture image.
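Because step 1 warps every image region to a facet of uniform size and shape, the packing of step 2 can be as simple as a grid layout. A minimal sketch of that packing (function name and the fixed atlas width are assumptions, not the thesis's exact algorithm):

```python
import math

def pack_facets(n_facets, facet_size, atlas_width):
    """Pack n uniform square facets into a Master-Texture-style atlas.
    With uniform facets a row-major grid is already an efficient packing.
    Returns the (x, y) pixel origin of each facet and the atlas height."""
    per_row = max(1, atlas_width // facet_size)
    origins = [((i % per_row) * facet_size, (i // per_row) * facet_size)
               for i in range(n_facets)]
    rows = math.ceil(n_facets / per_row)
    return origins, rows * facet_size
```

Since every view's facets land at the same atlas positions, the same texture mesh indexes any Master Texture image, which is what makes the per-view geometry bookkeeping unnecessary at render time.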
17

Super résolution de texture pour la reconstruction 3D fine / Texture Super Resolution for 3D Reconstruction

Burns, Calum 23 March 2018 (has links)
Multi-view 3D reconstruction techniques have reached industrial maturity: non-expert users can now use commercial software to produce quality, large-scale 3D models. These reconstructions use top-of-the-line sensors such as LIDAR or DSLR cameras, mounted on tripods and moved around the scene. Such acquisition protocols are poorly suited to inspecting large infrastructures with complex geometry. As the capabilities of micro-drones progress rapidly, it is becoming possible to delegate such tasks to them. This choice changes the acquired data: rather than a small set of carefully acquired images, micro-drones produce a video sequence with varying image quality, due to flaws such as motion blur and defocus. Video data is challenging for photogrammetry software because of the high combinatorial cost induced by the large number of images. We use the full image sequence in two steps: first, a 3D reconstruction is obtained from a temporal sub-sampling of the sequence; then a high-resolution texture is built from the full sequence. The texture lets an inspector visualize fine details that would otherwise be lost in the geometric noise of the reconstruction. This quality augmentation is achieved with Super Resolution (SR) techniques. To this end we designed and implemented an algorithmic pipeline that takes the acquired video sequence as input and outputs a 3D model of the scene with a super-resolved texture. The pipeline is built around a state-of-the-art multi-view 3D reconstruction algorithm for the geometric part. A central contribution is the registration method used to reach the sub-pixel accuracy required for SR. Unlike the data on which SR is classically applied, our viewpoints undergo relative 3D motion in front of a scene with 3D geometry, which makes the image motion field complex. The intrinsic precision of current 3D reconstruction methods is insufficient for a purely geometric registration, so we refine the geometric registration with an optical-flow algorithm. The resulting SR texture restitution is first compared qualitatively to a competing state-of-the-art approach. These qualitative assessments are reinforced by a quantitative evaluation of image quality. For this purpose we developed a quantitative evaluation protocol for SR techniques applied to 3D surfaces, based on the binary fractal targets originally proposed by S. Landeau, whose ideas we extended to SR on curved surfaces. The protocol is used here to validate the choices of our SR method, but it applies to the evaluation of any 3D-model texturing. Finally, specular surfaces in the scene induce artefacts in the SR results because pixels lose photoconsistency across the images to be fused. To address this problem we propose two corrective methods that photometrically register the images and restore photoconsistency: the first is based on a model of the illumination phenomena, valid in a particular use case; the second relies on local photometric equalization. Tested on data polluted by varying illumination, both methods prove able to eliminate the artefacts.
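Once sub-pixel registration is available, the simplest SR fusion is shift-and-add: every registered low-resolution sample is accumulated onto the high-resolution texture grid. A toy sketch of that fusion step only (the thesis's actual SR algorithm is not reproduced here; registration is assumed already done):

```python
def fuse_superres(samples, width, height):
    """Shift-and-add fusion: accumulate registered samples onto the
    high-resolution grid and normalize by the per-texel hit count.
    `samples` is a list of (x, y, value) with x, y already expressed in
    high-resolution texel coordinates."""
    acc = [[0.0] * width for _ in range(height)]
    cnt = [[0] * width for _ in range(height)]
    for x, y, v in samples:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < width and 0 <= yi < height:
            acc[yi][xi] += v
            cnt[yi][xi] += 1
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else 0.0
             for x in range(width)] for y in range(height)]
```

The loss of photoconsistency discussed above shows up directly in this step: if the same texel receives specular and non-specular values across views, the average is polluted, which is what the two photometric corrections repair before fusion.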
18

3-D Face Modeling from a 2-D Image with Shape and Head Pose Estimation

Oyini Mbouna, Ralph January 2014 (has links)
This paper presents 3-D face modeling with head pose and depth information estimated from a 2-D query face image. Many recent approaches to 3-D face modeling are based on a 3-D morphable model that separately encodes shape and texture in a parameterized model. The model parameters are often obtained by applying statistical analysis to a set of scanned 3-D faces. Such approaches depend on the number and quality of the scanned 3-D faces, which are difficult to obtain and computationally intensive to process. To overcome the limitations of 3-D morphable models, several modeling techniques based on 2-D images have been proposed. We propose a novel framework for depth estimation from a single 2-D image with an arbitrary pose. The proposed scheme uses a set of facial features in a query face image and a reference 3-D face model to estimate the head pose angles of the face. The depth of the subject at each feature point is represented by the depth of the reference 3-D face model multiplied by a vector of scale factors. We use the positions of a set of facial feature points in the query 2-D image to deform the reference dense face model into a person-specific 3-D face by minimizing an objective function, defined as the feature disparity between the facial features in the face image and the corresponding 3-D facial features on the rotated reference model projected onto 2-D space. The pose and depth parameters are iteratively refined until stopping criteria are reached. The proposed method requires only a face image of arbitrary pose to reconstruct the corresponding dense 3-D face model with texture. Experimental results on the USF Human-ID and Pointing'04 databases show that the proposed approach effectively estimates depth and head pose from a single 2-D image. / Electrical and Computer Engineering
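The objective being minimized can be sketched as a reprojection error: scale the reference model's per-feature depths, rotate by the estimated pose, project to 2-D, and compare against the observed features. A simplified illustration (yaw-only rotation and orthographic projection are assumptions made for brevity; the paper estimates full head-pose angles):

```python
import math

def feature_disparity(features_2d, ref_points_3d, scales, yaw):
    """Sum of squared disparities between observed 2-D features and the
    rotated, depth-scaled reference feature points projected to 2-D.
    An optimizer would refine `scales` and `yaw` to minimize this value."""
    c, s = math.cos(yaw), math.sin(yaw)
    err = 0.0
    for (u, v), (x, y, z), k in zip(features_2d, ref_points_3d, scales):
        zs = z * k                  # per-feature depth scale factor
        xr = c * x + s * zs         # rotate about the vertical axis
        err += (u - xr) ** 2 + (v - y) ** 2
    return err
```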
19

Génération de texture par anamorphose pour la décoration d’objets plastiques injectés / Texture generation for decoration of manufactured plastic objects by anamorphose

Belperin, Maxime 31 May 2013 (has links)
This work is part of a global industrial project called IMD3D, supported by the FUI, which aims at decorating arbitrary 3D plastic objects with an automated process based on insert-molding technology: a printed film is placed in the mold and deformed first by the mold closing, then by the injection. My thesis deals with the generation of the decoration. The input data are a mesh and one or more images. We first want to map each image onto the mesh so that the visual rendering is equivalent to the initial picture. To do so, we choose one viewpoint per image and favor it, parameterizing the mesh through an orthogonal or perspective projection defined by that viewpoint; a specific view-dependent parameterization is thus defined, meeting the goal of texture mapping under visual constraints. We then perform the inverse of the mesh's deformation; using a conformal map for this inverse transform keeps the result close to the physics of the problem. This yields a planar mesh representing the initial simulation mesh, whose associated textures have been modified by the same transform, so the result to be printed on the film can be visualized. Finally, the texture that decorates the injected object is generated, combining information from the mapped images: the interior of each mesh cell and, simultaneously, the corresponding part of the image are traversed bilinearly to fill the pixels of the generated picture, producing the final texture to be printed on the film. During the first tests performed by the manufacturers with a color test pattern, however, a discoloration effect was observed; we therefore take this color change into account to modify the picture and obtain the expected visual rendering, including color fidelity.
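The bilinear traversal of a mesh cell and its matching image region can be sketched as follows: scan the output texture's pixels, bilinearly interpolate the corresponding position inside the source-image quad, and copy that pixel. A toy single-quad version (the real pipeline walks every mesh cell; names and the nearest-neighbor fetch are illustrative assumptions):

```python
def generate_texture(src, src_quad, out_w, out_h):
    """Fill an out_w x out_h texture by bilinear traversal of a source
    quad.  `src_quad` holds four (x, y) corners ordered TL, TR, BR, BL;
    `src` is a 2D list of pixel values."""
    def lerp2(s, t):
        (x0, y0), (x1, y1), (x2, y2), (x3, y3) = src_quad
        xt, yt = (1 - s) * x0 + s * x1, (1 - s) * y0 + s * y1  # top edge
        xb, yb = (1 - s) * x3 + s * x2, (1 - s) * y3 + s * y2  # bottom edge
        return (1 - t) * xt + t * xb, (1 - t) * yt + t * yb
    out = []
    for j in range(out_h):
        row = []
        for i in range(out_w):
            s = i / (out_w - 1) if out_w > 1 else 0.0
            t = j / (out_h - 1) if out_h > 1 else 0.0
            x, y = lerp2(s, t)
            row.append(src[int(round(y))][int(round(x))])
        out.append(row)
    return out
```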
20

A obtenção de texturas na síntese de imagens realísticas num ambiente limitado / Obtaining textures in the synthesis of realistic images in a limited environment

Walter, Marcelo January 1991 (has links)
Texture synthesis techniques in computer graphics form a well-defined group whose main goal is to add visual information to the image that increases the perception of realism. Texture synthesis techniques suitable for implementation in a limited environment, centered on a VGA video card, are identified: texture mapping, bump mapping, and solid texturing. A system for visualizing simple objects with the selected techniques applied is described and implemented. Several images are synthesized and the results analyzed with respect to the level of realism achieved. The use of patterns to increase the visual resolution of the images, in conjunction with the texture mapping technique, is explored. A set of texture definitions from computer graphics, psychology, and image processing is presented; these definitions complement one another and make it possible to form a generic concept of the subject. Models and techniques for texture description and synthesis are surveyed, and trends in the area are identified.
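Of the three techniques identified, solid texturing is the one that needs no 2D parameterization of the surface: the color is evaluated directly from the 3D coordinates of the shaded point. A classic marble-like example of such a function (the specific formula is an illustrative assumption, not the thesis's implementation):

```python
import math

def solid_texture(x, y, z):
    """Solid (3D) texture: intensity in [0, 1] computed from the point's
    world coordinates.  A sine along x, phase-warped by y and z, gives a
    simple marble-like banding without any surface unwrapping."""
    return 0.5 + 0.5 * math.sin(8.0 * x + 2.0 * math.sin(3.0 * y) + z)
```

Because the function is defined everywhere in space, an object carved from the material shows consistent bands across all its faces, which is exactly the property that makes solid texturing attractive on hardware too limited for elaborate 2D mappings.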
