  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
551

Towards automatic oracles for the testing of mesh simplification software

Ho, Chun-fai, Jeffrey., 何晉輝. January 2005 (has links)
published_or_final_version / abstract / Computer Science / Master / Master of Philosophy
552

Object-based coding and transmission for plenoptic videos

Wu, Qing, 吳慶 January 2008 (has links)
published_or_final_version / abstract / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
553

A GENERAL PURPOSE GRAPHICS PROCESSOR.

Morreale, Jay Philip. January 1984 (has links)
No description available.
554

Vector graphics to improve BLAST graphic representations

Jimenez, Rafael. January 2007 (has links)
BLAST reports can be complicated. Viewing them graphically helps to understand them better, especially when the reports are long. At present "Web BLAST" and the stand-alone "wwwBLAST" versions, distributed by the NCBI, include graphical viewers for BLAST results. An alternative approach is "BLAST Graphic Viewer", developed by GMOD as part of the BioPerl library. It provides a more aesthetically pleasing and informative graphical visualization of BLAST results. All the strategies mentioned above are based on bitmap graphics and depend on JavaScript code embedded in HTML. We present Vector Graphic BLAST (VEGRA), a Python object-oriented library based on BioPython that yields graphical visualizations of BLAST results using vector graphics. Graphics produced by VEGRA are better than bitmaps for illustration: they are more flexible because they can be resized and stretched, they require less memory, and their interactivity is more effective because it is integrated into the graphic itself rather than depending on third-party technologies. In addition, the library allows the definition of any layout for the different components of the graphic, as well as adjustment of size and colour properties. This dissertation studies previous alternatives and improves on them by making use of vector graphics, thus allowing more effective presentation of results. VEGRA is not just an improvement for BLAST visualization but a model that illustrates how other visualization tools could make use of vector graphics. VEGRA currently works with BLAST; nevertheless, the library has been written so that it can be extended to other visualization problems. / Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2007.
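The vector-graphics advantage the abstract describes is easy to see in miniature. The sketch below is not VEGRA itself; the function name and the hit data are invented for illustration. It emits BLAST-like hit spans as SVG rectangles, which scale without loss and can carry interactivity (here, a tooltip via `<title>`) directly in the markup rather than in separate JavaScript:

```python
# Minimal sketch (not the VEGRA API): render alignment hit spans on a
# query as a scalable SVG image. Hit coordinates are made-up data.
def hits_to_svg(query_len, hits, width=600, row_h=14):
    """Render each (start, end, label) hit as a rectangle scaled to width."""
    scale = width / query_len
    rows = []
    for i, (start, end, label) in enumerate(hits):
        x, w, y = start * scale, (end - start) * scale, i * row_h
        rows.append(f'<rect x="{x:.1f}" y="{y}" width="{w:.1f}" '
                    f'height="{row_h - 4}"><title>{label}</title></rect>')
    body = "\n".join(rows)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{len(hits) * row_h}">\n{body}\n</svg>')

svg = hits_to_svg(500, [(10, 120, "hit_A"), (90, 400, "hit_B")])
```

Because the output is markup rather than pixels, resizing is a matter of changing one attribute, and each shape remains an addressable object.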
555

Articulated structure from motion.

Scheffler, Carl January 2004 (has links)
The structure from motion (SfM) problem is that of determining 3-dimensional (3D) information of a scene from sequences of 2-dimensional (2D) images [59]. This information consists of object shape and motion and relative camera motion. In general, objects may undergo complex non-rigid motion and may be occluded by other objects or themselves. These aspects make the general SfM problem under-constrained and the solution subject to missing or incomplete data.
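For the rigid special case of this problem, a classical route (Tomasi-Kanade factorization, not specific to this thesis) is to stack the tracked 2D points into a measurement matrix, which under an affine camera model has rank at most 3, so motion and shape factors fall out of a truncated SVD. A minimal synthetic sketch:

```python
import numpy as np

# Rank-3 factorization idea behind rigid affine SfM: W = M @ S, where
# M stacks 2 camera rows per frame and S holds centered 3D points.
# Synthetic data only; real pipelines must also handle noise,
# occlusion (missing entries), and the metric upgrade of the factors.
rng = np.random.default_rng(0)
n_frames, n_points = 6, 20
S = rng.standard_normal((3, n_points))        # 3D shape (centered)
M = rng.standard_normal((2 * n_frames, 3))    # stacked camera rows
W = M @ S                                     # ideal 2D measurements

U, s, Vt = np.linalg.svd(W, full_matrices=False)
W3 = U[:, :3] @ np.diag(s[:3]) @ Vt[:3, :]    # best rank-3 approximation
err = np.linalg.norm(W - W3)                  # ~0 for noise-free rigid data
```

The under-constrained, non-rigid, occluded settings the abstract mentions are exactly where this clean picture breaks down: W acquires missing entries and a higher effective rank.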
556

Pencil Light Transport

Steigleder, Mauro January 2005 (has links)
Global illumination is an important area of computer graphics, having direct applications in architectural visualization, lighting design and entertainment. Indirect illumination effects such as soft shadows, color bleeding, caustics and glossy reflections provide essential visual information about the interaction of different regions of the environment. Global illumination is a research area that deals with these illumination effects. Interactivity is also a desirable feature for many computer graphics applications, especially with unrestricted manipulation of the environment and lighting conditions. However, the design of methods that can handle both unrestricted interactivity and global illumination effects on environments of reasonable complexity is still an open challenge.

We present a new formulation of the light transport equation, called pencil light transport, that makes progress towards this goal by exploiting graphics hardware rendering features. The proposed method performs the transport of radiance over a scene using sets of pencils. A pencil object consists of a center of projection and some associated directional data. We show that performing the radiance transport using pencils is suitable for implementation on current graphics hardware. The new algorithm exploits optimized operations available in the graphics hardware architecture, such as pinhole camera rendering of opaque triangles and texture mapping. We show how the light transport equation can be reformulated as a sequence of light transports between pencils and define a new light transport operator, called the pencil light transport operator, that is used to transfer radiance between sets of pencils.
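Structurally, the pencil object the abstract describes pairs a center of projection with directional samples. The sketch below is only our loose illustration of that data shape; the names, fields, and the placeholder transport step (in the thesis this would be a hardware pinhole render between pencil sets) are not the thesis's API:

```python
from dataclasses import dataclass, field

# Illustrative data shape for a "pencil": a center of projection plus
# per-direction radiance samples. The transport step is a stand-in for
# the GPU pass the thesis describes; attenuation value is arbitrary.
@dataclass
class Pencil:
    center: tuple                                  # center of projection
    radiance: dict = field(default_factory=dict)   # direction index -> value

def transport(src: Pencil, dst: Pencil, attenuation=0.5):
    """Accumulate src's radiance into dst, attenuated along the way."""
    for i, value in src.radiance.items():
        dst.radiance[i] = dst.radiance.get(i, 0.0) + attenuation * value
    return dst

a = Pencil((0.0, 0.0, 0.0), radiance={0: 1.0})
b = transport(a, Pencil((1.0, 0.0, 0.0)))
```

A full renderer would chain such transports between successive pencil sets, which is how the abstract's sequence-of-operators reformulation reads.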
557

Fast Extraction of BRDFs and Material Maps from Images

Jaroszkiewicz, Rafal January 2003 (has links)
The bidirectional reflectance distribution function has a four dimensional parameter space and such high dimensionality makes it impractical to use it directly in hardware rendering. When a BRDF has no analytical representation, common solutions to overcome this problem include expressing it as a sum of basis functions or factorizing it into several functions of smaller dimensions. This thesis describes factorization extensions that significantly improve factor computation speed and eliminate drawbacks of previous techniques that overemphasize low sample values. The improved algorithm is used to calculate factorizations and material maps from colored images. The technique presented in this thesis allows interactive definition of arbitrary materials, and although this method is based on physical parameters, it can be also used for achieving a variety of non-photorealistic effects.
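The separable-factor idea can be sketched with a toy example of our own (not the thesis algorithm or its weighting scheme): sample a 2D slice of a reflectance function into a table, then use an SVD to split it into rank-1 terms, each of which fits in a pair of 1D textures; by SVD optimality the truncation error can only shrink as terms are added.

```python
import numpy as np

# Toy 2D "BRDF slice" over view/light angles, built to be non-separable.
theta = np.linspace(0.0, np.pi / 2, 64)
half = (theta[:, None] + theta[None, :]) / 2.0
table = np.cos(half) ** 8 * np.cos(theta)[:, None]

U, s, Vt = np.linalg.svd(table)

def rank_k(k):
    # Sum of the k leading rank-1 (i.e. separable) terms.
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

err1 = np.linalg.norm(table - rank_k(1))
err4 = np.linalg.norm(table - rank_k(4))   # never worse than err1
```

Plain SVD minimizes squared error, which is what overweights bright samples; the thesis's speed and low-sample-value fixes modify this baseline rather than replace it.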
558

A computer graphics based target detection model

Jones, Brian Edward. 09 1900 (has links)
Modeling of visual perception for computer-generated forces and intelligent software agents is usually fairly feeble in computer games and military simulations. Most of the time, tricks or shortcuts are employed in the perceptual model. Under certain conditions, these shortcuts cause unrealistic behavior and detract from military training and user immersion in the simulated environment. Many computer games and simulations trace a ray between the target and observer to determine if the observer can see the target. More complex models are sometimes used in military simulations. One such model used in Army simulations is ACQUIRE, which may still produce debatable results. The ACQUIRE visual perception model uses a single value for the target's contrast with its background. This can cause unrealistic results in certain conditions, allowing computer-generated forces to see targets that should not be seen and to miss targets that should. These more complex models need to be tested to determine the conditions under which they give questionable results. Testing ACQUIRE against human subjects helped determine when ACQUIRE behaves reasonably. The study consisted of multiple scenes with a target in many positions, multiple postures, and many different lighting and fog conditions. Now that testing and analysis are complete, modifications can be made to the visual perception model so that it gives better results in more varied conditions, such as low light, excessive fog, and partially hidden targets.
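The single-ray visibility shortcut the abstract criticizes can be sketched in a few lines. This is our own minimal illustration (spheres standing in for scene geometry, invented coordinates), not the ACQUIRE model: the target is "seen" exactly when the observer-to-target segment misses every occluder, with no notion of contrast, lighting, or partial concealment.

```python
# Naive line-of-sight test: a target is visible iff the segment from
# observer to target misses every occluder (spheres here).
def segment_hits_sphere(p0, p1, center, radius):
    # Closest point on the segment to the sphere center, then distance test.
    dx = [b - a for a, b in zip(p0, p1)]
    seg_len2 = sum(d * d for d in dx)
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, sum(
        (c - a) * d for a, c, d in zip(p0, center, dx)) / seg_len2))
    closest = [a + t * d for a, d in zip(p0, dx)]
    dist2 = sum((c - q) ** 2 for c, q in zip(center, closest))
    return dist2 <= radius * radius

def can_see(observer, target, occluders):
    return not any(segment_hits_sphere(observer, target, c, r)
                   for c, r in occluders)

visible = can_see((0, 0, 0), (10, 0, 0), [((5, 3, 0), 1.0)])    # clear
blocked = can_see((0, 0, 0), (10, 0, 0), [((5, 0.5, 0), 1.0)])  # occluded
```

The binary outcome is the point: under fog, low light, or partial cover this test still answers yes or no, which is exactly the unrealistic behavior the study set out to measure.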
559

Forgotten Memories

Premeaux, Benjamin 01 January 2005 (has links)
Memories are experiences that are removed from our present time and space. The images I create are also removed; they are of a specific time and place, an instant or series of instances captured. The work I have produced in this program is an amalgamation of two artistic media, photography and paint. I choose to layer images to emphasize the complexity of experiences and to illustrate a sense of time. The combination of a mechanical and a handmade object emphasizes the intricacy of our experiences. What is revealed is a combination of color and image that creates multiple compositions within the whole. Layering paint with photography and sculpture allows me to continue to experiment and explore the variety of media that I find most interesting. I draw inspiration from many artists, including Jackson Pollock, Willem de Kooning, Richard Diebenkorn, the Starn Twins, David Hockney and Frida Kahlo. These influences and my own interpretations are what make my work my own.
560

Two problems of digital image formation : recovering the camera point spread function and boosting stochastic renderers by auto-similarity filtering / Deux problèmes dans la formation des images numériques : l'estimation du noyau local de flou d'une caméra et l'accélération de rendus stochastiques par filtrage auto-similaire

Delbracio, Mauricio 25 March 2013 (has links)
Cette thèse s'attaque à deux problèmes fondamentaux dans la formation des images numériques : la modélisation et l'estimation du flou introduit par une caméra numérique optique, et la génération rapide des images de synthèse photoréalistes. L'évaluation précise du flou intrinsèque d'une caméra est un problème récurrent en traitement d'image. Des progrès technologiques récents ont eu un impact significatif sur la qualité de l'image. Donc, une amélioration de la précision des procédures de calibration est impérative pour pousser plus loin cette évolution. La première partie de cette thèse présente une théorie mathématique de l'acquisition physique de l'image par un appareil photo numérique. Sur la base de cette modélisation, deux algorithmes automatiques pour estimer le flou intrinsèque de l'appareil sont proposés. Pour le premier, l'estimation est effectuée à partir d'une photographie d'une mire d'étalonnage spécialement conçue à cet effet. L'une des principales contributions de cette thèse est la preuve qu'une mire portant l'image d'un bruit blanc est proche de l'optimum pour estimer le noyau de flou. Le deuxième algorithme évite l'utilisation d'une mire d'étalonnage, procédure qui peut devenir un peu encombrante. En effet, nous montrons que deux photos d'une scène plane texturée, prises à deux distances différentes avec la même configuration de l'appareil photo, suffisent pour produire une estimation précise. Dans la deuxième partie de cette thèse, nous proposons un algorithme pour accélérer la synthèse d'images réalistes. Plusieurs heures, et même plusieurs jours, peuvent être nécessaires pour produire des images de haute qualité. Dans un rendu typique, les pixels d'une image sont formés en établissant la moyenne de la contribution des rayons stochastiques lancés à partir d'une caméra virtuelle.
Le principe d'accélération, simple mais puissant, consiste à détecter les pixels similaires en comparant leurs histogrammes de rayons et à leur faire partager leurs rayons. Les résultats montrent une accélération significative qui préserve la qualité de l’image. / This dissertation contributes to two fundamental problems of digital image formation: the modeling and estimation of the blur introduced by an optical digital camera and the fast generation of realistic synthetic images. The accurate estimation of the camera's intrinsic blur is a longstanding problem in image processing. Recent technological advances have significantly impacted on image quality. Thus improving the accuracy of calibration procedures is imperative to further push this development. The first part of this thesis presents a mathematical theory that models the physical acquisition of digital cameras. Based on this modeling, two fully automatic algorithms to estimate the intrinsic camera blur are introduced. For the first one, the estimation is performed from a photograph of a specially designed calibration pattern. One of the main contributions of this dissertation is the proof that a pattern with white noise characteristics is near optimal for the estimation purpose. The second algorithm circumvents the tedious process of using a calibration pattern. Indeed, we prove that two photographs of a textured planar scene, taken at two different distances with the same camera configuration, are enough to produce an accurate estimation. In the second part of this thesis, we propose an algorithm to accelerate realistic image synthesis. Several hours or even days may be necessary to produce high-quality images. In a typical renderer, image pixels are formed by averaging the contribution of stochastic rays cast from a virtual camera. The simple yet powerful acceleration principle consists of detecting similar pixels by comparing their ray histograms and letting them share their rays. 
Results show a significant acceleration while preserving image quality.
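The sharing principle in the last paragraph can be sketched in a few lines. The data, the chi-square distance, and the all-pairs loop below are our own simplifications for illustration; the thesis's actual distance measure and neighborhood structure may differ:

```python
import numpy as np

# Each pixel keeps a histogram of its stochastic ray contributions.
# Pixels whose histograms are close pool (average) their histograms,
# reducing noise without casting any new rays. Toy data only.
def chi2(h1, h2, eps=1e-12):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def share_rays(histograms, threshold):
    """Return, per pixel, the mean histogram over all similar pixels."""
    out = []
    for h in histograms:
        similar = [g for g in histograms if chi2(h, g) < threshold]
        out.append(np.mean(similar, axis=0))
    return out

rng = np.random.default_rng(1)
base = 0.5 + 0.5 * rng.random(8)                         # shared signal
pixels = [base + 0.01 * rng.random(8) for _ in range(4)]  # noisy variants
pooled = share_rays(pixels, threshold=0.05)
```

The O(n²) comparison here is the naive version; making the neighborhood search cheap is where a practical renderer spends its effort.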
