371

Advances in Modelling, Animation and Rendering

Vince, J.A., Earnshaw, Rae A. January 2002
This volume contains the papers presented at Computer Graphics International 2002, held in July at the University of Bradford, UK. The papers represent original research in computer graphics from around the world.
372

Image-based approaches for photo-realistic rendering of complex objects

Hilsmann, Anna 03 April 2014
One principal aim of computer graphics is photorealism, yet achieving it with physically based methods remains computationally demanding. This dissertation proposes new approaches for image-based visualization of complex objects, concentrating on clothes. The developed methods use real images as appearance examples to guide complex animation or texture-modification processes, combining the photorealism of images with the ability to animate or modify an object. Under the assumption that, for tight-fitting clothes, wrinkling depends mainly on the pose of the body, a new image-based rendering approach is proposed that synthesizes images of clothing from a database of images based on pose information. Pose-dependent appearance and shading information is extracted by image warps and interpolated in pose space using scattered data interpolation. To allow for appearance changes in image-based methods, a retexturing approach is proposed that enables texture exchange without a priori knowledge of the underlying scene properties: texture deformation and shading are extracted from the input image by a warp to an appropriate reference image. In contrast to classical image-based visualization methods, where animation is restricted to viewpoint change and appearance modification is not possible, the proposed methods allow for complex pose animations and appearance changes. Both approaches build on image warps, not only in the spatial but also in the photometric domain. A new framework for joint spatial and photometric warp optimization is introduced, which estimates mesh-based warp models under a modified brightness constancy assumption. The presented approaches shift computational complexity from the rendering phase to an a priori training phase and allow photo-realistic visualization and modification of clothes, including fine and characteristic details, without computationally demanding simulation of the underlying scene and object properties.
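The pose-space interpolation described above can be illustrated with a short sketch: per-pixel warp fields extracted from example images are blended with weights that fall off with distance in pose space. The sketch uses inverse-distance weighting, one simple scattered data interpolant; the dissertation does not commit to this particular scheme, and the names and array layout here are illustrative assumptions.

    import numpy as np

    def interpolate_warp(query_pose, sample_poses, sample_warps, eps=1e-8):
        """Blend example warp fields by inverse-distance weighting in pose space.

        query_pose   : (P,) pose parameters of the frame to synthesize
        sample_poses : (N, P) poses of the database images
        sample_warps : (N, H, W, 2) per-pixel displacement fields extracted
                       by image registration against a reference image
        """
        d = np.linalg.norm(sample_poses - query_pose, axis=1)
        w = 1.0 / (d + eps)          # nearby poses dominate
        w /= w.sum()                 # normalize to a convex combination
        return np.tensordot(w, sample_warps, axes=1)   # (H, W, 2) blended warp

The same weights can blend the pose-dependent shading layers, which is what lets wrinkle detail appear without any cloth simulation at render time.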
373

End-to-end 3D video communication over heterogeneous networks

Mohib, Hamdullah January 2014
Three-dimensional (3D) technology has revolutionised many fields, including entertainment, medicine, and communications. In addition to 3D films, games, and sports channels, 3D perception has made tele-medicine a reality. Consumer electronics manufacturers predicted that by 2015, 30% of all HD panels in the home would be 3D enabled. Stereoscopic cameras, a comparatively mature technology among 3D systems, are now used by ordinary citizens to produce 3D content and share it at the click of a button, just as they do with 2D content via sites like YouTube. Technical challenges still exist, however, including with autostereoscopic multiview displays. Because of its increased amount of data, 3D content raises complex questions for transmission and storage, including how to represent it and which compression format is best; any decision must be taken in the light of the available bandwidth or storage capacity, quality, and user expectations. Free-viewpoint navigation also remains partly unsolved. The most pressing issue standing in the way of widespread uptake of consumer 3D systems is the ability to deliver 3D content to heterogeneous consumer displays over heterogeneous networks. Optimising 3D video communication must therefore consider the entire pipeline, from the video source through transmission to the end display. Multi-view video offers the most compelling solution for 3D, providing motion parallax without headgear, and optimising it for delivery and display could increase the demand for true 3D in the consumer market. This thesis focuses on end-to-end quality optimisation in 3D video communication, offering solutions at the compression, transmission, and decoder levels.
374

Post-production of holoscopic 3D image

Abdul Fatah, Obaidullah January 2015
Holoscopic 3D imaging, also known as integral imaging, was first proposed by Lippmann in 1908. It offers a promising technique for creating a full-colour spatial image that exists in space. It uses a single lens aperture for recording spatial images of a real scene, and thus offers omnidirectional motion parallax and true 3D depth, the fundamental feature that enables digital refocusing. While stereoscopic and multiview 3D imaging systems simulate the human-eye technique, a holoscopic 3D imaging system mimics the fly's-eye technique, in which viewpoints are orthographic projections. The system enables a true 3D representation of a real scene in space and thus offers richer spatial cues than stereoscopic and multiview 3D systems. Focus has been the greatest challenge since the beginning of photography, and it is becoming even more critical in film production, where focus pullers find it difficult to get the right focus as camera resolutions grow ever higher. Holoscopic 3D imaging enables the user to carry out refocusing in post-production. Three main digital refocusing methods exist, namely shift and integration, full resolution, and full resolution with blind; however, these methods suffer from unsatisfactory resolution and from artifacts in the final image, such as blocky and blurry pictures caused by unmatched boundaries. An upsampling method is proposed that improves the resolution of the image produced by the shift-and-integration approach. Sub-pixel adjustment of elemental images, combining the upsampling technique with smart filters, is proposed to reduce the artifacts introduced by the full-resolution-with-blind method and to improve both the quality and the resolution of the final rendered image. A novel 3D object extraction method is proposed that takes advantage of disparity and is also applied to generate stereoscopic 3D images from a holoscopic 3D image: a cross-correlation matching algorithm obtains the disparity map, from which the desired object is extracted. In addition, a 3D image conversion algorithm is proposed for generating stereoscopic and multiview 3D images from both unidirectional and omnidirectional holoscopic 3D images, which facilitates 3D content reformatting.
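Of the refocusing methods named above, shift and integration is the simplest to state: each elemental image is translated in proportion to its grid index and the results are averaged, so scene points at the depth matched by the shift align and stay sharp while others blur. A rough Python sketch, assuming a regular grid of elemental images and integer shifts (the array layout and names are assumptions, and a real implementation would crop rather than wrap at the borders):

    import numpy as np

    def shift_and_integrate(elemental, shift):
        """Refocus a holoscopic capture by shift and integration.

        elemental : (rows, cols, h, w) array of elemental images
        shift     : integer pixel shift per grid step; selects the in-focus plane
        """
        rows, cols, h, w = elemental.shape
        out = np.zeros((h, w), dtype=np.float64)
        for r in range(rows):
            for c in range(cols):
                # translate each elemental image in proportion to its index
                out += np.roll(elemental[r, c], (r * shift, c * shift), axis=(0, 1))
        return out / (rows * cols)   # average: aligned points stay sharp

The upsampling method proposed in the thesis targets the resolution lost in this averaging step.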
375

Camera positioning for 3D panoramic image rendering

Audu, Abdulkadir Iyyaka January 2015
Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component that affects the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies; however, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras needed in any of the known camera structures, and thereby ease some of the other implementation issues. This thesis explores image-based rendering, with and without geometry, to realise virtual cameras: implementations were carried out both from a depth map (geometry) and from multiple image samples (no geometry). Prior to virtual camera realisation, depth-map generation was investigated using region match measures widely known for solving the image point-correspondence problem, and the constructed depth maps were compared with ones generated by a dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras support rendering views from a textured depth map, constructing a 3D panoramic image of a scene by stitching and superposing multiple image samples, and computing a virtual scene from a stereo pair of panoramic images. The quality of the rendered images was assessed through objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projecting pixel points from multiple image samples with a single centre of projection, using a sparse bundle adjustment algorithm; the statistical summary obtained after applying this algorithm gauges the efficiency of the optimisation step. The optimised data were then visualised in the Meshlab software environment, yielding the reconstructed scene. Secondly, in any of the well-established camera arrangements all cameras are usually constrained to the same horizontal plane, so occlusion becomes an extremely challenging problem, and a robust camera set-up is required to recover the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. This thesis therefore also explores a trapezoidal camera structure for image acquisition, assessing the feasibility and potential of several physical cameras of the same model sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya; the depth maps rendered in Matlab are of better quality.
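As context for the region match measures mentioned above, the following is a minimal sum-of-absolute-differences block matcher for a rectified stereo pair — one of the simplest region-based solutions to the point-correspondence problem. It is an illustrative baseline, not the thesis's exact measure; the function name and defaults are assumptions.

    import numpy as np

    def block_match_disparity(left, right, block=7, max_disp=32):
        """Dense disparity by brute-force SAD block matching (rectified pair)."""
        left = left.astype(np.float64)
        right = right.astype(np.float64)
        h, w = left.shape
        half = block // 2
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch = left[y - half:y + half + 1, x - half:x + half + 1]
                best_cost, best_d = np.inf, 0
                # search candidate offsets along the same scanline
                for d in range(min(max_disp, x - half) + 1):
                    cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                    cost = np.abs(patch - cand).sum()   # sum of absolute differences
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disp[y, x] = best_d
        return disp

Dynamic programming approaches, which the thesis uses for comparison, replace the per-pixel winner-takes-all step with an optimisation along each scanline.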
376

Image based human body rendering via regression & MRF energy minimization

Li, Xinfeng January 2011
A machine learning method for synthesising human images is explored to create new images without relying on 3D modelling. Machine learning allows new images to be created by prediction from existing data, based on training images. In the present study, image synthesis is performed at two levels: contour and pixel. A class of learning-based methods is formulated to create object contours from the training images for the synthetic image, within which pixels are then synthesised at the second level. The methods rely on robust object descriptions, dynamic learning models after appropriate motion segmentation, and machine-learning frameworks. Image-based human image synthesis using machine learning has recently gained considerable attention in computer graphics; it draws on techniques from image and motion analysis in computer vision, and the problem lies in estimating the image-based object configuration (i.e. segmentation and contour outline). Using the results of these analysis methods as a basis, the research adopts a machine learning approach in which human images are synthesised by learning contour and pixel synthesis from training images. Firstly, the thesis shows how an accurate silhouette is distilled using a background subtraction method developed for accuracy and efficiency. The support vector machine approach is used to avoid ambiguities in the regression process, and images can be represented as a class of accurate and efficient vectors, for single images as well as sequences. Secondly, the framework applies support vector regression (SVR) to obtain the convergence result of vectors for contour allocation; the changing relationship between the synthetic image and the training image is expressed as a vector and represented in functions. Finally, pixel synthesis is performed based on belief propagation. The thesis thus proposes a novel image-based rendering method for colour image synthesis, using SVR and belief propagation to predict contour and colour information from input colour images. The methods rely on appropriately defined, robust input colour images and optimise the input contour images within a sparse SVR framework. The thesis shows how a contour can be predicted effectively and efficiently from a small number of input contour images, exploiting the sparseness of SVR to estimate the regression function; this image-based rendering procedure predicts from a small number of input source images and avoids complex models and geometry information. The method for colouring the human-body contour is then extended to define eight-connected pixels and to construct a link-distance field via belief propagation, in which the link distance, acting as the message in propagation, is computed by improving the lower-envelope method of the fast distance transform. Finally, the methodology is tested on human facial and body-clothing information; the accuracy of the test results for the human body model confirms the efficiency of the proposed method.
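The sparse SVR machinery the thesis builds on can be demonstrated in a few lines. This toy sketch regresses a single contour coordinate against a one-dimensional pose parameter using scikit-learn; the thesis regresses full contour descriptors from images, so the data and names here are purely hypothetical.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    poses = np.linspace(0.0, np.pi, 40).reshape(-1, 1)            # toy pose parameter
    contour_y = np.sin(poses).ravel() + 0.05 * rng.standard_normal(40)

    # The epsilon-insensitive loss keeps only a sparse set of support vectors,
    # which is the efficiency property the thesis exploits.
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
    model.fit(poses, contour_y)

    new_pose = np.array([[0.5]])
    predicted = model.predict(new_pose)   # contour coordinate for an unseen pose

Prediction cost then scales with the number of support vectors rather than with the full training set, which matters when contours must be synthesised from many training frames.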
377

Incident Light Fields

Unger, Jonas January 2009
Image-based lighting (IBL) is a computer graphics technique for creating photorealistic renderings of synthetic objects such that they can be placed into real-world scenes. IBL is widely recognized and is today used in commercial production pipelines. However, current techniques only use illumination captured at a single point in space, which means that traditional IBL cannot capture or recreate effects such as cast shadows, shafts of light, or other important spatial variations in the illumination. Such lighting effects are, in many cases, artistically created or placed to emphasize certain features, and are therefore a very important part of the visual appearance of a scene. This thesis and the included papers present methods that extend IBL to allow capture of, and rendering with, spatially varying illumination. This is accomplished by measuring the light field incident onto a region in space, called an Incident Light Field (ILF), and using it as illumination in renderings. The illumination must then be captured at a large number of points in space instead of just one, which significantly increases the complexity of the capture methods and rendering algorithms. The technique for measuring spatially varying illumination in real scenes is based on capturing High Dynamic Range (HDR) image sequences; for efficient measurement, image capture is performed at video frame rates. The captured illumination information is processed so that it can be used in computer graphics rendering. By extracting high-intensity regions from the captured data and representing them separately, the thesis also describes a technique for increasing rendering efficiency, as well as methods for editing the captured illumination, for example artificially moving or turning on and off individual light sources.
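The HDR image sequences underpinning ILF capture merge differently exposed frames into one radiance estimate per pixel. Below is a simplified sketch of that merge, assuming a linear camera response and pixel values normalized to [0, 1]; the thesis uses dedicated video-rate HDR capture, so this shows only the basic principle.

    import numpy as np

    def assemble_hdr(frames, exposure_times, lo=0.02, hi=0.98):
        """Merge a bracketed exposure sequence into one radiance image.

        frames         : list of arrays with linear pixel values in [0, 1]
        exposure_times : matching list of exposure times in seconds
        """
        num = np.zeros_like(frames[0], dtype=np.float64)
        den = np.zeros_like(frames[0], dtype=np.float64)
        for img, t in zip(frames, exposure_times):
            img = img.astype(np.float64)
            # trust only well-exposed pixels; near-black and near-saturated
            # pixels get a tiny weight so every output pixel stays defined
            w = np.where((img > lo) & (img < hi), 1.0, 1e-4)
            num += w * img / t        # divide by exposure time -> radiance estimate
            den += w
        return num / den

Repeating this per capture position over a region in space is what turns single-point IBL into an incident light field.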
378

Level Set Segmentation and Volume Visualization of Vascular Trees

Läthén, Gunnar January 2013
Medical imaging is an important part of the clinical workflow. With the increasing amount and complexity of image data comes the need for automatic (or semi-automatic) analysis methods that aid the physician in exploring the data. One specific imaging technique is angiography, in which blood vessels are imaged using an injected contrast agent that increases the contrast between blood and the surrounding tissue. In these images, blood vessels appear as tubular structures with varying diameters; deviations from this structure are signs of disease, such as stenoses, which reduce blood flow, or aneurysms, which carry a risk of rupture. This thesis focuses on segmentation and visualization of the blood vessels constituting the vascular tree in angiography images. Segmentation is the problem of partitioning an image into separate regions. There is no general segmentation method that achieves good results for all possible applications; instead, algorithms use prior knowledge and data models adapted to the problem at hand. We study blood vessel segmentation based on a two-step approach: first, we model the vessels as a collection of linear structures which are detected using multi-scale filtering techniques; second, we develop machine-learning-based level set segmentation methods to separate the vessels from the background, based on the output of the filtering. In many applications the three-dimensional structure of the vascular tree has to be presented to a radiologist or another member of the medical staff, for which a visualization technique such as direct volume rendering is often used. In the case of computed tomography angiography, one has to take into account that the image depends on both the geometrical structure of the vascular tree and the varying concentration of the injected contrast agent. The visualization should have an easy-to-understand interpretation for the user, to make diagnostic interpretation reliable; the mapping from image data to visualization should therefore closely follow routines commonly used by radiologists. We developed an automatic method that adapts the visualization locally to the contrast agent, revealing a larger portion of the vascular tree while minimizing the manual intervention required from the radiologist. The effectiveness of this method is evaluated in a user study involving radiologists as domain experts.
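The first step, detecting vessels as linear structures, is commonly done by analysing the eigenvalues of the image Hessian at multiple scales: along a bright tube one second derivative is near zero while the one across it is strongly negative. A minimal single-scale 2D stand-in using SciPy follows; the thesis works with its own multi-scale filters, so the response formula and names here are simplified assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def tube_response(img, sigma):
        """Single-scale bright-tube response from 2D Hessian eigenvalues."""
        img = img.astype(np.float64)
        # Gaussian second derivatives; sigma**2 gives scale normalization
        Hxx = sigma ** 2 * gaussian_filter(img, sigma, order=(0, 2))
        Hyy = sigma ** 2 * gaussian_filter(img, sigma, order=(2, 0))
        Hxy = sigma ** 2 * gaussian_filter(img, sigma, order=(1, 1))
        tmp = np.sqrt(((Hxx - Hyy) * 0.5) ** 2 + Hxy ** 2)
        l1 = (Hxx + Hyy) * 0.5 + tmp   # for a bright tube: near zero (along it)
        l2 = (Hxx + Hyy) * 0.5 - tmp   # for a bright tube: strongly negative (across it)
        # respond where the cross-tube curvature dominates and is negative
        return np.maximum(-l2, 0.0) * (np.abs(l1) < np.abs(l2))

Taking the maximum response over a range of sigma values handles varying vessel diameters, and that filtered output then drives the level set segmentation.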
379

Haptic rendering for 6/3-DOF haptic devices

Kadleček, Petr January 2013
The application of haptic devices has expanded to fields such as virtual manufacturing, virtual assembly, and medical simulation. Advances in the development of haptic devices have resulted in a wide distribution of asymmetric 6/3-DOF haptic devices; however, current haptic rendering algorithms work correctly only for symmetric devices. This thesis analyzes 3-DOF and 6-DOF haptic rendering algorithms and proposes an algorithm for 6/3-DOF haptic rendering that involves pseudo-haptics. The 6/3-DOF haptic rendering algorithm is implemented on the basis of this analysis and evaluated in a user study.
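For readers unfamiliar with the force side of haptic rendering, the simplest 3-DOF scheme is penalty-based: when the haptic interface point penetrates a surface, a spring force proportional to the penetration depth pushes it back along the surface normal. A minimal sketch against a sphere (illustrative only; the thesis analyzes proxy-based algorithms, which avoid the pop-through problems of pure penalty methods):

    import numpy as np

    def penalty_force(hip, center, radius, k=800.0):
        """Spring force on the haptic interface point (HIP) against a sphere.

        k is the virtual stiffness in N/m (the value is an arbitrary example).
        """
        d = hip - center
        dist = np.linalg.norm(d)
        depth = radius - dist
        if depth <= 0.0 or dist == 0.0:
            return np.zeros(3)        # outside the sphere (or degenerate): no force
        n = d / dist                  # outward surface normal at the contact
        return k * depth * n          # push back proportionally to penetration

A 6/3-DOF device senses full position and orientation but can only output a translational force such as the one computed here, not torque — the asymmetry that motivates the pseudo-haptic cues proposed in the thesis.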
380

GPU implementation of the irradiance and radiance caching algorithms

Bulant, Martin January 2015
The goal of this work is to create software implementing two algorithms for global illumination computation: irradiance caching and radiance caching, implemented in the CUDA framework on the graphics card (GPU). A parallel GPU implementation should dramatically improve the algorithms' speed compared to a CPU implementation. The software is written on top of an existing framework for global illumination computation, which allows the work to focus solely on the algorithm implementation. This work should also speed up the testing of new and existing methods for global illumination computation, because the saving and reuse of intermediate results can benefit other algorithms as well.
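The key data structure in both algorithms is a cache of sparse records that are reused through weighted interpolation. The following is a sketch of the classic Ward-style irradiance interpolation that such an implementation parallelizes, written as CPU-side Python; the record layout and parameter names are assumptions, and the CUDA port organizes this differently.

    import numpy as np

    def cache_weight(p, n, rec_p, rec_n, rec_r):
        """Weight of a cached record at query point p with surface normal n."""
        d_pos = np.linalg.norm(p - rec_p) / rec_r                   # distance term
        d_norm = np.sqrt(max(0.0, 1.0 - float(np.dot(n, rec_n))))   # normal term
        denom = d_pos + d_norm
        return np.inf if denom == 0.0 else 1.0 / denom

    def interpolate_irradiance(p, n, records, min_weight=10.0):
        """Blend cached irradiance; None means a new record must be computed."""
        num, den = 0.0, 0.0
        for rec in records:   # rec: dict with keys 'p', 'n', 'r', 'E' (assumed layout)
            w = cache_weight(p, n, rec['p'], rec['n'], rec['r'])
            if w > min_weight:          # record close enough, normals aligned enough
                num += w * rec['E']
                den += w
        return num / den if den > 0.0 else None

On the GPU the per-record loop becomes a parallel gather over many shading points at once, which is where the expected speed-up over the CPU comes from.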
