221 |
Occlusion-resolving direct volume rendering / Mak, Wai Ho. January 2009 (has links)
Thesis (M.Phil.)--Hong Kong University of Science and Technology, 2009. / Includes bibliographical references (p. 53-57).
|
222 |
Real-time rendering of synthetic terrain / McRoberts, Duncan Andrew Keith 07 June 2012 (has links)
M.Sc. / Real-time terrain rendering (RTTR) is an exciting field in computer graphics. The algorithms and techniques developed in this domain allow immersive virtual environments to be created for interactive applications. Many difficulties are encountered in this field of research, including acquiring the data to model virtual worlds, handling huge amounts of geometry, and texturing landscapes that appear to go on forever. RTTR has been widely studied, and powerful methodologies have been developed to overcome many of these obstacles. Complex natural terrain features such as detailed vertical surfaces, overhangs and caves, however, are not easily supported by the majority of existing algorithms. It becomes difficult to add such detail to a landscape. Existing techniques are incredibly efficient at rendering elevation data, where for any given position on a 2D horizontal plane we have exactly 1 altitude value. In this case we have a many-to-1 mapping between 2D position and altitude, as many 2D coordinates may map to 1 altitude value but any single 2D coordinate maps to 1 and only 1 altitude. In order to support the features mentioned above we need to allow for a many-to-many mapping. As an example, with a cave feature a given 2D coordinate would have elevation values for the floor, the roof and the outer ground. In this dissertation we build upon established techniques to allow for this many-to-many mapping, and thereby add support for complex terrain features. The many-to-many mapping is made possible by making use of geometry images in place of height-maps. Another common problem with existing RTTR algorithms is texture distortion. Texturing is an inexpensive means of adding detail to rendered terrain. Many existing techniques map texture coordinates in 2D, leading to distortion on steep surfaces. Our research attempts to reduce texture distortion in such situations by allowing a more even spread of texture coordinates.
Geometry images make this possible as they allow for a more even distribution of sample positions. Additionally we devise a novel means of blending tiled textures that enhances the important features of the individual textures. Fully sampled terrain employs a single global texture that covers the entire landscape. This technique provides great detail, but requires a huge volume of data. Tiled texturing requires comparatively little data, but suffers from disturbing regular patterns. We seek to reduce the gap between tiled textures and fully sampled textures. In particular, we aim to reduce the regularity of tiled textures by changing the blending function. In summary, the goal of this research is twofold. Firstly, we aim to support complex natural terrain features, specifically detailed vertical surfaces, overhangs and caves. Secondly, we wish to improve terrain texturing by reducing texture distortion, and by blending tiled textures together in a manner that appears more natural. We have developed a level-of-detail algorithm which operates on geometry images, and a new texture blending technique to support these goals.
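The many-to-1 versus many-to-many distinction described above can be sketched in a few lines. This is an illustrative toy, not code from the dissertation; the array shapes and the sample values are hypothetical:

```python
import numpy as np

# A height-map stores one altitude per 2D cell: a many-to-1 mapping from
# (x, z) to altitude, which cannot represent overhangs or caves.
heightmap = np.zeros((4, 4))              # altitude = heightmap[x, z]

# A geometry image stores a full 3D position per texel, so several samples
# may share the same world (x, z) column with different elevations.
geometry_image = np.zeros((4, 4, 3))      # position = geometry_image[u, v]
geometry_image[1, 1] = (2.0, 0.0, 2.0)    # cave floor at world column (2, 2)
geometry_image[1, 2] = (2.0, 3.0, 2.0)    # cave roof above the same column

# Two samples share a 2D footprint but carry different elevations,
# which a single-valued height-map cannot express.
floor, roof = geometry_image[1, 1], geometry_image[1, 2]
assert floor[0] == roof[0] and floor[2] == roof[2] and floor[1] != roof[1]
```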
|
223 |
Using catadioptrics for multidimensional interaction in computer graphics / Lane, James Robert Timothy 23 November 2005 (has links)
This thesis introduces the use of catadioptrics for multidimensional interaction in an approach called Reflections. In computer graphics there is a need for multidimensional interaction that is not restricted by cabling connected to the input device. The use of a camera and computer vision presents a solution to the cabling problem. Unfortunately this solution presents an equally challenging problem: a single camera alone cannot accurately calculate depth and is therefore not suitable for multidimensional interaction. This thesis presents a solution, called Reflections, to this problem. Reflections makes use of only a single camera and one or more mirrors to accurately calculate 3D, 5D, and 6D information in real time. Two applications in which this approach is used for natural, non-intrusive and multidimensional interaction are the Virtual Drums Project and Ndebele painting in virtual reality. The interaction in these applications, and in particular in the Virtual Drums, is appropriate and intuitive, e.g. the user plays the drums with a real drumstick. Several computer vision algorithms used in the implementation of the Virtual Drums Project are described in this thesis. / Dissertation (MSc (Computer Science))--University of Pretoria, 2005. / Computer Science / unrestricted
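The core geometric idea behind mirror-based depth recovery can be hedged as follows: a planar mirror gives the single camera a second, virtual viewpoint (the camera reflected about the mirror plane), so depth follows from intersecting the direct ray with the mirrored ray. The camera position, mirror plane, and target point below are toy values, not the thesis's actual calibration:

```python
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Reflect point p about a plane given by a point on it and its normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n

def closest_point_two_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays (least-squares 3D point)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b                 # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Toy setup: camera at the origin, mirror plane x = 2, target at (1, 0, 3).
cam = np.array([0.0, 0.0, 0.0])
target = np.array([1.0, 0.0, 3.0])
virtual_cam = reflect_point(cam, np.array([2.0, 0.0, 0.0]),
                            np.array([1.0, 0.0, 0.0]))
# Triangulate from the real view and the mirrored (virtual) view.
p = closest_point_two_rays(cam, target - cam,
                           virtual_cam, target - virtual_cam)
```

With exact rays the two lines meet at the target, so `p` recovers its 3D position; with noisy image measurements the midpoint formulation gives a least-squares estimate instead.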
|
224 |
Lightweight and Sufficient Two Viewpoint Connections for Augmented Reality / Chengyuan Lin (8793044) 05 May 2020 (has links)
<p>Augmented Reality (AR) is a powerful computer-to-human visual interface
that displays data overlaid onto the user's view of the real world. Compared to
conventional visualization on a computer display, AR has the advantage of
saving the user the cognitive effort of mapping the visualization to the real
world. For example, a user wearing AR glasses can find a destination in an
urban setting by following a virtual green line drawn by the AR system on the
sidewalk, which is easier to do than having to rely on navigational directions
displayed on a phone. Similarly, a surgeon looking at an operating field
through an AR display can see graphical annotations authored by a remote mentor
as if the mentor actually drew on the patient's body.</p>
<p> </p>
<p>However, several challenges remain to be addressed before AR can reach
its full potential. This research contributes solutions to four such
challenges. A first challenge is achieving visualization continuity for AR
displays. Since truly transparent displays are not feasible, AR relies on
simulating transparency by showing a live video on a conventional display. For
correct transparency, the display should show exactly what the user would see
if the display were not there. Since the video is not captured from the user
viewpoint, simply displaying each frame as acquired results in visualization
discontinuity and redundancy. A second challenge is providing the remote mentor
with an effective visualization of the mentee's workspace in AR telementoring.
Acquiring the workspace with a camera built into the mentee's AR headset is
appealing since it captures the workspace from the mentee's viewpoint, and
since it does not require external hardware. However, the workspace
visualization is unstable as it changes frequently, abruptly, and substantially
with each mentee head motion. A third challenge is occluder removal in
diminished reality. Whereas in conventional AR the user's visualization of a
real world scene is augmented with graphical annotations, diminished reality
aims to aid the user's understanding of complex real world scenes by removing
objects from the visualization. The challenge is to paint over occluder pixels
using auxiliary videos acquired from different viewpoints, in real time, and
with good visual quality. A fourth challenge is to acquire scene geometry from
the user viewpoint, as needed in AR, for example, to integrate virtual
annotations seamlessly into the real world scene through accurate depth
compositing, and shadow and reflection casting and receiving.</p>
<p> </p>
<p>Our solutions are based on the thesis that images acquired from
different viewpoints should not always be connected by computing a dense,
per-pixel set of correspondences, but rather by devising custom, lightweight,
yet sufficient connections between them, for each unique context. We have
developed a self-contained phone-based AR display that aligns the phone camera
view and the user view, reducing visualization discontinuity to less than 5% for
scene distances beyond 5m. We have developed and validated in user studies an
effective workspace visualization method by stabilizing the mentee first-person
video feed through reprojection on a planar proxy of the workspace. We have
developed a real-time occluder in-painting method for diminished reality based
on a two-stage coarse-then-fine mapping between the user and the auxiliary
view. The mapping is established in time linear with occluder contour length,
and it achieves good continuity across the occluder boundary. We have developed
a method for 3D scene acquisition from the user viewpoint based on single-image
triangulation of correspondences between left and right eye corneal
reflections. The method relies on a subpixel accurate calibration of the
catadioptric imaging system defined by two corneas and a camera, which enables
the extension of conventional epipolar geometry for a fast connection between
corneal reflections.</p>
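One building block behind the planar-proxy stabilization described above can be sketched with the standard plane-induced homography H = K (R - t n^T / d) K^-1: reprojecting the mentee's moving view onto a workspace plane amounts to warping pixels with this matrix. The intrinsics, pose, and plane below are toy values, not taken from the thesis:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the plane n^T X + d = 0 (camera-1 frame),
    mapping camera-1 pixels to camera-2 pixels: H = K (R - t n^T / d) K^-1."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

def warp(H, px):
    """Apply a homography to a pixel (x, y), with perspective division."""
    x, y, w = H @ np.array([px[0], px[1], 1.0])
    return np.array([x / w, y / w])

# Toy check: identity intrinsics, two cameras related by a sideways shift,
# and a workspace proxy plane z = 2, i.e. n = (0, 0, 1), d = -2.
K, R = np.eye(3), np.eye(3)
t = np.array([1.0, 0.0, 0.0])
n, d = np.array([0.0, 0.0, 1.0]), -2.0
H = plane_homography(K, R, t, n, d)

X = np.array([1.0, 1.0, 2.0])              # a point on the proxy plane
px1 = X[:2] / X[2]                          # its projection in camera 1
X2 = R @ X + t                              # the same point in camera 2's frame
px2 = X2[:2] / X2[2]                        # its projection in camera 2
assert np.allclose(warp(H, px1), px2)       # homography agrees with projection
```

For points on the proxy plane the warp is exact; points off the plane incur parallax error, which is why a planar proxy is a lightweight approximation rather than a full per-pixel correspondence.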
|
225 |
Topology based global crowd control / Barnett, Adam January 2014 (has links)
We propose a method to determine the flow of large crowds of agents in a scene such that it is filled to its capacity with a coordinated, dynamically moving crowd. Our approach focuses on cooperative control across the entire crowd, with a view to providing a method which animators can use to easily populate and fill a scene. We solve this global planning problem by first finding the topology of the scene using a Reeb graph, which is computed from a harmonic field of the environment. The maximum flow can then be calculated across this graph, detailing how the agents should move through the space. This information is converted back from the topological level to the geometric level using a route planner and the harmonic field. We provide evidence of the system's effectiveness in creating dynamic motion through comparison to a recent method. We also demonstrate how this system allows the crowd to be controlled globally with a couple of simple, intuitive controls, and how it can be useful for designing buildings and providing control in team sports.
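Once the scene topology is reduced to a graph, the capacity-routing step described above is a maximum-flow computation. The toy scene graph and the Edmonds-Karp solver below are illustrative only, assuming hypothetical node names and capacities rather than the thesis's actual Reeb-graph pipeline:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    flow = 0
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Collect the path edges, find the bottleneck, push flow.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v].setdefault(u, 0)
            residual[v][u] += bottleneck
        flow += bottleneck

# Hypothetical topology graph: an entrance feeding two corridors into an exit,
# with edge weights standing in for corridor capacities.
scene = {
    "entrance": {"corridor_a": 3, "corridor_b": 2},
    "corridor_a": {"exit": 2},
    "corridor_b": {"exit": 2},
    "exit": {},
}
print(max_flow(scene, "entrance", "exit"))  # 4 agents per unit time
```

The resulting per-edge flows say how many agents each corridor should carry; a route planner can then translate those abstract edge assignments back into geometric paths.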
|
226 |
Artistic Content Representation and Modelling based on Visual Style Features / Buchanan, Philip Hamish January 2013 (has links)
This thesis aims to understand visual style in the context of computer science, using traditionally intangible artistic properties to enhance existing content manipulation algorithms and develop new content creation methods. The developed algorithms can be used to apply extracted properties to other drawings automatically; transfer a selected style; categorise images based upon perceived style; build 3D models using style features from concept artwork; and other style-based actions that change our perception of an object without changing our ability to recognise it. The research in this thesis aims to provide the style manipulation abilities that are missing from modern digital art creation pipelines.
|
227 |
Dynamic discontinuity meshing / Worrall, Adam January 1998 (has links)
No description available.
|
228 |
Representing illusions : space, narrative and the spectator in fine art practice / O'Riley, Tim January 1998 (has links)
No description available.
|
229 |
Video rate textured image generation with anti-aliasing enhancements / Carter, Matthew Joseph January 1994 (has links)
No description available.
|
230 |
Task and data management for parallel particle tracing / Tidmus, Jonathan Paul January 1997 (has links)
No description available.
|