1

Perceptual depth cues in support of medical data visualisation

Lyness, Caleb Alexander 01 June 2004
This work investigates methods to provide clinically useful visualisations of the data produced by an X-ray/CT scanner. Specifically, it examines the use of perceptual depth cues (PDCs) and perceptual depth cue theory to create effective visualisations. Two visualisation systems are explored: one to display X-ray data and the other to display volumetric data. The systems are enhanced using stereoscopic and motion PDCs. The presented analyses show that these are the only possible enhancements common to both systems. The theoretical and practical aspects of implementing these enhancements are presented. Volume rendering techniques are explored to find an approach which gracefully handles poorly sampled data and provides the interactive rendering needed for motion cues. A low-cost, real-time volume rendering system is developed and a novel stereo volume rendering technique is presented. The developed system uses commodity graphics hardware and OpenGL. To evaluate the visualisation systems, a task-based user test is designed and implemented. The test requires the subjects to be observed while they complete a 3D diagnostic task using each system. The speed and accuracy with which the task is performed are used as metrics. The experimental results are used to compare the effectiveness of the augmented perceptual depth cues and to cross-compare the systems. The experiments show that user performance with the two visualisation systems is statistically equivalent. This suggests that the enhanced X-ray visualisation can be used in place of CT data for some tasks. The benefits of this are twofold: a decrease in the patient's exposure to radiation and a reduction in the data acquisition time.
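(Illustrative sketch, not from the thesis.) The core of a low-cost volume renderer of this kind is ray casting with front-to-back alpha compositing through the sampled volume; the thesis implements this on commodity graphics hardware with OpenGL, whereas the minimal Python/NumPy version below runs on the CPU, uses an orthographic view along one axis, and assumes a made-up test volume and transfer function.

```python
import numpy as np

def make_volume(n=64):
    """Hypothetical test volume: a dense sphere inside a faint background."""
    z, y, x = np.mgrid[0:n, 0:n, 0:n] / (n - 1.0)
    r = np.sqrt((x - 0.5)**2 + (y - 0.5)**2 + (z - 0.5)**2)
    return np.where(r < 0.25, 1.0, 0.1).astype(np.float32)

def transfer(density):
    """Assumed transfer function mapping density to grey colour and opacity."""
    colour = density
    alpha = np.clip(density * 0.2, 0.0, 1.0)
    return colour, alpha

def raycast(volume):
    """Orthographic front-to-back compositing along the z axis."""
    n = volume.shape[0]
    out_c = np.zeros((n, n), dtype=np.float32)   # accumulated colour
    out_a = np.zeros((n, n), dtype=np.float32)   # accumulated opacity
    for z in range(n):                           # front (z = 0) to back
        c, a = transfer(volume[z])
        out_c += (1.0 - out_a) * a * c           # composite under accumulated opacity
        out_a += (1.0 - out_a) * a
    return out_c

if __name__ == "__main__":
    image = raycast(make_volume())
    print(image.shape, float(image.min()), float(image.max()))
```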
2

A linear framework for character skinning

Merry, Bruce 01 January 2007
Character animation is the process of modelling and rendering a mobile character in a virtual world. It has numerous applications both off-line, such as virtual actors in films, and real-time, such as in games and other virtual environments. There are a number of algorithms for determining the appearance of an animated character, with different trade-offs between quality, ease of control, and computational cost. We introduce a new method, animation space, which provides a good balance between the ease-of-use of very simple schemes and the quality of more complex schemes, together with excellent performance. It can also be integrated into a range of existing computer graphics algorithms. Animation space is described by a simple and elegant linear equation. Apart from making it fast and easy to implement, linearity facilitates mathematical analysis. We derive two metrics on the space of vertices (the “animation space”), which indicate the mean and maximum distances between two points on an animated character. We demonstrate the value of these metrics by applying them to the problems of parametrisation, level-of-detail (LOD) and frustum culling. These metrics provide information about the entire range of poses of an animated character, so they are able to produce better results than considering only a single pose of the character, as is commonly done. In order to compute parametrisations, it is necessary to segment the mesh into charts. We apply an existing algorithm based on greedy merging, but use a metric better suited to the problem than the one suggested by the original authors. To combine the parametrisations with level-of-detail, we require the charts to have straight edges. We explored a heuristic approach to straightening the edges produced by the automatic algorithm, but found that manual segmentation produced better results. Animation space is nevertheless beneficial in flattening the segmented charts; we use least squares conformal maps (LSCM), with the Euclidean distance metric replaced by one of our animation-space metrics. The resulting parametrisations have significantly less overall stretch than those computed based on a single pose. Similarly, we adapt appearance preserving simplification (APS), a progressive mesh-based LOD algorithm, to apply to animated characters by replacing the Euclidean metric with an animation-space metric. When using the memoryless form of APS (in which local rather than global error is considered), the use of animation space for computations reduces the geometric errors introduced by LOD decomposition, compared to simplification based on a single pose. User tests, in which users compared video clips of the two, demonstrated a statistically significant preference for the animation-space simplifications, indicating that the visual quality is better as well. While other methods exist to take multiple poses into account, they are based on a sampling of the pose space, and the computational cost scales with the number of samples used. In contrast, our method is analytic and uses samples only to gather statistics. The quality of LOD approximations is improved further by introducing a novel approach to LOD, influence simplification, in which we remove the influences of bones on vertices, and adjust the remaining influences to approximate the original vertex as closely as possible. Once again, we use an animation-space metric to determine the approximation error.
By combining influence simplification with the progressive mesh structure, we can obtain further improvements in quality: for some models and at some detail levels, the error is reduced by an order of magnitude relative to a pure progressive mesh. User tests showed that for some models this significantly improves quality, while for others it makes no significant difference. Animation space is a generalisation of skeletal subspace deformation (SSD), a popular method for real-time character animation. This means that there is a large existing base of models that can immediately benefit from the modified algorithms mentioned above. Furthermore, animation space almost entirely eliminates the well-known shortcomings of SSD (the so-called “candy-wrapper” and “collapsing elbow” effects). We show that given a set of sample poses, we can fit an animation-space model to these poses by solving a linear least-squares problem. Finally, we demonstrate that animation space is suitable for real-time rendering, by implementing it, along with level-of-detail rendering, on a PC with a commodity video card. We show that although the extra degrees of freedom make the straightforward approach infeasible for complex models, it is still possible to obtain high performance; in fact, animation space requires fewer basic operations to transform a vertex position than SSD. We also consider two methods of lighting LOD-simplified models using the original normals: tangent-space normal maps, an existing method that is fast to render but does not capture dynamic structures such as wrinkles; and tangent maps, a novel approach that encodes animation-space tangent vectors into textures, and which captures dynamic structures. We compare the methods both for performance and quality, and find that tangent-space normal maps are at least an order of magnitude faster, while user tests failed to show any perceived difference in quality between them.
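(Illustrative sketch, not from the thesis.) The sketch below contrasts standard SSD (linear blend skinning) with a linear evaluation in the spirit of animation space, in which each influencing bone contributes its own 4-vector of parameters so the skinned vertex is a single linear function of those parameters; the bones, weights, and vertex data are hypothetical, and the exact formulation is given in the thesis.

```python
import numpy as np

def ssd(vertex, bones, weights):
    """Skeletal subspace deformation (linear blend skinning):
    v' = sum_i w_i * G_i * v, with v in homogeneous coordinates."""
    v = np.append(vertex, 1.0)
    return sum(w * (G @ v) for G, w in zip(bones, weights))[:3]

def animation_space(bones, p):
    """Animation-space style evaluation: v' = sum_i G_i * p_i, where each p_i is a
    free 4-vector per influencing bone (SSD is the special case p_i = w_i * [v, 1])."""
    return sum(G @ p_i for G, p_i in zip(bones, p))[:3]

def rot_z(theta, offset):
    """Hypothetical bone matrix: rotation about z plus translation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, offset[0]],
                     [s,  c, 0.0, offset[1]],
                     [0., 0., 1.0, offset[2]],
                     [0., 0., 0.0, 1.0]])

if __name__ == "__main__":
    bones = [rot_z(0.0, (0, 0, 0)), rot_z(0.5, (1, 0, 0))]
    vertex = np.array([0.5, 0.1, 0.0])
    weights = [0.7, 0.3]
    v_ssd = ssd(vertex, bones, weights)
    # Choosing p_i = w_i * [v, 1] reproduces SSD exactly; in general the p_i are free.
    p = [w * np.append(vertex, 1.0) for w in weights]
    v_anim = animation_space(bones, p)
    print(np.allclose(v_ssd, v_anim))  # True
```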
3

Voxel-Space Shape Grammars

Crumley, Zacharia 01 January 2012
The field of Procedural Generation is being increasingly used in modern content generation for its ability to significantly decrease the cost and time involved. One such area of Procedural Generation is Shape Grammars, a type of formal grammar that operates on geometric shapes instead of symbols. Conventional shape grammar implementations use mesh representations of shapes, but this has two significant drawbacks. Firstly, mesh representations make Boolean geometry operations on shapes difficult to accomplish. Boolean geometry operations allow us to combine shapes using Boolean operators (and, or, not), producing complex, composite shapes. A second drawback is that sub-, or trans-shape detailing is challenging to achieve. To address these two problems with conventional mesh-based shape grammars, we present a novel extension to shape grammars, in which a voxel representation of the generated shapes is used. We outline a five stage algorithm for using these extensions and discuss a number of optional enhancements and optimizations. The final output of the algorithm is a detailed mesh model, suitable for use in real-time or offline graphics applications. We also test our extension’s performance and range of output with three categories of testing: performance testing, output range testing, and variation testing. The results of the testing with our proof-of-concept implementation show that our unoptimized algorithm is slower than conventional shape grammar implementations, with a running time that is O(N^3) for an N^3 voxel grid. However, there is scope for further optimization to our algorithm, which would significantly reduce running times and memory consumption. We outline and discuss several such avenues for performance enhancement. Additionally, testing reveals that our algorithm is able to successfully produce a broad range of detailed outputs, exhibiting many features that would be very difficult to accomplish using mesh-based shape grammar implementations. This range of 3D models includes fractals, skyscraper buildings, space ships, castles, and more. Further, stochastic rules can be used to produce a variety of models that share a basic archetype, but differ noticeably in their details.
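(Illustrative sketch, not from the thesis.) One reason a voxel representation suits shape grammars is that Boolean geometry becomes trivial: with shapes stored as occupancy grids, union, intersection, and difference are element-wise logical operations, as in the hypothetical NumPy example below.

```python
import numpy as np

def sphere(n, centre, radius):
    """Occupancy grid for a sphere -- a hypothetical terminal shape."""
    z, y, x = np.mgrid[0:n, 0:n, 0:n]
    return (x - centre[0])**2 + (y - centre[1])**2 + (z - centre[2])**2 <= radius**2

def box(n, lo, hi):
    """Occupancy grid for an axis-aligned box."""
    z, y, x = np.mgrid[0:n, 0:n, 0:n]
    return ((x >= lo[0]) & (x <= hi[0]) &
            (y >= lo[1]) & (y <= hi[1]) &
            (z >= lo[2]) & (z <= hi[2]))

n = 64
a = box(n, (8, 8, 8), (48, 48, 48))
b = sphere(n, (32, 32, 32), 20)

union        = a | b    # "or"
intersection = a & b    # "and"
difference   = a & ~b   # "and not" -- carve the sphere out of the box

print(union.sum(), intersection.sum(), difference.sum())
```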
4

Fast, Realistic Terrain Synthesis

Crause, Justin 01 December 2015
The authoring of realistic terrain models is necessary to generate immersive virtual environments for computer games and film visual effects. However, creating these landscapes is difficult – it usually involves an artist spending many hours sculpting a model in a 3D design program. Specialised terrain generation programs, such as Bryce (2013) and Terragen (2013), exist to rapidly create artificial terrains. These make use of complex algorithms to pseudo-randomly generate the terrains, which can then be exported into a 3D editing program for fine tuning. Height-maps are 2D data structures that store elevation values and are a common format for representing terrain data in generation and editing systems. Because height-maps share the same storage design as image files, they can be viewed like any picture, and image transformation algorithms can be applied to them. Early techniques for generating terrains include fractal generation and physical simulation. These methods proved difficult to use because they are controlled by a set of parameters whose effect on the output is hard to predict, so the user must adjust values over several iterations to produce the desired terrain. An improved technique, known as texture-based terrain synthesis, offers a higher degree of user control as well as improved realism. It borrows from texture synthesis, the process of algorithmically generating a larger image from a smaller sample image. Texture-based terrain synthesis makes use of real-world terrain data to produce highly realistic landscapes, improving upon previous techniques. Recent work in texture-based synthesis has focused on improving both realism and user control through the use of sketching interfaces. We present a patch-based terrain synthesis system that utilises a user sketch to control the location of desired terrain features, such as ridges and valleys. Digital Elevation Models (DEMs) of real landscapes are used as exemplars, from which candidate patches of data are extracted and matched against the user’s sketch. The best candidates are merged seamlessly into the final terrain. Because real landscapes are used, the resulting terrain appears highly realistic. Our research contributes a new version of this approach that employs multiple input terrains and acceleration using a modern Graphics Processing Unit (GPU). The use of multiple inputs increases the candidate pool of patches, so the system is capable of producing more varied terrains. This addresses the limitation where supplying the wrong type of input terrain would fail to synthesise anything useful, for example supplying the system with a mountainous DEM and expecting deep valleys in the output. We developed a hybrid multithreaded CPU and GPU implementation that achieves a 45-times speedup.
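(Illustrative sketch, not from the thesis.) The heart of patch-based synthesis is matching candidate exemplar patches against the user's sketch and keeping the best match; the toy NumPy version below uses a plain sum-of-squared-differences cost over random stand-in height-maps and omits feature extraction, seam merging, and the GPU acceleration described above.

```python
import numpy as np

def best_patch(sketch_patch, exemplars, patch=16, stride=8):
    """Return the exemplar patch with the smallest SSD cost against the sketch patch.
    'exemplars' is a list of DEM height-maps (2D arrays); the cost here is plain SSD,
    whereas a real system would also match extracted features such as ridge lines."""
    best, best_cost = None, np.inf
    for dem in exemplars:
        h, w = dem.shape
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                cand = dem[y:y + patch, x:x + patch]
                cost = np.sum((cand - sketch_patch) ** 2)
                if cost < best_cost:
                    best, best_cost = cand, cost
    return best, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    exemplars = [rng.random((128, 128)) for _ in range(2)]  # stand-ins for real DEMs
    sketch = rng.random((16, 16))                           # stand-in for the user's sketch
    patch, cost = best_patch(sketch, exemplars)
    print(patch.shape, cost)
```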
5

A connectionist explanation of presence in virtual environments

Nunez, David 01 February 2003
Presence has various definitions, but can be understood as the sensation that a virtual environment is a real place, that the user is actually in the virtual environment rather than at the display terminal, or that the medium used to display the environment has disappeared leaving only the environment itself. We present an attempt to unite various presence approaches by reducing each to what we believe is a common basis – the psychology of behaviour selection and control – and re-conceptualizing presence in these terms by defining cognitive presence – the mental state where the VE rather than the real environment is acting as the basis for behaviour selection. The bulk of this work represents the construction of a three-layer connectionist model to explain and predict this concept of cognitive presence. This model takes input from two major sources: the perceptual modalities of the user (bottom-up processes), and the mental state of the user (top-down processes). These two basic sources of input competitively spread activation to a central layer, which in turn competitively determines which behaviour script will be applied to regulate behaviour. We demonstrate the ability of the model to cope with current notions of presence by using it to successfully predict two published findings: one (Hendrix & Barfield, 1995) showing that presence increases with an increase in the geometric field of view of the graphical display, and another (Sallnas, 1999), which demonstrates the positive relationship between presence and the stimulation of more than one sensory modality. Apart from this theoretical analysis, we also perform two experiments to test the central tenets of our model. The first experiment aimed to show that presence is affected by perceptual inputs (bottom-up processes), conceptual inputs (top-down processes), and the interaction of these. We collected 103 observations from a 2x2 factorial design with stimulus quality (2 levels) and conceptual priming (2 levels) as independent variables, and three measures of presence (Slater, Usoh & Steed’s scale (1995), Witmer & Singer’s (1998) Presence Questionnaire and our own cognitive presence measure) as the dependent variables. We found a significant main effect for stimulus quality and a significant interaction, which created a striking effect: priming the subject with material related in theme to the content of the VE increased the mean presence score for those viewing the high quality display, but decreased the mean of those viewing the low quality display. For those not primed with material related to the VE, no mean presence difference was discernible between those using high and low quality displays. The results from this study suggest that both top-down and bottom-up activation should be taken into account when explaining the causality of presence. Our second study aimed to show that presence comes about as a result not of raw sensory information, but rather due to partly-processed perceptual information. To do this we created a simple three-group comparative design, with 78 observations. Each of the three groups viewed the same VE under one of three display conditions: high-quality graphical, low-quality graphical, and text-only. Using the model, we predicted that the text and low-quality graphics displays would produce the same presence levels, while the high-quality display would outperform them both.
The results were mixed, with the Slater, Usoh & Steed scale showing the predicted pattern, but the Presence Questionnaire showing each condition producing a significantly different presence score (in the increasing order: text, low-quality graphics, high-quality graphics). We conclude from our studies that the model shows the correct basic structure, but that it requires some refinement with regard to its handling of non-immersive displays. We examined the performance of our presence measure, which did not perform satisfactorily. We conclude by proposing some points relevant to the methodology of presence research, and by suggesting some avenues for future expansion of our model.
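(Illustrative sketch, not from the thesis.) The toy NumPy model below captures only the general idea described above: bottom-up (perceptual) and top-down (conceptual) activation feed a central layer whose competition, modelled here as a softmax over hypothetical weights, selects a behaviour script.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def select_script(perceptual, conceptual, W_bu, W_td):
    """Combine bottom-up and top-down activation and competitively select a
    behaviour script (competition modelled here, illustratively, as a softmax)."""
    activation = W_bu @ perceptual + W_td @ conceptual
    return softmax(activation)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two candidate scripts: behave towards the VE vs towards the real environment.
    W_bu = rng.random((2, 4))   # weights from 4 perceptual inputs (hypothetical)
    W_td = rng.random((2, 3))   # weights from 3 conceptual/state inputs (hypothetical)
    display_inputs = np.array([0.9, 0.8, 0.7, 0.9])  # e.g. a high-quality display
    primed_expectations = np.array([0.8, 0.9, 0.7])  # e.g. thematic priming
    print(select_script(display_inputs, primed_expectations, W_bu, W_td))
```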
6

Identification and Reconstruction of Bullets from Multiple X-Rays

Perkins, Simon 01 June 2004
The 3D shape and position of objects inside the human body are commonly detected using Computed Tomography (CT) scanning. CT is an expensive diagnostic option in economically disadvantaged areas and the radiation dose experienced by the patient is significant. In this dissertation, we present a technique for reconstructing the 3D shape and position of bullets from multiple X-rays. This technique makes use of ubiquitous X-ray equipment and a small number of X-rays to reduce the radiation dose. Our work relies on Image Segmentation and Volume Reconstruction techniques. We present a method for segmenting bullets out of X-rays, based on their signature in intensity profiles. This signature takes the form of a distinct plateau which we model with a number of parameters. This model is used to identify horizontal and vertical line segments within an X-ray corresponding to a bullet signature. Regions containing confluences of these line segments are selected as bullet candidates. The actual bullet is thresholded out of the region based on a range of intensities occupied by the intensity profiles that contributed to the region. A simple Volume Reconstruction algorithm is implemented that back-projects the silhouettes of bullets obtained from our segmentation technique. This algorithm operates on a 3D voxel volume represented as an octree. The reconstruction is reduced to the 2D case by reconstructing one slice of the voxel volume at a time. We achieve good results for our segmentation algorithm. When compared with a manual segmentation, our algorithm matches 90% of the bullet pixels in nine of the twelve test X-rays. Our reconstruction algorithm produces acceptable results: it achieves a 70% match for a test case where we compare a simulated bullet with a reconstructed bullet.
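(Illustrative sketch, not from the thesis.) The reconstruction step back-projects the segmented bullet silhouettes into a voxel volume and keeps only voxels consistent with every view, in the manner of a visual hull; the NumPy toy below uses orthographic projections along coordinate axes and made-up silhouettes, and omits the octree and slice-by-slice processing used in the dissertation.

```python
import numpy as np

def reconstruct(silhouettes, axes, n=64):
    """Carve a voxel volume from binary silhouettes: a voxel survives only if its
    orthographic projection lies inside every silhouette (visual-hull intersection).
    Projection along coordinate axes is an illustrative simplification."""
    volume = np.ones((n, n, n), dtype=bool)
    z, y, x = np.mgrid[0:n, 0:n, 0:n]
    coords = {0: (y, x), 1: (z, x), 2: (z, y)}  # image coordinates per viewing axis
    for sil, axis in zip(silhouettes, axes):
        u, v = coords[axis]
        volume &= sil[u, v]
    return volume

if __name__ == "__main__":
    n = 64
    # Hypothetical silhouettes: a disc seen along z and a rectangle seen along x.
    yy, xx = np.mgrid[0:n, 0:n]
    disc = (xx - 32)**2 + (yy - 32)**2 <= 10**2
    rect = (xx > 20) & (xx < 44) & (yy > 28) & (yy < 36)
    bullet = reconstruct([disc, rect], axes=[0, 2], n=n)
    print(bullet.sum(), "voxels occupied")
```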
7

Lattice Boltzmann Liquid Simulations on Graphics Hardware

Clough, Duncan 01 June 2013
Fluid simulation is widely used in the visual effects industry. The high level of detail required to produce realistic visual effects requires significant computation. Usually, expensive computer clusters are used in order to reduce the time required. However, general purpose Graphics Processing Unit (GPU) computing has potential as a relatively inexpensive way to reduce these simulation times. In recent years, GPUs have been used to achieve enormous speedups via their massively parallel architectures. Within the field of fluid simulation, the Lattice Boltzmann Method (LBM) stands out as a candidate for GPU execution because its grid-based structure is a natural fit for GPU parallelism. This thesis describes the design and implementation of a GPU-based free-surface LBM fluid simulation. Broadly, our approach is to ensure that the steps that perform most of the work in the LBM (the stream and collide steps) make efficient use of GPU resources. We achieve this by removing complexity from the core stream and collide steps and handling interactions with obstacles and tracking of the fluid interface in separate GPU kernels. To determine the efficiency of our design, we perform separate, detailed analyses of the performance of the kernels associated with the stream and collide steps of the LBM. We demonstrate that these kernels make efficient use of GPU resources and achieve speedups of 29.6 and 223.7, respectively. Our analysis of the overall performance of all kernels shows that significant time is spent performing obstacle adjustment and interface movement as a result of limitations associated with GPU memory accesses. Lastly, we compare our GPU LBM implementation with a single-core CPU LBM implementation. Our results show speedups of up to 81.6 with no significant differences in output from the simulations on both platforms. We conclude that order of magnitude speedups are possible using GPUs to perform free-surface LBM fluid simulations, and that GPUs can, therefore, significantly reduce the cost of performing high-detail fluid simulations for visual effects.
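(Illustrative sketch, not from the thesis.) The stream and collide steps at the heart of the LBM are shown below as a single-phase D2Q9 CPU reference in NumPy, with periodic boundaries and no free surface or obstacles; in a GPU implementation each lattice node would map to a thread, and the relaxation time and test setup here are arbitrary.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """BGK equilibrium distributions for density rho (H,W) and velocity u (2,H,W)."""
    cu = np.einsum('qd,dij->qij', C, u)          # c_q . u
    usq = np.sum(u * u, axis=0)
    return rho * W[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def stream(f):
    """Streaming step: shift each distribution along its lattice velocity (periodic)."""
    return np.stack([np.roll(f[q], shift=tuple(C[q]), axis=(1, 0)) for q in range(9)])

def collide(f, tau=0.6):
    """BGK collision step: relax distributions toward the local equilibrium."""
    rho = f.sum(axis=0)
    u = np.einsum('qd,qij->dij', C, f) / rho
    return f + (equilibrium(rho, u) - f) / tau

if __name__ == "__main__":
    H = Wd = 32
    rho0 = np.ones((H, Wd))
    rho0[12:20, 12:20] = 1.05                    # small density perturbation
    f = equilibrium(rho0, np.zeros((2, H, Wd)))
    for _ in range(50):
        f = collide(stream(f))
    print("mass conserved:", np.isclose(f.sum(), rho0.sum()))
```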
8

Fast and Accurate Visibility Preprocessing

Nirenstein, Shaun 01 October 2003
Visibility culling is a means of accelerating the graphical rendering of geometric models. Invisible objects are efficiently culled to prevent their submission to the standard graphics pipeline. It is advantageous to preprocess scenes in order to determine invisible objects from all possible camera views. This information is typically saved to disk and may then be reused until the model geometry changes. Such preprocessing algorithms are therefore used for scenes that are primarily static. Currently, the standard approach to visibility preprocessing algorithms is to use a form of approximate solution, known as conservative culling. Such algorithms over-estimate the set of visible polygons. This compromise has been considered necessary in order to perform visibility preprocessing quickly. These algorithms attempt to satisfy the goals of both rapid preprocessing and rapid run-time rendering. We observe, however, that there is a need for algorithms with superior performance in preprocessing, as well as for algorithms that are more accurate. For most applications these features are not required simultaneously. In this thesis we present two novel visibility preprocessing algorithms, each of which is strongly biased toward one of these requirements. The first algorithm has the advantage of performance. It executes quickly by exploiting graphics hardware. The algorithm also has the features of output sensitivity (to what is visible), and a logarithmic dependency in the size of the camera space partition. These advantages come at the cost of image error. We present a heuristic guided adaptive sampling methodology that minimises this error. We further show how this algorithm may be parallelised and also present a natural extension of the algorithm to five dimensions for accelerating generalised ray shooting. The second algorithm has the advantage of accuracy. No over-estimation is performed, nor are any sacrifices made in terms of image quality. The cost is primarily that of time. Despite the relatively long computation, the algorithm is still tractable and on average scales slightly superlinearly with the input size. This algorithm also has the advantage of output sensitivity. This is the first known tractable exact solution to the general 3D from-region visibility problem. In order to solve the exact from-region visibility problem, we had to first solve a more general form of the standard stabbing problem. An efficient solution to this problem is presented independently.
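(Illustrative sketch, not from the thesis.) The fast algorithm samples visibility from within a view cell; the toy below stands in for hardware item-buffer rendering with simple ray casting against spheres, taking the union of first-hit objects over random viewpoints and directions. Such sampling can under-estimate the true visible set, which is the image error that the adaptive sampling in the thesis is designed to control.

```python
import numpy as np

def ray_sphere(origin, direction, centre, radius):
    """Smallest positive hit distance for a unit-length ray, or inf on a miss."""
    oc = origin - centre
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc < 0:
        return np.inf
    t = -b - np.sqrt(disc)
    return t if t > 0 else np.inf

def sampled_visible_set(cell_lo, cell_hi, spheres, n_views=32, n_rays=256, seed=0):
    """Approximate from-region visibility: for random viewpoints in the view cell and
    random ray directions, record which object is hit first."""
    rng = np.random.default_rng(seed)
    visible = set()
    for _ in range(n_views):
        eye = rng.uniform(cell_lo, cell_hi)
        for _ in range(n_rays):
            d = rng.normal(size=3)
            d /= np.linalg.norm(d)
            hits = [(ray_sphere(eye, d, c, r), i) for i, (c, r) in enumerate(spheres)]
            t, i = min(hits)
            if np.isfinite(t):
                visible.add(i)
    return visible

if __name__ == "__main__":
    spheres = [(np.array([5.0, 0, 0]), 2.0),   # visible from the cell
               (np.array([10.0, 0, 0]), 1.0)]  # fully occluded behind the first sphere
    print(sampled_visible_set(np.zeros(3), np.ones(3), spheres))  # expect {0}
```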
9

The Subjective Response of People Living with HIV to Illness Narratives in VR

Hamza, Sabeeha 01 January 2005
This dissertation reports on the results of an exploratory investigation into the potential efficacy of VR both as a support mechanism for people living with HIV/AIDS and as an emotive medium. Two hypotheses were presented, viz. (1) VR will be a form of social support and (2) VR will have an emotional impact on participants. The research builds on findings which demonstrate the therapeutic effectiveness of telling personal and collective narratives in an HIV/AIDS support group. This, together with the tested ability of VR as a therapeutic medium, led to the development of a virtual support group with the aim of testing its therapeutic efficacy. A low-cost, deployable desktop PC based system using custom software was developed. The system implemented a VR walkthrough experience of a tranquil campfire in a forest. The scene contained four interactive avatars who related narratives compiled from HIV/AIDS patients. These narratives covered receiving an HIV+ diagnosis, intervention, and coping with living with HIV+ status. To evaluate the system, seven HIV+ volunteers with limited computer literacy, from townships around Cape Town, used the system under the supervision of a clinical psychologist. The participants were interviewed about their experiences with the system, and the data was analysed qualitatively using grounded theory. The group experiment showed extensive qualitative support for the potential efficacy of the VR system as both a support mechanism and an emotive medium. The comments received from the participants suggested that the VR medium would be effective as a source of social support, and could augment real counselling sessions rather than replace them. The categories which emerged from the analysis of the interview data were emotional impact, emotional support, informational support, technology considerations, comparison with other forms of support, timing considerations and emotional presence. The categories can be grouped according to the research questions, viz. the efficacy of VR as an emotive medium (presence, emotional impact, technology considerations) and the efficacy of the VR simulation as a source of social support (emotional and informational support). Other themes that emerged from the data but were not anticipated were timing considerations and comparison with other forms of counselling. The interviews suggested that both hypotheses 1 and 2 are correct, viz. that the VR system provided a source of social support and had an emotional impact on the participants.
10

A Comparison of Statistical and Geometric Reconstruction Techniques: Guidelines for Correcting Fossil Hominin Crania

Neeser, Rudolph 01 January 2007
The study of human evolution centres, to a large extent, around the study of fossil morphology, including the comparison and interpretation of these remains within the context of what is known about morphological variation within living species. However, many fossils suffer from environmentally caused damage (taphonomic distortion) which hinders any such interpretation: fossil material may be broken and fragmented while the weight and motion of overlaying sediments can cause their plastic distortion. To date, a number of studies have focused on the reconstruction of such taphonomically damaged specimens. These studies have used myriad approaches to reconstruction, including thin plate spline methods, mirroring, and regression-based approaches. The efficacy of these techniques remains to be demonstrated, and it is not clear how different parameters (e.g., sample sizes, landmark density, etc.) might affect their accuracy. In order to partly address this issue, this thesis examines three techniques used in the virtual reconstruction of fossil remains by statistical or geometrical means: mean substitution, thin plate spline warping (TPS), and multiple linear regression. These methods are compared by reconstructing the same sample of individuals using each technique. Samples drawn from Homo sapiens, Pan troglodytes, Gorilla gorilla, and various hominin fossils are reconstructed by iteratively removing then estimating the landmarks. The testing determines the methods' behaviour in relation to the extent of landmark loss (i.e., amount of damage), reference sample sizes (this being the data used to guide the reconstructions), and the species of the population from which the reference samples are drawn (which may be different to the species of the damaged fossil). Given a large enough reference sample, the regression-based method is shown to produce the most accurate reconstructions. Several parameters affect this: when using small reference samples drawn from a population of the same species as the damaged specimen, thin plate spline warping is the better method, but only as long as there is little damage. As the damage becomes severe (missing 30% of the landmarks, or more), mean substitution should be used instead: thin plate splines are shown to have a rapid error growth in relation to the amount of damage. When the species of the damaged specimen is unknown, or it is the only known individual of its species, the smallest reconstruction errors are obtained with a regression-based approach using a large reference sample drawn from a living species. Testing shows that reference sample size (combined with the use of multiple linear regression) is more important than morphological similarity between the reference individuals and the damaged specimen. The main contribution of this work is a set of recommendations to the researcher on which of the three methods to use, based on the amount of damage, number of reference individuals, and species of the reference individuals.
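(Illustrative sketch, not from the thesis.) The regression-based technique estimates the coordinates of missing landmarks from the preserved ones using multiple linear regression fitted on a reference sample of complete specimens; the NumPy sketch below uses randomly generated, correlated stand-in data rather than real landmark configurations.

```python
import numpy as np

def fit_regression(reference, missing_idx):
    """Fit a multiple linear regression predicting the missing landmark coordinates
    from the preserved ones, using a reference sample of complete specimens.
    'reference' has shape (n_specimens, n_coordinates)."""
    preserved_idx = np.setdiff1d(np.arange(reference.shape[1]), missing_idx)
    X = np.column_stack([np.ones(len(reference)), reference[:, preserved_idx]])
    Y = reference[:, missing_idx]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return preserved_idx, B

def reconstruct(damaged, preserved_idx, missing_idx, B):
    """Estimate the missing coordinates of a damaged specimen."""
    x = np.concatenate([[1.0], damaged[preserved_idx]])
    estimate = damaged.copy()
    estimate[missing_idx] = x @ B
    return estimate

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Hypothetical reference sample: 40 specimens x 30 coordinates (10 3D landmarks),
    # generated with correlated structure so the regression has something to exploit.
    latent = rng.normal(size=(40, 5))
    reference = latent @ rng.normal(size=(5, 30)) + 0.05 * rng.normal(size=(40, 30))
    missing = np.array([0, 1, 2, 9, 10, 11])          # two "damaged" landmarks
    preserved, B = fit_regression(reference, missing)
    specimen = reference[0].copy()
    truth = specimen[missing].copy()
    specimen[missing] = np.nan                        # simulate the damage
    est = reconstruct(specimen, preserved, missing, B)
    print(np.abs(est[missing] - truth).max())         # small reconstruction error
```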
