  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1001

The Subjective Response of People Living with HIV to Illness Narratives in VR

Hamza, Sabeeha 01 January 2005 (has links)
This dissertation reports the results of an exploratory investigation into the potential efficacy of VR both as a support mechanism for people living with HIV/AIDS and as an emotive medium. Two hypotheses were presented, viz. (1) VR will be a form of social support and (2) VR will have an emotional impact on participants. The research builds on findings which demonstrate the therapeutic effectiveness of telling personal and collective narratives in an HIV/AIDS support group. This, together with the tested ability of VR as a therapeutic medium, led to the development of a virtual support group with the aim of testing its therapeutic efficacy. A low-cost, deployable desktop-PC-based system using custom software was developed. The system implemented a VR walkthrough experience of a tranquil campfire in a forest. The scene contained four interactive avatars who related narratives compiled from HIV/AIDS patients. These narratives covered receiving an HIV+ diagnosis, intervention, and coping with living with HIV+ status. To evaluate the system, seven computer semi-literate HIV+ volunteers from townships around Cape Town used the system under the supervision of a clinical psychologist. The participants were interviewed about their experiences with the system, and the data were analysed qualitatively using grounded theory. The group experiment showed extensive qualitative support for the potential efficacy of the VR system as both a support mechanism and an emotive medium. The comments received from the participants suggested that the VR medium would be effective as a source of social support, and could augment real counselling sessions rather than replace them. The categories which emerged from the analysis of the interview data were emotional impact, emotional support, informational support, technology considerations, comparison with other forms of support, timing considerations and emotional presence.
The categories can be grouped according to the research questions, viz. (1) the efficacy of VR as an emotive medium (presence, emotional impact, computer considerations) and (2) the efficacy of the VR simulation as a source of social support (emotional and informational support). Other themes that emerged from the data but were not anticipated were timing considerations and comparison with other forms of counselling. The interviews suggested that both hypotheses 1 and 2 are correct, viz. that the VR system provided a source of social support and had an emotional impact on the participants.
1002

A Comparison of Statistical and Geometric Reconstruction Techniques: Guidelines for Correcting Fossil Hominin Crania

Neeser, Rudolph 01 January 2007 (has links)
The study of human evolution centres, to a large extent, on the study of fossil morphology, including the comparison and interpretation of these remains within the context of what is known about morphological variation within living species. However, many fossils suffer from environmentally caused damage (taphonomic distortion) which hinders any such interpretation: fossil material may be broken and fragmented, while the weight and motion of overlying sediments can cause plastic distortion. To date, a number of studies have focused on the reconstruction of such taphonomically damaged specimens. These studies have used myriad approaches to reconstruction, including thin plate spline methods, mirroring, and regression-based approaches. The efficacy of these techniques remains to be demonstrated, and it is not clear how different parameters (e.g., sample sizes, landmark density, etc.) might affect their accuracy. In order to partly address this issue, this thesis examines three techniques used in the virtual reconstruction of fossil remains by statistical or geometrical means: mean substitution, thin plate spline warping (TPS), and multiple linear regression. These methods are compared by reconstructing the same sample of individuals using each technique. Samples drawn from Homo sapiens, Pan troglodytes, Gorilla gorilla, and various hominin fossils are reconstructed by iteratively removing and then estimating landmarks. The testing determines the methods' behaviour in relation to the extent of landmark loss (i.e., the amount of damage), the reference sample size (this being the data used to guide the reconstructions), and the species of the population from which the reference sample is drawn (which may differ from the species of the damaged fossil). Given a large enough reference sample, the regression-based method is shown to produce the most accurate reconstructions.
Various parameters affect this: when using small reference samples drawn from a population of the same species as the damaged specimen, thin plate splines are the better method, but only as long as there is little damage. As the damage becomes severe (30% or more of the landmarks missing), mean substitution should be used instead: thin plate splines are shown to exhibit rapid error growth in relation to the amount of damage. When the species of the damaged specimen is unknown, or it is the only known individual of its species, the smallest reconstruction errors are obtained with a regression-based approach using a large reference sample drawn from a living species. Testing shows that reference sample size (combined with the use of multiple linear regression) is more important than morphological similarity between the reference individuals and the damaged specimen. The main contribution of this work is a set of recommendations to the researcher on which of the three methods to use, based on the amount of damage, the number of reference individuals, and the species of the reference individuals.
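A minimal sketch of two of these estimators — mean substitution and a one-predictor linear regression — is given below. It is a deliberately simplified illustration (2-D landmarks, a single predictor landmark, hypothetical data), not the multivariate geometric-morphometric pipeline used in the thesis:

```python
from statistics import mean

def mean_substitution(damaged, reference, missing):
    # damaged: {landmark: (x, y)} with missing landmarks absent
    # reference: list of complete {landmark: (x, y)} configurations
    est = dict(damaged)
    for lm in missing:
        xs = [cfg[lm][0] for cfg in reference]
        ys = [cfg[lm][1] for cfg in reference]
        est[lm] = (mean(xs), mean(ys))  # plug in the reference mean
    return est

def regress(xs, ys):
    # ordinary least-squares slope and intercept for one predictor
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def regression_substitution(damaged, reference, missing, predictor):
    # estimate each coordinate of a missing landmark from the same
    # coordinate of one preserved landmark, fitted over the reference sample
    est = dict(damaged)
    for lm in missing:
        coords = []
        for c in (0, 1):
            xs = [cfg[predictor][c] for cfg in reference]
            ys = [cfg[lm][c] for cfg in reference]
            s, b = regress(xs, ys)
            coords.append(s * damaged[predictor][c] + b)
        est[lm] = tuple(coords)
    return est
```

In this toy setting, where the reference configurations vary along a single linear trend, the regression recovers the missing landmark from the preserved one, while mean substitution simply returns the sample average — a small-scale analogue of why the regression-based method benefits from informative reference samples.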
1003

Large Image Support in Digital Repositories

Nel, Marius Francois 01 January 2011 (has links)
Many universities, libraries, government organisations and companies are implementing digital repositories to collect, preserve, administer and distribute their collections via the World Wide Web. In the process of building these digital archives and collections, images such as maps are often captured in an uncompressed, high-resolution format to preserve as much detail as possible. This process of high-resolution archiving gives rise to the problem of providing the end-user with access to these large, high-resolution images. This dissertation investigates methods of storing and delivering large images over the Internet while limiting the amount of data being transferred, and documents efforts to incorporate large image support within the DSpace platform. An end-user usability study of various large image support solutions was conducted to establish how current digital repository large image solutions compared with commercial large image solutions. The study showed that the commercial solutions were superior to current digital repository solutions. A prototype large image solution was developed with the specific aim of providing DSpace with mechanisms to import and deliver large images in a bandwidth-conscious manner. It was found that by implementing and extending currently available open source large image processing software, large image support could be provided to the DSpace platform with minimal or no modification to the DSpace source code. An end-user evaluation study was conducted to establish the usability and effectiveness of the prototype large image support solution. It was found that the prototype provided an easy-to-use solution that gives DSpace an effective large image archiving and delivery mechanism.
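Bandwidth-conscious delivery of large images typically rests on a tiled, multi-resolution pyramid, as in Deep Zoom-style viewers. The arithmetic below is a generic sketch of that idea — the function names and the 256-pixel tile size are illustrative assumptions, not taken from the DSpace prototype described above:

```python
import math

def max_level(width, height):
    """Deepest zoom level in a power-of-two pyramid; each level halves
    the resolution of the one below it."""
    return int(math.ceil(math.log2(max(width, height))))

def tiles_at_level(width, height, level, deepest, tile=256):
    """Number of tiles needed to cover the image at a given zoom level."""
    scale = 2 ** (deepest - level)          # downsampling factor
    w = math.ceil(width / scale)            # image size at this level
    h = math.ceil(height / scale)
    return math.ceil(w / tile) * math.ceil(h / tile)
```

A viewer then fetches only the tiles covering the visible viewport at the current zoom level, which is what keeps the data transferred bounded regardless of the archived image's full resolution.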
1004

Implementation of a proprietary CAD graphics subsystem using the GKS standard interface.

Davies, Trevor Rowland. January 1989 (has links)
This project involved porting a Graphical Software Package (GSP) from the proprietary IDS-80 Gerber CAD system onto a more modern computer that would allow student access for further study and development. Because of the popularity of Unix as an "open systems environment", the computer chosen was an HP9000 running the HP-UX operating system. In addition, it was decided to implement a standard Graphical Kernel System (GKS) interface to provide further portability and to cater for the expected growth of GKS as an international standard. By way of introduction, a brief general overview of computer graphics, some of the essential considerations for the design of a graphics package, and a description of the work undertaken are presented. Then follows a detailed presentation of the two systems central to this project: i) the IDS-80 Gerber proprietary CAD system, with particular attention paid to the Graphical Software Package (GSP) which it uses, and ii) the Graphical Kernel System (GKS), which has become a widely accepted international graphics standard. The major differences between the IDS-80 Gerber GSP system and the GKS system are indicated. Following the theoretical presentation of the GSP and GKS systems, the practical work involved in first implementing a "skeleton" GKS interface on the HP9000 Unix system, incorporating the existing Advanced Graphics Package (AGP), is presented. The establishment of a GKS interface then allows an IDS-80 Gerber GSP interface to be developed and mapped onto it. A detailed description is given of the methods employed for this implementation and the reasons for the data structures chosen. The procedures and considerations for the testing and verification of the total system implemented on the HP9000 then follow. Original IDS-80 Gerber 2-D applications software was used for the purpose of testing. The implementation of the database that this software uses is also presented.
Conclusions on system performance are finally presented, as well as suggested areas for possible further work. / Thesis (M.Sc.)-University of Natal, Durban, 1989.
1005

Minkštų šešėlių vaizdavimas realiuoju laiku / Rendering soft shadows in real-time

Pranckevičius, Aras 30 May 2005 (has links)
Shadows provide an important visual cue in computer graphics. In this thesis we focus on real-time soft shadow algorithms. Two new techniques are presented; both run entirely on modern graphics hardware. "Soft Shadows Using Precomputed Visibility Distance Functions" renders fake soft shadows in static scenes using precomputed visibility information. The technique handles dynamic local light sources and contains special computation steps to generate smooth shadows from hard visibility functions. The resulting images are not physically accurate; nevertheless, the method renders plausible images that imitate global illumination. "Soft Projected Shadows" is a simple method for simulating natural shadow penumbra for projected grayscale shadow textures. Shadow blurring is performed entirely in image space and needs only a couple of special blurring passes on pixel shader 2.0 hardware. The technique treats shadow receivers as nearly planar surfaces and does not handle self-shadowing, but executes very fast and renders plausible soft shadows. Multiple overlapping shadow casters in a single shadow map are natively supported without any performance overhead.
1006

Two problems of digital image formation : recovering the camera point spread function and boosting stochastic renderers by auto-similarity filtering

Delbracio, Mauricio 25 March 2013 (has links) (PDF)
This dissertation contributes to two fundamental problems of digital image formation: the modeling and estimation of the blur introduced by an optical digital camera, and the fast generation of realistic synthetic images. The accurate estimation of a camera's intrinsic blur is a longstanding problem in image processing. Recent technological advances have significantly impacted image quality, so improving the accuracy of calibration procedures is imperative to further push this development. The first part of this thesis presents a mathematical theory that models the physical acquisition process of digital cameras. Based on this model, two fully automatic algorithms to estimate the intrinsic camera blur are introduced. For the first one, the estimation is performed from a photograph of a specially designed calibration pattern. One of the main contributions of this dissertation is the proof that a pattern with white-noise characteristics is near optimal for this estimation. The second algorithm circumvents the tedious process of using a calibration pattern. Indeed, we prove that two photographs of a textured planar scene, taken at two different distances with the same camera configuration, are enough to produce an accurate estimation. In the second part of this thesis, we propose an algorithm to accelerate realistic image synthesis. Several hours or even days may be necessary to produce high-quality images. In a typical renderer, image pixels are formed by averaging the contributions of stochastic rays cast from a virtual camera. The simple yet powerful acceleration principle consists of detecting similar pixels by comparing their ray histograms and letting them share their rays. Results show a significant acceleration while preserving image quality.
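The ray-sharing principle of the second part can be sketched in a few lines. The version below is a toy: scalar radiance samples stand in for full ray contributions, and an L1 distance between normalized histograms stands in for the thesis's actual similarity test:

```python
def histogram(samples, n_bins=8, lo=0.0, hi=1.0):
    """Bin a pixel's scalar radiance samples into a fixed-range histogram."""
    h = [0] * n_bins
    for s in samples:
        i = min(int((s - lo) / (hi - lo) * n_bins), n_bins - 1)
        h[i] += 1
    return h

def similar(h1, h2, threshold=0.25):
    """Compare two histograms by L1 distance after normalization."""
    n1, n2 = sum(h1), sum(h2)
    d = sum(abs(a / n1 - b / n2) for a, b in zip(h1, h2))
    return d <= threshold

def share_rays(pixel_samples, threshold=0.25):
    """Average each pixel's samples with those of pixels whose ray
    histograms look alike, instead of casting additional rays."""
    hists = [histogram(s) for s in pixel_samples]
    out = []
    for i, si in enumerate(pixel_samples):
        pool = list(si)
        for j, sj in enumerate(pixel_samples):
            if j != i and similar(hists[i], hists[j], threshold):
                pool.extend(sj)          # borrow the similar pixel's rays
        out.append(sum(pool) / len(pool))
    return out
```

Pixels with matching histograms effectively double (or more) their sample count for free, which is the source of the acceleration: variance drops as if more rays had been traced.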
1007

Statistical ray-tracing analysis of the linear Fresnel mirror solar concentrator

Ying, Xiaomin January 1993 (has links)
The Monte Carlo-type statistical ray-tracing method was used to investigate the performance of the line-focusing Fresnel mirror solar concentrator. An optical model of the line-focusing Fresnel mirror concentrator using the statistical ray-tracing approach was developed. In this model, many rays of sunlight from the solar disk are selected at random and traced through the concentrator. The optical model permits calculation of the local and geometric concentration ratios; the latter requires an energy-loss analysis. Small sun-tracking errors of the diurnal or transverse type were included in the model. Based on the optical model and the Monte Carlo-type statistical ray-tracing method, a computer program implementing the model and its computations was written in Pascal. To facilitate performance comparisons, a baseline concentrator design was adopted. To study the effects of imperfect tracking, performance data were generated for small tracking errors up to approximately two and one-half degrees. The selected mirror configuration permitted comparisons between the statistical approach and previous applications of the "extreme ray" analysis for an imperfectly tracking mirror concentrator. Simulation results demonstrated that the concentration characteristics are highly sensitive to tracking error: the geometric concentration ratio decreases dramatically as the tracking error increases, consistent with the "extreme ray" analysis. Results of some typical numerical calculations are presented graphically and discussed. / Department of Physics and Astronomy
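The core of such a statistical ray-tracing model — sample a random angular deviation per ray, propagate it to the receiver plane, and count intercepts — can be sketched as follows. The geometry is a hypothetical single-facet simplification with illustrative dimensions, not the thesis's full concentrator model:

```python
import math
import random

def intercept_fraction(n_rays, tracking_error_deg, focal_height=10.0,
                       receiver_half_width=0.1, half_sun_deg=0.267, seed=1):
    """Monte Carlo estimate of the fraction of reflected rays that land on
    the receiver, for one flat facet aimed at the receiver centre."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        # angular deviation: a random ray from the finite solar disk
        # plus a systematic sun-tracking error
        dev = math.radians(rng.uniform(-half_sun_deg, half_sun_deg)
                           + tracking_error_deg)
        # lateral displacement of the reflected ray at the receiver plane
        x = focal_height * math.tan(dev)
        if abs(x) <= receiver_half_width:
            hits += 1
    return hits / n_rays
```

With these illustrative dimensions a perfectly tracked facet intercepts every ray, while a 2.5-degree tracking error misses the receiver entirely — qualitatively reproducing the strong sensitivity to tracking error reported above.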
1008

An extensible Java system for graph editing and algorithm animation

Nall, Aaron J. January 1998 (has links)
The G-Net research group at Ball State University previously developed a graph editor, written in Java, with limited algorithm support. This editor was modified until the code had the instability of a legacy system. It was decided that, rather than continue working with the old system, a new version would be created. The enhancements planned for this new version were more efficient data structures, easy addition of new algorithms, and animated algorithm output. Additionally, the new version was to be written in compliance with the latest Java standards. This paper describes the structure of this new program, Jedit3.1. An overview of the structure of the program and detailed descriptions of the material that future programmers will need to understand in order to add new algorithms are included. Appropriate descriptions are included for files that future programmers should understand but not necessarily modify. / Department of Computer Science
1009

Issues of implementing X windows on a non-X windows device

Kreiner, Barrett January 1991 (has links)
X windows is a graphic display management system. It is designed to work on a variety of machines and display adapters; however, it is not designed for terminals with local graphics capabilities. X windows can be made to work on this type of terminal, although in a slower and restricted form. The problem with designing a variation of X for these terminals lies in the translation from X requests to native graphics commands, and in the mapping of terminal input into X events. These implementation issues are discussed and example code is provided. / Department of Computer Science
1010

Real-time Generation of Procedural Forests

Kenwood, Julian 01 January 2014 (has links)
The creation of 3D models for games and simulations is generally a time-consuming and labour-intensive task. Forested landscapes are an important component of many large virtual environments in games and film. Creating the many individual tree models required for forests demands a large number of artists and a great deal of time. In order to reduce modelling time, procedural methods are often used. Such methods allow tree models to be created automatically and relatively quickly, albeit at potentially reduced quality. Although the process is faster than manual creation, it can still be slow and resource-intensive for large forests. The main contribution of this work is the development of an efficient procedural generation system for creating large forests. Our system uses L-Systems, a grammar-based procedural technique, to generate each tree. We explore two approaches to accelerating the creation of large forests. First, we demonstrate performance improvements for the creation of individual trees in the forest by reducing the computation required by the underlying L-Systems. Second, we reduce the memory overhead by sharing geometry between trees using a novel branch instancing approach. Test results show that our scheme significantly improves the speed of forest generation over naive methods: our system is able to generate over 100,000 trees in approximately 2 seconds while using a modest amount of memory. With respect to improving L-System processing, one of our methods achieves a 25% speed-up over traditional methods at the cost of a small amount of additional memory, while the second achieves a 99% reduction in memory at the expense of a small amount of extra processing.
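An L-System of this kind rewrites a string of symbols in parallel at each iteration, with bracketed substrings denoting branches. A minimal, hypothetical rewriter (not the thesis's accelerated implementation) looks like this:

```python
def expand(axiom, rules, iterations):
    """Rewrite every symbol in parallel each iteration; symbols with no
    production rule are copied through unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# a classic branching-tree grammar: 'F' draws a segment, '+'/'-' turn,
# '[' pushes and ']' pops the turtle state
tree_rules = {"F": "F[+F]F[-F]F"}
```

String length grows geometrically with iteration count, which is why reducing the computation per L-System derivation and sharing geometry between identical branch substrings (instancing) matter when generating forests of 100,000 trees.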
