  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Parallel processing methods applied to two and three dimensional geo-electromagnetic induction modelling

MacDonald, Kenneth J. January 1996 (has links)
Two existing finite difference algorithms for solving the forward modelling problem of geoelectromagnetic induction have been recoded to take advantage of high performance massively parallel SIMD (single instruction multiple data) computer architectures. Poll's algorithm solves for the two scalar polarised fields in the two dimensional (2D) problem, and the other, from Pu, solves for all three components of the magnetic field in three dimensional (3D) structures. Both models apply integral boundary conditions at the top and bottom of the grid to limit the total mesh size. The 3D model introduces a thin sheet at the top of the model to describe near surface features. An efficient data parallel algorithm ensures that evaluation of the integrals maintains a high ratio of processor utilisation on the parallel hardware. Data parallel versions of the point Jacobi, Gauss-Seidel and successive overrelaxation iterative solvers have been developed. The latter two require two-level black-white ordering, which, to equalise the processor load balance, has been implemented with both horizontally banded and chequerboarded remappings of grid nodes. The 2D model was also developed to form a task farm, whereby the solution for each period is performed on one of a cluster of workstations. These solutions are independent of each other, so they are executed simultaneously on however many workstations are available at the time. Modern workstations, coupled with the original 2D Gauss-Jordan solver, are faster than the SIMD computers for all but the largest grid sizes. However, the 3D code certainly benefited from parallel processing for all but the smallest models. A new automatic meshing algorithm, which stretches a predefined number of grid points over the conductivity structure, has also been developed.
In part, this was to control the mesh sizes and hence load balancing on the SIMD computers, but investigations into grid spacing for 2D models show that severely restricting the number of grid points results in a much faster estimated solution.
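The two-level black-white (chequerboard) ordering described above can be illustrated with a short sketch. This is a minimal NumPy illustration of red-black successive overrelaxation on a toy 2D Laplace problem, not the thesis's induction code; the grid size, boundary values and relaxation factor are arbitrary choices:

```python
import numpy as np

def redblack_sor(u, omega=1.8, iters=500):
    """Red-black (chequerboard) successive overrelaxation for the 2D
    Laplace equation.  Nodes of one colour depend only on nodes of the
    other colour, so every same-coloured node can be updated at once --
    the property exploited on SIMD hardware."""
    for _ in range(iters):
        for colour in (0, 1):                  # two half-sweeps per iteration
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == colour:
                        gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                     + u[i, j - 1] + u[i, j + 1])
                        u[i, j] += omega * (gs - u[i, j])
    return u

# Toy problem: unit potential on the top edge, zero elsewhere
u = np.zeros((16, 16))
u[0, :] = 1.0
u = redblack_sor(u)
```

On SIMD hardware each half-sweep maps to one vectorised update of all same-coloured nodes, which is where the banded and chequerboarded remappings of grid nodes matter for load balance.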
42

Computer design and optimisation of holographic phase elements

Samus, Sergei January 1995 (has links)
This thesis presents a brief review of modern phase hologram design methods. A simulated annealing algorithm is incorporated into the design of Dammann gratings to find a global solution. The theoretical basis of binary and multi-phase holograms is given, and a global strategy for binary and multi-phase hologram design is developed. Methods to increase the computational speed of hologram design are developed, explained and systematised. An iterative design method for a novel electrically switchable liquid crystal fan-out continuous hologram, using a constrained growing technique that ensures electrical conductivity, is presented. An in-depth analysis of continuous hologram design methods, using visual design principles, is given. Multi-phase holograms are described and digital simulation results are presented. A technique for pseudo four-level sandwich holograms is described and extended, and sandwich holograms tolerant to a 2-pixel misalignment are developed. Various types of structured overlays for sandwich holograms, including overlays for error diffusion, are developed and described. Comparative characteristics of the various types of phase holograms are given. This thesis presents a novel algorithm ("missing pixel holograms") for varying the intensity of a small number of output spots by taking combinations of predesigned primitive holograms with additional filling of the non-overlapped areas. The algorithm makes it possible to vary spot intensities quickly and without further hologram design. The potential of phase holograms for laser beam scanning and optical interconnects is examined. Most of the simulations are confirmed by optical results of high quality. Binary holograms were implemented on a spatial light modulator (SLM). Optical results for Dammann gratings, holographic animation, beam steering, connected holograms and others are presented.
This thesis introduces a new visual approach to hologram design and the software package "Holomaster 1", which allows visual methods to be implemented in practice. The package incorporates many of the hologram design methods developed in the thesis and summarises all our work. A wide range of C and C++ source code fragments implementing the various algorithms described in the thesis is provided.
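The simulated annealing strategy mentioned at the start of the abstract can be sketched in outline. The following is an illustrative Python sketch of annealing a binary 0/π phase grating towards uniform fan-out; the merit function, cooling schedule and grating size are assumptions chosen for the example, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def fanout_error(phase_bits, n_orders=3):
    """Merit function: non-uniformity (std/mean) of the intensities of
    the first few diffraction orders of a binary 0/pi phase grating."""
    field = np.exp(1j * np.pi * phase_bits)
    spectrum = np.fft.fft(field) / len(field)
    intensities = np.abs(spectrum[:n_orders]) ** 2
    return intensities.std() / (intensities.mean() + 1e-12)

def anneal(n=64, steps=20000, t0=0.1, alpha=0.9995):
    """Simulated annealing over single-pixel flips with a geometric
    cooling schedule."""
    bits = rng.integers(0, 2, n).astype(float)
    e = fanout_error(bits)
    t = t0
    for _ in range(steps):
        k = rng.integers(n)
        bits[k] = 1 - bits[k]                 # trial move: flip one pixel
        e_new = fanout_error(bits)
        # Always accept improvements; accept uphill moves with
        # Boltzmann probability so the search can escape local minima
        if e_new < e or rng.random() < np.exp((e - e_new) / t):
            e = e_new
        else:
            bits[k] = 1 - bits[k]             # reject: undo the flip
        t *= alpha
    return bits, e

bits, err = anneal()
```

The Boltzmann acceptance of uphill moves is what distinguishes annealing from a greedy search and is the reason it can approach a global rather than merely local solution.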
43

Parallel algorithms for atmospheric modelling

Tett, Simon F. B. January 1992 (has links)
In this thesis, the usefulness of massively parallel computers of the MIMD class for satisfying atmospheric modellers' demands for increased computing power is examined. Algorithms for the dynamics, using both grid-point and spectral methods, are developed for these computers. Scaling formulae for these algorithms are derived, and the algorithms are implemented on the Edinburgh Concurrent Supercomputer (ECS). Another component of atmospheric models is parameterization, in which the effects of unresolved phenomena on the mean flow are modelled. Two parameterization schemes are implemented on the ECS and a study of the effects of load-balancing is made. Furthermore, it is concluded that implementing parameterization schemes on data-parallel computers is likely to be difficult, unlike on MIMD machines, where the implementation is straightforward.
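The kind of scaling formula referred to above can be illustrated with a simple model. This sketch assumes a 2D domain decomposition of a grid-point method, with a per-point compute time `t_calc` and a per-point halo communication time `t_comm`; both the model and the constants are illustrative assumptions, not the thesis's actual formulae:

```python
from math import sqrt

def step_time(N, P, t_calc=1.0e-6, t_comm=5.0e-6):
    """Time per timestep for an N x N grid split into P square
    subdomains: compute on N*N/P points, plus a halo exchange along
    the four edges of each (N/sqrt(P))-wide subdomain."""
    compute = (N * N / P) * t_calc
    halo = 4 * (N / sqrt(P)) * t_comm
    return compute + halo

def speedup(N, P, t_calc=1.0e-6, t_comm=5.0e-6):
    serial = N * N * t_calc      # a single processor needs no halo exchange
    return serial / step_time(N, P, t_calc, t_comm)

# Communication keeps the speedup below the processor count,
# but efficiency improves as the grid grows
s = speedup(256, 64)
```

Because compute scales with the subdomain area while communication scales with its perimeter, efficiency rises with problem size, which is the usual argument for MIMD machines on large atmospheric grids.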
44

Pseudo inverse filter design for improving the axial resolution of ultrasound images

Young, Warren Frazer January 1998 (has links)
This thesis is concerned with improving the quality of images from medical ultrasound examinations, which often suffer from poor picture quality: compared to other medical imaging techniques, ultrasound images have poor resolution (axial, lateral and contrast) and poor dynamic range. The axial and contrast resolution of medical ultrasound images can be improved by appropriate filtering of the RF ultrasound signal prior to signal demodulation. In this thesis I present a pseudo inverse filter suitable for improving the axial and contrast resolution of medical ultrasound images. This filter is based on a sigmoid function and approximates an inverse filter, but is less sensitive to noise than an inverse filter. I present results from applying the proposed filter, with a variety of parameter settings, to ultrasound images of three target objects: a near perfect plane reflector, a tissue mimicking phantom, and intravascular ultrasound (IVUS) images from the aorta of a rabbit. These three targets provide good test cases for the filter: the near perfect plane reflector is a good test of the filter under (near) ideal circumstances; the IVUS images are a realistic test of a medical situation, in that they are virtually identical to human IVUS images; and the phantom provides a good midpoint between the first two cases. This midpoint is useful since it allows me to analyse a realistic image while knowing exactly what the structure and shape of the target object is; this knowledge is not available for the IVUS images. Lastly, I present extensive comparisons with other filters that have been proposed for the filtering of ultrasound. These comparisons are both subjective and, more importantly, objective. In particular I make comparisons with the following filters: matched, inverse, phase correction, square root based, fourth root based, and Wiener.
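A pseudo inverse filter of the general kind described, following the exact inverse where the pulse spectrum is strong but rolled off by a sigmoid where it is weak, can be sketched as follows. The particular sigmoid parameterisation, the pulse model and the parameter values here are assumptions for illustration, not the thesis's filter:

```python
import numpy as np

def pseudo_inverse_filter(rf_signal, pulse, steepness=50.0, threshold=0.1):
    """Frequency-domain pseudo inverse filter.  The gain follows the
    exact inverse 1/H(f), but a sigmoid weight suppresses it smoothly
    where |H(f)| is small, limiting noise amplification."""
    n = len(rf_signal)
    H = np.fft.rfft(pulse, n)
    mag = np.abs(H) / np.abs(H).max()
    # Sigmoid weight: ~1 where the pulse spectrum is strong, ~0 where weak
    w = 1.0 / (1.0 + np.exp(-steepness * (mag - threshold)))
    G = w / np.where(np.abs(H) < 1e-12, 1.0, H)
    return np.fft.irfft(np.fft.rfft(rf_signal) * G, n)

# Toy example: two closely spaced reflectors blurred by a Gaussian pulse
t = np.arange(128)
pulse = np.exp(-0.5 * ((t - 8) / 2.0) ** 2) * np.cos(2 * np.pi * 0.25 * (t - 8))
scene = np.zeros(128)
scene[40] = 1.0
scene[48] = 0.8
rf = np.convolve(scene, pulse)[:128]
restored = pseudo_inverse_filter(rf, pulse)
```

Deconvolving the RF signal this way sharpens the two reflector echoes back towards impulses, which is the sense in which axial resolution improves.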
45

Human factors in computer-aided mammography

Hartswood, Mark January 1999 (has links)
Breast screening requires film readers to exercise considerable expertise when examining breast X-rays (or 'mammograms') for signs of malignancy. Understandably, errors are sometimes made, and the screening programme is continually investigating ways to improve detection performance. In recent years, interest has grown in using computer-based prompting systems to assist with reading. Prompting systems use image analysis techniques to identify possible cancers within a digitised mammogram and cue film readers to their location, with the aim of preventing cancers from being overlooked. A qualitative analysis of clinic work practices shows reading to be a situated activity with important collaborative dimensions. Tensions were found to exist between making decision-making visible (hence rendering it accountable and providing a reference by which performance can be monitored) and the possibility of being biased by exposure to the decision processes of others. It is argued that use of the PROMAM prompting system offers a similar mix of advantages and pitfalls, and that lessons can be learned for prompting from how these tensions are managed for conventional forms of evidence. In subsequent investigations of prompting, it was found that readers' interpretation and use of PROMAM were often problematic. Readers often had difficulties understanding prompts, and used them in ways contingent on the particular problem at hand rather than purely to aid detection. It is argued that effective prompting is not only a problem of achieving sufficient system performance, but also one of ensuring prompts are comprehensible, accountable, and appropriately used. Achieving the latter requires an understanding of how readers make sense of prompts in the context of their conventional reading practice.
46

Analogue VLSI for temporal frequency analysis of visual data

Sutherland, Alasdair January 2003 (has links)
When viewed with an electronic imager, any variation in light intensity over time can provide valuable information regarding the nature of the object or light source causing the intensity change. By estimating the frequency of such light intensity variations, temporal frequencies can be extracted from the visual data, which may prove useful in a variety of applications. For instance, certain objects exhibit unique temporal frequencies, which could facilitate identification or classification. Other potential applications include remote, early failure detection for rotating machinery, as well as the possible detection of cancerous breasts using infra-red imaging techniques. The aim of the research reported in this thesis is the development of a CMOS image-processor, capable of extracting such temporal frequencies from any scene it is exposed to. In addition to finding the fundamental frequency, the sensor aims to extract the relative strength of up to the first four harmonics, performing a Fourier style decomposition of the incident light intensity into a <i>temporal frequency signature</i>. A heavy emphasis was placed on low power operation, leading to an investigation of analogue signal processing techniques with transistors biased in the subthreshold region of operation. The parallel processing advantages of combining light sensitive elements with signal processing elements in each pixel were also investigated, resulting in a system incorporating focal-plane computation. Software simulations of various novel system level algorithms are reported, with the successful approach used to create fundamental frequency maps of test data. The approach was also simulated to prove its robustness to noise commonly found in CMOS imager implementations. Circuits are presented which accurately extract the fundamental frequency of variations in light intensity, while benefiting from the low power consumption of subthreshold analogue circuitry. 
A novel algorithm which locks a band-pass filter onto the fundamental frequency of any incident light intensity, with an accuracy of 3%, is also presented. The system can tune from 20 Hz to 10 kHz at a maximum rate of 9 kHz/s, and can be considered the first step in the creation of a single-chip pseudo-Fourier light intensity processing unit.
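A software analogue of the temporal frequency signature idea, extracting the fundamental and the relative strengths of the first four harmonics, can be sketched with a DFT. The sampling rate and test waveform below are illustrative assumptions; the thesis performs this decomposition with subthreshold analogue circuits rather than digitally:

```python
import numpy as np

def frequency_signature(signal, fs, n_harmonics=4):
    """Estimate the fundamental frequency of an intensity waveform and
    the relative strengths of its first few harmonics -- a digital
    sketch of a 'temporal frequency signature'."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    bin0 = int(spectrum.argmax())              # fundamental bin
    strengths = [spectrum[k * bin0] / spectrum[bin0]
                 for k in range(1, n_harmonics + 1)
                 if k * bin0 < len(spectrum)]
    return freqs[bin0], strengths

# 100 Hz square wave sampled so the fundamental falls on an exact bin:
# odd harmonics should be strong, even harmonics absent
fs = 6400
t = np.arange(2048) / fs
x = np.sign(np.sin(2 * np.pi * 100 * t))
f0, sig = frequency_signature(x, fs)
```

A square wave is a convenient test because its signature is distinctive: the third harmonic sits near one third of the fundamental while the even harmonics vanish, exactly the kind of pattern that could identify a flickering source.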
47

Effective software support for chemical research

Welsh, Amanda Jayne January 1993 (has links)
This thesis describes the design of a software tool-kit, and the implementation of a sub-set of the tools therein. The tool-kit is aimed at the scientific community, providing tools for research in the physical sciences. However, the development of an environment providing effective software support across the entire range of physical sciences is beyond the scope of this thesis; thus chemical research was adopted as a <I>proof of concept</I> case study. A user requirements analysis led to the selection of two applications for implementation: a molecular graphics tool, <I>the Visualisor</I>, and a theoretical modelling tool, the <I>Locator</I>. Molecular graphics was selected because such a tool would be useful to many researchers, and because the available molecular graphics tools were observed to be lacking in terms of human-computer interaction principles. Further, such a tool could be used as a base for subsequent implementations. Locating metal-bound hydride ligands in transition metal cluster compounds can be difficult with conventional analytical chemistry, so many empirical and theoretical methods have been developed for locating such ligands. One such method has been implemented in the Complete Coordinate Convergence Program (<I>CCCP</I>). However, this tool was found to be user-unfriendly, and its success was shown to depend on an initial estimate required from users. Thus the Locator was developed. This tool provides an initial estimate of the hydride ligand position(s) for subsequent optimisation by the CCCP code. The performance of the Locator was tested on 178 models, resulting in a favourable success rate of 72%. This figure rises to 84% when the bonding interactions of the hydride ligand(s) are provided by the user.
It is considered that the adoption of the Visualisor interface and the addition of the initial estimate routine significantly increased the usability of the tool and the success and accuracy of the results produced.
48

Post-cochlear auditory modelling for sound localisation using bio-inspired techniques

Wall, Julie January 2010 (has links)
This thesis presents spiking neural architectures which simulate the sound localisation capability of the mammalian auditory pathways. This localisation ability is achieved by exploiting important differences in the sound stimulus received by each ear, known as binaural cues. Interaural time difference and interaural intensity difference are the two binaural cues which play the most significant role in mammalian sound localisation. These cues are processed by different regions within the auditory pathways and enable the localisation of sounds at different frequency ranges; interaural time difference is used to localise low frequency sounds whereas interaural intensity difference localises high frequency sounds. Interaural time difference refers to the different points in time at which a sound from a single location arrives at each ear and interaural intensity difference refers to the difference in sound pressure levels of the sound at each ear, measured in decibels. Taking inspiration from the mammalian brain, two spiking neural network topologies were designed to extract each of these cues. The architecture of the spiking neural network designed to process the interaural time difference cue was inspired by the medial superior olive. The lateral superior olive was the inspiration for the architecture designed to process the interaural intensity difference cue. The development of these spiking neural network architectures required the integration of other biological models, such as an auditory periphery (cochlea) model, models of bushy cells and the medial nucleus of the trapezoid body, leaky integrate and fire spiking neurons, facilitating synapses, receptive fields and the appropriate use of excitatory and inhibitory neurons. Two biologically inspired learning algorithms were used to train the architectures to perform sound localisation. 
Experimentally derived head-related transfer function (HRTF) acoustical data from adult domestic cats was employed to validate the localisation ability of the two architectures. The localisation abilities of the two models are comparable to other computational techniques employed in the literature. The experimental results demonstrate that the two spiking neural network (SNN) models behave in a similar way to the mammalian auditory system: the SNN for interaural time difference extraction performs best when localising low frequency data, and the interaural intensity difference model performs best when localising high frequency data. Thus, the combined models form a duplex system of sound localisation. Additionally, both architectures show a high degree of robustness when the HRTF acoustical data is corrupted by noise.
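The two binaural cues themselves can be illustrated with a conventional DSP sketch (cross-correlation for the time difference, RMS level ratio for the intensity difference); this stands in for, and is far simpler than, the spiking architectures the thesis develops. The stimulus, delay and attenuation values are arbitrary:

```python
import numpy as np

def binaural_cues(left, right, fs):
    """Estimate the two binaural cues from a pair of ear signals:
    interaural time difference (ITD) via cross-correlation, and
    interaural intensity difference (IID) in decibels."""
    corr = np.correlate(left, right, mode="full")
    # Lag of the correlation peak; positive when the left ear leads
    lag = (len(right) - 1) - int(corr.argmax())
    itd = lag / fs
    rms_l = np.sqrt(np.mean(left ** 2))
    rms_r = np.sqrt(np.mean(right ** 2))
    iid = 20.0 * np.log10(rms_l / rms_r)
    return itd, iid

# Toy stimulus: the right ear receives a delayed, attenuated copy,
# as for a source on the listener's left
fs = 44100
delay = 20                                   # samples, ~0.45 ms
src = np.sin(2 * np.pi * 500 * np.arange(1024) / fs)
left = src
right = 0.5 * np.concatenate([np.zeros(delay), src[:-delay]])
itd, iid = binaural_cues(left, right, fs)
```

The 500 Hz tone here is in the range where ITD dominates; for high frequency sounds the phase becomes ambiguous across a wavelength and the IID cue takes over, which is the duplex behaviour the abstract describes.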
49

Methods and metrics for image classification with application to low vision

Everingham, Mark Richard January 2002 (has links)
No description available.
50

Application of high level languages for water network modelling

Bounds, Peter Lewis Mark January 2001 (has links)
No description available.
