31

The application of neural network techniques to the analysis of reinforced concrete beam-column joints subjected to axial load and bi-axial bending

Jadid, Mansour Nasser January 1994 (has links)
The application of neural networks, in the form of parameter prediction, to the behaviour and strength of beam-column joints under axial load and biaxial bending has been studied. Numerical analyses were performed on the beam-column joints to simulate the existing experimental data. A systematic approach was provided by implementing neural networks trained with the backpropagation algorithm. The objective of this study was to demonstrate a concept and methodology, rather than to build a full-scale knowledge-based system model, by incorporating most of the fundamental aspects of a neural network to solve the complex non-linear mapping of a beam-column joint. In general, it should be possible to identify certain parameters and allow the neural network to develop the model, thus accounting for the observed behaviour without relying on a particular algorithm but depending entirely on the manipulation of numerical data. The aim of this study was to view available experimental data on beam-column joint parameters from different angles and to establish a concept and methodology that would provide rapid and economic benefits to experimental research. The focus of this study is to reconstruct previous experimental work by evaluating several parameters and establishing valid mathematical relationships, based on neural networks, which agree with relationships based on the experimental results. The computational methodology considered for the analysis of the beam-column joints was formulated in three stages to establish a procedure for implementing the proposed concept and methodology. The procedure is demonstrated by the evaluation of the ultimate flexural strength of the reinforced concrete members, the moment-curvature relationship and the shear strength of the beam-column joint.
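As a rough illustration of the kind of backpropagation mapping described above (not the author's network, data or parameter set, all of which are invented here), the sketch below trains a one-hidden-layer network on synthetic joint parameters and a surrogate strength value:

```python
import numpy as np

# Illustrative only: a tiny one-hidden-layer network trained by backpropagation
# to map joint parameters (synthetic stand-ins for axial load, concrete strength
# and reinforcement ratio) to a single strength estimate.
rng = np.random.default_rng(0)

X = rng.uniform(0.0, 1.0, size=(200, 3))  # normalised input parameters
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * X[:, 2]).reshape(-1, 1)  # surrogate "strength"

n_hidden, lr = 8, 0.1
W1 = rng.normal(0.0, 0.5, size=(3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y
    # backward pass: gradients of the mean squared error
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = err @ W2.T * h * (1.0 - h)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    # gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err ** 2).mean()))
```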
32

Generating depth maps from stereo image pairs

Walton, Nicholas W. January 2002 (has links)
In the 1960s, a group of AI researchers came to the conclusion that automatic depth recovery by two cameras and a computer was a simple task, and conquerable within an estimated five to ten years. Emboldened by the Information Gestalt theories, which affirm that the task is at least tractable, the field of machine vision was born. Forty years later, the problem is still far from being solved, and is still a very active field of study. This thesis summarizes the developments which have taken place in stereo vision research since its inception. The various solutions which have been developed are described and their deficiencies noted. An approach to generating depth maps from stereo image pairs is proposed. This technique is suitable for implementation on low-cost computing hardware and is intended to provide assistance to a human operator. Results are presented which show the operation of this system on synthetic and natural images, and the observed performance is analysed and discussed. The discussion covers many of the issues which cannot normally be found in a standard search of the literature and brings them together in one location.
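A minimal sketch of one classical way to generate a depth map from a stereo pair, block matching by sum of absolute differences along the epipolar line; the thesis does not specify its method, so this is purely illustrative, with invented image sizes and a synthetic test pair:

```python
import numpy as np

def disparity_map(left, right, block=5, max_disp=16):
    """Brute-force SAD block matching: for each pixel in the left image,
    search horizontally in the right image for the best-matching block."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_d, best_cost = 0, np.inf
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch.astype(np.int32) - cand.astype(np.int32)).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp  # depth is proportional to baseline * focal_length / disparity

# Synthetic test: a horizontally shifted random texture should recover a uniform disparity.
rng = np.random.default_rng(1)
right_img = rng.integers(0, 255, size=(40, 60), dtype=np.uint8)
left_img = np.roll(right_img, 4, axis=1)   # left image shifted by 4 pixels
print(np.bincount(disparity_map(left_img, right_img).ravel()).argmax())  # expect ~4
```

Since disparity is inversely proportional to depth, a dense disparity map of this kind is effectively a depth map up to the camera baseline and focal length.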
33

Neural network studies of lithofacies classification

Harris, David Anthony January 1994 (has links)
Exploration for hydrocarbons and other resources requires that large amounts of data be interpreted and used to infer the geology of extensive regions. Many different types of data are used. They are interpreted by geologists and sedimentologists in the light of experience. Artificial neural network models implemented on computers provide a powerful means of performing tasks such as pattern classification. Such tasks are difficult to perform using rule-based methods, as we often do not know how to specify appropriate rules. We show that artificial neural networks can be used to discriminate between images of different lithofacies (types of rocks). This discrimination is based upon textural differences in the rocks, which are quantified by measures of texture derived from the rock images and used as inputs to the network. Neural network performance is good compared to a very simple alternative technique, that of K nearest neighbours. A particular set of texture measures is that based on the grey-level co-occurrence method. These measures have interesting properties; in particular, their expectation values can be calculated exactly for images generated by exactly solvable Ising models. The measures are themselves probabilities for the joint distribution of pixel values in an image, so that they can be used to generate images in a stochastic process.
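A small self-contained sketch of the pipeline the abstract suggests: grey-level co-occurrence texture measures computed from images, then classification by the K-nearest-neighbour baseline. The particular measures (contrast and energy), the quantisation and the synthetic "lithofacies" are illustrative choices, not those used in the thesis:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset, plus two classic
    texture measures (contrast and energy) derived from it."""
    q = (img.astype(np.float64) / 256.0 * levels).astype(np.int64)  # quantise grey levels
    p = np.zeros((levels, levels))
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]
    b = q[dy:, dx:]
    np.add.at(p, (a.ravel(), b.ravel()), 1.0)
    p /= p.sum()                                   # joint probability of pixel pairs
    i, j = np.indices(p.shape)
    return np.array([np.sum(p * (i - j) ** 2),     # contrast
                     np.sum(p ** 2)])              # energy

def knn_classify(x, train_X, train_y, k=3):
    """Plain k-nearest-neighbour vote in feature space."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Synthetic "lithofacies": smooth versus noisy textures.
rng = np.random.default_rng(2)
smooth = [np.cumsum(rng.integers(0, 3, (32, 32)), axis=1) % 256 for _ in range(20)]
rough = [rng.integers(0, 256, (32, 32)) for _ in range(20)]
X = np.array([glcm_features(im) for im in smooth + rough])
y = np.array([0] * 20 + [1] * 20)
test = glcm_features(rng.integers(0, 256, (32, 32)))
print(knn_classify(test, X, y))   # expected class 1 (rough)
```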
34

A VLSI smart sensor-processor for fingerprint comparison

Anderson, Stuart January 1993 (has links)
Image processing techniques are increasingly being applied in new areas beyond their traditional use for remote sensing image data enhancement. These new areas, such as machine vision for automated production-line monitoring and control, and financial transaction security, require low-cost, compact but highly reliable systems. This thesis discusses some of the problems in achieving this goal and presents a novel approach to the implementation of low-cost real-time image processing systems. The method presented in this thesis exploits the usual system design leverage offered by VLSI (reduced cost, power, size and weight), achieved as a result of the freedom to map algorithms efficiently to hardware. In addition, substantial further advantages are obtained by integrating the image sensor and preprocessing interface circuits onto the same silicon substrate. During the course of this work three custom integrated circuits for real-time image processing were designed, simulated, fabricated and tested. Two of the devices form the image processing core of an entirely new, working, fingerprint-based access control system. These designs then led to the development of the third device and the main focus of this thesis, a highly integrated sensor-processor for fingerprint comparison. This device has applications in many fields where personal identification is vital, such as physical access control, financial transactions and health care. The architecture can also be adapted to address more general pattern recognition tasks. It is shown that through the efficient integration of the sensing, processing and memory elements of the fingerprint comparison system, increased performance and greatly reduced manufacturing costs can be achieved.
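The thesis itself concerns the silicon implementation rather than a particular matching algorithm, but purely as an illustration of the kind of template comparison a fingerprint sensor-processor might compute, the sketch below scores a live capture against a stored pattern by normalised cross-correlation (all data synthetic, threshold arbitrary):

```python
import numpy as np

def normalised_correlation(probe, template):
    """Normalised cross-correlation score between two equally sized images;
    1.0 means a perfect match, values near 0 mean unrelated patterns."""
    a = probe.astype(np.float64) - probe.mean()
    b = template.astype(np.float64) - template.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(3)
enrolled = rng.integers(0, 2, size=(64, 64))                    # stand-in for a stored ridge pattern
live = np.clip(enrolled + rng.normal(0, 0.2, (64, 64)), 0, 1)   # same finger, noisy capture
impostor = rng.integers(0, 2, size=(64, 64))                    # unrelated pattern

print(normalised_correlation(live, enrolled) > 0.5)      # True: accept
print(normalised_correlation(impostor, enrolled) > 0.5)  # False: reject
```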
35

Decomposition of unstructured meshes for efficient parallel computation

Davey, Robert A. January 1997 (has links)
This thesis addresses issues relating to the use of parallel high performance computer architectures for unstructured mesh calculations. The finite element and finite volume methods are typical examples of such calculations, which arise in a wide range of scientific and engineering applications. The work in this thesis is focused on the development at Edinburgh Parallel Computing Centre of a software library to support static mesh decomposition, known as PUL-md. The library provides a variety of mesh decomposition and graph partitioning algorithms, including both global methods and local refinement techniques. The library implements simple random, cyclic and lexicographic partitioning, Farhat's greedy algorithm, recursive layered, coordinate, inertial and spectral bisections, together with subsequent refinement by either the Kernighan and Lin algorithm or by one of two variants of the Mob algorithm. The decomposition library is closely associated with another library, PUL-sm, which provides run-time support for unstructured mesh calculations. The decomposition of unstructured meshes is related to the partitioning of undirected graphs. We present an exhaustive survey of algorithms for these related tasks. Implementation of the decomposition algorithms provided by PUL-md is discussed, and the tunable parameters that optimise each algorithm's behaviour are detailed. On the basis of various metrics of decomposition quality, we evaluate the relative merits of the algorithms and explore the tunable parameter space. To validate these metrics, and further demonstrate the utility of the library, we examine how the runtime of a demonstration application (a finite element code) depends on decomposition quality. Additional related work is presented, including research into the development of a novel 'seed-based' optimisation approach to graph partitioning. In this context gradient descent, simulated annealing and parallel genetic algorithms are explored.
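As a concrete example of one of the global methods listed above, the sketch below implements plain recursive coordinate bisection on mesh node coordinates; it is a generic textbook version under invented test data, not the PUL-md implementation:

```python
import numpy as np

def coordinate_bisection(coords, depth):
    """Recursive coordinate bisection: repeatedly split the node set at the
    median of its widest coordinate axis, yielding 2**depth balanced parts."""
    parts = np.zeros(len(coords), dtype=np.int64)

    def split(idx, level, label):
        if level == depth:
            parts[idx] = label
            return
        pts = coords[idx]
        axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))   # axis with widest spread
        order = idx[np.argsort(pts[:, axis])]
        half = len(order) // 2
        split(order[:half], level + 1, 2 * label)
        split(order[half:], level + 1, 2 * label + 1)

    split(np.arange(len(coords)), 0, 0)
    return parts

# Example: partition 1000 random 2-D mesh node coordinates into 4 subdomains.
rng = np.random.default_rng(4)
nodes = rng.uniform(size=(1000, 2))
print(np.bincount(coordinate_bisection(nodes, depth=2)))   # four parts of ~250 nodes each
```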
36

ed@ed : a new gas-phase electron diffraction structural refinement program

Johnston, Blair Fraser January 2002 (has links)
As gas-phase electron diffraction (GED) is only used routinely for structure determination by a small number of groups world-wide, structural refinement software tends to be developed specifically for each particular group. Work in this thesis is concerned with the upgrading of the old MS-DOS based structural refinement software, Ed96, to a new MS-Windows application, ed@ed, complete with graphical user interface. The modifications made to the original Ed96 code in producing ed@ed warranted thorough testing to ensure that errors had not been introduced to the new program. Seven distinctive structural refinements were carried out using ed@ed in order to achieve this. The test cases presented in this thesis are 1,3,5-trisilylbenzene and hexasilylbenzene (in chapter 3), 1-bromopentaborane(9) and 2-bromopentaborane(9) (in chapter 4), Ru(η-C5Me5)(η-C5F5) (in chapter 5) and Me3SnC4F9 and Me3SnO2CC2F5 (in chapter 6).
37

The statistical mechanics of image restoration

Pryce, Jonathan Michael January 1993 (has links)
Image restoration is concerned with the recovery of an 'improved' image from a corrupted picture, utilizing a prior model of the source and noise processes. We present a Bayesian derivation of the posterior probability distribution, which describes the relative probabilities that a certain image was the original source, given the corrupted picture. The ensemble of such restored images is modelled as a Markov random field (Ising spin system). Using a prior on the density of edges in the source, we obtain the cost function of Geman and Geman via information theoretic arguments. Using a combination of Monte Carlo simulation, the mean field approximation, and series expansion methods, we investigate the performance of the restoration scheme as a function of the parameters we have identified in the posterior distribution. We find phase transitions separating regions in which the posterior distribution is data-like, from regions in which it is prior-like, and we can explain these sudden changes of behaviour in terms of the relative free energies of metastable states. We construct a measure of the quality of the posterior distribution and use this to explore the way in which the appropriateness of the choice of prior affects the performance of the restoration. The data-like and prior-like characteristics of the posterior distribution indicate the regions of parameter space where the restoration scheme is effective and ineffective respectively. We examine the question of how best to <I>use</I> the posterior distribution to prescribe a single 'optimal' restored image. We make a detailed comparison of two different estimators to determine which better characterizes the posterior distribution. We propose that the TPM estimate, based on the mean of the posterior, is a more sensible choice than the MAP estimate (the mode of the posterior), both in principle and in practice, and we provide several practical and theoretical arguments in support.
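A compact sketch of the kind of sampling this describes: a Metropolis sweep over a binary image posterior with an Ising smoothness prior and a pixel-wise data term, with the restored image taken as the thresholded posterior mean (TPM). The coupling values, noise level and test image are invented for the example:

```python
import numpy as np

# Illustrative Metropolis sampler for a binary (+/-1) image restoration posterior
# with an Ising smoothness prior and a pixel-wise data coupling:
#   P(x | d)  is proportional to  exp( beta * sum_<ij> x_i x_j  +  h * sum_i x_i d_i )
# beta controls the prior (smoothness), h controls fidelity to the data d.
rng = np.random.default_rng(5)

truth = np.ones((32, 32)); truth[:, 16:] = -1                          # simple two-region source
data = truth * np.where(rng.uniform(size=truth.shape) < 0.2, -1, 1)    # 20% flip noise

beta, h, sweeps = 0.8, 1.0, 200
x = data.copy()
mean_acc = np.zeros_like(x, dtype=np.float64)

for sweep in range(sweeps):
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            nb = x[(i + 1) % 32, j] + x[i - 1, j] + x[i, (j + 1) % 32] + x[i, j - 1]
            dE = 2 * x[i, j] * (beta * nb + h * data[i, j])   # energy change of flipping x[i, j]
            if dE < 0 or rng.uniform() < np.exp(-dE):
                x[i, j] = -x[i, j]
    if sweep >= 100:                                          # discard burn-in sweeps
        mean_acc += x

tpm = np.sign(mean_acc)                 # TPM: threshold the posterior mean
print("noisy error rate   :", float((data != truth).mean()))
print("restored error rate:", float((tpm != truth).mean()))
```

The MAP estimate would typically be approximated instead by annealing the same sampler towards zero temperature and keeping the final configuration.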
38

Adaptive, linear, subspatial projections for invariant recognition of objects in real infrared images

Smart, Michael Howard William January 1998 (has links)
In recent years computer technology has advanced to a state whereby large quantities of data can be processed. This advancement has fuelled a dramatic increase in research into areas of image processing which were previously impractical, such as automated vision systems for both military and domestic purposes. Automatic Target Recognition (ATR) systems are one such example of these automated processes. ATR is the automatic detection, isolation and identification of objects, often derived from raw video, in a real-world, potentially hostile environment. The ability to process each frame of the incoming video stream rapidly and accurately is paramount to the success of the system, in order to output suitable actions against constantly changing situations. One of the main functions of an ATR system is to identify correctly all the objects detected in each frame of data. The standard approach to implementing this component is to divide the identification process into two separate modules: feature extraction and classification. However, it is often difficult to optimise such a dual system with respect to reducing the probability of mis-identification. This can lead to reduced performance. One potential solution is a neural network that accepts image data at the input and outputs an estimated classification. Unfortunately, neural network models of this type are prone to misuse because of their apparent black-box nature. In this thesis a new technique, based on existing adaptive wavelet algorithms, is implemented that offers ease of use, adaptability to new environments, and good generalisation in a single image-in-classification-out model that avoids many of the problems of the neural network approach. This new model is compared with the standard two-stage approach using real-world, infrared ATR data. Various extensions to the model are proposed to incorporate invariance to particular object deformations, such as size and rotation, which are necessary for reliable ATR performance. Further work increases the flexibility of the model to further improve generalisation. Other aspects, such as data analysis and object generation accuracy, which are often neglected, are also considered.
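For contrast, here is a minimal sketch of the standard two-stage pipeline the thesis argues against: a fixed linear subspace projection (PCA) as the feature extractor, followed by a separately chosen nearest-class-mean classifier. The data, dimensions and classifier are invented for illustration and are not the adaptive-wavelet model developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical stand-in data: flattened 16x16 "target chips" for two object classes.
class_a = rng.normal(0.0, 1.0, (100, 256)) + 2.0
class_b = rng.normal(0.0, 1.0, (100, 256)) - 2.0
X = np.vstack([class_a, class_b])
y = np.array([0] * 100 + [1] * 100)

# Stage 1: feature extraction, a projection onto the top principal components.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
project = lambda imgs: (imgs - mean) @ Vt[:8].T          # 8-dimensional features

# Stage 2: classification, nearest class mean in the feature space.
feats = project(X)
centroids = np.array([feats[y == c].mean(axis=0) for c in (0, 1)])
classify = lambda imgs: np.argmin(
    np.linalg.norm(project(imgs)[:, None, :] - centroids[None], axis=2), axis=1)

test = rng.normal(0.0, 1.0, (10, 256)) + 2.0             # ten new class-0 samples
print(classify(test))                                     # mostly zeros expected
```

Because the two stages are optimised separately, nothing guarantees that the projection retains the features the classifier most needs, which is the coupling problem the single image-in-classification-out model is meant to avoid.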
39

Spike timing dependent adaptation : minimising the effect of transistor mismatch in an analogue VLSI neuromorphic system

Cameron, Katherine January 2006 (has links)
Neuromorphic systems often call for sub-threshold operation, where transistor mismatch is a particular problem and can affect the time constants of the design. This work is an investigation into whether Spike Timing Dependent Plasticity (STDP), a neural algorithm capable of adapting time delays within neural systems, can provide a method to minimise the effect of transistor mismatch. This work is set within the context of a depth-from-motion algorithm, the performance of which will be degraded by mismatch when implemented in analogue VLSI. A circuit is designed which predicts the arrival of a spike from the timing of two earlier spikes. The error between the actual spike arrival time and the prediction is used to improve future predictions. Two spike timing dependent adaptation methods are described. These were fabricated using an AMS 0.35μm process and the results are reported. The key measure is the prediction error: before adaptation the error reflects the amount of mismatch within the prediction circuitry; after adaptation the error indicates to what extent the adaptive circuitry can minimise the effect of transistor mismatch. For both designs it is shown that the effect of transistor mismatch can be greatly reduced through spike timing dependent adaptation.
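A purely behavioural sketch of the adaptation principle described (predict a spike arrival from two earlier spikes, then use the timing error to trim an internal parameter); the circuit-level details, initial offset and learning rate here are invented, not taken from the fabricated designs:

```python
import numpy as np

# Predict the arrival time of a third spike from the interval between two earlier
# spikes, then use the prediction error to adapt an internal "delay" parameter.
# In silicon this parameter would be set by a bias current affected by transistor
# mismatch; here it is simply a number started with an offset standing in for mismatch.
rng = np.random.default_rng(7)

true_gain = 1.0           # ideal prediction scale: t3 = t2 + 1.0 * (t2 - t1)
delay_gain = 1.6          # mismatched starting value of the adapted parameter
learning_rate = 0.05

for trial in range(300):
    t1 = 0.0
    t2 = rng.uniform(1.0, 3.0)                     # observed inter-spike interval
    t3_actual = t2 + true_gain * (t2 - t1)         # actual arrival of the third spike
    t3_predicted = t2 + delay_gain * (t2 - t1)     # prediction made by the adapted parameter
    error = t3_actual - t3_predicted               # timing error drives the adaptation
    delay_gain += learning_rate * error / (t2 - t1)

print("adapted gain:", round(delay_gain, 3))       # converges towards 1.0
```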
40

An investigation into the application of parallel computers for the dynamic simulation of chemical processes

McKinnel, Roderick C. January 1994 (has links)
The detailed dynamic simulation of chemical processes is computationally expensive. Standard single-processor (sequential) computers are not of sufficient power to tackle such simulations in a reasonable time frame. In particular, it is not possible to run complex simulations in less than real time. The solution to obtaining the processing power required lies in moving towards the use of multiple-processor (parallel) computers. Unfortunately, obtaining the full benefit from parallelism requires the problem being solved to be partitionable into parts, each of which can be solved concurrently. For the majority of problems, locating this parallelism is not trivial. An investigation into the use of MIMD parallel computers for dynamic process simulation has been performed. Initially the parallel dynamic simulation of distillation was studied. Later work moved on to the parallel dynamic simulation of complete processes. As a result, two parallel process simulators have been produced: PDist (Parallel Distillation simulator) and PNet (Parallel Process Network simulator). Throughout the work a parallel modular approach, rather than a parallel equation-based approach, has been adopted. Results show that the parallel modular approach maps efficiently onto parallel hardware and that excellent reductions in execution time can be obtained. As well as exploiting parallelism for raw processing power, a large amount of the work aimed to show the benefits which the parallel modular approach offers from a usability point of view. Both PDist and PNet were designed with usability in mind. The simulation model interfaces created were designed to hide the majority of the parallelisation from the modeller.
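A toy sketch of the parallel modular idea: each unit module is advanced independently over a time step, so the per-step work can be distributed across processors, and stream values are exchanged between connected modules before the next step. The unit model (a chain of mixing tanks) and its parameters are invented for illustration and bear no relation to PDist or PNet internals:

```python
from concurrent.futures import ProcessPoolExecutor

def integrate_tank(args):
    """Advance one well-mixed tank by a single explicit Euler step."""
    holdup, inlet_conc, conc, dt = args
    flow = 1.0                                   # fixed through-flow (arbitrary units)
    dconc = flow / holdup * (inlet_conc - conc)  # simple mixing dynamics
    return conc + dt * dconc

def simulate(steps=100, dt=0.1):
    concs = [0.0, 0.0, 0.0]                      # three tanks in series
    holdups = [2.0, 1.0, 4.0]
    feed = 1.0                                   # feed concentration into tank 0
    with ProcessPoolExecutor() as pool:
        for _ in range(steps):
            inlets = [feed] + concs[:-1]         # each tank fed by the one upstream
            jobs = [(h, ci, c, dt) for h, ci, c in zip(holdups, inlets, concs)]
            concs = list(pool.map(integrate_tank, jobs))  # modules solved concurrently
    return concs

if __name__ == "__main__":
    print(simulate())    # concentrations approach the feed value of 1.0
```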
