471

The application of neural networks to problems in fringe analysis

Tipper, David John January 1999 (has links)
No description available.
472

The extraction and recognition of text from multimedia document images

Smith, R. W. January 1987 (has links)
No description available.
473

Satellite image processing for remote sensing applications

Hong, Guowei January 1995 (has links)
This thesis investigates areas of image compression with particular reference to remote sensing imagery. The research described was carried out in four specific areas, namely: the discrete cosine transform (DCT) for remote sensing imagery; lossless image compression based on conditional statistics; exploiting interband redundancy in remote sensing imagery; and neural networks for lossless image compression. The effect of using a standard compression algorithm (JPEG's DCT) on remote sensing image data is investigated. This involves visual and statistical assessment of the errors produced, both in the data itself and with reference to the results of the processing (i.e., classification) normally performed using such data. It has been reported that the DCT characteristics can be modified to achieve a trade-off between compression ratio and pixel value error. It is therefore feasible that the user of remote sensing data could find a suitable compromise offering some of the compression benefits of the DCT while retaining sufficient accuracy of image data for the required applications. An approach to lossless image compression using conditional statistics is investigated: that is, encoding each pixel value with one of several variable-length codes depending on previous pixel values (the context). The author's method achieves its aim by approximating the probability distribution function (PDF) for each context and coding the image data using arithmetic coding. Experimental results are included to show that this method achieves some improvement in lossless image compression and can reach an average number of bits per pixel lower than the zero-order entropy of the prediction-error image. In the area of exploiting interband correlation for remote sensing imagery, two new techniques, namely joint entropy coding and interband prediction, are described.
Joint entropy coding is based on the idea that coding a pair of pixel values from two different bands is more effective than coding them individually when there is interband correlation between them. Interband prediction is based on the fact that the structure of one band's data can generally give some information about the structure of other bands. The results demonstrate and compare the usefulness of both techniques in improving the overall lossless compression ratio for remote sensing imagery. The idea of using neural networks for lossless image coding is then introduced. A novel approach to pixel prediction based on a three-layer perceptron neural network trained with a backpropagation learning algorithm is described, aimed at improving pixel prediction accuracy and thus the lossless compression ratio. Experimental results show that this neural network approach consistently achieves better prediction than conventional linear prediction techniques in terms of minimizing the mean square error, although the overall compression ratio is not significantly improved.
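The predictive-coding pipeline this abstract describes can be sketched in a few lines: predict each pixel from causal neighbours and compare the zero-order entropy of the residuals with that of the raw image. This is a minimal illustration using a simple linear predictor as a stand-in (the thesis replaces it with a trained perceptron); the synthetic image is not thesis data.

```python
import numpy as np

def predict_causal(img):
    """Causal linear predictor: mean of the west and north neighbours.

    A simple stand-in for the predictors discussed above; the first row
    and column are predicted as zero for simplicity.
    """
    pred = np.zeros(img.shape)
    pred[1:, 1:] = 0.5 * (img[1:, :-1] + img[:-1, 1:])
    return pred

def zero_order_entropy(values):
    """Zero-order (empirical) entropy of a symbol array, in bits per symbol."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Smooth synthetic band: a diagonal ramp plus mild noise, loosely mimicking
# the strong local correlation of remote sensing imagery.
rng = np.random.default_rng(0)
img = np.add.outer(np.arange(64), np.arange(64)) + rng.integers(0, 4, (64, 64))

residual = img - np.round(predict_causal(img)).astype(int)
print(zero_order_entropy(img), zero_order_entropy(residual))
```

The residual entropy is markedly lower than that of the raw image, which is exactly the gap an arithmetic coder driven by per-context statistics then exploits.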
474

General motion estimation and segmentation

Wu, Siu Fan January 1990 (has links)
In this thesis, the estimation of motion from an image sequence is investigated. The emphasis is on the novel use of motion models for describing two-dimensional motion. Special attention is directed towards general motion models which are not restricted to translational motion. In contrast to translational motion, the 2-D motion is described by the model using motion parameters. There are two major areas which can benefit from the study of general motion models. The first is image sequence processing and compression. In this context, the use of a motion model provides a more compact description of the motion information because the model can be applied to a larger area. The second area is computer vision. The general motion parameters provide clues to the understanding of the environment, offering a simpler alternative to techniques such as optical flow analysis. A direct approach is adopted here to estimate the motion parameters directly from an image sequence. This has the advantage of avoiding the error caused by the estimation of optical flow. A differential method has been developed for this purpose and is applied in conjunction with a multi-resolution scheme. An initial estimate is obtained by applying the algorithm to a low-resolution image; the estimate is then refined by applying the algorithm to images of progressively higher resolution. In this way, even severe motion can be estimated with high resolution. However, the algorithm is unable to cope with multiple moving objects, mainly because of the least-squares estimator used. A second algorithm, inspired by the Hough transform, is therefore developed to estimate the motion parameters of multiple objects. By formulating the problem as an optimization problem, the Hough transform is computed only implicitly. This drastically reduces the computational requirement compared with the Hough transform. The criterion used in optimization is a measure of the degree of match between two images.
It has been shown that the measure is a well-behaved function in the vicinity of the motion parameter vectors describing the motion of the objects, depending on the smoothness of the images. Therefore, smoothing an image has the effect of allowing longer-range motion to be estimated. Segmentation of the image according to motion is achieved at the same time. The ability to estimate general motion in the presence of multiple moving objects represents a major step forward in 2-D motion estimation. Finally, the application of motion compensation to the problem of frame rate conversion is considered. The handling of covered and uncovered background has been investigated, and a new algorithm to obtain a value for the pixels in those areas is introduced. Unlike published algorithms, the background is not assumed stationary. This presents a major obstacle which requires the study of occlusion in the image. During the research, the art of motion estimation has been advanced from simple motion vector estimation to a more descriptive level: the ability to point out that a certain area in an image is undergoing a zooming operation is one example. Only low-level information such as image gradient and intensity is used, and in many situations problems are caused by the lack of higher-level information. This seems to suggest that general motion estimation is much more than using a general motion model and developing an algorithm to estimate the parameters. To advance the state of the art further, it is believed that future research effort should focus on higher-level aspects of motion understanding.
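The direct differential approach described above, for a single motion, amounts to fitting parametric flow to the brightness-constancy constraint by least squares. The sketch below estimates a six-parameter affine motion between two frames; it is a minimal single-resolution illustration (a real implementation would iterate inside the multi-resolution pyramid the abstract describes), and the synthetic image pair is invented for the example.

```python
import numpy as np

def estimate_affine_motion(img0, img1):
    """Direct least-squares estimate of 2-D affine motion parameters.

    Uses the brightness-constancy constraint Ix*u + Iy*v + It = 0 with
    u = a0 + a1*x + a2*y and v = a3 + a4*x + a5*y, stacking one linear
    equation per pixel and solving for the six parameters at once.
    """
    Iy, Ix = np.gradient(img0.astype(float))          # spatial gradients
    It = img1.astype(float) - img0.astype(float)      # temporal difference
    h, w = img0.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y], axis=-1).reshape(-1, 6)
    b = -It.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

# Synthetic pair: a smooth pattern translated by one pixel in x.
y, x = np.mgrid[0:64, 0:64]
img0 = np.sin(x / 8.0) + np.cos(y / 11.0)
img1 = np.sin((x - 1) / 8.0) + np.cos(y / 11.0)
a = estimate_affine_motion(img0, img1)
print(np.round(a, 2))
```

For this pair the recovered translation term a0 is close to 1 and the remaining parameters close to 0, matching the imposed motion.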
475

Control issues in high level vision

Remagnino, Paolo January 1993 (has links)
Vision entails complex processes to sense, interpret and reason about the external world. The performance of such processes in a dynamic environment needs to be regulated by flexible and reliable control mechanisms. This thesis is concerned with aspects of control in high-level vision. The study of control problems in vision defines a research area which only recently has received adequate attention. Classification criteria such as scope of application, knowledge representation, control structure and communication have been chosen as a means of comparison between existing vision systems. Control problems have recently become of great topical interest as a result of the basic ideas of the active vision paradigm. The proponents of active vision suggest that robust solutions to vision problems arise when sensing and analysis are controlled (i.e. purposively adjusted) to exploit both data and available knowledge (temporal context). The work reported in this thesis follows the basic tenets of active vision. It is directed at the study of control of sensor gaze, scene interpretation and visual strategy monitoring. Control of the visual sensor is an important aspect of active vision. A vision system must be able to establish its orientation with respect to the partially known environment and have control strategies for selecting targets to be viewed. In this thesis, algorithms are implemented for establishing the vision system's pose relative to prestored environment landmarks and for directing gaze to points defined by objects in an established scene model. Particular emphasis has been placed on accounting for and propagating estimation errors arising from both measured image data and inaccuracy of stored scene knowledge. In order to minimise the effect of such errors, a hierarchical scene model has been adopted with contextually related objects grouped together.
Object positions are described relative to locally determined landmarks, which keeps the size of errors within tolerable bounds. The scene interpretation module takes image descriptions in terms of low-level features and produces a symbolic description of the scene in terms of known object classes and their attributes. The construction of the scene model is an incremental process which is achieved by means of several knowledge sources independently controlled by separate modules. The scene interpreter has been carefully structured and operates in a loop of perception that is controlled by high-level commands delivered from the system supervisor module. The individual scene interpreter modules operate as locally controlled modules and are instructed as to what visual task to perform, where to look in the scene and what subset of data to use. The module processing takes into account the existing partial scene interpretation. These mechanisms embody the concepts of spatial focus of attention and exploitation of temporal context. Robust scene interpretation is achieved via temporal integration of the interpretation. The element of control concerned with visual strategy monitoring is at the system supervisor level. The supervisor takes a user-given task and decides the best strategy to follow in order to satisfy it. This may involve interrogation of existing knowledge or the initiation of new data collection and analysis. In the case of new analysis, the supervisor has to express the task in terms of a set of achievable visual tasks; these are then encoded into a control word which is passed to the scene interpreter. The vocabulary of the scene supervisor includes tasks such as general scene exploration, the finding of a specific object, the monitoring of a specified object, and the description of attributes of single objects or relationships between two or more objects.
The supervisor has to schedule sub-tasks in such a way as to achieve a good solution to the given problem. A considerable number of experiments, which make use of real and synthetic data, demonstrate the advantages of the proposed approach by means of the current implementation (written in C and in the rule-based system CLIPS).
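The supervisor-to-interpreter control flow described above can be sketched as a simple task scheduler. This is a hypothetical illustration only: the task names, control-word fields and scheduling rule are invented for the sketch, not the thesis's actual encoding.

```python
from dataclasses import dataclass

@dataclass
class ControlWord:
    visual_task: str    # what to do: "explore", "find", "monitor", ...
    region: tuple       # where to look (spatial focus of attention)
    features: str       # which subset of low-level data to use

def supervise(user_task, target=None, scene_known=False):
    """Translate a user-level task into a schedule of visual sub-tasks.

    Each sub-task is encoded as a control word handed to the scene
    interpreter, mirroring the supervisor role described in the abstract.
    """
    full_view = (0, 0, 640, 480)
    if user_task == "find" and target is not None:
        schedule = []
        if not scene_known:
            # No usable temporal context yet: explore before searching.
            schedule.append(ControlWord("explore", full_view, "regions"))
        schedule.append(ControlWord("find:" + target, full_view, "edges+regions"))
        return schedule
    return [ControlWord("explore", full_view, "regions")]

schedule = supervise("find", target="door")
print([w.visual_task for w in schedule])
```

The point of the sketch is the separation of concerns: the supervisor decides strategy, while each control word tells an interpreter module what to do, where to look and which data to use.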
476

An optimization approach to labelling problems in computer vision

Yang, Dekun January 1995 (has links)
This thesis is concerned with the development of an optimization-based approach to solving labelling problems, which involve the assignment of image entities to interpretation categories in computer vision. Attention is mainly focussed on the theoretical basis and computational aspects of continuous relaxation for solving a discrete labelling problem within an optimization framework. First, a theoretical basis for continuous relaxation is presented which includes the formulation of a discrete labelling problem as a continuous minimization problem and an analysis of labelling unambiguity associated with continuous relaxation. The main advantage of the formulation over existing formulations is the embedding of relational measurements into the specification of a consistent labelling. The analysis provides a sufficient condition for a continuous labelling formulation to ensure that a consistent labelling is unambiguous. Second, a continuous relaxation labelling algorithm based on mean field theory is presented with the aim of approximating simulated annealing in a deterministic manner. The novelty of the algorithm lies in the use of mean field theory to avoid stochastic optimization when approximating the global optimum of a consistent labelling criterion. This is in contrast to conventional methods, which find a local optimum near an initial estimate of the labelling. A special three-frame discrete labelling problem of establishing trinocular stereo correspondence and a mixed labelling problem of interpreting image entities in terms of cylindrical objects and their locations are also addressed. For the former, two orientation-based geometric constraints are suggested for matching lines among three viewpoints and a method is presented to find a consistent labelling using simulated annealing.
For the latter, the image interpretation of 3D cylindrical objects and their 3D locations is achieved using three knowledge sources: edge map, region map and the ground plane constraint. The method differs from existing methods in that it exploits an integrated use of multiple image cues to simplify the interpretation task and improve the interpretation performance. Experimental results on both synthetic data and real images are provided to demonstrate the viability and the potential of the proposed methods throughout the thesis.
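The deterministic mean-field idea behind the relaxation algorithm above can be shown on a toy labelling problem: each site keeps a probability vector over labels, labels are re-weighted by a softmax of their support at a temperature T, and T is lowered on each sweep. This is a generic sketch of mean-field annealing under invented compatibilities, not the thesis's exact update rule or criterion.

```python
import numpy as np

def mean_field_labelling(support_fn, n_sites, n_labels, T0=2.0, cooling=0.9, iters=60):
    """Deterministic mean-field relaxation for a discrete labelling problem.

    The softmax update at decreasing temperature approximates simulated
    annealing without any stochastic sampling.
    """
    p = np.full((n_sites, n_labels), 1.0 / n_labels)
    T = T0
    for _ in range(iters):
        s = support_fn(p)                     # s[i, k]: support for label k at site i
        s = s - s.max(axis=1, keepdims=True)  # stabilise the exponential
        e = np.exp(s / T)
        p = e / e.sum(axis=1, keepdims=True)  # mean-field (softmax) update
        T *= cooling
    return p

# Toy problem: two sites rewarded for taking *different* labels, with a weak
# prior nudging site 0 towards label 0.
r = np.array([[0.0, 1.0], [1.0, 0.0]])       # pairwise compatibility r(k, l)
unary = np.array([[0.1, 0.0], [0.0, 0.0]])   # weak prior for site 0

def support(p):
    # Each site's support: its own prior plus the compatibility-weighted
    # current beliefs of the other site.
    return unary + np.stack([r @ p[1], r @ p[0]])

p = mean_field_labelling(support, n_sites=2, n_labels=2)
print(np.round(p, 2))
```

As the temperature falls, the label probabilities sharpen towards the consistent assignment (site 0 takes label 0, site 1 takes label 1) without ever drawing a random sample.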
477

The design, development and evaluation of an active stereoscopic telepresence system

Asbery, Richard January 1997 (has links)
The work presented in this thesis documents the design, development and evaluation of a high-performance stereoscopic telepresence system. Such a system offers the ability to enhance the operator's perception of a remote and potentially hazardous environment as an aid to performing a remote task. Achieving this sensation of presence demands the design of a highly responsive remote camera system. A high-performance stereo platform has been designed which utilises state-of-the-art cameras, servo drives and gearboxes. It possesses four degrees of freedom: pan, elevation and two camera vergence motions, all of which are controlled simultaneously in real time by an open-architecture controller. This has been developed on a PC/AT bus architecture and utilises a PID control regime. The controller can be easily interfaced to a range of input devices, such as electromagnetic head tracking systems, which provide the trajectory data for controlling the remote mechatronic platform. Experiments have been performed to evaluate both the mechatronic system and operator-oriented performance aspects of the telepresence system. The mechatronic system investigations identify the overall system latency to be 80 ms, which is considerably less than other current systems. The operator-oriented evaluation demonstrates the necessity for a head-tracked telepresence system with a head-mounted display. The need for a low latency period to achieve high operator performance and comfort during certain tasks is also established; this is evident during trajectory-following experiments where the operator is required to track a highly dynamic target. The telepresence system has been fully evaluated and demonstrated to enhance operator spatial perception via a sensation of visual immersion in the remote environment.
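The PID control regime mentioned above, one loop per axis, follows the standard discrete form. The sketch below drives a crudely modelled pan axis towards a head-tracker setpoint; the gains, time step and one-line plant model are illustrative only, not the values tuned for the thesis's platform.

```python
class PID:
    """Discrete PID controller, one instance per axis (pan, elevation,
    vergence).
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # accumulated error
        deriv = (err - self.prev_err) / self.dt        # error rate
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a simple rate-controlled pan axis towards a head-tracker setpoint.
pid = PID(kp=4.0, ki=0.5, kd=0.1, dt=0.01)
angle = 0.0
for _ in range(5000):
    demand = pid.update(30.0, angle)   # hypothetical target pan angle: 30 deg
    angle += demand * 0.01             # crude plant: rate proportional to demand
print(round(angle, 2))
```

In the real system one such loop per joint runs in real time, fed by the head tracker's trajectory data; keeping the whole chain's latency low is what the 80 ms figure quantifies.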
478

A transition display system for colour map displays

Yedekcioglu, O. A. January 1984 (has links)
No description available.
479

A filtering approach to the integration of stereo and motion

Rios Figueroa, Homero Vladimir January 1993 (has links)
No description available.
480

Three dimensional modelling of Electrical Impedance Tomography

Kleinermann, Frederic January 2000 (has links)
Electrical Impedance Tomography (EIT) is an emerging imaging technique with applications in the medical field and in industrial process tomography (IPT). Until recently, data acquisition and image reconstruction schemes have been constructed under the assumption that the object being imaged is two-dimensional. In recent years, some research groups have started to address the three-dimensional aspects of EIT, both by building data acquisition systems capable of three-dimensional measurement and by solving the three-dimensional Forward Problem numerically, since this allows the possibility of modelling complex shapes. However, solving the Forward Problem analytically is still very attractive, as an analytical solution does not depend on the way the domain has been meshed. Furthermore, if dynamic images are reconstructed, which are less sensitive to the model of the electrodes employed, the shape of the object being imaged and the position of the electrodes, an analytical solution to the Forward Problem can be used to reconstruct dynamic three-dimensional images. This thesis starts by describing how a full analytical solution is derived for a finite right circular cylinder (which approximately models the human thorax) on which two electrodes have been placed. It is shown that the analytical solution has two different forms. Results are presented detailing the convergence performance of the two forms, as well as comparisons between the analytical solution and experimentally obtained data. Finally, three-dimensional images reconstructed using these methods are presented. In order to better approximate the shape of the human thorax, the above work has been extended to provide an analytical solution for an elliptical cylinder, which is presented in this thesis for the first time together with some simulation results.
Today in Multi-frequency Electrical Impedance Tomography (MEIT), new hardware for recording measurements operating above 1 MHz is available. This high operating frequency raises the question of the validity of the quasi-static conditions used in the associated Forward Problem modelling. It is important to be able to determine when the quasi-static conditions fail and to investigate the differences between a solution to the Forward Problem based on quasi-static conditions and one based on non-quasi-static conditions at these frequencies. This thesis details the derivation of a new analytical solution based on non-quasi-static conditions for a finite right circular cylinder having two electrodes placed on its boundary. Comparisons between the new analytical solution and data obtained from in-vitro experiments are presented, together with a comparison between the new analytical solution and the analytical solution derived earlier in the thesis (which is based on quasi-static conditions). Whilst these results are preliminary, they reveal that for situations associated with imaging the human thorax the quasi-static assumption appears to be violated when most modern MEIT systems are employed. This frequency-dependent three-dimensional analytical Forward Problem work has wide-ranging implications for the future of MEIT. The thesis concludes with some initial thoughts on how to incorporate anisotropy into three-dimensional Forward Problem solutions.
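The quasi-static question raised above can be made concrete with a standard back-of-envelope check: the approximation is reasonable only while the ratio of displacement to conduction current, omega*epsilon/sigma, stays much less than 1. The tissue values below are rough, generic literature-style figures chosen for illustration, not the thesis's measured data.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def quasi_static_ratio(freq_hz, sigma, eps_r):
    """Ratio of displacement to conduction current, omega*eps / sigma.

    A value well below 1 supports the quasi-static Forward Problem model;
    a value approaching 1 suggests the assumption is breaking down.
    """
    omega = 2.0 * math.pi * freq_hz
    return omega * eps_r * EPS0 / sigma

# Illustrative muscle-like tissue properties (assumed, not thesis data).
for f in (10e3, 100e3, 1e6):
    print(f, round(quasi_static_ratio(f, sigma=0.35, eps_r=3000.0), 4))
```

With these illustrative values the ratio is negligible at tens of kilohertz but grows towards order one near 1 MHz, which is consistent with the abstract's conclusion that modern MEIT operating frequencies strain the quasi-static assumption.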
