About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.

431

Application based image compression for micro-satellite optical imaging

Hou, Peixin (January 1999)
No description available.
432

Robust correlation and support vector machines for face identification

Jonsson, K. T. (January 2000)
No description available.
433

Fractal image coding techniques and their applications

Ali, Maaruf (January 1997)
No description available.
434

Realistic 3-D displays from cartographic data

Mohamed, B. (January 1986)
No description available.
435

Methodology and tools for designing concept tutoring systems

Direne, Alexandre Ibrahim (January 1993)
No description available.
436

Motion analysis of cinematographic image sequences

Giaccone, Paul (January 2000)
Many digital special effects require knowledge of the motion present in an image sequence. In order for these effects to be realistic, blending seamlessly with unmodified live action or animation, motion must be represented accurately. Most existing methods of motion estimation are unsuitable for use in postproduction for one or more reasons: poor accuracy; corruption of large-magnitude motion estimates by aliasing and the aperture problem; failure to handle multiple motions and motion boundaries; representation of curvilinear motion as concatenated translations instead of as smooth curves; and slowness of execution and inefficiency in the presence of small variations between successive images. Novel methods of motion estimation are proposed here that are specifically designed for use in postproduction and address all of the above problems. The techniques are based on parametric estimation of optical-flow fields, reformulated in terms of displacements rather than velocities. The paradigm of displacement estimation leads to techniques for the iterative updating of motion estimates for accuracy; faster motion estimation by exploiting redundancy between successive images; representation of motion over a sequence of images with a single set of parameters; and curvilinear representation of motion. Robust statistics provides a means of distinguishing separate types of motion and overcoming the problems of motion boundaries. Accurate recovery of the motion of the background in a sequence, combined with other image characteristics, leads to a segmentation procedure that greatly accelerates the rotoscoping and compositing tasks commonly carried out in postproduction. Comparative evaluation of the proposed methods against other techniques for motion estimation and image segmentation indicates that, in most cases, the new work provides considerable improvements in quality.
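The abstract gives no implementation detail, but the core idea it names — parametric displacement estimation with iterative updating — can be illustrated in miniature. The sketch below is a generic Gauss-Newton scheme for a purely translational motion model (using NumPy and SciPy; the function name and structure are hypothetical, and the thesis's robust, multi-motion, curvilinear estimator is far richer than this):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def estimate_translation(prev, curr, iterations=10, tol=1e-3):
    """Iteratively refine a global displacement (dx, dy) between two
    grayscale frames by Gauss-Newton minimisation of the brightness
    error. A generic parametric optical-flow sketch, not the thesis code."""
    prev = prev.astype(float)
    gy, gx = np.gradient(prev)                    # spatial gradients of reference frame
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    d = np.zeros(2)                               # running estimate (dy, dx)
    for _ in range(iterations):
        # Warp the current frame back by the running estimate (subpixel).
        warped = nd_shift(curr.astype(float), -d, order=1, mode="nearest")
        error = warped - prev
        # Normal equations of the linearised brightness-constancy model.
        b = -np.array([np.sum(gx * error), np.sum(gy * error)])
        ddx, ddy = np.linalg.solve(A, b)
        d += np.array([ddy, ddx])                 # iterative update of the estimate
        if max(abs(ddx), abs(ddy)) < tol:
            break
    return d[1], d[0]                             # (dx, dy)
```

A robust variant in the spirit of the abstract would replace the quadratic error with an M-estimator weighting, so that pixels belonging to a second motion, or lying on a motion boundary, are down-weighted rather than corrupting the fit.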
437

An acoustic-phonetic approach in automatic Arabic speech recognition

Al-Zabibi, Marwan (January 1990)
In a large-vocabulary speech recognition system, the broad phonetic classification technique is used instead of detailed phonetic analysis to overcome the variability in the acoustic realisation of utterances. The broad phonetic description of a word is used as a means of lexical access, where the lexicon is structured into sets of words sharing the same broad phonetic labelling. This approach has been applied to a large-vocabulary, isolated-word Arabic speech recognition system. Statistical studies have been carried out on 10,000 Arabic words (converted to phonemic form) involving different combinations of broad phonetic classes. Some particular features of the Arabic language have been exploited. The results show that vowels represent about 43% of the total number of phonemes. They also show that about 38% of the words can be uniquely represented at this level by using eight broad phonetic classes. When detailed vowel identification is introduced, the percentage of uniquely specified words rises to 83%. These results suggest that a fully detailed phonetic analysis of the speech signal is perhaps unnecessary. In the adopted word recognition model, the consonants are classified into four broad phonetic classes, while the vowels are described by their phonemic form. A set of 100 words uttered by several speakers has been used to test the performance of the implemented approach. Within the recognition model, three procedures have been developed, namely voiced-unvoiced-silence (V-UV-S) segmentation, vowel detection and identification, and automatic spectral transition detection between phonemes within a word. The accuracy of both the V-UV-S and vowel recognition procedures is almost perfect. A broad phonetic segmentation procedure has been implemented which exploits information from the three procedures above. Simple phonological constraints have been used to improve the accuracy of the segmentation process. The resultant sequence of labels is used for lexical access to retrieve the word, or a small set of words sharing the same broad phonetic labelling. When more than one word candidate is retrieved, a verification procedure is used to choose the most likely one.
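The lexical-access scheme described here — a lexicon structured into cohorts of words sharing a broad phonetic labelling, with consonants reduced to classes and vowels kept phonemic — is straightforward to illustrate. The following sketch uses a hypothetical class inventory and Romanised toy entries; the thesis's actual class definitions and Arabic lexicon are not reproduced:

```python
from collections import defaultdict

# Hypothetical phoneme-to-class map: consonants collapse to broad classes,
# vowels keep their phonemic identity (as in the adopted recognition model).
BROAD_CLASS = {
    "b": "stop", "t": "stop", "d": "stop", "k": "stop",
    "s": "fric", "f": "fric", "z": "fric",
    "m": "nasal", "n": "nasal", "l": "liquid", "r": "liquid",
    "a": "a", "i": "i", "u": "u",
}

def broad_label(phonemes):
    """Map a phoneme sequence to its broad phonetic label."""
    return "-".join(BROAD_CLASS.get(p, "other") for p in phonemes)

def build_lexicon(pronunciations):
    """Structure the lexicon into sets of words sharing one broad label."""
    lexicon = defaultdict(set)
    for word, phonemes in pronunciations.items():
        lexicon[broad_label(phonemes)].add(word)
    return lexicon

# Lexical access: a recognised label retrieves a (hopefully small) cohort,
# which a verification procedure would then narrow to one word.
lexicon = build_lexicon({"kataba": list("kataba"), "salima": list("salima")})
print(lexicon[broad_label(list("kataba"))])   # {'kataba'}
```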
438

Speaker independent isolated word recognition

Mwangi, Elijah (January 1987)
The work presented in this thesis concerns the recognition of isolated words using a pattern-matching approach. In such a system, an unknown speech utterance, which is to be identified, is transformed into a pattern of characteristic features. These features are then compared with a set of pre-stored reference patterns that were generated from the vocabulary words. The unknown word is identified as that vocabulary word for which the reference pattern gives the best match. One of the major difficulties in the pattern comparison process is that speech patterns obtained from the same word exhibit non-linear temporal fluctuations, and thus a high degree of redundancy. The initial part of this thesis considers various dynamic time warping techniques used for normalizing the temporal differences between speech patterns. Redundancy removal methods are also considered, and their effect on the recognition accuracy is assessed. Although the use of dynamic time warping algorithms provides considerable improvement in the accuracy of isolated word recognition schemes, the performance is ultimately limited by their poor ability to discriminate between acoustically similar words. Methods for enhancing the identification rate among acoustically similar words, by using common pattern features for similar-sounding regions, are investigated. Pattern-matching-based, speaker-independent systems can only operate with a high recognition rate by using multiple reference patterns for each of the words included in the vocabulary. These patterns are obtained from the utterances of a group of speakers. The use of multiple reference patterns leads not only to a large increase in the memory requirements of the recognizer, but also to an increase in the computational load. A recognition system is proposed in this thesis which overcomes these difficulties by (i) employing vector quantization techniques to reduce the storage of reference patterns, and (ii) eliminating the need for dynamic time warping, which reduces the computational complexity of the system. Finally, a method of identifying the acoustic structure of an utterance in terms of voiced, unvoiced, and silence segments by using fuzzy set theory is proposed. The acoustic structure is then employed to enhance the recognition accuracy of a conventional isolated word recognizer.
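As background for the temporal-normalisation discussion, here is a minimal dynamic-time-warping sketch — a textbook symmetric DTW in NumPy, not the thesis implementation and not its vector-quantised replacement:

```python
import numpy as np

def dtw_distance(ref, test):
    """Dynamic-time-warping distance between two feature sequences
    (arrays of shape frames x dims), normalising the non-linear
    temporal fluctuations described in the abstract."""
    n, m = len(ref), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - test[j - 1])
            # Symmetric local path constraints: diagonal, vertical, horizontal.
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    return cost[n, m] / (n + m)   # normalise by total path length
```

In the recognition setting described above, the test pattern would be scored against every stored reference pattern and the word with the smallest warped distance reported; the computational load of doing this for multiple references per word is exactly what motivates the proposed VQ-based system.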
439

The design and implementation of a purely digital stereo-photogrammetric system on the IBM 3090 multi-user mainframe computer

Azizi, Ali (January 1990)
This thesis is concerned with an investigation into the possibilities of implementing various aspects of a purely digital stereo-photogrammetric (DSP) system on the IBM 3090 150E mainframe multi-user computer. The main aspects discussed within the context of this thesis are:
i) mathematical modelling of the process of formation of digital images in the space and frequency domains;
ii) experiments on improving the pictorial quality of digital aerial photos using Inverse and Wiener filters;
iii) devising and implementing an approach for the automatic sub-pixel measurement of cross-type fiducial marks for the inner orientation, using the Gradient operator and the image modelling least squares (IML) approach;
iv) devising and implementing a method for the digital rectification of overlapping aerial photos and the formation of the stereo-model;
v) design and implementation of the DSP system and the generation of a DTM using visual measurement;
vi) investigating the feasibility of stereo-viewing of binary images and the possibility of performing measurements on such images;
vii) implementing a method for the automatic generation of a DTM using one-dimensional image correlation along epipolar lines, and experimentally optimizing the size of the correlation window;
viii) assessment of the accuracy of the DTM data generated both by the DSP system and by the automatic correlation method;
ix) vectorization of the rectification and correlation programs to achieve higher speed-up factors in the computational process.
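Item (vii) above — automatic DTM generation by one-dimensional correlation along epipolar lines — can be sketched generically. The code below matches a pixel on one epipolar line against its conjugate line by 1-D normalised cross-correlation; the function and parameter names are illustrative, and the thesis's experimentally optimised window size is not reproduced:

```python
import numpy as np

def match_along_epipolar(left_line, right_line, x, half_window=8):
    """For the pixel at column x of the left epipolar line, find the
    best-correlated position on the conjugate right line using 1-D
    normalised cross-correlation. A generic sketch, not the thesis code.
    Assumes half_window <= x <= len(left_line) - half_window - 1."""
    w = half_window
    template = left_line[x - w:x + w + 1].astype(float)
    template = (template - template.mean()) / (template.std() + 1e-9)
    best_x, best_score = None, -np.inf
    for cx in range(w, len(right_line) - w):
        cand = right_line[cx - w:cx + w + 1].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        score = float(np.dot(template, cand)) / len(template)
        if score > best_score:
            best_x, best_score = cx, score
    return best_x, best_score   # disparity follows as x - best_x
```

Restricting the search to the epipolar line is what reduces the matching problem from 2-D to 1-D; the disparity recovered at each matched point is then converted to terrain height for the DTM.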
440

Application of constrained optimisation techniques in electrical impedance tomography

Bayford, R. H. F. W. (January 1994)
A constrained optimisation technique is described for the reconstruction of temporal resistivity images. The approach solves the inverse problem by optimising a cost function under constraints, in the form of normalised boundary potentials. Mathematical models have been developed for two different data collection methods for the chosen criterion. Both of these models express the reconstructed image in terms of one-dimensional (1-D) Lagrange multiplier functions. The reconstruction problem becomes one of estimating these 1-D functions from the normalised boundary potentials. These models are based on a cost criterion of minimising the variance between the reconstructed resistivity distribution and the true resistivity distribution. The methods presented in this research extend the algorithms previously developed for X-ray systems. Computational efficiency is enhanced by exploiting the structure of the associated system matrices. The structure of the system matrices was preserved in the Electrical Impedance Tomography (EIT) implementations by applying a weighting, due to the non-linear current distribution, during the backprojection of the Lagrange multiplier functions. In order to obtain the best possible reconstruction it is important to consider the effects of noise in the boundary data. This is achieved by using a fast algorithm which matches the statistics of the error in the approximate inverse of the associated system matrix with the statistics of the noise error in the boundary data. This yields the optimum solution with the available boundary data. Novel approaches have been developed to produce the Lagrange multiplier functions. Two alternative methods are given for the design of VLSI implementations of hardware accelerators to improve computational efficiency. These accelerators are designed to implement parallel geometries and are modelled using a verification description language to assess their performance capabilities.
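The constrained-optimisation formulation — minimising a cost function subject to measured boundary potentials — can be illustrated in miniature. The sketch below solves a generic quadratic cost under linear equality constraints via the Lagrange-multiplier (KKT) system; it is a toy of the mathematical form only, with hypothetical names, not the thesis's backprojection-weighted EIT algorithm:

```python
import numpy as np

def constrained_min_variance(A, b, x0):
    """Minimise ||x - x0||^2 subject to A @ x = b by solving the KKT
    system for the image x and the Lagrange multipliers lam.
    A: constraint matrix (measurements x pixels), b: boundary data."""
    m, n = A.shape
    # Stationarity and feasibility stacked into one linear system:
    # [[I, A^T], [A, 0]] @ [x; lam] = [x0; b]
    K = np.block([[np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([x0, b]))
    return sol[:n], sol[n:]   # reconstruction, multipliers

# Tiny example: two "pixels" constrained by one boundary measurement.
x, lam = constrained_min_variance(np.array([[1.0, 1.0]]),
                                  np.array([3.0]),
                                  np.array([1.0, 1.0]))
print(x)   # [1.5 1.5] -- closest image to the prior satisfying the data
```

In a realistic EIT setting the constraint matrix is large and structured, which is why the abstract emphasises exploiting that structure, and noise in b is handled statistically rather than enforced exactly as it is in this toy.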
