
The development of an enhanced electropalatography system for speech research

Chiu, Wilson Sien Chun. January 1995.
To understand how speech is produced by individual human beings, it is fundamentally important to be able to determine exactly the three-dimensional shape of the vocal tract. The vocal tract is inaccessible, so its exact form is difficult to determine with live subjects. There is a wide variety of methods that provide information on the vocal tract shape. The technique of Electropalatography (EPG) is cheap, relatively simple, non-invasive and highly informative. Using EPG on its own, it is possible to deduce information about the shape, movement and position of tongue-palate contact during continuous speech. However, data provided by EPG is in the form of a two-dimensional representation in which all absolute positional information is lost. This thesis describes the development of an enhanced Electropalatography (eEPG) system, which retains most of the advantages of EPG while overcoming some of the disadvantages by representing the three-dimensional (3D) shape of the palate. The eEPG system uses digitised palate shape data to display the tongue-palate contact pattern in 3D. The 3D palate shape is displayed on a Silicon Graphics workstation as a surface made up of polygons represented by a quadrilateral mesh. EPG contact patterns are superimposed onto the 3D palate shape by displaying the relevant polygons in a different colour. By using this system, differences in shape between individual palates, apparent on visual inspection of the actual palates, are also apparent in the image on screen. The contact patterns can be related more easily to articulatory features such as the alveolar ridge since the ridge is visible on the 3D display. Further, methods have been devised for computing absolute distances along paths lying on the palate surface. Combining this with calibrated palate shape data allows measurements accurate to 1 mm to be made between contact locations on the palate shape. These have been validated with manual measurements.
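The distance computation described above can be sketched in a few lines, assuming the path between two contact locations is available as an ordered list of calibrated 3-D points lying on the palate mesh (the exact path-construction method is not detailed here):

```python
import numpy as np

def path_length(points):
    """Absolute length of a path given as ordered 3-D points on the
    palate surface. With coordinates calibrated in mm, the result is
    the distance in mm along the surface between contact locations."""
    points = np.asarray(points, dtype=float)
    # Sum the Euclidean lengths of the segments between consecutive points
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()
```

For example, a two-segment path from (0, 0, 0) through (1, 0, 0) to (1, 1, 0) has length 2 mm.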
The sampling rate for EPG is 100 Hz and the data rate is equivalent to 62 bits per 10 ms. In the past few years, some coding (parameterization) methods have been introduced to try to reduce the amount of data while retaining the important aspects. Feature coding methods are proposed here and several parameters are investigated, expressed in terms of both conventional measures such as row number, and in absolute measures of distance and area (i.e. mm and mm²). Features studied include location of constriction and degree of constriction. Finally, in order to reduce the amount of data while retaining the spatial information, composite frames that represent a series of EPG frames are computed. Measures of goodness of the composite frames that do and do not use 3D data are described. Some examples are given in which fricative data has been processed by generating a composite frame for the entire fricative, and computing an area estimate for each row of the composite frame using the assumption of a flat tongue. This thesis demonstrates the current capability and inherent flexibility of the enhanced electropalatography system. In the future, the eEPG system can be extended to compute volume estimates again using a flat tongue model. By incorporating information on the tongue surface provided by other imaging methods such as ultrasound, more accurate area and volume estimates can be obtained.
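A minimal sketch of the composite-frame idea, under the assumption that a composite is the per-cell union (logical OR) of the binary contact frames over the segment, with per-row contact counts as a crude stand-in for the row-wise area estimates (the thesis's actual goodness measures and area model are not reproduced here):

```python
import numpy as np

def composite_frame(frames):
    """Collapse a sequence of binary EPG frames into one composite frame.

    The composite here is the per-cell union (logical OR) over the
    sequence -- one plausible reading of the abstract, not necessarily
    the thesis's exact definition.
    """
    frames = np.asarray(frames, dtype=bool)
    return frames.any(axis=0)

def row_contact_counts(frame):
    """Number of contacted electrodes in each row of a composite frame."""
    return frame.sum(axis=1)

# Example: three hypothetical 8x8 frames sampled 10 ms apart (100 Hz)
rng = np.random.default_rng(0)
seq = rng.random((3, 8, 8)) > 0.7
comp = composite_frame(seq)
per_row = row_contact_counts(comp)
```

A cell is set in the composite whenever it was contacted in any frame of the fricative, so spatial extent is preserved while the frame count collapses to one.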

Automatic gait recognition via statistical approaches

Huang, Ping Sheng. January 1999.
No description available.

Extending the feature set for automatic face recognition

Jia, Xiaoguang. January 1993.
Automatic face recognition has long been studied because it has wide potential for application. Several systems have been developed to identify faces from small face populations via detailed face feature analysis, or by using neural nets, or through model-based approaches. This study has aimed to provide satisfactory recognition within large populations of human faces and has concentrated on improving feature definition and extraction to establish an extended feature set, leading to a fully structured recognition system based on a single frontal view. An overall review of the development and techniques of automatic face recognition is included, and performances of earlier systems are discussed. A novel profile description has been achieved from a frontal view of a face and is represented by a Walsh power spectrum, which was selected from seven different descriptions due to its ability to distinguish the differences between profiles of different faces. A further feature has concerned the face contour, which is extracted by iterative curve fitting and described by normalized Fourier descriptors. To accompany an extended set of geometric measurements, the eye region feature is described statistically by eye-centred moments. Hair texture has also been studied for the purpose of segmenting it from other parts of the face and to investigate the possibility of using it as a feature set. These new features combine to form an extended feature vector to describe a face. The algorithms for feature extraction have been implemented on face images from different subjects and on multiple views of the same person, without requiring the face to be normal to the camera or the illumination to be constant. Performance has subsequently been assessed on each feature set separately and on the composite feature vector.
The results have continued to emphasize that though each description can be used to recognise a face there is a clear need for an extended feature set to cope with the requirements of recognizing faces within large populations.
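The eye-centred moments mentioned above can be illustrated with standard central image moments computed about the intensity centroid of an eye patch; the particular orders and normalisation the thesis uses are not stated here, so this is a generic sketch:

```python
import numpy as np

def central_moment(patch, p, q):
    """Central moment mu_pq of a grey-level patch about its intensity centroid.

    Centring on the centroid (here a stand-in for the eye centre) makes
    the description invariant to translation of the eye within the patch.
    """
    patch = np.asarray(patch, dtype=float)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    m00 = patch.sum()                      # total intensity (mu_00)
    xbar = (xs * patch).sum() / m00        # centroid x
    ybar = (ys * patch).sum() / m00        # centroid y
    return (((xs - xbar) ** p) * ((ys - ybar) ** q) * patch).sum()
```

By construction the first-order central moments vanish, so a feature vector would start from second order (mu_20, mu_11, mu_02, ...).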

Development of an electrical impedance tomograph for complex impedance imaging

Leung, Hing Tong Lucullus. January 1991.
This project concerns the development of electrical impedance tomography towards the production of complex impedance images. The prime intention was to investigate the feasibility of developing suitable instrumentation, rather than clinical applications. A further aim was to develop techniques for the performance evaluation of data collection systems. To achieve this it was necessary to design and develop a multi-current source type impedance tomography system, to act as a platform for the current study and for future work. The system developed is capable of producing conductivity and permittivity images. It employs microprocessor-based data collection electronics, providing portability between a range of possible host computers. The development of the system included a study of constant amplitude current source circuits leading to the design and employment of a novel circuit. In order to aid system testing, a surface mount technology resistor-mesh test object was produced. This has been adopted by the EEC Concerted Action on Impedance Tomography (CAIT) programme as the first standard test object. A computer model of the phantom was produced using the industry standard ASTEC3 circuit simulation package. This development allows the theoretical performance of any system topology, at any level of detail, to be established. The imaging system has been used to produce images from test objects, as well as forearm and lung images on humans. Whilst the conductivity images produced were good, the permittivity in-vivo images were noisy, despite good permittivity images from test objects. A study of the relative merits of multiple and single stimulus type systems was carried out as a result of the discrepancies in the in-vivo and test object images. This study involved a comparison of the author's system with that of Griffiths at the University Hospital of Wales.
The results showed that the multi-current source type system, whilst able to reduce stray capacitance, creates other more significant errors due to circuit matching; future development in semiconductor device technology may help to overcome this difficulty. It was identified that contact impedances together with the effective capacitance between the measurement electrode pairs in four-electrode systems reduces the measurability of changes in phase. A number of benchmarking indices were developed and implemented, both for system characterisation and for practical/theoretical design comparisons.

Improving the precision of leg ulcer area measurement with active contour models

Jones, Timothy David. January 1999.
A leg ulcer is a chronic wound of the skin that, at best, takes many months to fully heal and causes great distress to the patient. Treating leg ulcers places a large financial burden upon the National Health Service in the United Kingdom, estimated to be in excess of £300M annually. Measurement of the size of leg ulcers is a guide to assessing the progress of wound healing, and the use of non-invasive measurement techniques avoids damaging or infecting the wound. The area of a leg ulcer is currently measured by presenting a human observer with a captured video image of a wound, who then uses a mouse or pointing device to delineate the wounded region. Typically, the standard deviation of area measurements taken this way is approximately 5% of the wound area. In addition, different observers can show a bias difference in their area measurements from 3% to 25% of the wound area. It is proposed to reduce the incidence of such errors by using an active contour model to improve the delineation. Four different models are developed by adapting and applying several contributions made to the active contour model paradigm. Novel features include an external force that acts normal to the boundary but not tangentially, a new external energy term that promotes homogeneity of the gray level at the edge of the wound, and the application of the minimax principle for setting the parameters of an active contour model with piecewise B-spline curves. The algorithms provide the physician with a new and practical tool for producing area measurements with improved precision and are semi-automatic, requiring only a manual delineation to start the algorithm. In most cases, measurement precision is improved by application of the algorithms. Many wounds give rise to measurable bias differences between average manual area measurements and the corresponding algorithmic area measurements, typically averaging 3% to 4% of wound area.
With some wounds the bias magnitude can exceed 10% as a result of the contour partly deviating from the true edge of the wound and following a false edge.
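The normally-acting external force can be sketched as follows: project the external force at each contour point onto the local normal and move the point only along that direction, so the contour advances toward the edge without points sliding along the curve. This is a simplified, single-step illustration of the idea, not the thesis's full model (which also involves internal energies and B-spline parameterisation):

```python
import numpy as np

def normal_force_step(contour, force, step=1.0):
    """Move each point of a closed contour along its local normal only.

    contour: (N, 2) array of (x, y) points on a closed curve.
    force:   (N, 2) array of external forces sampled at those points
             (e.g. from an image-gradient field).
    """
    contour = np.asarray(contour, dtype=float)
    force = np.asarray(force, dtype=float)
    # Tangent by central differences around the closed contour
    tangent = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    # 90-degree rotation of the tangent gives the normal
    normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)
    # Keep only the component of the force along the normal
    along_normal = (force * normal).sum(axis=1, keepdims=True)
    return contour + step * along_normal * normal
```

With a purely radial force on a circular contour, every point moves radially outward; a tangential force component would be discarded entirely.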

Neural network techniques for the identification and classification of marine phytoplankton from flow cytometric data

Al-Haddad, Luan Marie. January 2001.
This thesis documents the research that has led to advances in the Artificial Neural Network (ANN) approach to analysing flow cytometric data from phytoplankton cells. The superiority of radial basis function networks (RBF) over multi-layer perceptron networks (MLP), for data of this nature, has been established, and analysis of 62 marine species of phytoplankton represents an advancement in the number of classes investigated. The complexity and abundance of heterogeneous phytoplankton populations renders an original multi-class network redundant each time a novel species is encountered. To encompass the additional species, the original multi-class network requires complete retraining, involving long optimisation procedures to be carried out by ANN scientists. An alternative multiple-network approach is presented (and compared with the multi-class network), which allows identification across the expanse of real-world data sets and the easy addition of new species. The structure comprises a number of pre-trained single-species networks as the front end to a combinatorial decision process for determining species identification. The simplicity of the architecture, and of the subsequent data produced by the technique, allows scientists unfamiliar with ANNs to dynamically alter the species of interest as required, without the need for complete re-training. Kohonen's Self-Organising Map (SOM), capable of discovering its own classification scheme, indicated areas of discrepancy between flow cytometric signatures of some species and their respective morphological groupings. In an attempt to improve identification to taxonomic group or genus level by supervised networks, class labels more reflective of flow cytometric signatures must be introduced. Methods for boundary recognition and cluster distinction in the output space of the SOM have been investigated, directed towards the possibility of an alternative flow cytometric structuring system.
Performance of the alternative multiple network approach was comparable to that of the original multi-class network when identifying data from various environmental and laboratory culturing conditions. Improved generalisation can be achieved through employment of optical characteristics more representative of those found in nature.
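The multiple-network architecture can be sketched as a bank of independent per-species detectors feeding a decision stage. The decision rule below (accept the highest score if it clears a threshold, otherwise report "unknown") is an illustrative stand-in for the thesis's combinatorial decision process; the detector outputs are assumed to lie in [0, 1]:

```python
def identify(scores, threshold=0.5):
    """Combine outputs of independent single-species detector networks.

    scores: dict mapping species name -> detector output in [0, 1]
            for one cell's flow cytometric signature.

    Because each network is trained on a single species, adding a new
    species means training one new detector rather than retraining a
    monolithic multi-class network.
    """
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"
```

For example, with hypothetical detector outputs `{"E. huxleyi": 0.9, "D. tertiolecta": 0.2}` the call returns "E. huxleyi"; if no detector clears the threshold, the cell is flagged as unknown rather than forced into a class.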

The application of optimal transputer architecture to concurrent processing in the implementation of vision processing algorithms

Bennett, Ian Bramley. January 1989.
Repetitive low level image processing transformations can be performed at high speeds by SIMD arrays, DSP and dedicated VLSI devices. These strategies cannot be adopted for more complex and time consuming data dependent algorithms. A flexible and programmable component must be used, and the use of many such devices in parallel, using dynamic load balancing techniques, is necessary to enable acceptable execution performance to be obtained. The transputer is a powerful new microprocessor with unique on-chip communications facilities. Together with the new parallel programming language, occam, the transputer was specifically designed for parallel processing applications. Large transputer networks can be used for computationally intensive applications. This work has investigated the use of transputers for performing image processing algorithms at all three levels of complexity. Techniques were devised and implemented for the execution of low, medium and high levels of image processing algorithms on a multi-transputer network. A software architecture using SUPPLY and DEMAND processes was designed, and dynamic work load balancing was achieved, operating on a ternary tree network of up to 32 transputers. Some 80 image processing algorithms were successfully implemented within the software architecture. In particular, the more complex operation of Feature Extraction was achieved using the multi-transputer system. The Features extracted, involving Convex Hull, Convex Hull Deficiencies, Areas and Perimeters, and Shape Factors were used to build a Feature Vector. The use of this Feature Vector in Scene Interpretation, to realise Learn and Recognise functions has been investigated. The results of the work clearly show that while the system proposed is not as effective at executing repetitive, data intensive transformations as methods mentioned earlier, it can execute more complex Feature Extraction and Scene Interpretation algorithms efficiently.
An Efficiency of 85% was achieved for Convex Hull formation, using 32 transputers.
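The SUPPLY/DEMAND scheme is, in modern terms, a demand-driven work farm: a supplier fills a shared work queue and each idle worker pulls the next task as soon as it finishes the last, so load balances dynamically however long individual tasks take. The thesis realised this in occam on a ternary tree of transputers; the sketch below uses Python threads purely to illustrate the pattern:

```python
import queue
import threading

def process_farm(tasks, worker_fn, n_workers=4):
    """Demand-driven work farm: idle workers pull the next task."""
    work = queue.Queue()
    results = queue.Queue()
    for t in tasks:          # SUPPLY: load the shared work queue
        work.put(t)

    def worker():
        while True:          # DEMAND: take a task whenever idle
            try:
                t = work.get_nowait()
            except queue.Empty:
                return       # no work left; worker terminates
            results.put(worker_fn(t))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return [results.get() for _ in range(results.qsize())]
```

Because slow tasks simply hold their worker longer while others keep draining the queue, no static partitioning of the image data is needed.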

An image reconstruction algorithm for a dual modality tomographic system

Nordin, Md. Jan. January 1995.
This thesis describes an investigation into the use of dual modality tomography to measure component concentrations within a cross-section. The benefits and limitations of using dual modality compared with single modality are investigated and discussed. A number of methods are available to provide imaging systems for process tomography applications and seven imaging techniques are reviewed. Two modalities of tomography were chosen for investigation (i.e. Electrical Impedance Tomography (EIT) and optical tomography) and the proposed dual modality system is presented. Image reconstruction algorithms for EIT (based on the modified Newton-Raphson method), for optical tomography (based on the back-projection method), and for both modalities combined together to produce a single tomographic imaging system are described, enabling comparisons to be made between the individual and combined modalities. To analyse the performance of the image reconstruction algorithms used in the EIT, optical tomography and dual modality investigations, a sequence of reconstructions using a series of phantoms is performed on a simulated vessel. Results from two distinct cases are presented: a) simulation of a vertical pipe in which the cross-section is filled with liquid or liquid and objects being imaged, and b) simulation of a horizontal pipe where the conveying liquid level may vary from pipe full down to 14% of liquid. A computer simulation of an EIT imaging system based on a 16 electrode sensor array is used. The quantitative images obtained from simulated reconstruction are compared in terms of percentage area with the actual cross-section of the model. It is shown from the results that useful reconstructions may be obtained with widely differing levels of liquid, despite the limitations in accuracy of the reconstructions.
The test results obtained using the phantoms with optical tomography, based on two projections each of sixteen views, show that the images produced agree closely on a quantitative basis with the physical models. The accuracy of the optical reconstructions, neglecting the effects of aliasing due to only two projections, is much higher than for the EIT reconstructions. Neglecting aliasing, the measured accuracies range from 0.1% to 0.8% for the pipe filled with water. For the sewer condition, i.e. the pipe not filled with water, the major phase is measured with an accuracy of 1% to 3.4%. For the single optical modality the minor components are measured with accuracies of 6.6% to 19%. The test results obtained using the phantoms show that the images produced by combining both the EIT and optical tomography methods agree quantitatively with the physical models. The EIT eliminates most of the aliasing and the results now show that the optical part of the system provides accuracies for the minor components in the range 1% to 5%. It is concluded that the dual modality system shows a measurable increase in accuracy compared with the single modality systems. The dual modality system should be investigated further using laboratory flow rigs in order to check accuracies and determine practical limitations. Finally, suggestions for future work on improving the accuracy, speed and resolution of the dual modality imaging system are presented.
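The aliasing that two projections cause can be seen in a minimal unfiltered back-projection sketch. The assumption here is a parallel-ray geometry with two orthogonal projections over an N x N grid, which simplifies the system's actual optics; each pixel simply accumulates the two ray sums that pass through it:

```python
import numpy as np

def backproject_two_views(horiz, vert):
    """Unfiltered back-projection from two orthogonal parallel projections.

    horiz: ray sums along the rows of an N x N grid.
    vert:  ray sums along the columns.

    A single object lights up its whole row and column ("smearing"),
    which is exactly the aliasing a second modality can help suppress.
    """
    horiz = np.asarray(horiz, dtype=float)
    vert = np.asarray(vert, dtype=float)
    return horiz[:, None] + vert[None, :]
```

A lone object at row 1, column 2 of a 3 x 3 grid reconstructs with its true position at the highest value, but ghost responses appear along the crossing row and column.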

The development and application of optically generated spatial carrier fringes for the quantitative measurement of flowfields and solid surfaces

Chan, Ping Hai. January 1996.
There are two main approaches to fringe pattern analysis: spatial carrier and temporal carrier. The phase-stepping (PS) method plays a prominent part in the temporal carrier approach. The principal technique of the spatial carrier approach is the discrete Fourier transform (DFT) method. In this thesis, both methods are reviewed and illustrated with examples. The problems associated with these methods are discussed. The effect of weighting function and filtering window on the accuracy of the DFT method is investigated. A review of the minimum spanning tree approach to phase unwrapping is presented. A new fringe evaluation technique has been explored. The technique combines the computational simplicity of the PS method with the single-image analysis capability of the DFT method. This technique is implemented in two steps. Firstly, a fringe pattern with spatial carrier is subdivided into three component images. Then, the phase is calculated by a three phase step algorithm from these images. A computer simulation has been undertaken to demonstrate the theory and to analyse the systematic errors. Applications in the analysis of transonic flow-fields are presented. Fourier Transform Profilometry (FTP) decodes the 3-D shape information from the phase stored in a 2-D image of an object onto which a Ronchi grating is projected. Two different optical geometries used in the FTP have been compared. The phase information can be separated from the image signal by either the phase subtraction method or the spectrum shift method. The result of an experimental comparison between two phase extraction methods is also presented.
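The second step above, recovering phase from three component images, can be illustrated with the standard three-step algorithm for phase steps of -120, 0 and +120 degrees; the thesis's variant, and how it derives the three images from a single spatial-carrier pattern, may differ in detail:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with -120/0/+120 degree steps.

    For I_k = A + B*cos(phi + d_k) with d_k = -2pi/3, 0, +2pi/3:
        sqrt(3)*(I1 - I3) = 3*B*sin(phi)
        2*I2 - I1 - I3    = 3*B*cos(phi)
    so arctan2 of the two recovers phi modulo 2pi.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: recover a known phase from simulated intensities
phi_true = 0.7
steps = np.array([-2 * np.pi / 3, 0.0, 2 * np.pi / 3])
i1, i2, i3 = 5.0 + 2.0 * np.cos(phi_true + steps)
```

The background level A and modulation B cancel in the formula, which is what makes the three-image decomposition sufficient for phase extraction.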

Hybrid techniques for speech coding

Burnett, I. S. January 1992.
No description available.
