861

Surface evaluation by the signal processing of ultrasonic pulses

Smith, Philip F. January 1990
The development of a surface texture evaluation technique for the study of roughnesses of the order of a few microns using the signal processing of ultrasonic pulse-echo signals is described. The technique of extracting surface information by means of deconvolution is introduced. Strictly, a solution to the deconvolution problem normally does not exist or is not unique. The chosen method of approaching a solution is the nonlinear Maximum Entropy Method (MEM), which offers superior image quality over many other filters. The algorithm is described and translated into a standalone computer programme; the development of this software is described in detail. The performance of the algorithm in the field of ultrasonics is assessed by means of simulations involving images similar to those obtainable in a real application. Comparison with the linear Wiener-Hopf filter is provided, particularly in instances where the comparison shows the weaknesses of either technique. Also examined is the frequency restoration property of the algorithm (not shown by the Wiener-Hopf filter), and potential applications of this property are described. The final part of the study of the MEM is an examination of the effect on performance of some of the algorithm's parameters and of computer system dependencies. A brief overview of some of the surface metrology techniques currently in use is given; the aim is to introduce surface metrology and to assess where the technique described here fits into the general surface metrology field. The experimental system, which is essential to practical applications, is considered in some detail, as is the wide range of ultrasonic transducers available for the research, which show considerable variety in their characteristics. Some assessment is carried out using the Maximum Entropy Method with simulated and real data to try to establish the properties of a transducer best suited to the intended application. Finally, results from grating-type test surfaces and more general rough surfaces are presented. The former are intended as a means of establishing the potential performance of the technique; the latter build on the grating results to analyse real surfaces as produced by a variety of engineering techniques. Results are compared with those obtained by a stylus instrument. Generally good agreement is found, with roughnesses of around 2 microns being accurately assessed to better than a micron, and it is concluded that the technique has a valuable contribution to make to the surface metrology field.
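The abstract contrasts the nonlinear MEM with the linear Wiener-Hopf filter used as a baseline. As a rough illustration of that linear baseline only, and not of the author's MEM implementation, a frequency-domain Wiener deconvolution of a pulse-echo trace might look like the following sketch; the pulse shape, trace length and noise level are invented for illustration.

```python
import numpy as np

def wiener_deconvolve(echo, pulse, noise_power=1e-2):
    """Frequency-domain Wiener deconvolution of a pulse-echo trace.

    echo        : recorded A-scan (reference pulse convolved with the
                  surface response, plus noise)
    pulse       : reference pulse measured from a smooth reflector
    noise_power : assumed noise-to-signal power ratio (regularisation)
    """
    n = len(echo)
    H = np.fft.rfft(pulse, n)            # transfer function of the pulse
    Y = np.fft.rfft(echo, n)             # spectrum of the received echo
    # Wiener filter: conj(H) / (|H|^2 + noise_power)
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.fft.irfft(G * Y, n)        # estimate of the surface response

# Toy usage: a two-spike "surface" blurred by a windowed oscillatory pulse.
t = np.arange(256)
pulse = np.exp(-((t - 20) / 5.0) ** 2) * np.cos(0.6 * t)
surface = np.zeros(256); surface[[80, 120]] = [1.0, 0.6]
echo = np.convolve(surface, pulse, mode="full")[:256]
echo += 0.01 * np.random.randn(256)
estimate = wiener_deconvolve(echo, pulse, noise_power=1e-3)
```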
862

A cellular automaton-based system for the identification of topological features of carotid artery plaques

Delaney, Matthew January 2014
The formation of a plaque in one or both of the internal carotid arteries poses a serious threat to the lives of those in whom it occurs. This thesis describes a technique designed to detect the level of occlusion and provide topological information about such plaques. In order to negate the cost of specialised hardware, only the sound produced by blood-flow around the occlusion is used; this raises problems that prevent the application of existing medical imaging techniques, but these can be overcome by the application of a nonlinear technique that takes full advantage of the discrete nature of digital computers. Results indicate that both the level of occlusion and the presence or absence of various topological features can be determined in this way. Beginning with a review of existing work in medical imaging and in more general but related techniques, the EPI process of Frieden (2004) is identified as the strongest approach to a situation where it is desirable to work with both signal and noise yet avoid the computational cost and other pitfalls of established techniques. The remainder of the thesis discusses attempts to automate the EPI process which, in the form given by Frieden (2004), requires a degree of human mathematical creative problem-solving. Initially, a numerical-methods-inspired approach based on genetic algorithms was attempted but found to be both computationally costly and insufficiently true to the nature of the EPI equations. A second approach, based on the idea of creating a formal system allowing entropy, direction and logic to be manipulated together, proved to lack certain key properties and to require an amount of work beyond the scope of the project described in this thesis in order to be extended into a usable form for the EPI process. The approach upon which the imaging system is ultimately built is based on an abstracted form of constraint-logic programming, resulting in a cellular-automaton-based model which is shown to produce distinct images for different sizes and topologies of plaque in a reliable and human-interpretable way.
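The abstract names the final model only as a cellular-automaton system derived from constraint-logic programming; the specific rule set is not given. Purely to show the general computational pattern of a synchronous cellular-automaton update, and not the author's plaque-imaging model, a generic two-dimensional majority-vote automaton might be sketched as follows.

```python
import numpy as np

def ca_step(grid):
    """One synchronous update of a binary 2-D cellular automaton.

    Generic majority-vote rule: a cell becomes 1 if five or more of the
    nine cells in its 3x3 neighbourhood (itself included) are 1.
    This is a placeholder rule, not the plaque-imaging model.
    """
    padded = np.pad(grid, 1, mode="wrap")
    # Sum of the 3x3 neighbourhood for every cell, via shifted views.
    neighbourhood = sum(
        padded[i:i + grid.shape[0], j:j + grid.shape[1]]
        for i in range(3) for j in range(3)
    )
    return (neighbourhood >= 5).astype(grid.dtype)

# Toy usage: iterate the rule on a random grid and watch it settle.
rng = np.random.default_rng(0)
grid = (rng.random((64, 64)) > 0.5).astype(np.uint8)
for _ in range(10):
    grid = ca_step(grid)
```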
863

Cursive script recognition in real time

Papageorgiu, Dimitrios January 1990
No description available.
864

Dynamic analysis of anthropomorphic manipulators in computer animation

Loizidou, Stephania M. January 1992
No description available.
865

The analysis and synthesis of texture in sidescan sonar data

Clarke, Stuart J. January 1992
No description available.
866

Application of artificial neural networks to synchronous generator condition monitoring

Jiang, Hongwei January 1995
This thesis presents an Artificial Neural Network (ANN) based, automatic Pattern Recognition (PR) technique for electrical machine Condition Monitoring (CM) applications. The performance of synchronous generators has been studied under a variety of conditions and monitored using a range of automatic pattern recognition approaches. The harmonic components of the generator stator, rotor and excitation currents have been analysed, initially to gain information about fault conditions in the machines, and then as a source of data for input training patterns to the neural networks. Artificial neural networks, their architectures, algorithms and their application to pattern recognition have been studied. Two unsupervised self-organising neural networks were chosen for further investigation and applied to the automatic pattern recognition tasks: Kohonen neural networks and Adaptive Resonance Theory networks. A computer implementation of Kohonen Self-Organising Feature Maps (KSOFM) and a simulation that interprets the continuous-valued model of adaptive resonance theory (the ART2 net) have been studied in detail. General condition monitoring techniques for electrical machines have been briefly reviewed and statistical pattern recognition methods have also been described. To confirm the utility of the proposed ANN-based automatic pattern recognition techniques for electrical machine condition monitoring, two synchronous generators of different capacities were employed in the experimental study: one of 8 kVA used for training the networks, and another of 11 kVA for testing them. The stator, rotor and excitation current signals of a generator have been used to provide the networks' input patterns, and the applicability of Kohonen networks and adaptive resonance theory networks to electrical machine condition monitoring has been compared. The possibility of applying the proposed techniques to real industrial systems has been discussed. Finally, some of the difficulties of implementing ANNs for condition monitoring are considered.
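As a sketch of the kind of unsupervised map involved, the following is a minimal Kohonen self-organising feature map trainer in numpy. It is not the thesis implementation; the map size, learning-rate schedule and the use of normalised current-harmonic amplitudes as input features are assumptions made for illustration.

```python
import numpy as np

def train_som(patterns, grid_shape=(8, 8), epochs=200, lr0=0.5, sigma0=3.0):
    """Minimal Kohonen self-organising feature map.

    patterns : (n_samples, n_features) array, e.g. normalised harmonic
               amplitudes of stator/rotor/excitation currents (assumed).
    Returns the trained weight grid of shape grid_shape + (n_features,).
    """
    rng = np.random.default_rng(0)
    rows, cols = grid_shape
    weights = rng.random((rows, cols, patterns.shape[1]))
    # Pre-compute node coordinates for the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5   # shrinking neighbourhood
        for x in patterns[rng.permutation(len(patterns))]:
            # Best-matching unit: node whose weights are closest to x.
            dist = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)
            # Gaussian neighbourhood around the BMU on the map grid.
            d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
    return weights

# Toy usage: random 20-dimensional "harmonic" feature vectors.
rng = np.random.default_rng(1)
som = train_som(rng.random((100, 20)))
```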
867

Spatial frequencies and face recognition

Costen, Nicholas Paul January 1994
If face images are degraded by spatial quantisation, there is a non-linear acceleration in the decline of recognition accuracy as block size increases, suggesting that recognition requires a critical minimum range of object spatial frequencies. These may define the facial configuration, reflecting the structural properties that allow differentiation of faces. Experiment 1 measured the speed and accuracy of recognition of six fronto-parallel faces shown with 11, 21 and 42 pixels/face, produced by quantisation, a Fourier low-pass filter and Gaussian blurring. Performance declined with image quality in a significant, non-linear manner, but faster for the quantised images. Experiment 2 found that some of this additional decline was due to frequency-domain masking. Experiment 3 compared recognition for quantised, Fourier low-pass and high-pass versions; recognition was only impaired when the frequency limit exceeded the range 4.5-12.5 cycles/face. Experiment 4 found this was not due to contrast differences. Experiments 5, 6 and 7 used octave band-pass filters centred on 4.14, 9.67 and 22.15 cycles/face, varying view-point for both sequential matching and recognition. The spatial frequency effect was not found for matching, but was for recognition. Experiment 8 also measured recognition of band-passed images, presented with octave bands centred on 2.46-50.15 cycles/face and at 0-90 degrees from fronto-parallel. Spatial frequency effects were found at all angles, with best performance for semi-profile images and 11.10 cycles/face. Experiment 9 replicated this with perceptually equal contrasts and the outer facial contour removed. Modelling showed this reflected a single spatial-frequency channel two octaves wide, centred on 9 cycles/face. Experiment 10 measured response time for successive matching of faces across a size disparity, finding an asymmetrical effect.
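A minimal sketch of the kind of octave-wide Fourier band-pass filtering described, with frequency expressed in cycles per face, is given below. It assumes the image spans exactly one face width and uses an ideal (hard-edged) annular mask, whereas the experiments presumably used smoother filters.

```python
import numpy as np

def bandpass_cycles_per_face(image, centre_cpf, octaves=1.0):
    """Apply an octave-wide Fourier band-pass filter to a face image.

    centre_cpf : centre frequency in cycles per face width (the image is
                 assumed to span exactly one face horizontally).
    octaves    : total bandwidth of the pass band, in octaves.
    """
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows) * rows      # cycles per image height
    fx = np.fft.fftfreq(cols) * cols      # cycles per image width
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    lo = centre_cpf / 2 ** (octaves / 2)
    hi = centre_cpf * 2 ** (octaves / 2)
    mask = (radius >= lo) & (radius <= hi)   # ideal annular pass band
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# Toy usage on a synthetic 128x128 "face-sized" image.
img = np.random.rand(128, 128)
band = bandpass_cycles_per_face(img, centre_cpf=9.67, octaves=1.0)
```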
868

Quantitation of contrast enhancement in dynamic magnetic resonance imaging of the breast

Brookes, Jason A. January 1996
This thesis explores issues relating to the quantitation of both signal enhancement and contrast agent uptake, along with problems associated with such quantitation, with the aim of improving the specificity of dynamic, contrast enhanced breast MRI. A variable flip angle technique for measuring T1 in vivo was implemented using 2D and 3D FLASH sequences, in order to monitor the differential relaxation rate following injection of contrast agent. Experiments (with phantom objects) investigating sources of error in these techniques found that (i) the rf transmit power calibration automatically performed by the imaging system was 13.5% in error, (ii) significant non-uniformity in the rf transmit field existed over the breast coil volume and (iii) a 2D FLASH sequence developed locally from an editable scout sequence was significantly more accurate at measuring T1 than a commercially supplied 2D FLASH sequence. Since the in vivo measurement of T1 requires complicated imaging protocols and data analysis, two simple indices commonly used to quantitate signal enhancement were evaluated by computer simulation and comparison in a group of patients. The postulate that the index least influenced by pre-contrast tissue T1 (when using a contrast enhanced gradient echo imaging protocol) would be better able to correctly classify an undiagnosed lesion as either benign or malignant was used to evaluate which index was the most appropriate for quantitating signal enhancement in breast MRI. An index which normalised the difference between pre- and post-contrast signal to the fat signal intensity proved to be the better of the two indices. One problem with this index, however, is that it is sensitive to variations in fat signal through the breast. A simple uniformity correction scheme was implemented to reduce this problem and tested on both phantom and patient image data sets.
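The variable flip angle method mentioned is commonly implemented by linearising the spoiled gradient-echo (FLASH) signal equation, S(a) = M0 sin(a) (1 - E1) / (1 - E1 cos(a)) with E1 = exp(-TR/T1). The sketch below shows that standard linear fit under idealised conditions (perfect spoiling, known flip angles); the repetition time, flip angles and signal values are invented for illustration rather than taken from the thesis protocol.

```python
import numpy as np

def t1_from_variable_flip_angles(signals, flip_deg, tr_ms):
    """Estimate T1 from spoiled gradient-echo (FLASH) signals acquired at
    several flip angles, via the usual linearisation

        S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),   E1 = exp(-TR/T1).

    signals  : measured signal intensities, one per flip angle
    flip_deg : flip angles in degrees
    tr_ms    : repetition time in milliseconds
    Returns the estimated T1 in milliseconds.
    """
    a = np.deg2rad(np.asarray(flip_deg, dtype=float))
    s = np.asarray(signals, dtype=float)
    y = s / np.sin(a)
    x = s / np.tan(a)
    slope, _ = np.polyfit(x, y, 1)        # slope is the estimate of E1
    return -tr_ms / np.log(slope)

# Toy usage: simulate signals for T1 = 900 ms, TR = 30 ms, then recover T1.
tr, t1_true, m0 = 30.0, 900.0, 1000.0
angles = np.array([5.0, 15.0, 30.0, 50.0])
e1 = np.exp(-tr / t1_true)
sim = m0 * np.sin(np.deg2rad(angles)) * (1 - e1) / (1 - e1 * np.cos(np.deg2rad(angles)))
t1_est = t1_from_variable_flip_angles(sim, angles, tr)   # close to 900 ms
```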
869

Real-time stereo image matching for a real time photogrammetry system

Ord, Leslie B. January 1997
With the development of powerful, relatively low-cost digital image processing hardware capable of handling multiple image streams, it has become possible to implement affordable digital photogrammetry systems based on this technology. In addition, high-speed versions of this hardware have the ability to manipulate these image streams in 'real-time', enabling the photogrammetry systems developed to expand their functionality from the off-line surveying of conventional photogrammetry to more time-critical domains such as object tracking and control systems. One major hurdle facing these 'real-time' photogrammetry systems is the need to extract the corresponding points from the multiple input images so that they may be processed and measurements obtained. Even a highly skilled operator is not capable of manually processing the images quickly enough for the speed of operation of the system not to be severely compromised; thus an automatic system of matching these points is required. The use of automated point matching in the field of photogrammetry has been extensively investigated in the past. The objective has, however, been primarily to reduce the need for trained operators in the extraction of data from conventional photogrammetric studies and in the automation of data extraction from large data sets. The work presented here attempts to adapt these methods to the more time-dominated problem of 'real-time' image matching.
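A toy illustration of automatic point matching by normalised cross-correlation along a row of a rectified stereo pair is sketched below; the patch size, search range and rectification assumption are illustrative only and do not reproduce the real-time system described.

```python
import numpy as np

def match_point_ncc(left, right, row, col, patch=7, search=32):
    """Find the column in the right image that best matches a point in the
    left image, by normalised cross-correlation along the same row
    (rectified epipolar geometry is assumed).

    Returns the matching column and the peak correlation score.
    """
    h = patch // 2
    template = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    template -= template.mean()
    best_col, best_score = col, -np.inf
    for c in range(max(h, col - search), min(right.shape[1] - h, col + search + 1)):
        window = right[row - h:row + h + 1, c - h:c + h + 1].astype(float)
        window -= window.mean()
        denom = np.sqrt((template ** 2).sum() * (window ** 2).sum())
        score = (template * window).sum() / denom if denom > 0 else -np.inf
        if score > best_score:
            best_col, best_score = c, score
    return best_col, best_score

# Toy usage: the "right" image is the left image shifted 8 columns.
rng = np.random.default_rng(2)
left = rng.random((120, 160))
right = np.roll(left, -8, axis=1)
col_match, score = match_point_ncc(left, right, row=60, col=80)  # expects 72
```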
870

Measurement of image quality in nuclear medicine and radiology

Cantell, Gillian Diane January 1995
The imaging process can be thought of as the acquisition of data followed by the processing and display of those data. The image quality of the acquired data is assessed using objective methods: the spatial transfer characteristic was measured using the MTF, and the noise properties were assessed using Wiener spectra for gamma camera and film-screen systems. An overall measure of image quality, the noise equivalent quanta, can then be calculated. The image quality of the displayed data is assessed using subjective methods: contrast-detail test objects have been used for film-screen systems and forced-choice experiments for nuclear medicine data. The Wiener spectrum noise measurement has been investigated as a measure of uniformity. Simulated and gamma camera flood images were produced, and observer tests were carried out to give a contrast at which the non-uniform flood images could be distinguished from the uniform flood images. Wiener spectra were produced and single-number indices derived. Statistical tests were performed to determine the contrast at which the uniform and non-uniform Wiener spectra can be distinguished. Results showed that Wiener spectra measurements can be used as a measure of uniformity under certain conditions. The application of resolution and noise measurements to the evaluation of film-screen systems and radiographic techniques has been considered; the results follow the trends presented in the literature. Provided that the scanning equipment is available, tests on film-screen systems are practical to perform and are an important addition to other evaluation tests. Results show that the ideal observer approach of measuring the resolution and noise, and hence the noise equivalent quanta, is a practical method of assessing image quality in a hospital environment.
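Combining the resolution and noise measurements into noise equivalent quanta is often written as NEQ(f) = mean_signal^2 * MTF(f)^2 / NPS(f), with NPS the Wiener (noise power) spectrum. The sketch below applies that relation to invented MTF and noise-power values; it is not the measurement procedure used in the thesis.

```python
import numpy as np

def noise_equivalent_quanta(mtf, nps, mean_signal):
    """Combine resolution and noise measurements into NEQ.

    mtf         : modulation transfer function sampled at frequencies f
    nps         : noise power (Wiener) spectrum at the same frequencies,
                  in squared signal units per spatial frequency
    mean_signal : mean large-area signal level
    Uses the common relation  NEQ(f) = mean_signal**2 * MTF(f)**2 / NPS(f).
    """
    mtf = np.asarray(mtf, dtype=float)
    nps = np.asarray(nps, dtype=float)
    return (mean_signal ** 2) * mtf ** 2 / nps

# Toy usage with an invented Gaussian MTF and a flat Wiener spectrum.
f = np.linspace(0.01, 2.0, 50)                 # cycles/mm
mtf = np.exp(-(f / 0.8) ** 2)
nps = np.full_like(f, 40.0)
neq = noise_equivalent_quanta(mtf, nps, mean_signal=200.0)
```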
