451

Spatial frequencies and face recognition

Costen, Nicholas Paul January 1994 (has links)
If face images are degraded by spatial quantisation there is a non-linear acceleration of the decline of recognition accuracy as block-size increases, suggesting recognition requires a critical minimum range of object spatial frequencies. These may define the facial configuration, reflecting the structural properties allowing differentiation of faces. Experiment 1 measured speed and accuracy of recognition of six fronto-parallel faces shown with 11, 21 and 42 pixels/face, produced by quantisation, a Fourier low-pass filter and Gaussian blurring. Performance declined with image quality in a significant, non-linear manner, but faster for the quantised images. Experiment 2 found some of this additional decline was due to frequency-domain masking. Experiment 3 compared recognition for quantised, Fourier low-pass and high-pass versions; recognition was only impaired when the frequency limit exceeded the range 4.5-12.5 cycles/face. Experiment 4 found this was not due to contrast differences. Experiments 5, 6 and 7 used octave band-pass filters centred on 4.14, 9.67 and 22.15 cycles/face, varying view-point for both sequential matching and recognition. The spatial frequency effect was not found for matching, but was for recognition. Experiment 8 also measured recognition of band-passed images, presented with octave bands centred on 2.46-50.15 cycles/face and at 0-90 degrees from fronto-parallel. Spatial frequency effects were found at all angles, with best performance for semi-profile images and 11.10 cycles/face. Experiment 9 replicated this, with perceptually equal contrasts and the outer facial contour removed. Modelling showed this reflected a single spatial-frequency channel two octaves wide, centred on 9 cycles/face. Experiment 10 measured response time for successive matching of faces across a size-disparity, finding an asymmetrical effect.
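The octave band-pass manipulation central to these experiments can be sketched in the frequency domain. The log-Gaussian band shape, function name, and parameter choices below are our assumptions for illustration, not the thesis's exact filters:

```python
import numpy as np

def bandpass_cycles_per_face(img, centre_cpf, octaves=1.0):
    """Octave band-pass filter an N x N face image in the frequency domain.

    `centre_cpf` is the centre frequency in cycles/face (cycles per image
    width); the pass band is a log-Gaussian `octaves` wide at half height.
    A hypothetical sketch of the kind of filtering used in the experiments.
    """
    n = img.shape[0]
    fy = np.fft.fftfreq(n) * n          # frequencies in cycles/face
    fx = np.fft.fftfreq(n) * n
    r = np.hypot(fy[:, None], fx[None, :])
    r[0, 0] = 1e-6                      # avoid log(0) at DC
    # convert full width at half maximum (in octaves) to a Gaussian sigma
    sigma = octaves / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gain = np.exp(-0.5 * (np.log2(r / centre_cpf) / sigma) ** 2)
    gain[0, 0] = 0.0                    # remove DC; mean is re-added below
    out = np.fft.ifft2(np.fft.fft2(img) * gain).real
    return out + img.mean()
```

For example, `bandpass_cycles_per_face(face, 9.67)` would pass the middle of the three bands used in Experiments 5-7 while strongly attenuating frequencies more than an octave away.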
452

Quantitation of contrast enhancement in dynamic magnetic resonance imaging of the breast

Brookes, Jason A. January 1996 (has links)
This thesis explores issues relating to the quantitation of both signal enhancement and contrast agent uptake, along with problems associated with such quantitation, with the aim of improving the specificity of dynamic, contrast enhanced breast MRI. A variable flip angle technique for measuring T1 in vivo was implemented using 2D and 3D FLASH sequences, in order to monitor the differential relaxation rate following injection of contrast agent. Experiments (with phantom objects) investigating sources of error in these techniques found that (i) the rf transmit power calibration automatically performed by the imaging system was 13.5% in error, (ii) significant non-uniformity in the rf transmit field existed over the breast coil volume and (iii) a 2D FLASH sequence developed locally from an editable scout sequence was significantly more accurate at measuring T1 than a commercially supplied 2D FLASH sequence. Since the in vivo measurement of T1 requires complicated imaging protocols and data analysis, two simple indices commonly used to quantitate signal enhancement were evaluated by computer simulation and comparison in a group of patients. The postulate that the index least influenced by pre-contrast tissue T1 (when using a contrast enhanced gradient echo imaging protocol) would be better able to correctly classify an undiagnosed lesion as either benign or malignant was used to evaluate which index was the most appropriate for quantitating signal enhancement in breast MRI. An index which normalised the difference between pre- and post-contrast signal to the fat signal intensity proved to be the better of the two indices. One problem with this index, however, is that it is sensitive to variations in fat signal through the breast. A simple uniformity correction scheme was implemented to reduce this problem and tested on both phantom and patient image data sets.
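The variable flip angle idea can be illustrated with the standard linearisation of the spoiled gradient-echo (FLASH) signal equation. The function and variable names are ours, and this sketch omits the rf-field non-uniformity corrections the thesis shows to be necessary in practice:

```python
import numpy as np

def t1_from_variable_flip_angles(signals, flip_deg, tr):
    """Estimate T1 from spoiled gradient-echo (FLASH) signals acquired at
    several flip angles, using the linearisation

        S/sin(a) = E1 * S/tan(a) + M0 * (1 - E1),  E1 = exp(-TR/T1),

    so that a straight-line fit gives E1 as the slope. `tr` and the
    returned T1 share the same time units. Illustrative sketch only.
    """
    a = np.deg2rad(np.asarray(flip_deg, dtype=float))
    s = np.asarray(signals, dtype=float)
    y = s / np.sin(a)
    x = s / np.tan(a)
    slope, intercept = np.polyfit(x, y, 1)   # least-squares line fit
    return -tr / np.log(slope)
```

For noiseless data the linearisation is exact, so two or more flip angles recover T1; with real data, errors in the actual applied flip angle (the rf calibration and coil non-uniformity issues found in the phantom experiments) propagate directly into the T1 estimate.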
453

Real-time stereo image matching for a real time photogrammetry system

Ord, Leslie B. January 1997 (has links)
With the development of powerful, relatively low-cost digital image processing hardware capable of handling multiple image streams, it has become possible to implement affordable digital photogrammetry systems based on this technology. In addition, high-speed versions of this hardware can manipulate these image streams in 'real-time', enabling the photogrammetry systems developed to expand their functionality from the off-line surveying of conventional photogrammetry to more time-critical domains such as object tracking and control systems. One major hurdle facing these 'real-time' photogrammetry systems is the need to extract the corresponding points from the multiple input images in order that they may be processed and measurements obtained. Even a highly skilled operator cannot process the images manually quickly enough to avoid severely compromising the speed of operation of the system. Thus an automatic system for matching these points is required. The use of automated point matching in the field of photogrammetry has been extensively investigated in the past. The objective has, however, been primarily to reduce the need for trained operators in the extraction of data from conventional photogrammetric studies and in the automation of data extraction from large data sets. The work presented here attempts to adapt these methods to the more time-dominated problem of 'real-time' image matching.
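The corresponding-point problem the abstract describes is commonly attacked with window-based correlation matching. The sketch below finds the disparity of a single point between rectified stereo images by normalised cross-correlation; real photogrammetry systems add sub-pixel refinement and left-right consistency checks, and this is our illustration rather than the thesis's algorithm:

```python
import numpy as np

def match_point_ncc(left, right, y, x, half=5, max_disp=32):
    """Find the horizontal disparity of point (y, x) between rectified
    stereo images by maximising normalised cross-correlation over a
    (2*half+1)-square window. Returns (disparity, score)."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    patch = patch - patch.mean()
    best_d, best_score = 0, -np.inf
    for d in range(max_disp + 1):
        if x - d - half < 0:            # candidate window leaves the image
            break
        cand = right[y - half:y + half + 1,
                     x - d - half:x - d + half + 1].astype(float)
        cand = cand - cand.mean()
        denom = np.sqrt((patch ** 2).sum() * (cand ** 2).sum())
        score = (patch * cand).sum() / denom if denom > 0 else -np.inf
        if score > best_score:
            best_d, best_score = d, score
    return best_d, best_score
```

The per-point cost is small and fixed, which is why correlation matching of this kind was a natural candidate for the real-time hardware the thesis targets.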
454

Measurement of image quality in nuclear medicine and radiology

Cantell, Gillian Diane January 1995 (has links)
The imaging process can be thought of as the acquisition of data and the processing and display of data. The image quality of the acquired data is assessed using objective methods. The spatial transfer characteristic was measured using the MTF, and the noise properties assessed using Wiener spectra for gamma camera and film-screen systems. An overall measure of image quality, noise equivalent quanta, can then be calculated. The image quality of the displayed data is assessed using subjective methods. Contrast detail test objects have been used for film-screen systems and forced choice experiments for nuclear medicine data. The Wiener spectrum noise measurement has been investigated as a measure of uniformity. Simulated and gamma camera flood images were produced. Observer tests were carried out to give a contrast at which the non-uniform flood images could be distinguished from the uniform flood images. Wiener spectra were produced and single number indices derived. Statistical tests were performed to determine the contrast at which the uniform and non-uniform Wiener spectra can be distinguished. Results showed that Wiener spectra measurements can be used as a measure of uniformity under certain conditions. The application of resolution and noise measurements to the evaluation of film-screen systems and radiographic techniques has been considered. The results follow the trends presented in the literature. Provided that the scanning equipment is available, tests on film-screen systems are practical to perform and are an important addition to other evaluation tests. Results show the ideal observer approach of measuring the resolution, noise and hence noise equivalent quanta is a practical method of assessing image quality in a hospital environment.
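The Wiener (noise power) spectrum measurement at the heart of the uniformity work can be sketched as a radially averaged periodogram of a mean-subtracted flood image. The windowing and binning choices here are our assumptions, not the thesis's exact protocol:

```python
import numpy as np

def wiener_spectrum_1d(flood, pixel_mm=1.0):
    """Radially averaged noise power (Wiener) spectrum of a square flood
    image: remove the mean, form the 2-D periodogram, then average power
    in radial spatial-frequency bins. Returns (frequencies, nps)."""
    n = flood.shape[0]
    resid = flood - flood.mean()
    nps2d = np.abs(np.fft.fft2(resid)) ** 2 * (pixel_mm ** 2) / (n * n)
    f = np.fft.fftfreq(n, d=pixel_mm)
    r = np.hypot(f[:, None], f[None, :])
    bins = np.linspace(0.0, r.max(), n // 2)
    idx = np.digitize(r.ravel(), bins)
    nps1d = np.array([nps2d.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                      for i in range(1, len(bins))])
    return bins[1:], nps1d
```

A non-uniform flood adds low-frequency power to this spectrum, which is what lets a single-number index derived from it serve as a uniformity measure; combined with the MTF, the same quantities give the noise equivalent quanta, NEQ(f) proportional to MTF(f)^2 divided by the normalised noise power spectrum.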
455

Tools for image processing and computer vision

Hunt, Neil January 1990 (has links)
The thesis describes progress towards the construction of a seeing machine. Currently, we do not understand enough about the task to build more than the simplest computer vision systems; what is understood, however, is that tremendous processing power will surely be involved. I explore the pipelined architecture for vision computers, and I discuss how it can offer both powerful processing and flexibility. I describe a proposed family of VLSI chips based upon such an architecture, each chip performing a specific image processing task. The specialisation of each chip allows high performance to be achieved, and a common pixel interconnect interface on each chip allows them to be connected in arbitrary configurations in order to solve different kinds of computational problems. While such a family of processing components can be assembled in many different ways, a programmable computer offers certain advantages, in that it is possible to change the operation of such a machine very quickly, simply by substituting a different program. I describe a software design tool which attempts to secure the same kind of programmability advantage for exploring applications of the pipelined processors. This design tool simulates complete systems consisting of several of the proposed processing components, in a configuration described by a graphical schematic diagram. A novel time skew simulation technique developed for this application allows coarse grain simulation for efficiency, while preserving the fine grain timing details. Finally, I describe some experiments which have been performed using the tools discussed earlier, showing how the tools can be put to use to handle real problems.
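The chaining of specialised stages through a common pixel interface can be mimicked in software with composed stream transformers. This generator sketch is only a software analogy for the hardware architecture the thesis describes, and every name in it is ours:

```python
def pipeline(*stages):
    """Compose image-processing stages the way the proposed chips chain
    through a common pixel interconnect: each stage consumes a pixel
    stream and yields a transformed stream."""
    def run(pixels):
        stream = iter(pixels)
        for stage in stages:
            stream = stage(stream)   # wire this stage's input to the previous output
        return list(stream)
    return run

def threshold(level):
    """Stage factory: binarise the stream at `level`."""
    def stage(pixels):
        for p in pixels:
            yield 255 if p >= level else 0
    return stage

def invert(pixels):
    """Stage: photometric inversion of an 8-bit stream."""
    for p in pixels:
        yield 255 - p
```

Reconfiguring the machine is then just substituting a different composition, e.g. `pipeline(threshold(128), invert)`, which mirrors the programmability advantage the design tool aims to secure for the hardware pipelines.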
456

The application of multiple bandpass filters in image processing

Lloyd, R. O. January 1981 (has links)
No description available.
457

A real-time computer generated imagery system for flight simulators

Lok, Y. C. F. January 1983 (has links)
No description available.
458

Pattern recognition in the case of strong background noise

Wang, Xingmei January 2001 (has links)
Dissertation submitted in compliance with the requirements for Master's Degree in Technology: Mechanical Engineering, Technikon Natal, 2001. / This study presents the development of a method for recognition of a class of patterns in signals contaminated by strong noise. The class of signals considered is described by a finite alphabet. The target class of patterns is assumed to have specific statistical properties that can be conveniently captured by the position weight matrix (PWM) description. It is also assumed that the signals contain numerous patterns similar to the patterns of the target class, but which belong to different classes. These other patterns represent the noise in the signals. The method for improved recognition of the target class of patterns is based on clustering of the target motifs with regard to distance from the reference point (event) in the signal. This positional clustering enables a more precise description of the target class of patterns by means of the PWMs. However, it requires the use of as many PWMs as there are clusters of the target class. The method developed is of a general nature, applicable to the situations described. It is, however, applied here to the recognition of specific short motifs in DNA sequences. The short motif considered is the TATA-box, one of the most important docking sites for proteins in eukaryotic polymerase II promoter regions. The reference point in the signals obtained from DNA sequences is the transcription start site (TSS). The positional clustering of the TATA-box motif resulted in 20 different PWMs, instead of only one that describes the whole TATA motif class. This resulted in more discriminative PWMs, and the recognition accuracy increased by about a factor of two when compared to recognition of the TATA motif based on the original PWM.
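PWM-based motif recognition of the kind the abstract builds on can be sketched as summed per-position log-probabilities scanned along a sequence. The frequencies in the test are illustrative, not the thesis's TATA-box matrices:

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def pwm_score(pwm, site):
    """Log-probability score of a DNA site under a position weight matrix:
    the sum over positions of log p(observed base at that position)."""
    return sum(np.log(pwm[BASES[b], i]) for i, b in enumerate(site))

def scan(pwm, seq):
    """Slide the PWM along `seq`; return (best_score, best_position).
    Positional clustering, as in the thesis, would select among several
    PWMs according to the site's distance from a reference point."""
    w = pwm.shape[1]
    scores = [(pwm_score(pwm, seq[i:i + w]), i)
              for i in range(len(seq) - w + 1)]
    return max(scores)
```

Using one PWM per positional cluster sharpens each matrix because the base frequencies are no longer averaged over sites at very different distances from the reference event, which is the mechanism behind the reported accuracy gain.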
459

A knowledge-based system for extraction and recognition of linear features in high resolution remotely-sensed imagery

Peacegood, Gillian January 1989 (has links)
A knowledge-based system for the automatic extraction and recognition of linear features from digital imagery has been developed, with a knowledge base applied to the recognition of linear features in high resolution remotely sensed imagery, such as SPOT HRV and XS, Thematic Mapper and high altitude aerial photography. In contrast to many knowledge-based vision systems, emphasis is placed on uncertainty and the exploitation of context via statistical inferencing techniques, and issues of strategy and control are given less emphasis. Linear features are extracted from imagery, which may be multiband imagery, using an edge detection and tracking algorithm. A relational database for the representation of linear features has been developed, and this is shown to be useful in a number of applications, including general purpose query and display. A number of proximity relationships between the linear features in the database are established, using computationally efficient algorithms. Three techniques for classifying the linear features by exploiting uncertainty and context have been implemented and are compared. These are Bayesian inferencing using belief networks, a new inferencing technique based on belief functions and relaxation labelling using belief functions. The two inferencing techniques are shown to produce more realistic results than probabilistic relaxation, and the new inferencing method based on belief functions to perform best in practical situations. Overall, the system is shown to produce reasonably good classification results on hand extracted linear features, although the classification is less good on automatically extracted linear features because of shortcomings in the edge detection and extraction processes. The system adopts many of the features of expert systems, including complete separation of control from stored knowledge and justification for the conclusions reached.
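The belief-function machinery underlying two of the three classifiers can be illustrated with Dempster's rule of combination. This is the textbook rule, not the thesis's own inference code, and the feature classes in the test are invented for illustration:

```python
def combine_dempster(m1, m2):
    """Dempster's rule of combination for two basic mass assignments,
    each a dict mapping frozenset focal elements to masses summing to 1.
    Mass falling on the empty intersection (conflict) is renormalised
    away, concentrating belief on hypotheses both sources support."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}
```

Combining, say, a spectral cue that favours 'road' with a contextual cue split between 'road' and 'river' yields a sharper mass function than either source alone, which is how contextual evidence about neighbouring linear features can reinforce or suppress a classification.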
460

Perceptual models in speech quality assessment and coding

Savvides, Vasos E. January 1988 (has links)
The ever-increasing demand for good communications/toll quality speech has created a renewed interest into the perceptual impact of rate compression. Two general areas are investigated in this work, namely speech quality assessment and speech coding. In the field of speech quality assessment, a model is developed which simulates the processing stages of the peripheral auditory system. At the output of the model a "running" auditory spectrum is obtained. This represents the auditory (spectral) equivalent of any acoustic sound such as speech. Auditory spectra from coded speech segments serve as inputs to a second model. This model simulates the information centre in the brain which performs the speech quality assessment.
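A peripheral auditory model of the kind described typically begins by warping frequency onto a critical-band scale before forming the auditory spectrum. The sketch below uses Zwicker's standard Hz-to-Bark approximation; it is a generic ingredient of such models, not necessarily the exact warping used in the thesis:

```python
import math

def hz_to_bark(f_hz):
    """Map frequency in Hz to the Bark critical-band scale using
    Zwicker's approximation:
        z = 13 * atan(0.00076 f) + 3.5 * atan((f / 7500)^2)
    Energy summed within equal Bark intervals approximates the
    critical-band integration of the peripheral auditory system."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)
```

Binning a short-time power spectrum into equal-width Bark bands, frame by frame, yields exactly the kind of "running" auditory spectrum the abstract describes as the model's output.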