451

Statistical image analysis and confocal microscopy

Alawadhi, Fahimah January 2001 (has links)
No description available.
452

Polygon-based hidden surface elimination algorithms : serial and parallel

Highfield, Julian Charles January 1994 (has links)
Chapter 1 introduces the need for rapid solutions of hidden surface elimination (HSE) problems in the interactive display of objects and scenes, as used in many application areas such as flight and driving simulators and CAD systems. It reviews the existing approaches to high-performance computer graphics and to parallel computing. It then introduces the central tenet of this thesis: that general purpose parallel computers may be usefully applied to the solution of HSE problems. Finally it introduces a set of metrics for describing sets of scene data, and applies them to the test scenes used in this thesis. Chapter 2 describes variants of several common image space hidden surface elimination algorithms, which solve the HSE problem for scenes described as collections of polygons. Implementations of these HSE algorithms on a traditional, serial, single microprocessor computer are introduced and theoretical estimates of their performance are derived. The algorithms are compared under identical conditions for various sets of test data. The results of this comparison are then placed in context with existing historical results. Chapter 3 examines the application of MIMD style parallelism to accelerate the solution of HSE problems. MIMD parallel implementations of the previously considered HSE algorithms are introduced. Their behaviour under various system configurations and for various data sets is investigated and compared with theoretical estimates. The theoretical estimates are found to match closely the experimental findings. Chapter 4 summarises the conclusions of this thesis, finding that HSE algorithms can be implemented to use an MIMD parallel computer effectively, and that of the HSE algorithms examined the z-buffer algorithm generally proves to be a good compromise solution.
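
The z-buffer approach identified above as a good compromise is simple to state: keep, per pixel, the depth of the nearest fragment seen so far and overwrite only when a nearer one arrives. A minimal NumPy sketch follows; it is not the thesis's implementation, and projected polygons are reduced to axis-aligned rectangles of constant depth purely for brevity.

```python
import numpy as np

def zbuffer_render(width, height, rects):
    """Minimal z-buffer sketch: rects are (x0, y0, x1, y1, depth, colour),
    axis-aligned stand-ins for projected polygons with constant depth."""
    depth = np.full((height, width), np.inf)      # z-buffer initialised to 'far'
    frame = np.zeros((height, width), dtype=int)  # colour/id buffer
    for x0, y0, x1, y1, z, colour in rects:
        region = depth[y0:y1, x0:x1]
        mask = z < region                          # keep only fragments nearer than stored depth
        region[mask] = z
        frame[y0:y1, x0:x1][mask] = colour
    return frame

# Example: the nearer rectangle (depth 2) hides part of the farther one (depth 5).
image = zbuffer_render(64, 64, [(10, 10, 50, 50, 5.0, 1), (30, 30, 60, 60, 2.0, 2)])
```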
453

Perception-driven automatic segmentation of colour images using mathematical morphology

Shafarenko, Leila January 1996 (has links)
This thesis is a study of perception-driven automatic segmentation of colour images. Despite the immediate practical interest of this task, very few reliable algorithms exist that are suitable for unsupervised processing. Most of the results presented in this thesis are based on mathematical morphology, a relatively new field which explores topological and geometrical properties of images and which has proven useful for image processing. An overview of morphological techniques can be found in chapter 2. A brief overview of segmentation methods is presented in chapter 3; only a small proportion of the vast number of publications on the subject is reviewed, namely those directly relevant to the subject of the thesis. Two novel non-parametric algorithms have been developed by the author for processing colour images. The first is for processing randomly textured images. It uses a bottom-up segmentation algorithm which takes into consideration both the colour and texture properties of the image. An "LUV gradient" is introduced which provides both a colour similarity measure and a basis for applying the watershed transform. The patches of the watershed mosaic are merged according to their colour contrast until a termination criterion is met; this criterion is based on the topology of a typical processed image. The resulting algorithm does not require any additional information, be it various thresholds, marker extraction rules and suchlike, and is thus suitable for automatic processing. The second algorithm deals with non-textured images and takes into consideration the noise introduced during image acquisition. The watershed algorithm is used to segment either the 2- or 3-dimensional colour histogram of an image. To comply with the way humans perceive colour, this segmentation has to take place in a perceptually uniform colour space such as the Luv space. To avoid over-segmentation, the watershed algorithm has to be applied to a smoothed-out histogram. The noise, however, is inhomogeneous in the Luv space, and a noise analysis for this space, based on experimentally justified assumptions, is presented. Both algorithms have been extensively tested on real data and were found to give stable results that are in good accord with human perception.
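
For readers unfamiliar with the watershed-on-colour-gradient idea, a rough sketch follows. It is not the thesis's algorithm: the "LUV gradient" is merely approximated here as the norm of per-channel Sobel gradients in Luv space, the contrast-driven region merging and termination criterion are omitted, and the scikit-image function names are assumptions of this sketch.

```python
import numpy as np
from skimage import color, filters, segmentation

def luv_gradient_watershed(rgb):
    """Sketch of the colour-gradient / watershed idea: an 'LUV gradient' is
    approximated as the Euclidean norm of per-channel Sobel gradients in the
    (perceptually more uniform) Luv space."""
    luv = color.rgb2luv(rgb)                                # convert to Luv
    grads = [filters.sobel(luv[..., c]) for c in range(3)]  # per-channel edge strength
    luv_gradient = np.sqrt(sum(g ** 2 for g in grads))      # combined colour gradient
    # Flooding the gradient surface from its minima yields the watershed mosaic;
    # the thesis then merges mosaic patches by colour contrast, which is omitted here.
    labels = segmentation.watershed(luv_gradient)
    return labels
```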
454

Quantitative and qualitative imaging in single photon emission tomography for nuclear medicine applications

Masoomi, Mojtaba Arash January 1989 (has links)
An important goal of single photon emission tomography (SPECT) is the determination of absolute regional radionuclide concentration as a function of time. Quantitative and qualitative studies of SPECT with regard to clinical application are the object of this work. Three basic approaches to image reconstruction, and the factors which affect the choice of a reconstruction algorithm, are reviewed and discussed, and the reconstruction techniques GRADY and CBP are evaluated using computer modelling. A sophisticated package of computational subroutines, RECLBL, for image reconstruction and for the generation of phantoms, fully implemented on PRIME, was used throughout this study. Two different systems, a rotating gamma-camera and a prototype scanning-rig, have been used to carry out tomography experiments with different phantoms in emission and transmission modes. Performance and reproducibility of the gamma-camera were assessed prior to the experimental work. SPECT studies are generally hampered for a number of reasons, the most severe being attenuation and scattering. The effect of scattered photons on image quality is discussed; three distinct techniques were utilised to correct the images, and the results were compared. Determination of the depth of a source (Am-241 and Tc-99m) in attenuating media (water and TEMEX) by analysing the spectroscopic data, based on the SPR and on spatial resolution, was studied; the results revealed that both techniques have the same range of depth sensitivity. A method of simultaneous emission and transmission tomography was developed to correct the images for attenuation, and the reproducibility of the technique was examined. The results showed that the technique offers a promising and practical approach to more accurate quantitative SPECT imaging. A procedure to evaluate images under certain conditions has been defined, and its properties were evaluated using computer modelling as well as real data. The usefulness of the odd sampling technique for improving image quality has been investigated, and the technique is recommended.
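
As a point of reference for the reconstruction side of this work, the sketch below runs a textbook filtered (convolution) back-projection on a software phantom using scikit-image. It is illustrative only, standing in for the CBP evaluation described above, and does not model the attenuation and scatter effects that dominate real SPECT data.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Illustrative only: filtered/convolution back-projection on a 2-D software phantom.
phantom = rescale(shepp_logan_phantom(), 0.25)        # small phantom slice
theta = np.linspace(0.0, 180.0, 90, endpoint=False)   # projection angles in degrees
sinogram = radon(phantom, theta=theta)                # simulated noise-free projections
reconstruction = iradon(sinogram, theta=theta)        # ramp-filtered back-projection
rms_error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
```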
455

Three dimensional sensing via coloured spots

Davies, Colin J. January 1996 (has links)
No description available.
456

Real-time computer generated imagery using stream processing techniques

Evemy, Jeffrey Dennis January 1989 (has links)
No description available.
457

Primitive extraction via gathering evidence of global parameterised models

Aguado Guadarrama, Alberto Sergio January 1996 (has links)
The extraction of geometric primitives from images is a fundamental task in computer vision. The objective of shape extraction is to find the position, and to recognise descriptive features (such as size and rotation), of objects for scene analysis and interpretation. The Hough transform is an established technique for extracting geometric shapes based on the duality between the points on a curve and their parameters. It has been developed for extracting simple geometric shapes such as lines, circles and ellipses, as well as arbitrary shapes represented in a non-analytic tabular form. The main drawback of the Hough transform is its computational requirement, whose memory space and processing time grow exponentially as the number of parameters used to represent a primitive increases. For this reason most research on the Hough transform has focused on reducing the computational burden of extracting simple geometric shapes. This thesis presents two novel techniques based on the Hough transform approach, one for ellipse extraction and the other for arbitrary shape extraction. The ellipse extraction technique confronts the primary problems of the Hough transform, namely the storage and computational load, by considering the angular changes in the position vector function of the points on an ellipse. These changes are expressed in terms of sets of points and gradient directions to obtain simplified mappings which split the five-dimensional parameter space required for ellipse extraction into two two-dimensional spaces and one one-dimensional space. The new technique for arbitrary shape extraction uses an analytic representation of arbitrary shapes. This representation extends the applicability of the Hough transform from lines and quadratic forms, such as circles and ellipses, to arbitrary shapes, avoiding the discretisation problems inherent in current (tabular) approaches. The analytic representation of shapes is based on the Fourier expansion of a curve, and the extraction process is formulated by including this representation in a general, novel definition of the Hough transform. In the development of this technique some parameter-reduction strategies are implemented and evaluated.
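
The evidence-gathering principle underlying all of this is easiest to see in the standard line Hough transform. The sketch below shows only that baseline accumulator, not the thesis's decomposed ellipse spaces or Fourier-based arbitrary-shape formulation.

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Minimal line Hough transform: every edge pixel votes for all (rho, theta)
    parameter pairs of lines passing through it; peaks in the accumulator are
    the extracted primitives."""
    h, w = edge_mask.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    accumulator = np.zeros((2 * diag, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edge_mask)                 # coordinates of edge pixels
    for theta_idx, theta in enumerate(thetas):
        rho = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(accumulator[:, theta_idx], rho, 1)   # accumulate votes
    return accumulator, thetas, diag
```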
458

Combining multiple features in texture classification

Ng, Liang Shing January 1999 (has links)
No description available.
459

Novel techniques for image texture classification

Chen, Yan Qiu January 1995 (has links)
Texture plays an increasingly important role in computer vision. It has found wide application in remote sensing, medical diagnosis, quality control, food inspection and so forth. This thesis investigates the problem of classifying texture in digital images, following the convention of splitting the problem into feature extraction and classification. Texture feature descriptions considered in this thesis include Liu's features, features from the Fourier transform using geometrical regions, the Statistical Gray-Level Dependency Matrix, and the Statistical Feature Matrix. Classification techniques considered in this thesis include the K-Nearest Neighbour Rule and the Error Back-Propagation method. Novel techniques developed during the author's Ph.D study include (1) a Generating Shrinking Algorithm that builds a three-layer feed-forward network to classify arbitrary patterns with guaranteed convergence and known generalisation behaviour, (2) a set of Statistical Geometrical Features for texture analysis based on the statistics of the geometrical properties of connected regions in a sequence of binary images obtained from a texture image, and (3) a neural implementation of the K-Nearest Neighbour Rule that can complete a classification task within 2K clock cycles. Experimental evaluation using the entire Brodatz texture database shows that (1) the Statistical Geometrical Features give the best performance for all the considered classifiers, (2) the Generating Shrinking Algorithm offers better performance than the Error Back-Propagation method, while the K-Nearest Neighbour Rule's performance is comparable to that of the Generating Shrinking Algorithm, and (3) the combination of the Statistical Geometrical Features with the Generating Shrinking Algorithm constitutes one of the best texture classification systems considered.
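
A crude sketch of the Statistical Geometrical Features idea, as the abstract describes it, is given below: binarise the texture at a sequence of thresholds and summarise counts of connected regions across the resulting binary images. The threshold set, the chosen property (region count only) and the summary statistics are illustrative assumptions, not the thesis's actual definitions; scikit-image's connected-component labelling is used for brevity.

```python
import numpy as np
from skimage.measure import label

def sgf_like_features(gray, thresholds=range(32, 224, 32)):
    """Crude sketch of the Statistical Geometrical Features idea for an 8-bit
    grayscale texture: threshold at several levels, count connected regions in
    each binary image and its complement, and summarise those counts."""
    counts_fg, counts_bg = [], []
    for t in thresholds:
        binary = gray >= t
        counts_fg.append(label(binary).max())    # number of '1'-valued regions
        counts_bg.append(label(~binary).max())   # number of '0'-valued regions
    counts = np.array(counts_fg + counts_bg, dtype=float)
    return np.array([counts.mean(), counts.max(), counts.std()])
```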
460

Attentive visual tracking and trajectory estimation for dynamic scene segmentation

Roberts, Jonathan Michael January 1994 (has links)
Intelligent Co-Pilot Systems (ICPS) offer the next challenge to vehicle-highway automation. The key to ICPSs is the detection of moving objects (other vehicles) from the moving observer using a visual sensor. The aim of the work presented in this thesis was to design and implement a feature detection and tracking strategy that is capable of tracking image features independently, in parallel, and in real-time and to cluster/segment features utilising the inherent temporal information contained within feature trajectories. Most images contain areas that are of little or no interest to vision tasks. An attentive, data-driven, approach to feature detection and tracking is proposed which aims to increase the efficiency of feature detection and tracking by focusing attention onto relevant regions of the image likely to contain scene structure. This attentive algorithm lends itself naturally to parallelisation and results from a parallel implementation are presented. A scene may be segmented into independently moving objects based on the assumption that features belonging to the same object will move in an identical way in three dimensions (this assumes objects are rigid). A model for scene segmentation is proposed that uses information contained within feature trajectories to cluster, or group, features into independently moving objects. This information includes: image-plane position, time-to-collision of a feature with the image-plane, and the type of motion observed. The Multiple Model Adaptive Estimator (MMAE) algorithm is extended to cope with constituent filters with different states (MMAE2) in an attempt to accurately estimate the time-to-collision of a feature and provide a reliable idea of the type of motion observed (in the form of a model belief measure). Finally, poor state initialisation is identified as a likely prime cause for poor Extended Kalman Filter (EKF) performance (and hence poor MMAE2 performance) when using high order models. The idea of the neurofuzzy initialised EKF (NF-EKF) is introduced which attempts to reduce the time for an EKF to converge by improving the accuracy of the EKF's initial state estimates.
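
The building block behind the MMAE bank described above is an ordinary Kalman filter run per image feature; a minimal constant-velocity sketch follows. The MMAE2 extension, the time-to-collision state and the neurofuzzy initialisation are thesis-specific and are not reproduced here, and the noise parameters below are illustrative assumptions.

```python
import numpy as np

def constant_velocity_kf(measurements, dt=1.0, meas_var=1.0, accel_var=0.01):
    """Single linear Kalman filter tracking one image feature under a
    constant-velocity model -- the kind of constituent filter the thesis runs
    in a bank (MMAE) over several motion models.  State: [x, y, vx, vy]."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt              # state transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0      # only position is observed
    Q = accel_var * np.eye(4)                          # crude process noise
    R = meas_var * np.eye(2)                           # measurement noise
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4) * 100.0                              # vague initial covariance
    track = []
    for z in measurements[1:]:
        x = F @ x; P = F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)            # update with measurement
        P = (np.eye(4) - K @ H) @ P
        track.append(x.copy())
    return track
```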
