About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Atomic representation for subspace clustering and pattern classification

Wang, Yu Long January 2017 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
102

Non-rigid visual object tracking with statistical learning of appearance model

Lin, Cong January 2017 (has links)
University of Macau / Faculty of Science and Technology / Department of Computer and Information Science
103

Image analysis using digitized video input

Spijkerman, Lambertus Gerrit 20 November 2014 (has links)
M.Sc. (Computer Science) / This dissertation examines the field of computer vision, with special attention being given to vision systems that support digitized video image analysis. The study may be broadly divided into three main sections. The first part offers an introduction to standard vision systems, focusing on the hardware architectures and the image analysis techniques used on them. Hardware configurations depend mainly on the selected frame-grabber and processor type. Parallel architectures are highlighted, as they represent the most suitable platform for a real-time digitized video image analysis system. The image analysis techniques discussed include image preprocessing, segmentation, edge detection, optical flow analysis, and optical character recognition. The second part of the study covers a number of real-world computer vision applications and commercially available development environments. Several traffic surveillance systems are discussed in detail, as they relate to the practical vehicle identification system developed in the third part of the study. As mentioned above, the development of a Vehicle Identification Prototype, called VIP, forms the basis for the third and final part of this study. The VIP hardware requirements are given, and the software development and use are explained. The VIP's source code is provided so that it may be evaluated or modified by any interested parties.
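
The image analysis techniques listed above can be illustrated with a small edge-detection sketch. The Sobel kernels and the threshold below are generic choices for illustration and are not taken from the VIP source code mentioned in the abstract.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_edges(gray, threshold=0.25):
    """Gradient-magnitude edge map from a 2-D grayscale array in [0, 1].

    The kernels and the threshold are assumed values, not parameters from
    the VIP system described in the abstract.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                  # vertical-gradient kernel
    gx = convolve(gray, kx, mode="nearest")
    gy = convolve(gray, ky, mode="nearest")
    magnitude = np.hypot(gx, gy)
    magnitude /= max(magnitude.max(), 1e-12)   # normalise to [0, 1]
    return magnitude > threshold               # boolean edge map
```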
104

A study on Hough transform-based fingerprint alignment algorithms

Mlambo, Cynthia Sthembile 26 June 2015 (has links)
M.Ing. (Electrical and Electronic Engineering) / Please refer to full text to view abstract
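
The abstract is not reproduced here, but Hough transform-based fingerprint alignment is generally formulated as voting over candidate rotation and translation parameters derived from pairs of minutiae. The sketch below illustrates that general formulation only; the bin counts, step sizes, and ranges are assumptions and nothing here is drawn from the dissertation itself.

```python
import numpy as np

def hough_align(ref, query, angle_bins=36, shift_step=4, shift_range=128):
    """Vote for the rotation/translation that best maps `query` minutiae onto `ref`.

    Each minutia is a tuple (x, y, theta) with theta in radians. All bin and
    range values are illustrative assumptions.
    """
    accum = {}
    for xr, yr, tr in ref:
        for xq, yq, tq in query:
            dtheta = (tr - tq) % (2 * np.pi)
            a = int(dtheta / (2 * np.pi) * angle_bins) % angle_bins
            # Rotate the query minutia by the candidate angle, then vote for
            # the translation that moves it onto the reference minutia.
            c, s = np.cos(dtheta), np.sin(dtheta)
            dx = xr - (c * xq - s * yq)
            dy = yr - (s * xq + c * yq)
            if abs(dx) > shift_range or abs(dy) > shift_range:
                continue
            key = (a, int(dx // shift_step), int(dy // shift_step))
            accum[key] = accum.get(key, 0) + 1
    if not accum:
        return None
    return max(accum, key=accum.get)  # best (angle bin, dx bin, dy bin)
```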
105

The extraction of landslides in a satellite image using a digital elevation model

Donahue, John Patrick January 1987 (has links)
Landslides in the landscape exhibit predictable properties of shape, structure and orientation. These properties are reflected to varying degrees in their depiction in a satellite image. Landslides can be isolated along with similar objects in a digital image using differential and template operators. Extraction of the landslide features from these images can proceed using a logic-based model which draws on an appropriate object definition approximating the depiction of the landslides in an edge-operated image and a digital elevation model. An object extraction algorithm based on these concepts is used in repeated trials to ascertain the effectiveness of this automated approach. A low resolution linear object definition (Fischler et al., 1981) is used to isolate candidate pixel segments in three enhanced images. These segments are classified as landslides or non-landslides according to their image pixel intensity, length, slope, and orientation. Digital elevation data is used to evaluate slope and orientation criteria. Results are compared to an inventory of landslides made using aerial photographs. Study results indicate that 17% to 28% of landslides in the image are identified for trials that produce a commission error rate of less than 50%. Commission errors are dominated by image objects related to roads and waste wood areas in clearcuts. A higher rate of successful identification was noted for landslides which occurred within 15 years of image acquisition (24% to 32%), and was most apparent for the subset of that group which was located in areas that were harvested more than 15 years before acquisition or were unharvested (29% to 38%). Successful identifications in the trials are dominated by events greater than 300 metres long and wider than 20 metres. The results suggest that the approach is more reliable in unharvested areas of the image. The poor quality of the digital elevation data, specifically artifacts produced by the contour-to-grid algorithm, was partly responsible for errors of commission and omission. The simplicity of the object definition used is another factor in error production. The methodology is not operational, but represents a realistic approach to scene segmentation for resource management given further refinement. / Forestry, Faculty of / Graduate
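
The classification step described above, keeping only those candidate segments whose intensity, length, slope, and orientation fit a landslide-like profile, can be sketched as a simple threshold filter. All field names, thresholds, and the orientation test below are assumptions for illustration and are not the criteria actually used in the study.

```python
import numpy as np

def classify_segments(segments, dem_slope, dem_aspect,
                      min_length=300, min_intensity=0.4,
                      slope_range=(15, 45), aspect_tolerance=30):
    """Keep candidate segments whose attributes fit a landslide-like profile.

    `segments` is a list of dicts with keys 'pixels' (an N x 2 array of
    row/column indices), 'intensity' (mean edge-image response), and
    'length_m'; `dem_slope` and `dem_aspect` are per-pixel arrays in degrees
    derived from the digital elevation model. All thresholds are assumed.
    """
    kept = []
    for seg in segments:
        rows, cols = seg["pixels"][:, 0], seg["pixels"][:, 1]
        slope = dem_slope[rows, cols].mean()
        aspect = dem_aspect[rows, cols].mean()
        # In-image orientation of the segment (degrees, modulo 180).
        dr, dc = rows[-1] - rows[0], cols[-1] - cols[0]
        orientation = np.degrees(np.arctan2(dr, dc)) % 180
        # How far the segment deviates from the local downslope direction.
        mismatch = abs((orientation - aspect + 90) % 180 - 90)
        if (seg["length_m"] >= min_length
                and seg["intensity"] >= min_intensity
                and slope_range[0] <= slope <= slope_range[1]
                and mismatch <= aspect_tolerance):
            kept.append(seg)
    return kept
```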
106

Restoration of images degraded by systems of random impulse response

Revelant, Ivan L. January 1987 (has links)
The problem of restoring an image distorted by a system consisting of a stochastic impulse response in conjunction with additive noise is investigated. The method of constrained least squares is extended to this problem, and leads to the development of a new technique based on the minimization of a weighted error function. Results obtained using the new method are compared with those obtained by constrained least squares, and by the Wiener filter and approximations thereof. It is found that the new technique, "Weighted Least Squares", gives superior results if the noise in the impulse response is comparable to or greater than the additive noise. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
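
For context, the constrained least squares baseline that this thesis extends can be written as a single frequency-domain filter. The sketch below shows only that standard baseline, not the proposed weighted least squares technique; the regularisation weight `gamma` is an assumed value.

```python
import numpy as np

def cls_restore(blurred, psf, gamma=0.01):
    """Constrained least squares restoration in the frequency domain.

    This is the standard CLS baseline, not the weighted-least-squares method
    proposed in the thesis; `gamma` is an assumed regularisation weight and
    `psf` is the blur kernel.
    """
    rows, cols = blurred.shape
    # Laplacian smoothness constraint, zero-padded to the image size.
    lap = np.zeros((rows, cols))
    lap[:3, :3] = np.array([[0, -1, 0],
                            [-1, 4, -1],
                            [0, -1, 0]])
    H = np.fft.fft2(psf, s=(rows, cols))
    P = np.fft.fft2(lap)
    G = np.fft.fft2(blurred)
    # Penalise solutions with large second derivatives while inverting H.
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F_hat))
```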
107

A cooperative scheme for image understanding using multiple sources of information

Glicksman, Jay January 1982 (has links)
One method of resolving the ambiguity inherent in interpreting images is to add different sources of information. The multiple information source paradigm emphasizes the ability to utilize knowledge gained from one source that may not be present in another. However, utilizing disparate information may create situations in which data from different sources are inconsistent. A schemata-based system has been developed that can take advantage of multiple sources of information. Schemata are combined into a semantic network via the relations decomposition, specialization, instance of, and neighbour. Control depends on the structure of the evolving network and a cycle of perception. Schemata cooperate by message passing so that attention can be directed where it will be most advantageous. This system has been implemented to interpret aerial photographs of small urban scenes. Geographic features are identified using up to three information sources: the intensity image, a sketch map, and information provided by the user. The product is a robust system where the accuracy of the results reflects the quality and amount of data provided. Images of several geographic locales are analyzed, and positive results are reported. / Science, Faculty of / Computer Science, Department of / Graduate
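
A toy sketch of the data structure implied by the abstract: schemata linked by the four named relations and cooperating by message passing. The relation names follow the abstract; the field names, message format, and confidence score are assumptions added purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Schema:
    """Toy schema node: a label, typed links to other schemata, and a mailbox."""
    name: str
    confidence: float = 0.0  # assumed notion of image support, not from the thesis
    relations: dict = field(default_factory=lambda: {
        "decomposition": [], "specialization": [], "instance_of": [], "neighbour": []})
    mailbox: list = field(default_factory=list)

    def link(self, relation, other):
        self.relations[relation].append(other)

    def send(self, other, message):
        # Cooperating schemata exchange hypotheses so that attention can be
        # directed to the most promising parts of the evolving network.
        other.mailbox.append((self.name, message))

# e.g. a "road" schema telling a neighbouring schema it has strong support:
road, house = Schema("road"), Schema("house")
road.link("neighbour", house)
road.send(house, {"hypothesis": "road present", "support": 0.8})
```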
108

Application of pattern recognition to projective 3D image processing problems.

Danaila, Mariana Liana 12 March 2014 (has links)
This dissertation presents the development and performance of a few algorithms used for automated scene matching. The objective is to recognise and predict the location of a template (reference image) inside a degraded scene image (sensed image). A set of perspective, projective optical images of relatively well defined man-made objects located in areas of varying background is used as the database. Perturbations to the grey levels of the image cause artefacts that easily destroy the unique match location and generate false fixes. Therefore, suitable enhancement and noise removal techniques are applied first. Several different types of features are investigated to decide upon those that are best suited to describe the original content of the scene. Statistical features, such as invariant moments, are chosen for one of the algorithms, Multiband Image using Moments (MBIMOM). The second one, the Spatial Multiband Image (SMBI) algorithm, uses the spatial correlation of the pixels within a neighbourhood as initial descriptive features. Each algorithm uses either the Principal Components transform or the Maximum Noise Fraction transform for dimensionality and noise reduction. A normalised correlation coefficient of 1.00 was achieved by the SMBI algorithm. The final design of the algorithms is a trade-off between speed and accuracy.
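
The matching criterion reported above, a normalised correlation coefficient with 1.00 indicating a perfect match, can be sketched as a brute-force template scan. The feature-extraction stages described in the abstract (invariant moments, PCA/MNF transforms) are omitted here, and the function is an illustration rather than the dissertation's algorithm.

```python
import numpy as np

def normalised_correlation_map(scene, template):
    """Slide `template` over `scene` and return the normalised correlation
    coefficient at each position; a value of 1.0 means a perfect match."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.full((scene.shape[0] - th + 1, scene.shape[1] - tw + 1), -1.0)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = scene[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom > 0:
                out[i, j] = (p * t).sum() / denom  # correlation coefficient
    return out
```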
109

Classification of wheat kernels by machine-vision measurement

Schmalzried, Terry Eugene. January 1985 (has links)
Call number: LD2668 .T4 1985 S334 / Master of Science
110

Digital image noise smoothing using high frequency information

Jarrett, David Ward, 1963- January 1987 (has links)
The goal of digital image noise smoothing is to smooth noise in the image without smoothing edges and other high frequency information. Statistically optimal methods must use accurate statistical models of the image and noise. Subjective methods must also characterize the image. Two methods using high frequency information to augment existing noise smoothing methods are investigated: two component model (TCM) smoothing and second derivative enhancement (SDE) smoothing. TCM smoothing applies an optimal noise smoothing filter to a high frequency residual, extracted from the noisy image using a two component source model. The lower variance and increased stationarity of the residual, compared to the original image, increase this filter's effectiveness. SDE smoothing enhances the edges of the low pass filtered noisy image with the second derivative, extracted from the noisy image. Both methods are shown to perform better than the methods they augment, through objective (statistical) and subjective (visual) comparisons.
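
The SDE idea, low-pass filtering the noisy image and then adding back second-derivative edge information taken from the noisy image, might be sketched as follows. The Gaussian low-pass filter, the Laplacian operator, and the weights are assumptions for illustration, not the exact operators used in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def sde_smooth(noisy, sigma=1.5, weight=0.5):
    """Second-derivative-enhancement smoothing, as a rough sketch.

    `sigma` and `weight` are assumed values; the thesis's actual low-pass
    filter and derivative operator may differ.
    """
    lowpass = gaussian_filter(noisy, sigma=sigma)  # smooths noise and edges alike
    second_derivative = laplace(noisy)             # strong response at edges
    # Subtracting the Laplacian (with scipy's sign convention) sharpens edges
    # back into the smoothed image.
    return lowpass - weight * second_derivative
```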
