871

Tools for image processing and computer vision

Hunt, Neil January 1990 (has links)
The thesis describes progress towards the construction of a seeing machine. Currently, we do not understand enough about the task to build more than the simplest computer vision systems; what is understood, however, is that tremendous processing power will surely be involved. I explore the pipelined architecture for vision computers, and I discuss how it can offer both powerful processing and flexibility. I describe a proposed family of VLSI chips based upon such an architecture, each chip performing a specific image processing task. The specialisation of each chip allows high performance to be achieved, and a common pixel interconnect interface on each chip allows them to be connected in arbitrary configurations in order to solve different kinds of computational problems. While such a family of processing components can be assembled in many different ways, a programmable computer offers certain advantages, in that it is possible to change the operation of such a machine very quickly, simply by substituting a different program. I describe a software design tool which attempts to secure the same kind of programmability advantage for exploring applications of the pipelined processors. This design tool simulates complete systems consisting of several of the proposed processing components, in a configuration described by a graphical schematic diagram. A novel time skew simulation technique developed for this application allows coarse grain simulation for efficiency, while preserving the fine grain timing details. Finally, I describe some experiments which have been performed using the tools discussed earlier, showing how the tools can be put to use to handle real problems.
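The pipelined organisation described above can be sketched in a few lines: each processing element consumes and produces a pixel stream through a common interface, and a machine is an ordered chain of such elements. The sketch below is a hypothetical Python illustration of that idea only; the stage names and operations are invented for the example and are not the VLSI chips or the schematic-driven simulator described in the thesis.

```python
import numpy as np

# Hypothetical illustration of the pipelined model described above: each
# "chip" is a stage with a common pixel-stream interface, and a machine
# is an ordered chain of such stages.

class Stage:
    """One processing element with a common input/output pixel interface."""
    def process(self, frame: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class Smooth(Stage):
    def process(self, frame):
        # 3x3 box filter built from shifted sums (edge-padded borders)
        padded = np.pad(frame, 1, mode="edge")
        return sum(padded[i:i + frame.shape[0], j:j + frame.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

class EdgeMagnitude(Stage):
    def process(self, frame):
        gy, gx = np.gradient(frame.astype(float))
        return np.hypot(gx, gy)

def run_pipeline(stages, frame):
    """Push one frame through the chained stages, as a pipeline would."""
    for stage in stages:
        frame = stage.process(frame)
    return frame

frame = np.random.rand(64, 64)
result = run_pipeline([Smooth(), EdgeMagnitude()], frame)
```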
872

The application of multiple bandpass filters in image processing

Lloyd, R. O. January 1981 (has links)
No description available.
873

A real-time computer generated imagery system for flight simulators

Lok, Y. C. F. January 1983 (has links)
No description available.
874

Image-based “D”-crack detection in pavements

Day, Allison January 1900 (has links)
Master of Science / Department of Electrical and Computer Engineering / Balasubramaniam Natarajan / This thesis proposes an automated crack detection and classification algorithm to detect durability cracking (“D”-cracking) in pavement using image processing and pattern recognition techniques. For Departments of Transportation across the country, efficient and effective crack detection is vital to maintaining quality roadways, and manual inspection of roadways is tedious and cumbersome. Previous research has focused on distinct transverse and longitudinal cracks; “D”-cracking, however, presents a unique challenge, since the cracks are fine and have a distinctive shape surrounding the intersection of the transverse and longitudinal joints. This thesis presents an automated crack detection and classification system using several known image processing techniques. The algorithm consists of four sections: 1) lighting correction, 2) subimage processing, 3) postprocessing, and 4) classification. Uneven lighting in some images is corrected based on a model of the lighting system. The region of interest is identified by locating the lateral joints. These regions are then divided into overlapping subimages, whose pixels are separated into cracked and noncracked classes using thresholds on the residual error. Postprocessing applies a row/column sum filter and a morphological open operation to reduce noise. Finally, metrics are calculated from the final crack map to classify each section as cracked or noncracked using the Mahalanobis distance from the noncracked distribution.
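As a rough illustration of the later stages of the algorithm described above, the sketch below thresholds a residual-error image into a crack map, cleans it with a morphological open, and classifies a section by its Mahalanobis distance from a noncracked distribution. The thresholds, metrics and statistics used here are placeholders, not values from the thesis.

```python
import numpy as np
from scipy import ndimage

def crack_map(residual_error, threshold=3.0):
    """Label pixels whose residual error exceeds a threshold as cracked."""
    raw = residual_error > threshold
    # Morphological open removes isolated noise responses
    return ndimage.binary_opening(raw, structure=np.ones((3, 3)))

def classify_section(metrics, noncracked_mean, noncracked_cov, cutoff=3.0):
    """Cracked if the metric vector lies far from the noncracked distribution."""
    diff = metrics - noncracked_mean
    d2 = diff @ np.linalg.inv(noncracked_cov) @ diff
    return "cracked" if np.sqrt(d2) > cutoff else "noncracked"

residual = np.abs(np.random.randn(128, 128))            # stand-in residual image
cmap = crack_map(residual)
metrics = np.array([cmap.mean(), ndimage.label(cmap)[1]])  # e.g. crack density, region count
label = classify_section(metrics, np.array([0.01, 2.0]), np.eye(2) * 0.01)
```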
875

Subjective analysis of image coding errors

26 February 2009 (has links)
D.Ing. / The rapid adoption of digital images and the necessity to compress them have created the need for the development of image quality metrics. Subjective evaluation is the most accurate of the image quality evaluation methods, but it is time consuming, tedious and expensive. Meanwhile, widely used objective measures such as the mean squared error have been shown not to assess image quality the way a human observer does. Since the human observer is the final receiver of most visual information, taking into account the way humans perceive visual information will be greatly beneficial for the development of an objective image quality metric that reflects the subjective evaluation of distorted images. Many past attempts have been made to develop distortion metrics that model the processes of the human visual system, and many promising results have been achieved. However, most of these metrics were developed with simple visual stimuli, and most of these models were based on visibility threshold measures, which are not representative of the distortion introduced in complex natural compressed images. In this thesis, a new image quality metric based on the properties of the human visual system as they relate to image perception is proposed. This metric provides an objective measure of the subjective quality of coded natural images with suprathreshold degradation. The proposed model specifically takes into account the structure of natural images by analyzing them into their different components, namely the edge, texture and background (smooth) components, as these components influence the formation of perception in the HVS differently. Hence the sensitivity of the HVS to errors in an image depends on whether those errors lie in more active areas, such as strong edges or texture, or in less active areas such as the smooth regions. The components are then summed to obtain a combined image that represents the way the HVS is postulated to perceive the image. Extensive subjective evaluation was carried out for the different image components and the combined image, obtained for images coded at different qualities. The objective measure (RMSE) for these images was also calculated. A transformation between the subjective and objective quality measures was then performed, from which an objective metric that can predict the human perception of image quality was developed. The metric was shown to provide an accurate prediction of image quality that agrees well with the prediction provided by the expensive and lengthy process of subjective evaluation, while retaining the desirable property of the RMSE of being easier and cheaper to implement. This metric will therefore be useful for evaluating error mechanisms present in proposed coding schemes.
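A component-weighted error measure in the spirit of the metric described above can be sketched as follows: pixels are assigned to edge, texture or smooth classes, and the coding error is weighted per class before being combined. The classification rule, thresholds and weights below are illustrative assumptions, not the decomposition or the subjective-to-objective transformation derived in the thesis.

```python
import numpy as np
from scipy import ndimage

def classify_pixels(image, edge_thresh=30.0, texture_thresh=10.0):
    """Split pixels into edge, texture and smooth classes by gradient magnitude."""
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    edges = grad > edge_thresh
    texture = (grad > texture_thresh) & ~edges
    smooth = ~(edges | texture)
    return edges, texture, smooth

def weighted_error(original, coded, weights=(1.0, 0.7, 1.3)):
    """Per-component RMSE combined with placeholder perceptual weights (edge, texture, smooth)."""
    err2 = (original.astype(float) - coded.astype(float)) ** 2
    total = 0.0
    for mask, w in zip(classify_pixels(original), weights):
        if mask.any():
            total += w * np.sqrt(err2[mask].mean())
    return total
```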
876

Minimum absolute error as an image restoration criterion

Karaguleff, Chris January 1981 (has links)
No description available.
877

Generating Chinese calligraphy masterpiece from tablet versions

Ding, Lian Chao January 2018 (has links)
University of Macau / Faculty of Science and Technology. / Department of Computer and Information Science
878

THREE DIMENSIONAL SEGMENTATION AND DETECTION OF FLUORESCENCE MICROSCOPY IMAGES

David J. Ho (5929748) 10 June 2019 (has links)
Fluorescence microscopy is an essential tool for imaging subcellular structures in tissue, and two-photon microscopy enables imaging deeper into tissue using near-infrared light. Using image analysis and computer vision tools to detect and extract information from these images remains challenging, because the microscopy volumes are degraded by blurring and noise during image acquisition and because of the complexity of the subcellular structures present in the volumes. In this thesis we describe methods for segmentation and detection in 3D fluorescence microscopy images. We segment tubule boundaries by distinguishing them from other structures using three dimensional steerable filters, which capture the strong directional tendencies of voxels on a tubule boundary. We also describe multiple three dimensional convolutional neural networks (CNNs) to segment nuclei. Training these CNNs usually requires a large set of labeled images, which is extremely difficult to obtain for biomedical data. We therefore describe methods to generate synthetic microscopy volumes and to train our 3D CNNs on these synthetic volumes without using any real ground truth volumes. The locations and sizes of the nuclei are detected using one of our CNNs, known as the Sphere Estimation Network. Our methods are evaluated using real ground truth volumes and are shown to outperform other techniques.
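The volumetric segmentation approach described above can be illustrated with a deliberately small 3D CNN that maps a microscopy volume to per-voxel nucleus probabilities. The architecture and channel sizes below are illustrative only; they are not the networks, the synthetic-volume training scheme, or the Sphere Estimation Network described in the thesis.

```python
import torch
import torch.nn as nn

class TinySeg3D(nn.Module):
    """Minimal 3D CNN producing a per-voxel segmentation logit."""
    def __init__(self, channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=1),   # per-voxel logit
        )

    def forward(self, volume):   # volume: (batch, 1, depth, height, width)
        return self.net(volume)

model = TinySeg3D()
volume = torch.randn(1, 1, 32, 64, 64)        # stand-in microscopy volume
nuclei_prob = torch.sigmoid(model(volume))    # per-voxel nucleus probability
```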
879

A knowledge-based system for extraction and recognition of linear features in high resolution remotely-sensed imagery

Peacegood, Gillian January 1989 (has links)
A knowledge-based system for the automatic extraction and recognition of linear features from digital imagery has been developed, with a knowledge base applied to the recognition of linear features in high resolution remotely sensed imagery, such as SPOT HRV and XS, Thematic Mapper and high altitude aerial photography. In contrast to many knowledge-based vision systems, emphasis is placed on uncertainty and the exploitation of context via statistical inferencing techniques, while issues of strategy and control are given less emphasis. Linear features are extracted from the imagery, which may be multiband, using an edge detection and tracking algorithm. A relational database for the representation of linear features has been developed, and this is shown to be useful in a number of applications, including general purpose query and display. A number of proximity relationships between the linear features in the database are established using computationally efficient algorithms. Three techniques for classifying the linear features by exploiting uncertainty and context have been implemented and compared: Bayesian inferencing using belief networks, a new inferencing technique based on belief functions, and relaxation labelling using belief functions. The two inferencing techniques are shown to produce more realistic results than probabilistic relaxation, and the new inferencing method based on belief functions performs best in practical situations. Overall, the system produces reasonably good classification results on hand-extracted linear features, although classification is less good on automatically extracted linear features because of shortcomings in the edge detection and extraction processes. The system adopts many of the features of expert systems, including complete separation of control from stored knowledge and justification for the conclusions reached.
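The belief-function machinery that two of the three classifiers above build on can be illustrated with Dempster's rule of combination, which fuses two independent bodies of evidence over a frame of discernment. This is the standard textbook rule, shown with invented road/river hypotheses; it is not the thesis's new inferencing technique.

```python
# Dempster's rule of combination for two mass functions, each a dict
# mapping frozenset(hypotheses) -> mass and summing to 1.

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb           # mass assigned to contradictory pairs
    # Normalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

road, river = frozenset({"road"}), frozenset({"river"})
frame = frozenset({"road", "river"})
m_spectral = {road: 0.6, frame: 0.4}              # evidence from the feature itself
m_context = {road: 0.3, river: 0.2, frame: 0.5}   # evidence from neighbouring features
print(combine(m_spectral, m_context))
```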
880

Perceptual models in speech quality assessment and coding

Savvides, Vasos E. January 1988 (has links)
The ever-increasing demand for good communications/toll quality speech has created renewed interest in the perceptual impact of rate compression. Two general areas are investigated in this work, namely speech quality assessment and speech coding. In the field of speech quality assessment, a model is developed which simulates the processing stages of the peripheral auditory system. At the output of the model a "running" auditory spectrum is obtained; this represents the auditory (spectral) equivalent of any acoustic sound such as speech. Auditory spectra from coded speech segments serve as inputs to a second model, which simulates the information centre in the brain that performs the speech quality assessment.
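A "running" auditory spectrum of the kind described above can be sketched as short-time spectral frames pooled into perceptually spaced bands and log-compressed. The band spacing and compression below are rough placeholders, not the peripheral auditory model developed in the thesis.

```python
import numpy as np

def running_auditory_spectrum(speech, fs=8000, frame_len=256, hop=128, n_bands=18):
    """Short-time band energies on a roughly log-spaced (critical-band-like) scale."""
    window = np.hanning(frame_len)
    edges = np.geomspace(100.0, fs / 2.0, n_bands + 1)     # placeholder band edges
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / fs)
    spectra = []
    for start in range(0, len(speech) - frame_len + 1, hop):
        frame = speech[start:start + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        bands = [power[(freqs >= lo) & (freqs < hi)].sum()
                 for lo, hi in zip(edges[:-1], edges[1:])]
        spectra.append(np.log10(np.array(bands) + 1e-10))   # simple log compression
    return np.array(spectra)          # shape: (n_frames, n_bands)

speech = np.random.randn(8000)        # one second of stand-in "speech"
aud = running_auditory_spectrum(speech)
```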
