11

PhETA: An Interactive Tool for Analyzing the Quality of Digital Photographs from Edge Transitions

Allowatt, Anthony James 08 December 2005 (has links)
The goal of this thesis is to build an interactive tool for analyzing the quality of a digital image and predicting the scale at which it may be published. Since edges are present almost everywhere in most digital images, we use a mathematical edge model as the basis of analysis. In particular, we are interested in the luminance and chromaticity behavior at edge boundaries. We use this model to develop PhETA — Photograph Edge Transition Analyzer — an interactive tool that allows novice users to view and understand the results gained from this analysis in a clear and simple manner. / Master of Science
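As a hedged illustration of the kind of edge-transition analysis described above (not PhETA itself, whose exact edge model is defined in the thesis), a 1-D luminance profile sampled across an edge can be fitted with a smooth step model to estimate the transition width and contrast. The sketch below uses an error-function step and scipy's curve_fit; the profile data are synthetic placeholders.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def edge_model(x, lo, hi, center, width):
        """Smooth step: luminance ramps from lo to hi around `center` over `width` pixels."""
        return lo + (hi - lo) * 0.5 * (1 + erf((x - center) / (width * np.sqrt(2))))

    # Synthetic 1-D luminance profile across an edge (stand-in for real pixel data).
    x = np.arange(0, 40, dtype=float)
    true = edge_model(x, 40, 200, 20, 2.5)
    profile = true + np.random.default_rng(1).normal(0, 3, x.size)

    popt, _ = curve_fit(edge_model, x, profile, p0=[profile.min(), profile.max(), 20, 3])
    lo, hi, center, width = popt
    print(f"transition width ~ {width:.2f} px, contrast ~ {hi - lo:.0f}")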
12

Improving Edge Detection Using Intersection Consistency

Ciftci, Serdar 01 October 2011 (has links) (PDF)
Edge detection is an important step in computer vision, since edges are used by subsequent visual processing stages for many tasks, including motion estimation, stereopsis, and shape representation and matching. In this study, we test whether a local consistency measure based on image orientation (which we call Intersection Consistency, IC), previously shown to improve the detection of junctions, can also improve the quality of edge detection for seven different detectors: Canny, Roberts, Prewitt, Sobel, Laplacian of Gaussian (LoG), Intrinsic Dimensionality, and the Line Segment Detector (LSD). IC works well on images that contain prominent objects whose colour differs from their surroundings, and it gives good results on natural images with cluttered backgrounds as well as on images of man-made objects. However, depending on the amount of clutter, the loss of true positives can become significant. Through a comprehensive investigation, we show that an increase of approximately 21% in f-score is obtained, although some important edges are lost. We conclude from our experiments that IC is suitable for improving the quality of edge detection for detectors such as Canny, LoG and LSD.
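To illustrate the general idea of orientation-based consistency weighting of edge pixels (the actual IC formulation is given in the thesis; the score below is a simplified stand-in), a sketch in Python with OpenCV follows. The input file name and the 0.5 threshold are arbitrary assumptions.

    import cv2
    import numpy as np

    def orientation_consistency(gray, edges, win=7):
        """Down-weight edge pixels whose gradient orientation disagrees with the
        dominant orientation in a local window (simplified stand-in for IC)."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
        # Use the doubled angle so opposite gradient directions count as the same edge orientation.
        theta = 2.0 * np.arctan2(gy, gx)
        cos_t, sin_t = np.cos(theta), np.sin(theta)
        k = np.ones((win, win), np.float32) / (win * win)
        mean_cos = cv2.filter2D(cos_t, -1, k)
        mean_sin = cv2.filter2D(sin_t, -1, k)
        consistency = np.sqrt(mean_cos**2 + mean_sin**2)   # 1 = locally coherent orientation
        return np.where(edges > 0, consistency, 0.0)

    gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
    edges = cv2.Canny(gray, 50, 150)
    score = orientation_consistency(gray, edges)
    refined = (score > 0.5).astype(np.uint8) * 255          # arbitrary threshold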
13

Automated hippocampal location and extraction

Bonnici, Heidi M. January 2010 (has links)
The hippocampus is a complex brain structure that has been studied extensively and is subject to abnormal structural change in various neuropsychiatric disorders. The highest-definition in vivo method of visualizing the anatomy of this structure is structural Magnetic Resonance Imaging (MRI). Gross structure can be assessed by naked-eye inspection of MRI scans, but measurement is required to compare scans from individuals against normal ranges and to assess change over time within individuals. The gold standard of such measurement is manual tracing of the boundaries of the hippocampus on scans, known as a Region Of Interest (ROI) approach. ROI tracing is laborious, and there are difficulties with test-retest and inter-rater reliability, primarily due to uncertainty in designating the hippocampus boundary. An improved, less labour-intensive and more reliable method is clearly desirable. This thesis describes a fully automated hybrid methodology that first locates and then extracts hippocampal volumes from 3D 1.5T T1 MRI brain scans. The hybrid algorithm uses brain atlas mappings and fuzzy inference to locate hippocampal areas and create initial hippocampal boundaries. This initial location is used to seed a deformable manifold algorithm, and rule-based deformations are then applied to refine the estimate of the hippocampus location. Finally, the hippocampus boundaries are corrected through an inference process that ensures adherence to an expected hippocampus volume. When compared with manual segmentation of the same hippocampi, the method achieves ICC values of 0.73 for the left and 0.81 for the right hippocampus; both values fall within the reliability range reported for the manual ‘gold standard’ technique. Thus, this thesis describes the development and validation of a genuinely automated approach to hippocampal volume extraction that is of potential utility in studies of a range of neuropsychiatric disorders and could eventually find clinical application.
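As an illustration of how agreement between automated and manual hippocampal volumes can be quantified with an intraclass correlation coefficient, a small sketch follows. The volumes are made-up placeholders, and ICC(3,1) is used as one common formulation; the abstract does not specify which ICC form was used.

    import numpy as np

    def icc_3_1(ratings):
        """Two-way mixed, single-measure consistency ICC(3,1).
        ratings: (n_subjects, n_raters) array, e.g. columns = [manual, automated]."""
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)
        col_means = ratings.mean(axis=0)
        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)          # between subjects
        sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
        mse = sse / ((n - 1) * (k - 1))                                # residual
        return (msr - mse) / (msr + (k - 1) * mse)

    # Hypothetical hippocampal volumes (mm^3): column 0 manual, column 1 automated.
    volumes = np.array([[3100, 3010], [2890, 2955], [3320, 3248],
                        [2750, 2802], [3040, 2988], [3180, 3105]], dtype=float)
    print(f"ICC(3,1) = {icc_3_1(volumes):.2f}")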
14

Classification of skin tumours through the analysis of unconstrained images

Viana, Joaquim Mesquita da Cunha January 2009 (has links)
Skin cancer is the most frequent malignant neoplasm in Caucasian individuals. According to the Skin Cancer Foundation, the incidence of melanoma, the most malignant of skin tumours, and the resulting mortality have increased exponentially during the past 30 years and continue to grow [1]. Although often intractable in advanced stages, skin cancer in general, and melanoma in particular, can achieve cure ratios of over 95% if detected at an early stage [1,55]. Early screening of lesions is therefore crucial if a cure is to be achieved. Most skin lesion classification systems rely on a human expert supported by dermatoscopy, an enhanced and magnified photograph of the lesion area. Nevertheless, and although contrary claims exist, as far as the author is aware, classification results are currently rather inaccurate and need to be verified through laboratory analysis of a sample of the lesion’s tissue. The aim of this research was to design and implement a system able to automatically classify skin spots as inoffensive or dangerous with a small margin of error; if possible, with higher accuracy than a human expert normally achieves, and certainly better than any existing automatic system. The system described in this thesis meets these criteria. It captures an unconstrained image of the affected skin area and extracts a set of relevant features that may lead to, and be representative of, the four main classification characteristics of skin lesions: Asymmetry, Border, Colour and Diameter. These features are then evaluated through a Bayesian statistical process, through simple and Fuzzy k-Nearest Neighbour classifiers, through a Support Vector Machine and through an Artificial Neural Network in order to classify the skin spot as a melanoma or not. The characteristics selected and used throughout this work are, to the author’s knowledge, combined in an innovative manner: rather than simply taking absolute values from the image characteristics, those values are combined into ratios, providing much greater independence from the environmental conditions under which the images are captured. During this work, image gathering became one of the most challenging activities: several of the initially promising sources fell through, so the author had to use all the pictures he could find, mainly on the Internet, which limited the test set to only 136 images. Nevertheless, the results were excellent. The algorithms developed were implemented in a fully working system which was extensively tested; it gives a correct classification rate of between 76% and 92%, depending on the percentage of pictures used to train the system. In particular, the system gave no false negatives. This is crucial, since a system that gives false negatives might deter a patient from seeking further treatment, with a disastrous outcome. These results are achieved by detecting precise edges for every lesion image, extracting the features considered relevant, giving different weights to the various extracted features, and submitting these values to six classification algorithms – k-Nearest Neighbour, Fuzzy k-Nearest Neighbour, Naïve Bayes, Tree Augmented Naïve Bayes, Support Vector Machine and Multilayer Perceptron – in order to determine the most reliable combined process.
Training was carried out in a supervised way – all the lesions were previously classified by an expert in the field before being subjected to the scrutiny of the system. The author is convinced that the work presented in this PhD thesis is a valid contribution to the field of skin cancer diagnostics. Although its scope is limited – one lesion per image – the results achieved by this arrangement of segmentation, feature extraction and classification algorithms show that this is the right path towards a reliable early screening system. If and when values for age, gender and lesion evolution can be added to these data as classification features, the results will no doubt become even more accurate, allowing for an improvement in the survival rates of skin cancer patients.
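A minimal sketch of the ratio-feature plus classifier idea described above, using a k-Nearest Neighbour classifier from scikit-learn as one of the listed algorithm families; the feature names, values and labels are hypothetical placeholders, not the thesis's actual feature set.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical ABCD-style ratio features per lesion:
    # [asymmetry ratio, border irregularity ratio, colour-variance ratio, diameter ratio]
    X = np.array([
        [0.12, 0.30, 0.10, 0.40],
        [0.55, 0.72, 0.61, 0.85],
        [0.08, 0.25, 0.15, 0.35],
        [0.63, 0.80, 0.70, 0.90],
        [0.20, 0.35, 0.22, 0.45],
        [0.58, 0.69, 0.66, 0.88],
    ])
    y = np.array([0, 1, 0, 1, 0, 1])   # 0 = benign, 1 = melanoma (made-up labels)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))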
15

Machine vision for finding a joint to guide a welding robot

Larsson, Mathias January 2009 (has links)
This report describes how a robot can be guided along an edge using a camera mounted on the robot. If stereo matching is used to calculate the 3D coordinates of an object or an edge, two images taken from different known positions and orientations are required. For the image analysis in this project, the Canny edge filter has been used. The result from the filter is not directly usable, because it finds too many edges and misses some pixels; the Canny output must be sorted and gaps filled in before the final calculations can be started. Unfortunately, this additional processing of the image decreases the accuracy of the calculations. The accuracy is estimated by comparing the coordinates of the edge measured with a coordinate measuring machine against the calculated coordinates; the calculated edge deviates by up to three mm. The camera calibration has been described in an earlier thesis, so it is not covered in this report, although it is a prerequisite for this project.
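A rough sketch of the edge detection plus stereo triangulation pipeline the report describes, using OpenCV; the file names, intrinsic matrix, projection matrices and matched pixel coordinates are placeholders that would in practice come from calibration and stereo matching.

    import cv2
    import numpy as np

    # Detect candidate edge pixels in each view (hypothetical file names).
    left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    edges_left  = cv2.Canny(left, 50, 150)
    edges_right = cv2.Canny(right, 50, 150)

    # Placeholder intrinsics and 3x4 projection matrices from camera calibration.
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    # Matched edge pixels in both views (in practice found along epipolar lines).
    pts_left  = np.array([[320.0, 410.0], [240.0, 245.0]]).T   # shape (2, N)
    pts_right = np.array([[310.0, 400.0], [228.0, 244.0]]).T

    # Triangulate to homogeneous 3D points, then normalise.
    pts4d = cv2.triangulatePoints(P1, P2, pts_left, pts_right)
    pts3d = (pts4d[:3] / pts4d[3]).T
    print(pts3d)   # one (x, y, z) per matched edge pixel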
16

Toward a Surface Primal Sketch

Ponce, Jean, Brady, Michael 01 April 1985 (has links)
This paper reports progress toward the development of a representation of significant surface changes in dense depth maps. We call the representation the Surface Primal Sketch by analogy with representations of intensity changes, image structure, and changes in curvature of planar curves. We describe an implemented program that detects, localizes, and symbolically describes: steps, where the surface height function is discontinuous; roofs, where the surface is continuous but the surface normal is discontinuous; smooth joins, where the surface normal is continuous but a principal curvature is discontinuous and changes sign; and shoulders, which consist of two roofs and correspond to a step viewed obliquely. We illustrate the performance of the program on range maps of objects of varying complexity.
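An illustrative sketch, not the paper's actual detector, of how steps and roofs might be flagged in a dense depth map: steps as large jumps in depth, roofs as large changes in the surface normal where depth stays continuous. The thresholds and the synthetic range map are arbitrary assumptions.

    import numpy as np

    def classify_surface_changes(depth, step_thresh=0.05, roof_thresh=0.3):
        """Label pixels as 'step' (depth discontinuity) or 'roof' (normal
        discontinuity with continuous depth). Illustrative only."""
        dzdy, dzdx = np.gradient(depth)
        depth_jump = np.hypot(dzdx, dzdy)

        # Unit surface normals from the depth gradient: n = (-dz/dx, -dz/dy, 1) / |.|
        normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)

        # Change in normal direction between horizontally adjacent pixels.
        dot = np.clip((normals[:, 1:] * normals[:, :-1]).sum(axis=2), -1.0, 1.0)
        normal_change = np.zeros_like(depth)
        normal_change[:, 1:] = np.arccos(dot)

        labels = np.full(depth.shape, "smooth", dtype=object)
        labels[normal_change > roof_thresh] = "roof"
        labels[depth_jump > step_thresh] = "step"      # steps take precedence
        return labels

    # Synthetic range map: a flat plane meeting an inclined plane (roof),
    # plus a raised block whose borders are step edges.
    y, x = np.mgrid[0:64, 0:64]
    depth = np.where(x < 32, 1.0, 1.0 + 0.02 * (x - 32)).astype(float)
    depth[20:40, 45:60] += 0.5
    print(np.unique(classify_surface_changes(depth), return_counts=True))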
17

Feature Extraction Without Edge Detection

Chaney, Ronald D. 01 September 1993 (has links)
Information representation is a critical issue in machine vision. The representation strategy in the primitive stages of a vision system has enormous implications for the performance in subsequent stages. Existing feature extraction paradigms, like edge detection, provide sparse and unreliable representations of the image information. In this thesis, we propose a novel feature extraction paradigm. The features consist of salient, simple parts of regions bounded by zero-crossings. The features are dense, stable, and robust. The primary advantage of the features is that they have abstract geometric attributes pertaining to their size and shape. To demonstrate the utility of the feature extraction paradigm, we apply it to passive navigation. We argue that the paradigm is applicable to other early vision problems.
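A rough sketch of extracting region-based features from areas bounded by zero-crossings, here the zero-crossings of a Laplacian-of-Gaussian response. The use of scipy, scikit-image and the bundled camera test image are illustration choices, not the thesis's actual algorithm.

    import numpy as np
    from scipy import ndimage
    from skimage import data, measure

    image = data.camera().astype(float)        # any grayscale test image
    log = ndimage.gaussian_laplace(image, sigma=3.0)

    # Regions where the LoG response is positive are bounded by zero-crossings.
    regions, n = ndimage.label(log > 0)
    props = measure.regionprops(regions)

    # Simple geometric attributes (size and shape) for each region.
    for p in props[:5]:
        print(f"region {p.label}: area={p.area}, eccentricity={p.eccentricity:.2f}")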
18

Edge Detection on Underwater Laser Spot

Tseng, Pin-hsien 04 September 2007 (has links)
none
19

Investigating Polynomial Fitting Schemes for Image Compression

Ameer, Salah 13 January 2009 (has links)
Image compression is a means to transmit or store visual data in the most economical way. Though many algorithms have been reported, research is still needed to cope with the continuous demand for more efficient transmission and storage. This work explores and implements polynomial fitting techniques as a means of performing block-based lossy image compression. In an attempt to investigate nonpolynomial models, a region-based scheme is implemented that fits the whole image using bell-shaped functions; the idea is simply to view an image as a 3D geographical map consisting of hills and valleys. However, the scheme suffers from high computational demands and is inferior to many available image compression schemes, so only polynomial models receive further consideration. A first-order polynomial (plane) model is designed to work in a multiplication- and division-free (MDF) environment. The intensity values of each image block are fitted to a plane and the parameters are then quantized and coded. Blocking artefacts, a common drawback of block-based image compression techniques, are reduced using an MDF line-fitting scheme at block boundaries. It is shown that a compression ratio of 62:1 at 28.8 dB is attainable for the standard image PEPPER, outperforming JPEG both objectively and subjectively in this part of the rate-distortion characteristics. Inter-block prediction can substantially improve the compression performance of the plane model, reaching a compression ratio of 112:1 at 27.9 dB. This improvement, however, slightly increases computational complexity and reduces pipelining capability. Although JPEG 2000 is not a block-based scheme, it is encouraging that the proposed prediction scheme compares favourably with it, computationally and qualitatively; more experiments are needed for a more concrete comparison. To reduce blocking artefacts, a new postprocessing scheme based on Weber’s law is employed. Images postprocessed with this scheme are subjectively more pleasing, with a marginal increase in PSNR (<0.3 dB). Weber’s law is also modified to perform edge detection and quality assessment tasks. These results motivate the exploration of higher-order polynomials, using three parameters to maintain comparable compression performance. To investigate the impact of higher-order polynomials through an approximate asymptotic behaviour, a novel linear mapping scheme is designed. Though computationally demanding, the performance of the higher-order polynomial approximation schemes is comparable to that of the plane model, which clearly demonstrates the powerful approximation capability of the plane model. As such, the proposed linear mapping scheme constitutes a new approach to image modeling and is worth future consideration.
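A small sketch of the first-order (plane) block model described above: each 8x8 block is least-squares fitted with z = a*x + b*y + c, the three parameters are kept, and the block is reconstructed from them. Quantisation, entropy coding and the multiplication- and division-free formulation from the thesis are omitted, and the random test image is a stand-in for a real one.

    import numpy as np

    def plane_fit_block(block):
        """Least-squares fit z = a*x + b*y + c to an image block; return (a, b, c)."""
        h, w = block.shape
        y, x = np.mgrid[0:h, 0:w]
        A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
        params, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
        return params

    def reconstruct_block(params, shape):
        h, w = shape
        y, x = np.mgrid[0:h, 0:w]
        a, b, c = params
        return a * x + b * y + c

    def plane_compress(image, bs=8):
        """Approximate every bs x bs block by its fitted plane."""
        out = np.zeros_like(image, dtype=float)
        for i in range(0, image.shape[0], bs):
            for j in range(0, image.shape[1], bs):
                block = image[i:i + bs, j:j + bs]
                out[i:i + bs, j:j + bs] = reconstruct_block(plane_fit_block(block), block.shape)
        return out

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)   # stand-in for a real image
    approx = plane_compress(img)
    mse = np.mean((img - approx) ** 2)
    print(f"PSNR = {10 * np.log10(255**2 / mse):.1f} dB")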
