11 |
Automated Ice-Water Classification using Dual Polarization SAR Imagery. Leigh, Steve, January 2013.
Mapping ice and open water in ocean bodies is important for numerous purposes, including environmental analysis and ship navigation. The Canadian Ice Service (CIS) currently has several expert ice analysts manually generate ice maps on a daily basis. The CIS would like to augment its current process with an automated ice-water discrimination algorithm capable of operating on dual-pol synthetic aperture radar (SAR) images produced by RADARSAT-2. Automated methods can provide mappings in larger volumes, with more consistency, and at finer resolutions than are otherwise practical to generate.
We have developed such an automated ice-water discrimination system, called MAGIC. The algorithm first classifies the HV scene using the glocal method, a hierarchical region-based classification method that incorporates spatial context into the classification model using a modified watershed segmentation and a previously developed MRF classification algorithm called IRGS. Second, a pixel-based support vector machine (SVM) classification with a nonlinear RBF kernel is performed, exploiting SAR grey-level co-occurrence matrix (GLCM) texture and backscatter features. Finally, the IRGS and SVM classification results are combined using the IRGS approach, but with a modified energy function that accommodates the SVM's pixel-based information.
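To make the pixel-based texture stage concrete, the following is a minimal sketch (not the MAGIC implementation itself) of a GLCM-texture-plus-RBF-SVM classifier; the grey-level quantisation, feature set, and hyperparameters are assumptions for illustration only.

```python
# Illustrative sketch of a GLCM-texture + RBF-SVM classifier for ice vs. water.
# Quantisation, features, and hyperparameters are assumptions, not MAGIC's settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(window, levels=32):
    """Texture + backscatter features for one HV image window."""
    span = window.max() - window.min() + 1e-9
    q = np.floor((window - window.min()) / span * levels).astype(np.uint8)
    q = np.clip(q, 0, levels - 1)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    feats.append(float(window.mean()))  # mean backscatter as a simple intensity feature
    return np.array(feats)

def train_pixel_svm(windows, labels):
    """Fit an RBF-kernel SVM on per-window features (labels: 0 = water, 1 = ice)."""
    X = np.stack([glcm_features(w) for w in windows])
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
```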
The combined classifier was tested on 61 ground-truthed dual-pol RADARSAT-2 scenes of the Beaufort Sea containing a variety of ice types and water patterns across the melt, summer, and freeze-up periods. The average leave-one-out classification accuracy with respect to these ground truths is 95.8%, and MAGIC attains an accuracy of 90% or above on 88% of the scenes. The MAGIC system is now under consideration by the CIS for operational use.
|
12 |
Metody texturní analýzy v medicínských obrazech / Methods for texture analysis in ophthalmologic images. Hanyášová, Lucie, January 2008.
This thesis is focused on texture analysis methods and contains an overview of those in common use. Its main aim is to develop a texture analysis method for retinal images that can distinguish two patient groups: one with glaucomatous eyes and one healthy. It has been observed that glaucoma patients lack texture on the fundus (eye ground). For preprocessing, the images are transferred to different colour spaces to find the representation that best emphasises the fundus texture. The co-occurrence matrix is chosen for the texture analysis of these data. The thesis contains a detailed description of the chosen solutions and a discussion of the features, and the result is a list of features that can be used to distinguish between glaucomatous and healthy eyes. The method is implemented in the Matlab environment.
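As a rough illustration of that pipeline (colour-space conversion followed by co-occurrence statistics), a sketch might look as follows; the chosen channels and GLCM parameters are assumptions, and the thesis itself was implemented in Matlab rather than Python.

```python
# Sketch: emphasise fundus texture via colour-space channels, then compute
# co-occurrence statistics per channel. Channel and parameter choices are
# illustrative assumptions only.
import numpy as np
from skimage import color, img_as_float, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

def channel_texture_stats(rgb_image):
    rgb = img_as_float(rgb_image)
    channels = {
        "hsv_value": color.rgb2hsv(rgb)[..., 2],
        "lab_lightness": color.rgb2lab(rgb)[..., 0] / 100.0,  # scale L to [0, 1]
        "green": rgb[..., 1],
    }
    stats = {}
    for name, ch in channels.items():
        g = img_as_ubyte(np.clip(ch, 0.0, 1.0))
        glcm = graycomatrix(g, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        stats[name] = {p: float(graycoprops(glcm, p).mean())
                       for p in ("contrast", "homogeneity", "energy")}
    return stats
```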
|
13 |
Photometric Methods for Autonomous Tree Species Classification and NIR Quality Inspection. Valieva, Inna, January 2015.
This paper gives a brief overview of methods available for evaluating the quality of individual tree stems and for tree species classification. Near-infrared photometry, based on measuring conifer canopy reflectance in the near-infrared part of the spectrum, is evaluated for use in autonomous forest harvesting. A photometric method based on image processing of the bark pattern is proposed to classify the main construction timber species in Scandinavia: Norway spruce (Picea abies) and Scots pine (Pinus sylvestris). Several feature extraction algorithms are evaluated, resulting in two selected methods: statistical analysis using the grey-level co-occurrence matrix and the maximally stable extremal regions (MSER) feature detector. A feedforward neural network trained with backpropagation and a support vector machine classifier are implemented and compared. The proposed algorithm is verified by real-time testing.
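A hedged sketch of how the two selected descriptors (GLCM statistics and MSER regions) might feed the compared classifiers is given below; the feature layout and all parameters are assumptions, not the configuration used in the paper.

```python
# Sketch: bark descriptors from GLCM statistics plus MSER region counts, fed to
# both an MLP (backpropagation-trained) and an RBF SVM. Parameters are assumptions.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def bark_descriptor(gray_uint8):
    glcm = graycomatrix(gray_uint8, distances=[1, 4], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p).mean()
               for p in ("contrast", "correlation", "energy", "homogeneity")]
    regions, _ = cv2.MSER_create().detectRegions(gray_uint8)
    sizes = [len(r) for r in regions] or [0]
    # Region count and mean region size serve as crude bark-structure cues.
    return np.array(texture + [len(regions), float(np.mean(sizes))])

def compare_classifiers(images, labels):
    """Train both classifiers on the same descriptors (labels: spruce vs. pine)."""
    X = np.stack([bark_descriptor(img) for img in images])
    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
    svm = SVC(kernel="rbf", gamma="scale").fit(X, labels)
    return mlp, svm
```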
|
14 |
Texturní analýza snímků sítnice se zaměřením na detekci nervových vláken / Texture analysis of retinal images oriented towards detection of the nerve fibre layer. Gazárek, Jiří, January 2008.
The thesis is focused on detecting local loss of the retinal nerve fibre layer in fundus-camera images. The first chapter describes the physiology of the human eye, the glaucoma disease, and the analysed data. The second chapter compares four approaches intended to enable automatic detection of possible damage to the retinal nerve fibre layer. All four approaches have been tested and evaluated; three of them showed acceptable agreement with the medical expert conclusions: the directional spectral approach, the edge-based approach, and the local brightness difference approach. The last approach, via local co-occurrence matrices, did not turn out to be informative for this task. A program for automatic detection of nerve fibre layer loss areas has then been designed, implemented, and evaluated; this task is covered in the last chapter. Relatively good agreement has been reached between the conclusions of the medical expert and those produced automatically by the program.
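To illustrate one of the successful approaches, a directional spectral texture measure for a retinal patch could be sketched as follows; the patch handling and orientation-band layout are assumptions, not the design used in the thesis.

```python
# Sketch of a directional spectral measure: 2-D power spectrum of a patch,
# accumulated per orientation band (band layout is an illustrative assumption).
import numpy as np

def directional_spectrum_energy(patch, n_bands=8):
    """Spectral energy per orientation band for one grey-level patch."""
    f = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
    power = np.abs(f) ** 2
    h, w = patch.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    angle = np.mod(np.arctan2(yy, xx), np.pi)   # orientation of each frequency bin
    bands = np.floor(angle / np.pi * n_bands).astype(int) % n_bands
    return np.array([power[bands == b].sum() for b in range(n_bands)])
```

A patch over an intact nerve fibre layer would be expected to concentrate spectral energy around the fibre orientation, whereas a patch with fibre loss would show a flatter band profile.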
|
15 |
Measuring Semantic Distance using Distributional Profiles of Concepts. Mohammad, Saif, 01 August 2008.
Semantic distance is a measure of how close or distant in meaning two units of language are. A large number of important natural language problems, including machine translation and word sense disambiguation, can be viewed as semantic distance problems.
The two dominant approaches to estimating semantic distance are the WordNet-based semantic measures and the corpus-based distributional measures. In this thesis, I compare them, both qualitatively and quantitatively, and identify the limitations of each.
This thesis argues that estimating semantic distance is essentially a property of concepts (rather than words) and that two concepts are semantically close if they occur in similar contexts.
Instead of identifying the co-occurrence (distributional) profiles of words (distributional hypothesis), I argue that distributional profiles of concepts (DPCs) can be used to infer the semantic properties of concepts and indeed to estimate semantic distance more accurately. I propose a new hybrid approach to calculating semantic distance that combines corpus statistics and a published thesaurus (Macquarie Thesaurus).
The algorithm determines estimates of the DPCs using the categories in the thesaurus as very coarse concepts and, notably, without requiring any sense-annotated data. Even though the use of only about 1000 concepts to represent the vocabulary of a language seems drastic, I show that the method achieves results better than the state-of-the-art in a number of natural language tasks.
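A toy sketch of the core idea (word-category co-occurrence counts with thesaurus categories as coarse concepts, compared via cosine similarity) is given below; the tiny thesaurus is invented for illustration, and the actual method's bootstrapping of the word-category matrix and its distance measures are simplified away.

```python
# Toy sketch of distributional profiles of concepts: count how often each coarse
# thesaurus category co-occurs with nearby context words, then compare two
# categories' profiles with cosine similarity. The thesaurus here is made up.
from collections import Counter
from math import sqrt

thesaurus = {                       # category -> member words (toy data)
    "FINANCE": {"bank", "money", "loan"},
    "RIVER": {"bank", "shore", "stream"},
}

def concept_profiles(sentences, window=3):
    """Accumulate co-occurrence counts of each category with context words."""
    profiles = {c: Counter() for c in thesaurus}
    for sent in sentences:
        tokens = sent.lower().split()
        for i, tok in enumerate(tokens):
            for cat, members in thesaurus.items():
                if tok in members:
                    ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                    profiles[cat].update(ctx)
    return profiles

def cosine(p, q):
    dot = sum(p[w] * q[w] for w in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Example (toy): profiles = concept_profiles(["the bank approved the loan",
#                                             "they walked along the river bank"])
#                cosine(profiles["FINANCE"], profiles["RIVER"])
```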
I show how cross-lingual DPCs can be created by combining text in one language with a thesaurus from another. Using these cross-lingual DPCs, we can solve problems in one, possibly resource-poor, language using a knowledge source from another, possibly resource-rich, language. I show that the approach is also useful in tasks that inherently involve two or more languages, such as machine translation and multilingual text summarization.
The proposed approach is computationally inexpensive, can estimate both semantic relatedness and semantic similarity, and can be applied to all parts of speech. Extensive experiments on ranking word pairs by semantic distance, real-word spelling correction, solving Reader's Digest word choice problems, determining word sense dominance, word sense disambiguation, and word translation show that the new approach is markedly superior to previous ones.
|