51 |
Exploring Transfer Learning via Convolutional Neural Networks for Image Classification and Super-Resolution. Ribeiro, Eduardo Ferreira, 22 March 2018 (has links)
This work presents my research on the use of Convolutional Neural Networks (CNNs) for
transfer learning, applied to colonic polyp classification and iris super-resolution.
Traditionally, machine learning methods assume the same feature space and the same distribution
for training and testing. Several problems can emerge with this approach, for example
when the number of training samples (especially in supervised training) is limited. In the
medical field this problem is recurrent, mainly because assembling a database that is large enough
and has appropriate annotations for training is highly costly and may become impractical. Another
problem relates to the distribution of textural features in an image database that may be too
broad, such as the texture patterns of the human iris. In this case a single, specific training
database may not generalize well enough to be applied to the entire domain. In this work
we explore the use of texture transfer learning to overcome these problems in two applications:
colonic polyp classification and iris super-resolution.
The leading cause of deaths related to the intestinal tract is the development of cancerous cells
(polyps) in its many parts. Early detection, while the cancer is still at an early stage, can
reduce the risk of mortality among these patients. More specifically, colonic polyps (benign tumors
or growths that arise on the inner colon surface) have a high occurrence and are known
precursors of colon cancer development. Several studies have shown that automatic detection
and classification of image regions that may contain polyps within the colon can be
used to assist specialists and decrease the polyp miss rate.
However, classification can be a difficult task due to several factors, such as the lack or
excess of illumination, blurring caused by movement or water injection, and the varied appearances
of polyps. It is also very difficult to find a robust, global feature extractor that summarizes and
represents all these pit-pattern structures in a single vector, and deep learning
can be a good alternative to overcome these problems.
One of the goals of this work is to show the effectiveness of CNNs trained from scratch for
colonic polyp classification, as well as the capability of transferring knowledge between natural images
and medical images by using off-the-shelf pretrained CNNs for the same task. In this
case, the CNN projects the target database samples into a vector space where the classes are
more likely to be separable.
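A minimal sketch of this kind of off-the-shelf feature extraction is given below; it assumes a torchvision ResNet-18 backbone and a linear SVM, which are illustrative choices rather than the specific networks and classifier used in the thesis.

```python
# Hedged sketch: a pretrained CNN as a fixed feature extractor for polyp images.
# The backbone (ResNet-18) and classifier (linear SVM) are assumptions for
# illustration; the thesis may use different architectures and classifiers.
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import LinearSVC

device = "cuda" if torch.cuda.is_available() else "cpu"

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # drop the ImageNet classification head
backbone.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Project images into the pretrained CNN's feature space."""
    batch = torch.stack([preprocess(img) for img in pil_images]).to(device)
    return backbone(batch).cpu().numpy()

# train_images / train_labels would come from the colonic polyp database:
# features = extract_features(train_images)
# clf = LinearSVC().fit(features, train_labels)
```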
The second part of this work is dedicated to transfer learning for iris super-resolution. The
main goal of Super-Resolution (SR) is to produce, from one or more images, an image with a
higher resolution (more pixels) that is more detailed and realistic while remaining faithful
to the low-resolution image(s). Currently, most iris recognition systems
require the user to present the iris to the sensor at a close distance. However, there
is constant pressure to allow more relaxed acquisition conditions in such systems. In this work
we show that the use of deep learning and transfer learning for single-image super-resolution
applied to iris recognition can be an alternative for iris recognition from
low-resolution images. For this purpose, we explore whether the nature of the images, as well as the
pattern of the iris itself, can influence the CNN transfer learning and, consequently, the results of
the recognition process. / This thesis presents my research on the use of "transfer learning" (TL)
in combination with Convolutional Neural Networks (CNNs) in order to improve the classification
of colonic polyps and the quality of iris images ("iris super-resolution").
Conventionally, machine learning methods use the same feature space and the same distribution
for training and testing the methods applied. However, several problems can arise with this
approach. For example, the amount of training data (especially in a supervised training scenario)
may be limited. In the medical setting in particular, one is regularly confronted with this
problem, since assembling a database with a suitable amount of usable data is either very costly
and/or turns out to be exceedingly time-consuming. Another problem concerns the distribution of
textural features in an image database, which can be too broad, as is the case for the texture
patterns of the human iris. This can lead to the situation that a single, very specific training
database may not generalize sufficiently to be applied to the entire domain under consideration.
In this thesis, the use of TL across diverse textures is investigated in order to overcome the
problems mentioned above for two applications: the classification of colonic polyps and iris
super-resolution.
The main cause of deaths related to the intestinal tract is the development of cancerous cells
(polyps) in many different forms. Early detection, while the cancer is still at an early stage,
can reduce the mortality risk for these patients. More precisely, colonic polyps (benign tumors
or growths that arise on the inner colon surface) have a high occurrence and are known precursors
of colon cancer development. Several studies have shown that the automatic detection and
classification of image regions that may contain polyps within the colon can be used to help
specialists reduce the polyp miss rate.
However, classification can turn out to be a difficult task due to several factors. For example,
a lack or an excess of illumination can cause severe problems with the contrast information of
the images, while blurring due to movement or water injection likewise degrades the quality of
the image material. Data representing polyps of widely varying appearance can also reduce the
classification accuracy. Furthermore, it is very difficult to find a robust and, above all, global
feature extractor that summarizes and represents all the necessary pit-pattern structures in a
single vector. To deal adequately with these problems, the use of CNNs can offer a good
alternative.
One of the goals of this work is to show the effectiveness of CNNs built from scratch for colonic
polyp classification. In addition, the use of TL with pretrained CNNs for colonic polyp
classification is investigated. Here, additional information from non-medical images is drawn
upon and combined with the medical data used: information is thus transferred, which is what
constitutes TL. In this case too, the CNN projects the target database (the polyp images) into a
previously trained vector space in which the classes to be separated become more easily
separable, because knowledge from the non-medical images flows in.
The second part of this work is devoted to TL for improving the image quality of iris images
("iris super-resolution"). The main goal of Super-Resolution (SR) is to produce, from one or
more images, an image with a higher resolution (more pixels), which thereby becomes a more
detailed and thus more realistic image, while the visual image content remains unchanged.
Currently, most iris recognition systems require the user to present the iris to the sensor at a
close distance. However, industry is keen to change the conditions that have been required so
far: a short distance between sensor and iris and the use of very expensive, high-quality
sensors. This change concerns, on the one hand, the use of cheaper sensors and, on the other
hand, an increase in the distance between iris and sensor. Both adjustments lead to a reduction
in image quality, which directly affects the recognition accuracy of currently used iris
recognition systems. In this work we show that the use of CNNs and TL for single-image
super-resolution applied to iris recognition can be an alternative for recognizing low-resolution
iris images. For this purpose, we investigate whether the nature of the images as well as the
pattern of the iris influences the CNN transfer learning and, consequently, can change the
results of the recognition process.
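As a companion to the super-resolution part of this abstract, the following is a minimal sketch of a single-image super-resolution CNN in the spirit of SRCNN; the layer sizes, upscaling strategy, and training details are illustrative assumptions, not the thesis's actual configuration.

```python
# Hedged sketch: a small SRCNN-style network for single-image super-resolution.
# Layer shapes, upscaling strategy, and loss are assumptions for illustration;
# the thesis's networks, training data, and transfer-learning setup may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.feat = nn.Conv2d(1, 64, kernel_size=9, padding=4)   # feature extraction
        self.map = nn.Conv2d(64, 32, kernel_size=1)              # non-linear mapping
        self.rec = nn.Conv2d(32, 1, kernel_size=5, padding=2)    # reconstruction

    def forward(self, low_res, scale=2):
        # Upsample first (bicubic), then let the CNN restore high-frequency detail.
        x = F.interpolate(low_res, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        x = F.relu(self.feat(x))
        x = F.relu(self.map(x))
        return self.rec(x)

model = TinySRCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
# Training loop (low_res / high_res iris patch tensors assumed available):
# pred = model(low_res); loss = loss_fn(pred, high_res)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```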
|
52 |
Représentations optimales pour la recherche dans les bases d'images patrimoniales / Optimal representations for searching heritage image databases. Negrel, Romain, 03 December 2014 (has links)
Depuis plusieurs décennies, le développement des technologies de numérisation et de stockage a permis la mise en œuvre de nombreux projets de numérisation du patrimoine culturel. L'approvisionnement massif et continu de ces bases de données numériques du patrimoine culturel entraîne de nombreux problèmes d'indexation. En effet, il n'est plus possible d'effectuer une indexation manuelle de toutes les données. Pour indexer et rendre accessible facilement les données, des méthodes d'indexation automatique et d'aide à l'indexation se sont développées depuis plusieurs années. Cependant, les méthodes d'indexation automatique pour les documents non-textuels (image, vidéo, son, modèle 3D, …) sont encore complexes à mettre en œuvre pour de grands volumes de données. Dans cette thèse, nous nous intéressons en particulier à l'indexation automatique d'images. Pour effectuer des tâches d'indexation automatique ou d'aide à l'indexation, il est nécessaire de construire une méthode permettant d'évaluer la similarité entre deux images. Nos travaux sont basés sur les méthodes à signatures d'image ; ces méthodes consistent à résumer le contenu visuel de chaque image dans une signature (vecteur unique), puis d'utiliser ces signatures pour calculer la similarité entre deux images. Pour extraire les signatures, nous utilisons la chaîne d'extraction suivante : en premier, nous extrayons de l'image un grand nombre de descripteurs locaux ; puis nous résumons l'ensemble de ces descripteurs dans une signature de grande dimension ; enfin nous réduisons fortement la dimension de la signature. Les signatures de l'état de l'art basées sur cette chaîne d'extraction permettent d'obtenir de très bonnes performances en indexation automatique et en aide à l'indexation. Cependant, les méthodes de l'état de l'art ont généralement de forts coûts mémoires et calculatoires qui rendent impossible leur mise en œuvre sur des grands volumes de données. Dans cette thèse, notre objectif est double : d'une part nous voulons améliorer les signatures d'images pour obtenir de très bonnes performances dans les problèmes d'indexation automatique ; d'autre part, nous voulons réduire les coûts de la chaîne de traitement, pour permettre le passage à l'échelle. Nous proposons des améliorations d'une signature d'image de l'état de l'art nommée VLAT (Vectors of Locally Aggregated Tensors). Ces améliorations permettent de rendre la signature plus discriminante tout en réduisant sa dimension. Pour réduire la dimension des signatures, nous effectuons une projection linéaire de la signature dans un espace de petite dimension. Nous proposons deux méthodes pour obtenir des projecteurs de réduction de dimension tout en conservant les performances des signatures d'origine. Notre première méthode consiste à calculer les projecteurs qui permettent d'approximer le mieux possible les scores de similarités entre les signatures d'origine. La deuxième méthode est basée sur le problème de recherche de quasi-copies ; nous calculons les projecteurs qui permettent de respecter un ensemble de contraintes sur le rang des images dans la recherche par rapport à l'image requête. L'étape la plus coûteuse de la chaîne d'extraction est la réduction de dimension de la signature à cause de la grande dimension des projecteurs. Pour les réduire, nous proposons d'utiliser des projecteurs creux en introduisant une contrainte de parcimonie dans nos méthodes de calcul des projecteurs. Comme il est généralement complexe de résoudre un problème d'optimisation avec une contrainte de parcimonie
stricte, nous proposons pour chacun des problèmes une méthode pour obtenir une approximation des projecteurs creux recherchés. L'ensemble de ces travaux font l'objet d'expériences montrant l'intérêt pratique des méthodes proposées par comparaison avec les méthodes de l'état de l'art. / In the last decades, the development of scanning and storage technologies resulted in many projects of cultural heritage digitization. The massive and continuous flow of digital data into cultural heritage databases causes many indexing problems. Indeed, it is no longer possible to perform a manual indexing of all data. To index the data and ease access to it, many methods of automatic and semi-automatic indexing have been proposed in recent years. The currently available methods for automatic indexing of non-textual documents (images, video, sound, 3D models, ...) are still too complex to implement for large volumes of data. In this thesis, we focus on the automatic indexing of images. To perform automatic or semi-automatic indexing, it is necessary to build an automatic method for evaluating the similarity between two images. Our work is based on image signature methods; these methods involve summarising the visual content of each image in a signature (a single vector), and then using these signatures to compute the similarity between two images. To extract the signatures, we use the following pipeline: first, we extract a large number of local descriptors from the image; then we summarize all these descriptors in a large signature; finally, we strongly reduce the dimensionality of the resulting signature. The state-of-the-art signatures based on this pipeline provide very good performance in automatic indexing. However, these methods generally incur high storage and computational costs that make their implementation impossible on large volumes of data. In this thesis, our goal is twofold: first, we wish to improve the image signatures to achieve very good performance in automatic indexing problems; second, we want to reduce the cost of the processing chain to enable scalability. We propose improvements to a state-of-the-art image signature named VLAT (Vectors of Locally Aggregated Tensors). Our improvements increase the discriminative power of the signature. To reduce the size of the signatures, we perform linear projections of the signatures into a lower-dimensional space. We propose two methods to compute the projectors while maintaining the performance of the original signatures. Our first approach is to compute the projectors that best approximate the similarities between the original signatures. The second method is based on the retrieval of quasi-copies; we compute the projectors that meet a set of constraints on the rank of retrieved images with respect to the query image. The most expensive step of the extraction pipeline is the dimensionality reduction step; these costs are due to the large dimensionality of the projectors. To reduce these costs, we propose to use sparse projectors by introducing a sparsity constraint in our methods. Since it is generally complex to solve an optimization problem with a strict sparsity constraint, we propose for each problem a method for approximating the sparse projectors. This thesis work is the subject of experiments showing the practical value of the proposed methods in comparison with existing methods
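To make the projection-learning idea concrete, here is a small sketch of one standard way to learn a linear projector that approximately preserves the dot-product similarities between signatures (essentially a truncated SVD); it is an illustration under simplifying assumptions, not the thesis's exact formulation, and it omits the ranking and sparsity constraints.

```python
# Hedged sketch: learn a linear projector P so that (P x)^T (P y) approximates
# x^T y for a set of image signatures. This truncated-SVD shortcut is only an
# illustration of the idea; the thesis's optimization (including quasi-copy
# ranking constraints and sparsity) is more elaborate.
import numpy as np

def learn_projector(signatures, target_dim):
    """signatures: (n_images, d) matrix of high-dimensional signatures."""
    # Dot-product similarities are best preserved by projecting onto the
    # leading right singular vectors of the signature matrix.
    _, _, vt = np.linalg.svd(signatures, full_matrices=False)
    return vt[:target_dim]                     # (target_dim, d) projector

def project(signatures, projector):
    return signatures @ projector.T            # (n_images, target_dim)

# Toy usage with random vectors standing in for VLAT signatures.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4096))
P = learn_projector(X, target_dim=128)
Xr = project(X, P)
orig_sim = X @ X.T
approx_sim = Xr @ Xr.T
print("similarity approximation error:",
      np.linalg.norm(orig_sim - approx_sim) / np.linalg.norm(orig_sim))
```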
|
53 |
Introdução de dados auxiliares na classificação de imagens digitais de sensoriamento remoto aplicando conceitos da teoria da evidência. Lersch, Rodrigo Pereira, January 2008 (has links)
Nesta tese investiga-se uma nova abordagem visando implementar os conceitos propostos na Teoria da Evidência para fins de classificação de imagens digitais em Sensoriamento Remoto. Propõe-se aqui a utilização de variáveis auxiliares, estruturadas na forma de Planos de Informação (P.I.s) como em um SIG, para gerar dados de confiança e de plausibilidade. São então aplicados limiares aos dados de confiança e de plausibilidade, com a finalidade de detectar erros de inclusão e de omissão, respectivamente, na imagem temática. Propõe-se nesta tese que estes dois limiares sejam estimados em função das acurácias do usuário e do produtor. A metodologia proposta nesta tese foi testada em uma área teste, coberta pela classe Mata Nativa com Araucária. O experimento mostrou que a metodologia aqui proposta atinge seus objetivos. / In this thesis we investigate a new approach to applying concepts from the Theory of Evidence to Remote Sensing digital image classification. In the proposed approach, auxiliary variables are structured as layers in a GIS-like format to produce layers of belief and plausibility. Thresholds are applied to the layers of belief and plausibility to detect errors of commission and omission, respectively, on the thematic image. The thresholds are estimated as functions of the user's and producer's accuracies. Preliminary tests were performed over an area covered by natural forest with Araucaria, showing some promising results.
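The thresholding step described above can be pictured with a small sketch; the threshold values and array shapes are hypothetical, and the belief/plausibility computation itself (the evidential combination of the auxiliary layers) is assumed to have been done already.

```python
# Hedged sketch: flag likely commission/omission errors in a thematic map by
# thresholding per-pixel belief and plausibility layers. Threshold values here
# are placeholders; the thesis estimates them from user's/producer's accuracy.
import numpy as np

def flag_errors(belief, plausibility, class_map, target_class,
                belief_threshold=0.4, plausibility_threshold=0.6):
    """belief, plausibility: per-pixel support for `target_class` in [0, 1]."""
    assigned = class_map == target_class
    # Pixels labelled as the class but with low belief: suspected commission errors.
    commission = assigned & (belief < belief_threshold)
    # Pixels not labelled as the class but with high plausibility: suspected omissions.
    omission = (~assigned) & (plausibility > plausibility_threshold)
    return commission, omission

# Toy usage on a 4x4 scene.
rng = np.random.default_rng(1)
belief = rng.uniform(size=(4, 4))
plausibility = np.clip(belief + rng.uniform(0, 0.3, size=(4, 4)), 0, 1)
class_map = rng.integers(0, 2, size=(4, 4))
comm, omit = flag_errors(belief, plausibility, class_map, target_class=1)
print(comm.sum(), "commission flags;", omit.sum(), "omission flags")
```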
|
54 |
Visual feature learning with application to medical image classification. Manivannan, Siyamalan, January 2015
Various hand-crafted features have been explored for medical image classification, including SIFT and Local Binary Patterns (LBP). However, hand-crafted features may not be optimally discriminative for classifying images from particular domains (e.g. colonoscopy), as they are not necessarily tuned to the domain's characteristics. In this work, I focus on learning highly discriminative local features and image representations to achieve the best possible classification performance for medical images, particularly for colonoscopy and histology (cell) images. I propose approaches to learn local features using unsupervised and weakly-supervised methods, and an approach to improve feature encoding methods such as bag-of-words. Unlike existing work, the proposed weakly-supervised approach uses image-level labels to learn the local features. Requiring image labels instead of region-level labels makes annotation less expensive and closer to the data routinely available from normal clinical practice, hence more feasible in practice. In this thesis, first, I propose a generalised version of the LBP descriptor called Generalised Local Ternary Patterns (gLTP), which is inspired by the success of LBP and its variants for colonoscopy image classification. gLTP is robust to both noise and illumination changes, and I demonstrate its competitive performance compared to the best performing LBP-based descriptors on two different datasets (colonoscopy and histology). However, LBP-based descriptors (including gLTP) lose information due to the binarisation step involved in their construction. Therefore, I then propose a descriptor called the Extended Multi-Resolution Local Patterns (xMRLP), which is real-valued and reduces information loss. I propose unsupervised and weakly-supervised learning approaches to learn the set of parameters in xMRLP. I show that the learned descriptors give competitive or better performance compared to other descriptors such as root-SIFT and Random Projections. Finally, I propose an approach to improve feature encoding methods. The approach captures inter-cluster features, providing context information in the feature as well as in the image space, in addition to the intra-cluster features often captured by conventional feature encoding approaches. The proposed approaches have been evaluated on three datasets: 2-class colonoscopy (2,100 images), 3-class colonoscopy (2,800 images), and histology (a public dataset containing 13,596 images). Some experiments on radiology images (the public IRMA dataset) are also presented. I show state-of-the-art or superior classification performance on the colonoscopy and histology datasets.
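For background on the descriptor family this entry builds on, below is a small sketch of the basic radius-1 LBP computation; gLTP and xMRLP generalize this idea, and this sketch makes no attempt to reproduce them.

```python
# Hedged sketch: basic 3x3 Local Binary Pattern codes, the descriptor family
# that gLTP/xMRLP generalize. This is the textbook radius-1 formulation only,
# not the thesis's descriptors.
import numpy as np

def lbp_codes(gray):
    """gray: 2-D float array; returns LBP codes for interior pixels."""
    center = gray[1:-1, 1:-1]
    # 8 neighbours in a fixed clockwise order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray):
    """256-bin normalized histogram used as a simple texture feature."""
    hist = np.bincount(lbp_codes(gray).ravel(), minlength=256)
    return hist / hist.sum()

image = np.random.default_rng(0).uniform(size=(64, 64))
print(lbp_histogram(image)[:8])
```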
|
55 |
An HMM-based segmentation method for traffic monitoring movies. Kato, Jien, Watanabe, Toyohide, Joga, Sebastien, Rittscher, Jens, Blake, Andrew, 加藤, ジェーン, 渡邉, 豊英, 09 1900 (has links)
No description available.
|
56 |
Evaluating the effect of different distances on the pixels per object and image classification. Samaei, Amiryousef, January 2015
In the last decades, camera systems have continuously evolved and have found a wide range of applications. One of the main applications of a modern camera system is surveillance of outdoor areas. The camera system, based on local computations, can detect and classify objects autonomously. However, the distance of the objects from the camera plays a vital role in the classification results. This can be especially challenging when lighting conditions vary. Therefore, in this thesis, we examine the effect of changing distance on an object in terms of its number of pixels. In addition, the effect of distance on classification is studied by preparing four different data sets. To maintain a high signal-to-noise ratio, we integrate thermal and visual image sensors for the same test in order to achieve better spectral resolution. In this study, four different data sets, thermal, visual, binary from visual, and binary from thermal, have been prepared to train the classifier. The categorized objects include bicycle, human, and vehicle. Comparative studies have been performed in order to identify the data sets' accuracy. It has been demonstrated that, for fixed distances, bi-level data sets obtained from visual images have better accuracy. Using our setup, the object (human) with a length of 179 and a width of 30 has been classified correctly with minor errors up to 150 meters for thermal, visual, as well as binary from visual. Moreover, for bi-level images from thermal, the human object has been correctly classified as far away as 250 meters.
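The geometric relationship between distance and the number of pixels an object occupies can be sketched with the standard pinhole model below; the focal length, pixel pitch, and object size used are hypothetical values for illustration, not parameters from this thesis's setup.

```python
# Hedged sketch: pinhole-camera estimate of how many pixels an object spans at
# different distances. Focal length, pixel pitch, and object height here are
# made-up illustration values, not the thesis's camera parameters.

def pixels_on_target(object_height_m, distance_m, focal_length_mm=50.0,
                     pixel_pitch_um=5.0):
    """Approximate vertical pixel extent of an object at a given distance."""
    image_height_mm = focal_length_mm * object_height_m / distance_m
    return image_height_mm * 1000.0 / pixel_pitch_um  # mm -> um, divided by pitch

for d in (50, 100, 150, 200, 250):
    print(f"{d:4d} m -> ~{pixels_on_target(1.8, d):6.1f} px for a 1.8 m target")
```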
|
57 |
Building and Using Knowledge Models for Semantic Image Annotation. Bannour, Hichem, 08 February 2013 (has links) (PDF)
This dissertation proposes a new methodology for building and using structured knowledge models for automatic image annotation. Specifically, our first proposals deal with the automatic building of explicit and structured knowledge models, such as semantic hierarchies and multimedia ontologies, dedicated to image annotation. We thereby propose a new approach for building semantic hierarchies that are faithful to image semantics. Our approach is based on a new image-semantic similarity measure between concepts and on a set of rules that connect the concepts with the highest relatedness until the final hierarchy is built. Afterwards, we propose to go further in the modeling of image semantics by building explicit knowledge models that incorporate richer semantic relationships between image concepts. We therefore propose a new approach for automatically building multimedia ontologies consisting of subsumption relationships between concepts as well as other semantic relationships such as contextual and spatial relations. Fuzzy description logics are used as a formalism to represent our ontology and to deal with the uncertainty and imprecision of concept relationships. In order to assess the effectiveness of the built structured knowledge models, we subsequently propose to use them in a framework for image annotation. We therefore propose an approach, based on the structure of semantic hierarchies, to effectively perform hierarchical image classification. Furthermore, we propose a generic approach for image annotation that combines machine learning techniques, such as hierarchical image classification, and fuzzy ontological reasoning in order to achieve semantically relevant image annotation. Empirical evaluations of our approaches have shown significant improvement in image annotation accuracy.
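The hierarchy-building step can be pictured with the small sketch below: concepts are greedily linked by a similarity score until a single root remains. The similarity matrix and linkage rule here are placeholders, whereas the dissertation defines a dedicated image-semantic similarity measure and a richer rule set.

```python
# Hedged sketch: greedily link the most related concept pair under a new parent
# until one root remains. The toy similarity matrix and max-linkage rule are
# illustrations only; the dissertation's measure and rules are more elaborate.
import numpy as np

def build_hierarchy(concepts, similarity):
    """concepts: list of names; similarity: symmetric (n, n) matrix."""
    nodes = [(name,) for name in concepts]          # leaves as 1-tuples
    sim = similarity.astype(float).copy()
    np.fill_diagonal(sim, -np.inf)
    while len(nodes) > 1:
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        i, j = min(i, j), max(i, j)
        parent = (nodes[i], nodes[j])               # new internal node
        keep = [k for k in range(len(nodes)) if k not in (i, j)]
        # Parent-vs-rest score: max of the two children (a simple linkage rule).
        merged = np.maximum(sim[i], sim[j])[keep]
        nodes = [nodes[k] for k in keep] + [parent]
        sim = sim[np.ix_(keep, keep)]
        sim = np.vstack([np.column_stack([sim, merged]),
                         np.append(merged, -np.inf)])
    return nodes[0]

names = ["cat", "dog", "car", "truck"]
S = np.array([[1.0, 0.8, 0.1, 0.1],
              [0.8, 1.0, 0.1, 0.2],
              [0.1, 0.1, 1.0, 0.9],
              [0.2, 0.1, 0.9, 1.0]])
print(build_hierarchy(names, S))
```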
|
58 |
Discrimination of Agricultural Land Management Practices using Polarimetric Synthetic Aperture RADAR. McKeown, Steven, 04 September 2012
This thesis investigates the sensitivity and separability of post-harvest tillage conditions using polarimetric Synthetic Aperture RADAR in southwestern Ontario. The variables examined include the linear polarizations HH, HV, and VV, and the polarimetric variables pedestal height, co-polarized complex correlation coefficient magnitude, left and right co-polarized circular polarizations, and co-polarized phase difference. Six fine-quad polarimetric, high incidence angle (49°) RADARSAT-2 images acquired over three dates in fall 2010 were used. Over 100 fields were monitored, coincident with satellite overpasses. OMAFRA's AgRI, a high-resolution polygon network, was used to extract the average response from fields. Discrimination between tillage practices was best later in the fall season, due to sample size and low soil moisture conditions. The variables most sensitive to tillage activities include the HH and VV polarizations and the co-polarized complex correlation coefficient magnitude. A supervised support vector machine (SVM) classifier classified no-till and conventional tillage with 91.5% overall accuracy. These results highlight the potential of RADARSAT-2 for monitoring tillage conditions.
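A minimal sketch of the kind of supervised SVM classification described here is shown below; the feature names mirror the variables listed in the abstract, but the synthetic data, preprocessing, and kernel settings are assumptions rather than the thesis's actual workflow.

```python
# Hedged sketch: SVM classification of per-field polarimetric features into
# tillage classes. Feature names follow the abstract; the synthetic data,
# scaling, and kernel choice are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FEATURES = ["HH", "HV", "VV", "pedestal_height",
            "copol_correlation_magnitude", "copol_phase_difference"]

# Placeholder per-field feature matrix and labels (0 = no-till, 1 = conventional).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, len(FEATURES)))
y = rng.integers(0, 2, size=100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("overall accuracy:", clf.score(X_test, y_test))
```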
|
59 |
Developing image informatics methods for histopathological computer-aided decision support systems. Kothari, Sonal, 12 January 2015
This dissertation focuses on developing imaging informatics algorithms for clinical decision support systems (CDSSs) based on histopathological whole-slide images (WSIs). Currently, histopathological analysis is a common clinical procedure for diagnosing cancer presence, type, and progression. While diagnosing patients using biopsy slides, pathologists manually assess nuclear morphology. However, making decisions manually from a slide with millions of nuclei can be time-consuming and subjective. Researchers have proposed CDSSs that help in decision making, but they have limited reproducibility. The development of robust CDSSs for WSIs faces several informatics challenges: (1) lack of robust segmentation methods for histopathological images, (2) a semantic gap between quantitative information and the pathologist's knowledge, (3) lack of batch-invariant imaging informatics methods, (4) lack of knowledge models for capturing informative patterns in large WSIs, and (5) lack of guidelines for optimizing and validating diagnostic models. I conducted advanced imaging informatics research to overcome these challenges and developed novel methods to extract information from WSIs, to model knowledge embedded in large histopathological datasets, such as The Cancer Genome Atlas (TCGA), and to assist decision making with biological and clinical validation. I validated my methods for two applications: (1) diagnosis of histopathology-based endpoints such as subtype and grade, and (2) prediction of clinical endpoints such as metastasis, stage, lymph node spread, and survival. The statistically emergent feature subsets in the diagnostic models for histopathology-based endpoints were concordant with pathologists' knowledge.
|
60 |
Developing An Integrated System For Semi-automated Segmentation Of Remotely Sensed Imagery. Kok, Emre Hamit, 01 May 2005 (has links) (PDF)
Classification of agricultural fields using remote sensing images is one of the most popular methods used for crop mapping. Most recent classification techniques are based on a per-field approach that assigns a crop label to each field. Commonly, spatial vector data provides the field boundaries within which the classification is applied. However, crop variation within the fields is a very common problem. In this case, the existing field boundaries may be insufficient for performing the field-based classification, and therefore image segmentation needs to be employed to detect the homogeneous segments within the fields.
This study proposes a field-based approach to segment the crop fields in an image within an integrated environment of Geographic Information System (GIS) and Remote Sensing. In this method, each field is processed separately and the segments within each field are detected. First, edge detection is applied to the images, and the detected edges are vectorized to generate straight line segments. Next, these line segments are correlated with the existing field boundaries using perceptual grouping techniques to form closed regions in the image. The closed regions represent the segments, each of which contains a distinct crop type. To implement the proposed methodology, a software system was developed. The implementation was carried out using 10 meter spatial resolution SPOT 5 and 20 meter spatial resolution SPOT 4 satellite images covering a part of the Karacabey Plain, Turkey. Evaluations of the obtained results are presented using different band combinations of the images.
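The first two steps of this pipeline, edge detection and vectorization into straight line segments, can be sketched with OpenCV as below; the Canny and Hough parameters are illustrative guesses, and the subsequent perceptual-grouping step against the field boundaries is only indicated, not implemented.

```python
# Hedged sketch: edge detection followed by straight-line-segment extraction,
# the first two stages of the segmentation pipeline described above. Parameter
# values are illustrative; the perceptual grouping against field boundaries and
# closed-region formation are not reproduced here.
import cv2
import numpy as np

def extract_line_segments(band, low=50, high=150):
    """band: single-band image as a 2-D uint8 array (e.g. one SPOT band)."""
    edges = cv2.Canny(band, low, high)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=20, maxLineGap=3)
    return [] if segments is None else segments.reshape(-1, 4)  # (x1, y1, x2, y2)

# Toy usage with a synthetic field-like image.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (40, 40), (160, 160), 255, thickness=2)
for x1, y1, x2, y2 in extract_line_segments(img):
    print("segment:", (x1, y1), "->", (x2, y2))
# Next stage (not shown): group the segments with the existing field boundary
# vectors to close regions, each corresponding to a single crop type.
```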
|