21

Robust target localization and segmentation using statistical methods

Arif, Omar 05 April 2010 (has links)
This thesis aims to contribute to the area of visual tracking, which is the process of identifying an object of interest through a sequence of successive images. The thesis explores kernel-based statistical methods, which map the data to a higher-dimensional space. A pre-image framework is provided to find the mapping from the embedding space back to the input space for several manifold learning and dimensionality reduction algorithms. Two algorithms are developed for visual tracking that are robust to noise and occlusions. In the first algorithm, a kernel PCA-based eigenspace representation is used. The de-noising and clustering capabilities of the kernel PCA procedure lead to a robust algorithm. This framework is extended to incorporate background information in an energy-based formulation, which is minimized using graph cuts, and to track multiple objects using a single learned model. In the second method, a robust density comparison framework is developed and applied to visual tracking, where an object is tracked by minimizing the distance between a model distribution and given candidate distributions. The superior performance of kernel-based algorithms comes at the price of increased storage and computational requirements. A novel method is developed that takes advantage of the universal approximation capabilities of generalized radial basis function neural networks to reduce the computational and storage requirements of kernel-based methods.
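The kernel PCA de-noising idea described above can be sketched with off-the-shelf tools. The following is a minimal illustration, not the thesis's implementation: scikit-learn's `KernelPCA` with `fit_inverse_transform=True` learns an approximate pre-image map from the embedding space back to the input space, which projects noisy samples back toward the underlying manifold. The data, kernel parameters, and component count are arbitrary choices for the demo.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Noisy samples from a one-dimensional manifold (a circle) in the plane.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)
X = np.c_[np.cos(t), np.sin(t)] + rng.normal(scale=0.15, size=(500, 2))

# Kernel PCA with an RBF kernel; fit_inverse_transform=True learns an
# approximate pre-image map from the embedding space back to input space.
kpca = KernelPCA(n_components=8, kernel="rbf", gamma=1.0,
                 fit_inverse_transform=True, alpha=0.1)
Z = kpca.fit_transform(X)
X_denoised = kpca.inverse_transform(Z)

# The de-noised points should lie closer to the unit circle than the input.
err_noisy = np.abs(np.linalg.norm(X, axis=1) - 1).mean()
err_clean = np.abs(np.linalg.norm(X_denoised, axis=1) - 1).mean()
print(err_clean < err_noisy)
```

De-noising works here because the leading kernel principal components capture the smooth manifold structure while discarding directions dominated by noise.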
22

Kernel Methods Fast Algorithms and real life applications

Vishwanathan, S V N 06 1900 (has links)
Support Vector Machines (SVMs) have recently gained prominence in the field of machine learning and pattern classification (Vapnik, 1995; Herbrich, 2002; Schölkopf and Smola, 2002). Classification is achieved by finding a separating hyperplane in a feature space, which can be mapped back onto a non-linear surface in the input space. However, training an SVM involves solving a quadratic optimization problem, which tends to be computationally intensive. Furthermore, it can be subject to stability problems and is non-trivial to implement. This thesis proposes a fast iterative Support Vector training algorithm which overcomes some of these problems. Our algorithm, which we christen Simple SVM, works mainly for the quadratic soft-margin loss (also called the l2 formulation). We also sketch an extension for the linear soft-margin loss (also called the l1 formulation). Simple SVM works by incrementally changing a candidate Support Vector set using a locally greedy approach, until the supporting hyperplane is found within a finite number of iterations. It is derived by a simple (yet computationally crucial) modification of the incremental SVM training algorithm of Cauwenberghs and Poggio (2001), which allows us to perform update operations very efficiently. We give constant-time methods for initializing the algorithm, and experimental evidence for its speed compared to methods such as Sequential Minimal Optimization and the Nearest Point Algorithm. We present results on a variety of real-life datasets to validate our claims. In many real-life applications, especially for the l2 formulation, the kernel matrix K ∈ R^(n×n) can be written as K = Z^T Z + Λ, where Z ∈ R^(n×m) with m ≪ n, and Λ ∈ R^(n×n) is diagonal with non-negative entries. Hence the matrix K − Λ is rank-degenerate. Extending the work of Fine and Scheinberg (2001) and Gill et al. (1975), we propose an efficient factorization algorithm which can be used to find an LDL^T factorization of K in O(nm^2) time. The modified factorization, after a rank-one update of K, can be computed in O(m^2) time. We show how the Simple SVM algorithm can be sped up by taking advantage of this new factorization. We also demonstrate applications of our factorization to interior point methods. We show a close relation between the LDV factorization of a rectangular matrix and our LDL^T factorization (Gill et al., 1975). An important feature of SVMs is that they can work with data from any input domain as long as a suitable mapping into a Hilbert space can be found; in other words, given the input data we should be able to compute a positive semi-definite kernel matrix of the data (Schölkopf and Smola, 2002). In this thesis we propose kernels on a variety of discrete objects, such as strings, trees, Finite State Automata, and Pushdown Automata. We show that our kernels include as special cases the celebrated Pair-HMM kernels (Durbin et al., 1998; Watkins, 2000), the spectrum kernel (Leslie et al., 2002), convolution kernels for NLP (Collins and Duffy, 2001), graph diffusion kernels (Kondor and Lafferty, 2002) and various other string-matching kernels. Because of their widespread applications in bio-informatics and web-document-based algorithms, string kernels are of special practical importance. By intelligently using the matching statistics algorithm of Chang and Lawler (1994), we propose, perhaps, the first ever algorithm to compute string kernels in linear time. This obviates dynamic programming with quadratic time complexity and makes string kernels a viable alternative for the practitioner. We also propose extensions of our string kernels to compute kernels on trees efficiently. This thesis presents a linear-time algorithm for ordered trees and a log-linear-time algorithm for unordered trees.
In general, SVMs require time proportional to the number of Support Vectors for prediction. If the dataset is noisy, a large fraction of the data points become Support Vectors, and the time required for prediction increases. But in many applications, like search engines or web document retrieval, the dataset is noisy, yet the speed of prediction is critical. We propose a method for string kernels by which the prediction time can be reduced to linear in the length of the sequence to be classified, regardless of the number of Support Vectors. We achieve this by using a weighted version of our string kernel algorithm. We explore the relationship between dynamic systems and kernels. We define kernels on various kinds of dynamic systems, including Markov chains (both discrete and continuous), diffusion processes on graphs and Markov chains, Finite State Automata, and various linear time-invariant systems. Trajectories are used to define kernels on the initial conditions of the underlying dynamic system. The same idea is extended to define kernels on a dynamic system with respect to a set of initial conditions. This framework leads to a large number of novel kernels and also generalizes many previously proposed kernels. Lack of adequate training data is a problem which plagues classifiers. We propose a new method to generate virtual training samples in the case of handwritten digit data. Our method uses the two-dimensional suffix tree representation of a set of matrices to encode an exponential number of virtual samples in linear space, thus leading to an increase in classification accuracy. This, in turn, leads us naturally to a compact data-dependent representation of a test pattern, which we call the description tree. We propose a new kernel for images and demonstrate a quadratic-time algorithm for computing it by using the suffix tree representation of an image. We also describe a method to reduce the prediction time to quadratic in the size of the test image by using techniques similar to those used for string kernels.
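As a concrete example of one of the kernels named above, the k-spectrum kernel is the inner product of the k-mer count vectors of two strings. A hash-map implementation, a simple stand-in for the suffix-tree and matching-statistics machinery the thesis actually uses, already runs in O(|s| + |t|) time for fixed k:

```python
from collections import Counter

def spectrum_kernel(s: str, t: str, k: int = 3) -> int:
    """k-spectrum kernel: inner product of the k-mer count vectors of s and t.

    Counting k-mers with a hash map takes O(|s| + |t|) for fixed k; the
    suffix-tree approach removes the dependence on k and handles weighted
    variants, but this sketch shows the kernel itself.
    """
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs)

# "abc" occurs twice in "abcabc" and once in "abc", so the kernel value is 2.
print(spectrum_kernel("abcabc", "abc", 3))
```

The kernel is symmetric and positive semi-definite because it is an explicit dot product in the space of k-mer counts.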
23

kernlab - An S4 package for kernel methods in R

Karatzoglou, Alexandros, Smola, Alex, Hornik, Kurt, Zeileis, Achim January 2004 (has links) (PDF)
kernlab is an extensible package for kernel-based machine learning methods in R. It takes advantage of R's new S4 object model and provides a framework for creating and using kernel-based algorithms. The package contains dot product primitives (kernels), implementations of support vector machines and the relevance vector machine, Gaussian processes, a ranking algorithm, kernel PCA, kernel CCA, and a spectral clustering algorithm. Moreover, it provides a general-purpose quadratic programming solver and an incomplete Cholesky decomposition method. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
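The incomplete Cholesky decomposition mentioned in the summary admits a short sketch. This NumPy version with greedy diagonal pivoting illustrates the technique and is not kernlab's code; the tolerance and the test matrix are arbitrary:

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-8, max_rank=None):
    """Greedy pivoted (incomplete) Cholesky: K ≈ G @ G.T with few columns."""
    n = K.shape[0]
    max_rank = n if max_rank is None else max_rank
    G = np.zeros((n, max_rank))
    d = np.diag(K).astype(float).copy()   # residual diagonal of K - G @ G.T
    pivots = []
    for j in range(max_rank):
        i = int(np.argmax(d))
        if d[i] <= tol:                   # residual negligible: stop early
            return G[:, :j]
        pivots.append(i)
        G[i, j] = np.sqrt(d[i])
        rest = [r for r in range(n) if r not in pivots]
        G[rest, j] = (K[rest, i] - G[rest, :j] @ G[i, :j]) / G[i, j]
        d -= G[:, j] ** 2
    return G

rng = np.random.default_rng(0)
Z = rng.normal(size=(20, 3))
K = Z @ Z.T                               # exactly rank-3 PSD kernel matrix
G = incomplete_cholesky(K)
print(G.shape, float(np.abs(K - G @ G.T).max()))
```

For a kernel matrix of numerical rank m, the loop stops after m columns, giving the low-rank factor in O(nm^2) time instead of the O(n^3) of a full decomposition.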
24

Spectral Approaches to Learning Predictive Representations

Boots, Byron 01 September 2012 (has links)
A central problem in artificial intelligence is to choose actions to maximize reward in a partially observable, uncertain environment. To do so, we must obtain an accurate environment model, and then plan to maximize reward. However, for complex domains, specifying a model by hand can be a time-consuming process. This motivates an alternative approach: learning a model directly from observations. Unfortunately, learning algorithms often recover a model that is too inaccurate to support planning or too large and complex for planning to succeed; or, they require excessive prior domain knowledge or fail to provide guarantees such as statistical consistency. To address this gap, we propose spectral subspace identification algorithms which provably learn compact, accurate, predictive models of partially observable dynamical systems directly from sequences of action-observation pairs. Our research agenda includes several variations of this general approach: spectral methods for classical models like Kalman filters and hidden Markov models, batch algorithms and online algorithms, and kernel-based algorithms for learning models in high- and infinite-dimensional feature spaces. All of these approaches share a common framework: the model’s belief space is represented as predictions of observable quantities and spectral algorithms are applied to learn the model parameters. Unlike the popular EM algorithm, spectral learning algorithms are statistically consistent, computationally efficient, and easy to implement using established matrix algebra techniques. We evaluate our learning algorithms on a series of prediction and planning tasks involving simulated data and real robotic systems.
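The spectral recipe, representing state through observable quantities and recovering parameters with an SVD instead of EM, can be illustrated on a toy autonomous linear dynamical system. This sketch is not one of the thesis's algorithms; the system, dimensions, and thresholds are invented for the demo:

```python
import numpy as np

# True system: x_{t+1} = A x_t, y_t = C x_t  (unknown to the learner).
A = np.diag([0.9, 0.5])
C = np.array([[1.0, 1.0]])
x = np.array([1.0, 1.0])
y = []
for _ in range(40):
    y.append(float(C @ x))
    x = A @ x
y = np.array(y)

# Hankel matrix of outputs; its column space spans the observability subspace.
L, M = 10, 20
H = np.array([[y[i + j] for j in range(M)] for i in range(L)])

# Spectral step: the SVD reveals both the state dimension and the subspace.
U, s, Vt = np.linalg.svd(H)
k = int(np.sum(s > 1e-8 * s[0]))          # numerical rank = state dimension
U = U[:, :k]

# Shift invariance of the observability subspace recovers A up to similarity.
A_hat = np.linalg.pinv(U[:-1]) @ U[1:]
print(np.sort(np.linalg.eigvals(A_hat).real))
```

The recovered `A_hat` is only similar to the true `A`, but similarity preserves eigenvalues, which here come out close to the true 0.5 and 0.9; no iterative likelihood maximization is involved.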
25

Effective and Efficient Optimization Methods for Kernel Based Classification Problems

Tayal, Aditya January 2014 (has links)
Kernel methods are a popular choice in solving a number of problems in statistical machine learning. In this thesis, we propose new methods for two important kernel based classification problems: 1) learning from highly unbalanced large-scale datasets and 2) selecting a relevant subset of input features for a given kernel specification. The first problem is known as the rare class problem, which is characterized by a highly skewed or unbalanced class distribution. Unbalanced datasets can introduce significant bias in standard classification methods. In addition, due to the increase of data in recent years, large datasets with millions of observations have become commonplace. We propose an approach to address both the problem of bias and computational complexity in rare class problems by optimizing the area under the receiver operating characteristic (ROC) curve and by using a rare-class-only kernel representation, respectively. We justify the proposed approach theoretically and computationally. Theoretically, we establish an upper bound on the difference between selecting a hypothesis from a reproducing kernel Hilbert space and a hypothesis space which can be represented using a subset of kernel functions. This bound shows that for a fixed number of kernel functions, it is optimal to first include functions corresponding to rare class samples. We also discuss the connection of a subset kernel representation with the Nyström method for a general class of regularized loss minimization methods. Computationally, we illustrate that the rare class representation produces statistically equivalent test error results on highly unbalanced datasets compared to using the full kernel representation, but with significantly better time and space complexity. Finally, we extend the method to rare class ordinal ranking, and apply it to a recent public competition problem in health informatics. The second problem studied in the thesis is known in the literature as the feature selection problem.
Embedding feature selection in kernel classification leads to a non-convex optimization problem. We specify a primal formulation and solve the problem using a second-order trust region algorithm. To improve efficiency, we use the two-block Gauss-Seidel method, breaking the problem into a convex support vector machine subproblem and a non-convex feature selection subproblem. We reduce the possibility of saddle-point convergence and improve solution quality by sharing an explicit functional-margin variable between block iterates. We illustrate how our algorithm improves upon state-of-the-art methods.
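The rare-class-only kernel representation can be sketched in a few lines. This is an illustrative reading of the idea, not the thesis's AUC-optimizing algorithm: the decision function is expanded over kernel functions centred on rare-class samples only, so the feature matrix has one column per rare sample rather than one per training sample. The data, the logistic-regression stand-in for the actual learner, and all hyperparameters are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
# Highly unbalanced two-class problem: 500 majority vs 25 rare samples.
X_maj = rng.normal(0.0, 1.0, size=(500, 2))
X_rare = rng.normal(2.5, 0.5, size=(25, 2))
X = np.vstack([X_maj, X_rare])
y = np.r_[np.zeros(500), np.ones(25)]

# Rare-class-only kernel representation: the feature map is n x 25 (one
# column per rare sample) instead of the full n x 525 kernel matrix.
Phi = rbf_kernel(X, X_rare, gamma=0.5)
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(Phi, y)
print(round(roc_auc_score(y, clf.decision_function(Phi)), 3))
```

Storage and training cost scale with the number of rare samples, which is exactly why the subset representation pays off when the skew is extreme.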
26

Non-parametric regression methods

Βαρελάς, Γεώργιος 08 July 2011 (has links)
One thing that sets statisticians apart from other scientists is the public's relative ignorance of what the field of statistics actually is. People have a rough general idea of what chemistry or biology is, but what exactly do statisticians do? One answer to this question is the following: statistics is the science concerned with the collection, summarization, presentation, and interpretation of data. Data are the key, of course: the things from which we gain knowledge and make decisions. A data table presents a collection of valid data, but it is clearly quite inadequate for summarizing or interpreting them. The problem is that no assumptions have been made about the process that generated the data (put simply, the analysis is purely non-parametric, in the sense that no formal structure is imposed on the data). Consequently, no real summary or interpretation is possible. The classical approach to this difficulty is to assume a parametric model for the underlying process, specifying a concrete form for the underlying density. Various statistics can then be computed and presented through a fitted density. Unfortunately, the strength of parametric modelling is also its weakness. By committing to a specific model we can gain a great deal, but only if the model holds (at least approximately). If the assumed model is not correct, the conclusions we draw from it can be worse than useless, leading us to misleading interpretations of the data.
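The parametric-versus-nonparametric contrast drawn above can be made concrete with a kernel density estimate, the standard nonparametric density tool. This small NumPy sketch is illustrative; the bimodal data and the bandwidth are invented:

```python
import numpy as np

def kde(x, data, h):
    """Gaussian kernel density estimate: average of kernels centred at the data."""
    u = (x - data[:, None]) / h
    return (np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)).mean(axis=0) / h

rng = np.random.default_rng(1)
# Bimodal sample: a single fitted normal (the parametric route) misses both modes.
data = np.r_[rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)]
grid = np.linspace(-4, 4, 81)
f = kde(grid, data, h=0.3)

# The non-parametric estimate peaks near the true modes at -2 and +2,
# whereas a fitted N(mean, sd) would peak misleadingly near 0.
print(f[20] > f[40], f[60] > f[40])
```

No functional form is imposed on the density; the only choice is the bandwidth `h`, which controls the bias-variance trade-off of the estimate.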
27

Recursive kernel techniques

Βουγιούκας, Κωνσταντίνος 03 October 2011 (has links)
This thesis deals with predicting the output of non-linear systems using recursive algorithms based on kernel functions. We present our own recursive prediction algorithm and examine how it performs relative to an existing, widely used algorithm. In the first chapter we give a short description of the problem to be solved and then show how kernel functions can help solve it. In the second chapter we analyse kernel functions and their properties in more detail. We present the basic theorems and show how the prediction problem is formulated once they are applied. We also show how the problem reduces to the familiar linear least squares problem when a linear kernel is used. In the third chapter we present our algorithm, explaining the reasoning that led to it, and describe another algorithm already in use for such problems. In the fourth chapter we run a series of MATLAB simulations to evaluate how well our algorithm predicts the outputs of non-linear systems, and compare it against the competing algorithm. In our experiments we examine the prediction error of the two algorithms, their convergence speed, and their robustness. Finally, we present our conclusions, explaining why we believe our approach is superior to the alternative.
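The flavour of recursive kernel prediction can be sketched as follows. This is not the thesis's algorithm: it is a naive online kernel ridge predictor that re-solves the full system at each step (a true recursive method would update the solution incrementally), with an invented test system and hyperparameters:

```python
import numpy as np

def rbf(a, b, gamma=5.0):
    """RBF kernel, broadcasting over the last axis."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

rng = np.random.default_rng(2)

# Online prediction of a non-linear system y = sin(3x) + noise:
# predict each new output from past samples, then add the sample.
lam, X, Y, sq_errs = 1e-2, [], [], []
for t in range(200):
    x = rng.uniform(-1, 1, size=1)
    y = np.sin(3 * x[0]) + 0.05 * rng.normal()
    if X:
        Xa, Ya = np.array(X), np.array(Y)
        k = rbf(Xa, x)                                    # kernels to new point
        K = rbf(Xa[:, None, :], Xa[None, :, :])           # Gram matrix (naive!)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), Ya)
        sq_errs.append(float((k @ alpha - y) ** 2))
    X.append(x)
    Y.append(y)

# Prediction error shrinks as more samples arrive.
print(np.mean(sq_errs[:20]) > np.mean(sq_errs[-20:]))
```

Re-solving costs O(t^3) per step; the point of a recursive formulation is to replace that with a cheap rank-one update of the stored solution.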
28

Species-independent MicroRNA Gene Discovery

Kamanu, Timothy K. 12 1900 (has links)
MicroRNA (miRNA) are a class of small endogenous non-coding RNA that are mainly negative transcriptional and post-transcriptional regulators in both plants and animals. Recent studies have shown that miRNA are involved in different types of cancer and other incurable diseases such as autism and Alzheimer's. Functional miRNAs are excised from hairpin-like sequences that are known as miRNA genes. There are about 21,000 known miRNA genes, most of which have been determined using experimental methods. miRNA genes are classified into different groups (miRNA families). This study reports about 19,000 previously unknown miRNA genes in nine species, whereby approximately 15,300 predictions were computationally validated to contain at least one experimentally verified functional miRNA product. The predictions are based on a novel computational strategy which relies on miRNA family groupings and exploits the physics and geometry of miRNA genes to unveil the hidden palindromic signals and symmetries in miRNA gene sequences. Unlike conventional computational miRNA gene discovery methods, the algorithm developed here is species-independent: it allows prediction at higher accuracy and resolution from arbitrary RNA/DNA sequences in any species and thus enables examination of repeat-prone genomic regions which are thought to be non-informative or 'junk' sequences. The information non-redundancy of uni-directional RNA sequences compared to the information redundancy of bi-directional DNA is demonstrated, a fact that is overlooked by most pattern discovery algorithms. A novel method for computing upstream and downstream miRNA gene boundaries based on mathematical/statistical functions is suggested, as well as cutoffs for the annotation of miRNA genes in different miRNA families. Another tool is proposed to allow hypothesis generation and visualization of data matrices, intra- and inter-species chromosomal distribution of miRNA genes or miRNA families.
Our results indicate that miRNA and miRNA genes are not only species-specific but may also be DNA strand-specific and chromosome-specific; that the genomic distribution of miRNA genes is conserved at the chromosomal level across species; that miRNA are conserved; that more than one miRNA with different regulatory targets can be excised from one miRNA gene; and that repeat-related miRNA and miRNA genes with palindromic sequences may be the largest subclass of the miRNA class to have eluded detection by most computational and experimental methods.
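The palindromic hairpin signal the method exploits can be illustrated with a toy scan for stems that can base-pair with their own reverse complement further downstream, the structural hallmark of a miRNA gene's hairpin. This is a crude illustrative proxy, not the thesis's algorithm; the sequences, stem length, and loop cutoff are invented:

```python
def revcomp(seq: str) -> str:
    """Reverse complement of an RNA sequence."""
    return seq.translate(str.maketrans("AUGC", "UACG"))[::-1]

def hairpin_score(seq: str, stem: int = 4, min_loop: int = 3) -> int:
    """Count positions where a stem of length `stem` can pair with its
    reverse complement further downstream: a crude proxy for the
    palindromic / hairpin signal that miRNA genes exhibit."""
    hits = 0
    for i in range(len(seq) - stem + 1):
        probe = revcomp(seq[i:i + stem])
        if probe in seq[i + stem + min_loop:]:
            hits += 1
    return hits

# A designed hairpin (GGGGCAUC pairs with downstream GAUGCCCC) scores
# higher than a homopolymer control of the same length.
hp = "GGGGCAUCAAAAGAUGCCCC"
print(hairpin_score(hp) > hairpin_score("A" * len(hp)))
```

Real discovery pipelines score thermodynamic stability and conservation as well, but the reverse-complement symmetry above is the palindromic core of the signal.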
29

A methodology for change detection in multitemporal remote sensing images using Support Vector Machines

Ferreira, Rute Henrique da Silva January 2014 (has links)
In this thesis, we investigate a supervised approach to change detection in multi-temporal remote sensing image data by applying the Support Vector Machine (SVM) technique with polynomial and Gaussian (RBF) kernels. The methodology is based on the difference of the fraction images produced for the two dates. In natural scenes, the differences in fractions such as vegetation and bare soil between two dates tend to present a distribution symmetric around the origin of the coordinate system. This fact can be used to model two multivariate normal distributions: change and no-change. The Expectation-Maximization (EM) algorithm is implemented to estimate the parameters (mean vector, covariance matrix and a priori probability) associated with these two distributions. Random samples are drawn from these distributions and used to train the SVM classifier in this supervised approach. The proposed methodology is tested on multi-temporal TM-Landsat multispectral image data covering the same scene on two different dates. The results are compared with other procedures, including previous work, a synthetic data set, and the One-Class SVM classifier.
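The EM-then-SVM pipeline described above can be sketched with scikit-learn. This is an illustrative reconstruction on invented synthetic fraction-difference pixels, not the thesis's code; the cluster locations, sample sizes, and kernel parameter are arbitrary:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic fraction-difference pixels: no-change near 0, change offset.
no_change = rng.normal(0.0, 0.05, size=(1000, 2))
change = rng.normal(0.4, 0.1, size=(100, 2))
pixels = np.vstack([no_change, change])

# EM estimates the two multivariate normals (mean, covariance, prior).
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)

# Draw labelled samples from the fitted components; the component whose
# mean lies farther from the origin is taken to be the change class.
X_train, comp = gmm.sample(400)
change_comp = int(np.argmax(np.linalg.norm(gmm.means_, axis=1)))
y_train = (comp == change_comp).astype(int)

# Train the RBF-kernel SVM on the sampled, EM-labelled data.
clf = SVC(kernel="rbf", gamma=2.0).fit(X_train, y_train)
pred = clf.predict(np.vstack([no_change[:50], change[:50]]))
print(pred[:50].mean(), pred[50:].mean())
```

The training labels come from the fitted mixture rather than from ground truth, which is how the methodology turns an unlabelled difference image into SVM training data.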
30

An approach to change detection in multitemporal remote sensing images using Support Vector Machines with a new membership metric

Angelo, Neide Pizzolato January 2014 (has links)
This thesis investigates an unsupervised approach to change detection in multispectral, multitemporal remote sensing images using Support Vector Machines (SVM) with polynomial and RBF kernels and a new pixel membership metric. The methodology is based on the difference of the fraction images produced for each date. In natural scenes, this difference in the bare-soil and vegetation fractions tends to have a symmetric distribution close to the origin. This feature can be used to model the multivariate normal distributions of the change and no-change classes. The Expectation-Maximization (EM) algorithm is implemented to estimate the parameters (mean vector, covariance matrix and a priori probability) associated with these two distributions. Normally distributed random samples are then drawn from these distributions and labelled according to their class membership. These samples are used to train the SVM classifier, and from the resulting classification a new pixel membership metric is estimated. The proposed methodology is tested on multitemporal sets of multispectral Landsat-TM images covering the same scene on two different dates. The proposed membership metric is validated using controlled test samples obtained with the Change Vector Analysis technique, and the membership results obtained for the original image with the new metric are compared with those obtained for the same image with the metric proposed in (Zanotta, 2010). Based on the results presented here, which show that the membership metric is valid and consistent with another membership technique published in the literature, and considering that only a few training samples were needed to obtain them, the metric is expected to outperform parametric classifiers when applied to multitemporal and hyperspectral images.
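One plausible way to turn SVM outputs into a per-pixel membership degree is to pass the signed distance to the decision boundary through a logistic function (Platt-style scaling). This sketch only illustrates that generic idea and is not the membership metric proposed in the thesis; the data and the scale constant are invented:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated classes standing in for no-change (0) and change (1).
X = np.vstack([rng.normal(0, 0.5, (200, 2)), rng.normal(2, 0.5, (200, 2))])
y = np.r_[np.zeros(200), np.ones(200)]
clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)

def membership(points, scale=2.0):
    """Map the signed distance to the SVM boundary through a logistic
    function: an illustrative membership degree in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-scale * clf.decision_function(points)))

# Near the change centre, near the no-change centre, and halfway between.
m = membership(np.array([[2.0, 2.0], [0.0, 0.0], [1.0, 1.0]]))
print(m.round(2))
```

Points deep inside a class map close to 0 or 1, while points near the boundary map to intermediate degrees, which is the qualitative behaviour a membership metric should have.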
