551

Schémas de classification et repérage des documents administratifs électroniques dans un contexte de gestion décentralisée des ressources informationnelles / Classification schemes and retrieval of electronic administrative documents in a context of decentralized management of information resources

Mas, Sabine 05 1900 (has links)
The employees of an organization often use a personal classification scheme to organize the electronic documents residing on their own workstations. As this may make it hard for other employees to retrieve these documents, the organization risks losing track of needed documentation.
To date, no empirical study has been conducted to verify whether personal classification schemes allow, or even facilitate, the retrieval of documents created and classified by someone else, in collaborative work, for example, or when it becomes necessary to reconstruct a “dossier”. The first objective of our research was to describe the characteristics of personal classification schemes used to organize and classify administrative electronic documents. Our second objective was to verify, in a controlled environment, differences in retrieval effectiveness linked to the characteristics of the classification schemes. More precisely, we wanted to verify whether a document could be found with the same effectiveness whatever the classification scheme used. Two stages of data collection were necessary to reach these objectives. We first identified the structural, logical and semantic characteristics of 21 classification schemes used by Université de Montréal employees to organize and classify the electronic documents residing on their own workstations. We then compared, in a controlled experiment, the capacity of 70 participants to find electronic documents with the help of five classification schemes exhibiting variations in their structural, logical and semantic characteristics. Three variables were used to measure retrieval effectiveness: the proportion of documents found, the average time (in seconds) needed to locate the documents, and the proportion of documents found on the first try. The results revealed several structural, logical and semantic characteristics common to a majority of personal classification schemes: extended macro-structures; shallow, complex and unbalanced structures; thematic grouping; alphabetical ordering of classes; etc. An analysis of variance revealed significant differences in retrieval effectiveness related to the structural, logical and semantic characteristics of the classification scheme.
A classification scheme characterized by a narrow macro-structure and a logic based on classes of activities increases the probability of finding documents more rapidly. On the semantic level, explicit naming of classes (for example, by using definitions or by avoiding acronyms and abbreviations) increases the probability of success in finding documents. Finally, a classification scheme characterized by a narrow macro-structure, a logic based on classes of activities, and semantics that use few abbreviations minimizes the risk of error and failure in retrieval.
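The differences in retrieval effectiveness above were tested with analysis of variance. As a rough illustration of the statistic involved (the data below are invented for the example, not taken from the thesis), a one-way ANOVA F ratio over retrieval times grouped by classification scheme can be computed as:

```python
# Sketch of a one-way ANOVA comparing mean retrieval time across
# classification schemes. All numbers here are hypothetical.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over lists of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: spread of group means around the grand mean
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    msb = ssb / (k - 1)
    msw = ssw / (n - k)
    return msb / msw

# Hypothetical retrieval times (seconds) under three classification schemes
times = [
    [42.0, 51.5, 38.2, 47.9],   # narrow macro-structure, activity-based logic
    [63.1, 58.4, 71.0, 66.3],   # extended macro-structure, thematic grouping
    [55.2, 60.7, 49.8, 58.1],   # mixed characteristics
]
f_stat = one_way_anova_f(times)
print(f"F = {f_stat:.2f}")
```

A large F relative to the F distribution's critical value for (k-1, n-k) degrees of freedom indicates that mean retrieval time differs across schemes; in practice one would use a statistics package to obtain the p-value.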
552

Polymorphism in twelve species of Neritidae (Mollusca: Gastropoda: Prosobranchia) from Hong Kong

Huang, Qin., 黃勤. January 1995 (has links)
published_or_final_version / Zoology / Doctoral / Doctor of Philosophy
553

Balkanisering och klassifikation : En komparativ studie av klassifikationen av forna Jugoslavien, beträffande språk, geografi och historia, i DDC och SAB / Balkanization and classification: a comparative study of the classification of the former Yugoslavia, with regard to language, geography and history, in DDC and SAB

Gustafsson, Oskar January 2014 (has links)
This master's thesis examines the possibilities of correction and change in a classification scheme with regard to the changes that occur in the world the classification system intends to describe. Applying a comparative method and classification theory, the classification of the former Yugoslavia (1918-1941, 1945-1991), its republics and successor states, and the languages formerly known as Serbo-Croatian, is examined through a comparison of the main classes and divisions of language, geography and history in the Dewey Decimal Classification (DDC) and Klassifikationssystem för svenska bibliotek [Classification System for Swedish Libraries] (SAB). Eight editions of DDC, from 1876 to 2014, are compared to seven editions of SAB, from 1921 to 2013. The editions were selected to show the changes prior to and following the First World War, the changes after the Second World War, and the changes following the collapse of Yugoslavia in 1991. The examination shows that both systems have updated their editions to reflect the changes in the former Yugoslavia over the years. DDC has well-constructed facet schedules, especially Table 2 concerning geography, but in some cases fails to construct a logical and hierarchical structure for the republics and languages of Yugoslavia, partly because of the fixed classes and divisions that survive from the very first edition of DDC of 1876, but also because of the limitations of the decimal notation itself. SAB seeks to construct a hierarchically logical and balanced scheme for the languages, areas and states of the former Yugoslavia. Although its facets for geography and chronology are not as developed as those in DDC, the overall result is a logically consistent and hierarchically clear classification with short notation codes, thanks to its mixed alphabetic notation, which allows more subdivisions than the numerals and pure notation of DDC.
This study is a two-year master's thesis in Archive, Library and Museum studies.
554

PATTERN RECOGNITION IN CLASS IMBALANCED DATASETS

Siddique, Nahian A 01 January 2016 (has links)
Class-imbalanced datasets constitute a significant portion of machine learning problems of interest, and recognizing the 'rare class' is the primary objective in most applications. Traditional linear machine learning algorithms are often not effective at recognizing the rare class. In this research work, a specifically optimized feed-forward artificial neural network (ANN) is proposed and developed to train on moderately to highly imbalanced datasets. The proposed methodology addresses the difficulty of the classification task in multiple stages: optimizing the training dataset, modifying the kernel function used to generate the Gram matrix, and optimizing the NN structure. First, the training dataset is extracted from the available sample set through an iterative process of selective under-sampling. Then, the proposed artificial NN incorporates a kernel function optimizer that enhances class boundaries for imbalanced datasets by conformally transforming the kernel functions. Finally, a single-hidden-layer weighted neural network structure is proposed to train models from the imbalanced dataset. The proposed NN architecture is derived to effectively classify any binary dataset, even with a very high imbalance ratio, given appropriate parameter tuning and a sufficient number of processing elements. The effectiveness of the proposed method is tested on accuracy-based performance metrics, achieving close to and above 90% on several imbalanced datasets of a generic nature, and compared with state-of-the-art methods. The proposed model is also used for classification of a 25 GB computed tomographic colonography database to test its applicability to big data. The effectiveness of under-sampling and of kernel optimization for training the NN model from the modified kernel Gram matrix representing the imbalanced data distribution is also analyzed experimentally. A computation time analysis shows the feasibility of the system for practical purposes.
The report concludes with a discussion of the prospects of the developed model and suggestions for further development work in this direction.
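The first stage of the pipeline, selective under-sampling of the majority class, can be sketched as follows. This is a simplified, hypothetical version (random rather than iterative selection, binary 0/1 labels assumed), not the thesis's actual algorithm:

```python
import random

def selective_undersample(samples, labels, rare_label, ratio=1.0, seed=0):
    """Under-sample the majority class so that the number of kept majority
    samples is at most ratio * (size of the rare class).
    Assumes binary labels in {0, 1}; a hypothetical simplification of the
    iterative selective under-sampling described in the abstract."""
    rng = random.Random(seed)
    rare = [s for s, l in zip(samples, labels) if l == rare_label]
    majority = [s for s, l in zip(samples, labels) if l != rare_label]
    keep = rng.sample(majority, min(len(majority), int(ratio * len(rare))))
    X = rare + keep
    y = [rare_label] * len(rare) + [1 - rare_label] * len(keep)
    return X, y

# Hypothetical imbalanced set: 10 'rare' positives vs 200 negatives
X, y = selective_undersample(list(range(210)), [1] * 10 + [0] * 200, rare_label=1)
print(len(X), sum(y))
```

An iterative variant would repeat the selection, preferring majority samples near the class boundary, which is closer in spirit to what the abstract describes.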
555

Minimisation de fonctions de perte calibrée pour la classification des images / Minimization of calibrated loss functions for image classification

Bel Haj Ali, Wafa 11 October 2013 (has links)
Image classification is a major challenge today: it concerns, on the one hand, the millions or even billions of images found all over the web and, on the other hand, images used in critical real-time applications.
This classification generally involves learning methods and classifiers that must deliver both accuracy and speed. These learning problems concern a large number of application areas: the web (profiling, targeting, social networks, search engines), "Big Data" and, of course, computer vision, such as object recognition and image classification. This thesis concerns the last category of applications and presents supervised learning algorithms based on the minimization of so-called "calibrated" loss (error) functions for two kinds of classifiers: k-Nearest Neighbours (kNN) and linear classifiers. These learning methods were tested on large image databases and then applied to biomedical images. In a first step, this thesis reformulates a kNN Boosting algorithm for large-scale classification, and then introduces a second method for learning these NN classifiers using a Newton descent approach for faster convergence. In a second part, the thesis introduces a new learning algorithm based on stochastic Newton descent for linear classifiers, which are known for their simplicity and computational speed. Finally, these three methods were used in a medical application concerning the classification of cells in biology and pathology.
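The logistic loss is one standard example of a calibrated loss, and Newton descent on it for a linear classifier can be sketched in a few lines. The toy below (one feature plus bias, full-batch steps, small ridge term) is an illustration of the general idea only, not the thesis's stochastic algorithm:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def newton_logistic(xs, ys, steps=10, ridge=1e-3):
    """Fit p(y=1|x) = sigmoid(w*x + b) by Newton's method on the logistic
    (calibrated) loss. Toy one-feature sketch of Newton descent for a
    linear classifier; `ridge` keeps the 2x2 Hessian invertible."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        g_w = g_b = 0.0
        h_ww = h_bb = ridge
        h_wb = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            g_w += (p - y) * x          # gradient of the logistic loss
            g_b += (p - y)
            s = p * (1.0 - p)           # curvature term sigma * (1 - sigma)
            h_ww += s * x * x
            h_wb += s * x
            h_bb += s
        # Newton step: (w, b) -= H^{-1} g, with H inverted in closed form
        det = h_ww * h_bb - h_wb * h_wb
        w -= (h_bb * g_w - h_wb * g_b) / det
        b -= (h_ww * g_b - h_wb * g_w) / det
    return w, b

w, b = newton_logistic([-2.0, -1.0, 1.0, 2.0], [0, 1, 1, 1])
print(w, b)
```

A stochastic variant, as in the thesis, would apply such steps on mini-batches with an approximate Hessian, trading per-step accuracy for much cheaper iterations on large image databases.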
556

Association Rule Based Classification

Palanisamy, Senthil Kumar 03 May 2006 (has links)
In this thesis, we focused on the construction of classification models based on association rules. Although association rules have been used predominantly for data exploration and description, interest in using them for prediction has rapidly increased in the data mining community. In order to mine only rules that can be used for classification, we modified the well-known association rule mining algorithm Apriori to handle user-defined input constraints. We considered constraints that require the presence or absence of particular items, or that limit the number of items, in the antecedents and/or consequents of the rules. We developed a characterization of those itemsets that can potentially form rules satisfying the given constraints. This characterization allows us to prune, during itemset construction, itemsets such that neither they nor any of their supersets will form valid rules, which improves the time performance of itemset construction. Using this characterization, we implemented a classification system based on association rules and compared the performance of several model construction methods, including CBA, and several model deployment modes for making predictions. Although the data mining community has dealt only with the classification of single-valued attributes, there are several domains in which the classification target is set-valued. Hence, we enhanced our classification system with a novel approach to handle the prediction of set-valued class attributes. Since the traditional classification accuracy measure is inappropriate in this context, we developed an evaluation method for set-valued classification based on the E-measure. Furthermore, we enhanced our algorithm by not relying on the typical support/confidence framework, instead mining for the best possible rules above a user-defined minimum confidence and within a desired range for the number of rules.
This avoids long mining times that might produce large collections of rules with low predictive power. For this purpose, we developed a heuristic function to determine an initial minimum support and then adjusted it using a binary search strategy until a number of rules within the given range was obtained. We implemented all of the techniques described above in WEKA, an open-source suite of machine learning algorithms, and used several datasets from the UCI Machine Learning Repository to test and evaluate them.
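The binary search over the minimum-support threshold relies on the fact that raising support can only shrink the rule set. A minimal sketch of that adjustment loop, with `count_rules` standing in as a hypothetical callback to the miner:

```python
def tune_min_support(count_rules, lo=0.0, hi=1.0, target=(50, 200), max_iter=20):
    """Binary-search a minimum-support threshold until the number of mined
    rules falls in the desired range. `count_rules(support)` is assumed to be
    non-increasing in support (higher support -> fewer rules). Hypothetical
    sketch of the strategy described in the abstract."""
    lo_n, hi_n = target
    for _ in range(max_iter):
        mid = (lo + hi) / 2.0
        n = count_rules(mid)
        if lo_n <= n <= hi_n:
            return mid
        if n > hi_n:      # too many rules: raise the support threshold
            lo = mid
        else:             # too few rules: lower the support threshold
            hi = mid
    return (lo + hi) / 2.0

# Toy miner: rule count falls linearly as the support threshold rises
best = tune_min_support(lambda s: int(1000 * (1 - s)), target=(50, 200))
print(best)
```

In the thesis the starting threshold comes from a heuristic rather than the fixed `lo`/`hi` bounds used here, but the halving logic is the same.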
557

An approach to resource modelling in support of the life cycle engineering of enterprise systems

Li, Guihua January 1997 (has links)
Enterprise modelling can facilitate the design, analysis, control and construction of contemporary enterprises which can compete in world-wide product markets. This research involves a systematic study of enterprise modelling with a particular focus on resource modelling in support of the life cycle engineering of enterprise systems. This led to the specification and design of a framework for resource modelling. This framework was conceived to: classify resource types; identify the different functions that resource modelling can support with respect to different life phases of enterprise systems; clarify the relationship between resource models and other modelling perspectives; provide mechanisms which link resource models and other types of models; and identify guidelines for the capture of information on resources, leading to the establishment of a set of resource reference models. The author also designed and implemented a resource modelling tool which conforms to the principles laid down by the framework. This tool realises important aspects of the resource modelling concepts so defined. Furthermore, two case studies have been carried out: one models a metal cutting environment, and the other is based on an electronics industry problem area. In this way, the feasibility of the concepts embodied in the framework and the design of the resource modelling tool have been tested and evaluated. Following a literature survey and preliminary investigation, the CIMOSA enterprise modelling and integration methodology was adopted and extended within this research. Here the resource modelling tool was built by extending SEWOSA (System Engineering Workbench for Open System Architecture) and utilising the CIMBIOSYS (CIM-Building Integrated Open SYStems) integrating infrastructure.
The main contributions of the research are that: a framework for resource modelling has been established; means and mechanisms have been proposed, implemented and tested which link and coordinate different modelling perspectives into a unified enterprise model; and the mechanisms and resource models generated by this research support each life phase of systems engineering projects and demonstrate benefits by increasing the degree to which the derivation process among models is automated.
558

Une nouvelle classification des myopathies inflammatoires fondée sur des manifestations cliniques et la présence d'auto-anticorps spécifiques par analyses multidimensionnelles / A new classification of inflammatory myopathies based on clinical manifestations and the presence of myositis-specific autoantibodies by multidimensional analysis

Mariampillai, Kubéraka 15 December 2017 (has links)
Idiopathic inflammatory myopathies (IIM, or myositis) are heterogeneous in their pathophysiology and prognosis. The emergence of myositis-specific autoantibodies (MSA) suggests more homogeneous subgroups of patients.
Our aim was to find a new classification of IIM based on phenotypic, biological and immunological criteria. An observational, retrospective, multicentre study was conducted using the database of the French myositis network. We included 260 adult myositis patients, defined according to the historical classifications for polymyositis (PM), dermatomyositis (DM) and inclusion body myositis (IBM). All patients were screened at least once with a line blot assay testing anti-Jo1, anti-PL7, anti-PL12, anti-Mi-2, anti-Ku, anti-PMScl, anti-Scl70 and anti-SRP. We performed multiple correspondence analysis followed by hierarchical clustering to aggregate patients into homogeneous subgroups. Four clusters emerged. The first cluster (n=77) grouped primarily IBM patients with vacuolated fibres, mitochondrial abnormalities and inflammation with invaded fibres. The second cluster (n=91) was characterized by immune-mediated necrotizing myopathy (IMNM) in the majority of patients, with anti-SRP and anti-HMGCR antibodies. The third cluster (n=52) grouped mainly DM patients with anti-Mi-2, anti-MDA5 or anti-TiF1 gamma antibodies. The fourth cluster (n=40) was defined by anti-synthetase syndrome (ASS), with the notable presence of anti-Jo1 or anti-PL7 antibodies. Histological criteria are dispensable for the prediction of the clusters, underlining the importance of a clinico-serological classification.
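The clustering step can be illustrated with a toy agglomerative procedure over binary clinico-serological profiles (each bit a hypothetical clinical sign or autoantibody). This stand-in uses average-linkage Hamming distance directly, whereas the study first reduced the data with multiple correspondence analysis:

```python
def hamming(a, b):
    """Count of positions where two binary profiles differ."""
    return sum(x != y for x, y in zip(a, b))

def agglomerate(profiles, n_clusters):
    """Minimal average-linkage agglomerative clustering over binary
    profiles; a simplified stand-in for the MCA + hierarchical clustering
    pipeline described in the abstract. Returns lists of profile indices."""
    clusters = [[i] for i in range(len(profiles))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average pairwise distance between the two clusters
                d = sum(hamming(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # merge the closest pair
        del clusters[j]
    return clusters

# Four invented patient profiles forming two clear groups
patients = [(1, 1, 0, 0), (1, 0, 0, 0), (0, 0, 1, 1), (0, 1, 1, 1)]
print(agglomerate(patients, 2))
```

On real data, the MCA projection weights the categorical variables before clustering, which is what lets the serological variables dominate the final four-cluster solution.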
559

Detecção de mudanças a partir de imagens de fração

Bittencourt, Helio Radke January 2011 (has links)
Land cover change detection is a major goal in multitemporal remote sensing applications. It is well known that images acquired on different dates tend to be highly influenced by radiometric differences and registration problems. Using fraction images, obtained from the linear model of spectral mixing (LMSM), radiometric problems can be minimized, and the interpretation of changes in land cover is facilitated because the fractions have a direct physical meaning. Furthermore, interpretations at the subpixel level are possible. This thesis presents three algorithms – hard, soft and fuzzy – for detecting changes between a pair of fraction images, producing change maps as final products. The algorithms require the assumption of multivariate normality for the fraction differences and very little intervention by the analyst. The hard algorithm creates binary change maps following the methodology of a hypothesis test, based on the fact that contours of constant density in the multivariate normal distribution are defined by chi-square values, according to the chosen confidence level. The soft algorithm generates estimates of the probability of each pixel belonging to the change class using a logistic regression model; these probabilities are used to create a map of change probabilities. The fuzzy approach is the one that best fits the concept behind fraction images, because changes in land use and land cover can occur at the subpixel level; on this basis, maps of membership degrees in the change class were created. Other mathematical and statistical tools were also used, such as morphological operations, ROC curves and clustering algorithms. The three algorithms were tested using synthetic and real (Landsat-TM) images and evaluated qualitatively and quantitatively. The results indicate the viability of using fraction images in change detection studies by means of the proposed algorithms.
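The hard algorithm's chi-square contour test can be sketched for the two-dimensional case (with three fractions summing to one, two difference components are independent). All means, covariances and differences below are invented for illustration; the threshold 5.99 is the 95% chi-square quantile for 2 degrees of freedom:

```python
def mahalanobis2(d, mean, cov):
    """Squared Mahalanobis distance of a 2-vector of fraction differences,
    with the 2x2 covariance inverted in closed form."""
    a, b = d[0] - mean[0], d[1] - mean[1]
    (c00, c01), (c10, c11) = cov
    det = c00 * c11 - c01 * c10
    return (c11 * a * a - (c01 + c10) * a * b + c00 * b * b) / det

def hard_change_map(diffs, mean, cov, threshold):
    """Label a pixel as 'change' when its fraction-difference vector falls
    outside the constant-density contour given by a chi-square quantile
    (e.g. 5.99 for 2 d.f. at the 95% confidence level)."""
    return [mahalanobis2(d, mean, cov) > threshold for d in diffs]

# Hypothetical no-change statistics and two pixel difference vectors
mean, cov = (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))
print(hard_change_map([(0.1, 0.1), (3.0, 3.0)], mean, cov, 5.99))
```

Lowering the threshold (a lower confidence level) flags more pixels as change, which is the trade-off the thesis evaluates with ROC curves.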
