561

Detecção de mudanças a partir de imagens de fração / Change detection from fraction images

Bittencourt, Helio Radke January 2011 (has links)
Land cover change detection is a major goal in multitemporal remote sensing applications. Images acquired on different dates tend to be strongly affected by radiometric differences and registration problems. Using fraction images obtained from the linear spectral mixture model (LSMM), radiometric problems can be minimized and the interpretation of land cover changes is made easier, because the fractions have a direct physical meaning; interpretation at the subpixel level also becomes possible. This thesis proposes three algorithms, hard, soft and fuzzy, for detecting changes between a pair of fraction images, producing change maps as final products. The algorithms assume multivariate normality of the fraction differences and require very little intervention by the analyst. The hard algorithm creates binary change maps following the methodology of a hypothesis test, based on the fact that the constant-density contours of the multivariate normal distribution are defined by chi-square values chosen according to the confidence level. The soft classifier estimates the probability that each pixel belongs to the change class using a logistic regression model; these probabilities are used to create a map of change probabilities. The fuzzy approach best fits the mixed-pixel concept, since land use and land cover changes can occur at the subpixel level; on this basis, maps of degrees of membership in the change class were created. Other mathematical and statistical tools were also used, such as morphological operations, ROC curves and clustering algorithms. The three algorithms were tested on synthetic and real (Landsat-TM) images and evaluated qualitatively and quantitatively. The results indicate that fraction images can be used in change detection studies by means of the proposed algorithms.
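The hard algorithm's decision rule lends itself to a compact sketch: the difference of two fraction images is tested pixel-wise against a chi-square threshold on its squared Mahalanobis distance. The snippet below is a minimal illustration under that assumption, not the thesis's implementation; note that if the fractions sum exactly to one, one band should be dropped so the covariance matrix stays nonsingular.

```python
import numpy as np
from scipy.stats import chi2

def hard_change_map(frac_t1, frac_t2, alpha=0.05):
    """Binary change map from a pair of fraction images.

    frac_t1, frac_t2: arrays of shape (rows, cols, n_fractions).
    A pixel is flagged as change when its fraction-difference vector
    falls outside the (1 - alpha) constant-density contour of a
    multivariate normal fitted to all difference vectors.
    """
    n_frac = frac_t1.shape[-1]
    diff = (frac_t2 - frac_t1).reshape(-1, n_frac)
    mu = diff.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(diff, rowvar=False))
    centered = diff - mu
    # Squared Mahalanobis distance of every pixel's difference vector
    d2 = np.einsum('ij,jk,ik->i', centered, inv_cov, centered)
    threshold = chi2.ppf(1 - alpha, df=n_frac)  # constant-density contour
    return (d2 > threshold).reshape(frac_t1.shape[:2])
```

A lower `alpha` (a higher confidence level) shrinks the set of pixels declared as change, mirroring the hypothesis-test analogy in the abstract.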
562

Data measures that characterise classification problems

Van der Walt, Christiaan Maarten 29 August 2008 (has links)
We have a wide range of classifiers today that are employed in numerous applications, from credit scoring to speech processing, with great technical and commercial success. No classifier, however, exists that will outperform all other classifiers on all classification tasks, and the process of classifier selection is still mainly one of trial and error. The optimal classifier for a classification task is determined by the characteristics of the data set employed; understanding the relationship between data characteristics and the performance of classifiers is therefore crucial to the process of classifier selection. Empirical and theoretical approaches have been employed in the literature to define this relationship. None of these approaches has, however, been very successful in accurately predicting or explaining classifier performance on real-world data. We use theoretical properties of classifiers to identify data characteristics that influence classifier performance; these data properties guide us in the development of measures that describe the relationship between data characteristics and classifier performance. We employ these data measures on real-world and artificial data to construct a meta-classification system. The purpose of this meta-classifier is two-fold: (1) to predict the classification performance of real-world classification tasks, and (2) to explain these predictions in order to gain insight into the properties of real-world data.
We show that these data measures can be employed successfully to predict the classification performance of real-world data sets; these predictions are accurate in some instances but there is still unpredictable behaviour in other instances. We illustrate that these data measures can give valuable insight into the properties and data structures of real-world data; these insights are extremely valuable for high-dimensional classification problems. / Dissertation (MEng)--University of Pretoria, 2008. / Electrical, Electronic and Computer Engineering / unrestricted
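The abstract does not list the measures themselves, but one classical data characteristic of this kind, the maximum Fisher discriminant ratio (a standard measure of class separability), can be sketched as follows. It is shown purely as an illustration of what a "data measure" looks like, not as one of the thesis's own measures.

```python
import numpy as np

def fisher_discriminant_ratio(X, y):
    """Maximum Fisher discriminant ratio over features for a two-class
    data set: (mu0 - mu1)^2 / (var0 + var1) per feature, then the
    maximum over all features. Higher values mean easier separation."""
    c0, c1 = X[y == 0], X[y == 1]
    numerator = (c0.mean(axis=0) - c1.mean(axis=0)) ** 2
    denominator = c0.var(axis=0) + c1.var(axis=0)
    return float(np.max(numerator / denominator))
```

A meta-classifier of the kind described would take a vector of such measures per data set as input and predict the expected classifier performance.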
563

Neural Networks for the Web Services Classification

Silva, Jesús, Senior Naveda, Alexa, Solórzano Movilla, José, Niebles Núñez, William, Hernández Palma, Hugo 07 January 2020 (has links)
This article introduces an n-gram-based approach to the automatic classification of Web services using a multilayer perceptron-type artificial neural network. Web service descriptions contain information that is useful for classification based on their functionality. The approach relies on word n-grams extracted from the web service description to determine its membership in a category. The experiments carried out show promising results, achieving a classification with an F-measure of 0.995 using word unigrams (features consisting of a single lexical unit) and TF-IDF weighting.
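A minimal sketch of the described pipeline, word n-grams weighted by TF-IDF feeding a multilayer perceptron, might look as follows; the corpus and category names here are invented toy data, not the article's data set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy web-service descriptions (hypothetical, not the article's corpus)
descriptions = [
    "get current weather forecast temperature city",
    "retrieve weather conditions humidity wind",
    "convert currency exchange rate dollar euro",
    "lookup currency conversion exchange amount",
]
categories = ["weather", "weather", "finance", "finance"]

# Word unigrams weighted by TF-IDF, fed to a multilayer perceptron
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 1)),
    MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                  max_iter=2000, random_state=0),
)
clf.fit(descriptions, categories)
```

Changing `ngram_range` to `(2, 2)` would switch the features to word bigrams, the other common configuration in n-gram studies of this kind.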
564

Classification Storage : A practical solution to file classification for information security / Classification Storage : En praktisk lösning till fil klassificering för informationssäkerhet

Sloof, Joël January 2021 (has links)
In the information age we currently live in, data has become the most valuable resource in the world. These data resources are high-value targets for cyber criminals and digital warfare. To mitigate these threats, information security measures, laws and legislation are required. It can be challenging for organisations to keep control over their data and to comply with laws and legislation that require data classification. Data classification is often required to determine appropriate security measures for storing sensitive data. The goal of this thesis is to create a system that makes it easy for organisations to handle file classifications and that raises information security awareness among users. In this thesis, the Classification Storage system is designed, implemented and evaluated. The Classification Storage system is a client-server solution that together creates a virtual filesystem. The virtual filesystem is presented as a single network drive, while data is stored separately based on the classifications set by users. The Classification Storage system is evaluated through a usability study. The study shows that users find the system intuitive and easy to use, and that they become more aware of information security by using it.
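The core routing idea, a single virtual namespace whose files are physically stored per classification, can be sketched as below. This is a hypothetical illustration, not the thesis's implementation; the backend names and the first-match resolution strategy are assumptions.

```python
from pathlib import Path

def store(backends, classification, virtual_path, data):
    """Write `data` into the backend matching `classification`,
    preserving the user-visible virtual path inside that backend."""
    target = backends[classification] / virtual_path.lstrip("/")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target

def resolve(backends, virtual_path):
    """Locate a virtual path by probing each backend (first match wins),
    so callers see one network drive regardless of where data lives."""
    for level, root in backends.items():
        candidate = root / virtual_path.lstrip("/")
        if candidate.exists():
            return level, candidate
    raise FileNotFoundError(virtual_path)
```

In a real deployment each backend root would map to storage with security controls appropriate to its classification level.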
565

Trust as a factor in the information classification process

Andersson, Simon January 2021 (has links)
Risk management is an important part of every business. To conduct it properly, risk assessment is needed, and within it, information classification. The information classification produces a list of information assets and states how they are valued within the organization; that list is then an important input to the risk assessment process. To conduct such a valuation, users are consulted, as they often understand the value of the information. However, using the CIA triad when communicating has proved difficult for users not knowledgeable in information security. Trust as a concept has been shown to have some connection to the concepts of the CIA triad and has been proposed as a possible translator to ease the communication of information value in the process of information classification. Semi-structured interviews were held with information security professionals to further understand the connection between the CIA-triad concepts and trust, and to gain further understanding of the important parts of information classification. A thematic analysis showed that confidentiality and integrity are prominent factors connecting to trust, while availability, although still mentioned as having a connection, was not as prominent. Further, the empirical data was used to build a model based on trust and importance that allows for a translation of the CIA-triad concepts. This resulted in a classification-scheme-based model that allows trust to be used as a translator of the CIA concepts, thus including trust as a concept in the information classification process.
566

Efficient extreme classification / Classification extreme a faible complexité

Cisse, Mouhamadou Moustapha 25 July 2014 (has links)
We propose in this thesis new methods to tackle classification problems with a very large number of labels, also called extreme classification. The proposed approaches aim at reducing the inference complexity in comparison with classical methods such as one-versus-rest, in order to make learning machines usable in real-life scenarios. We propose two types of methods, for single-label and multilabel classification respectively. The first approach uses existing hierarchical information among the categories to learn compact, low-dimensional binary representations of the categories. The second, dedicated to multilabel problems, adapts the framework of Bloom filters to represent subsets of labels as sparse, low-dimensional binary vectors. In both approaches, binary classifiers are learned to predict the low-dimensional representation of the categories, and algorithms are proposed to recover the set of relevant labels from the predicted representation. Large-scale experiments validate the methods and show performance superior to methods classically used for extreme classification.
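The Bloom filter encoding of label subsets can be sketched as follows: each label sets k bits of an m-bit vector, so a subset of labels becomes a sparse binary target that binary classifiers can predict bit by bit. The hash construction below (salted SHA-256) is an illustrative choice, not the thesis's.

```python
import hashlib

def _positions(label, m, k):
    """The k bit positions assigned to a label by k salted hashes."""
    return [
        int.from_bytes(hashlib.sha256(f"{i}:{label}".encode()).digest()[:4], "big") % m
        for i in range(k)
    ]

def bloom_encode(labels, m=64, k=3):
    """Encode a set of labels as an m-bit sparse binary vector."""
    bits = [0] * m
    for label in labels:
        for pos in _positions(label, m, k):
            bits[pos] = 1
    return bits

def bloom_contains(bits, label, m=64, k=3):
    """True if all of the label's bits are set: no false negatives,
    but false positives are possible, hence the recovery algorithms."""
    return all(bits[pos] for pos in _positions(label, m, k))
```

Recovering the relevant label set from a predicted bit vector then amounts to testing candidate labels against the vector while handling the possible false positives.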
567

Terrestrial Ecosystem Classification in the Rocky Mountains, Northern Utah

Kusbach, Antonin 01 May 2010 (has links)
Currently, there is no comprehensive terrestrial ecosystem classification for the central Rocky Mountains of the United States. A comprehensive classification of terrestrial ecosystems in a mountainous study area in northern Utah was developed incorporating direct gradient analysis, spatial hierarchy theory, the zonal concept, and concepts of diagnostic species and fidelity, together with the biogeoclimatic ecosystem classification approach used in British Columbia, Canada. This classification was derived from vegetation and environmental sampling of both forest and non-forest ecosystems. The SNOwpack TELemetry (SNOTEL) and National Weather Service (NWS) Cooperative Observer Program (COOP) weather station networks were used to approximate the climate of 163 sample plots. Within the large environmental diversity of the study area, three levels of ecosystem organization were distinguished: (1) macroclimatic, reflecting regional climate; (2) mesoclimatic, accounting for local climate and moisture distribution; and (3) edaphic, reflecting soil fertility. These three levels represent, in order, the L+1, L, and L-1 levels in a spatial hierarchy. Based on vegetation physiognomy, climatic data, and taxonomic classification of zonal soils, two vegetation geo-climatic zones were identified at the macroclimatic (L+1) level: (1) a montane zone with Rocky Mountain juniper and Douglas-fir; and (2) a subalpine zone with Engelmann spruce and subalpine fir as climatic climax species. A vegetation classification was developed by combining vegetation samples (relevés) into meaningful vegetation units. A site classification was developed based on dominant environmental gradients within the subalpine vegetation geo-climatic zone. Site classes were specified and a site grid was constructed. This site classification was coupled with the vegetation classification, and each plant community was associated with its environmental space within the site grid.
This vegetation-site overlay allowed ecosystems to be differentiated environmentally; the resulting structure, combining zonal, vegetation, and site classifications, forms a comprehensive ecosystem classification. Based on an assessment of plant communities' environmental demands and site vegetation potential, the comprehensive classification system enables inferences about site history and the successional status of ecosystems. This classification is consistent with the recent USDA Forest Service ECOMAP and Terrestrial Ecological Unit Inventory structure and may serve as a valuable tool not only in vegetation, climatic, or soil studies but also in practical ecosystem management.
568

Image Classification for Remote Sensing Using Data-Mining Techniques

Alam, Mohammad Tanveer 11 August 2011 (has links)
No description available.
569

A revision of Neobesseya in the United States and Cuba

Abel, Arlene Edith. January 1963 (has links)
Call number: LD2668 .T4 1963 A25 / Master of Science
570

A retrieval system for an historic costume collection

Austin, Janice Vance. January 1978 (has links)
Call number: LD2668 .T4 1978 A95 / Master of Science
