11

Evaluation of Supervised Machine Learning Algorithms for Detecting Anomalies in Vehicle's Off-Board Sensor Data

Wahab, Nor-Ul January 2018 (has links)
A diesel particulate filter (DPF) is designed to physically remove diesel particulate matter, or soot, from the exhaust gas of a diesel engine. Replacing the DPF too frequently wastes resources, while waiting for full utilization is risky and very costly, so what is the optimal time/mileage at which to change the DPF? Answering this question is very difficult without knowing when the DPF was changed in a vehicle. We approach the answer with supervised machine learning algorithms for detecting anomalies in vehicles' off-board sensor data (operational data of the vehicles). A filter change is treated as an anomaly because it is rare compared to normal data. Non-sequential machine learning algorithms for anomaly detection, namely one-class support vector machine (OC-SVM), k-nearest neighbor (K-NN), and random forest (RF), are applied for the first time to the DPF dataset. The dataset is unbalanced, and accuracy proved misleading as a performance measure for the algorithms; precision, recall, and F1-score are better measures when the data are unbalanced. RF gave the highest F1-score, 0.55, ahead of K-NN (0.52) and OC-SVM (0.51). RF thus performs better than K-NN and OC-SVM, but further investigation showed the results to be unsatisfactory; a sequential approach might have yielded better results.
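The point about accuracy being misleading on an unbalanced set can be sketched concretely. The numbers below are invented for illustration (not the thesis dataset): a classifier that never flags an anomaly looks 95% accurate yet has an F1-score of zero.

```python
# Sketch: why accuracy misleads on an unbalanced anomaly-detection set.
# All data below are made up for illustration; they are not the thesis data.

def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall and F1 for the positive (anomaly) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# 95 normal points, 5 anomalies (filter changes); a classifier that
# never flags an anomaly is 95% "accurate" yet useless.
y_true = [0] * 95 + [1] * 5
y_all_normal = [0] * 100
accuracy = sum(t == p for t, p in zip(y_true, y_all_normal)) / len(y_true)
print(accuracy)                                   # 0.95, despite catching nothing
print(precision_recall_f1(y_true, y_all_normal))  # (0.0, 0.0, 0.0)
```

This is why the thesis falls back on precision, recall and F1 rather than accuracy when ranking OC-SVM, K-NN and RF.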
12

Modelos de distribuição potencial em escala fina: metodologia de validação em campo e aplicação para espécies arbóreas / Potential distribution models in fine scale: validation methodology in the field and application to tree species

Ferreira, Larissa Campos 11 November 2015 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / Some conservation actions require knowledge of the geographical distribution of species; however, this knowledge is far from being achieved for most species. Species distribution models (SDMs) have proved a useful tool to predict the distribution of species and guide field research to find new records. SDMs use field occurrence data and environmental variables to indicate potential sites for the occurrence of a species. The quality and quantity of the data used are important for successful model predictions and their application to conservation. The choice of environmental data, of the algorithm, and of its settings is important for model development, since these choices directly influence model quality. Another very important step in modeling is quality assessment and validation of the model, which reduces the risk of accepting as true models that contain gross errors.
The objective of this study is to evaluate the applicability of models generated by MaxEnt for finding new plant populations, considering different configurations of the data used. Since field validation is considered the most appropriate in the literature, but also the most costly, the first chapter proposes an easily applied field-validation methodology for the models, adapted from the "caminhamento" (expedited walking survey) method of vegetation survey and characterization. The methodology was able to find new records in the field and is therefore recommended for model validation; vegetation characterization proved an important step for interpreting the results, since it explained the absence of two species in areas where the model had predicted presence. Its main drawback is that applying it efficiently requires people experienced in recognizing the plant species. In the second chapter, given the wide variety of factors that influence model performance, the aim was to test the influence of sample size, spatial bias, the climate dataset, and the settings available for the MaxEnt algorithm on the predicted areas of potential distribution. The results showed that using sampling and climate data restricted to the limits of the study area, together with soil data, generates more accurate models; the different MaxEnt settings generated very similar models.
13

Comparative Study of Methods for Linguistic Modeling of Numerical Data

Visa, Sofia January 2002 (has links)
No description available.
14

Kvantitativ Modellering av förmögenhetsrättsliga dispositiva tvistemål / Quantitative legal prediction : Modeling cases amenable to out-of-court Settlements

Martinsson, Egil January 2014 (has links)
BACKGROUND: The idea of legal automation is a controversial topic that has been discussed for hundreds of years, in modern times in the context of law and artificial intelligence. Strangely, real-world applications are very rare. Assuming the judicial system is like any system that transforms inputs into outputs, one would expect to be able to measure it, gain insight into its inner workings, and ultimately use these measurements to predict its output. This thesis devotes particular interest to civil procedures on commercial matters amenable to out-of-court settlement (förmögenhetsrättsliga dispositiva tvistemål) and poses the question: can we predict the outcome of civil procedures using statistical methods? METHOD: By analyzing procedural law and legal doctrine, the civil procedure was modeled in terms of a random variable with a discrete observable outcome. Data for 14821 cases were extracted from eight district courts. Five of these courts (13299 cases) were used to train the models, and three courts (1522 cases), chosen randomly, were kept untouched for validation. Most cases concerned monetary claims (66%) and/or damages (12%). Binary and multinomial logistic regression were used as classifiers.
RESULTS: The models were found to be uncalibrated, but they clearly outperformed random score assignment at separating the classes. At a preset threshold they gave accuracies significantly higher (p << 0.001) than random guessing, and in identifying settlements or the correct type of verdict they performed significantly better (p << 0.003) than always guessing the most common outcome. CONCLUSION: Data for cases from one set of courts can, to some extent, predict the outcomes of cases from another set of courts; the outcome of civil procedures can thus be predicted using statistical methods.
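As a hedged sketch of the kind of model used, here is a minimal binary logistic regression trained by batch gradient descent. The two features (a scaled claim amount and a count of hearings) and all data values are invented for illustration; the thesis's actual features and data are not reproduced here.

```python
# Minimal logistic regression via batch gradient descent (illustrative only).
import math

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Return weights; the last entry of w is the bias term."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            # zip(w, xi) pairs the feature weights with xi; w[-1] is the bias.
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(outcome = 1)
            err = p - yi
            for j, xj in enumerate(xi):
                grad[j] += err * xj
            grad[-1] += err
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Invented toy set: outcome 1 = settlement, 0 = verdict.
X = [[0.2, 1], [0.4, 2], [0.5, 1], [2.0, 5], [2.5, 4], [3.0, 6]]
y = [1, 1, 1, 0, 0, 0]
w = train_logreg(X, y)
print([predict(w, xi) for xi in X])  # recovers the labels on this separable set
```

In practice one would validate, as the thesis does, on cases from courts the model never saw during training.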
15

Míry kvality klasifikačních modelů a jejich převod / Quality measures of classification models and their conversion

Hanusek, Lubomír January 2003 (has links)
Predictive power of classification models can be evaluated by various measures. The most popular measures in data mining (DM) are the Gini coefficient, the Kolmogorov-Smirnov statistic, and lift, each based on a completely different calculation. An analyst used to one of these measures may find it difficult to assess the predictive power of a model evaluated by another. The aim of this thesis is to develop a method for converting one performance measure into another. Although the thesis focuses mainly on the above-mentioned measures, it also deals with others, such as sensitivity, specificity, total accuracy, and the area under the ROC curve. During the development of DM models, you may need to work with a sample stratified by the values of the target variable Y instead of the whole population containing millions of observations; if you evaluate a model developed on stratified data, you may need to convert these measures to the whole population. This thesis describes how to carry out this conversion. A software application (CPM) enabling all these conversions is part of this thesis. With this application you can not only convert one performance measure to another but also convert measures calculated on a stratified sample to the whole population. Besides the performance measures mentioned above (sensitivity, specificity, total accuracy, Gini coefficient, Kolmogorov-Smirnov statistic), CPM also generates the confusion matrix and performance charts (lift chart, gains chart, ROC chart, and KS chart). The thesis includes the user manual for the application and the web address from which it can be downloaded. The theory described in the thesis was verified on real data.
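Two of the measures discussed admit a compact sketch, along with the one exact conversion between them, Gini = 2·AUC − 1. The scores below are invented; the CPM application itself is not reproduced here.

```python
# Illustrative only: AUC, its exact Gini conversion, and the KS statistic.

def auc(scores_pos, scores_neg):
    """AUC as the probability a positive outranks a negative (ties count 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def kolmogorov_smirnov(scores_pos, scores_neg):
    """KS statistic: maximum gap between the two empirical CDFs."""
    ks = 0.0
    for t in sorted(set(scores_pos) | set(scores_neg)):
        tpr = sum(s >= t for s in scores_pos) / len(scores_pos)
        fpr = sum(s >= t for s in scores_neg) / len(scores_neg)
        ks = max(ks, abs(tpr - fpr))
    return ks

pos = [0.9, 0.8, 0.7, 0.6]   # model scores of actual positives (invented)
neg = [0.65, 0.5, 0.4, 0.3]  # model scores of actual negatives (invented)
a = auc(pos, neg)
gini = 2 * a - 1             # the exact AUC <-> Gini conversion
print(a, gini, kolmogorov_smirnov(pos, neg))  # 0.9375 0.875 0.75
```

Conversions between Gini/AUC and KS or lift are generally not exact like this one; they depend on the score distributions, which is precisely why a tool such as CPM is useful.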
16

Numerical Evaluation of Classification Techniques for Flaw Detection

Vallamsundar, Suriyapriya January 2007 (has links)
Nondestructive testing (NDT) is used extensively throughout industry for quality assessment and detection of defects in engineering materials. The range and variety of anomalies is enormous, and critical assessment of their location and size is often complicated. Depending on final operational considerations, some of these anomalies may be critical, so their detection and classification is important. Despite the advantages of using nondestructive testing for flaw detection, conventional NDT techniques based on heuristic, experience-based pattern identification have drawbacks in cost and duration, produce erratic analyses, and thus lead to discrepancies in results. Using statistical and soft-computing techniques in the evaluation and classification steps yields an automatic decision-support system for defect characterization that offers the possibility of impartial, standardized performance. The present work evaluates the application of both supervised and unsupervised classification techniques for flaw detection and classification in a semi-infinite half space. Finite element models simulating the MASW test in the presence and absence of voids were developed using the commercial package LS-DYNA. To simulate anomalies, voids of different sizes were inserted into the elastic medium. Features for discriminating the received responses were extracted in the time and frequency domains by applying suitable transformations. The compact feature vector was then classified by different techniques: supervised classification (backpropagation neural network, adaptive neuro-fuzzy inference system, k-nearest neighbor classifier, linear discriminant classifier) and unsupervised classification (fuzzy c-means clustering).
The classification results show that the k-nearest neighbor classifier proved superior to the other techniques, with an overall accuracy of 94% in detecting the presence of voids and 81% in determining the size of the void in the medium. Assessing the various classifiers' performance proved valuable in comparing the techniques and establishing the applicability of simplified classification methods such as k-NN to defect characterization. The classification accuracies obtained for the detection and classification of voids are very encouraging, showing the suitability of the proposed approach for developing a decision-support system for nondestructive testing of materials for defect characterization.
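The winning method lends itself to a short sketch. The 2-D "feature vectors" and labels below are invented stand-ins for the time/frequency features extracted from the simulated responses; this is a minimal k-NN, not the thesis implementation.

```python
# Minimal k-nearest-neighbour classifier with Euclidean distance and
# majority vote (illustrative only; data are invented).
from collections import Counter
import math

def knn_predict(train, labels, x, k=3):
    """Majority vote among the k training points closest to x."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Toy data: "void" responses cluster near (1, 1), "no_void" near (4, 4).
train = [(1, 1), (1.2, 0.9), (0.8, 1.1), (4, 4), (3.9, 4.2), (4.1, 3.8)]
labels = ["void", "void", "void", "no_void", "no_void", "no_void"]
print(knn_predict(train, labels, (1.1, 1.0)))  # -> "void"
print(knn_predict(train, labels, (4.0, 4.1)))  # -> "no_void"
```

The appeal noted in the abstract is visible even here: k-NN has no training phase beyond storing examples, which makes it an attractively simple baseline for defect characterization.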
18

Seleção de características para identificação de diferentes proporções de tipos de fibras musculares por meio da eletromiografia de superfície / Feature selection for identifying different proportions of muscle fiber types using surface electromyography

Freitas, Amanda Medeiros de 14 August 2015 (has links)
Fundação de Amparo à Pesquisa do Estado de Minas Gerais / Skeletal muscle consists of muscle fiber types with distinct physiological and biochemical characteristics. Basically, muscle fibers can be classified into type I and type II, which differ, among other features, in contraction speed and sensitivity to fatigue. These fibers coexist in skeletal muscles, and their relative proportions are modulated according to the muscle's function and the stimuli to which it is submitted. To identify the different proportions of fiber types in muscle composition, many studies use biopsy as the standard procedure. Since surface electromyography (sEMG) allows information to be extracted about the recruitment of different motor units, this study is based on the hypothesis that sEMG can be used to identify different proportions of fiber types in a muscle. The goal of this study was to identify the sEMG signal features best able to distinguish different proportions of fiber types; the combination of features through appropriate mathematical models was also investigated. To this end, signals were emulated with different proportions of recruited motor units and different signal-to-noise ratios. Thirteen features in the time and frequency domains were extracted from the emulated signals. The results for each extracted feature were submitted to the k-means clustering algorithm to separate the different proportions of recruited motor units, and mathematical techniques (confusion matrix and capability analysis) were applied to select the features able to identify different proportions of muscle fiber types. As a result, the mean frequency and the median frequency were selected as the features that distinguish the proportions of muscle fiber types most precisely.
Subsequently, the most capable features were analyzed jointly through principal component analysis. Two principal components were found for the noise-free emulated signals (PC1 and PC2) and two for the noisy signals (PC1′ and PC2′); the first principal components (PC1 and PC1′) were identified as able to distinguish different proportions of muscle fiber types. The selected features (median frequency, mean frequency, PC1 and PC1′) were then used to analyze real sEMG signals, comparing sedentary people with physically active people who practice strength training (weight training). The physically active people showed higher values of mean frequency, median frequency, and principal components than the sedentary people. Moreover, these values decreased with increasing force level for both groups, though the decline was steeper for the physically active group. Based on these results, it is presumed that the volunteers in the physically active group have higher proportions of type II fibers than the sedentary people. We conclude that the selected features were able to distinguish different proportions of muscle fiber types, for the emulated as well as the real signals. These features can be used in several contexts, for example to evaluate the progress of people with myopathies and neuromyopathies undergoing physiotherapy, or to monitor the development of athletes seeking to improve muscle capacity for their sport. In both cases, extracting these features from surface electromyography signals provides feedback to the physiotherapist or coach, who can track the increase in the proportion of a given fiber type, as desired in each case.
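The two selected features have standard definitions that can be sketched directly: mean frequency (MNF) is the power-weighted average frequency of the spectrum, and median frequency (MDF) is the frequency that splits total spectral power in half. The toy spectrum below is invented, not real EMG data.

```python
# MNF and MDF computed from a (frequency, power) spectrum; illustrative only.

def mean_frequency(freqs, power):
    """Power-weighted average frequency of the spectrum."""
    return sum(f * p for f, p in zip(freqs, power)) / sum(power)

def median_frequency(freqs, power):
    """First bin at which cumulative power reaches half the total."""
    half = sum(power) / 2.0
    acc = 0.0
    for f, p in zip(freqs, power):
        acc += p
        if acc >= half:
            return f
    return freqs[-1]

freqs = [20, 40, 60, 80, 100]      # Hz (toy bins)
power = [1.0, 3.0, 4.0, 1.5, 0.5]  # arbitrary spectral power per bin
print(mean_frequency(freqs, power), median_frequency(freqs, power))  # 55.0 60
```

On real signals the spectrum would come from a windowed FFT of the sEMG recording; a shift of these two values toward higher frequencies is what the study associates with a larger proportion of type II fibers.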
19

Comparação de técnicas para a determinação de semelhança entre imagens digitais / Comparison of techniques for determining similarity between digital images

Tannús, Marco Túlio Faissol 25 May 2008 (has links)
The retrieval of similar images from databases is a broad and complex research field with great demand for well-performing applications. The growing volume of information available on the Internet and the success of textual search engines motivate the development of tools that enable image search by content similarity. Many features can be used to determine the similarity between images, such as size, color, shape, color variation, texture, and objects and their spatial distribution; texture and color are the two most important features for a preliminary analysis of image similarity. This dissertation surveys techniques from the literature that analyze texture and color. Several were implemented, their performance compared, and the results presented in detail. This comparison identifies the best techniques, permits an analysis of the applicability of each, and can serve as a reference for future work. Quantitative performance analyses used the ANMRR metric, defined in the MPEG-7 standard, and confusion matrices are presented for each tested technique. Two groups of quantitative tests were conducted: the first on a database of gray-scale textures, the second on a database of color images. In the gray-scale texture experiment, the PBLIRU16 and MCNC techniques and their combination performed best; in the color-image experiment, the SCD, HDCIG, and CSD techniques performed best.
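The MPEG-7 descriptors compared in the dissertation are involved, so as a hedged illustration of the underlying idea only, the sketch below scores color similarity by normalized histogram intersection, a simpler technique than SCD or CSD. All bin counts are invented.

```python
# Colour similarity by normalised histogram intersection (illustrative only;
# this is NOT one of the MPEG-7 descriptors evaluated in the dissertation).

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: overlap of two normalised colour histograms."""
    s1, s2 = sum(h1), sum(h2)
    return sum(min(a / s1, b / s2) for a, b in zip(h1, h2))

query = [10, 30, 40, 20]  # toy 4-bin colour histogram of the query image
match = [12, 28, 38, 22]  # a visually similar image
other = [50, 5, 5, 40]    # a dissimilar image
print(histogram_intersection(query, match))  # high overlap
print(histogram_intersection(query, other))  # low overlap
```

A retrieval system ranks database images by such a similarity score; metrics like ANMRR then judge how highly the truly relevant images were ranked for each query.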
20

From confusion noise to active learning : playing on label availability in linear classification problems / Du bruit de confusion à l’apprentissage actif : jouer sur la disponibilité des étiquettes dans les problèmes de classification linéaire

Louche, Ugo 04 July 2016 (has links)
The work presented in this thesis falls within the general framework of linear classification, that is, the problem of categorizing data into two or more classes based on a training set of labelled examples. In practice, acquiring labelled examples can prove challenging and/or costly, as data are inherently easier to obtain than to label. Dealing with label scarcity has been a motivating goal in the machine learning literature, and this work discusses two settings related to the problem: learning in the presence of noise and active learning.
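The linear classifiers the thesis studies descend from the perceptron, which can be sketched in a few lines. The toy separable data below are invented.

```python
# Minimal perceptron for linear classification (illustrative only).

def perceptron(X, y, epochs=20):
    """Learn w, b so that sign(w.x + b) matches labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only on a mistake (or a point exactly on the boundary).
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
    return w, b

X = [(2, 1), (3, 2), (1, 3), (-1, -2), (-2, -1), (-3, -3)]
y = [1, 1, 1, -1, -1, -1]
w, b = perceptron(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1 for xi in X]
print(preds)  # matches y on this separable toy set
```

The two settings in the thesis perturb exactly this picture: confusion noise corrupts some of the labels the update rule trusts, while active learning asks which unlabelled points are worth paying to label at all.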
