91 |
Weakly supervised methods for learning actions and objects
Prest, Alessandro 04 September 2012 (has links) (PDF)
Modern Computer Vision systems learn visual concepts through examples (i.e. images) which have been manually annotated by humans. While this paradigm allowed the field to progress tremendously in the last decade, it has now become one of its major bottlenecks: teaching a system a new visual concept requires an expensive human annotation effort, preventing systems from scaling from the few dozen visual concepts that work today to thousands. The exponential growth of visual data available on the net represents an invaluable resource for visual learning algorithms and calls for new methods able to exploit this information to learn visual concepts without a major human annotation effort.

As a first contribution, we introduce an approach for learning human actions as interactions between persons and objects in realistic images. By exploiting the spatial structure of human-object interactions, we are able to learn action models automatically from a set of still images annotated only with the action label (weakly supervised). Extensive experimental evaluation demonstrates that our weakly-supervised approach achieves the same performance as popular fully-supervised methods despite using substantially less supervision.

In the second part of this thesis we extend this reasoning to human-object interactions in realistic video and feature-length movies. Popular methods represent actions with low-level features such as image gradients or optical flow. In our approach, instead, interactions are modeled as the trajectory of the object with respect to the person position, providing a rich and natural description of actions. Our interaction descriptor is an informative cue on its own and is complementary to traditional low-level features.

Finally, in the third part we propose an approach for learning object detectors from real-world web videos (i.e. YouTube). As opposed to the standard paradigm of learning from still images annotated with bounding boxes, we propose a technique to learn from videos known only to contain objects of a target class. We demonstrate that learning detectors from video alone already delivers good performance while requiring much less supervision than training from images annotated with bounding boxes. We additionally show that training from a combination of weakly annotated videos and fully annotated still images improves over training from still images alone.
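As a rough illustration of the interaction descriptor idea (the object trajectory expressed relative to the person position), here is a minimal sketch; the scale normalization and fixed-length resampling are assumptions made for illustration, not the exact descriptor used in the thesis.

```python
import numpy as np

def interaction_descriptor(person_xy, object_xy, n_samples=20):
    """Sketch of a person-object interaction descriptor.

    person_xy, object_xy: arrays of shape (T, 2) with per-frame (x, y)
    positions of the person and the object. Returns a fixed-length vector
    of the object's position relative to the person.
    """
    person_xy = np.asarray(person_xy, dtype=float)
    object_xy = np.asarray(object_xy, dtype=float)
    rel = object_xy - person_xy                      # object position w.r.t. person
    # Normalize by the person's spatial extent so the descriptor is roughly
    # scale-invariant (an assumption, not taken from the thesis).
    scale = np.ptp(person_xy, axis=0).max() or 1.0
    rel /= scale
    # Resample the relative trajectory to a fixed number of time steps so
    # clips of different duration yield comparable descriptors.
    t_old = np.linspace(0.0, 1.0, len(rel))
    t_new = np.linspace(0.0, 1.0, n_samples)
    resampled = np.column_stack(
        [np.interp(t_new, t_old, rel[:, d]) for d in range(2)])
    return resampled.ravel()
```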
|
92 |
Statistical Feature Selection: With Applications in Life Science
Nilsson, Roland January 2007 (has links)
The sequencing of the human genome has changed life science research in many ways. Novel measurement technologies such as microarray expression analysis, genome-wide SNP typing and mass spectrometry are now producing experimental data of extremely high dimensions. While these techniques provide unprecedented opportunities for exploratory data analysis, the increase in dimensionality also introduces many difficulties. A key problem is to discover the most relevant variables, or features, among the tens of thousands of parallel measurements in a particular experiment. This is referred to as feature selection.

For feature selection to be principled, one needs to decide exactly what it means for a feature to be ”relevant”. This thesis considers relevance from a statistical viewpoint, as a measure of statistical dependence on a given target variable. The target variable might be continuous, such as a patient’s blood glucose level, or categorical, such as ”smoker” vs. ”non-smoker”. Several forms of relevance are examined and related to each other to form a coherent theory. Each form of relevance then defines a different feature selection problem.

The predictive features are those that allow an accurate predictive model, for example for disease diagnosis. I prove that finding predictive features is a tractable problem, in that consistent estimates can be computed in polynomial time. This is a substantial improvement upon current theory. However, I also demonstrate that selecting features to optimize prediction accuracy does not control feature error rates. This is a severe drawback in life science, where the selected features per se are important, for example as candidate drug targets. To address this problem, I propose a statistical method which to my knowledge is the first to achieve error control. Moreover, I show that in high dimensions, feature sets can be impossible to replicate in independent experiments even with controlled error rates. This finding may explain the lack of agreement among genome-wide association studies and molecular signatures of disease.

The most predictive features may not always be the most relevant ones from a biological perspective, since the predictive power of a given feature may depend on measurement noise rather than biological properties. I therefore consider a wider definition of relevance that avoids this problem. The resulting feature selection problem is shown to be asymptotically intractable in the general case; however, I derive a set of simplifying assumptions which admit an intuitive, consistent polynomial-time algorithm. Moreover, I present a method that controls error rates also for this problem. This algorithm is evaluated on microarray data from case studies in diabetes and cancer. In some cases, however, I find that these statistical relevance concepts are insufficient to prioritize among candidate features in a biologically reasonable manner. Therefore, effective feature selection for life science requires both a careful definition of relevance and a principled integration of existing biological knowledge. / The sequencing of the human genome in the early 2000s, together with the subsequent sequencing projects for various model organisms, has enabled revolutionary new genome-wide biological measurement technologies. Microarrays, mass spectrometry and SNP typing are examples of such technologies. These methods generate very high-dimensional data.

A central problem in modern biological research is therefore to identify the relevant variables among these thousands of measurements. This is called feature selection. To study feature selection systematically, an exact definition of the concept of ”relevance” is necessary. In this thesis, relevance is treated from a statistical viewpoint: ”relevance” means statistical dependence on a target variable; this variable may be continuous, for example a blood pressure measurement on a patient, or discrete, for example an indicator variable such as ”smoker” versus ”non-smoker”. Different forms of relevance are treated and a coherent theory is presented. Each definition of relevance then gives rise to a specific feature selection problem.

Predictive features are those that can be used to construct prediction models, which is important for example in clinical diagnosis systems. It is proved here that a consistent estimate of such features can be computed in polynomial time, so that feature selection is feasible within reasonable computation time. This is a breakthrough compared with earlier research. However, it is also shown that methods that optimize prediction models often yield a high proportion of irrelevant features, which is very problematic in biological research. Therefore, a new feature selection method is also presented with which the relevance of the selected features is statistically guaranteed. In this context it is also shown that feature selection methods are not reproducible in the usual sense in high dimensions, even when relevance is statistically guaranteed. This partly explains why genome-wide genetic association studies have so far been difficult to reproduce.

The case where all relevant features are sought is also treated. This problem is proved to require exponential computation time in the general case. However, a method is presented that solves the problem in polynomial time under certain statistical assumptions, which can be considered reasonable for biological data. Here too, the problem of false positives is taken into account, and a statistical method that guarantees relevance is presented. This method is applied to case studies in type 2 diabetes and cancer. In some cases, however, the set of relevant features is very large, and statistical treatment of a single data type is then insufficient. In such situations it is important to exploit different data sources as well as existing biological knowledge in order to single out the most important findings.
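The error-control idea can be illustrated with a generic per-feature dependence test combined with Benjamini-Hochberg false-discovery-rate control. This is a standard sketch under assumed choices (Mann-Whitney tests, a binary target), not the specific method developed in the thesis.

```python
import numpy as np
from scipy import stats

def select_features_fdr(X, y, alpha=0.05):
    """Select features whose dependence on a binary target survives
    Benjamini-Hochberg FDR control at level alpha.

    X: (n_samples, n_features) measurement matrix; y: binary labels (0/1).
    Returns the indices of the selected features.
    """
    pvals = np.array([
        stats.mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
        for j in range(X.shape[1])
    ])
    # Benjamini-Hochberg step-up procedure: find the largest rank i with
    # p_(i) <= alpha * i / m and keep the i smallest p-values.
    order = np.argsort(pvals)
    m = len(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = pvals[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    return np.sort(order[:k])
```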
|
93 |
Generative manifold learning for the exploration of partially labeled data
Cruz Barbosa, Raúl 01 October 2009 (has links)
In many real-world application problems, the availability of data labels for supervised learning is rather limited. Incompletely labeled datasets are common in many of the databases generated in some of the currently most active areas of research. It is often the case that a limited number of labeled cases is accompanied by a larger number of unlabeled ones. This is the setting for semi-supervised learning, in which unsupervised approaches assist the supervised problem and vice versa.
A manifold learning model, namely Generative Topographic Mapping (GTM), is the basis of the methods developed in this thesis. The non-linearity of the mapping that GTM generates makes it prone to trustworthiness and continuity errors that would reduce the faithfulness of the data representation, especially for datasets of convoluted geometry. In this thesis, a variant of GTM that uses a graph approximation to the geodesic metric is first defined. This model is capable of representing data of convoluted geometries. The standard GTM is here modified to prioritize neighbourhood relationships along the generated manifold. This is accomplished by penalizing the possible divergences between the Euclidean distances from the data points to the model prototypes and the corresponding geodesic distances along the manifold. The resulting Geodesic GTM (Geo-GTM) model is shown to improve the continuity and trustworthiness of the representation generated by the model, as well as to behave robustly in the presence of noise.
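The graph approximation to the geodesic metric can be sketched with the standard construction of a k-nearest-neighbour graph followed by shortest-path computation. The sketch below illustrates that general recipe under assumed parameter choices; it is not the exact Geo-GTM implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def approximate_geodesic_distances(X, k=8):
    """Approximate pairwise geodesic distances along the data manifold.

    Builds a k-nearest-neighbour graph weighted by Euclidean distance and
    returns all-pairs shortest-path (graph geodesic) distances.
    """
    D = cdist(X, X)                              # Euclidean distances
    n = len(X)
    nn = np.argsort(D, axis=1)[:, 1:k + 1]       # k nearest neighbours (skip self)
    rows, cols = [], []
    for i in range(n):
        rows.extend([i] * k)
        cols.extend(nn[i])
    graph = csr_matrix((D[rows, cols], (rows, cols)), shape=(n, n))
    # directed=False treats each neighbour link as a bidirectional edge.
    return shortest_path(graph, method='D', directed=False)
```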
The thesis then leads towards the definition and development of semi-supervised versions of GTM for partially-labeled data exploration. As a first step in this direction, a two-stage clustering procedure that uses class information is presented. A class information-enriched variant of GTM, namely class-GTM, yields a first cluster description of the data. The number of clusters defined by GTM is usually large for visualization purposes and does not necessarily correspond to the overall class structure. Consequently, in a second stage, clusters are agglomerated using the K-means algorithm with different novel initialization strategies that benefit from the probabilistic definition of GTM. We evaluate if the use of class information influences cluster-wise class separability. A robust variant of GTM that detects outliers while effectively minimizing their negative impact in the clustering process is also assessed in this context.
We then proceed to the definition of a novel semi-supervised model, SS-Geo-GTM, that extends Geo-GTM to deal with semi-supervised problems. In SS-Geo-GTM, the model prototypes are linked by the nearest neighbour to the data manifold constructed by Geo-GTM. The resulting proximity graph is used as the basis for a class label propagation algorithm. The performance of SS-Geo-GTM is experimentally assessed, comparing positively with that of a Euclidean distance-based counterpart and with that of the alternative Laplacian Eigenmaps method. Finally, the developed models (the two-stage clustering procedure and the semi-supervised models) are applied to the analysis of a human brain tumour dataset (obtained by Nuclear Magnetic Resonance Spectroscopy), where the tasks are, in turn, data clustering and survival prognostic modeling.
|
94 |
Learning from Partially Labeled Data: Unsupervised and Semi-supervised Learning on Graphs and Learning with Distribution Shifting
Huang, Jiayuan January 2007 (has links)
This thesis focuses on two fundamental machine learning problems: unsupervised learning, where no label information is available, and semi-supervised learning, where a small number of labels is given in addition to unlabeled data. These problems arise in many real-world applications, such as Web analysis and bioinformatics, where a large amount of data is available but little or no labeled data exists. Obtaining classification labels in these domains is usually quite difficult because it involves either manual labeling or physical experimentation.
This thesis approaches these problems from two perspectives: graph-based and distribution-based.
First, I investigate a series of graph-based learning algorithms that are able to exploit information embedded in different types of graph structures. These algorithms allow label information to be shared between nodes in the graph---ultimately communicating information globally to yield effective unsupervised and semi-supervised learning. In particular, I extend existing graph-based learning algorithms, currently based on undirected graphs, to more general graph types, including directed graphs, hypergraphs and complex networks. These richer graph representations allow one to more naturally capture the intrinsic data relationships that exist, for example, in Web data, relational data, bioinformatics and social networks.
For each of these generalized graph structures I show how information propagation can be characterized by distinct random walk models, and then use this characterization
to develop new unsupervised and semi-supervised learning algorithms.
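To make the propagation idea concrete, here is a minimal sketch of label propagation on an undirected weighted graph; the thesis's random-walk models for directed graphs, hypergraphs and complex networks generalize this basic scheme, and the update rule and parameters below are assumptions made for illustration.

```python
import numpy as np

def propagate_labels(W, y, alpha=0.9, n_iters=100):
    """Sketch of label propagation on an undirected weighted graph.

    W: (n, n) symmetric non-negative affinity matrix (no isolated nodes).
    y: length-n integer labels, with -1 marking unlabeled nodes.
    alpha controls how much mass flows from neighbours vs. the seed labels.
    """
    classes = np.unique(y[y >= 0])
    Y = np.zeros((len(y), len(classes)))
    for c, cls in enumerate(classes):
        Y[y == cls, c] = 1.0
    # Row-normalised transition matrix of a random walk on the graph.
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    F = Y.copy()
    for _ in range(n_iters):
        F = alpha * (P @ F) + (1 - alpha) * Y   # diffuse, then pull toward seeds
    return classes[np.argmax(F, axis=1)]
```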
Second, I investigate a more statistically oriented approach that explicitly models a learning scenario where the training and test examples come from different distributions.
This is a difficult situation for standard statistical learning approaches, since they typically incorporate an assumption that the distributions for training and test sets are similar, if not identical. To achieve good performance in this scenario, I utilize unlabeled data to correct the bias between the training and test distributions. A key idea is to produce resampling weights for bias correction by working directly in a feature space and bypassing the problem
of explicit density estimation. The technique can be easily applied to many different supervised learning algorithms, automatically adapting their behavior to cope with distribution shifting between training and test data.
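One simple way to obtain such resampling weights, shown below as an illustrative stand-in, is to train a probabilistic classifier to separate training from test inputs and convert its output into a density ratio. The thesis works directly in a kernel feature space rather than via an explicit classifier, so this logistic-regression sketch is an assumption, not the method described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def covariate_shift_weights(X_train, X_test):
    """Estimate importance weights w(x) proportional to p_test(x) / p_train(x).

    A classifier is trained to separate training inputs (label 0) from test
    inputs (label 1); its odds give the density ratio up to a constant,
    bypassing explicit density estimation.
    """
    X = np.vstack([X_train, X_test])
    s = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = LogisticRegression(max_iter=1000).fit(X, s)
    p = np.clip(clf.predict_proba(X_train)[:, 1], 1e-6, 1 - 1e-6)
    w = p / (1 - p) * (len(X_train) / len(X_test))
    return w / w.mean()    # normalized so the weights average to one
```

The resulting weights can then be passed as per-sample weights to any supervised learner that accepts them, adapting its training objective to the test distribution.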
|
95 |
Fundamental Limitations of Semi-Supervised Learning
Lu, Tyler (Tian) 30 April 2009 (has links)
The emergence of a new paradigm in machine learning known as semi-supervised learning (SSL) has brought benefits to many applications in which labeled data is expensive to obtain. However, unlike supervised learning (SL), which enjoys a rich and deep theoretical foundation, semi-supervised learning, which uses additional unlabeled data for training, remains a theoretical mystery lacking a sound fundamental understanding. The purpose of this thesis is to take a first step towards bridging this theory-practice gap.
We focus on investigating the inherent limitations of the benefits SSL can provide over SL. We develop a framework under which one can analyze the potential benefits, as measured by the sample complexity of SSL. Our framework is utopian in the sense that an SSL algorithm trains on a labeled sample and an unlabeled distribution, as opposed to an unlabeled sample as in the usual SSL model. Thus, any lower bound on the sample complexity of SSL in this model implies lower bounds in the usual model.
Roughly, our conclusion is that unless the learner is absolutely certain there is some non-trivial relationship between labels and the unlabeled distribution (``SSL type assumption''), SSL cannot provide significant advantages over SL. Technically speaking, we show that the sample complexity of SSL is no more than a constant factor better than SL for any unlabeled distribution, under a no-prior-knowledge setting (i.e. without SSL type assumptions). We prove that for the class of thresholds in the realizable setting the sample complexity of SL is at most twice that of SSL. Also, we prove that in the agnostic setting for the classes of thresholds and union of intervals the sample complexity of SL is at most a constant factor larger than that of SSL. We conjecture this to be a general phenomenon applying to any hypothesis class.
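A small simulation can convey the flavour of the thresholds result in the realizable setting. The uniform marginal, the midpoint ERM rule and the error measure below are assumptions made purely for illustration and do not reproduce the thesis's proofs.

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_error(m, n_trials=2000):
    """Monte Carlo estimate of the expected error of ERM for thresholds.

    Target: f(x) = 1[x >= 0.5], x ~ Uniform[0, 1] (realizable case).
    ERM picks a threshold consistent with the m labeled points; here we
    take the midpoint of the version-space interval.
    """
    errs = []
    for _ in range(n_trials):
        x = rng.uniform(0, 1, m)
        y = (x >= 0.5).astype(int)
        lo = x[y == 0].max() if (y == 0).any() else 0.0
        hi = x[y == 1].min() if (y == 1).any() else 1.0
        t_hat = (lo + hi) / 2
        errs.append(abs(t_hat - 0.5))   # error mass under the uniform marginal
    return float(np.mean(errs))

# When the marginal is uniform, a learner that additionally knows the
# unlabeled distribution cannot shrink the version-space interval, so the
# labeled-sample requirement stays within a small constant factor of SL.
for m in (4, 8, 16, 32):
    print(m, threshold_error(m))
```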
We also discuss issues regarding SSL type assumptions, and in particular the popular cluster assumption. We give examples that show even in the most accommodating circumstances, learning under the cluster assumption can be hazardous and lead to prediction performance much worse than simply ignoring the unlabeled data and doing supervised learning.
We conclude with a look into future research directions that build on our investigation.
|
96 |
Contributions to Unsupervised and Semi-Supervised Learning
Pal, David 21 May 2009 (has links)
This thesis studies two problems in theoretical machine learning. The first
part of the thesis investigates the statistical stability of clustering
algorithms. In the second part, we study the relative advantage of having
unlabeled data in classification problems.
Clustering stability was proposed and used as a model selection method in
clustering tasks. The main idea of the method is that from a given data set
two independent samples are taken. Each sample individually is clustered with
the same clustering algorithm, with the same setting of its parameters. If the
two resulting clusterings turn out to be close in some metric, it is concluded
that the clustering algorithm and the setting of its parameters match the data
set, and that the clusterings obtained are meaningful. We study asymptotic
properties of this method for certain types of cost minimizing clustering
algorithms and relate their asymptotic stability to the number of optimal
solutions of the underlying optimization problem.
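A minimal sketch of the stability heuristic follows, with k-means and the adjusted Rand index as assumed stand-ins for the clustering algorithm and the comparison metric.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def stability_score(X, k, n_splits=10, seed=0):
    """Sketch of the clustering-stability model selection heuristic.

    Repeatedly split the data into two halves, cluster each half with the
    same algorithm and parameters, and measure how well the two solutions
    agree on the same points. Higher agreement suggests k matches the data.
    """
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(len(X))
        a, b = np.array_split(idx, 2)
        km_a = KMeans(n_clusters=k, n_init=10).fit(X[a])
        km_b = KMeans(n_clusters=k, n_init=10).fit(X[b])
        # Compare the two clusterings on the second half's points,
        # invariantly to label permutations.
        scores.append(adjusted_rand_score(km_a.predict(X[b]), km_b.labels_))
    return float(np.mean(scores))
```

In practice the score would be computed for a range of k and the most stable value chosen.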
In classification problems, it is often expensive to obtain labeled data, but
on the other hand, unlabeled data are often plentiful and cheap. We study how
the access to unlabeled data can decrease the amount of labeled data
needed in the worst-case sense. We propose an extension of the probably
approximately correct (PAC) model in which this question can be naturally
studied. We show that for certain basic tasks the access to unlabeled data
might, at best, halve the amount of labeled data needed.
|
97 |
Kernelized Supervised Dictionary Learning
Jabbarzadeh Gangeh, Mehrdad 24 April 2013 (has links)
The representation of a signal using a learned dictionary instead of predefined operators, such as wavelets, has led to state-of-the-art results in various applications such as denoising, texture analysis, and face recognition. The area of dictionary learning is closely associated with sparse representation, in which the signal is represented using only a few atoms of the dictionary. Despite recent advances in fast dictionary learning algorithms such as K-SVD, online learning, and cyclic coordinate descent, which make computing a dictionary from millions of data samples feasible, the dictionary is mainly computed using unsupervised approaches such as k-means. These approaches learn the dictionary by minimizing the reconstruction error without taking the category information into account, which is not optimal in classification tasks.
In this thesis, we propose a supervised dictionary learning (SDL) approach that incorporates class label information into the learning of the dictionary. To this end, we propose to learn the dictionary in a space where the dependency between the signals and their corresponding labels is maximized. To maximize this dependency, the recently introduced Hilbert-Schmidt independence criterion (HSIC) is used. The learned dictionary is compact and can be computed in closed form, making the proposed approach fast. We show that it outperforms other unsupervised and supervised dictionary learning approaches in the literature on real-world data.
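For concreteness, the empirical HSIC between a signal kernel and a label kernel can be computed as below; this sketch shows only the dependence criterion itself, not the dictionary learning formulation built on top of it, and the label kernel suggested in the comment is an assumed example.

```python
import numpy as np

def hsic(K, L):
    """Empirical Hilbert-Schmidt independence criterion between two kernels.

    K: (n, n) kernel matrix over the signals.
    L: (n, n) kernel matrix over the labels (for example, L[i, j] = 1 if
    samples i and j share a class and 0 otherwise).
    Larger values indicate stronger dependence between signals and labels.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```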
Moreover, a main advantage of the proposed SDL approach is that it can be easily kernelized, particularly by incorporating a data-driven kernel, such as a compression-based kernel, into the formulation. In this thesis, we propose a novel compression-based (dis)similarity measure. The proposed measure utilizes a 2D MPEG-1 encoder, which takes into consideration the spatial locality and connectivity of pixels in the images. The formulation has been carefully designed around the MPEG encoder's functionality: by design, it solely uses P-frame coding to find the (dis)similarity among patches/images. We show that the proposed measure works properly on both small and large patch sizes on textures. Experimental results show that incorporating the proposed measure as a kernel into our SDL significantly improves the performance of supervised pixel-based texture classification on Brodatz and outdoor images compared to other compression-based dissimilarity measures, as well as state-of-the-art SDL methods. It also improves the computation speed by about 40% compared to its closest rival.
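The thesis's measure is built on a 2D MPEG-1 encoder with P-frame coding; as a much simpler generic stand-in, the normalized compression distance with an off-the-shelf compressor conveys the same intuition: two patches are similar if compressing them together costs little more than compressing the larger one alone.

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance between two byte strings
    (e.g. serialized image patches). Values near 0 mean very similar,
    values near 1 mean dissimilar."""
    ca = len(zlib.compress(a))
    cb = len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)
```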
Finally, we extend the proposed SDL approach to multiview learning, where more than one representation is available for a dataset. We propose two different multiview approaches: one fuses the feature sets in the original space and then learns the dictionary and sparse coefficients on the fused set; the other learns one dictionary and the corresponding coefficients in each view separately, and then fuses the representations in the space of the learned dictionaries, as sketched below. We show that the proposed multiview approaches benefit from the complementary information in multiple views, and we investigate their relative performance in the application of emotion recognition.
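A schematic of the two fusion strategies, using scikit-learn's unsupervised DictionaryLearning as an assumed stand-in for the supervised model (function names and parameters are illustrative only; each view is an (n_samples, d_view) array over the same samples):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def learn_codes(X, n_atoms=32):
    """Learn a dictionary for one feature set and return its sparse codes."""
    model = DictionaryLearning(n_components=n_atoms,
                               transform_algorithm='lasso_lars')
    return model.fit_transform(X)

def early_fusion(views, n_atoms=32):
    """Fuse the feature sets first, then learn one dictionary on the result."""
    return learn_codes(np.hstack(views), n_atoms)

def late_fusion(views, n_atoms=32):
    """Learn a dictionary per view, then fuse the sparse representations."""
    return np.hstack([learn_codes(V, n_atoms) for V in views])
```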
|