  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Learning techniques for expert systems : an investigation, using simulation techniques, into the possibilities and requirements for reliable un-supervised learning for industrial expert systems

Olley, Peter January 1992 (has links)
No description available.
12

Autonomous Terrain Classification Through Unsupervised Learning

Zeltner, Felix January 2016 (has links)
A key component of autonomous outdoor navigation in unstructured environments is the classification of terrain. Recent developments in machine learning show promising results on scene segmentation but are limited by the labels used during supervised training. In this work, we present and evaluate a flexible strategy for terrain classification based on three components: a deep convolutional neural network trained on colour, depth and infrared data, which provides feature vectors for image segmentation; a set of exchangeable segmentation engines that operate in this feature space; and a novel, air-pressure-based actuator responsible for distinguishing rigid obstacles from those that only appear as such. Through the use of unsupervised learning we eliminate the need for labeled training data and allow our system to adapt to previously unseen terrain classes. We evaluate the performance of this classification scheme on a mobile robot platform in an environment containing vegetation and trees, with a Kinect v2 sensor as a low-cost depth camera. Our experiments show that the features generated by our neural network are currently not competitive with state-of-the-art implementations and that our system is not yet ready for real-world applications.
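The "exchangeable segmentation engine" above can be any clustering routine over per-pixel feature vectors. A minimal sketch with plain k-means, on invented feature data — the dimensions, values, and two-class structure here are illustrative assumptions, not the thesis's actual setup:

```python
import numpy as np

def kmeans(X, centers, iters=30):
    """Plain k-means (Lloyd's algorithm) from given initial centers."""
    for _ in range(iters):
        # assign every point to its nearest center
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        # move each center to the mean of its assigned points
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

# invented per-pixel feature vectors for two terrain types
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 4)),    # "grass"-like features
               rng.normal(3.0, 0.3, (50, 4))])   # "rock"-like features

# spread-out initial centers keep this toy example deterministic
labels, _ = kmeans(X, np.stack([X.min(0), X.max(0)]))
```

Because no labels enter the loop, swapping in a different engine (hierarchical clustering, a self-organizing map, etc.) only changes the `kmeans` call, which is the flexibility the abstract describes.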
13

A voting-merging clustering algorithm

Dimitriadou, Evgenia, Weingessel, Andreas, Hornik, Kurt January 1999 (has links) (PDF)
In this paper we propose an unsupervised voting-merging scheme that is capable of clustering data sets and of finding the number of clusters existing in them. The voting part of the algorithm allows us to combine several runs of clustering algorithms into a common partition. This helps us to overcome instabilities of the clustering algorithms and to improve the ability to find structures in a data set. Moreover, we develop a strategy to understand, analyze and interpret these results. In the second part of the scheme, a merging procedure starts from the clusters produced by voting, in order to find the number of clusters in the data set. / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
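The voting-then-merging idea can be sketched roughly as follows. This is only the co-association flavour of the scheme on toy data — the paper's exact voting and merging rules are not reproduced here:

```python
import numpy as np

def kmeans_run(X, k, seed):
    """One k-means run; different seeds give different partitions."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(30):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

# toy data: two well-separated groups (illustrative only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(2, 0.2, (20, 2))])

# voting: count how often each pair of points lands in the same cluster
runs = 10
votes = np.zeros((len(X), len(X)))
for s in range(runs):
    lab = kmeans_run(X, 2, seed=s)
    votes += (lab[:, None] == lab[None, :])

# merging: points that co-cluster in a majority of runs join one cluster
same = votes / runs > 0.5
merged = -np.ones(len(X), dtype=int)
next_id = 0
for i in range(len(X)):
    if merged[i] < 0:
        merged[same[i]] = next_id
        next_id += 1
```

Even if a single unlucky run splits the data badly, the majority vote washes it out — which is exactly the instability argument made in the abstract.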
14

Voting in clustering and finding the number of clusters

Dimitriadou, Evgenia, Weingessel, Andreas, Hornik, Kurt January 1999 (has links) (PDF)
In this paper we present an unsupervised algorithm which performs clustering given a data set and which can also find the number of clusters existing in it. This algorithm consists of two techniques. The first, the voting technique, allows us to combine several runs of clustering algorithms, with the number of clusters predefined, resulting in a common partition. We introduce the idea that there are cases where an input point has a structure with a certain degree of confidence and may belong to more than one cluster with a certain degree of "belongingness". The second part consists of an index measure which receives the results of every voting process for different numbers of clusters and makes the decision in favor of one. This algorithm is a complete clustering scheme which can be applied to any clustering method and to any type of data set. Moreover, it helps us to overcome instabilities of the clustering algorithms and to improve the ability of a clustering algorithm to find structures in a data set. / Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
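A rough illustration of the second part — an index that compares clusterings for different numbers of clusters and picks one. The Calinski-Harabasz ratio below is a stand-in for the paper's own index measure, and the data and seeding are invented:

```python
import numpy as np

def kmeans(X, k, seed=0):
    """K-means with farthest-point seeding for stable initial centers."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])        # next seed: farthest point
    centers = np.array(centers)
    for _ in range(30):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def ch_index(X, labels, centers):
    """Calinski-Harabasz: between- vs within-cluster dispersion."""
    n, k = len(X), len(centers)
    mu = X.mean(0)
    W = sum(((X[labels == j] - centers[j]) ** 2).sum() for j in range(k))
    B = sum((labels == j).sum() * ((centers[j] - mu) ** 2).sum()
            for j in range(k))
    return (B / (k - 1)) / (W / (n - k))

# toy data with three clear groups (illustrative only)
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.2, (30, 2)) for c in (0, 3, 6)])
scores = {k: ch_index(X, *kmeans(X, k)) for k in (2, 3, 4, 5)}
best_k = max(scores, key=scores.get)
```

The index peaks at the candidate whose clusters are both compact and well separated, which is the decision step the abstract describes.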
15

Distance Effects in Similarity Based Free Categorization

Miller, Benjamin Alan 01 September 2015 (has links)
This experiment investigated the processes underlying similarity-based free categorization. Of particular interest was how temporal distance between similar objects affects the likelihood that people will put them into the same novel category. Participants engaged in a free categorization task referred to as binomial labeling. This task required participants to generate a two-part label (A1, B1, C1, etc.) indicating family (superordinate) and species (subordinate) levels of categorization for each object in a visual display. Participants were shown the objects one at a time in a sequential presentation; after labeling each object, they were asked to describe the similarity between that object and previous objects by selecting one of five choices from a drop-down menu. Our main prediction was that temporal distance should affect categorization; specifically, that people should be less likely to give two identical objects the same category label the farther apart they are shown in the display. The primary question being addressed in this study was whether the effects of distance are due to a decreased likelihood of remembering the first object when labeling the second (what we refer to as a stage 1 or sampling effect) or to factors during the actual comparison itself (a stage 2 or decision effect). Our results showed a significant effect of distance on both the likelihood of giving identical objects the same label and the likelihood of mentioning the first object when labeling the second object in an identical pair. Specifically, as the distance between two identical objects increased, the likelihood of giving them the same label, as well as of mentioning their similarity, decreased. Importantly, the decreased probability of giving the second object the same label seemed entirely due to the decreased probability of remembering (sampling) the first object, as indicated by the menu responses.
These results provide strong support for the idea that the effect of temporal distance on free categorization is mainly due to stage 1 factors, specifically to its effect on the availability of the first instance in memory when labeling the second. No strong evidence was found in this experiment supporting a separate distance effect at the comparison-decision stage (i.e., stage 2).
16

Unsupervised Learning for Plant Recognition

Jelacic, Mersad January 2006 (has links)
Six methods are used for clustering data containing two different objects: sugar-beet plants and weed. These objects are described by 19 different features, i.e. shape and color features. There is also information about the distance between sugar-beet plants that is used for labeling clusters. The methods that are evaluated are: k-means, k-medoids, hierarchical clustering, competitive learning, self-organizing maps and fuzzy c-means. After using the methods on plant data, clusters are formed. The clusters are labeled with three different proposed methods: the expert, database and context methods. The expert method uses a human to give initial cluster centers that are labeled. The database method uses a database as an expert that provides initial cluster centers. The context method uses information about the environment, namely the distance between sugar-beet plants, for labeling the clusters.
The algorithms that were tested, with the lowest achieved corresponding errors, are: k-means (3.3%), k-medoids (3.8%), hierarchical clustering (5.3%), competitive learning (6.8%), self-organizing maps (4.9%) and fuzzy c-means (7.9%). Three different datasets were used. The lowest error on dataset0 is 3.3%, compared to 3% for supervised learning methods. For dataset1 the error is 18.7% and for dataset2 it is 5.8%; with supervised methods, the error on dataset1 is 11% and on dataset2 it is 5.1%. The high error rate on dataset1 is due to the samples not being well separated into different clusters. The features of dataset1 were extracted from lower-resolution images than those of the other datasets, and another difference between the datasets is that the sugar-beet plants are in different growth stages.
The performance of the three methods for labeling clusters is: expert method (6.8% as the lowest error achieved), database method (3.7%) and context method (6.8%). These results show the clustering results by competitive learning, where the real error is 6.8%.
Unsupervised-learning methods for clustering can very well be used for plant identification. Because the samples are not classified, an automatic labeling technique must be used if plants are to be identified. The three proposed techniques can be used for automatic labeling of plants.
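The database labeling method described above — cluster first, then name each cluster by its nearest stored reference — can be sketched as follows. The features, prototype values, and class geometry are invented for illustration, not taken from the thesis:

```python
import numpy as np

# invented two-dimensional shape/colour features for the two classes
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([1.0, 0.2], 0.1, (40, 2)),   # sugar-beet-like
               rng.normal([0.2, 1.0], 0.1, (40, 2))])  # weed-like

# unsupervised step: k-means, seeded with one point from each region
centers = X[[0, 40]].astype(float)
for _ in range(20):
    lab = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    for j in range(2):
        centers[j] = X[lab == j].mean(0)

# "database method": name each cluster after its nearest stored prototype
prototypes = {"plant": np.array([1.0, 0.2]), "weed": np.array([0.2, 1.0])}
names = {j: min(prototypes,
                key=lambda n: ((centers[j] - prototypes[n]) ** 2).sum())
         for j in range(2)}
```

The clustering itself never sees class labels; only the final naming step consults the "database" of prototypes, which mirrors the separation between clustering and labeling in the abstract.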
17

Learning Linear, Sparse, Factorial Codes

Olshausen, Bruno A. 01 December 1996 (has links)
In previous work (Olshausen & Field 1996), an algorithm was described for learning linear sparse codes which, when trained on natural images, produces a set of basis functions that are spatially localized, oriented, and bandpass (i.e., wavelet-like). This note shows how the algorithm may be interpreted within a maximum-likelihood framework. Several useful insights emerge from this connection: it makes explicit the relation to statistical independence (i.e., factorial coding), it shows a formal relationship to the algorithm of Bell and Sejnowski (1995), and it suggests how to adapt parameters that were previously fixed.
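In outline, the model behind this interpretation can be written as follows. This is a common rendering of the 1996 sparse-coding formulation, not a quotation from the note, and the symbols (lambda for the prior strength, S for the sparseness function) follow that convention:

```latex
% Generative model: an image is a linear superposition of basis
% functions \phi_i plus Gaussian noise \nu,
I(x) = \sum_i a_i \,\phi_i(x) + \nu(x),
% with a sparse, factorial (independent) prior on the coefficients:
P(a) = \prod_i P(a_i), \qquad P(a_i) \propto e^{-\lambda S(a_i)}.
% Maximizing the posterior P(a \mid I) is then equivalent to
% minimizing a reconstruction term plus a sparseness penalty:
E = \sum_x \Big[ I(x) - \sum_i a_i \phi_i(x) \Big]^2
    + \lambda \sum_i S(a_i).
```

The factorial form of P(a) is where the connection to statistical independence enters: maximizing likelihood under this prior pushes the learned code toward independent, sparse coefficients.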
18

Knowledge Extraction from Logged Truck Data using Unsupervised Learning Methods

Grubinger, Thomas January 2008 (has links)
The goal was to extract knowledge from data that is logged by the electronic system of every Volvo truck. This allowed the evaluation of large populations of trucks without requiring additional measuring devices and facilities. An evaluation cycle, similar to the knowledge-discovery-from-databases model, was developed and applied to extract knowledge from the data. The focus was on extracting information in the logged data that is related to the class labels of different populations, but knowledge extraction inherent to the given classes was also supported. The methods used come from the field of unsupervised learning, a sub-field of machine learning, and include self-organizing maps, multi-dimensional scaling and fuzzy c-means clustering. The developed evaluation cycle was exemplified by the evaluation of three data sets. Two data sets were arranged from populations of trucks differing in their operating environment with respect to road condition or gross combination weight. The results showed that there is relevant information in the logged data that describes these differences in the operating environment. A third data set consisted of populations with different engine configurations, making the two groups of trucks unequally powerful. Using the knowledge extracted in this task, engines that were sold in one of the two configurations and modified later could be detected. Information in the logged data that describes the vehicle's operating environment makes it possible to detect trucks that are operated differently from their intended use. Initial experiments to find such vehicles were conducted and recommendations for an automated application were given.
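One of the methods mentioned, fuzzy c-means, fits in a few lines. The toy data below stands in for the logged truck features — dimensions, values, and the two "operating environments" are invented:

```python
import numpy as np

def fuzzy_cmeans(X, centers, m=2.0, iters=50):
    """Fuzzy c-means: soft memberships instead of hard assignments."""
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        d = np.sqrt(((X[:, None] - centers[None]) ** 2).sum(-1)) + 1e-9
        # membership of sample i in cluster j falls off with distance
        U = (1.0 / d ** p) / (1.0 / d ** p).sum(1, keepdims=True)
        W = U ** m
        # centers are membership-weighted means of all samples
        centers = (W.T @ X) / W.sum(0)[:, None]
    return U, centers

# hypothetical logged-data features for two operating environments
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (30, 3)), rng.normal(2, 0.3, (30, 3))])

# spread-out initial guesses, then soft clustering
U, centers = fuzzy_cmeans(X, np.stack([X.min(0), X.max(0)]))
hard = U.argmax(1)     # harden memberships for inspection
```

Unlike k-means, every sample retains a graded membership in every cluster, which is useful when a truck's operating profile sits between two populations rather than clearly in one.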
19

Unsupervised learning to cluster the disease stages in Parkinson's disease

Srinivasan, BadriNarayanan January 2011 (has links)
Parkinson's disease (PD) is the second most common neurodegenerative disorder (after Alzheimer's disease) and directly affects up to 5 million people worldwide. The stages of the disease (on the Hoehn and Yahr scale) have been predicted by many methods, which helps doctors adjust the dosage accordingly. These methods were developed on a data set covering about seventy patients at nine clinics in Sweden. The purpose of this work is to compare an unsupervised technique with supervised neural-network techniques in order to make sure the collected data sets are reliable enough for decision making. The available data was preprocessed before its features were calculated. Wavelet features, which are complex but efficient, were computed to present the data set to the network. The dimension of the final feature set was reduced using principal component analysis. For unsupervised learning, k-means gives the closest result, around 76%, when compared with the supervised techniques. Back-propagation and J4 were used as supervised models to classify the stages of Parkinson's disease, where back-propagation gives a variance percentage of 76-82%. The results of both models were analyzed. This indicates that the collected data are reliable for predicting the disease stages in Parkinson's disease.
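The reduction-then-clustering pipeline described here — PCA followed by k-means — can be sketched as below. The data is a synthetic stand-in for the wavelet features (the real patient data is not public), and the feature dimensions are invented:

```python
import numpy as np

def pca(X, n):
    """Project X onto its top-n principal components (via SVD)."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n].T

# hypothetical high-dimensional "wavelet feature" vectors, two groups
# (a mean offset on the first 10 of 20 features separates them)
rng = np.random.default_rng(5)
stage_a = rng.normal(0, 1, (30, 20)) + np.r_[np.ones(10) * 3, np.zeros(10)]
stage_b = rng.normal(0, 1, (30, 20))
X = np.vstack([stage_a, stage_b])

Z = pca(X, 2)      # reduce 20 features to 2 components

# k-means on the reduced features, spread-out initial centers
centers = np.stack([Z.min(0), Z.max(0)])
for _ in range(30):
    lab = ((Z[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    for j in range(2):
        centers[j] = Z[lab == j].mean(0)
```

Reducing dimension first, as the abstract does, keeps the clustering from being dominated by the many noisy feature directions.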
