71

Some topics on similarity metric learning

Cao, Qiong January 2015
The success of many computer vision and machine learning algorithms depends critically on the quality of the chosen distance metric or similarity function. Because real data are inherently task- and data-dependent, learning an appropriate distance metric or similarity function from the data for each specific task usually outperforms the default Euclidean distance or cosine similarity. This thesis focuses on developing new metric and similarity learning models for three tasks: unconstrained face verification, person re-identification and kNN classification. Unconstrained face verification is a binary matching problem whose target is to predict whether two images or videos show the same person. Person re-identification, in turn, handles pedestrian matching and ranking across non-overlapping camera views. Both vision problems are very challenging because of the large transformation differences in images or videos caused by pose, expression, occlusion, problematic lighting and viewpoint. To address these concerns, two novel methods are proposed. Firstly, we introduce a new dimensionality reduction method called Intra-PCA, designed for robustness to large transformation differences, and show that it significantly outperforms classic dimensionality reduction methods (e.g. PCA and LDA). Secondly, we propose a novel regularization framework called Sub-SML to learn distance metrics and similarity functions for unconstrained face verification and person re-identification. The main novelty of our formulation is that it combines the robustness of Intra-PCA to large transformation variations with the discriminative power of metric and similarity learning, a property that most existing methods lack. Turning to the task of kNN classification, which relies on a distance metric to identify the nearest neighbors, we revisit some popular existing methods for metric learning and develop a general formulation called DMLp for learning a distance metric from data. To obtain the optimal solution, a gradient-based optimization algorithm is proposed that only needs the computation of the largest eigenvector of a matrix per iteration. Although a large number of studies have been devoted to metric/similarity learning based on different objective functions, few address the generalization analysis of such methods. We describe a novel approach to the generalization analysis of metric/similarity learning that can deal with general matrix regularization terms, including the Frobenius norm, sparse L1-norm, mixed (2,1)-norm and trace norm. The novel models developed in this thesis are evaluated on four challenging databases: the Labeled Faces in the Wild dataset for unconstrained face verification in still images; the YouTube Faces database for video-based face verification in the wild; the Viewpoint Invariant Pedestrian Recognition database for person re-identification; and the UCI datasets for kNN classification. Experimental results show that the proposed methods yield competitive or state-of-the-art performance.
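To make the flavour of such an eigenvector-per-iteration scheme concrete, the following is a minimal Frank-Wolfe-style sketch of Mahalanobis metric learning over a PSD trace ball, in which each step needs only one extreme eigenvector. The pairwise hinge loss, the threshold b, the trace constraint and all names here are illustrative assumptions, not the thesis's DMLp or Sub-SML formulations.

```python
import numpy as np

def learn_metric(pairs, labels, dim, tau=10.0, b=1.0, iters=200):
    """Learn a PSD matrix M for the Mahalanobis distance
    d_M(x, y) = (x - y)^T M (x - y) over the trace ball
    {M >= 0, trace(M) <= tau}, using Frank-Wolfe steps.
    pairs:  list of (x, y) feature-vector tuples
    labels: +1 for same-person pairs, -1 for different-person pairs
    The pairwise hinge loss and threshold b are illustrative choices."""
    M = np.eye(dim) * (tau / dim)
    for t in range(iters):
        # Subgradient of sum_i max(0, 1 - l_i * (b - d_M(x_i, y_i))).
        G = np.zeros((dim, dim))
        for (x, y), l in zip(pairs, labels):
            d = x - y
            if l * (b - d @ M @ d) < 1.0:    # hinge term is active
                G += l * np.outer(d, d)      # d/dM of l * d^T M d
        # Linear minimization over the trace ball: the minimizer of
        # <G, S> is tau * v v^T, where v is the eigenvector of G with
        # the most negative eigenvalue -- one extreme eigenvector per step.
        w, V = np.linalg.eigh(G)
        S = tau * np.outer(V[:, 0], V[:, 0]) if w[0] < 0 else np.zeros_like(M)
        gamma = 2.0 / (t + 2.0)              # standard Frank-Wolfe step size
        M = (1 - gamma) * M + gamma * S
    return M

# Toy usage: two similar and two dissimilar pairs in 5 dimensions.
rng = np.random.default_rng(0)
xs = rng.standard_normal((4, 5))
pairs = [(xs[0], xs[0] + 0.1 * rng.standard_normal(5)),
         (xs[1], xs[1] + 0.1 * rng.standard_normal(5)),
         (xs[2], xs[3]), (xs[3], xs[0])]
M = learn_metric(pairs, [1, 1, -1, -1], dim=5)
```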
72

Generative probabilistic models for object segmentation

Eslami, Seyed Mohammadali January 2014
One of the long-standing open problems in machine vision has been the task of ‘object segmentation’, in which an image is partitioned into two sets of pixels: those that belong to the object of interest, and those that do not. A closely related task is that of ‘parts-based object segmentation’, where additionally each of the object’s pixels is labelled as belonging to one of several predetermined parts. There is broad agreement that segmentation is coupled to the task of object recognition. Knowledge of the object’s class can lead to more accurate segmentations, and in turn accurate segmentations can be used to obtain higher recognition rates. In this thesis we focus on one side of this relationship: given the object’s class and its bounding box, how accurately can we segment it? Segmentation is challenging primarily because of the huge amount of variability one sees in images of natural scenes: a large number of factors combine in complex ways to generate the pixel intensities that make up any given image. In this work we approach the problem by developing generative probabilistic models of the objects in question. Not only does this allow us to express notions of variability and uncertainty in a principled way, but also to separate the problems of model design and inference. The thesis makes the following contributions. First, we demonstrate an explicit probabilistic model of images of objects based on a latent Gaussian model of shape, which can be learned from images in an unsupervised fashion. Through experiments on a variety of datasets we demonstrate the advantages of explicitly modelling shape variability. We then focus on the task of constructing more accurate models of shape. We present a type of layered probabilistic model that we call a Shape Boltzmann Machine (SBM) for the task of modelling foreground/background (binary) and parts-based (categorical) shapes. We demonstrate that it constitutes the state of the art and characterises a ‘strong’ model of shape, in that samples from the model look realistic and it generalises to generate samples that differ from training examples. Finally, we demonstrate how the SBM can be used in conjunction with an appearance model to form a fully generative model of images of objects. We show how parts-based object segmentations can be obtained simply by performing probabilistic inference in this joint model. We apply the model to several challenging datasets and find that its performance is comparable to the state of the art.
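The Shape Boltzmann Machine itself is a layered model with local connectivity and weight sharing; as a rough flavour of the model family, here is a minimal sketch of block Gibbs sampling in a plain (and here untrained) binary restricted Boltzmann machine over shape images — a deliberate simplification, not the SBM itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b_v, b_h):
    """One block-Gibbs sweep in a binary RBM: sample hidden units
    given the visible shape pixels, then resample the pixels."""
    h_prob = sigmoid(v @ W + b_h)
    h = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_prob = sigmoid(h @ W.T + b_v)
    v = (rng.random(v_prob.shape) < v_prob).astype(float)
    return v, h

# Toy setup: 32x32 binary shape images, 100 hidden units (illustrative sizes).
n_vis, n_hid = 32 * 32, 100
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

v = (rng.random(n_vis) < 0.5).astype(float)   # random initial shape
for _ in range(200):                          # burn-in, then v is a sample
    v, _ = gibbs_step(v, W, b_v, b_h)
sample = v.reshape(32, 32)                    # a (here untrained) shape sample
```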
73

Machine learning using fuzzy logic with applications in medicine

Norris, D. E. January 1986
No description available.
74

Modular on-line function approximation for scaling up reinforcement learning

Tham, Chen Khong January 1994
No description available.
75

Design, implementation and applications of the Support Vector method and learning algorithm

Stitson, Mark Oliver January 1999
No description available.
76

Evolutionary generalisation and genetic programming

Kuscu, Ibrahim January 1998
No description available.
77

Similarity as representational distortion : an experimental investigation

Ananiadou, Katerina January 2000
No description available.
78

A machine induction approach to the protein folding problem

Alnahi, Haitham G. January 2000
No description available.
79

A goal directed learning agent for the Semantic Web

Grimnes, Gunnar Aastrand January 2008
This thesis is motivated by the need for autonomous agents on the Semantic Web to be able to learn. The Semantic Web is an effort to extend the existing Web with machine-understandable information, thus enabling intelligent agents to understand the content of web pages and help users carry out tasks online. For such autonomous personal agents working on a world-wide Semantic Web we make two observations. Firstly, every user is different and the Semantic Web will never cater for them all; it is therefore crucial for an agent to be able to learn from the user and the world around it to provide a personalised view of the web. Secondly, because of the immense amount of information available on the world-wide Semantic Web, an agent cannot read and process all available data. We argue that to deal with this information overload a goal-directed approach is needed: an agent must be able to reason about the external world, its internal state and the actions available, and only carry out the actions that help achieve the current goal. In the first part of this thesis we explore the application of two machine learning techniques to Semantic Web data. Firstly, we investigate the classification of Semantic Web resources: we discuss the issues of mapping Semantic Web formats to an input representation suitable for a selection of well-known algorithms, and outline the requirements for these algorithms to work well in a Semantic Web context. Secondly, we consider the clustering of Semantic Web resources. Here we focus on the definition of the similarity between two resources, and on how to determine what part of a large Semantic Web graph is relevant to a single resource. In the second part of the thesis we describe our goal-directed learning agent Smeagol. We present explicit definitions of the classification and clustering techniques devised in the first part of the thesis, allowing Smeagol to use a planning approach to create plans of actions that may fulfil a given top-level goal. We also investigate different ways in which Smeagol can dynamically replan when steps within the initial plan fail, and show that Smeagol can offer plausible learned answers to a given query even when no explicit correct answer exists.
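As a minimal sketch of that mapping step — turning the triples about a resource into an input representation that standard learners or similarity measures can consume — here is an illustrative bag-of-(predicate, object) encoding with a set-overlap similarity; the toy triples and the encoding are assumptions, not the thesis's actual representation.

```python
from collections import Counter

def resource_features(triples, resource):
    """Bag-of-(predicate, object) representation of one RDF resource,
    built from the triples in which it appears as subject."""
    return Counter((p, o) for s, p, o in triples if s == resource)

def jaccard(a, b):
    """Set-overlap similarity between two feature bags."""
    ka, kb = set(a), set(b)
    return len(ka & kb) / len(ka | kb) if (ka | kb) else 0.0

# Toy triples (hypothetical URIs, for illustration only).
triples = [
    ("ex:alice", "rdf:type", "foaf:Person"),
    ("ex:alice", "foaf:interest", "ex:MachineLearning"),
    ("ex:bob",   "rdf:type", "foaf:Person"),
    ("ex:bob",   "foaf:interest", "ex:SemanticWeb"),
]
fa = resource_features(triples, "ex:alice")
fb = resource_features(triples, "ex:bob")
print(jaccard(fa, fb))   # 1/3: shared rdf:type, differing interests
```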
80

Adaptive parallelism mapping in dynamic environments using machine learning

Emani, Murali Krishna January 2015
Modern hardware platforms are parallel and diverse, ranging from mobile devices to data centers, and multiple parallel applications commonly execute on the same system, competing for resources. This resource contention can lead to a drastic degradation in a program’s performance. In addition, the execution environment, composed of workloads and hardware resources, is dynamic and unpredictable. Efficiently matching program parallelism to machine parallelism under such uncertainty is hard, and the mapping policies that determine the allocation of work to threads should anticipate these variations. This thesis proposes solutions to the mapping of parallel programs in dynamic environments, employing predictive modelling techniques to determine the best degree of parallelism. Firstly, it proposes a machine learning-based model to determine the optimal thread number for a target program co-executing with varying workloads; this offline-trained model takes static code features and dynamic runtime information as input. Next, it proposes a novel solution that monitors the offline model and adjusts its decisions in response to changes in the environment: a second predictive model determines what the future environment should look like if the current thread prediction were optimal, and the predicted thread numbers are adjusted depending on how close this prediction is to the actual environment. Furthermore, since there is a multitude of potential execution scenarios and no single policy is best suited to all of them, this work proposes an approach based on the idea of a mixture of experts: it considers a number of offline experts, or mapping policies, each specialised for a given scenario, and learns online which expert is best for the current execution. When evaluated on highly dynamic executions, these solutions are shown to surpass default, state-of-the-art adaptive and analytic approaches.
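As a minimal sketch of the interface such a thread-number predictor might expose, with a nearest-neighbour lookup over offline profiles standing in for the trained model (all feature names and numbers are illustrative assumptions):

```python
import numpy as np

# Offline training data (illustrative): each row pairs static program
# features with dynamic runtime features of the co-executing workload;
# targets are the thread counts that performed best when profiled offline.
# features = [loop_trip_count, memory_ops_ratio, external_load, idle_cores]
X_train = np.array([
    [1e6, 0.2, 0.1, 7],
    [1e6, 0.2, 0.8, 2],
    [1e4, 0.6, 0.4, 4],
])
y_train = np.array([8, 2, 4])   # best thread counts observed offline

def predict_threads(features, k=1):
    """Predict a thread count for the current static + dynamic features
    by nearest-neighbour lookup over offline profiles (a stand-in for
    an offline-trained predictive model)."""
    x = np.asarray(features, dtype=float)
    scale = X_train.max(axis=0)              # crude per-feature normalisation
    d = np.linalg.norm((X_train - x) / scale, axis=1)
    nearest = np.argsort(d)[:k]
    return int(round(y_train[nearest].mean()))

# At a parallel region boundary: read the current runtime state, then map.
print(predict_threads([1e6, 0.2, 0.7, 3]))   # -> 2 on this toy data
```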
