51

Low and Mid-level Shape Priors for Image Segmentation

Levinshtein, Alex 15 February 2011 (has links)
Perceptual grouping is essential to manage the complexity of real-world scenes. We explore bottom-up grouping at three different levels. Starting from low-level grouping, we propose a novel method for oversegmenting an image into compact superpixels, reducing the complexity of many high-level tasks. Unlike most low-level segmentation techniques, our geometric flow formulation enables us to impose additional compactness constraints, resulting in a fast method with minimal undersegmentation. Our subsequent work utilizes compact superpixels to detect two important mid-level shape regularities, closure and symmetry. Unlike the majority of closure detection approaches, we transform the closure detection problem into one of finding a subset of superpixels whose collective boundary has strong edge support in the image. Building on superpixels, we define a closure cost as the ratio of a novel learned boundary gap measure to area, and show how it can be globally minimized to recover a small set of promising shape hypotheses. In our final contribution, motivated by the success of shape skeletons, we recover and group symmetric parts without assuming a prior figure-ground segmentation. Further exploiting superpixel compactness, we now use superpixels as an approximation to the deformable maximal discs that comprise a medial axis. A learned measure of affinity between neighboring superpixels and between symmetric parts enables the purely bottom-up recovery of a skeleton-like structure, facilitating indexing and generic object recognition in complex real images.
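To make the closure term concrete, here is a minimal sketch, not taken from the thesis, of evaluating a superpixel-closure cost as the ratio of an accumulated boundary gap to enclosed area. The data structures (an adjacency gap map and an area table) are assumptions for illustration, and the global minimization over subsets described in the abstract is not shown.

    # Illustrative sketch only: a superpixel-closure cost as
    # (learned boundary gap) / (enclosed area). Names and data structures
    # are assumptions, not the thesis implementation.
    def closure_cost(selected, areas, gap):
        """selected: set of superpixel ids forming one closure hypothesis.
        areas: dict mapping superpixel id -> pixel area.
        gap: dict mapping a pair (i, j) of adjacent superpixels to a learned
             measure of how weak the image edge evidence is along their
             shared boundary fragment."""
        boundary_gap = 0.0
        for (i, j), g in gap.items():
            # A fragment lies on the collective boundary iff exactly one of
            # its two superpixels is inside the selected region.
            if (i in selected) != (j in selected):
                boundary_gap += g
        area = sum(areas[i] for i in selected)
        return boundary_gap / area if area > 0 else float("inf")

    # Hypothetical usage: keep the cheapest of several candidate regions.
    # best = min(candidate_regions, key=lambda s: closure_cost(s, areas, gap))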
52

A Methodological Framework for Decision-theoretic Adaptation of Software Interaction and Assistance

Hui, Bowen 09 January 2012 (has links)
In order to facilitate software interaction and increase user satisfaction, various research efforts have tackled the problem of software customization by modeling the user’s goals, skills, and preferences. In this thesis, we focus on run-time solutions for adapting various interface and interaction aspects of software. From an intelligent agent’s perspective, the system treats this customization problem as a decision-theoretic planning problem under uncertainty about the user. We propose a methodological framework for developing intelligent software interaction and assistance. This framework has been instantiated in various case studies, which are reviewed in the thesis. Through data collection experiments to learn model parameters, simulation experiments to assess system feasibility and adaptivity, and usability testing to assess user receptiveness, our case studies show that our approach can effectively carry out customizations according to different user preferences and adapt to changing preferences over time.
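As a rough illustration of the decision-theoretic view described above (an assumed general setup, not the thesis's actual framework), the following sketch selects the interface adaptation with the highest expected utility under a belief over user types. The full planning setting would also account for how the belief evolves over time, which this one-shot sketch omits.

    # Illustrative sketch only: choose the adaptation that maximizes expected
    # utility under the current belief over user types. All names are assumed.
    def best_adaptation(actions, belief, utility):
        """actions: candidate interface adaptations (including 'do nothing').
        belief: dict mapping user_type -> probability, summing to 1.
        utility: function (action, user_type) -> float."""
        def expected_utility(action):
            return sum(p * utility(action, user_type)
                       for user_type, p in belief.items())
        return max(actions, key=expected_utility)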
53

Remodeling Planning Domains Using Macro Operators and Machine Learning

Alhossaini, Maher 08 January 2014 (has links)
The thesis of this dissertation is that automating domain remodeling in AI planning using macro operators, and making remodeling more flexible and applicable, can improve planning performance and enrich planning. In this dissertation, we present three novel ideas: (1) an instance-specific domain remodeling framework, (2) a recasting of planning domain remodeling with macros as a parameter optimization problem, and (3) a combination of two domain remodeling approaches in the instance-specific remodeling context. In instance-specific domain remodeling, we choose the best macro-augmented domain model for every incoming problem instance using a predictor that relies on previously solved problem instances to estimate the macros to be added to the domain. The predictor is trained off-line on the observed relation between instance features and planner performance in the macro-augmented domain models. On-line, the predictor is used to find the best remodeling of the domain based on the problem instance’s features. Our empirical results over a number of standard benchmark planning domains demonstrate that our predictors can speed up the fixed-remodeling method that chooses the best set of macros by up to 2.5 times. The results also show that there is substantial room for improving performance with instance-specific rather than fixed remodeling approaches. The second idea recasts domain remodeling with macros as a parameter optimization problem. We show that this remodeling approach can outperform standard macro learning tools, and that it can significantly speed up the domain evaluation preprocessing required to train the predictors in instance-specific remodeling while maintaining similar performance. The final idea applies macro addition and operator removal to instance-specific domain remodeling. While maintaining an acceptable probability of preserving solvability, we build a predictor that adds macros and removes original operators based on the instance’s features. The results show that this new remodeling significantly outperforms macro-only fixed remodeling, and that it is better than the fixed domain models in a number of domains.
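A minimal sketch of the instance-specific selection step, under assumed interfaces (a feature vector per instance and a runtime regressor trained off-line); it is illustrative only and not the dissertation's implementation.

    # Illustrative sketch only: per-instance choice of a macro-augmented domain
    # model via a runtime predictor trained off-line. Interfaces are assumed.
    def select_domain_model(instance_features, candidate_macro_sets, predict_runtime):
        """instance_features: feature vector describing the incoming instance.
        candidate_macro_sets: iterable of macro sets, each defining a remodeled domain.
        predict_runtime: model trained off-line that maps (features, macro set)
                         to an estimated planner runtime in seconds."""
        return min(candidate_macro_sets,
                   key=lambda macros: predict_runtime(instance_features, macros))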
55

Exploiting Problem Structure in QBF Solving

Goultiaeva, Alexandra 27 March 2014 (has links)
Deciding the truth of a Quantified Boolean Formula (QBF) is a canonical PSPACE-complete problem. It provides a powerful framework for encoding problems that lie in PSPACE, including many problems in automatic verification and problems with discrete uncertainty or non-determinism. Two-person adversarial games are another type of problem naturally encoded in QBF. It is standard practice to use Conjunctive Normal Form (CNF) when representing QBFs: any propositional formula can be efficiently translated to CNF via the addition of new variables, and solvers can be implemented more efficiently due to the structural simplicity of CNF. However, the translation to CNF loses some structural information. This thesis shows that this structural information is important for efficient QBF solving and shows how it can be utilized to improve state-of-the-art QBF solvers. First, a non-CNF circuit-based solver is presented. It makes use of information not present in CNF to improve its performance. We present techniques that allow it to exploit the duality between solutions and conflicts that is lost when working with CNF. This duality can also be utilized in the production of certificates, allowing both true and false formulas to have easy-to-verify certificates of the same form. Then, it is shown that most modern CNF-based solvers can benefit from the additional information derived from duality using only minor modifications. Furthermore, even partial duality information can be helpful: we show that, for standard methods of conversion to CNF, some of the required information can be reconstructed from the CNF and can greatly benefit the solver.
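For readers unfamiliar with QBF, the following sketch gives the naive recursive semantics of a prenex QBF (exponential and purely illustrative; it is unrelated to the solver described above). It also makes the solution/conflict duality visible: a true formula is witnessed by the existential choices, a false one by the universal choices.

    # Illustrative sketch only: textbook recursive evaluation of a prenex QBF.
    def evaluate_qbf(prefix, matrix, assignment=None):
        """prefix: list of ('forall' | 'exists', variable) pairs, outermost first.
        matrix: function mapping a complete {variable: bool} assignment to True/False.
        Returns the truth value of the quantified formula."""
        assignment = dict(assignment or {})
        if not prefix:
            return matrix(assignment)
        quantifier, variable = prefix[0]
        results = []
        for value in (False, True):
            assignment[variable] = value
            results.append(evaluate_qbf(prefix[1:], matrix, assignment))
        return all(results) if quantifier == 'forall' else any(results)

    # Hypothetical usage: "forall x exists y. x == y" is true.
    # evaluate_qbf([('forall', 'x'), ('exists', 'y')],
    #              lambda a: a['x'] == a['y'])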
56

Missing Data Problems in Machine Learning

Marlin, Benjamin 01 August 2008 (has links)
Learning, inference, and prediction in the presence of missing data are pervasive problems in machine learning and statistical data analysis. This thesis focuses on the problems of collaborative prediction with non-random missing data and classification with missing features. We begin by presenting and elaborating on the theory of missing data due to Little and Rubin. We place a particular emphasis on the missing at random assumption in the multivariate setting with arbitrary patterns of missing data. We derive inference and prediction methods in the presence of random missing data for a variety of probabilistic models including finite mixture models, Dirichlet process mixture models, and factor analysis. Based on this foundation, we develop several novel models and inference procedures for both the collaborative prediction problem and the problem of classification with missing features. We develop models and methods for collaborative prediction with non-random missing data by combining standard models for complete data with models of the missing data process. Using a novel recommender system data set and experimental protocol, we show that each proposed method achieves a substantial increase in rating prediction performance compared to models that assume missing ratings are missing at random. We describe several strategies for classification with missing features including the use of generative classifiers, and the combination of standard discriminative classifiers with single imputation, multiple imputation, classification in subspaces, and an approach based on modifying the classifier input representation to include response indicators. Results on real and synthetic data sets show that in some cases performance gains over baseline methods can be achieved by methods that do not learn a detailed model of the feature space.
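For reference, the standard Little and Rubin missing-at-random (MAR) factorization that the abstract builds on can be written as follows; the notation here is ours, not the thesis's.

    % Missing-at-random (MAR) factorization (Little & Rubin), for reference.
    % x = (x_obs, x_mis): complete data; r: response (missingness) indicators.
    % MAR assumption: missingness depends only on observed values,
    %   p(r | x_obs, x_mis, phi) = p(r | x_obs, phi).
    % The observed-data likelihood then factorizes as
    \[
    p(x_{\mathrm{obs}}, r \mid \theta, \phi)
      = p(r \mid x_{\mathrm{obs}}, \phi)
        \int p(x_{\mathrm{obs}}, x_{\mathrm{mis}} \mid \theta)\, dx_{\mathrm{mis}},
    \]
    % so likelihood-based inference about theta can ignore the missingness
    % mechanism. When data are missing non-randomly this factorization fails,
    % which is why the collaborative prediction models above must model the
    % missing data process explicitly.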
59

Physical Models of Human Motion for Estimation and Scene Analysis

Brubaker, Marcus Anthony 05 January 2012 (has links)
This thesis explores the use of physics-based human motion models in the context of video-based human motion estimation and scene analysis. Two abstract models of human locomotion are described and used as the basis for video-based estimation. These models demonstrate the power of physics-based models to provide meaningful cues for estimation without the use of motion capture data. Promising as they are, the abstract nature of these models limits the range of motion they can faithfully capture. A more detailed model of human motion and ground interaction is also described. This model is used to estimate the ground surface with which a subject interacts and the forces driving the motion, and, finally, to smooth corrupted motions from existing trackers in a physically realistic fashion. This thesis suggests that one of the key difficulties in using physical models is the discontinuous nature of contact and collisions. Two different approaches to handling ground contacts are demonstrated: one using explicit detection and collision resolution, and the other using a continuous approximation. This difficulty also distinguishes the models used here from models used in other areas, which often sidestep the issue of collisions.
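As a loose illustration of a continuous approximation to ground contact (an assumed penalty-style model, not the thesis's formulation), the sketch below replaces the hard contact switch with smooth functions of the state, so the force remains continuous across the contact boundary.

    # Illustrative sketch only: smooth, penalty-style ground contact.
    # Constants and functional form are assumptions, not the thesis model.
    import math

    def smooth_contact_force(height, velocity, stiffness=1e4, damping=50.0, eps=0.01):
        """Vertical ground-reaction force on a point at signed height (m) above
        the ground, moving with vertical velocity (m/s)."""
        x = -height / eps
        # Numerically stable softplus: a smooth version of max(-height, 0).
        penetration = eps * (max(x, 0.0) + math.log1p(math.exp(-abs(x))))
        # Numerically stable sigmoid: ~1 in contact, ~0 in the air.
        if x >= 0:
            activation = 1.0 / (1.0 + math.exp(-x))
        else:
            e = math.exp(x)
            activation = e / (1.0 + e)
        return stiffness * penetration - damping * velocity * activation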
60

Graphical Epitome Processing

Cheung, Vincent 02 August 2013 (has links)
This thesis introduces principled, broadly applicable, and efficient patch-based models for data processing applications. Recently, "epitomes" were introduced as patch-based probability models that are learned by compiling together a large number of examples of patches from input images. This thesis describes how epitomes can be used to model video data, and it introduces a significant computational speedup that can be incorporated into the epitome inference and learning algorithm. In the case of videos, epitomes are estimated so as to model most of the small space-time cubes from the input data. The epitome can then be used for various modelling and reconstruction tasks; we show results for video super-resolution, video interpolation, and object removal. Besides computational efficiency, an interesting advantage of the epitome as a representation is that it can be reliably estimated even from videos with large amounts of missing data. This ability is illustrated on the task of reconstructing the dropped frames in a video broadcast using only the degraded video. Further, a new patch-based model is introduced that, when applied to epitomes, accounts for the varying geometric configurations of object features. The power of this model is illustrated on tasks such as multiple-object registration and detection and missing-data interpolation, including the difficult task of photograph relighting.
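To give a feel for the core patch-to-epitome mapping (a simplified, mean-only sketch under assumed array shapes, not the thesis's probabilistic inference), the following exhaustively matches a patch against every epitome location and returns the best offset.

    # Illustrative, simplified sketch only: the real epitome model is
    # probabilistic (means and variances, soft mappings); shapes are assumed.
    import numpy as np

    def best_epitome_location(patch, epitome):
        """patch: (p, p) array; epitome: (E, E) array of epitome means, E >= p.
        Returns the top-left (row, col) offset minimizing squared error."""
        p, E = patch.shape[0], epitome.shape[0]
        best, best_cost = (0, 0), np.inf
        for r in range(E - p + 1):
            for c in range(E - p + 1):
                cost = float(np.sum((epitome[r:r + p, c:c + p] - patch) ** 2))
                if cost < best_cost:
                    best, best_cost = (r, c), cost
        return best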
