11

Learning Patch-based Structural Element Models with Hierarchical Palettes

Chua, Jeroen 21 November 2012 (has links)
Image patches can be factorized into ‘shapelets’ that describe segmentation patterns, and palettes that describe how to paint the segments. This allows a flexible factorization of local shape (segmentation patterns) and appearance (palettes), which we argue is useful for tasks such as object and scene recognition. Here, we introduce the ‘shapelet’ model: a framework that learns a library of ‘shapelet’ segmentation patterns to capture local shape, and hierarchical palettes of colors to capture appearance. Using a learned shapelet library, image patches can be analyzed with a variational technique to produce descriptors that separately describe local shape and local appearance. These descriptors can be used for high-level vision tasks, such as object and scene recognition. We show that the shapelet model is competitive with SIFT-based methods and structural element (stel) model variants on the object recognition datasets Caltech28 and Caltech101, and the scene recognition dataset All-I-Have-Seen.
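As a rough illustration of the shape/appearance factorization this abstract describes, the sketch below scores an image patch against a small library of segmentation masks and extracts per-segment mean colors as the palette. It is a toy reconstruction-error version with assumed inputs (shapelet_library, patch), not the learning or variational inference procedure used in the thesis.

```python
import numpy as np

def explain_patch(patch, shapelet_library):
    """Toy factorization of a patch into (shapelet index, palette).

    patch: (H, W, 3) float array of pixel colors.
    shapelet_library: list of (H, W) integer masks, each assigning every
        pixel to a segment id (a 'shapelet' segmentation pattern).
    Picks the shapelet whose segments, painted with their mean colors,
    best reconstruct the patch, and returns that palette alongside it.
    """
    best = None
    for idx, mask in enumerate(shapelet_library):
        segments = np.unique(mask)
        # Palette: mean color of the pixels each segment covers.
        palette = {s: patch[mask == s].mean(axis=0) for s in segments}
        # Paint each segment with its palette color and measure the error.
        recon = np.zeros_like(patch)
        for s in segments:
            recon[mask == s] = palette[s]
        error = float(np.sum((patch - recon) ** 2))
        if best is None or error < best[0]:
            best = (error, idx, palette)
    _, idx, palette = best
    return idx, palette
```

The resulting (shapelet index, palette) pair plays the role of a local descriptor that keeps shape and appearance separate.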
13

Visual Object Recognition Using Generative Models of Images

Nair, Vinod 01 September 2010 (has links)
Visual object recognition is one of the key human capabilities that we would like machines to have. The problem is the following: given an image of an object (e.g. someone's face), predict its label (e.g. that person's name) from a set of possible object labels. The predominant approach to solving the recognition problem has been to learn a discriminative model, i.e. a model of the conditional probability P(l|v) over possible object labels l given an image v. Here we consider an alternative class of models, broadly referred to as generative models, that learn the latent structure of the image so as to explain how it was generated. This is in contrast to discriminative models, which dedicate their parameters exclusively to representing the conditional distribution P(l|v). Making finer distinctions among generative models, we consider a supervised generative model of the joint distribution P(v,l) over image-label pairs, an unsupervised generative model of the distribution P(v) over images alone, and an unsupervised reconstructive model, which includes models such as autoencoders that can reconstruct a given image but do not define a proper distribution over images. The goal of this thesis is to empirically demonstrate various ways of using these models for object recognition. Its main conclusion is that such models are not only useful for recognition, but can even outperform purely discriminative models on difficult recognition tasks. We explore four types of applications of generative/reconstructive models for recognition: 1) incorporating complex domain knowledge into the learning by inverting a synthesis model, 2) using the latent image representations of generative/reconstructive models for recognition, 3) optimizing a hybrid generative-discriminative loss function, and 4) creating additional synthetic data for training more accurate discriminative models. Taken together, the results for these applications support the idea that generative/reconstructive models and unsupervised learning have a key role to play in building object recognition systems.
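For readers comparing the model classes named here, the following standard identities (basic probability, not specific to the thesis) show how a supervised generative model of the joint distribution P(v,l) yields a classifier, while a discriminative model parameterizes the conditional P(l|v) directly:

```latex
P(l \mid v) = \frac{P(v, l)}{P(v)} = \frac{P(v, l)}{\sum_{l'} P(v, l')},
\qquad
\hat{l} = \arg\max_{l} P(v, l).
```

An unsupervised model of P(v) alone, or a reconstructive model with no proper distribution over images, does not produce labels this way; as the abstract indicates, its latent representations or synthetic samples are instead used to train or improve a separate discriminative classifier.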
14

Decision-theoretic Elicitation of Generalized Additive Utilities

Braziunas, Darius 20 August 2012 (has links)
In this thesis, we present a decision-theoretic framework for building decision support systems that incrementally elicit preferences of individual users over multiattribute outcomes and then provide recommendations based on the acquired preference information. By combining decision-theoretically sound modeling with effective computational techniques and certain user-centric considerations, we demonstrate the feasibility and potential of practical autonomous preference elicitation and recommendation systems. More concretely, we focus on decision scenarios in which a user can obtain any outcome from a finite set of available outcomes. The outcome space is multiattribute; each outcome can be viewed as an instantiation of a set of attributes with finite domains. The user has preferences over outcomes that can be represented by a utility function. We assume that user preferences are generalized additively independent (GAI) and can therefore be represented by a GAI utility function. GAI utilities provide a flexible representation framework for structured preferences over multiattribute outcomes; they are less restrictive and therefore more widely applicable than additive utilities. In many decision scenarios with large and complex decision spaces (such as making travel plans or choosing an apartment to rent from thousands of available options), selecting the optimal decision can require a lot of time and effort on the part of the user. Since obtaining the user's complete utility function is generally infeasible, the decision support system has to make recommendations with only partial preference information. We provide solutions for effective elicitation of GAI utilities in situations where a probabilistic prior over the user's utility function is available, and in situations where the system's uncertainty about user utilities is represented by maintaining a set of feasible user utilities. In the first case, we use Bayesian criteria for decision making and query selection. In the second case, recommendations (and query strategies) are based on the robust minimax regret criterion, which recommends the outcome with the smallest maximum regret (with respect to all adversarial instantiations of feasible utility functions). Our proposed framework is implemented in the UTPref recommendation system, which searches multiattribute product databases using the minimax regret criterion. UTPref is evaluated in a study in which 40 users interacted with the system; the study measures the effectiveness of regret-based elicitation, evaluates user comprehension and acceptance of minimax regret, and assesses the relative difficulty of different query types.
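As a hedged illustration of the minimax regret criterion mentioned above (a generic sketch with assumed inputs, not the UTPref implementation), the code below represents the system's uncertainty as a finite set of feasible utility functions over the available outcomes and recommends the outcome whose worst-case regret is smallest.

```python
def minimax_regret_choice(outcomes, feasible_utilities):
    """Recommend the outcome with the smallest maximum regret.

    outcomes: list of outcome identifiers.
    feasible_utilities: list of dicts, each mapping every outcome to a
        utility value; together they stand in for the set of utility
        functions still consistent with the user's answers so far.
    Returns (recommended outcome, its maximum regret).
    """
    def max_regret(x):
        # The adversary picks the feasible utility function (and the best
        # alternative outcome under it) that makes recommending x look worst.
        return max(max(u[y] for y in outcomes) - u[x]
                   for u in feasible_utilities)

    regrets = {x: max_regret(x) for x in outcomes}
    recommended = min(regrets, key=regrets.get)
    return recommended, regrets[recommended]


# Hypothetical example: two outcomes, two feasible utility functions.
# minimax_regret_choice(["a", "b"], [{"a": 1.0, "b": 0.0}, {"a": 0.2, "b": 0.9}])
# -> ("a", 0.7): recommending "a" risks at most 0.7 regret, "b" risks 1.0.
```

Queries the user answers shrink the feasible set, which can only reduce the recommended outcome's maximum regret.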
15

Software Evolution: A Requirements Engineering Perspective

Ernst, Neil 21 August 2012 (has links)
This thesis examines the issue of software evolution from a Requirements Engineering perspective. This perspective is founded on the premise that software evolution is best managed with reference to the requirements of a given software system. In particular, I follow the Requirements Problem approach to software development: the problem of developing software can be characterized as finding a specification that satisfies user requirements, subject to domain constraints. To enable this, I propose a shift from treating requirements as artifacts to treating requirements as design knowledge, embedded in knowledge bases. Most requirements today, when they exist in tangible form at all, are static objects. Such artifacts are quickly out of date and difficult to update. Instead, I propose that requirements be maintained in a knowledge base which supports knowledge-level operations for asserting new knowledge and updating existing knowledge. Consistency checks and entailment of new specifications are performed automatically by answering simple queries. Maintaining a requirements knowledge base in parallel with running code means that changes precipitated by evolution are always addressed relative to the ultimate purpose of the system. This thesis begins with empirical studies which establish the nature of the requirements evolution problem. I use an extended case study of payment cards to motivate the following discussion. I begin at an abstract level, by introducing a requirements engineering knowledge base (REKB) using a functional specification. Since it is functional, the specifics of the implementation are left open. I then describe one implementation, using a reason-maintenance system, and show how this implementation can a) solve static requirements problems; b) help stakeholders bring requirements and implementation back into alignment following a change in the requirements problem; and c) support inconsistency tolerance in the REKB through paraconsistent reasoning. The end result of my work on the REKB is a tool and approach which can guide software developers and software maintainers in design and decision-making in the context of software evolution.
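To make the "requirements as a knowledge base" idea concrete, here is a heavily simplified propositional sketch with tell/ask operations; the payment-card atoms are hypothetical examples, and the actual REKB is specified functionally rather than as this naive Horn-clause implementation.

```python
class ToyREKB:
    """Minimal sketch of a requirements knowledge base with knowledge-level
    tell/ask operations. Requirements, domain assumptions and tasks are all
    encoded as propositional facts and Horn rules (body -> head); this is a
    toy stand-in for the REKB's functional specification, not a real one.
    """

    def __init__(self):
        self.facts = set()    # atoms asserted directly
        self.rules = []       # (frozenset(body), head) pairs

    def tell_fact(self, atom):
        self.facts.add(atom)

    def tell_rule(self, body, head):
        self.rules.append((frozenset(body), head))

    def _closure(self):
        # Naive forward chaining to a fixed point.
        known = set(self.facts)
        changed = True
        while changed:
            changed = False
            for body, head in self.rules:
                if body <= known and head not in known:
                    known.add(head)
                    changed = True
        return known

    def ask(self, atom):
        """Entailment query: is `atom` (e.g. a goal) achieved given what we know?"""
        return atom in self._closure()


# Hypothetical usage in the spirit of the payment-card case study:
kb = ToyREKB()
kb.tell_rule({"encrypt_card_data", "log_access"}, "compliance_goal")
kb.tell_fact("encrypt_card_data")
print(kb.ask("compliance_goal"))   # False: the goal is not yet entailed
kb.tell_fact("log_access")
print(kb.ask("compliance_goal"))   # True: the updated knowledge entails it
```

Re-asking the same query after each assertion is the knowledge-level analogue of checking whether an evolved specification still meets the system's purpose.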
19

Machine Learning Methods and Models for Ranking

Volkovs, Maksims 13 August 2013 (has links)
Ranking problems are ubiquitous and occur in a variety of domains that include social choice, information retrieval, computational biology and many others. Recent advancements in information technology have opened new data processing possibilities and significantly increased the complexity of computationally feasible methods. Through these advancements, ranking models are now beginning to be applied to many new and diverse problems. Across these problems, the data, which ranges from gene expressions to images and web documents, has vastly different properties and is often not human generated. This makes it challenging to apply many of the existing models for ranking, which primarily originate in social choice and are typically designed for human-generated preference data. As the field continues to evolve, a new trend has recently emerged where machine learning methods are used to automatically learn ranking models. While these methods typically lack the theoretical support of the social choice models, they often show excellent empirical performance and are able to handle large and diverse data, placing virtually no restrictions on the data type. These models have now been successfully applied to many diverse ranking problems including image retrieval, protein selection, machine translation and many others. Inspired by these promising results, the work presented in this thesis aims to advance machine learning methods for ranking and develop new techniques to allow effective modeling of existing and future problems. The presented work concentrates on three different but related domains: information retrieval, preference aggregation and collaborative filtering. In each domain we develop new models together with learning and inference methods and empirically verify our models on real-life data.
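As a generic example of learning a ranking model from data (a RankNet-style pairwise logistic loss with a linear scorer, chosen for brevity and not taken from the thesis), the sketch below fits a scoring function from preference pairs and then ranks items by score.

```python
import numpy as np

def train_pairwise_ranker(X, pairs, lr=0.1, epochs=100):
    """Learn a linear scoring function f(x) = w.x from preference pairs.

    X: (n_items, n_features) feature matrix.
    pairs: list of (i, j) index pairs meaning item i should rank above item j.
    Uses the logistic (RankNet-style) loss on score differences.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i, j in pairs:
            diff = X[i] - X[j]
            # Model's probability that item i ranks above item j.
            p = 1.0 / (1.0 + np.exp(-(w @ diff)))
            # Gradient step on -log p: move w toward satisfying the pair.
            w += lr * (1.0 - p) * diff
    return w

def rank_items(X, w):
    """Return item indices sorted from highest to lowest score."""
    return np.argsort(-(X @ w))
```

Pairwise training of this kind makes few assumptions about where the preference pairs come from, which is the flexibility the abstract highlights for machine-learned ranking models.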
20

Monitoring the Generation and Execution of Optimal Plans

Fritz, Christian Wilhelm 24 September 2009 (has links)
In dynamic domains, the state of the world may change in unexpected ways during the generation or execution of plans. Regardless of the cause of such changes, they raise the question of whether they interfere with ongoing planning efforts. Unexpected changes during plan generation may invalidate the current planning effort, while discrepancies between the expected and actual state of the world during execution may render the executing plan invalid or sub-optimal with respect to previously identified planning objectives. In this thesis we develop a general monitoring technique that can be used during both plan generation and plan execution to determine the relevance of unexpected changes and that supports recovery. This way, time-intensive replanning from scratch in the new and unexpected state can often be avoided. The technique can be applied to a variety of objectives, including monitoring the optimality of plans rather than just their validity. Intuitively, the technique operates in two steps: during planning, the plan is annotated with additional information that is relevant to the achievement of the objective; then, when an unexpected change occurs, this information is used to determine the relevance of the discrepancy with respect to the objective. We substantiate the claim of broad applicability of this relevance-based technique by developing four concrete applications: generating optimal plans despite frequent, unexpected changes to the initial state of the world, monitoring plan optimality during execution, monitoring the execution of near-optimal policies in stochastic domains, and monitoring the generation and execution of plans with procedural hard constraints. In all cases, we use the formal notion of regression to identify what is relevant for achieving the objective. We prove the soundness of these concrete approaches and present empirical results demonstrating that in some contexts orders-of-magnitude speed-ups can be gained by our technique compared to replanning from scratch.
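A minimal sketch of the two-step idea described above, with the per-step annotations standing in for the regressed conditions the thesis computes at planning time; the observe/execute hooks and all names are hypothetical.

```python
def monitor_execution(plan, observe, execute):
    """Execute an annotated plan, checking the relevance of unexpected changes.

    plan: list of (action, still_valid) pairs, where `still_valid` is a
        function state -> bool that holds exactly when the remaining plan
        still achieves the objective from `state` (e.g. a condition
        regressed through the remaining actions at planning time).
    observe: function returning the current world state.
    execute: function applying an action to the world.
    Returns True if the plan ran to completion, False if a discrepancy was
    found to be relevant and replanning is needed.
    """
    for action, still_valid in plan:
        state = observe()
        if not still_valid(state):
            # The unexpected change matters for the objective: stop and replan.
            return False
        # Irrelevant discrepancies are ignored and execution continues.
        execute(action)
    return True
```

Checking the annotated condition at each step is what lets irrelevant discrepancies be ignored instead of triggering replanning from scratch.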
