About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Modeling Uncertainty with Evolutionary Improved Fuzzy Functions

Celikyilmaz, Fethiye Asli 30 July 2008 (has links)
Fuzzy system modeling (FSM), the construction of a representation of a fuzzy system model, is a difficult task: it demands the identification of many parameters. This thesis analyses fuzzy-modeling problems and different approaches to coping with them. It focuses on a novel evolutionary FSM approach, the design of “Improved Fuzzy Functions” system models with the use of evolutionary algorithms. To support this analysis, local structures are identified with a new improved fuzzy clustering method and represented with novel “fuzzy functions”. The central contribution of this work is the use of evolutionary algorithms, in particular genetic algorithms, to find uncertainty intervals of parameters and thereby improve “Fuzzy Function” models. Replacing standard fuzzy rule bases (FRBs) with the new “Improved Fuzzy Functions” captures essential relationships in the structure identification process and overcomes a limitation of earlier FRB methods: the abundance of fuzzy operations, and hence the difficulty of choosing among t-norms and co-norms. Designing an autonomous and robust FSM, and reasoning with it, is the prime goal of this approach. The new FSM approach implements higher-level fuzzy sets to identify the uncertainties in (1) the system parameters and (2) the structure of the “Fuzzy Functions”. From these parameters, interval-valued fuzzy sets and “Fuzzy Functions” are identified. Finally, an evolutionary computing approach is combined with the proposed uncertainty identification strategy to build FSMs that can automatically identify these uncertainty intervals. After testing the proposed FSM tool on various benchmark problems, the algorithms are successfully applied to model decision processes in two real problem domains: the desulphurization process in steel making and stock price prediction. For both problems, the proposed methods produce robust, high-performance models that are comparable to, if not better than, the best system modeling approaches in the current literature. Several aspects of the proposed methodologies are thoroughly analyzed to provide a deeper understanding; these analyses show the consistency of the results. / Full thesis submitted in paper.
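To make the "fuzzy functions" idea concrete, here is a minimal, illustrative sketch: inputs are fuzzily clustered, one local linear model is fit per cluster with the membership value appended as an extra regressor, and a simple scan over the fuzziness exponent stands in for the thesis's genetic-algorithm layer. The clustering setup, toy data, and all names are assumptions of this sketch, not code from the thesis.

```python
import numpy as np

def fcm_memberships(X, centers, m):
    # Standard fuzzy c-means membership update: u[i, k] is the degree to
    # which sample i belongs to cluster k, for fuzziness exponent m > 1.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def fit_fuzzy_functions(X, y, centers, m):
    # One local linear model per cluster, with the membership value
    # appended as an extra regressor: the "fuzzy functions" trick.
    U = fcm_memberships(X, centers, m)
    models = []
    for k in range(len(centers)):
        Phi = np.column_stack([np.ones(len(X)), X, U[:, k]])
        W = np.diag(U[:, k])  # weight each sample by its membership
        beta, *_ = np.linalg.lstsq(W @ Phi, W @ y, rcond=None)
        models.append(beta)
    return models

def predict(X, centers, m, models):
    U = fcm_memberships(X, centers, m)
    preds = np.stack(
        [np.column_stack([np.ones(len(X)), X, U[:, k]]) @ b
         for k, b in enumerate(models)], axis=1)
    return (U * preds).sum(axis=1) / U.sum(axis=1)  # membership-weighted blend

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
centers = np.array([[-2.0], [0.0], [2.0]])

# Stand-in for the evolutionary layer: scan the fuzziness exponent m and
# keep the value with the lowest training error.
best_m, best_mse = None, np.inf
for m in np.linspace(1.3, 3.0, 10):
    models = fit_fuzzy_functions(X, y, centers, m)
    mse = np.mean((predict(X, centers, m, models) - y) ** 2)
    if mse < best_mse:
        best_m, best_mse = m, mse
print(f"best m = {best_m:.2f}, train MSE = {best_mse:.4f}")
```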
2

Nonparametric Bayesian Methods for Extracting Structure from Data

Meeds, Edward 01 August 2008 (has links)
One desirable property of machine learning algorithms is the ability to balance the number of parameters in a model against the amount of available data. Incorporating nonparametric Bayesian priors into models is one approach to automatically adjusting model capacity to the amount of available data: with small datasets, models are less complex (they require storing fewer parameters in memory), whereas with larger datasets, models are implicitly more complex (they require storing more parameters in memory). Thus, nonparametric Bayesian priors satisfy frequentist intuitions about model complexity within a fully Bayesian framework. This thesis presents several novel machine learning models and applications that use nonparametric Bayesian priors. We introduce two novel models that use flat Dirichlet process priors. The first is an infinite mixture-of-experts model, which builds a fully generative, joint density model of the input and output space. The second is a Bayesian biclustering model, which simultaneously organizes the rows and columns of a data matrix into block-constant biclusters. The model is capable of efficiently processing very large, sparse matrices, enabling cluster analysis on incomplete data matrices. We also introduce binary matrix factorization, a novel matrix factorization model that, in contrast to classic factorization methods such as singular value decomposition, decomposes a matrix using latent binary matrices. We then describe two nonparametric Bayesian priors over tree structures. The first is an infinitely exchangeable generalization of the nested Chinese restaurant process that generates data vectors at a single node in the tree. The second is a novel, finitely exchangeable prior that generates trees by first partitioning data indices into groups and then randomly assigning groups to a tree. We present two applications of the tree priors: the first automatically learns probabilistic stick-figure models of motion-capture data that recover plausible structure and are robust to missing marker data; the second learns hierarchical allocation models based on the latent Dirichlet allocation topic model for document corpora, where nodes in a topic tree are latent “super-topics” and nodes in a document tree are latent categories. The thesis concludes with a summary of contributions, a discussion of the models and their limitations, and a brief outline of potential future research directions.
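To make the capacity-grows-with-data point concrete, here is a small sketch of a Chinese restaurant process draw, the combinatorial backbone of Dirichlet process priors: the number of occupied clusters (the effective model capacity) grows with the number of items rather than being fixed in advance. The parameter values are illustrative, not taken from the thesis.

```python
import numpy as np

def crp_assignments(n, alpha, rng):
    """Sample cluster assignments for n items from a CRP(alpha)."""
    counts = []                  # customers seated at each table
    z = np.empty(n, dtype=int)
    for i in range(n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()     # join a table ∝ its size, open a new one ∝ alpha
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(0)     # a new table = a new mixture component
        counts[k] += 1
        z[i] = k
    return z, len(counts)

rng = np.random.default_rng(1)
for n in (10, 100, 1000, 10000):
    _, k = crp_assignments(n, alpha=2.0, rng=rng)
    print(f"n = {n:>5} items -> {k} clusters")  # grows roughly as alpha * log(n)
```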
3

First-order Decision-theoretic Planning in Structured Relational Environments

Sanner, Scott 28 July 2008 (has links)
We consider the general framework of first-order decision-theoretic planning in structured relational environments. Most traditional approaches to these planning problems ground the relational specification w.r.t. a specific domain instantiation and apply a solution method directly to the resulting ground Markov decision process (MDP). Unfortunately, the space and time complexity of these solution algorithms scales linearly with the domain size in the best case and exponentially in the worst case. An alternative to grounding a relational planning problem is to lift it to a first-order MDP (FOMDP) specification. This FOMDP can then be solved directly, yielding a domain-independent solution whose space and time complexity either does not scale with domain size or scales sublinearly in it. Such generality does not come without its own set of challenges, however, and the first purpose of this thesis is to explore exact and approximate solution techniques for practically solving FOMDPs. The second purpose is to extend the FOMDP specification to succinctly capture factored actions and additive rewards, while extending the exact and approximate solution techniques to directly exploit this structure. In addition, we provide a proof of correctness of the first-order symbolic dynamic programming approach w.r.t. its well-studied ground MDP counterpart.
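For contrast with the lifted approach, here is a minimal sketch of the ground-MDP baseline the abstract argues against: tabular value iteration over an explicitly enumerated state space, whose cost grows with the number of ground states. The tiny chain domain is an assumption of this sketch, not from the thesis.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P: (A, S, S) transition tensor, R: (S,) state rewards."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R[None, :] + gamma * P @ V   # (A, S): backup for every action
        V_new = Q.max(axis=0)            # greedy Bellman backup
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# A 5-state chain: action 0 moves right, action 1 stays; reward at the end.
S = 5
P = np.zeros((2, S, S))
for s in range(S):
    P[0, s, min(s + 1, S - 1)] = 1.0
    P[1, s, s] = 1.0
R = np.zeros(S)
R[-1] = 1.0
print(value_iteration(P, R).round(3))    # cost of this loop grows with S
```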
4

Non-linear Latent Factor Models for Revealing Structure in High-dimensional Data

Memisevic, Roland 28 July 2008 (has links)
Real-world data is not random: the variability in the datasets that arise in computer vision, signal processing, and other areas is often highly constrained and governed by a number of degrees of freedom much smaller than the superficial dimensionality of the data. Unsupervised learning methods can be used to automatically discover the “true” underlying structure in such datasets and are therefore a central component in many systems that deal with high-dimensional data. In this thesis we develop several new approaches to modeling the low-dimensional structure in data. We introduce a new non-parametric framework for latent variable modelling that, in contrast to previous methods, generalizes learned embeddings beyond the training data and its latent representatives. We show that the computational complexity of learning and applying the model is much smaller than that of existing methods, and we illustrate its applicability on several problems. We also show how supervision signals can be introduced into latent variable models using conditioning. Supervision signals make it possible to attach “meaning” to the axes of a latent representation and to untangle the factors that contribute to the variability in the data. We develop a model that uses conditional latent variables to extract rich distributed representations of image transformations, and we describe a new model for learning transformation features in structured supervised learning problems.
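One claim worth making concrete is generalizing a learned embedding beyond the training data. The sketch below uses PCA as a stand-in latent variable model and fits an explicit out-of-sample map from data space to latent space; the thesis's actual framework is non-parametric, and the toy data and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy high-dimensional data governed by only 2 degrees of freedom.
Z_true = rng.standard_normal((300, 2))
A = rng.standard_normal((2, 20))
X = Z_true @ A + 0.05 * rng.standard_normal((300, 20))

# Learn a 2-D embedding of the training data (here: PCA via SVD).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = U[:, :2] * s[:2]                   # latent representatives of training points

# Out-of-sample extension: an explicit map from data space to latent space,
# so unseen points can be embedded without re-fitting the model.
W, *_ = np.linalg.lstsq(Xc, Z, rcond=None)
x_new = rng.standard_normal(2) @ A     # an unseen data point
z_new = (x_new - X.mean(axis=0)) @ W   # its embedding
print(z_new.round(3))
```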
5

Efficient Machine Learning with High Order and Combinatorial Structures

Tarlow, Daniel 13 August 2013 (has links)
The overarching goal of this thesis is to develop the representational frameworks, inference algorithms, and learning methods necessary for accurately modeling domains that exhibit complex and non-local dependency structures. There are three parts to this thesis. In the first part, we develop a toolbox of high-order potentials (HOPs) that are useful for defining interactions and constraints that would be inefficient or otherwise difficult to express within the standard graphical modeling framework. For each potential, we develop associated algorithms so that the type of interaction can be used efficiently in a variety of settings. We further show that this HOP toolbox is useful not only for defining models but also for defining loss functions. In the second part, we look at the similarities and differences between special-purpose and general-purpose inference algorithms, with the aim of learning from the special-purpose algorithms so that we can build better general-purpose ones. Specifically, we show how to cast a popular special-purpose algorithm (graph cuts) in terms of the degrees of freedom available to a popular general-purpose algorithm (max-product belief propagation), and we then use the lessons learned to build a better general-purpose algorithm. Finally, we develop a class of models that allows the discrete optimization algorithms studied in the previous sections (as well as other discrete optimization algorithms) to be used as the centerpiece of probabilistic models. This allows us to build probabilistic models that have fast exact inference procedures in domains where the standard probabilistic formulation would be intractable.
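As a concrete instance of a high-order potential paired with an efficient algorithm, the sketch below computes an exact MAP assignment under a cardinality potential (a cost on how many binary variables are on) by sorting, in O(n log n) instead of enumerating all 2^n joint states. The cost function and values are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def map_with_cardinality_hop(unary, f):
    """Minimize sum_i unary[i] * y_i + f(sum_i y_i) over binary y.

    unary[i]: cost of setting y_i = 1 (vs. 0); f(k): cost of having k ones.
    For each fixed cardinality k, the optimum switches on the k cheapest
    variables, so sorting plus prefix sums solves the whole problem.
    """
    order = np.argsort(unary)           # cheapest variables to switch on first
    prefix = np.concatenate([[0.0], np.cumsum(unary[order])])
    totals = prefix + np.array([f(k) for k in range(len(unary) + 1)])
    k_best = int(np.argmin(totals))     # best number of ones overall
    y = np.zeros(len(unary), dtype=int)
    y[order[:k_best]] = 1
    return y, totals[k_best]

rng = np.random.default_rng(3)
unary = rng.standard_normal(10)
f = lambda k: 0.5 * (k - 4) ** 2        # prefer roughly four variables on
y, cost = map_with_cardinality_hop(unary, f)
print(y, round(cost, 3))
```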
