1 
Modeling Uncertainty with Evolutionary Improved Fuzzy Functions
Celikyilmaz, Fethiye Asli. 30 July 2008
Fuzzy system modeling (FSM), meaning the construction of a representation of fuzzy system models, is a difficult task: it demands the identification of many parameters. This thesis analyses fuzzy modeling problems and different approaches to coping with them. It focuses on a novel evolutionary FSM approach, the design of “Improved Fuzzy Functions” system models with evolutionary algorithms. To support this analysis, local structures are identified with a new improved fuzzy clustering method and represented with novel “fuzzy functions”.
The central contribution of this work is the use of evolutionary algorithms, in particular genetic algorithms, to find uncertainty intervals of parameters and thereby improve “Fuzzy Function” models. Replacing standard fuzzy rule bases (FRBs) with the new “Improved Fuzzy Functions” captures essential relationships in the structure identification process and overcomes a limitation of earlier FRB methods: the abundance of fuzzy operations and the resulting difficulty of choosing among t-norms and co-norms.
Designing an autonomous and robust FSM, and reasoning with it, is the prime goal of this approach. The new FSM approach uses higher-level fuzzy sets to identify the uncertainties in (1) the system parameters and (2) the structure of the “Fuzzy Functions”. From these parameters, interval-valued fuzzy sets and “Fuzzy Functions” are identified. Finally, an evolutionary computing approach is combined with the proposed uncertainty identification strategy to build FSMs that identify these uncertainty intervals automatically.
After testing the proposed FSM tool on various benchmark problems, the algorithms are successfully applied to model decision processes in two real problem domains: the desulphurization process in steel making and stock price prediction. For both problems, the proposed methods produce robust, high-performance models that are comparable to, if not better than, the best system modeling approaches in the current literature. Several aspects of the proposed methodologies are thoroughly analyzed to provide deeper understanding; these analyses show the consistency of the results. / Full thesis submitted in paper.
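The “fuzzy functions” idea described above can be illustrated with a small sketch: cluster the data with fuzzy c-means, then fit one local regression per cluster in which the membership values themselves appear as an extra regressor, and blend the local predictions by membership. This is a minimal illustration assuming plain fuzzy c-means, not the thesis's improved clustering or its evolutionary tuning; all function names are hypothetical.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: returns cluster centers and an n x c membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # random initial fuzzy partition
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))             # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

def fit_fuzzy_functions(X, y, U):
    """One membership-weighted least-squares model per cluster; the membership
    column itself is appended as a regressor (the 'fuzzy functions' idea)."""
    models = []
    for i in range(U.shape[1]):
        Phi = np.column_stack([np.ones(len(X)), X, U[:, i]])
        w = np.sqrt(U[:, i])[:, None]                # weight rows by membership
        beta, *_ = np.linalg.lstsq(Phi * w, y * w.ravel(), rcond=None)
        models.append(beta)
    return models

def predict(Xn, centers, models, m=2.0):
    """Blend the local models, weighting each by the point's membership."""
    d = np.linalg.norm(Xn[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    U = 1.0 / d ** (2.0 / (m - 1.0))
    U /= U.sum(axis=1, keepdims=True)
    yh = np.zeros(len(Xn))
    for i, beta in enumerate(models):
        Phi = np.column_stack([np.ones(len(Xn)), Xn, U[:, i]])
        yh += U[:, i] * (Phi @ beta)
    return yh
```

On data with well-separated local regimes, the membership column lets each local model bend toward its own region without committing to a hard partition of the input space.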

2 
Nonparametric Bayesian Methods for Extracting Structure from Data
Meeds, Edward. 01 August 2008
One desirable property of machine learning algorithms is the ability to balance the number of parameters in a model in accordance with the amount of available data. Incorporating nonparametric Bayesian priors into models is one approach to automatically adjusting model capacity to the amount of available data: with small datasets, models are less complex (fewer parameters need to be stored in memory), whereas with larger datasets, models are implicitly more complex (more parameters need to be stored in memory). Thus, nonparametric Bayesian priors satisfy frequentist intuitions about model complexity within a fully Bayesian framework.
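The capacity-grows-with-data behaviour is easy to see in the Chinese restaurant process, the partition distribution induced by a Dirichlet process prior: the expected number of occupied tables (clusters) grows roughly like alpha * log(n). A small simulation sketch (the concentration value and helper name are illustrative):

```python
import random

def crp_partition(n, alpha=1.0, seed=0):
    """Sample table sizes for n customers from a Chinese restaurant process."""
    rng = random.Random(seed)
    counts = []                                  # customers at each table
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        if r < alpha:                            # open a new table
            counts.append(1)
        else:                                    # join a table w.p. proportional to its size
            r -= alpha
            for t, size in enumerate(counts):
                if r < size:
                    counts[t] += 1
                    break
                r -= size
    return counts

# The average number of occupied tables grows roughly like alpha * log(n):
for n in (10, 100, 1000):
    avg = sum(len(crp_partition(n, seed=s)) for s in range(200)) / 200
    print(n, round(avg, 1))
```

Larger datasets occupy more tables, i.e. the model implicitly stores more parameters, which is exactly the adjustment of capacity to data described above.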
This thesis presents several novel machine learning models and applications that use nonparametric Bayesian priors. We introduce two novel models that use flat Dirichlet process priors. The first is an infinite mixture of experts model, which builds a fully generative, joint density model of the input and output space. The second is a Bayesian biclustering model, which simultaneously organizes a data matrix into block-constant biclusters. The model is capable of efficiently processing very large, sparse matrices, enabling cluster analysis on incomplete data matrices.
We introduce binary matrix factorization, a novel matrix factorization model that, in contrast to classic factorization methods such as singular value decomposition, decomposes a matrix using latent binary matrices.
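The contrast with SVD can be made concrete: with latent binary factors, every entry of the reconstruction is a sum of the weights for the feature pairs that are switched on. A toy forward model (the matrices here are made up for illustration; the actual model places priors over the binary factors and infers them from data):

```python
import numpy as np

U = np.array([[1, 0],            # each row switches on a subset of
              [1, 1],            # latent binary row-features
              [0, 1]])
V = np.array([[1, 0],            # columns switch on column-features
              [0, 1]])
W = np.array([[2.0, 0.0],        # weight for every (row-feature,
              [1.0, 3.0]])       # column-feature) pair
X = U @ W @ V.T                  # each entry: sum of the active weights
print(X)
```

Unlike the real-valued factors of an SVD, rows of U that share an active feature make the corresponding rows of X share additive structure, which is what gives the factorization its clustering flavour.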
We describe two nonparametric Bayesian priors over tree structures. The first is an infinitely exchangeable generalization of the nested Chinese restaurant process that generates data vectors at a single node in the tree. The second is a novel, finitely exchangeable prior that generates trees by first partitioning data indices into groups and then randomly assigning groups to a tree.
We present two applications of the tree priors. The first automatically learns probabilistic stick-figure models of motion-capture data that recover plausible structure and are robust to missing marker data. The second learns hierarchical allocation models based on the latent Dirichlet allocation topic model for document corpora, where nodes in a topic tree are latent “super-topics” and nodes in a document tree are latent categories.
The thesis concludes with a summary of contributions, a discussion of the models and their limitations, and a brief outline of potential future research directions.

3 
First-order Decision-theoretic Planning in Structured Relational Environments
Sanner, Scott. 28 July 2008
We consider the general framework of first-order decision-theoretic planning in structured relational environments. Most traditional solution approaches to these planning problems ground the relational specification w.r.t. a specific domain instantiation and apply a solution approach directly to the resulting ground Markov decision process (MDP). Unfortunately, the space and time complexity of these solution algorithms scale linearly with the domain size in the best case and exponentially in the worst case. An alternate approach to grounding a relational planning problem is to lift it to a first-order MDP (FOMDP) specification. This FOMDP can then be solved directly, resulting in a domain-independent solution whose space and time complexity either do not scale with domain size or can scale sublinearly in the domain size. However, such generality does not come without its own set of challenges, and the first purpose of this thesis is to explore exact and approximate solution techniques for practically solving FOMDPs. The second purpose of this thesis is to extend the FOMDP specification to succinctly capture factored actions and additive rewards while extending the exact and approximate solution techniques to directly exploit this structure. In addition, we provide a proof of correctness of the first-order symbolic dynamic programming approach w.r.t. its well-studied ground MDP counterpart.
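The cost of the ground route is easy to see in code: value iteration over an explicit ground MDP touches every state in every sweep, and in a relational domain the number of ground states grows exponentially with the number of domain objects. A minimal sketch of generic ground value iteration (not the thesis's symbolic dynamic programming), assuming dense transition and reward arrays:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, eps=1e-8):
    """P: (A, S, S) transition tensor, R: (A, S) rewards. Returns V: (S,).
    Each sweep costs O(A * S^2), and S itself can be exponential in the
    number of domain objects once a relational problem is grounded."""
    V = np.zeros(P.shape[1])
    while True:
        V_new = (R + gamma * (P @ V)).max(axis=0)    # Bellman backup over actions
        if np.abs(V_new - V).max() < eps:
            return V_new
        V = V_new
```

For example, a two-state chain where state 0 self-loops with reward 1 and state 1 is absorbing with reward 0 converges, with gamma = 0.5, to V = [2, 0], since 1 / (1 - 0.5) = 2.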

4 
Efficient Machine Learning with High Order and Combinatorial Structures
Tarlow, Daniel. 13 August 2013
The overarching goal of this thesis is to develop the representational frameworks, inference algorithms, and learning methods necessary for accurately modeling domains that exhibit complex and non-local dependency structures. There are three parts to this thesis. In the first part, we develop a toolbox of high-order potentials (HOPs) that are useful for defining interactions and constraints that would be inefficient or otherwise difficult to use within the standard graphical modeling framework. For each potential, we develop associated algorithms so that the type of interaction can be used efficiently in a variety of settings. We further show that this HOP toolbox is useful not only for defining models, but also for defining loss functions.
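One classic example of such a potential is a cardinality potential, which scores how many of n binary variables are on. Naively it couples all n variables at once, but its MAP assignment (together with unary terms) can be recovered in O(n log n) by sorting: for a fixed count k, the best choice is always the k variables with the largest unary scores. A sketch under those assumptions (the function name is hypothetical):

```python
import numpy as np

def map_with_cardinality(unary, card):
    """Maximize sum_i unary[i]*y_i + card[k] over binary y, where k = sum(y).
    For each fixed k the optimum picks the k largest unaries, so sorting
    replaces 2^n enumeration with O(n log n) work."""
    order = np.argsort(-unary)                       # most favourable variables first
    prefix = np.concatenate([[0.0], np.cumsum(unary[order])])
    k = int(np.argmax(prefix + card))                # best score over all counts k
    y = np.zeros(len(unary), dtype=int)
    y[order[:k]] = 1
    return y, float((prefix + card)[k])
```

For instance, with unaries [1, -2, 3, 0.5] and count costs [0, -0.5, -0.5, -4, -4], the optimum turns on the two strongest variables for a score of 3.5, which brute-force enumeration of all 16 assignments confirms.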
In the second part, we look at the similarities and differences between special-purpose and general-purpose inference algorithms, with the aim of learning from the special-purpose algorithms so that we can build better general-purpose ones. Specifically, we show how to cast a popular special-purpose algorithm (graph cuts) in terms of the degrees of freedom available to a popular general-purpose algorithm (max-product belief propagation). We then take the lessons learned and build a better general-purpose algorithm.
Finally, we develop a class of models that allows the discrete optimization algorithms studied in the previous parts (as well as other discrete optimization algorithms) to be used as the centerpiece of probabilistic models. This lets us build probabilistic models with fast exact inference procedures in domains where the standard probabilistic formulation would be intractable.

5 
Nonlinear Latent Factor Models for Revealing Structure in High-dimensional Data
Memisevic, Roland. 28 July 2008
Real-world data is not random: the variability in the datasets that arise in computer vision, signal processing, and other areas is often highly constrained and governed by a number of degrees of freedom that is much smaller than the superficial dimensionality of the data. Unsupervised learning methods can be used to automatically discover the “true” underlying structure in such datasets and are therefore a central component in many systems that deal with high-dimensional data.
In this thesis we develop several new approaches to modeling the low-dimensional structure in data. We introduce a new nonparametric framework for latent variable modelling that, in contrast to previous methods, generalizes learned embeddings beyond the training data and its latent representatives. We show that the computational complexity of learning and applying the model is much smaller than that of existing methods, and we illustrate its applicability on several problems.
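The out-of-sample issue that such a framework addresses can be sketched in a few lines: learn an embedding of the training set (plain PCA here), then learn a smooth map from data space into the latent space, so that a new point can be embedded without re-running the embedding algorithm. This is only a schematic stand-in for the thesis's actual framework; the data, bandwidth, and ridge value are made up for illustration.

```python
import numpy as np

def rbf(A, B, s=2.0):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * s * s))

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.1]])
Xc = X - X.mean(axis=0)
Z = Xc @ np.linalg.svd(Xc)[2].T[:, :1]           # 1-D PCA embedding of training data
alpha = np.linalg.solve(rbf(X, X) + 1e-6 * np.eye(len(X)), Z)   # kernel ridge map
z_new = rbf(np.array([[1.5, 1.5]]), X) @ alpha   # embed an unseen point directly
```

Because the map from data space to latent space is an explicit function, the unseen point lands between the embeddings of its neighbours rather than requiring the whole embedding to be recomputed.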
We also show how we can introduce supervision signals into latent variable models using conditioning. Supervision signals make it possible to attach “meaning” to the axes of a latent representation and to untangle the factors that contribute to the variability in the data. We develop a model that uses conditional latent variables to extract rich distributed representations of image transformations, and we describe a new model for learning transformation features in structured supervised learning problems.
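A toy version of such “transformation features” conditions the latent variables on an input/output pair: given a fixed dictionary of candidate transformations, the features say which transformation maps x to y. The dictionary and the linear read-out below are illustrative simplifications; the models in the thesis learn the transformation factors rather than fixing them in advance.

```python
import numpy as np

# Two candidate transformations: identity and a 90-degree rotation.
T = [np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]])]
x = np.array([2.0, 1.0])
y = T[1] @ x                                   # y is x rotated by 90 degrees
B = np.column_stack([Tk @ x for Tk in T])      # each column: one transformed copy of x
h, *_ = np.linalg.lstsq(B, y, rcond=None)      # infer the transformation features
```

Here h comes out as approximately [0, 1]: the rotation unit switches on, so the latent axes carry an interpretable “meaning” (which transformation relates the pair) rather than an arbitrary coordinate system.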
