About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Adding temporal logic to dynamic epistemic logic

Sack, Joshua. January 2007
Thesis (Ph.D.), Indiana University, Dept. of Mathematics, 2007. Source: Dissertation Abstracts International, Volume 68-07, Section B, page 4531. Adviser: Lawrence Moss. Title from dissertation home page (viewed Apr. 22, 2008).
2

Optimization-based approximate dynamic programming

Petrik, Marek. 01 January 2010
Reinforcement learning algorithms hold promise in many complex domains, such as resource management and planning under uncertainty. Most reinforcement learning algorithms are iterative: they successively approximate the solution based on a set of samples and features. Although these iterative algorithms can achieve impressive results in some domains, they are not sufficiently reliable for wide applicability; they often require extensive parameter tweaking to work well and provide only weak guarantees of solution quality. Some of the most interesting reinforcement learning algorithms are based on approximate dynamic programming (ADP). ADP, also known as value function approximation, approximates the value of being in each state. This thesis presents new reliable algorithms for ADP that use optimization instead of iterative improvement. Because these optimization-based algorithms explicitly seek solutions with favorable properties, they are easy to analyze, offer much stronger guarantees than iterative algorithms, and have few or no parameters to tweak. In particular, we improve on approximate linear programming (an existing method) and derive approximate bilinear programming (a new robust approximate method). The strong guarantees of optimization-based algorithms not only increase confidence in the solution quality, but also make it easier to combine the algorithms with other ADP components. The other components of ADP are samples and features used to approximate the value function. Relying on the simplified analysis of optimization-based methods, we derive new bounds on the error due to missing samples. These bounds are simpler, tighter, and more practical than the existing bounds for iterative algorithms and can be used to evaluate solution quality in practical settings. Finally, we propose homotopy methods that use the sampling bounds to automatically select good approximation features for optimization-based algorithms.
Automatic feature selection significantly increases the flexibility and applicability of the proposed ADP methods. The methods presented in this thesis can potentially be used in many practical applications in artificial intelligence, operations research, and engineering. Our experimental results show that optimization-based methods may perform well on resource-management problems and standard benchmark problems and therefore represent an attractive alternative to traditional iterative methods.
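The approximate linear programming (ALP) approach the abstract builds on can be made concrete on a toy example. The sketch below uses an invented three-state chain MDP and linear features, and states the ALP: minimize the summed approximate values subject to the Bellman inequalities. Purely to keep the sketch dependency-free, it scans a coarse weight grid instead of calling the real LP solver the thesis's methods would use; all names and numbers here are illustrative, not the author's.

```python
# Toy illustration of approximate linear programming (ALP) for a
# 3-state chain MDP with a linear value-function approximation.
GAMMA = 0.9
STATES = [0, 1, 2]
ACTIONS = [-1, +1]  # move left / right along the chain

def step(s, a):
    return min(max(s + a, 0), 2)

def reward(s):
    return 1.0 if s == 2 else 0.0

def v(w, s):
    # Linear approximation: V_w(s) = w0 + w1 * s
    return w[0] + w[1] * s

def feasible(w):
    # ALP constraints: V_w(s) >= r(s) + gamma * V_w(s') for every (s, a)
    return all(v(w, s) >= reward(s) + GAMMA * v(w, step(s, a)) - 1e-9
               for s in STATES for a in ACTIONS)

def solve_alp():
    # Minimise sum_s V_w(s) (the ALP objective with uniform
    # state-relevance weights) over feasible w, on a 0.1-spaced grid.
    best, best_obj = None, float("inf")
    for i in range(121):
        for j in range(31):
            w = (i * 0.1, j * 0.1)
            if feasible(w):
                obj = sum(v(w, s) for s in STATES)
                if obj < best_obj:
                    best, best_obj = w, obj
    return best

w = solve_alp()
print([round(v(w, s), 2) for s in STATES])
```

The returned approximation upper-bounds the true value function by construction (every Bellman inequality holds), which is the kind of explicit solution-quality property the thesis exploits.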
3

Enhanced Machine Learning Engine Engineering Using Innovative Blending, Tuning, and Feature Optimization

Uddin, Muhammad Fahim. 21 March 2019
Motivated by ensemble Machine Learning (ML) techniques, this thesis contributes to addressing performance, consistency, and integrity issues in ML models, such as overfitting, underfitting, predictive error, the accuracy paradox, and poor generalization. Ensemble ML methods have shown promising outcomes where a single algorithm fails to approximate the true prediction function; using meta-learning, a super learner is engineered by combining weak learners. Typically, several Supervised Learning (SL) methods are evaluated to find the best fit to the underlying data and predictive task (the relevance of the "No Free Lunch" theorem). This thesis addresses three main problems: (i) determining the optimum blend of algorithms for enhanced SL ensemble models; (ii) engineering the selection and grouping of features so that they aggregate to the highest possible predictive, non-redundant value in the training data set; and (iii) addressing performance-integrity issues such as the accuracy paradox. To this end, an enhanced Machine Learning Engine Engineering (eMLEE) framework is constructed with built-in parallel processing and specially designed error and gain functions that score the classifier elements for an improved training experience and validation procedure. eMLEE, based on stochastic thinking, is built on (i) one centralized unit, the Logical Table (LT); (ii) two explicit units, enhanced Algorithm Blend and Tuning (eABT) and enhanced Feature Engineering and Selection (eFES); and (iii) two implicit constructs, enhanced Weighted Performance Metric (eWPM) and enhanced Cross Validation and Split (eCVS). It thus proposes an enhancement to the internals of SL ensemble approaches.

Motivated by nature-inspired metaheuristic algorithms (such as GA, PSO, and ACO), the feedback mechanism is improved by a specialized function, Learning from the Mistakes (LFM), which mimics the human learning experience. LFM significantly improves predictive accuracy on the test data by using the computational processing of wrong predictions to increase the weighting scores of weak classifiers and features, and it ensures the training layer experiences the maximum number of mistakes (errors) for optimum tuning; with this designed into the engine, stochastic modeling is implicitly implemented. Motivated by the OOP paradigm in high-level programming, eMLEE provides an interface infrastructure that uses LT objects so the main units (Unit A and Unit B) can call functions on demand during classifier learning. This also lets external applications use the eMLEE API for predictive modeling, customizing the classifier learning process and its tuning trade-offs according to the data type and target model. Motivated by higher-dimensional (3D) analysis for improved analytics and learning mechanics, eMLEE models fitness metrics in three dimensions, x for overfit, y for underfit, and z for optimum fit, and creates logical cubes, using LT handles to locate the optimum space during the ensemble process. This ensures fine tuning of the ensemble learning process with an improved accuracy metric.

To support the construction and implementation of the proposed scheme, mathematical models (definitions, lemmas, rules, and procedures) are provided, along with the governing algorithms' definitions (and pseudo-code) and illustrations of the concepts. Diverse data sets are used to improve the generalization of the engine and to tune its underlying constructs during the development and testing phases. To show the practicality and stability of the proposed scheme, results are presented with a comprehensive analysis of the engine's metrics (integrity, corroboration, and quantification). Two approaches are followed to corroborate the engine: (i) testing its inner layers (the internal constructs Unit-A, Unit-B, and C-Unit) to stabilize and test the fundamentals, and (ii) testing its outer layer (the engine as a black box) against standard metrics for real-world endorsement. Comparisons with various existing techniques in the state of the art are also reported. From the extensive literature review, the research undertaken, the engine's construction and tuning, the validation approach, the experimental study, and the visualized results, eMLEE is found to outperform existing techniques most of the time in terms of classifier learning, generalization, metric trade-offs, optimum fitness, feature engineering, and validation.
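The eMLEE internals above are not available as code, but the underlying idea, combining weak learners into a super learner and learning from wrong predictions, can be illustrated generically. The sketch below is a minimal AdaBoost over decision stumps on an invented toy dataset; it is a standard textbook stand-in for the abstract's Learning-from-the-Mistakes mechanism, not the author's algorithm.

```python
import math

# Training data: labels alternate in blocks of two, so no single
# threshold rule (stump) classifies everything correctly.
XS = [0, 1, 2, 3, 4, 5, 6, 7]
YS = [1, 1, -1, -1, 1, 1, -1, -1]

def stump_predict(t, p, x):
    # Threshold rule: predict p on the left of t, -p on the right.
    return p if x < t else -p

def best_stump(weights):
    # Exhaustive search over thresholds and polarities for the stump
    # with the lowest weighted error on the current point weights.
    best = None
    for t in [x - 0.5 for x in XS] + [XS[-1] + 0.5]:
        for p in (1, -1):
            err = sum(w for x, y, w in zip(XS, YS, weights)
                      if stump_predict(t, p, x) != y)
            if best is None or err < best[0]:
                best = (err, t, p)
    return best

def classify(ensemble, x):
    score = sum(a * stump_predict(t, p, x) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

def adaboost(rounds=10):
    weights = [1.0 / len(XS)] * len(XS)
    ensemble = []                         # list of (alpha, t, p)
    for _ in range(rounds):
        err, t, p = best_stump(weights)
        err = max(err, 1e-12)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, p))
        # "Learn from mistakes": upweight the points this stump got
        # wrong, then renormalise.
        weights = [w * math.exp(-alpha * y * stump_predict(t, p, x))
                   for x, y, w in zip(XS, YS, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]
        if all(classify(ensemble, x) == y for x, y in zip(XS, YS)):
            break
    return ensemble

ensemble = adaboost()
acc = sum(classify(ensemble, x) == y for x, y in zip(XS, YS)) / len(XS)
print(acc)
```

On this dataset no single stump exceeds 75% training accuracy, but the reweighting drives the combined learner to a perfect fit within a few rounds, which is the basic mechanism any mistake-driven ensemble exploits.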
4

Probabilistic reasoning on metric spaces

Lee, Seunghwan Han. January 2009
Thesis (Ph.D.), Indiana University, Dept. of Mathematics and Cognitive Science, 2009. Title from PDF t.p. (viewed on Jul 19, 2010). Source: Dissertation Abstracts International, Volume 70-12, Section B, page 7604. Adviser: Lawrence S. Moss.
5

A modal arithmetic for reasoning about multilevel systems of finite state machines

Yodaiken, Victor J. 01 January 1990
This dissertation advances a novel approach to formal specification and verification of systems-level computation. The approach is based on a purely finite-state model of computation and draws on algebraic and syntactic techniques that have not previously been applied to the problem. It uses a modal extension of the primitive recursive functions to specify system behavior, and an algebraic feedback product of automata to provide semantic content for specifications. The modal functions are shown to provide highly compact and expressive notation for defining, composing, and verifying the properties of large-scale finite state machines. The feedback product allows for a very general model of composition, multi-level dynamic behavior, and concurrency. Techniques are developed for specifying both the detailed operation of algorithms and abstract system properties such as liveness and safety. Several significant examples illustrate the application of the method to complex algorithms and designs.
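A feedback-style composition connects machines so that one machine's input is driven by another's output, and the composite is itself a finite state machine. The toy sketch below is in that spirit only (it is not the dissertation's exact algebraic construction): a one-bit toggler feeds an edge detector, and the pair is stepped as a single product machine over paired states.

```python
# A simplified illustration of composing finite state machines with
# feedback: machine B consumes machine A's output, and the pair forms
# a single larger finite state machine (the product).
def toggler_step(s):
    # A: one-bit counter; output is the current state, then it flips.
    return s, 1 - s                      # (output, next state)

def edge_step(s, x):
    # B: outputs 1 exactly when its input differs from the last input.
    return (1 if x != s else 0), x       # (output, next state)

def product_step(state):
    # One synchronous step of the composed machine on paired states.
    a, b = state
    out_a, a2 = toggler_step(a)
    out_b, b2 = edge_step(b, out_a)
    return out_b, (a2, b2)

state, trace = (0, 0), []
for _ in range(5):
    out, state = product_step(state)
    trace.append(out)
print(trace)
```

The composite's state space is the product of the components' state spaces, so properties of the whole system can still be checked by purely finite-state reasoning, which is the point of working in a finite-state model.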
6

Transformations of representation in constraint satisfaction

Salamon, András Z. January 2013
In this thesis I study constraint satisfaction problems or CSPs. These require determining whether values can be assigned to variables so that all constraints are satisfied. An important challenge is to identify tractable CSPs which can be solved efficiently. CSP instances have usually been grouped together by restricting either the allowed combinations of values, or the way the variables are allowed to interact. Such restrictions sometimes yield tractable CSPs. A weakness of this method is that it cannot explain why all-different constraints form a tractable CSP. In this common type of constraint, all variables must be assigned values that are different from each other. New techniques are therefore needed to explain why such CSPs can be solved efficiently. My main contribution is an investigation of such hybrid CSPs which cannot be defined with either one of these kinds of restrictions. The main technique I use is a transformation of a CSP instance to the microstructure representation. This represents an instance as a collection of sets, and a solution of the instance corresponds to an independent set in the clause structure. For the common case where all constraints involve only two variables, I show how the microstructure can be used to define CSPs that are tractable because their clause structures fall within classes of graphs for which an independent set of specified size can be found efficiently. Such tractable hereditary classes are defined by using the technique of excluded induced subgraphs, such as classes of graphs that contain neither odd cycles with five or more vertices, nor their complements. I also develop finer grained techniques, by allowing vertices of the microstructure representation to be assigned colours, and the variables to be ordered. I show that these techniques define a new tractable CSP that forbids an ordered vertex-coloured subgraph in the microstructure representation.
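The microstructure-style transformation can be illustrated on a tiny instance. In the sketch below, vertices are (variable, value) pairs, edges join incompatible pairs, and solutions correspond exactly to independent sets containing one vertex per variable; the three-variable all-different instance and all names are invented for illustration.

```python
# Conflict-graph view of a binary CSP: X, Y, Z over {1, 2, 3} with
# pairwise all-different constraints. Solutions = independent sets
# covering every variable.
from itertools import combinations

variables = ["X", "Y", "Z"]
domain = [1, 2, 3]
vertices = [(v, d) for v in variables for d in domain]

def conflict(u, w):
    (v1, d1), (v2, d2) = u, w
    if v1 == v2:
        return d1 != d2          # two values for one variable clash
    return d1 == d2              # all-different: equal values clash

edges = {frozenset((u, w))
         for u, w in combinations(vertices, 2) if conflict(u, w)}

def solutions():
    # Enumerate vertex triples that cover all variables and contain
    # no conflict edge; each one decodes to a CSP solution.
    for combo in combinations(vertices, len(variables)):
        if {v for v, _ in combo} != set(variables):
            continue
        if all(frozenset((u, w)) not in edges
               for u, w in combinations(combo, 2)):
            yield dict(combo)

sols = list(solutions())
print(len(sols))
```

The six independent sets found are exactly the six permutations of {1, 2, 3}, matching the all-different semantics; tractability results then come from graph classes in which such independent sets can be found efficiently.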
7

Category-theoretic quantitative compositional distributional models of natural language semantics

Grefenstette, Edward Thomas. January 2013
This thesis is about the problem of compositionality in distributional semantics. Distributional semantics presupposes that the meanings of words are a function of their occurrences in textual contexts. It models words as distributions over these contexts and represents them as vectors in high dimensional spaces. The problem of compositionality for such models concerns itself with how to produce distributional representations for larger units of text (such as a verb and its arguments) by composing the distributional representations of smaller units of text (such as individual words). This thesis focuses on a particular approach to this compositionality problem, namely using the categorical framework developed by Coecke, Sadrzadeh, and Clark, which combines syntactic analysis formalisms with distributional semantic representations of meaning to produce syntactically motivated composition operations. This thesis shows how this approach can be theoretically extended and practically implemented to produce concrete compositional distributional models of natural language semantics. It furthermore demonstrates that such models can perform on par with, or better than, other competing approaches in the field of natural language processing. There are three principal contributions to computational linguistics in this thesis. The first is to extend the DisCoCat framework on the syntactic front and semantic front, incorporating a number of syntactic analysis formalisms and providing learning procedures allowing for the generation of concrete compositional distributional models. The second contribution is to evaluate the models developed from the procedures presented here, showing that they outperform other compositional distributional models present in the literature. 
The third contribution is to show how using category theory to solve linguistic problems forms a sound basis for research, illustrated by examples of work on this topic, that also suggest directions for future research.
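The categorical approach's signature move, giving relational words such as transitive verbs higher-order tensor meanings, can be sketched numerically. Below is a toy bilinear composition: nouns are context vectors, the verb is a matrix, and the sentence score is subject · Verb · object. The two-dimensional "context" space and all numbers are invented for illustration, not drawn from the thesis.

```python
# Toy tensor-based composition in the DisCoCat spirit: the meaning of
# "subject verb object" is the bilinear form subj^T . Verb . obj.
def sentence_meaning(subj, verb, obj):
    # subj, obj: vectors; verb: matrix (list of rows). Returns a
    # scalar "plausibility" score for the composed sentence.
    vo = [sum(row[j] * obj[j] for j in range(len(obj))) for row in verb]
    return sum(subj[i] * vo[i] for i in range(len(subj)))

# Contexts: (animate, liquid) -- hypothetical axes.
cat, milk, stone = [1.0, 0.0], [0.1, 1.0], [0.0, 0.1]
drinks = [[0.2, 1.0],   # animate subjects pair strongly with liquids
          [0.0, 0.0]]

score_milk = sentence_meaning(cat, drinks, milk)
score_stone = sentence_meaning(cat, drinks, stone)
print(score_milk, score_stone)
```

The composition is syntactically motivated in the sense the abstract describes: the verb's tensor shape is dictated by its grammatical type (it consumes a subject and an object), and plausible sentences score higher than implausible ones.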
8

Hybrid tractability of constraint satisfaction problems with global constraints

Thorstensen, Evgenij. January 2013
A wide range of problems can be modelled as constraint satisfaction problems (CSPs), that is, a set of constraints that must be satisfied simultaneously. Constraints can either be represented extensionally, by explicitly listing allowed combinations of values, or intensionally, whether by an equation, propositional logic formula, or other means. Intensionally represented constraints, known as global constraints, are a powerful modelling technique, and many modern CSP solvers provide them. We give examples to show how problems that deal with product configuration can be modelled with such constraints, and how this approach relates to other modelling formalisms. The complexity of CSPs with extensionally represented constraints is well understood, and there are several known techniques that can be used to identify tractable classes of such problems. For CSPs with global constraints, however, many of these techniques fail, and far fewer tractable classes are known. In order to remedy this state of affairs, we undertake a systematic review of research into the tractability of CSPs. In particular, we look at CSPs with extensionally represented constraints in order to understand why many of the techniques that give tractable classes for this case fail for CSPs with global constraints. The above investigation leads to two discoveries. First, many restrictions on how the constraints of a CSP interact implicitly rely on a property of extensionally represented constraints to guarantee tractability. We identify this property as being a bound on the number of solutions in key parts of the instance, and find classes of global constraints that also possess this property. For such classes, we show that many known tractability results apply. Furthermore, global constraints allow us to treat entire CSP instances as constraints. 
We combine this observation with the above result, and obtain new tractable classes of CSPs by dividing a CSP into smaller CSPs drawn from known tractable classes. Second, for CSPs that simply do not possess the above property, we look at how the constraints of an instance overlap, and how assignments to the overlapping parts extend to the rest of the problem. We show that assignments that extend in the same way can be identified. Combined with a new structural restriction, this observation leads to a second set of tractable classes. We conclude with a summary, as well as some observations about potential for future work in this area.
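The all-different constraint is the standard example of a tractable global constraint: it is satisfiable exactly when the variable-value bipartite graph has a matching covering every variable (Hall's theorem), and such a matching can be found in polynomial time. A minimal augmenting-path matcher, as a sketch with an invented toy instance:

```python
# Satisfiability of an all-different global constraint via bipartite
# matching (Kuhn's augmenting-path algorithm).
def all_different_satisfiable(domains):
    match = {}                     # value -> variable currently using it

    def assign(var, seen):
        # Try to give `var` a value, recursively re-assigning the
        # current holder of a taken value along an augmenting path.
        for val in domains[var]:
            if val in seen:
                continue
            seen.add(val)
            if val not in match or assign(match[val], seen):
                match[val] = var
                return True
        return False

    return all(assign(var, set()) for var in domains)

sat = all_different_satisfiable({"X": [1, 2], "Y": [1, 2], "Z": [2, 3]})
unsat = all_different_satisfiable({"X": [1, 2], "Y": [1, 2], "Z": [1, 2]})
print(sat, unsat)   # first instance has X=1, Y=2, Z=3; second pigeonholes
```

This is the kind of intensionally represented constraint the abstract discusses: the tractability argument lives in the matching structure, not in any explicit listing of the allowed tuples.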
9

From interactive to semantic image segmentation

Gulshan, Varun. January 2011
This thesis investigates two well defined problems in image segmentation, viz. interactive and semantic image segmentation. Interactive segmentation involves power-assisting a user in cutting out objects from an image, whereas semantic segmentation involves partitioning pixels in an image into object categories. We investigate various models and energy formulations for both these problems in this thesis. In order to improve the performance of interactive systems, low level texture features are introduced as a replacement for the more commonly used RGB features. To quantify the improvement obtained by using these texture features, two annotated datasets of images are introduced (one consisting of natural images, and the other consisting of camouflaged objects). A significant improvement in performance is observed when using texture features for the case of monochrome images and images containing camouflaged objects. We also explore adding mid-level cues such as shape constraints into interactive segmentation by introducing the idea of geodesic star convexity, which extends the existing notion of a star convexity prior in two important ways: (i) it allows for multiple star centres as opposed to the single star of the original prior, and (ii) it generalises the shape constraint by allowing for geodesic paths as opposed to Euclidean rays. Global minima of our energy function can be obtained subject to these new constraints. We also introduce Geodesic Forests, which exploit the structure of shortest paths in implementing the extended constraints. These extensions to star convexity allow us to use such constraints in a practical segmentation system. This system is evaluated by means of a “robot user” to measure the amount of interaction required in a precise way, and it is shown that having shape constraints reduces user effort significantly compared to existing interactive systems.
We also introduce a new and harder dataset which augments the existing GrabCut dataset with more realistic images and ground truth taken from the PASCAL VOC segmentation challenge. In the latter part of the thesis, we bring in object category level information in order to make the interactive segmentation tasks easier, and move towards fully automated semantic segmentation. An algorithm to automatically segment humans from cluttered images given their bounding boxes is presented. A top down segmentation of the human is obtained using classifiers trained to predict segmentation masks from local HOG descriptors. These masks are then combined with bottom up image information in a local GrabCut like procedure. This algorithm is later completely automated to segment humans without requiring a bounding box, and is quantitatively compared with other semantic segmentation methods. We also introduce a novel way to acquire large quantities of segmented training data relatively effortlessly using the Kinect. In the final part of this work, we explore various semantic segmentation methods based on learning using bottom up super-pixelisations. Different methods of combining multiple super-pixelisations are discussed and quantitatively evaluated on two segmentation datasets. We observe that simple combinations of independently trained classifiers on single super-pixelisations perform almost as well as complex methods based on jointly learning across multiple super-pixelisations. We also explore CRF based formulations for semantic segmentation, and introduce a novel visual-words-based object boundary description in the energy formulation. The object appearance and boundary parameters are trained jointly using structured output learning methods, and the benefit of adding pairwise terms is quantified on two different datasets.
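The geodesic constructions above rest on shortest paths whose cost grows with image contrast, so paths prefer to stay inside homogeneous regions. A dependency-free sketch of geodesic distances on a toy intensity grid, using Dijkstra's algorithm; the grid, the 1 + |ΔI| edge cost, and the seed are invented for illustration and are not the thesis's exact weighting.

```python
import heapq

# Toy image: a bright blob sits near the seed, surrounded by a dark,
# homogeneous border region.
I = [[0, 0, 0, 0],
     [0, 9, 9, 0],
     [0, 0, 0, 0]]
ROWS, COLS = len(I), len(I[0])

def geodesic_distances(seed):
    # Dijkstra over the 4-connected pixel grid with contrast-weighted
    # edge costs.
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < ROWS and 0 <= nc < COLS:
                nd = d + 1 + abs(I[nr][nc] - I[r][c])  # geodesic cost
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

d = geodesic_distances((0, 0))
# The bright blob at (1, 1) is geodesically "far" despite being two
# steps away, while the spatially distant (2, 3) is cheap to reach
# along the homogeneous border.
print(d[(1, 1)], d[(2, 3)])
```

Replacing Euclidean rays with such geodesic paths is what lets a star-convexity constraint bend around object boundaries instead of cutting straight through them.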
