1

Inapproximability Reductions and Integrality Gaps

Popat, Preyas, 03 October 2013
In this thesis we prove intractability results for several well-studied problems in combinatorial optimization.

Closest Vector Problem with Pre-processing (CVPP): We show that the pre-processing version of the well-known Closest Vector Problem is hard to approximate to an almost polynomial factor unless NP is in quasi-polynomial time. The approximability of CVPP is closely related to the security of lattice-based cryptosystems.

Pricing Loss Leaders: We show hardness of approximation results for the problem of maximizing profit from buyers with single-minded valuations, where each buyer is interested in bundles of at most k items and the items are allowed to have negative prices ("loss leaders"). For k = 2, we show that, assuming the Unique Games Conjecture, it is hard to approximate the profit to any constant factor. For k ≥ 3, we show the same result assuming P ≠ NP.

Integrality gaps: We show Semi-Definite Programming (SDP) integrality gaps for Unique Games and 2-to-1 Games. Inapproximability results for these problems imply inapproximability results for many fundamental optimization problems. For the first problem, we show "approximate" integrality gaps for super-constant rounds of the powerful Lasserre hierarchy. For the second problem, we show integrality gaps for the basic SDP relaxation with perfect completeness.
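For context, a standard formulation of the underlying lattice problem (a textbook definition, not quoted from the abstract):

```latex
% Textbook definition of the Closest Vector Problem (CVP); the preprocessing
% framing below is the standard one from the literature, not quoted from the
% thesis. Given a basis B = (b_1, ..., b_n) of the lattice
%   L(B) = { z_1 b_1 + ... + z_n b_n : z_i integers }
% and a target vector t, CVP asks for the nearest lattice point:
\[
\mathrm{CVP}(B, t) \;=\; \operatorname*{arg\,min}_{v \in \mathcal{L}(B)} \lVert v - t \rVert .
\]
% In CVPP, the basis B is fixed in advance and may be preprocessed for an
% unbounded amount of time; only the target t arrives at query time. Roughly,
% the lattice plays the role of a long-lived public key, which is why CVPP
% hardness bears on lattice-based cryptosystems.
```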
2

A Differential Geometric Approach using Orientation Fields for Shape from Shading

Kunsberg, Benjamin, 02 July 2014
No description available.
3

Hierarchically Normalized Models of Visual Distortion Sensitivity: Physiology, Perception, and Application

Berardino, Alexander, 01 August 2018
How does the visual system determine when changes to an image are unnatural (image distortions), how does it weight different types of distortions, and where are these computations carried out in the brain? These questions have plagued neuroscientists, psychologists, and engineers alike for several decades. Different academic communities have approached the problem from different directions, with varying degrees of success. The one thing that all groups agree on is that there is value in knowing the answer. Models that appropriately capture human sensitivity to image distortions can be used as a stand-in for human observers in order to optimize any algorithm in which fidelity to human perception is necessary (e.g., image and video compression).

In this thesis, we approach the problem by building models informed and constrained by both visual physiology and the statistics of natural images, and train them to match human psychophysical judgments about image distortions. We then develop a novel synthesis method that forces the models to make testable predictions, and quantify the quality of those predictions with human psychophysics. Because our approach links physiology and perception, it allows us to pinpoint which elements of physiology are necessary to capture human sensitivity to image distortions. We consider several different models of the visual system, some developed from known neural physiology and some inspired by recent breakthroughs in artificial intelligence (deep neural networks trained to recognize objects within images at human performance levels). We show that models inspired by early brain areas (retina and LGN) consistently capture human sensitivity to image distortions better than both the state of the art and competing models of the visual system. We argue that divisive normalization, a ubiquitous computation in the visual system, is integral to correctly capturing human sensitivity.

After establishing that our models of the retina and the LGN outperform all other tested models, we develop a novel framework for optimally rendering images on any display for human observers. We show that a model of this kind can be used as a stand-in for human observers within this optimization framework, and produces images that are better than those of other state-of-the-art algorithms. We also show that the other tested models fail as stand-ins for human observers within this framework.

Finally, we propose and test a normative framework for thinking about human sensitivity to image distortions. In this framework, we hypothesize that the human visual system decomposes images into structural changes (those that change the identity of objects and scenes) and non-structural changes (those that preserve object and scene identity), and weights these changes differently. We test human sensitivity to distortions that fall into each of these categories, and use this data to identify potential weaknesses of our model that can be improved in further work.
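As an illustration of divisive normalization, the computation the abstract highlights, here is a textbook form with invented parameter values (a minimal sketch, not the thesis's fitted model):

```python
import numpy as np

def divisive_normalization(responses, sigma=1.0, exponent=2.0, weights=None):
    """Textbook divisive normalization: each unit's response is divided by a
    pooled measure of activity across the population. Parameter names and
    default values are illustrative, not taken from the thesis."""
    r = np.asarray(responses, dtype=float) ** exponent
    if weights is None:
        weights = np.ones((len(r), len(r)))   # uniform normalization pool
    pool = weights @ r                        # weighted sum of rectified responses
    return r / (sigma ** exponent + pool)

# A strong response is suppressed more when its neighbors are also active:
print(divisive_normalization([3.0, 0.1, 0.2]))
print(divisive_normalization([3.0, 2.9, 3.1]))
```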
4

Learning an activity-based semantic scene model

Makris, Dimitrios, January 2004
No description available.
5

Speech-based creation and editing of mathematical content

Wigmore, Angela Michelle, January 2011
For most people, the creation and editing of mathematical text in electronic documents is a slow, tedious and error-prone activity. For people with disabilities, especially blindness or severe visual impairments, this is far more of a problem. The lack of easy access to good mathematical resources limits the educational and career opportunities for people with such disabilities. Automatic Speech Recognition (ASR) could enable both able-bodied people and people with physical disabilities to gain better access to mathematics. However, whilst ASR has improved over recent years, most speech recognition systems do not support the input and editing of dictated mathematical expressions. In this thesis, we present results of studies of how students and staff at Kingston University, of various levels of mathematical achievement, read out given expressions in English. Furthermore, we analyse evidence, both from our own studies and from transcriptions of mathematics classes recorded in the British National Corpus (BNC), that people do consistently place pauses to mark the grouping of subexpressions. These results enabled us to create an innovative context-free attribute grammar capable of capturing a high proportion of GCSE-level spoken mathematics, which can be syntactically incorrect and/or incomplete. This attribute grammar was implemented, tested and evaluated in our prototype system TalkMaths. We also compiled statistics of "common sequences" of mathematics-related keywords from these two sources, with a view to using these to develop a "predictive model" for use in our system. The TalkMaths prototype enables the dictation of mathematical expressions, up to approximately GCSE level, and converts them into various electronic document formats. Our evaluations of this system showed that people of various levels of mathematical ability can learn how to produce electronic mathematical documents by speech. These studies have demonstrated that producing mathematical documents by speech is a viable alternative to using the keyboard and mouse, especially for those who rely on speech recognition software to use a computer. A novel editing paradigm, based on a "hybrid grid", is proposed, implemented and tested in a further usability study. Although the evaluation of this editing paradigm is incomplete, it has shown promise and is worthy of further research.
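As a rough illustration of mapping spoken mathematics to notation, here is a toy sketch with an invented vocabulary; the attribute grammar developed in the thesis is far more extensive and handles pauses, ambiguity and incomplete input:

```python
# Toy spoken-mathematics-to-LaTeX mapping. The token vocabulary and rules are
# illustrative inventions, not the grammar from the thesis.
SPOKEN = {
    "plus": "+", "minus": "-", "times": r"\times", "over": "/",
    "squared": "^2", "cubed": "^3", "equals": "=",
}

def spoken_to_latex(utterance: str) -> str:
    out = []
    for word in utterance.lower().split():
        sym = SPOKEN.get(word)
        if sym is None:
            out.append(word)               # variables and number words pass through
        elif sym.startswith("^") and out:  # postfix exponent attaches to previous token
            out[-1] += sym
        else:
            out.append(sym)
    return " ".join(out)

print(spoken_to_latex("x squared plus two x equals eight"))
# -> x^2 + two x = eight   (number words are left unresolved in this toy)
```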
6

An integrated modeling framework for concept formation: developing number-sense, a partial resolution of the learning paradox

Rendell, Gerard Vincent Alfred, January 2012
The development of mathematics is foundational, and for the most part in early childhood it is seldom insurmountable. Various constructions exhibit conceptual change in the child, which is evidence of overcoming the learning paradox. If one tries to account for learning by means of mental actions carried out by the learner, then it is necessary to attribute to the learner a prior structure, one that is as advanced or as complex as the one to be acquired, unless there is emergence. This thesis reinterprets Piaget's theory using research from neurophysiology, biology and machine learning, and demonstrates a novel approach that partially resolves the learning paradox for a simulation that experiences a number-line world, exhibiting emergence of structure using a model of Drosophila. In doing so, the research evaluates other models of cognitive development against a real-world, worked example of number-sense from childhood mathematics. The purpose is to determine whether they assume a prior capacity to solve problems, or introduce, as parallel assumptions within the learning process, additional capabilities not seen in children. Technically, the research uses an artificial neural network with reinforcement learning to confirm the emergence of permanent object invariants. It then evaluates an evolved dialectic system with hierarchical finite state automata within a reactive Argos framework to confirm the re-evaluated Piagetian developmental model against the worked example. This thesis establishes that the emergence of new concepts is a critical need in the development of autonomous evolvable systems that can act, learn and plan in novel ways, in noisy situations.
7

Symbolic algorithms for the local analysis of systems of pseudo-linear equations

Broughton, Gary John, January 2013
This thesis is concerned with the design and implementation of algorithms in Computer Algebra, a discipline which pursues a symbolic approach to solving mathematical equations and problems, in contrast to computing solutions numerically. More precisely, we study systems of pseudo-linear equations, which unify the classes of linear differential, difference and q-difference systems. Whilst the classical mathematical theory of asymptotic expansions and the notion of formal solutions are well established for each of these individual cases, no unifying theoretical framework for pseudo-linear systems was known prior to our work. From an algorithmic point of view, the computation of a complete fundamental system of formal solutions is implemented by the formal reduction process. The formal reduction of linear differential systems had been treated in the past, and linear difference systems were also investigated and partly solved. In the case of linear q-difference systems, the structure of the formal solutions is much simpler, which results in an alleviated formal reduction; however, no satisfying algorithm had been published that was suitable for computing the formal solutions. We place ourselves in the generic setting and show that various algorithms that are known to be building blocks for the formal reduction in the differential case can be extended to the general pseudo-linear setting. In particular, the family of Moser- and super-reduction algorithms, as well as the Classical Splitting Lemma and the Generalised Splitting Lemma, are amongst the fundamental ingredients that we consider and which are essential for an effective formal reduction procedure. Whereas some of these techniques had been considered and adapted for systems of difference or q-difference equations, our novel contribution is to show that they can be extended and formulated in such a way that they are valid generically. Based on these results, we then design our generic formal reduction method, again inspired by the differential case. Apart from the resulting unified approach, this also yields a novel approach to the formal reduction of difference and q-difference systems. Together with a generalisation of an efficient algorithm for computing regular formal solutions that was devised for linear differential systems, we finally obtain a complete and generic algorithm for computing formal solutions of systems of pseudo-linear equations. We show that we are able to compute a complete basis of formal solutions of large classes of linear functional systems using our formal reduction method. The algorithms presented in this thesis have been implemented in the Computer Algebra System Maple as part of the Open Source project ISOLDE.
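For context, a common way the literature presents pseudo-linear systems (not quoted from the thesis, which may use a different but equivalent normalization):

```latex
% A common formulation of pseudo-linear systems from the general literature.
% Let \phi be a field automorphism and \delta a \phi-derivation, i.e.
%   \delta(fg) = \delta(f)\,g + \phi(f)\,\delta(g).
% A pseudo-linear system of equations is then
\[
\delta(Y) = A(x)\,\phi(Y),
\]
% which specializes to linear differential systems (\phi = \mathrm{id},
% \delta = d/dx), difference systems (\phi(x) = x + 1, \delta = \phi - \mathrm{id})
% and q-difference systems (\phi(x) = qx), the three classes the thesis unifies.
```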
8

Statistical language modelling and novel parsing techniques for enhanced creation and editing of mathematical e-content using spoken input

Attanayake, Dilaksha Rajiv, January 2014
The work described in this thesis aims at facilitating the design and implementation of web-based editors, driven by speech or natural language input, with a focus on editing mathematics. First, a taxonomy for system architectures of speech-based applications is given. This classification is based on the location of the speech recognition, the speech and application logic, and the resulting flow of data between client and server components. This contribution extends existing system architecture approaches to take into account the characteristics of speech-based systems. We then show, using statistical language modelling techniques, that mathematics, either spoken or typed, is more predictable than everyday natural languages. We illustrate how these models, in combination with error correction algorithms, can be used to successfully assist the process of creating mathematical expressions within electronic documents using speech. We have successfully implemented systems to demonstrate our findings, which have also been evaluated using standard language modelling evaluation techniques. This work is novel in that applying statistical language models to the recognition of spoken mathematics had not been evaluated to this extent prior to our work. We create a parsing framework for spoken mathematics, based on mixfix operators, operator precedences and non-deterministic parsing techniques. This framework can significantly improve the design and parsing of spoken command languages such as spoken mathematics. A novel robust error recovery method for an adaptation of the XGLR parsing approach to our operator precedence setting is presented. This greatly enhances the range of spoken or typed mathematics that can be parsed. The novel parsing framework, algorithms and error recovery that we have designed are suitable for more general structured spoken command languages as well. The algorithms devised in this thesis have been implemented and integrated in a research prototype system called TalkMaths. We evaluate our contributions to the new version of this system by comparing the power of our parser with that contained in previous versions, and by conducting a field study where students engage with our system in a real classroom-based environment. We show that using TalkMaths, rather than a conventional mathematics editor, had a positive impact on the participants' learning and understanding of mathematical concepts.
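A minimal sketch of the kind of n-gram modelling described above, using an invented toy corpus rather than the thesis's data:

```python
import math
from collections import Counter

# Minimal bigram language model with add-one smoothing. The toy corpus and
# the smoothing choice are invented for illustration; the thesis evaluates
# much larger corpora with standard language-modelling techniques.
corpus = "x squared plus two x plus one equals zero".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = len(unigrams)

def bigram_prob(prev, word):
    # Laplace smoothing so unseen word pairs get nonzero probability.
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)

def perplexity(tokens):
    log_p = sum(math.log2(bigram_prob(p, w)) for p, w in zip(tokens, tokens[1:]))
    return 2 ** (-log_p / (len(tokens) - 1))

# Lower perplexity means the sequence is more predictable under the model,
# the sense in which spoken mathematics is "more predictable" than prose.
print(perplexity("x squared plus two".split()))
```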
9

Approximate inference in graphical models

Hennig, Philipp, January 2011
Probability theory provides a mathematically rigorous yet conceptually flexible calculus of uncertainty, allowing the construction of complex hierarchical models for real-world inference tasks. Unfortunately, exact inference in probabilistic models is often computationally expensive or even intractable. A close inspection in such situations often reveals that computational bottlenecks are confined to certain aspects of the model, which can be circumvented by approximations without having to sacrifice the model's interesting aspects. The conceptual framework of graphical models provides an elegant means of representing probabilistic models and deriving both exact and approximate inference algorithms in terms of local computations. This makes graphical models an ideal aid in the development of generalizable approximations. This thesis contains a brief introduction to approximate inference in graphical models (Chapter 2), followed by three extensive case studies in which approximate inference algorithms are developed for challenging applied inference problems. Chapter 3 derives the first probabilistic game tree search algorithm. Chapter 4 provides a novel expressive model for inference in psychometric questionnaires. Chapter 5 develops a model for the topics of large corpora of text documents, conditional on document metadata, with a focus on computational speed. In each case, graphical models help in two important ways: They first provide important structural insight into the problem; and then suggest practical approximations to the exact probabilistic solution.
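As a small illustration of inference by the "local computations" the abstract refers to, here is a sum-product (belief propagation) step on a two-node chain; the factors are toy numbers invented for the example, unrelated to the thesis's case studies:

```python
import numpy as np

psi_a = np.array([0.7, 0.3])            # unary factor on binary variable A
psi_b = np.array([0.4, 0.6])            # unary factor on binary variable B
psi_ab = np.array([[0.9, 0.1],          # pairwise factor coupling A and B
                   [0.2, 0.8]])

# Message from A to B: marginalize A out of its local factors.
msg_a_to_b = psi_ab.T @ psi_a

# Belief (unnormalized marginal) at B combines its local factor and the message.
belief_b = psi_b * msg_a_to_b
belief_b /= belief_b.sum()
print(belief_b)                         # exact marginal p(B) on this tiny chain
```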
