21 |
Problems in generic combinatorial rigidity: Sparsity, sliders, and emergence of components
Theran, Louis, 01 January 2010
Rigidity theory deals with problems of the following form: given a structure defined by geometric constraints on a set of objects, what information about its geometric behavior is implied by the underlying combinatorial structure? The most well-studied class of structures is the bar-joint framework, which is made of fixed-length bars connected by universal joints with full rotational degrees of freedom; the allowed motions preserve the lengths and connectivity of the bars, and a framework is rigid if the only allowed motions are trivial motions of Euclidean space. A remarkable theorem of Maxwell-Laman says that rigidity of generic bar-joint frameworks depends only on the graph that has the bars as its edges and the joints as its vertices. We generalize the "degree of freedom counts" that appear in the Maxwell-Laman theorem to the very general setting of (k, ℓ)-sparse and (k, ℓ)-graded-sparse hypergraphs, and characterize these in terms of their graph-theoretic and matroidal properties. For the fundamental algorithmic problems Decision, Extraction, Components, and Decomposition, we give efficient, implementable pebble game algorithms for all the (k, ℓ)-sparse and (k, ℓ)-graded-sparse families of hypergraphs we study. We then prove that all the matroids arising from (k, ℓ)-sparse hypergraphs are linearly representable by matrices with a certain "natural" structure that captures the incidence structure of the hypergraph and the sparsity parameters k and ℓ. Building on this combinatorial and linear theory, we introduce a new rigidity model: slider-pinning rigidity. This is an elaboration of the planar bar-joint model to include sliders, which constrain a vertex to move on a specific line. We prove the analogue of the Maxwell-Laman Theorem for slider-pinning, using, as a lemma, a new proof of Whiteley's Parallel Redrawing Theorem.
We conclude by studying the emergence of non-trivial rigid substructures in generic planar frameworks given by Erdős-Rényi random graphs. We prove that there is a sharp threshold for such substructures to emerge and that, when they do, they all have linear size. This is consistent with experimental and simulation-based work in the physics community on the formation of certain glasses.
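The counting condition behind these results is compact: a graph is (k, ℓ)-sparse if every subset of n' vertices spans at most kn' − ℓ edges, and tight if in addition the whole graph has exactly kn − ℓ edges. As a minimal illustration (a brute-force check of the definition for ordinary graphs, not the efficient pebble game algorithms the thesis develops; function names are our own):

```python
from itertools import combinations

def is_sparse(n, edges, k, ell):
    """Brute-force (k, ell)-sparsity: every subset of size s spans
    at most k*s - ell edges. Exponential time; fine for tiny graphs."""
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            s = set(subset)
            spanned = sum(1 for u, v in edges if u in s and v in s)
            if spanned > k * size - ell:
                return False
    return True

def is_tight(n, edges, k, ell):
    """(k, ell)-tight: sparse with exactly k*n - ell edges in total."""
    return is_sparse(n, edges, k, ell) and len(edges) == k * n - ell

# The triangle is (2,3)-tight (minimally rigid in the plane) ...
print(is_tight(3, [(0, 1), (1, 2), (0, 2)], 2, 3))          # True
# ... while K4 has 6 > 2*4 - 3 = 5 edges, so it is not (2,3)-sparse.
print(is_sparse(4, list(combinations(range(4), 2)), 2, 3))  # False
```

For k = 2, ℓ = 3 this is exactly the Laman count of the Maxwell-Laman theorem; the pebble games replace the exponential subset enumeration with near-linear-time bookkeeping.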
|
22 |
Modeling natural microimage statistics
Koloydenko, Alexey Alexandrovich, 01 January 2000
A large collection of digital images of natural scenes provides a database for analyzing and modeling small scene patches (e.g., 2 × 2) referred to as natural microimages. A pivotal finding is the stability of the empirical microimage distribution across scene samples and with respect to scaling. With a view toward potential applications (e.g., classification, clutter modeling, segmentation), we present a hierarchy of microimage probability models which capture essential local image statistics. Tools from information theory, algebraic geometry, and, of course, statistical hypothesis testing are employed to assess the "match" between candidate models and the empirical distribution. Geometric symmetries play a key role in the model selection process. One central result is that the microimage distribution exhibits reflection and rotation symmetry and is well-represented by a Gibbs law with only pairwise interactions. However, the acceptance of the up-down reflection symmetry hypothesis is borderline, and intensity inversion symmetry is rejected. Finally, possible extensions to larger patches via entropy maximization and to patch classification via vector quantization are briefly discussed.
|
23 |
Symbolic model checking using algebraic geometry
Ecke, Volker, 01 January 2003
Symbolic Model Checking is a technique for checking certain properties of a finite state model of a computer system. The most widely used symbolic representation is based on Ordered Binary Decision Diagrams. In [4], G. Avrunin showed how computational geometry and Gröbner basis techniques may be used for symbolic model checking. The present work investigates the details of this approach, and its practicality, using theoretical means and experiments based on a prototype implementation.
|
24 |
Atomism and infinite divisibility
Kenyon, Ralph Edward, 01 January 1994
This work analyzes two perspectives, Atomism and Infinite Divisibility, in the light of modern mathematical knowledge and recent developments in computer graphics. A developmental perspective is taken which relates the ideas leading to atomism and to infinite divisibility. A detailed analysis of, and a new resolution for, Zeno's paradoxes are presented. Aristotle's arguments are analyzed. The arguments of some other philosophers are also presented and discussed. All arguments purporting to prove one position over the other are shown to be faulty, mostly by question begging. Included is a sketch of the consistency of infinite divisibility and a development of the atomic perspective modeled on computer graphics screen displays. The Pythagorean theorem is shown to depend upon the assumption of infinite divisibility. The work concludes that Atomism and infinite divisibility are independently consistent, though mutually incompatible, not unlike the wave/particle distinction in physics.
|
25 |
Graph diffusions and matrix functions: Fast algorithms and localization results
Kloster, Kyle, 01 September 2016
Network analysis provides tools for addressing fundamental applications in graphs such as webpage ranking, protein-function prediction, and product categorization and recommendation. As real-world networks grow to have millions of nodes and billions of edges, the scalability of network analysis algorithms becomes increasingly important. Whereas many standard graph algorithms rely on matrix-vector operations that require exploring the entire graph, this thesis is concerned with graph algorithms that are local (exploring only the graph region near the nodes of interest), as well as with the localized behavior of global algorithms. We prove that two well-studied matrix functions for graph analysis, PageRank and the matrix exponential, stay localized on networks that have a skewed degree sequence related to the power-law degree distributions common in real-world networks. Our results give the first theoretical explanation of a localization phenomenon that has long been observed in real-world networks. We prove that our novel method for the matrix exponential converges with sublinear work on graphs with the specified degree sequence, and we adapt our method to produce the first deterministic algorithm for computing the related heat kernel diffusion in constant time. Finally, we generalize this framework to compute any graph diffusion in constant time.
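The local style of computation described here can be illustrated with a push-style (Gauss-Southwell) iteration for personalized PageRank, which settles probability mass node by node and never touches nodes whose residual stays below a degree-scaled threshold. This is a generic textbook sketch of the idea, not the thesis's algorithms, and the parameter names are our own:

```python
from collections import defaultdict

def local_pagerank(adj, seed, alpha=0.85, eps=1e-6):
    """Push-style personalized PageRank. p holds settled probability
    mass, r the unprocessed residual; a node is pushed only while its
    residual exceeds eps times its degree, so the work stays local."""
    p = defaultdict(float)
    r = defaultdict(float)
    r[seed] = 1.0
    work = [seed]
    while work:
        u = work.pop()
        deg = len(adj[u])
        if r[u] < eps * deg:
            continue
        p[u] += (1 - alpha) * r[u]       # settle (1 - alpha) of the residual
        share = alpha * r[u] / deg       # spread the rest to neighbors
        r[u] = 0.0
        for v in adj[u]:
            before = r[v]
            r[v] += share
            if before < eps * len(adj[v]) <= r[v]:
                work.append(v)           # v just crossed its push threshold
    return dict(p)

# On a 4-cycle, mass concentrates near the seed node.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
scores = local_pagerank(cycle, seed=0)
```

Each push removes a (1 − alpha) fraction of some residual, so the total residual shrinks monotonically and the returned scores sum to 1 up to the chosen tolerance.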
|
26 |
Constructing strategies for games with simultaneous movement
Keffer, Jeremy, 24 October 2015
From the early days of AI, computers have been programmed to play games against human players. Most AI work has sought to build world-champion programs for turn-based games such as Chess and Checkers; however, computer games increasingly provide for entertaining real-time play. In this dissertation, we present an extension of recursive game theory that can be used to analyze games involving simultaneous movement. We include an algorithm that can be used to solve recursive games in practice, and present a proof of its correctness. We also define a game theory of lowered expectations to deal with situations where game theory currently fails to give players a definitive strategy, and demonstrate its applicability using several example games.
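For intuition about why simultaneous movement calls for game-theoretic machinery, the simplest case is a one-shot zero-sum matrix game, where deterministic best responses cycle forever and only a mixed strategy is stable. The classical fictitious-play iteration below (a standard textbook method, not the dissertation's algorithm) recovers such a mixed equilibrium:

```python
def fictitious_play(payoff, rounds=20000):
    """Fictitious play for a two-player zero-sum matrix game: each round,
    both players best-respond to the opponent's empirical mixture.
    For zero-sum games the empirical frequencies converge to a
    mixed-strategy equilibrium (Robinson, 1951)."""
    n, m = len(payoff), len(payoff[0])
    row_counts = [0] * n
    col_counts = [0] * m
    row_counts[0] = col_counts[0] = 1          # arbitrary opening moves
    for _ in range(rounds):
        # Row maximizes expected payoff against the column's history;
        # column minimizes the row player's payoff (zero-sum).
        row_vals = [sum(payoff[i][j] * col_counts[j] for j in range(m))
                    for i in range(n)]
        col_vals = [sum(payoff[i][j] * row_counts[i] for i in range(n))
                    for j in range(m)]
        row_counts[row_vals.index(max(row_vals))] += 1
        col_counts[col_vals.index(min(col_vals))] += 1
    total_r, total_c = sum(row_counts), sum(col_counts)
    return ([c / total_r for c in row_counts],
            [c / total_c for c in col_counts])

# Matching pennies: the unique equilibrium mixes each action 50/50.
rows, cols = fictitious_play([[1, -1], [-1, 1]])
```

Recursive games layer repeated play and state transitions on top of this one-shot structure, which is what makes their solution substantially harder.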
|
27 |
Hierarchically Normalized Models of Visual Distortion Sensitivity: Physiology, Perception, and Application
Berardino, Alexander, 01 August 2018
How does the visual system determine when changes to an image are unnatural (image distortions), how does it weight different types of distortions, and where are these computations carried out in the brain? These questions have plagued neuroscientists, psychologists, and engineers alike for several decades. Different academic communities have approached the problem from different directions, with varying degrees of success. The one thing all groups agree on is that there is value in knowing the answer. Models that appropriately capture human sensitivity to image distortions can be used as a stand-in for human observers in order to optimize any algorithm in which fidelity to human perception is necessary (e.g., image and video compression).

In this thesis, we approach the problem by building models informed and constrained by both visual physiology and the statistics of natural images, and train them to match human psychophysical judgments about image distortions. We then develop a novel synthesis method that forces the models to make testable predictions, and quantify the quality of those predictions with human psychophysics. Because our approach links physiology and perception, it allows us to pinpoint which elements of physiology are necessary to capture human sensitivity to image distortions. We consider several different models of the visual system, some developed from known neural physiology and some inspired by recent breakthroughs in artificial intelligence (deep neural networks trained to recognize objects within images at human performance levels). We show that models inspired by early brain areas (retina and LGN) consistently capture human sensitivity to image distortions better than both the state of the art and competing models of the visual system. We argue that divisive normalization, a ubiquitous computation in the visual system, is integral to correctly capturing human sensitivity.

After establishing that our models of the retina and the LGN outperform all other tested models, we develop a novel framework for optimally rendering images on any display for human observers. We show that a model of this kind can serve as a stand-in for human observers within this optimization framework, and produces images that are better than those of other state-of-the-art algorithms. We also show that the other tested models fail as stand-ins for human observers within this framework.

Finally, we propose and test a normative framework for thinking about human sensitivity to image distortions. In this framework, we hypothesize that the human visual system decomposes images into structural changes (those that change the identity of objects and scenes) and non-structural changes (those that preserve object and scene identity), and weights these changes differently. We test human sensitivity to distortions that fall into each of these categories, and use this data to identify potential weaknesses of our model that can be improved in further work.
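Divisive normalization itself is a simple computation: each unit's response is divided by a pooled measure of population activity plus a semi-saturation constant. A toy sketch (our own illustration with made-up constants, not the thesis's trained models) shows its signature property of approximate invariance to overall contrast:

```python
def divisive_normalize(responses, sigma=0.1, exponent=2.0):
    """Each unit's exponentiated response is divided by the pooled
    activity of the whole population plus a semi-saturation constant
    sigma, which dominates only when all responses are weak."""
    pooled = sigma ** exponent + sum(r ** exponent for r in responses)
    return [r ** exponent / pooled for r in responses]

# Scaling every input by 2 leaves the normalized pattern nearly
# unchanged when sigma is small: the relative pattern is preserved.
weak = divisive_normalize([1.0, 2.0, 3.0], sigma=1e-6)
strong = divisive_normalize([2.0, 4.0, 6.0], sigma=1e-6)
```

In cortical models the pool is usually a weighted local neighborhood rather than the whole population, but the divide-by-pooled-activity structure is the same.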
|
28 |
Integral Equation Methods for the Heat Equation in Moving Geometry
Wang, Jun, 22 November 2017
Many problems in physics and engineering require the solution of the heat equation in moving geometry. Integral representations are particularly appropriate in this setting, since they satisfy the governing equation automatically and, in the homogeneous case, require the discretization of the space-time boundary alone. Unlike methods based on direct discretization of the partial differential equation, they are unconditionally stable. Moreover, while a naive implementation of this approach is impractical, several efforts have been made over the past few years to reduce the overall computational cost. Of particular note are Fourier-based methods, which achieve optimal complexity so long as the time step Δt is of the same order as Δx, the mesh size in the spatial variables. As the time step goes to zero, however, the cost of the Fourier-based fast algorithms grows without bound. A second difficulty with existing schemes has been the lack of efficient, high-order, local-in-time quadratures for layer heat potentials.

In this dissertation, we present a new method for evaluating heat potentials that makes use of a spatially adaptive mesh instead of a Fourier series, a new version of the fast Gauss transform, and a new hybrid asymptotic/numerical method for local-in-time quadrature. The method is robust and efficient for any Δt, with essentially optimal computational complexity. We demonstrate its performance with numerical examples and discuss its implications for subsequent work in diffusion, heat flow, solidification and fluid dynamics.
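As a point of reference for what the fast algorithms accelerate: evaluating a free-space heat potential at N targets from M discrete sources is a discrete Gauss transform, naively O(NM). A one-dimensional sketch (our own naming and discretization, not the thesis's method):

```python
import math

def heat_potential_1d(targets, sources, weights, t):
    """u(x, t) = sum_j q_j * G(x - y_j, t), with the 1-D heat kernel
    G(r, t) = exp(-r^2 / 4t) / sqrt(4*pi*t). Direct O(N*M) summation;
    the fast Gauss transform reduces this to O(N + M)."""
    return [sum(q * math.exp(-(x - y) ** 2 / (4 * t)) / math.sqrt(4 * math.pi * t)
                for y, q in zip(sources, weights))
            for x in targets]

# A unit heat source at the origin, evaluated on a fine grid over
# [-10, 10] at t = 0.5: the kernel integrates to 1 (heat conservation).
grid = [i * 0.01 - 10.0 for i in range(2001)]
u = heat_potential_1d(grid, [0.0], [1.0], t=0.5)
total = sum(u) * 0.01   # Riemann-sum approximation of the integral of u
```

The difficulty the thesis addresses arises when the sources lie on a moving boundary and the kernel becomes sharply peaked as Δt shrinks, which is exactly where uniform Fourier discretizations lose efficiency.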
|
29 |
Learning an activity-based semantic scene model
Makris, Dimitrios, January 2004
No description available.
|
30 |
Speech-based creation and editing of mathematical content
Wigmore, Angela Michelle, January 2011
For most people, the creation and editing of mathematical text in electronic documents is a slow, tedious and error-prone activity. For people with disabilities, especially blindness or severe visual impairments, this is far more of a problem. The lack of easy access to good mathematical resources limits the educational and career opportunities of people with such disabilities. Automatic Speech Recognition (ASR) could help both able-bodied people and people with physical disabilities gain better access to mathematics. However, whilst ASR has improved over recent years, most speech recognition systems do not support the input and editing of dictated mathematical expressions. In this thesis, we present the results of studies of how students and staff at Kingston University, of various levels of mathematical achievement, read out given expressions in English. Furthermore, we analyse evidence, both from our own studies and from transcriptions of mathematics classes recorded in the British National Corpus (BNC), that people consistently place pauses to mark the grouping of subexpressions. These results enabled us to create an innovative context-free attribute grammar capable of capturing a high proportion of GCSE-level spoken mathematics, even when it is syntactically incorrect and/or incomplete. This attribute grammar was implemented, tested and evaluated in our prototype system TalkMaths, which enables the dictation of mathematical expressions, up to approximately GCSE level, and converts them into various electronic document formats. We also compiled statistics of "common sequences" of mathematics-related keywords from these two sources, with a view to using them to develop a "predictive model" for our system. Our evaluations showed that people of various levels of mathematical ability can learn how to produce electronic mathematical documents by speech.
These studies have demonstrated that producing mathematical documents by speech is a viable alternative to using the keyboard and mouse, especially for those who rely on speech recognition software to use a computer. A novel editing paradigm, based on a "hybrid grid", is proposed, implemented and tested in a further usability study. Although the evaluation of this editing paradigm is incomplete, it has shown the approach to be promising and worthy of further research.
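A flavour of what such a grammar must handle can be given by a toy recursive-descent parser for a tiny fragment of spoken arithmetic. This is our own illustrative fragment, far smaller and stricter than the thesis's attribute grammar (which also tolerates incorrect and incomplete utterances); the precedence of "times" over "plus" mirrors ordinary convention:

```python
WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}

def parse_spoken(utterance):
    """Parse a fragment of spoken arithmetic into infix notation.
    Grammar: expr   := term (('plus' | 'minus') term)*
             term   := factor ('times' factor)*
             factor := NUMBER-WORD | NAME, optionally followed by 'squared'"""
    tokens = utterance.split()
    pos = [0]
    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None
    def take():
        t = tokens[pos[0]]
        pos[0] += 1
        return t
    def factor():
        t = take()
        node = str(WORDS[t]) if t in WORDS else t
        while peek() == "squared":
            take()
            node = f"({node})^2"
        return node
    def term():
        node = factor()
        while peek() == "times":
            take()
            node = f"{node}*{factor()}"
        return node
    def expr():
        node = term()
        while peek() in ("plus", "minus"):
            op = "+" if take() == "plus" else "-"
            node = f"{node}{op}{term()}"
        return node
    return expr()

print(parse_spoken("two times x plus three squared"))  # 2*x+(3)^2
```

The role of the thesis's pause evidence is visible even here: an utterance like "two times x plus three, squared" is ambiguous without prosodic grouping, which is why the attribute grammar treats pauses as structural markers.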
|