  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Monad For Randomized Algorithms

January 2016 (has links)
This thesis presents new domain-theoretic models for randomized algorithms. A randomized algorithm can use random bits from an oracle to control its computation. The possible random bits form a binary tree, where each random choice of a bit is a branching of the tree. The randomized algorithm then determines what the output should be for each branch. This idea forms the basis of our random choice functors. However, the functor only provides one half of the model. We must also show how multiple randomized algorithms can be combined or composed. This is where the monadic structure comes into play. If we wish to join multiple randomized algorithms to form one resulting algorithm, then we can run each algorithm in parallel, using the same random bits for each. Monads are used to add a computational effect to an existing semantic model. In order to work with models of the lambda calculus, it is important to work in a Cartesian closed category of domains, due to Lambek's theorem and Scott's corollary. Our first random choice monad is shown to be an endofunctor of the Cartesian closed category BCD. If we wish to add multiple computational effects, then we can compose monads as long as the monads enjoy a distributive law. It is shown that in the category BCD, our first random choice monad enjoys a distributive law with the lower powerdomain for nondeterminism. Two variations of the random choice monad are then given. The first variation has a distributive law with the convex powerdomain in the categories RB and FS, while the second variation has a distributive law with the upper powerdomain in BCD. We use the random choice monads to develop a new programming language, Randomized PCF. This extends the language PCF by adding in random choice, allowing for the programming of randomized algorithms. A full operational semantics is given for Randomized PCF, and a random choice monad is used to give it a mathematical model (denotational semantics). 
Finally, an implementation of Randomized PCF is developed, and the Miller-Rabin algorithm is implemented in Randomized PCF. / Tyler Barker
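The "binary tree of random bits" picture above can be mimicked operationally. Below is a minimal Python sketch (my own illustration, not the thesis's domain-theoretic construction): a randomized computation is modeled as a function from a stream of random bits to a result plus the unconsumed stream, with `unit` and `bind` supplying the monad structure and `flip` the basic random-choice effect.

```python
# A state-monad-like reading of randomized computation: each computation
# consumes bits from a shared stream, which is how composed computations
# "run in parallel on the same random bits". Illustrative names only.

def unit(x):
    """return/eta: produce x, consuming no random bits."""
    return lambda bits: (x, bits)

def bind(m, f):
    """>>=: run m, feed its result to f, threading the remaining bit stream."""
    def run(bits):
        x, rest = m(bits)
        return f(x)(rest)
    return run

def flip(bits):
    """The basic effect: consume one random bit from the oracle."""
    return (next(bits), bits)

# Example: two coin flips XOR-ed together.
xor_flips = bind(flip, lambda a: bind(flip, lambda b: unit(a ^ b)))
```

Feeding a concrete bit stream resolves every branching of the tree, which is how a deterministic test can exercise a "randomized" program.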
2

Counting and sampling problems on Eulerian graphs

Creed, Patrick John January 2010 (has links)
In this thesis we consider two sets of combinatorial structures defined on an Eulerian graph: the Eulerian orientations and Euler tours. We are interested in the computational problems of counting (computing the number of elements in the set) and sampling (generating a random element of the set). Specifically, we are interested in the question of when there exists an efficient algorithm for counting or sampling the elements of either set. The Eulerian orientations of a number of classes of planar lattices are of practical significance as they correspond to configurations of certain models studied in statistical physics. In 1992 Mihail and Winkler showed that counting Eulerian orientations of a general Eulerian graph is #P-complete and demonstrated that the problem of sampling an Eulerian orientation can be reduced to the tractable problem of sampling a perfect matching of a bipartite graph. We present a proof that this problem remains #P-complete when the input is restricted to being a planar graph, and analyse a natural algorithm for generating random Eulerian orientations of one of the aforementioned planar lattices. Moreover, we make some progress towards classifying the range of planar graphs on which this algorithm is rapidly mixing by exhibiting an infinite class of planar graphs for which the algorithm will always take an exponential amount of time to converge. The problem of counting the Euler tours of undirected graphs has proven to be less amenable to analysis than that of Eulerian orientations. Although it has been known for many years that the number of Euler tours of any directed graph can be computed in polynomial time, until recently very little was known about the complexity of counting Euler tours of an undirected graph. 
Brightwell and Winkler showed that this problem is #P-complete in 2005 and, apart from a few very simple examples, e.g., series-parallel graphs, there are no known tractable cases, nor are there any good reasons to believe the problem to be intractable. Moreover, despite several unsuccessful attempts, there has been no progress made on the question of approximability. Indeed, this problem was considered to be one of the more difficult open problems in approximate counting since long before the complexity of exact counting was resolved. By considering a randomised input model, we are able to show that a very simple algorithm can sample or approximately count the Euler tours of almost every d-in/d-out directed graph in expected polynomial time. Then, we present some partial results towards showing that this algorithm can be used to sample or approximately count the Euler tours of almost every 2d-regular graph in expected polynomial time. We also provide some empirical evidence to support the unproven conjecture required to obtain this result. As a side result of this work, we obtain an asymptotic characterisation of the distribution of the number of Eulerian orientations of a random 2d-regular graph.
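The polynomial-time count for directed graphs mentioned above is the BEST theorem (de Bruijn, van Aardenne-Ehrenfest, Smith, Tutte): ec(G) = t_w(G) · ∏_v (outdeg(v) − 1)!, where t_w(G) counts spanning arborescences oriented towards an arbitrary root w, computable by the Matrix-Tree theorem. The sketch below implements this for tiny digraphs; it is my own illustration, not code from the thesis.

```python
from fractions import Fraction
from math import factorial

def count_euler_tours_directed(edges):
    """Count Eulerian circuits of a connected Eulerian digraph via the BEST
    theorem: ec(G) = t_w(G) * prod_v (outdeg(v) - 1)!."""
    nodes = sorted({u for u, v in edges} | {v for u, v in edges})
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    # Out-degree Laplacian L = D_out - A (with edge multiplicities).
    L = [[0] * n for _ in range(n)]
    outdeg = {}
    for u, v in edges:
        L[idx[u]][idx[u]] += 1
        L[idx[u]][idx[v]] -= 1
        outdeg[u] = outdeg.get(u, 0) + 1
    # Matrix-Tree step: delete the root's row/column (root = nodes[0]) and
    # take the determinant, in exact rational arithmetic.
    M = [[Fraction(L[i][j]) for j in range(1, n)] for i in range(1, n)]
    det = Fraction(1)
    for col in range(n - 1):
        pivot = next((r for r in range(col, n - 1) if M[r][col] != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, n - 1):
            f = M[r][col] / M[col][col]
            for c in range(col, n - 1):
                M[r][c] -= f * M[col][c]
    for v in nodes:
        det *= factorial(outdeg.get(v, 1) - 1)
    return int(det)
```

For undirected graphs no analogue of this determinant formula is known, which is exactly the gap the thesis attacks.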
3

Scalable analytics of massive graphs

Popova, Diana 20 December 2018 (has links)
Graphs are commonly selected as a model of scientific information: graphs can successfully represent imprecise, uncertain, noisy data; and graph theory has a well-developed mathematical apparatus forming a solid and sound foundation for graph research. Design and experimental confirmation of new, scalable, and practical analytics for massive graphs have been actively researched for decades. Our work concentrates on developing new accurate and efficient algorithms that calculate the most influential nodes and communities in an arbitrary graph. Our algorithms for graph decomposition into families of most influential communities compute influential communities faster and with a smaller memory footprint than existing algorithms for the problem. Our algorithms solving the problem of influence maximization in large graphs use much less memory than existing state-of-the-art algorithms while providing solutions of equal accuracy. Our main contribution is designing data structures and algorithms that drastically cut the memory footprint and scale up the computation of influential communities and nodes to massive modern graphs. The algorithms and their implementations can efficiently handle networks of billions of edges using a single consumer-grade machine. These claims are supported by extensive experiments on large real-world graphs of different types. / Graduate
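For intuition about the influence-maximization problem mentioned above, here is a minimal Monte Carlo greedy sketch under the independent-cascade model, in the style of the classic Kempe-Kleinberg-Tardos approach. This is a generic textbook baseline with illustrative names, not the thesis's memory-efficient algorithms.

```python
import random

def simulate_ic(graph, seeds, rng):
    """One Monte Carlo run of the independent-cascade model.
    graph: {node: [(neighbor, activation_probability), ...]}"""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in graph.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_influence_max(graph, k, trials=200, seed=0):
    """Greedy seed selection: repeatedly add the node with the largest
    estimated marginal gain in expected spread."""
    rng = random.Random(seed)
    seeds = set()
    nodes = set(graph) | {v for outs in graph.values() for v, _ in outs}
    for _ in range(k):
        base = (sum(simulate_ic(graph, seeds, rng) for _ in range(trials)) / trials
                if seeds else 0.0)
        best, best_gain = None, -1.0
        for u in nodes - seeds:
            est = sum(simulate_ic(graph, seeds | {u}, rng) for _ in range(trials)) / trials
            if est - base > best_gain:
                best, best_gain = u, est - base
        seeds.add(best)
    return seeds
```

The n Monte Carlo evaluations per round are what make memory- and time-efficient variants, like those the thesis develops, necessary at scale.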
4

Planification de mouvement pour systèmes anthropomorphes / Motion planning for anthropomorphic systems

Dalibard, Sébastien 22 July 2011 (has links)
L'objet de cette thèse est le développement et l'étude des algorithmes de planification de mouvement pour les systèmes hautement dimensionnés que sont les robots humanoïdes et les acteurs virtuels. Plusieurs adaptations des méthodes génériques de planification de mouvement randomisées sont proposées et discutées. Une première contribution concerne l'utilisation de techniques de réduction de dimension linéaire pour accélérer les algorithmes d'échantillonnage. Cette méthode permet d'identifier en ligne quand un processus de planification passe par un passage étroit de l'espace des configurations et adapte l'exploration en fonction. Cet algorithme convient particulièrement bien aux problèmes difficiles de la planification de mouvement pour l'animation graphique. La deuxième contribution est le développement d'algorithmes randomisés de planification sous contraintes. Il s'agit d'une intégration d'outils de cinématique inverse hiérarchisée aux algorithmes de planification de mouvement randomisés. On illustre cette méthode sur différents problèmes de manipulation pour robots humanoïdes. Cette contribution est généralisée à la planification de mouvements corps-complet nécessitant de la marche. La dernière contribution présentée dans cette thèse est l'utilisation des méthodes précédentes pour résoudre des tâches de manipulation complexes par un robot humanoïde. Nous présentons en particulier un formalisme destiné à représenter les informations propres à l'objet manipulé utilisables par un planificateur de mouvement. Ce formalisme est présenté sous le nom d'« objets documentés». / This thesis deals with the development and analysis of motion planning algorithms for high dimensional systems: humanoid robots and digital actors. Several adaptations of generic randomized motion planning methods are proposed and discussed. A first contribution concerns the use of linear dimensionality reduction techniques to speed up sampling algorithms. 
This method identifies online when a planning process goes through a narrow passage of the configuration space, and adapts the exploration accordingly. This algorithm is particularly suited to difficult problems of motion planning for computer animation. The second contribution is the development of randomized algorithms for motion planning under constraints. It integrates prioritized inverse kinematics tools into randomized motion planning. We demonstrate the use of this method on different manipulation planning problems for humanoid robots. This contribution is generalized to whole-body motion planning with locomotion. The last contribution of this thesis is the use of the previous methods to solve complex manipulation tasks by humanoid robots. More specifically, we present a formalism that represents information specific to a manipulated object usable by a motion planner. This formalism is presented under the name of "documented object".
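The randomized sampling-based planning that this thesis extends can be illustrated with a bare-bones RRT in an obstacle-free 2D box. This is a generic sketch with made-up parameter names; the thesis's planners add constraints, prioritized inverse kinematics, and collision checking, none of which appear here.

```python
import math
import random

def rrt(start, goal, bounds, step=0.5, goal_bias=0.2, max_iters=500, tol=0.5, seed=1):
    """Basic RRT: grow a tree from start by repeatedly sampling a target point
    (biased towards the goal) and extending the nearest tree node towards it."""
    rng = random.Random(seed)
    (xmin, xmax), (ymin, ymax) = bounds
    tree = {start: None}  # node -> parent
    for _ in range(max_iters):
        if rng.random() < goal_bias:
            target = goal
        else:
            target = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        near = min(tree, key=lambda q: math.dist(q, target))
        d = math.dist(near, target)
        if d <= step:
            new = target
        else:  # move one step from `near` towards `target`
            new = (near[0] + step * (target[0] - near[0]) / d,
                   near[1] + step * (target[1] - near[1]) / d)
        tree[new] = near
        if math.dist(new, goal) <= tol:
            path = [new]  # walk parent pointers back to the root
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None
```

Narrow passages are precisely where uniform sampling like this stalls, which motivates the dimensionality-reduction adaptation described in the abstract.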
5

Two algorithms for leader election and network size estimation in mobile ad hoc networks

Neumann, Nicholas Gerard 17 February 2005 (has links)
We develop two algorithms for important problems in mobile ad hoc networks (MANETs). A MANET is a collection of mobile processors (“nodes”) which communicate via message passing over wireless links. Each node can communicate directly with other nodes within a specified transmission radius; other communication is accomplished via message relay. Communication links may go up and down in a MANET (as nodes move toward or away from each other); thus, the MANET can consist of multiple connected components, and connected components can split and merge over time. We first present a deterministic leader election algorithm for asynchronous MANETs along with a correctness proof for it. Our work involves substantial modifications of an existing algorithm and its proof, and we adapt the existing algorithm to the asynchronous environment. Our algorithm’s running time and message complexity compare favorably with existing algorithms for leader election in MANETs. Second, many algorithms for MANETs require or can benefit from knowledge about the size of the network in terms of the number of processors. As such, we present an algorithm to approximately determine the size of a MANET. While the algorithm’s approximations of network size are only rough ones, the algorithm has the important qualities of requiring little communication overhead and being tolerant of link failures.
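A toy illustration of the leader-election idea: synchronous max-id flooding on a static graph, where after diameter-many rounds every node in a connected component agrees on the component's maximum id. The thesis itself handles asynchrony, mobility, and component splits and merges, none of which this sketch attempts.

```python
def elect_leaders(adjacency, rounds=None):
    """Synchronous max-id flooding: each round, every node tells its neighbors
    the largest id it has heard so far; n rounds always suffice (n >= diameter).
    Returns {node: leader_id} for each connected component."""
    known = {v: v for v in adjacency}
    n = len(adjacency)
    for _ in range(rounds if rounds is not None else n):
        nxt = dict(known)
        for u in adjacency:
            for v in adjacency[u]:  # u "sends" its current max to v
                if known[u] > nxt[v]:
                    nxt[v] = known[u]
        known = nxt
    return known
```

Isolated nodes simply elect themselves, mirroring how a MANET partitioned into components elects one leader per component.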
6

Σχεδιασμός και ανάλυση αλγορίθμων προσέγγισης με μητρώα χαμηλής τάξης / Algorithms for fast matrix computations

Ζούζιας, Αναστάσιος 24 January 2012 (has links)
The goal of this work is the study of probabilistic algorithms for the approximate solution of problems in scientific computing. The problems we consider are matrix multiplication, the computation of the singular value decomposition (SVD) of a matrix, and the computation of a "compressed" decomposition of a matrix.
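A standard example of the probabilistic approach to matrix multiplication is norm-based column/row sampling in the spirit of Drineas, Kannan, and Mahoney: sample a few column-row outer products with probability proportional to their norms and rescale so the estimate is unbiased. This is a generic sketch, not the thesis's algorithms.

```python
import numpy as np

def approx_matmul(A, B, s, seed=0):
    """Estimate A @ B as a sum of s sampled, rescaled outer products
    A[:, t] B[t, :], with t drawn proportionally to column/row norms."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    probs = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=s, p=probs)
    C = np.zeros((A.shape[0], B.shape[1]))
    for t in idx:
        C += np.outer(A[:, t], B[t, :]) / (s * probs[t])
    return C
```

When all the "mass" sits in one column-row pair the sampler recovers the product exactly, which makes for a deterministic sanity check.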
7

[en] A STUDY ON UNIT-DEMAND AUCTIONS / [pt] UM ESTUDO SOBRE LEILÕES DE DEMANDA UNITÁRIA

MARCELO ALBUQUERQUE FERNANDES MAS 27 October 2006 (has links)
[pt] Este trabalho se concentra no desenvolvimento de mecanismos de leilões reveladores aleatorizados que buscam maximizar simultaneamente a receita e a eficiência econômica, ou função social, de leilões de demanda unitária. Em um leilão de demanda unitária, um conjunto de k bens é leiloado para um conjunto de n consumidores, com a restrição de que nenhum consumidor pode comprar mais de um bem. É apresentado um arcabouço para o desenvolvimento de mecanismos reveladores aleatorizados de complexidade polinomial derivados do mecanismo de Vickrey-Clarke-Groves, ou VCG. Ao invés de utilizar preços de reserva, estas variantes do VCG utilizam como parâmetro o número de bens que devem ser efetivamente vendidos. Os mecanismos se diferenciam entre si pela maneira como é feito o cálculo do número de bens que devem ser vendidos e permitem um balanço interessante entre receita e eficiência econômica, ao mesmo tempo que melhoram os resultados teóricos alcançados para o problema de Leilões de Demanda Unitária. / [en] This work focuses on the development of randomized truthful mechanisms that seek to maximize both the revenue and the economic efficiency, or social welfare, of unit-demand auctions. In a unit-demand auction a set of k items is auctioned to a set of n consumers and no consumer can purchase more than one item. A framework is presented for devising polynomial-time randomized truthful mechanisms that are based on a new variant of the Vickrey-Clarke-Groves (VCG) mechanism. Instead of using reserve prices, this variant of VCG uses the number of objects that we wish to sell as a parameter. The mechanisms obtained differ from each other in the way they select the number of items to be sold and allow an interesting trade-off between revenue and economic efficiency, while improving upon the state-of-the-art results for the Unit-Demand Auction problem.
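For intuition, here is a brute-force sketch of the classical VCG mechanism on a unit-demand instance, the starting point the abstract describes: choose the welfare-maximizing assignment of distinct items to bidders, then charge each winner the externality it imposes on the others. The thesis's mechanisms instead parameterize VCG by the number of items actually sold, which this sketch does not implement.

```python
from itertools import permutations

def max_welfare(values, excluded=frozenset()):
    """Best total value of giving each bidder at most one distinct item,
    ignoring excluded bidders; values[i][j] = bidder i's value for item j.
    Brute force, so only suitable for tiny instances."""
    n, k = len(values), len(values[0])
    slots = list(range(k)) + [None] * n   # None = bidder gets nothing
    best = 0.0
    for assign in permutations(slots, n):
        best = max(best, sum(values[i][j] for i, j in enumerate(assign)
                             if j is not None and i not in excluded))
    return best

def vcg_unit_demand(values):
    """VCG: efficient assignment plus externality payments. For a single item
    this reduces to the familiar second-price (Vickrey) auction."""
    n, k = len(values), len(values[0])
    slots = list(range(k)) + [None] * n
    best_assign, best_w = None, -1.0
    for assign in permutations(slots, n):
        w = sum(values[i][j] for i, j in enumerate(assign) if j is not None)
        if w > best_w:
            best_assign, best_w = assign, w
    payments = []
    for i in range(n):
        own = values[i][best_assign[i]] if best_assign[i] is not None else 0.0
        # payment_i = (others' best welfare without i) - (others' welfare with i)
        payments.append(max_welfare(values, excluded={i}) - (best_w - own))
    return best_assign, payments
```

VCG is efficient but can earn very little revenue, which is exactly the revenue/efficiency trade-off the thesis's randomized variants target.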
8

Approximation algorithms for distributed systems

Pandit, Saurav 01 December 2010 (has links)
Distributed Approximation is a new and rapidly developing discipline that lies at the crossroads of various well-established areas of Computer Science - Distributed Computing, Approximation Algorithms, Graph Theory and often, Computational Geometry. This thesis focuses on the design and analysis of distributed algorithms to solve optimization problems that usually arise in large-scale, heavily dynamic, resource constrained networks, e.g. wireless ad-hoc and sensor networks, P2P systems, mobile networks etc. These problems can often be abstracted by variations of well-known combinatorial optimization problems, such as topology control, clustering etc. Many of these problems are known to be hard (NP-complete). But we need fast and light-weight distributed algorithms for these problems, that yield near-optimal solutions. The results presented in this thesis can be broadly divided in two parts. The first part contains a set of results that obtain improved solutions to the classic problem of computing a sparse "backbone" for Wireless Sensor Networks (WSNs). In graph-theoretic terms, the goal is to compute a spanning subgraph of the input graph, that is sparse, lightweight and has low stretch. The term "low stretch" indicates that in spite of dropping many edges, the distance between any two nodes in the graph is not increased by much. We model WSNs as geometric graphs - unit ball graphs, quasi-unit ball graphs etc. in Euclidean spaces, as well as in more general metric spaces of low doubling dimension. We identify and exploit a variety of geometric features of those models to obtain our results. In the second part of the thesis we focus on distributed algorithms for clustering problems. We present several distributed approximation algorithms for clustering problems (e.g., minimum dominating set, facility location problems) that improve on best known results so far. 
The main contribution here is the design of distributed algorithms where the running time is a "tunable" parameter. The advent of distributed systems of unprecedented scale and complexity motivates the question of whether it is possible to design algorithms that can provide non-trivial approximation guarantees even after very few rounds of computation and message exchanges. We call these algorithms "k-round algorithms". We design k-round algorithms for various clustering problems that yield non-trivial approximation factors even if k is a constant. Additionally, if k assumes poly-logarithmic values, our algorithms match or improve on the best-known approximation factors for these problems.
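The flavor of the clustering problems above can be seen in the classical centralized greedy for minimum dominating set, which achieves an O(ln n) approximation. This sequential sketch is a stand-in for discussion only; the thesis's contribution is distributed, k-round variants of such algorithms.

```python
def greedy_dominating_set(adjacency):
    """Greedy minimum dominating set: repeatedly pick the vertex that covers
    (dominates) the most still-undominated vertices, until all are covered.
    adjacency: {vertex: [neighbors]} for an undirected graph."""
    undominated = set(adjacency)
    chosen = set()
    while undominated:
        best = max(adjacency,
                   key=lambda v: len(({v} | set(adjacency[v])) & undominated))
        chosen.add(best)
        undominated -= {best} | set(adjacency[best])
    return chosen
```

The greedy's inherently sequential choices are what make bounding the quality after only k communication rounds, as in the thesis's "k-round algorithms", nontrivial.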
9

Hierarchical Matrix Operations on GPUs

Boukaram, Wagih Halim 26 April 2020 (has links)
Large dense matrices are ubiquitous in scientific computing, arising from the discretization of integral operators associated with elliptic PDEs, Schur complement methods, covariances in spatial statistics, kernel-based machine learning, and numerical optimization problems. Hierarchical matrices are an efficient way for storing the dense matrices of very large dimension that appear in these and related settings. They exploit the fact that the underlying matrices, while formally dense, are data sparse. They have a structure consisting of blocks, many of which can be well-approximated by low rank factorizations. A hierarchical organization of the blocks avoids superlinear growth in memory requirements to store n × n dense matrices in a scalable manner, requiring O(n) units of storage with a constant depending on a representative rank k for the low rank blocks. The asymptotically optimal storage requirement of the resulting hierarchical matrices is a critical advantage, particularly in extreme computing environments, characterized by low memory per processing core. The challenge then becomes to develop the parallel linear algebra operations that can be performed directly on this compressed representation. In this dissertation, I implement a set of hierarchical basic linear algebra subroutines (HBLAS) optimized for GPUs, including hierarchical matrix vector multiplication, orthogonalization, compression, low rank updates, and matrix multiplication. I develop a library of open source batched kernel operations previously missing on GPUs for the high performance implementation of the H2 operations, while relying wherever possible on existing open source and vendor kernels to benefit from future improvements in the technology. Fast marshaling routines extract the batch operation data from an efficient representation of the trees that compose the hierarchical matrices. 
The methods developed for GPUs extend to CPUs using the same code base with simple abstractions around the batched routine execution. To demonstrate the scalability of the hierarchical operations I implement a distributed memory multi-GPU hierarchical matrix vector product that focuses on reducing communication volume and hiding communication overhead and areas of low GPU utilization using low priority streams. Two demonstrations involving Hessians of inverse problems governed by PDEs and space-fractional diffusion equations show the effectiveness of the hierarchical operations in realistic applications.
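The core idea, applying off-diagonal blocks through low-rank factors so a matvec costs far less than O(n²) per block, can be sketched with a recursive HODLR-style matrix-vector product. This is an illustration only; the dissertation's H2 structures, nested bases, batched kernels, and GPU marshaling are far more involved.

```python
import numpy as np

def truncated_svd(block, k):
    """Rank-k factors of a block: keep the top-k singular triplets."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]   # shapes (m, k) and (k, n)

def hodlr_matvec(A, x, k, min_size=2):
    """HODLR-style matvec: diagonal blocks are recursed into (applied densely
    below min_size); off-diagonal blocks are applied through rank-k factors,
    costing O(k(m+n)) instead of O(mn) per block."""
    n = A.shape[0]
    if n <= min_size:
        return A @ x
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    U12, V12 = truncated_svd(A12, k)
    U21, V21 = truncated_svd(A21, k)
    y1 = hodlr_matvec(A11, x[:h], k, min_size) + U12 @ (V12 @ x[h:])
    y2 = U21 @ (V21 @ x[:h]) + hodlr_matvec(A22, x[h:], k, min_size)
    return np.concatenate([y1, y2])
```

When the off-diagonal blocks are exactly rank k, the compressed matvec agrees with the dense one, up to floating-point roundoff.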
10

Path Centrality: A New Centrality Measure in Networks

Alahakoon, Tharaka 28 May 2010 (has links)
In network analysis, it is useful to identify important vertices in a network. Based on the varying notions of importance of vertices, a number of centrality measures are defined and studied in the literature. Some popular centrality measures, such as betweenness centrality, are computationally prohibitive for large-scale networks. In this thesis, we propose a new centrality measure called k-path centrality and experimentally compare this measure with betweenness centrality. We present a polynomial-time randomized algorithm for distinguishing high k-path centrality vertices from low k-path centrality vertices in any given (unweighted or weighted) graph. Specifically, for any graph G = (V, E) with n vertices and for every choice of parameters α ∈ (0, 1), ε ∈ (0, 1/2), and integer k ∈ [1, n], with probability at least 1 − 1/n² our randomized algorithm distinguishes all vertices v ∈ V that have k-path centrality C_k(v) more than n^α(1 + 2ε) from all vertices v ∈ V that have k-path centrality C_k(v) less than n^α(1 − 2ε). The running time of the algorithm is O(k² ε⁻² n^(1−α) ln n). Theoretically and experimentally, our algorithms are (for suitable choices of parameters) significantly faster than the best known deterministic algorithm for computing exact betweenness centrality values (Brandes' algorithm). Through experiments on both real and randomly generated networks, we demonstrate that vertices that have high betweenness centrality values also have high k-path centrality values.
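A simplified random-walk estimator in the spirit of the algorithm described above: run many short walks from random starts and count how often each vertex is visited. The walk details and the absence of normalization here are my own illustrative choices, not the thesis's exact estimator or constants.

```python
import random

def estimate_kpath_centrality(adjacency, k, trials=2000, seed=0):
    """Visit counts from many non-revisiting random walks of length at most k,
    started at uniformly random vertices. Vertices lying on many short paths
    accumulate high counts."""
    rng = random.Random(seed)
    nodes = list(adjacency)
    counts = {v: 0 for v in nodes}
    for _ in range(trials):
        v = rng.choice(nodes)
        length = rng.randint(1, k)   # walk length chosen uniformly in [1, k]
        visited = {v}
        for _ in range(length):
            candidates = [u for u in adjacency[v] if u not in visited]
            if not candidates:
                break
            v = rng.choice(candidates)
            visited.add(v)
            counts[v] += 1
    return counts
```

On a star graph the hub lies on every short walk from a leaf, so its count dominates, the qualitative behavior such an estimator is meant to detect.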
