1. On the synthesis of integral and dynamic recurrences. Rapanotti, Lucia, January 1996.
Synthesis techniques for regular arrays provide a disciplined and well-founded approach to the design of classes of parallel algorithms. The design process is guided by a methodology based on a formal notation and transformations. The mathematical model underlying synthesis techniques is that of affine Euclidean geometry with embedded lattice spaces. This model makes computationally powerful methods available as an effective way of engineering regular arrays. At present, however, the applicability of such methods is limited to so-called affine problems. The work presented in this thesis aims at widening the applicability of standard synthesis methods to more general classes of problems. The major contributions of this thesis are the characterisation of classes of integral and dynamic problems, and the provision of techniques for their systematic treatment within the framework of established synthesis methods. The basic idea is the transformation of the initial algorithm specification into a specification with data dependencies of increased regularity, so that corresponding regular arrays can be obtained by a direct application of the standard mapping techniques. The formal development of the techniques is complemented by a number of case studies from the literature.
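The abstract stays at the level of methodology. As a small, hypothetical illustration of what "increasing the regularity of data dependencies" means in this setting (a standard textbook example, not one of the thesis' case studies), consider matrix-vector product: the broadcast of x(j) is an affine dependency, and introducing a propagation variable X makes every dependency uniform (constant distance):

```latex
% Affine specification: every point (i,j) reads x(j), a broadcast (affine) dependency.
y(i) = \sum_{j=1}^{n} a(i,j)\, x(j), \qquad 1 \le i \le n

% Uniformised specification: X propagates x(j) along i, so all dependencies are uniform.
\begin{aligned}
X(i,j) &= \begin{cases} x(j) & \text{if } i = 1\\ X(i-1,j) & \text{if } i > 1 \end{cases}\\
Y(i,j) &= Y(i,j-1) + a(i,j)\, X(i,j), \qquad Y(i,0) = 0\\
y(i)   &= Y(i,n)
\end{aligned}
```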
2. Engineering Algorithms for Solving Geometric and Graph Problems on Large Data Sets. Cosgaya Lozano, Adan Jose, 14 March 2011.
This thesis focuses on the engineering of algorithms for massive data sets. In recent years, massive data sets have become ubiquitous and existing computing applications, for the most part, cannot handle these data sets efficiently: either they crash or their performance degrades to a point where they take unacceptably long to process the input. Parallel computing and I/O-efficient algorithms provide the means to process massive amounts of data efficiently. The work presented in this thesis makes use of these techniques and focuses on obtaining practically efficient solutions for specific problems in computational geometry and graph theory. We focus our attention first on skyline computations. This problem arises in decision-making applications and has been well studied in computational geometry and also by the database community in recent years. Most of the previous work on this problem has focused on sequential computations using a single processor, and the algorithms produced are not able to efficiently process data sets beyond the capacity of main memory. Such massive data sets are becoming more common; thus, parallelizing the skyline computation and eliminating the I/O bottleneck in large-scale computations is increasingly important in order to retrieve the results in a reasonable amount of time. Furthermore, we address two fundamental problems of graph analysis that appear in many application areas and which have eluded efforts to develop theoretically I/O-efficient solutions: computing the strongly connected components of a directed graph and topological sorting of a directed acyclic graph. To approach these problems, we designed algorithms, developed efficient implementations and, using extensive experiments, verified that they perform well in practice. Our solutions are based on well understood algorithmic techniques. The experiments show that, even though some of these techniques do not lead to provably efficient algorithms, they do lead to practically efficient heuristic solutions. In particular, our parallel algorithm for skyline computation is based on divide-and-conquer, while the strong connectivity and topological sorting algorithms use techniques such as graph contraction, the Euler technique, list ranking, and time-forward processing.
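The abstract only names the techniques; as a rough illustration of what a divide-and-conquer skyline computation looks like, here is a minimal in-memory sketch (assuming smaller coordinate values are preferred; the parallel and I/O-efficient machinery that constitutes the thesis' actual contribution is omitted):

```python
def dominates(p, q):
    """p dominates q if p is no worse in every coordinate and strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive O(n^2) dominance filter: keep the points not dominated by any other."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def skyline_dc(points):
    """Divide-and-conquer skyline: solve halves recursively, then merge by
    discarding points dominated by the other half's skyline."""
    if len(points) <= 32:
        return skyline(points)
    mid = len(points) // 2
    left, right = skyline_dc(points[:mid]), skyline_dc(points[mid:])
    merged = [p for p in left if not any(dominates(q, p) for q in right)]
    merged += [q for q in right if not any(dominates(p, q) for p in left)]
    return merged

print(skyline_dc([(1, 5), (2, 2), (4, 1), (3, 3)]))  # (3, 3) is dominated by (2, 2)
```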
3. On the design of architecture-aware algorithms for emerging applications. Kang, Seunghwa, 30 January 2011.
This dissertation maps various kernels and applications to a spectrum of programming models and architectures and also presents architecture-aware algorithms for different systems. The kernels and applications discussed in this dissertation have widely varying computational characteristics. For example, we consider both dense numerical computations and sparse graph algorithms. This dissertation also covers emerging applications from image processing, complex network analysis, and computational biology.
We map these problems to diverse multicore processors and manycore accelerators. We also use new programming models (such as Transactional Memory, MapReduce, and Intel TBB) to address the performance and productivity challenges of these problems. Our experiences highlight the importance of mapping applications to appropriate programming models and architectures. We also identify several limitations of current system software and architectures, along with directions for improving them. The discussion focuses on system software and architectural support for nested irregular parallelism, Transactional Memory, and hybrid data transfer mechanisms. We believe that the complexity of parallel programming can be significantly reduced via collaborative efforts among researchers and practitioners from different domains. This dissertation contributes to these efforts by providing benchmarks and suggestions for improving system software and architectures.
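As a hedged illustration of the kind of programming-model mapping discussed above (not code from the dissertation, and using a generic map/reduce pattern rather than any of the named frameworks), the following sketch expresses a simple graph kernel, vertex-degree counting, with a process pool:

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_phase(edge_chunk):
    # Local "map": emit one degree count per endpoint of every edge in the chunk.
    counts = Counter()
    for u, v in edge_chunk:
        counts[u] += 1
        counts[v] += 1
    return counts

def reduce_phase(acc, part):
    # "Reduce": merge the partial degree counts.
    acc.update(part)
    return acc

def degree_counts(edge_chunks, workers=4):
    with Pool(workers) as pool:
        partials = pool.map(map_phase, edge_chunks)
    return reduce(reduce_phase, partials, Counter())

if __name__ == "__main__":
    chunks = [[(0, 1), (1, 2)], [(2, 0), (2, 3)]]
    print(degree_counts(chunks))  # vertex degrees: 0->2, 1->2, 2->3, 3->1
```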
4. Computing Measures of Non-Planarity. Wiedera, Tilo, 22 December 2021.
Planar graphs have a rich history that dates back to the 18th century. They form one of the core concepts of graph theory. In computational graph theory, they offer broad advantages to algorithm design, and many groundbreaking results are based on them. Formally, a given graph is either planar or non-planar. However, there exists a diverse set of established measures that estimate how far from being planar a given graph is. In this thesis, we aim at evaluating and improving algorithms to compute these measures of non-planarity. In particular, we study (1) the problem of finding a maximum planar subgraph, i.e., a planar subgraph obtained by removing as few edges as possible; (2) the problem of embedding a graph on a surface of lowest possible genus; and (3) the problem of drawing a graph with as few edge crossings as possible. These problems constitute classical questions in graph drawing, and each of them is NP-hard. Still, exact (exponential-time) algorithms for them are of high interest and have been the subject of study for decades. We propose novel mathematical programming models, based on different planarity criteria, to compute maximum planar subgraphs and low-genus embeddings. The key aspect of our most successful new models is that they also carefully describe the relation between embedded (sub-)graphs and their duals. Based on these models, we design algorithms that beat the respective state of the art by orders of magnitude. We back these claims by extensive computational studies and rigorously show the theoretical advantages of our new models. Besides exact algorithms, we consider heuristic and approximation approaches to the maximum planar subgraph problem. Furthermore, in the realm of crossing numbers, we present an automated proof extraction for easily verifying the crossing number of any given graph, give a new hardness result for a subproblem that arises, e.g., when enumerating simple drawings, and resolve a conjecture regarding high node degree in minimal obstructions for low crossing number.
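The thesis develops exact mathematical programming models; as a much simpler point of reference (not the method of the thesis), the maximum planar subgraph problem can also be attacked with a greedy heuristic that inserts edges only while a planarity test succeeds. A minimal sketch, assuming NetworkX is available:

```python
import networkx as nx

def greedy_planar_subgraph(G):
    """Greedy heuristic for the maximum planar subgraph problem:
    add edges one by one and keep an edge only if the graph stays planar.
    A simple baseline, not the exact ILP-based method developed in the thesis."""
    H = nx.Graph()
    H.add_nodes_from(G)
    for u, v in G.edges():
        H.add_edge(u, v)
        is_planar, _ = nx.check_planarity(H)
        if not is_planar:
            H.remove_edge(u, v)
    return H

# Example: K5 is non-planar; the heuristic removes at least one edge.
K5 = nx.complete_graph(5)
P = greedy_planar_subgraph(K5)
print(K5.number_of_edges() - P.number_of_edges(), "edge(s) removed")
```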
5. Automated Performance Test Generation and Comparison for Complex Data Structures - Exemplified on High-Dimensional Spatio-Temporal Indices. Menninghaus, Mathias, 23 August 2018.
There exist numerous approaches to index either spatio-temporal or high-dimensional data, but none of them is able to efficiently index hybrid data, i.e., data that is both spatio-temporal and high-dimensional. As the best high-dimensional indexing techniques can only index point data (and not now-relative data), and the best spatio-temporal indexing techniques suffer from the curse of dimensionality, this thesis introduces the Spatio-Temporal Pyramid Adapter (STPA). The STPA maps spatio-temporal data to points, maps now-values to the median of the data set, and indexes the result with the pyramid technique. For high-dimensional and spatio-temporal index structures, no generally accepted benchmark exists. Most index structures are evaluated only with custom benchmarks and compared to a tiny set of competitors. Such benchmarks may be biased: a structure may be designed to perform well in a particular benchmark, or a benchmark may not cover a particular speciality of the investigated structures. In this thesis, the Interface Based Performance Comparison (IBPC) technique is introduced. It automatically generates test sets with high code coverage on the system under test (SUT), based on all functions defined by an interface that all competitors support. Every test set is run on every SUT, and the performance results are weighted by the achieved coverage and summed up. These weighted performance results are then used to compare the structures. An implementation of the IBPC, the Performance Test Automation Framework (PTAF), is compared to a classic custom benchmark, to a workload generator whose parameters are optimized by a genetic algorithm, and to a PTAF variant that incorporates the specific behavior of the systems under test. This is done for a set of two high-dimensional spatio-temporal indices and twelve variants of the R-tree. The evaluation indicates that PTAF performs at least as well as the other approaches in terms of minimal test cases with maximized coverage. Several case studies on PTAF demonstrate its wide-ranging abilities.
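A minimal sketch of the classical pyramid-technique mapping that the STPA builds on, as described in the original pyramid-technique literature (the STPA's handling of now-values and its spatio-temporal-to-point mapping are not shown; points are assumed to be normalised to the unit hypercube):

```python
def pyramid_value(v):
    """Map a point v in the unit hypercube [0, 1]^d to its 1-D pyramid value.
    The integer part identifies the pyramid containing v, the fractional part
    is the point's height within that pyramid; the 1-D values can then be
    stored in an ordinary B+-tree."""
    d = len(v)
    # Dimension in which v deviates most from the centre of the cube.
    j_max = max(range(d), key=lambda j: abs(0.5 - v[j]))
    height = abs(0.5 - v[j_max])
    pyramid = j_max if v[j_max] < 0.5 else j_max + d
    return pyramid + height

# Example: a 4-dimensional point falls into pyramid 1 + d = 5 at height 0.4.
print(pyramid_value([0.2, 0.9, 0.55, 0.4]))  # 5.4
```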