11

Exploiting parallelism in decomposition methods for constraint satisfaction

Akatov, Dmitri January 2010 (has links)
Constraint Satisfaction Problems (CSPs) are NP-complete in general; however, there are many tractable subclasses defined by restricting the structure of their underlying hypergraphs. It is a well-known fact, for instance, that CSPs whose underlying hypergraph is acyclic are tractable. Attempts to define “nearly acyclic” hypergraphs led to the definition of various hypergraph decomposition methods. An important member of this class is the hypertree decomposition method, introduced by Gottlob et al. It has the property that CSPs falling into this class can be solved efficiently, and that hypergraphs in this class can also be recognized efficiently. Beyond polynomial tractability, complexity analysis has shown that both of the aforementioned problems lie in the low complexity class LOGCFL and are therefore efficiently parallelizable. A parallel algorithm has been proposed for the “evaluation problem”; however, all algorithms for the “recognition problem” presented to date are sequential. The main contribution of this dissertation is an object-oriented programming library, including a task scheduler, which allows the parallelization of a whole range of computational problems that satisfy certain complexity-theoretic restrictions. The library merely requires the programmer to implement several classes and methods representing a general alternating algorithm, while the mechanics of the task scheduler remain hidden. In particular, we use this library to create an efficient parallel algorithm that computes hypertree decompositions of a fixed width. Another, more theoretical, result is the definition of a new type of decomposition method, called Balanced Decompositions. Solving CSPs of bounded balanced width, and recognizing such hypergraphs, is only quasi-polynomial, but still parallelizable to a certain extent. A complexity-theoretic analysis leads to the definition of a new complexity class hierarchy, called the DC-hierarchy, whose first class, DC1, precisely captures the complexity of solving CSPs of bounded balanced width.
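As an illustration of the kind of interface such a library could expose (a hedged sketch, not the actual API from the dissertation; all class and method names are hypothetical), the programmer would subclass an alternating-algorithm base class and leave the scheduling to the library:

```python
# Hedged sketch (not the dissertation's actual library): the user describes a
# general alternating algorithm by subclassing, and a scheduler decides
# acceptance. All names are hypothetical.
from abc import ABC, abstractmethod

class AlternatingAlgorithm(ABC):
    @abstractmethod
    def is_existential(self, config):
        """True if `config` is an existential configuration, False if universal."""

    @abstractmethod
    def successors(self, config):
        """Configurations reachable from `config` in one step."""

    @abstractmethod
    def is_accepting(self, config):
        """True if the leaf configuration `config` accepts."""

class Scheduler:
    """Evaluates acceptance of an alternating computation.

    A real task scheduler would hand the recursive calls below to parallel
    workers; this sketch evaluates them sequentially, depth first.
    """
    def __init__(self, algorithm):
        self.algorithm = algorithm

    def accepts(self, config):
        succs = list(self.algorithm.successors(config))
        if not succs:
            return self.algorithm.is_accepting(config)
        if self.algorithm.is_existential(config):
            return any(self.accepts(s) for s in succs)
        return all(self.accepts(s) for s in succs)
```

In the actual library the recursive calls inside `accepts` would become tasks handed to the parallel scheduler; the sketch only shows the alternation logic the programmer supplies.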
12

A director's mash-up of She stoops to conquer or The mistakes of a night by Oliver Goldsmith

Clippard, Kristin 01 May 2013 (has links)
Kristin Clippard's MFA Thesis Project: A Director's Mash-up of She Stoops to Conquer, or The Mistakes of a Night by Oliver Goldsmith
13

Large-scale and high-quality multi-view stereo

Vu, Hoang Hiep 05 December 2011 (has links) (PDF)
Acquiring 3D models of real objects and scenes is indispensable in many practical applications, such as digital archives, the game and entertainment industries, engineering, and advertising. There are two main methods for 3D acquisition: laser-based reconstruction (an active method) and image-based reconstruction from multiple images of the scene taken from different viewpoints (a passive method). While laser-based reconstruction achieves high accuracy, it is complex, expensive, and difficult to set up for large-scale outdoor reconstruction. Image-based, or multi-view stereo, methods are more versatile, easier, faster, and cheaper. When this thesis began, most multi-view methods could handle only low-resolution images captured in a controlled environment. This thesis addresses both the large-scale and the high-accuracy aspects of multi-view stereo. We significantly improve several previous methods and combine them into a remarkably effective multi-view pipeline with GPU acceleration. From high-resolution images, we produce highly complete and accurate meshes that achieve the best scores in many internationally recognized benchmarks. Aiming at even larger scale, on one hand we develop divide-and-conquer approaches in order to reconstruct many small parts of a large scene; on the other hand, to combine the separate partial results, we create a new merging method that can automatically and quickly merge hundreds of meshes. With all these components, we successfully reconstruct highly accurate watertight meshes of cities and historical monuments from large collections of high-resolution images (around 1600 five-megapixel images).
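A minimal sketch of the divide-and-conquer idea described above, assuming the scene is split by recursively bisecting the set of camera positions along its longest axis (the function name and threshold are made up for illustration; the thesis pipeline is considerably more involved):

```python
# Hedged sketch of the divide-and-conquer splitting step (not the thesis
# pipeline): recursively bisect the camera positions until each part is small
# enough to reconstruct on its own. Threshold and names are hypothetical.
def split_scene(cameras, max_cameras=50):
    """cameras: list of (x, y, z) positions; returns a list of camera groups."""
    if len(cameras) <= max_cameras:
        return [cameras]
    # Split along the axis with the largest spatial extent.
    axis = max(range(3), key=lambda a: max(c[a] for c in cameras) - min(c[a] for c in cameras))
    ordered = sorted(cameras, key=lambda c: c[axis])
    mid = len(ordered) // 2
    return split_scene(ordered[:mid], max_cameras) + split_scene(ordered[mid:], max_cameras)
```

Each resulting group would then be reconstructed independently, with the partial meshes combined afterwards by the merging method described in the abstract.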
15

A Scalable Partial-Order Data Structure for Distributed-System Observation

Ward, Paul January 2001 (has links)
Distributed-system observation is foundational to understanding and controlling distributed computations. Existing tools for distributed-system observation are constrained in the size of computation that they can observe by three fundamental problems. They lack scalable information collection, scalable data structures for storing and querying the information collected, and scalable information-abstraction schemes. This dissertation addresses the second of these problems. Two core problems were identified in providing a scalable data structure. First, in spite of the existence of several distributed-system-observation tools, the requirements of such a structure were not well defined. Rather, current tools appear to be built on the basis of events as the core data structure. Events were assigned logical timestamps, typically Fidge/Mattern, as needed to capture causality. Algorithms then took advantage of additional properties of these timestamps that are not explicit in the formal semantics. This dissertation defines the data-structure interface precisely, and goes some way toward reworking algorithms in terms of that interface. The second problem is providing an efficient, scalable implementation for the defined data structure. The key issue in solving this is to provide a scalable precedence-test operation. Current tools use the Fidge/Mattern timestamp for this. While this provides a constant-time test, it requires space per event equal to the number of processes. As the number of processes increases, the space consumption becomes sufficient to affect the precedence-test time because of caching effects. It also becomes problematic when the timestamps need to be copied between processes or written to a file. Worse, existing theory suggested that the space-consumption requirement of Fidge/Mattern timestamps was optimal. In this dissertation we present two alternative timestamp algorithms that require substantially less space than does the Fidge/Mattern algorithm.
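For reference, the Fidge/Mattern precedence test discussed above can be sketched as follows; each event carries a vector timestamp with one entry per process, and the example events below are made up:

```python
# Hedged sketch of the Fidge/Mattern precedence test; the events and clock
# values are hypothetical. Each event stores the process it occurred on and a
# vector timestamp of length N (one entry per process).
def precedes(e, f):
    """Componentwise test: e happened-before f iff V(e) <= V(f) componentwise
    and the two timestamps differ."""
    return all(a <= b for a, b in zip(e["clock"], f["clock"])) and e["clock"] != f["clock"]

def precedes_fast(e, f):
    """Constant-time test for two distinct events: e (on process p) happened
    before f iff f's entry for p has reached e's own counter."""
    p = e["proc"]
    return f["clock"][p] >= e["clock"][p]

# Three-process example (hypothetical): e is the first event on process 0;
# f is the second event on process 1 and has received a message after e.
e = {"proc": 0, "clock": [1, 0, 0]}
f = {"proc": 1, "clock": [1, 2, 0]}
print(precedes(e, f), precedes_fast(e, f))  # True True
```

The constant-time variant is what makes the test fast, but it is also why every event must carry a vector as long as the number of processes, which is exactly the space cost the dissertation attacks.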
17

Shortest Path Queries in Very Large Spatial Databases

Zhang, Ning January 2001 (has links)
Finding shortest paths in a graph has been studied for a long time, and there are many main-memory-based algorithms for this problem. Among these, Dijkstra's shortest path algorithm is one of the most commonly used efficient algorithms for graphs with non-negative edge weights. Even more efficient algorithms have been developed recently for graphs with particular properties, such as edge weights restricted to a range of integers. All of the algorithms mentioned require the graph to reside entirely in main memory. However, for very large graphs, such as the digital maps managed by Geographic Information Systems (GIS), this requirement cannot be satisfied in most cases, so the algorithms mentioned above are not appropriate. My objective in this thesis is to design external-memory (disk-based) shortest path algorithms and data structures for very large digital maps, and to evaluate their performance. In particular, the following questions are studied: What have other researchers done on shortest path queries in very large digital maps? What could be improved over previous work? How efficient are our new shortest path algorithms on digital maps, and what factors affect their efficiency? What can be done based on the algorithm? In this thesis, we give a disk-based Dijkstra-like algorithm that answers shortest path queries based on pre-processing information. Experiments based on our Java implementation are presented to show which factors affect the running time of our algorithms.
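Since the thesis builds on Dijkstra's algorithm, a standard in-memory version is sketched below for reference; the disk-based variant developed in the thesis would instead page fragments of the graph in from secondary storage. The example graph is made up.

```python
# Hedged reference sketch of in-memory Dijkstra (the thesis's disk-based
# variant fetches graph fragments from disk rather than keeping `graph` in
# memory). The example graph is hypothetical.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, non_negative_weight), ...]}; returns distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

print(dijkstra({"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}, "a"))
# {'a': 0, 'b': 2, 'c': 3}
```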
18

Tectonic smoothing and mapping

Ni, Kai 16 May 2011 (has links)
Large-scale mapping has become key to numerous applications, e.g. simultaneous localization and mapping (SLAM) for autonomous robots. Despite the success of many SLAM projects, there are still some challenging scenarios in which most current algorithms are not able to deliver an exact solution fast enough. One of these challenges is the size of SLAM problems, which has increased by several orders of magnitude over the last decade. Another challenge for SLAM problems is the large amount of noise baked into the measurements, which often yields poor initializations and slows down or even breaks the optimization. Urban 3D reconstruction is another popular application of large-scale mapping and has recently received considerable attention from the computer vision community. High-quality 3D models are useful in various successful cartographic and architectural applications, such as Google Earth or Microsoft Live Local. At the heart of urban reconstruction problems is structure from motion (SfM). Due to the wide availability of cameras, especially on handheld devices, SfM is becoming an increasingly crucial technique for handling large numbers of images. In this thesis, I present a novel batch algorithm, namely Tectonic Smoothing and Mapping (TSAM). I will show that the original SLAM graph can be recursively partitioned into multi-level submaps using the nested dissection algorithm, which leads to the cluster tree, a powerful graph representation. By employing nested dissection, the algorithm greatly minimizes the dependencies between two subtrees, and the optimization of the original graph can be done using bottom-up inference along the corresponding cluster tree. To speed up the computation, a base node is introduced for each submap and is used to represent the rigid transformation of the submap in the global coordinate frame. As a result, the optimization moves the base nodes rather than the actual submap variables. I will also show that TSAM can be successfully applied to the SfM problem, in which a hypergraph representation is employed to capture the pairwise constraints between cameras. The hierarchical partitioning based on the hypergraph not only yields a cluster tree, as in the SLAM problem, but also forces the resulting submaps to be nonsingular. I will demonstrate the TSAM algorithm using various simulated and real-world data sets.
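A toy illustration of the base-node idea (this is not the TSAM implementation; the 2D poses below are made up): each submap keeps its variables in a local frame plus one base pose, and the global pose of a variable is the base pose composed with its local pose, so the optimizer can shift a whole submap by moving only its base node.

```python
# Toy 2D illustration of the base-node idea (not the TSAM code): poses are
# (x, y, theta) tuples and all values are hypothetical.
import math

def compose(base, local):
    """Return the global pose of `local`, which is expressed relative to `base`."""
    bx, by, bt = base
    lx, ly, lt = local
    return (bx + lx * math.cos(bt) - ly * math.sin(bt),
            by + lx * math.sin(bt) + ly * math.cos(bt),
            bt + lt)

# One submap: two variables stored in the submap's local frame plus a base pose.
base_pose = (10.0, 5.0, math.pi / 2)
local_vars = [(1.0, 0.0, 0.0), (2.0, 0.5, 0.1)]
# Adjusting only `base_pose` rigidly moves every variable of the submap.
print([compose(base_pose, v) for v in local_vars])
```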
19

Structured numerical problems in contemporary applications

Sustik, Mátyás Attila 31 October 2013 (has links)
The presence of structure in a computational problem can often be exploited and can lead to a more efficient numerical algorithm. In this dissertation, we look at structured numerical problems arising from applications in wireless communications and machine learning that also impact other areas of scientific computing. In wireless communication system design, certain structured matrices (frames) need to be generated. The design of such matrices is equivalent to a symmetric inverse eigenvalue problem in which the values of the diagonal elements are prescribed. We present algorithms that are capable of generating a larger set of these constructions than previous algorithms. We also discuss the existence of equiangular tight frames, that is, frames satisfying additional structural properties. Kernel learning is an important class of problems in machine learning. It often relies on efficient numerical algorithms that solve underlying convex optimization problems. In our work, the objective functions to be minimized are the von Neumann and the LogDet Bregman matrix divergences. The algorithm that solves this optimization problem performs matrix updates based on repeated eigendecompositions of diagonal-plus-rank-one matrices in the case of the von Neumann matrix divergence, and on Cholesky updates in the case of the LogDet Bregman matrix divergence. Our contribution exploits the low-rank representations and the structure of the constraint matrices, resulting in more efficient algorithms than previously known. We also present two specialized zero-finding algorithms in which we exploit structure through the shape and exact formulation of the objective function. The first zero-finding task arises during the matrix update step that is part of the above-mentioned kernel learning application. The second zero-finding problem is for the secular equation; it is equivalent to computing the eigenvalues of a diagonal-plus-rank-one matrix. The secular equation arises in various applications, the best known being the divide-and-conquer eigensolver. In our solutions, we build upon a somewhat forgotten zero-finding method by P. Jarratt, first described in 1966. The method employs first derivatives only and needs the same number of evaluations as Newton's method, but converges faster. Our contributions are the more efficient specialized zero-finding algorithms.
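For a diagonal-plus-rank-one matrix D + rho * z z^T with rho > 0, the secular equation mentioned above is f(lam) = 1 + rho * sum_i z_i^2 / (d_i - lam) = 0, with one eigenvalue in each interval between consecutive d_i and one above the largest d_i. The sketch below uses plain bisection purely to illustrate this root structure; the dissertation's point is precisely that a specialized (Jarratt-type) iteration is faster. The example data are made up.

```python
# Hedged sketch: the secular function for D + rho * z z^T and a plain bisection
# root finder, used here only to show the root structure. The example data at
# the bottom are hypothetical.
def secular(lam, d, z, rho):
    """f(lam) = 1 + rho * sum_i z_i^2 / (d_i - lam)."""
    return 1.0 + rho * sum(zi * zi / (di - lam) for di, zi in zip(d, z))

def root_in_interval(d_lo, d_hi, d, z, rho, iters=60):
    """Bisection for the unique root of f in (d_lo, d_hi), assuming rho > 0.

    On this interval f increases monotonically from -inf to +inf.
    """
    eps = 1e-9 * (d_hi - d_lo)
    lo, hi = d_lo + eps, d_hi - eps
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if secular(mid, d, z, rho) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: D = diag(1, 2, 3), rho = 1, z roughly (1, 1, 1)/sqrt(3).
d, z, rho = [1.0, 2.0, 3.0], [0.577, 0.577, 0.577], 1.0
print([root_in_interval(d[i], d[i + 1], d, z, rho) for i in range(2)])
```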
20

Escalonamento Work-Stealing de programas Divisão-e-Conquista com MPI-2 / Scheduling Divide-and-Conquer programs by Work-Stealing with MPI-2

Pezzi, Guilherme Peretti January 2006 (has links)
Com o objetivo de ser portável e eficiente em arquiteturas HPC atuais, a execução de um programa paralelo deve ser adaptável. Este trabalho mostra como isso pode ser atingido utilizando MPI, através da criação dinâmica de processos, integrada com programação Divisão-e-Conquista e uma estratégia Work-Stealing para balancear os processos MPI, em ambientes heterogêneos e/ou dinâmicos, em tempo de execução. Este trabalho explica como implementar uma aplicação segundo o modelo de Divisão-e-Conquista com MPI, bem como a implementação de uma estratégia Work-Stealing. São apresentados resultados experimentais baseados em uma aplicação sintética, o problema das N-Rainhas (N-Queens). Validam-se tanto a adaptabilidade quanto a eficiência do código. Os resultados mostram que é possível utilizar um padrão amplamente difundido como o MPI, mesmo em plataformas de HPC não tão homogêneas como um cluster. / In order to be portable and efficient on modern HPC architectures, the execution of a parallel program must be adaptable. This work shows how this can be achieved with MPI, through dynamic process creation, coupled with Divide-and-Conquer programming and a Work-Stealing strategy to balance the MPI processes in a heterogeneous and/or dynamic environment at runtime. The application of Divide-and-Conquer with MPI is explained, as well as the implementation of a Work-Stealing strategy. Experimental results are provided, based on a synthetic application, the N-Queens computation. Both the adaptability of the code and its efficiency are validated. The results show that it is possible to use widespread standards such as MPI, even on parallel HPC platforms that are not as homogeneous as a cluster.
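The N-Queens benchmark used above has a natural divide-and-conquer structure: each partial placement generates one subproblem per valid position in the next row, and it is these subproblems that the thesis distributes to dynamically created MPI processes. The sequential Python sketch below only illustrates that recursive decomposition, not the MPI-2 work-stealing machinery:

```python
# Sequential sketch of the divide-and-conquer structure of N-Queens (the MPI-2
# version distributes the recursive calls to dynamically spawned processes,
# which is not shown here).
def n_queens(n, cols=(), diag1=(), diag2=()):
    """Count placements of n non-attacking queens; each recursive call
    corresponds to one subproblem of the divide-and-conquer tree."""
    row = len(cols)
    if row == n:
        return 1
    total = 0
    for col in range(n):
        if col not in cols and row - col not in diag1 and row + col not in diag2:
            total += n_queens(n, cols + (col,), diag1 + (row - col,), diag2 + (row + col,))
    return total

print(n_queens(8))  # 92
```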
