1

Algorithmes à front d'onde et accès transparent aux données / Wavefront algorithms and transparent data access

Clauss, Pierre-Nicolas 05 November 2009 (has links)
This thesis introduces two tools for efficient access to the data of a wavefront algorithm in an out-of-core execution context. These algorithms are easy to parallelize using macro-pipelining techniques, which allow computation and communication to overlap. The first tool is built on the observation that input/output operations perform disastrously in such a setting: the data is scattered on disk, and moving it in and out of memory is slow and expensive. The new on-disk data layout proposed here resolves these issues by accessing data only in a contiguous way. While the first tool describes how to access the data, the second, a synchronization model, describes when to access it. Indeed, the concurrent and parallel execution of wavefront algorithms requires strict control over access and waiting periods. The model presented in this thesis fulfills this role while providing guarantees on properties of interest for iterative applications: proactive locking, deadlock-free evolution, and homogeneous progression of tasks. Both tools were tested intensively on a reference benchmark and evaluated experimentally on machines of the Grid'5000 platform.
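The contiguous on-disk layout that the first tool proposes can be pictured with a toy sketch: tiles are grouped by anti-diagonal, so each wavefront step touches the disk with exactly one contiguous read and one contiguous write. The file naming, tile sizes, and the stencil below are assumptions made for illustration; this is not the thesis code.

```python
import numpy as np

T, B = 8, 256                      # T x T grid of B x B tiles

def diag_path(d):                  # one file per anti-diagonal keeps its tiles contiguous
    return f"diag_{d:03d}.bin"

def tiles_on_diag(d):              # tile coordinates (i, j) with i + j == d
    return [(i, d - i) for i in range(max(0, d - T + 1), min(d, T - 1) + 1)]

# Initialization: each anti-diagonal is written as one contiguous block of tiles.
for d in range(2 * T - 1):
    np.zeros((len(tiles_on_diag(d)), B, B)).tofile(diag_path(d))

# Wavefront sweep: every tile on diagonal d depends only on tiles of diagonal d-1,
# so each step is one contiguous read plus one contiguous write; reading the inputs
# of diagonal d+1 can then overlap the computation on diagonal d (macro-pipelining).
for d in range(1, 2 * T - 1):
    prev = np.fromfile(diag_path(d - 1)).reshape(-1, B, B)
    cur = np.fromfile(diag_path(d)).reshape(-1, B, B)
    prev_pos = {c: k for k, c in enumerate(tiles_on_diag(d - 1))}
    for k, (i, j) in enumerate(tiles_on_diag(d)):
        up = prev[prev_pos[(i - 1, j)]] if (i - 1, j) in prev_pos else 0.0
        left = prev[prev_pos[(i, j - 1)]] if (i, j - 1) in prev_pos else 0.0
        cur[k] = up + left + 1.0   # toy stencil standing in for the real wavefront kernel
    cur.tofile(diag_path(d))
```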
2

Optimizing locality and parallelism through program reorganization

Krishnamoorthy, Sriram 07 January 2008 (has links)
No description available.
3

Error control and efficient memory management for sparse integral equation solvers based on local-global solution modes

Choi, Jun-shik 01 January 2014 (has links)
This dissertation presents and analyzes two new algorithms for sparse direct solution methods based on the use of local-global solution (LOGOS) modes. The first is a rigorous error control strategy for LOGOS-based matrix factorizations that utilize overlapped, localizing modes (OL-LOGOS) on a shifted grid. The use of OL-LOGOS modes is critical to obtaining asymptotically efficient factorizations from LOGOS-based methods. Unfortunately, the approach also introduces a non-orthogonal basis function structure. This can cause errors to accumulate across the levels of a multilevel implementation, which has previously posed a barrier to rigorous error control for the OL-LOGOS factorization method. This limitation is overcome, and it is shown that the fundamentally non-orthogonal factorization subspaces can be decoupled efficiently in a manner that prevents multilevel error propagation. This renders the OL-LOGOS factorization error controllable in a relative RMS sense. The impact of the new, error-controlled OL-LOGOS factorization algorithm on computational resource utilization is discussed, and several numerical examples illustrate the performance of the improved algorithm relative to previously reported results. The second contribution is the development of efficient out-of-core (OOC) versions of the OL-LOGOS factorization algorithm, which allow the associated software tools to take advantage of additional resources for memory management. The proposed OOC algorithm incorporates a memory page definition tailored to the flow of the OL-LOGOS factorization procedure. The efficiency of this paging scheme is evaluated quantitatively, because the measured performance of the mass storage devices tested does not follow analytical models. The latency and memory usage of the resulting OOC tools are compared with in-core results. Both the new error control algorithm and the OOC method have been incorporated into previously existing software tools, and the dissertation presents results for real-world simulation problems.
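The notion of controlling a factorization "in a relative RMS sense" can be illustrated, very loosely, with a generic low-rank truncation that keeps compressing only while the relative Frobenius-norm error stays under a tolerance. This is a hedged analogy under assumed names, not the OL-LOGOS algorithm:

```python
import numpy as np

def truncate_to_relative_rms(A, tol):
    """Low-rank approximation of A with relative Frobenius (RMS) error <= tol.
    A generic stand-in for error-controlled compression, not the OL-LOGOS code."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    total = float(np.sum(s ** 2))
    r = len(s)
    # Drop trailing singular values while the discarded energy keeps the
    # relative error within the requested tolerance.
    while r > 0 and np.sum(s[r - 1:] ** 2) <= (tol ** 2) * total:
        r -= 1
    return (U[:, :r] * s[:r]) @ Vt[:r], r

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 200))   # nearly rank-50 operator
A += 1e-6 * rng.standard_normal(A.shape)
Ar, r = truncate_to_relative_rms(A, tol=1e-3)
print(r, np.linalg.norm(A - Ar) / np.linalg.norm(A))                  # kept rank, achieved error
```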
4

Algorithmes et structures de données compactes pour la visualisation interactive d’objets 3D volumineux / Algorithms and compact data structures for interactive visualization of gigantic 3D objects

Jamin, Clément 25 September 2009 (has links)
Progressive compression methods are now mature (the rates obtained are close to theoretical bounds), and interactive visualization of huge meshes has been a reality for a few years. However, even though combining compression and visualization is often mentioned as a perspective, very few papers actually address this problem, and the files created by visualization algorithms are often much larger than the original ones. In fact, compression favors a small file size to the detriment of fast data access, whereas visualization methods focus on rendering speed: the two goals conflict and compete. Starting from an existing progressive compression method that is incompatible with selective, interactive refinement and usable only on modestly sized meshes, this thesis attempts to reconcile lossless compression and visualization by proposing new algorithms and data structures that radically reduce the size of the objects while supporting fast, interactive navigation. In addition to this dual capability, the proposed method works out-of-core and can handle meshes containing several hundred million vertices. Furthermore, it has the advantage of handling any n-dimensional simplicial complex, from triangle soups to volumetric meshes.
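The tension the thesis addresses, small files versus fast selective access, is commonly resolved by compressing refinement data in independently decompressible per-cell chunks behind an offset index, so refining one cell costs a single seek and one decompression. The sketch below illustrates that generic layout under assumed names; it is not the format developed in the thesis:

```python
import struct, zlib

def write_store(path, cell_records):
    """cell_records: list of bytes, one refinement payload per cell.
    Layout: [u32 cell count][u64 offset table][concatenated zlib chunks]."""
    chunks = [zlib.compress(rec) for rec in cell_records]
    header = struct.pack("<I", len(chunks))
    offsets, pos = [], len(header) + 8 * len(chunks)
    for c in chunks:
        offsets.append(pos)
        pos += len(c)
    with open(path, "wb") as f:
        f.write(header)
        for off in offsets:
            f.write(struct.pack("<Q", off))
        for c in chunks:
            f.write(c)

def read_cell(path, i):
    """Selective refinement: fetch and decompress only cell i."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<I", f.read(4))
        f.seek(4 + 8 * i)
        (off,) = struct.unpack("<Q", f.read(8))
        end = None
        if i + 1 < n:
            (end,) = struct.unpack("<Q", f.read(8))
        f.seek(off)
        data = f.read() if end is None else f.read(end - off)
    return zlib.decompress(data)

# usage: each record stands in for the refinement operations of one octree cell
write_store("mesh.lod", [f"refinements for cell {i}".encode() * 100 for i in range(8)])
print(len(read_cell("mesh.lod", 3)))
```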
5

Scaling out-of-core k-nearest neighbors computation on single machines / Faire passer à l'échelle le calcul "out-of-core" des K-plus proche voisins sur une seule machine

Olivares, Javier 19 December 2016 (has links)
The K-Nearest Neighbors (KNN) algorithm is an efficient method for finding similar items within a large dataset. Over the years, a huge number of applications have used KNN's capabilities to discover similarities within data generated in areas as diverse as business, medicine, music, and computer science. Although years of research have produced several approaches to this algorithm, its implementation remains a challenge, particularly today, when data is growing at unthinkable rates. In this context, running KNN on large datasets raises two major issues: huge memory footprints and very long runtimes. Because of these high costs in computational resources and time, state-of-the-art KNN works do not consider the fact that data can change over time; they assume that the data remains static throughout the computation, which unfortunately does not match reality at all. This thesis addresses these challenges. Firstly, we propose an out-of-core approach to computing KNN on large datasets using a single commodity PC. We advocate this approach as an inexpensive way to scale the KNN computation compared with the high cost of a distributed algorithm, both in computational resources and in coding, debugging, and deployment effort. Secondly, we propose a multithreaded out-of-core approach to face the challenges of computing KNN on data that changes rapidly and continuously over time. After a thorough evaluation, we observe that our main contributions address the challenges of computing KNN on large datasets, leveraging the restricted resources of a single machine, decreasing runtimes compared with those of the baselines, and scaling the computation on both static and dynamic datasets.
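A minimal sketch of the out-of-core idea: a brute-force KNN over a memory-mapped dataset, processed block by block so that only two blocks of points are resident at a time. This is an illustration of the general approach, not the algorithms developed in the thesis:

```python
import numpy as np

def ooc_knn(path, n, d, k, block=4096):
    """Brute-force k-nearest neighbors over an n x d float32 dataset kept on disk.
    Only two blocks of points plus the (n x k) result arrays are ever resident."""
    X = np.memmap(path, dtype=np.float32, mode="r", shape=(n, d))
    best_d = np.full((n, k), np.inf, dtype=np.float32)   # best distances so far
    best_i = np.full((n, k), -1, dtype=np.int64)         # and their indices
    for qs in range(0, n, block):
        Q = np.asarray(X[qs:qs + block])                 # query block read from disk
        nq = len(Q)
        bd, bi = best_d[qs:qs + nq], best_i[qs:qs + nq]
        for cs in range(0, n, block):
            C = np.asarray(X[cs:cs + block])             # candidate block read from disk
            # squared distances via ||q||^2 + ||c||^2 - 2 q.c (block x block only)
            D = (Q ** 2).sum(1)[:, None] + (C ** 2).sum(1)[None, :] - 2.0 * (Q @ C.T)
            cand = np.broadcast_to(np.arange(cs, cs + len(C)), D.shape)
            # merge the running top-k with this block's candidates (self-matches kept for brevity)
            all_d = np.concatenate([bd, D], axis=1)
            all_i = np.concatenate([bi, cand], axis=1)
            keep = np.argpartition(all_d, k - 1, axis=1)[:, :k]
            rows = np.arange(nq)[:, None]
            bd, bi = all_d[rows, keep], all_i[rows, keep]
        best_d[qs:qs + nq], best_i[qs:qs + nq] = bd, bi
    return best_i, np.sqrt(np.maximum(best_d, 0.0))

# usage on a small synthetic dataset written to disk first
n, d = 10_000, 16
np.random.default_rng(0).standard_normal((n, d), dtype=np.float32).tofile("points.f32")
idx, dist = ooc_knn("points.f32", n, d, k=10, block=2048)
```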
6

[pt] Extração de isosuperfícies de domos de sal em volumes binários massivos / [en] Isosurface extraction of massive salt dome binary volume data

Samuel Bastos de Souza Junior 19 January 2021 (has links)
When extracting isosurfaces from massive volumetric datasets, the output surface is generally dense and may require a great deal of memory for processing. In addition, depending on the extraction method used, the result can also contain several geometric and topological problems. In this study, we experimented with combinations of different isosurface extraction methods together with out-of-core strategies that make intelligent use of computational resources to generate polygonal approximations of these surfaces while preserving the topology of the original segmentation. The implemented method was tested on a real seismic volume for extraction of the salt dome surface.
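The block-wise, out-of-core flavor of the extraction can be sketched as follows, assuming scikit-image's marching cubes as the per-block extractor; vertex welding across block seams and the topology-preservation machinery of the thesis are left out:

```python
import numpy as np
from skimage import measure

def blockwise_isosurface(vol_path, shape, block=64):
    """Extract an isosurface from a large binary volume stored on disk, one
    sub-block at a time, so the full volume never has to be resident in memory.
    Seam vertices are duplicated across blocks; welding them is omitted for brevity."""
    vol = np.memmap(vol_path, dtype=np.uint8, mode="r", shape=shape)
    all_verts, all_faces, base = [], [], 0
    nz, ny, nx = shape
    for z in range(0, nz - 1, block):
        for y in range(0, ny - 1, block):
            for x in range(0, nx - 1, block):
                # one-voxel overlap so triangles crossing block boundaries are not lost
                sub = np.asarray(vol[z:z + block + 1, y:y + block + 1, x:x + block + 1],
                                 dtype=np.float32)
                if sub.min() == sub.max():            # block entirely inside or outside the body
                    continue
                verts, faces, _, _ = measure.marching_cubes(sub, level=0.5)
                all_verts.append(verts + np.array([z, y, x], dtype=np.float32))  # global coords
                all_faces.append(faces + base)
                base += len(verts)
    if not all_verts:
        return np.empty((0, 3)), np.empty((0, 3), dtype=np.int64)
    return np.vstack(all_verts), np.vstack(all_faces)

# usage on a synthetic binary volume (a ball) written to disk first
shape = (128, 128, 128)
zz, yy, xx = np.mgrid[:128, :128, :128]
(((zz - 64) ** 2 + (yy - 64) ** 2 + (xx - 64) ** 2 < 45 ** 2)
 .astype(np.uint8).tofile("volume.u8"))
verts, faces = blockwise_isosurface("volume.u8", shape, block=64)
```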
7

Real-time rendering of very large 3D scenes using hierarchical mesh simplification

Jönsson, Daniel January 2009 (has links)
Captured and generated 3D data can be so large that it creates a problem for today's computers, since it does not fit into main or graphics-card memory. Therefore, methods for handling and rendering the data must be developed. This thesis presents a way to pre-process and render out-of-core height-map data for real-time use. The pre-processing uses a mesh decimation API called Simplygon, developed by Donya Labs, to optimize the geometry. From the height map a normal map can also be created and used at render time to increase the visual quality. In addition to the 3D data, textures are also supported. To decrease the time needed to load an object, the normal and texture maps can be compressed on the graphics card prior to rendering. Three different methods for covering gaps are explored, one of which turns out to be insufficient for rendering cylindrical equidistant projected data. At render time two threads work in parallel. One thread is used to page the data from the hard drive into main and graphics-card memory. The other thread is responsible for rendering all data. To handle precision errors caused by spatial differences in the data, each object receives a local origin and is then rendered relative to the camera. An atmosphere that handles views from both space and ground is computed on the graphics card. The result is an application adapted to current graphics-card technology which can page out-of-core data and render a dataset covering the entire Earth at 500 meters spatial resolution with a realistic atmosphere.
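The two-thread arrangement described above, one thread paging data from disk while the other renders whatever is already resident, follows a standard producer/consumer pattern. A minimal sketch with placeholder load and draw steps (the names and timings are assumptions, not the thesis code):

```python
import queue, threading, time

page_requests = queue.Queue()        # tiles the render thread wants paged in
loaded_tiles = {}                    # tiles already resident: tile id -> data
requested = set()                    # avoid queueing the same tile twice
lock = threading.Lock()

def paging_thread():
    """Loads requested tiles into memory so the render thread never blocks on I/O."""
    while True:
        tile_id = page_requests.get()
        if tile_id is None:                       # shutdown signal
            break
        time.sleep(0.01)                          # stands in for a disk read
        with lock:
            loaded_tiles[tile_id] = bytes(1024)   # stands in for height/normal/texture data

def render_loop(frames, visible_tiles):
    """Draws whatever is resident and asks the pager for anything visible but missing."""
    for _ in range(frames):
        for tile_id in visible_tiles:
            with lock:
                missing = tile_id not in loaded_tiles and tile_id not in requested
                if missing:
                    requested.add(tile_id)
            if missing:
                page_requests.put(tile_id)        # render a low-detail stand-in meanwhile
        time.sleep(1 / 60)                        # stands in for issuing the frame's draw calls

pager = threading.Thread(target=paging_thread, daemon=True)
pager.start()
render_loop(frames=120, visible_tiles=[1, 2, 3])
page_requests.put(None)
pager.join()
```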
8

Transcriptomic Data Analysis Using Graph-Based Out-of-Core Methods

Rogers, Gary L 01 August 2011 (has links)
Biological data derived from high-throughput microarrays can be transformed into finite, simple, undirected graphs and analyzed using tools first introduced by the Langston Lab at the University of Tennessee. Transforming raw data can be broken down into three main tasks: data normalization, generation of similarity metrics, and threshold selection. The choice of methods used in each of these steps affects the final outcome of the graph with respect to size, density, and structure. A number of different algorithms are examined and analyzed to illustrate the magnitude of the effects. Graph-based tools are then used to extract putative gene networks. These tools are loosely based on the concept of a clique, which yields clusters optimized for density. Innovative additions to the paraclique algorithm, developed at the Langston Lab, are introduced to generate results that have the highest average correlation or the highest density. A new suite of algorithms is then presented that exploits a priori gene interactions. Aptly named the anchored analysis toolkit, these algorithms use known interactions as anchor points for generating subgraphs, which are then analyzed for their graph structure. This results in clusters that might otherwise have been lost in noise. A main product of this thesis is a novel collection of algorithms that generate exact solutions to the maximum clique problem for graphs that are too large to fit within core memory. No other algorithms are currently known that produce exact solutions to this problem for extremely large graphs. A combination of in-core and out-of-core techniques is used in conjunction with a distributed-memory programming model. These algorithms take into consideration such pitfalls as external disk I/O and hardware failure and recovery. Finally, a web-based tool is described that provides researchers access to the aforementioned algorithms. The Graph Algorithms Pipeline for Pathway Analysis tool, GrAPPA, was previously developed by the Langston Lab and provides the software needed to take raw microarray data as input and preprocess, analyze, and post-process it in a single package. GrAPPA also provides access to high-performance computing resources via the TeraGrid.
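As an illustration of the clique-centric clustering mentioned above, a toy paraclique-style expansion is sketched below: start from a maximum clique and greedily absorb vertices adjacent to all but a small number (a "glom" factor g) of current members. This assumes networkx and is only a schematic reading of the approach, not the Langston Lab implementation:

```python
import networkx as nx

def paraclique_like(G, g=1):
    """Grow a dense cluster from a maximum clique by adding any vertex that is
    adjacent to all but at most g of the current members (a schematic sketch)."""
    core = max(nx.find_cliques(G), key=len)     # a maximum clique (exponential in general)
    cluster = set(core)
    grew = True
    while grew:
        grew = False
        for v in set(G.nodes()) - cluster:
            missing = sum(1 for u in cluster if not G.has_edge(u, v))
            if missing <= g:
                cluster.add(v)
                grew = True
    return cluster

# toy correlation-style graph: a clique of 5 plus two vertices loosely attached to it
G = nx.complete_graph(5)
G.add_edges_from([(5, i) for i in range(4)])    # vertex 5 misses only one member
G.add_edges_from([(6, i) for i in range(2)])    # vertex 6 misses too many
print(sorted(paraclique_like(G, g=1)))          # -> [0, 1, 2, 3, 4, 5]
```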
9

Una aproximación de alto nivel a la resolución de problemas matriciales con almacenamiento en disco / A high-level approach to solving matrix problems with data stored on disk

Marqués Andrés, M. Mercedes 30 April 2010 (has links)
Various works in the modeling of scientific, technological, and industrial applications require the solution of large, dense linear systems of equations and dense linear least-squares problems. Given these needs, the goal of this thesis has been to design, develop, and evaluate a collection of highly efficient routines for solving large-scale linear systems of equations and linear least-squares problems (matrices with tens of thousands of rows/columns) on current architectures, using out-of-core techniques. Out-of-core techniques extend the memory hierarchy to encompass the secondary-storage level, making it possible to solve large dense linear systems on platforms with a main memory of limited size. The thesis also exploits features of current processors, such as multithreaded architectures and graphics processors.
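To make the out-of-core idea concrete, here is a minimal blocked Cholesky factorization operating on a matrix memory-mapped from disk, so that only a few tiles are resident at any time. This is an illustrative sketch, not one of the thesis routines (which target much larger problems and also exploit multithreading and GPUs):

```python
import numpy as np

def ooc_cholesky(path, n, b=1024):
    """In-place blocked Cholesky (lower triangle) of an n x n SPD matrix stored
    on disk as float64. Tiles are read, factored or updated in core, and written back."""
    A = np.memmap(path, dtype=np.float64, mode="r+", shape=(n, n))
    for k in range(0, n, b):
        kb = min(b, n - k)
        Lkk = np.linalg.cholesky(np.array(A[k:k + kb, k:k + kb]))   # factor diagonal tile
        A[k:k + kb, k:k + kb] = Lkk
        for i in range(k + kb, n, b):                               # panel: A[i,k] <- A[i,k] Lkk^{-T}
            ib = min(b, n - i)
            Aik = np.array(A[i:i + ib, k:k + kb])
            A[i:i + ib, k:k + kb] = np.linalg.solve(Lkk, Aik.T).T
        for i in range(k + kb, n, b):                               # trailing update (lower triangle)
            ib = min(b, n - i)
            Lik = np.array(A[i:i + ib, k:k + kb])
            for j in range(k + kb, i + ib, b):
                jb = min(b, n - j)
                Ljk = np.array(A[j:j + jb, k:k + kb])
                A[i:i + ib, j:j + jb] -= Lik @ Ljk.T
    A.flush()

# usage: build a small SPD matrix on disk, then factor it out-of-core
n = 2048
M = np.random.default_rng(0).standard_normal((n, n))
(M @ M.T + n * np.eye(n)).astype(np.float64).tofile("spd.bin")
ooc_cholesky("spd.bin", n, b=512)
```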