331 |
Storage Format Selection and Optimization for Materialized Intermediate Results in Data-Intensive Flows. Munir, Rana Faisal, 01 February 2021.
Modern organizations produce and collect large volumes of data that need to be processed repeatedly and quickly to gain business insights. For such processing, Data-Intensive Flows (DIFs) are typically deployed on distributed processing frameworks. The DIFs of different users have many computation overlaps (i.e., parts of the processing are duplicated), which wastes computational resources and increases the overall cost. The outputs of these computation overlaps (known as intermediate results) can be materialized for reuse, which, if done properly, reduces cost and saves computational resources. Furthermore, the way such outputs are materialized must be considered, as different storage layouts (i.e., horizontal, vertical, and hybrid) can be used to reduce the I/O cost.
In this PhD work, we first propose a novel approach for automatically materializing the intermediate results of DIFs through a multi-objective optimization method, which can handle multiple, conflicting quality metrics. Next, we study the behavior of the DIF operators that are the first to process the loaded materialized results. Based on this study, we devise a rule-based approach that decides the storage layout for materialized results based on the subsequent operation types. Although these heuristic rules generally improve cost, they do not consider the amount of data read when making the choice, which can lead to a wrong decision. Thus, we design a cost model that is capable of finding the right storage layout for every scenario. The cost model uses data and workload characteristics to estimate the I/O cost of a materialized intermediate result under different storage layouts and chooses the one with the minimum cost. The results show that storage layouts help reduce the loading time of materialized results and, overall, improve the performance of DIFs.
The thesis also focuses on optimizing the configurable parameters of hybrid layouts. We propose ATUN-HL (Auto TUNing Hybrid Layouts), which, based on the same cost model and given the workload and data characteristics, finds the optimal values for the configurable parameters of hybrid layouts (e.g., Parquet).
Finally, the thesis also studies the impact of parallelism in DIFs and hybrid layouts. Our proposed cost model helps to devise an approach for fine-tuning the parallelism by deciding the number of tasks and machines to process the data. Thus, the cost model proposed in this thesis enables choosing the best possible storage layout for materialized intermediate results, tuning the configurable parameters of hybrid layouts, and estimating the number of tasks and machines for the execution of DIFs.
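To make the cost-model idea concrete, here is a minimal sketch of cost-based layout selection. The cost formulas, names, and the row-group size below are illustrative assumptions for exposition, not the model from the thesis.

```python
# Hypothetical sketch of cost-based storage-layout selection; the cost
# formulas and the row-group size are assumptions, not the thesis's model.
import math

LAYOUTS = ["horizontal", "vertical", "hybrid"]

def io_cost(layout, total_rows, total_cols, rows_read, cols_read,
            row_group_rows=10_000):
    """Rough I/O estimate, in cells read, for scanning a materialized result."""
    if layout == "horizontal":
        return rows_read * total_cols            # whole tuples are fetched
    if layout == "vertical":
        return cols_read * total_rows            # whole columns are fetched
    # Hybrid (e.g., Parquet): only the touched row groups, restricted to
    # the touched columns.
    groups = math.ceil(rows_read / row_group_rows)
    return groups * row_group_rows * cols_read

def choose_layout(total_rows, total_cols, rows_read, cols_read):
    return min(LAYOUTS, key=lambda layout: io_cost(
        layout, total_rows, total_cols, rows_read, cols_read))

# A column-selective full scan favors a vertical (or hybrid) layout:
print(choose_layout(total_rows=1_000_000, total_cols=100,
                    rows_read=1_000_000, cols_read=3))   # -> "vertical"
```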
|
332 |
Kahn process networks as concurrent data structures: lock freedom, parallelism, relaxation in shared memory. Lê, Nhat Minh, 09 December 2016.
In this thesis, we are interested in Kahn process networks, a simple yet expressive model of concurrency, and their parallel implementation on modern shared-memory architectures. Kahn process networks expose concurrency to the programmer through an arrangement of sequential processes and single-producer single-consumer channels. The focus is on implementation aspects, with two parameters of particular importance to our study: lock freedom and relaxed memory. The development of fast and efficient lock-free algorithms ties into concerns of controlled resource consumption and reliable performance on current and future platforms with unfair or skewed scheduling, such as virtual machines and GPUs. Our work with relaxed memory models complements this more theoretical approach by offering a window into realistic shared-memory architectures. We present a new lock-free algorithm for a Kahn process network interpreter. It is disjoint-access parallel: we allow multiple threads to work on the same shared Kahn process network, fully utilizing the parallelism exhibited by independent processes. It is non-blocking in that it guarantees global progress in bounded memory, even in the presence of (possibly infinite) delays affecting the executing threads. To our knowledge, it is the first lock-free system of this size; it integrates various well-known non-blocking techniques and concepts (e.g., safe memory reclamation, multi-word updates, assistance) with ideas and optimizations specific to the Kahn network setting. We also discuss a variant of the algorithm, which is blocking and targeted at high-performance computing, with encouraging experimental results.
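For reference, the Kahn programming model itself can be sketched in a few lines. The toy below uses ordinary blocking queues, so it illustrates only the model, not the lock-free interpreter that is the thesis's contribution; the end-of-stream marker is an assumption added for the example.

```python
# A minimal Kahn-process-network illustration: sequential processes linked by
# single-producer single-consumer channels (here, blocking queues).
import threading
import queue

def producer(out_ch):
    for i in range(5):
        out_ch.put(i)                       # write to the channel
    out_ch.put(None)                        # end-of-stream marker (assumed)

def doubler(in_ch, out_ch):
    while (x := in_ch.get()) is not None:   # blocking read, as in Kahn semantics
        out_ch.put(2 * x)
    out_ch.put(None)

def consumer(in_ch):
    while (x := in_ch.get()) is not None:
        print(x)

a, b = queue.Queue(), queue.Queue()
threads = [threading.Thread(target=producer, args=(a,)),
           threading.Thread(target=doubler, args=(a, b)),
           threading.Thread(target=consumer, args=(b,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```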
|
333 |
Simulation of Fracture Tests in Civil Engineering. Bordovský, Gabriel, January 2017.
In this thesis, a program for simulating fracture tests in civil engineering has been optimized. The simulation is used to validate the fracture characteristics of blocks of construction material used in the reconstruction of historic buildings. This thesis illustrates how the processor's potential can be used effectively without loss of output quality. The individual parts of the simulation are analyzed, and for the critical sections, possible optimizations such as vectorization and parallel processing are proposed. The techniques used in this thesis can be applied to similar computing problems to help shorten the required runtime. The prototype processed the simulation in 7.7 hours; the optimized version processes the same simulation in 2.1 hours on one core, or 21 minutes on eight cores. The parallel optimized version is thus 21 times faster than the prototype.
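The kind of rewrite the abstract alludes to can be illustrated with a small, hypothetical kernel; the function below is not taken from the thesis, and the actual simulation code is more involved.

```python
# Illustrative only: replacing an element-by-element loop with a single
# vectorized array operation, the kind of rewrite the abstract describes.
# The "stress update" kernel and the constant E are assumptions.
import numpy as np

def stress_update_scalar(stress, strain, E):
    out = np.empty_like(stress)
    for i in range(len(stress)):        # one element at a time: poor SIMD use
        out[i] = stress[i] + E * strain[i]
    return out

def stress_update_vectorized(stress, strain, E):
    return stress + E * strain          # one fused, vectorizable array op

stress = np.random.rand(1_000_000)
strain = np.random.rand(1_000_000)
assert np.allclose(stress_update_scalar(stress, strain, 2.1e11),
                   stress_update_vectorized(stress, strain, 2.1e11))
```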
|
334 |
Issue of Migrating from Single-Core to Multi-Core Implementation of Operating System. Matyáš, Jan, January 2014.
This thesis discusses the changes needed to run MicroC/OS-II on a multicore processor, specifically the Zynq-7000 All Programmable SoC, which uses two ARM Cortex-A9 cores. Problems that arise during this transition are also discussed.
|
335 |
The algorithms of digital image processing on graphics cards. Bielczyk, Marek, January 2016.
The purpose of this work is to show the possibility of using graphics cards for processing video signals. The work focuses in particular on the CUDA and OpenCL technologies. The solution first looks at graphics cards themselves, showing how their components and performance have changed over time. It then presents the CUDA and OpenCL technologies, with code samples and explanations of what each piece of code does. The output of this work is a set of programs implemented in both technologies and for both types of processing unit. The contribution of this work is to show the differences between processing units, which can guide the right choice when designing one's own algorithm.
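For orientation, the per-pixel pattern that both CUDA and OpenCL kernels typically express with one GPU thread per pixel can be sketched as an elementwise array operation; this is an illustration of the pattern only, not code from the work.

```python
# The data-parallel per-pixel pattern behind typical CUDA/OpenCL image
# kernels, sketched as a NumPy elementwise operation: every output pixel
# depends only on its own input pixel, so pixels can be processed in parallel.
import numpy as np

def to_grayscale(rgb):
    # Standard luma weights; on a GPU this would be one thread per pixel.
    return rgb @ np.array([0.299, 0.587, 0.114])

frame = np.random.randint(0, 256, size=(480, 640, 3)).astype(np.float32)
gray = to_grayscale(frame)
print(gray.shape)   # (480, 640)
```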
|
336 |
Comparison of the position of the current Hungarian minority in Vojvodina and Transylvania. Hanušová, Tereza, January 2021.
The diploma thesis deals with the position of the Hungarian minority in Serbian Vojvodina and Romanian Transylvania using the comparative method. Hungarians in Serbia and Romania represent a very large national minority and have become an integral part of the local culture and society. The level of Hungarian minority rights in the host countries is compared in four areas: legislation, political representation and institutionalization of the minority, mother-tongue education opportunities, and Hungarian minority media. Apart from a brief outline of the historical context, the work focuses exclusively on the period from the fall of communism in both states to the present, the years in which the biggest shift in the area of minority rights has taken place. The concept of ethnic parallelism is applied to all researched areas. Related to this, the so-called ethnolinguistic vitality approach is used, which deals with the conditions for the preservation of minority languages in the majority society. Special attention is paid to the influence of the Hungarian government under Prime Minister Viktor Orbán on the life of Hungarians abroad, which has grown significantly.
|
337 |
Advanced Concepts for Automatic Differentiation based on Operator Overloading. Kowarz, Andreas, 20 March 2008.
Using the technique of Automatic Differentiation (AD), derivative information can be computed efficiently for any function that is given as source code in a supported programming language. One basic implementation strategy is based on the concept of operator overloading, which is available in many programming languages. Through the overloading of operators, an internal representation of the function can be generated at runtime. This so-called tape can then be used for computing derivatives. In the thesis, new techniques are introduced that allow a more efficient tape creation and the parallel evaluation of tapes. Advantages of the new techniques are demonstrated by means of runtime analyses for numerical examples.
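A minimal sketch of the taping idea, written in Python for brevity: operator overloading records each operation on a tape, and a reverse sweep over the tape accumulates derivatives. This illustrates the concept only; it is not the thesis's implementation.

```python
# Taping via operator overloading, plus a reverse sweep over the tape.
class Var:
    tape = []                                  # global tape of recorded nodes

    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0
        Var.tape.append(self)                  # record the operation

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(output):
    output.grad = 1.0
    for node in reversed(Var.tape):            # replay the tape in reverse
        for parent, local_deriv in node.parents:
            parent.grad += local_deriv * node.grad

x, y = Var(3.0), Var(4.0)
f = x * y + x                                  # f = x*y + x, recorded on the tape
backward(f)
print(x.grad, y.grad)                          # 5.0 3.0 (df/dx = y+1, df/dy = x)
```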
|
338 |
Composable, Sound Transformations for Nested Recursion and Loops. Kirshanthan Sundararajah (16647885), 26 July 2023.
Programs that use loops to operate over arrays and matrices are generally known as regular programs. These programs appear in critical applications such as image processing, differential equation solvers, and machine learning. Over the past few decades, extensive research has been done on composing, verifying, and applying scheduling transformations like loop interchange and loop tiling for regular programs. As a result, we have general frameworks such as the polyhedral model to handle transformations for loop-based programs. Similarly, programs that use recursion and loops to manipulate pointer-based data structures are known as irregular programs. Irregular programs also appear in essential applications such as scientific simulations, data mining, and graphics rendering. However, there is no analogous framework for recursive programs. In the last decade, although many scheduling transformations have been developed for irregular programs, they are ad hoc in various aspects, such as being developed for a specific application and lacking portability. This dissertation examines principled ways to handle scheduling transformations for recursive programs through a unified framework, resulting in performance enhancement.
Finding principled approaches to optimize irregular programs at compile time is a long-standing problem. We specifically focus on scheduling transformations that reorder a program's operations to improve performance by enhancing locality and exploiting parallelism. In the first part of this dissertation, we present PolyRec, a unified general framework that can compose and apply scheduling transformations to nested recursive programs and reason about the correctness of the composed transformations. PolyRec is a first-of-its-kind unified transformation framework for irregular programs consisting of nested recursion and loops. It is built on solid theoretical foundations from the world of automata and transducers and provides a fundamentally novel way to think about recursive programs and their scheduling transformations. The core idea is to design mechanisms that strike a balance between the expressivity of representing the set of dynamic instances of computations, transformations, and dependences, and the decidability of checking the correctness of composed transformations. We use multi-tape automata and transducers to represent the set of dynamic instances of computations and transformations, respectively. These machines are similar to, yet more expressive than, their classical single-tape counterparts. While properties that are decidable for classical machines are in general undecidable for multi-tape machines, we have proven that they are decidable for the class of machines we consider, and we present algorithms to verify them. These machines therefore provide the building blocks to compose and verify scheduling transformations for nested recursion and loops. The crux of the PolyRec framework is its regular string-based representation of dynamic instances, which allows instances to be ordered lexicographically in exactly their execution order. All the transformations considered in PolyRec require different orderings of these strings, representable with only additive changes to the strings.
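A toy example of the string-based labeling idea: each dynamic instance of a recursive traversal gets a string label such that lexicographic order on labels equals execution order. The encoding below is an assumption chosen for exposition, not PolyRec's actual representation.

```python
# Label every dynamic instance of a pre-order binary-tree traversal with a
# string: '0' extends the label for the left child, '1' for the right.
# Lexicographic order on labels then coincides with execution order.
def traverse(node, label=""):
    if node is None:
        return []
    left, right, value = node
    return ([(label, value)]
            + traverse(left, label + "0")
            + traverse(right, label + "1"))

tree = (((None, None, "c"), (None, None, "d"), "b"), (None, None, "e"), "a")
instances = traverse(tree)
print(instances)                         # [('', 'a'), ('0', 'b'), ('00', 'c'), ...]
print(instances == sorted(instances))    # True: label order == execution order
```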
Loop transformations such as skewing require performing arithmetic on the representation of dynamic instances. In the second part of this dissertation, we explore this space of transformations by introducing skewing for nested recursion. Skewing plays an essential role in producing easily parallelizable loop nests from seemingly difficult ones, where dependences are carried across loops. Including skewing for nested recursion in PolyRec requires significant extensions to the representation of dynamic instances and transformations, to facilitate performing arithmetic using strings. First, we prove that the machines that represent the transformations are still composable. Then we prove that the representation of dependences and the algorithm that checks the correctness of composed transformations hold with minimal changes. Our new extended framework is known as UniRec, since it resembles the unimodular transformations for perfectly nested loops, which consider any combination of the primary transformations interchange, reversal, and skewing. UniRec opens possibilities for producing newly composed transformations for nested recursion and loops and verifying their correctness. We claim that UniRec completely subsumes the unimodular framework for loop transformations, since nested recursion is more general than loop nests.
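The classic loop version of skewing, which UniRec lifts to nested recursion, can be shown on a small stencil; the sketch below is a standard textbook illustration, not code from the dissertation.

```python
# Loop skewing on a stencil where A[i][j] depends on A[i-1][j] and A[i][j-1],
# so neither loop is directly parallel. Skewing by t = i + j yields
# anti-diagonal "wavefronts" whose iterations are mutually independent.
import numpy as np

def stencil_original(A):
    n, m = A.shape
    for i in range(1, n):
        for j in range(1, m):
            A[i, j] = A[i - 1, j] + A[i, j - 1]
    return A

def stencil_skewed(A):
    n, m = A.shape
    for t in range(2, n + m - 1):            # wavefront index t = i + j
        for i in range(max(1, t - m + 1), min(n, t)):
            j = t - i                        # all (i, j) on one wavefront are independent
            A[i, j] = A[i - 1, j] + A[i, j - 1]
    return A

A = np.arange(30.0).reshape(5, 6)
assert np.array_equal(stencil_original(A.copy()), stencil_skewed(A.copy()))
```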
|
339 |
Formal Structure in Puccini's Suor Angelica: Expanding Hepokoski's Rotational Analysis. Jarvis, Brian Edward, 23 June 2011.
No description available.
|
340 |
Smith-Waterman Sequence Alignment for Massively Parallel High-Performance Computing Architectures. Steinfadt, Shannon Irene, 19 April 2010.
No description available.
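Since no abstract is available, here is a brief serial sketch of the Smith-Waterman recurrence the title refers to; the scoring parameters are common defaults, not values from the thesis. Massively parallel implementations typically exploit the fact that all cells on an anti-diagonal of the score matrix are independent of one another.

```python
# Serial Smith-Waterman local alignment (score only, no traceback).
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                      # local alignment can restart
                          H[i - 1][j - 1] + sub,  # match/mismatch
                          H[i - 1][j] + gap,      # gap in b
                          H[i][j - 1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))   # best local alignment score
```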
|