1

Optimizing the Energy Efficiency of Linear Algebra Algorithms through SIMD Programming and AVX Vectorization

Jakobs, Thomas 10 January 2022 (has links)
Besides short execution times, current research on the optimization of applications and algorithms increasingly focuses on a low energy consumption of the computing resources used. High energy efficiency is achieved by reducing the energy consumption of programs and technologies without excessively increasing the execution time. In parallel scientific computing, the need for energy-efficient program execution arises above all for linear algebra algorithms, which are used as subroutines in a wide variety of applications. Vectorizing programs with the AVX processor and instruction set extension shows potential for the energy-efficient execution of linear algebra algorithms, although the achieved energy efficiency depends on how the implementation is realized. For the presented investigations, three representatively selected linear algebra algorithms are executed on AVX vector units. During the AVX vectorization of the algorithms, several program variants are created, for which execution time and energy consumption are measured. The program variants differ, among other things, in the applied program transformations, such as loop tiling or a modified memory access structure. In addition, it is shown how different programming approaches, such as auto-vectorization or different instruction sets, as well as implementation variants based on the choice of instructions, influence the execution time and energy consumption of the program execution. The resulting program variants are executed on modern processors of different architecture families with varying execution parameters, such as the configured processor frequency. The investigations show that vectorization can reduce both the execution time and the energy consumption of programs. The choice of program transformations, programming approach, and execution parameters for the energy-efficient execution of vectorized programs can be made application-specifically, based on the properties of the selected algorithm.
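For illustration, the following minimal sketch shows the kind of AVX vectorization discussed in the abstract, using AVX2/FMA intrinsics for a dot product; the kernel, the function name, and the remainder handling are assumptions for demonstration and are not taken from the thesis.

```cpp
#include <immintrin.h>
#include <cstddef>

// Minimal sketch: dot product vectorized with 256-bit AVX2/FMA intrinsics
// (compile with -mavx2 -mfma or equivalent). Four doubles are processed per
// iteration; the tail is handled in scalar code.
double dot_avx2(const double* x, const double* y, std::size_t n) {
    __m256d acc = _mm256_setzero_pd();          // four partial sums
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m256d vx = _mm256_loadu_pd(x + i);    // unaligned 256-bit loads
        __m256d vy = _mm256_loadu_pd(y + i);
        acc = _mm256_fmadd_pd(vx, vy, acc);     // acc += vx * vy (fused multiply-add)
    }
    double buf[4];
    _mm256_storeu_pd(buf, acc);
    double sum = buf[0] + buf[1] + buf[2] + buf[3];
    for (; i < n; ++i) sum += x[i] * y[i];      // scalar remainder
    return sum;
}
```

A compiler can often auto-vectorize such a loop as well; the explicit intrinsic variant mainly makes the instruction selection, and thereby its time and energy behavior, controllable.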
2

Cost-Based Vectorization of Instance-Based Integration Processes

Boehm, Matthias, Habich, Dirk, Preissler, Steffen, Lehner, Wolfgang 19 January 2023 (has links)
The inefficiency of integration processes, as an abstraction of workflow-based integration tasks, is often caused by low resource utilization and significant waiting times for external systems. To overcome these problems, we proposed the concept of process vectorization, where instance-based integration processes are transparently executed with the pipes-and-filters execution model. Here, the term vectorization means that a sequence (vector) of messages is processed by one standing process. Although process vectorization has been shown to achieve a significant throughput improvement, the concept has two major drawbacks. First, the theoretical performance of a vectorized integration process mainly depends on the performance of the most cost-intensive operator. Second, the practical performance strongly depends on the number of available threads. In this paper, we present an advanced optimization approach that addresses these problems. To this end, we generalize the vectorization problem and explain how to vectorize process plans in a cost-based manner. Due to the exponential complexity of the problem, we provide a heuristic computation approach and formally analyze its optimality. Our evaluation shows that the message throughput can be increased significantly compared to both instance-based execution and rule-based process vectorization.
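A minimal sketch of the pipes-and-filters idea behind process vectorization, under the assumption of a simple batch-of-messages interface; the operators and the single-threaded chaining are illustrative only. The actual execution model typically runs each operator as a standing process fed by message queues, which this sketch omits.

```cpp
#include <string>
#include <vector>
#include <iostream>

// Hypothetical sketch: each operator processes a whole vector (batch) of
// messages instead of one process instance handling a single message.
using Message  = std::string;
using Batch    = std::vector<Message>;
using Operator = Batch (*)(const Batch&);

Batch translate(const Batch& in) {            // e.g., a schema translation step
    Batch out;
    for (const auto& m : in) out.push_back("<msg>" + m + "</msg>");
    return out;
}

Batch enrich(const Batch& in) {               // e.g., a lookup against an external system
    Batch out;
    for (const auto& m : in) out.push_back(m + "|enriched");
    return out;
}

int main() {
    // A "standing process" modeled as a fixed chain of operators (pipes-and-filters).
    std::vector<Operator> pipeline = {translate, enrich};
    Batch batch = {"a", "b", "c"};            // vector of messages
    for (Operator op : pipeline) batch = op(batch);
    for (const auto& m : batch) std::cout << m << "\n";
    return 0;
}
```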
3

SVG Weather: Development of an SVG Web Mapping Application for the Visualization of Four-Dimensional Data, Using Weather Forecast Data as an Example

Kunze, Ralf 07 November 2006 (has links)
The topic of this thesis is the automated graphical processing of the raw data of a weather or climate forecast for the web, in order to obtain an interactive application for viewing the data. Although weather visualizations with promising approaches can be found on the web, interactive and dynamic techniques are rarely used. Therefore, for the first time, a piece of software was created that can generate a complex web mapping application suitable both for displaying general geographic data and for visualizing spatial data with a temporal aspect. Particular emphasis was placed on the visualization of weather data, which can be conveniently integrated into the web mapping application. This makes it possible to present an interactive and dynamic visualization of a weather forecast on the web. In addition, care was taken to ensure that the visualization of a weather forecast in combination with the web mapping application is generated automatically, yet remains freely and easily configurable. No third-party software is required to generate such a web mapping application; only the data to be visualized must be available. The data available in the web mapping application can be combined freely, specific regions can be examined, navigation through time is possible, and the weather forecast data can be overlaid with geography of arbitrary precision. As a result, for the first time, a convenient weather forecast for the web has been produced that allows a comprehensive analysis of the upcoming weather situation. To demonstrate the applicability of the automated process chain, a daily updated interactive weather forecast for Europe can be viewed at the following URL: http://www.svg-weather.de
4

Evaluating the Vector Supercomputer SX-Aurora TSUBASA as a Co-Processor for In-Memory Database Systems

Pietrzyk, Johannes, Habich, Dirk, Damme, Patrick, Focht, Erich, Lehner, Wolfgang 16 June 2023 (has links)
In-memory column-store database systems are state of the art for the efficient processing of analytical workloads. In these systems, data compression as well as vectorization play an important role. Currently, vectorized processing is done using the regular SIMD (Single Instruction Multiple Data) extensions of modern processors. For example, Intel's latest SIMD extension supports 512-bit vector registers, which allows the parallel processing of eight 64-bit values. From a database system perspective, this vectorization technique is not only interesting for compression and decompression to reduce the computational overhead, but also for database operators such as joins, scans, and groupings. In contrast to these SIMD extensions, NEC Corporation has recently introduced a novel pure vector engine (supercomputer) as a co-processor called SX-Aurora TSUBASA. This vector engine features a vector length of 16,384 bits with the world's highest bandwidth of up to 1.2 TB/s, which is a perfect fit for data-intensive applications like in-memory database systems. In this paper, we describe the unique architecture and properties of this novel vector engine. Moreover, we present selected in-memory column-store-specific evaluation results to show the benefits of this vector engine compared to regular SIMD extensions. Finally, we conclude the paper with an outlook on our ongoing research activities in this direction.
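To make the register arithmetic concrete, the following sketch (an illustrative assumption, not code from the paper) adds two columns of 64-bit integers eight values at a time using 512-bit AVX-512 registers.

```cpp
#include <immintrin.h>
#include <cstdint>
#include <cstddef>

// Sketch: one 512-bit register holds 8 x 64-bit values, so a single
// instruction processes eight elements of each input column at once
// (AVX-512F; compile with -mavx512f or equivalent).
void add_columns_avx512(const int64_t* a, const int64_t* b,
                        int64_t* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m512i va = _mm512_loadu_si512(a + i);
        __m512i vb = _mm512_loadu_si512(b + i);
        _mm512_storeu_si512(out + i, _mm512_add_epi64(va, vb));
    }
    for (; i < n; ++i) out[i] = a[i] + b[i];   // scalar tail
}
```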
5

Conflict Detection-Based Run-Length Encoding: AVX-512 CD Instruction Set in Action

Lehner, Wolfgang, Ungethüm, Annett, Pietrzyk, Johannes, Damme, Patrick, Habich, Dirk 18 January 2023 (has links)
Data as well as hardware characteristics are two key aspects for efficient data management. This holds in particular for the field of in-memory data processing. Aside from increasing main memory capacities, efficient in-memory processing benefits from novel processing concepts based on lightweight compressed data. Thus, an active research field deals with the adaptation of new hardware features, such as vectorization using SIMD instructions, to speed up lightweight data compression algorithms. Following this trend, we propose a novel approach for run-length encoding (RLE), a well-known and often applied lightweight compression technique. Our approach is based on the newly introduced conflict detection (CD) instructions in Intel's AVX-512 instruction set extension. As we are going to show, our CD-based approach has unique properties and outperforms the state-of-the-art RLE approach for data sets with small run lengths.
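The conflict detection instruction at the heart of this approach can be illustrated with a small, simplified sketch (not the paper's actual RLE algorithm): _mm512_conflict_epi32 reports for every register lane which preceding lanes hold the same value, so a lane whose mask contains its direct predecessor continues a run.

```cpp
#include <immintrin.h>
#include <cstdint>
#include <cstdio>

// Simplified illustration of the AVX-512 CD instruction, not a full RLE
// implementation (compile with -mavx512f -mavx512cd or equivalent):
// _mm512_conflict_epi32 yields, per lane, a bitmask of all *preceding* lanes
// holding the same 32-bit value.
int main() {
    alignas(64) uint32_t data[16] = {7, 7, 7, 3, 3, 9, 9, 9,
                                     9, 2, 2, 2, 5, 5, 5, 5};
    __m512i values    = _mm512_load_si512(data);
    __m512i conflicts = _mm512_conflict_epi32(values);   // AVX-512CD

    alignas(64) uint32_t masks[16];
    _mm512_store_si512(masks, conflicts);

    for (int i = 0; i < 16; ++i) {
        // If the bit of the direct predecessor is set, this lane continues a run.
        bool continues_run = (i > 0) && (masks[i] & (1u << (i - 1)));
        printf("lane %2d value %u %s\n", i, (unsigned)data[i],
               continues_run ? "continues run" : "starts run");
    }
    return 0;
}
```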
6

FPGA vs. SIMD: Comparison for Main Memory-Based Fast Column Scan

Lisa, Nusrat Jahan, Ungethüm, Annett, Habich, Dirk, Lehner, Wolfgang, Nguyen, Duy Anh Tuan, Kumar, Akash 23 March 2023 (has links)
The ever-increasing growth of data demands reliable database systems with high throughput and low latency. Main memory-based column-store database systems are state of the art in this respect: the data (values) in relational tables are organized by columns rather than by rows. In such systems, a full column scan is a fundamental operation, and its optimization is therefore crucial. This has led to fast column scan techniques that rely on compact storage layouts and intra-value parallelism. For this reason, we investigated several well-known fast column scan techniques implemented with SIMD (Single Instruction Multiple Data) vectorization as well as with Field Programmable Gate Arrays (FPGAs). Moreover, we present selected results of our exhaustive evaluation. Based on this evaluation, we identify the best column scan technique for each implementation approach, FPGA and SIMD. Finally, we conclude the paper with lessons learned for our ongoing research activities.
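As a rough illustration of a SIMD-based column scan (an assumption for demonstration, not one of the specific compact-layout techniques compared in the paper), the following sketch compares 16 uncompressed 32-bit column values per instruction against an equality predicate and emits a match bitmap.

```cpp
#include <immintrin.h>
#include <cstdint>
#include <cstddef>
#include <vector>

// Hedged sketch of a SIMD column scan: compare 16 uncompressed 32-bit column
// values per instruction against a constant predicate and collect the result
// as a bitmap (one bit per row). Real fast-scan techniques additionally work
// on bit-packed storage layouts, which this sketch omits. (AVX-512F)
std::vector<uint16_t> scan_eq_avx512(const uint32_t* column, std::size_t n,
                                     uint32_t predicate) {
    std::vector<uint16_t> bitmap;                 // 16 rows per mask word
    __m512i key = _mm512_set1_epi32((int)predicate);
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512i block  = _mm512_loadu_si512(column + i);
        __mmask16 hits = _mm512_cmpeq_epi32_mask(block, key);
        bitmap.push_back((uint16_t)hits);
    }
    // Scalar tail for the last (n % 16) rows.
    if (i < n) {
        uint16_t tail = 0;
        for (std::size_t j = 0; i + j < n; ++j)
            if (column[i + j] == predicate) tail |= (uint16_t)(1u << j);
        bitmap.push_back(tail);
    }
    return bitmap;
}
```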
7

Vectorizing Instance-Based Integration Processes

Boehm, Matthias, Habich, Dirk, Preissler, Steffen, Lehner, Wolfgang, Wloka, Uwe 13 January 2023 (has links)
The inefficiency of integration processes, as an abstraction of workflow-based integration tasks, often stems from low resource utilization and significant waiting times for external systems. Due to the increasing use of integration processes within IT infrastructures, throughput optimization has a high influence on the overall performance of such an infrastructure. In the area of computational engineering, low resource utilization is addressed with vectorization techniques. In this paper, we introduce the concept of vectorization in the context of integration processes in order to achieve a higher degree of parallelism; here, transactional behavior and serialized execution must be ensured. In conclusion, our evaluation shows that the message throughput can be increased significantly.
8

Make Larger Vector Register Sizes New Challenges?: Lessons Learned from the Area of Vectorized Lightweight Compression Algorithms

Habich, Dirk, Damme, Patrick, Ungethüm, Annett, Lehner, Wolfgang 15 September 2022 (has links)
The exploitation of data as well as hardware properties is a core aspect of efficient data management. This holds in particular for the field of in-memory data processing. Aside from increasing main memory capacities, in-memory data processing also benefits from novel processing concepts based on lightweight compressed data. To speed up compression as well as decompression, an active research field deals with the specialization of these algorithms to hardware features such as vectorization using SIMD instructions. Most of the vectorized implementations have been proposed for 128-bit vector registers. However, hardware vendors keep increasing the vector register sizes, and a straightforward transformation of existing implementations to these wider vectors is possible in most cases. Thus, we systematically investigated the impact of different SIMD instruction set extensions with wider vector sizes on the behavior of such straightforwardly transformed implementations. In this paper, we describe our evaluation methodology and present selected results of our exhaustive evaluation. In particular, we highlight some challenges and present first approaches to tackle them.
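A minimal sketch of what such a straightforward transformation can look like, using an illustrative kernel that is not taken from the paper: the same element-wise operation written once for 128-bit SSE2 registers and once for 512-bit AVX-512 registers, where essentially only the register type and the step width change.

```cpp
#include <immintrin.h>
#include <cstdint>
#include <cstddef>

// Same kernel (add a constant to every 32-bit value), once for 128-bit SSE2
// and once for 512-bit AVX-512F; the code structure stays identical.
void add_const_sse(uint32_t* data, std::size_t n, uint32_t c) {
    __m128i vc = _mm_set1_epi32((int)c);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {                       // 4 values per step
        __m128i v = _mm_loadu_si128((const __m128i*)(data + i));
        _mm_storeu_si128((__m128i*)(data + i), _mm_add_epi32(v, vc));
    }
    for (; i < n; ++i) data[i] += c;
}

void add_const_avx512(uint32_t* data, std::size_t n, uint32_t c) {
    __m512i vc = _mm512_set1_epi32((int)c);
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {                     // 16 values per step
        __m512i v = _mm512_loadu_si512(data + i);
        _mm512_storeu_si512(data + i, _mm512_add_epi32(v, vc));
    }
    for (; i < n; ++i) data[i] += c;
}
```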
9

MorphStore — In-Memory Query Processing based on Morphing Compressed Intermediates LIVE

Habich, Dirk, Damme, Patrick, Ungethüm, Annett, Pietrzyk, Johannes, Krause, Alexander, Hildebrandt, Juliana, Lehner, Wolfgang 15 September 2022 (has links)
In this demo, we present MorphStore, an in-memory column store with a novel compression-aware query processing concept. Compression using lightweight integer compression algorithms already plays an important role in existing in-memory column stores, but mainly for base data. The continuous handling of compression from the base data to the intermediate results during query processing has already been discussed, but not investigated in detail, since the computational effort for compression and decompression is often assumed to exceed the benefits of a reduced transfer cost between CPU and main memory. However, as we show in our demo, this argument increasingly loses its validity. Our compression-aware query processing concept speeds up query execution by morphing compressed intermediate results from one scheme to another, dynamically adapting to the changing data characteristics during query processing. The morphing decisions are made using a cost-based approach.
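A purely illustrative sketch of a cost-based morph decision; the formats, statistics, and cost formulas below are assumptions for demonstration and not MorphStore's actual cost model.

```cpp
#include <cstddef>
#include <cstdint>
#include <initializer_list>

// Purely illustrative morph decision: pick the cheaper target format for an
// intermediate result based on rough per-format size estimates. The formats
// and cost formulas are assumptions for illustration only.
enum class Format { Uncompressed, RunLengthEncoded, BitPacked };

struct ColumnStats {
    std::size_t values;       // number of values in the intermediate
    std::size_t distinctRuns; // number of value runs observed
    unsigned    bitsNeeded;   // bits required per value
};

// Estimated size in bytes if the intermediate were stored in format f.
std::size_t estimated_bytes(Format f, const ColumnStats& s) {
    switch (f) {
        case Format::Uncompressed:     return s.values * sizeof(uint64_t);
        case Format::RunLengthEncoded: return s.distinctRuns * 2 * sizeof(uint64_t);
        case Format::BitPacked:        return (s.values * s.bitsNeeded + 7) / 8;
    }
    return 0;
}

// Morph only when the estimated saving outweighs an assumed fixed morph cost.
Format choose_format(Format current, const ColumnStats& s,
                     std::size_t morphCostBytes = 4096) {
    Format best = current;
    std::size_t bestSize = estimated_bytes(current, s);
    for (Format f : {Format::Uncompressed, Format::RunLengthEncoded,
                     Format::BitPacked}) {
        std::size_t size = estimated_bytes(f, s);
        if (size + morphCostBytes < bestSize) { best = f; bestSize = size; }
    }
    return best;
}
```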
10

From a Comprehensive Experimental Survey to a Cost-based Selection Strategy for Lightweight Integer Compression Algorithms

Damme, Patrick, Ungethüm, Annett, Hildebrandt, Juliana, Habich, Dirk, Lehner, Wolfgang 11 January 2023 (has links)
Lightweight integer compression algorithms are frequently applied in in-memory database systems to tackle the growing gap between processor speed and main memory bandwidth. In recent years, the vectorization of basic techniques such as delta coding and null suppression has considerably enlarged the corpus of available algorithms. As a result, there is now a large number of algorithms to choose from, each tailored to different data characteristics. However, a comparative evaluation of these algorithms under different data and hardware characteristics has never been sufficiently conducted in the literature. To close this gap, we conducted an exhaustive experimental survey by evaluating several state-of-the-art lightweight integer compression algorithms as well as cascades of basic techniques. We systematically investigated the influence of data as well as hardware properties on the performance and the compression rates. The evaluated algorithms are based on publicly available implementations as well as our own vectorized reimplementations. We summarize our experimental findings, leading to several new insights and to the conclusion that there is no single best algorithm. Moreover, in this article, we also introduce and evaluate a novel cost model for the selection of a suitable lightweight integer compression algorithm for a given dataset.
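For illustration, the following sketch shows a vectorized variant of delta coding, one of the basic techniques named above; the AVX2 formulation and the function name are assumptions for demonstration, not code from the article.

```cpp
#include <immintrin.h>
#include <cstdint>
#include <cstddef>

// Illustrative sketch of vectorized delta coding: each output value is the
// difference to its predecessor, computed 8 x 32-bit values at a time with
// AVX2. In cascades, the resulting small deltas would typically be fed into a
// null-suppression / bit-packing step.
void delta_encode_avx2(const uint32_t* in, uint32_t* out, std::size_t n) {
    if (n == 0) return;
    out[0] = in[0];                             // first value is kept verbatim
    std::size_t i = 1;
    for (; i + 8 <= n; i += 8) {
        __m256i cur  = _mm256_loadu_si256((const __m256i*)(in + i));
        __m256i prev = _mm256_loadu_si256((const __m256i*)(in + i - 1));
        _mm256_storeu_si256((__m256i*)(out + i), _mm256_sub_epi32(cur, prev));
    }
    for (; i < n; ++i) out[i] = in[i] - in[i - 1];   // scalar tail
}
```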
