1

Cost-Based Optimization of Integration Flows

Böhm, Matthias (02 May 2011)
Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many different application areas, such as real-time ETL and data synchronization between operational systems. Owing to increasing data volumes, highly distributed IT infrastructures, and high requirements for data consistency and freshness of query results, many instances of integration flows are executed over time. Due to this high load and to blocking synchronous source systems, the performance of the central integration platform is crucial for the entire IT infrastructure. To meet these high performance requirements, we introduce the concept of cost-based optimization of imperative integration flows, which relies on incremental statistics maintenance and inter-instance plan re-optimization. As a foundation, we introduce the concept of periodical re-optimization, including novel cost-based optimization techniques that are tailor-made for integration flows. Furthermore, we refine periodical re-optimization to on-demand re-optimization in order to overcome the problems of many unnecessary re-optimization steps and of adaptation delays during which optimization opportunities are missed. This approach ensures low optimization overhead and fast workload adaptation.
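The abstract contrasts periodical with on-demand re-optimization but gives no implementation details; the following is a minimal sketch, under assumed names and a made-up drift threshold, of how a re-optimization step could be triggered by incrementally maintained statistics instead of a timer.

```python
class OnDemandReoptimizer:
    """Illustrative sketch (names and threshold are assumptions, not the thesis
    implementation): re-optimize an integration-flow plan only when incrementally
    maintained execution statistics drift far enough from the estimates the
    current plan was optimized for."""

    def __init__(self, plan, optimizer, drift_threshold=0.2):
        self.plan = plan                  # currently deployed execution plan
        self.optimizer = optimizer        # assumed cost-based optimizer with optimize(plan, stats)
        self.drift_threshold = drift_threshold
        self.baseline = {}                # operator -> cost used at last optimization
        self.smoothed = {}                # operator -> exponentially smoothed observed cost

    def record(self, operator, observed_cost, alpha=0.1):
        """Incremental statistics maintenance after each executed flow instance."""
        prev = self.smoothed.get(operator, observed_cost)
        self.smoothed[operator] = (1.0 - alpha) * prev + alpha * observed_cost
        self.baseline.setdefault(operator, observed_cost)

    def maybe_reoptimize(self):
        """On-demand trigger: only re-optimize when some operator's cost drifted."""
        for op, base in self.baseline.items():
            drift = abs(self.smoothed[op] - base) / max(base, 1e-9)
            if drift > self.drift_threshold:
                self.plan = self.optimizer.optimize(self.plan, dict(self.smoothed))
                self.baseline = dict(self.smoothed)   # reset baseline after re-optimization
                return True
        return False
```

Compared to a fixed re-optimization period, such a trigger avoids re-optimizing when nothing has changed and reacts immediately when the workload shifts, which is the trade-off the abstract describes.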
2

Energy-Aware Data Management on NUMA Architectures

Kissinger, Thomas (29 May 2017)
The ever-increasing need for more computing and data processing power demands a continuous and rapid growth of power-hungry data center capacities all over the world. As a first study in 2008 revealed, the energy consumption of such data centers is becoming a critical problem, since their power consumption is on track to double every five years. However, a follow-up study released in 2016 points out that this threatening trend has been dramatically throttled in recent years due to the increased energy-efficiency measures taken by data center operators. Furthermore, the authors of the study emphasize that making and keeping data centers energy-efficient is a continuous task, because more and more computing power is demanded from the same or an even lower energy budget, and that this threatening energy consumption trend will resume as soon as energy-efficiency research efforts and their market adoption are reduced. An important class of applications running in data centers is data management systems, which are a fundamental component of nearly every application stack. While those systems were traditionally designed as disk-based databases optimized for keeping disk accesses as low as possible, modern state-of-the-art database systems are main-memory-centric and store the entire data pool in main memory, which replaces the disk as the main bottleneck. To scale up such in-memory database systems, non-uniform memory access (NUMA) hardware architectures are employed, which exhibit decreased bandwidth and increased latency when accessing remote memory compared to local memory. In this thesis, we investigate energy-awareness aspects of large scale-up NUMA systems in the context of in-memory data management systems. To do so, we pick up the idea of a fine-grained data-oriented architecture and improve the concept so that it keeps pace with the increased absolute performance numbers of a pure in-memory DBMS and scales up on large NUMA systems. To achieve this goal, we design and build ERIS, the first scale-up in-memory data management system designed from scratch to implement a data-oriented architecture. With the help of the ERIS platform, we explore our novel core concept for energy awareness, Energy Awareness by Adaptivity. The concept states that software, and especially database systems, has to respond quickly to environmental changes (i.e., workload changes) by adapting itself to enter a state of low energy consumption. We present the hierarchically organized Energy-Control Loop (ECL), a reactive control loop that provides two concrete implementations of our Energy Awareness by Adaptivity concept, namely the hardware-centric Resource Adaptivity and the software-centric Storage Adaptivity. Finally, we give an exhaustive evaluation of the scalability of ERIS as well as our adaptivity facilities.
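The abstract describes Energy Awareness by Adaptivity only conceptually; the loop below is a deliberately simplified illustration (callback names, thresholds, and the scaling policy are our assumptions, not the ERIS/ECL implementation) of a reactive control loop that shrinks or grows the set of active workers with the workload.

```python
import time

def energy_control_loop(read_utilization, set_active_workers, max_workers,
                        low=0.4, high=0.9, interval_s=1.0, iterations=None):
    """Reactive loop: observe utilization of the active workers and adapt their
    number so that idle hardware can enter low-power states under light load,
    while full performance is restored under heavy load."""
    workers = max_workers
    done = 0
    while iterations is None or done < iterations:
        util = read_utilization()                  # busy fraction of active workers, in [0, 1]
        if util < low and workers > 1:
            workers -= 1                           # workload dropped: release a worker
        elif util > high and workers < max_workers:
            workers += 1                           # workload rose: activate another worker
        set_active_workers(workers)
        time.sleep(interval_s)
        done += 1
```

A real system would also steer processor frequencies and data placement, but the shape of the loop (observe, decide, reconfigure, repeat) is the point of the illustration.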
3

Energy-Aware Data Management on NUMA Architectures

Kissinger, Thomas (23 March 2017)
The ever-increasing need for more computing and data processing power demands a continuous and rapid growth of power-hungry data center capacities all over the world. As a first study in 2008 revealed, the energy consumption of such data centers is becoming a critical problem, since their power consumption is on track to double every five years. However, a follow-up study released in 2016 points out that this threatening trend has been dramatically throttled in recent years due to the increased energy-efficiency measures taken by data center operators. Furthermore, the authors of the study emphasize that making and keeping data centers energy-efficient is a continuous task, because more and more computing power is demanded from the same or an even lower energy budget, and that this threatening energy consumption trend will resume as soon as energy-efficiency research efforts and their market adoption are reduced. An important class of applications running in data centers is data management systems, which are a fundamental component of nearly every application stack. While those systems were traditionally designed as disk-based databases optimized for keeping disk accesses as low as possible, modern state-of-the-art database systems are main-memory-centric and store the entire data pool in main memory, which replaces the disk as the main bottleneck. To scale up such in-memory database systems, non-uniform memory access (NUMA) hardware architectures are employed, which exhibit decreased bandwidth and increased latency when accessing remote memory compared to local memory. In this thesis, we investigate energy-awareness aspects of large scale-up NUMA systems in the context of in-memory data management systems. To do so, we pick up the idea of a fine-grained data-oriented architecture and improve the concept so that it keeps pace with the increased absolute performance numbers of a pure in-memory DBMS and scales up on large NUMA systems. To achieve this goal, we design and build ERIS, the first scale-up in-memory data management system designed from scratch to implement a data-oriented architecture. With the help of the ERIS platform, we explore our novel core concept for energy awareness, Energy Awareness by Adaptivity. The concept states that software, and especially database systems, has to respond quickly to environmental changes (i.e., workload changes) by adapting itself to enter a state of low energy consumption. We present the hierarchically organized Energy-Control Loop (ECL), a reactive control loop that provides two concrete implementations of our Energy Awareness by Adaptivity concept, namely the hardware-centric Resource Adaptivity and the software-centric Storage Adaptivity. Finally, we give an exhaustive evaluation of the scalability of ERIS as well as our adaptivity facilities.
4

Energy Elasticity on Heterogeneous Hardware using Adaptive Resource Reconfiguration LIVE

Ungethüm, Annett; Kissinger, Thomas; Mentzel, Willi-Wolfram; Habich, Dirk; Lehner, Wolfgang (11 August 2022)
Energy awareness of database systems has emerged as a critical research topic, since energy consumption is becoming a major limiter of their scalability. Recent energy-related hardware developments tend toward offering more and more configuration opportunities for the software to control its own energy consumption. Existing research has so far mainly focused on leveraging this configuration spectrum to find the most energy-efficient configuration for specific operators or entire queries. In this demo, we introduce the concept of energy elasticity and propose the energy-control loop as an implementation of this concept. Energy elasticity refers to the ability of software to behave in an energy-proportional and energy-efficient way at the same time while maintaining a certain quality of service. Thus, our system does not draw the least energy possible but the least energy necessary to still perform reasonably. We demonstrate our overall approach using a rich interactive GUI to give attendees the opportunity to learn more about our concept.
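To make the two notions concrete (this shorthand is ours, not a formula from the demo paper): energy efficiency relates useful work to the energy consumed, whereas energy proportionality asks that power draw track the current utilization, so a nearly idle system should consume close to no power.

```latex
\text{efficiency} \;=\; \frac{\text{work completed}}{\text{energy consumed}},
\qquad
\text{proportionality:}\quad P(u) \;\approx\; u \cdot P_{\text{peak}}, \quad u \in [0,1].
```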
5

Cost-Based Optimization of Integration Flows

Böhm, Matthias (15 March 2011)
Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many different application areas, such as real-time ETL and data synchronization between operational systems. Owing to increasing data volumes, highly distributed IT infrastructures, and high requirements for data consistency and freshness of query results, many instances of integration flows are executed over time. Due to this high load and to blocking synchronous source systems, the performance of the central integration platform is crucial for the entire IT infrastructure. To meet these high performance requirements, we introduce the concept of cost-based optimization of imperative integration flows, which relies on incremental statistics maintenance and inter-instance plan re-optimization. As a foundation, we introduce the concept of periodical re-optimization, including novel cost-based optimization techniques that are tailor-made for integration flows. Furthermore, we refine periodical re-optimization to on-demand re-optimization in order to overcome the problems of many unnecessary re-optimization steps and of adaptation delays during which optimization opportunities are missed. This approach ensures low optimization overhead and fast workload adaptation.
6

Adaptive Energy-Control for In-Memory Database Systems

Kissinger, Thomas; Habich, Dirk; Lehner, Wolfgang (30 May 2022)
The ever-increasing demand for scalable database systems is limited by their energy consumption, which is one of the major challenges in research today. While existing approaches have mainly focused on transaction-oriented, disk-based database systems, we investigate and optimize the energy consumption and performance of data-oriented scale-up in-memory database systems, which make heavy use of the main power consumers: processors and main memory. We give an in-depth energy analysis of a current mainstream server system and show that modern processors provide a rich set of energy-control features but lack the capability of controlling them appropriately, because application-specific knowledge is missing. Thus, we propose the Energy-Control Loop (ECL) as a DBMS-integrated approach for adaptive energy control on scale-up in-memory database systems that obeys a query latency limit as a soft constraint and actively optimizes the energy efficiency and performance of the DBMS. The ECL relies on adaptive, workload-dependent energy profiles that are continuously maintained at runtime. In our evaluation, we observed energy savings ranging from 20% to 40% for a real-world load profile.
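The abstract mentions workload-dependent energy profiles and a query latency limit treated as a soft constraint; a plausible selection step could look like the sketch below, where the structure of the profile and all numbers are invented for the example and are not measurements from the paper.

```python
def choose_configuration(profile, latency_limit_ms):
    """Hypothetical decision step of an energy-control loop: given a
    workload-dependent energy profile (configuration -> predicted latency and
    power), pick the lowest-power configuration that still meets the query
    latency limit; fall back to the fastest configuration otherwise."""
    feasible = [c for c in profile if c["latency_ms"] <= latency_limit_ms]
    if feasible:
        return min(feasible, key=lambda c: c["power_w"])
    return min(profile, key=lambda c: c["latency_ms"])

# Example profile for one workload phase (made-up numbers for illustration).
profile = [
    {"cores": 16, "freq_ghz": 2.6, "latency_ms": 12, "power_w": 180},
    {"cores": 8,  "freq_ghz": 2.6, "latency_ms": 21, "power_w": 115},
    {"cores": 8,  "freq_ghz": 1.8, "latency_ms": 34, "power_w": 80},
]
print(choose_configuration(profile, latency_limit_ms=25))   # picks the 115 W configuration
```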
7

Adaptive Bediensysteme im Ackerschlepper

Schempp, Timo; Hülle, Björn-Gerrit; Racs, Marcel (09 October 2024)
This contribution describes a scientific investigation of the use of adaptive operator-control systems in agricultural tractors. The motivation outlines the versatile use of the tractor and the consequent case for employing adaptive operator-control systems in it. In contrast, the state of the art, with its largely static operator-control systems, reveals a gap regarding the use of adaptive systems, which the research work described below is intended to close. The description of the research goal, the methods used, and the results shows the scientific path by which a significant improvement in ergonomics through the use of an adaptive operator-control system could be established.
8

Advanced Numerical Modelling of Discontinuities in Coupled Boundary Value Problems / Numerische Modellierung von Diskontinuitäten in Gekoppelten Randwertproblemen

Kästner, Markus (18 August 2016)
Industrial development processes as well as research in physics, materials and engineering science rely on computer modelling and simulation techniques today. With increasing computer power, computations are carried out on multiple scales and involve the analysis of coupled problems. In this work, continuum modelling is therefore applied at different scales in order to facilitate a prediction of the effective material or structural behaviour based on the local morphology and the properties of the individual constituents. This provides valuable insight into the structure-property relations which are of interest for any design process. In order to obtain reasonable predictions for the effective behaviour, numerical models which capture the essential fine-scale features are required. In this context, the efficient representation of discontinuities as they arise at, e.g., material interfaces or cracks becomes more important than in purely phenomenological macroscopic approaches. In this work, two different approaches to the modelling of discontinuities are discussed: (i) a sharp interface representation, which requires the localisation of interfaces by the mesh topology. Since many interesting macroscopic phenomena are related to the temporal evolution of certain microscopic features, (ii) diffuse interface models, which regularise the interface in terms of an additional field variable and therefore avoid topological mesh updates, are considered as an alternative. With the two combinations (i) Extended Finite Element Method (XFEM) + sharp interface model and (ii) Isogeometric Analysis (IGA) + diffuse interface model, two fundamentally different approaches to the modelling of discontinuities are investigated in this work. XFEM reduces the continuity of the approximation by introducing suitable enrichment functions according to the discontinuity to be modelled. Diffuse models instead regularise the interface, which in many cases requires an even increased continuity that is provided by the spline-based approximation. To further increase the efficiency of isogeometric discretisations of diffuse interfaces, adaptive mesh refinement and coarsening techniques based on hierarchical splines are presented. The adaptive meshes are found to significantly reduce the number of degrees of freedom required for a certain accuracy of the approximation. Selected discretisation techniques are applied to solve a coupled magneto-mechanical problem for particulate microstructures of Magnetorheological Elastomers (MRE). In combination with a computational homogenisation approach, these microscopic models allow for the prediction of the effective coupled magneto-mechanical response of MRE. Moreover, finite element models of generic MRE microstructures are coupled with a BEM domain that represents the surrounding free space in order to take finite sample geometries into account. The macroscopic behaviour is analysed in terms of actuation stresses, magnetostrictive deformations, and magnetorheological effects. The results obtained for different microstructures and various loadings are in qualitative agreement with experiments on MRE as well as analytical results.
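The abstract states the XFEM idea only in words; as a reminder of the standard form of the enriched approximation (not a formula specific to this thesis), the displacement field is written as

```latex
u_h(\mathbf{x}) \;=\; \sum_{i \in I} N_i(\mathbf{x})\, \mathbf{u}_i
\;+\; \sum_{j \in I^{*}} N_j(\mathbf{x})\, \psi(\mathbf{x})\, \mathbf{a}_j ,
```

where the N_i are the standard shape functions, the u_i are the regular nodal unknowns, I* is the set of nodes whose support is cut by the discontinuity, psi is an enrichment function (for example a Heaviside step across a crack, or an absolute-value distance function for a material interface), and the a_j are the additional enriched unknowns. The choice of psi is what reduces or tailors the continuity of the approximation, as described above.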
9

Test Modeling of Dynamic Variable Systems using Feature Petri Nets

Püschel, Georg; Seidl, Christoph; Neufert, Mathias; Gorzel, André; Aßmann, Uwe (08 November 2013)
In order to generate substantial market impact, mobile applications must be able to run on multiple platforms. Hence, software engineers face a multitude of technologies and system versions, resulting in static variability. Furthermore, due to the dependence on sensors and connectivity, mobile software has to adapt its behavior accordingly at runtime, resulting in dynamic variability. However, software engineers need to assure the quality of a mobile application even with this large amount of variability; in our approach, this is done by model-based testing (i.e., the generation of test cases from models). Recent concepts of test metamodels cannot efficiently handle dynamic variability. To overcome this problem, we propose a process for creating black-box test models based on dynamic feature Petri nets, which allow the description of configuration-dependent behavior and reconfiguration. We use feature models to define variability in the system under test. Furthermore, we illustrate our approach by introducing an example translator application.
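The abstract describes dynamic feature Petri nets only at a high level; the toy class below is our own sketch of the core idea and not the authors' metamodel: transitions carry feature guards, so a transition can only fire if it is enabled by the current marking and by the current feature configuration, and the configuration can be swapped at runtime to model reconfiguration.

```python
class FeaturePetriNet:
    """Minimal illustrative feature Petri net: places with token counts,
    transitions guarded by feature expressions, runtime reconfiguration."""

    def __init__(self, marking, features):
        self.marking = dict(marking)      # place -> token count
        self.features = set(features)     # currently active features
        self.transitions = []             # (name, inputs, outputs, guard)

    def add_transition(self, name, inputs, outputs, guard):
        # guard: callable over the active feature set, e.g. lambda f: "wifi" in f
        self.transitions.append((name, inputs, outputs, guard))

    def enabled(self, t):
        name, inputs, outputs, guard = t
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items()) \
            and guard(self.features)

    def fire(self, name):
        for t in self.transitions:
            if t[0] == name and self.enabled(t):
                _, inputs, outputs, _ = t
                for p, n in inputs.items():
                    self.marking[p] -= n
                for p, n in outputs.items():
                    self.marking[p] = self.marking.get(p, 0) + n
                return True
        return False

    def reconfigure(self, features):
        """Dynamic variability: change the feature configuration at runtime."""
        self.features = set(features)

# Hypothetical usage mirroring the mobile-connectivity setting described above.
net = FeaturePetriNet(marking={"idle": 1}, features={"offline"})
net.add_transition("sync", {"idle": 1}, {"synced": 1}, lambda f: "wifi" in f)
print(net.fire("sync"))        # False: the 'wifi' feature is not active
net.reconfigure({"wifi"})
print(net.fire("sync"))        # True after runtime reconfiguration
```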
10

Adaptive sequential feature selection in visual perception and pattern recognition / Adaptive sequentielle Featureauswahl in visueller Wahrnehmung und Mustererkennung

Avdiyenko, Liliya (08 October 2014)
In the human visual system, one of the most prominent functions of the extensive feedback from the higher brain areas within and outside of the visual cortex is attentional modulation. The feedback helps the brain to concentrate its resources on visual features that are relevant for recognition, i.e., it iteratively selects certain aspects of the visual scene for refined processing by the lower areas until the inference process in the higher areas converges to a single hypothesis about this scene. In order to minimize the number of required selection-refinement iterations, one has to find a short sequence of maximally informative portions of the visual input. Since the feedback is not static, the selection process is adapted to the scene that should be recognized. To find a scene-specific subset of informative features, the adaptive selection process on every iteration utilizes results of previous processing in order to reduce the remaining uncertainty about the visual scene. This phenomenon inspired us to develop a computational algorithm for visual classification that incorporates such a principle: adaptive feature selection. This is especially interesting because feature selection methods are usually not adaptive: they define a single set of informative features for a task and use it to classify all objects. An adaptive algorithm, in contrast, selects the features that are most informative for the particular input. Thus, the selection process should be driven by statistics of the environment concerning the current task and the object to be classified. Applied to a classification task, our adaptive feature selection algorithm favors features that maximally reduce the current class uncertainty, which is iteratively updated with the values of the previously selected features observed on the testing sample. In information-theoretical terms, the selection criterion is the mutual information of the class variable and a feature candidate, conditioned on the already selected features, which take the values observed on the current testing sample. The main question investigated in this thesis is whether, and in which situations, the proposed adaptive way of selecting features is advantageous over conventional feature selection. Further, we studied whether the proposed adaptive information-theoretical selection scheme, which is a computationally complex algorithm, is utilized by humans while they perform a visual classification task. For this, we constructed a psychophysical experiment where people had to select the image parts that they considered relevant for classifying these images. We present an analysis of the behavioral data in which we investigate whether human strategies of task-dependent selective attention can be explained by a simple ranker based on mutual information, a more complex feature selection algorithm based on conventional static mutual information, or the adaptive feature selector proposed here, which mimics a mechanism of iterative hypothesis refinement. The main contribution of this work is thus the adaptive feature selection criterion based on conditional mutual information. It is also shown that such an adaptive selection strategy is indeed used by people while performing visual classification.
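The selection criterion described in the abstract is the mutual information between the class and a candidate feature, conditioned on the values of the already selected features as observed on the current test sample. The sketch below is one simple empirical realisation of that idea for discrete features; the estimator and all names are our assumptions for illustration, not the algorithm as implemented in the thesis.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array, in bits."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def adaptive_select(X_train, y_train, x_test, n_select=3):
    """Greedy adaptive selection: at each step choose the feature with the
    largest estimated I(C ; X_f | X_selected = x_selected), i.e. the one that
    most reduces class uncertainty given what was already observed on x_test."""
    selected = []
    mask = np.ones(len(X_train), dtype=bool)        # training samples consistent with observations
    for _ in range(n_select):
        best_f, best_gain = None, -1.0
        h_c = entropy(y_train[mask])                # current class uncertainty
        for f in range(X_train.shape[1]):
            if f in selected:
                continue
            h_c_given_f = 0.0                       # expected entropy after also observing feature f
            for v in np.unique(X_train[mask, f]):
                sub = mask & (X_train[:, f] == v)
                h_c_given_f += sub.sum() / mask.sum() * entropy(y_train[sub])
            gain = h_c - h_c_given_f                # conditional mutual information estimate
            if gain > best_gain:
                best_f, best_gain = f, gain
        if best_f is None:                          # all features already selected
            break
        selected.append(best_f)
        mask = mask & (X_train[:, best_f] == x_test[best_f])   # condition on the observed value
        if not mask.any():                          # no consistent training data left
            break
    return selected
```

In contrast to a static selector, the returned feature order depends on the concrete values of x_test, which is exactly the adaptivity studied in the thesis.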
