51

Quantile Function-based Models for Resource Utilization and Power Consumption of Applications

Möbius, Christoph 14 June 2016 (has links)
Server consolidation is widely employed to improve the energy efficiency of data centers. While promising, server consolidation may lead to resource interference between applications and thus to reduced application performance. Current approaches that account for possible resource interference are not well suited to handle variation in application workloads; as a consequence, these approaches cannot prevent resource interference when workloads vary. It is assumed that models describing the resource utilization and power consumption of applications as functions of their workload can improve consolidation decisions and help prevent resource interference in scenarios with varying workload. This thesis aims to develop such models for selected applications. As a first step, a workload generator is developed to produce varying workload that resembles the statistical properties of real-world workload. The measurement data for such models usually originates from different sensors and instruments, each producing data at a different frequency. To account for these different frequencies, the thesis investigates in a second step the feasibility of employing quantile functions as model inputs. Complementing this, since conventional goodness-of-fit tests are not appropriate for this approach, an alternative method to assess the estimation error is presented.

Contents: 1 Introduction / 2 Thesis Overview (2.1 Testbed; 2.2 Contributions and Thesis Structure; 2.3 Scope, Assumptions, and Limitations) / 3 Generation of Realistic Workload (3.1 Statistical Properties of Internet Traffic; 3.2 Statistical Properties of Video Server Traffic; 3.3 Implementation of Workload Generation; 3.4 Summary) / 4 Models for Resource Utilization and for Power Consumption (4.1 Introduction; 4.2 Prior Work; 4.3 Test Cases; 4.4 Applying Regression to Samples of Different Length; 4.5 Models for Resource Utilization as Function of Request Size; 4.6 Models for Power Consumption as Function of Resource Utilization; 4.7 Summary) / 5 Conclusion & Future Work (5.1 Summary; 5.2 Future Work) / Appendices
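The abstract leaves the modeling details to the thesis itself. Purely as an illustrative sketch of the general idea of using quantile functions to relate samples of different lengths, the Python below evaluates two empirical quantile functions on a common probability grid, fits a regression between them, and reports the deviation between predicted and observed quantile functions as a simple error measure. The sample data, the probability grid, and the linear model are assumptions, not the thesis's implementation.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Two hypothetical measurement series recorded at different frequencies and
# therefore of different lengths (stand-ins for request sizes and CPU utilization).
request_sizes = rng.lognormal(mean=10.0, sigma=1.5, size=5000)   # e.g. bytes per request
cpu_utilization = rng.beta(a=2.0, b=5.0, size=730) * 100.0       # e.g. percent, coarser sampling

# Evaluate both empirical quantile functions on a common probability grid,
# mapping samples of different lengths onto vectors of identical length.
probs = np.linspace(0.01, 0.99, 99)
q_request = np.quantile(request_sizes, probs)
q_cpu = np.quantile(cpu_utilization, probs)

# Fit a simple regression from the request-size quantile function to the
# CPU-utilization quantile function (a placeholder for the thesis's models).
model = LinearRegression().fit(q_request.reshape(-1, 1), q_cpu)
q_cpu_pred = model.predict(q_request.reshape(-1, 1))

# Instead of a conventional goodness-of-fit test, report the deviation between
# predicted and observed quantile functions as an estimation-error measure.
print(f"mean absolute deviation: {np.mean(np.abs(q_cpu_pred - q_cpu)):.2f}")

Evaluating both series on the same probability grid is what makes regression possible despite the mismatched sampling frequencies of the underlying sensors.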
52

Green Computing – Power Efficient Management in Data Centers Using Resource Utilization as a Proxy for Power

Da Silva, Ralston A. January 2009 (has links)
No description available.
53

Auto-Tuning Apache Spark Parameters for Processing Large Datasets / Auto-Optimering av Apache Spark-parametrar för bearbetning av stora datamängder

Zhou, Shidi January 2023 (has links)
Apache Spark is a popular open-source distributed processing framework that enables efficient processing of large amounts of data. Apache Spark has a large number of configuration parameters that strongly affect performance. Selecting an optimal configuration for an Apache Spark application deployed in a cloud environment is a complex task: a poor choice may not only result in poor performance but also increase costs. Manually adjusting the Apache Spark configuration parameters is time-consuming and may not lead to the best outcomes, particularly in a cloud environment where computing resources are allocated dynamically and workloads can fluctuate significantly. The focus of this thesis project is the development of an auto-tuning approach for Apache Spark configuration parameters. Four machine learning models are formulated and evaluated to predict Apache Spark's performance. Additionally, two models for searching the Apache Spark configuration parameter space are created and evaluated to identify the most suitable parameters, i.e. those resulting in the shortest execution time. The obtained results demonstrate that with the developed auto-tuning approach, Apache Spark applications achieve shorter execution times than with the default parameters. The developed auto-tuning approach yields improved cluster utilization and shorter job execution time, with an average performance improvement of 49.98%, 53.84%, and 64.16% for the three different types of Apache Spark applications benchmarked.
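The abstract describes the approach only at a high level (performance-prediction models plus a configuration-parameter search). As a minimal sketch of that predict-then-search pattern, and not the thesis's implementation, the Python below fits one possible performance model on hypothetical (configuration, execution time) observations and then picks, from randomly sampled candidate configurations, the one with the lowest predicted execution time. The parameter names, value ranges, and training data are assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Hypothetical tunable parameters and value ranges; the parameter set actually
# studied in the thesis is not given in the abstract ("memory_gb" is a simplified,
# numeric stand-in for the executor memory setting).
PARAM_RANGES = {
    "spark.executor.cores": (1, 8),
    "spark.executor.memory_gb": (2, 32),
    "spark.sql.shuffle.partitions": (50, 1000),
    "spark.default.parallelism": (50, 1000),
}

def random_config():
    return [int(rng.integers(lo, hi + 1)) for lo, hi in PARAM_RANGES.values()]

# Placeholder training data: configurations paired with measured execution times.
# In practice these observations would come from benchmark runs of the application.
X_train = np.array([random_config() for _ in range(200)])
y_train = 300.0 / (X_train[:, 0] * X_train[:, 1]) + 0.05 * X_train[:, 2] + rng.normal(0.0, 5.0, 200)

# One possible performance model: predict execution time from a configuration.
perf_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# One possible search strategy: sample candidate configurations and keep the one
# with the lowest predicted execution time.
candidates = np.array([random_config() for _ in range(5000)])
best = candidates[np.argmin(perf_model.predict(candidates))]
print(dict(zip(PARAM_RANGES, best.tolist())))

In the thesis the search is carried out by dedicated search models rather than plain random sampling, and the performance models are trained on real benchmark measurements; the sketch only illustrates the overall structure.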
