About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Thema mit Variablen: Zur Phänomenologie der Jazzkomposition und musikalischer Analyse

Dreps, Krystoffer 23 October 2023 (has links)
No description available.
32

Scheduling Local and Remote Memory in Cluster Computers

Serrano Gómez, Mónica 02 September 2013 (has links)
Los clústers de computadores representan una solución alternativa a los supercomputadores. En este tipo de sistemas, se suele restringir el espacio de direccionamiento de memoria de un procesador dado a la placa madre local. Restringir el sistema de esta manera es mucho más barato que usar una implementación de memoria compartida entre las placas. Sin embargo, las diferentes necesidades de memoria de las aplicaciones que se ejecutan en cada placa pueden dar lugar a un desequilibrio en el uso de memoria entre las placas. Esta situación puede desencadenar intercambios de datos con el disco, los cuales degradan notablemente las prestaciones del sistema, a pesar de que pueda haber memoria no utilizada en otras placas. Una solución directa consiste en aumentar la cantidad de memoria disponible en cada placa, pero el coste de esta solución puede ser prohibitivo. Por otra parte, el hardware de acceso a memoria remota (RMA) es una forma de facilitar interconexiones rápidas entre las placas de un clúster de computadores. En trabajos recientes, esta característica se ha usado para aumentar el espacio de direccionamiento en ciertas placas. En este trabajo, la máquina base usa esta capacidad como mecanismo rápido para permitir al sistema operativo local acceder a la memoria DRAM instalada en una placa remota. En este contexto, una planificación de memoria eficiente constituye una cuestión crítica, ya que las latencias de memoria tienen un impacto importante sobre el tiempo de ejecución global de las aplicaciones, debido a que las latencias de memoria remota pueden ser varios órdenes de magnitud más altas que los accesos locales. Además, el hecho de cambiar la distribución de memoria es un proceso lento que puede involucrar a varias placas; así pues, el planificador de memoria ha de asegurarse de que la distribución objetivo proporciona mejores prestaciones que la actual.
La presente disertación pretende abordar los asuntos mencionados anteriormente mediante la propuesta de varias políticas de planificación de memoria. En primer lugar, se presenta un algoritmo ideal y una estrategia heurística para asignar memoria principal ubicada en las diferentes regiones de memoria. Adicionalmente, se ha diseñado un mecanismo de control de Calidad de Servicio para evitar que las prestaciones de las aplicaciones en ejecución se degraden de forma inadmisible. El algoritmo ideal encuentra la distribución de memoria óptima pero su complejidad computacional es prohibitiva dado un alto número de aplicaciones. De este inconveniente se encarga la estrategia heurística, la cual se aproxima a la mejor distribución de memoria local y remota con un coste computacional aceptable. Los algoritmos anteriores se basan en profiling. Para tratar este defecto potencial, nos centramos en soluciones analíticas. Esta disertación propone un modelo analítico que estima el tiempo de ejecución de una aplicación dada para cierta distribución de memoria. Dicha técnica se usa como un predictor de prestaciones que proporciona la información de entrada a un planificador de memoria. El planificador de memoria usa las estimaciones para elegir dinámicamente la distribución de memoria objetivo óptima para cada aplicación que se esté ejecutando en el sistema, de forma que se alcancen las mejores prestaciones globales. La planificación a granularidad más alta permite políticas de planificación más simples. Este trabajo estudia la viabilidad de planificar a nivel de granularidad de página del sistema operativo. Un entrelazado convencional basado en hardware a nivel de bloque y un entrelazado a nivel de página de sistema operativo se han tomado como esquemas de referencia. De la comparación de ambos esquemas de referencia, hemos concluido que solo algunas aplicaciones se ven afectadas de forma significativa por el uso del entrelazado a nivel de página.
Las razones que causan este impacto en las prestaciones han sido estudiadas y han definido la base para el diseño de dos políticas de distribución de memoria basadas en sistema operativo. La primera se denomina on-demand (OD), y es una estrategia simple que funciona colocando las páginas nuevas en memoria local hasta que dicha región se llena, de manera que se beneficia de la premisa de que las páginas más accedidas se piden y se ubican antes que las menos accedidas para mejorar las prestaciones. Sin embargo, ante la ausencia de dicha premisa para algunos de los benchmarks, OD funciona peor. La segunda política, denominada Most-accessed in-local (Mail), se propone con el objetivo de evitar este problema. / Cluster computers represent a cost-effective alternative solution to supercomputers. In these systems, it is common to constrain the memory address space of a given processor to the local motherboard. Constraining the system in this way is much cheaper than using a full-fledged shared memory implementation among motherboards. However, memory usage among motherboards may become unevenly balanced depending on the memory requirements of the applications running on each motherboard. This situation can lead to disk-swapping, which severely degrades system performance, even though there may be unused memory on other motherboards. A straightforward solution is to increase the amount of available memory in each motherboard, but the cost of this solution may become prohibitive. On the other hand, remote memory access (RMA) hardware provides fast interconnects among the motherboards of a cluster computer. In recent works, this characteristic has been used to extend the addressable memory space of selected motherboards. In this work, the baseline machine uses this capability as a fast mechanism to allow the local OS to access DRAM memory installed in a remote motherboard.
In this context, efficient memory scheduling becomes a major concern since main memory latencies have a strong impact on the overall execution time of the applications, given that remote memory access latencies may be several orders of magnitude higher than local ones. Additionally, changing the memory distribution is a slow process which may involve several motherboards, hence the memory scheduler needs to make sure that the target distribution provides better performance than the current one. This dissertation aims to address the aforementioned issues by proposing several memory scheduling policies. First, an ideal algorithm and a heuristic strategy to assign main memory from the different memory regions are presented. Additionally, a Quality of Service control mechanism has been devised in order to prevent unacceptable performance degradation for the running applications. The ideal algorithm finds the optimal memory distribution, but its computational cost is prohibitive for a high number of applications. This drawback is handled by the heuristic strategy, which approximates the best local and remote memory distribution among applications at an acceptable computational cost. The previous algorithms are based on profiling. To deal with this potential shortcoming, we focus on analytical solutions. This dissertation proposes an analytical model that estimates the execution time of a given application for a given memory distribution. This technique is used as a performance predictor that provides the input to a memory scheduler. The estimates are used by the memory scheduler to dynamically choose the optimal target memory distribution for each application running in the system in order to achieve the best overall performance. Scheduling at a higher granularity allows simpler scheduling policies. This work studies the feasibility of scheduling at OS page granularity.
A conventional hardware-based block interleaving and an OS-based page interleaving have been assumed as the baseline schemes. From the comparison of the two baseline schemes, we have concluded that only the performance of some applications is significantly affected by page-based interleaving. The reasons behind this impact on performance have been studied and have provided the basis for the design of two OS-based memory allocation policies. The first one, namely on-demand (OD), is a simple strategy that places new pages in local memory until this region is full, thus benefiting from the premise that the most accessed pages are requested and allocated before the less accessed ones, which improves performance. Nevertheless, when this premise does not hold for some benchmarks, OD performs worse. The second policy, namely Most-accessed in-local (Mail), is proposed to avoid this problem. / Serrano Gómez, M. (2013). Scheduling Local and Remote Memory in Cluster Computers [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31639
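The on-demand (OD) placement described above lends itself to a short sketch. The following Python is a hypothetical illustration, not code from the thesis; the region sizes, page identifiers, and the `OnDemandPlacer` name are assumptions made for the example:

```python
# Hypothetical sketch of the on-demand (OD) page-placement policy: newly
# touched pages are placed in local memory until that region fills, after
# which pages spill to remote memory reachable over the RMA hardware.

class OnDemandPlacer:
    def __init__(self, local_pages, remote_pages):
        self.local_free = local_pages    # free page frames on the local board
        self.remote_free = remote_pages  # free frames on remote boards (RMA)
        self.placement = {}              # page id -> "local" | "remote"

    def place(self, page_id):
        """Allocate a newly touched page, preferring local memory."""
        if self.local_free > 0:
            self.local_free -= 1
            self.placement[page_id] = "local"
        elif self.remote_free > 0:
            self.remote_free -= 1
            self.placement[page_id] = "remote"
        else:
            raise MemoryError("no free frames on any board")
        return self.placement[page_id]

placer = OnDemandPlacer(local_pages=2, remote_pages=2)
print([placer.place(p) for p in range(4)])  # first two local, rest remote
```

The policy pays off only under the premise stated in the abstract: if the hottest pages happen to be touched first, they end up local; if not, hot pages land in slow remote memory, which is exactly the failure mode that motivates Mail.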
33

Análise interpretativa de cinco obras corais sacras do compositor Ernani Aguiar / Interpretative analysis of five sacred choral works from the composer Ernani Aguiar

Hammerer, Mariana Ferraz Simões 12 November 2015 (has links)
Este trabalho busca apresentar uma análise focada na interpretação de cinco obras sacras para coro à cappella do compositor Ernani Aguiar. As obras são: Três Motetinos n° 1 (1975-1978), Três Motetinos n° 2 (1982), Três Motetinos n° 3 (1980-1986), Três Motetinos n° 4 e Três Motetinos n° 5 (1992). A metodologia de análise musical está sustentada a partir do Referencial Silva Ramos de Análise de Obras Corais, respondendo as questões pertinentes e depois transformando-as em texto corrido. Apresentamos um pequeno texto sobre sua trajetória de vida, mostrando sucintamente a atividade de Ernani Aguiar como compositor, regente e professor. Em seguida elencamos o conjunto de sua obra coral sacra para coro à cappella, apresentando informações como data de composição, texto utilizado, estreias e gravações das mesmas. Ainda, apresentamos a fundo outros detalhes sobre as composições das cinco obras estudadas. Na continuidade, apresentamos as análises de cada uma das cinco obras e discutimos pontos que ocorreram durante nosso processo de chegada a uma concepção para performance das mesmas. E é na esteira desse trabalho que abordamos as constâncias composicionais de Aguiar, seu modo de escrita e algumas de suas escolhas estéticas, concluindo assim esta dissertação. / This study presents an analysis focused on the interpretation of five sacred works for a cappella choir by the composer Ernani Aguiar. The works are: Três Motetinos No. 1 (1975-1978), Três Motetinos No. 2 (1982), Três Motetinos No. 3 (1980-1986), Três Motetinos No. 4 and Três Motetinos No. 5 (1992). The methodology of the musical analysis is based on the Silva Ramos Reference for the Analysis of Choral Works, answering its relevant questions and then turning the answers into running text. We present a short text about his life story, succinctly describing Ernani Aguiar's activity as a composer, conductor and teacher.
We then list his complete sacred choral works for a cappella choir, presenting information such as composition dates, texts used, premieres and recordings. We also present further details about the composition of the five works studied. Next, we present the analyses of each of the five works and discuss points that arose during our process of arriving at a conception for their performance. In the wake of this work we address Aguiar's compositional constancies, his manner of writing and some of his aesthetic choices, thus concluding this dissertation.
35

Performance analysis of a large-scale ground source heat pump system

Naicker, Selvaraj Soosaiappa January 2015 (has links)
The UK government’s Carbon Plan-2011 aims for 80% carbon emission reduction by 2050, and the 2009 UK National Renewable Energy Action Plan has set a target of delivering 15% of total energy demand from renewable energy sources by 2020. Ground Source Heat Pump (GSHP) systems can play a critical role in reaching these goals within the building sector. Achieving such benefits relies on proper design, integration, installation, commissioning, and operation of these systems. This work seeks to provide evidence to improve practices in the design, installation and operation of large GSHP systems. This evidence is based on the collection and analysis of data from an operational large-scale GSHP system providing heating and cooling to a university building. The data set is of significance in that it is collected from a large-scale system incorporating fifty-six borehole heat exchangers and four heat pumps. The data has been collected at high frequency since the start of operation, over a period of three years. The borehole heat exchanger data is intended to form a reference data set for use by other workers in model validation studies. The ground thermal properties at the site have been estimated using a novel combination of numerical modelling and parameter estimation methods. The utility of the reference data set has been demonstrated through application in a validation study of a numerical borehole heat exchanger model. The system heat balances and power consumption data have first been analysed to derive a range of performance metrics such as Seasonal Performance Factors. Analysis has been carried out at the system and individual heat pump level. Annual performance has been found satisfactory overall. A series of analyses have been carried out to investigate the roles of circulating pump energy, control system operation and dynamic behaviour.
Monitoring data from one of the heat pumps has also been analysed in further detail to make comparisons with the manufacturer’s steady-state performance data, taking variations in fluid properties into account. Some modest degradation from stated performance has been identified. The most significant operational factors accounting for degradation of overall system performance have been excessive pump energy demands and short cycling behaviour. Some faults in the operation of the system during the monitoring period have also been identified. A series of recommendations is made on ways to improve the design and operation of large-scale GSHP systems based on this evidence. These recommendations are chiefly concerned with better design for part-load operation, reduction in pump energy demands and more robust control systems.
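A Seasonal Performance Factor of the kind derived in this analysis is, at its core, useful heat delivered divided by electrical energy consumed over a season. The sketch below is a hypothetical illustration with invented numbers, not data from the monitored system; it shows how including circulating-pump energy in the denominator lowers the resulting SPF, which is why excessive pump demand degrades overall system performance (the `spf_h2`/`spf_h4` labels loosely follow the SEPEMO-style boundary convention and are an assumption of this example):

```python
def seasonal_performance_factor(heat_delivered_kwh, electricity_used_kwh):
    """SPF = useful thermal energy delivered / electrical energy consumed."""
    return heat_delivered_kwh / electricity_used_kwh

# Illustrative numbers only, not measurements from the monitored system:
heat = 120_000.0           # kWh of heating delivered over a season
heat_pump_elec = 30_000.0  # kWh consumed by the heat pump compressors
pump_elec = 10_000.0       # kWh consumed by circulating pumps

spf_h2 = seasonal_performance_factor(heat, heat_pump_elec)              # 4.0
spf_h4 = seasonal_performance_factor(heat, heat_pump_elec + pump_elec)  # 3.0
print(spf_h2, spf_h4)
```

With these made-up figures, a quarter of the electrical input going to circulating pumps drops the system-level SPF from 4.0 to 3.0, illustrating the scale of the pump-energy effect the thesis identifies.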
36

Training Needs Analysis For Identifying Vocational Teachers' Competency Needs in ICT Expertise Program in Vocational High Schools in Bali Province

Seri Wahyuni, Dessy 16 June 2020 (has links)
This study aims to reveal: (1) a description of vocational teacher characteristics; (2) the criterion competencies; (3) an account of the important competencies; (4) a description of actual competency performance; (5) the identification of competency gaps; (6) the determination of a training priority order; and (7) recommendations regarding training methods and training organizers. The study employed a mixed method with an exploratory sequential combination. The research subjects comprised vocational technical teachers in the ICT expertise program, especially the Network and Computer Engineering expertise competence. The study devised competency needs for a training program by incorporating Training Needs Analysis. The data were collected through focus group discussions (FGD), questionnaires and an interview guide. The data were analysed using the Fuzzy Delphi method to determine the criterion competencies through a screening process, and the Analytic Hierarchy Process (AHP) method to determine the important competencies. A 360-degree rating was used to evaluate teaching performance, and Importance-Performance Analysis (IPA) diagrams were used to describe the competency gaps. The training priority order was determined from the quadrants of the IPA diagram. The results of this study showed that: (1) vocational teachers from multiple expertise programs still lack ICT knowledge and practical mastery, especially in the network engineering field, because they have no ICT educational background, and they appear confused and nervous when teaching and demonstrating practice in front of the class; (2) the criterion competencies consist of a pedagogy-andragogy aspect with 11 domain areas and 34 sub-domains, a professional aspect with 3 domain areas and 7 sub-domains, a vocational aspect with 3 domain areas and 8 sub-domains, and a technology aspect with 4 domains; (3) in order of importance, the competency aspects are pedagogy-andragogy with a weight of 0.466, vocational with around 0.300, professional with 0.172, and technology with approximately 0.063.
(4) The lowest performance within the pedagogy-andragogy aspect is the ability to guide and supervise the internship program, with a total performance of 3.19; in the professional aspect it is the application of vocational content, with 3.35; in the vocational aspect it is networking and collaboration, with 2.82; and in the technology aspect it is using and utilizing ICT for self-development, with 3.56. (5) The competency gaps fall into vocational knowledge and skills, application of content, content knowledge, networking and collaboration, continuing professionalism development, and entrepreneurship. (6) The training priority order based on competency needs is described in the IPA diagram; most training needs are located in the vocational and professional aspects. (7) In-house training, specific training, and short courses were recommended as effective training methods; the training organizers may come from P4TK BMTI, P4TK BOE, private institutions, universities/LPTK, and industry.
Contents:
CHAPTER I INTRODUCTION: A. Research Background; B. Problem Identification; C. Research Focus; D. Formulations of the Problem; E. Research Objectives; F. Significances of the Research
CHAPTER II LITERATURE REVIEW: A. Theoretical Review (1. The Concept of Vocational; 2. Philosophy of Vocational Education; 3. Theory and Assumption of Vocational Education; 4. The Theory of Adult Learning; 5. Adult Learning Frameworks in Vocational Education; 6. Andragogy in Vocational Education; 7. Employability Skills; 8. Human Resource Management – Vocational Teacher; 9. The Professional of Vocational Teacher; 10. Needs Analysis; 11. Competencies Needs Analysis; 12. Training Needs Analysis; 13. Fuzzy Delphi Technique; 14. Analytic Hierarchy Process; 15. Vocational Teacher Performance Evaluation; 16. Importance Performance Analysis); B. Conceptual Framework; C. Relevance Research; D. Research Question
CHAPTER III RESEARCH METHOD: A. Research Approach; B. Qualitative Method (1. Research Location; 2. Source of Data; 3. Data Generating Technique; 4. Analysis Data Technique; 5. Data Credibility; 6. Preliminary Findings Formulation); C. Quantitative Method (1. Data Collecting Technique; 2. Research Instruments; 3. Analysis Data Technique); D. Time and Place Research; E. Data Analysis in Qualitative Quantitative Method
CHAPTER IV FINDINGS AND DISCUSSION: A. Findings (1. Vocational Teacher Conditions; 2. Teachers Competency with Balinese Local Wisdom; 3. The Criterion Competencies of Vocational Teacher; 4. The Importance Competencies of Vocational Teacher; 5. The Actual Competency of Vocational Teacher; 6. Competency Gaps Analysis using IPA; 7. Training Priority Order); B. Discussion; C. Limitation of Research
CHAPTER V CONCLUSIONS AND RECOMMENDATIONS: A. Conclusions; B. Recommendations
REFERENCES
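The IPA quadrant step that yields the training priority order can be sketched as follows. This is a hypothetical illustration: the competency names and scores echo the abstract, but the pairings of importance and performance values, and the quadrant labels, follow the conventional IPA scheme rather than the thesis's exact data:

```python
# Sketch of Importance-Performance Analysis (IPA) quadrant classification.
# Items above the mean importance but below the mean performance fall into
# the "Concentrate here" quadrant, i.e. the highest training priority.

def ipa_quadrant(importance, performance, imp_mean, perf_mean):
    """Classify an item by its position relative to the grand means."""
    if importance >= imp_mean and performance < perf_mean:
        return "Concentrate here"       # high importance, low performance
    if importance >= imp_mean:
        return "Keep up the good work"  # high importance, high performance
    if performance < perf_mean:
        return "Low priority"           # low importance, low performance
    return "Possible overkill"          # low importance, high performance

competencies = {
    # name: (importance weight, 360-degree performance score) -- illustrative
    "networking and collaboration": (0.30, 2.82),
    "application of content":       (0.17, 3.35),
    "ICT for self-development":     (0.06, 3.56),
}
imp_mean = sum(i for i, _ in competencies.values()) / len(competencies)
perf_mean = sum(p for _, p in competencies.values()) / len(competencies)
for name, (imp, perf) in competencies.items():
    print(name, "->", ipa_quadrant(imp, perf, imp_mean, perf_mean))
```

With these illustrative values, "networking and collaboration" lands in the "Concentrate here" quadrant, mirroring the abstract's finding that vocational competencies dominate the training priority order.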
37

Hodnocení výkonnosti podniku / The Company Performance Measurement

Koběrská, Simona January 2017 (has links)
The presented thesis focuses on evaluating the company's performance using the Balanced Scorecard method. The theoretical part introduces the performance measurement system and the theoretical background from the literature to which the following sections refer. The analytical part introduces the chosen company and its activities and situation, and presents a financial analysis. The outputs of these analyses become the background material for the proposal section, which deals with implementing the Balanced Scorecard in the analysed company in order to increase its performance and ensure its subsequent development.
38

Zhodnocení finanční situace podniku a návrhy na její zlepšení / Evaluation of the financial situation of the company and suggestions for its improvement

Stupka, Radim January 2020 (has links)
This diploma thesis focuses on the evaluation of the financial situation of the company ProMedica spol. s r.o. The evaluation is carried out with the tools of situational and financial analysis, and on its basis proposals are formulated to improve the company's financial health and increase its competitiveness.
39

Hodnocení výkonnosti podniku / Company Performance Measurement

Křivová, Tereza January 2017 (has links)
The main objective of this thesis is to evaluate the performance of the selected company and subsequently implement the Balanced Scorecard model. The theoretical part explains the concept of performance and approaches to its evaluation, and describes the selected strategic analyses. The essence of the Balanced Scorecard and the step-by-step process of its implementation in corporate management are explained in detail. The practical part describes the current situation in the company and, on the basis of the results of the individual analyses, proposes a project to implement the Balanced Scorecard. The final part of the thesis covers the implementation of the Balanced Scorecard in the company to increase its efficiency and support its further development.
40

Energy Measurements of High Performance Computing Systems: From Instrumentation to Analysis

Ilsche, Thomas 31 July 2020 (has links)
Energy efficiency is a major criterion for computing in general and High Performance Computing in particular. When optimizing for energy efficiency, it is essential to measure the underlying metric: energy consumption. To fully leverage energy measurements, their quality needs to be well-understood. To that end, this thesis provides a rigorous evaluation of various energy measurement techniques. I demonstrate how the deliberate selection of instrumentation points, sensors, and analog processing schemes can enhance the temporal and spatial resolution while preserving a well-known accuracy. Further, I evaluate a scalable energy measurement solution for production HPC systems and address its shortcomings. Such high-resolution and large-scale measurements present challenges regarding the management of large volumes of generated metric data. I address these challenges with a scalable infrastructure for collecting, storing, and analyzing metric data. With this infrastructure, I also introduce a novel persistent storage scheme for metric time series data, which allows efficient queries for aggregate timelines. To ensure that it satisfies the demanding requirements for scalable power measurements, I conduct an extensive performance evaluation and describe a productive deployment of the infrastructure. Finally, I describe different approaches and practical examples of analyses based on energy measurement data. In particular, I focus on the combination of energy measurements and application performance traces. However, interweaving fine-grained power recordings and application events requires accurately synchronized timestamps on both sides. To overcome this obstacle, I develop a resilient and automated technique for time synchronization, which utilizes cross-correlation of a specifically influenced power measurement signal.
Ultimately, this careful combination of sophisticated energy measurements and application performance traces yields a detailed insight into application and system energy efficiency at full-scale HPC systems and down to millisecond-range regions.
Contents:
1 Introduction
2 Background and Related Work: 2.1 Basic Concepts of Energy Measurements (2.1.1 Basics of Metrology; 2.1.2 Measuring Voltage, Current, and Power; 2.1.3 Measurement Signal Conditioning and Analog-to-Digital Conversion); 2.2 Power Measurements for Computing Systems (2.2.1 Measuring Compute Nodes using External Power Meters; 2.2.2 Custom Solutions for Measuring Compute Node Power; 2.2.3 Measurement Solutions of System Integrators; 2.2.4 CPU Energy Counters; 2.2.5 Using Models to Determine Energy Consumption); 2.3 Processing of Power Measurement Data (2.3.1 Time Series Databases; 2.3.2 Data Center Monitoring Systems); 2.4 Influences on the Energy Consumption of Computing Systems (2.4.1 Processor Power Consumption Breakdown; 2.4.2 Energy-Efficient Hardware Configuration); 2.5 HPC Performance and Energy Analysis (2.5.1 Performance Analysis Techniques; 2.5.2 HPC Performance Analysis Tools; 2.5.3 Combining Application and Power Measurements); 2.6 Conclusion
3 Evaluating and Improving Energy Measurements: 3.1 Description of the Systems Under Test; 3.2 Instrumentation Points and Measurement Sensors (3.2.1 Analog Measurement at Voltage Regulators; 3.2.2 Instrumentation with Hall Effect Transducers; 3.2.3 Modular Instrumentation of DC Consumers; 3.2.4 Optimal Wiring for Shunt-Based Measurements; 3.2.5 Node-Level Instrumentation for HPC Systems); 3.3 Analog Signal Conditioning and Analog-to-Digital Conversion (3.3.1 Signal Amplification; 3.3.2 Analog Filtering and Analog-To-Digital Conversion; 3.3.3 Integrated Solutions for High-Resolution Measurement); 3.4 Accuracy Evaluation and Calibration (3.4.1 Synthetic Workloads for Evaluating Power Measurements; 3.4.2 Improving and Evaluating the Accuracy of a Single-Node Measuring System; 3.4.3 Absolute Accuracy Evaluation of a Many-Node Measuring System); 3.5 Evaluating Temporal Granularity and Energy Correctness (3.5.1 Measurement Signal Bandwidth at Different Instrumentation Points; 3.5.2 Retaining Energy Correctness During Digital Processing); 3.6 Evaluating CPU Energy Counters (3.6.1 Energy Readouts with RAPL; 3.6.2 Methodology; 3.6.3 RAPL on Intel Sandy Bridge-EP; 3.6.4 RAPL on Intel Haswell-EP and Skylake-SP); 3.7 Conclusion
4 A Scalable Infrastructure for Processing Power Measurement Data: 4.1 Requirements for Power Measurement Data Processing; 4.2 Concepts and Implementation of Measurement Data Management (4.2.1 Message-Based Communication between Agents; 4.2.2 Protocols; 4.2.3 Application Programming Interfaces; 4.2.4 Efficient Metric Time Series Storage and Retrieval; 4.2.5 Hierarchical Timeline Aggregation); 4.3 Performance Evaluation (4.3.1 Benchmark Hardware Specifications; 4.3.2 Throughput in Symmetric Configuration with Replication; 4.3.3 Throughput with Many Data Sources and Single Consumers; 4.3.4 Temporary Storage in Message Queues; 4.3.5 Persistent Metric Time Series Request Performance; 4.3.6 Performance Comparison with Contemporary Time Series Storage Solutions; 4.3.7 Practical Usage of MetricQ); 4.4 Conclusion
5 Energy Efficiency Analysis: 5.1 General Energy Efficiency Analysis Scenarios (5.1.1 Live Visualization of Power Measurements; 5.1.2 Visualization of Long-Term Measurements; 5.1.3 Integration in Application Performance Traces; 5.1.4 Graphical Analysis of Application Power Traces); 5.2 Correlating Power Measurements with Application Events (5.2.1 Challenges for Time Synchronization of Power Measurements; 5.2.2 Reliable Automatic Time Synchronization with Correlation Sequences; 5.2.3 Creating a Correlation Signal on a Power Measurement Channel; 5.2.4 Processing the Correlation Signal and Measured Power Values; 5.2.5 Common Oversampling of the Correlation Signals at Different Rates; 5.2.6 Evaluation of Correlation and Time Synchronization); 5.3 Use Cases for Application Power Traces (5.3.1 Analyzing Complex Power Anomalies; 5.3.2 Quantifying C-State Transitions; 5.3.3 Measuring the Dynamic Power Consumption of HPC Applications); 5.4 Conclusion
6 Summary and Outlook
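The cross-correlation-based time synchronization described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not code from the thesis: the load pattern, noise level, and sample counts are invented, and a simple argmax of the correlation stands in for the full resilient procedure:

```python
import numpy as np

# Sketch of time synchronization via cross-correlation: a known correlation
# sequence is deliberately imprinted on the power draw, then located in the
# measured power trace to recover the clock offset between the two systems.

rng = np.random.default_rng(0)

# A random +/-1 "load pattern" of 32 chips, each held for 10 samples,
# standing in for a workload that toggles between high and low power.
pattern = np.repeat(rng.choice([-1.0, 1.0], 32), 10)

# Simulated power measurement: background noise plus the pattern injected
# at an unknown (to the receiver) sample offset.
trace = rng.normal(0.0, 0.1, 2000)
true_offset = 700
trace[true_offset:true_offset + pattern.size] += pattern

# Cross-correlate the pattern against the trace; the argmax of the
# correlation gives the sample offset between the two clocks.
corr = np.correlate(trace, pattern, mode="valid")
estimated_offset = int(np.argmax(corr))
print(estimated_offset)
```

The peak of the correlation stands out sharply against noise because the pattern is long and zero-mean on average; the thesis's actual technique additionally handles differing sample rates via common oversampling.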
