41
Measuring energy consumption for short code paths using RAPL
Hähnel, Marcus; Döbel, Björn; Völp, Marcus; Härtig, Hermann
28 May 2013 (has links)
Measuring the energy consumption of software components is a major building block for generating models that allow for energy-aware scheduling, accounting and budgeting. Current measurement techniques focus on coarse-grained measurements of application or system events. However, fine-grained adjustments, in particular in the operating-system kernel and in application-level servers, require power profiles at the level of a single software function. Until recently, this appeared to be impossible due to the lack of fine-grained resolution and the high cost of measurement equipment. In this paper we report on our experience in using the Running Average Power Limit (RAPL) energy sensors available in recent Intel CPUs for measuring the energy consumption of short code paths. We investigate the granularity at which RAPL measurements can be performed and discuss practical obstacles that occur when performing these measurements on complex modern CPUs. Furthermore, we demonstrate how to use the RAPL infrastructure to characterize the energy costs of decoding video slices.
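The measurement idea can be reproduced without dedicated equipment. The following sketch reads the package energy counter around a short code path via the Linux powercap interface; it is an illustration under assumptions (a recent Intel CPU, the intel_rapl driver loaded, read access to /sys/class/powercap/intel-rapl:0, typically requiring root) and not the MSR-based setup used in the paper. As the paper discusses, RAPL updates only at roughly millisecond granularity, so genuinely short code paths have to be repeated and averaged.

# Illustrative sketch (assumptions noted above), not the paper's measurement setup.
import time

RAPL_DIR = "/sys/class/powercap/intel-rapl:0"  # package domain 0 (assumed path)

def read_energy_uj() -> int:
    # Cumulative package energy counter in microjoules.
    with open(f"{RAPL_DIR}/energy_uj") as f:
        return int(f.read())

def read_max_energy_uj() -> int:
    # Wrap-around point of the finite energy counter.
    with open(f"{RAPL_DIR}/max_energy_range_uj") as f:
        return int(f.read())

def measure(code_path, *args, repetitions=1000):
    # Repeat the code path so the measured interval spans several RAPL updates.
    wrap = read_max_energy_uj()
    e0, t0 = read_energy_uj(), time.perf_counter()
    for _ in range(repetitions):
        code_path(*args)
    e1, t1 = read_energy_uj(), time.perf_counter()
    joules = ((e1 - e0) % wrap) * 1e-6     # handles a single counter wrap-around
    return joules / repetitions, (t1 - t0) / repetitions

if __name__ == "__main__":
    j, s = measure(sum, range(100_000))
    print(f"{j * 1e3:.3f} mJ and {s * 1e3:.3f} ms per call")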
42
Energy Measurements of High Performance Computing Systems: From Instrumentation to Analysis
Ilsche, Thomas
31 July 2020 (has links)
Energy efficiency is a major criterion for computing in general and High Performance Computing in particular. When optimizing for energy efficiency, it is essential to measure the underlying metric: energy consumption. To fully leverage energy measurements, their quality needs to be well-understood. To that end, this thesis provides a rigorous evaluation of various energy measurement techniques. I demonstrate how the deliberate selection of instrumentation points, sensors, and analog processing schemes can enhance the temporal and spatial resolution while preserving a well-known accuracy. Further, I evaluate a scalable energy measurement solution for production HPC systems and address its shortcomings.
Such high-resolution and large-scale measurements present challenges regarding the management of large volumes of generated metric data. I address these challenges with a scalable infrastructure for collecting, storing, and analyzing metric data. With this infrastructure, I also introduce a novel persistent storage scheme for metric time series data, which allows efficient queries for aggregate timelines.
To ensure that it satisfies the demanding requirements for scalable power measurements, I conduct an extensive performance evaluation and describe a productive deployment of the infrastructure.
Finally, I describe different approaches and practical examples of analyses based on energy measurement data. In particular, I focus on the combination of energy measurements and application performance traces. However, interweaving fine-grained power recordings and application events requires accurately synchronized timestamps on both sides. To overcome this obstacle, I develop a resilient and automated technique for time synchronization, which utilizes cross-correlation of a specifically influenced power measurement signal. Ultimately, this careful combination of sophisticated energy measurements and application performance traces yields detailed insight into application and system energy efficiency on full-scale HPC systems, down to millisecond-range regions. (A minimal sketch of the cross-correlation idea is given after the chapter outline below.)
1 Introduction
2 Background and Related Work
2.1 Basic Concepts of Energy Measurements
2.1.1 Basics of Metrology
2.1.2 Measuring Voltage, Current, and Power
2.1.3 Measurement Signal Conditioning and Analog-to-Digital Conversion
2.2 Power Measurements for Computing Systems
2.2.1 Measuring Compute Nodes using External Power Meters
2.2.2 Custom Solutions for Measuring Compute Node Power
2.2.3 Measurement Solutions of System Integrators
2.2.4 CPU Energy Counters
2.2.5 Using Models to Determine Energy Consumption
2.3 Processing of Power Measurement Data
2.3.1 Time Series Databases
2.3.2 Data Center Monitoring Systems
2.4 Influences on the Energy Consumption of Computing Systems
2.4.1 Processor Power Consumption Breakdown
2.4.2 Energy-Efficient Hardware Configuration
2.5 HPC Performance and Energy Analysis
2.5.1 Performance Analysis Techniques
2.5.2 HPC Performance Analysis Tools
2.5.3 Combining Application and Power Measurements
2.6 Conclusion
3 Evaluating and Improving Energy Measurements
3.1 Description of the Systems Under Test
3.2 Instrumentation Points and Measurement Sensors
3.2.1 Analog Measurement at Voltage Regulators
3.2.2 Instrumentation with Hall Effect Transducers
3.2.3 Modular Instrumentation of DC Consumers
3.2.4 Optimal Wiring for Shunt-Based Measurements
3.2.5 Node-Level Instrumentation for HPC Systems
3.3 Analog Signal Conditioning and Analog-to-Digital Conversion
3.3.1 Signal Amplification
3.3.2 Analog Filtering and Analog-To-Digital Conversion
3.3.3 Integrated Solutions for High-Resolution Measurement
3.4 Accuracy Evaluation and Calibration
3.4.1 Synthetic Workloads for Evaluating Power Measurements
3.4.2 Improving and Evaluating the Accuracy of a Single-Node Measuring System
3.4.3 Absolute Accuracy Evaluation of a Many-Node Measuring System
3.5 Evaluating Temporal Granularity and Energy Correctness
3.5.1 Measurement Signal Bandwidth at Different Instrumentation Points
3.5.2 Retaining Energy Correctness During Digital Processing
3.6 Evaluating CPU Energy Counters
3.6.1 Energy Readouts with RAPL
3.6.2 Methodology
3.6.3 RAPL on Intel Sandy Bridge-EP
3.6.4 RAPL on Intel Haswell-EP and Skylake-SP
3.7 Conclusion
4 A Scalable Infrastructure for Processing Power Measurement Data
4.1 Requirements for Power Measurement Data Processing
4.2 Concepts and Implementation of Measurement Data Management
4.2.1 Message-Based Communication between Agents
4.2.2 Protocols
4.2.3 Application Programming Interfaces
4.2.4 Efficient Metric Time Series Storage and Retrieval
4.2.5 Hierarchical Timeline Aggregation
4.3 Performance Evaluation
4.3.1 Benchmark Hardware Specifications
4.3.2 Throughput in Symmetric Configuration with Replication
4.3.3 Throughput with Many Data Sources and Single Consumers
4.3.4 Temporary Storage in Message Queues
4.3.5 Persistent Metric Time Series Request Performance
4.3.6 Performance Comparison with Contemporary Time Series Storage Solutions
4.3.7 Practical Usage of MetricQ
4.4 Conclusion
5 Energy Efficiency Analysis
5.1 General Energy Efficiency Analysis Scenarios
5.1.1 Live Visualization of Power Measurements
5.1.2 Visualization of Long-Term Measurements
5.1.3 Integration in Application Performance Traces
5.1.4 Graphical Analysis of Application Power Traces
5.2 Correlating Power Measurements with Application Events
5.2.1 Challenges for Time Synchronization of Power Measurements
5.2.2 Reliable Automatic Time Synchronization with Correlation Sequences
5.2.3 Creating a Correlation Signal on a Power Measurement Channel
5.2.4 Processing the Correlation Signal and Measured Power Values
5.2.5 Common Oversampling of the Correlation Signals at Different Rates
5.2.6 Evaluation of Correlation and Time Synchronization
5.3 Use Cases for Application Power Traces
5.3.1 Analyzing Complex Power Anomalies
5.3.2 Quantifying C-State Transitions
5.3.3 Measuring the Dynamic Power Consumption of HPC Applications
5.4 Conclusion
6 Summary and Outlook
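Section 5.2 of the outline above describes time synchronization by cross-correlating a deliberately induced power pattern with the recorded power signal. The following sketch is an illustration of that idea under assumptions of our own (synthetic data, a simple on/off load pattern), not the thesis implementation: the lag of the cross-correlation peak between the measured power and the known pattern yields the clock offset.

# Illustrative sketch (synthetic data), not the thesis implementation.
import numpy as np

rate = 1000                                         # power samples per second
pattern = np.repeat([0.0, 1.0] * 8, rate // 10)     # known 1.6 s on/off load pattern

# Simulate a noisy power trace with the pattern starting at a known true offset.
rng = np.random.default_rng(0)
true_offset_s = 2.345
power = 50.0 + rng.normal(0.0, 0.5, 10 * rate)      # 10 s of baseline power in watts
start = int(true_offset_s * rate)
power[start:start + pattern.size] += 20.0 * pattern # induced load adds about 20 W

# The lag of the cross-correlation maximum estimates the offset between the clocks.
corr = np.correlate(power - power.mean(), pattern - pattern.mean(), mode="valid")
estimated_offset_s = np.argmax(corr) / rate
print(f"true offset {true_offset_s:.3f} s, estimated {estimated_offset_s:.3f} s")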
43
Energy-Efficient In-Memory Database Computing
Lehner, Wolfgang
January 2013 (has links)
The efficient and flexible management of large datasets is one of the core requirements of modern business applications. Having access to consistent and up-to-date information is the foundation for operational, tactical, and strategic decision making. Within the last few years, the database community has sparked a large number of extremely innovative research projects to push the envelope in the context of modern database system architectures. In this paper, we outline requirements and influencing factors to identify some of the hot research topics in database management systems. We argue that—even after 30 years of active database research—the time is right to rethink some of the core architectural principles and come up with novel approaches to meet the requirements of the next decades in data management. The sheer number of diverse and novel (e.g., scientific) application areas, the existence of modern hardware capabilities, and the need for large data centers to become more energy-efficient will be the drivers for database research in the years to come.
44
Wireless Interconnect for Board and Chip Level
Fettweis, Gerhard P.; ul Hassan, Najeeb; Landau, Lukas; Fischer, Erik
January 2013 (has links)
Electronic systems of the future require a very high bandwidth communications infrastructure within the system. This way, the massive amount of compute power that will become available can be inter-connected to realize powerful future electronic systems. Electronic inter-connects between 3D chip-stacks, as well as intra-connects within 3D chip-stacks, are soon expected to reach data rates of 100 Gbit/s. Hence, the question to be answered is how to efficiently design the communications infrastructure within future electronic systems. This paper addresses approaches and results for building this infrastructure for future electronics.
45
Waiting for Locks: How Long Does It Usually Take?
Baier, Christel; Daum, Marcus; Engel, Benjamin; Härtig, Hermann; Klein, Joachim; Klüppelholz, Sascha; Märcker, Steffen; Tews, Hendrik; Völp, Marcus
January 2012 (has links)
Reliability of low-level operating-system (OS) code is an indispensable requirement. This includes functional properties from the safety-liveness spectrum, but also quantitative properties stating, e.g., that the average waiting time on locks is sufficiently small or that the energy requirement of a certain system call is below a given threshold with high probability. This paper reports on our experience in an ongoing project whose goal is to apply probabilistic model checking techniques and to align the results of the model checker with measurements in order to predict quantitative properties of low-level OS code.
46
Chiefly Symmetric: Results on the Scalability of Probabilistic Model Checking for Operating-System Code
Baier, Christel; Daum, Marcus; Engel, Benjamin; Härtig, Hermann; Klein, Joachim; Klüppelholz, Sascha; Märcker, Steffen; Tews, Hendrik; Völp, Marcus
January 2012 (has links)
Reliability in terms of functional properties from the safety-liveness spectrum is an indispensable requirement for low-level operating-system (OS) code. However, with ever more complex and thus less predictable hardware, quantitative and probabilistic guarantees become more and more important. Probabilistic model checking is one technique to automatically obtain these guarantees. First experiences with the automated quantitative analysis of low-level operating-system code confirm the expectation that the naive probabilistic model checking approach rapidly reaches its limits as the number of processes increases. This paper reports on our work in progress to tackle the state explosion problem for low-level OS code caused by the exponential blow-up of the model size when the number of processes grows. We studied the symmetry reduction approach and carried out our experiments with a simple test-and-test-and-set lock case study as a representative example for a wide range of protocols with natural inter-process dependencies and long-run properties. We quickly see a state-space explosion for scenarios where inter-process dependencies are insignificant. However, once inter-process dependencies dominate the picture, models with a hundred and more processes can be constructed and analysed.
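To give a concrete feel for the case study, the sketch below is a toy discrete-time Monte-Carlo simulation of processes contending for a test-and-test-and-set lock; it is our own illustration of the quantity under analysis (average waiting time per lock acquisition as the process count grows), not the probabilistic models or the symmetry-reduction technique used in the paper.

# Toy simulation (illustration only), not the models analysed in the paper.
import random

def simulate_lock(num_procs, steps, hold_time=2, request_prob=0.3, seed=0):
    # Estimate the average waiting time (in time steps) per lock acquisition.
    rng = random.Random(seed)
    holder = None            # process currently holding the lock, or None
    remaining = 0            # steps until the current holder releases
    waiting_since = {}       # process -> step at which it started spinning
    total_wait = acquisitions = 0

    for step in range(steps):
        if holder is not None:           # release the lock when the holder is done
            remaining -= 1
            if remaining == 0:
                holder = None
        for p in range(num_procs):       # idle processes start requesting the lock
            if p != holder and p not in waiting_since and rng.random() < request_prob:
                waiting_since[p] = step
        if holder is None and waiting_since:
            # One spinning process wins the atomic test-and-set (chosen at random here).
            winner = rng.choice(list(waiting_since))
            total_wait += step - waiting_since.pop(winner)
            acquisitions += 1
            holder, remaining = winner, hold_time

    return total_wait / acquisitions if acquisitions else 0.0

if __name__ == "__main__":
    for n in (2, 4, 8, 16, 32):
        print(f"{n:3d} processes: avg wait {simulate_lock(n, 100_000):.1f} steps")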
47
Secure degrees of freedom on widely linear instantaneous relay-assisted interference channel
Ho, Zuleita K.-M.; Jorswieck, Eduard
January 2013 (has links)
The number of secure data streams a relay-assisted interference channel can support has been an intriguing problem. The problem is not solved even for a fundamental scenario with a single antenna at each transmitter, receiver, and relay. In this paper, we study the achievable secure degrees of freedom of instantaneous relay-assisted interference channels with real and complex channel coefficients. Due to the secrecy constraints, the study of secure degrees of freedom with complex coefficients is not a trivial multi-user extension of the scenarios with real channel coefficients, as it is for the conventional degrees of freedom. We tackle this challenge by jointly designing improper transmit signals and widely linear relay processing strategies.
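For orientation, secure degrees of freedom are customarily defined as the pre-log of the secrecy rate at high transmit power; the notation below is illustrative and not taken from the paper. The different normalization for real- and complex-valued signals is one reason why the complex case is not a trivial restatement of the real one:

d_s = \lim_{P \to \infty} \frac{R_s(P)}{\log_2 P} \quad \text{(complex-valued signals)},
\qquad
d_s = \lim_{P \to \infty} \frac{R_s(P)}{\tfrac{1}{2}\log_2 P} \quad \text{(real-valued signals)},

where R_s(P) denotes an achievable secrecy rate under transmit power constraint P.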
48
Interference Leakage Neutralization in Two-Hop Wiretap Channels with Partial CSI
Engelmann, Sabrina; Ho, Zuleita K.-M.; Jorswieck, Eduard A.
January 2013 (has links)
In this paper, we analyze the four-node relay wiretap channel, where the relay performs amplify-and-forward and no direct link between transmitter and receiver is available. The transmitter has multiple antennas, which assist in securing the transmission over both phases. In the case of full channel state information (CSI), the transmitter can apply information leakage neutralization in order to prevent the eavesdropper from obtaining any information about the transmitted signal. This becomes more challenging if the transmitter has only an outdated estimate of the channel from the relay to the eavesdropper. For this case, we optimize the worst-case secrecy rate by intelligently choosing the beamforming vectors and the power allocation at the transmitter and the relay.
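The robust design for the outdated-CSI case can be summarized, in illustrative notation of our own rather than the paper's, as a max-min problem: the beamforming vectors and power allocation are chosen to maximize the secrecy rate under the worst eavesdropper channel consistent with the outdated estimate,

R_s^{\mathrm{wc}} = \max_{\mathbf{w}_1, \mathbf{w}_2,\, p_R} \; \min_{\mathbf{h}_{RE} \in \mathcal{U}(\hat{\mathbf{h}}_{RE})} \Big[ R_D(\mathbf{w}_1, \mathbf{w}_2, p_R) - R_E(\mathbf{w}_1, \mathbf{w}_2, p_R, \mathbf{h}_{RE}) \Big]^+ ,

where \hat{\mathbf{h}}_{RE} is the outdated estimate of the relay-to-eavesdropper channel, \mathcal{U}(\cdot) the corresponding uncertainty set, R_D and R_E the rates at the legitimate receiver and the eavesdropper over the two phases, and [x]^+ = \max(x, 0).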
49
HAEC News
January 2013 (has links)
No description available.
50
HAEC News
06 September 2013 (has links) (PDF)
No description available.