About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

High-speed performance and power modeling

Sunwoo, Dam 01 October 2010 (has links)
The high cost of designing, testing, and manufacturing semiconductor chips makes simulation essential for predicting performance and power throughout the design cycle of hardware components. However, standard detailed software performance/power simulators are too slow to finish real-life benchmarks within the design cycle, so accuracy is often traded away for improved simulator performance. This dissertation explores the FPGA-Accelerated Simulation Technologies (FAST) methodology, which can dramatically improve simulation performance without sacrificing accuracy. Design trade-offs of the functional-model partition of a FAST simulator are discussed, and QUICK, an implementation of a FAST functional model designed to provide fast functional execution as well as the ability to roll back and execute down different paths, is described. QUICK is general enough to be useful beyond FPGA-accelerated simulators and provides complex-ISA (x86) and full-system support. A complete FAST simulator that combines QUICK with an FPGA-based timing model runs at millions of x86 instructions per second, several orders of magnitude faster than software simulators of comparable accuracy, and boots unmodified Windows XP and Linux. Ideally, one could model power at the same speed as performance in a FAST simulator, but traditional software-implemented power estimation techniques are very slow. PrEsto, a new power modeling methodology that automatically generates accurate power models which can efficiently fit and operate within FAST simulators, is proposed. Such models can dramatically improve the accuracy and performance of architectural power estimation. Improving high-accuracy simulator performance opens research directions that could not be explored economically in the past. The combination of simulation performance, accuracy, and power estimation capabilities extends the usefulness of such simulators, enabling the co-design of architecture, hardware implementation, operating systems, and software.
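To illustrate the functional/timing partition described above, the sketch below shows how a functional model with checkpoint-and-rollback support might be structured so that a timing model can speculatively execute down one path and later discard its effects. It is a minimal, hypothetical example: the class and method names are not taken from the dissertation, and QUICK's actual implementation is FPGA-oriented and far more elaborate.

```python
import copy

class FunctionalModel:
    """Toy speculative functional model: executes instructions and can roll
    back to a checkpoint when the timing model detects a mispredicted path.
    (Names and structure are illustrative, not from the dissertation.)"""

    def __init__(self):
        self.regs = {f"r{i}": 0 for i in range(8)}
        self.mem = {}
        self.pc = 0

    def checkpoint(self):
        # Snapshot architectural state before executing down a predicted path.
        return copy.deepcopy((self.regs, self.mem, self.pc))

    def rollback(self, snap):
        # Restore state so execution can proceed down the other path.
        self.regs, self.mem, self.pc = copy.deepcopy(snap)

    def step(self, instr):
        # Minimal ISA: ("addi", dst, src, imm) and ("store", addr, src).
        op = instr[0]
        if op == "addi":
            _, dst, src, imm = instr
            self.regs[dst] = self.regs[src] + imm
        elif op == "store":
            _, addr, src = instr
            self.mem[addr] = self.regs[src]
        self.pc += 1


model = FunctionalModel()
snap = model.checkpoint()            # timing model predicts "taken"
model.step(("addi", "r1", "r0", 5))  # execute speculatively
model.rollback(snap)                 # prediction was wrong: discard effects
assert model.regs["r1"] == 0 and model.pc == 0
```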
2

Improving Energy-Efficiency of Multicores using First-Order Modeling

Spiliopoulos, Vasileios January 2016 (has links)
In recent decades, power consumption has become one of the most critical resources in a computer system. Whether in the form of electricity bills in data centers, battery life in mobile devices, or thermal constraints in desktops and laptops, power consumption imposes several limitations on today's processors, and improving power and energy efficiency is one of the most urgent research topics in Computer Architecture. Dynamic Voltage and Frequency Scaling (DVFS) and Cache Resizing are among the most popular energy saving techniques. Previous work, however, has focused on developing heuristics and trial-and-error methods that yield acceptable savings but fail to provide insight into how these techniques affect the power and performance of a computer system. In contrast, this Thesis proposes the use of first-order modeling to improve the energy efficiency of computer systems. A first-order model needs to be (i) accurate enough to efficiently drive DVFS and Cache Resizing decisions, and (ii) simple enough to eliminate the overhead of collecting the required model inputs. We show that such models can be constructed and successfully applied in modern systems. For DVFS, we propose to scale frequency down to exploit applications' memory slack, i.e., periods that the processor spends waiting for data to be fetched from main memory. In such cases, the processor frequency can be scaled down to save energy without an inordinate performance penalty. Our DVFS models can detect slack and predict the impact of DVFS on both power and performance with great accuracy. Cache Resizing, on the other hand, relies on the fact that many applications do not benefit from the vast amount of cache that modern processors are equipped with. In such cases, the cache can be resized to save static energy at a limited performance cost. Since both techniques are related to the memory behavior of applications, we propose a unified model to manage the two techniques in tandem and maximize energy efficiency through synergistic DVFS and Cache Resizing. Finally, our experience with DVFS in real systems motivated us to contribute to the integration of DVFS into the gem5 simulator. Unlike other simulators that ignore the role of the OS in DVFS, we extend gem5 with the hardware and software components that allow the existing Linux DVFS infrastructure to be seamlessly integrated into the simulator.
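As an illustration of the first-order reasoning described above, the sketch below splits execution time into a frequency-scaled compute component and a frequency-independent memory-stall component ("memory slack"), and uses it to compare candidate DVFS operating points. This is a generic sketch under assumed proportionality constants, not the thesis's calibrated models; all numeric values are hypothetical.

```python
def predict_runtime(t_compute, t_memory, f_base, f_new):
    """First-order DVFS performance model (illustrative): compute time scales
    inversely with frequency, while time stalled on main memory does not."""
    return t_compute * (f_base / f_new) + t_memory

def predict_energy(t_compute, t_memory, f_base, f_new,
                   v_base, v_new, p_dyn_base, p_static):
    """Dynamic power is assumed to scale with C*V^2*f; static power is taken
    as constant. All constants here are hypothetical."""
    runtime = predict_runtime(t_compute, t_memory, f_base, f_new)
    p_dyn = p_dyn_base * (v_new / v_base) ** 2 * (f_new / f_base)
    # Dynamic energy is only consumed while the core is busy computing.
    return p_dyn * t_compute * (f_base / f_new) + p_static * runtime

# Memory-bound interval: 2 ms of compute, 8 ms of memory stalls at 3.0 GHz.
for f, v in [(3.0, 1.10), (2.0, 0.95), (1.2, 0.80)]:
    t = predict_runtime(0.002, 0.008, 3.0, f)
    e = predict_energy(0.002, 0.008, 3.0, f, 1.10, v, 12.0, 3.0)
    print(f"{f:.1f} GHz: {t*1e3:.2f} ms, {e*1e3:.1f} mJ")
```

On memory-bound intervals like this one, the model predicts that lower frequencies cost little runtime while saving energy, which is exactly the slack the thesis proposes to exploit.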
3

Pricing of Power Derivatives

Foukal, Viktor January 2014 (has links)
The main goal of this thesis is to summarize and demonstrate the main characteristics of power markets and to develop an electricity spot-price model. The thesis starts with a definition of market participants, a typology of traded contracts, and a description of market development, with a focus on South-Eastern Europe. It continues with the development of a consumption function and the theoretical concept of the Demand/Capacity ratio, which is used in short-term/spot modeling and serves to identify the risk of a potential increase in volatility. After deriving fundamental models, I continue with a stochastic model: a Volatility Regime model with jump diffusion. I use this knowledge and the observed patterns to value illiquid power options with daily settlement.
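The sketch below shows the general shape of such a spot model: a mean-reverting diffusion in the log price with occasional jumps, fed into a Monte-Carlo valuation of a daily-settled option. It is a generic illustration, not the thesis's Volatility Regime model; every parameter value is hypothetical and uncalibrated.

```python
import numpy as np

def simulate_spot(s0, days, kappa=0.3, mu=50.0, sigma=0.04,
                  jump_prob=0.02, jump_mean=0.5, jump_std=0.2, seed=0):
    """Simulate daily electricity spot prices with mean reversion in the log
    price and occasional spikes (generic jump-diffusion sketch; parameters
    are hypothetical and not calibrated to any market)."""
    rng = np.random.default_rng(seed)
    log_s, log_mu = np.log(s0), np.log(mu)
    path = [s0]
    for _ in range(days):
        diffusion = kappa * (log_mu - log_s) + sigma * rng.standard_normal()
        jump = rng.normal(jump_mean, jump_std) if rng.random() < jump_prob else 0.0
        log_s += diffusion + jump
        path.append(np.exp(log_s))
    return np.array(path)

# Monte-Carlo value of a daily-settled call on the spot (strike 60 EUR/MWh).
strike = 60.0
paths = [simulate_spot(45.0, 365, seed=i) for i in range(500)]
payoff = np.mean([np.maximum(p[1:] - strike, 0.0).mean() for p in paths])
print(f"Average daily payoff: {payoff:.2f} EUR/MWh (before discounting)")
```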
4

Physical Synthesis Toolkit for Area and Power Optimization on FPGAs

Czajkowski, Tomasz Sebastian 19 January 2009 (has links)
A Field-Programmable Gate Array (FPGA) is a configurable platform for implementing a variety of logic circuits. It implements a circuit by means of logic elements, usually lookup tables, connected by a programmable routing network. To utilize an FPGA effectively, Computer-Aided Design (CAD) tools have been developed. These tools implement circuits using a traditional CAD flow, in which the circuit is analyzed, synthesized, technology mapped, and finally placed and routed on the FPGA fabric. This flow, while generally effective, can produce sub-optimal results because once a stage of the flow is completed it is not revisited. This problem is addressed by an enhanced flow known as Physical Synthesis, which consists of a set of iterations of the traditional flow with one key difference: the result of each iteration directly affects the result of the following iteration. An optimization can therefore be evaluated and then adjusted as needed in the following iterations, resulting in an overall better implementation. This CAD flow is challenging to work with because, for a given FPGA, researchers require access to each stage of the flow in an iterative fashion. This is particularly challenging when targeting modern commercial FPGAs, which are far more complex than the simple lookup-table-and-flip-flop model generally used by the academic community. This dissertation describes a unified framework, called the Physical Synthesis Toolkit (PST), for research and development of optimizations for modern FPGA devices. PST provides access to modern FPGA devices and CAD tool flows to facilitate research, while keeping the effort required to adapt the framework to a new FPGA device to a minimum. To demonstrate that PST is an effective research platform, this dissertation describes optimization and modeling techniques that were implemented inside it. The optimizations include an area reduction technique for XOR-based logic circuits implemented on a 4-LUT-based FPGA (25.3% area reduction) and a dynamic power reduction technique that reduces glitches in a circuit implemented on an Altera Stratix II FPGA (7% dynamic power reduction). The modeling technique is a novel toggle rate estimation approach based on XOR-based decomposition, which reduces the estimation error by 37% compared to the latest release of the Altera Quartus II CAD tool.
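The skeleton below illustrates the iterative idea behind such a flow: each pass through the traditional CAD flow produces metrics that drive the optimization applied before the next pass. It is only a conceptual sketch; the real PST interfaces with vendor CAD tools, and the function names here are placeholders, not the toolkit's API.

```python
# Illustrative skeleton of a physical-synthesis loop (not the PST API):
# each iteration of the traditional flow feeds its results back into the
# next one, so an optimization can be re-evaluated and adjusted.

def physical_synthesis(netlist, run_flow, optimize, iterations=3):
    """run_flow: synthesizes, maps, places, and routes a netlist, returning
    (implementation, metrics); optimize: uses those metrics to transform the
    netlist for the next pass. Both are caller-supplied placeholders, since
    the real toolkit drives commercial CAD flows."""
    best_impl, best_metrics = None, None
    for _ in range(iterations):
        impl, metrics = run_flow(netlist)
        if best_metrics is None or metrics["power"] < best_metrics["power"]:
            best_impl, best_metrics = impl, metrics
        # Feed this iteration's results into the next: e.g. restructure
        # XOR logic or retime registers on the critical, glitch-prone nets.
        netlist = optimize(netlist, metrics)
    return best_impl, best_metrics
```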
5

Operating reserve assessment of wind integrated power systems

Karki, Bipul 05 April 2010
Wind power is variable, uncertain, intermittent, and site-specific. The operating capacity credit associated with a wind farm is therefore considerably different from that assigned to a conventional generating unit, and as wind penetration in conventional power systems increases, it is vital that wind power be fully integrated in power system planning and operating protocols.

The research described in this thesis is focused on the determination of the operating capacity benefits associated with adding wind power to a conventional power system. Probabilistic techniques are used to quantify the risk and operating capacity benefits under various risk criteria. A short-term wind speed probability distribution and short-term wind power probability distribution forecasting model is presented, and a multi-state model of a wind farm is utilized to determine several operating performance indices. The concepts and the developed model are illustrated by application to two published test systems. The increase in peak load carrying capability attributable to added wind power is examined under a range of system operating conditions that include the effects of seasonality, locality, and wind parameter trends. The operating capacity credit associated with dependent and independent wind farms is also examined. The dependent and independent conditions provide boundary values that clearly indicate the effects of wind speed correlation. Well-being analysis, which incorporates the accepted deterministic criterion in an evaluation of the system operating state probabilities, is applied to the wind-integrated test systems using a novel approach to calculate the operating state probabilities. Most modern power systems are interconnected with one or more other power systems and therefore have increased access and exposure to wind power. This thesis examines the risk benefits associated with wind-integrated interconnected power systems under various conditions using the two test systems.

The research described in this thesis clearly illustrates that the operating capacity benefits associated with wind power can be quantified and used in making generating capacity scheduling decisions in a wind-integrated power system.
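The sketch below illustrates one of the underlying calculations: combining a multi-state wind farm model (derived from a short-term wind power distribution) with conventional unit outage probabilities to estimate the risk that committed capacity falls short of load. The state values, probabilities, and load are hypothetical, and the method is a simplified stand-in for the techniques developed in the thesis.

```python
from itertools import product

# Hypothetical multi-state wind farm model: (output in MW, probability),
# derived from a short-term wind speed forecast and the turbine power curve.
wind_states = [(0, 0.30), (20, 0.40), (50, 0.25), (80, 0.05)]

# Conventional units: (capacity in MW, outage replacement rate for the lead time).
units = [(100, 0.01), (100, 0.01), (60, 0.02)]

def unit_commitment_risk(load, units, wind_states):
    """Probability that committed capacity (conventional + wind) falls short
    of the load over the lead time (illustrative calculation)."""
    risk = 0.0
    # Enumerate every combination of unit up/down states and wind states.
    for avail in product(*[((cap, 1 - orr), (0, orr)) for cap, orr in units]):
        cap_conv = sum(c for c, _ in avail)
        p_conv = 1.0
        for _, p in avail:
            p_conv *= p
        for cap_wind, p_wind in wind_states:
            if cap_conv + cap_wind < load:
                risk += p_conv * p_wind
    return risk

risk = unit_commitment_risk(220, units, wind_states)
print(f"Unit commitment risk at 220 MW load: {risk:.4f}")
```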
6

Predictive power management for multi-core processors

Bircher, William Lloyd 07 February 2011 (has links)
Energy consumption by computing systems is rapidly increasing due to the growth of data centers and pervasive computing. In 2006, data center energy usage in the United States reached 61 billion kilowatt-hours (kWh) at an annual cost of 4.5 billion USD [Pl08], and it is projected to reach 100 billion kWh by 2011 at a cost of 7.4 billion USD. The nature of energy usage in these systems provides an opportunity to reduce consumption: the power and performance demand of computing systems vary widely in time and across workloads. This has led to the design of dynamically adaptive, or power-managed, systems. At runtime, these systems can be reconfigured to provide optimal performance and power capacity to match workload demand, yet in practice the system is frequently over- or under-provisioned. Similarly, the power demand of the system is difficult to account for: the aggregate power consumption is composed of many heterogeneous components, each with a unique power consumption characteristic. This research addresses the problem of when to apply dynamic power management in multi-core processors by accounting for and predicting power and performance demand at the core level. By tracking performance events at the processor core or thread level, power consumption can be accounted for at each of the major components of the computing system through empirical power models. This also provides accounting for individual components within a shared resource such as a power plane or top-level cache. This view of the system exposes the fundamental performance and power phase behavior, thus making prediction possible. This dissertation also presents an extensive analysis of complete-system power accounting for systems and workloads ranging from servers to desktops and laptops. The analysis leads to the development of a simple, effective prediction scheme for controlling power adaptations. The proposed Periodic Power Phase Predictor (PPPP) identifies patterns of activity in multi-core systems and predicts transitions between activity levels. This predictor is shown to increase performance and reduce power consumption compared to reactive, commercial power management schemes by achieving a higher average frequency in active phases and a lower average frequency in idle phases.
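The sketch below conveys the idea of a periodic phase predictor: it records quantized activity levels, looks for the shortest period over which the recent history repeats, and predicts that the pattern continues. This is a simplified, hypothetical rendering; it does not reproduce the dissertation's PPPP design.

```python
from collections import deque

class PeriodicPhasePredictor:
    """Simplified sketch of a periodic activity-phase predictor: it finds the
    shortest period consistent with the recent history of quantized activity
    levels and predicts that the pattern continues.
    (Illustrative only; the dissertation's PPPP may differ.)"""

    def __init__(self, history_len=32, max_period=8):
        self.history = deque(maxlen=history_len)
        self.max_period = max_period

    def record(self, level):
        # level: quantized core activity, e.g. 0 = idle .. 3 = fully busy.
        self.history.append(level)

    def predict(self):
        h = list(self.history)
        for period in range(1, min(self.max_period, len(h) // 2) + 1):
            if all(h[i] == h[i - period] for i in range(period, len(h))):
                return h[-period]      # history is periodic: next value repeats
        return h[-1] if h else 0       # no period found: assume phase persists


pred = PeriodicPhasePredictor()
for level in [3, 3, 0, 0, 3, 3, 0, 0, 3, 3]:
    pred.record(level)
print(pred.predict())  # predicts 0: the idle phase is due next
```

A governor driven by such a predictor can raise frequency just before an active phase begins and lower it ahead of an idle phase, rather than reacting after the transition has already happened.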
7

An Intelligent Framework for Energy-Aware Mobile Computing Subject to Stochastic System Dynamics

January 2017 (has links)
User satisfaction is pivotal to the success of mobile applications. At the same time, it is imperative to maximize the energy efficiency of the mobile device to ensure optimal usage of its limited energy source while maintaining the necessary levels of user satisfaction. However, this is complicated by user interactions, numerous shared resources, and network conditions that introduce substantial uncertainty into the mobile device's performance and power characteristics. In this dissertation, a new approach is presented to characterize and control mobile devices that accurately models these uncertainties. The proposed modeling framework is a completely data-driven approach to predicting power and performance; it makes no assumptions about the distributions of the underlying sources of uncertainty and is capable of predicting power and performance with over 93% accuracy. Using this data-driven prediction framework, a closed-loop solution to the DEM problem is derived to maximize the energy efficiency of the mobile device subject to various thermal, reliability, and deadline constraints. The controller imposes minimal operational overhead and is able to tune the performance and power prediction models to changing system conditions. The proposed controller is implemented on a real mobile platform, the Google Pixel smartphone, and demonstrates a 19% improvement in energy efficiency over the standard frequency governor implemented on all Android devices. (Doctoral dissertation, Computer Engineering, 2017.)
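The sketch below shows a heavily simplified version of the data-driven idea: fit a power model directly to measured runtime features and use it to predict power for candidate operating points. A plain least-squares fit is used only for brevity; the dissertation's framework is considerably more sophisticated, and the features and measurements below are hypothetical.

```python
import numpy as np

# Hypothetical training data: each row is one control interval on the device,
# with features measured at run time.
#            [cpu_util, gpu_util, screen_on, net_Mbps]
features = np.array([
    [0.10, 0.00, 1, 0.1],
    [0.55, 0.20, 1, 2.0],
    [0.90, 0.70, 1, 8.0],
    [0.05, 0.00, 0, 0.0],
    [0.75, 0.10, 1, 1.0],
    [0.30, 0.05, 1, 0.5],
])
power_w = np.array([0.8, 2.1, 4.6, 0.3, 2.8, 1.3])  # measured power (W)

# Fit a purely data-driven linear power model: P ≈ X·w + b.
X = np.hstack([features, np.ones((len(features), 1))])
coef, *_ = np.linalg.lstsq(X, power_w, rcond=None)

def predict_power(cpu, gpu, screen, net):
    """Predict interval power from runtime features (illustrative model)."""
    return float(np.dot([cpu, gpu, screen, net, 1.0], coef))

# A closed-loop governor could compare predictions across frequency settings
# and pick the lowest-energy one that still meets its deadline constraint.
print(f"Predicted power: {predict_power(0.6, 0.3, 1, 3.0):.2f} W")
```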
8

LOW-POWER PULSE-SHAPING FILTER DESIGN USING HARDWARE-SPECIFIC POWER MODELING AND OPTIMIZATION

Bakula, Casey J. 12 May 2008 (has links)
No description available.
