1

Improving processor power demand comprehension in data-driven power and software phase classification and prediction

Khoshbakht, Saman 14 August 2018 (has links)
The single-core performance trend predicted by Moore's law has been impeded in recent years, partly due to the limitations imposed by increasing processor power demands. One way to mitigate this limitation is the introduction of multi-core and multi-processor computation. Another approach to increasing the performance-per-watt metric is to use the processor's power more efficiently. In a single-core system, the processor cannot sustainably dissipate more than the nominal Thermal Design Power (TDP) limit determined for the processor at design time. It is therefore important to understand and manage the power demands of the processes being executed. This principle also applies to multi-core and multi-processor environments. In a multi-processor environment, if the power demands of the workload are known, the power management unit can schedule the workload to processors in the most efficient way based on the state of each processor and process; this is an instance of the knapsack problem. Another approach, also applicable to multi-cores, is to reduce a core's power by reducing its working voltage and frequency, mitigating power bursts, leaving more headroom for other cores, and keeping the total power under the TDP limit. The information collected from the execution of the software running on the processor (i.e. the workload) is the key to determining the power-management actions needed at any given time. This work comprises two different approaches to improving the comprehension of software power demands as the software executes on the processor. In the first part of this work, the effects of software data on power are analysed. It is important to be able to model a program's power based on the instructions it comprises; however, to the best of our knowledge, no prior work has investigated the effect of the values being processed on processor power. 
Creating a power model that accurately reflects the power demands of the software at any given time is a problem addressed by previous research. A software power model can be used in processor simulation environments, as well as in the processor itself, to estimate power dissipation without the need to physically measure it. In the first part of this research, the effects of software data on power are investigated. To collect the data required for this research, a profiler tool was developed by the author and used in both parts of the work. The second part of this work focuses on how processor power develops over time during the execution of the software. Understanding the power demands of the processor at any given time is important for maintaining and managing processor power. Additionally, insight into the future power demands of the software can help the system plan scheduling ahead of time, both to prepare for any high-power section of the code and to use the power headroom made available by an upcoming low-power section. In this part of our work, a new hierarchical approach to software phase classification is developed. The software phase classification problem focuses on determining the behaviour of the software at any given time slice by assigning the slice to one of a set of pre-determined software phases. Each phase is assumed to have known behaviour, measured from previously observed instances of the phase or estimated by a model of each phase's behaviour. Using a two-tiered hierarchical clustering approach, our proposed phase classification methodology incorporates the recent performance behaviour of the software in order to determine the power phase. 
We focused on determining the power phase from performance information because real processor power is usually unavailable without added hardware, while a large number of performance counters are available on most modern processors. Additionally, based on our observations, the relation between performance phases and power behaviour is highly predictable. This method is shown to provide robust results with little noise compared to other methods, while providing timing accuracy high enough for the processor to act on. To the best of our knowledge, no other existing work provides both this timing accuracy and this noise reduction. Software phase classification can be used to control processor power based on the software's phase at any given time, but it does not provide insight into the future progression of the workload. Finally, we developed and compared several phase prediction methodologies based on the concepts of phase precursors and phase locality. Phase precursor-based methods rely on detecting the precursors observed before the software enters a certain phase, while phase locality methods rely on the locality principle, which postulates a high probability that the current software behaviour will still be observed in the near future. The phase classification and phase prediction methodologies were shown to reduce the power bursts within a workload, providing a smoother power trace. As the bursts are removed from one workload's power trace, the multi-core processor's power headroom can be confidently utilized for another process. / Graduate
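The two-tiered classification idea described in this abstract can be sketched as nearest-centroid lookups at two levels of granularity. The counter values, centroid coordinates, and phase names below are invented for illustration; the dissertation's actual method uses hierarchical clustering over measured performance counters.

```python
import math

# Hypothetical per-slice performance-counter vectors: (IPC, cache-miss rate).
# Tier 1 holds coarse phase centroids; tier 2 refines each coarse phase
# into sub-phases with their own centroids.
TIER1 = {"compute": (2.0, 0.02), "memory": (0.6, 0.25)}
TIER2 = {
    "compute": {"compute/int": (2.2, 0.01), "compute/fp": (1.8, 0.03)},
    "memory": {"memory/stream": (0.8, 0.20), "memory/random": (0.4, 0.30)},
}

def nearest(vec, centroids):
    # Pick the centroid with the smallest Euclidean distance to vec.
    return min(centroids, key=lambda name: math.dist(vec, centroids[name]))

def classify(vec):
    coarse = nearest(vec, TIER1)        # tier 1: coarse phase
    return nearest(vec, TIER2[coarse])  # tier 2: refine within that phase

print(classify((2.1, 0.015)))  # a high-IPC slice
print(classify((0.5, 0.28)))   # a memory-bound slice
```

The two-level structure keeps each lookup cheap and lets the coarse tier absorb noise before the fine-grained assignment is made.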
2

Integrated Design of Electrical Distribution Systems: Phase Balancing and Phase Prediction Case Studies

Dilek, Murat 16 November 2001 (has links)
Distribution system analysis and design has experienced a gradual development over the past three decades. Once loosely assembled and largely ad hoc, its procedures have been progressing toward being well organized. The increasing power of computers now makes it possible to manage the large volumes of data and other obstacles inherent in distribution system studies. A variety of sophisticated optimization methods, impossible to conduct in the past, have been developed and successfully applied to distribution systems. Among the many procedures that deal with making decisions about the state and better operation of a distribution system, two decision support procedures are addressed in this study: phase balancing and phase prediction. The former recommends re-phasing of single- and double-phase laterals in a radial distribution system in order to reduce circuit loss while also maintaining or improving balance at various balance-point locations. Phase balancing calculations are based on circuit-loss information and current magnitudes calculated from a power flow solution. The phase balancing algorithm is designed to handle time-varying loads when evaluating phase moves that will result in improved circuit losses over all load points. Applied to radial distribution systems, the phase prediction algorithm attempts to predict the phases of single- and/or double-phase laterals for which the electric utility has no previously recorded phasing information. In doing so, it uses available customer data and kW/kVar measurements taken at various locations in the system. It is shown that phase balancing is a special case of phase prediction. Building on the phase balancing and phase prediction design studies, this work introduces the concept of integrated design, an approach for coordinating the effects of various design calculations. 
Integrated design considers using results of multiple design applications rather than employing a single application for a distribution system in need of improvement relative to some system aspect. Also presented is a software architecture supporting integrated design. / Ph. D.
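The phase-balancing task above can be illustrated, in heavily simplified form, as assigning laterals to phases so that the phases carry similar load. The lateral loads below are made up, and minimizing load imbalance here stands in for the dissertation's actual loss-based, power-flow-driven objective.

```python
from itertools import product

# Hypothetical single-phase lateral loads (kW) to assign to phases A, B, C.
laterals = [30, 25, 20, 15, 10]

def imbalance(assignment):
    # Spread between the most and least loaded phase for this assignment.
    totals = {"A": 0.0, "B": 0.0, "C": 0.0}
    for load, phase in zip(laterals, assignment):
        totals[phase] += load
    return max(totals.values()) - min(totals.values())

# Exhaustive search over all re-phasing options; fine for a handful of
# laterals, while a real algorithm scales via power-flow-guided heuristics.
best = min(product("ABC", repeat=len(laterals)), key=imbalance)
print(imbalance(best))
```

Even this toy version shows why the problem is combinatorial: the search space grows as 3^n in the number of laterals, which is why practical phase balancing relies on heuristics informed by power flow results.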
3

Models and Techniques for Green High-Performance Computing

Adhinarayanan, Vignesh 01 June 2020 (has links)
High-performance computing (HPC) systems have become power limited. For instance, the U.S. Department of Energy set a power envelope of 20 MW in 2008 for the first exascale supercomputer, now expected to arrive in 2021--22. Toward this end, we seek to improve the greenness of HPC systems by improving their performance per watt at the allocated power budget. In this dissertation, we develop a series of models and techniques to manage power at micro-, meso-, and macro-levels of the system hierarchy, specifically addressing data movement and heterogeneity. We target the chip interconnect at the micro-level, heterogeneous nodes at the meso-level, and a supercomputing cluster at the macro-level. Overall, our goal is to improve the greenness of HPC systems by intelligently managing power. The first part of this dissertation focuses on measurement and modeling problems for power. First, we study how to infer chip-interconnect power by observing the system-wide power consumption. Our proposal is to design a novel micro-benchmarking methodology based on data-movement distance by which we can properly isolate the chip interconnect and measure its power. Next, we study how to develop software power meters to monitor a GPU's power consumption at runtime. Our proposal is to adapt performance counter-based models for their use at runtime via a combination of heuristics, statistical techniques, and application-specific knowledge. In the second part of this dissertation, we focus on managing power. First, we propose to reduce the chip-interconnect power by proactively managing its dynamic voltage and frequency scaling (DVFS) state. Toward this end, we develop a novel phase predictor that uses approximate pattern matching to forecast future requirements and, in turn, proactively manage power. Second, we study the problem of applying a power cap to a heterogeneous node. Our proposal proactively manages the GPU power using phase prediction and a DVFS power model but reactively manages the CPU. 
The resulting hybrid approach can take advantage of the differences in the capabilities of the two devices. Third, we study how in-situ techniques can be applied to improve the greenness of HPC clusters. Overall, in our dissertation, we demonstrate that it is possible to infer power consumption of real hardware components without directly measuring them, using the chip interconnect and GPU as examples. We also demonstrate that it is possible to build models of sufficient accuracy and apply them for intelligently managing power at many levels of the system hierarchy. / Doctor of Philosophy / Past research in green high-performance computing (HPC) mostly focused on managing the power consumed by general-purpose processors, known as central processing units (CPUs) and to a lesser extent, memory. In this dissertation, we study two increasingly important components: interconnects (predominantly focused on those inside a chip, but not limited to them) and graphics processing units (GPUs). Our contributions in this dissertation include a set of innovative measurement techniques to estimate the power consumed by the target components, statistical and analytical approaches to develop power models and their optimizations, and algorithms to manage power statically and at runtime. Experimental results show that it is possible to build models of sufficient accuracy and apply them for intelligently managing power on multiple levels of the system hierarchy: chip interconnect at the micro-level, heterogeneous nodes at the meso-level, and a supercomputing cluster at the macro-level.
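The approximate-pattern-matching phase prediction mentioned in this abstract can be sketched as follows: compare the most recent window of phase IDs against every earlier window in the trace, and predict whatever phase followed the closest match. The phase labels and the Hamming-distance metric below are illustrative choices, not the dissertation's exact design.

```python
# Hypothetical phase-ID trace; predict the next phase by finding the past
# window that best matches the most recent window (Hamming distance),
# then reusing whatever phase followed that match.
def predict_next(trace, window=3):
    recent = trace[-window:]
    best_pos, best_dist = None, window + 1
    for i in range(len(trace) - window):          # candidate past windows
        past = trace[i:i + window]
        dist = sum(a != b for a, b in zip(past, recent))
        if dist < best_dist:
            best_pos, best_dist = i, dist
    return trace[best_pos + window]               # phase that followed the match

trace = ["low", "low", "high", "low", "low", "high", "low", "low"]
print(predict_next(trace))
```

Because the match is approximate rather than exact, the predictor still fires when the recent history is a noisy variant of a previously seen pattern, which is what makes it usable for proactive DVFS decisions.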
4

Processor design-space exploration through fast simulation.

Khan, Taj Muhammad 12 May 2011 (has links) (PDF)
Simulation is a vital tool used by architects to develop new architectures. However, because of the complexity of modern architectures and the length of recent benchmarks, detailed simulation of programs can take an extremely long time. This impedes the design-space exploration that architects must perform to find the optimal configuration of processor parameters. Sampling is one technique which reduces the simulation time without adversely affecting the accuracy of the results. Yet, most sampling techniques either ignore the warm-up issue or require significant development effort on the part of the user. In this thesis we tackle the problem of reconciling state-of-the-art warm-up techniques and the latest sampling mechanisms, with the triple objective of keeping user effort minimal, achieving good accuracy and being agnostic to software and hardware changes. We show that both representative and statistical sampling techniques can be adapted to use warm-up mechanisms which can accommodate the underlying architecture's warm-up requirements on-the-fly. We present experimental results which show accuracy and speed comparable to the latest research. Also, we leverage statistical calculations to provide an estimate of the robustness of the final results.
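The statistical-sampling idea in this abstract, including the robustness estimate, can be sketched with a toy example: instead of "simulating" every interval of a benchmark, draw a random sample of intervals, estimate the mean CPI, and attach a confidence interval. The per-interval CPI values are synthetic stand-ins for detailed simulation results.

```python
import random
import statistics

random.seed(0)
# Hypothetical per-interval CPI values for a long benchmark run, standing in
# for the detailed simulation of every interval that sampling avoids.
full_run = [1.0 + 0.3 * ((i % 50) / 50) for i in range(100_000)]

samples = random.sample(full_run, k=200)          # "simulate" only 200 intervals
est = statistics.mean(samples)
sem = statistics.stdev(samples) / len(samples) ** 0.5
ci = 1.96 * sem                                   # ~95% confidence half-width

true_mean = statistics.mean(full_run)
print(f"estimate {est:.3f} +/- {ci:.3f}, true {true_mean:.3f}")
```

The confidence half-width plays the role of the robustness estimate the thesis mentions: it tells the user how much the sampled result can be trusted without ever running the full simulation.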
5

Processor design-space exploration through fast simulation / Exploration de l'espace de conception de processeurs via simulation accélérée

Khan, Taj Muhammad 12 May 2011 (has links)
We focus on sampling as a simulation technique for reducing simulation time. Sampling is based on the fact that the execution of a program is composed of repeating sections of code, the phases. Hence the observation that one can avoid simulating a program in its entirety: simulate each phase just once and compute the performance of the whole program from the phases' performance. Two important questions arise: which parts of the program should be simulated? And how can the state of the system be restored before each simulation? To answer the first question, two solutions exist: one analyses the execution of the program in terms of phases and chooses to simulate each phase once (representative sampling), and a second advocates choosing the samples randomly (statistical sampling). To answer the second question, that of restoring the system state, techniques have recently been developed that restore (warm up) the state of the system according to the needs of the piece of code being simulated (adaptively). Sample-selection techniques either completely ignore the system warm-up mechanisms or propose alternatives requiring substantial modification of the simulator, and adaptive warm-up techniques are not compatible with most sampling techniques. In this thesis we focus on reconciling sampling techniques with adaptive warm-up, in order to develop a mechanism that is at once easy to use, accurate in its results, and transparent to the user. We took representative and statistical sampling and modified the adaptive warm-up techniques to make them compatible with both within a single mechanism. 
We were able to show that adaptive warm-up techniques can be employed in sampling. Our results are comparable with the state of the art in terms of accuracy, but by relieving the user of warm-up concerns and hiding the details of the simulation from them, we make the process easier. We also found that statistical sampling gives better results than representative sampling / Simulation is a vital tool used by architects to develop new architectures. However, because of the complexity of modern architectures and the length of recent benchmarks, detailed simulation of programs can take an extremely long time. This impedes the design-space exploration that architects must perform to find the optimal configuration of processor parameters. Sampling is one technique which reduces the simulation time without adversely affecting the accuracy of the results. Yet, most sampling techniques either ignore the warm-up issue or require significant development effort on the part of the user. In this thesis we tackle the problem of reconciling state-of-the-art warm-up techniques and the latest sampling mechanisms, with the triple objective of keeping user effort minimal, achieving good accuracy and being agnostic to software and hardware changes. We show that both representative and statistical sampling techniques can be adapted to use warm-up mechanisms which can accommodate the underlying architecture's warm-up requirements on-the-fly. We present experimental results which show accuracy and speed comparable to the latest research. Also, we leverage statistical calculations to provide an estimate of the robustness of the final results.
6

Machine Learning for Speech Forensics and Hypersonic Vehicle Applications

Emily R Bartusiak (6630773) 06 December 2022 (has links)
Synthesized speech may be used for nefarious purposes, such as fraud, spoofing, and misinformation campaigns. We present several speech forensics methods based on deep learning to protect against such attacks. First, we use a convolutional neural network (CNN) and transformers to detect synthesized speech. Then, we investigate closed set and open set speech synthesizer attribution. We use a transformer to attribute a speech signal to its source (i.e., to identify the speech synthesizer that created it). Additionally, we show that our approach separates different known and unknown speech synthesizers in its latent space, even though it has not seen any of the unknown speech synthesizers during training. Next, we explore machine learning for an objective in the aerospace domain.

Compared to conventional ballistic vehicles and cruise vehicles, hypersonic glide vehicles (HGVs) exhibit unprecedented abilities. They travel faster than Mach 5 and maneuver to evade defense systems and hinder prediction of their final destinations. We investigate machine learning for identifying different HGVs and a conic reentry vehicle (CRV) based on their aerodynamic state estimates. We also propose a HGV flight phase prediction method. Inspired by natural language processing (NLP), we model flight phases as “words” and HGV trajectories as “sentences.” Next, we learn a “grammar” from the HGV trajectories that describes their flight phase transition patterns. Given “words” from the initial part of a HGV trajectory and the “grammar”, we predict future “words” in the “sentence” (i.e., future HGV flight phases in the trajectory). We demonstrate that this approach successfully predicts future flight phases for HGV trajectories, especially in scenarios with limited training data. We also show that it can be used in a transfer learning scenario to predict flight phases of HGV trajectories that exhibit new maneuvers and behaviors never seen before during training.
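The "grammar" analogy in this abstract can be sketched as a simple bigram model over flight-phase sequences: count which phase follows which, then predict the most frequent successor. The trajectories and phase names below are invented for illustration; the dissertation's actual model is more sophisticated.

```python
from collections import Counter, defaultdict

# Hypothetical HGV trajectories as "sentences" of flight-phase "words".
trajectories = [
    ["boost", "glide", "pull-up", "glide", "dive"],
    ["boost", "glide", "pull-up", "glide", "glide", "dive"],
    ["boost", "glide", "dive"],
]

# Learn a bigram "grammar": counts of which phase follows which.
grammar = defaultdict(Counter)
for traj in trajectories:
    for cur, nxt in zip(traj, traj[1:]):
        grammar[cur][nxt] += 1

def predict(phase):
    # Most likely next phase given the current one.
    return grammar[phase].most_common(1)[0][0]

print(predict("boost"))    # "boost" is always followed by "glide" here
print(predict("pull-up"))
```

Because the model only needs transition counts, it can be trained from very few trajectories, which matches the abstract's emphasis on scenarios with limited training data.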
