1 |
A Task Selection Based Power-aware Scheduling Algorithm for Applying DVS
Mori, Yuichiro; Asakura, Koichi; Watanabe, Toyohide. 08 November 2009
No description available.
|
2 |
Reduction of Cache Related Preemption Delay using DVS in Real Time Systems
Chandrashekar, Aravind. 01 May 2011
Aravind Chandrashekar, for the Master of Science degree in Electrical and Computer Engineering, presented on 02/09/2011 at Southern Illinois University Carbondale. TITLE: Reduction of Cache Related Preemption Delay using DVS in Real Time Systems. MAJOR PROFESSOR: Dr. Harini Ramaprasad. Embedded/real-time systems are ubiquitous in today's world, and providing temporal guarantees is paramount in such systems. In several multi-tasking real-time systems, tasks are assigned varying priorities and scheduled according to a preemptive scheduling policy. When a task is preempted, a significant number of memory blocks belonging to it are displaced from the cache between the time it is preempted and the time it resumes execution. Upon resumption, a corresponding amount of time is spent reloading the cache with the previously replaced memory blocks, incurring what is known as cache-related preemption delay (CRPD). The CRPD caused by a given preemption depends on where in the program the preempted task is executing at the time of preemption, so the CRPD at different preemption points may differ significantly. In this thesis, we exploit this difference and use dynamic voltage/frequency scaling (DVFS) to control the execution speed of a task so that it gets preempted in regions where the CRPD is low, as far as is possible without jeopardizing system schedulability. Simulation results demonstrate that our algorithm reduces the number of cache reloads due to preemption to a reasonable extent, thereby reducing repeated use of off-chip memory bandwidth.
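The selection step can be sketched in a few lines: from the available frequencies, pick the one that makes an anticipated preemption land in the program region with the lowest CRPD, without consuming more than the task's slack. The sketch below only illustrates that idea and is not the thesis's algorithm; the region table, predicted preemption time, WCET and slack values are all hypothetical.

```python
def pick_frequency(crpd_by_region, region_bounds, preempt_time,
                   wcet_nominal, slack, freqs):
    """Illustrative sketch (not the thesis algorithm): choose the DVFS
    frequency that makes an expected preemption land in the lowest-CRPD
    program region without exhausting the task's slack.

    crpd_by_region : CRPD cost of a preemption in each program region
    region_bounds  : fraction of the task's work completed at each region's end
    preempt_time   : predicted preemption instant, relative to task start
    wcet_nominal   : worst-case execution time at the highest frequency
    slack          : extra time available beyond wcet_nominal before the deadline
    freqs          : available frequencies, normalised to the maximum (0..1]
    """
    best = None
    for f in freqs:
        exec_time = wcet_nominal / f            # slower clock -> longer execution
        if exec_time > wcet_nominal + slack:    # would jeopardise schedulability
            continue
        progress = min(preempt_time / exec_time, 1.0)   # fraction of work done
        region = next(i for i, bound in enumerate(region_bounds) if progress <= bound)
        cost = crpd_by_region[region]
        if best is None or cost < best[0]:
            best = (cost, f)
    return best[1] if best else max(freqs)

# Hypothetical example: three regions with CRPD of 120, 10 and 90 cache lines,
# a preemption expected 4 time units after the task starts, WCET 10, slack 5.
print(pick_frequency([120, 10, 90], [0.3, 0.7, 1.0], 4.0, 10.0, 5.0, [0.6, 0.8, 1.0]))
```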
|
3 |
Operating system directed power management
Snowdon, David. Computer Science & Engineering, Faculty of Engineering, UNSW. January 2010
Energy is a critical resource in all types of computing systems, from servers, where energy costs dominate data centre expenses and carbon footprints, to embedded systems, where the battery life limits the device's functionality. In their efforts to reduce the energy use of these systems, hardware manufacturers have implemented features that allow energy consumption to be reduced under software control. This thesis shows that managing these settings is a more complex problem than previously considered. Where much (but not all) of the previous academic research investigates unrealistic scenarios, this thesis presents a solution to managing power on varying hardware. Instead of making unrealistic assumptions, we extract a model from empirical data and characterise that model. Our models estimate the effect of different power management settings on the behaviour of the hardware platform, taking into account the workload, platform and environmental characteristics, but without any a priori knowledge of the specific workloads being run. These models encapsulate a system's knowledge of the platform. We also developed a generalised energy-delay policy which allows us to quickly express the instantaneous importance of both performance and energy to the system, and to select a power management strategy from a number of options. This thesis shows, by evaluation on a number of platforms, that our implementation, Koala, can accurately meet energy and performance goals. In some cases, our system saves 26% of the system-level energy required for a task, while losing only 1% performance; this is nearly 46% of the dynamic energy. Taking advantage of all energy-saving opportunities requires detailed platform, workload and environmental information. Given this knowledge, we reach the exciting conclusion that near-optimal power management is possible on real operating systems, with real platforms and real workloads.
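As an illustration of what a generalised energy-delay style policy can look like, the sketch below scores candidate frequency settings from modelled energy and run-time estimates using a single trade-off weight. It is a hedged sketch under assumed inputs, not Koala's actual policy or parameters.

```python
def pick_setting(candidates, alpha):
    """Sketch of a generalised energy-delay selection (not Koala's code).
    candidates: (setting, energy_estimate, time_estimate) tuples produced by a
    platform/workload model; alpha steers the trade-off, 1.0 = care only about
    energy, 0.0 = care only about performance. Both metrics are normalised to
    the best achievable value so the weighted sum is dimensionless."""
    e_min = min(e for _, e, _ in candidates)
    t_min = min(t for _, _, t in candidates)

    def score(entry):
        _, e, t = entry
        return alpha * (e / e_min) + (1.0 - alpha) * (t / t_min)

    return min(candidates, key=score)[0]

# Hypothetical model estimates: (setting, energy in J, run time in s)
estimates = [("1.6GHz", 12.0, 1.0), ("1.2GHz", 9.5, 1.3), ("0.8GHz", 8.9, 1.9)]
print(pick_setting(estimates, alpha=0.7))   # favours energy      -> 1.2GHz
print(pick_setting(estimates, alpha=0.2))   # favours performance -> 1.6GHz
```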
|
4 |
DYNAMIC VOLTAGE SCALING FOR PRIORITY-DRIVEN SCHEDULED DISTRIBUTED REAL-TIME SYSTEMS
Wang, Chenxing. 01 January 2007
Energy consumption increasingly affects battery life and cooling in real-time systems. Dynamic Voltage and Frequency Scaling (DVS) has been shown to substantially reduce the energy consumption of uniprocessor real-time systems, so it is worthwhile to extend efficient DVS scheduling algorithms to distributed systems with dependent tasks. This dissertation describes how to extend several effective uniprocessor DVS scheduling algorithms to distributed systems with dependent task sets. Task assignment and deadline assignment heuristics are proposed and compared with existing heuristics with respect to their energy-conserving performance. An admission test and a deadline computation algorithm are also presented for dynamic task sets, to accept arriving tasks into a DVS-scheduled real-time system. Simulations show that an effective distributed DVS scheduling algorithm can save as much as 89% of the energy that would be consumed without DVS scheduling. It is also shown that task assignment and deadline assignment affect the energy-conserving performance of DVS scheduling algorithms, although for some aggressive DVS scheduling algorithms the effect of task assignment is negligible. The admission test accepts over 80% of the tasks that a non-DVS scheduler could accept into a DVS-scheduled real-time system.
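For intuition, a utilisation-based admission test of the kind such a scheduler might run on each node is sketched below. It is a generic EDF-style check with frequency scaling, not the dissertation's specific test, and the task parameters and frequency set are made up.

```python
def admit(tasks, new_task, freqs):
    """Sketch of a utilisation-based admission test for one DVS-scheduled node
    (a generic EDF bound, not the dissertation's algorithm). Each task is
    (wcet_at_max_frequency, period); freqs are normalised to the maximum (0..1].
    Returns the lowest frequency at which the node remains schedulable with the
    new task admitted, or None if the task must be rejected."""
    candidate_set = tasks + [new_task]
    for f in sorted(freqs):                        # try the slowest, cheapest setting first
        utilisation = sum(c / (f * p) for c, p in candidate_set)
        if utilisation <= 1.0:                     # EDF schedulability bound
            return f
    return None

existing = [(2.0, 10.0), (1.0, 5.0)]               # hypothetical periodic tasks
print(admit(existing, (3.0, 20.0), [0.5, 0.75, 1.0]))   # -> 0.75
```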
|
5 |
Static Task Scheduling Algorithms Based on Greedy Heuristics for Battery-Powered DVS Systems
TAKADA, Hiroaki; TOMIYAMA, Hiroyuki; ZENG, Gang; YOKOYAMA, Tetsuo. 01 October 2010
No description available.
|
6 |
IMPACT OF DYNAMIC VOLTAGE SCALING (DVS) ON CIRCUIT OPTIMIZATION
Esquit Hernandez, Carlos A. 16 January 2010
Circuit designers perform optimization procedures targeting speed and power during the design of a circuit. Gate sizing can be applied to optimize for speed, while Dual-VT and Dynamic Voltage Scaling (DVS) can be applied to optimize for leakage and dynamic power, respectively. Both gate sizing and Dual-VT are design-time techniques, applied to the circuit at a fixed voltage. DVS, on the other hand, is a run-time technique, which implies that the circuit will operate at voltages different from the one used during the design-time optimization phase. Under these circumstances there is a risk that non-critical paths become critical paths at run-time. Two questions therefore arise: 1) Should we take DVS into account during the optimization phase? 2) Does DVS impose any restrictions on design-time circuit optimizations? This thesis is a case study of applying DVS to a circuit that has been optimized for speed and power, and aims to answer these two questions.

We used a 45-nm CMOS design kit and flow. Synthesis, placement and routing, and timing analysis were applied to the benchmark circuit ISCAS'85 c432. Logical Effort and Dual-VT algorithms were implemented and applied to the circuit to optimize for speed and leakage power, respectively. Optimizations were run for the circuit operating at different voltages. Finally, the impact of DVS on circuit optimization was studied through HSPICE simulations that sweep the supply voltage for each optimization.

The results showed that DVS had no impact on gate sizing optimizations, but it did affect Dual-VT optimizations: we should not optimize at an arbitrary voltage. Moreover, the simulations showed that Dual-VT optimizations should be performed at the lowest voltage at which DVS is intended to operate; otherwise, non-critical paths will become critical paths at run-time.
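To see why the critical path can move with the supply voltage, the sketch below evaluates two hypothetical paths with an alpha-power delay model: the high-VT path degrades faster as Vdd drops, so a path that is non-critical at the nominal voltage becomes critical at the DVS-scaled voltage. The model, the alpha value, the threshold voltages and the path constants are illustrative assumptions, not data from the thesis.

```python
def path_delay(k, vdd, vt, alpha=1.3):
    """Alpha-power-law delay model: delay ~ k * Vdd / (Vdd - Vt)^alpha.
    k lumps gate sizing and logical effort; all constants are illustrative."""
    return k * vdd / (vdd - vt) ** alpha

# Hypothetical paths: A is longer but uses low-VT cells; B is shorter but was
# moved to high-VT cells during Dual-VT optimisation to cut leakage.
for vdd in (1.1, 0.7):                     # nominal supply vs. DVS-scaled supply
    d_a = path_delay(k=1.5, vdd=vdd, vt=0.25)
    d_b = path_delay(k=1.0, vdd=vdd, vt=0.40)
    critical = "A" if d_a > d_b else "B"
    print(f"Vdd = {vdd:.1f} V: delay A = {d_a:.2f}, B = {d_b:.2f} -> critical path {critical}")
```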
|
7 |
An Adaptive Proportional-Integral Controller for Power Management of 3D Graphics System-On-Chip
Jheng, Hao-Yi. 31 July 2009
In the past few years, thanks to rapid advances in technology and the spread of 3D graphics applications, the world of 3D graphics has been expanding rapidly from desktop computers and dedicated gaming consoles to handheld devices such as cellular phones, PDAs and laptops. Unlike traditional desktop computers and gaming consoles, however, mobile computing devices typically have slower processors that are less capable of handling large, computation-intensive workloads such as 3D graphics applications. In addition, power consumption is one of the major design constraints in realizing a 3D graphics accelerating engine for mobile devices, because handheld batteries have limited lifetimes. Moreover, chip capacity follows Moore's Law: the number of transistors on a chip doubles roughly every eighteen months. Even though production cost decreases, battery capacity cannot grow at the rate that transistor counts do. Therefore, reducing power consumption through efficient power management techniques has become a very important research topic in 3D graphics SoC design.
For 3D graphics applications, dynamic voltage and frequency scaling (DVFS) is a good candidate for reducing the power consumption of a 3D graphics accelerating engine, and many related works have investigated how to accurately predict the workload and scale the voltage and frequency accordingly. Prediction policies can be divided into history-based predictors [1] and frame-structure predictors [2-4]. A history-based predictor predicts the next frame's workload from previous frames' workloads and scales the voltage accordingly, whereas a frame-structure predictor performs offline analysis to determine the different kinds of frames in an application; a table stores the mapping from frame type to voltage, and the voltage is then scaled according to this table. Many researchers implement the power management policy in software, i.e. on the processor, but our proposed workload prediction scheme is realized as a hardware circuit; it therefore not only reduces processor overhead but also quickly adjusts the voltage and frequency of the 3D graphics accelerating engine. Our prediction policy is a history-based one, namely an adaptive PID predictor [5-6] in which the parameters of the proportional and integral controllers are adaptively adjusted, so it obtains more accurate predictions than a non-adaptive predictor.
In general, the workload that the selected voltage can handle is greater than what the frame actually needs; that is, the actual workload is usually less than the predicted workload, so slack time is generated. We exploit this slack time through inter-frame compensation [7-10] to save more energy while maintaining similar output quality, and we use a simple policy to adaptively select the compensation parameters between frames in order to simplify the hardware architecture of the power management policy. Experimental results show that more energy savings and more accurate workload prediction are obtained when the adaptive PI predictor and adaptive inter-frame compensation are used together.
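A minimal software model of such a history-based PI predictor is sketched below; the gains, the adaptation heuristic and the per-frame workloads are illustrative assumptions and do not reflect the thesis's hardware implementation.

```python
class AdaptivePIPredictor:
    """Sketch of a history-based PI workload predictor for per-frame DVFS.
    Gains and the adaptation heuristic are illustrative, not the thesis design."""

    def __init__(self, kp=0.5, ki=0.1):
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.prediction = 0.0

    def update(self, actual_workload):
        """Feed the measured workload of the frame just rendered and return the
        predicted workload of the next frame."""
        error = actual_workload - self.prediction
        self.integral += error
        # Adaptation heuristic: raise Kp when the last prediction was far off,
        # lower it when the prediction was already close.
        if abs(error) > 0.1 * max(actual_workload, 1.0):
            self.kp = min(1.0, self.kp + 0.05)
        else:
            self.kp = max(0.1, self.kp - 0.05)
        self.prediction = actual_workload + self.kp * error + self.ki * self.integral
        return self.prediction

# Hypothetical per-frame workloads in GPU cycles; the predicted value would be
# used to pick the lowest frequency whose capacity covers the next frame.
predictor = AdaptivePIPredictor()
for workload in [1.0e6, 1.2e6, 1.1e6, 1.5e6, 1.4e6]:
    print(f"actual {workload:.2e} -> predicted next {predictor.update(workload):.2e}")
```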
|
8 |
POWER REDUCTION BY DYNAMICALLY VARYING SAMPLING RATE
Datta, Srabosti. 01 January 2006
In modern digital audio applications, a continuous audio signal stream is sampled at a fixed sampling rate that is always greater than twice the highest frequency of the input signal, to prevent aliasing. A more energy-efficient approach is to dynamically change the sampling rate based on the input signal. In this dynamic sampling rate technique, fewer samples are processed when there is little high-frequency content in the signal, while the perceived quality of the signal is unchanged. Processing fewer samples involves less computation, so processor speed and voltage can be reduced. This reduction in processor speed and voltage has been shown to cut power consumption by up to 40% compared with running the audio stream at a fixed sampling rate.
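A rough sketch of the rate selection step is shown below: estimate the highest significant frequency in an audio block, pick the lowest supported sampling rate whose Nyquist limit covers it, and scale the processor clock in proportion. The rate table, the spectral threshold and the proportional-scaling rule are illustrative assumptions, not the thesis's method.

```python
import numpy as np

RATES_HZ = [8000, 16000, 24000, 48000]              # hypothetical supported rates

def choose_rate(block, current_rate, rel_threshold=1e-3):
    """Pick the lowest sampling rate whose Nyquist limit covers the highest
    spectral component stronger than rel_threshold * peak (illustrative)."""
    windowed = block * np.hanning(len(block))        # suppress spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / current_rate)
    significant = freqs[spectrum > rel_threshold * spectrum.max()]
    f_max = significant.max() if significant.size else 0.0
    return next((r for r in RATES_HZ if r >= 2 * f_max), current_rate)

# Example: a 2 kHz tone captured at 48 kHz only needs the 8 kHz rate, so the
# processor clock (and hence the supply voltage) can be scaled down with it.
t = np.arange(1024) / 48000.0
block = np.sin(2 * np.pi * 2000.0 * t)
new_rate = choose_rate(block, current_rate=48000)
print(new_rate, f"-> processor scaled to about {new_rate / 48000:.0%} of nominal speed")
```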
|
9 |
Software Synthesis for Energy-Constrained Hard Real-Time Embedded Systems
TAVARES, Eduardo Antônio Guimarães. 31 January 2009
The rapid expansion of the digital device market has forced embedded-systems companies to cope with several challenges in delivering complex systems for this market niche. One of the most prominent challenges concerns energy consumption, mainly due to the following factors: (i) mobility; (ii) environmental concerns; and (iii) the cost of energy. As a consequence, considerable research effort has been devoted to creating techniques aimed at increasing energy savings.

Over the last decade, several techniques have been developed to reduce energy consumption in embedded systems. Many methods rely on dynamic power management (DPM), for instance dynamic voltage scaling (DVS), working cooperatively with specialized operating systems to control energy consumption during system execution. However, despite the availability of many energy-reduction methods, several questions remain open, mainly in the context of hard real-time systems.

This work proposes a software synthesis method that takes inter-task relations, overheads, and timing and energy constraints into account. The method comprises several activities, including: (i) measurement; (ii) specification; (iii) formal modeling; (iv) scheduling; and (v) code generation. The method is also centered on the Petri net formalism, which provides a basis for precise design-time schedule generation, adopting DVS to reduce energy consumption. From a feasible schedule, customized code is generated that satisfies the specified constraints, thereby guaranteeing run-time predictability. To cope with the static nature of schedules generated at design time, a simple run-time scheduler is also proposed to further improve energy consumption during system execution. Several experiments were conducted, which demonstrate the viability of the proposed approach for satisfying hard timing and energy constraints. Additionally, an integrated set of tools was developed to automate some activities of the proposed software synthesis method.
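As a small illustration of the DVS primitive that such a pre-runtime schedule relies on, the sketch below picks, for each task slot in a static schedule, the lowest discrete frequency/voltage point that still fits the task's worst-case execution cycles into its window, and estimates the corresponding dynamic energy. The operating points, cycle counts and the simple C*Vdd^2 energy model are illustrative assumptions, not the thesis's Petri net-based machinery.

```python
# Hypothetical frequency/voltage operating points, slowest first: (MHz, V)
OPERATING_POINTS = [(150, 0.9), (300, 1.1), (600, 1.3)]

def slot_setting(wcec, slot_ms):
    """Return the lowest (frequency, Vdd) point that finishes `wcec` worst-case
    execution cycles within a `slot_ms` window of the static schedule."""
    for freq_mhz, vdd in OPERATING_POINTS:
        if wcec / (freq_mhz * 1e3) <= slot_ms:       # cycles / (cycles per ms)
            return freq_mhz, vdd
    raise ValueError("slot infeasible even at the highest frequency")

def dynamic_energy(wcec, vdd, c_eff=1e-9):
    """Rough CMOS dynamic-energy estimate: about C_eff * Vdd^2 per cycle."""
    return wcec * c_eff * vdd ** 2

# A fragment of a hypothetical pre-runtime schedule: (task, WCEC, slot in ms)
for task, wcec, slot in [("sensor", 1.2e6, 10.0), ("control", 4.5e6, 9.0)]:
    f_mhz, vdd = slot_setting(wcec, slot)
    print(f"{task}: {f_mhz} MHz @ {vdd} V, ~{dynamic_energy(wcec, vdd) * 1e3:.2f} mJ")
```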
|