31

Amélioration des performances énergétiques des systèmes de refroidissement industriels : Application aux serveurs informatiques / Improving the energy performance of industrial cooling systems: Application to computer servers

Mammeri, Amrid 27 May 2014 (has links) (PDF)
This work addresses the problem of industrial cooling and thermal-control systems, with particular emphasis on the cooling of computer servers. The first part studies ways of improving existing cooling techniques, while the second part considers alternative cooling techniques that are potentially more efficient and that meet the current demands of industrial thermal control. The first chapter reviews the literature and the theory relating to the physical phenomena behind the cooling techniques studied; a classification of cooling techniques is proposed at the end of the chapter. This chapter served as the basis both for improving existing cooling technologies and for devising new, more efficient techniques. The second chapter deals with the optimization of a cold plate for cooling computer servers, using a numerical tool and experimental tests. We observed increased heat transfer in the cold plate when inserts were used, particularly diamond-shaped inserts in a staggered arrangement. Conversely, using certain nanofluids as heat-transfer fluids did not appear to bring any significant gain. The third chapter details the approach followed to design a heat-pipe-based heat sink for cooling electronic boards. We first present the thermohydraulic model used to size a cylindrical heat pipe; a parametric study (geometry, working fluid, etc.) allowed us to identify the set of parameters giving the best heat-pipe performance. We then describe the tests performed on the heat-pipe heat sink, which partially validate the thermohydraulic model. The final chapter covers the construction and study of a demonstrator for cooling electronic boards by immersion in a liquid with a low saturation temperature. A numerical model is first developed and used to design the demonstrator, and experimental tests are then carried out. The first results obtained with SES-36 as the working fluid are quite promising. Keywords: modeling, heat transfer, cooling, datacenter, liquid cooling, heat pipes, heat exchangers, nanofluids, pool boiling, numerical simulation
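The thermohydraulic sizing model for the cylindrical heat pipe is not given in the abstract. As a hedged illustration only, a model of this kind must at minimum enforce the standard capillary limit, which caps the heat a pipe can transport:

    % Standard capillary-limit condition for heat-pipe sizing (textbook form,
    % not reproduced from the thesis): the wick's maximum capillary pressure
    % must balance the pressure losses along the pipe.
    \[
      \Delta P_{\mathrm{cap}} = \frac{2\sigma}{r_{\mathrm{eff}}}
      \;\geq\; \Delta P_{\mathrm{liq}} + \Delta P_{\mathrm{vap}} + \Delta P_{\mathrm{grav}}
    \]
    % \sigma                 surface tension of the working fluid
    % r_{\mathrm{eff}}       effective pore radius of the wick
    % \Delta P_{\mathrm{liq}}, \Delta P_{\mathrm{vap}}   viscous losses of the liquid and vapor flows
    % \Delta P_{\mathrm{grav}} = \rho_l\, g\, L \sin\phi  hydrostatic term for a pipe tilted by \phi

The parametric study described above (geometry, working fluid) can then be read as searching for the parameter set that maximizes transported heat subject to this and the other standard operating limits (boiling, entrainment, sonic).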
32

Coordinating the Design and Management of Heterogeneous Datacenter Resources

Guevara, Marisabel Alejandra January 2014 (has links)
Heterogeneous design presents an opportunity to improve energy efficiency but raises a challenge in management. Whereas prior work separates the two, we coordinate heterogeneous design and management. We present a market-based resource allocation mechanism that navigates the performance and power trade-offs of heterogeneous architectures. Given this management framework, we explore a design space of heterogeneous processors and show a 12x reduction in response time violations when equipping a datacenter with three processor types over a homogeneous system that consumes the same power. To better understand trade-offs in large heterogeneous design spaces, we explore dozens of design strategies and present a risk taxonomy that classifies the reasons why a deployed system may underperform relative to design targets. We propose design strategies that explicitly mitigate risk, such as a strategy that minimizes the coefficient of variation in performance. In our experiments, we find that risk-aware design accounts for more than 70% of the strategies that produce systems with the best service quality. We also present a new datacenter management mechanism that fairly allocates processors to latency-sensitive applications. Tasks express value for performance using sophisticated piecewise-linear utility functions. With fairness in market allocations, we show how datacenters can mitigate envy amongst latency-sensitive users. We quantify the price of fairness and detail efficiency-fairness trade-offs. Finally, we extend the market to fairly allocate heterogeneous processors. / Dissertation
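The dissertation's market mechanism is not spelled out in the abstract; the sketch below is only a minimal illustration of two ingredients it names — piecewise-linear utility functions and allocation of heterogeneous processors. All names, data shapes, and the per-task greedy rule are assumptions made here for illustration, not Guevara's algorithm.

    # Illustrative sketch: tasks express value for performance with
    # piecewise-linear utility functions; each task is then granted the
    # free processor type that maximizes its interpolated utility.
    from bisect import bisect_right

    def piecewise_utility(points, perf):
        """points: sorted (performance, value) breakpoints; returns the
        linearly interpolated value at perf, clamped at the ends."""
        xs = [x for x, _ in points]
        i = bisect_right(xs, perf)
        if i == 0:
            return points[0][1]
        if i == len(points):
            return points[-1][1]
        (x0, y0), (x1, y1) = points[i - 1], points[i]
        return y0 + (y1 - y0) * (perf - x0) / (x1 - x0)

    def allocate(tasks, processors):
        """tasks: {task: breakpoints}; processors: {type: (perf, count)}."""
        free = {ptype: count for ptype, (_, count) in processors.items()}
        allocation = {}
        for task, points in tasks.items():
            candidates = [p for p in free if free[p] > 0]
            if not candidates:
                break
            best = max(candidates,
                       key=lambda p: piecewise_utility(points, processors[p][0]))
            allocation[task] = best
            free[best] -= 1
        return allocation

    # A latency-sensitive task values fast cores far more than slow ones:
    tasks = {"web": [(1.0, 0.0), (2.0, 10.0), (3.0, 12.0)],
             "batch": [(1.0, 5.0), (3.0, 6.0)]}
    processors = {"big": (3.0, 1), "little": (1.0, 2)}
    print(allocate(tasks, processors))  # {'web': 'big', 'batch': 'little'}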
33

Achieving predictable, guaranteed and work-conserving performance in datacenter networks / Atingindo desempenho previsível, garantido e com conservação de trabalho em redes datacenter

Marcon, Daniel Stefani January 2017 (has links)
A interferência de desempenho é um desafio bem conhecido em redes de datacenter (DCNs), permanecendo um tema constante de discussão na literatura. Diversos estudos concluíram que a largura de banda disponível para o envio e recebimento de dados entre máquinas virtuais (VMs) pode variar por um fator superior a cinco, resultando em desempenho baixo e imprevisível para as aplicações. Trabalhos na literatura têm proposto técnicas que resultam em subutilização de recursos, introduzem sobrecarga de gerenciamento ou consideram somente recursos de rede. Nesta tese, são apresentadas três propostas para lidar com a interferência de desempenho em DCNs: IoNCloud, Predictor e Packer. O IoNCloud está baseado na observação de que diferentes aplicações não possuem pico de demanda de banda ao mesmo tempo. Portanto, ele busca prover desempenho previsível e garantido enquanto minimiza a subutilização dos recursos de rede. Isso é alcançado por meio (a) do agrupamento de aplicações (de acordo com os seus requisitos temporais de banda) em redes virtuais (VNs); e (b) da alocação dessas VNs no substrato físico. Apesar de alcançar os seus objetivos, ele não provê conservação de trabalho entre VNs, o que limita a utilização de recursos ociosos. Nesse contexto, o Predictor, uma evolução do IoNCloud, programa dinamicamente a rede em DCNs baseadas em redes definidas por software (SDN) e utiliza dois novos algoritmos para prover garantias de desempenho de rede com conservação de trabalho. Além disso, ele foi projetado para ser escalável, considerando o número de regras em tabelas de fluxo e o tempo de instalação das regras para um novo fluxo em DCNs com milhões de fluxos ativos. Apesar dos benefícios, o IoNCloud e o Predictor consideram apenas os recursos de rede no processo de alocação de aplicações na infraestrutura física. Isso leva à fragmentação de outros tipos de recursos e, consequentemente, resulta em um menor número de aplicações sendo alocadas. O Packer, em contraste, busca prover desempenho de rede previsível e garantido e minimizar a fragmentação de diferentes tipos de recursos. Estendendo a observação feita ao IoNCloud, a observação-chave é que as aplicações têm demandas complementares ao longo do tempo para múltiplos recursos. Desse modo, o Packer utiliza (i) uma nova abstração para especificar os requisitos temporais das aplicações, denominada TI-MRA (Time-Interleaved Multi-Resource Abstraction); e (ii) uma nova estratégia de alocação de recursos. As avaliações realizadas mostram os benefícios e as sobrecargas do IoNCloud, do Predictor e do Packer. Em particular, os três esquemas proveem desempenho de rede previsível e garantido; o Predictor reduz o número de regras OpenFlow em switches e o tempo de instalação dessas regras para novos fluxos; e o Packer minimiza a fragmentação de múltiplos tipos de recursos. / Performance interference has been a well-known problem in datacenter networks (DCNs) and one that remains a constant topic of discussion in the literature. Several measurement studies concluded that throughput achieved by virtual machines (VMs) in current datacenters can vary by a factor of five or more, leading to poor and unpredictable overall application performance. Recent efforts have proposed techniques with shortcomings such as underutilization of resources, significant management overhead or neglect of non-network resources. In this thesis, we introduce three proposals that address performance interference in DCNs: IoNCloud, Predictor and Packer.
IoNCloud leverages the key observation that temporal bandwidth demands of cloud applications do not peak at exactly the same time. Therefore, it seeks to provide predictable and guaranteed performance while minimizing network underutilization by (a) grouping applications in virtual networks (VNs) according to their temporal network usage and need for isolation; and (b) allocating these VNs on the cloud substrate. Despite achieving its objective, IoNCloud does not provide work-conserving sharing among VNs, which limits utilization of idle resources. Predictor, an evolution of IoNCloud, dynamically programs the network in Software-Defined Networking (SDN)-based DCNs and uses two novel algorithms to provide network guarantees with work-conserving sharing. Furthermore, Predictor is designed with scalability in mind, taking into consideration the number of entries required in flow tables and flow setup time in DCNs with high turnover and millions of active flows. IoNCloud and Predictor neglect resources other than the network at allocation time. This leads to fragmentation of non-network resources and, consequently, results in fewer applications being allocated in the infrastructure. Packer, in contrast, aims at providing predictable and guaranteed network performance while minimizing overall multi-resource fragmentation. Extending the observation made for IoNCloud, the key insight for Packer is that applications have complementary demands across time for multiple resources. To enable multi-resource allocation, we devise (i) a new abstraction for specifying temporal application requirements (called Time-Interleaved Multi-Resource Abstraction, TI-MRA); and (ii) a new allocation strategy. We evaluated IoNCloud, Predictor and Packer, showing their benefits and overheads. In particular, all of them provide predictable and guaranteed network performance; Predictor reduces flow table size in switches and flow setup time; and Packer minimizes multi-resource fragmentation.
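TI-MRA itself is not defined in the abstract; the sketch below only illustrates the key insight stated there — applications with complementary, time-interleaved demands for multiple resources can be packed onto the same capacity — using data shapes invented here for illustration.

    # Illustrative check behind time-interleaved multi-resource packing:
    # each application declares a demand matrix [time slot][resource], and a
    # group of applications fits on a host if their summed demand stays
    # within capacity in every slot, for every resource.

    def fits(apps, capacity):
        """apps: demand matrices shaped [slots][resources];
        capacity: per-resource limits, assumed constant over time."""
        slots = len(apps[0])
        return all(
            sum(app[t][r] for app in apps) <= capacity[r]
            for t in range(slots)
            for r in range(len(capacity))
        )

    # Two applications whose (cpu, bandwidth) peaks interleave over 4 slots:
    app_a = [(2, 8), (2, 1), (2, 8), (2, 1)]
    app_b = [(2, 1), (2, 8), (2, 1), (2, 8)]
    print(fits([app_a, app_b], (8, 10)))  # True: bandwidth peaks never coincide

Packing against the per-slot sum rather than against each application's peak is what recovers the capacity that peak-based reservation, or network-only allocation, would leave fragmented.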
35

En tjänstelösning inom energisektorn : Med fokus på datacenterbranschen / A service solution in the energy sector: Focusing on the data center industry

Lindgren, Markus, Östgren, Marcus January 2019 (has links)
The energy sector is facing unprecedented changes as technology development and new expectations necessitate action. As a result of market changes, incumbent companies as well as new entrants to the sector are now seeking new business opportunities to cope with these changes and position themselves for the future. The term servitization has been identified as a key to coping with them: it describes the transition from a product-oriented value proposition towards a service-oriented value proposition, and it has become attractive as a strategy on economic, market and competitive grounds. Despite many clear advantages, the phenomenon of servitization comes with great challenges. To this day, no successful full-scale servitization initiative can be identified within the energy sector in Sweden — that is, a situation in which a company has completely shifted its offering. The thesis therefore sets out to facilitate the development of a servitization endeavor by examining previous research in the fields of servitization and co-creation. These topics are considered from a systems perspective because of the characteristics of the given case, where components such as generators, power lines, transformers, electrical equipment companies, utility companies, data centers and regulatory authorities are part of a larger whole. The theoretical material is applied to better understand a potential recipient of a service offering, a potential customer in the energy sector: data centers. More specifically, the thesis seeks to identify the needs and interests that the data center customer segment has in a future energy-related service offering, along with other factors important to a servitized offering. This can in turn be seen as a response to the servitization literature, which clearly holds that all servitization efforts should start with understanding the customer. Findings show that the largest challenge of servitization has much to do with the abstract nature of the term. In contrast to a tool that can be used relatively mindlessly, servitization constitutes a complex whole in which organizational capacity, culture and common interests must co-exist. The case study has mapped out the needs and wants of the customer segment and offers recommendations for succeeding accordingly. The thesis makes three contributions in particular. First, the three theoretical topics of servitization, co-creation and systems thinking are integrated to extend the understanding of the connections between them, as well as the challenges linked to each. Second, it provides a deeper understanding of the data center sector, specifically which critical factors the segment considers in the context of the study. Third, the thesis contributes suggestions concerning areas for improvement and areas where business development can take place, which was also the original motivation for the study.
36

Výstavba datových center / Data Center Development

Dóša, Vladimír January 2011 (has links)
This thesis presents and describes new global trends in the construction and operation of datacenters. It further applies them to particular practical examples, and the theory is supplemented with new findings from the field.
37

Green Room: A Giant Leap in Development of Green Datacenters

El Azzi, Charles, Izadi, Roozbeh January 2012 (has links)
In a world that is slipping towards a global energy crisis as a result of heavy dependence on depleting fossil-based fuels, investment in projects that promote energy efficiency will have a strong return not only economically but also socially and environmentally. Firms will directly save significant amounts of money on their energy bills and, in addition, contribute to slowing environmental degradation and global warming by diminishing their carbon footprint. In the global contest to achieve high levels of energy efficiency, the IT sector is no exception, and telecommunication companies have developed new approaches to more efficient data centers and cooling systems over the past decades. This paper describes an ambitious project carried out by TeliaSonera to develop a highly energy-efficient cooling approach for data centers called "Green Room". It is important to realize that the Green Room approach is not limited to data centers: it is designed to support any sort of "technical site" in need of an efficient cooling system, so the word "datacenter", used repeatedly in this paper, extends to a much broader category of technical sites. As the hypothesis, the Green Room was expected to produce an appropriate temperature level, accompanied by an effective, steady air flow inside the room, while using considerably less electricity than other cooling methods on the market. To begin with, an introduction familiarizes the readers with the concept of a "data center", followed in Chapter 2 by a concise discussion providing convincing reasons to promote energy-efficient projects like the Green Room from economic, social and environmental points of view; the chapter is complemented by a comprehensive part attached to this paper as Appendix I. Chapter 3 looks into the different cooling approaches currently available for datacenters. Chapter 4 describes how the efficiency of a data center cooling system can be assessed by introducing critical values such as PUE (power usage effectiveness) and COP (coefficient of performance). Understandably, it is of great significance to determine how accurate the measurements carried out in this project are; Chapter 5 provides useful information on the measurements and describes the uncertainty estimation of the obtained results. Chapter 6 explains the test methodology, touches on the components of the Green Room and their technical specifications, and then compares the Green Room approach to other cooling systems, identifying five major differences that make the Green Room a distinctive cooling method. Chapter 7 explains the measurement requirements from the point of view of sensors, discusses the calibration process and finally presents the uncertainty calculations and their results. Chapter 8 broadly describes the five categories of 25 independent tests carried out within a period of almost two weeks, providing all the necessary details for each test, including thorough descriptions of conditions, numerical results, calculations, tables, charts, graphs, pictures and some thermal images. Ultimately, the last two chapters summarize the results of the project and assess its degree of success against the paper's hypothesis; a number of questions are raised and relevant suggestions made to modify the approach and improve the results.
The values obtained for the efficiency of this cooling system meet expectations. However, part of the calculation of the total power load of the whole cooling production system is based on estimates acquired from software simulations. Overall, this is considered a successful project that fulfills the primary expectations of its founders.
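The two figures of merit named in the abstract have standard definitions that Chapter 4 of the report builds on; the sketch below states them directly, with numbers invented for illustration rather than taken from the Green Room measurements.

    # Standard datacenter cooling metrics (definitions only; example values
    # are made up and are not results from the Green Room tests):
    # PUE = total facility power / IT equipment power (ideal value: 1.0)
    # COP = heat removed by the cooling system / power consumed to remove it

    def pue(total_facility_kw, it_kw):
        return total_facility_kw / it_kw

    def cop(heat_removed_kw, cooling_input_kw):
        return heat_removed_kw / cooling_input_kw

    print(pue(130.0, 100.0))  # 1.30 -> 30 kW of overhead per 100 kW of IT load
    print(cop(100.0, 20.0))   # 5.0  -> 5 kW of heat removed per kW of cooling power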
38

A data-driven study of operating system energy-performance trade-offs towards system self optimization

Dong, Han 01 December 2023 (has links)
This dissertation is motivated by an intersection of changes occurring in modern software and hardware, driven by increasing application performance and energy requirements at a time when Moore's Law and Dennard scaling face diminishing returns. To address these challenging requirements, new features are increasingly being packed into hardware to support new offloading capabilities, along with more complex software policies to manage them. This is leading to an exponential explosion in the number of possible software and hardware configurations. For network-based applications, this thesis demonstrates how these complexities can be tamed by identifying and exploiting the characteristics of the underlying system through a rigorous and novel experimental study. It shows how one can simplify this control problem in practical settings by cutting across the complexity with mechanisms that exploit two fundamental properties of network processing. Using the common request-response network processing model, the thesis finds that controlling (1) the speed of network interrupts and (2) the speed at which requests are then executed enables the characterization of the software and hardware in a stable and well-structured manner. Specifically, a network device's interrupt delay feature is used to control the rate of incoming and outgoing network requests, and the processor's frequency setting is used to control the speed of instruction execution. The experimental study, conducted across 340 unique combinations of the two mechanisms, two OSes and four applications, finds that optimizing these settings in an application-specific way can yield performance improvements of over 2x while also improving energy efficiency by over 2x.
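On Linux, the two mechanisms the study sweeps correspond to standard controls: NIC interrupt coalescing (for the interrupt delay) and the cpufreq subsystem (for core speed). The sketch below shows what such a sweep harness might look like; the interface name, candidate values, and the benchmark hook are assumptions made here, not the study's actual setup.

    # Hypothetical sweep over the two knobs described above (Linux, run as root).
    import subprocess

    ITR_DELAYS_US = [0, 10, 50, 100]          # NIC interrupt coalescing delays
    FREQS_KHZ = [1200000, 1800000, 2400000]   # candidate core frequencies

    def set_interrupt_delay(iface, usecs):
        # ethtool -C <iface> rx-usecs <n> sets the receive interrupt delay.
        subprocess.run(["ethtool", "-C", iface, "rx-usecs", str(usecs)],
                       check=True)

    def set_cpu_frequency(cpu, khz):
        # The userspace governor lets us pin a core to an exact frequency.
        base = f"/sys/devices/system/cpu/cpu{cpu}/cpufreq"
        with open(f"{base}/scaling_governor", "w") as f:
            f.write("userspace")
        with open(f"{base}/scaling_setspeed", "w") as f:
            f.write(str(khz))

    for delay in ITR_DELAYS_US:
        for freq in FREQS_KHZ:
            set_interrupt_delay("eth0", delay)
            set_cpu_frequency(0, freq)
            # ... run the request-response benchmark here and record energy
            # and tail latency for this (delay, frequency) configuration ...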
39

Systems Support for Carbon-Aware Cloud Applications

Deng, Nan January 2015 (has links)
No description available.
40

High Performance and Scalable Soft Shared State for Next-Generation Datacenters

Vaidyanathan, Karthikeyan 20 August 2008 (has links)
No description available.
