131 |
SUPPORTING DATA CENTER AND INTERNET VIDEO APPLICATIONS WITH STRINGENT PERFORMANCE NEEDS: MEASUREMENTS AND DESIGN / Ehab Mohammad Ghabashneh (18257911) 28 March 2024 (has links)
<p dir="ltr">Ensuring a high quality of experience for Internet applications is challenging owing to the significant variability (e.g., of traffic patterns) inherent to both cloud data-center networks and wide area networks. This thesis focuses on optimizing application performance by both conducting measurements to characterize traffic variability, and designing applications that can perform well in the face of variability. On the data center side, a key aspect that impacts performance is traffic burstiness at fine granular time scales. Yet, little is know about traffic burstiness and how it impacts application loss. On the wide area side, we focus on video applications as a major traffic driver. While optimizing traditional videos traffic remains a challenge, new forms of video such as 360◦ introduce additional challenges such as respon- siveness in addition to the bandwidth uncertainty challenge. In this thesis, we make three contributions.</p><p dir="ltr"><b>First</b>, for data center networks, we present Millisampler, a lightweight network traffic char- acterization tool for continual monitoring which operates at fine configurable time scales, and deployed across all servers in a large real-world data center networks. Millisampler takes a host-centric perspective to characterize traffic across all servers within a data center rack at the same time. Next, we present data-center-scale joint analysis of burstiness, contention, and loss. Our results show (i) bursts are likely to encounter contention; (ii) contention varies significantly over short timescales; and (iii) higher contention need not lead to more loss, and the interplay with workload and burst properties matters.</p><p dir="ltr"><b>Second</b>, we consider challenges with traditional video in wide area networks. We take a step towards understanding the interplay between Content-Delivery-Networks (CDNs), and video performance through end-to-end measurements. Our results show that (i) video traffic in a session can be sourced from multiple CDN layers, and (ii) throughput can vary signifi- cantly based on the traffic source. Next we evaluate the potential benefits of exposing CDN information to the client Adaptive-Bit-Rate (ABR) algorithm. Emulation experiments show the approach has the potential to reduce prediction inaccuracies, and enhance video quality of experience (QoE).</p><p dir="ltr"><b>Third</b>, for 360◦ videos, we argue for a new streaming model which is explicitly designed for continuous, rather than stalling, playback to preserve interactivity. Next, we propose Dragonfly, a new 360° system that leverages the additional degrees of freedom provided by this design point. Dragonfly proactively skips tiles (i.e., spatial segment of the video) using a model that defines an overall utility function that captures factors relevant to user experience. We conduct a user study which shows that majority of interactivity feedback indicating Dragonfly being highly reactive, while the majority of state-of-the-art’s feedback indicates the systems are slow to react. Further, extensive emulations show Dragonfly improves the image quality significantly without stalling playback.</p>
132 |
VM Allocation in Cloud Datacenters Based on the Multi-Agent System. An Investigation into the Design and Response Time Analysis of a Multi-Agent-based Virtual Machine (VM) Allocation/Placement Policy in Cloud Datacenters / Al-ou'n, Ashraf M.S. January 2017 (has links)
Recent years have witnessed a surge in demand for infrastructure and services to process large volumes of data and applications, resulting in mega-scale Cloud Datacenters. A datacenter is highly complex, and it is increasingly difficult to identify an appropriate host for a requested virtual machine (VM) and to allocate it quickly and efficiently. Establishing good awareness of all the datacenter's resources enables the allocation "placement" policies to make the best decisions and reduce the time needed to allocate and create the VM(s) at the appropriate host(s). However, current placement "allocation" algorithms and policies do not focus efficiently on awareness of the datacenter's resources and, moreover, are based on conventional static techniques, which adversely impacts the allocation progress of the policies. This thesis proposes a new Agent-based allocation/placement policy that employs features of the Multi-Agent system to establish good awareness of Cloud Datacenter resources and to provide efficient allocation decisions for the requested VMs. Specifically, (a) the Multi-Agent concept is used as part of the placement policy, (b) a Contract Net Protocol is devised to establish good awareness, and (c) a verification process is developed to verify the full dimensional VM specifications during allocation. The results show a reduction in VM allocation response time and improved usage of occupied resources. The proposed Agent-based policy was implemented using the CloudSim toolkit and was compared, through a series of typical numerical experiments, with the toolkit's default policy. The comparative study was carried out in terms of the duration of VM allocation and other aspects such as the number of available VM types and the amount of occupied resources. Moreover, a two-stage comparative study was introduced in this thesis. Firstly, the proposed policy was compared with four state-of-the-art algorithms, namely the Random algorithm and three one-dimensional Bin-Packing algorithms. Secondly, the three Bin-Packing algorithms were enhanced with a two-dimensional verification structure and compared against the proposed new algorithm of the Agent-based policy. This rigorous comparative study showed that, across the typical numerical experiments of all stages, the proposed new Agent-based policy had superior performance in terms of allocation times. Finally, avenues for future work arising from this thesis are included. / Al al-Bayt University in Jordan.
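For context on the one-dimensional Bin-Packing baselines mentioned above, the sketch below shows a plain first-fit placement over a single resource dimension (CPU). It is a hedged illustration only, not the thesis policy or its CloudSim experiments; the agent-based approach additionally verifies the full VM specification (e.g., CPU and RAM together). The host and VM figures are invented.

```python
# Illustrative first-fit, one-dimensional VM placement (CPU only), in the spirit
# of the baseline algorithms mentioned above. Not the thesis implementation.

def first_fit(vms, hosts):
    """Assign each VM (CPU demand) to the first host with enough spare CPU."""
    placement = {}
    free = dict(hosts)                      # host -> remaining CPU cores
    for vm, cpu in vms.items():
        for host, spare in free.items():
            if spare >= cpu:
                placement[vm] = host
                free[host] -= cpu
                break
        else:
            placement[vm] = None            # no host could take the VM
    return placement

if __name__ == "__main__":
    hosts = {"h1": 8, "h2": 16}             # cores per host (made up)
    vms = {"vm1": 4, "vm2": 6, "vm3": 8, "vm4": 10}
    print(first_fit(vms, hosts))
    # -> {'vm1': 'h1', 'vm2': 'h2', 'vm3': 'h2', 'vm4': None}
```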
133 |
Data centers and Indigenous sovereignty : Data center materialities, representation and power in Sápmi/northern Sweden / Sargsyan, Satenik January 2022 (has links)
From "disguised and concealed" (Parks and Starosielski 2015) in nature to more recent, select attempts at being "visible, accessible, environmentally friendly" (Holt and Vonderau 2015), data centers are the backbone of digital infrastructure. Studies of data centers continue to help media and communications studies understand the role of media infrastructure; representations of imaginaries of the cloud; the social, political and economic realities embedded in data; and issues of power, agency and resistance against the backdrop of increased global concern for the environment and the greening practices built into the discourse of tech companies. This research provides insight into data centers in Sápmi, in the Arctic and near-Arctic regions of Sweden, from the perspective of Indigenous Sámi communities. Data centers are examined here through their materialities and representations and as industrial sites of politics, power and promise through the lived realities of the Sámi people in Sweden. As a result, data centers emerge not only as entities with a built-in, inherent dependence on materialities and representations of land, water and air but also as contrapuntal nodes: assemblages perpetually at odds with their built-in power through time, their narratives (neutral connectedness and natural sustainability) at odds with their material infrastructure, detaching and uprooting from land.
134 |
DATA CENTER CONDENSER OPTIMIZATION: A DISCRETIZED MODELLING APPROACH TO IMPROVE PUMPED TWO-PHASE COOLING CYCLES / Tyler John Schostek (16613160) 19 July 2023 (has links)
Rising interest in high-performance servers in data centers, to support the increasing demands of cloud computing and storage, has challenged thermal management systems. To prevent these higher-power-density servers from overheating due to the high heat fluxes they dissipate, new cooling methods have continued to be investigated in recent years. One such solution is pumped two-phase cooling, which shows promise over traditional air cooling because of the reduced power it requires to operate, while also being able to dissipate large amounts of heat from the small components in servers.
Although pumped two-phase systems have existed as a cooling strategy for multiple decades, sub-optimal component design has hindered the efficiencies they could achieve. This is especially prevalent in the condenser, where, in order to meet required metrics, these heat exchangers are commonly oversized due to maldistribution at low vapor qualities and a lack of understanding of the condensation behavior within certain geometries.
Through the work presented in this thesis, the capabilities of an air-cooled microchannel condenser model are explored for future use in optimization studies for data center applications. To perform this research, an investigation into the boundary conditions of these systems and common condenser modeling strategies was carried out. Using this knowledge, a flexible discretized condenser model was developed to capture the behavior of pumped two-phase cooling in data centers under a wide range of operating conditions. In conjunction, an experimental test setup was sized, designed, and constructed to provide validation for the model. Then, using the model, initial parametric studies were conducted to identify the sensitivity of overall condenser performance to various parameters. In this initial study, favorable boundary conditions and geometries were found that both minimize refrigerant pressure drop and maximize heat transfer. For an air-cooled condenser operating with R1234ze(E), these include: refrigerant entering the condenser at around 40% quality, operating at moderate refrigerant mass fluxes through the channels (130 - 460 kg/m^2-s), and designing microchannel condenser tubes with many tightly packed square ports. Continued investigation into the parameters that carry the most weight, using the tools developed in this thesis, will lead to further optimized condenser designs and operating conditions.
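To give a rough sense of the discretized approach described above, the sketch below marches refrigerant quality along a condenser tube split into segments, with each segment rejecting heat to ambient air through a fixed per-segment conductance. It is a minimal sketch under invented placeholder values, not the thesis model, and the properties used are not R1234ze(E) data.

```python
# Minimal discretized condenser sketch: two-phase refrigerant condenses at a
# fixed saturation temperature, and each tube segment rejects heat to ambient
# air through a lumped conductance UA. All numbers are placeholders.

def march_condenser(m_dot_ref, x_in, h_fg, t_sat, t_air, ua_segment, n_seg):
    """Return the refrigerant quality after each segment along the tube."""
    x = x_in
    qualities = [round(x, 3)]
    for _ in range(n_seg):
        q_seg = ua_segment * (t_sat - t_air)            # W rejected in segment
        x = max(0.0, x - q_seg / (m_dot_ref * h_fg))    # quality drop in segment
        qualities.append(round(x, 3))
    return qualities

if __name__ == "__main__":
    # Placeholder inputs: 2 g/s refrigerant, 40% inlet quality, h_fg ~ 160 kJ/kg.
    # Quality reaching zero well before the last segment hints at an oversized coil.
    print(march_condenser(m_dot_ref=0.002, x_in=0.4, h_fg=160e3,
                          t_sat=45.0, t_air=30.0, ua_segment=1.5, n_seg=10))
```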
135 |
Dynamic Resource Management of Cloud-Hosted Internet Applications / Hangwei, Qian 27 August 2012 (has links)
No description available.
136 |
Machine Learning Based Failure Detection in Data Centers / Piran Nanekaran, Negin January 2020 (has links)
This work proposes a new approach to fast detection of abnormal behaviour of cooling, IT, and power distribution systems in micro data centers based on machine learning techniques. Conventional protection of micro data centers focuses on monitoring individual parameters, such as temperature at different locations, and triggering an alarm when these parameters reach certain high values. This research employs machine learning techniques to extract the normal and abnormal behaviour of the cooling and IT systems. The developed data acquisition system, together with unsupervised learning methods, quickly learns the physical dynamics of normal operation and can detect deviations from such behaviour. This provides an efficient way to produce not only a health index for the micro data center, but also a rich label-logging system that will be used by the supervised learning methods. The effectiveness of the proposed detection technique is evaluated on a micro data center placed at the Computing Infrastructure Research Center (CIRC) in McMaster Innovation Park (MIP), McMaster University. / Thesis / Master of Science (MSc)
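As a hedged illustration of the unsupervised step described above (and not necessarily the method used in the thesis), the snippet below fits an Isolation Forest to a window of normal sensor readings and flags deviations. The features, values, and thresholds are invented for the example.

```python
# Illustration only: learn "normal" behaviour from unlabeled sensor readings
# and flag deviations. The thesis does not necessarily use Isolation Forests.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Fake training window: [inlet temp (C), outlet temp (C), fan power (W)]
normal = np.column_stack([
    rng.normal(24, 0.5, 1000),
    rng.normal(35, 1.0, 1000),
    rng.normal(120, 5.0, 1000),
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New readings: one typical, one resembling a failing fan (low power, hot outlet).
readings = np.array([[24.2, 35.5, 118.0],
                     [24.1, 43.0, 60.0]])
print(model.predict(readings))            # 1 = normal, -1 = anomaly
print(model.decision_function(readings))  # lower score = more anomalous
```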
137 |
Data center cooling solutions : A techno-economical case study of a data center in Sweden / Sjökvist, Joel, Magnusson, Fredrik January 2022 (has links)
Given the coinciding growth trends in the production of consumer electronics and the generation of data, the increase in server halls and data centers, as a means of hosting storage capacity for the generated data, has been prominent over the last decades. The establishment of data centers in already existing infrastructure can entail major changes in terms of energy system design. Data processing and storage is power intensive, and as the centers generate substantial heat, one of the most important fractions of the energy use comes from the need to provide cooling. The study is a techno-economic analysis aimed at determining the feasibility of different cooling systems for a data center in Sweden. The investigated building currently hosts an industrial printing press hall in which paper printing has been conducted for several decades. This press hall is subject to a refurbishment process to eventually be converted into a data center. In order to achieve the objectives, a data center building model is developed to estimate the internal heat generation and the demand for cooling. The design and energy requirements of a number of cooling solutions are then investigated and evaluated using a number of performance metrics: Power Usage Effectiveness (PUE), Capital Expenditure (CapEx), Operational Expenditure (OpEx) and Life Cycle Cost (LCC). More specifically, the systems incorporate technologies for utilizing air-based free cooling, ground-source free cooling through borehole ground-source heat exchangers (GHEs), mechanical cooling through compressor-driven machines, and District Cooling (DC). The results of the study show that free cooling is a viable solution for covering the vast majority of the yearly cooling requirements during sufficiently low outdoor temperatures. Free cooling provided through borehole GHEs is technically feasible as a partial solution to provide cooling capacity during warmer periods; however, it cannot alone provide a major part of the relatively high and constant cooling capacity requirements throughout the year. All of the investigated scenarios display a similar energy performance in terms of total PUE, at values well below the national average of 1.37. It is also seen that the scenario with the lowest LCC includes a combination of free cooling and compressor-driven cooling, and this holds for the studied sensitivity cases. A combined system incorporating borehole GHEs and compressor cooling machines performs best in terms of a low PUE. However, the relative difference in energy performance turns out to be smaller than the relative difference in LCC when substituting the borehole GHEs for additional cooling machine capacity. / I takt med digitaliseringen och en ökad global användningen- och produktionen av hemelektronik, vilket föranlett en ökad generering av data, har antalet datahallar blivit allt fler de senaste decennierna. Datahallens syfte är att hantera och bereda lagringskapacitet för den data som genereras vilket involverar en rad energikrävande processer. Upprättandet av datahallar i redan befintlig infrastruktur kan medföra förändringar när det kommer till utformningen av byggnadens energisystem. Att bedriva datalagring och informationsbehandling kräver påtagliga mängder elektricitet vilket medför stor intern värmealstring och därtill behov av aktiv kylning.
Denna studie, som valt att benämnas som en tekno-ekonomisk fallstudie, undersöker lämpligheten i implementeringen av olika kylsystem för ett byggnadskomplex i Stockholm. I byggnadens lokaler återfinns idag en industrihall där det sedan flera decennier bedrivits tryckeriverksamhet. Industrihallen är föremål för en konverteringsprocess för att på sikt bli en datahall. Studien är centrerad kring denna konverteringsprocess. För att utvärdera kylbehoven för den framtida datahallen har en modell utvecklats som uppskattar interna värmelaster samt reglerar inomhusklimatet efter rådande krav på inomhuskomfort. Därefter studeras utformning och energibehov för flera olika typer av kylsystemlösningar där en utvärdering av dessa system görs utifrån indikatorerna Power Usage Effectiveness (PUE), Capital Expenditure (CapEx), Operational Expenditure (OpEx) and Life Cycle Cost (LCC). Mer konkret undersöks kombinerade kylsystem som utnyttjar luftburen frikyla, geotermisk frikyla via bergvärmeväxlare (GHEs), mekanisk kyla via kompressordriven kylmaskin samt regional fjärrkyla. Resultaten från studien visar att frikyla från kylmedelskylare är en lämplig lösning för att täcka majoriteten av datahallens kylbehov över ett år, med undantag för årets varmare perioder. Geotermisk frikyla via borrhål är möjlig som partiell lösning ur ett tekniskt perspektiv, men kan inte enskild leverera en majoritet av effekt- eller energibehovet av kyla. Resultatet visar också att alla undersökta scenarier uppvisar en liknande energiprestanda i termer av total PUE, med värden som underskrider det nationella genomsnittet 1,37. Lägst LCC påvisades för ett system som kombinerar luftburen frikyla via kylmedelskylare och mekanisk kyla via kompressordrivna kylmaskiner. Denna låga LCC är signifikant vilket påvisas i utförd känslighetsanalys. Slutligen konstateras att ett system innefattande luftburen och geotermisk frikyla i kombination med kompressordrivna kylmaskiner resulterar i lägst PUE bland de undersökta scenarierna. Den relativa skillnaden i energiprestanda visar sig vara mindre än den relativa skillnaden i LCC, när geotermisk frikyla ersätts med ytterligare kapacitet från kylmaskiner.
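As a small worked example of the evaluation metrics used in the study above, the snippet below computes a Power Usage Effectiveness value and a simple discounted Life Cycle Cost. All figures are placeholders, not results from the thesis.

```python
# Worked example of the PUE and LCC metrics; all numbers are placeholders.

def pue(it_energy_kwh, cooling_kwh, other_facility_kwh):
    """Power Usage Effectiveness: total facility energy divided by IT energy."""
    return (it_energy_kwh + cooling_kwh + other_facility_kwh) / it_energy_kwh

def life_cycle_cost(capex, annual_opex, years, discount_rate):
    """LCC as CapEx plus the discounted stream of yearly OpEx."""
    return capex + sum(annual_opex / (1 + discount_rate) ** t
                       for t in range(1, years + 1))

if __name__ == "__main__":
    # Hypothetical year: 8 GWh of IT load, 1.6 GWh cooling, 0.4 GWh other losses.
    print(round(pue(8_000_000, 1_600_000, 400_000), 2))       # -> 1.25
    # Hypothetical plant: 12 MSEK CapEx, 1.5 MSEK/year OpEx, 20 years, 5% rate.
    print(round(life_cycle_cost(12e6, 1.5e6, 20, 0.05)))      # -> about 30.7 MSEK
```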
138 |
Efficient and elastic management of computing infrastructures / Alfonso Laguna, Carlos de 23 October 2016 (has links)
Tesis por compendio / [EN] Modern data centers integrate large numbers of computing and electronic devices. However, some reports state that the mean usage of a typical data center is around 50% of its peak capacity, and the mean usage of each server is between 10% and 50%. A lot of energy is spent powering computer hardware that remains idle most of the time. Therefore, it would be possible to save energy simply by powering off those parts of the data center that are not actually used, and powering them on again as they are needed.
Most data centers have computing clusters that are used for intensive computing, recently evolving towards an on-premises Cloud service model. Despite the use of low-consumption components, higher energy savings can be achieved by dynamically adapting the system to the actual workload. The main approach in this case is the use of energy-saving criteria for scheduling the jobs or the virtual machines onto the working nodes. The aim is to power off idle servers automatically, but it is necessary to schedule the power management of the servers in order to minimize the impact on the end users and their applications.
The objective of this thesis is the elastic and efficient management of cluster infrastructures, with the aim of reducing the costs associated with idle components. This objective is addressed by automating the power management of the working nodes in a computing cluster, and also by proactively stimulating load distribution, by means of memory overcommitment and live migration of virtual machines, to obtain idle resources that can be powered off. Moreover, this automation is of interest for virtual clusters, as they also suffer from the same problem: while in physical clusters idle working nodes waste energy, in virtual clusters built from virtual machines, idle working nodes waste money in commercial Clouds or computational resources in an on-premises Cloud. / [ES] En los Centros de Procesos de Datos (CPD) existe una gran concentración de dispositivos informáticos y de equipamiento electrónico. Sin embargo, algunos estudios han mostrado que la utilización media de los CPD está en torno al 50%, y que la utilización media de los servidores se encuentra entre el 10% y el 50%. Estos datos evidencian que existe una gran cantidad de energía destinada a alimentar equipamiento ocioso, y que podríamos conseguir un ahorro energético simplemente apagando los componentes que no se estén utilizando.
En muchos CPD suele haber clusters de computadores que se utilizan para computación de altas prestaciones y para la creación de Clouds privados. Si bien se ha tratado de ahorrar energía utilizando componentes de bajo consumo, también es posible conseguirlo adaptando los sistemas a la carga de trabajo en cada momento. En los últimos años han surgido trabajos que investigan la aplicación de criterios energéticos a la hora de seleccionar en qué servidor, de entre los que forman un cluster, se debe ejecutar un trabajo o alojar una máquina virtual. En muchos casos se trata de conseguir equipos ociosos que puedan ser apagados, pero habitualmente se asume que dicho apagado se hace de forma automática, y que los equipos se encienden de nuevo cuando son necesarios. Sin embargo, es necesario hacer una planificación de encendido y apagado de máquinas para minimizar el impacto en el usuario final.
En esta tesis nos planteamos la gestión elástica y eficiente de infrastructuras de cálculo tipo cluster, con el objetivo de reducir los costes asociados a los componentes ociosos. Para abordar este problema nos planteamos la automatización del encendido y apagado de máquinas en los clusters, así como la aplicación de técnicas de migración en vivo y de sobreaprovisionamiento de memoria para estimular la obtención de equipos ociosos que puedan ser apagados. Además, esta automatización es de interés para los clusters virtuales, puesto que también sufren el problema de los componentes ociosos, sólo que en este caso están compuestos por, en lugar de equipos físicos que gastan energía, por máquinas virtuales que gastan dinero en un proveedor Cloud comercial o recursos en un Cloud privado. / [CA] En els Centres de Processament de Dades (CPD) hi ha una gran concentració de dispositius informàtics i d'equipament electrònic. No obstant això, alguns estudis han mostrat que la utilització mitjana dels CPD està entorn del 50%, i que la utilització mitjana dels servidors es troba entre el 10% i el 50%. Estes dades evidencien que hi ha una gran quantitat d'energia destinada a alimentar equipament ociós, i que podríem aconseguir un estalvi energètic simplement apagant els components que no s'estiguen utilitzant.
En molts CPD sol haver-hi clusters de computadors que s'utilitzen per a computació d'altes prestacions i per a la creació de Clouds privats. Si bé s'ha tractat d'estalviar energia utilitzant components de baix consum, també és possible aconseguir-ho adaptant els sistemes a la càrrega de treball en cada moment. En els últims anys han sorgit treballs que investiguen l'aplicació de criteris energètics a l'hora de seleccionar en quin servidor, d'entre els que formen un cluster, s'ha d'executar un treball o allotjar una màquina virtual. En molts casos es tracta d'aconseguir equips ociosos que puguen ser apagats, però habitualment s'assumix que l'apagat es fa de forma automàtica, i que els equips s'encenen novament quan són necessaris. No obstant això, és necessari fer una planificació d'encesa i apagat de màquines per a minimitzar l'impacte en l'usuari final.
En esta tesi ens plantegem la gestió elàstica i eficient d'infrastructuras de càlcul tipus cluster, amb l'objectiu de reduir els costos associats als components ociosos. Per a abordar este problema ens plantegem l'automatització de l'encesa i apagat de màquines en els clusters, així com l'aplicació de tècniques de migració en viu i de sobreaprovisionament de memòria per a estimular l'obtenció d'equips ociosos que puguen ser apagats. A més, esta automatització és d'interés per als clusters virtuals, ja que també patixen el problema dels components ociosos, encara que en este cas estan compostos per, en compte d'equips físics que gasten energia, per màquines virtuals que gasten diners en un proveïdor Cloud comercial o recursos en un Cloud privat. / Alfonso Laguna, CD. (2015). Efficient and elastic management of computing infrastructures [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57187 / Compendio
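A minimal sketch of the kind of power-management decision discussed in the English abstract above: power off worker nodes that have been idle longer than a grace period while keeping a few warm spares so new jobs are not delayed by boot times. This is an assumption-laden illustration, not the scheduler developed in the thesis; all names and thresholds are invented.

```python
# Illustrative idle-node power-off policy; not the thesis implementation.

def nodes_to_power_off(nodes, now, idle_grace_s=600, spare_nodes=2):
    """nodes maps name -> (busy: bool, idle_since: float seconds timestamp)."""
    idle = [(name, since) for name, (busy, since) in nodes.items() if not busy]
    # Keep the most recently idled nodes on as warm spares; consider the rest.
    idle.sort(key=lambda item: item[1], reverse=True)
    candidates = idle[spare_nodes:]
    return [name for name, since in candidates if now - since > idle_grace_s]

if __name__ == "__main__":
    now = 10_000.0
    cluster = {
        "wn1": (True, 0.0),        # busy
        "wn2": (False, 9_800.0),   # idle 200 s  -> kept as warm spare
        "wn3": (False, 9_000.0),   # idle 1000 s -> kept as warm spare
        "wn4": (False, 8_000.0),   # idle 2000 s -> candidate for power-off
    }
    print(nodes_to_power_off(cluster, now))   # -> ['wn4']
```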
139 |
[pt] MODELAGEM DE UM CIRCUITO DE TERMOSSIFÃO DE BAIXO IMPACTO AMBIENTAL COM APLICAÇÃO EM RESFRIAMENTO DE ELETRÔNICOS / [en] MODELING OF A TWO-PHASE THERMOSYPHON LOOP WITH LOW ENVIRONMENTAL IMPACT REFRIGERANT APPLIED TO ELECTRONIC COOLING / VERONICA DA ROCHA WEAVER 04 October 2021 (has links)
[pt] Diante dos constantes avanços da tecnologia os dispositivos eletrônicos vêm passando por um processo de miniaturização, ao mesmo tempo em que sustentam um aumento de potência. Essa tendência se mostra um desafio para seu gerenciamento térmico, uma vez que os sistemas de resfriamento típicos para eletrônicos utilizam ar como fluido de trabalho, e o seu baixo coeficiente de transferência de calor limita sua capacidade de atender às necessidades térmicas da indústria atual. Nesse sentido, o resfriamento bifásico tem sido considerado uma solução promissora para fornecer resfriamento adequado para dispositivos eletrônicos.
Circuitos de termossifão bifásico combinam a tecnologia de resfriamento bifásico com sua inerente natureza passiva, já que o sistema não requer uma bomba para fornecer circulação para seu fluido de trabalho, graças às forças da gravidade e de empuxo. Um dissipador de calor de microcanais, localizado bem em cima do dispositivo eletrônico, dissipa o calor gerado. Isto o torna uma solução de baixo custo e energia. Além disso, ter um circuito de termossifão operando com um refrigerante de baixo GWP, como o R-1234yf, resulta em baixo impacto para o meio ambiente, uma vez que é um refrigerante ecologicamente correto e o sistema tem baixo ou nenhum consumo de energia.
Este trabalho fornece um modelo numérico detalhado para a simulação de um circuito de termossifão bifásico, operando em condições de regime permanente. O circuito compreende um evaporador (chip e dissipador de calor de micro-aletas), um riser, um condensador refrigerado a água de tubo duplo e um downcomer. Equações fundamentais e constitutivas foram estabelecidas para cada componente. Um método numérico de diferenças finitas, 1-D para o escoamento do fluido por todos os componentes do sistema, e 2-D para a condução de calor no chip e evaporador foi empregado.
O modelo foi validado com dados experimentais para o refrigerante R134a, mostrando uma discrepância em relação ao fluxo de massa em torno de 6 por cento, para quando o sistema operava sob regime dominado pela gravidade. A pressão de entrada do evaporador prevista apresentou um erro relativo máximo de 4,8 por cento quando comparada aos resultados experimentais. Além disso, a maior discrepância da temperatura do chip foi inferior a 1 grau C.
Simulações foram realizadas para apresentar uma comparação de desempenho entre o R134a e seu substituto ecologicamente correto, R1234yf. Os resultados mostraram que quando o sistema operava com R134a, ele trabalhava com uma pressão de entrada no evaporador mais alta, assim como, com um fluxo de massa mais alto. Por causa disso, o R134a foi capaz de manter a temperatura do chip mais baixa do que o R1234yf. No entanto, essa diferença na temperatura do chip foi levemente inferior a 1 grau C, mostrando o R1234yf como comparável em desempenho ao R134a. Além disso, o fator de segurança da operação do sistema foi avaliado para ambos os refrigerantes, e para um fluxo de calor máximo do chip de 33,1 W/cm2, R1234yf mostrou um fator de segurança acima de 3. Isso significa que o circuito de termossifão pode operar com segurança abaixo do ponto crítico de fluxo de calor.
Dada a investigação sobre a comparação de desempenho dos refrigerantes R134a e R1234yf, os resultados apontaram o R1234yf como um excelente substituto ecologicamente correto para o R134a, para operação em um circuito de termossifão bifásico. / [en] Given the constant advances in technology, electronic devices have been going through a process of miniaturization while sustaining an increase in power. This trend proves to be a challenge for thermal management, since electronic cooling systems are commonly air based and the low heat transfer coefficient of air limits their capacity to keep up with the thermal needs of today's industry. In this respect, two-phase cooling has been regarded as a promising solution to provide adequate cooling for electronic devices.
Two-phase thermosyphon loops combine the technology of two-phase cooling with an inherently passive nature, as the system does not require a pump to circulate its working fluid, thanks to gravity and buoyancy forces. A micro-channel heat sink located right on top of the electronic device dissipates the heat generated. This makes for an energy- and cost-efficient solution. Moreover, having a thermosyphon loop operating with a low-GWP refrigerant such as R-1234yf results in low environmental impact, since it is an environmentally friendly refrigerant and the system has little to no energy consumption.
This work provides a detailed numerical model for the simulation of a two-phase thermosyphon loop operating under steady-state conditions. The loop comprises an evaporator (chip and micro-fin heat sink), a riser, a tube-in-tube water-cooled condenser and a downcomer. Fundamental and constitutive equations were established for each component. A finite-difference method, 1-D for the flow throughout the thermosyphon's components and 2-D for the heat conduction in the evaporator and chip, was employed. The model was validated against experimental data for refrigerant R134a, showing a mass flux discrepancy of around 6 percent when the system operated under a gravity-dominant regime. The predicted evaporator inlet pressure showed a maximum relative error of 4.8 percent when compared to the experimental results. Also, the largest discrepancy in chip temperature was lower than 1 degree C.
Simulations were performed to present a performance comparison between R134a and its environmentally friendly substitute, R1234yf. Results showed that when the system operated with R134a, it yielded a higher evaporator inlet pressure as well as a higher mass flux. Because of that, R134a was able to keep the chip temperature lower than R1234yf. Yet, that difference in chip temperature was slightly less than 1 degree C, showing R1234yf to be comparable in performance to R134a. In addition, the safety factor of the system's operation was evaluated for both refrigerants, and for a maximum chip heat flux of 33.1 W/cm2, R1234yf showed a safety factor above 3. This means the thermosyphon loop can operate safely below the critical heat flux.
Given the investigation on the performance comparison of refrigerants R134a and R1234yf, results pointed to R1234yf being a great environmentally friendly substitute for R134a for the two-phase thermosyphon loop.
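The safety factor quoted above is simply the ratio of the critical heat flux to the heat flux actually dissipated by the chip, as in the small check below. The critical heat flux value is a placeholder chosen so that the ratio exceeds 3, matching the reported margin; it is not the experimentally determined value.

```python
# Safety factor as the ratio of critical heat flux to operating chip heat flux.
# The CHF value below is a placeholder, not the thesis's measured/predicted CHF.

def safety_factor(q_critical_w_cm2, q_chip_w_cm2):
    return q_critical_w_cm2 / q_chip_w_cm2

print(round(safety_factor(q_critical_w_cm2=105.0, q_chip_w_cm2=33.1), 2))  # -> 3.17
```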
140 |
Maîtrise énergétique des centres de données virtualisés : D'un scénario de charge à l'optimisation du placement des calculs / Power management in virtualized data centers : From a load scenario to the optimization of task placement / Le Louët, Guillaume 12 May 2014 (has links)
Cette thèse se place dans le contexte de l’hébergement de services informatiques virtualisés et apporte deux contributions. Elle propose premièrement un système d’aide à la gestion modulaire, déplaçant les machines virtuelles du centre pour le maintenir dans un état satisfaisant. Ce système permet en particulier d’intégrer la notion de consommation électrique des serveurs ainsi que des règles propres à cette consommation. Sa modularité permet de plus l’adaptation de ses composants à des problèmes de grande taille. Cette thèse propose de plus un outil pour comparer différents gestionnaires de centres virtualisés. Cet outil injecte un scénario de montée en charge reproductible dans une infrastructure virtualisée. L’injection d’un tel scénario permet d’évaluer les performances du système de gestion du centre grâce à des sondes spécifiques. Le langage utilisé pour cette injection est extensible et permet l’utilisation de scénarios paramétrés. / This thesis considers the hosting of virtualized IT services and makes two contributions. It first proposes a modular management-assistance system that moves the virtual machines of the center in order to keep it in a satisfactory state. In particular, this system can integrate the notion of server power consumption and rules specific to that consumption. Moreover, its modularity allows its components to be adapted to large problems. The thesis also proposes a tool to compare different managers of virtualized centers. This tool injects a reproducible load-increase scenario into a virtualized infrastructure. Injecting such a scenario makes it possible to evaluate the performance of the center's management system through specific probes. The language used for this injection is extensible and allows the use of parameterized scenarios. The contributions of this thesis were presented at two international conferences and one French conference.
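Purely as an illustration of the scenario-injection idea described above (the thesis defines its own extensible language, which is not reproduced here), the sketch below expresses a parameterized load ramp and replays it against dummy injector and probe callbacks. Every name and parameter is hypothetical.

```python
# Hypothetical load-increase scenario replay; not the thesis's injection language.
import time

def ramp_scenario(start_vms, step_vms, steps, dwell_s):
    """Yield (time offset in seconds, target VM count) pairs for a load ramp."""
    for i in range(steps):
        yield i * dwell_s, start_vms + i * step_vms

def replay(scenario, set_load, read_probes):
    """Apply each scenario step at its offset and record probe readings."""
    results = []
    t0 = time.monotonic()
    for offset, target in scenario:
        time.sleep(max(0.0, offset - (time.monotonic() - t0)))
        set_load(target)                   # e.g. ask the manager for `target` VMs
        results.append((target, read_probes()))
    return results

if __name__ == "__main__":
    state = {"vms": 0}                     # dummy infrastructure state
    print(replay(ramp_scenario(start_vms=2, step_vms=2, steps=3, dwell_s=0.1),
                 set_load=lambda n: state.update(vms=n),
                 read_probes=lambda: {"vms_running": state["vms"]}))
```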