1 |
Integrated Control of Multiple Cooling Units / Mozaffari, Shirin. January 2019
Data centres are an integral part of today's technology. With the growing demand for data centres to meet computational needs, there is pressure to decrease data-centre-related costs. Reducing the power needed to cool servers lowers overall power consumption, and efficient cooling of data centres means meeting temperature constraints while minimizing the power spent doing so. Exploring the opportunities available through controlling multiple cooling units together helps avoid issues such as overcooling (some parts of the data centre being cooled more than necessary) and warm-air recirculation (the return of exhausted hot air to server inlets). Currently, in data centres with more than one cooling unit, each unit is controlled independently. This mode of operation forces every unit to be set for the worst case, which results in overcooling and is not energy efficient. Coordinating the cooling units can decrease a data centre's power consumption by eliminating this overcooling, and coordinating with workload management may reduce cooling power consumption further. This research explores what is feasible within the options above and contains two main parts. In the first part, we present an algorithm that minimizes the power consumption of the cooling units while keeping all server cores below a temperature threshold. In the second part, we derive a data-driven model for server outlet temperature. / Thesis / Master of Science (MSc)
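To make the first part concrete, the following minimal sketch shows the kind of constrained optimisation the abstract describes: minimising total cooling power subject to a core-temperature threshold. The two-unit linear thermal model, all coefficients, and the bounds are hypothetical placeholders, not the thesis's data-driven model.

```python
# Hedged sketch: minimise cooling power subject to temperature constraints.
# The two-unit linear thermal model and every coefficient are hypothetical.
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.8, 0.2],            # cooling effect of unit j on zone i (degC per unit effort)
              [0.3, 0.7]])
T_UNCOOLED = np.array([38.0, 40.0])  # zone temperatures with no cooling (degC)
T_MAX = 30.0                         # core temperature threshold (degC)

def power(u):
    # Cooling power assumed to grow quadratically with effort.
    return float(np.sum(u ** 2))

def temps(u):
    return T_UNCOOLED - A @ u

result = minimize(
    power,
    x0=np.array([5.0, 5.0]),
    bounds=[(0.0, 20.0)] * 2,
    constraints=[{"type": "ineq", "fun": lambda u: T_MAX - temps(u)}],  # temps <= T_MAX
)
print("cooling efforts:", result.x, "zone temps:", temps(result.x))
```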
|
2 |
Power system design guidelines to enhance the reliability of cellular networks in Africa / Strydom, Leon Petrus. January 2014
Cellular networks in Africa have grown exponentially over the past 10 years and their data centres (DCs) on average consume 3 MW of electrical power. They require a reliable electrical power supply and can have a downtime loss of over a million dollars per hour. Power quality, reliability and availability have emerged as key issues for the successful operation of a data centre.
Investigations are carried out into emerging technologies and their application in data centre power distribution systems for cellular networks in Africa. Best practices are applied to develop a power distribution system (PDS) with the objective of achieving optimal reliability and availability.
Analytical techniques are applied to determine and compare the reliability and availability of various power systems. Minimal cut set simulations identify system weak points and confirm component selection. Components' inherent characteristics (CIC) and system connectivity topology (SCT) are key factors in improving data centre availability.
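As a sketch of how a minimal-cut-set analysis turns component data into system availability, consider the following; the components, MTBF/MTTR figures, and cut sets are hypothetical illustrations, not the thesis's case-study values.

```python
# Hedged sketch of a minimal-cut-set availability calculation.
# All component data (hours) and cut sets are hypothetical.
import math

MTBF = {"utility": 2_000.0, "generator": 5_000.0, "ups_a": 40_000.0, "ups_b": 40_000.0}
MTTR = {"utility": 4.0, "generator": 24.0, "ups_a": 8.0, "ups_b": 8.0}

def unavailability(c):
    return MTTR[c] / (MTBF[c] + MTTR[c])

# A minimal cut set is a smallest group of components whose joint failure
# takes the system down; the rare-event approximation sums their products.
cut_sets = [{"utility", "generator"}, {"ups_a", "ups_b"}]
Q = sum(math.prod(unavailability(c) for c in s) for s in cut_sets)

print(f"availability ~ {1 - Q:.9f}")
print(f"expected downtime ~ {Q * 8760:.3f} hours/year")
```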
These analysis practices can be used by engineers and managers as a basis for informed decisions about the power system reliability and availability of an existing or new data centre design. Weak points in the PDS of a data centre that cause downtime are identified through analysis, and targeted solutions can be determined to prevent or minimise it.
System connectivity topology (SCT) techniques were identified that could increase the reliability and availability of data centres for cellular networks in Africa. These techniques include multiple incomers from the utility company, redundancy levels of critical equipment and parallel distribution paths.
Two case studies were carried out on data centres for a cellular network, one in Nigeria and one in Cameroon. The reliability and availability of both data centres were improved, with a substantial reduction in downtime per year.
The outcome of the case studies shows the importance of designing and implementing the power distribution system with sufficient levels of redundancy for critical equipment, and parallel distribution paths. / MSc (Engineering Sciences in Nuclear Engineering), North-West University, Potchefstroom Campus, 2014
|
3 |
A Novel Architecture, Topology, and Flow Control for Data Center Networks / Yuan, Tingqiu. 23 February 2022
With the advent of new applications such as Cloud Computing, Blockchain, Big Data, and Machine Learning, modern data center network (DCN) architecture has been evolving to meet numerous challenging requirements, such as scalability, agility, energy efficiency, and high performance. Some of these new applications are expediting the convergence of high-performance computing and data centers. This convergence has prompted research into a single, converged data center architecture that unites computing, storage, and the interconnect network in a synthetic system designed to reduce the total cost of ownership and deliver greater efficiency and productivity. The interconnect network is a critical aspect of data centers, as it sets performance bounds and determines much of the total cost of ownership. The design of an interconnect network rests on three factors: topology, routing, and congestion control; this thesis addresses all three to satisfy the challenging requirements above.
To address the challenges noted above, the communication patterns of emerging applications are investigated, and it is shown that their dynamic and diverse traffic patterns (denoted *-cast), especially multicast, incast, broadcast (one-to-all), and all-to-all-cast, have a significant impact on application performance. Inspired by hypermesh topologies, this thesis presents a novel, cost-efficient topology for large-scale Data Center Networks (DCNs) called HyperOXN. HyperOXN takes advantage of high-radix switch components leveraging state-of-the-art colorless wavelength-division multiplexing technologies, effectively supports *-cast traffic, and at the same time meets the demands for high throughput, low latency, and lossless delivery. HyperOXN provides a non-blocking interconnect network at a relatively low overhead cost. Through theoretical analysis, this thesis studies the topological properties of HyperOXN and compares it with other types of interconnect networks, such as Fat-Tree, Flattened Butterfly, and Hypercube-like topologies. Passive optical cross-connection networks are used in the HyperOXN topology, enabling economical, power-efficient, and reliable communication within DCNs. It is shown that HyperOXN outperforms a comparable Fat-Tree topology in cost, throughput, power consumption, and cabling under a variety of workload conditions.
A HyperOXN network provides multiple paths between a source and its destination to obtain high bandwidth and achieve fault tolerance. Inspired by the power-of-two-choices technique, a novel stochastic, global congestion-aware load-balancing algorithm is designed to achieve near-optimal load balance across the multiple shared paths while guaranteeing low latency for short-lived mouse flows and high throughput for long-lasting elephant flows. The stability of the flow-scheduling algorithm is formally proven. Experimental results show that the algorithm eliminates harmful interaction between elephant and mouse data center flows and ensures high network bandwidth utilization.
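The power-of-two-choices idea the algorithm draws on can be sketched in a few lines; the path count, congestion metric, and flow sizes below are illustrative assumptions, not details from the thesis.

```python
# Hedged sketch of power-of-two-choices path selection over shared paths.
import random

def pick_path(congestion):
    """Probe two paths chosen uniformly at random and take the less
    congested one; this keeps load near-balanced without scanning all paths."""
    a, b = random.sample(range(len(congestion)), 2)
    return a if congestion[a] <= congestion[b] else b

congestion = [0.0] * 8                       # queued bytes per candidate path
flows = [1_500, 64_000, 1_500, 10_000_000]   # mouse flows and one elephant
for size in flows:
    path = pick_path(congestion)
    congestion[path] += size
print(congestion)
```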
|
4 |
Literature Survey on Optical Data Centre Networks / Chen, Hao. January 2015
Data centre networks are currently experiencing a dramatic increase in the amount of traffic they must handle, driven by cloud technology and several emerging applications. To address this challenge, mega data centres are required, with hundreds of thousands of servers connected by high-bandwidth interconnects. Current data centre networks, based on electronic packet switches, consume a huge amount of power to support the increased bandwidth required by the emerging applications. Optical interconnects have gained growing attention as a promising solution, offering high capacity while consuming much less energy than commodity-switch-based solutions. This thesis provides a thorough literature study of optical interconnects for data centre networks that are expected to handle future traffic efficiently. Two major types of optical interconnect are reviewed. The first is hybrid switching, where optical switching handles large flows while electronic switches handle traffic at the packet level. The second is based on all-optical switching, where power-consuming electronic interconnects can be avoided entirely. Furthermore, the thesis includes a qualitative comparison of the presented schemes based on their main features, such as topology, technology, network performance, scalability, and energy consumption.
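The hybrid scheme's core decision, keeping mouse flows on the electronic packet network and offloading elephant flows to optical circuits, can be sketched as follows; the 10 MB threshold is an illustrative assumption rather than a figure from the surveyed literature.

```python
# Hedged sketch of the elephant/mouse split used in hybrid switching.
ELEPHANT_BYTES = 10 * 1024 * 1024  # assumed offload threshold

def schedule(bytes_seen: int) -> str:
    # Long-lived, high-volume flows go to the optical circuit switch;
    # everything else stays on the electronic packet network.
    return "optical circuit" if bytes_seen >= ELEPHANT_BYTES else "electronic packet"

for size in (4_096, 1_500_000, 500_000_000):
    print(f"{size:>12} B -> {schedule(size)}")
```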
|
5 |
Utilising waste heat from Edge-computing Micro Data Centres : Financial and Environmental synergies, Opportunities, and Business Models / Dowds, Eleanor Jane; El-Saghir, Fatme. January 2021
In recent times, there has been an explosion in the need for high-density computing and data processing. As a result, the Internet and Communication Technology (ICT) sector's demand on global energy resources has tripled in the last five years. Edge computing, which brings computing power close to the user, is set to be the cornerstone of future communication and information transport, satisfying the demand for instant response times and near-zero latency needed for applications such as 5G, self-driving vehicles, face recognition, and much more. The micro data centre (micro DC) is key hardware in the shift to edge computing. Being self-contained, with in-rack liquid cooling systems, micro data centres can be placed wherever they are needed most, often in areas not usually thought of as locations for data centres, such as offices and housing blocks. This presents an opportunity to make the ICT industry greener and to contribute to lowering total global energy demand, while fulfilling both data-processing and heating requirements. If a solution can be found to capture and utilise waste heat from the growing number of micro data centres, it would have a massive impact on overall energy consumption. This project explores this potential synergy by investigating two ways of utilising waste heat: supplying it to the district heating network (Case 1), and using the micro DC as a 'data furnace' supplying heat to its near vicinity (Cases 2 and 3). Two scenarios of differing costs and incomes are explored in each case, and a sensitivity analysis is performed to determine how sensitive each scenario is to changing internal and external factors. The results achieved are extremely promising: capturing waste heat from micro data centres, both to supply the local district heating network and to provide central heating nearby, proves to be economically and physically viable. The three business models ('Cases') created not only show good financial promise but also demonstrate a way of creating value through greener computing and heat supply. The amount of waste heat that can be captured is sufficient to heat many apartments in residential blocks and office buildings, and the temperatures achieved meet the heating requirements of these facilities, meaning no extra energy is required for priming the waste heat. It is hoped that the investigations and analyses performed in this thesis will further the discussion around utilising waste heat from lower-grade energy sources, such as micro DCs, so that one day potential can become reality.
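A back-of-envelope version of the heat-capture arithmetic can be sketched as follows; every figure here is an illustrative assumption, not a result from the thesis.

```python
# Hedged back-of-envelope estimate of recoverable waste heat from one micro DC.
rack_power_kw = 30.0            # assumed IT load of one micro DC
capture_fraction = 0.85         # assumed share recoverable via liquid cooling
hours_per_year = 8760
apartment_demand_kwh = 8_000.0  # assumed annual heating demand per apartment

heat_kwh = rack_power_kw * capture_fraction * hours_per_year
print(f"recoverable heat: {heat_kwh:,.0f} kWh/year")
print(f"roughly {heat_kwh / apartment_demand_kwh:.0f} apartments' heating demand")
```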
|
6 |
Design of Silicon Photonics External Cavity Laser / Zheng, Jiamin. January 2014
The development of silicon photonics, driven by the increasing demand for bandwidth from data centre applications, is receiving growing attention. Because of the indirect bandgap of silicon, it is more practical to incorporate the laser source heterogeneously than to fabricate it directly on Si. Of all the approaches, an external cavity laser (ECL), consisting of III-V gain material and a Si photonic integrated circuit (SiPIC), is a flexible and cost-effective solution. This thesis captures theoretical and experimental work on the design of SiPIC ECLs. In addition, a four-wavelength laser source using an SiPIC ECL scheme is proposed and studied.
The theoretical tool, a traveling wave model (TWM), is introduced first and solved numerically with an FDTD scheme in Matlab. A digital filter approach is used to describe the feedback from the SiPIC external cavity, where the phase delay of the digital filter is investigated and used to set the cavity length.
The III-V gain chip and the SiPIC are then examined separately for their characterization, along with the coupling and feedback requirements of an ECL design.
Lastly, experiments are conducted to demonstrate the feasibility of four-wavelength ECLs and SiPIC ECLs. / Master of Applied Science (MASc)
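The mapping from a digital filter's phase delay to an effective cavity length can be sketched roughly as below. The filter taps, sampling rate, and group index are hypothetical, and the mapping is a simplification of the thesis's TWM treatment rather than its actual procedure.

```python
# Hedged sketch: phase delay of a digital filter modelling external-cavity
# feedback, mapped to an effective cavity length. All values are hypothetical.
import numpy as np
from scipy.signal import freqz

taps = np.array([0.05, 0.2, 0.5, 0.2, 0.05])       # toy FIR reflection response
w, h = freqz(taps, worN=2048)                      # w in rad/sample
phase_delay = -np.unwrap(np.angle(h[1:])) / w[1:]  # phase delay in samples

dt = 1.0 / 100e9   # assumed 100 GHz sampling of the optical field envelope
n_g = 4.0          # assumed group index of the Si waveguide
c = 3.0e8
round_trip_s = float(phase_delay.mean()) * dt
length_m = round_trip_s * (c / n_g) / 2.0          # round trip -> one-way length
print(f"effective external cavity length ~ {length_m * 1e3:.2f} mm")
```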
|
7 |
Failure Analysis Modelling in an Infrastructure as a Service (IaaS) Environment / Mohammed, Bashir; Modu, Babagana; Maiyama, Kabiru M.; Ugail, Hassan; Awan, Irfan U.; Kiran, Mariam. 30 October 2018
Failure prediction has long been known to be a challenging problem. With the evolving trend of technology and the growing complexity of high-performance cloud data centre infrastructure, a focus on failure becomes vital, particularly when designing systems for the next generation. Traditional runtime fault-tolerance (FT) techniques, such as data replication and periodic checkpointing, are not very effective at handling the current generation of emerging computing systems. This has created an urgent need for a robust system with an in-depth understanding of system and component failures, as well as the ability to accurately predict potential future system failures. In this paper, we studied in-production fault data recorded over a five-year period at the National Energy Research Scientific Computing Center (NERSC). Using the data collected from the Computer Failure Data Repository (CFDR), we developed an effective failure prediction model focusing on high-performance cloud data centre infrastructure. With an Auto-Regressive Moving Average (ARMA) approach, our model was able to predict potential future failures in the system. Our results show a failure prediction accuracy of 95%.
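A minimal version of the ARMA forecasting step might look like the following; the synthetic failure counts and the (p, q) order are illustrative assumptions, not the paper's fitted model.

```python
# Hedged sketch of ARMA-based failure forecasting on synthetic counts.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(seed=42)
failures = rng.poisson(lam=3.0, size=60).astype(float)  # monthly failure counts

# ARMA(p, q) is ARIMA(p, 0, q); the order here is chosen for illustration only.
model = ARIMA(failures, order=(2, 0, 1))
fitted = model.fit()
print(fitted.forecast(steps=6))  # predicted failure counts, next six months
```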
|
8 |
Optimisation of a Hadoop cluster based on SDN in cloud computing for big data applications / Khaleel, Ali. January 2018
Big data has received a great deal of attention from many sectors, including academia, industry and government. The Hadoop framework has emerged to support its storage and analysis using the MapReduce programming model. However, this framework is a complex system with more than 150 parameters, some of which can exert a considerable effect on the performance of a Hadoop job. Optimal tuning of the Hadoop parameters is a difficult and time-consuming task. In this thesis, an optimisation approach is presented to improve the performance of the Hadoop framework by setting the values of the Hadoop parameters automatically. Specifically, genetic programming is used to construct a fitness function that represents the interrelations among the Hadoop parameters. A genetic algorithm is then employed to search for the optimum, or near-optimum, parameter values. A Hadoop cluster was configured on two servers at Brunel University London to evaluate the performance of the proposed optimisation approach. The experimental results show that the performance of a Hadoop MapReduce job on 20 GB of data improves by 69.63% and 30.31% on the WordCount application when compared with the default settings and the state of the art, respectively; on the TeraSort application, it improves by 73.39% and 55.93%. For further optimisation, SDN is also employed to improve the performance of a Hadoop job. The experimental results show that the performance of a Hadoop job in an SDN network for 50 GB improves by 32.8% when compared with a traditional network; on the TeraSort application, the improvement for 50 GB is on average 38.7%. An effective computing platform is also presented in this thesis to support solar irradiation data analytics. It is built on RHIPE to provide fast analysis and calculation for solar irradiation datasets. The performance of RHIPE is compared with the R language in terms of accuracy, scalability and speedup. The speedup of RHIPE is evaluated using Gustafson's law, revised to enhance the performance of parallel computation on intensive irradiation datasets in a cluster computing environment like Hadoop. The performance of the proposed work is evaluated on a Hadoop cluster based on the Microsoft Azure cloud, and the experimental results show that RHIPE provides considerable improvements over the R language. Finally, an effective SDN-based routing algorithm is presented to improve the performance of a Hadoop job in a large-scale data centre network. The algorithm improves performance during the shuffle phase by allocating efficient paths for each shuffling flow, according to each flow's network resource demand, size, and number, and it allocates alternative paths for each shuffling flow in the case of a link crash or failure. The algorithm is evaluated on two network topologies, fat-tree and leaf-spine, built with the EstiNet emulator. The experimental results show that the proposed approach improves the performance of a Hadoop job in a data centre network.
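The parameter-search loop can be sketched as below. The two Hadoop parameters are real configuration keys, but their ranges and the surrogate fitness function are hypothetical; the thesis builds its fitness function with genetic programming from measured job runs.

```python
# Hedged sketch of a genetic algorithm tuning two Hadoop parameters.
# Ranges and the surrogate fitness function are hypothetical.
import random

RANGES = {
    "mapreduce.task.io.sort.mb": (100, 2048),
    "mapreduce.reduce.shuffle.parallelcopies": (5, 100),
}

def fitness(ind):
    # Stand-in for (negated) predicted job runtime; higher is better.
    sort_mb, copies = ind
    return -abs(sort_mb - 1024) - 10 * abs(copies - 40)

def random_individual():
    return [random.randint(lo, hi) for lo, hi in RANGES.values()]

population = [random_individual() for _ in range(30)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # elitist selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = [random.choice(genes) for genes in zip(a, b)]  # uniform crossover
        if random.random() < 0.2:                  # random-reset mutation
            i = random.randrange(len(child))
            lo, hi = list(RANGES.values())[i]
            child[i] = random.randint(lo, hi)
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(dict(zip(RANGES, best)))
```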
|
9 |
Statistics of Public Enterprises in Germany: The Data Basis (Statistik der öffentlichen Unternehmen in Deutschland: die Datenbasis) / Dietrich, Irina; Strohe, Hans Gerhard. January 2011
In line with their economic and political understanding, public enterprises are given an operational definition based on the German Law on Finance and Personnel Statistics (Finanz- und Personalstatistikgesetz) and are distinguished from both public authorities and private enterprises. It is shown that they are not congruent with the government sector, but overlap with it in part. A subset of public enterprises is therefore relevant to the national accounts, in particular to the public debt level and thus to the convergence criteria of the Economic and Monetary Union.
Official statistics obtain the data for the statistics of public enterprises through a complete census of these enterprises' annual financial statements, including their profit and loss accounts. In its detail and depth, the statistics of public enterprises thus surpass most other specialised statistics. This advantage is offset by the drawback of relatively late availability.
The statistics are available to researchers as a formally anonymised file at researcher workstations in the research data centres of the statistical offices of the Federation and the Länder. The anonymisation process further delays the availability of the data and, together with strict confidentiality rules in the research data centres, conflicts with the transparency that is called for and the mandatory disclosure of balance sheets in the public sector.
|