  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Clustering server properties and syntactic structures in state machines for hyperscale data center operations

Jatko, Johan January 2021 (has links)
In hyperscale data center operations, automation is applied in many ways, as it becomes very hard to scale otherwise. However, the understanding, grouping, and diagnosing of error reports are still done manually at Facebook today. This master's thesis investigates solutions for applying unsupervised clustering methods to server error reports, server properties, and historical data to speed up and enhance the process of finding and root-causing systematic issues. By utilizing data representations that can embed both key-value data and historical event-log data, the thesis shows that clustering algorithms, together with data representations that capture syntactic and semantic structures in the data, can be applied with good results in a real-world scenario.
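As a rough illustration of the kind of grouping described — a minimal sketch only, where the tokenization of key-value properties and the greedy Jaccard threshold are assumptions, not the thesis's actual pipeline:

```python
# Toy sketch: group server error reports by similarity of their
# key-value properties, using a greedy Jaccard-similarity threshold.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def cluster_reports(reports, threshold=0.5):
    """Assign each report to the first cluster whose representative is
    at least `threshold` similar; otherwise start a new cluster."""
    clusters = []  # list of (representative token set, [report indices])
    for i, rep in enumerate(reports):
        tokens = {f"{k}={v}" for k, v in rep.items()}
        for rep_set, members in clusters:
            if jaccard(tokens, rep_set) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((tokens, [i]))
    return [members for _, members in clusters]

reports = [
    {"error": "ECC", "vendor": "A", "fw": "1.2"},
    {"error": "ECC", "vendor": "A", "fw": "1.3"},
    {"error": "NIC-down", "vendor": "B", "fw": "7.0"},
]
print(cluster_reports(reports))  # [[0, 1], [2]]
```

A real deployment would use richer embeddings of event-log history and a proper clustering algorithm, but the effect is the same: reports sharing most of their properties land in one cluster, surfacing systematic issues.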
72

Dealing with Data

Lundberg, Agnes January 2021 (has links)
Being an architect means dealing with data. All architectural thinking—whether it is done with pen and paper or the most advanced modeling software—starts with data conveying information about the world, and ultimately outputs data, in the form of drawings or models. Reality is neither the input nor the output. All architectural work consists of abstractions of reality, mediated by data. What if data, the abstractions of reality that are crucial for our work as architects, were used more literally? Could data actually be turned into architecture? Could data be turned into, for example, a volume, a texture, or an aperture? What qualities would such an architecture have? These questions form the basis of this thesis project. The topic was investigated first by developing a simple design method for generating architectural forms from data, through an iterative series of tests. The design method was then applied to create a speculative design proposal for a combined data center and museum, located in Södermalm, Stockholm.
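One way to picture such a design method — purely hypothetical, not the method developed in the thesis — is to map raw bytes literally onto an architectural quantity, for instance a grid of extrusion heights:

```python
# Hypothetical sketch: turn a byte stream "literally" into form by
# reading each byte (0-255) as an extrusion height on a grid.

def bytes_to_heightfield(data: bytes, width: int, max_height_m: float = 12.0):
    """Map each byte to a height in metres and fold the sequence
    into rows of `width` cells."""
    heights = [b / 255 * max_height_m for b in data]
    return [heights[i:i + width] for i in range(0, len(heights), width)]

field = bytes_to_heightfield(b"DATA CENTER", width=4)
```

Any dataset fed through such a rule produces a different massing, which is exactly the kind of literalness the project's questions gesture at.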
73

Network-Layer Protocols for Data Center Scalability / Protocoles de couche réseau pour l’extensibilité des centres de données

Desmouceaux, Yoann 10 April 2019 (has links)
With the growth of demand for computing resources, data center architectures are growing both in scale and in complexity. In this context, this thesis takes a step back as compared to traditional network approaches, and shows that providing generic primitives directly within the network layer is an effective way to improve resource usage and to decrease network traffic and management overhead. Using two recently introduced network architectures, Segment Routing (SR) and Bit-Indexed Explicit Replication (BIER), network-layer protocols are designed and analyzed to provide three high-level functions: (1) task mobility, (2) reliable content distribution, and (3) load-balancing. First, task mobility is achieved by using SR to provide a zero-loss virtual machine migration service. This opens the opportunity to study how to orchestrate task placement and migration while aiming at (i) maximizing inter-task throughput, (ii) maximizing the number of newly placed tasks, and (iii) minimizing the number of tasks to be migrated. Second, reliable content distribution is achieved by using BIER to provide a reliable multicast protocol, in which retransmissions of lost packets are targeted towards the precise set of destinations having missed that packet, thus incurring minimal traffic overhead. To decrease the load on the source link, this is then extended to enable retransmissions by local peers from the same group, with SR as a helper to find a suitable retransmission candidate. Third, load-balancing is achieved by using SR to distribute queries across several candidate application instances, each of which takes a local decision on whether to accept a query, thus achieving better fairness than centralized approaches. The feasibility of a hardware implementation of this approach is investigated, and a solution using covert channels to transparently convey information to the load-balancer is implemented for a state-of-the-art programmable network card. Finally, the possibility of providing autoscaling as a network service is investigated: by letting queries go through a fixed chain of applications using SR, autoscaling is triggered by the last instance, depending on its local state.
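The load-balancing primitive can be sketched as a small simulation — the instance names and capacities below are assumptions for illustration, not the thesis's actual SR implementation. A query carries a segment list of candidate instances, and each takes a purely local accept-or-forward decision:

```python
# Illustrative simulation of SR-based load balancing with local
# accept/forward decisions at each candidate application instance.

class Instance:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.active = name, capacity, 0

    def offer(self):
        """Local decision: accept iff there is spare capacity."""
        if self.active < self.capacity:
            self.active += 1
            return True
        return False

def dispatch(query_id, segment_list):
    """Walk the SR segment list until some instance accepts the query."""
    for inst in segment_list:
        if inst.offer():
            return inst.name
    return None  # every candidate was busy

a, b = Instance("app-1", capacity=1), Instance("app-2", capacity=2)
assignments = [dispatch(q, [a, b]) for q in range(3)]
print(assignments)  # ['app-1', 'app-2', 'app-2']
```

No central dispatcher tracks load; fairness emerges from each instance knowing only its own state, which is the property the thesis contrasts with centralized approaches.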
74

Ein Beitrag zum energie- und kostenoptimierten Betrieb von Rechenzentren mit besonderer Berücksichtigung der Separation von Kalt- und Warmluft / A contribution to the energy- and cost-optimized operation of data centers, with particular consideration of cold-/hot-air separation

Hackenberg, Daniel 07 February 2022 (has links)
This thesis presents a simulation-based methodology for optimizing the energy use and cost of cooling in data centers with cold-/hot-air separation. The specific characteristics of air separation are exploited to enable a substantially simpler and faster simulation approach than is possible with conventional computational fluid dynamics methods. The energy demand of air transport, including the IT systems' internal fans, is also taken into account in the optimization. Component models developed as examples cover the IT systems and all cooling-relevant equipment in state-of-the-art configurations; the especially important aspects of free cooling and evaporative cooling are considered. Using several configurations of a model data center, the minimization of annual consumption-related costs by adjusting temperature setpoints and other control parameters is demonstrated, and the available savings potential is quantified.
Since cold-/hot-air separation in modern high-density installations also affects structural requirements, a building concept optimized for this use case is proposed and investigated in practice; it offers particular advantages in energy efficiency, flexibility, and operational reliability.
Contents: 1 Introduction (motivation; categorization of data centers; efficiency metrics for data centers; scientific contribution and delimitation). 2 Air-cooled IT systems: requirements and trends (room-climate requirements: air temperature, humidity, hot-aisle air state, sound pressure level and corrosive gases, operating procedures and staff; cooling loads: IT power demand, load profiles and part-load operation, area-specific cooling loads; leakage air flows; development trends). 3 Data center cooling: common solutions and optimization concepts (heat-removal concepts: free cooling, mechanical refrigeration, recirculating-air cooling of computer rooms; recirculating-air cooling with cold/hot-air separation: concept, implementation, control of the recirculating-air cooling units, efficiency optimization by raising the air temperature, operational reliability; model-based studies in the literature; interim conclusions). 4 Modeling (model structure and simulation flow; assumptions and boundary conditions; modeling of the IT systems: test systems and software, test setup and measurement of the relevant physical quantities, internal fan speed and power, airflow, IT power excluding fans, exhaust temperature; modeling of the cooling systems: pumps, piping and fans, heat exchangers, recirculating-air cooling units, buffer storage, chillers, dry coolers, free cooling; control strategies, setpoints and load profiles: cold air, chilled water, condenser water, chillers, IT load profile, weather data, site-specific costs of other utilities; validation of the simulation environment: spot-check experiments on the cooling-unit and refrigeration models, plausibility checks and model limits; interim conclusions). 5 Parameter studies and recommendations (configuration and operating points of the model data center; optimization of annual energy demand with constant coolant temperatures; optimization with variable coolant temperatures, with dry and with wetted dry coolers, including a configuration without chillers, its operational reliability, and ice storage; interim conclusions). 6 Presentation and discussion of a new building concept for data centers (building concepts and requirements for reliability, efficiency, and flexibility; plenum instead of raised floor: concept and implementation; experimental performance evaluation and optimization; interim conclusions and further optimization potential). 7 Summary and outlook.
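The trade-off the optimization resolves can be sketched with a toy model — every coefficient below is invented for illustration and is not one of the dissertation's component models. Raising the supply-air setpoint improves chiller efficiency, but it shrinks the usable temperature difference, forcing more airflow, whose fan power grows roughly with the cube of the flow:

```python
# Toy cooling model: find the supply-air setpoint minimizing total
# cooling power = chiller power (COP improves with warmer air)
#               + fan power (airflow ~ 1/deltaT, power ~ flow^3).

def cooling_power_kw(setpoint_c, it_load_kw=500.0):
    """Total cooling power for a given supply-air setpoint (toy numbers)."""
    cop = 3.0 + 0.15 * (setpoint_c - 18.0)   # warmer air: better chiller COP
    chiller_kw = it_load_kw / cop
    delta_t = 35.0 - setpoint_c              # assumed server exhaust at 35 degC
    fan_kw = 40.0 * (10.0 / delta_t) ** 3    # cubic fan law over required flow
    return chiller_kw + fan_kw

best = min(range(18, 28), key=cooling_power_kw)
print(best)  # the optimum lies strictly between the extremes
```

Even this caricature reproduces the dissertation's core point: because fan energy (including the IT fans) is counted, "as warm as possible" is not optimal, and the cost-minimizing setpoint is an interior point that a simulation must find.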
75

Passive Optical Top-of-Rack Interconnect for Data Center Networks

Cheng, Yuxin January 2017 (has links)
Optical networks offering ultra-high capacity and low energy consumption per bit are considered a good option for handling the rapidly growing traffic volume inside data centers (DCs). However, most of the optical interconnect architectures proposed for DCs so far focus mainly on the aggregation/core tiers of the data center networks (DCNs), while relying on conventional top-of-rack (ToR) electronic packet switches (EPS) in the access tier. The large number of ToR switches in current DCNs brings serious scalability limitations due to high cost and power consumption. Thus, it is important to investigate and evaluate new optical interconnects tailored for the access tier of the DCNs. We propose and evaluate a passive optical ToR interconnect (POTORI) architecture for the access tier. The data plane of POTORI consists mainly of passive components to interconnect the servers within the rack as well as the interfaces toward the aggregation/core tiers. Using passive components makes it possible to significantly reduce power consumption while achieving high reliability in a cost-efficient way. Meanwhile, POTORI's control plane is based on a centralized rack controller, which is responsible for coordinating the communications among the servers in the rack and can be reconfigured through software-defined networking (SDN) operation. A cycle-based medium access control (MAC) protocol and a dynamic bandwidth allocation (DBA) algorithm are designed for POTORI to efficiently manage the exchange of control messages and the data transmission inside the rack. Simulation results show that under realistic DC traffic scenarios, POTORI with the proposed DBA algorithm achieves an average packet delay below 10 μs with the use of fast tunable optical transceivers. Moreover, we further quantify the impact of different network configuration parameters on the average packet delay.
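A cycle-based DBA of the kind described can be sketched as follows — a simplified illustration with assumed numbers, not POTORI's actual algorithm. Each cycle, the rack controller grants transmission slots in proportion to the servers' reported queue sizes, never exceeding the cycle capacity:

```python
# Simplified cycle-based dynamic bandwidth allocation: grant slots
# proportionally to queued traffic (largest-remainder rounding).

def allocate_slots(queues, cycle_slots):
    """Return per-server slot grants summing to at most `cycle_slots`."""
    total = sum(queues)
    if total <= cycle_slots:
        return list(queues)                     # everyone fully served
    shares = [q * cycle_slots / total for q in queues]
    grants = [int(s) for s in shares]
    leftover = cycle_slots - sum(grants)
    # Hand the remaining slots to the largest fractional remainders.
    order = sorted(range(len(queues)),
                   key=lambda i: shares[i] - grants[i], reverse=True)
    for i in order[:leftover]:
        grants[i] += 1
    return grants

grants = allocate_slots([30, 10, 0, 60], cycle_slots=50)
print(grants)  # [15, 5, 0, 30]
```

The real protocol also has to schedule the control-message exchange and transceiver tuning within each cycle, which is where the sub-10 μs delay result comes from.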
76

Performance Modeling and Optimization Techniques in the Presence of Random Process Variations to Improve Parametric Yield of VLSI Circuits

BASU, SHUBHANKAR 28 August 2008 (has links)
No description available.
77

Scalable and Energy-Efficient SIMT Systems for Deep Learning and Data Center Microservices

Mahmoud Khairy A. Abdallah (12894191) 04 July 2022 (has links)
Moore's law is dead. The physical and economic principles that enabled an exponential rise in transistors per chip have reached their breaking point. As a result, the High-Performance Computing (HPC) domain and cloud data centers are encountering significant energy, cost, and environmental hurdles that have led them to embrace custom hardware/software solutions. Single Instruction Multiple Thread (SIMT) accelerators, like Graphics Processing Units (GPUs), are compelling solutions for achieving considerable energy efficiency while still preserving programmability in the twilight of Moore's law.

In the HPC and Deep Learning (DL) domain, the death of single-chip GPU performance scaling will usher in a renaissance of multi-chip Non-Uniform Memory Access (NUMA) scaling. Advances in silicon interposers and other inter-chip signaling technology will enable single-package systems, composed of multiple chiplets, that continue to scale even as per-chip transistors do not. Given this evolving, massively parallel NUMA landscape, the placement of data on each chiplet, or discrete GPU card, and the scheduling of the threads that use that data are critical factors in system performance and power consumption.

Aside from the supercomputer space, general-purpose compute units are still the main driver of data centers' total cost of ownership (TCO). CPUs consume 60% of the total data center power budget, half of which comes from the CPU pipeline's frontend. Coupled with the hardware efficiency crisis is an increased desire for programmer productivity, flexible scalability, and nimble software updates, which has led to the rise of software microservices. Consequently, single servers are now packed with many threads executing the same, relatively small task on different data.

In this dissertation, I discuss these new paradigm shifts, addressing the following concerns: (1) how do we overcome the non-uniform memory access overhead for next-generation multi-chiplet GPUs in the era of DL-driven workloads? (2) how can we improve the energy efficiency of data center CPUs in light of the evolution of microservices and request similarity? and (3) how can such rapidly evolving systems be studied with accurate and extensible SIMT performance modeling?
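Concern (1) can be made concrete with a back-of-the-envelope sketch — the setup and numbers below are hypothetical, not from the dissertation. Counting cross-chiplet accesses under two page-placement policies shows why placement and scheduling matter so much on a multi-chiplet GPU:

```python
# Hypothetical NUMA sketch: fraction of memory accesses that cross
# chiplet boundaries under affinity-aware vs. blind page placement.

N_CHIPLETS = 4
THREADS = range(8)

def remote_fraction(thread_home, page_home, accesses):
    """Fraction of (thread, page) accesses whose page lives on a
    different chiplet than the accessing thread."""
    remote = sum(1 for t, p in accesses if thread_home[t] != page_home[p])
    return remote / len(accesses)

thread_home = {t: t % N_CHIPLETS for t in THREADS}
accesses = [(t, t) for t in THREADS for _ in range(100)]  # thread t reads page t

affinity = {p: p % N_CHIPLETS for p in THREADS}        # page placed with its thread
striped  = {p: (p + 1) % N_CHIPLETS for p in THREADS}  # placement blind to users

print(remote_fraction(thread_home, affinity, accesses))  # 0.0
print(remote_fraction(thread_home, striped, accesses))   # 1.0
```

With perfect thread-data locality every access stays on-chiplet; with placement that ignores the access pattern, every access pays the inter-chiplet latency and energy cost, which is the gap the dissertation's placement and scheduling techniques target.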
78

Latency Tradeoffs in Distributed Storage Access

Ray, Madhurima January 2019 (has links)
The performance of storage systems is central to handling the huge amount of data being generated from a variety of sources, including scientific experiments, social media, crowdsourcing, and an increasing variety of cyber-physical systems. Emerging high-speed storage technologies enable the ingestion of, and access to, such large volumes of data efficiently. However, the combination of the high data-volume requirements of new applications, which largely generate unstructured and semi-structured streams of data, with these emerging storage technologies poses a number of new challenges, including the low-latency handling of such data and ensuring that the network providing access to the data does not become the bottleneck. The traditional relational model is not well suited for efficiently storing and retrieving unstructured and semi-structured data. An alternative mechanism, popularly known as the Key-Value Store (KVS), has been investigated over the last decade to handle such data. A KVS only needs a 'key' to uniquely identify a data record, which may be of variable length and may or may not have further structure in the form of predefined fields. Most existing KVSs were designed for hard-disk-based storage (before SSDs gained popularity), where avoiding random accesses is crucial for good performance. Unfortunately, as modern solid-state drives become the norm in data center storage, the HDD-based KV structures result in high read, write, and space amplification, which is detrimental to both an SSD's performance and its endurance. Also note that regardless of how the storage systems are deployed, access to large amounts of storage by many nodes must necessarily go over the network. At the same time, emerging storage technologies such as flash, 3D XPoint, and phase-change memory (PCM), coupled with highly efficient access protocols such as NVMe, are capable of ingesting and reading data at rates that challenge even leading-edge networking technologies such as 100 Gb/s Ethernet. Moreover, some of the higher-end storage technologies (e.g., Intel Optane storage based on 3D XPoint technology, PCM, etc.) coupled with lean protocols like NVMe can provide storage access latencies in the 10-20 μs range, which means that the additional latency due to network congestion can become significant. The purpose of this thesis is to address some of the aforementioned issues. We propose a new hash-based and SSD-friendly key-value store architecture called FlashKey, which is especially designed for SSDs to provide low access latencies, low read and write amplification, and the ability to easily trade off latency for sequential access, for example in range queries. Through a detailed experimental evaluation of FlashKey against the two most popular KVSs, namely RocksDB and LevelDB, we demonstrate that even as an initial implementation it achieves substantially better write amplification, average latency, and tail latency at a similar or better space amplification. Next, we deal with network congestion by dynamically replicating data items that are heavily used. The tradeoff here is between latency and the replication or migration overhead. It is important to reverse the replication or migration as the congestion fades away, since our observations indicate that placing data and the applications that access it together in a consolidated fashion significantly reduces propagation delay and increases network energy-saving opportunities, which is needed because data center networks nowadays are equipped with high-speed, power-hungry infrastructure. Finally, we designed a tradeoff between network consolidation and congestion, trading latency to save power. During quiet hours, we consolidate the traffic onto fewer links and put the unused links into sleep modes to save power. As traffic increases, we reactively start to spread traffic out again to avoid congestion from the upcoming surge. There are numerous studies in the area of network energy management that use similar approaches; however, most of them manage energy at a coarser time granularity (e.g., 24 hours or beyond). In contrast, our mechanism tries to exploit all the small-to-medium gaps in traffic and invoke network energy management without causing a significant increase in latency. / Computer and Information Science
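The core idea of a hash-based, SSD-friendly KVS can be sketched in a few lines — illustrative only, since FlashKey's actual on-flash layout is more involved. Values are appended to a sequential log (the access pattern SSDs favor), and an in-memory hash index points at the latest version of each key:

```python
# Minimal sketch of a hash-indexed, append-only key-value store:
# sequential writes to a log, point lookups via an in-memory index.

class TinyLogKV:
    def __init__(self):
        self.log = bytearray()   # stands in for the on-flash log
        self.index = {}          # key -> (offset, length) of latest value

    def put(self, key: str, value: bytes):
        self.index[key] = (len(self.log), len(value))
        self.log += value        # append-only: no in-place updates

    def get(self, key: str) -> bytes:
        off, ln = self.index[key]
        return bytes(self.log[off:off + ln])

kv = TinyLogKV()
kv.put("room:101", b"booked")
kv.put("room:101", b"free")      # an update appends; the old value is garbage
print(kv.get("room:101"))        # b'free'
```

Updates never rewrite old data, so write amplification stays low until garbage collection reclaims stale entries; supporting cheap range queries on top of a hash index is exactly the latency trade-off the thesis explores.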
79

雲端運算服務導向架構電子發票加值平台XML-based訊息轉換器與資料中心之研究 / A study of the XML-based message converter and data center of a cloud-computing, service-oriented e-invoice value-added platform

曾世傑 Unknown Date (has links)
At the end of 2006, the Ministry of Finance (R.O.C.) completed an integrated e-invoice service platform that provides buyers and sellers across industries with a credible transaction-audit platform. On it, enterprises can use electronic invoices to obtain bank loans and complete financing. Because such a loan crosses enterprise and bank boundaries, the organizations involved need the same cash-flow and business-flow information during the process, but each requires it in a different format, so the process cannot run straight through. This study proposes a service-oriented-architecture e-invoice value-added platform based on cloud computing. Through this platform, enterprises can turn traditional invoice financing into online financing with e-invoices: an XML-based message converter transforms the enterprise-side e-invoice XML format into the XBRL format used for accounting and the FXML format used for bank-side cash-flow messages, while cloud computing services serve as the basis for data storage and presentation. The service-oriented architecture provides a suitable foundation for this cross-organizational cash-flow and business-flow activity. Because the platform's value-added services handle the cash-flow and business-flow information on invoices, security considerations rule out storing all the data in cloud computing resources. The data center in this study therefore uses a decentralized storage approach, keeping confidential data on the enterprise side to reduce enterprises' concerns about using the services, and employs a distributed data extract/store mechanism to access the data each service needs from different databases, so that enterprises can use the services with greater confidence and convenience.
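The converter's role can be sketched as follows — the element names and the target structure here are invented for illustration; real XBRL and FXML schemas are far richer. The idea is simply to parse the enterprise-side e-invoice XML and re-emit the fields a bank-side cash-flow message needs:

```python
# Hypothetical sketch of an XML-based message converter: parse a toy
# e-invoice document and emit a toy bank-side (FXML-like) message.

import xml.etree.ElementTree as ET

EINVOICE = """<Invoice>
  <Number>AB12345678</Number>
  <Seller>Acme Co.</Seller>
  <Amount currency="TWD">10500</Amount>
</Invoice>"""

def to_fxml_like(einvoice_xml: str) -> str:
    src = ET.fromstring(einvoice_xml)
    out = ET.Element("LoanRequest")  # hypothetical bank-side root element
    ET.SubElement(out, "InvoiceNo").text = src.findtext("Number")
    ET.SubElement(out, "Borrower").text = src.findtext("Seller")
    amt = src.find("Amount")
    ET.SubElement(out, "Amount", currency=amt.get("currency")).text = amt.text
    return ET.tostring(out, encoding="unicode")

msg = to_fxml_like(EINVOICE)
print(msg)
```

In the platform, one such mapping per target format (XBRL for accounting, FXML for the bank) is what lets a single e-invoice drive the whole cross-organizational loan process straight through.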
80

Exploration of NoSQL technologies for managing hotel reservations

Coulombel, Sylvain January 2014 (has links)
During this project, NoSQL technologies for hotel IT were evaluated. Among NoSQL technologies, the document database was determined to fit this use case best. Couchbase and MongoDB, the two main document stores, were evaluated, and their similarities and differences highlighted. This revealed that document-oriented features are more developed in MongoDB than in Couchbase, which has a direct impact on the reservation-search functionality. Couchbase, however, offers a better way to replicate data across two remote data centers. As one of the goals was to provide powerful search functionality, MongoDB was chosen as the database for this project. A proof of concept was developed; it enables searching reservations by property code, guest name, check-in date, and check-out date through a REST/JSON interface, and confirms that MongoDB can functionally support the storage of hotel reservations. Different experiments were then conducted on this system, measuring throughput and response time with hotel-specific reservation search queries and data sets; the results met our targets. We also performed a scalability test, using MongoDB's sharding functionality to distribute data across several machines (shards) with different strategies (shard keys) so as to provide configuration recommendations. Our main finding was that it is not always necessary to distribute the database. If sharding is needed, distributing the data according to the property code makes the database faster, because queries are sent directly to the right machine(s) in the cluster, avoiding scatter-gather queries. Finally, some search optimizations were proposed, in particular how an advanced search by names could be implemented with MongoDB. / This thesis is submitted in the framework of a double degree between Compiègne University of Technology (UTC) and Linköping University (LiU)
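The shard-key finding can be illustrated with a small sketch — a stand-in for MongoDB's routing, not its internals. With the property code as shard key, a query that carries the code is routed to a single shard, while one without it must be scattered to every shard and the results gathered afterwards:

```python
# Illustrative routing model: targeted vs. scatter-gather queries in a
# cluster sharded on the property code.

N_SHARDS = 3

def shard_of(property_code: str) -> int:
    """Deterministic stand-in for hashed sharding on the shard key."""
    return sum(property_code.encode()) % N_SHARDS

def shards_touched(query: dict) -> set:
    """Which shards a query must visit, given shard key 'property_code'."""
    code = query.get("property_code")
    if code is not None:
        return {shard_of(code)}          # targeted: one shard
    return set(range(N_SHARDS))          # scatter-gather: all shards

print(shards_touched({"property_code": "STO01", "guest": "Smith"}))
print(shards_touched({"check_in": "2014-06-01"}))
```

Since the proof of concept's searches all include the property code, every query stays targeted, which is why sharding on that field was the recommended configuration.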
