  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Towards more scalability and flexibility for distributed storage systems

Ruty, Guillaume, 15 February 2019
The exponentially growing demand for storage puts a huge stress on traditional distributed storage systems. While storage devices' performance has caught up with network devices in the last decade, their capacity does not grow as fast as the rate of data growth, especially with the rise of cloud big data applications. Furthermore, the performance balance between storage, network, and compute devices has shifted, and the assumptions that underpin most distributed storage systems no longer hold.
This dissertation explains how several aspects of such storage systems can be modified and rethought to make more efficient use of the resources at their disposal. It presents an original architecture that uses a distributed metadata layer to provide flexible and scalable object-level storage, then proposes a scheduling algorithm that lets a generic storage system handle concurrent client requests more fairly. Finally, it describes how to improve legacy filesystem-level caching for erasure-code-based distributed storage systems, before presenting a few other contributions made in the context of short research projects.
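The fairness idea behind such request scheduling can be illustrated with a toy round-robin scheduler that interleaves per-client queues (a hypothetical sketch only; the dissertation's actual algorithm is more elaborate):

```python
from collections import deque

def fair_schedule(requests_by_client):
    """Interleave per-client request queues round-robin so that no single
    client with a deep queue monopolizes the storage backend."""
    queues = [deque(reqs) for reqs in requests_by_client.values() if reqs]
    order = []
    while queues:
        survivors = []
        for q in queues:
            order.append(q.popleft())  # serve one request per client per round
            if q:
                survivors.append(q)
        queues = survivors
    return order

# A client issuing many requests no longer starves the others:
print(fair_schedule({"a": ["a1", "a2", "a3", "a4"], "b": ["b1"], "c": ["c1", "c2"]}))
# ['a1', 'b1', 'c1', 'a2', 'c2', 'a3', 'a4']
```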
32

Improving Resource Management in Virtualized Data Centers using Application Performance Models

Kundu, Sajib, 01 April 2013
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying service-level agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they are actually experiencing; on the other hand, administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment.
Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, to accurately model the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these modeling tools. Third, we presented an approach to optimal VM sizing by employing the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
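The model-driven VM sizing step can be sketched as follows, substituting a simple least-squares line for the ANN/SVM models the thesis actually builds (the profiling numbers and SLA value below are hypothetical):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~ a*x + b: a deliberately simple
    stand-in for the thesis's learned performance models."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def min_vm_size(xs, ys, sla_throughput, candidate_sizes):
    """Smallest candidate allocation whose predicted throughput meets the SLA."""
    a, b = fit_line(xs, ys)
    for size in sorted(candidate_sizes):
        if a * size + b >= sla_throughput:
            return size
    return None  # no candidate allocation meets the SLA

# Hypothetical profiling data: CPU shares vs. measured throughput (req/s).
cpu = [1, 2, 3, 4]
tput = [100, 190, 310, 400]
print(min_vm_size(cpu, tput, sla_throughput=300, candidate_sizes=cpu))  # 3
```

With an accurate model, the client is charged for 3 CPU shares rather than a conservatively over-provisioned 4 — the "pay for performance, not configured size" idea the abstract describes.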
33

Energy Agile Cluster Communication

Mustafa, Muhammad Zain, 18 March 2015
Computing researchers have long focused on improving energy-efficiency (the amount of computation per joule) under the implicit assumption that all energy is created equal. Energy, however, is not created equal: its cost and carbon footprint fluctuate over time due to a variety of factors, and these fluctuations are expected to intensify as renewable penetration increases. Thus in my work I introduce energy-agility, a design concept for a platform's ability to rapidly and efficiently adapt to such power fluctuations. I then introduce a representative application to assess energy-agility for the type of long-running, parallel, data-intensive tasks that are both common in data centers and most amenable to delays from variations in available power. Multiple variants of the application are implemented to illustrate the fundamental tradeoffs in designing energy-agile parallel applications. I find that with inactive power state transition latencies of up to 15 seconds, a design that regularly "blinks" servers outperforms one that minimizes transitions by only changing power states when power varies. While the latter approach has much lower transition overhead, it requires additional I/O, since servers are not always concurrently active. Unfortunately, I find that most server-class platforms today are not energy-agile: they have transition latencies beyond one minute, forcing them to minimize transitions and incur additional I/O.
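The transition-overhead side of the blinking-versus-tracking tradeoff can be sketched with a back-of-the-envelope comparison (illustrative numbers, not the thesis's experimental setup; the extra I/O cost of tracking is not modeled here):

```python
def blinking_transitions(n_servers, n_intervals):
    """Blinking: every server cycles between active and inactive once per
    interval, i.e. two power-state transitions per server per interval."""
    return 2 * n_servers * n_intervals

def tracking_transitions(power_trace, n_servers):
    """Power-tracking: resize the active set only when the available power
    (given as a fraction of full power) changes."""
    transitions, active = 0, 0
    for p in power_trace:
        target = round(p * n_servers)
        transitions += abs(target - active)
        active = target
    return transitions

# Hypothetical trace of available power over four intervals:
trace = [0.5, 0.5, 1.0, 0.25]
t_latency = 15  # seconds per transition, matching the abstract's figure
print(blinking_transitions(4, len(trace)) * t_latency)  # 480 s of transition overhead
print(tracking_transitions(trace, 4) * t_latency)       # 105 s: far cheaper, but needs extra I/O
```

This makes the abstract's point concrete: tracking pays far less transition overhead, which is exactly why platforms with minute-long latencies are forced into it despite the I/O penalty.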
34

Steady State Analysis of Load Balancing Algorithms in the Heavy Traffic Regime

January 2019
This dissertation studies load balancing algorithms for many-server systems (with N servers) and focuses on their steady-state performance in the heavy traffic regime. The framework of Stein's method and (iterative) state-space collapse (SSC) is used to analyze three load balancing systems: 1) load balancing in the Sub-Halfin-Whitt regime with exponential service time; 2) load balancing in the Beyond-Halfin-Whitt regime with exponential service time; 3) load balancing in the Sub-Halfin-Whitt regime with Coxian-2 service time. In the Sub-Halfin-Whitt regime, sufficient conditions are established such that any load balancing algorithm satisfying them has both asymptotically zero waiting time and zero waiting probability. Furthermore, the number of servers with more than one job is o(1); in other words, the system collapses to a one-dimensional space. The result is proven using Stein's method and state-space collapse, which are powerful mathematical tools for the steady-state analysis of load balancing algorithms. The second system is in an even "heavier" traffic regime, and an iterative refinement procedure is proposed to obtain the steady-state metrics. Again, asymptotically zero delay and waiting probability are established for a set of load balancing algorithms. Unlike the first system, this one collapses to a two-dimensional state space rather than a one-dimensional one. The third system is more challenging because of the "non-monotonicity" introduced by Coxian-2 service time, and an iterative state-space collapse is proposed to tackle this challenge. For each of the three systems, a set of load balancing algorithms is established under which the probability that an incoming job is routed to an idle server tends to one at steady state.
The set of load balancing algorithms includes join-the-shortest-queue (JSQ), idle-one-first (I1F), join-the-idle-queue (JIQ), and power-of-d-choices (Pod) with a carefully chosen d. / Dissertation/Thesis / Doctoral Dissertation, Electrical Engineering, 2019
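Two of the routing policies named above can be sketched as follows (a minimal illustration of the routing rules themselves; the dissertation's contribution is their steady-state heavy-traffic analysis, not this toy code):

```python
import random

def jsq(queues):
    """Join-the-shortest-queue: route to the least-loaded server."""
    return min(range(len(queues)), key=lambda i: queues[i])

def pod(queues, d, rng=random):
    """Power-of-d-choices: sample d servers uniformly at random and pick
    the least loaded of the sample; d = len(queues) reduces to JSQ."""
    sample = rng.sample(range(len(queues)), d)
    return min(sample, key=lambda i: queues[i])

queues = [2, 0, 1, 3]
print(jsq(queues))                  # 1: the idle server
print(pod(queues, d=len(queues)))   # also 1: with d = N, Pod coincides with JSQ
```

The "carefully chosen d" matters because with small d the sample may miss every idle server, while d = N recovers JSQ at the cost of inspecting all queues.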
35

Optical frequency comb generation using InP based quantum-dash/quantum-well single section mode-locked lasers

Panapakkam Venkatesan, Vivek, 05 December 2016
The increasing demand for high-capacity, low-cost, compact, and energy-efficient optical transceivers for data center interconnects requires new technological solutions.
In terms of transmitters, optical frequency combs generating a large number of phase-coherent optical carriers are attractive solutions for next-generation data center interconnects and, along with wavelength division multiplexing and advanced modulation formats, can demonstrate unprecedented transmission capacities. In the framework of the European project BIG PIPES (Broadband Integrated and Green Photonic Interconnects for High-Performance Computing and Enterprise Systems), this thesis investigates the generation of optical frequency combs using single-section mode-locked lasers based on InAs/InP quantum-dash and InGaAsP/InP quantum-well semiconductor nanostructures. These novel light sources, based on new active layer structures and cavity designs, are extensively analyzed to meet the requirements of the project. A comprehensive investigation of the amplitude and phase noise of these optical frequency comb sources is performed with advanced measurement techniques to evaluate the feasibility of their use in high data rate transmission systems. Record multi-terabit per second per chip capacities and reasonably low energy-per-bit consumption are readily demonstrated, making these sources well suited for next-generation data center interconnects.
36

Navigation in the sustainability landscape: Challenges and opportunities for sustainability in the data center sector

Saltin, Mattias; Olsson, Julia; Nilsson, Anna, January 2024
This thesis explores the sustainability challenges and opportunities within the data center sector, with a specific focus on Atea, a leading player in the digital infrastructure industry. As organizations increasingly depend on IT systems for operational efficiency, the environmental impact of data centers has come under scrutiny. With the increasing demands for sustainability driven by both market forces and stringent EU regulations, this study examines how organizations, particularly within the data center industry, can adapt to meet these emerging requirements. This study aims to contribute an understanding of strategies and measures that can be implemented to achieve more sustainable data center operations. Our research employed a qualitative methodology, incorporating both a literature review and a case study to gather insights into sustainable practices in the data center industry. Atea's role as a case study provided a practical perspective on the implementation of sustainability strategies within the sector. The findings highlight significant challenges, including energy consumption, waste management, and regulatory compliance. Conversely, the study also identifies opportunities such as the adoption of green technologies, improved energy efficiency, and strategic waste reduction practices. The implications of this research are twofold: it offers a roadmap for data centers aiming to enhance their sustainability profile, and it contributes to the broader discourse on the environmental responsibilities of the IT sector. Future research should investigate the real-world impact of new EU regulations in the data center sector, focusing on how national legislation aligns with EU requirements and their practical application by data centers.
37

Extending the Cutting Stock Problem for Consolidating Services with Stochastic Workloads

Hähnel, Markus; Martinovic, John; Scheithauer, Guntram; Fischer, Andreas; Schill, Alexander; Dargie, Waltenegus, 16 May 2023
Data centres and similar server clusters consume a large amount of energy. However, not all consumed energy produces useful work. Servers consume a disproportionate amount of energy when they are idle, underutilised, or overloaded. The effect of these conditions can be minimised by attempting to balance the demand for and the supply of resources through a careful prediction of future workloads and their efficient consolidation. In this paper we extend the cutting stock problem for consolidating workloads with stochastic characteristics. Hence, we employ the aggregate probability density function of co-located and simultaneously executing services to establish valid patterns. A valid pattern is one yielding an overall resource utilisation below a set threshold. We tested the scope and usefulness of our approach on a 16-core server with 29 different benchmarks. The workloads of these benchmarks were generated based on the CPU utilisation traces of 100 real-world virtual machines, which we obtained from a Google data centre hosting more than 32000 virtual machines. Altogether, we considered 600 different consolidation scenarios during our experiment. We compared the performance of our approach (system overload probability, job completion time, and energy consumption) with four existing and proposed scheduling strategies. In each category, our approach incurred a modest penalty with respect to the best performing approach in that category, but overall resulted in a remarkable performance, clearly demonstrating its capacity to achieve the best trade-off between resource consumption and performance.
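The valid-pattern test at the core of this formulation can be sketched with a Monte Carlo check on the aggregate utilisation of co-located services (a simplified illustration: the paper works with the aggregate probability density function directly, and the distributions below are hypothetical):

```python
import random

def overload_probability(services, threshold, n_samples=10000, seed=42):
    """Monte Carlo estimate of the probability that the aggregate
    utilisation of co-located services exceeds `threshold`.
    Each service is a function mapping an RNG to a utilisation sample."""
    rng = random.Random(seed)
    over = sum(1 for _ in range(n_samples)
               if sum(s(rng) for s in services) > threshold)
    return over / n_samples

def is_valid_pattern(services, threshold, epsilon=0.05, **kw):
    """A pattern is valid if the overload probability stays below epsilon."""
    return overload_probability(services, threshold, **kw) <= epsilon

# A service whose CPU utilisation is uniform on [0.1, 0.4]:
svc = lambda rng: rng.uniform(0.1, 0.4)
print(is_valid_pattern([svc, svc], threshold=0.9))  # True: the sum never exceeds 0.8
```

Patterns accepted by such a check can then feed a cutting-stock style optimisation that packs services onto as few servers as the valid patterns allow.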
38

ICT Design Unsustainability & the Path toward Environmentally Sustainable Technologies

Bibri, Mohamed, January 2009
This study endeavors to investigate the negative environmental impacts of the prevailing ICT design approaches and to explore some potential remedies for ICT design unsustainability from environmental and corporate sustainability perspectives. More specifically, it aims to spotlight key environmental issues related to ICT design, including resource depletion; GHG emissions resulting from energy-intensive consumption; toxic waste disposal; and hazardous chemical use; and also to shed light on how alternative design solutions can be devised based on environmental sustainability principles to achieve the goals of sustainable technologies. The study highlights the relationship between ICT design and sustainability and how they can symbiotically affect one another. To achieve the aim of this study, an examination was performed through an extensive literature review covering empirical, theoretical, and critical scholarship. The study draws on a variety of sources to survey the negative environmental impacts of the current mainstream ICT design approach and review the potential remedies for unsustainability of ICT design. For theory, central themes were selected for review given the synergy and integration between them with respect to the topic under investigation. They include: design issues; design science; design research framework for ICT; sustainability; corporate sustainability; and design and sustainability. Findings highlight the unsustainability of the current mainstream ICT design approach. Key environmental issues for consideration include: resource depletion through extracting huge amounts of material and scarce elements; energy-intensive consumption and GHG emissions, especially from the ICT use phase; toxic waste disposal; and hazardous substance use.
Potential remedies for ICT design unsustainability include dematerialization as an effective strategy to minimize resource depletion; de-carbonization to cut energy consumption through using energy efficiently over the life cycle and using renewable energy; recyclability through design with life cycle thinking (LCT) and extending ICT equipment's operational life through reuse; and mitigating hazardous chemicals through green design (low-noxious or less hazardous products). As to solving the data center dilemma, design solutions vary from hardware and software to technological improvements and adjustments. Furthermore, corporate sustainability can be a strategic model for the ICT sector to respond to environmental issues, including those associated with unsustainable ICT design. In the same vein, through adopting corporate sustainability, ICT-enabled organizations can rationalize energy usage to reduce GHG emissions, thereby alleviating global warming. This study provides a novel approach to sustainable ICT design, highlighting the unsustainability of its current mainstream practices. The review of the literature advances on extant reviews by highlighting the symbiotic relationship between ICT design and environmental sustainability from both research and practice perspectives. This study adds to the body of knowledge and previous endeavours in research of ICT and sustainability. Overall, it endeavours to present contributions and avenues for further theoretical and empirical research and development.
39

The Effect of the Bypass Factor on Design and Geometry of the Evaporator for the Cooling Unit

Vytasil, Michal, January 2016
This diploma thesis focuses on the effect of the bypass factor on the design and geometry of the evaporator for a data centre cooling unit. The effect of the bypass factor on each design parameter is analysed in detail. All dependencies are captured in graphs, each accompanied by a commentary on the parameter in question. Part C demonstrates the mathematical and physical calculations and the process leading to the design of the heat exchanger. Finally, the calculations are evaluated and possible improvements for practice are presented.
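The role of the bypass factor can be illustrated with the standard coil relation BF = (T_out − T_adp) / (T_in − T_adp), where T_adp is the apparatus dew point (the temperatures below are hypothetical, not taken from the thesis):

```python
def leaving_air_temp(t_in, t_adp, bypass_factor):
    """Dry-bulb temperature of air leaving the cooling coil, from the
    standard bypass factor relation BF = (T_out - T_adp) / (T_in - T_adp):
    the bypassed fraction of the airstream leaves the coil untreated."""
    return t_adp + bypass_factor * (t_in - t_adp)

# 30 degC return air, 10 degC apparatus dew point:
for bf in (0.1, 0.2, 0.3):
    print(bf, round(leaving_air_temp(30.0, 10.0, bf), 1))
# a larger bypass factor leaves the air warmer (12.0, 14.0, 16.0 degC here)
```

This is why the bypass factor drives evaporator geometry: a lower BF (warranting more coil rows or a denser fin pitch) is needed to reach a given supply-air temperature.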
