About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

TCP Adaptation Framework in Data Centers

Ghobadi, Monia 09 January 2014 (has links)
Congestion control has been extensively studied for many years. Today, the Transmission Control Protocol (TCP) is used in a wide range of networks (LAN, WAN, data center, campus network, enterprise network, etc.) as the de facto congestion control mechanism. Despite its common usage, TCP operates in these networks with little knowledge of the underlying network or traffic characteristics. As a result, it is doomed to continuously increase or decrease its congestion window size in order to handle changes in the network or traffic conditions. Thus, TCP frequently overshoots or undershoots the ideal rate, making it a "Jack of all trades, master of none" congestion control protocol. In light of the emerging popularity of centrally controlled Software-Defined Networks (SDNs), we ask whether we can take advantage of the information available at the central controller to improve TCP. Specifically, in this thesis, we examine the design and implementation of OpenTCP, a dynamic and programmable TCP adaptation framework for SDN-enabled data centers. OpenTCP gathers global information about the status of the network and traffic conditions through the SDN controller, and uses this information to adapt TCP. OpenTCP periodically sends updates to end-hosts, which, in turn, update their behaviour using a simple kernel module. In this thesis, we first present two real-world TCP adaptation experiments in depth: (1) using TCP pacing in inter-data center communications with shallow buffers, and (2) using Trickle to rate limit TCP video streaming. We explain the design, implementation, limitations, and benefits of each TCP adaptation to highlight the potential power of having a TCP adaptation framework in today's networks. We then discuss the architectural design of OpenTCP, as well as its implementation and deployment at SciNet, Canada's largest supercomputer center. Furthermore, we study use cases of OpenTCP using the ns-2 network simulator. We conclude that OpenTCP-based congestion control simplifies the process of adapting TCP to network conditions, leads to improvements in TCP performance, and is practical in real-world settings.
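The adaptation cycle the abstract describes can be pictured as a control loop: the SDN controller aggregates network and traffic statistics, derives a TCP tuning directive, and pushes it to end-hosts, which apply it through a kernel module. Below is a minimal Python sketch of that loop under assumed conditions; the statistic fields, the thresholds, and the collect_switch_stats/push_to_host helpers are hypothetical illustrations, not OpenTCP's actual interfaces.

```python
# Minimal sketch of an OpenTCP-style adaptation loop. The statistics,
# thresholds, and helper callables are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class NetworkSnapshot:
    avg_link_utilization: float   # 0.0 .. 1.0 across monitored links
    max_queue_occupancy: float    # 0.0 .. 1.0 of the shallowest buffer
    active_flows: int

def choose_adaptation(snap: NetworkSnapshot) -> dict:
    """Map a global view of the network to a TCP tuning directive."""
    if snap.max_queue_occupancy > 0.8:
        # Shallow buffers near overflow: enable pacing, keep windows small.
        return {"pacing": True, "init_cwnd": 4}
    if snap.avg_link_utilization < 0.3 and snap.active_flows < 100:
        # Lightly loaded fabric: let flows ramp up more aggressively.
        return {"pacing": False, "init_cwnd": 16}
    return {"pacing": False, "init_cwnd": 10}

def adaptation_round(collect_switch_stats, push_to_host, hosts):
    """One period: gather stats via the controller, push updates to hosts."""
    directive = choose_adaptation(collect_switch_stats())
    for host in hosts:
        push_to_host(host, directive)   # assumed RPC consumed by the kernel module
```

In this sketch the heavy lifting (querying switches, talking to the kernel module) is delegated to injected callables, mirroring the split the abstract emphasizes between the central controller and a simple per-host agent.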
2

Improving High Performance Networking Technologies for Data Center Clusters

Grant, Ryan 25 September 2012 (has links)
This dissertation demonstrates new methods for increasing the performance and scalability of high performance networking technologies for use in clustered computing systems, concentrating on Ethernet/High-Speed networking convergence. The motivation behind the improvement of high performance networking technologies and their importance to the viability of modern data centers is discussed first. It then introduces the concepts of high performance networking in a commercial data center context as well as high performance computing (HPC) and describes some of the most important challenges facing such networks in the future. It reviews current relevant literature and discusses problems that are not yet solved. Through a study of existing high performance networks, the most promising features for future networks are identified. Sockets Direct Protocol (SDP) is shown to have unexpected performance issues for commercial applications, due to inefficiencies in handling large numbers of simultaneous connections. The first SDP over eXtended Reliable Connections implementation is developed to reduce connection management overhead, demonstrating that performance issues are related to protocol overhead at the SDP level. Datagram offloading for IP over InfiniBand (IPoIB) is found to work well. In the first work of its kind, hybrid high-speed/Ethernet networks are shown to resolve the issues of SDP underperformance and demonstrate the potential of combining local area high-speed Remote Direct Memory Access (RDMA) technologies with Ethernet wide area networking for data centers. Given the promising results from these studies, a set of solutions to enhance performance at the local and wide area network level for Ethernet is introduced, providing a scalable, connectionless, socket-compatible, fully RDMA-capable networking technology, datagram-iWARP. A novel method of performing RDMA Write operations (called RDMA Write-Record) and RDMA Read over unreliable datagrams over Ethernet is designed, implemented, and tested. Its applicability is demonstrated in scientific and commercial application spaces, and the approach also applies to other verbs-based networking interfaces such as InfiniBand. The newly proposed RDMA methods, both for send/recv and RDMA Write-Record, are supplemented with interfaces for both socket-based applications and Message Passing Interface (MPI) applications. An MPI implementation is adapted to support datagram-iWARP. Both scalability and performance improvements are demonstrated for HPC and commercial applications. / Thesis (Ph.D., Electrical & Computer Engineering) -- Queen's University, 2012-09-25 09:43:55.262
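RDMA Write-Record, as summarized above, delivers data over unreliable datagrams while letting the receiver record which writes actually arrived. The Python sketch below is a loose socket-level analogy of that idea under stated assumptions, not the verbs-level datagram-iWARP implementation: the UDP transport, the offset/length header, and the completion list are invented for illustration.

```python
# Conceptual analogy of an RDMA-Write-Record-style exchange over UDP.
# Each datagram carries the target buffer offset; the receiver places the
# payload directly into its buffer and records (offset, length) so the
# application knows which writes survived the unreliable transport.
import socket
import struct

HDR = struct.Struct("!QI")   # assumed header: 64-bit offset, 32-bit length

def send_write_record(sock, addr, offset, payload):
    sock.sendto(HDR.pack(offset, len(payload)) + payload, addr)

def serve_one(sock, target_buffer, completions):
    data, _ = sock.recvfrom(65535)
    offset, length = HDR.unpack_from(data)
    payload = data[HDR.size:HDR.size + length]
    target_buffer[offset:offset + len(payload)] = payload   # direct placement
    completions.append((offset, len(payload)))              # completion record

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    buf, done = bytearray(4096), []
    send_write_record(tx, rx.getsockname(), 128, b"hello")
    serve_one(rx, buf, done)
    print(done)   # [(128, 5)]: the receiver knows exactly what arrived
```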
3

A framework of productivity and energy-efficiency metrics for data centers

Goldhar, Marcos Porto 31 January 2009 (has links)
Born of deep concern over the economic and environmental impacts of operating information technology services, Green IT is based on initiatives that seek to reduce the harmful effects of the sector while preserving, or even increasing, its benefits. Managers in this area, in turn, need instruments that give them an objective view of where and how their operations can be optimized, above all in data center facilities, widely regarded as the main culprits in energy consumption. Given that one cannot manage what one does not measure, metrics emerge as fundamental elements for providing this view. In this context, this dissertation proposes the use of a framework of productivity and energy-efficiency metrics, composed both of established metrics developed by respected bodies such as The Green Grid and the Uptime Institute and of indicators developed in this study. The requirements and implications of computing the metrics are addressed through a practical case study carried out in an organization whose business is aligned with environmental sustainability. To better illustrate how systematic use of this framework can give managers an integrated perspective on their data centers, identifying where investments will be most effective, the dissertation also presents a prototype of a management tool based on the framework, demonstrating how the results can be interpreted to improve management activities.
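Two of the established Green Grid indicators such a framework builds on are PUE (Power Usage Effectiveness, total facility energy divided by IT equipment energy) and its reciprocal DCiE. A minimal Python sketch of how such indicators can be computed from measured energy figures is given below; the sample readings are invented for illustration.

```python
# PUE  = total facility energy / IT equipment energy      (ideal value: 1.0)
# DCiE = IT equipment energy / total facility energy, as a percentage
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

def dcie(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return 100.0 * it_equipment_kwh / total_facility_kwh

# Hypothetical monthly readings (kWh), used only to illustrate the calculation.
total_kwh, it_kwh = 180_000.0, 100_000.0
print(f"PUE  = {pue(total_kwh, it_kwh):.2f}")    # 1.80: 0.8 kWh of overhead per IT kWh
print(f"DCiE = {dcie(total_kwh, it_kwh):.1f}%")  # 55.6% of facility energy reaches IT gear
```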
4

Project 02: Media, Power, and Ecology at the Google Data Center in The Dalles, Oregon

Diller, Adam January 2021 (has links)
The climate emergency and the accompanying ecological and cultural crises challenge existing modes of critique in the humanities. Described by terms such as Anthropocene, Capitalocene, and Chthulucene, these crises are distributed across large scales of space and time and confound simple notions of causality, requiring new paradigms for research in the humanities. Recent work in media studies engages these concerns by examining the ecologies of media and media infrastructures. The Internet is arguably the most critical media infrastructure. Media studies augments cultural analyses of the Internet by focusing on the materialities of this Internet, foregrounding ways that information, infrastructures, cultures, and materialities are—and always have been—intimately entwined. Data centers—the massive server farms that store data, perform cloud computing, and host much of the Internet—are critical sites of the Internet’s computational power. Project 02: Media, Power, and Ecology at the Google Data Center in The Dalles, Oregon contributes to existing work on data centers by focusing on Google’s first hyperscale data center—named “Project 02” in early permitting documents. In media studies, Tung-Hui Hu’s A Prehistory of the Cloud, Jennifer Holt and Patrick Vonderau’s “Where the Internet Lives: Data Centers as Cloud Infrastructure,” and a series of articles by Mel Hogan address the imaginaries of data centers, the discourse around them, and the ecologies they produce. While this work on data centers is foundational, it relies on insights derived primarily from promotional materials and brief site visits. These methodologies address data centers as static objects rather than emergent processes, circumscribing scholars’ ability to address ongoing changes in an industry defined by rapid change and constant growth. Further, these studies often tacitly accept the data centers’ own definition of their spatio-temporal boundaries rather than challenging them. Because data centers are inherently relational, it is necessary to apply a similar network logic to defining the data center itself—as an assemblage of infrastructural relations rather than a self-contained object with discrete connections to the world. My research addresses these methodological and conceptual concerns through a longitudinal study of Google’s first hyperscale data center. Project 02 will be the first book-length study of a single data center, drawing on repeated site visits over four years, talks and publications by Google, and extensive research into government archives. The Introduction, Where the Internet Lives, contrasts this data center’s role in Google’s global network with phenomenological accounts of its ever-expanding security perimeter. Chapter 1, Secrecy, Sustainability, and Security, traces shifts in Google’s discourse, from secrecy to promoting sustainability, through a survey of publications, talks, websites, video tours, and maps. Chapter 2, Territorial, Temporal, and Material Processes in The Dalles, examines political and ecological implications of changes in network topologies and the implementation of artificial intelligence-focused Tensor Processing Units. Chapter 3, Rocks, Water, Salmon, Treaties, and Networks: Making Space for The Dalles data center, frames the data center on an expansive scale of time and space, situating it within the ongoing process of settler colonialism in the northwestern United States.
Chapter 4, The Bonneville Power Administration Film Archives: Ecologies of Infrastructural Media from 1939 to the Present, investigates the data center’s source of electrical power, the Bonneville Power Administration (BPA), through an archiveology of the films and photographs produced by the BPA from 1939 to the present. This survey of BPA films attends to a central infrastructure of settler colonialism, while highlighting Indigenous activists’ success in producing changes to the operation of BPA dams. The Conclusion, An Owner’s Manual, considers potentials for analogous infrastructural activism at the Google data center and foregrounds instabilities within the immense power embodied in Google’s corporate infrastructure. This book project informs—and is shaped by—a multimodal research practice that engages contemporary and archival media related to the data center and the infrastructures that support it. Project 02 interweaves videos produced by Google with photographs and films from the archives of the Army Corps of Engineers, Department of Interior, Bureau of Indian Affairs, and Bonneville Power Administration alongside my own video, audio, and photography of the data center and a series of related sites. This media-centric methodology builds meaning from a dual conception of media ecologies as both relations among media objects and as ecologies produced by—and alongside of—these media objects. This method of working through multiple flows of media attends to the expansive spatio-temporal scales of the data center’s ecological, political, and cultural entanglements. In parallel to the book, Project 02 has two multimodal realizations. The media exhibition frames the data center in relation to the last 150 years along a fifteen-mile stretch of the Columbia River. Media produced by Google is interwoven with materials from government archives and my own video and audio of the data center, The Dalles Dam, active Indigenous fishing sites, and the Columbia River. The exhibition immerses the audience in a disparate body of media emerging from, and around, the Google data center, leading the audience to consider long-term implications of the Internet’s central role in our culture. The second multimodal project is a feature-length documentary film which leverages a haptic approach to the data center, cinematic engagements with surrounding environments, and affective encounters with archival media to produce a narrative that spirals outward from the data center to attend to the infrastructural relationships that support it. The book, film, and media exhibition contribute to a broader reckoning with the substantial power accumulated by Google. My research grounds concerns over Google’s monopolization of critical infrastructure in an environmental history of Google’s oldest hyperscale data center. At the hydroelectric dams that power this data center, the New Deal era dream of “power for the people” has evolved into a complexly negotiated system incorporating salmon ecologies, Indigenous land and water rights, and emerging challenges of climate change into the management of an aging network of dams. Project 02 considers analogous potentials for Google’s technological and engineering contributions to be rethought and reconfigured, informed by logics and concerns beyond their original intent. These potential reconfigurations are critical to reimagining the Internet, democratizing the production of knowledge, and bolstering our ability to navigate the ongoing crises of the Anthropocene. 
/ Media Studies & Production
5

Delay Modeling In Data Center Networks: A Taxonomy and Performance Analysis

Alshahrani, Reem Abdullah 06 August 2013 (has links)
No description available.
6

A model for migrating data center servers to the cloud

Loo Cuya, Fabiola Magaly, Rojas Solorzano, Christian Gianfranco 11 1900 (has links)
The main objective of this project is to implement a model for migrating the servers of an SME data center to the cloud, based on an analysis of good practices and cloud platform technologies, since not everything has to be migrated to the cloud; that depends on the needs of the business. The proposal draws on the good practices published by providers, on the literature, and on the TOGAF and CCRA v4 frameworks. The project consists of three parts: input, solution, and output. The input comprises the information-gathering stage and the business principles, goals, and drivers that describe the business, its assets, and its IT architecture, together with the requirements for the migration. Based on this information, both the viability of the migration and the candidate cloud platform providers are evaluated to determine whether the migration is worthwhile and which provider to use. The solution components include the cloud repository, the platform's importer service, the command scripts that invoke the services, the instances deployed in the cloud environment, and the administration and monitoring tool for those instances. Finally, the output is the set of servers running in the cloud, correctly configured to avoid connection or other problems depending on the services they execute, together with an implementation document detailing the steps taken and the final configuration. / Thesis
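The input-evaluate-select flow described in the abstract can be pictured with a small decision sketch like the one below. It is only an illustrative Python sketch: the criteria, provider names, and scores are hypothetical placeholders rather than the evaluation actually defined in the model.

```python
# Hypothetical sketch of the flow: gather server facts (input), decide
# whether each server should migrate, then pick the best-scoring provider.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    licenses_portable: bool     # can its software licenses move to the cloud?
    latency_sensitive: bool     # must it stay close to on-premises users?

def should_migrate(s: Server) -> bool:
    """Not everything moves to the cloud; it depends on the business need."""
    return s.licenses_portable and not s.latency_sensitive

def pick_provider(scores: dict) -> str:
    """Choose the provider with the best weighted evaluation score."""
    return max(scores, key=scores.get)

servers = [Server("erp-db", False, True), Server("web-01", True, False)]
to_migrate = [s.name for s in servers if should_migrate(s)]
provider = pick_provider({"provider_a": 7.8, "provider_b": 8.4})
print(to_migrate, provider)   # ['web-01'] provider_b
```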
7

Multifaceted resource management on virtualized providers

Goiri, Íñigo 14 June 2011 (has links)
Over the last decade, providers started using Virtual Machines (VMs) in their datacenters to pack users and their applications. This was a good way to consolidate multiple users in fewer physical nodes while isolating them from each other. Later on, in 2006, Amazon started offering its Infrastructure as a Service, where users rent computing resources as VMs in a pay-as-you-go manner. However, virtualized providers cannot be managed like traditional ones as they are now confronted with a set of new challenges. First of all, providers must deal efficiently with new management operations such as the dynamic creation of VMs. These operations enable new capabilities that were not there before, such as moving VMs across the nodes, or the ability to checkpoint VMs. We propose a decentralized virtualization management infrastructure to create VMs on demand, migrate them between nodes, and checkpoint them. With the introduction of this infrastructure, virtualized providers become decentralized and are able to scale. Secondly, these providers consolidate multiple VMs in a single machine to more efficiently utilize resources. Nevertheless, this is not straightforward and implies the use of more complex resource management techniques. In addition, this requires that both customers and providers can be confident that signed Service Level Agreements (SLAs) are supporting their respective business activities to their best extent. Providers typically offer very simple metrics that hinder an efficient exploitation of their resources. To solve this, we propose mechanisms to dynamically distribute resources among VMs and a resource-level metric, which together allow increasing provider utilization while maintaining Quality of Service. Thirdly, the provider must allocate the VMs by evaluating multiple facets such as power consumption and customers' requirements. In addition, it must exploit the new capabilities introduced by virtualization and manage its overhead. Ultimately, this VM placement must minimize the costs associated with the execution of a VM in a provider to maximize the provider's profit. We propose a new scheduling policy that places VMs on provider nodes according to multiple facets and is able to understand and manage the overheads of dealing with virtualization. Fourthly, resource provisioning in these providers is a challenge because of the high load variability over time. Providers can serve most of the requests while owning only a restricted amount of resources, but this under-provisioning may cause customers to be rejected during peak hours. In the opposite situation, valley hours incur under-utilization of the resources. As this new paradigm makes the access to resources easier, providers can share resources to serve their loads. We leverage a federated scenario where multiple providers share their resources to overcome this load variability. We exploit the federation capabilities to create policies that take the most convenient decision depending on the environment conditions and tackle the load variability. All these challenges mean that providers must manage their virtualized resources in a different way than they have done traditionally. This dissertation identifies and studies the challenges faced by a virtualized provider that offers IaaS, and designs and evaluates a solution to manage the provider's resources in the most cost-effective way by exploiting the virtualization capabilities.
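The scheduling policy summarized above scores candidate nodes along several facets (for example energy cost, virtualization overheads such as powering on a node or creating a VM, and the risk of degrading Quality of Service) and places each VM where the estimated cost is lowest. The Python sketch below conveys that style of cost-based placement; the facets, weights, and figures are assumptions made for the example, not the thesis's actual model.

```python
# Illustrative multi-facet cost for placing a VM on a provider node.
# Facet weights and prices below are assumed values for the example.
def placement_cost(node: dict, vm: dict) -> float:
    if node["free_cpus"] < vm["cpus"] or node["free_gb"] < vm["mem_gb"]:
        return float("inf")                                    # infeasible placement
    energy = node["watts_per_cpu"] * vm["cpus"] * 0.15 / 1000  # assumed tariff, cost/hour
    overhead = 0.02 if node["powered_on"] else 0.10            # boot / VM-creation overhead
    sla_risk = 0.05 * node["running_vms"]                      # crude contention penalty
    return energy + overhead + sla_risk

def place(nodes: list, vm: dict) -> str:
    best = min(nodes, key=lambda n: placement_cost(n, vm))
    if placement_cost(best, vm) == float("inf"):
        raise RuntimeError("no feasible node for this VM")
    best["free_cpus"] -= vm["cpus"]        # record the placement on the node
    best["free_gb"] -= vm["mem_gb"]
    best["running_vms"] += 1
    return best["name"]

nodes = [
    {"name": "n1", "free_cpus": 8, "free_gb": 32, "watts_per_cpu": 30,
     "powered_on": True, "running_vms": 5},
    {"name": "n2", "free_cpus": 16, "free_gb": 64, "watts_per_cpu": 25,
     "powered_on": False, "running_vms": 0},
]
print(place(nodes, {"cpus": 4, "mem_gb": 16}))   # n2: cheaper overall despite boot overhead
```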
8

Cost-Based Automatic Recovery Policy in Data Centers

Luo, Yi 19 May 2011 (has links)
Today's data centers either provide critical applications to organizations or host computing clouds used by huge Internet populations. Their size and complex structure make management difficult, causing high operational cost. The large number of servers with varied hardware and software components causes frequent failures and demands continuous recovery work. Much of the operational cost is from this recovery work. While there is significant research related to automatic recovery, from automatic error detection to different automatic recovery techniques, there is currently no automatic solution that can determine the exact fault, and hence the preferred recovery action. There has been some study of how to automatically select a suitable recovery action without knowing the fault behind the error. In this thesis we propose an estimated-total-cost model based on an analysis of the cost and the success probability of each recovery action. Our recovery-action selection is based on the minimal estimated total cost; we implement three policies to use this model under different considerations of failed recovery attempts. The preferred policy reduces a recovery action's success probability when it fails to fix the error; we also study different reduction coefficients in this policy. To evaluate the various policies, we design and implement a simulation environment. Our simulation experiments demonstrate significant cost improvement over previous research based on simple heuristic models.
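The selection rule in this thesis picks, for each error, the recovery action with the smallest estimated total cost, derived from the action's own cost and its probability of fixing the error, and the preferred policy lowers that probability each time the action fails. A minimal Python sketch of this kind of selection follows; the action list, costs, probabilities, escalation cost, and reduction coefficient are illustrative assumptions, and the expected-cost formula is one simple way to combine them rather than the exact model in the thesis.

```python
# Assumed model: estimated total cost = action cost
#   + (1 - success probability) * cost of escalating to a human operator.
# After a failed attempt, the action's success probability is multiplied by a
# reduction coefficient, as in the preferred policy described above.
ESCALATION_COST = 120.0      # assumed cost of manual recovery
REDUCTION = 0.5              # assumed reduction coefficient

actions = {                  # assumed costs and initial success probabilities
    "restart_service": {"cost": 1.0,  "p": 0.60},
    "reboot_server":   {"cost": 10.0, "p": 0.85},
    "reimage_server":  {"cost": 60.0, "p": 0.99},
}

def estimated_total_cost(a: dict) -> float:
    return a["cost"] + (1.0 - a["p"]) * ESCALATION_COST

def next_action() -> str:
    return min(actions, key=lambda name: estimated_total_cost(actions[name]))

def report_failure(name: str) -> None:
    """A failed attempt makes the same action look less promising next time."""
    actions[name]["p"] *= REDUCTION

first = next_action()          # 'reboot_server' under these numbers
report_failure(first)          # its success probability drops to ~0.43
print(first, next_action())    # the policy now prefers 'restart_service'
```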
9

Low-cost Data Analytics for Shared Storage and Network Infrastructures

Mihailescu, Madalin 09 August 2013 (has links)
Data analytics used to depend on specialized, high-end software and hardware platforms. Recent years, however, have brought forth the data-flow programming model, i.e., MapReduce, and with it a flurry of sturdy, scalable open-source software solutions for analyzing data. In essence, the commoditization of software frameworks for data analytics is well underway. Yet, up to this point, data analytics frameworks are still regarded as standalone, dedicated components; deploying these frameworks requires companies to purchase hardware to meet storage and network resource demands, and system administrators to handle management of data across multiple storage systems. This dissertation explores the low-cost integration of frameworks for data analytics within existing, shared infrastructures. The thesis centers on smart software being the key enabler for holistic commoditization of data analytics. We focus on two instances of smart software that aid in realizing the low-cost integration objective. For an efficient storage integration, we build MixApart, a scalable data analytics framework that removes the dependency on dedicated storage for analytics; with MixApart, a single, consolidated storage back-end manages data and services all types of workloads, thereby lowering hardware costs and simplifying data management. We evaluate MixApart at scale with micro-benchmarks and production workload traces, and show that MixApart provides faster or comparable performance to an analytics framework with dedicated storage. For an effective sharing of the networking infrastructure, we implement OX, a virtual machine management framework that allows latency-sensitive web applications to share the data center network with data analytics through intelligent VM placement; OX further protects all applications from hardware failures. The two solutions allow the reuse of existing storage and networking infrastructures when deploying analytics frameworks, and substantiate our thesis that smart software upgrades can enable the end-to-end commoditization of analytics.
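OX, as described above, shares the data center network between latency-sensitive web applications and analytics through intelligent VM placement while also guarding against hardware failures. The toy Python sketch below conveys one placement heuristic in that spirit, keeping analytics VMs off racks that host latency-sensitive web VMs and spreading an application's replicas across racks; the rack model and rules are assumptions for illustration, not OX's actual algorithm.

```python
# Toy network- and failure-aware placement: analytics VMs avoid racks whose
# uplinks carry latency-sensitive web traffic, and replicas of one app are
# spread across racks so a single hardware failure cannot take them all out.
def place_vm(racks: list, vm: dict) -> str:
    candidates = []
    for rack in racks:
        if len(rack["vms"]) >= rack["slots"]:
            continue                                   # rack is full
        kinds = {v["kind"] for v in rack["vms"]}
        if vm["kind"] == "analytics" and "web" in kinds:
            continue                                   # keep analytics off web racks
        if any(v["app"] == vm["app"] for v in rack["vms"]):
            continue                                   # spread replicas across racks
        candidates.append(rack)
    if not candidates:
        raise RuntimeError("no rack satisfies the placement constraints")
    target = min(candidates, key=lambda r: len(r["vms"]))   # least-loaded rack wins
    target["vms"].append(vm)
    return target["name"]

racks = [{"name": f"rack{i}", "slots": 4, "vms": []} for i in range(3)]
print(place_vm(racks, {"app": "shop", "kind": "web"}))          # rack0
print(place_vm(racks, {"app": "shop", "kind": "web"}))          # rack1 (replica spread)
print(place_vm(racks, {"app": "reports", "kind": "analytics"})) # rack2 (avoids web racks)
```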
