  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Performance Controlled Power Optimization for Virtualized Internet Datacenters

Wang, Yefu 01 August 2011 (has links)
Modern data centers must provide performance assurance for complex system software such as web applications. In addition, the power consumption of data centers needs to be minimized to reduce operating costs and avoid system overheating. In recent years, more and more data centers have adopted server virtualization for resource sharing, reducing hardware and operating costs by consolidating applications previously running on multiple physical servers onto a single physical server. In this dissertation, several power-efficient algorithms are proposed to effectively reduce server power consumption while achieving the required application-level performance for virtualized servers. First, at the server level, this dissertation proposes two control solutions based on dynamic voltage and frequency scaling (DVFS) and request batching. The two solutions share a performance balancing technique that keeps all virtual machines at approximately the same performance level relative to their allowed peak values. When the workload intensity is light, request batching is used: a controller determines the time length for periodically batching incoming requests and putting the processor into sleep mode. When the workload intensity changes from light to moderate, request batching automatically switches to DVFS, which raises the processor frequency to guarantee performance. Second, at the datacenter level, this dissertation proposes a performance-controlled power optimization solution for virtualized server clusters with multi-tier applications. The solution utilizes both DVFS and server consolidation for maximized power savings by integrating feedback control with optimization strategies. At the application level, a multi-input multi-output controller is designed to achieve the desired performance for applications spanning multiple VMs on a short time scale, by reallocating CPU resources and applying DVFS. At the cluster level, a power optimizer is proposed to incrementally consolidate VMs onto the most power-efficient servers on a longer time scale. Finally, this dissertation proposes a VM scheduling algorithm that exploits core performance heterogeneity to optimize overall system energy efficiency. The four algorithms at the three different levels are demonstrated with empirical results on hardware testbeds and trace-driven simulations, and compared against state-of-the-art baselines.
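The light-load/moderate-load mode switch described above can be sketched as follows. This is an illustrative sketch only: the utilization threshold, frequency levels, and batching-window formula are hypothetical, not taken from the dissertation.

```python
# Hypothetical sketch of switching between request batching (light load)
# and DVFS (moderate/heavy load). All constants are invented for
# illustration; the dissertation's controllers use feedback control.

FREQ_LEVELS = [0.6, 0.8, 1.0, 1.2, 1.6]  # GHz, hypothetical DVFS steps

def choose_power_mode(utilization, batching_threshold=0.3):
    """Pick request batching under light load, DVFS otherwise.

    Returns ("batching", batch_window_ms) or ("dvfs", frequency_ghz).
    """
    if utilization < batching_threshold:
        # Light load: batch incoming requests and sleep between batches.
        # Longer batch window at lower utilization, capped at 100 ms.
        batch_ms = min(100.0, 10.0 / max(utilization, 0.01))
        return ("batching", batch_ms)
    # Moderate/heavy load: scale processor frequency with utilization.
    idx = min(int(utilization * len(FREQ_LEVELS)), len(FREQ_LEVELS) - 1)
    return ("dvfs", FREQ_LEVELS[idx])
```

The key property the abstract describes is the automatic handover: as utilization crosses the threshold, the controller output changes from a batching window to a frequency setting, with no manual intervention.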
12

Torkning av biomassa med restvärme från datacenter : en förstudie / Drying of biomass with waste heat from data centers : a feasibility study

Sandström, Kristina January 2018 (has links)
The data center industry is growing, and IT equipment is now manufactured with higher power ratings, which in turn leads to higher heat flows in data centers during operation. For the IT equipment to operate safely, remain available, and be reliable, the generated heat must be removed and transported away from the equipment, usually by blowing air through it. Because this heat is removed, a data center is not only a source of IT services but also a source of waste heat in the form of low-temperature air. The low temperature makes it challenging to find uses for it. This thesis investigates the possibility of using this heat to dry wood chips for the production of energy char through pyrolysis. To estimate the amount of waste heat available, the data centers in northern Sweden were surveyed for their air temperatures, air flows, and installed and consumed IT power. The available amount of biomass was surveyed among biomass actors in northern Sweden, and a market survey of suitable drying equipment was carried out. To calculate how much wood chip can be dried at a given IT power and exhaust-air temperature, experiments were conducted in a test data center, measuring the air flows obtained at different exhaust-air temperatures and IT powers. From these flows and drying theory, the amount of moist biomass that can be dried, and the resulting amount of dried biomass, were calculated. The experiments were run at IT powers between 32 and 61 kW, which yielded drying powers between 26 and 58 kW. The drying powers are lower because of losses during the experiments and because the energy required to heat the water in the air was subtracted. The exhaust-air temperatures tested were 30, 35, 40, and 45 °C. In the experiments, the lower temperatures resulted in higher air flows because more cooling was required, and at these temperatures the biomass flows were also higher. From this it is concluded that air flow outweighs temperature when waste heat from data centers is used. The experiments yielded exhaust-air flows between 0.86 and 2.5 m³/s, a large part of which is dry air flow. These air flows gave a moist biomass flow of 0.28-0.70 loose m³/h (loose cubic meters per hour), corresponding to 0.23-0.58 loose m³/h of dried biomass.
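The reported relationship between exhaust-air flow, temperature, and drying power can be illustrated with a rough sensible-heat balance. This is a sketch under textbook assumptions, not the thesis's calculation: the 20 °C inlet temperature and the air property values are assumptions.

```python
# Rough energy-balance sketch of the sensible heat available for drying
# from data-center exhaust air. Air density and specific heat are
# textbook room-temperature values; the 20 °C ambient is an assumption.

AIR_DENSITY = 1.2   # kg/m^3, approximate for air near room temperature
AIR_CP = 1005.0     # J/(kg*K), specific heat of dry air

def available_drying_power_kw(flow_m3_s, exhaust_c, ambient_c=20.0):
    """Sensible heat carried by the exhaust air above ambient, in kW."""
    mass_flow = flow_m3_s * AIR_DENSITY                 # kg/s
    power_w = mass_flow * AIR_CP * (exhaust_c - ambient_c)
    return power_w / 1000.0
```

Under these assumptions the flow term dominates: 2.5 m³/s at 30 °C carries about 30 kW of sensible heat, more than 0.86 m³/s at 45 °C (about 26 kW), which is consistent with the thesis's conclusion that air flow outweighs temperature.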
13

Rediseño del modelo de negocios del Datacenter de Telefónica Empresas en función de prácticas ITIL / Redesign of the business model of the Telefónica Empresas Datacenter based on ITIL practices

Rodríguez Rodríguez, Pablo Andrés January 2007 (has links)
No description available.
14

Analýza konkurenčního prostředí serverhousingových společností v ČR / Analysis of the competitive environment of colocation companies in the Czech Republic

Šrámek, Jan January 2013 (has links)
This thesis deals with the application of Competitive Intelligence methods in the market environment of colocation datacenters in the Czech Republic. The main objective is to analyze the competitive environment of colocation datacenters in the Czech Republic. The theoretical part describes Competitive Intelligence and its analytical methods, as well as typical colocation services, including cloud computing. The following part discusses datacenters, their basic types, and their classification, and touches on the two related topics of outsourcing and Service Level Agreements. The practical part is divided into three areas. The first deals with the development of the world datacenter market and discusses trends in the industry. The second describes the market structure and analyzes the wider competitive environment of colocation companies in the Czech Republic using Porter's five forces analysis. The third is dedicated to a competitive analysis of six colocation companies operating in Prague, carried out using the competitive analysis method. The conclusion lists Competitive Intelligence resources relevant to the colocation companies analyzed in the final part of the thesis.
15

The Contemporary Uncanny: An Architecture for Digital Postmortem

Garrison, John 28 June 2021 (has links)
No description available.
16

Scaling RDMA RPCs with FLOCK

Monga, Sumit Kumar 30 November 2021 (has links)
RDMA-capable networks are gaining traction in datacenter deployments due to their high throughput, low latency, CPU efficiency, and advanced features such as remote memory operations. However, efficiently utilizing RDMA in the common setting of a high fan-in, fan-out asymmetric network topology is challenging. For instance, using RDMA programming features comes at the cost of connection scalability, which degrades with increasing cluster size. To address this, several works forgo some RDMA features by focusing only on conventional RPC APIs. In this work, we strive to exploit the full capability of RDMA while scaling the number of connections regardless of the cluster size. We present FLOCK, a communication framework for RDMA networks that uses hardware-provided reliable connections. Using a partially shared model, FLOCK departs from the conventional RDMA design by enabling connection sharing among threads, which provides significant performance improvements contrary to the widely held belief that connection sharing deteriorates performance. At its core, FLOCK uses a connection handle abstraction for connection multiplexing; a new coalescing-based synchronization approach for efficient network utilization; and a load-control mechanism with symbiotic send-recv scheduling, which reduces the synchronization overheads associated with connection sharing while ensuring fair utilization of network connections. / M.S. / The Internet is one of the great inventions of our time. It provides access to enormous knowledge sources and makes it easy to communicate seamlessly across the globe, among countless other advantages. Over the years, the latency of services like web search and file downloads has dropped sharply: a download that took minutes in the 2000s can complete within seconds today. Network speeds have been improving, facilitating a faster and smoother user experience. Another contributing factor is that service providers like Google and Amazon can now process user requests in a fraction of the time it used to take. Web services such as search and e-commerce are implemented using a multi-layer architecture, with each layer containing hundreds to thousands of servers. Each server runs one or more components of the web service application. In this architecture, user requests are received in the upper layer and processed by the lower layers. Servers in different layers communicate over an ultrafast network technology like Remote Direct Memory Access (RDMA). The implication of the multi-layer architecture is that a server has to communicate with many other servers in the upper and lower layers. Unfortunately, due to its inherent limitations, RDMA does not perform well when a server communicates with a large number of peers. In this thesis, a new communication framework for RDMA networks, FLOCK, is proposed to overcome the scalability limitations of RDMA hardware. FLOCK maintains scalability when communicating with many servers and consistently provides better performance than the state of the art. Additionally, FLOCK utilizes network bandwidth efficiently and reduces the CPU overheads incurred by network communication.
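The connection-handle multiplexing idea can be illustrated with a minimal sketch: many worker threads share a small fixed pool of connections through handles, instead of each thread pair opening its own connection. The class and method names here are hypothetical, and FLOCK's actual coalescing and load-control mechanisms are not reproduced.

```python
# Minimal sketch of connection sharing via handles (not FLOCK's actual
# implementation). Threads map onto a fixed set of shared connections,
# so the connection count stays constant as the thread count grows.

import threading

class ConnectionPool:
    """Multiplexes many worker threads onto a few shared connections."""

    def __init__(self, num_connections):
        self.num_connections = num_connections
        # One lock per connection serializes concurrent senders on it.
        self.locks = [threading.Lock() for _ in range(num_connections)]

    def handle_for(self, thread_id):
        # Deterministic multiplexing: thread id -> connection index.
        return thread_id % self.num_connections

    def send(self, thread_id, message, outbox):
        idx = self.handle_for(thread_id)
        with self.locks[idx]:   # per-connection critical section
            outbox[idx].append(message)
```

In a real RDMA setting the per-connection lock is precisely the synchronization cost that FLOCK's coalescing approach targets; the sketch shows only the scalability benefit (connections fixed at `num_connections` regardless of thread count).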
17

Designing High Performance and Scalable Distributed Datacenter Services over Modern Interconnects

Narravula, Sundeep 12 September 2008 (has links)
No description available.
18

PALMCLOUD

Marchal, Astrid January 2022 (has links)
Think about how much of what you do daily happens online. So much of our everyday life would not be possible without an online connection. We are becoming more and more dependent on always having access to information, and we spend a great deal of time online. Every online interaction is served by some kind of server located in a data center or similar IT facility. The electricity used in data centers is converted into heat, which in most cases is released straight into the atmosphere. Not only is this energy loss bad for the environment, it is also a waste of money. Yet this is a rapidly expanding business, and to continue the development of our technological society we have to find a way to make it sustainable. This is why I designed a Folketshus+ from a different perspective: a physical, sustainable place for the cloud, combined with a place for people. Palmcloud is a hybrid building that consists mainly of a data center and a greenhouse for palm trees. The greenhouse is heated by reusing the excess heat from the server hall. In this way the data center, a building type that I believe will be a big part of the future, becomes more climate positive and energy efficient. The greenhouse and part of the data center will be used by people as a place for online culture, creativity, and education.
19

Auxílio à tomada de decisão no processo de migração para computação em nuvem / Decision-making support in the process of migrating to cloud computing

Melo, Marcelo Morais de 25 March 2014 (has links)
Many companies have been attracted by Cloud Computing solutions due to promises of high-quality services, low cost, high availability, and self-service provisioning. However, a careful implementation plan must be followed in order to achieve the company's goals. This plan should evaluate the company's real needs so that the best datacenter model can be chosen. The possibilities are: a traditional datacenter, which uses only physical servers; a virtualized datacenter; and a datacenter that implements a Cloud Computing model. In this work, we propose a new tool to help in this decision-making process. The proposed tool has three layers. The first consists of a questionnaire used to obtain information about the company's datacenter. The second consists of an application that processes the data collected by the first layer, generating a five-position vector. Finally, the third layer consists of an application implementing a finite state machine that uses the generated vector to recommend a possible datacenter migration. To test the proposal, demonstrate the method, and show that the tool is not restricted to a specific market, the questionnaire was sent to twenty-five companies in the following areas: IT and telecommunications, automotive industry, educational organizations, agricultural machinery, government, food, visual communication, optics and photography, advertising and marketing, iron and steel, and mining. The results suggest that 52% of the companies were using an inappropriate datacenter model. We have shown that, by following the recommendations, a company could reduce its IT expenditure by 41%.
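The three-layer idea can be sketched as a questionnaire-derived vector fed to a small finite state machine that recommends a datacenter model. The states mirror the three models the abstract names, but the vector fields, thresholds, and transition rules below are invented for illustration; the thesis's actual rules are not reproduced.

```python
# Hypothetical sketch of the third layer: a finite state machine over a
# five-position questionnaire vector. Vector semantics and thresholds
# are assumptions made for this example.

def recommend_datacenter(vector):
    """vector: five questionnaire scores in [0, 1], assumed here to mean
    (virtualization readiness, elasticity need, budget flexibility,
    data-sovereignty constraints, in-house IT maturity)."""
    virt, elastic, budget, sovereignty, maturity = vector
    state = "traditional"
    if virt >= 0.5 and maturity >= 0.4:
        state = "virtualized"    # consolidation of physical servers pays off
    if state == "virtualized" and elastic >= 0.6 and sovereignty < 0.5:
        state = "cloud"          # elastic demand, few legal restrictions
    return state
```

The machine only moves "forward" (traditional → virtualized → cloud), reflecting that a cloud recommendation presupposes virtualization readiness.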
20

UCLOUD: uma abordagem para implantação de nuvem privada para a administração pública federal / UCLOUD: an approach to private cloud deployment for the federal public administration

DAMASCENO, Julio Cesar 07 August 2015 (has links)
The increasing adoption of cloud computing has revolutionized the process of acquiring computing resources, shifting spending from capital (CAPEX) to operational (OPEX) expenditures. In this context, public and private organizations have had to adapt to the new consumption and provisioning model brought about by cloud computing. The natural path is to rent computing resources from cloud providers, since few companies can invest the capital needed to build data centers to host their own clouds and/or services. Although in recent years legislation has evolved to cover cloud computing usage, data storage, and information security, among other issues, the need to create cloud service offerings has emerged within the public administration, mainly because of legal impositions that place severe restrictions on the choice of service providers, data storage, and security. Against this background, and in order to meet the growing demand for management of cloud-based ICT infrastructure, this work proposes a solution called UCloud, consisting of a methodology, presented as a reference model, and a set of tools that together enable the transition from a traditional data center infrastructure to a virtualized infrastructure and, subsequently, to a cloud environment. The proposal also enables offering the data center as a service, a scenario in which all infrastructure elements (networking, storage, CPU, and security) are virtualized and delivered as a service. The main contribution of this work is the definition and specification of a reference model for implementing cloud environments in public administration, respecting current legislation and the federal recommendations for cloud computing usage, together with the introduction and implementation of the data-center-as-a-service concept. The result is a reference model for deploying private clouds in the public administration in accordance with Brazilian law and following good market practices. Field experiments showed how the UCloud platform meets the model's requirements through the management of virtualized and cloud environments, automating tasks previously performed manually or inefficiently.
