11 |
Determine information systems service level in Hong Kong / Ma, Ting-sum (馬庭深). January 1986 (has links)
published_or_final_version / Management Studies / Master / Master of Business Administration
|
12 |
Essays on Cloud Pricing and Causal Inference / Kilcioglu, Cinar. January 2016 (has links)
In this thesis, we study the economics and operations of cloud computing, and we propose new matching methods for observational studies that enable us to estimate the effect of green building practices on market rents.
In the first part, we study a stylized revenue maximization problem for a provider of cloud computing services, where the service provider (SP) operates an infinite-capacity system in a market of customers who are heterogeneous in their valuation and congestion sensitivity. The SP offers two service options: one with guaranteed service availability, and one where users bid for resource availability and only the "winning" bids at any point in time get access to the service. We show that even though capacity is unlimited, in several settings, depending on the relation between valuation and congestion sensitivity, the revenue-maximizing service provider will choose to make the spot service option stochastically unavailable. This form of intentional service degradation is optimal in settings where user valuation per unit time increases sub-linearly with respect to congestion sensitivity (i.e., the disutility per unit time when the service is unavailable) -- a form of "damaged goods." We provide supporting evidence based on an analysis of price traces from the largest cloud service provider, Amazon Web Services.
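A minimal numerical sketch of the "damaged goods" logic described above; the two customer types, the square-root valuation schedule, and all prices are illustrative assumptions rather than the thesis's model:

```python
import numpy as np

# Toy screening model (assumed numbers): two customer types with congestion
# sensitivities c_L < c_H and per-unit-time valuations v(c) = 10*sqrt(c), so
# valuation grows sub-linearly in congestion sensitivity.
c_L, c_H = 1.0, 4.0
v = lambda c: 10.0 * np.sqrt(c)

def revenue(p):
    """Revenue from one low type (on spot) and one high type (on on-demand)
    when the spot option is available a fraction p of the time."""
    # Spot price S: the low type's participation constraint binds.
    S = p * v(c_L) - (1 - p) * c_L
    if S < 0:
        return 0.0                      # this screening scheme is infeasible at such low availability
    # On-demand price P: the high type must participate and must prefer the
    # guaranteed service to the degraded spot option.
    P = min(v(c_H), (1 - p) * (v(c_H) + c_H) + S)
    return S + P                        # (the low type's incentive constraint holds here since P >= v(c_L))

grid = np.linspace(0.0, 1.0, 1001)
revs = [revenue(p) for p in grid]
p_star = grid[int(np.argmax(revs))]
print(f"revenue with a fully available spot tier (p = 1): {revenue(1.0):.2f}")
print(f"revenue-maximizing availability p* = {p_star:.2f}, revenue = {max(revs):.2f}")
# With sub-linear valuations the optimum sets p* < 1: the provider deliberately
# degrades the spot tier so that high-value customers stay on the guaranteed one.
```

With these illustrative numbers, full spot availability earns revenue 20 while the optimum at p* of roughly 0.23 earns about 21.5, which is the intentional-degradation effect the abstract refers to.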
In the second part, we study competition on price and quality in cloud computing. The public "infrastructure as a service" cloud market possesses unique features that make it difficult to predict long-run economic behavior. On the one hand, major providers buy their hardware from the same manufacturers, operate in similar locations and offer a similar menu of products. On the other hand, the competitors use different proprietary "fabric" to manage virtualization, resource allocation and data transfer. The menus offered by each provider involve a discrete number of choices (virtual machine sizes) and allow providers to locate in different parts of the price-quality space. We document this differentiation empirically by running benchmarking tests, which allows us to calibrate a model of firm technology; this technology is an input into our theoretical model of price-quality competition. The monopoly case highlights the importance of competition in blocking a "bad equilibrium" where performance is intentionally slowed down or options are unduly limited. In duopoly, price competition is fierce, but prices do not converge to the same level because of price-quality differentiation. The model helps explain market trends, such as the healthy operating profit margin recently reported by Amazon Web Services. Our empirically calibrated model helps explain not only price-cutting behavior but also how providers can sustain a profit despite predictions that the market "should be" totally commoditized.
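A toy version of the price-quality competition just described; the quality levels, the uniform taste distribution, and zero marginal cost are assumed stand-ins for the empirically calibrated technology model:

```python
import numpy as np

# Vertical-differentiation duopoly sketch (illustrative assumptions throughout):
# provider 1 offers quality q1, provider 2 offers q2 > q1; a unit mass of customers
# with taste theta ~ U[0, 1] buys from the provider giving the highest surplus
# theta*q - p, or buys nothing if both surpluses are negative.
q1, q2 = 1.0, 1.5
prices = np.linspace(0.0, 1.5, 1501)

def demands(p1, p2):
    theta_12 = (p2 - p1) / (q2 - q1)   # taste indifferent between the two providers
    theta_10 = p1 / q1                 # taste indifferent between provider 1 and opting out
    if theta_12 < theta_10:            # provider 1 is priced out of the market
        return 0.0, max(0.0, 1.0 - p2 / q2)
    d1 = max(0.0, min(1.0, theta_12) - theta_10)
    d2 = max(0.0, 1.0 - theta_12)
    return d1, d2

def best_response(firm, p_other):
    if firm == 1:
        profits = [p * demands(p, p_other)[0] for p in prices]
    else:
        profits = [p * demands(p_other, p)[1] for p in prices]
    return prices[int(np.argmax(profits))]

p1, p2 = 0.5, 0.5
for _ in range(50):                    # iterate best responses to an approximate Nash equilibrium
    p1 = best_response(1, p2)
    p2 = best_response(2, p1)
print(f"equilibrium prices: low quality {p1:.2f}, high quality {p2:.2f}")
# The prices settle at distinct levels (about 0.10 vs 0.30 here): quality
# differentiation softens price competition, so the market need not collapse
# to a single commodity price even under fierce head-to-head pricing.
```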
The backbone of cloud computing is datacenters, whose energy consumption is enormous. In recent years, there has been an extensive effort to make datacenters more energy efficient. Similarly, buildings are going "green," as they have a major impact on the environment through excessive use of resources. In the last part of this thesis, we revisit a previous study on the economics of environmentally sustainable buildings and estimate the effect of green building practices on market rents. For this, we use new matching methods that take advantage of the clustered structure of the buildings data. We propose a general framework for matching in observational studies, and specific matching methods within this framework that simultaneously achieve three goals: (i) maximize the information content of a matched sample (and, in some cases, also minimize the variance of a difference-in-means effect estimator); (ii) form the matches using a flexible matching structure (such as a one-to-many/many-to-one structure); and (iii) directly attain covariate balance as specified ---before matching--- by the investigator. To our knowledge, existing matching methods are only able to achieve, at most, two of these goals simultaneously. Also, unlike most matching methods, the proposed methods do not require estimation of the propensity score or other dimensionality reduction techniques, although these can be used as additional balancing covariates in the context of (iii). Using these matching methods, we find that green buildings have 3.3% higher rental rates per square foot than otherwise similar buildings without green ratings ---a moderately larger effect than the one previously found.
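A schematic statement (my paraphrase, not the exact program in the thesis) of a matching problem with these three goals, where m_{tc} indicates matching treated unit t to control c, the cap k-bar governs the one-to-many structure, x_{.k} are covariates, and the tolerances delta_k encode the balance requirements fixed before matching:

```latex
\begin{align*}
\max_{m \in \{0,1\}^{T \times C}} \quad & \sum_{t \in T} \sum_{c \in C} m_{tc}
    && \text{(i) largest possible matched sample} \\
\text{subject to} \quad & \sum_{c \in C} m_{tc} \le \bar{k} \;\; \forall t, \qquad
    \sum_{t \in T} m_{tc} \le 1 \;\; \forall c
    && \text{(ii) flexible one-to-many structure} \\
& \Bigl| \sum_{t,c} m_{tc} \bigl( x_{tk} - x_{ck} \bigr) \Bigr| \le \delta_k \sum_{t,c} m_{tc}
    \;\; \forall k
    && \text{(iii) covariate balance fixed before matching}
\end{align*}
```

Because balance is imposed directly through the constraints in (iii), no propensity score model is required to form the matched sample.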
|
13 |
Empirical studies toward DRP constructs and a model for DRP development for information systems function / Ha, Wai On. 01 January 2002 (has links)
No description available.
|
14 |
Exploiting surplus renewable energy in datacentre computing / Akoush, Sherif. January 2012 (has links)
No description available.
|
15 |
Use of air side economizer for data center thermal management / Kumar, Anubhav. 11 July 2008 (has links)
Sharply increasing power dissipation in microprocessors and telecommunications systems has resulted in significant cooling challenges at the data center facility level. Energy-efficient cooling of data centers has emerged as an area of increasing importance in electronics thermal management.
One of the lowest-cost options for significantly cutting data center cooling costs is an airside economizer. When outside conditions are suitable, the airside economizer introduces outside air into the data center, making it the primary means of cooling the space and hence a source of low-cost cooling.
A full-scale model of a representative data center, with provisions for bringing in outside air, was developed. Four different cities around the world were considered to evaluate the savings over an entire year. Results show a significant saving in chiller energy (up to 50%). The relative humidity limits can be met at the server inlet for the proposed design, even if the outside air humidity is higher or lower than the allowable limits. The energy savings are significant and justify the infrastructure improvements, such as improved filters and a control mechanism for the outside air influx.
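A hedged sketch of the hour-by-hour decision an airside economizer controller makes; the inlet limits and sample weather points are assumed values, and this is only an illustration of the logic described above, not the thesis's full-scale model:

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    t_min: float = 18.0    # allowable server-inlet temperature, degC (assumed ASHRAE-like limits)
    t_max: float = 27.0
    rh_min: float = 20.0   # allowable relative humidity, %
    rh_max: float = 80.0

def economizer_mode(t_out: float, rh_out: float, env: Envelope = Envelope()) -> str:
    """Pick an operating mode for one hour of outside-air conditions."""
    if env.t_min <= t_out <= env.t_max and env.rh_min <= rh_out <= env.rh_max:
        return "free-cooling"          # bring outside air straight in, chiller off
    if t_out < env.t_min:
        return "mixed"                 # temper cold outside air with hot return air
    return "chiller"                   # outside air too hot or humid: fall back to mechanical cooling

# Toy annual estimate: the fraction of hours in each mode drives the chiller-energy saving.
hours = [(5, 30), (22, 55), (35, 70), (15, 90)]   # hypothetical (temperature degC, RH %) samples
modes = [economizer_mode(t, rh) for t, rh in hours]
saving = sum(m != "chiller" for m in modes) / len(modes)
print(modes, f"~{saving:.0%} of sampled hours avoid full chiller operation")
```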
|
16 |
Advanced thermal management strategies for energy-efficient data centers / Somani, Ankit. January 2009 (has links)
Thesis (M. S.)--Mechanical Engineering, Georgia Institute of Technology, 2009. / Committee Chair: Joshi, Yogendra; Committee Member: Ghiaasiaan, Mostafa; Committee Member: Schwan, Karsten. Part of the SMARTech Electronic Thesis and Dissertation Collection.
|
17 |
Computer project management in Hong Kong. January 1988 (has links)
by Chow Yiu-tong, Fong Cheung-hoo. / Thesis (M.B.A.)--Chinese University of Hong Kong, 1988. / Bibliography: leaves 182-185.
|
18 |
Advanced thermal management strategies for energy-efficient data centers / Somani, Ankit. 02 January 2009 (has links)
A simplified computational fluid dynamics/heat transfer (CFD/HT) model for a unit cell of a data center with a hot aisle-cold aisle (HACA) layout is simulated. Inefficiencies arising from the mixing of hot air in the room with the cold inlet air, which leads to a loss of cooling potential, are identified. For existing facilities, an algorithm called Ambient Intelligence based Load Management (AILM) is developed, which enhances the net data center heat dissipation capacity for a given energy consumption at the facilities end. It provides a scheme for determining how much computing load should be allocated, and where, based on the differential loss in cooling potential per unit increase in server workload. The predicted gains are first validated numerically and then experimentally using server simulators. For new facilities, a novel data center layout is designed, which uses a scalable-pod (S-Pod) based cabinet arrangement and air delivery. For the same floor space, the S-Pod and HACA facilities are simulated for different velocities, and the results are compared. An approach to incorporating heterogeneity in data centers, covering both lower-heat-dissipation and liquid-cooled racks, is established. Various performance metrics for data centers are analyzed and sorted on the basis of their applicability. Finally, a roadmap is laid out for transforming existing facilities to a state of higher cognizance of facilities/IT performance.
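A minimal greedy sketch of the load-placement idea behind AILM as described above; the racks, their marginal cooling-potential-loss coefficients, and the capacities are assumed numbers, not outputs of the CFD/HT model:

```python
import heapq

# (rack name, assumed marginal cooling-potential loss per kW of added IT load, capacity in kW)
racks = [("A1", 0.08, 12.0), ("A2", 0.15, 12.0), ("B1", 0.05, 10.0), ("B2", 0.20, 12.0)]

def allocate(total_load_kw: float, step_kw: float = 1.0):
    """Greedily spread `total_load_kw` across racks in `step_kw` increments,
    always favoring the rack whose next unit of load costs the least cooling potential."""
    load = {name: 0.0 for name, _, _ in racks}
    heap = [(loss, name, cap) for name, loss, cap in racks]   # lowest marginal loss first
    heapq.heapify(heap)
    remaining = total_load_kw
    while remaining > 0 and heap:
        loss, name, cap = heapq.heappop(heap)
        placed = min(step_kw, remaining, cap - load[name])
        load[name] += placed
        remaining -= placed
        if load[name] < cap:                                  # rack still has headroom; keep it in play
            heapq.heappush(heap, (loss, name, cap))
    return load, remaining

placement, unplaced = allocate(30.0)
print(placement, "unplaced kW:", unplaced)
```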
|
19 |
Waste heat recovery in data centers: ejector heat pump analysis / Harman, Thomas David, V. 24 November 2008 (has links)
The purpose of this thesis is to examine possible waste heat recovery methods in data centers. Predictions indicate that in the next decade data center racks may dissipate 70 kW of heat, up from current levels of 10-15 kW. Due to this increase, solutions must be found to increase the efficiency of data center cooling. This thesis examines waste heat recovery technologies that can improve energy efficiency. Possible approaches include phase change materials, thermoelectrics, thermomagnetics, vapor compression cycles, and absorption and adsorption systems. After a thorough evaluation of the candidate waste heat engines, the use of an ejector heat pump was evaluated in detail. The principle behind an ejector heat pump is very similar to a vapor compression cycle; however, the compressor is replaced with a pump, a boiler and an ejector. These three components have fewer moving parts and are more cost effective than a comparable compressor, despite a lower efficiency. The system is examined under general data center operating conditions, with a heat load of around 15-20 kW and air temperatures near 85°C. A parametric study is conducted to determine the viability and cost effectiveness of this system in the data center, including various environmentally friendly working fluids that suit the low temperature ranges found in a data center. Ammonia is determined to be the best working fluid for this application. Using this system, a coefficient of performance (COP) of 1.538 at 50°C can be realized, resulting in an estimated 373,000 kWh saved per year and a $36,425 reduction in annual cost. Finally, recommendations for implementation are considered to allow for future design and testing of this viable waste heat recovery device.
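As a back-of-envelope check on the reported figures (assuming, as a working hypothesis, that the dollar amount reflects electricity cost alone), the quoted COP is the standard ratio of useful heat moved to driving input, and the stated savings imply an electricity rate of roughly ten cents per kWh:

```latex
\mathrm{COP} \;=\; \frac{Q_{\text{useful}}}{W_{\text{input}}} \;=\; 1.538,
\qquad
\frac{\$36{,}425 \ \text{per year}}{373{,}000 \ \text{kWh per year}} \;\approx\; \$0.098 \ \text{per kWh}.
```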
|
20 |
Uma arquitetura para aprovisionamento de redes virtuais definidas por software em redes de data center / An architecture for software-defined virtual network embedding in data center networks / Rosa, Raphael Vicente, 1988-. 06 June 2014 (has links)
Advisor: Edmundo Roberto Mauro Madeira / Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Computação / Made available in DSpace on 2018-08-25T12:25:32Z (GMT).
Previous issue date: 2014 / Resumo: Today, infrastructure providers (InPs) allocate virtualized computational and network resources from their data centers to service providers in the form of virtual data centers (VDCs). Aiming to maximize their profits and to use their data center resources efficiently, InPs face the problem of optimally allocating multiple VDCs. Even though the allocation of virtual machines to servers can be optimized by several existing techniques and algorithms, the performance of cloud computing applications is still hurt by the bottleneck of underutilized network resources, explicitly defined by bandwidth and latency constraints. Based on the Software-Defined Networking paradigm, we apply the Network-as-a-Service (NaaS) model to build a well-defined data center architecture that supports the problem of provisioning virtual networks in data centers. We build services on top of the control plane of the RouteFlow platform that handle the allocation of virtual data center networks while optimizing the use of network infrastructure resources. The algorithm proposed in this work performs the virtual network allocation task based on information aggregated from a virtual plane running the BGP protocol, efficiently mapped onto a folded-Clos physical network topology built from switches with OpenFlow 1.3 support. In our experiments, we show that the proposed algorithm allocates virtual data center networks efficiently, optimizing load balancing and, consequently, the utilization of data center network infrastructure resources. The bandwidth allocation strategy is simple and flexible enough to serve different communication patterns in the virtual networks while allowing elastic load balancing in the network. Finally, we discuss how the proposed architecture and algorithm can be extended to meet performance, scalability, and other requirements of data center network architectures / Abstract: Infrastructure providers (InPs) allocate virtualized computational and network resources of their data centers to service providers (SPs) in the form of virtual data centers (VDCs). Aiming to maximize revenues and thus use the resources of their virtualized data centers efficiently, InPs handle the problem of optimally allocating multiple VDCs. Even if the allocation of virtual machines to servers can be done using well-known existing techniques and algorithms, cloud computing applications still have performance limitations imposed by the bottleneck of network resource underutilization, explicitly defined by bandwidth and latency constraints. Based on the Software-Defined Networking paradigm, we apply the Network-as-a-Service model to build a data center network architecture well suited to the virtual network embedding problem. We build services over the control plane of the RouteFlow platform that perform the allocation of virtual data center networks while optimizing the utilization of network infrastructure resources. This task is performed by the algorithm proposed in this dissertation, which is based on information aggregated from a virtual routing plane using the BGP protocol and on a folded-Clos physical network topology based on OpenFlow 1.3 devices.
The experimental evaluation shows that the proposed algorithm performs efficient load balancing on the data center network and, altogether, yields better utilization of the physical resources. The proposed bandwidth allocation strategy exhibits simplicity and flexibility in accommodating different traffic communication patterns while yielding an elastic, load-balanced network. Finally, we argue that the algorithm and architecture proposed can be extended to achieve performance, scalability and many other features required in data center network architectures / Master's / Computer Science / Master in Computer Science
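A toy sketch of the core placement idea (route each virtual link over the least-loaded leaf-spine path); this is only an illustration, not the dissertation's BGP/RouteFlow-based algorithm, and the topology, capacities, and demands are assumed:

```python
# Tiny folded-Clos fabric: 2 leaves, 2 spines, every leaf connected to every spine.
LINK_CAPACITY_GBPS = 10.0                                # assumed per-link capacity
spines, leaves = ["s1", "s2"], ["l1", "l2"]
residual = {(leaf, spine): LINK_CAPACITY_GBPS for leaf in leaves for spine in spines}

def embed_virtual_link(src_leaf, dst_leaf, demand_gbps):
    """Place one virtual link on the spine whose bottleneck residual capacity is largest."""
    if src_leaf == dst_leaf:
        return "local"                                   # stays inside one leaf, uses no fabric bandwidth
    best = max(spines, key=lambda s: min(residual[(src_leaf, s)], residual[(dst_leaf, s)]))
    if min(residual[(src_leaf, best)], residual[(dst_leaf, best)]) < demand_gbps:
        return None                                      # reject the request: no single spine can carry it
    residual[(src_leaf, best)] -= demand_gbps            # reserve bandwidth on both hops of the path
    residual[(dst_leaf, best)] -= demand_gbps
    return best

# Embed three 4 Gbps virtual links of one tenant's VDC and inspect the load balance.
for vlink in [("l1", "l2", 4.0)] * 3:
    print(vlink, "-> spine", embed_virtual_link(*vlink))
print("residual capacity:", residual)
```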
|