  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Challenges and New Solutions for Live Migration of Virtual Machines in Cloud Computing Environments

Zhang, Fei 03 May 2018 (has links)
No description available.
42

Efficient human annotation schemes for training object class detectors

Papadopoulos, Dimitrios P. January 2018 (has links)
A central task in computer vision is detecting object classes such as cars and horses in complex scenes. Training an object class detector typically requires a large set of images labeled with tight bounding boxes around every object instance. Obtaining such data requires human annotation, which is very expensive and time-consuming. Alternatively, researchers have tried to train models in a weakly supervised setting (i.e., given only image-level labels), which is much cheaper but leads to weaker detectors. In this thesis, we propose new and efficient human annotation schemes for training object class detectors that bypass the need for drawing bounding boxes and reduce the annotation cost while still obtaining high quality object detectors. First, we propose to train object class detectors from eye tracking data. Instead of drawing tight bounding boxes, the annotators only need to look at the image and find the target object. We track the eye movements of annotators while they perform this visual search task and we propose a technique for deriving object bounding boxes from these eye fixations. To validate our idea, we augment an existing object detection dataset with eye tracking data. Second, we propose a scheme for training object class detectors, which only requires annotators to verify bounding boxes produced automatically by the learning algorithm. Our scheme introduces human verification as a new step into a standard weakly supervised framework which typically iterates between re-training object detectors and re-localizing objects in the training images. We use the verification signal to improve both re-training and re-localization. Third, we propose another scheme where annotators are asked to click on the center of an imaginary bounding box, which tightly encloses the object. We then incorporate these clicks into a weakly supervised object localization technique, to jointly localize object bounding boxes over all training images. Both our center-clicking and human verification schemes deliver detectors performing almost as well as those trained in a fully supervised setting. Finally, we propose extreme clicking. We ask the annotator to click on four physical points on the object: the top, bottom, left- and right-most points. This task is more natural than the traditional way of drawing boxes and these points are easy to find. Our experiments show that annotating objects with extreme clicking is 5x faster than the traditional way of drawing boxes and it leads to boxes of the same quality as the original ground-truth drawn the traditional way. Moreover, we use the resulting extreme points to obtain more accurate segmentations than those derived from bounding boxes.
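The four extreme clicks directly determine the bounding box, which is what makes the task fast. A minimal sketch of that derivation follows; the function name and coordinate layout are illustrative, not the thesis' actual annotation-tool API.

```python
# Minimal sketch: turning four extreme clicks into a bounding box.
# Each click is an (x, y) pixel coordinate with the origin at the top-left.

def box_from_extreme_clicks(top, bottom, left, right):
    """Return (x_min, y_min, x_max, y_max) enclosing the four extreme points."""
    xs = [p[0] for p in (top, bottom, left, right)]
    ys = [p[1] for p in (top, bottom, left, right)]
    return (min(xs), min(ys), max(xs), max(ys))

# Hypothetical clicks on an object in image coordinates.
print(box_from_extreme_clicks(top=(210, 40), bottom=(195, 180),
                              left=(120, 110), right=(290, 125)))
# -> (120, 40, 290, 180)
```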
43

ASTRO- uma ferramenta para avaliação de dependabilidade e sustentabilidade em sistemas data center / ASTRO: a tool for dependability and sustainability evaluation in data center systems

Silva, Bruno 31 January 2011 (has links)
Sustainability aspects have received great attention from the scientific community, owing to concerns about meeting current energy needs without compromising, for example, non-renewable resources for future generations. Indeed, growing energy demand is an issue that has affected the way systems (data centers, for example) are designed, in the sense that designers need to examine several trade-offs and select a feasible solution considering energy usage and other metrics such as reliability and availability. Tools are important in this context to automate several design activities and obtain results as quickly as possible. This work presents an integrated environment, named ASTRO, which encompasses: (i) Reliability Block Diagrams (RBD) and Stochastic Petri Nets (SPN) for dependability evaluation, and (ii) a method based on life-cycle assessment (LCA) to quantify sustainability impact. ASTRO was conceived to evaluate data center infrastructures, but the environment is generic enough to evaluate systems in general. In addition, a case study is provided to demonstrate the feasibility of the proposed environment.
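To give a sense of the kind of RBD evaluation such a tool automates, here is a minimal sketch of steady-state availability for series and parallel blocks computed from MTTF/MTTR. The component values and structure are made-up examples, not ASTRO/Mercury code.

```python
# Illustrative sketch: steady-state availability of a data center sub-system
# modeled as a Reliability Block Diagram. MTTF/MTTR values (hours) are invented.

def availability(mttf, mttr):
    return mttf / (mttf + mttr)

def series(avails):          # every block must be up
    result = 1.0
    for a in avails:
        result *= a
    return result

def parallel(avails):        # at least one block must be up
    result = 1.0
    for a in avails:
        result *= (1.0 - a)
    return 1.0 - result

ups     = parallel([availability(50_000, 8)] * 2)    # two redundant UPS units
chiller = availability(30_000, 24)
routers = series([availability(100_000, 4)] * 2)     # two routers in series
print(f"System availability: {series([ups, chiller, routers]):.6f}")
```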
44

Análise de dependabilidade de sistemas data center baseada em índices de importância / Dependability analysis of data center systems based on importance indices

Jair Cavalcante de Figueirêdo, José 31 January 2011 (has links)
Nowadays, dependability-related issues such as high availability and high reliability are increasingly in focus, mainly due to internet-based services, which usually require uninterrupted operation. To improve systems (data center architectures, for example), dependability analysis must be carried out. Improvement activities usually involve component redundancy, which in turn requires component-level analysis. In order to obtain dependability values, besides understanding component functionality, it is important to quantify the importance of each component to the system, as well as the relationship between dependability and costs. In this context, tools that automate design activities are valuable, reducing the time needed to obtain results compared with the manual process. This work proposes new indices for quantifying component importance, taking costs into account, which can assist data center designers. Additionally, extensions to the ASTRO tool (Mercury core) were implemented and represent one of the results of this work. These extensions include component importance evaluation, dependability evaluation by bounds, and generation of logical and structural functions. Improvements were also made to the Reliability Block Diagram (RBD) module. Mercury supports dependability analysis through Stochastic Petri Nets (SPN) and RBD. Still within the ASTRO tool, it is possible to quantify the sustainability impact of data center infrastructures. All implemented metrics were evaluated on data center architectures, although they are not limited to these structures and can be used to evaluate systems in general. To demonstrate the applicability of this work, three case studies were produced.
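As a rough illustration of what a component importance index measures, the sketch below computes the classical Birnbaum importance for a tiny series-parallel system and a simple cost-weighted variant. Both the structure function and the cost weighting are assumptions made for the example, not the indices proposed in the thesis.

```python
# Illustrative sketch of a component-importance computation (Birnbaum importance).
# The example system: component 0 in series with the parallel pair (1, 2).

def system_availability(a):
    return a[0] * (1 - (1 - a[1]) * (1 - a[2]))

def birnbaum_importance(avail, i):
    up, down = list(avail), list(avail)
    up[i], down[i] = 1.0, 0.0
    return system_availability(up) - system_availability(down)

avail = [0.999, 0.95, 0.95]
cost  = [12000, 3000, 3000]        # hypothetical component costs
for i in range(3):
    ib = birnbaum_importance(avail, i)
    print(f"component {i}: Birnbaum={ib:.4f}, importance per cost={ib / cost[i]:.2e}")
```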
45

Improving Flow Completion Time and Throughput in Data Center Networks

Joy, Sijo January 2015 (has links)
Today, data centers host a wide variety of applications that generate a mix of diverse internal data center traffic. In a data center environment, 90% of the traffic flows are short flows of at most 1 MB, although they carry only 10% of the data; the remaining 10% are long flows with sizes between 1 MB and 1 GB. Throughput matters for long flows, whereas short flows are latency-sensitive. This thesis studies various data center transport mechanisms aimed at either improving flow completion time for short flows or improving throughput for long flows. The thesis puts forth two data center transport mechanisms: (1) one for improving flow completion time for short flows, and (2) one for improving throughput for long flows. The first mechanism proposed in this thesis, FA-DCTCP (Flow Aware DCTCP), is based on Data Center Transmission Control Protocol (DCTCP). DCTCP is a Transmission Control Protocol (TCP) variant for data centers pioneered by Microsoft, which is being deployed widely in data centers today. The DCTCP congestion control algorithm treats short flows and long flows equally. This thesis demonstrates that by treating them differently, reducing the congestion window of short flows at a lower rate than that of long flows at the onset of congestion, the 99th percentile of flow completion time for short flows can be improved by up to 32.5%, thereby reducing their tail latency by up to 32.5%. As per data center traffic measurement studies, data center internal traffic often exhibits predefined patterns with respect to the traffic flow mix. The second mechanism proposed in this thesis shows that insights into the internal data center traffic composition can be leveraged to achieve better throughput for long flows. The mechanism is implemented by adopting the Software Defined Networking paradigm, which offers the ability to dynamically adapt network configuration parameters based on network observations. The proposed solution achieves up to 22% improvement in long flow throughput by dynamically adjusting network elements' QoS configurations based on the observed traffic pattern.
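For context, below is a minimal sketch of the DCTCP window-reduction rule and the flow-aware modification described above (cut short flows' windows less aggressively at the onset of congestion). The 1 MB short-flow threshold comes from the abstract; the 0.5 scaling factor and parameter names are illustrative assumptions, not the exact values used in the thesis.

```python
# Sketch of DCTCP's congestion-window update and a flow-aware variant.

SHORT_FLOW_BYTES = 1_000_000   # flows up to 1 MB are treated as short flows

def dctcp_alpha(alpha, marked_fraction, g=1/16):
    """EWMA of the fraction of ECN-marked packets observed per RTT."""
    return (1 - g) * alpha + g * marked_fraction

def next_cwnd(cwnd, alpha, flow_bytes_sent):
    """Standard DCTCP: cwnd * (1 - alpha/2); short flows are cut more gently."""
    scale = 0.5 if flow_bytes_sent <= SHORT_FLOW_BYTES else 1.0
    return cwnd * (1 - scale * alpha / 2)

alpha = dctcp_alpha(alpha=0.0, marked_fraction=0.4)
print(next_cwnd(cwnd=100, alpha=alpha, flow_bytes_sent=200_000))     # short flow
print(next_cwnd(cwnd=100, alpha=alpha, flow_bytes_sent=50_000_000))  # long flow
```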
46

Improving Energy Efficiency and Bandwidth Utilization in Data Center Networks Using Segment Routing

Ghuman, Karanjot Singh January 2017 (has links)
Energy efficiency has become one of the most crucial issues for Data Center Networks (DCNs). This thesis analyses the energy-saving capability of a data center network using a Segment Routing (SR) based model within a Software Defined Networking (SDN) architecture. Energy efficiency is measured in terms of the number of links turned off and how long those links remain in sleep mode. Apart from saving energy by turning off links, our work manages the traffic on the remaining links using a per-packet load-balancing approach, aiming to avoid congestion within DCNs and to increase the sleeping time of inactive links. An algorithm for deciding which set of links to turn off within a network is presented. With the introduction of the per-packet approach within the SR/SDN model, we save 21% of the energy within the DCN topology. Results show that the proposed per-packet SR model using Random Packet Spraying (RPS) saves more energy and provides better performance than the per-flow SR model, which uses Equal-Cost Multi-Path (ECMP) for load balancing. However, the per-packet approach also introduces problems such as out-of-order packets and longer end-to-end delay. To further consolidate the energy savings of SR within DCNs while avoiding these problems, we use a per-flow bandwidth-reservation approach together with a proposed flow scheduling algorithm. The rate of every incoming flow can be deduced from its reservation, which the flow scheduling algorithm then uses to increase the bandwidth utilization ratio of the links, ultimately managing the traffic more efficiently, increasing the sleeping time of links, and leading to more energy savings. Results show that the energy savings of the per-packet approach and the per-flow approach with bandwidth reservation are almost the same, except that the average sleeping time of links in the per-flow approach with bandwidth reservation decreases less severely as the overall traffic load increases.
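A rough sketch of the per-packet Random Packet Spraying idea over segment-routed paths, skipping paths whose links have been put to sleep, is shown below. The path and segment-ID data structures are assumptions made for illustration, not the thesis' implementation.

```python
# Illustrative sketch: per-packet load balancing (RPS) over segment-routed paths,
# restricted to paths whose links are all awake (turned-off links save energy).
import random

def active_paths(paths, links_asleep):
    return [p for p in paths if not any(link in links_asleep for link in p["links"])]

def spray_packet(paths, links_asleep):
    """Pick a path uniformly at random for each packet."""
    candidates = active_paths(paths, links_asleep)
    return random.choice(candidates)["segments"]

# Hypothetical leaf-to-spine paths encoded as segment-ID lists.
paths = [
    {"segments": [16001, 16004], "links": ["s1-a1", "a1-t4"]},
    {"segments": [16002, 16004], "links": ["s1-a2", "a2-t4"]},
    {"segments": [16003, 16004], "links": ["s1-a3", "a3-t4"]},
]
print(spray_packet(paths, links_asleep={"a3-t4"}))  # never routes over the sleeping link
```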
47

Návrh bezpečné infrastruktury pro cloudové řešení. / Safe cloud infrastructure design

Hanzlová, Marie January 2013 (has links)
This diploma thesis is focused on the security of cloud computing. The theoretical part of the thesis identifies options that generally lead to higher security and availability of applications, regardless of the solution. A package of cloud services (called Productivity Suite) was defined based on customer requirements; it is built on the Microsoft platform, combines the products Microsoft Exchange, Microsoft SharePoint, and Microsoft Lync, and is intended for end customers. The specification of the service package, the identified opportunities for strengthening security, and the requirements of potential customers are the primary inputs for designing a safe cloud infrastructure, which is the main contribution of this thesis. The first step of designing the infrastructure is to choose the data center service provider who will operate the solution. A decision must be made between a leased solution and owned components. The result of this part is a cost calculation covering all hardware components (servers, firewalls, switches, tape backup libraries, disk arrays) and software components, taking into account licensing policy, SSL certificates, domains, the backup solution, and other operating costs. The solution is constrained by financial resources; the priorities are safety, security, and quality of service.
48

Metodika optimálního využití load balancingu v prostředí datového centra / Methodology of optimal usage of load balancing in data center environment

Nidl, Michal January 2015 (has links)
This master's thesis focuses on creating a methodology for the optimal use of load balancing in a data center environment. The thesis is divided into eight chapters. The first chapter describes the reasons for addressing this topic. The second chapter summarizes the state of the art in load balancing, based on a survey of existing theses that address load balancing from different angles. The third chapter summarizes load balancing and its key principles. The fourth chapter describes the current state of load balancing in a data center environment, based on observation of the real use of load balancing in a selected data center. The fifth chapter analyses existing methodologies used for infrastructure projects. The sixth chapter develops the methodology for the optimal use of load balancing in a data center. The seventh chapter evaluates the methodology by applying it to a real-world example of a load-balancing implementation. The eighth chapter summarizes the conclusions.
49

Passive cooling of data centers : modeling and experimentation / Refroidissement passif des datas centers : modélisation et expérimentation

Nadjahi, Chayan 17 December 2018 (has links)
The objective of this study is to build a passive cooling system for a data center. The chosen solution is a loop thermosyphon, combining free cooling and two-phase cooling. This technology offers simplicity and compactness. Furthermore, when combined with micro-channel heat exchangers, it can remove high heat fluxes while operating with a small refrigerant mass flow rate. The thermosyphon is composed of a mini-channel parallel-flow evaporator, an air-cooled condenser, a riser, and a downcomer. An experimental setup was built to characterize the heat transfer between the working fluid and the applied heat load. An experimental study is presented, in which the effects of the fill ratio and the input power are detailed and analyzed. In parallel, a numerical model was developed to predict the fluid properties as a function of geometrical and climatic parameters; a comparison between experimental and numerical results is also carried out. Finally, the loop thermosyphon is upgraded with a second mini-channel parallel-flow evaporator and tests are conducted at higher heat loads. A new loop thermosyphon design and the limits of the prototype are presented.
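For orientation, the kind of evaporator energy balance typically used in such loop-thermosyphon models is sketched below; this is a generic relation, not necessarily the exact formulation of the thesis' numerical model.

```latex
% Generic evaporator energy balance for a loop thermosyphon: the absorbed heat
% load sets the circulating refrigerant mass flow rate.
\begin{aligned}
\dot{Q}_{evap} &= \dot{m}\,\bigl(c_{p,l}\,(T_{sat}-T_{in}) + h_{fg}\,x_{out}\bigr)\\
\dot{m} &\approx \frac{\dot{Q}_{evap}}{h_{fg}} \quad
  \text{when the fluid enters saturated and leaves fully evaporated } (x_{out}=1)
\end{aligned}
```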
50

Roof Material Suitability for IT Mission-Critical Facilities

Petrinovich, Charles Akira 04 June 2020 (has links)
Mission-critical facilities house operations that, when interrupted, can prove disastrous to an organization's future. Limited market research is available to determine what roof types are best suited to meet the unique demands of these buildings. The purpose of this research was to evaluate different roof materials and to observe trends in their lifecycle costs and in roofing professionals' assessments of their use with mission-critical facilities. The objectives of the study were to determine the average annual lifecycle costs for the sampled roof materials, to determine the roofing professionals' preferred mission-critical facility roof materials, and to priority-rank the sampled roof materials for use with mission-critical facilities. A pilot study was conducted to assess variables in evaluating different roof materials and their use with mission-critical facilities. Additionally, a survey was administered to roofing professionals across the United States to obtain lifecycle cost information for various roof materials as well as ratings for those materials for use with mission-critical facilities. The research found that single-ply roofs, with the exception of 60 Mil TPO, had lower annual lifecycle costs than built-up roofs due to their lower installation and removal costs, as well as life expectancies that have increased over the years. The metal roof selection was also shown to have a low annual lifecycle cost due to having the longest estimated lifespan. Built-up and metal roofs were rated highest by roofing professionals for their use with mission-critical facilities, suggesting a prioritization of risk reduction over cost savings. When the lifecycle cost data was applied to the roof material ratings, the data showed that built-up roofs presented themselves as good values for mission-critical facilities; however, 90 Mil EPDM and 24-gauge metal roofs could be considered viable cost-saving alternatives.
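A sketch of the annual lifecycle cost comparison described above follows; the formula (installation and removal cost spread over the expected lifespan, plus yearly maintenance) and all dollar figures are illustrative assumptions, not the survey's data.

```python
# Illustrative annual lifecycle cost comparison (all values in $/sq ft, invented).

def annual_lifecycle_cost(install, removal, annual_maintenance, lifespan_years):
    return (install + removal) / lifespan_years + annual_maintenance

roofs = {
    "60 Mil TPO (single-ply)":      annual_lifecycle_cost(7.0, 1.5, 0.10, 22),
    "90 Mil EPDM (single-ply)":     annual_lifecycle_cost(8.5, 1.5, 0.10, 30),
    "4-ply built-up":               annual_lifecycle_cost(11.0, 2.5, 0.15, 30),
    "24-gauge standing-seam metal": annual_lifecycle_cost(13.0, 2.0, 0.05, 45),
}
for name, cost in sorted(roofs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:.2f} per sq ft per year")
```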
