About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
771

Evaluation of the impact of intra- and inter-node communication in cloud computing for HPC applications

Thiago Kenji Okada, 07 November 2016
With the advent of cloud computing, users no longer need to invest large amounts of money in computing equipment. Instead, it is possible to obtain processing and storage resources, or even complete systems, on demand, using one of the several services available from cloud providers such as Amazon, Google, Microsoft, and USP itself. This allows greater control of operating expenses, reducing costs in many cases. For example, high-performance computing users can benefit from this model by using a large number of resources for short periods of time, instead of acquiring a computer cluster with a high initial cost. Our work analyzes the feasibility of running high-performance applications in the cloud, comparing their performance on an infrastructure with known behavior against the public cloud offered by Google. In particular, we focus on different parallel configurations, with internal communication between processes on the same node, called intra-node, and external communication between processes on different nodes, called inter-node. Our case study was the NAS Parallel Benchmarks, a popular benchmark suite for the performance analysis of parallel and high-performance systems. We used applications with pure MPI implementations (for both intra- and inter-node communication) and hybrid implementations where internal communication was done with OpenMP (intra-node) and external communication with MPI (inter-node).
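As a rough illustration of the intra-/inter-node distinction studied above, the following minimal sketch (ours, not the thesis's code) splits the MPI world communicator into per-node communicators and times a ping-pong within each. It assumes mpi4py and an MPI launcher (e.g. `mpiexec -n 4 python pingpong.py`) are available.

```python
from mpi4py import MPI

world = MPI.COMM_WORLD

# Ranks that share physical memory end up in the same communicator here,
# so `node` groups the processes living on the same node.
node = world.Split_type(MPI.COMM_TYPE_SHARED)

def pingpong(comm, reps=1000):
    """Average round-trip time between ranks 0 and 1 of `comm`."""
    comm.Barrier()                      # every rank of `comm` synchronizes
    if comm.Get_size() < 2 or comm.Get_rank() > 1:
        return None                     # only ranks 0 and 1 exchange messages
    t0 = MPI.Wtime()
    for _ in range(reps):
        if comm.Get_rank() == 0:
            comm.send(b"x", dest=1)
            comm.recv(source=1)
        else:
            comm.recv(source=0)
            comm.send(b"x", dest=0)
    return (MPI.Wtime() - t0) / reps

intra = pingpong(node)    # two ranks on the same node
inter = pingpong(world)   # world ranks 0 and 1; crosses nodes only if the
                          # launcher placed them on different machines
if world.Get_rank() == 0:
    print("intra-node:", intra, "s/roundtrip; world:", inter, "s/roundtrip")
```

On a typical cluster the intra-node round trip is one or more orders of magnitude faster, which is exactly the asymmetry the hybrid MPI+OpenMP configurations exploit.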
772

Fast delivery of virtual machines and containers: understanding and optimizing the boot operation

Nguyen, Thuy Linh, 24 September 2019
The provisioning process of a Virtual Machine (VM) or a container is a succession of three complex stages: (i) scheduling the VM or container to an appropriate compute node; (ii) transferring the VM or container image to that compute node from a repository; and (iii) finally performing the VM or container boot process. Depending on the properties of the client's request and the status of the platform, each of these three phases can impact the total duration of the provisioning operation. While many works have focused on optimizing the first two stages, only a few have investigated the impact of the boot duration. This comes as a surprise, as a preliminary study we conducted showed that the boot time of a VM or container can last up to a few minutes in highly consolidated scenarios. To understand the major reasons for such overheads, we performed up to 15,000 experiments on top of Grid'5000, booting VMs and containers under different environmental conditions. The results showed that the most influential factor is I/O operations. To accelerate the boot process, we defend in this thesis the design of a dedicated mechanism to mitigate the number of generated I/O operations. We demonstrate the relevance of this proposal with a first prototype entitled YOLO (You Only Load Once). Thanks to YOLO, the boot duration can be 2-13 times faster for VMs and up to 2 times faster for containers. Finally, it is noteworthy that the way YOLO has been designed enables it to be easily applied to other types of virtualization (e.g., Xen) and containerization technologies.
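To make the "load once" idea concrete, here is a toy Python sketch of a block-level read cache: each block of a boot image touches the backing store only once, and later boots are served from the cache. This is our illustration of the general principle only; YOLO itself operates at the virtualization layer, not in Python.

```python
BLOCK = 4096  # bytes per cached block (illustrative granularity)

class LoadOnceImage:
    """Serve repeated reads of the same image blocks from a local cache."""

    def __init__(self, path):
        self.path = path
        self.cache = {}  # block index -> bytes; YOLO persists this across boots

    def read_block(self, idx):
        if idx not in self.cache:          # first access: one real I/O
            with open(self.path, "rb") as f:
                f.seek(idx * BLOCK)
                self.cache[idx] = f.read(BLOCK)
        return self.cache[idx]             # later boots: no disk I/O at all
```

Since boot processes tend to read the same small fraction of the image every time, deduplicating those reads is what yields the 2-13x speedups reported above.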
773

An evaluation of how edge computing is enabling the opportunities for Industry 4.0

Svensson, Wictor, January 2020
Connecting factories to the internet and enabling them to talk to each other autonomously is called the Industrial Internet of Things (IIoT), referred to as Industry 4.0 in terms of the industrial revolutions. The machines collect data through many different sensors and need to share these values with each other and with the cloud. This places a large load on the cloud and the internet, and the latency becomes large. To evaluate how the workload and the latency can be reduced while still getting the same result as using the cloud, two different systems are implemented: one using the cloud and one using edge computing. Edge computing means that the processing of the data is decentralized to the edge of the network. This thesis aims to find out when it is more favorable to use an edge solution and when a cloud solution is preferable. The first system is implemented with an edge platform, Crosser; the second with a cloud platform, Azure. Both implementations give the same outputs, but differ in where the data is processed. The systems are measured in latency, bandwidth, and CPU usage. The measurements show that the Crosser system has lower latency and uses less bandwidth, but needs more computational power on the device at the edge of the network. The conclusion is that the choice depends on the demands of the system: if low latency and low bandwidth usage are required, Crosser is preferable; but if a very heavy machine-learning algorithm is to be executed and latency and bandwidth are not a concern, the cloud reference system is preferable.
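A sketch of the kind of latency comparison reported above, in Python; the two endpoint URLs are placeholders we invented, not systems from the thesis.

```python
import statistics
import time
import urllib.request

def median_rtt(url, n=20):
    """Median round-trip time of n small HTTP requests to `url`."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

EDGE_URL = "http://edge-gateway.local/telemetry"    # hypothetical edge node
CLOUD_URL = "https://cloud.example.com/telemetry"   # hypothetical cloud API

print("edge  median RTT:", median_rtt(EDGE_URL))
print("cloud median RTT:", median_rtt(CLOUD_URL))
```

The thesis's conclusion follows the same logic: if the edge round trip is consistently shorter and the payload smaller, the extra CPU spent at the edge is the price paid for that gain.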
774

A policy-based architecture for virtual network embedding

Esposito, Flavio, 22 January 2016
Network virtualization is a technology that enables multiple virtual instances to coexist on a common physical network infrastructure. This paradigm has fostered new business models, allowing infrastructure providers to lease or share their physical resources. Each virtual network is isolated and can be customized to support a new class of customers and applications. To this end, infrastructure providers need to embed virtual networks on their infrastructure. Virtual network embedding is the (NP-hard) problem of matching constrained virtual networks onto a physical network. Heuristics to solve the embedding problem have exploited several policies under different settings. For example, centralized solutions have been devised for small enterprise physical networks, while distributed solutions have been proposed over larger federated wide-area networks. In this thesis we present a policy-based architecture for the virtual network embedding problem. By policy, we mean a variant aspect of any of the three (invariant) embedding mechanisms: physical resource discovery, virtual network mapping, and allocation on the physical infrastructure. Our architecture adapts to different scenarios by instantiating appropriate policies, with bounds on embedding efficiency and on embedding convergence time, over a single provider or across multiple federated providers. The performance of representative novel and existing policy configurations is compared via extensive simulations and over a prototype implementation. We also present an object model as a foundation for a protocol specification, and we release a testbed to enable users to test their own embedding policies and to run applications within their virtual networks. The testbed uses a Linux system architecture to reserve virtual node and link capacities.
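As an example of what one mapping policy might look like, here is a small greedy node-mapping sketch; since the thesis deliberately treats the mapping policy as a pluggable variant, this particular heuristic is only an assumed illustration.

```python
def embed(virtual_nodes, physical_nodes):
    """Greedy virtual-to-physical node mapping.

    virtual_nodes:  {name: cpu_demand}
    physical_nodes: {name: free_cpu}
    Returns {virtual: physical} or None if the embedding fails.
    """
    free = dict(physical_nodes)
    mapping = {}
    # Place the most demanding virtual nodes first, each on the
    # currently least-loaded physical node.
    for vnode, demand in sorted(virtual_nodes.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)
        if free[host] < demand:
            return None        # this policy gives up; others might backtrack
        mapping[vnode] = host
        free[host] -= demand
    return mapping

print(embed({"a": 4, "b": 2, "c": 2}, {"p1": 6, "p2": 4}))
# -> {'a': 'p1', 'b': 'p2', 'c': 'p1'}
```

A real embedder must also map virtual links onto physical paths with enough bandwidth, which is part of what makes the full problem NP-hard.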
775

Tango Panopticon: Developing a Platform for Supporting Live Synchronous Art Events Based in Relational Aesthetics

Stillo, Michael Edward, 11 March 2010
The Tango Panopticon project merges art with technology to create a live and synchronous art experience which is just as much about the participants as it is about the observers. The goal of this project is to create a dialogue between observers of the event in the hopes of creating new social connections where there were none before. This goal is achieved by allowing observers to view the event from anywhere around the world on a computer via the internet and participate in a dialogue with other users on the website. The other objective of this project is to create a multimedia internet platform for other art projects to use. Other artists that are interested in hosting their own live synchronous event will be able to use the platform we have created and customize it to the specific needs of their project.
776

Trusted Computing and Secure Virtualization in Cloud Computing

Paladi, Nicolae, January 2012
Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One of the consequences of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored, or calculations performed, on the VM instance offered by the CS provider. This thesis combines three components -- trusted computing, virtualization technology, and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module. The initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage, and network resources in order to serve a large number of customers through a multi-tenant multiplexing model, offering on-demand self-service over broad network access. Open-source cloud computing platforms are, similar to trusted computing, a fairly recent technology in active development. The issue of trust in public cloud environments is addressed by examining the state of the art within cloud computing security and subsequently addressing the issues of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the Trusted Platform Module (TPM) for key generation and data protection; the TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure using generic virtual machine images, it is a step towards the creation of a secure and trusted public cloud computing environment.
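The core integrity check of the trusted launch can be caricatured as a digest comparison, as in the sketch below. This is a deliberate simplification: in the actual protocol, measurement and attestation are rooted in the TPM and integrated with OpenStack, and none of the function names here come from the thesis.

```python
import hashlib

def measure_image(path):
    """SHA-256 digest of a VM image file (a stand-in for a TPM measurement)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_launch(image_path, expected_digest):
    """Allow the launch only if the image matches its expected measurement."""
    return measure_image(image_path) == expected_digest
```

What the TPM adds beyond this sketch is that the measurement is taken and reported by hardware the CS provider cannot silently subvert, so the client can trust the digest it verifies.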
777

Detecting Insider and Masquerade Attacks by Identifying Malicious User Behavior and Evaluating Trust in Cloud Computing and IoT Devices

Kambhampaty, Krishna Kanth, January 2019
There are a variety of communication mediums and devices for interaction, and users hop from one medium to another frequently. Though the increase in the number of devices brings convenience, it also raises security concerns: providing a platform to users is as important as securing it. In this dissertation we propose a security approach that captures user behavior to identify malicious activities. System users exhibit certain behavioral patterns while utilizing resources, such as device location, the files they access on a server, or the user accounts they use. If this behavior is captured and compared with normal users' behavior, anomalies can be detected. In our model, we identify malicious users and assign a trust value to each user accessing the system. Users who access files on the servers that they have never accessed before, or who access multiple accounts from the same device, are considered suspicious; if this behavior continues, they are categorized as ingenuine. The trust value determines the trustworthiness of a user: genuine users get a higher trust value and ingenuine users a lower one. The trust value ranges from zero to one, with one being the highest trustworthiness and zero the lowest. Our model tracks sixteen different features of user behavior; from the time users log in to the system until they log out, they are monitored against these features, which determine whether the user is malicious. For instance, accessing too many accounts, using proxy servers, or making too many incorrect login attempts all indicate suspicious activity; the more such features a user triggers, the more suspicious the user and the lower the trust value. Identifying malicious users could prevent and/or mitigate attacks, enabling timely action to stop these users from performing unauthorized or illegal actions. This could prevent insider and masquerade attacks. The approach could be used in mobile, cloud, and pervasive computing platforms.
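A minimal sketch of such a feature-based trust score, assuming a simple weighted-penalty model; the dissertation defines sixteen features, but the particular features and weights below are our assumptions.

```python
# Penalty per observed suspicious feature (illustrative weights).
SUSPICIOUS_WEIGHTS = {
    "unseen_file_access": 0.10,    # files this user never accessed before
    "multi_account_device": 0.15,  # several accounts from a single device
    "proxy_server": 0.10,
    "failed_logins": 0.20,
}

def trust_value(observed_features):
    """Start fully trusted (1.0), subtract a penalty per suspicious feature,
    and clamp to [0, 1]: 1 = most trustworthy, 0 = least."""
    penalty = sum(SUSPICIOUS_WEIGHTS.get(f, 0.05) for f in observed_features)
    return max(0.0, 1.0 - penalty)

print(trust_value(["proxy_server", "failed_logins"]))  # -> 0.7
```

Thresholding this value then separates genuine users from ingenuine ones, and continued suspicious behavior keeps driving the score toward zero.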
778

Elastic Data Stream Processing

Heinze, Thomas, 27 October 2021
Data stream processing systems are used to process data from high-velocity sources such as financial, sensor, or logistics data. Many use cases force these systems to use a distributed setup to fulfill strict requirements on system throughput and end-to-end latency. The major challenge for a distributed data stream processing system is unpredictable load peaks. Most systems use overprovisioning to solve this problem, which leads to low system utilization and high monetary cost for the user. This doctoral thesis studies a potential solution: automatically scaling in or out based on the changing workload. This approach is called elastic scaling and allows a cost-efficient execution of the system with a high quality of service. In this thesis, we present our elastic-scaling data stream processing system FUGU and address three major challenges of such systems: 1) consideration of user-defined end-to-end latency constraints during elastic scaling, 2) study of different auto-scaling techniques, and 3) combination of elastic scaling with different fault tolerance techniques. First, we demonstrate how our system considers user-defined end-to-end latency constraints during scaling decisions. Each scaling decision causes short latency peaks, because processing must be paused while operators are moved. FUGU estimates the latency peaks for different scaling decisions, tries to minimize the resulting peak, and at the same time achieves monetary costs similar to alternative approaches. Second, we study different auto-scaling techniques for elastic-scaling data stream processing systems; these are a very important part of such systems, as they derive the scaling decisions. We study three techniques: threshold-based scaling, reinforcement learning, and the novel Online Parameter Optimization, which overcomes the shortcomings of the other two by avoiding manual tuning and being robust towards different workload patterns. Finally, we present an integration of elastic scaling with different replication techniques for high availability, minimizing the monetary cost spent while ensuring a maximal recovery time. We leverage two replication approaches in FUGU and evaluate the trade-off between recovery time and overhead; FUGU estimates the recovery time and adaptively optimizes the replication technique used for each operator. All these contributions are carefully evaluated in three real-world scenarios, and we discuss how they relate to prior work.
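Of the three auto-scaling techniques studied, threshold-based scaling is the simplest to sketch; the thresholds and host limits below are illustrative assumptions, not FUGU's configuration.

```python
def scale_decision(avg_utilization, hosts, low=0.3, high=0.8,
                   min_hosts=1, max_hosts=16):
    """Scale out above `high`, scale in below `low`, else keep the host count."""
    if avg_utilization > high and hosts < max_hosts:
        return hosts + 1   # scale out: add a host and move operators onto it
    if avg_utilization < low and hosts > min_hosts:
        return hosts - 1   # scale in: release a host to cut monetary cost
    return hosts           # inside the target band: avoid needless operator moves

assert scale_decision(0.9, 4) == 5
assert scale_decision(0.1, 4) == 3
assert scale_decision(0.5, 4) == 4
```

The weakness motivating the thesis's Online Parameter Optimization is visible even here: every decision hinges on hand-tuned values of `low` and `high`, which rarely fit all workload patterns.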
779

Elasticity of Elasticsearch

Tsaousi, Kleivi Dimitris, January 2021
Elasticsearch has evolved from an experimental, open-source, NoSQL database for full-text documents to an easily scalable search engine that can handle a large number of documents. This evolution has enabled companies to deploy Elasticsearch as an internal search engine for information retrieval (logs, documents, etc.). Later on, it was transformed into a cloud service, and the latest development allows a containerized, serverless deployment of the application, using Docker and Kubernetes. This research examines the behaviour of the system by comparing the length and appearance of single-term and multiple-term queries, the scaling behaviour, and the security of the service. The application is deployed on Google Cloud Platform as a Kubernetes cluster hosting containerized Elasticsearch images that work as database nodes of a bigger database cluster. As input data, a collection of JSON-formatted documents containing the title and abstract of published papers in the field of computer science was used inside a single index. All the plots were extracted using the Kibana visualization software. The results showed that multiple-term queries put bigger stress on the system than single-term queries, and that the number of simultaneous users querying the system is a big factor affecting its behaviour. Scaling up the number of Elasticsearch nodes inside the cluster indicated that more simultaneous requests could be served by the system.
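The two query shapes being compared can be reproduced against Elasticsearch's REST `_search` endpoint as follows; the index name, field names, and local address are our assumptions for illustration.

```python
import json
import urllib.request

def search(body, index="papers", host="http://localhost:9200"):
    """POST a query body to the Elasticsearch _search endpoint."""
    req = urllib.request.Request(
        f"{host}/{index}/_search",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    return json.load(urllib.request.urlopen(req))

# Single-term query: one match clause on one field.
single_term = {"query": {"match": {"abstract": "virtualization"}}}

# Multiple-term query: several clauses combined, which the results above
# show puts noticeably more stress on the cluster.
multi_term = {"query": {"bool": {"must": [
    {"match": {"abstract": "virtualization"}},
    {"match": {"title": "cloud"}},
]}}}
```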
780

Transitions in new technology and market structure: applications and new methods for discrete choice model estimation

Wang, Shuang, 06 November 2021
My dissertation consists of three chapters that evaluate the social welfare effect of either antitrust policy or industrial transition, all using discrete choice model estimation as the front end for counterfactual analysis. In the first chapter, I investigate the economic impact of the merger that created the world's largest hotel chain, Marriott's acquisition of Starwood, thereby shedding light on the antitrust authorities' performance in protecting competitive markets for the benefit of consumers. Different from traditional merger analysis, which focuses on the tradeoff between the upward pricing pressure and the cost synergy among the merging parties while fixing the market structure, I endogenize firms' entry decisions into an oligopoly price competition model. To tackle the associated multiple equilibria issue, I use moment inequality estimation and propose a novel lower probability bound that reduces the computational burden from exponential to linear in the number of players. The chapter also adds to the scant empirical evidence on post-merger cost synergy by showing that each additional affiliated hotel in the local market reduces a hotel's marginal cost by up to 2.3%. A comparison between the simulated with-merger and without-merger equilibria then indicates that this merger enhances social welfare. In particular, of those markets that were previously not profitable for any firm to enter, Marriott or Starwood would enter 6%-24% because of the post-merger cost saving, which provides a new perspective for merger reviews. The second chapter, joint with Mingli Chen, Marc Rysman, and Krzysztof Wozniak, studies the determinants of the US payment system's shift from paper payment instruments, namely cash and check, to digital instruments, such as debit cards and credit cards. With five years of transaction-level panel data, for the first time in the literature, we can distinguish the short-term effects of transaction size from the long-term changes in households' preferences. To do so, we incorporate a household-product-quarter fixed effect into a multinomial logit model. We develop a new method based on the Minorization-Maximization (MM) algorithm to address the prohibitive computational challenge of estimating over one million fixed effects in such a nonlinear model. Results show that over a short horizon (within a quarter), the probability of using a card increases with transaction size in general but exhibits substantial household heterogeneity. Over a long horizon (the five-year period of the data), using the estimated household-product-quarter fixed effects, we decompose the increase in card usage into different channels and find that only a third of it is due to changes in household preferences; another significant driver is households' entry into and exit from the sample. In the third chapter, my coauthors Jacob LaRiviere, Aadharsh Kannan, and I explore the "death of distance" hypothesis with a novel anonymized customer-level dataset on demand for cloud computing, accounting for both spatial and price competition among public cloud providers. We introduce a mixed logit demand model of spatial competition, estimable with detailed data from a single firm but only aggregate sales data from a second. We leverage the Expectation-Maximization (EM) algorithm to tackle the customer-level missing data problem for the second firm. Estimation results and counterfactuals show that standard spatial competition economics hold even when the distance relevant for cloud latency is trivial.
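The building block behind the second chapter, multinomial logit choice probabilities with an additive fixed effect, can be sketched numerically; the utility specification below is our illustration, not the paper's estimated model.

```python
import numpy as np

def choice_probs(x, beta, alpha):
    """P(choice j) = exp(u_j) / sum_k exp(u_k), with u_j = x_j @ beta + alpha_j."""
    u = x @ beta + alpha
    u -= u.max()             # subtract the max for numerical stability
    e = np.exp(u)
    return e / e.sum()

# Two alternatives: card vs. cash (the base alternative, normalized to zero).
x = np.array([[1.0, 2.3],    # card: intercept, log transaction size
              [0.0, 0.0]])   # cash: covariates normalized out
beta = np.array([0.5, 0.4])  # illustrative coefficients
alpha = np.array([0.3, 0.0]) # household-product-quarter fixed effect for card

print(choice_probs(x, beta, alpha))  # probability of card vs. cash
```

The computational challenge the MM algorithm addresses is that, with over a million alpha parameters, joint Newton steps on the full likelihood are prohibitively expensive; roughly speaking, minorization replaces the likelihood with a surrogate that separates the parameters, so the fixed effects can be updated independently.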
