121

Performance and Security Provisioning for Mobile Telecom Cloud

Vaezpour, Seyed Yahya 27 August 2015 (has links)
Mobile Telecom Cloud (MTC) refers to cloud services provided by mobile telecommunication companies. Since mobile network operators provide the last-mile Internet access to users, they have an advantage over other cloud providers: they can offer users better mobile connectivity and the required quality of service (QoS). Meeting higher QoS demands while keeping costs down poses a major challenge to MTC providers. We tackle this challenge by strategically placing users' data in distributed switching centres to minimize the total system cost and maximize users' satisfaction. We formulate and solve the optimization problems using linear programming (LP) based branch-and-bound and LP with rounding. Furthermore, we discuss MTC brokerage, which allows MTC providers to act as brokers for third-party cloud providers' (TPC) resources and to integrate the resources reserved from TPCs with those of their own MTC. We address the technical challenges of optimally allocating users' cloud requests to MTC and TPC data centres to meet users' QoS requirements at minimum cost. We also study the price range that can be profitable to an MTC brokerage. We then investigate the resource reservation problem under dynamic request changes. We evaluate our solution using real Google traces collected over a 29-day period from a Google cluster. We also address security provisioning in MTC. Mobile cloud allows users to offload computationally intensive applications to a mobile phone's agent in the cloud, which can be implemented as a thin virtual machine (VM), also termed a phone clone. Because hardware components are shared among co-resident VMs, a VM is subject to covert channel attacks and may leak information to other VMs located on the same physical host. We design SWAP: a security-aware provisioning and migration scheme for phone clones. We evaluate our solution using the Reality Mining and Nodobo datasets. Experimental results indicate that our algorithms are nearly optimal for phone clone allocation and are effective in maintaining low risk while minimizing the number of phone clone migrations.
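
As a rough illustration of the LP-with-rounding approach mentioned above (the cost matrix, capacities and rounding rule below are invented for the example and are not the thesis's actual formulation), the following sketch solves the LP relaxation of a user-to-switching-centre placement problem and rounds the fractional solution:

```python
# Illustrative sketch of LP relaxation + rounding for a placement problem
# (invented cost matrix, capacities and rounding rule; not the thesis's model).
import numpy as np
from scipy.optimize import linprog

def place_data(cost, capacity):
    """cost[u, c]: cost of serving user u from centre c; capacity[c]: users a centre can hold."""
    U, C = cost.shape
    n = U * C                                             # one variable x[u, c] in [0, 1]
    A_eq = np.zeros((U, n)); b_eq = np.ones(U)            # each user placed exactly once
    for u in range(U):
        A_eq[u, u * C:(u + 1) * C] = 1.0
    A_ub = np.zeros((C, n)); b_ub = np.asarray(capacity, dtype=float)   # centre capacities
    for c in range(C):
        A_ub[c, c::C] = 1.0
    res = linprog(cost.ravel(), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, 1), method="highs")
    frac = res.x.reshape(U, C)
    # Naive rounding: give each user its largest fractional share (a real scheme
    # would repair any capacity violations this can introduce).
    return frac.argmax(axis=1)

costs = np.array([[1.0, 4.0], [2.0, 1.5], [3.0, 1.0]])    # 3 users, 2 switching centres
print(place_data(costs, capacity=[2, 2]))                 # e.g. [0 1 1]
```

A branch-and-bound variant would instead branch on the fractional variables rather than rounding them directly.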
122

Secure Cloud Storage

Luo, Jeff Yucong 23 May 2014 (has links)
The rapid growth of Cloud-based services on the Internet has invited many critical security attacks. Consumers and corporations who use the Cloud to store their data face a difficult trade-off: accepting and bearing the security, reliability, and privacy risks, as well as the costs, in order to reap the benefits of Cloud storage. The primary goal of this thesis is to resolve this trade-off while minimizing total costs. This thesis presents a system framework that solves this problem by using erasure codes to add redundancy and security to users' data, and by optimally choosing Cloud storage providers to minimize risks and total storage costs. A detailed comparative analysis of the security and algorithmic properties of seven different erasure codes is presented, showing that codes with better data security come at a higher cost in computational time complexity. The codes that granted the highest configuration flexibility bested their peers, as that flexibility directly corresponded to the level of customizability for data security and storage costs. An in-depth analysis of the risks, benefits, and costs of Cloud storage is presented and used to derive cost-based and security-based selection criteria for choosing appropriate Cloud storage providers. A brief historical introduction to Cloud computing and security principles is provided as well for those unfamiliar with the field. The analysis results show that the framework can resolve the trade-off problem by mitigating and eliminating the risks while preserving and enhancing the benefits of using Cloud storage. However, it requires more total storage space due to the redundancy added by the erasure codes. The storage provider selection criteria minimize the total storage costs even with the added redundancy, and minimize risks.
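
To make the redundancy idea concrete, here is a minimal sketch of a (k, k+1) XOR parity code, far simpler than any of the seven codes analysed in the thesis: data is split into k fragments plus one parity fragment, and any single lost fragment can be rebuilt from the survivors.

```python
# Minimal (k, k+1) XOR parity code: k data fragments plus one parity fragment,
# so any single lost fragment can be rebuilt. Purely illustrative; far simpler
# than the erasure codes compared in the thesis.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    """Split data into k padded fragments and append their XOR as parity."""
    size = -(-len(data) // k)                                       # ceil(len / k)
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    return frags + [reduce(xor, frags)]

def recover(frags, lost_index):
    """Rebuild the one missing fragment by XOR-ing all surviving fragments."""
    return reduce(xor, [f for i, f in enumerate(frags) if i != lost_index])

pieces = encode(b"cloud storage demo", k=4)                         # 4 data + 1 parity
assert recover(pieces, lost_index=2) == pieces[2]                   # one loss is repairable
```

The extra parity fragment is exactly the kind of storage overhead the abstract points out: added availability is bought with additional space.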
123

DSFS: a data storage facilitating service for maximizing security, availability, performance, and customizability

Bilbray, Kyle 12 January 2015 (has links)
The objective of this thesis is to study methods for the flexible and secure storage of sensitive data in an unaltered cloud. While current cloud storage providers make guarantees on the availability and security of data once it enters their domain, clients are not given any options for customization. All availability and security measures, along with any resulting performance hits, are applied to all requests, regardless of the data's sensitivity or the client's wishes. In addition, once a client's data enters the cloud, it becomes vulnerable to different types of attacks. Other cloud users may access or disrupt the availability of their peers' data, and cloud providers cannot protect data from themselves in the event of a malicious administrator or government directive. Current solutions use combinations of known encoding schemes and encryption techniques to provide confidentiality from peers and sometimes from the cloud service provider, but it is an all-or-nothing model: a client either uses the security methods of their system or does not, regardless of whether the client's data needs more or less protection and availability. Our approach, referred to as the Data Storage Facilitating Service (DSFS), provides a basic set of proven protection schemes with configurable parameters that encode input data into a number of fragments and intelligently scatter them across the target cloud. A client may choose the encoding scheme most appropriate for the sensitivity of their data. If none of the supported schemes is sufficient for the client's needs, or the client has their own custom encoding, DSFS can accept already-encoded fragments and perform secure placement. Evaluation of our prototype service demonstrates clear performance trade-offs between the different levels of security that encoding provides, allowing clients to choose how much protection the importance of their data is worth. This amount of flexibility is unique to DSFS and turns it into a secure storage facilitator that can help clients as much or as little as required. We also see a significant effect on overhead from the service's location relative to its cloud when comparing the performance of our own setup with a commercial cloud service.
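
A hedged sketch of the scatter step only, under the assumption that the fragments are already encoded so that any k of them suffice to rebuild the data: the placement simply ensures no single node accumulates enough fragments to reconstruct, preferring cheaper nodes first. Node names, costs and the greedy rule are illustrative, not DSFS's actual placement logic.

```python
# Sketch of the scatter step only: spread n already-encoded fragments (any k
# rebuild the data) so that no single node can reconstruct on its own, cheapest
# nodes first. Node names, costs and the greedy rule are illustrative assumptions.
def scatter(num_fragments, k, node_costs):
    per_node_cap = k - 1                       # a node must never hold enough to rebuild
    nodes = sorted(node_costs, key=node_costs.get)
    placement, load = {}, {n: 0 for n in nodes}
    for frag in range(num_fragments):
        # Pick the cheapest node that still has headroom (assumes enough nodes exist).
        target = next(n for n in nodes if load[n] < per_node_cap)
        placement[frag] = target
        load[target] += 1
    return placement

# 6 fragments, any 3 rebuild the data, so no node may hold more than 2 of them.
print(scatter(6, k=3, node_costs={"node-a": 0.8, "node-b": 1.0, "node-c": 1.5, "node-d": 2.0}))
```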
124

Political areal-functional organization with special reference to St. Cloud, Minnesota.

Brown, Robert Harold, January 1957 (has links)
Thesis--University of Chicago. / Bibliography: p. 116-123.
125

Le phénomène de circulation des données à caractère personnel dans le cloud : étude de droit matériel dans le contexte de l'Union européenne / The flow of personal data in the cloud : a study of substantive law within the European Union context

Tourne, Elise 11 June 2018 (has links)
The legal framework applicable to the collection and processing by cloud service providers of their users' personal data raises questions for those users. In practice, no organized legal framework currently regulates, at the European Union level and as a whole, the flow of personal data in the cloud, whether directly or indirectly. It is therefore necessary to examine how the law has organized itself in response, and to analyse the complementary and/or alternative treatments the law currently offers, which are less structured and more mosaic-like, but more pragmatic, realistic and politically sustainable. Historically, the flow of personal data has been addressed almost exclusively through the specific right to the protection of personal data deriving from the European Union. That right, often considered in opposition to the right to the free movement of data, was initially an emanation of the right to privacy before being established as a fundamental right of the European Union. The treatment provided by the right to the protection of personal data, although it directly targets the data at the heart of the flow phenomenon, covers that phenomenon only partially. Moreover, despite the entry into force of Regulation 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, its effectiveness is questionable: it offers no harmonized solution within the European Union and depends heavily on the goodwill and the financial, organizational and human resources of the Member States. The complementary and/or alternative treatments to the right to the protection of personal data that exist within the European Union, which can be divided among technical, contractual and legislative tools, address the data flow phenomenon only indirectly, by framing its cloud environment. Individually, each captures only a very limited aspect of the flow phenomenon, with varying effectiveness, and technical and contractual tools lack the legitimacy attached to legislative tools. Combined, however, they allow the data flow phenomenon to be targeted more globally and effectively.
126

Cloud computing - srovnání cloudových úložišť / Cloud computing - comparison of cloud storages

Tymeš, Radek January 2017 (has links)
This thesis deals with the use of cloud computing in the field of online data storage for storing and backing up user data. After defining the target group and its requirements for the service, the ten most suitable options were selected from the full set of cloud storage services. The selected services were then tested and analysed for use in the next stage of the thesis. Inputs for a multi-criteria analysis of the options were prepared on the basis of the chosen criteria and their stated or measured values. The most appropriate options were determined by applying this mathematical method, and were then evaluated and described from the viewpoint of putting them into operation.
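
A minimal sketch of the kind of weighted multi-criteria scoring such an analysis can use; the criteria, weights and normalised values below are invented for illustration and are not the thesis's measured data.

```python
# Illustrative weighted-sum multi-criteria scoring; the criteria, weights and
# normalised values are invented, not the thesis's measured data.
def score(options, weights):
    """options: name -> {criterion: value in [0, 1], higher is better}."""
    return {name: sum(weights[c] * vals[c] for c in weights)
            for name, vals in options.items()}

options = {
    "storage-x": {"price": 0.9, "speed": 0.6, "capacity": 0.7},
    "storage-y": {"price": 0.5, "speed": 0.9, "capacity": 0.8},
}
weights = {"price": 0.5, "speed": 0.3, "capacity": 0.2}      # target group's priorities
ranking = score(options, weights)
print(max(ranking, key=ranking.get), ranking)                # best option and all scores
```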
127

Detecting Compute Cloud Co-residency with Network Flow Watermarking Techniques

Bates, Adam January 2012 (has links)
This thesis presents co-resident watermarking, a traffic analysis attack for cloud environments that allows a malicious co-resident virtual machine to inject a watermark signature into the network flow of a target instance. This watermark can be used to exfiltrate co-residency data, compromising isolation assurances. While previous work depends on the hypervisor's virtual resource management, our approach is difficult to defend against without costly underutilization of the physical machine. We evaluate co-resident watermarking under many configurations, from a local lab environment to production cloud environments. We demonstrate the ability to initiate a covert channel of 4 bits per second, and we can confirm co-residency with a target VM instance in less than 10 seconds. We also show that passive load measurement of the target and behavior profiling are possible. Our investigation demonstrates the need for careful design of hardware to be used in the cloud. This thesis includes unpublished co-authored material.
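
A toy simulation of the signalling idea behind such a flow watermark, with invented parameters: the sender perturbs the target's throughput in fixed time slots to encode bits, and an observer recovers them by averaging and thresholding. Real traffic, noise and slot synchronisation are considerably harder than this sketch suggests.

```python
# Toy simulation of slot-based flow-watermark signalling (invented parameters):
# a '1' bit dips the target's per-slot throughput, the observer averages each
# bit's slots and thresholds. Real traffic and timing are far noisier than this.
import random

BASELINE, DIP, SLOTS_PER_BIT = 100.0, 60.0, 4      # throughput units per time slot

def transmit(bits):
    """One noisy throughput sample per slot; a '1' bit lowers the flow."""
    samples = []
    for b in bits:
        level = DIP if b else BASELINE
        samples += [level + random.gauss(0, 5) for _ in range(SLOTS_PER_BIT)]
    return samples

def decode(samples):
    threshold = (BASELINE + DIP) / 2
    bits = []
    for i in range(0, len(samples), SLOTS_PER_BIT):
        window = samples[i:i + SLOTS_PER_BIT]
        bits.append(1 if sum(window) / len(window) < threshold else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(transmit(message)) == message        # holds with overwhelming probability
```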
128

Performance et qualité de service de l'ordonnanceur dans un environnement virtualisé / Performance and quality of service of the scheduler in a virtualized environment

Djomgwe Teabe, Boris 12 October 2017 (has links)
In response to the increasing cost of setting up and maintaining IT systems, companies are turning to outsourcing solutions such as cloud computing. The cloud relies on virtualization as the main technology enabling resource pooling. Virtualization brings many challenges, the main ones concerning application performance inside virtual machines (VMs) and the predictability of that performance. In a virtualized system, hardware resources are shared among all VMs. In the case of the CPU, the hypervisor's scheduler is in charge of sharing it among all the virtual processors (vCPUs) of the VMs, using a time-sharing approach: each vCPU gets access to the CPU periodically, so vCPUs see discontinuous rather than continuous CPU access. This discontinuity causes many problems for mechanisms such as interrupt handling and low-level synchronization primitives in guest OSs. This thesis proposes two contributions to address these problems: a new hypervisor scheduler that dynamically adapts the quantum value according to the type of applications running in the VMs on a multi-core platform, and a new synchronization primitive (named I-Spinlock) in the guest OS. In an IaaS cloud, the VM is the unit of allocation. The provider publishes a catalogue of VM types specifying the quantities of resources allocated to each VM for the various devices; these allocations correspond to a quality-of-service contract negotiated by the customer with the provider. Performance unpredictability is the consequence of the provider's inability to guarantee this quality of service, and has two main causes in the cloud: (i) poor resource sharing between VMs and (ii) heterogeneity of the infrastructure in hosting centres. This thesis proposes two further contributions to address performance unpredictability. The first focuses on the sharing of the software layer responsible for managing device drivers and proposes billing the CPU time used by this layer to the VMs that use it. The second focuses on CPU allocation in heterogeneous clouds and proposes an allocation approach that guarantees the computing capacity allocated to a VM regardless of the heterogeneity of the CPUs in the infrastructure.
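
A hedged sketch of the quantum-adaptation idea stated above: choose a shorter scheduler quantum when the vCPUs mostly run I/O-bound, latency-sensitive work and a longer one when they are CPU-bound. The thresholds and the classification heuristic are illustrative assumptions, not the thesis's design.

```python
# Illustrative quantum-adaptation rule: shorter quantum when vCPUs mostly block
# on I/O (latency-sensitive), longer when they are CPU-bound. Thresholds and the
# classification heuristic are assumptions, not the thesis's actual design.
SHORT_QUANTUM_MS, LONG_QUANTUM_MS = 1, 30

def classify(io_fractions):
    """io_fractions: per-vCPU share of the last period spent blocked on I/O."""
    return "io-bound" if sum(io_fractions) / len(io_fractions) > 0.5 else "cpu-bound"

def pick_quantum(io_fractions):
    return SHORT_QUANTUM_MS if classify(io_fractions) == "io-bound" else LONG_QUANTUM_MS

print(pick_quantum([0.8, 0.7, 0.9]))   # mostly blocked on I/O  -> 1 ms quantum
print(pick_quantum([0.1, 0.0, 0.2]))   # CPU-bound vCPUs        -> 30 ms quantum
```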
129

Governance a management služeb cloud computingu z pohledu spotřebitele / Cloud computing governance a management from consumer point of view

Karkošková, Soňa January 2017 (has links)
Cloud computing brings widely recognized benefits as well as new challenges and risks, resulting mainly from the fact that the cloud service provider is an external third party that provides public cloud services in a multi-tenancy model. At present, widely accepted IT governance frameworks lack a focus on cloud computing governance and do not fully address the requirements of cloud computing from the cloud consumer's viewpoint. Given the absence of any comprehensive cloud computing governance and management framework, this doctoral thesis focuses on specific aspects of cloud service governance and management from the consumer's perspective. The main aim is the design of a methodological framework for cloud service governance and management (Cloud computing governance and management) from the consumer's point of view. The cloud consumer is considered to be a medium-sized or large enterprise that uses services in the public cloud computing model, offered and delivered by a cloud service provider. The theoretical part of the thesis identifies the main theoretical concepts of IT governance, IT management and cloud computing (chapter 2). The analytical part reviews the literature dealing with the specifics of cloud service utilization and its impact on IT governance and IT management, cloud computing governance and cloud computing management (chapter 3). Further, existing IT governance and IT management frameworks (SOA Governance, COBIT, ITIL and MBI) were analysed and evaluated in terms of the use of cloud services from the cloud consumer's perspective (chapter 4). The research was based on the Design Science Research Methodology, with the intention of designing and evaluating the methodological framework as an artifact. The main part of the thesis proposes the methodical framework Cloud computing governance and management, based on SOA Governance, COBIT 5 and ITIL 2011 (chapters 5, 6 and 7). Verification of the proposed framework from the cloud consumer's perspective was based on a case study (chapter 8), whose main objective was to evaluate and verify the framework in a real business environment. The main contribution of the thesis is twofold: the use of existing knowledge, approaches and methodologies in the area of IT governance and IT management to design the Cloud computing governance and management framework, and the extension of the Management of Business Informatics (MBI) framework with a set of new tasks containing procedures and recommendations relating to the adoption and use of cloud computing services.
130

Cloud-ready aplikace pro integraci kontaktů / Cloud-ready application for contact integration

Koblása, Michal January 2015 (has links)
The main objective is to implement an integration application for contacts and communication that meets the Cloud-ready specification and is therefore ready for deployment in a cloud computing environment. The thesis summarizes the recommendations of scientific papers, case studies and cloud providers' documentation for cloud-ready applications. It also summarizes the Twelve-Factor App methodology, which brings together a set of recommendations for cloud applications. The second objective is to define the specification of the application to be implemented: an identified persona and its problems are transformed into a specification and requirements. This is followed by the design of an architecture that meets the principles for running in a cloud environment. The fourth partial objective is to evaluate the identified principles and procedures and the actual implementation. The main contribution of this work is an implemented application that is able to integrate contacts and communication. Another contribution is the summary and validation of the basic recommendations on cloud-ready applications and the Twelve-Factor App methodology.
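
As a small illustration of one Twelve-Factor recommendation mentioned above (factor III, storing configuration in the environment), an application can read its backing-service settings from environment variables rather than hard-coding them; the variable names below are assumptions for the example, not the thesis's implementation.

```python
# Factor III of the Twelve-Factor App: read configuration from the environment
# instead of hard-coding it. The variable names are assumptions for illustration.
import os

def load_config():
    return {
        "database_url": os.environ["DATABASE_URL"],             # required
        "contacts_api_key": os.environ["CONTACTS_API_KEY"],     # required
        "port": int(os.environ.get("PORT", "8080")),            # optional, with default
    }

# e.g.  DATABASE_URL=postgres://...  CONTACTS_API_KEY=...  python app.py
```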
