121
The Evolution of the Physicochemical Properties of Aerosols in the Atmosphere. Tomlinson, Jason (December 2010)
A Differential Mobility Analyzer/Tandem Differential Mobility Analyzer (DMA/TDMA) system was used to simultaneously measure the size distribution and hygroscopicity of the ambient aerosol population. The system was operated aboard the National Center for Atmospheric Research/National Science Foundation (NCAR/NSF) C-130 during the 2006 Megacity Initiative: Local and Global Research Observations (MILAGRO) field campaign, followed by the 2006 Intercontinental Chemical Transport Experiment – Phase B (INTEX-B) field campaign.
The research flights for the MILAGRO campaign were conducted within the Mexico City basin and the region to the northeast within the pollution plume. The aerosol within the basin is dominated by organics, with average measured κ values of 0.21 ± 0.18, 0.13 ± 0.09, 0.09 ± 0.06, 0.14 ± 0.07, and 0.17 ± 0.04 for dry particle diameters of 0.025, 0.050, 0.100, 0.200, and 0.300 µm, respectively. As the aerosols are transported away from the Mexico City basin, secondary organic aerosol formation through oxidation and the condensation of sulfate on the aerosol surface rapidly increase the solubility of the aerosol. The most pronounced change occurs for a 0.100 µm diameter aerosol where, after 6 hours of transport, the average κ value increased by a factor of 3 to a κ of 0.29 ± 0.13. The rapid increase in solubility increases the fraction of the aerosol size distribution that could be activated within a cloud.
The research flights for the INTEX-B field campaign investigated the evolution of the physicochemical properties of the Asian aerosol plume after 3 to 7 days of transport. The Asian aerosol within the free troposphere exhibited a bimodal growth distribution roughly 50 percent of the time. The more soluble mode of the growth distribution contributed between 67 and 80 percent of the overall growth distribution and had an average κ between 0.40 and 0.53 for dry particle diameters of 0.025, 0.050, 0.100, and 0.300 µm. The secondary mode was insoluble, with an average κ between 0.01 and 0.05 for all dry particle diameters. Cloud condensation nuclei closure was attained at a supersaturation of 0.2 percent for all particles within the free troposphere by assuming either a pure ammonium bisulfate composition or a binary composition of ammonium bisulfate and an insoluble organic.
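For readers less familiar with the κ values quoted above: κ is the single hygroscopicity parameter of Petters and Kreidenweis (2007), and, assuming that standard parameterization is the one behind these numbers, it enters κ-Köhler theory as

S(D) = \frac{D^{3} - D_{d}^{3}}{D^{3} - D_{d}^{3}\,(1-\kappa)} \exp\!\left(\frac{4\,\sigma_{s/a} M_{w}}{R\,T\,\rho_{w} D}\right),

where S is the saturation ratio over a droplet of diameter D grown from a dry particle of diameter D_d, σ_{s/a} is the surface tension of the solution/air interface, M_w and ρ_w are the molar mass and density of water, R is the universal gas constant, and T is temperature. κ = 0 corresponds to an insoluble, non-hygroscopic particle, while larger κ means the particle takes up water more readily and activates at lower supersaturation, which is why the reported increase in κ during transport raises the activated fraction.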
122
Performance and Security Provisioning for Mobile Telecom Cloud. Vaezpour, Seyed Yahya (27 August 2015)
Mobile Telecom Cloud (MTC) refers to cloud services provided by mobile telecommunication companies. Since mobile network operators support the last-mile Internet access to users, they have advantages over other cloud providers by providing users with better mobile connectivity and required quality of service (QoS). The dilemma in meeting higher QoS demands while saving cost poses a big challenge to MTC providers. We tackle this challenge by strategically placing users' data in distributed switching centres to minimize the total system cost and maximize users' satisfaction. We formulate and solve the optimization problems using linear programming (LP) based branch-and-bound and LP with rounding.
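The sketch below illustrates the LP-with-rounding technique mentioned above on a toy data-placement instance: a binary user-to-switching-centre assignment is relaxed to a linear program, solved with scipy, and then rounded. All costs, capacities, and problem sizes are invented; this is an illustration of the general technique, not the thesis's actual formulation.

```python
# Hypothetical sketch of "LP with rounding" for data placement (not the thesis's model).
# Each user must be assigned to one switching centre; centres have capacity; the
# objective is the total placement cost.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[3.0, 5.0, 9.0],      # cost[u][c]: cost of placing user u in centre c
                 [4.0, 2.0, 6.0],
                 [8.0, 3.0, 2.0],
                 [7.0, 6.0, 1.0]])
capacity = np.array([2, 2, 2])          # assumed max users per centre
n_users, n_centres = cost.shape

# Decision variables x[u, c] in [0, 1], flattened row-major for linprog.
c_vec = cost.ravel()

# Equality constraints: each user is assigned with total weight 1.
A_eq = np.zeros((n_users, n_users * n_centres))
for u in range(n_users):
    A_eq[u, u * n_centres:(u + 1) * n_centres] = 1.0
b_eq = np.ones(n_users)

# Inequality constraints: centre capacities.
A_ub = np.zeros((n_centres, n_users * n_centres))
for cen in range(n_centres):
    A_ub[cen, cen::n_centres] = 1.0
b_ub = capacity.astype(float)

res = linprog(c_vec, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0.0, 1.0), method="highs")
frac = res.x.reshape(n_users, n_centres)

# Rounding step: send each user to its largest fractional assignment
# (capacities may be slightly violated; a repair pass would fix that).
assignment = frac.argmax(axis=1)
print("fractional solution:\n", frac.round(2))
print("rounded assignment:", assignment)
```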
Furthermore, we discuss MTC brokerage, which allows MTC providers to act as a broker for third-party cloud providers' (TPC) resources and to integrate the resources reserved from TPCs with those of their own MTC. We address the technical challenges of optimally allocating users' cloud requests to MTC and TPC data centres to meet users' QoS requirements at minimum cost. We also study the price range that can be profitable to an MTC brokerage. We then investigate the resource reservation problem with dynamic request changes. We evaluate our solution using real Google traces collected over a 29-day period from a Google cluster.
We also address security provisioning in MTC. Mobile cloud allows users to offload computationally intensive applications to a mobile phone's agent in the cloud, which could be implemented as a thin virtual machine (VM), also termed a phone clone. Due to hardware components shared among co-resident VMs, a VM is subject to covert channel attacks and may potentially leak information to other VMs located on the same physical host. We design SWAP, a security-aware provisioning and migration scheme for phone clones. We evaluate our solution using the Reality Mining and Nodobo datasets. Experimental results indicate that our algorithms are nearly optimal for phone clone allocation and are effective in maintaining low risk and minimizing the number of phone clone migrations.
123
Secure Cloud Storage. Luo, Jeff Yucong (23 May 2014)
The rapid growth of Cloud-based services on the Internet has invited many critical security attacks. Consumers and corporations who use the Cloud to store their data face a difficult trade-off: accepting and bearing the security, reliability, and privacy risks, as well as the costs, in order to reap the benefits of Cloud storage. The primary goal of this thesis is to resolve this trade-off while minimizing total costs.
This thesis presents a system framework that solves this problem by using erasure codes to add redundancy and security to users' data, and by optimally choosing Cloud storage providers to minimize risks and total storage costs. A detailed comparative analysis of the security and algorithmic properties of 7 different erasure codes is presented, showing that codes with better data security come at a higher cost in computational time complexity. The codes that granted the highest configuration flexibility bested their peers, as the flexibility directly corresponded to the level of customizability for data security and storage costs. An in-depth analysis of the risks, benefits, and costs of Cloud storage is presented and analyzed to provide cost-based and security-based optimal selection criteria for choosing appropriate Cloud storage providers. A brief historical introduction to Cloud Computing and security principles is provided as well for those unfamiliar with the field.
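To make the erasure-coding idea concrete, here is a minimal sketch of a systematic (k+1, k) scheme with a single XOR parity fragment, which tolerates the loss of any one fragment. It only illustrates the general redundancy mechanism; it is not one of the seven codes analyzed in the thesis, and real deployments would use stronger codes such as Reed-Solomon.

```python
# Minimal (k+1, k) erasure-coding sketch: k data fragments plus one XOR parity
# fragment; any single missing fragment can be rebuilt from the rest.
from typing import List, Optional

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int) -> List[bytes]:
    """Split data into k equally sized fragments (zero-padded) and append an XOR parity."""
    size = -(-len(data) // k)                      # ceiling division
    padded = data.ljust(k * size, b"\x00")
    fragments = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = fragments[0]
    for frag in fragments[1:]:
        parity = xor_bytes(parity, frag)
    return fragments + [parity]

def decode(fragments: List[Optional[bytes]], k: int, length: int) -> bytes:
    """Rebuild the original data when at most one fragment is None (lost)."""
    missing = [i for i, f in enumerate(fragments) if f is None]
    if len(missing) > 1:
        raise ValueError("a single XOR parity cannot recover more than one loss")
    if missing:
        rebuilt = None
        for i, f in enumerate(fragments):
            if i != missing[0]:
                rebuilt = f if rebuilt is None else xor_bytes(rebuilt, f)
        fragments[missing[0]] = rebuilt
    return b"".join(fragments[:k])[:length]

data = b"store this across several cloud providers"
frags = encode(data, k=4)
frags[2] = None                                    # simulate losing one provider's fragment
assert decode(frags, k=4, length=len(data)) == data
```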
The analysis results show that the framework can resolve the trade-off problem by mitigating and eliminating the risks while preserving and enhancing the benefits of using Cloud storage. However, it requires more total storage space due to the redundancy added by the erasure codes. The storage provider selection criteria minimize the total storage costs, even with the added redundancy, and also minimize risks.
124
DSFS: a data storage facilitating service for maximizing security, availability, performance, and customizability. Bilbray, Kyle (12 January 2015)
The objective of this thesis is to study methods for the flexible and secure storage of sensitive data in an unaltered cloud. While current cloud storage providers make guarantees on the availability and security of data once it enters their domain, clients are not given any options for customization. All availability and security measures, along with any resulting performance hits, are applied to all requests, regardless of the data's sensitivity or the client's wishes. In addition, once a client's data enters the cloud, it becomes vulnerable to different types of attacks. Other cloud users may access or disrupt the availability of their peers' data, and cloud providers cannot protect users from themselves in the event of a malicious administrator or government directive. Current solutions use combinations of known encoding schemes and encryption techniques to provide confidentiality from peers and sometimes from the cloud service provider, but it is an all-or-nothing model: a client either uses the security methods of their system or does not, regardless of whether the client's data needs more or less protection and availability. Our approach, referred to as the Data Storage Facilitating Service (DSFS), involves providing a basic set of proven protection schemes with configurable parameters that encode input data into a number of fragments and intelligently scatter them across the target cloud. A client may choose the encoding scheme most appropriate for the sensitivity of their data. If none of the supported schemes are sufficient for the client's needs, or the client has their own custom encoding, DSFS can accept already encoded fragments and perform secure placement.

Evaluation of our prototype service demonstrates clear trade-offs in performance between the different levels of security that encoding provides, allowing clients to choose how much protection the importance of their data warrants. This amount of flexibility is unique to DSFS and turns it into more of a secure storage facilitator that can help clients as much or as little as required. We also see a significant effect on overhead from the service's location relative to its cloud when we compare the performance of our own setup with a commercial cloud service.
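As a rough sketch of the "encode into fragments and intelligently scatter" step described above, the snippet below spreads encoded fragments so that no single storage node holds enough of them to reconstruct the data. The round-robin policy, node names, and threshold are invented for illustration; this is not DSFS's actual placement algorithm.

```python
# Hypothetical fragment-scattering sketch (not DSFS's actual placement algorithm):
# spread encoded fragments across cloud storage nodes so that no single node
# receives enough fragments to reconstruct the data on its own.
from typing import Dict, List

def scatter(fragment_ids: List[str], nodes: List[str], threshold: int) -> Dict[str, List[str]]:
    """Round-robin fragments over nodes; refuse plans where one node could
    accumulate `threshold` or more fragments (enough to rebuild the data)."""
    per_node = -(-len(fragment_ids) // len(nodes))          # ceiling division
    if per_node >= threshold:
        raise ValueError("not enough nodes to keep every node below the threshold")
    plan: Dict[str, List[str]] = {n: [] for n in nodes}
    for i, frag in enumerate(fragment_ids):
        plan[nodes[i % len(nodes)]].append(frag)
    return plan

# Example: 6 fragments of which any 4 reconstruct the data (threshold = 4),
# scattered over 3 nodes -> each node holds 2 fragments, below the threshold.
plan = scatter([f"frag-{i}" for i in range(6)], ["node-a", "node-b", "node-c"], threshold=4)
for node, frags in plan.items():
    print(node, frags)
```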
125
Political areal-functional organization with special reference to St. Cloud, Minnesota. Brown, Robert Harold (January 1957)
Thesis, University of Chicago. Bibliography: p. 116-123.
126
Le phénomène de circulation des données à caractère personnel dans le cloud : étude de droit matériel dans le contexte de l'Union européenne / The flow of personal data in the cloud: a study of substantive law within the European Union context. Tourne, Elise (11 June 2018)
The legal framework applicable to the gathering and processing by cloud service providers of the personal data of their users raises questions for such users. De facto, no organized legal framework currently allows the flow of personal data in the cloud to be regulated as a whole at the European Union level, whether directly or indirectly. It therefore seems necessary to examine how the law has organized itself in response and to analyze the complementary and/or alternative treatments offered by law, which are less structurally organized and more mosaic-like, but more pragmatic, realistic and politically sustainable. Historically, the flow of personal data has been dealt with almost exclusively via the specific right to the protection of personal data deriving from the European Union. That right, often considered in opposition to the right to the free circulation of data, was initially an emanation of the right to privacy before being established as a fundamental right of the European Union. The treatment provided by the right to the protection of personal data, while it directly targets the data at the heart of the flow phenomenon in the cloud, covers that phenomenon only partially. In addition, despite the entry into force of Regulation 2016/679 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, its effectiveness is questionable: it offers no harmonized solution within the European Union and remains highly dependent on the goodwill and the financial, organizational and human means of the Member States. The complementary and/or alternative treatments to the right to the protection of personal data that exist within the European Union, which can be grouped into technical, contractual and legislative tools, address the data flow phenomenon only indirectly, by framing its cloud environment. Individually, they each capture only a very limited aspect of the flow phenomenon, with varying degrees of effectiveness. Furthermore, technical and contractual tools do not have the legitimacy attached to legislative tools. Nevertheless, combined with one another, they make it possible to address the data flow phenomenon more globally and effectively.
127
Cloud computing - srovnání cloudových úložišť / Cloud computing - comparison of cloud storages. Tymeš, Radek (January 2017)
This thesis deals with the use of cloud computing in the field of online data storage for storing and backing up user data. After defining the target group and its requirements for the service, the ten most suitable options were selected from the overall set of cloud storage services. The selected storage services were then tested and analyzed so that they could be used in the next stage of the thesis. Inputs for a multicriteria analysis of the options were created on the basis of the chosen criteria and their stated or measured values. The most appropriate options were determined by applying this mathematical method, and these options were then evaluated and described from the viewpoint of putting them into operation.
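The abstract does not spell out the exact scoring formula, so the sketch below uses a simple weighted-sum multicriteria model, one common way to turn measured criterion values into a ranking. The criteria, weights, service names, and values are invented for illustration and are not the thesis's data.

```python
# Hypothetical weighted-sum multicriteria scoring (illustrative criteria and weights,
# not the thesis's actual data): normalize each criterion to [0, 1], weight it,
# and rank the cloud storage services by total score.
criteria = {                 # weight, and whether a higher raw value is better
    "free_space_gb": (0.4, True),
    "price_per_tb":  (0.4, False),
    "upload_mbps":   (0.2, True),
}
storages = {
    "Storage A": {"free_space_gb": 15, "price_per_tb": 9.9, "upload_mbps": 40},
    "Storage B": {"free_space_gb": 5,  "price_per_tb": 7.0, "upload_mbps": 55},
    "Storage C": {"free_space_gb": 50, "price_per_tb": 12.0, "upload_mbps": 25},
}

def score(storage: dict) -> float:
    total = 0.0
    for name, (weight, higher_is_better) in criteria.items():
        values = [s[name] for s in storages.values()]
        lo, hi = min(values), max(values)
        norm = (storage[name] - lo) / (hi - lo) if hi > lo else 1.0
        total += weight * (norm if higher_is_better else 1.0 - norm)
    return total

for name, total in sorted(((n, score(s)) for n, s in storages.items()),
                          key=lambda item: item[1], reverse=True):
    print(f"{name}: {total:.2f}")
```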
128
Detecting Compute Cloud Co-residency with Network Flow Watermarking Techniques. Bates, Adam (January 2012)
This paper presents co-resident watermarking, a traffic analysis attack for cloud environments that allows a malicious co-resident virtual machine to inject a watermark signature into the network flow of a target instance. This watermark can be used to exfiltrate co-residency data, compromising isolation assurances. While previous work depends on virtual hypervisor resource management, our approach is difficult to defend against without costly underutilization of the physical machine. We evaluate co-resident watermarking under many configurations, from a local lab environment to production cloud environments. We demonstrate the ability to initiate a covert channel of 4 bits per second, and we can confirm co-residency with a target VM instance in less than 10 seconds. We also show that passive load measurement of the target and behavior profiling are possible. Our investigation demonstrates the need for the careful design of hardware to be used in the cloud.
This thesis includes unpublished co-authored material.
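To give a feel for the kind of low-rate covert channel described above, here is a generic timing-watermark sender sketch: bits are encoded by the presence or absence of a traffic burst in fixed time slots, giving roughly 4 bits per second. The host, port, slot length, and burst size are invented, and this is only a generic flow-watermarking illustration, not the paper's co-resident mechanism, which works through contention on shared hardware rather than by sending its own packets to the receiver.

```python
# Generic timing-watermark sender sketch (illustrative only; NOT the paper's
# co-resident watermarking mechanism). A "1" bit is a burst of UDP packets
# inside a 250 ms slot, a "0" bit is silence -- roughly 4 bits per second.
import socket
import time

SLOT_SECONDS = 0.25                    # 4 slots (bits) per second
BURST_PACKETS = 20                     # packets sent during a "1" slot
TARGET = ("192.0.2.10", 40000)         # hypothetical receiver (TEST-NET address)

def send_watermark(bits: str) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for bit in bits:
            slot_end = time.monotonic() + SLOT_SECONDS
            if bit == "1":
                for _ in range(BURST_PACKETS):
                    sock.sendto(b"x" * 64, TARGET)   # burst marks the slot
            # for "0" we simply stay silent for the rest of the slot
            remaining = slot_end - time.monotonic()
            if remaining > 0:
                time.sleep(remaining)
    finally:
        sock.close()

# Embed an 8-bit co-residency signature into the flow (about 2 seconds).
send_watermark("10110010")
```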
129
Performance et qualité de service de l'ordonnanceur dans un environnement virtualisé / Performance and quality of service of the scheduler in a virtualized environment. Djomgwe Teabe, Boris (12 October 2017)
As a reaction to the increasing costs of setting up and maintaining IT systems, companies are turning to solutions such as Cloud Computing. Cloud computing relies on virtualization as its main technology for resource mutualisation. The use of virtualization brings many challenges; the main ones concern the performance of applications in virtual machines (VMs) and the predictability of that performance. In a virtualized system, hardware resources are shared among all the VMs in the system. In the case of the CPU, the hypervisor's scheduler is in charge of sharing the CPU among all the virtual processors (vCPUs) of the VMs. The hypervisor uses a time-sharing approach to allocate the CPU: each vCPU has access to the CPU periodically. Thus, the vCPUs of the VMs do not have continuous but rather discontinuous access to the CPU. This discontinuity causes many problems in mechanisms such as interrupt handling and low-level synchronization in guest OSs. In this thesis, we propose two contributions to address these problems in virtualization. The first is a new hypervisor scheduler that dynamically adapts the quantum value in the hypervisor according to the type of applications running in the VMs on a multi-core platform. The second contribution is a new synchronization primitive (named I-Spinlock) in the guest OS.

In a cloud providing an IaaS-type service, the VM is the allocation unit. The provider establishes a catalogue of VM types presenting the different quantities of resources that are allocated to the VM for the various devices. These resources allocated to the VM correspond to a contract on a quality of service negotiated by the customer with the provider. The unpredictability of performance is the consequence of the provider's inability to guarantee this quality of service. There are two main causes of this problem in the Cloud: (i) poor resource sharing between the different VMs and (ii) the heterogeneity of infrastructures in hosting centers. In this thesis, we propose two contributions to address the problem of performance unpredictability. The first contribution focuses on the sharing of the software resource responsible for managing device drivers, and proposes an approach that bills the CPU time used by this software layer to the VMs. The second contribution focuses on CPU allocation in heterogeneous Clouds; we propose an allocation approach that guarantees the computing capacity allocated to a VM regardless of the heterogeneity of the CPUs in the infrastructure.
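For the heterogeneity contribution, the general idea can be illustrated with a small sketch: the scheduler cap handed to the hypervisor is scaled by the host's clock frequency so that the absolute compute capacity sold to the VM stays constant across hosts. This is an assumption about the mechanism for illustration purposes, not the thesis's exact implementation, and the frequencies used are invented.

```python
# Hypothetical illustration (not the thesis's exact mechanism): to make the compute
# capacity sold to a VM independent of host CPU heterogeneity, the cap given to the
# hypervisor scheduler can be scaled by the host's clock frequency.

def cpu_cap_percent(requested_ghz: float, host_ghz: float) -> float:
    """Scheduler cap (in % of one physical core) that yields `requested_ghz`
    of effective compute on a core clocked at `host_ghz`."""
    if requested_ghz > host_ghz:
        raise ValueError("one core cannot provide the requested capacity")
    return 100.0 * requested_ghz / host_ghz

# A VM sold as "2.0 GHz of CPU" needs different caps on heterogeneous hosts:
for host in (2.4, 3.0):
    print(f"{host} GHz host -> cap {cpu_cap_percent(2.0, host):.0f}%")
# 2.4 GHz host -> cap 83%
# 3.0 GHz host -> cap 67%
```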
130
Governance a management služeb cloud computingu z pohledu spotřebitele / Cloud computing governance and management from the consumer's point of view. Karkošková, Soňa (January 2017)
Cloud computing brings widely recognized benefits as well as new challenges and risks, resulting mainly from the fact that the cloud service provider is an external third party that provides public cloud services in a multi-tenancy model. At present, widely accepted IT governance frameworks lack focus on cloud computing governance and do not fully address the requirements of cloud computing from the cloud consumer's viewpoint. Given the absence of any comprehensive cloud computing governance and management framework, this doctoral thesis focuses on specific aspects of cloud service governance and management from the consumer perspective. The main aim of the thesis is the design of a methodological framework for cloud service governance and management (Cloud computing governance and management) from the consumer point of view. The cloud consumer is considered to be a medium or large-sized enterprise that uses services in the public cloud computing model, offered and delivered by a cloud service provider.

The theoretical part of the thesis identifies the main theoretical concepts of IT governance, IT management and cloud computing (chapter 2). The analytical part reviews the literature dealing with the specifics of cloud service utilization and their impact on IT governance and IT management, cloud computing governance and cloud computing management (chapter 3). Further, existing IT governance and IT management frameworks (SOA Governance, COBIT, ITIL and MBI) were analysed and evaluated in terms of the use of cloud services from the cloud consumer perspective (chapter 4).

The scientific research was based on the Design Science Research Methodology, with the intention to design and evaluate the artifact: a methodological framework. The main part of the thesis proposes the methodical framework Cloud computing governance and management, based on SOA Governance, COBIT 5 and ITIL 2011 (chapters 5, 6 and 7). Verification of the proposed framework from the cloud consumer perspective was based on the scientific method of a case study (chapter 8), whose main objective was to evaluate and verify the proposed framework in a real business environment. The main contribution of this thesis is both the use of existing knowledge, approaches and methodologies in the area of IT governance and IT management to design the methodical framework Cloud computing governance and management, and the extension of the Management of Business Informatics (MBI) framework with a set of new tasks containing procedures and recommendations relating to the adoption and utilization of cloud computing services.