About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
481

IT-Moln så långt ögat når? : En rapport om IT-stöd för kundarbetet i nystartade företag / IT clouds in sight? : A report on IT support in the client management work for Start up companies

Nordqvist, Anna January 2012 (has links)
This degree project addresses cloud technology and IT support for client management in start-up companies, and was carried out primarily for an external client, the company Approdites AB. The company wanted information and recommendations on potential IT-support services that it could use in its work with customers. The project produced two deliverables: an academic report comprising, among other things, theory, method and prior research, and a report/preliminary study containing service recommendations and guidelines for the company in the areas concerned. Together, the two reports aim to provide Approdites with relevant information about the fields at large and to propose services that could suit its business. The academic report describes how the foundation of the study was laid in terms of theory, methods and other key areas.
482

Aircraft Observations of Sub-cloud Aerosol and Convective Cloud Physical Properties

Axisa, Duncan December 2009 (has links)
This research focuses on aircraft observational studies of aerosol-cloud interactions in cumulus clouds. The data were collected in the summer of 2004, the spring of 2007 and the mid-winter and spring of 2008 in Texas, central Saudi Arabia and Istanbul, Turkey, respectively. A set of 24 pairs of sub-cloud aerosol and cloud penetration data are analyzed. Measurements of fine and coarse mode aerosol concentrations from 3 different instruments were combined and fitted with lognormal distributions. The fit parameters of the lognormal distributions are compared with cloud droplet effective radii retrieved from 260 cloud penetrations. Cloud condensation nuclei (CCN) measurements for a subset of 10 cases from the Istanbul region are compared with concentrations predicted from aerosol size distributions. Ammonium sulfate was assumed to represent the soluble component of aerosol with dry sizes smaller than 0.5 μm and sodium chloride for aerosol larger than 0.5 μm. The measured CCN spectrum was used to estimate the soluble fraction. The correlations of the measured CCN concentration with the predicted CCN concentration were strong (R² > 0.89) for supersaturations of 0.2, 0.3 and 0.6%. The measured concentrations were typically consistent with an aerosol having a soluble fraction between roughly 0.5 and 1.0, suggesting a contribution of sulfate or some other similarly soluble inorganic compound. The predicted CCN were found to vary by ±3.7% when the soluble fraction was varied by 0.1. Cumulative aerosol concentrations at cutoff dry diameters of 1.1, 0.1 and 0.06 μm were found to be correlated with cloud condensation nuclei concentrations but not with maximum cloud base droplet concentrations. It is also shown that in some cases the predominant mechanisms involved in the formation of precipitation were modified by the aerosol properties. This study suggests that CCN-forced variations in cloud droplet number concentration can change the effective radius profile and the type of precipitation hydrometeors. These differences may have a major impact on the global hydrological cycle and energy budget.
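The closure exercise described here, predicting CCN from fitted lognormal modes and an assumed soluble fraction, can be sketched in a few lines. The sketch below uses classical Köhler theory with ammonium sulfate properties; all parameter values and the single-mode example are illustrative assumptions, not the instrument configuration or fits from the thesis.

```python
import math

M_W, RHO_W, R_GAS, SIGMA_W = 0.018, 1000.0, 8.314, 0.072   # water, SI units
M_S, RHO_S, NU = 0.132, 1770.0, 3                          # ammonium sulfate

def critical_dry_diameter(s, soluble_fraction, T=285.0):
    """Smallest dry diameter activated at fractional supersaturation s
    (e.g. s = 0.003 for 0.3%), from classical Koehler theory."""
    a = 4.0 * SIGMA_W * M_W / (R_GAS * T * RHO_W)              # Kelvin term [m]
    b = NU * soluble_fraction * (M_W / M_S) * (RHO_S / RHO_W)  # Raoult term
    return (4.0 * a**3 / (27.0 * b * s**2)) ** (1.0 / 3.0)

def predicted_ccn(modes, s, soluble_fraction):
    """Cumulative number above the critical diameter, summed over lognormal
    modes given as (N [cm^-3], geometric mean diameter Dg [m], sigma_g)."""
    d_c = critical_dry_diameter(s, soluble_fraction)
    return sum(
        0.5 * n * math.erfc(math.log(d_c / dg) / (math.sqrt(2.0) * math.log(sg)))
        for n, dg, sg in modes
    )

# Illustrative accumulation mode at 0.3% supersaturation, soluble fraction 0.7
print(round(predicted_ccn([(800.0, 0.08e-6, 1.8)], 0.003, 0.7)))  # ~470 cm^-3
```

Varying the soluble fraction and comparing the prediction against measured spectra is the essence of the soluble-fraction estimate reported above.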
483

Monitoring-as-a-service in the cloud

Meng, Shicong 03 April 2012 (has links)
State monitoring is a fundamental building block for Cloud services. The demand for providing state monitoring as a service (MaaS) continues to grow, as evidenced by CloudWatch from Amazon EC2, which allows cloud consumers to pay for monitoring a selection of performance metrics with coarse-grained periodic sampling of runtime states. One of the key challenges for wide deployment of MaaS is to provide a better balance among a set of critical quality and performance parameters, such as accuracy, cost, scalability and customizability. This dissertation research is dedicated to innovative research and development of an elastic framework for providing state monitoring as a service (MaaS). We analyze limitations of existing techniques, systematically identify the needs and challenges at different layers of a Cloud monitoring service platform, and develop a suite of distributed monitoring techniques to support a flexible monitoring infrastructure, cost-effective state monitoring and monitoring-enhanced Cloud management. At the monitoring infrastructure layer, we develop techniques to support multi-tenancy of monitoring services by exploring cost sharing between monitoring tasks and safeguarding monitoring resource usage. To provide elasticity in monitoring, we propose techniques that allow the monitoring infrastructure to self-scale with monitoring demand. At the cost-effective state monitoring layer, we devise several new state monitoring functionalities to meet unique functional requirements in Cloud monitoring. Violation-likelihood state monitoring explores the benefits of consolidating monitoring workloads by allowing utility-driven monitoring intensity tuning on individual monitoring tasks and identifying correlations between monitoring tasks. Window-based state monitoring leverages distributed windows for the best monitoring accuracy and communication efficiency. Reliable state monitoring is robust to both transient and long-lasting communication issues caused by component failures or cross-VM performance interference. At the monitoring-enhanced Cloud management layer, we devise a novel technique that learns the performance characteristics of both Cloud infrastructure and Cloud applications from cumulative performance monitoring data to increase cloud deployment efficiency.
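To make the window-based idea concrete, here is a minimal sketch (not the dissertation's algorithm): rather than alerting on every instantaneous breach, the monitor reports a violation only when the threshold is breached in a minimum share of a sliding window, trading a little detection delay for accuracy and communication savings.

```python
from collections import deque

class WindowedStateMonitor:
    """Reports a violation only when the metric exceeds its threshold in
    at least `min_breaches` of the last `window` samples."""
    def __init__(self, threshold, window=10, min_breaches=6):
        self.threshold = threshold
        self.breaches = deque(maxlen=window)
        self.min_breaches = min_breaches

    def observe(self, value):
        self.breaches.append(value > self.threshold)
        return sum(self.breaches) >= self.min_breaches

mon = WindowedStateMonitor(threshold=0.9)
readings = [0.95, 0.92, 0.50, 0.93, 0.96, 0.91, 0.94]   # one transient dip
print([mon.observe(r) for r in readings])   # violation flagged only at the end
```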
484

Evaluating aerosol/cloud/radiation process parameterizations with single-column models and Second Aerosol Characterization Experiment (ACE-2) cloudy column observations

Menon, Surabi, Brenguier, Jean-Louis, Boucher, Olivier, Davison, Paul, Del Genio, Anthony D., Feichter, Johann, Ghan, Steven, Guibert, Sarah, Liu, Xiaohong, Lohmann, Ulrike, Pawlowska, Hanna, Penner, Joyce E., Quaas, Johannes, Roberts, David L., Schüller, Lothar, Snider, Jefferson 21 August 2015 (has links) (PDF)
The Second Aerosol Characterization Experiment (ACE-2) data set, along with ECMWF reanalysis meteorological fields, provided the basis for the single-column model (SCM) simulations performed as part of the PACE (Parameterization of the Aerosol Indirect Climatic Effect) project. Six different SCMs were used to simulate ACE-2 case studies of clean and polluted cloudy boundary layers, with the objective of identifying limitations of the aerosol/cloud/radiation interaction schemes within the range of uncertainty in in situ, reanalysis and satellite-retrieved data. The exercise proceeds in three steps. First, SCMs are configured with the same fine vertical resolution as the ACE-2 in situ database to evaluate the numerical schemes for prediction of aerosol activation, radiative transfer and precipitation formation. Second, the same test is performed at the coarser vertical resolution of GCMs to evaluate its impact on the performance of the parameterizations. Finally, SCMs are run for a 24–48 hr period to examine predictions of boundary layer clouds when initialized with large-scale meteorological fields. Several schemes were tested for the prediction of cloud droplet number concentration (N). Physically based activation schemes using vertical velocity show noticeable discrepancies compared to empirical schemes, due to biases in the diagnosed cloud-base vertical velocity. Prognostic schemes exhibit larger variability than diagnostic ones, due to a coupling between aerosol activation and drizzle scavenging in the calculation of N. When SCMs are initialized at a fine vertical resolution with locally observed vertical profiles of liquid water, predicted optical properties are comparable to observations. Predictions, however, degrade at coarser vertical resolution and are more sensitive to the mean liquid water path than to its spatial heterogeneity. Predicted precipitation fluxes are severely underestimated and improve when accounting for sub-grid liquid water variability. Results from the 24–48 hr runs suggest that most models have problems in simulating boundary layer cloud morphology, since the large-scale initialization fields do not accurately reproduce observed meteorological conditions. As a result, models significantly overestimate optical properties. Improved cloud morphologies were obtained for models with subgrid inversion and subgrid cloud thickness schemes. This may be a result of representing subgrid-scale effects, though we do not rule out the possibility that better large-scale forcing data may also improve cloud morphology predictions.
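The coupling noted above, where prognostic schemes tie aerosol activation to drizzle scavenging in the droplet number budget, can be caricatured with a one-line rate equation (a toy illustration with invented rates, not one of the six SCMs):

```python
def step_droplet_number(n, dt, activation, scavenging_rate):
    """One explicit Euler step of dN/dt = activation - scavenging_rate * N.
    A diagnostic scheme would instead reset N from the aerosol spectrum each
    step, decoupling it from the drizzle sink and reducing its variability."""
    return n + dt * (activation - scavenging_rate * n)

n = 100.0   # droplet number concentration [cm^-3]
for _ in range(10):   # ten one-minute steps with constant forcing
    n = step_droplet_number(n, dt=60.0, activation=0.5, scavenging_rate=2e-3)
print(round(n, 1))   # relaxes toward activation/scavenging_rate = 250 cm^-3
```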
485

Assessing and Improving Interoperability of Distributed Systems

Rings, Thomas 23 January 2013 (has links)
Interoperability of distributed systems is a foundation for developing new and innovative business solutions. It allows existing services offered on different systems to be combined so that new or extended services can be provided; such integration can also increase the reliability of services. Achieving and assessing interoperability, however, is costly and time-consuming, and systematic methods are needed to ensure and evaluate it. To achieve and assess system interoperability systematically, this thesis develops a process for the improvement and assessment of interoperability (IAI). The IAI process comprises three phases and can assess and improve the interoperability of distributed homogeneous as well as heterogeneous systems. Assessment is carried out through interoperability tests, which can be executed manually or automatically. For the automation of interoperability tests, a new methodology is presented that includes a development process for automated interoperability test systems. The methodology facilitates the formal and systematic assessment of the interoperability of distributed systems; compared with manual interoperability testing, it ensures higher test coverage, consistent test execution and repeatable interoperability tests. The practical applicability of the IAI process and of the methodology for automated interoperability tests is demonstrated in three case studies. In the first case study, the process and methodology are instantiated for Internet Protocol Multimedia Subsystem (IMS) networks, whose interoperability had previously been tested only manually. In the second and third case studies, the IAI process is applied to assess and improve the interoperability of grid and cloud systems, which is challenging because, unlike IMS networks, grid and cloud systems are heterogeneous. The case studies present integration and interoperability solutions for grid systems with Infrastructure as a Service (IaaS) cloud systems and with Platform as a Service (PaaS) cloud systems; these solutions have not previously been documented in the literature. They enable the complementary use of grid and cloud systems, simplified migration of grid applications to a cloud system, and efficient resource utilization. The interoperability solutions are assessed using the IAI process: the tests for grid and IaaS cloud systems were performed manually, while the interoperability of grid and PaaS cloud systems is assessed with the methodology for automated interoperability tests. Interoperability tests and their assessment had not previously been discussed in the grid and cloud communities, although they provide a basis for developing standardized interfaces for achieving interoperability between grid and cloud systems.
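A flavor of what an automated interoperability test in this style looks like (a self-contained sketch; the `StubEndpoint` adapters are hypothetical stand-ins for real IMS, grid or cloud endpoints, and the thesis builds such test systems through its own development process):

```python
import unittest

class StubEndpoint:
    """Stand-in for a system under test; a real adapter would wrap an IMS
    network element or a grid/cloud job API (hypothetical interface)."""
    def __init__(self, name):
        self.name = name

    def submit_and_wait(self, job):
        # Pretend both systems execute the job and return its output.
        return {"exit_code": 0, "stdout": job["cmd"].split()[-1]}

class InteroperabilityTest(unittest.TestCase):
    """Drive two heterogeneous systems through the same scenario and
    compare their observable behaviour."""
    def setUp(self):
        self.system_a = StubEndpoint("grid")
        self.system_b = StubEndpoint("paas-cloud")

    def test_job_submission_roundtrip(self):
        job = {"cmd": "echo interop"}
        res_a = self.system_a.submit_and_wait(job)
        res_b = self.system_b.submit_and_wait(job)
        self.assertEqual(res_a["exit_code"], 0)
        self.assertEqual(res_a["stdout"], res_b["stdout"])

if __name__ == "__main__":
    unittest.main()
```

Automating such scenarios is what yields the repeatability and coverage gains claimed over manual testing.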
486

Scalability and performance management of internet applications in the cloud

Dawoud, Wesam January 2013 (has links)
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly with increases in workload. However, the overhead of provisioning resources exposes the Internet application to periods of under-provisioning and performance degradation. Moreover, performance interference due to consolidation in the cloud environment complicates the performance management of Internet applications. In this dissertation, we propose two approaches to mitigate the impact of the resource-provisioning overhead. The first approach employs control theory to scale resources vertically and cope quickly with workload increases. This approach assumes that the provider has knowledge of and control over the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side one that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution, finding the scale thresholds that achieve the highest performance with the lowest increase in cost. Moreover, the second approach employs a proposed time-series forecasting algorithm to scale the application proactively and avoid under-utilization periods. Furthermore, to mitigate the impact of interference on Internet application performance, we developed a system that finds and eliminates the VMs suffering from performance interference. The developed system is a lightweight solution that does not require provider involvement. To evaluate our approaches and the designed algorithms at large scale, we developed a simulator called ScaleSim. In the simulator, we implemented scalability components that act as the scalability components of Amazon EC2. The current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from a real environment, and the workload is generated from the access logs of the 1998 World Cup website. The results show that optimizing the scalability thresholds and adopting proactive scalability can mitigate 88% of the impact of the resource-provisioning overhead with only a 9% increase in cost.
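The two ingredients of the second approach, scale thresholds applied to a forecast rather than to the current load, can be sketched as follows. The forecasting function is a deliberately naive linear-trend stand-in for the thesis's algorithm, and the threshold values are illustrative:

```python
def forecast_next(history, window=5):
    """Naive proactive element: extrapolate the recent linear trend."""
    recent = history[-window:]
    trend = (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
    return recent[-1] + trend

def scaling_decision(history, vms, up_at=0.75, down_at=0.30, per_vm=100.0):
    """Scale out when *predicted* utilization crosses the upper threshold,
    scale in below the lower one (the hysteresis gap avoids oscillation)."""
    predicted_util = forecast_next(history) / (vms * per_vm)
    if predicted_util > up_at:
        return vms + 1
    if predicted_util < down_at and vms > 1:
        return vms - 1
    return vms

load = [120, 150, 190, 240, 300]       # requests/s, rising workload
print(scaling_decision(load, vms=4))   # proactively adds a VM -> 5
```

Choosing the two thresholds well is exactly the cost/performance trade-off the multi-goal optimization addresses.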
487

An automated approach to create, manage and analyze large-scale experiments for elastic n-tier application in clouds

Jayasinghe, Indika D. 20 September 2013 (has links)
Cloud computing has revolutionized the computing landscape by providing on-demand, pay-as-you-go access to elastically scalable resources. Many applications are now being migrated from on-premises data centers to public clouds; yet the transition to the cloud is not always straightforward and smooth. An application that performed well in an on-premises data center may not perform identically in a public computing cloud, because many variables, such as virtualization, can impact the application's performance. Collecting significant performance data through experimental study can reveal the cloud's complexity, particularly as it relates to performance. However, conducting large-scale system experiments is particularly challenging because of the practical difficulties that arise during experimental deployment, configuration, execution and data processing. In spite of these complexities, we argue that a promising approach for addressing these challenges is to leverage automation to facilitate the exhaustive measurement of large-scale experiments. Automation provides numerous benefits: it removes the error-prone and cumbersome involvement of human testers, reduces the burden of configuring and running large-scale experiments for distributed applications, and accelerates the process of reliable application testing. In our approach, we have automated three key activities associated with the experiment measurement process: create, manage and analyze. In create, we prepare the platform and deploy and configure applications. In manage, we initialize the application components (in a reproducible and verifiable order), execute workloads, collect resource monitoring and other performance data, and parse and upload the results to the data warehouse. In analyze, we process the collected data using various statistical and visualization techniques to understand and explain performance phenomena. In our approach, a user provides the experiment configuration file; at the end, the user merely receives the results while the framework does everything else. We enable the automation through code generation. From an architectural viewpoint, our code generator adopts the compiler approach of multiple, serial transformative stages; the hallmarks of this approach are that stages typically operate on an XML document serving as the intermediate representation, and XSLT performs the code generation. Our automated approach to large-scale experiments has enabled cloud experiments to scale well beyond the limits of manual experimentation, and it has enabled us to identify non-trivial performance phenomena that would not have been observable otherwise.
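In the spirit of the code-generation step (a sketch only: the real framework transforms an XML intermediate representation with XSLT, and the config keys and command names below are hypothetical), a declarative experiment description can be expanded into ordered create/manage steps:

```python
config = {
    "app": "rubis",
    "nodes": 4,
    "workloads": [100, 400, 700],   # concurrent users per run
}

def generate_run_script(cfg):
    """Expand a declarative experiment config into ordered steps.
    The emitted commands are illustrative placeholders, not a real CLI."""
    lines = [f"provision --app {cfg['app']} --nodes {cfg['nodes']}"]
    for users in cfg["workloads"]:
        lines.append(f"run-workload --users {users} --collect-metrics")
    lines.append("upload-results --warehouse default")
    return "\n".join(lines)

print(generate_run_script(config))
```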
488

Secure Service Provisioning in a Public Cloud

Aslam, Mudassar January 2012 (has links)
The evolution of cloud technologies, which allow the provisioning of IT resources over the Internet, promises many benefits for individuals and enterprises alike. However, this new resource provisioning model comes with security challenges that did not exist in traditional resource procurement mechanisms. We focus on the possible security concerns of a cloud user (e.g., an organization or government department) that leases cloud services, such as resources in the form of Virtual Machines (VMs), from a public Infrastructure-as-a-Service (IaaS) provider. There are many security-critical areas in cloud systems, such as data confidentiality, resource integrity, service compliance and security audits. In this thesis, we focus on the security aspects that result in a trust deficit among cloud stakeholders and hence hinder security-sensitive users from benefiting from the opportunities offered by cloud computing. Based upon our findings from the security requirements analysis, we propose solutions that enable user trust in public IaaS clouds. Our solutions mainly deal with the secure life-cycle management of the user VM, including mechanisms for VM launch and migration. The VM launch and migration solutions ensure that the user VM is always protected in the cloud by only allowing it to run on platforms the user trusts. This is done by using trusted computing techniques that allow users to remotely attest the cloud platforms and hence rate them as trusted or untrusted. We also provide a prototype implementation to prove the feasibility of the proposed trust-enabling principles used in the VM launch and migration solutions.
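The trust decision that gates VM launch and migration can be reduced to a whitelist check over platform measurements. The sketch below abstracts real remote attestation (a TPM quote is a signed, nonce-fresh PCR digest) into plain hashes; the measurement strings are invented:

```python
import hashlib

def measurement_digest(measurements):
    """Hash an ordered list of platform measurements, standing in for a
    PCR composite; a real quote would be TPM-signed and nonce-fresh."""
    h = hashlib.sha256()
    for m in measurements:
        h.update(m.encode())
    return h.hexdigest()

# Digests of platform states the user has decided to trust
TRUSTED_PLATFORMS = {measurement_digest(["bios-v2.1", "xen-4.1", "dom0-hardened"])}

def may_launch_vm(reported_measurements):
    """Release the VM only to a platform whose attested state is whitelisted."""
    return measurement_digest(reported_measurements) in TRUSTED_PLATFORMS

print(may_launch_vm(["bios-v2.1", "xen-4.1", "dom0-hardened"]))   # True
print(may_launch_vm(["bios-v2.1", "xen-4.1", "dom0-unknown"]))    # False
```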
489

Exploiting weather forecast data for cloud detection

Mackie, Shona January 2009 (has links)
Accurate, fast detection of clouds in satellite imagery has many applications, for example Numerical Weather Prediction (NWP) and climate studies of both the atmosphere and of the Earth's surface temperature. Most operational techniques for cloud detection rely on the differences between observations of cloud and of clear sky being more or less constant in space and in time. In reality, this is not the case: different clouds have different spectral properties, and different cloud types are more or less likely in different places and at different times, depending on atmospheric conditions and on the Earth's surface properties. Observations of clear sky also vary in space and time, depending on atmospheric and surface conditions, and on the presence or absence of aerosol particles. The Bayesian approach adopted in this project allows pixel-specific physical information (for example from NWP) to be used to predict pixel-specific observations of clear sky. A physically-based, spatially- and temporally-specific probability that each pixel contains a cloud observation is then calculated. An advantage of this approach is that identification of ambiguously classed pixels from a probabilistic result is straightforward, in contrast to the binary result generally produced by operational techniques. This project has developed and validated the Bayesian approach to cloud detection, and has extended the range of applications for which it is suitable, achieving skill scores that match or exceed those achieved by operational methods in every case. High temperature gradients can make observations of clear sky around ocean fronts, particularly at thermal wavelengths, appear similar to cloud observations. To address this potential source of ambiguous cloud detection results, a region of imagery acquired by the AATSR sensor, noted to contain some ocean fronts, was selected. Pixels in the region were clustered according to their spectral properties with the aim of separating pixels that correspond to different thermal regimes of the ocean. The mean spectral properties of pixels in each cluster were then processed using the Bayesian cloud detection technique and the resulting posterior probability of clear was assigned to individual pixels. Several clustering methods were investigated, and the most appropriate, which allowed pixels to be associated with multiple clusters through a normalized vector of 'membership strengths', was used to conduct a case study. The distribution of final calculated probabilities of clear became markedly more bimodal when clustering was included, indicating fewer ambiguous classifications, but at the cost of some single-pixel clouds being missed. While further investigation could provide a solution to this, the computational expense of the clustering method made it impractical to include in the work of this project. This new Bayesian approach to cloud detection has been successfully developed by this project to a point where it has been released under public license. Initially designed as a tool to aid retrieval of sea surface temperature from night-time imagery, this project has extended the Bayesian technique to be suitable for imagery acquired over land as well as sea, and for day-time as well as for night-time imagery. This was achieved using the land surface emissivity and surface reflectance parameter products available from the MODIS sensor.
This project added a visible Radiative Transfer Model (RTM), developed at the University of Edinburgh, and a kernel-based surface reflectance model, adapted here from that used by the MODIS sensor, to the cloud detection algorithm. In addition, the cloud detection algorithm was adapted to be more flexible, making its implementation for data from the SEVIRI sensor straightforward. A database of 'difficult' cloud and clear targets, in which a wide range of both spatial and temporal locations was represented, was provided by Météo-France and used in this work to validate the extensions made to the cloud detection scheme and to compare the skill of the Bayesian approach with that of operational approaches. For night land and sea imagery, the Bayesian technique, with the improvements and extensions developed by this project, achieved skill scores 10% and 13% higher than Météo-France, respectively. For daytime sea imagery, the skill scores were within 1% of each other for both approaches, while for land imagery the Bayesian method achieved a 2% higher skill score. The main strength of the Bayesian technique is the physical basis of the differentiation between clear and cloud observations. Using NWP information to predict pixel-specific observations for clear sky is relatively straightforward, but making such predictions for cloud observations is more complicated. The technique therefore relies on an empirical distribution rather than a pixel-specific prediction for cloud observations. To address this, this project developed a means of predicting cloudy observations through fast forward-modelling of pixel-specific NWP information. All cloud fields in the pixel-specific NWP data were set to 0, and clouds were added to the profile at discrete intervals through the atmosphere, with cloud water and ice paths (cwp, cip) also set to values spaced exponentially at discrete intervals up to saturation, and with cloud pixel fraction set to 25%, 50%, 75% and 100%. Only single-level, single-phase clouds were modelled, with the justification that the resulting distribution of predicted observations, once smoothed through considerations of uncertainties, is likely to include observations that would correspond to multi-phase and multi-level clouds. A fast RTM was run on the profile information for each of these individual clouds, and cloud altitude-, cloud pixel fraction- and channel-specific relationships between cwp (and similarly cip) and predicted observations were calculated from the results of the RTM. These relationships were used to infer predicted observations for clouds with cwp/cip values other than those explicitly forward-modelled. The parameters used to define the relationships were interpolated to define relationships for predicted observations of cloud at 10 m vertical intervals through the atmosphere, with pixel coverage ranging from 25% to 100% in increments of 1%. A distribution of predicted cloud observations is thus achieved without explicit forward-modelling of an impractical number of atmospheric states. Weights are applied to the representation of individual clouds within the final Probability Density Function (PDF) in order to make the distribution of predicted observations realistic, according to the pixel-specific NWP data and to distributions seen in a global reference dataset of NWP profiles from the European Centre for Medium-Range Weather Forecasts (ECMWF).
The distribution is then convolved with uncertainties in the forward-modelling and in the NWP data, and with sensor noise, to create the final PDF in observation space, from which the conditional probability that the pixel observation corresponds to a cloud observation can be read. Although a relatively fast computational implementation of the technique was achieved, the results are disappointingly poor for the SEVIRI-acquired dataset, provided by Météo-France, against which validation was carried out. This is thought to be because the uncertainties in the NWP data, and the forward-modelling's dependence on those uncertainties, are poorly understood and are treated too optimistically in the algorithm. Including more errors in the convolution introduces the problem of quantifying those errors (a non-trivial task) and would increase the processing time, making implementation impractical. In addition, if the uncertainties considered are too high, then a PDF flatter than the empirical distribution currently used would be produced, making the technique less useful.
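Underneath all of this machinery sits a per-pixel application of Bayes' theorem: the prior probability of clear is combined with the likelihood of the observation under clear (predicted from NWP via the RTM) and under cloud (empirical or forward-modelled). A toy single-channel version with Gaussian likelihoods and invented numbers:

```python
import math

def gaussian(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_clear_given_obs(obs, clear_mean, clear_sigma, cloud_mean, cloud_sigma, prior_clear):
    """Posterior probability that a pixel is clear, for one channel.
    clear_mean/clear_sigma would come from pixel-specific NWP plus the RTM;
    the cloud distribution from the empirical or forward-modelled PDF."""
    like_clear = gaussian(obs, clear_mean, clear_sigma)
    like_cloud = gaussian(obs, cloud_mean, cloud_sigma)
    num = like_clear * prior_clear
    return num / (num + like_cloud * (1.0 - prior_clear))

# 11 um brightness temperature: clear sky predicted at 288 K, clouds colder
# and much more broadly distributed (all values illustrative)
print(round(p_clear_given_obs(286.0, 288.0, 1.5, 270.0, 10.0, prior_clear=0.7), 3))
```

The probabilistic output is what makes ambiguous pixels identifiable, in contrast to the binary masks of operational schemes.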
490

A Trusted Storage System for the Cloud

Karumanchi, Sushama 01 January 2010 (has links)
Data stored in third-party storage systems like the cloud might not be secure, since confidentiality and integrity of data are not guaranteed. Though cloud computing provides cost-effective storage services, it is a third-party service, and so a client cannot trust the cloud service provider to store its data securely within the cloud. Hence, many organizations and users may not be willing to use cloud services to store their data until certain security guarantees are made. In this thesis, a solution to the problem of securely storing a client's data, maintaining its confidentiality and integrity within the cloud, is developed. Five protocols are developed which ensure that the client's data is stored only on trusted storage servers, replicated only on trusted storage servers, and accessed securely by the data owners and other privileged users. The system is based on trusted computing platform technology [11]. It uses a Trusted Platform Module, specified by the Trusted Computing Group [11]. An encrypted file system is used to encrypt the user's data. The system provides data security against a system administrator in the cloud.
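The confidentiality-plus-integrity property the encrypted file system provides can be illustrated with an authenticated-encryption primitive. This is a stand-in sketch using the `cryptography` package's Fernet (AES-CBC with an HMAC tag), not the thesis's actual file system or its TPM-based key handling:

```python
from cryptography.fernet import Fernet, InvalidToken

# In the trusted-storage setting this key would be sealed to a TPM on a
# trusted server, not held in memory like this.
key = Fernet.generate_key()
box = Fernet(key)

record = box.encrypt(b"client ledger, Q3")   # confidentiality
print(box.decrypt(record))                   # integrity checked on every read

tampered = record[:-1] + bytes([record[-1] ^ 1])
try:
    box.decrypt(tampered)
except InvalidToken:
    print("integrity violation detected")    # authentication tag mismatch
```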
