751 |
Examining the impact of wildfire smoke aerosol on clouds, precipitation, and radiative fluxes in Northern America and Russia using a fully coupled meso-scale model WRF-Chem-SMOKE and satellite data
Zheng, Lu 2014 August 1900 (has links)
We developed a fully coupled meso-scale model, WRF-Chem-SMOKE, by incorporating a selection of smoke emission models and improving the representation of aerosol-cloud interactions in the microphysics scheme. We find that differences in smoke emissions across datasets, even within a single fire cluster, can lead to significant discrepancies in modeled AODs, and that the integrated smoke emission dataset improves the prediction of modeled AODs. We also find that modeled cloud properties and precipitation are extremely sensitive to smoke loadings: higher loadings suppress precipitation initially, because smoke reduces the collision-coalescence and riming processes, but ultimately invigorate it.
|
752 |
Dealing with Noisy Data for the Generation of Point Cloud Skeletons
Lin, Yi Peng (林逸芃) Unknown Date (has links)
The skeleton of a visual object or a 3D model is a representation that can reveal the topological structure of the object or the model, and therefore it can be used in various applications such as shape analysis and computer animation. Over the years, there have been many studies on the extraction of the skeleton of an object. However, most of those studies focused on complete and clean data (even though some of them took missing values into account), while in practice we often have to deal with incomplete and unclean data, since there may be missing values and noise in the data. In this thesis, we study noise handling, and we focus on preprocessing a noisy point cloud for the generation of the skeleton of the corresponding object. In the proposed approach, we first identify data points that might be noise and then lower the impact of the noisy values. For identifying noise, we use supervised learning on data whose features are density and distance. For lowering the impact of the noisy values, we use triangular surfaces and projection. The preprocessing method is flexible, because it can be used with any tool that can extract skeletons from point clouds. We conduct experiments with several 3D models and various settings, and the results show the effectiveness of the proposed preprocessing approach.
Compared with the unprocessed model (that is, the original model with added noise), if we apply the proposed preprocessing approach to a noisy point cloud before using a tool to generate the skeleton, we obtain a skeleton that retains more of the topological characteristics of the original object. Our contributions are as follows: First, we show how machine learning can help computer graphics. Second, we propose using distance and density as features in learning for noise identification. Third, we propose using triangular surfaces and projection to save execution time in noise reduction. Fourth, the proposed approach could be used to improve 3D scanning.
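The density-and-distance feature idea above can be sketched as follows; the exact feature definitions (k-th-neighbour distance and a density derived from it) and the simple threshold rule standing in for the trained classifier are illustrative assumptions, not the thesis's actual implementation:

```python
import math

def knn_features(points, k=3):
    # Per-point feature pair: distance to the k-th nearest neighbour and a
    # local density estimate derived from it. These mirror the "density and
    # distance" features fed to the supervised learner; the precise
    # definitions here are assumptions for illustration.
    feats = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        d_k = dists[k - 1]
        density = k / (d_k ** 3) if d_k > 0 else float("inf")
        feats.append((d_k, density))
    return feats

def flag_noise(points, k=3, factor=3.0):
    # Stand-in for the trained classifier: flag points whose k-NN distance
    # is far above the median, i.e. points lying in sparse regions.
    feats = knn_features(points, k)
    d = sorted(f[0] for f in feats)
    median = d[len(d) // 2]
    return [f[0] > factor * median for f in feats]
```

A flagged point would then be projected onto a nearby triangular surface rather than discarded, which is the noise-reduction step the abstract describes.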
|
753 |
Fuzzy Authorization for Cloud Storage
Zhu, Shasha January 2013 (has links)
It is widely accepted that OAuth is the most popular authorization scheme adopted and implemented in both industry and academia; however, it is difficult to adapt OAuth to the situation in which an online application registered with one cloud party intends to access data residing in another cloud party. In this thesis, by leveraging the Ciphertext-Policy Attribute-Based Encryption (CP-ABE) technique and an ElGamal-like mask over the protocol, we propose a reading authorization scheme among diverse clouds, called fuzzy authorization, to enable an application registered with one cloud party to access data residing in another. More importantly, we enable fuzziness of authorization, enhancing the scalability and flexibility of file sharing, by taking advantage of the innate connection between Linear Secret-Sharing Schemes and Generalized Reed-Solomon codes. Furthermore, by conducting error checking and error correction, we eliminate the operation of satisfying an access tree. In addition, automatic revocation is realized with an update of the TimeSlot attribute when the data owner modifies the data. We prove the security of our scheme under the selective-attribute security model. The protocol flow of fuzzy authorization is implemented with OMNeT++ 4.2.2, and the bilinear pairing is realized with the PBC library. Simulation results show that our scheme achieves fuzzy authorization among heterogeneous clouds with security and efficiency.
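The LSSS/Reed-Solomon connection that fuzzy authorization exploits can be illustrated with plain Shamir threshold sharing, whose shares form a Reed-Solomon codeword (and can therefore be error-checked and corrected); the field choice and function names below are illustrative, and the real scheme layers CP-ABE and the ElGamal-like mask on top:

```python
import random

P = 2**31 - 1  # a Mersenne prime; illustrative field choice

def share(secret, t, n):
    # Shamir/LSSS sharing: evaluate a random degree-(t-1) polynomial with
    # the secret as constant term at points 1..n. The share vector is a
    # codeword of a generalized Reed-Solomon code.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over GF(P); any t valid shares
    # recover the secret exactly.
    secret = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret
```

Because any t-subset suffices and corrupted shares are detectable as codeword errors, authorization can tolerate imprecise ("fuzzy") attribute matches, which is the property the thesis builds on.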
|
754 |
Towards an MPI-like Framework for Azure Cloud Platform
Karamati, Sara 12 August 2014 (has links)
Message passing interface (MPI) has been widely used for implementing parallel and distributed applications. The emergence of cloud computing offers a scalable, fault-tolerant, on-demand alternative to traditional on-premise clusters. In this thesis, we investigate the possibility of adopting the cloud platform as an alternative to conventional MPI-based solutions. We show that the cloud platform can exhibit competitive performance and benefit its users with a fault-tolerant architecture and on-demand access for a robust solution. Extensive research is done to identify the difficulties of designing and implementing an MPI-like framework for the Azure cloud platform. We present the details of the key components required for implementing such a framework, along with our experimental results for benchmarking multiple basic operations of the MPI standard implemented in the cloud and its practical application in solving well-known large-scale algorithmic problems.
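A minimal in-process sketch of the message-passing core such a framework needs is shown below: one inbox per rank with blocking receive, mirroring MPI_Send/MPI_Recv semantics, plus a naive reduction. In the actual framework these inboxes would be cloud-hosted queues; all class and method names here are assumptions, not the thesis's API:

```python
import queue

class MiniMPI:
    # Toy stand-in for a queue-backed MPI transport: one inbox per rank,
    # blocking point-to-point messages, and a sum-reduction at a root rank.
    def __init__(self, size):
        self.size = size
        self.inbox = [queue.Queue() for _ in range(size)]

    def send(self, data, dest):
        # In a cloud deployment this would enqueue to a storage queue
        # named after the destination rank.
        self.inbox[dest].put(data)

    def recv(self, rank):
        # Blocking receive, mirroring MPI_Recv semantics.
        return self.inbox[rank].get()

    def reduce_sum(self, values, root=0):
        # Naive MPI_Reduce: every value is sent to the root, which sums.
        for v in values:
            self.send(v, root)
        return sum(self.recv(root) for _ in values)
```

Benchmarking such basic operations (point-to-point latency, reduction throughput) against the queue backend is essentially the experimental methodology the abstract describes.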
|
755 |
System abstractions for resource scaling on heterogeneous platforms
Gupta, Vishal 13 January 2014 (has links)
The increasingly diverse nature of modern applications makes it critical for future systems to have dynamic resource scaling capabilities that enable them to adapt their resource usage to meet user requirements. Such mechanisms should be fine-grained for resource-efficient operation and should also provide a wide scaling range to support a variety of applications with diverse needs. To this end, heterogeneous platforms, consisting of components with varying characteristics, have been proposed to provide improved performance and efficiency over homogeneous configurations by making it possible to execute applications on the most suitable component. However, the introduction of such heterogeneous architectural components requires system software to embrace the complexity associated with heterogeneity in order to manage these components efficiently. Diversity across vendors and rapidly changing hardware make it difficult to incorporate heterogeneity-aware resource management mechanisms into mainstream systems, affecting the widespread adoption of these platforms.
Addressing these issues, this dissertation presents novel abstractions and mechanisms for heterogeneous platforms that decouple heterogeneity from management operations by masking the differences due to heterogeneity from applications. By exporting a homogeneous interface over heterogeneous components, it proposes the scalable 'resource state' abstraction, which allows applications to express their resource requirements; these requirements are then dynamically and transparently mapped to the heterogeneous resources underneath. The proposed approach is explored both for modern mobile devices, where power is a key resource, and for cloud computing environments, where platform resource usage has monetary implications, resulting in the HeteroMates and HeteroVisor solutions. In addition, the dissertation highlights the need for hardware and system software to consider multiple resources together to obtain desirable gains from such scaling mechanisms. The solutions presented in this dissertation open ways to utilize future heterogeneous platforms for on-demand performance as well as resource-efficient operation, without disrupting the existing software stack.
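The 'resource state' abstraction can be sketched as a table of opaque scaling levels that hides which heterogeneous component backs each level; the state table and mapping rule below are hypothetical illustrations, not the HeteroMates/HeteroVisor implementation:

```python
# Hypothetical table of "resource states": the application sees only the
# level numbers, never which core type actually backs each level.
STATES = {
    0: {"core": "little", "mhz": 600},
    1: {"core": "little", "mhz": 1200},
    2: {"core": "big", "mhz": 1800},
    3: {"core": "big", "mhz": 2400},
}

def scale_to(demand):
    # Map an abstract demand in [0, 1] to the smallest state that
    # satisfies it, so heterogeneity stays hidden from the application.
    peak = max(s["mhz"] for s in STATES.values())
    for level in sorted(STATES):
        if STATES[level]["mhz"] >= demand * peak:
            return level
    return max(STATES)
```

An application raising its demand from 0.2 to 0.9 would transparently migrate from a little core to a big one, which is the kind of masked heterogeneity the abstract describes.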
|
756 |
DRAP: A Decentralized Public Resourced Cloudlet for Ad-Hoc Networks
Agarwal, Radhika 07 March 2014 (has links)
Handheld devices are becoming increasingly common, and they have a varied range of resources. Mobile Cloud Computing (MCC) allows resource-constrained devices to offload computation and use the storage capacities of more resourceful surrogate machines. This enables the creation of new and interesting applications for all devices.
We propose a scheme that constructs a high-performance decentralized system from a group of volunteer mobile devices that come together to form a resourceful unit (cloudlet). The idea is to design a model that operates as a public resource shared between mobile devices in close geographical proximity. This cloudlet can provide larger storage capability and can be used as a computational resource by other devices in the network. The system needs to watch the movement of the participating nodes and restructure the topology if some nodes supporting the cloudlet fail or move out of the network. In this work, we discuss the need for such a system, our goals, and the design issues in building a scalable and reconfigurable system.
We achieve this by leveraging the concept of a virtual dominating set to create an overlay across the network and distribute the responsibilities of hosting a cloudlet server. We propose an architecture for such a system and develop the algorithms required for its operation. We map the resources available in the network by first scoring each device individually, and then gathering these scores to determine suitable candidate cloudlet nodes.
We have simulated cloudlet functionalities for several scenarios and show that our approach is a viable alternative for many applications, such as GPS sharing, crowdsourcing, natural language processing, etc.
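The scoring-and-gathering step might look like the following sketch; the criteria (battery, CPU, mobility stability) and their weights are assumed for illustration and are not DRAP's actual scoring function:

```python
def device_score(battery, cpu, stability, w=(0.4, 0.3, 0.3)):
    # Hypothetical per-device score: weighted sum of normalized battery
    # level, CPU capacity, and mobility stability, each in [0, 1].
    return w[0] * battery + w[1] * cpu + w[2] * stability

def pick_cloudlet_nodes(devices, k=2):
    # Gather the individual scores and choose the top-k candidates to
    # host the cloudlet, as in the mapping step described above.
    ranked = sorted(devices, key=lambda d: device_score(*devices[d]),
                    reverse=True)
    return ranked[:k]
```

Re-running this selection when scores change (a node drains its battery or moves away) corresponds to the topology-restructuring behaviour the abstract calls for.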
|
757 |
Performance modeling of cloud computing centers
Khazaei, Hamzeh 21 February 2013 (has links)
Cloud computing is a general term for system architectures that involve delivering hosted services over the Internet, made possible by significant innovations in virtualization and distributed computing, as well as improved access to high-speed Internet. A cloud service differs from traditional hosting in three principal aspects.
First, it is provided on demand, typically by the minute or the hour; second, it is elastic, since the user can have as much or as little of a service as they want at any given time; and third, the service is fully managed by the provider, so the user needs little more than a computer and Internet access. Typically, a contract is negotiated and agreed upon between a customer and a service provider; the service provider is required to execute service requests from the customer within negotiated quality of service (QoS) requirements for a given price.
Due to the dynamic nature of cloud environments, the diversity of user requests, resource virtualization, and the time dependency of load, providing the expected quality of service while avoiding over-provisioning is not a simple task. To this end, the cloud provider must have efficient and accurate techniques for the performance evaluation of cloud computing centers. The development of such techniques is the focus of this thesis.
This thesis has two parts. In the first part (Chapters 2, 3 and 4), monolithic performance models are developed for cloud computing performance analysis. We begin with Poisson task arrivals, generally distributed service times, and a large number of physical servers. Later on, we extend our model to include finite buffer capacity, batch task arrivals, and virtualized servers with a large number of virtual machines on each physical machine.
However, a monolithic model may suffer from intractability and poor scalability due to its large number of parameters. Therefore, in the second part of the thesis (Chapters 5 and 6), we develop and evaluate tractable functional performance sub-models for the different servicing steps in a complex cloud center; the overall solution is obtained by iterating over the individual sub-model solutions. We also extend the proposed interacting analytical sub-models to capture other important aspects of today's cloud centers, including pool management, power consumption, the resource assignment process, and virtual machine deployment. Finally, a performance model suitable for cloud computing centers with heterogeneous requests and resources, using interacting stochastic models, is proposed and evaluated.
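As a tractable example of the monolithic modeling style (Poisson arrivals, m servers, finite buffer), the blocking probability of an M/M/m/K queue can be computed directly from its steady-state distribution; this is a simplified stand-in for the thesis's more general M/G/m-type models, not the models themselves:

```python
from math import factorial

def mmck_blocking(lam, mu, m, K):
    # Steady-state blocking probability of an M/M/m/K queue: Poisson
    # arrivals at rate lam, m servers each serving at rate mu, and at
    # most K tasks in the system (finite buffer). An arriving task that
    # finds K tasks present is rejected.
    a = lam / mu  # offered load in Erlangs
    p = [a**n / factorial(n) if n <= m
         else a**n / (factorial(m) * m**(n - m))
         for n in range(K + 1)]
    return p[K] / sum(p)  # probability the system is full on arrival
```

Extending this closed form to general service times, batch arrivals, and virtualized servers is precisely where such monolithic models grow intractable, motivating the interacting sub-model approach of the second part.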
|
758 |
Multiple criteria decision analysis in autonomous computing: a study on independent and coordinated self-management.
Yazir, Yagiz Onat 26 August 2011 (has links)
In this dissertation, we focus on the problem of self-management in distributed systems. In this context, we propose a new methodology for reactive self-management based on multiple criteria decision analysis (MCDA). The general structure of the proposed methodology is extracted from the commonalities of well-established approaches applied in other problem domains. The main novelty of this work, however, lies in the use of MCDA during the reaction processes in the context of the two problems to which the proposed methodology is applied.
In order to provide a detailed analysis and assessment of this new approach, we have used the proposed methodology to design distributed autonomous agents that provide self-management in two outstanding problems. These two problems also represent the two distinct ways in which the methodology can be applied to self-management problems: 1) independent self-management, and 2) coordinated self-management. In the simulation case study regarding independent self-management, the methodology is used to design and implement a distributed resource consolidation manager for clouds, called IMPROMPTU. In IMPROMPTU, each autonomous agent is attached to a unique physical machine in the cloud, where it manages resource consolidation independently of the other autonomous agents. The simulation case study regarding coordinated self-management, on the other hand, focuses on the problem of adaptive routing in mobile ad hoc networks (MANETs). The resulting system carries out adaptation through autonomous agents attached to each MANET node in a coordinated manner. In this context, each autonomous node agent expresses its opinion in the form of a decision regarding which routing algorithm should be used given the perceived conditions. The opinions are aggregated through coordination to produce a final decision shared by every node in the MANET.
Although MCDA has previously been considered within the context of artificial intelligence, particularly with respect to algorithms and frameworks that address different requirements of MCDA problems, to the best of our knowledge this dissertation is the first work to apply MCDA in the domain of these two problems, which are presented as simulation case studies. / Graduate
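Both modes can be sketched in a few lines: a weighted-sum rule for the per-agent MCDA decision and a plurality vote for the coordinated aggregation. Both are illustrative stand-ins, since the dissertation's actual MCDA method and coordination protocol may differ:

```python
from collections import Counter

def mcda_choose(alternatives, weights):
    # Weighted-sum MCDA over normalized criteria: score each alternative
    # and pick the best. A minimal stand-in for the reactive decision step.
    return max(alternatives,
               key=lambda a: sum(w * c
                                 for w, c in zip(weights, alternatives[a])))

def aggregate(opinions):
    # Coordinated self-management: combine per-node decisions into one
    # network-wide choice by plurality vote (illustrative aggregation).
    return Counter(opinions).most_common(1)[0][0]
```

In the independent case each agent acts on its own `mcda_choose` result; in the coordinated case the individual choices are first pooled through `aggregate` so every MANET node adopts the same routing algorithm.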
|
759 |
Energy-oriented Partial Desktop Virtual Machine Migration
Bila, Nilton 02 August 2013 (has links)
Modern offices are crowded with personal computers. While studies have shown these to be idle most of the time, they remain powered, consuming up to 60% of their peak power. Hardware-based solutions engendered by PC vendors (e.g., low power states, Wake-on-LAN) have proven unsuccessful because, in spite of user inactivity, these machines often need to remain network-active to support background applications that maintain network presence.
Recent solutions have been proposed that consolidate idle desktop virtual machines. However, desktop VMs are often large, requiring gigabytes of memory. Consolidating such VMs creates large network transfers lasting on the order of minutes and uses server memory inefficiently. When multiple VMs migrate simultaneously, each VM's migration latency grows, which limits VM consolidation to environments in which only a few daily migrations are expected per VM. This thesis introduces partial VM migration, an approach that transparently migrates only the working set of an idle VM by migrating memory pages on demand. It creates a partial replica of the desktop VM on the consolidation server by copying only VM metadata, and transfers pages to the server as the VM accesses them. This approach places desktop PCs in a low power state when inactive and resumes them to the running state when pages are needed by the VM running on the consolidation server.
Jettison, our software prototype of partial VM migration for off-the-shelf PCs, can deliver 78% to 91% energy savings during idle periods lasting more than an hour, while providing low migration latencies of about 4 seconds and migrating minimal state that is an order of magnitude smaller than the VM's memory footprint. In shorter idle periods of up to thirty minutes, Jettison delivers savings of 7% to 31%.
We present two approaches that increase energy savings attained with partial VM migration, especially in short idle periods. The first, Context-Aware Selective Resume, expedites PC resume and suspend cycle times by supplying a context identifier at desktop resume, and initializing only devices and code that are relevant to the context. CAESAR, the Context-Aware Selective Resume framework, enables applications to register context vectors that are invoked when the desktop is resumed with matching context. CAESAR increases energy savings in short periods of five minutes to an hour by up to 66%.
The second approach, the low power page cache, embeds network-accessible low-power hardware in the PC to enable serving pages to the consolidation server while the PC is in a low power state. We show that Oasis, our prototype page cache, addresses the shortcomings of energy-oriented on-demand page migration by increasing energy savings, especially during short idle periods. In periods of up to an hour, Oasis increases savings by up to twenty times.
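The on-demand page migration at the heart of partial VM migration can be sketched as a lazy partial replica: the consolidation server starts with metadata only and pulls each memory page the first time the VM touches it. Class and field names here are illustrative, not Jettison's actual interfaces:

```python
class PartialReplica:
    # Sketch of on-demand page migration: the consolidation server holds
    # only VM metadata plus the pages the VM has actually touched; a cache
    # miss pulls the page from the (briefly resumed) source PC.
    def __init__(self, source_pages):
        self.source = source_pages   # full memory image on the idle PC
        self.cache = {}              # pages migrated to the server so far
        self.fetches = 0             # each fetch may briefly wake the PC

    def read(self, page_no):
        if page_no not in self.cache:
            self.cache[page_no] = self.source[page_no]
            self.fetches += 1
        return self.cache[page_no]
```

Because only the working set is ever fetched, the migrated state stays far below the VM's full memory footprint, which is what keeps migration latency and network transfer small.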
|