331 |
Understanding and Exploiting Design Flaws of AMD Secure Encrypted Virtualization. Li, Mengyuan. 29 September 2022
No description available.
|
332 |
Cloud Computing - A Study of Performance and Security. Danielsson, Simon; Johansson, Staffan. January 2011
Cloud Computing - the big buzz word of the IT world. It has become more and more popular in recent years, but questions have arisen about its performance and security. How safe is it, and is there any real difference in performance between a locally based server and a cloud-based server? This thesis examines these questions. A series of performance tests combined with a literature study were performed to achieve the results of this thesis. This thesis could be of use for those who have an interest in Cloud Computing but do not have much knowledge of it. The results can be used as an example of how future research in Cloud Computing can be done.
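The thesis does not publish its test harness; the following is a minimal sketch of the kind of latency comparison it describes, timing repeated HTTP requests against a locally hosted server and a cloud-hosted one. The URLs and request count are illustrative placeholders, not the thesis's actual setup.

```python
# Hedged sketch: median response time of a local vs. a cloud-hosted server.
# Endpoints below are placeholders and must be replaced with reachable hosts.
import time
import statistics
import urllib.request

def measure_latency(url: str, requests: int = 50) -> float:
    """Return the median response time in milliseconds over `requests` GETs."""
    samples = []
    for _ in range(requests):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

if __name__ == "__main__":
    for label, url in [("local", "http://localhost:8080/"),
                       ("cloud", "https://example-cloud-host.test/")]:
        print(f"{label}: {measure_latency(url):.1f} ms median")
```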
|
333 |
Modeling and performance analysis of scalable web servers not deployed on the Cloud. Aljohani, A.M.D.; Holton, David R.W.; Awan, Irfan U. January 2013
Over the last few years, cloud computing has become quite popular. It offers Web-based companies the advantage of scalability. However, this scalability adds complexity, which makes analysis and predictable performance difficult. There is a growing body of research on load balancing in cloud data centres which studies the problem from the perspective of the cloud provider. Nevertheless, the load balancing of scalable web servers deployed on the cloud has received less research attention. This paper introduces a simple queueing model to analyse the performance metrics of a web server under varying traffic loads. This assists web server managers in managing their clusters and understanding the trade-off between QoS and cost. In the proposed model, two thresholds are used to control the scaling process. A discrete-event simulation (DES) is presented and validated via an analytical solution.
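The paper does not publish its simulation code; the sketch below illustrates the general idea of a two-threshold scaling rule driving a small discrete-event simulation of an M/M/c-style server cluster. The arrival and service rates, thresholds, and server bounds are illustrative values, not the paper's parameters.

```python
# Hedged sketch: discrete-event simulation of a queue whose server count is
# scaled up above a high queue-length threshold and down below a low one.
import heapq
import random

def simulate(arrival_rate=8.0, service_rate=3.0, t_high=10, t_low=2,
             min_servers=1, max_servers=8, horizon=10_000.0, seed=1):
    rng = random.Random(seed)
    servers, busy, queue_len = min_servers, 0, 0
    events = [(rng.expovariate(arrival_rate), "arrival")]
    area, last_t = 0.0, 0.0                      # time-integral of queue length

    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        area += queue_len * (t - last_t)
        last_t = t
        if kind == "arrival":
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            if busy < servers:
                busy += 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:
                queue_len += 1
        else:  # a service completes
            if queue_len > 0 and busy <= servers:
                queue_len -= 1                   # same server takes the next job
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:
                busy -= 1
        # Two-threshold scaling rule (newly added servers pick up work at the
        # next event only -- a simplification kept for brevity).
        if queue_len > t_high and servers < max_servers:
            servers += 1
        elif queue_len < t_low and servers > min_servers:
            servers -= 1

    return area / last_t                         # time-averaged queue length

if __name__ == "__main__":
    print(f"mean queue length: {simulate():.2f}")
```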
|
334 |
Failure Prediction using Machine Learning in a Virtualised HPC System and application. Bashir, Mohammed; Awan, Irfan U.; Ugail, Hassan; Muhammad, Y. 21 March 2019
Failure is an increasingly important issue in high performance computing and cloud systems. As large-scale systems continue to grow in scale and complexity, mitigating the impact of failure and providing accurate predictions with sufficient lead time remains a challenging research problem. Traditional fault-tolerance strategies such as regular check-pointing and replication are not adequate given the emerging complexities of high performance computing systems. This underscores the need for an effective and proactive failure management approach aimed at minimizing the effect of failure within the system. With the advent of machine learning techniques, the ability to learn from past information to predict future patterns of behaviour makes it possible to predict potential system failures more accurately. Thus, in this paper, we explore the predictive abilities of machine learning by applying a number of algorithms to improve the accuracy of failure prediction. We have developed a failure prediction model using time series and machine learning, and performed comparison-based tests on prediction accuracy. The primary algorithms we considered are Support Vector Machine (SVM), Random Forest (RF), k-Nearest Neighbors (KNN), Classification and Regression Trees (CART) and Linear Discriminant Analysis (LDA). Experimental results indicate that our model's average prediction accuracy using SVM is 90%, making it the most effective of the algorithms compared. This finding implies that our method can effectively predict all possible future system and application failures within the system. / Petroleum Technology Development Fund (PTDF) funding support under the OSS scheme with grant number (PTDF/E/OSS/PHD/MB/651/14)
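The paper names five standard classifiers; the sketch below shows how such a comparison might be run with scikit-learn cross-validation. The synthetic dataset and model parameters are placeholders standing in for the paper's failure-log features, not its actual data or pipeline.

```python
# Hedged sketch: comparing SVM, RF, KNN, CART and LDA on a synthetic,
# imbalanced "failure vs. healthy" dataset using 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic samples standing in for system-metric features (failures are rare).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)

models = {
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(),
    "CART": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```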
|
335 |
Trends in Forest Recovery After Stand-Replacing Disturbance: A Spatiotemporal Evaluation of Productivity in Southeastern Pine Forests. Putnam, Daniel Jacob. 22 May 2023
The southeastern United States is one of the most productive forestry regions in the world, encompassing approximately 100 million ha of forest land, about 87% of which is privately owned. Any alteration in this region's duration or rate of forest recovery has consequential economic and ecological ramifications. Despite the need for forest recovery monitoring in this region, a spatially comprehensive evaluation of forest spectral recovery through time has not yet been conducted. Remote sensing analysis via cloud-computing platforms allows for evaluating southeastern forest recovery at spatiotemporal scales not attainable with traditional methods. Forest productivity is assessed in this study using spectral metrics of southern yellow pine recovery following stand-replacing disturbance. An annual cloud-free (1984-2021) Landsat time series intersecting ten southeastern states was constructed using the Google Earth Engine API. Southern yellow pine stands were detected using the National Land Cover Database (NLCD) evergreen class, and pixels with a rapidly changing spectrotemporal profile, suggesting stand-replacing disturbance, were found using the Landscape Change Monitoring System (LCMS) Fast Loss product. Spectral recovery metrics for 3,654 randomly selected stands in 14 Level 3 EPA Ecoregions were derived from their 38-year time series of Normalized Burn Ratio (NBR) values using the Detecting Breakpoints and Estimating Segments in Trend (DBEST) change detection algorithm. Recovery metrics characterizing the rate (NBRregrowth), duration (Y2R), and magnitude (K-shift) of recovery from stand-replacing disturbances occurring between 1989 and 2011 were evaluated to identify long-term and wide-scale changes in forest recovery using linear regression and spatial statistics, respectively. Sampled stands typically recover 35% higher in NBR than pre-disturbance and, on average, spectrally recover within seven years of disturbance. Recovery rate is shown to be increasing over time; temporal slope estimates for NBRregrowth suggest a 33% increase in early recovery rate between 1984 and 2011. Similarly, recovery duration measured with Y2R decreased by 43% during the study period, with significant spatial variation. Results suggest that the magnitude of change in stand condition between rotations decreased by 21% during the study period, that there are substantial regional divisions between coastal and inland stands in high- and low-magnitude recovery, and that low-NBR sites have the most potential to increase their NBR value. Observed spatiotemporal patterns of spectral recovery suggest that changes in management interventions, atmospheric CO2, and climate over time have changed regional productivity. Results from this study will aid the understanding of changing productivity in southern yellow pine and will inform the management, monitoring, and modeling of this ecologically and economically important forest ecosystem. / Master of Science / The Southeast United States contains approximately 100 million hectares of forest land and is one of the world's most productive regions for commercial forestry. Forest managers and those who model the effects of different types of forest land on the changing climate need up-to-date information about how productive these forests are at removing carbon and producing wood and how that productivity differs across space and time.
In this study, we evaluate the productivity of southern yellow pine stands by measuring stand recovery attributes following a disturbance that removes most or all of the trees in the stand.
This is accomplished by locating 3,654 randomly selected disturbed pine stands across ten southeastern states using freely available national data products derived from Landsat satellite imagery, namely a combination of the National Land Cover Database (NLCD) and the Landscape Change Monitoring System (LCMS), which provide information about the type of forest and the year and severity of disturbance, respectively. Annual Landsat satellite imagery from 1984 to 2021 is used to create a series of values over time for each stand, representing the stand condition each year using an index called the Normalized Burn Ratio (NBR). A change detection algorithm called DBEST is applied to each stand's NBR values to find the timing of disturbance and recovery, which is used to create three metrics characterizing the rate (NBRregrowth), duration (Y2R), and magnitude (K-shift) of recovery.
We evaluated how these metrics change through time using linear regression and how they differ across space using regression residuals and spatial statistics. Across the region, stands typically increase in recovery rate, decrease in recovery duration, and decrease in recovery magnitude. On average, stands recover within seven years of disturbance and to a higher NBR value than pre-disturbance. However, there is significant spatial variation in this metric throughout the Southeast. The results indicate that stands with a lower vegetation condition, measured with NBR, before the disturbance had the most significant gain in stand condition after recovery, and stands with initially higher vegetation condition did not increase as much after recovery.
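The study's full pipeline is not reproduced here; the sketch below shows one plausible way to derive an annual NBR composite from Landsat 8 Collection 2 surface reflectance in the Google Earth Engine Python API. The region of interest, year, cloud-mask bit, and omitted reflectance scale factors are illustrative assumptions, not the thesis's exact processing.

```python
# Hedged sketch: annual cloud-masked NBR composite with the Earth Engine API.
import ee

ee.Initialize()

def mask_clouds(image):
    # QA_PIXEL bit 3 flags clouds in Collection 2; keep pixels where it is 0.
    qa = image.select("QA_PIXEL")
    return image.updateMask(qa.bitwiseAnd(1 << 3).eq(0))

def add_nbr(image):
    # NBR = (NIR - SWIR2) / (NIR + SWIR2); SR_B5 is NIR, SR_B7 is SWIR2 on
    # Landsat 8 (surface-reflectance scale factors omitted for brevity).
    return image.addBands(image.normalizedDifference(["SR_B5", "SR_B7"]).rename("NBR"))

region = ee.Geometry.Rectangle([-84.5, 31.0, -83.5, 32.0])  # placeholder AOI
year = 2019

annual_nbr = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
    .filterBounds(region)
    .filterDate(f"{year}-01-01", f"{year}-12-31")
    .map(mask_clouds)
    .map(add_nbr)
    .select("NBR")
    .median()  # one annual composite NBR value per pixel
)
print(annual_nbr.getInfo()["bands"])
```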
|
336 |
CloudCV: Deep Learning and Computer Vision on the Cloud. Agrawal, Harsh. 20 June 2016
We are witnessing a proliferation of massive visual data. Visual content is arguably the fastest growing data on the web. Photo-sharing websites like Flickr and Facebook now host more than 6 and 90 billion photos, respectively. Unfortunately, scaling existing computer vision algorithms to large datasets leaves researchers repeatedly solving the same algorithmic and infrastructural problems. Designing and implementing efficient and provably correct computer vision algorithms is extremely challenging. Researchers must repeatedly solve the same low-level problems: building and maintaining a cluster of machines, formulating each component of the computer vision pipeline, designing new deep learning layers, writing custom hardware wrappers, etc. This thesis introduces CloudCV, an ambitious system that contains algorithms for end-to-end processing of visual content.
The goal of the project is to democratize computer vision; one should not have to be a computer vision, big data and deep learning expert to have access to state-of-the-art distributed computer vision algorithms. We provide researchers, students and developers access to state-of-the-art distributed computer vision and deep learning algorithms as a cloud service through a web interface and APIs. / Master of Science
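As a purely hypothetical illustration of the usage model described above (this is not CloudCV's actual API), a thin client might submit an image to a cloud-hosted vision service over HTTP and read back predicted labels. The endpoint URL and JSON fields below are invented for the sketch.

```python
# Hypothetical sketch of a client for a cloud vision service; endpoint and
# response schema are placeholders, not CloudCV's real interface.
import json
import urllib.request

def classify_image(image_path: str, endpoint: str) -> list:
    with open(image_path, "rb") as f:
        image_bytes = f.read()
    request = urllib.request.Request(
        endpoint,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result.get("labels", [])  # e.g. [{"label": "cat", "score": 0.92}]

if __name__ == "__main__":
    print(classify_image("photo.jpg", "https://vision-service.example/classify"))
```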
|
337 |
Data-Intensive Biocomputing in the Cloud. Meeramohideen Mohamed, Nabeel. 25 September 2013
Next-generation sequencing (NGS) technologies have made it possible to rapidly sequence the human genome, heralding a new era of health-care innovations based on personalized genetic information. However, these NGS technologies generate data at a rate that far outstrips Moore's Law. As a consequence, analyzing this exponentially increasing data deluge requires enormous computational and storage resources, resources that many life science institutions do not have access to. As such, cloud computing has emerged as an obvious, but still nascent, solution.
This thesis intends to investigate and design an efficient framework for running and managing large-scale data-intensive scientific applications in the cloud. Based on the lessons learned from our parallel implementation of a genome analysis pipeline in the cloud, we aim to provide a framework for users to run such data-intensive scientific workflows using a hybrid setup of client and cloud resources. We first present SeqInCloud, our highly scalable parallel implementation of a popular genetic variant pipeline called the genome analysis toolkit (GATK), on the Windows Azure HDInsight cloud platform. Together with a parallel implementation of GATK on Hadoop, we evaluate the potential of using cloud computing for large-scale DNA analysis and present a detailed study on efficiently utilizing cloud resources for running data-intensive, life-science applications. Based on our experience from running SeqInCloud on Azure, we present CloudFlow, a feature-rich workflow manager for running MapReduce-based bioinformatic pipelines utilizing both client and cloud resources. CloudFlow, built on top of an existing MapReduce-based workflow manager called Cloudgene, provides unique features that are not offered by existing MapReduce-based workflow managers, such as enabling simultaneous use of client and cloud resources, automatic data-dependency handling between client and cloud resources, and the flexibility of implementing user-defined plugins for data transformations. In general, we believe that our work helps increase the adoption of cloud resources for running data-intensive scientific workloads. / Master of Science
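The sketch below illustrates, in simplified form, the core scheduling idea the abstract attributes to CloudFlow: run each workflow step on either client or cloud resources, and transfer intermediate data only when a dependency crosses the client/cloud boundary. The step names, the data model, and the print-based "execution" are placeholders, not CloudFlow's implementation.

```python
# Hedged sketch of a hybrid client/cloud workflow dispatcher.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    target: str                              # "client" or "cloud"
    depends_on: list = field(default_factory=list)

def run_workflow(steps):
    location = {}                            # where each step's output lives
    for step in steps:                       # assumed topologically sorted
        for dep in step.depends_on:
            if location[dep] != step.target:
                print(f"transfer {dep} output: {location[dep]} -> {step.target}")
        print(f"run {step.name} on {step.target}")
        location[step.name] = step.target

if __name__ == "__main__":
    run_workflow([
        Step("align_reads", target="cloud"),
        Step("call_variants", target="cloud", depends_on=["align_reads"]),
        Step("annotate", target="client", depends_on=["call_variants"]),
    ])
```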
|
338 |
Optimizing, Testing, and Securing Mobile Cloud Computing Systems For Data Aggregation and Processing. Turner, Hamilton Allen. 22 January 2015
Seamless interconnection of smart mobile devices and cloud services is a key goal in modern mobile computing. Mobile Cloud Computing is the holistic integration of contextually-rich mobile devices with computationally-powerful cloud services to create high value products for end users, such as Apple's Siri and Google's Google Now product. This coupling has enabled new paradigms and fields of research, such as crowdsourced data collection, and has helped spur substantial changes in research fields such as vehicular ad hoc networking.
However, the growth of Mobile Cloud Computing has resulted in a number of new challenges, such as testing large-scale Mobile Cloud Computing systems, and increased the importance of established challenges, such as ensuring that a user's privacy is not compromised when interacting with a location-aware service. Moreover, the concurrent development of the Infrastructure as a Service paradigm has created inefficiency in how Mobile Cloud Computing systems are executed on cloud platforms.
To address these gaps in the existing research, this dissertation presents a number of software and algorithmic solutions to 1) preserve user locational privacy, 2) improve the speed and effectiveness of deploying and executing Mobile Cloud Computing systems on modern cloud infrastructure, and 3) enable large-scale research on Mobile Cloud Computing systems without requiring substantial domain expertise. / Ph. D.
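One well-known technique for the locational-privacy goal listed above is geo-indistinguishability via planar Laplace noise (Andres et al., 2013): the device perturbs its coordinates before reporting them to a location-aware cloud service. The sketch below illustrates that general technique; it is not necessarily the mechanism used in this dissertation, and the epsilon value and test coordinates are illustrative.

```python
# Hedged sketch: planar Laplace perturbation of a reported location.
import math
import random
from scipy.special import lambertw

def perturb_location(lat, lon, epsilon=0.01, rng=random.random):
    """Return a noisy (lat, lon); epsilon is the privacy loss per metre."""
    theta = rng() * 2 * math.pi
    p = rng()
    # Inverse CDF of the planar Laplace radius (W_{-1} Lambert branch).
    r = -(1.0 / epsilon) * (lambertw((p - 1) / math.e, k=-1).real + 1)
    # Convert the metre offset to approximate degrees of latitude/longitude.
    dlat = (r * math.sin(theta)) / 111_111.0
    dlon = (r * math.cos(theta)) / (111_111.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

if __name__ == "__main__":
    print(perturb_location(37.2296, -80.4139))  # Blacksburg, VA (illustrative)
```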
|
339 |
Distributed Architectures for Enhancing Artificial Intelligence of Things Systems. A Cloud Collaborative Model. Elouali, Aya. 23 November 2023
In today’s world, IoT systems are increasingly pervasive. All electronic devices are becoming connected: from lamps and refrigerators in smart homes, smoke detectors and cameras in monitoring systems, and scales and thermometers in healthcare systems, to phones, cars and watches in smart cities. All these connected devices generate a huge amount of data collected from the environment. To take advantage of these data, a processing phase is needed in order to extract useful information, allowing the best management of the system. Since most objects in IoT systems are resource-limited, the processing step, usually performed by an artificial intelligence model, is offloaded to a more powerful machine such as a cloud server in order to benefit from its high storage and processing capacities. However, the cloud server is geographically remote from the connected device, which leads to a long communication delay and harms the effectiveness of the system. Moreover, due to the rapidly increasing number of IoT devices, and therefore of offloading operations, the load on the network has increased significantly. In order to benefit from the advantages of cloud-based AIoT systems, we seek to minimize their shortcomings. In this thesis, we design a distributed architecture that allows combining these three domains while reducing latency and bandwidth consumption as well as the IoT device's energy and resource consumption. Experiments conducted on different cloud-based AIoT systems showed that the designed architecture is capable of reducing the transmitted data by up to 80%.
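One common way distributed AIoT architectures cut transmitted data is to filter on the device and offload a reading to the cloud only when it changes meaningfully (send-on-delta). The sketch below illustrates that general principle; it is not the thesis's specific design, and the threshold and simulated sensor stream are placeholders.

```python
# Hedged sketch: send-on-delta filtering at the IoT device before offloading.
import random

def send_on_delta(readings, threshold=0.5):
    """Yield only the readings worth transmitting to the cloud."""
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            last_sent = value
            yield value              # would be offloaded to the cloud model

if __name__ == "__main__":
    rng = random.Random(0)
    stream = [20 + rng.gauss(0, 0.3) for _ in range(1000)]   # simulated sensor
    sent = list(send_on_delta(stream))
    print(f"transmitted {len(sent)} of {len(stream)} readings "
          f"({100 * (1 - len(sent) / len(stream)):.0f}% reduction)")
```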
|
340 |
Service-Oriented Architecture based Cloud Computing Framework For Renewable Energy Forecasting. Sehgal, Rakesh. 10 March 2014
Forecasting has applications in various domains, as it provides decision-makers with a more reliable estimate of events that have yet to occur. Typically, a user would invest in licensed software or subscribe to a monthly or yearly plan in order to make such forecasts. The framework presented here differs from conventional forecasting software in that it allows any interested party to use the proposed services on a pay-per-use basis, avoiding heavy investment in the required infrastructure.
The Framework-as-a-Service (FaaS) presented here uses Windows Communication Foundation (WCF) to implement Service-Oriented Architecture (SOA). For forecasting, collection of data, its analysis and forecasting responsibilities lies with users, who have to put together other tools or software in order to produce a forecast. FaaS offers each of these responsibilities as a service, namely, External Data Collection Framework (EDCF), Internal Data Retrieval Framework (IDRF) and Forecast Generation Framework (FGF). FaaS Controller, being a composite service based on the above three, is responsible for coordinating activities between them.
These services are accessible through an Economic Endpoint (EE) or a Technical Endpoint (TE), which a remote client can use to obtain a cost estimate or perform a forecast, respectively. The use of Cloud Computing makes these services available over the network to be used as software to forecast energy for solar or wind resources. These services can also be used as a platform to create new services by merging existing functionality with new service features for forecasting. Eventually, this can lead to faster development of newer services where a user can choose which services to use and pay for, presenting the use of FaaS as Platform-as-a-Service (PaaS) in forecasting. / Master of Science
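The thesis implements its services in WCF; the sketch below re-expresses the composition idea in Python for illustration only. The class interfaces, data shapes, placeholder forecast model and pricing are assumptions made for the sketch, not the thesis's actual service contracts.

```python
# Hedged sketch: a composite controller coordinating EDCF, IDRF and FGF,
# exposed through technical (forecast) and economic (cost) entry points.
class ExternalDataCollectionFramework:
    def collect(self, site: str) -> list[float]:
        # Placeholder: would fetch weather/irradiance observations for `site`.
        return [4.2, 4.8, 5.1, 4.9]

class InternalDataRetrievalFramework:
    def __init__(self):
        self._store: dict[str, list[float]] = {}
    def save(self, site: str, data: list[float]) -> None:
        self._store[site] = data
    def load(self, site: str) -> list[float]:
        return self._store.get(site, [])

class ForecastGenerationFramework:
    def forecast(self, history: list[float]) -> float:
        # Placeholder model: moving average of the most recent observations.
        recent = history[-3:] or [0.0]
        return sum(recent) / len(recent)

class FaaSController:
    """Composite service coordinating collection, retrieval and forecasting."""
    def __init__(self):
        self.edcf = ExternalDataCollectionFramework()
        self.idrf = InternalDataRetrievalFramework()
        self.fgf = ForecastGenerationFramework()
    def technical_endpoint(self, site: str) -> float:
        self.idrf.save(site, self.edcf.collect(site))
        return self.fgf.forecast(self.idrf.load(site))
    def economic_endpoint(self, site: str, price_per_call: float = 0.05) -> float:
        return price_per_call            # placeholder pay-per-use cost estimate

if __name__ == "__main__":
    controller = FaaSController()
    print("forecast:", controller.technical_endpoint("solar-site-1"))
    print("cost ($):", controller.economic_endpoint("solar-site-1"))
```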
|