331

Provisioning for Cloud Computing

Gera, Amit 10 January 2011 (has links)
No description available.
332

Zoolander: Modeling and managing replication for predictability

Yang, Daiyi 19 December 2011 (has links)
No description available.
333

Understanding and Exploiting Design Flaws of AMD Secure Encrypted Virtualization

Li, Mengyuan 29 September 2022 (has links)
No description available.
334

Cloud Computing - A Study of Performance and Security

Danielsson, Simon, Johansson, Staffan January 2011 (has links)
Cloud Computing is the big buzzword of the IT world right now. It has become increasingly popular in recent years, but questions have arisen about its performance and security. How safe is it, really, and is there any real difference in performance between a locally hosted server and a cloud-based server? This thesis examines these questions. A series of performance tests combined with a literature study were performed to obtain the results. The thesis may be of use to those who have an interest in Cloud Computing but little prior knowledge of it, and the results can serve as an example of how future research in Cloud Computing can be conducted.
335

Modeling and performance analysis of scalable web servers not deployed on the Cloud

Aljohani, A.M.D., Holton, David R.W., Awan, Irfan U. January 2013 (has links)
Over the last few years, cloud computing has become quite popular. It offers Web-based companies the advantage of scalability. However, this scalability adds complexity, which makes analysis and predictable performance difficult. There is a growing body of research on load balancing in cloud data centres which studies the problem from the perspective of the cloud provider. Nevertheless, the load balancing of scalable web servers deployed on the cloud has received less attention. This paper introduces a simple queueing model to analyse the performance metrics of a web server under varying traffic loads, which helps web server managers to manage their clusters and understand the trade-off between QoS and cost. In the proposed model, two thresholds are used to control the scaling process. A discrete-event simulation (DES) is presented and validated via an analytical solution.
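The two-threshold scaling process described in the abstract can be illustrated with a toy simulation. All names, thresholds, and rates below are illustrative assumptions, not values taken from the paper:

```python
def scale(servers, queue_len, t_low=2, t_high=10, s_min=1, s_max=8):
    """Two-threshold control: scale out above t_high, scale in below t_low."""
    if queue_len > t_high and servers < s_max:
        return servers + 1
    if queue_len < t_low and servers > s_min:
        return servers - 1
    return servers

def simulate(arrivals, service_rate=3):
    """Step a queue through per-tick arrival counts, rescaling each tick.

    Returns a list of (queue_length, active_servers) observations.
    """
    servers, queue, trace = 1, 0, []
    for a in arrivals:
        queue = max(0, queue + a - servers * service_rate)
        servers = scale(servers, queue)
        trace.append((queue, servers))
    return trace
```

Under a burst of arrivals the server count ramps up until the queue drains, then the lower threshold scales it back down, which is the QoS-versus-cost trade-off the model is meant to expose.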
336

Energy-Efficient Cloud Radio Access Networks by Cloud Based Workload Consolidation for 5G

Sigwele, Tshiamo, Alam, Atm S., Pillai, Prashant, Hu, Yim Fun 12 November 2016 (has links)
Next-generation cellular systems such as fifth generation (5G) are expected to experience tremendous traffic growth. To accommodate such traffic demand, there is a need to increase network capacity, which eventually requires the deployment of more base stations (BSs). Nevertheless, BSs are very expensive and consume a lot of energy. With the growing complexity of signal processing, baseband units are now consuming a significant amount of energy. As a result, cloud radio access networks (C-RAN) have been proposed as an energy efficient (EE) architecture that leverages cloud computing technology, where baseband processing is performed in the cloud. This paper proposes an energy reduction technique based on baseband workload consolidation using virtualized general purpose processors (GPPs) in the cloud. The rationale for the cloud-based workload consolidation technique is to switch off idle baseband units (BBUs) to reduce the overall network energy consumption. A power consumption model for C-RAN is also formulated, considering radio-side, fronthaul and BS cloud power consumption. Simulation results demonstrate that the proposed scheme achieves enhanced energy performance compared to the existing distributed long term evolution (LTE) RAN system. The proposed scheme saves up to 80% of energy during low traffic periods and 12% during peak traffic periods compared to the baseline LTE system. Moreover, the proposed scheme saves 38% of energy compared to the baseline system on a daily average.
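A minimal sketch of the consolidation idea: pack per-BS baseband loads onto as few virtualized BBUs as possible so the remaining BBUs can be switched off. First-fit decreasing is used here as a stand-in heuristic; the paper's own consolidation algorithm may differ, and the capacity units are arbitrary:

```python
def consolidate(loads, capacity=10):
    """Pack per-BS baseband loads onto BBUs via first-fit decreasing.

    Returns the load carried by each powered-on BBU; any BBU not in the
    returned list stays switched off, which is where the energy saving
    comes from.
    """
    bbus = []
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(bbus):
            if used + load <= capacity:
                bbus[i] = used + load
                break
        else:
            bbus.append(load)  # no room anywhere: power on a new BBU
    return bbus
```

With four lightly loaded base stations, a single BBU suffices and three can be switched off; at peak load, more BBUs stay on, mirroring the paper's larger savings during low-traffic periods.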
337

Elastic call admission control using fuzzy logic in virtualized cloud radio base stations

Sigwele, Tshiamo, Pillai, Prashant, Hu, Yim Fun January 2015 (has links)
Conventional Call Admission Control (CAC) schemes are based on stand-alone Radio Access Network (RAN) Base Station (BS) architectures, which have independent, fixed spectral and computing resources that are not shared with other BSs to address their varied traffic needs, causing poor resource utilization and high call blocking and dropping probabilities. It is envisaged that future communication systems like 5G will adopt Cloud RAN (C-RAN) to share spectrum and computing resources between BSs and thereby further improve Quality of Service (QoS) and network utilization. In this paper, an intelligent elastic CAC scheme using fuzzy logic in C-RAN is proposed. In the proposed scheme, BS resources are consolidated in the cloud using virtualization technology and dynamically provisioned, using the elasticity concept of cloud computing, in accordance with traffic demands. Simulations show that the proposed CAC algorithm has a higher call acceptance rate than conventional CAC.
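The fuzzy-logic admission idea can be illustrated with a toy controller. The membership shapes and the two-rule base below are my own assumptions for illustration, not the authors' design:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def admit_degree(cloud_load):
    """Degree in [0, 1] to which a new call should be admitted.

    Rule base: admit when cloud load is 'low', reject when it is 'high';
    the two rule strengths are combined by weighted defuzzification.
    """
    low = tri(cloud_load, -0.5, 0.0, 0.6)   # membership in "load is low"
    high = tri(cloud_load, 0.4, 1.0, 1.5)   # membership in "load is high"
    total = low + high
    return low / total if total else 0.5
```

A threshold on the returned degree (say, admit when above 0.5) then yields the accept/reject decision; the fuzzy sets let the boundary degrade gracefully instead of flipping at a hard cutoff.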
338

Failure Prediction using Machine Learning in a Virtualised HPC System and application

Bashir, Mohammed, Awan, Irfan U., Ugail, Hassan, Muhammad, Y. 21 March 2019 (has links)
Failure is an increasingly important issue in high performance computing and cloud systems. As large-scale systems continue to grow in scale and complexity, mitigating the impact of failure and providing accurate predictions with sufficient lead time remains a challenging research problem. Traditional fault-tolerance strategies such as regular check-pointing and replication are not adequate because of the emerging complexities of high performance computing systems. This necessitates an effective and proactive failure management approach aimed at minimizing the effect of failure within the system. With the advent of machine learning techniques, the ability to learn from past information to predict future patterns of behaviour makes it possible to predict potential system failures more accurately. Thus, in this paper, we explore the predictive abilities of machine learning by applying a number of algorithms to improve the accuracy of failure prediction. We have developed a failure prediction model using time series and machine learning, and performed comparison-based tests on the prediction accuracy. The primary algorithms we considered are Support Vector Machine (SVM), Random Forest (RF), k-Nearest Neighbors (KNN), Classification and Regression Trees (CART) and Linear Discriminant Analysis (LDA). Experimental results indicate that the average prediction accuracy of our model using SVM is 90%, which is effective compared to the other algorithms. This finding implies that our method can effectively predict future system and application failures within the system. / Petroleum Technology Development Fund (PTDF) funding support under the OSS scheme with grant number (PTDF/E/OSS/PHD/MB/651/14)
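As a minimal, self-contained illustration of one of the listed classifiers (not the authors' pipeline or data), a plain k-nearest-neighbours predictor over system-metric feature vectors might look like this; the feature names and sample values are invented:

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training points, using squared Euclidean distance."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Hypothetical (cpu_load, mem_pressure) windows labelled by outcome.
history = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
outcomes = ["healthy", "healthy", "failing", "failing"]
```

In the paper's setting, each vector would instead be a window of monitored time-series metrics, and the model's output feeds the proactive failure-management step (e.g. migrating work off the node before it fails).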
339

Trends in Forest Recovery After Stand-Replacing Disturbance: A Spatiotemporal Evaluation of Productivity in Southeastern Pine Forests

Putnam, Daniel Jacob 22 May 2023 (has links)
The southeastern United States is one of the most productive forestry regions in the world, encompassing approximately 100 million ha of forest land, about 87% of which is privately owned. Any alteration in this region's duration or rate of forest recovery has consequential economic and ecological ramifications. Despite the need for forest recovery monitoring in this region, a spatially comprehensive evaluation of forest spectral recovery through time has not yet been conducted. Remote sensing analysis via cloud-computing platforms allows for evaluating southeastern forest recovery at spatiotemporal scales not attainable with traditional methods. Forest productivity is assessed in this study using spectral metrics of southern yellow pine recovery following stand-replacing disturbance. An annual cloud-free (1984-2021) Landsat time series intersecting ten southeastern states was constructed using the Google Earth Engine API. Southern yellow pine stands were detected using the National Land Cover Database (NLCD) evergreen class, and pixels with a rapidly changing spectrotemporal profile, suggesting stand-replacing disturbance, were found using the Landscape Change Monitoring System (LCMS) Fast Loss product. Spectral recovery metrics for 3,654 randomly selected stands in 14 Level 3 EPA Ecoregions were derived from their 38-year time series of Normalized Burn Ratio (NBR) values using the Detecting Breakpoints and Estimating Segments in Trend (DBEST) change detection algorithm. Recovery metrics characterizing the rate (NBRregrowth), duration (Y2R), and magnitude (K-shift) of recovery from stand-replacing disturbances occurring between 1989 and 2011 were evaluated to identify long-term and wide-scale changes in forest recovery using linear regression and spatial statistics, respectively. Sampled stands typically recover 35% higher in NBR than pre-disturbance and, on average, spectrally recover within seven years of disturbance.
Recovery rate is shown to be increasing over time; temporal slope estimates for NBRregrowth suggest a 33% increase in early recovery rate between 1984 and 2011. Similarly, recovery duration measured with Y2R decreased by 43% during the study period with significant spatial variation. Results suggest that the magnitude of change in stand condition between rotations has decreased by 21% during the study period, has substantial regional divisions in high and low magnitude recovery between coastal and inland stands, and low NBR value sites have the most potential to increase their NBR value. Observed spatiotemporal patterns of spectral recovery suggest that changes in management interventions, atmospheric CO2, and climate over time have changed regional productivity. Results from this study will aid the understanding of changing productivity in southern yellow pine and will inform the management, monitoring, and modeling of this ecologically and economically important forest ecosystem. / Master of Science / The Southeast United States contains approximately 100 million hectares of forest land and is one of the world's most productive regions for commercial forestry. Forest managers and those who model the effects of different types of forest land on the changing climate need up-to-date information about how productive these forests are at removing carbon and producing wood and how that productivity differs across space and time. In this study, we evaluate the productivity of southern yellow pine stands by measuring stand recovery attributes from a disturbance that removes the majority or all of the trees in the stand. 
This is accomplished by locating 3,654 randomly selected disturbed pine stands across ten southeastern states using freely available national data products derived from Landsat satellite imagery, namely a combination of the National Land Cover Database (NLCD) and the Landscape Change Monitoring System (LCMS), which provide information about the type of forest and about the year and severity of disturbance, respectively. Annual Landsat satellite imagery from 1984 to 2021 is used to create a series of values over time for each stand, representing the stand condition each year using an index called the Normalized Burn Ratio (NBR). A change detection algorithm called DBEST is applied to each stand's NBR values to find the timing of disturbance and recovery, which is used to create three metrics characterizing the rate (NBRregrowth), duration (Y2R), and magnitude (K-shift) of recovery. We evaluated how these metrics change through time using linear regression and how they differ across space using regression residuals and spatial statistics. Across the region, stands typically increase in recovery rate, decrease in recovery duration, and decrease in recovery magnitude. On average, stands recover within seven years of disturbance and to a higher NBR value than pre-disturbance. However, there is significant spatial variation in this metric throughout the Southeast. The results indicate that stands with a lower vegetation condition, measured with NBR, before the disturbance had the most significant gain in stand condition after recovery, while stands with an initially higher vegetation condition did not increase as much after recovery.
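The NBR index and the recovery-duration idea (Y2R) described above can be sketched as follows. NBR's (NIR - SWIR2)/(NIR + SWIR2) form is standard; the 80% recovery threshold is an illustrative assumption, as the study's exact Y2R definition may differ:

```python
def nbr(nir, swir2):
    """Normalized Burn Ratio from NIR and SWIR2 surface reflectance.

    High values indicate dense healthy vegetation; values drop sharply
    after a stand-replacing disturbance.
    """
    return (nir - swir2) / (nir + swir2)

def years_to_recovery(annual_nbr, pre_disturbance_nbr, frac=0.8):
    """Index of the first year the annual NBR series regains `frac`
    of its pre-disturbance value; None if it never does."""
    target = frac * pre_disturbance_nbr
    for year, value in enumerate(annual_nbr):
        if value >= target:
            return year
    return None
```

Applied per stand over the 38-year Landsat series, a metric like this yields the recovery-duration surface the study evaluates across space and time.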
340

CloudCV: Deep Learning and Computer Vision on the Cloud

Agrawal, Harsh 20 June 2016 (has links)
We are witnessing a proliferation of massive visual data. Visual content is arguably the fastest growing data on the web. Photo-sharing websites like Flickr and Facebook now host more than 6 and 90 billion photos, respectively. Unfortunately, scaling existing computer vision algorithms to large datasets leaves researchers repeatedly solving the same algorithmic and infrastructural problems. Designing and implementing efficient and provably correct computer vision algorithms is extremely challenging. Researchers must repeatedly solve the same low-level problems: building and maintaining a cluster of machines, formulating each component of the computer vision pipeline, designing new deep learning layers, writing custom hardware wrappers, etc. This thesis introduces CloudCV, an ambitious system that contains algorithms for end-to-end processing of visual content. The goal of the project is to democratize computer vision; one should not have to be a computer vision, big data and deep learning expert to have access to state-of-the-art distributed computer vision algorithms. We provide researchers, students and developers access to state-of-the-art distributed computer vision and deep learning algorithms as a cloud service through a web interface and APIs. / Master of Science
