About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

A Comprehensive Python Toolkit for Harnessing Cloud-Based High-Throughput Computing to Support Hydrologic Modeling Workflows

Christensen, Scott D. 01 February 2016 (has links)
Advances in water resources modeling are improving the information that can be supplied to support decisions that affect the safety and sustainability of society, but these advances result in models being more computationally demanding. To facilitate the use of cost-effective computing resources to meet the increased demand through high-throughput computing (HTC) and cloud computing in modeling workflows and web applications, I developed a comprehensive Python toolkit that provides the following features: (1) programmatic access to diverse, dynamically scalable computing resources; (2) a batch scheduling system to queue and dispatch the jobs to the computing resources; (3) data management for job inputs and outputs; and (4) the ability for jobs to be dynamically created, submitted, and monitored from the scripting environment. To compose this comprehensive computing toolkit, I created two Python libraries (TethysCluster and CondorPy) that leverage two existing software tools (StarCluster and HTCondor). I further facilitated access to HTC in web applications by using these libraries to create powerful and flexible computing tools for Tethys Platform, a development and hosting platform for web-based water resources applications. I tested this toolkit while collaborating with other researchers to perform several modeling applications that required scalable computing. These applications included a parameter sweep with 57,600 realizations of a distributed, hydrologic model; a set of web applications for retrieving and formatting data; a web application for evaluating the hydrologic impact of land-use change; and an operational, national-scale, high-resolution, ensemble streamflow forecasting tool. In each of these applications the toolkit was successful in automating the process of running the large-scale modeling computations in an HTC environment.
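The job-handling workflow the abstract describes — programmatically building an HTCondor submit description, dispatching it, and tracking its files — can be sketched as follows. The `Job` class here is a simplified illustration in the spirit of CondorPy, not its actual API:

```python
# Hypothetical sketch of dispatching an HTC job from Python, in the
# spirit of CondorPy. The Job class is illustrative, not the real API.
import subprocess


class Job:
    """Build an HTCondor submit description and hand it to condor_submit."""

    def __init__(self, name, executable, arguments="", queue=1):
        self.name = name
        self.attributes = {
            "executable": executable,
            "arguments": arguments,
            "output": f"{name}.out",
            "error": f"{name}.err",
            "log": f"{name}.log",
        }
        self.queue = queue

    def submit_description(self):
        # Render the key = value lines of a submit description file.
        lines = [f"{k} = {v}" for k, v in self.attributes.items()]
        lines.append(f"queue {self.queue}")
        return "\n".join(lines)

    def submit(self):
        # condor_submit reads the description from stdin when given "-".
        subprocess.run(["condor_submit", "-"],
                       input=self.submit_description(),
                       text=True, check=True)


# One realization of a parameter sweep; submit() would require a
# working HTCondor pool, so only the description is printed here.
job = Job("sweep_001", "run_model.sh", arguments="--realization 1")
print(job.submit_description())
```

A sweep of 57,600 realizations would then be a loop creating one such job per realization, with the scheduler queuing and dispatching them.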

Selecting Cloud Platform Services Based On Application Requirements

Larson, Bridger Ronald 01 December 2016 (has links)
As virtualization platforms and cloud computing have become more of a commodity, many more organizations have been utilizing them, and many vendors and technologies have emerged to fulfill those cloud needs. Cloud vendors provide similar services, but the differences can have significant impact on specific applications. Selecting the right provider is difficult and confusing because of the number of options, and it can be hard to determine which application characteristics will impact the choice of implementation. There has not been a concise process to select the cloud vendor and characteristics best suited to the application and organization requirements. This thesis provides a model that identifies crucial application characteristics, organization requirements, and characteristics of a cloud. The model is used to analyze the interaction of the application with multiple cloud platforms and select the best option based on a suitability score. Case studies utilize this model to test three applications against three cloud implementations to identify the best-fit cloud implementation. The model is further validated by a small group of peers through a survey. The studies show that the model is useful in identifying and comparing cloud implementations with regard to application requirements.
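The suitability score at the heart of such a model can be pictured as a weighted match between application requirements and platform characteristics. The categories, 1-5 levels, and weights below are illustrative assumptions, not the thesis's actual rubric:

```python
# Illustrative weighted suitability score: rate how well each cloud
# platform's characteristics match an application's requirements.
# Characteristics and weights are assumptions, not the thesis's rubric.

def suitability(requirements, platform):
    """Score = sum of weight * match(required, offered) over characteristics.

    requirements: {characteristic: (required_level, weight)}
    platform:     {characteristic: offered_level}, levels on a 1-5 scale.
    """
    score = 0.0
    for characteristic, (required, weight) in requirements.items():
        offered = platform.get(characteristic, 0)
        # Full credit when the platform meets the requirement,
        # proportional credit otherwise.
        score += weight * min(offered / required, 1.0)
    return score


app = {"scalability": (4, 0.4), "compliance": (5, 0.35), "latency": (3, 0.25)}
cloud_a = {"scalability": 5, "compliance": 3, "latency": 4}
cloud_b = {"scalability": 3, "compliance": 5, "latency": 3}

# Pick the implementation with the highest suitability score.
best = max([("A", cloud_a), ("B", cloud_b)],
           key=lambda pair: suitability(app, pair[1]))
print(best[0], round(suitability(app, best[1]), 3))
```

Here cloud B wins because the heavily weighted compliance requirement is fully met, even though A scales better.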

Telecom Networks Virtualization: Overcoming the Latency Challenge

Oljira, Dejene Boru January 2018 (has links)
Telecom service providers are adopting a Network Functions Virtualization (NFV) based service delivery model in response to unprecedented traffic growth and increasing customer demand for new high-quality network services. In NFV, telecom network functions are virtualized and run on top of commodity servers. Ensuring network performance equivalent to the legacy non-virtualized system is a determining factor for the success of telecom network virtualization. In virtualized systems, however, achieving carrier-grade network performance such as low latency, high throughput, and high availability to guarantee customers' quality of experience (QoE) is challenging. In this thesis, we focus on addressing the latency challenge. We investigate the delay overhead of virtualization through comprehensive network performance measurements and analysis in a controlled virtualized environment. With this, we provide a breakdown of the latency incurred by virtualization and the impact of co-locating virtual machines (VMs) of different workloads on the end-to-end latency. We exploit this result to develop an optimization model for the placement and provisioning of virtualized telecom network functions that meets both latency and cost-efficiency requirements. To further alleviate the latency challenge, we propose a multipath transport protocol, MDTCP, which leverages Explicit Congestion Notification (ECN) to quickly detect and react to incipient congestion, minimizing queuing delays and achieving high network utilization in telecom datacenters. / HITS, 4707
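The kind of placement decision such an optimization model addresses can be illustrated with a toy greedy heuristic that trades server cost against latency, where co-location inflates latency. The real model in the thesis is a formal optimization; the numbers and the co-location penalty here are made-up assumptions:

```python
# Toy greedy placement of virtualized network functions (VNFs) onto
# servers, trading cost against latency. Illustrative only: the
# co-location penalty factor and all figures are assumptions.

def place(vnfs, servers, latency_budget_ms):
    """Assign each VNF to the cheapest server whose latency, including
    a co-location penalty, stays within the budget."""
    placement = {}
    load = {s: 0 for s in servers}
    for vnf in vnfs:
        candidates = []
        for s, spec in servers.items():
            # Co-located VMs contend for resources, inflating latency.
            latency = spec["base_latency_ms"] * (1 + 0.2 * load[s])
            if latency <= latency_budget_ms:
                candidates.append((spec["cost"], latency, s))
        if not candidates:
            raise RuntimeError(f"no feasible server for {vnf}")
        _cost, _latency, chosen = min(candidates)  # cheapest feasible
        placement[vnf] = chosen
        load[chosen] += 1
    return placement


servers = {"s1": {"base_latency_ms": 1.0, "cost": 3},
           "s2": {"base_latency_ms": 2.0, "cost": 1}}
print(place(["fw", "nat", "lb"], servers, latency_budget_ms=2.5))
```

The cheap server absorbs functions until its co-location penalty breaks the latency budget, after which the expensive low-latency server is used — the cost/latency tension the optimization model formalizes.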

Strategies to Manage Cloud Computing Operational Costs

Sackey, Frankie Nii A 01 January 2018 (has links)
Information technology (IT) managers worldwide have adopted cloud computing because of its potential to improve reliability, scalability, security, business agility, and cost savings; however, the rapid adoption of cloud computing has created challenges for IT managers, who have reported an estimated 30% wastage of cloud resources. The purpose of this single case study was to explore successful strategies and processes for managing infrastructure operations costs in cloud computing. The sociotechnical systems (STS) approach was the conceptual framework for the study. Semistructured interviews were conducted with 6 IT managers directly involved in cloud cost management. The data were analyzed using qualitative data-analysis software to identify initial categories and emerging themes, which were refined in alignment with the STS framework. The key themes from the analysis indicated that successful cloud cost management began with assessing the current environment and architecting applications and systems to fit cloud services, using tools for monitoring and reporting, and actively managing costs in alignment with medium- and long-term goals. Findings also indicated that social considerations such as fostering collaboration among all stakeholders, employee training, and skills development were critical for success. The implications for positive social change that derive from effectively managing operational costs include improved financial posture, job stability, and environmental sustainability.

MODELING AND SECURITY IN CLOUD AND RELATED ECOSYSTEMS

Unknown Date (has links)
Software systems increasingly interact with each other, forming ecosystems. Cloud is one such ecosystem that has evolved and enabled other technologies like IoT and containers. Such systems are very complex and heterogeneous because their components can have diverse origins, functions, security policies, and communication protocols, which makes it difficult to comprehend, utilize, and consequently secure them. Abstract architectural models can be used to handle this complexity and heterogeneity, but there is a lack of work on precise, implementation/vendor-neutral, and holistic models that represent ecosystem components and their mutual interactions. We attempted to find similarities in systems and generalize them to create abstract models for adding security. We represented the ecosystem as a reference architecture (RA) and the ecosystem units as patterns. We started with a pattern diagram that showed all the components involved along with their mutual interactions and dependencies, and added components to the already existent Cloud security RA (SRA). Containers, being a relatively new virtualization technology, did not have a precise and holistic reference architecture. We built a partial RA for containers by identifying and modeling components of the ecosystem. Container security issues were identified from the literature as well as from analysis of our patterns, and we added the corresponding security countermeasures to the container RA as security patterns to build a container SRA. Finally, using the container SRA as an example, we demonstrated an approach for RA validation. We also built a composite pattern for fog computing, an intermediate platform between Cloud and IoT devices, and represented an attack, Distributed Denial of Service (DDoS) using IoT devices, in the form of a misuse pattern that explains it from the attacker's perspective.
We found this model-based approach useful for building RAs in a flexible and incremental way, as components can be identified and added as the ecosystems expand. It gave us better insight for analyzing security issues across the boundaries of individual ecosystems. A unified, precise, and holistic view of the system is not just useful for adding or evaluating security; this approach can also be used to ensure compliance, privacy, safety, reliability, and/or governance for cloud and related ecosystems. This is the first work we know of where patterns and RAs are used to represent ecosystems and analyze their security. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2019. / FAU Electronic Theses and Dissertations Collection

Transparently Improving Quality of Service of Modern Applications

Yang, Yudong January 2019 (has links)
Improving end-to-end Quality of Service (QoS) in existing network systems is a fundamental problem, as it can be affected by many factors, including congestion, packet scheduling, attacks, and air-time allocation. This dissertation addresses QoS in two critical environments: home WiFi and cloud networks. In home networks, we focus on improving QoS over WiFi, the dominant means of home Internet access. Three major reasons end-to-end QoS efforts fail in WiFi networks are: 1) the inherent wireless channel characteristics; 2) the approach to access control of the shared broadcast channel; and 3) the impact on transport-layer protocols, such as TCP, that operate end-to-end and over-react to the loss or delay caused by the single WiFi link. We present our cross-layer design, Virtual Wire, leveraging the philosophy of centralization in modern networking to address the problem at the point of entry/egress into the WiFi network. Based on network conditions measured from buffer sizes, airtime, and throughput, flows are scheduled for optimal utility. Unlike most existing WiFi QoS approaches, our design relies only on transparent modifications, requiring no changes to the network (including link layer) protocols, applications, or user intervention. Through extensive experimental investigation, we show that our design significantly enhances the reliability and predictability of WiFi performance, providing a "virtual wire"-like link to the targeted application. In cloud networks, we explore mechanisms to improve availability during DDoS attacks. The availability of cloud servers is impacted when excessive loads induced by DDoS attacks cause the servers to crash or respond too slowly to legitimate session requests. We model and analyze the effectiveness of a shuffling mechanism: the periodic, randomized re-assignment of users to servers. This shuffling mechanism not only complicates malicious users' ability to target specific servers but also, over time, allows the system to identify who the malicious users are. We design and evaluate improved classifiers which can, with statistical accuracy and well-defined levels of confidence, identify malicious users. We also propose and explore the effectiveness of a two-tiered system in which servers are partitioned in two, where one partition serves only "filtered" users who have demonstrated non-malicious behavior. Our results show how shuffling with these novel classifiers can improve the QoS of the system, which is evaluated by the survival probability: the probability of a legitimate session not being affected by attacks.
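The shuffling idea can be sketched in a few lines: re-assign users to servers at random each round, record which servers come under attack, and score users by how often they sat on an attacked server. The attack simulation and scoring rule below are illustrative and far simpler than the statistical classifiers the dissertation develops:

```python
# Sketch of shuffling-based identification of malicious users.
# A server is attacked iff it hosts a malicious user, so malicious
# users co-occur with attacks every round while legitimate users do
# so only by chance. Simplified stand-in for the real classifiers.
import random


def shuffle_rounds(users, malicious, n_servers, rounds, rng):
    suspicion = {u: 0 for u in users}
    for _ in range(rounds):
        # Periodic, randomized re-assignment of users to servers.
        assignment = {u: rng.randrange(n_servers) for u in users}
        attacked = {assignment[u] for u in malicious}
        for u in users:
            if assignment[u] in attacked:
                suspicion[u] += 1
    return suspicion


rng = random.Random(7)
users = [f"u{i}" for i in range(40)]
malicious = {"u3", "u17"}
scores = shuffle_rounds(users, malicious, n_servers=8, rounds=200, rng=rng)

# Malicious users are on an attacked server every round, so they top the list.
top2 = sorted(scores, key=scores.get, reverse=True)[:2]
print(sorted(top2))
```

A legitimate user lands on an attacked server only about a quarter of the time here, so suspicion scores separate cleanly as rounds accumulate — the intuition behind the "well-defined levels of confidence" the classifiers provide.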

Applying UTAUT to Determine Intent to Use Cloud Computing in K-12 Classrooms

Kropf, Dorothy Cortez 01 January 2018 (has links)
Although school districts provide collaborative cloud computing tools such as OneDrive and Google Drive for students and teachers, the use of these tools for grading and feedback purposes remains largely unexplored. Therefore, it is difficult for school districts to make informed decisions on the use of cloud applications for collaboration. This quantitative, nonexperimental study utilized Venkatesh et al.'s unified theory of acceptance and use of technology (UTAUT) to determine teachers' intent to use collaborative cloud computing tools. Online surveys with questions pertaining to UTAUT's predictor variables of performance expectancy (PE), effort expectancy (EE), social influence (SI), facilitating conditions (FC), and UTAUT's criterion variable of behavioral intent (BI) were administered to a convenience sample of 129 teachers who responded to an email solicitation. Pearson correlation results of r = 0.781, r = 0.646, r = 0.569, and r = 0.570 indicated strong, positive correlations between BI and PE, EE, SI, and FC respectively. Spearman rho correlation results of rs = 0.746, rs = 0.587, rs = 0.569, and rs = 0.613 indicated strong, positive correlations between BI and PE, EE, SI, and FC respectively. Simple linear regression results indicated that PE and EE are strong predictors of BI when moderated by age, gender, experience, and voluntariness of use (VU). SI is a strong predictor of BI when moderated by gender, but not by age, experience, and VU. This study's application of the UTAUT model to determine teachers' BI to use collaborative cloud computing tools could transform how administrators and educational technologists introduce these tools for grading and feedback purposes. This study contributes to the growing body of literature on technology integration among K-12 teachers.
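The reported statistics follow the standard definitions: Pearson r on the raw survey scores, and Spearman rho as Pearson r computed on their ranks (with ties averaged). A stdlib-only sketch on synthetic Likert-style data — the study's actual responses are not reproduced here:

```python
# Pearson r and Spearman rho from first principles, stdlib only.
# The pe/bi vectors below are synthetic, not the study's data.
import math


def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


def rank(xs):
    # Average ranks for ties, as Spearman's rho requires.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman(xs, ys):
    return pearson(rank(xs), rank(ys))


pe = [5, 4, 4, 3, 5, 2, 4, 3]  # performance expectancy (synthetic)
bi = [5, 4, 3, 3, 5, 2, 4, 2]  # behavioral intent (synthetic)
print(round(pearson(pe, bi), 3), round(spearman(pe, bi), 3))
```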

Adoption of cloud computing services amongst the micro-enterprise sector in Cape Town

Chiza, Albin Boris Lugerero January 2018 (has links)
Thesis (MTech (Business Information Systems))--Cape Peninsula University of Technology, 2018. / Micro-enterprises play a vital role in South Africa's economic growth by contributing towards job creation. Despite the importance of their role, micro-enterprises face several challenges such as lack of finance, lack of skilled workers, and lack of technological resources. Previous studies indicate that Information Technology has a distinct role in assisting micro-enterprises to overcome several of these challenges. It is further evidenced in the extant literature that cloud computing provides a low-cost entry point for enterprises to support several facets of their business operations. Cloud computing requires a constant, fast internet connection, and the South African government has various interventions to address the infrastructure divide. However, we have a scant understanding of the challenges micro-enterprises face in adopting cloud solutions, which to date feature more prominently amongst larger organisations. This research investigated the factors that influence cloud computing adoption in the micro-enterprise sector in Cape Town, a city that promotes the contribution of micro-enterprises to its economic activity and was thus an ideal location for the investigation. This research provides a rich understanding of the factors that influence micro-enterprises in Cape Town to adopt cloud computing services and proposes guidelines to encourage micro-enterprises in Cape Town to use cloud services to improve their productivity. The researcher uses the UTAUT model as a framework and a qualitative research methodology to investigate the research question. Data for this research study was collected via semi-structured face-to-face interviews with ten micro-enterprises and an IT expert.
The findings showed that the factors influencing the adoption of cloud computing services are performance expectancy, effort expectancy, social influence, facilitating conditions, lack of training, cost efficiency and reduction of working hours.

Medical Data Management on the Cloud

Mohamad, Baraa 23 June 2015 (has links)
Medical data management has become a real challenge due to the emergence of new imaging technologies providing high image resolutions. This thesis focuses in particular on the management of DICOM files. DICOM is one of the most important medical standards. DICOM files have a special data format in which one file may contain regular data, multimedia data, and services. These files are extremely heterogeneous (the schema of a file cannot be predicted) and have large data sizes. The characteristics of DICOM files, added to the requirements of medical data management in general in terms of availability and accessibility, have led us to formulate our research question as follows: Is it possible to build a system that (1) is highly available, (2) supports any medical images (different specialties, modalities, and physicians' practices), (3) can store extremely huge and ever-increasing data, (4) provides expressive access, and (5) is cost-effective? To answer this question we have built a hybrid (row-column) cloud-enabled storage system. The idea of this solution is to disperse DICOM attributes thoughtfully, depending on their characteristics, over both data layouts in a way that combines the best of the row-oriented and column-oriented storage models in one system, while exploiting features of the cloud that enable us to ensure the availability and portability of medical data. Storing data in such a hybrid layout opens the door to a second research question: how can queries be processed efficiently over this hybrid data storage while enabling new, more efficient query plans? The originality of our proposal comes from the fact that there is currently no system that stores data in such a hybrid layout (i.e. an attribute resides either in the row-oriented database or in the column-oriented one, and a given query may interrogate both storage models at the same time) and studies query processing over it. The experimental prototypes implemented in this thesis show interesting results and open the door to multiple optimizations and research questions.
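The dispersal idea can be sketched as a dispatcher that routes dense, frequently co-accessed attributes to a row-oriented table and sparse, file-specific ones to per-attribute column stores, with queries touching only the layouts they need. The split rule and the DICOM attribute names used below are illustrative assumptions, not the thesis's actual dispersal policy:

```python
# Sketch of a hybrid row/column layout for DICOM attributes.
# Dense, co-accessed attributes live in a row store; sparse or
# file-specific ones in per-attribute column stores. Illustrative only.

ROW_ATTRIBUTES = {"PatientID", "StudyDate", "Modality"}  # dense, co-accessed

row_store = []     # list of dicts: one row per file
column_store = {}  # attribute name -> {file_id: value}


def insert(file_id, attributes):
    row = {"file_id": file_id}
    for name, value in attributes.items():
        if name in ROW_ATTRIBUTES:
            row[name] = value
        else:
            column_store.setdefault(name, {})[file_id] = value
    row_store.append(row)


def fetch(file_id, wanted):
    """Answer a projection query, touching only the layouts it needs."""
    result = {}
    row_part = [n for n in wanted if n in ROW_ATTRIBUTES]
    if row_part:
        row = next(r for r in row_store if r["file_id"] == file_id)
        result.update({n: row.get(n) for n in row_part})
    for name in wanted:
        if name not in ROW_ATTRIBUTES:
            result[name] = column_store.get(name, {}).get(file_id)
    return result


insert("f1", {"PatientID": "P-17", "Modality": "MR", "EchoTime": 4.2})
insert("f2", {"PatientID": "P-18", "Modality": "CT"})  # sparse: no EchoTime
print(fetch("f1", ["PatientID", "EchoTime"]))
```

The heterogeneity problem shows up in `f2`: its missing `EchoTime` costs nothing in the column store, whereas a pure row layout would carry a null for every absent attribute of every file.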

Elastic channel distribution in the cloud for live video streaming

Törnqvist, Sebastian January 2018 (has links)
Streaming video has strong availability requirements, and for live-streamed video low latency becomes an additional significant factor. For large-scale video streaming, the streaming service must be able to scale in and out to meet the changing demands of users. Video streaming exhibits heavily fluctuating load, where the number of viewers may increase exponentially within a few minutes; combined with the high availability guarantees, this suggests that the problem is non-trivial. This thesis covers the issues of providing a cost-effective distributed live video streaming application that guarantees a seamless user experience. For instance, there are multiple channels, on the order of a hundred, each with ever-changing popularity; furthermore, users are able to watch content that was streamed some hours earlier, so the system must provide both cached streams and the live stream. In this thesis, an elasticity-providing solution for live video streaming is presented. The solution combines a rule-based reactive algorithm for channel distribution with a predictive method for VM instance provisioning. The results show that the algorithm, when simulating 15 channels with 80000 viewers and 50 instances, keeps underallocation of channels at less than 1% while achieving a significant reduction of about 125% in channel occurrences, and thereby bandwidth consumption, compared to the previous channel distribution solution. As the video streaming service scales in the number of channels and VM instances, the reduction factor increases.
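The rule-based reactive side of such a solution can be sketched as watermark rules on per-instance viewer load. The thresholds and the scaling rule below are illustrative assumptions, and the thesis pairs rules of this kind with a predictive method for VM provisioning:

```python
# Sketch of a rule-based reactive scaler for channel instances: scale a
# channel out when per-instance viewer load crosses a high watermark,
# scale it in when load falls below a low one. Thresholds are made up.

HIGH_WATERMARK = 800  # viewers per instance before scaling out
LOW_WATERMARK = 200   # viewers per instance before scaling in


def rebalance(channels):
    """channels: {name: {"viewers": int, "instances": int}}, mutated in place."""
    for ch in channels.values():
        per_instance = ch["viewers"] / ch["instances"]
        if per_instance > HIGH_WATERMARK:
            # Scale out: just enough instances to get back under the mark.
            ch["instances"] = -(-ch["viewers"] // HIGH_WATERMARK)  # ceil div
        elif per_instance < LOW_WATERMARK and ch["instances"] > 1:
            # Scale in, but always keep at least one instance serving.
            ch["instances"] = max(1, ch["viewers"] // LOW_WATERMARK)
    return channels


channels = {"news": {"viewers": 5000, "instances": 3},   # surging channel
            "sport": {"viewers": 150, "instances": 4}}   # fading channel
rebalance(channels)
print(channels["news"]["instances"], channels["sport"]["instances"])
```

Run on every monitoring tick, such rules track the fluctuating per-channel popularity the abstract describes, while the predictive provisioning layer keeps enough VM capacity warm for the reactive rules to place channels on.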
