591

Factors influencing cloud computing readiness in small and medium enterprises.

Sibanyoni, Jabu Lucky. January 2015 (has links)
M. Tech. Business Information Systems / Business innovation driven by technology is widely seen as a key driver of enterprise transformation and, in particular, of the development of Small and Medium Enterprises (SMEs). Any organisation eager to improve competitiveness and remain sustainable and cost-effective will require new and better technologies with greater capabilities. However, not all organisations are ready to adopt these innovative technologies, largely because new and rapidly changing technologies come with new and unique challenges. The cloud computing paradigm that has emerged in recent years is rapidly gaining momentum as an alternative to the traditional approach to providing and consuming Information Technology (IT) services and resources. It is a significant trend with the potential to increase agility and lower the costs of IT. Although embracing this paradigm promises several benefits, effective adoption and implementation of cloud computing requires an organisation to understand a range of factors. The current literature shows that there are inadequate guidelines to help SMEs in developing economies determine their degree of readiness to adopt technological innovations such as cloud computing and transform their operations. The purpose of this study is to investigate the factors influencing cloud computing readiness in South African small and medium enterprises.
592

Critical analysis of the key drivers for adopting cloud computing : a case study of an information technology user organisation in Durban

Modiba, Maimela Daniel. January 2013 (has links)
M. Tech. Business Administration / The aim of this research is to explore the factors that drive the adoption of cloud computing within a South African information technology user organisation. It also identifies the benefits and risks associated with adopting cloud computing within an information and communication technology (ICT) organisation, from a South African company perspective.
593

Coding-Based System Primitives for Airborne Cloud Computing

Lin, Chit-Kwan January 2011 (has links)
The recent proliferation of sensors in inhospitable environments such as disaster or battle zones has not been matched by in situ data processing capabilities due to a lack of computing infrastructure in the field. We envision a solution based on small, low-altitude unmanned aerial vehicles (UAVs) that can deploy elastically scalable computing infrastructure anywhere, at any time. This airborne compute cloud—essentially, micro-data centers hosted on UAVs—would communicate with terrestrial assets over a bandwidth-constrained wireless network with variable, unpredictable link qualities. Achieving high performance over this ground-to-air mobile radio channel thus requires making full and efficient use of every single transmission opportunity. To this end, this dissertation presents two system primitives that improve throughput and reduce network overhead by using recent distributed coding methods to exploit natural properties of the airborne environment (i.e., antenna beam diversity and anomaly sparsity). We first built and deployed a UAV wireless networking testbed and used it to characterize the ground-to-UAV wireless channel. Our flight experiments revealed that antenna beam diversity from using multiple SISO radios boosts reception range and aggregate throughput. This observation led us to develop our first primitive: ground-to-UAV bulk data transport. We designed and implemented FlowCode, a reliable link layer for uplink data transport that uses network coding to harness antenna beam diversity gains. Via flight experiments, we show that FlowCode can boost reception range and TCP throughput as much as 4.5-fold. Our second primitive permits low-overhead cloud status monitoring. We designed CloudSense, a network switch that compresses cloud status streams in-network via compressive sensing. CloudSense is particularly useful for anomaly detection tasks requiring global relative comparisons (e.g., MapReduce straggler detection) and can achieve up to 16.3-fold compression as well as early detection of the worst anomalies. Our efforts have also shed light on the close relationship between network coding and compressive sensing. Thus, we offer FlowCode and CloudSense not only as first steps toward the airborne compute cloud, but also as exemplars of two classes of applications—approximation intolerant and tolerant—to which network coding and compressive sensing should be judiciously and selectively applied. / Engineering and Applied Sciences
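The in-network compression in CloudSense rests on compressive sensing: a sparse anomaly vector can be recovered from far fewer random linear measurements than its length. A minimal single-machine sketch of that idea, using random Gaussian measurements and orthogonal matching pursuit for recovery (the matrix sizes and sparsity level are illustrative assumptions, not CloudSense's parameters):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Greedily pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit on the chosen support by least squares, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 200, 40, 3                         # n status values, m << n measurements, k anomalies
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(5.0, 1.0, size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                  # the compressed status stream
print(np.allclose(omp(Phi, y, k), x, atol=1e-6))
```

Because the worst anomalies carry the most energy, they are recovered first by a greedy decoder of this kind, which is the intuition behind early detection from partial measurements.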
594

Protecting sensitive information from untrusted code

Roy, Indrajit 13 December 2010 (has links)
As computer systems support more aspects of modern life, from finance to health care, security is becoming increasingly important. However, building secure systems remains a challenge. Software continues to have security vulnerabilities due to reasons ranging from programmer errors to inadequate programming tools. Because of these vulnerabilities, we need mechanisms that protect sensitive data even when the software is untrusted. This dissertation shows that secure and practical frameworks can be built for protecting users' data from untrusted applications in both desktop and cloud computing environments. Laminar is a new framework that secures desktop applications by enforcing policies written as information flow rules. Information flow control, a form of mandatory access control, enables programmers to express powerful, end-to-end security guarantees while reducing the amount of trusted code. Current programming abstractions and implementations of this model either compromise end-to-end security guarantees or require substantial modifications to applications, thus deterring adoption. Laminar addresses these shortcomings by exporting a single set of abstractions to control information flows through operating system resources and heap-allocated objects. Programmers express security policies by labeling data and represent access restrictions on code using a new abstraction called a security region. The Laminar programming model eases incremental deployment, limits dynamic security checks, and supports multithreaded programs that can access heterogeneously labeled data. In large-scale, distributed computations, safeguarding information requires solutions beyond mandatory access control. An important challenge is to ensure that the computation, including its output, does not leak sensitive information about the inputs. For untrusted code, access control cannot guarantee that the output does not leak information. This dissertation proposes Airavat, a MapReduce-based system which augments mandatory access control with differential privacy to guarantee security and privacy for distributed computations. Data providers control the security policy for their sensitive data, including a mathematical bound on potential privacy violations. Users without security expertise can perform computations on the data; Airavat prevents information leakage beyond the data provider's policy. Our prototype implementation of Airavat demonstrates that several data mining tasks can be performed in a privacy-preserving fashion with modest performance overheads.
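The differential-privacy layer that Airavat adds on top of access control can be pictured with the basic Laplace mechanism: clamp each contribution to bound a single record's influence, then add noise calibrated to that bound and the privacy budget. A minimal sketch with invented values (not Airavat's actual reducers or parameters):

```python
import numpy as np

def dp_sum(values, lower, upper, epsilon, rng=None):
    """Release a sum under epsilon-differential privacy via the Laplace mechanism.

    Clamping each value to [lower, upper] bounds one record's influence on the
    sum (its sensitivity), so Laplace noise of scale sensitivity/epsilon suffices.
    """
    rng = rng or np.random.default_rng()
    clamped = np.clip(values, lower, upper)
    sensitivity = upper - lower
    return float(clamped.sum() + rng.laplace(0.0, sensitivity / epsilon))

# A reducer-style aggregate released under a privacy budget of epsilon = 0.5.
salaries = [52_000, 61_500, 48_200, 75_000]
print(dp_sum(salaries, lower=0, upper=100_000, epsilon=0.5))
```

Smaller epsilon (a tighter bound on potential privacy violations, as set by the data provider) means proportionally larger noise in the released output.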
595

Monitoring and control of distributed web services on cloud computing infrastructure

Δεχουνιώτης, Δημήτριος 26 August 2014 (has links)
This thesis concerns two main research areas for distributed web services deployed on cloud computing infrastructure. The first is the monitoring of cloud computing infrastructure. In Chapter 2, a novel, general technique is presented for inferring relationships between the different service components in a data center. The approach relies on a small set of fuzzy rules, produced by a hybrid genetic algorithm with a high classification rate, and it also measures the strength of the detected dependencies. Although the ground truth about relationships in a network is unknown, the proposed method mines realistic relationships without any prior information about network topology or infrastructure. It can therefore serve as a useful monitoring tool for administrators to obtain a clear view of what is happening in the underlying network. Finally, because of the simplicity of the algorithm and the flexibility of FIM, an online approach seems feasible. The second major problem, addressed in Chapter 3, is the automated resource control of consolidated web applications on cloud computing infrastructure. ACRA is an innovative technique for modeling and controlling distributed services that are co-located on a server cluster. The system dynamics are modeled by a group of linear state-space models that cover the full range of workload conditions. Because workload conditions vary, there are non-linear terms and uncertainties, which are modeled by an additive term in the local linear models. Because the various types of service transactions have differing time and resource demands, there are many candidate reference values for the SLOs during a day. Given these requirements and the workload conditions, the appropriate model is chosen and the closest feasible operating point is computed according to several optimization criteria. Then, using a set-theoretic technique, a state-feedback controller is designed that drives and stabilizes the system in the region of the equilibrium point. The ACRA controller computes a positively invariant set in the state space that includes the target set and drives the system trajectories into it, providing stability guarantees and a high degree of robustness against system disturbances and nonlinearities. Furthermore, we compare ACRA with an MPC and a PI controller; the results are very promising, since our solution outperforms both. Secondly, a unified local-level modeling and control framework for consolidated web services in a server cluster is presented, which can be a vital element of a holistic distributed control platform. Admission control and resource allocation are addressed as a single decision problem, with stability and constraint satisfaction guaranteed. A real testbed was built, and a range of experiments under different operating conditions shows that both the identification scheme and the controller provide a high level of QoS. A novel component of this approach is the determination of a set of feasible operating (equilibrium) points, which allows the appropriate equilibrium point to be chosen depending only on the objectives, such as maximizing throughput, minimizing consumption or maximizing profit. Evaluation shows that our approach performs well compared with well-known solutions, such as queuing models and measurement-based estimation of equilibrium points. Both controllers meet their main targets relative to the studies already proposed in the literature. First, they satisfy the SLA requirements and the constraints of the underlying cloud computing infrastructure. To the best of our knowledge, they are the only studies that calculate a set of feasible operating points that ensure system stability. Furthermore, they adopt modern control theory and, beyond the stability guarantee, introduce new control properties such as positively invariant sets, ultimate boundedness and ε-contractive sets. / This doctoral dissertation addresses two research problems. First, a network-traffic monitoring technique is developed in order to discover the functional relationships between the different components of a web application. The second part solves the problem of automated resource allocation for web applications that share a common cloud computing environment. The goal of the first chapter, relative to the existing literature, is to create a network-traffic analysis tool so that the functional relationships between the components of distributed web services can be understood. The resulting graph is a primary tool for many administrator tasks in the areas of performance analysis and root-cause analysis, for example detecting faulty deployments or network attacks and planning the expansion or modification of cloud infrastructures. The second part of the dissertation deals with the automated allocation of the computing resources of a cloud data center among a set of hosted web applications. Modern virtualization technology is the main enabler of the consolidation of many distributed services in cloud data centers. ACRA (admission control and resource allocation) is an autonomous modeling and control framework that provides accurate models and jointly solves the admission control and resource allocation problems for web applications consolidated in cloud data centers. Its goal is to maximize the admission of user requests to the provided service while also fulfilling the prescribed Quality of Service requirements. The second local controller presented in this dissertation is an autonomous modeling and control framework for distributed web applications in a cloud environment, which solves the admission control and resource allocation problems simultaneously and in a unified way.
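The control scheme can be pictured as a discrete linear state-space model regulated toward a chosen feasible operating point by state feedback. The toy sketch below uses invented matrices and a fixed stabilizing gain; ACRA instead selects among several local models and derives the controller with the set-theoretic machinery described above.

```python
import numpy as np

# Hypothetical local model: x = [CPU utilisation, admitted request rate],
# u = [CPU-allocation adjustment, admission-rate adjustment].
A = np.array([[0.9, 0.05],
              [0.0, 0.8 ]])
B = np.eye(2)
K = np.array([[0.4, 0.0],
              [0.0, 0.3]])                # assumed stabilizing feedback gain
x_ref = np.array([0.6, 120.0])            # chosen feasible operating (equilibrium) point

x = np.array([0.95, 200.0])               # current state, outside the target region
for _ in range(20):
    u = -K @ (x - x_ref)                  # state feedback toward the operating point
    x = x_ref + A @ (x - x_ref) + B @ u   # deviation dynamics: e_next = (A - B K) e
print(np.round(x, 3))                     # approaches x_ref because (A - B K) is stable
```

Swapping in a different x_ref (for example one that favours throughput over consumption) changes the target without changing the controller structure, which is the point of computing a whole set of feasible operating points.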
596

Evaluation and Optimization of Turnaround Time and Cost of HPC Applications on the Cloud

Marathe, Aniruddha Prakash January 2014 (has links)
The popularity of Amazon's EC2 cloud platform has increased in the commercial and scientific high-performance computing (HPC) applications domain in recent years. However, many HPC users consider dedicated high-performance clusters, typically found in large compute centers such as those in national laboratories, to be far superior to EC2 because of the latter's significant communication overhead. We find this view to be quite narrow, and argue that the proper metrics for comparing high-performance clusters to EC2 are turnaround time and cost. In this work, we first compare an HPC-grade EC2 cluster to top-of-the-line HPC clusters based on turnaround time and total cost of execution. When measuring turnaround time, we include the expected queue wait time on HPC clusters. Our results show that, although standard HPC clusters are superior in raw performance as expected, they suffer from potentially significant queue wait times. We show that EC2 clusters may produce better turnaround times due to typically lower queue wait times. To estimate cost, we developed a pricing model---relative to EC2's node-hour prices---to set node-hour prices for (currently free) HPC clusters. We observe that the cost-effectiveness of running an application on a cluster depends on raw performance and application scalability. However, despite the potentially lower queue wait and turnaround times, the primary barrier to using clouds for many HPC users is cost. Amazon EC2 provides a fixed-cost option (called on-demand) and a variable-cost, auction-based option (called the spot market). The spot market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete. Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 spot market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to 7x cheaper than using the on-demand market and up to 44% cheaper than the best non-redundant, spot-market algorithm. Finally, we extend our adaptive algorithm to exploit several opportunities for cost savings on the EC2 spot market. First, we incorporate application scalability characteristics into our adaptive policy. We show that the adaptive algorithm, informed with the scalability characteristics of applications, achieves up to 56% cost savings compared to the expected cost for the base adaptive algorithm run at a fixed, user-defined scale. Second, we demonstrate the potential for obtaining considerable free computation time on the spot market, enabled by its hour-boundary pricing model.
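The on-demand versus spot trade-off ultimately reduces to an expected-cost comparison: spot nodes are cheaper per hour but pay a checkpointing overhead and redo work when interrupted. A back-of-the-envelope sketch with invented prices and probabilities (not measured EC2 data, and not the adaptive bidding algorithm from the thesis):

```python
ON_DEMAND_PRICE = 2.40        # $/node-hour (hypothetical)
SPOT_PRICE = 0.65             # $/node-hour at the current bid (hypothetical)
P_INTERRUPT = 0.15            # assumed chance of losing a spot node in any given hour
CHECKPOINT_OVERHEAD = 0.10    # fraction of each spot hour spent checkpointing

def expected_spot_hours(work_hours: float) -> float:
    """Crude expected wall-clock node-hours on the spot market.

    Useful work is slowed by the checkpoint overhead, and an interrupted hour
    is assumed lost and redone, so each productive hour takes 1/(1 - p) tries.
    """
    effective = work_hours / (1.0 - CHECKPOINT_OVERHEAD)
    return effective / (1.0 - P_INTERRUPT)

work = 10.0   # node-hours of useful computation
print(f"spot      ~ ${expected_spot_hours(work) * SPOT_PRICE:.2f}")
print(f"on-demand ~ ${work * ON_DEMAND_PRICE:.2f}")
```

With these illustrative numbers the spot market wins comfortably even after interruptions, which is why the deadline constraint, rather than raw price, becomes the binding concern.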
597

Demand Forecast, Resource Allocation and Pricing for Multimedia Delivery from the Cloud

Niu, Di 13 January 2014 (has links)
Video traffic constitutes a major part of Internet traffic nowadays. Yet most video delivery services remain best-effort, relying on server bandwidth over-provisioning to guarantee Quality of Service (QoS). Cloud computing is changing the way video services are offered, enabling elastic and efficient resource allocation through auto-scaling. In this thesis, we propose a new framework of cloud workload management for multimedia delivery services, incorporating demand forecast, predictive resource allocation and quality assurance, as well as resource pricing, as inter-dependent components. Based on trace analysis of a production Video-on-Demand (VoD) system, we propose time-series techniques to predict video bandwidth demand from online monitoring, and determine bandwidth reservations from multiple data centers together with the related load direction policy. We further study how such quality-guaranteed cloud services should be priced, in both a game-theoretical model and an optimization model. In particular, when multiple video providers coexist and share cloud resources, we use pricing to control resource allocation in order to maximize the aggregate network utility, which is a standard network utility maximization (NUM) problem with coupled objectives. We propose a novel class of iterative distributed solutions to such problems with a simple economic interpretation of pricing. The method proves to be more efficient than the conventional approach of dual decomposition and gradient methods for large-scale systems, both in theory and in trace-driven simulations.
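The forecast-then-reserve step can be sketched with a least-squares autoregressive fit on a short, made-up bandwidth series; the thesis uses richer seasonal time-series models and derives reservations from QoS constraints, and the 15% margin below is an arbitrary illustration.

```python
import numpy as np

def ar_forecast(history, p=3):
    """One-step-ahead forecast from a least-squares AR(p) fit.

    A minimal stand-in for the time-series predictors described above; real
    VoD demand would also need trend and seasonality terms.
    """
    y = np.asarray(history, dtype=float)
    X = np.array([y[t - p:t] for t in range(p, len(y))])   # lagged windows
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)       # fit AR coefficients
    return float(y[-p:] @ coef)                            # predict the next point

# Reserve bandwidth as the forecast plus a safety margin (illustrative numbers).
demand_gbps = [8.1, 8.4, 9.0, 9.6, 9.3, 9.8, 10.4, 10.9]
forecast = ar_forecast(demand_gbps)
print(f"forecast ~ {forecast:.2f} Gbps, reserve ~ {1.15 * forecast:.2f} Gbps")
```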
598

Market-based autonomous and elastic application execution on clouds

Costache, Stefania 03 July 2013 (has links) (PDF)
Organizations owning HPC infrastructures are facing difficulties in managing their resources. These difficulties stem from the need to provide concurrent resource access to different application types while considering that users might have different performance objectives for their applications. Cloud computing brings more flexibility and better resource control, promising to improve user satisfaction in terms of perceived Quality of Service. Nevertheless, current cloud solutions provide limited support for users to express or use various resource management policies, and they provide no support for application performance objectives. In this thesis, we present an approach that addresses this challenge in a unique way. Our approach provides fully decentralized resource control by allocating resources through a proportional-share market, while applications run in autonomous virtual environments capable of scaling the application demand according to user performance objectives. The combination of currency distribution and dynamic resource pricing ensures fair resource utilization. We evaluated our approach in simulation and on the Grid'5000 testbed. Our results show that our approach can enable the co-habitation of different resource usage policies on the infrastructure, improving resource utilization.
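The proportional-share market at the heart of this approach has a very small core rule: each application receives resources in proportion to the currency it bids. A minimal sketch (application names and numbers are invented):

```python
def proportional_share(bids: dict[str, float], capacity: float) -> dict[str, float]:
    """Proportional-share allocation: each bidder gets capacity * bid / total bids."""
    total = sum(bids.values())
    return {app: capacity * bid / total for app, bid in bids.items()}

# Three autonomous virtual environments bidding for 64 cores.
print(proportional_share({"batch": 30.0, "web": 50.0, "analytics": 20.0}, 64.0))
# -> {'batch': 19.2, 'web': 32.0, 'analytics': 12.8}
```

An application chasing a tighter performance objective simply bids more of its currency, and the market price of the contended resource rises accordingly, which is what keeps overall utilization fair.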
599

Design and Implementation of a Service Discovery and Recommendation Architecture for SaaS Applications

Sukkar, Muhamed January 2010 (has links)
An increasing number of software vendors are offering, or planning to offer, their applications as Software-as-a-Service (SaaS) to leverage the benefits of cloud computing and Internet-based delivery. Potential clients will therefore face a growing number of providers that satisfy their requirements and must choose among them, so there is an increasing demand for automating this time-consuming and error-prone task. In this work, we develop an architecture for automated service discovery and selection in a cloud computing environment. The system is based on an algorithm that recommends service choices to users based on both the functional and non-functional characteristics of the available services. The system also derives automated ratings from the monitoring results of past service invocations to objectively detect badly behaving providers. We demonstrate the effectiveness of our approach using an early prototype that was developed following an object-oriented methodology and implemented using various open-source Java technologies and frameworks. The prototype uses a Chord DHT as its distributed backing store to achieve scalability.
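One way to picture the recommendation step is a score that blends functional coverage of the client's requirements with an objective rating derived from monitoring past invocations. The weighting, the feature names and the data below are illustrative assumptions, not the thesis's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class SaaSOffer:
    name: str
    features: set[str]          # functional capabilities advertised by the provider
    monitored_rating: float     # 0..1, derived from monitoring past invocations

def score(offer: SaaSOffer, required: set[str], weight_rating: float = 0.4) -> float:
    """Rank offers by functional coverage blended with an objective rating."""
    coverage = len(offer.features & required) / len(required)
    return (1 - weight_rating) * coverage + weight_rating * offer.monitored_rating

required = {"rest-api", "sso", "eu-hosting"}
offers = [
    SaaSOffer("ProviderA", {"rest-api", "sso"}, 0.9),
    SaaSOffer("ProviderB", {"rest-api", "sso", "eu-hosting"}, 0.6),
]
print(max(offers, key=lambda o: score(o, required)).name)
```

Because the rating term comes from measured behaviour rather than self-reported quality, a provider that advertises everything but performs badly sinks in the ranking.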
600

Scalable Scientific Computing Algorithms Using MapReduce

Xiang, Jingen January 2013 (has links)
Cloud computing systems, like MapReduce and Pregel, provide a scalable and fault tolerant environment for running computations at massive scale. However, these systems are designed primarily for data intensive computational tasks, while a large class of problems in scientific computing and business analytics are computationally intensive (i.e., they require a lot of CPU in addition to I/O). In this thesis, we investigate the use of cloud computing systems, in particular MapReduce, for computationally intensive problems, focusing on two classic problems that arise in scientific computing and also in analytics: maximum clique and matrix inversion. The key contribution that enables us to effectively use MapReduce to solve the maximum clique problem on dense graphs is a recursive partitioning method that partitions the graph into several subgraphs of similar size and running time complexity. After partitioning, the maximum cliques of the different partitions can be computed independently, and the computation is sped up using a branch and bound method. Our experiments show that our approach leads to good scalability, which is unachievable by other partitioning methods since they result in partitions of different sizes and hence lead to load imbalance. Our method is more scalable than an MPI algorithm, and is simpler and more fault tolerant. For the matrix inversion problem, we show that a recursive block LU decomposition allows us to effectively compute in parallel both the lower triangular (L) and upper triangular (U) matrices using MapReduce. After computing the L and U matrices, their inverses are computed using MapReduce. The inverse of the original matrix, which is the product of the inverses of the L and U matrices, is also obtained using MapReduce. Our technique is the first matrix inversion technique that uses MapReduce. We show experimentally that our technique has good scalability, and it is simpler and more fault tolerant than MPI implementations such as ScaLAPACK.
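The algebra behind the matrix-inversion technique (invert the triangular factors of an LU decomposition, then multiply) can be checked on a single machine; the thesis's contribution is carrying out the recursive block factorisation and the triangular inverses as MapReduce jobs. A small NumPy/SciPy sketch of the identity only:

```python
import numpy as np
from scipy.linalg import lu

def inverse_via_lu(A):
    """Invert A through A = P L U, so that A^{-1} = U^{-1} L^{-1} P^T.

    Single-machine check of the algebra; the thesis computes the block LU
    factorisation and both triangular inverses with MapReduce jobs.
    """
    P, L, U = lu(A)
    return np.linalg.inv(U) @ np.linalg.inv(L) @ P.T

A = np.random.default_rng(1).normal(size=(5, 5))
print(np.allclose(inverse_via_lu(A) @ A, np.eye(5)))   # True
```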
