591 |
Protecting sensitive information from untrusted code. Roy, Indrajit, 13 December 2010 (has links)
As computer systems support more aspects of modern life, from finance to health care, security is becoming increasingly important. However, building secure systems remains a challenge. Software continues to have security vulnerabilities, for reasons ranging from programmer errors to inadequate programming tools. Because of these vulnerabilities, we need mechanisms that protect sensitive data even when the software is untrusted. This dissertation shows that secure and practical frameworks can be built for protecting users' data from untrusted applications in both desktop and cloud computing environments.
Laminar is a new framework that secures desktop applications by enforcing policies written as information flow rules. Information flow control, a form of mandatory access control, enables programmers to express powerful, end-to-end security guarantees while reducing the amount of trusted code. Current programming abstractions and implementations of this model either compromise end-to-end security guarantees or require substantial modifications to applications, thus deterring adoption. Laminar addresses these shortcomings by exporting a single set of abstractions to control information flows through operating system resources and heap-allocated objects. Programmers express security policies by labeling data and by representing access restrictions on code using a new abstraction called a security region. The Laminar programming model eases incremental deployment, limits dynamic security checks, and supports multithreaded programs that can access heterogeneously labeled data.
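To give a rough picture of the labeling model this kind of information flow control builds on, secrecy labels can be treated as sets of tags, with data allowed to reach a computation only if the computation's label carries every tag on the data. The sketch below is a minimal illustration of that rule under simplified semantics; the class and function names are hypothetical and it is not Laminar's actual API.

```python
# Minimal sketch of decentralized information-flow-control (DIFC) label checks.
# Illustration of the general labeling model only, not Laminar's API;
# names here (Label, security_region) are hypothetical.

class Label:
    """A secrecy label: a set of tags, one per data owner or policy."""
    def __init__(self, *tags):
        self.tags = frozenset(tags)

    def can_flow_to(self, other):
        # Information may flow from self to other only if every secrecy tag
        # on the source is also carried by the sink (label dominance).
        return self.tags <= other.tags


def security_region(thread_label, data_label, action):
    """Run `action` only if the current thread's label dominates the data's label."""
    if not data_label.can_flow_to(thread_label):
        raise PermissionError("information-flow violation")
    return action()


if __name__ == "__main__":
    alice_secret = Label("alice")          # label on Alice's sensitive data
    trusted_worker = Label("alice", "audit")   # a thread trusted with Alice's tag
    untrusted = Label()                    # an unlabeled, untrusted thread

    print(security_region(trusted_worker, alice_secret, lambda: "ok"))   # allowed
    try:
        security_region(untrusted, alice_secret, lambda: "leak!")        # rejected
    except PermissionError as err:
        print(err)
```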
In large-scale, distributed computations, safeguarding information requires solutions beyond mandatory access control. An important challenge is to ensure that the computation, including its output, does not leak sensitive information about the inputs. For untrusted code, access control cannot guarantee that the output does not leak information. This dissertation proposes Airavat, a MapReduce-based system that augments mandatory access control with differential privacy to guarantee security and privacy for distributed computations. Data providers control the security policy for their sensitive data, including a mathematical bound on potential privacy violations. Users without security expertise can perform computations on the data; Airavat prevents information leakage beyond the data provider's policy. Our prototype implementation of Airavat demonstrates that several data mining tasks can be performed in a privacy-preserving fashion with modest performance overheads.
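For readers unfamiliar with the privacy mechanism: differential privacy of the kind Airavat enforces is typically achieved by adding Laplace noise calibrated to the query's sensitivity and the privacy budget epsilon. The sketch below shows that generic mechanism on a toy aggregate; the data, epsilon, and sensitivity bound are made-up assumptions, and this is not Airavat's MapReduce implementation.

```python
# Generic sketch of the Laplace mechanism behind differential privacy.
# Illustrative only; not Airavat's implementation.
import numpy as np

def private_sum(values, epsilon, sensitivity):
    """Release the sum of `values` with epsilon-differential privacy.

    `sensitivity` bounds how much a single record can change the sum;
    Airavat-style systems enforce such a bound on per-record contributions.
    The Laplace noise scale is sensitivity / epsilon.
    """
    return float(np.sum(values) + np.random.laplace(0.0, sensitivity / epsilon))

if __name__ == "__main__":
    salaries = [52_000, 61_500, 47_000, 80_000]
    # Assume each record contributes at most 100_000 to the sum.
    print(private_sum(salaries, epsilon=0.5, sensitivity=100_000))
```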
|
592 |
Monitoring and control of distributed web services on cloud computing infrastructure. Δεχουνιώτης, Δημήτριος, 26 August 2014 (has links)
This thesis concerns two main research areas of distributed web services deployed on cloud computing infrastructure.
The first category is about monitoring of cloud computing infrastructure. In chapter 2, a novel, general technique is used to infer relationships between the different service components in a data center. This approach relies on a small set of fuzzy rules, produced by a hybrid genetic algorithm with a high classification rate. Furthermore, the strength of the detected dependencies is measured. Although we do not know the ground truth about the relationships in a network, the proposed method mines realistic relationships without any prior information about the network topology and infrastructure. This approach can be a useful monitoring tool for administrators to obtain a clear view of what is happening in the underlying network. Finally, because of the simplicity of our algorithm and the flexibility of FIM, an online approach seems feasible.
The second major problem, which is addressed in chapter 3, is the automated resource control of consolidated web applications on cloud computing infrastructure. ACRA is an innovative modeling and control technique for distributed services that are co-located on a server cluster. The system dynamics are modeled by a group of linear state-space models that cover the full range of workload conditions. Because the workload varies, there are non-linear terms and uncertainties, which are modeled by an additive term in the local linear models. Because several types of service transactions have varying time and resource demands, there are many candidate reference values for the SLOs over the course of a day. Given these requirements and the workload conditions, we choose the appropriate model and compute the closest feasible operating point according to several optimization criteria. Then, using a set-theoretic technique, a state-feedback controller is designed that drives and stabilizes the system in the region of the equilibrium point. The ACRA controller computes a positively invariant set in the state space that includes the target set and drives the system trajectories into it, thus providing a stability guarantee and a high level of robustness against system disturbances and nonlinearities. Furthermore, we compare ACRA with an MPC and a PI controller; the results are very promising, since our solution outperforms both alternative approaches.
Secondly, a unified local-level modeling and control framework for consolidated web services in a server cluster was presented, which can be a vital element of a holistic distributed control platform. Admission control and resource allocation were addressed as a common decision problem, and stability and constraint satisfaction were guaranteed. A real testbed was built, and from a range of examples under different operating conditions we conclude that both the identification scheme and the controller provide a high level of QoS. A novel component of this approach is the determination of a set of feasible operating (equilibrium) points, which allows choosing the appropriate equilibrium point depending only on the objectives, such as maximizing throughput, minimizing consumption, or maximizing profit. The evaluation shows that our approach performs well compared to well-known solutions, such as queuing models and measurement-based approaches to finding equilibrium points.
Both controllers achieve their main objectives relative to the studies already proposed in the literature. First, they satisfy the SLA requirements and the constraints of the underlying cloud computing infrastructure. To the best of our knowledge, they are the only studies that calculate a set of feasible operating points that ensure system stability. Furthermore, they adopt modern control theory and, beyond the stability guarantee, introduce new control properties such as positively invariant sets, ultimate boundedness, and ε-contractive sets. / This doctoral dissertation addresses two research problems. First, a network-traffic monitoring technique is developed to discover the functional relationships between the different parts of a web application. The second part solves the problem of automated resource allocation for web applications that share a common cloud computing infrastructure. Relative to the existing literature, the goal of the first chapter is to build a network-traffic analysis tool that makes the functional relationships between the parts of distributed web services understandable. The resulting graph is a primary tool for many administrative tasks in the areas of performance analysis and root-cause analysis, for example detecting misconfigurations or network attacks and planning the expansion or modification of cloud infrastructures.
The second part of this dissertation deals with the automated allocation of the computing resources of a cloud data center among a set of deployed web applications. Modern virtualization technology is the main enabler for consolidating many distributed services in cloud data centers.
ACRA (admission control and resource allocation) is an autonomous modeling and control framework that provides accurate models and solves the admission control and resource allocation problems of web applications consolidated in cloud data centers in a unified way. Its goal is to maximize the admission of user requests to the provided service while also meeting the prescribed Quality of Service (QoS) requirements. The second local controller presented in this dissertation is an autonomous modeling and control framework for distributed web applications in a cloud environment, which solves the admission control and resource allocation problems simultaneously and in a unified manner.
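The state-feedback idea behind this kind of controller can be illustrated with a toy discrete-time model: a local linear model x_{k+1} = A x_k + B u_k around a chosen operating point, regulated by u_k = u* + K (x_k - x*). The matrices, gain, and operating point below are invented for the example, and the sketch omits the set-theoretic design and invariant-set computation described in the thesis.

```python
# Toy illustration of regulating a local linear service model toward a chosen
# operating point with state feedback, in the spirit of ACRA. All numbers are
# invented; the thesis derives the controller with set-theoretic methods.
import numpy as np

A = np.array([[0.8, 0.1],
              [0.0, 0.7]])        # local dynamics of the performance state
B = np.array([[0.5],
              [0.3]])             # effect of the resource-allocation input
K = np.array([[-0.6, -0.4]])      # stabilizing feedback gain (hand-picked here)

x_ref = np.array([1.5, 0.5])      # desired operating point (SLO reference)
u_ref = np.array([0.5])           # steady-state input: x_ref = A @ x_ref + B @ u_ref

x = np.array([3.0, 2.0])          # initial state far from the reference
rng = np.random.default_rng(0)
for _ in range(30):
    u = u_ref + K @ (x - x_ref)                          # feedback around the equilibrium
    x = A @ x + B @ u + 0.02 * rng.standard_normal(2)    # bounded workload disturbance
print("final state:", np.round(x, 3), "reference:", x_ref)
```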
|
593 |
Evaluation and Optimization of Turnaround Time and Cost of HPC Applications on the Cloud. Marathe, Aniruddha Prakash, January 2014 (has links)
The popularity of Amazon's EC2 cloud platform has increased in the commercial and scientific high-performance computing (HPC) application domains in recent years. However, many HPC users consider dedicated high-performance clusters, typically found in large compute centers such as those in national laboratories, to be far superior to EC2 because of the latter's significant communication overhead. We find this view to be quite narrow and argue that the proper metrics for comparing high-performance clusters to EC2 are turnaround time and cost. In this work, we first compare an HPC-grade EC2 cluster to top-of-the-line HPC clusters based on turnaround time and total cost of execution. When measuring turnaround time, we include expected queue wait time on HPC clusters. Our results show that, although standard HPC clusters are superior in raw performance as expected, they suffer from potentially significant queue wait times. We show that EC2 clusters may produce better turnaround times due to typically lower queue wait times. To estimate cost, we developed a pricing model---relative to EC2's node-hour prices---to set node-hour prices for (currently free) HPC clusters. We observe that the cost-effectiveness of running an application on a cluster depends on raw performance and application scalability. However, despite the potentially lower queue wait and turnaround times, the primary barrier to using clouds for many HPC users is cost. Amazon EC2 provides a fixed-cost option (called on-demand) and a variable-cost, auction-based option (called the spot market). The spot market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete. Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 spot market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to 7x more cheaply than using the on-demand market and up to 44% more cheaply than the best non-redundant spot-market algorithm. Finally, we extend our adaptive algorithm to exploit several opportunities for cost savings on the EC2 spot market. First, we incorporate application scalability characteristics into our adaptive policy. We show that the adaptive algorithm, informed by the scalability characteristics of applications, achieves up to 56% cost savings compared with the expected cost of the base adaptive algorithm run at a fixed, user-defined scale. Second, we demonstrate the potential for obtaining considerable free computation time on the spot market, enabled by its hour-boundary pricing model.
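The cost trade-off described here can be pictured with a back-of-the-envelope model: on-demand nodes cost more per hour but are never interrupted, while spot nodes are cheaper but every interruption forces re-execution back to the last checkpoint. The prices, interruption rate, and overheads below are illustrative assumptions, and the simple formula is not the thesis's adaptive bidding and scheduling policy.

```python
# Back-of-the-envelope comparison of on-demand vs. spot execution cost.
# All constants are invented for illustration; the adaptive policy in the
# thesis is far more sophisticated (redundancy, bid selection, deadlines).

def on_demand_cost(work_hours, price_per_hour):
    return work_hours * price_per_hour

def expected_spot_cost(work_hours, spot_price, interruptions_per_hour,
                       rework_hours_per_interruption, checkpoint_overhead=0.05):
    """Expected cost when interruptions force re-execution back to the last checkpoint."""
    effective_hours = work_hours * (1 + checkpoint_overhead)
    expected_rework = effective_hours * interruptions_per_hour * rework_hours_per_interruption
    return (effective_hours + expected_rework) * spot_price

if __name__ == "__main__":
    work = 100   # node-hours of useful computation
    print("on-demand:", on_demand_cost(work, 0.50))
    print("spot     :", expected_spot_cost(work, 0.15,
                                           interruptions_per_hour=0.02,
                                           rework_hours_per_interruption=0.5))
```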
|
594 |
Demand Forecast, Resource Allocation and Pricing for Multimedia Delivery from the Cloud. Niu, Di, 13 January 2014 (has links)
Video traffic constitutes a major part of Internet traffic nowadays. Yet most video delivery services remain best-effort, relying on server bandwidth over-provisioning to guarantee Quality of Service (QoS). Cloud computing is changing the way video services are offered, enabling elastic and efficient resource allocation through auto-scaling. In this thesis, we propose a new framework of cloud workload management for multimedia delivery services, incorporating demand forecasting, predictive resource allocation and quality assurance, as well as resource pricing, as inter-dependent components. Based on trace analysis of a production Video-on-Demand (VoD) system, we propose time-series techniques to predict video bandwidth demand from online monitoring, and determine bandwidth reservations from multiple data centers and the related load direction policy. We further study how such quality-guaranteed cloud services should be priced, in both a game-theoretic model and an optimization model. In particular, when multiple video providers coexist and use cloud resources, we use pricing to control resource allocation in order to maximize the aggregate network utility, which is a standard network utility maximization (NUM) problem with coupled objectives. We propose a novel class of iterative distributed solutions to such problems with a simple economic interpretation of pricing. The method proves to be more efficient than the conventional approach of dual decomposition and gradient methods for large-scale systems, both in theory and in trace-driven simulations.
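For readers unfamiliar with the NUM setup, the conventional dual-decomposition baseline that this work compares against runs roughly as sketched below: a coordinator posts a price for the shared resource, each provider independently chooses the demand that maximizes its utility minus payment, and the price is adjusted by the excess demand. The utilities and constants are illustrative, and this is the standard gradient method, not the faster iterative scheme proposed in the thesis.

```python
# Conventional dual-decomposition baseline for network utility maximization
# (NUM) with one shared resource. Weights, capacity, and step size are made up.

def optimal_demand(weight, price):
    # Each provider maximizes weight * log(x) - price * x, giving x = weight / price.
    return weight / price

def dual_decomposition(weights, capacity, step=0.001, iters=2000):
    price = 1.0
    demands = []
    for _ in range(iters):
        demands = [optimal_demand(w, price) for w in weights]
        excess = sum(demands) - capacity
        price = max(1e-6, price + step * excess)   # gradient ascent on the dual price
    return price, demands

if __name__ == "__main__":
    price, demands = dual_decomposition(weights=[1.0, 2.0, 3.0], capacity=60.0)
    # Converges toward a clearing price of 0.1 and allocations of roughly [10, 20, 30].
    print(round(price, 4), [round(d, 2) for d in demands])
```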
|
595 |
Market-based autonomous and elastic application execution on clouds. Costache, Stefania, 03 July 2013 (has links) (PDF)
Organizations owning HPC infrastructures are facing difficulties in managing their resources. These difficulties come from the need to provide concurrent resource access to different application types while considering that users might have different performance objectives for their applications. Cloud computing brings more flexibility and better resource control, promising to improve the users' satisfaction in terms of perceived Quality of Service. Nevertheless, current cloud solutions provide limited support for users to express or use various resource management policies, and they do not provide any support for application performance objectives. In this thesis, we present an approach that addresses this challenge in a unique way. Our approach provides fully decentralized resource control by allocating resources through a proportional-share market, while applications run in autonomous virtual environments capable of scaling the application demand according to user performance objectives. The combination of currency distribution and dynamic resource pricing ensures fair resource utilization. We evaluated our approach in simulation and on the Grid'5000 testbed. Our results show that our approach enables the co-habitation of different resource usage policies on the infrastructure, improving resource utilization.
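A proportional-share market of the kind used here can be summarized in a few lines: each virtual environment spends currency at some rate, and every resource is divided among the bidders in proportion to their spending. The helper below is a simplified sketch of that allocation rule with made-up application names and numbers, not the thesis's full market implementation.

```python
# Simplified proportional-share allocation: each application receives a share of
# the resource proportional to the currency it bids. Illustrative only.

def proportional_share(capacity, bids):
    """Split `capacity` among applications in proportion to their bids."""
    total = sum(bids.values())
    if total == 0:
        return {app: 0.0 for app in bids}
    return {app: capacity * bid / total for app, bid in bids.items()}

if __name__ == "__main__":
    cpu_shares = proportional_share(capacity=32.0,   # e.g., 32 CPU cores
                                    bids={"batch-job": 10, "web-app": 30, "analytics": 60})
    print(cpu_shares)   # {'batch-job': 3.2, 'web-app': 9.6, 'analytics': 19.2}
```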
|
596 |
Design and Implementation of a Service Discovery and Recommendation Architecture for SaaS Applications. Sukkar, Muhamed, January 2010 (has links)
An increasing number of software vendors are offering, or planning to offer, their applications as Software-as-a-Service (SaaS) to leverage the benefits of cloud computing and Internet-based delivery. Potential clients will therefore face a growing number of providers that satisfy their requirements to choose from. Consequently, there is an increasing demand for automating such a time-consuming and error-prone task. In this work, we develop an architecture for automated service discovery and selection in a cloud computing environment. The system is based on an algorithm that recommends service choices to users based on both functional and non-functional characteristics of the available services. The system also derives automated ratings from the monitoring results of past service invocations to objectively detect badly behaving providers. We demonstrate the effectiveness of our approach using an early prototype that was developed following an object-oriented methodology and implemented using various open-source Java technologies and frameworks. The prototype uses a Chord DHT as its distributed backing store to achieve scalability.
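One simple way to picture the recommendation step is as a weighted score over a provider's non-functional attributes combined with a rating derived from monitored behavior. The attribute names, weights, and data in the sketch below are hypothetical (and it is in Python rather than the prototype's Java); it is an illustrative stand-in, not the architecture's actual algorithm.

```python
# Illustrative ranking of SaaS providers by weighted non-functional attributes
# plus a monitoring-derived rating. All names and values are hypothetical.

def score(provider, weights):
    return sum(weights[attr] * provider.get(attr, 0.0) for attr in weights)

def recommend(providers, weights):
    """Return candidate providers sorted from best to worst score."""
    return sorted(providers, key=lambda p: score(p, weights), reverse=True)

if __name__ == "__main__":
    weights = {"availability": 0.4, "performance": 0.3, "monitored_rating": 0.3}
    candidates = [
        {"name": "provider-a", "availability": 0.99, "performance": 0.8, "monitored_rating": 0.6},
        {"name": "provider-b", "availability": 0.95, "performance": 0.9, "monitored_rating": 0.9},
    ]
    for p in recommend(candidates, weights):
        print(p["name"], round(score(p, weights), 3))
```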
|
597 |
Scalable Scientific Computing Algorithms Using MapReduce. Xiang, Jingen, January 2013 (has links)
Cloud computing systems, like MapReduce and Pregel, provide a scalable and fault tolerant environment for running computations at massive scale. However, these systems are designed primarily for data intensive computational tasks, while a large class of problems in
scientific computing and business analytics are computationally intensive (i.e., they require a lot of CPU in addition to I/O). In this thesis, we investigate the use of cloud computing systems, in particular MapReduce, for computationally intensive problems, focusing on two classic problems that arise in scientific computing and also in analytics: maximum clique and matrix inversion.
The key contribution that enables us to effectively use MapReduce to solve the maximum clique problem on dense graphs is a recursive partitioning method that partitions the graph into several subgraphs of similar size and running time complexity. After partitioning, the maximum cliques of the different partitions can be computed independently, and the computation is sped up using a branch and bound method. Our experiments show that our approach leads to good scalability, which is unachievable by other partitioning methods since they result in partitions of different sizes and hence lead to load imbalance. Our method is more scalable than an MPI algorithm, and is simpler and more fault tolerant.
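For intuition, the per-partition search can be pictured as a sequential branch-and-bound maximum-clique routine like the sketch below; the thesis's contribution is the recursive partitioning that lets many such searches run as independent MapReduce tasks, which is not shown here.

```python
# Sequential branch-and-bound maximum-clique search on a small graph.
# Sketches the per-partition computation only; the recursive partitioning and
# MapReduce orchestration from the thesis are omitted.

def max_clique(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        if len(clique) + len(candidates) <= len(best):
            return                      # bound: cannot beat the best clique found so far
        if not candidates:
            best = list(clique)         # every vertex in `clique` is pairwise adjacent
            return
        for v in sorted(candidates):
            expand(clique + [v], candidates & adj[v])
            candidates = candidates - {v}   # never revisit v at this level

    expand([], set(adj))
    return best

if __name__ == "__main__":
    graph = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}
    print(max_clique(graph))   # [0, 1, 2, 3]
```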
For the matrix inversion problem, we show that a recursive block LU decomposition allows us to effectively compute in parallel both the lower triangular (L) and upper triangular
(U) matrices using MapReduce. After computing the L and U matrices, their inverses are computed using MapReduce. The inverse of the original matrix, which is the product
of the inverses of the L and U matrices, is also obtained using MapReduce. Our technique is the first matrix inversion technique that uses MapReduce. We show experimentally that our technique has good scalability, and it is simpler and more fault tolerant than MPI implementations such as ScaLAPACK.
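The underlying algebra is compact: if A = P L U, then A^{-1} = U^{-1} L^{-1} P^{T}. The single-node sketch below checks that identity with SciPy; the distributed, block-wise MapReduce computation is the thesis's contribution and is not reproduced here.

```python
# Single-node illustration of matrix inversion via LU decomposition:
# A = P L U  =>  inv(A) = inv(U) @ inv(L) @ P.T
# The thesis distributes each of these steps with MapReduce; that part is omitted.
import numpy as np
from scipy.linalg import lu, solve_triangular

def invert_via_lu(a):
    p, l, u = lu(a)                                    # a = p @ l @ u
    n = a.shape[0]
    inv_l = solve_triangular(l, np.eye(n), lower=True)
    inv_u = solve_triangular(u, np.eye(n), lower=False)
    return inv_u @ inv_l @ p.T

if __name__ == "__main__":
    a = np.random.rand(5, 5) + 5 * np.eye(5)           # well-conditioned test matrix
    a_inv = invert_via_lu(a)
    print(np.allclose(a @ a_inv, np.eye(5)))           # True
```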
|
598 |
Methodology of user activities reconstruction for forensic purposes in cloud storage. Saikauskas, Nerijus, 26 August 2013 (has links)
Even though the creation of cloud computing technology has provided opportunities to increase the effectiveness of companies, it has also generated new problems, one of which is digital forensics in remote environments. It is generally agreed that if a cloud service does not record appropriate logs, identifying evidence becomes hard, if not impossible. Unfortunately, the existing functionality for this purpose is limited or absent altogether. In this Master's thesis a new method and tool, Žurnalizavimo Paramos Sistema (ŽPS), is proposed, which combines the open-source digital forensic software The Sleuth Kit and The Volatility Framework with the help of the Python programming language and helps to record and reconstruct user activities in cloud storage environments. ŽPS implements the unified logging format proposed by other authors for such settings and creates a self-describing-data effect, which is thought to be an important step towards proper crime investigation in cloud storage environments. During the experimental evaluation, the method proved to be highly effective, managing to reconstruct more than 65% of user actions, depending on how active the users were, provided that copies of the virtual machines were created and analyzed at least every 5 minutes.
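A minimal way to picture the unified-log idea is to normalize events recovered from different forensic sources into a common record format and merge them into one timeline ordered by timestamp. The sketch below uses hypothetical record fields and sample data; ŽPS itself builds its records from The Sleuth Kit and Volatility output, which is not reproduced here.

```python
# Minimal sketch of merging events from different forensic sources into one
# unified, time-ordered log. Record fields and sample data are hypothetical.
import heapq
from datetime import datetime

def normalize(source, timestamp, action, obj):
    """A unified record: (time, source tool, user action, affected object)."""
    return (datetime.fromisoformat(timestamp), source, action, obj)

def unified_timeline(*event_streams):
    """Merge already-sorted event streams into a single chronological timeline."""
    return list(heapq.merge(*event_streams, key=lambda record: record[0]))

if __name__ == "__main__":
    filesystem_events = [
        normalize("disk-image", "2013-05-01T10:00:05", "file created", "report.docx"),
        normalize("disk-image", "2013-05-01T10:03:40", "file deleted", "draft.tmp"),
    ]
    memory_events = [
        normalize("memory-dump", "2013-05-01T10:01:12", "process started", "sync-client"),
    ]
    for record in unified_timeline(filesystem_events, memory_events):
        print(record)
```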
|
600 |
The Design and Applications of a Privacy-Preserving Identity and Trust-Management System. Hussain, Mohammed, 08 April 2010 (has links)
Identities are present in the interactions between individuals and organizations.
Online shopping requires credit card information, while e-government services require social security or passport numbers. The involvement of identities, however, makes them susceptible to theft and misuse.
The most prominent approach for maintaining the privacy of individuals is the enforcement of privacy policies that regulate the flow and use of identity information.
This approach suffers from two drawbacks that severely limit its effectiveness. First, recent research in data mining facilitates the fusion of partial identities into complete identities. That holds true even if the attributes examined are not normally considered to be identifying. Second, policies are prone to human error, allowing identity information to be released accidentally.
This thesis presents a system that enables an individual to interact with organizations, without allowing these organizations to link the interactions of that individual together. The system does not release individuals' identities to
organizations. Instead, certified artificial identities are used to guarantee that individuals possess the required attributes to successfully participate in the interactions. The system limits the fusion of partial identities and minimizes the effects of human error. The concept of using certified artificial identities has been
extensively researched. The system, however, tackles several unaddressed scenarios.
The system works not only for interactions that involve an individual and an organization, but also for interactions
that involve a set of individuals connected by structured relations. The individuals should prove the existence of relations among
them to organizations, yet organizations cannot profile the actions of these individuals. Further, the system allows organizations to be anonymous, while proving their attributes to individuals. Reputation-based trust is incorporated to help individuals make informed decisions whether to deal with a particular organization.
The system is used to design applications in e-commerce, access control, reputation management, and cloud computing. The thesis describes the applications in detail. / Thesis (Ph.D., Computing) -- Queen's University, 2010
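A much-simplified way to see why artificial identities prevent linking is the pseudonym construction sketched below: each individual derives a different, unlinkable identifier per organization from a personal secret, so two organizations cannot join their records about that person. This toy HMAC-based illustration only shows unlinkability and is not the certified-credential scheme the thesis designs, which additionally lets users prove possession of attributes.

```python
# Toy illustration of unlinkable per-organization pseudonyms derived from a
# personal secret. NOT the thesis's certified artificial-identity scheme;
# it demonstrates only that identifiers at different organizations cannot be linked.
import hmac
import hashlib
import secrets

def pseudonym(user_secret: bytes, organization: str) -> str:
    """Derive a stable pseudonym for one organization; different orgs see unrelated values."""
    return hmac.new(user_secret, organization.encode(), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    alice_secret = secrets.token_bytes(32)
    print("at shop :", pseudonym(alice_secret, "online-shop.example"))
    print("at e-gov:", pseudonym(alice_secret, "e-government.example"))
    # Each identifier is stable per organization but cannot be linked to the
    # other or to Alice without her secret.
```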
|