51 |
Flexible Computing with Virtual Machines. Lagar Cavilla, Horacio Andres, 30 March 2011.
This thesis is predicated upon a vision of the future of computing with a separation of functionality between core and edges, very
similar to that governing the Internet itself. In this vision, the core of our computing infrastructure is made up of vast server farms with an abundance of storage and processing cycles. Centralization of
computation in these farms, coupled with high-speed wired or wireless connectivity, allows for pervasive access to a highly-available and well-maintained repository for data, configurations, and applications. Computation at the edges is concerned with provisioning application state and user data to rich clients, notably mobile devices equipped with powerful displays and graphics processors.
We define flexible computing as systems support for applications that dynamically leverage the resources available in the core
infrastructure, or cloud. The work in this thesis focuses on two instances of flexible computing that are crucial to the
realization of the aforementioned vision. Location flexibility aims to transparently and seamlessly migrate applications between the edges and the core based on user demand. This enables performing interactive tasks on rich edge clients and computational tasks on powerful core servers. Scale flexibility is the ability of
applications executing in cloud environments, such as parallel jobs or
clustered servers, to swiftly grow and shrink their footprint according to execution demands.
This thesis shows how we can use system virtualization to implement systems that provide scale and location flexibility. To that effect we build and evaluate two system prototypes: Snowbird and SnowFlock. We present techniques for manipulating virtual machine state that turn running software into a malleable entity which is easily manageable, is decoupled from the underlying hardware, and is capable of dynamic relocation and scaling. This thesis demonstrates that virtualization technology is a powerful and suitable tool to
enable solutions for location and scale flexibility.
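
As a rough, hedged illustration of the relocation primitive that location flexibility builds on (this is generic libvirt live migration, not the Snowbird or SnowFlock mechanisms themselves), the Python sketch below moves a running guest from an edge host to a core host; the connection URIs and the domain name are hypothetical.

```python
# Minimal sketch of live VM relocation using the libvirt Python bindings.
# This is plain live migration, NOT the Snowbird/SnowFlock techniques;
# the host URIs and the domain name "worker-vm" are hypothetical.
import libvirt

src = libvirt.open("qemu+ssh://edge-host/system")   # hypervisor at the edge
dst = libvirt.open("qemu+ssh://core-host/system")   # hypervisor in the core

dom = src.lookupByName("worker-vm")                 # running guest to relocate

# Live-migrate: memory pages are copied iteratively while the guest keeps
# running, and execution switches over once the remaining dirty set is small.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```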
|
52 |
System Support for Hosting Services on Server Cluster. Tseng, Chun-wei, 22 July 2004.
With the rapid spread of Internet infrastructure and the explosive growth of World Wide Web services, many traditional companies have web-enabled their business over the Internet and use the WWW as their application platform. Web services such as online search, e-learning, e-commerce, multimedia, network services, and e-government need stable and powerful web servers to handle the demand.
Web hosting services are becoming increasingly important: service providers offer system resources to store, and provide Web access to, content from individuals, institutions, and companies that lack the resources or expertise to maintain their own Web sites. A Web server cluster is a popular architecture used by hosting service providers to build scalable and highly available solutions. However, hosting a variety of content from different customers on such a distributed server system raises new design and management problems. This dissertation describes work toward constructing a system that addresses the challenges of hosting Web content on a server cluster.
Building on the software-based content-aware distributor, highly repetitive and fixed tasks originally handled by the software module are offloaded to a hardware module to speed up packet processing and to share the software module's load. Our software-based distributor is implemented as a loadable module in the Linux kernel, and a few user-level system calls are provided to support the integration of resource management.
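
Purely as a hedged, user-space illustration of content-aware dispatch (the dissertation implements this logic as a Linux kernel module with hardware assistance), the Python sketch below routes requests to backend pools based on the requested URL prefix; the addresses and routing rules are invented.

```python
# User-space sketch of content-aware request dispatch. The real system lives
# in the Linux kernel with hardware offload; backends and rules here are
# invented for illustration.
import itertools
from urllib.parse import urlparse

# Map URL prefixes (per-customer content) to a round-robin pool of backends.
ROUTING_TABLE = {
    "/customer-a/": itertools.cycle(["10.0.0.11:8080", "10.0.0.12:8080"]),
    "/customer-b/": itertools.cycle(["10.0.0.21:8080"]),
}
DEFAULT_POOL = itertools.cycle(["10.0.0.31:8080"])

def pick_backend(request_url: str) -> str:
    """Choose a backend server based on the content the request asks for."""
    path = urlparse(request_url).path
    for prefix, pool in ROUTING_TABLE.items():
        if path.startswith(prefix):
            return next(pool)          # round-robin within the matching pool
    return next(DEFAULT_POOL)

if __name__ == "__main__":
    print(pick_backend("http://hosting.example/customer-a/index.html"))
```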
|
53 |
Dependability aspects of COM+ and EJB in multi-tiered distributed systems. Karásson, Robert, January 2002.
COM+ and Enterprise JavaBeans (EJB) are two component-based technologies that can be used to build enterprise systems. They are competing technologies in today's software industry, and choosing which one a company should use to build its enterprise system is not an easy task. There are many factors to consider; in this project we evaluate the two technologies with a focus on scalability and the dependability aspects of security, availability, and reliability. Each technology is evaluated theoretically and independently against these criteria.

We use a 4-tier architecture for the evaluation, and the center of attention is a persistence layer, which typically resides in an application server, and how it can be realized using each technology. The evaluation results in a recommendation about which technology is the better approach for building a scalable and dependable distributed system: COM+ is considered the better approach for this kind of multi-tier distributed system.
|
54 |
The Quest for Equilibrium: Towards an Understanding of Scalability and Sustainability for Mobile Learning. Wingkvist, Anna, January 2008.
The research presented in this thesis investigates the concept of sustainability in relation to mobile learning initiatives. Sustainability is seen as a key concept for mobile learning to gain acceptance. Linking sustainability to scalability, a term used to describe how well something can grow to suit increasing complexity, provides a representation of this process. In this thesis, this process is called "the quest for equilibrium."

A study was conducted of an actual mobile learning initiative that involved introducing podcasts as a supplement to traditional lectures in higher education. In following this initiative, thorough data gathering was conducted, utilizing the iterative cycles that characterize the action research approach. In addition, a literature survey was conducted in which leading publications in mobile learning were classified and analyzed according to the following criteria: Reflections, Frameworks, Scalability, and Sustainability.

As the mobile learning system evolved from an idea to an actual empirical study, understanding this process became important. The insights gained during this research were used to develop a conceptual model based on the notion that the two concepts of Scalability and Sustainability can be linked to each other.

This conceptual model describes how a mobile learning system evolves, from Idea, to Experiment, to Project, to Release. Each of these stages is further described using four areas of concern: Technology, Learning, Social, and Organization.

Using the experience from a specific mobile learning initiative to define a conceptual model that is then used to describe the same initiative was a way to bring together practice, theory, and research, and thus to provide reliable evidence for the model itself.

The conceptual model can serve as a thinking tool for mobile learning practitioners, helping them address the complexity involved when undertaking new efforts and initiatives in this field.
|
55 |
Performance Benchmarking of Fast Multipole Methods. Al-Harthi, Noha A.
The current trends in computer architecture are shifting towards smaller byte/flop ratios, while available parallelism is increasing at all levels of granularity: vector length, core count, and MPI process count. Intel's Xeon Phi coprocessor, NVIDIA's Kepler GPU, and IBM's BlueGene/Q all have a byte/flop ratio close to 0.2, which makes it very difficult for most algorithms to extract a high percentage of the theoretical peak flop/s from these architectures. Popular algorithms in scientific computing, such as the FFT, are continuously evolving to keep up with this trend in hardware. In the meantime, it is also necessary to invest in novel algorithms that are better suited to the computer architectures of the future.
The fast multipole method (FMM) was originally developed as a fast algorithm for approximating the N-body interactions that appear in astrophysics, molecular dynamics, and vortex-based fluid dynamics simulations. The FMM possesses a unique combination of being an efficient O(N) algorithm while having an operational intensity higher than that of a matrix-matrix multiplication. In fact, the FMM can reduce the byte/flop requirement to around 0.01, which means that it will remain compute bound until 2020 even if the current trend in microprocessors continues. Despite these advantages, there have not been any benchmarks of FMM codes on modern architectures such as Xeon Phi, Kepler, and BlueGene/Q.
This study aims to provide a comprehensive benchmark of a state-of-the-art FMM code, "exaFMM", on the latest architectures, in hopes of providing a useful reference for deciding when the FMM will become useful as the computational engine in a given application code. It may also serve as a warning about problem-size domains where the FMM will exhibit insignificant performance improvements. Such issues depend strongly on the asymptotic constants rather than on the asymptotics themselves, and are therefore strongly implementation and hardware dependent. The primary objective of this study is to provide these constants on various computer architectures.
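
The byte/flop argument can be made concrete with a small roofline-style check. The sketch below uses only the illustrative figures quoted above (machine balance near 0.2 byte/flop, FMM reducible to roughly 0.01 byte/flop); the bandwidth-bound comparison value is invented.

```python
# Roofline-style sketch: a kernel stays compute bound when the bytes it needs
# per flop are below the machine balance (bytes/s of bandwidth per flop/s of
# peak). Figures are the illustrative ones from the abstract, not measurements.

def is_compute_bound(kernel_bytes_per_flop: float, machine_bytes_per_flop: float) -> bool:
    return kernel_bytes_per_flop <= machine_bytes_per_flop

machine_balance = 0.2      # ~byte/flop for Xeon Phi, Kepler, BlueGene/Q (per the text)
fmm_requirement = 0.01     # ~byte/flop the FMM can be reduced to
stencil_requirement = 4.0  # invented value for a typical bandwidth-bound kernel

for name, bpf in [("FMM", fmm_requirement), ("bandwidth-bound kernel", stencil_requirement)]:
    bound = "compute" if is_compute_bound(bpf, machine_balance) else "memory"
    print(f"{name}: needs {bpf} byte/flop -> {bound} bound on a {machine_balance} byte/flop machine")
```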
|
56 |
Scalably Verifiable Cache Coherence. Zhang, Meng, January 2013.
The correctness of a cache coherence protocol is crucial to the system, since a subtle bug in the protocol may lead to disastrous consequences. However, verifying a cache coherence protocol is never an easy task due to the protocol's complexity. Moreover, as more and more cores are packed into a single chip, there is a pressing need for cache coherence protocols with higher performance, lower power consumption, and less storage overhead. Designers apply various optimizations to meet these goals, which unfortunately further exacerbate the verification problem. The current situation is that there are no efficient and universal methods for verifying a realistic cache coherence protocol for a many-core system.

We, as architects, believe that we can alleviate the verification problem by changing the traditional design paradigm. We suggest taking verifiability as a first-class design constraint, just as we do with other traditional metrics such as performance, power consumption, and area overhead. To do this, we need to incorporate verification effort in the early design stage of a cache coherence protocol and make wise design decisions regarding verifiability. Such a protocol will be amenable to verification and easier to verify at a later stage. Specifically, we propose two methods in this thesis for designing scalably verifiable cache coherence protocols.

The first method is Fractal Coherence, targeting verifiable hierarchical protocols. Fractal Coherence leverages the fractal idea to design a cache coherence protocol. The self-similarity of the fractal enables inductive verification of the protocol. Such a verification process is independent of the number of nodes and is thus scalable. We also design example protocols to show that Fractal Coherence protocols can attain performance comparable to a traditional snooping or directory protocol.

As a system scales hierarchically, Fractal Coherence can fully solve the verification problem of the implemented cache coherence protocol. However, Fractal Coherence cannot help if the system scales horizontally. Therefore, we propose the second method, PVCoherence, targeting verifiable flat protocols. PVCoherence is based on parametric verification, a widely used method for verifying the coherence of a flat protocol with an unbounded number of nodes. PVCoherence captures the fundamental requirements and limitations of parametric verification and proposes a set of guidelines for designing cache coherence protocols that are compatible with parametric verification. As long as designers follow these guidelines, their protocols can be easily verified.

We further show that Fractal Coherence and PVCoherence can also facilitate the verification of memory consistency, another extremely challenging problem. Previous work proves that the verification of memory consistency can be decomposed into three steps, of which the most complex and least scalable is the verification of the cache coherence protocol. If we design the protocol following the design methodology of Fractal Coherence or PVCoherence, we can easily verify the cache coherence protocol and overcome the biggest obstacle in the verification of memory consistency.

As systems expand and cache coherence protocols become more complex, the verification problem becomes more prominent. We believe it is time to reconsider the traditional design flow in which verification is totally separated from the design stage. We show that by incorporating verifiability in the early design stage and designing protocols to be scalably verifiable in the first place, we can greatly reduce the burden of verification. Meanwhile, our experiments show that we do not lose performance or other benefits when we obtain this correctness guarantee.
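
To illustrate why flat verification effort explodes with core count, which is the problem that motivates designing for verifiability in the first place, the toy sketch below exhaustively enumerates the reachable states of a heavily simplified MSI-style protocol and checks the single-writer invariant. The protocol model is an invented teaching example with atomic transactions and no transient states, not one of the protocols from this thesis.

```python
# Toy model-checking sketch: enumerate reachable states of a simplified
# MSI-like protocol (atomic transactions, no transient states) and check the
# single-writer-or-multiple-readers invariant. Invented for illustration only;
# it shows how the state space grows with the number of caches.
M, S, I = "M", "S", "I"

def successors(state):
    """All states reachable when one cache issues a load or a store."""
    n = len(state)
    for i in range(n):
        # Load: cache i gets S; any M copy elsewhere is downgraded to S.
        yield tuple(S if j == i else (S if state[j] == M else state[j])
                    for j in range(n))
        # Store: cache i gets M; all other copies are invalidated.
        yield tuple(M if j == i else I for j in range(n))

def reachable(n_caches):
    start = tuple(I for _ in range(n_caches))
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def coherent(state):
    return state.count(M) <= 1 and not (M in state and S in state)

for n in range(1, 6):
    states = reachable(n)
    assert all(coherent(s) for s in states)   # invariant holds in this toy model
    print(f"{n} caches: {len(states)} reachable states (out of {3**n} possible)")
```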
|
57 |
Scalable and adaptable security modelling and analysis. Hong, Jin Bum, January 2015.
Modern networked systems are so complex that assessing their security is a difficult task. Security models are widely used to analyse the security of such systems, as they can evaluate the complex relationships between network components. Security models are generated by identifying vulnerabilities, threats (e.g., cyber attacks), network configurations, and the reachability of network components; these components are then combined into a single model to evaluate how an attacker might penetrate the networked system. Further, countermeasures can be enforced to minimise cyber attacks based on the security analysis. However, modern networked systems are becoming large and dynamic (e.g., cloud computing systems). As a result, existing security models suffer from a scalability problem: it becomes infeasible to use them for modern networked systems that contain hundreds or thousands of hosts and vulnerabilities. Moreover, the dynamic nature of modern networked systems requires responsive updates to the security model to monitor how changes affect security, but existing security models lack the capabilities to efficiently manage such changes. In addition, existing security models do not provide functionalities to capture and analyse the security of unknown attacks, where the combined effects of known and unknown attacks can create unforeseen attack scenarios that may not be detected or mitigated. Therefore, the three goals of this thesis are to (i) develop security modelling and analysis methods that scale to a large number of network components and adapt to changes in the networked system; (ii) develop efficient security assessment methods to formulate countermeasures; and (iii) develop models and metrics to incorporate and assess the security of unknown attacks.
A lifecycle of security models is introduced in this thesis to concisely describe performance and functionalities of modern security models. The five phases in the lifecycle of security models are: (1) Preprocessing, (2) Generation, (3) Representation, (4) Evaluation, and (5) Modification.
To achieve goal (i), a hierarchical security model is developed to reduce the computational costs of assessing the security while maintaining all security information, where each layer captures different security information. Then, a comparative analysis is presented to show the scalability and adaptability of security models. The complexity analysis showed that the hierarchical security model has better or equivalent complexities in all phases of the lifecycle in comparison to existing security models, while the performance analysis showed that in fact it is much more scalable in practical network scenarios.
To achieve goal (ii), security assessment methods based on importance measures are developed. Network centrality measures are used to identify important hosts in the networked system, and security metrics are used to identify important vulnerabilities within a host. New network centrality measures are also developed to address the inaccuracy of existing measures when the attack scenario involves attackers located inside the networked system. Important hosts and vulnerabilities are identified using efficient algorithms with polynomial time complexity, and experiments show that the accuracy of these algorithms is nearly equivalent to that of the naive method, which has exponential complexity.
To achieve goal (iii), unknown attacks are incorporated into the hierarchical security model and the combined effects of both known and unknown attacks are analysed. Algorithms taking into account all possible attack scenarios associated with unknown attacks are used to identify significant hosts and vulnerabilities. Approximation algorithms based on dynamic programming and greedy algorithms are also developed to improve the performance. Mitigation strategies to minimise the effects of unknown attacks are formulated on the basis of significant hosts and vulnerabilities identified in the analysis. Results show that mitigation strategies formulated on the basis of significant hosts and vulnerabilities can significantly reduce the system risk in comparison to randomly applying mitigations.
In summary, the contributions of this thesis are: (1) the development and evaluation of the hierarchical security model to enhance the scalability and adaptability of security modelling and analysis; (2) a comparative analysis of security models taking into account scalability and adaptability; (3) the development of security assessment methods based on importance measures to identify important hosts and vulnerabilities in the networked system, and an evaluation of their efficiency in terms of accuracy and performance; and (4) the development of security analysis taking into account unknown attacks, which consists of evaluating the combined effects of both known and unknown attacks.
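
As a hedged illustration of the importance-measure idea, the sketch below ranks the hosts of a small, invented reachability graph by off-the-shelf betweenness centrality using networkx; the new centrality measures developed in this thesis are not reproduced here.

```python
# Sketch: rank hosts in a networked system by betweenness centrality as a
# proxy for their importance to attack paths. The topology is invented; the
# thesis develops its own measures beyond this off-the-shelf one.
import networkx as nx

# Directed reachability graph: edge (u, v) means host u can reach host v.
reachability = [
    ("internet", "web1"), ("internet", "web2"),
    ("web1", "app1"), ("web2", "app1"),
    ("app1", "db1"), ("app1", "db2"),
]
G = nx.DiGraph(reachability)

# Hosts that sit on many attacker-to-target paths receive high betweenness.
scores = nx.betweenness_centrality(G)
for host, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{host:10s} betweenness = {score:.3f}")
```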
|
58 |
Design of a Scalable Path Service for the Internet. Ascigil, Mehmet O., 01 January 2015.
Despite the world-changing success of the Internet, shortcomings in its routing and forwarding system have become increasingly apparent. One symptom is an escalating tension between users and providers over the control of routing and forwarding of packets: providers understandably want to control use of their infrastructure, and users understandably want paths with sufficient quality-of-service (QoS) to improve the performance of their applications. As a result, users resort to various “hacks” such as sending traffic through intermediate end-systems, and the providers fight back with mechanisms to inspect and block such traffic.
To enable users and providers to jointly control routing and forwarding policies, recent research has considered various architectural approaches in which provider-level route determination occurs separately from forwarding. With this separation, provider-level path computation and selection can be provided as a centralized service: users (or their applications) send path queries to a path service to obtain provider-level paths that meet their application-specific QoS requirements. At the same time, providers can control the use of their infrastructure by dictating how packets are forwarded across their network. The separation of routing and forwarding offers many advantages, but also brings a number of challenges such as scalability. In particular, the path service must respond to path queries in a timely manner and periodically collect topology information containing load-dependent (i.e., performance) routing information.
We present a new design for a path service that makes use of expensive pre-computations, parallel on-demand computations on performance information, and caching of recently computed paths to achieve scalability. We demonstrate that, using commodity hardware with a modest amount of resources, the path service can respond to path queries with acceptable latency under a realistic workload. The service can scale to arbitrarily large topologies through parallelism. Finally, we describe how to utilize the path service in the current Internet with existing Internet applications.
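
A heavily simplified sketch of the query-side behaviour described above is given next: a path service that answers path queries with shortest-path computation over a QoS-annotated topology and caches recently computed paths. The class, topology, and latency figures are invented for illustration.

```python
# Sketch of a path service: answer provider-level path queries over a weighted
# topology and cache recent answers. Topology and latency figures are invented.
from functools import lru_cache
import networkx as nx

class PathService:
    def __init__(self, topology: nx.Graph):
        self.topology = topology
        # Cache recently computed paths keyed by (src, dst, metric).
        self._cached_query = lru_cache(maxsize=4096)(self._compute)

    def _compute(self, src: str, dst: str, metric: str):
        return tuple(nx.shortest_path(self.topology, src, dst, weight=metric))

    def query(self, src: str, dst: str, metric: str = "latency_ms"):
        return self._cached_query(src, dst, metric)

# Provider-level topology annotated with a QoS metric (values are made up).
g = nx.Graph()
g.add_edge("AS1", "AS2", latency_ms=10)
g.add_edge("AS2", "AS4", latency_ms=30)
g.add_edge("AS1", "AS3", latency_ms=15)
g.add_edge("AS3", "AS4", latency_ms=5)

service = PathService(g)
print(service.query("AS1", "AS4"))   # ('AS1', 'AS3', 'AS4') via the lower-latency route
```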
|
59 |
Elasca: Workload-Aware Elastic Scalability for Partition Based Database Systems. Rafiq, Taha, January 2013.
Providing the ability to increase or decrease allocated resources on demand as the transactional load varies is essential for database management systems (DBMS) deployed on today's computing platforms, such as the cloud. The need to maintain consistency of the database, at very large scales, while providing high performance and reliability makes elasticity particularly challenging. In this thesis, we exploit data partitioning as a way to provide elastic DBMS scalability. We assert that the flexibility provided by a partitioned, shared-nothing parallel DBMS can be used to implement elasticity. Our idea is to start with a small number of servers that manage all the partitions, and to elastically scale out by dynamically adding new servers and redistributing database partitions among these servers as the load varies. Implementing this approach requires (a) efficient mechanisms for addition/removal of servers and migration of partitions, and (b) policies to efficiently determine the optimal placement of partitions on the given servers as well as plans for partition migration.
This thesis presents Elasca, a system that implements both these features in an existing shared-nothing DBMS (namely VoltDB) to provide automatic elastic scalability. Elasca consists of a mechanism for enabling elastic scalability, and a workload-aware optimizer for determining optimal partition placement and migration plans. Our optimizer minimizes the computing resources required and balances load effectively without compromising system performance, even in the presence of variations in the intensity and skew of the load. The results of our experiments show that Elasca is able to achieve performance close to a fully provisioned system while saving 35% of resources on average. Furthermore, Elasca's workload-aware optimizer performs up to 79% less data movement than a greedy approach to resource minimization, and also balances load much more effectively.
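
Elasca's optimizer is not reproduced here, but the trade-off it navigates (balancing server load while limiting partition movement) can be sketched with a small greedy placement routine; the partition loads, server names, and movement threshold below are invented.

```python
# Greedy sketch of workload-aware partition placement: keep a partition on its
# current server when possible, move it to the least-loaded server otherwise.
# This only illustrates the load-vs-movement trade-off Elasca's optimizer
# handles; it is not Elasca's algorithm, and all numbers are invented.

def place_partitions(partition_load, current_placement, servers, move_threshold=1.25):
    load = {s: 0.0 for s in servers}
    placement, moves = {}, 0
    # Place heavy partitions first.
    for part, demand in sorted(partition_load.items(), key=lambda kv: -kv[1]):
        stay = current_placement.get(part)
        least_loaded = min(servers, key=lambda s: load[s])
        # Keep the partition in place unless its server is clearly overloaded.
        if stay in load and load[stay] + demand <= move_threshold * (load[least_loaded] + demand):
            target = stay
        else:
            target = least_loaded
            moves += part in current_placement
        placement[part] = target
        load[target] += demand
    return placement, load, moves

parts = {"p1": 40, "p2": 35, "p3": 20, "p4": 10, "p5": 5}
current = {p: "s1" for p in parts}              # everything starts on one server
placement, load, moves = place_partitions(parts, current, ["s1", "s2"])
print(placement, load, f"moved {moves} partitions")
```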
|
60 |
On the efficient distributed evaluation of SPARQL queries. Graux, Damien, 15 December 2016.
The Semantic Web, standardized by the World Wide Web Consortium, aims at providing a common framework that allows data to be shared and analyzed across applications. To that end, it introduced the Resource Description Framework (RDF) as a common base for data, together with its query language SPARQL. Because of the increasing amounts of RDF data available, distributing datasets across clusters is poised to become a standard storage method. As a consequence, efficient and distributed SPARQL evaluators are needed. To tackle these needs, we first benchmark several state-of-the-art distributed SPARQL evaluators while adapting the considered set of metrics to a distributed context (e.g., network traffic). Then, an analysis driven by typical use cases leads us to define new development areas in the field of distributed SPARQL evaluation. On the basis of these fresh perspectives, we design several efficient distributed SPARQL evaluators that fit each of these use cases and whose performance is validated against the previously benchmarked evaluators. For instance, our distributed SPARQL evaluator named SPARQLGX offers good time performance while being resilient to the loss of nodes.
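
None of the distributed evaluators are reproduced here, but for readers unfamiliar with the data model, the hedged sketch below runs a single-machine SPARQL query over a tiny RDF graph with the rdflib Python library; the triples and query are invented.

```python
# Single-machine sketch of RDF + SPARQL with rdflib, just to illustrate the
# data model and query language that distributed evaluators operate on.
# The triples and query are invented for the example.
from rdflib import Graph

TURTLE_DATA = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
ex:alice ex:age   30 .
"""

g = Graph()
g.parse(data=TURTLE_DATA, format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?who ?friend
WHERE { ?who ex:knows ?friend . }
"""

for who, friend in g.query(query):
    print(who, "knows", friend)
```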
|