41 |
Controlling Scalability in Distributed Virtual Environments / Singh, Hermanpreet, 01 May 2013
A Distributed Virtual Environment (DVE) system provides a shared virtual environment where physically separated users can interact and collaborate over a computer network. As the number of simultaneous DVE users grows, system performance can degrade intolerably. We address the three major challenges in improving DVE scalability: effective DVE system performance measurement, understanding the factors that control system performance and quality, and determining the consequences of DVE system changes.
We propose a DVE Scalability Engineering (DSE) process that addresses these three major challenges for DVE design. DSE allows us to identify, evaluate, and leverage trade-offs among DVE resources, the DVE software, and the virtual environment. DSE has three stages. First, we show how to simulate different numbers and types of users on DVE resources; collected user study data is used to identify representative user types. Second, we describe a modeling method to discover the major trade-offs between quality of service and DVE resource usage. The method makes use of a new instrumentation tool called ppt, which collects atomic blocks of developer-selected instrumentation at high rates and saves them for offline analysis. Finally, we integrate our load simulation and modeling method into a single process to explore the effects of changes in DVE resources.
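The abstract describes ppt only at a high level. As a rough illustration of block-level instrumentation, the Python sketch below records developer-selected atomic blocks with nanosecond timestamps and writes them out in a compact binary form for offline analysis; the names (BlockRecorder, timed_block) and the record layout are hypothetical and are not the actual ppt design.

    import struct
    import time

    class BlockRecorder:
        """Buffers (block_id, start_ns, duration_ns) records in memory and
        flushes them in a compact binary form for offline analysis."""
        def __init__(self, path):
            self._buf = []
            self._path = path

        def record(self, block_id, start_ns, end_ns):
            # Appending a tuple keeps the inline cost of instrumentation low.
            self._buf.append((block_id, start_ns, end_ns - start_ns))

        def flush(self):
            with open(self._path, "ab") as f:
                for block_id, start_ns, dur_ns in self._buf:
                    f.write(struct.pack("<IQQ", block_id, start_ns, dur_ns))
            self._buf.clear()

    class timed_block:
        """Context manager marking a developer-selected atomic block."""
        def __init__(self, recorder, block_id):
            self.recorder, self.block_id = recorder, block_id

        def __enter__(self):
            self.start = time.perf_counter_ns()
            return self

        def __exit__(self, *exc):
            self.recorder.record(self.block_id, self.start, time.perf_counter_ns())
            return False

    # Usage: wrap a code region of interest and flush periodically.
    recorder = BlockRecorder("ppt_trace.bin")
    with timed_block(recorder, block_id=7):
        sum(range(10000))  # stand-in for one DVE update step
    recorder.flush()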
We use the simple Asteroids DVE as a minimal case study to describe the DSE process. The larger and commercial Torque and Quake III DVE systems provide realistic case studies and demonstrate DSE usage. The Torque case study shows the impact of many users on a DVE system. We apply the DSE process to significantly enhance the Quality of Experience given the available DVE resources. The Quake III case study shows how to identify the DVE network needs and evaluate network characteristics when using a mobile phone platform. We analyze the trade-offs between power consumption and quality of service.
The case studies demonstrate the applicability of DSE for discovering and leveraging tradeoffs between Quality of Experience and DVE resource usage. Each of the three stages can be used individually to improve DVE performance. The DSE process enables fast and effective DVE performance improvement. / Ph. D.
|
42 |
Towards a Scalable Docker Registry / Littley, Michael Brian, 29 June 2018
Containers are an alternative to virtual machines that is rapidly increasing in popularity due to its minimal overhead. To help facilitate their adoption, containers use management systems with central registries to store and distribute container images. However, these registries rely on other, preexisting services to provide load balancing and storage, which limits their scalability. This thesis introduces a new registry design for Docker, the most prevalent container management system. The new design coalesces all the services into a single, highly scalable registry. By increasing the scalability of the registry, the new design greatly decreases the distribution time for container images. This work also describes a new Docker registry benchmarking tool, the trace player, that uses real Docker registry workload traces to test the performance of new registry designs and setups. / Master of Science / Cloud services allow many different web applications to run on shared machines. The applications can be owned by a variety of customers to provide many different types of services. Because these applications are owned by different customers, they need to be isolated to ensure the users' privacy and security. Containers are one technology that can provide isolation to the applications on a single machine, and they are rapidly gaining popularity because they impose less overhead on the applications that use them. This means the applications run faster with the same isolation guarantees as other isolation technologies. Containers also allow the cloud provider to run more applications on a single machine, letting it serve more customers. Docker is by far the most popular container management system on the market. It provides a registry service for containerized application storage and distribution. Users can store snapshots of their applications on the registry, and then use the snapshots to run multiple copies of the application on different machines. As more and more users use the registry service, the registry becomes slower, so it takes longer for users to pull their applications from it. This increases application start-up time and makes it harder to scale an application out to more machines to accommodate more customers. This work creates a new registry design that allows the registry to handle more users and lets them retrieve their applications even faster than is currently possible, so they can more rapidly scale their applications out to more machines and serve more customers. The customers, in turn, have a better experience.
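The trace player is described here only by what it consumes (real registry workload traces) and what it measures; the Python sketch below shows the general shape of such a tool under assumed inputs. The CSV trace format with method/uri columns and the name replay_trace are illustrative assumptions, not the tool's actual interface.

    import csv
    import time
    import urllib.request

    def replay_trace(trace_csv, registry_base):
        """Replays recorded registry requests against a registry endpoint
        and returns per-request latencies (seconds)."""
        latencies = []
        with open(trace_csv, newline="") as f:
            for row in csv.DictReader(f):
                req = urllib.request.Request(registry_base + row["uri"],
                                             method=row["method"].upper())
                start = time.perf_counter()
                try:
                    with urllib.request.urlopen(req) as resp:
                        resp.read()  # pull the manifest or blob bytes
                except Exception:
                    pass  # a real tool would count failures separately
                latencies.append(time.perf_counter() - start)
        return latencies

    # Example: latencies = replay_trace("trace.csv", "http://localhost:5000")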
|
43 |
Scalable Transactions in Decentralized Networks / Painter, Zachary M, 01 January 2024
The study of shared memory concurrency is extensive. There exist many state-of-the-art strategies for dealing with fundamental concurrency problems, such as race conditions or deadlocks, to leverage massive performance boosts out of modern multiprocessors. With the introduction of blockchain technology as a popular financial tool, we observe many decades-old concurrency problems re-emerge within the context of decentralized networks. These challenges introduce additional constraints, such as the lack of hardware atomic instructions like Compare-And-Swap, or the potential for malicious clients to join the network. In this dissertation, we propose key algorithms which adapt knowledge from the domain of shared memory concurrency to solve emerging concurrency problems in decentralized networks.
We propose three key algorithms which further the state of the art in decentralized networks. (1) We present Hash-Mark-Set, a concurrent algorithm for providing a read-uncommitted view of the blockchain state, enabling a higher success rate in transaction use cases where state changes frequently in relation to the block interval. (2) We propose Proof of Descriptor, a descriptor based consensus mechanism for decentralized networks. Proof of Descriptor utilizes well-known techniques from shared memory concurrent programming to create an efficient and scalable algorithm for blockchain consensus. (3) We propose a descriptor-based algorithm for concurrent execution of smart contracts that efficiently captures the concurrent execution as a graph of descriptors, enabling validators to analyze the concurrent execution and verify its results through re-execution.
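As a concrete illustration of a read-uncommitted view of blockchain state (the problem Hash-Mark-Set targets), the Python sketch below overlays pending, not-yet-mined writes on top of committed state so reads can observe values that change faster than the block interval. It conveys only the general idea; the actual Hash-Mark-Set data structure and the descriptor-based mechanisms of contributions (2) and (3) are not reproduced here.

    class ReadUncommittedView:
        """Overlay pending (mempool) writes on committed chain state so a
        client can read values before the next block is mined. Illustrative
        only; not the Hash-Mark-Set algorithm itself."""
        def __init__(self, committed):
            self.committed = dict(committed)  # state as of the latest block
            self.pending = {}                 # tx_id -> writes, in arrival order

        def apply_pending(self, tx_id, writes):
            # Record the uncommitted writes of a pending transaction.
            self.pending[tx_id] = dict(writes)

        def read(self, key):
            # Newest pending write wins; otherwise fall back to committed state.
            for writes in reversed(list(self.pending.values())):
                if key in writes:
                    return writes[key]
            return self.committed.get(key)

        def commit_block(self, tx_ids):
            # Fold the writes of mined transactions into committed state.
            for tx_id in tx_ids:
                self.committed.update(self.pending.pop(tx_id, {}))

    view = ReadUncommittedView({"balance/alice": 10})
    view.apply_pending("tx1", {"balance/alice": 7})
    assert view.read("balance/alice") == 7  # visible before the block is mined
    view.commit_block(["tx1"])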
|
44 |
ROUTING IN MOBILE AD-HOC NETWORKS: SCALABILITY AND EFFICIENCY / Bai, Rendong, 01 January 2008
Mobile Ad-hoc Networks (MANETs) have received considerable research interest in recent years. Because of dynamic topology and limited resources, it is challenging to design routing protocols for MANETs. In this dissertation, we focus on the scalability and efficiency problems in designing routing protocols for MANETs. We design the Way Point Routing (WPR) model for medium to large networks. WPR selects a number of nodes on a route as waypoints and divides the route into segments at the waypoints. Waypoint nodes run a high-level inter-segment routing protocol, and nodes on each segment run a low-level intra-segment routing protocol. We use DSR and AODV as the inter-segment and the intra-segment routing protocols, respectively. We term this instantiation the DSR Over AODV (DOA) routing protocol. We develop Salvaging Route Reply (SRR) to salvage undeliverable route reply (RREP) messages. We propose two SRR schemes: SRR1 and SRR2. In SRR1, a salvor actively broadcasts a one-hop salvage request to find an alternative path to the source. In SRR2, nodes passively learn an alternative path from duplicate route request (RREQ) packets. A salvor uses the alternative path to forward a RREP when the original path is broken. We propose Multiple-Target Route Discovery (MTRD) to aggregate multiple route requests into one RREQ message and to discover multiple targets simultaneously. When a source initiates a route discovery, it first tries to attach its request to existing RREQ packets that it relays. MTRD improves routing performance by reducing the number of regular route discoveries. We develop a new scheme called Bilateral Route Discovery (BRD), in which both source and destination actively participate in a route discovery process. BRD consists of two halves: a source route discovery and a destination route discovery, each searching for the other. BRD has the potential to reduce control overhead by one half. We propose an efficient and generalized approach called Accumulated Path Metric (APM) to support High-Throughput Metrics (HTMs). APM finds the shortest path without collecting topology information and without running a shortest-path algorithm. Moreover, we develop the Broadcast Ordering (BO) technique to suppress unnecessary RREQ transmissions.
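As an illustration of the Accumulated Path Metric idea, the Python sketch below accumulates a per-hop cost inside a relayed RREQ so the destination can simply reply along the request carrying the best accumulated metric, without collecting topology information or running a separate shortest-path algorithm. The field names and the ETX-style link metric are assumptions made for the example, not the dissertation's packet format.

    def forward_rreq(rreq, my_id, link_quality):
        """Accumulate a per-hop cost in a relayed RREQ (illustrative fields).
        link_quality is the delivery probability of the incoming link; its
        inverse approximates the expected transmission count on that hop."""
        hop_cost = 1.0 / max(link_quality, 1e-6)
        fwd = dict(rreq)
        fwd["apm"] = rreq["apm"] + hop_cost      # accumulated path metric so far
        fwd["path"] = rreq["path"] + [my_id]
        return fwd

    def best_route(received_rreqs):
        # The destination replies along the RREQ with the lowest accumulated metric.
        return min(received_rreqs, key=lambda r: r["apm"])

    # Two copies of the same RREQ arriving at destination D over different paths.
    seed = {"source": "S", "target": "D", "apm": 0.0, "path": ["S"]}
    via_a = forward_rreq(forward_rreq(seed, "A", 0.9), "D", 0.9)
    via_b = forward_rreq(forward_rreq(seed, "B", 0.5), "D", 0.5)
    print(best_route([via_a, via_b])["path"])    # ['S', 'A', 'D']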
|
45 |
Microservices in data intensive applications / Remeika, Mantas; Urbanavicius, Jovydas, January 2018
The volumes of data which Big Data applications have to process are constantly increasing. This requires the development of highly scalable systems. The microservices architecture is considered one solution to the scalability problem. However, the literature on practices for building scalable data-intensive systems is still lacking. This thesis aims to investigate and present the benefits and drawbacks of using the microservices architecture in big data systems. Moreover, it presents other practices used to increase scalability, including containerization, shared-nothing architecture, data sharding, load balancing, clustering, and stateless design. Finally, an experiment comparing the performance of a monolithic application and a microservices-based application was performed. The results show that, as the load increases, the microservices-based application performs better than the monolith. However, to cope with the constantly increasing amount of data, additional techniques should be used together with microservices.
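As one concrete example of the data-sharding and load-balancing practices listed above, the Python sketch below shows a minimal consistent-hash ring that maps keys to service instances so that data and requests spread evenly and instances can be added with limited re-mapping. The instance names and parameters are placeholders, not the setup used in the thesis experiment.

    import hashlib
    from bisect import bisect

    class ConsistentHashRing:
        """Map keys to service instances via consistent hashing; virtual
        nodes (vnodes) smooth the load distribution across instances."""
        def __init__(self, nodes, vnodes=64):
            self._ring = sorted(
                (self._h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
            )
            self._keys = [h for h, _ in self._ring]

        @staticmethod
        def _h(s):
            return int(hashlib.md5(s.encode()).hexdigest(), 16)

        def node_for(self, key):
            # First ring position at or after the key's hash, wrapping around.
            idx = bisect(self._keys, self._h(key)) % len(self._ring)
            return self._ring[idx][1]

    ring = ConsistentHashRing(["orders-svc-1", "orders-svc-2", "orders-svc-3"])
    print(ring.node_for("customer:42"))  # the same key always maps to the same shard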
|
46 |
Adaptive power control in wireless networks for scalable and fair capacity distributions. / January 2006
Ho Wang Hei. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 93-94). / Abstracts in English and Chinese.
Contents: Chapter 1, Introduction (motivation and contributions: scalability of network capacity with power control, and the trade-off between network capacity and fairness with power control; related work; organization of the thesis). Chapter 2, Background (hidden-node (HN) and exposed-node (EN) problems; HN-free design (HFD); non-scalable capacity in 802.11 caused by EN; shortcomings of the minimum-transmit-power approach). Chapter 3, Simultaneous-transmission constraints with power control (physical-collision constraints, protocol-independent and protocol-specific; protocol-collision-prevention constraints, transmitter-side and receiver-side carrier sensing). Chapter 4, Graph models for capturing transmission constraints and hidden-node problems (link-interference graph from physical-collision constraints; protocol-collision-prevention graphs; ideal protocol-collision-prevention graphs; definitions of HN and EN and their investigation using the graph model; attacking cases). Chapter 5, Scalability of network capacity with adaptive power control (Selective Disregard of NAVs (SDN); analytical discussion; adaptive power control for SDN, including per-iteration power adjustment, the power control scheduling strategy, the power exchange algorithm, and a comparison of scheduling strategies; numerical results). Chapter 6, Decoupled Adaptive Power Control (DAPC): per-iteration power adjustment, power exchange algorithm, implementation, and the deadlock problem in DAPC. Chapter 7, Progressive-Uniformly-Scaled Power Control (PUSPC), a deadlock-free design: algorithm, deadlock-free property, and deadlock resolution of DAPC using PUSPC. Chapter 8, Incremental Power Adaptation (IPA): maximum allowable power and numerical results. Chapter 9, Numerical results and the trade-off between EN and HN. Chapter 10, Conclusion. Appendices: proofs of the correct operation of the PE algorithm for APC for SDN and for DAPC; scalability of the communication cost of the PE algorithm. Bibliography.
|
47 |
Analysis and Management of Security State for Large-Scale Data Center Networks / January 2018
With the increasing complexity of computing systems and the rise in the number of risks and vulnerabilities, it is necessary to provide a scalable security situation awareness tool to assist the system administrator in protecting critical assets and managing the security state of the system. There are many methods for analyzing and managing security state, for instance using a firewall to manage the security state and/or graphical analysis tools such as attack graphs.
Attack graphs are powerful graphical security analysis tools, as they provide a visual representation of all possible attack scenarios that an attacker may take to exploit system vulnerabilities. Attack graph scalability, however, is a major concern, since enumerating all possible attack scenarios is an NP-complete problem. Much research has tried to come up with a scalable attack graph solution; nevertheless, no practical attack-graph-based solution has been adopted in practice for real-time security analysis.
In this thesis, a new framework, the 3S (Scalable Security States) analysis framework, is proposed. It presents a new approach that utilizes Software-Defined Networking (SDN)-based distributed firewall capabilities and the concept of a stateful data plane to construct scalable attack graphs in near real time, making attack graphs practical for real-time security decisions. The goal of the proposed work is to control reachability information between different datacenter segments in order to reduce the dependencies among vulnerabilities and restrict the attack graph analysis to a relatively small scope. The proposed framework is based on SDN's programmable capabilities, adjusting the distributed firewall policies dynamically according to the security situation at run time. It applies white-list-based security policies that limit the attacker's ability to move between or exploit different segments by allowing only uni-directional vulnerability dependency links between segments. Specifically, several test cases are presented with various attack scenarios, analyzing how the distributed firewall and the stateful SDN data plane can significantly reduce security state construction and analysis. The proposed approach achieves an improvement of over 61% in comparison with prior approaches in which SDN and the distributed firewall are not in use. / Dissertation/Thesis / Masters Thesis Computer Engineering 2018
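The abstract does not give the 3S framework's data model, so the Python sketch below only illustrates the core scoping idea: attack-graph dependency edges are generated solely where the distributed firewall's white-list allows uni-directional reachability between segments (or within a single segment), which keeps the graph small enough for near-real-time analysis. The data shapes and names are assumptions for illustration.

    from itertools import product

    def build_scoped_attack_graph(vulns, allowed):
        """vulns maps segment -> vulnerability ids hosted there; allowed is
        the white-list of permitted (source, destination) segment pairs."""
        edges = []
        for src_seg, dst_seg in allowed | {(s, s) for s in vulns}:
            for u, v in product(vulns.get(src_seg, []), vulns.get(dst_seg, [])):
                if u != v:
                    edges.append((u, v))  # exploiting u may enable reaching v
        return edges

    vulns = {"web": ["CVE-A"], "app": ["CVE-B"], "db": ["CVE-C"]}
    allowed = {("web", "app"), ("app", "db")}  # no direct web -> db reachability
    print(build_scoped_attack_graph(vulns, allowed))  # no edge from CVE-A to CVE-C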
|
48 |
Avaliação de escalabilidade e desempenho da camada de transporte de mensagens em plataformas multiagente / Scalability and performance comparison between message transport systems of multiagent platforms / Rodrigues, Henrique Donâncio Nunes, 12 August 2019
This work resides in the field of multiagent systems (MAS) composed of intelligent agents that are able to use Internet communication protocols. A multiagent platform is a piece of software or a framework capable of managing multiple aspects of agent execution and agent interactions. Many MAS platforms have been developed in recent years, all of them compliant with interoperable system development standards at different levels. Also, new programming languages have been defined and new protocols have been adopted for communication in distributed systems. These developments have also influenced the multiagent community, prompting the proposal of new platforms to support the development of multiagent systems. In addition, the adoption of agents as a paradigm for the development of large-scale complex distributed systems is seen as an interesting solution in the era of big data. Therefore, a comparison between existing platforms and their support for efficiently developing and deploying large-scale multiagent systems can benefit the developer community interested in choosing which platform best fits their projects. The purpose of this work is to evaluate multiagent platforms with respect to scalability, performance, and compatibility with other technologies, in order to ease the choice for developers who want to design large-scale multiagent systems. To choose the MAS platforms for the proposed comparison, only open-source platforms that are actively used by the multiagent community are considered. Moreover, these MAS platforms should support deployment in a distributed manner, an essential characteristic of scalable systems. After narrowing the list of MAS platforms according to these criteria, their message transport systems are analyzed using benchmarks for scalability and performance comparison, considering different communication scenarios. Finally, a realistic scenario is presented in which a scalable MAS can be adopted as a solution.
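The benchmarks themselves are not specified in detail in this abstract; the Python sketch below shows the generic shape of a message-transport round-trip benchmark, with the platform-specific send and receive primitives left abstract because each MAS platform exposes its own API. The metric names and defaults are assumptions.

    import statistics
    import time

    def round_trip_benchmark(send, receive, n_messages=1000, payload=b"x" * 256):
        """Send n small messages to an echo agent through the platform's
        message transport and report throughput and latency percentiles."""
        latencies = []
        start = time.perf_counter()
        for _ in range(n_messages):
            t0 = time.perf_counter()
            send(payload)
            receive()  # block until the echo reply arrives
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
        return {
            "throughput_msg_per_s": n_messages / elapsed,
            "latency_p50_ms": statistics.median(latencies) * 1e3,
            "latency_p95_ms": statistics.quantiles(latencies, n=20)[18] * 1e3,
        }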
|
49 |
Elasca: Workload-Aware Elastic Scalability for Partition Based Database Systems / Rafiq, Taha, January 2013
Providing the ability to increase or decrease allocated resources on demand as the transactional load varies is essential for database management systems (DBMS) deployed on today's computing platforms, such as the cloud. The need to maintain consistency of the database, at very large scales, while providing high performance and reliability makes elasticity particularly challenging. In this thesis, we exploit data partitioning as a way to provide elastic DBMS scalability. We assert that the flexibility provided by a partitioned, shared-nothing parallel DBMS can be used to implement elasticity. Our idea is to start with a small number of servers that manage all the partitions, and to elastically scale out by dynamically adding new servers and redistributing database partitions among these servers as the load varies. Implementing this approach requires (a) efficient mechanisms for addition/removal of servers and migration of partitions, and (b) policies to efficiently determine the optimal placement of partitions on the given servers as well as plans for partition migration.
This thesis presents Elasca, a system that implements both these features in an existing shared-nothing DBMS (namely VoltDB) to provide automatic elastic scalability. Elasca consists of a mechanism for enabling elastic scalability, and a workload-aware optimizer for determining optimal partition placement and migration plans. Our optimizer minimizes the computing resources required and balances load effectively without compromising system performance, even in the presence of variations in intensity and skew of the load. The results of our experiments show that Elasca is able to achieve performance close to a fully provisioned system while saving 35% of resources on average. Furthermore, Elasca's workload-aware optimizer performs up to 79% less data movement than a greedy approach to resource minimization, and also balances load much more effectively.
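For contrast with the workload-aware optimizer, the Python sketch below illustrates the kind of greedy resource-minimization baseline mentioned above: pack partitions onto as few servers as possible, first-fit decreasing by load, without considering how much data each reassignment moves or how well load is balanced. The inputs and function name are illustrative; the thesis does not spell out the baseline's exact formulation here.

    def greedy_placement(partition_load, server_capacity):
        """First-fit-decreasing packing of partitions onto servers.
        partition_load maps partition id -> load; server_capacity is the
        load a single server can sustain."""
        servers = []  # each entry: [remaining_capacity, [assigned partitions]]
        for pid, load in sorted(partition_load.items(), key=lambda kv: -kv[1]):
            for srv in servers:
                if srv[0] >= load:
                    srv[0] -= load
                    srv[1].append(pid)
                    break
            else:
                servers.append([server_capacity - load, [pid]])
        return [parts for _, parts in servers]

    plan = greedy_placement({"p1": 40, "p2": 35, "p3": 30, "p4": 20}, server_capacity=70)
    print(plan)  # [['p1', 'p3'], ['p2', 'p4']]: few servers, but placement may move many partitions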
|
50 |
Flexible Computing with Virtual Machines / Lagar Cavilla, Horacio Andres, 30 March 2011
This thesis is predicated upon a vision of the future of computing with a separation of functionality between core and edges, very similar to that governing the Internet itself. In this vision, the core of our computing infrastructure is made up of vast server farms with an abundance of storage and processing cycles. Centralization of computation in these farms, coupled with high-speed wired or wireless connectivity, allows for pervasive access to a highly-available and well-maintained repository for data, configurations, and applications. Computation in the edges is concerned with provisioning application state and user data to rich clients, notably mobile devices equipped with powerful displays and graphics processors.
We define flexible computing as systems support for applications that dynamically leverage the resources available in the core infrastructure, or cloud. The work in this thesis focuses on two instances of flexible computing that are crucial to the realization of the aforementioned vision. Location flexibility aims to, transparently and seamlessly, migrate applications between the edges and the core based on user demand. This enables performing the interactive tasks on rich edge clients and the computational tasks on powerful core servers. Scale flexibility is the ability of applications executing in cloud environments, such as parallel jobs or clustered servers, to swiftly grow and shrink their footprint according to execution demands.
This thesis shows how we can use system virtualization to implement systems that provide scale and location flexibility. To that effect we build and evaluate two system prototypes: Snowbird and SnowFlock. We present techniques for manipulating virtual machine state that turn running software into a malleable entity which is easily manageable, is decoupled from the underlying hardware, and is capable of dynamic relocation and scaling. This thesis demonstrates that virtualization technology is a powerful and suitable tool to enable solutions for location and scale flexibility.
|