41 |
Improving Energy and Area Scalability of the Cache Hierarchy in CMPs
Valls Mompó, Joan Josep, 07 April 2017 (has links)
As the core counts increase in each chip multiprocessor generation, CMPs should improve scalability in performance, area, and energy consumption to meet the demands of
larger core counts. Directory-based protocols constitute the most scalable alternative.
A conventional directory, however, suffers from an inefficient use of storage and energy.
First, the large, non-scalable sharer vectors consume unnecessary area and leakage power, especially considering that most of the blocks tracked in a directory are cached by a single
core. Second, although increasing directory size and associativity could boost system
performance by reducing coverage misses, it would come at the expense of area and
energy consumption.
This thesis focuses on and exploits the important differences in behavior between private and shared blocks from the directory's point of view. These differences call for separate management of the two types of blocks at the directory. First, we propose the PS-Directory, a two-level directory cache that keeps the small number of frequently accessed shared entries in a small and fast first-level cache, namely the Shared Directory Cache, and uses a larger and slower second-level Private Directory Cache to track the large number of private blocks. Experimental results show that, compared to a conventional directory, the PS-Directory improves performance while also reducing silicon area and energy consumption.
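To make the two-level organization concrete, the following Python sketch (not taken from the thesis; the structure sizes, LRU eviction, and promotion rule are illustrative assumptions) models a lookup that probes a small shared structure first and falls back to a larger private one.

```python
from collections import OrderedDict

class PSDirectory:
    """Toy model of a two-level directory cache: a small, fast structure for
    shared entries and a larger one for private entries (sizes are illustrative)."""

    def __init__(self, shared_entries=64, private_entries=1024):
        self.shared = OrderedDict()   # block address -> set of sharer core ids
        self.private = OrderedDict()  # block address -> single owner core id
        self.shared_entries = shared_entries
        self.private_entries = private_entries

    def lookup(self, block, requester):
        # Probe the small Shared Directory Cache first: shared entries are the
        # minority but are accessed most frequently.
        if block in self.shared:
            self.shared.move_to_end(block)
            self.shared[block].add(requester)
            return "shared", self.shared[block]
        # Fall back to the larger, slower Private Directory Cache.
        if block in self.private:
            owner = self.private[block]
            if owner != requester:
                # A second core touches the block: promote it to the shared level.
                sharers = {owner, requester}
                del self.private[block]
                self._insert(self.shared, block, sharers, self.shared_entries)
                return "promoted", sharers
            return "private", {owner}
        # Directory miss: allocate the block as private to the requester.
        self._insert(self.private, block, requester, self.private_entries)
        return "miss", {requester}

    @staticmethod
    def _insert(cache, key, value, capacity):
        if len(cache) >= capacity:
            cache.popitem(last=False)  # evict the least recently used entry
        cache[key] = value

d = PSDirectory()
print(d.lookup(0x40, requester=0))  # miss -> allocated as private
print(d.lookup(0x40, requester=1))  # second sharer -> promoted to the shared level
```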
In this thesis we also show that the shared/private ratio of entries in the directory varies
across applications and across different execution phases within the applications, which
encourages us to propose the Dynamic Way Partitioning (DWP) Directory. DWP-Directory reduces the number of ways that provide storage for shared blocks and allows this storage to be powered on or off at run time, according to the dynamic requirements of the applications, by following a repartitioning algorithm. Results show performance similar to that of a traditional directory with high associativity, and area requirements similar to those of recent state-of-the-art schemes. In addition, DWP-Directory achieves notable static and dynamic power
consumption savings.
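The repartitioning policy itself is not detailed in this abstract; purely as an illustrative sketch (the thresholds and the one-way step are assumptions), a periodic decision could look like this:

```python
def repartition_shared_ways(shared_hits, shared_misses, ways_on, total_ways,
                            grow_threshold=0.10, shrink_threshold=0.02):
    """Toy repartitioning step: decide how many shared ways stay powered on,
    based on the observed miss ratio of shared entries in the last interval.
    The thresholds and single-way step are illustrative assumptions."""
    accesses = shared_hits + shared_misses
    if accesses == 0:
        return ways_on
    miss_ratio = shared_misses / accesses
    if miss_ratio > grow_threshold and ways_on < total_ways:
        return ways_on + 1   # power one more way on for shared entries
    if miss_ratio < shrink_threshold and ways_on > 1:
        return ways_on - 1   # power one way off to save static energy
    return ways_on

# Example: a high shared miss ratio in the last interval -> grow by one way.
print(repartition_shared_ways(shared_hits=900, shared_misses=200, ways_on=2, total_ways=8))
```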
This dissertation also deals with the power-scalability issues found in processor caches. A significant fraction of the total power budget is consumed by on-chip caches, which are usually deployed with a high degree of associativity (even L1 caches are being implemented with eight ways) to enhance system performance. On a cache access, every way in the corresponding set is accessed in parallel, which is costly in terms of energy. This thesis presents the PS-Cache architecture, an energy-efficient cache design that reduces the number of accessed ways without hurting performance. The PS-Cache takes advantage of the private-shared knowledge of the referenced block to reduce energy by accessing only those ways that can hold the kind of block being looked up.
Results show significant dynamic power consumption savings.
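A minimal sketch of the way-filtering idea follows, with an assumed per-way private/shared label (how that classification is obtained is outside this sketch):

```python
def ways_to_probe(set_ways, block_is_shared):
    """Toy PS-Cache lookup filter: probe only the ways that can hold the kind
    of block being requested (private or shared). The per-way labeling is an
    illustrative assumption about how ways might be classified."""
    wanted = "shared" if block_is_shared else "private"
    return [i for i, way in enumerate(set_ways) if way["kind"] == wanted]

cache_set = [
    {"kind": "private", "tag": 0x1A}, {"kind": "private", "tag": 0x2B},
    {"kind": "shared",  "tag": 0x3C}, {"kind": "private", "tag": 0x4D},
]
# A request whose target block is known to be private only needs to probe
# three of the four ways in this set.
print(ways_to_probe(cache_set, block_is_shared=False))  # -> [0, 1, 3]
```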
Finally, we propose an energy-efficient architectural design that can be effectively applied
to any kind of set-associative cache memory, not only to processor caches. The proposed
approach, called the Tag Filter (TF) Architecture, filters the ways accessed in the target
cache set so that only a few ways are searched in the tag and data arrays. This allows the
approach to reduce the dynamic energy consumption of caches without hurting their
access time. For this purpose, the proposed architecture holds the X least significant
bits of each tag in a small auxiliary X-bit-wide array. These bits are used to filter out the ways whose stored low-order tag bits do not match those of the requested address. Experimental results show that this filtering mechanism brings the energy consumption of set-associative caches close to that of direct-mapped ones.
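The filtering step can be illustrated with a small sketch; the choice of X = 4 and the stored low-bit values are assumptions made for the example:

```python
def tf_filter(address_tag, low_bit_array, x_bits=4):
    """Toy Tag Filter step: compare only the X least significant tag bits held
    in a small auxiliary array, and return the ways whose low bits match; only
    those ways then perform the full tag and data array lookup. X=4 here is an
    illustrative choice."""
    mask = (1 << x_bits) - 1
    wanted = address_tag & mask
    return [way for way, low_bits in enumerate(low_bit_array) if low_bits == wanted]

# Low X bits of the tags currently stored in an 8-way set (illustrative values).
low_bits_per_way = [0x3, 0xA, 0x3, 0x7, 0xF, 0x3, 0x1, 0x9]
print(tf_filter(address_tag=0x5B3, low_bit_array=low_bits_per_way))  # -> ways 0, 2, 5
```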
Experimental results show that the proposals presented in this thesis offer a good tradeoff
among these three major design axes. / As the number of cores increases in new generations of chip multiprocessors, CMPs must scale in performance, area, and energy consumption to meet the demands of a larger number of cores. Directory-based protocols constitute the most scalable alternative. A conventional directory, however, suffers from an inefficient use of storage and energy. First, the large, poorly scalable sharer vectors consume an unnecessary amount of leakage energy and area, especially considering that most of the blocks in a directory are cached by only a single core. Second, although increasing the size and associativity of the directory would improve system performance, it would entail a notable increase in energy consumption.
This thesis studies the significant differences between the behavior of private and shared blocks in the directory, which leads us toward separate management for each type of block. We propose the PS-Directory, a two-level directory cache that keeps the small number of shared entries, which are the most frequently accessed, in a small first-level structure (namely, the Shared Directory Cache) and uses a larger and slower second-level structure (the Private Directory Cache) to hold the information of the private blocks. Experimental results show that, compared to a conventional directory, the PS-Directory improves performance while reducing silicon area and energy consumption.
Since the shared/private ratio of directory entries varies across applications and across the different execution phases within the applications, we propose the Dynamic Way Partitioning (DWP) Directory. DWP-Directory reduces the number of ways that store shared entries and allows them to be powered on or off at run time according to the dynamic requirements of the applications, following a repartitioning algorithm. The results show performance similar to that of a traditional high-associativity directory and an area similar to that of other recent state-of-the-art schemes. Additionally, DWP-Directory obtains important reductions in static and dynamic power consumption.
This dissertation also addresses the scalability problems found in cache memories. On a cache access, every way of the set is accessed in parallel, which is costly in terms of energy. This thesis presents the PS-Cache architecture, an energy-efficient design that reduces the number of accessed ways without hurting performance. The PS-Cache uses the private-shared state of the referenced block to reduce energy, since only the subset of ways that hold blocks of the requested type is accessed. The results show important dynamic energy savings.
Finally, we propose another energy-efficient architectural design that can be applied to any kind of set-associative cache memory. The proposal, the Tag Filter (TF) Architecture, filters the ways accessed in the cache set, so that only a reduced number of ways are looked up in both the tag and data arrays. This allows our proposal to reduce the dynamic energy consumption of caches without hurting their access time. Experimental results show that this filtering mechanism achieves an energy consumption in set-associative caches similar to that of direct-mapped caches.
Experimental results show that the proposals presented in this thesis achieve a good trade-off among these three major design pillars. / As the number of cores increases in new generations of chip multiprocessors, CMPs must scale in performance, area, and energy consumption to meet the demands of a larger number of cores. Directory-based protocols are the most scalable alternative. A conventional directory, however, suffers from an inefficient use of storage and energy. First, the large, poorly scalable sharer vectors consume an unnecessary amount of static energy and area, especially considering that most of the blocks in a directory are found in the cache of only a single core. Second, although increasing the size and associativity of the directory would improve system performance, it would entail a notable increase in energy consumption.
This thesis studies the significant differences between the behavior of private and shared blocks within the directory, which guides us toward separate management for each type of block. We propose the PS-Directory, a two-level directory cache that keeps the small number of shared-block entries, which are the most frequently accessed, in a small first-level structure (namely, the Shared Directory Cache) and uses a larger and slower second-level structure (the Private Directory Cache) to hold the information of the private blocks.
Experimental results show that, compared to a conventional directory, the PS-Directory improves performance while reducing silicon area and energy consumption.
Since the shared/private ratio of directory entries varies across applications and across the different execution phases within the applications, we propose the Dynamic Way Partitioning (DWP) Directory. DWP-Directory reduces the number of ways that store shared entries and allows them to be powered on or off at run time according to the dynamic requirements of the applications, following a repartitioning algorithm. The results show performance similar to that of a traditional high-associativity directory and an area similar to that of other recent state-of-the-art schemes. Additionally, DWP-Directory obtains important reductions in static and dynamic power consumption.
This dissertation also addresses the scalability problems found in cache memories. On-chip caches consume a significant fraction of the total system power. These caches are implemented with a high degree of associativity. On a cache access, every way of the set is accessed in parallel, which is costly in terms of energy. This thesis presents the PS-Cache architecture, an energy-efficient design that reduces the number of accessed ways without hurting performance. The PS-Cache uses the private-shared state of the referenced block to reduce energy, since only the subset of ways that hold blocks of the requested type is accessed. The results show important dynamic energy savings.
Finally, we propose another energy-efficient architectural design that can be applied to any kind of set-associative cache memory. The proposal, the Tag Filter (TF) Architecture, filters the ways accessed in the cache set, so that only a reduced number of ways are looked up in both the tag and data arrays. This allows our proposal to reduce the dynamic energy consumption of caches without hurting their access time. Experimental results show that this filtering mechanism achieves an energy consumption in set-associative caches similar to that of direct-mapped caches.
Experimental results show that the proposals presented in this thesis achieve a good trade-off among these three major design pillars. / Valls Mompó, JJ. (2017). Improving Energy and Area Scalability of the Cache Hierarchy in CMPs [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/79551
|
42 |
Towards a Scalable Docker Registry
Littley, Michael Brian, 29 June 2018 (has links)
Containers are an alternative to virtual machines and are rapidly increasing in popularity due to their minimal overhead. To help facilitate their adoption, containers use management systems with central registries to store and distribute container images. However, these registries rely on other, preexisting services to provide load balancing and storage, which limits their scalability. This thesis introduces a new registry design for Docker, the most prevalent container management system. The new design coalesces all the services into a single, highly scalable registry. By increasing the scalability of the registry, the new design greatly decreases the distribution time for container images. This work also describes a new Docker registry benchmarking tool, the trace player, which uses real Docker registry workload traces to test the performance of new registry designs and setups. / Master of Science / Cloud services allow many different web applications to run on shared machines. The applications can be owned by a variety of customers to provide many different types of services. Because these applications are owned by different customers, they need to be isolated to ensure the users' privacy and security. Containers are one technology that can provide isolation to the applications on a single machine, and they are rapidly gaining popularity because they incur less overhead on the applications that use them. This means the applications will run faster with the same isolation guarantees as other isolation technologies. Containers also allow the cloud provider to run more applications on a single machine, letting it serve more customers. Docker is by far the most popular container management system on the market. It provides a registry service for containerized application storage and distribution. Users can store snapshots of their applications on the registry and then use the snapshots to run multiple copies of the application on different machines. As more and more users use the registry service, the registry becomes slower, making it take longer for users to pull their applications from the registry. This increases the start time of their applications and makes it harder to scale the applications out to more machines to accommodate more customers of their services. This work creates a new registry design that allows the registry to handle more users and lets them retrieve their applications even faster than is currently possible. This allows them to scale their applications out to more machines more rapidly to handle more customers. The customers, in turn, will have a better experience.
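As an illustration of what replaying a registry workload trace might look like (the trace format, request fields, and endpoint below are assumptions, not the thesis's actual trace-player interface):

```python
import json
import time
import urllib.request

def replay_trace(trace_path, registry_url):
    """Illustrative trace replayer: read a JSON-lines trace of registry requests
    and issue them against a registry endpoint, preserving inter-arrival gaps.
    The trace fields (method, uri, delay) and the URL layout are assumptions."""
    with open(trace_path) as f:
        for line in f:
            req = json.loads(line)
            time.sleep(req.get("delay", 0))            # inter-arrival gap in seconds
            url = registry_url + req["uri"]            # e.g. /v2/<name>/blobs/<digest>
            r = urllib.request.Request(url, method=req.get("method", "GET"))
            started = time.time()
            try:
                with urllib.request.urlopen(r) as resp:
                    resp.read()
                    print(req["uri"], resp.status, f"{time.time() - started:.3f}s")
            except Exception as exc:                   # record failures instead of aborting
                print(req["uri"], "error", exc)

# Example (hypothetical trace file and local registry):
# replay_trace("registry_trace.jsonl", "http://localhost:5000")
```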
|
43 |
Scalable Transactions in Decentralized Networks
Painter, Zachary M, 01 January 2024 (has links) (PDF)
The study of shared memory concurrency is extensive. There exist many state-of-the-art strategies for dealing with fundamental concurrency problems, such as race conditions or deadlocks, to leverage massive performance boosts out of modern multiprocessors. With the introduction of blockchain technology as a popular financial tool, we observe many decades-old concurrency problems re-emerge within the context of decentralized networks. These challenges introduce additional constraints, such as the lack of hardware atomic instructions like Compare-And-Swap, or the potential for malicious clients to join the network. In this dissertation, we propose key algorithms which adapt knowledge from the domain of shared memory concurrency to solve emerging concurrency problems in decentralized networks.
We propose three key algorithms which further the state of the art in decentralized networks. (1) We present Hash-Mark-Set, a concurrent algorithm for providing a read-uncommitted view of the blockchain state, enabling a higher success rate in transaction use cases where state changes frequently in relation to the block interval. (2) We propose Proof of Descriptor, a descriptor based consensus mechanism for decentralized networks. Proof of Descriptor utilizes well-known techniques from shared memory concurrent programming to create an efficient and scalable algorithm for blockchain consensus. (3) We propose a descriptor-based algorithm for concurrent execution of smart contracts that efficiently captures the concurrent execution as a graph of descriptors, enabling validators to analyze the concurrent execution and verify its results through re-execution.
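As a simplified illustration of the descriptor idea (a plain log rather than the dependency graph the dissertation describes, and with invented field names):

```python
from dataclasses import dataclass, field

@dataclass
class Descriptor:
    """Toy descriptor for one smart-contract step: records the keys read and the
    values written so a validator can later re-execute and compare outcomes.
    The fields are illustrative, not the dissertation's actual layout."""
    contract: str
    reads: dict = field(default_factory=dict)    # key -> value observed
    writes: dict = field(default_factory=dict)   # key -> value produced

def execute(state, contract, key, delta):
    """Apply a simple increment-style operation and capture it in a descriptor."""
    d = Descriptor(contract)
    old = state.get(key, 0)
    d.reads[key] = old
    d.writes[key] = old + delta
    state[key] = d.writes[key]
    return d

def validate(initial_state, descriptors):
    """Re-execute the captured steps over a copy of the initial state and check
    that every recorded read is reproduced before applying the recorded writes."""
    state = dict(initial_state)
    for d in descriptors:
        for k, v in d.reads.items():
            if state.get(k, 0) != v:
                return False
        state.update(d.writes)
    return True

state, log = {"balance": 10}, []
log.append(execute(state, "transfer", "balance", +5))
log.append(execute(state, "transfer", "balance", -3))
print(validate({"balance": 10}, log))  # True: the captured execution replays cleanly
```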
|
44 |
ROUTING IN MOBILE AD-HOC NETWORKS: SCALABILITY AND EFFICIENCY
Bai, Rendong, 01 January 2008 (has links)
Mobile Ad-hoc Networks (MANETs) have received considerable research interest in recent years. Because of dynamic topology and limited resources, it is challenging to design routing protocols for MANETs. In this dissertation, we focus on the scalability and efficiency problems in designing routing protocols for MANETs. We design the Way Point Routing (WPR) model for medium to large networks. WPR selects a number of nodes on a route as waypoints and divides the route into segments at the waypoints. Waypoint nodes run a high-level inter-segment routing protocol, and nodes on each segment run a low-level intra-segment routing protocol. We use DSR and AODV as the inter-segment and the intra-segment routing protocols, respectively. We term this instantiation the DSR Over AODV (DOA) routing protocol. We develop Salvaging Route Reply (SRR) to salvage undeliverable route reply (RREP) messages. We propose two SRR schemes: SRR1 and SRR2. In SRR1, a salvor actively broadcasts a one-hop salvage request to find an alternative path to the source. In SRR2, nodes passively learn an alternative path from duplicate route request (RREQ) packets. A salvor uses the alternative path to forward a RREP when the original path is broken. We propose Multiple-Target Route Discovery (MTRD) to aggregate multiple route requests into one RREQ message and to discover multiple targets simultaneously. When a source initiates a route discovery, it first tries to attach its request to existing RREQ packets that it relays. MTRD improves routing performance by reducing the number of regular route discoveries. We develop a new scheme called Bilateral Route Discovery (BRD), in which both source and destination actively participate in a route discovery process. BRD consists of two halves: a source route discovery and a destination route discovery, each searching for the other. BRD has the potential to reduce control overhead by one half. We propose an efficient and generalized approach called Accumulated Path Metric (APM) to support High-Throughput Metrics (HTMs). APM finds the shortest path without collecting topology information and without running a shortest-path algorithm. Moreover, we develop the Broadcast Ordering (BO) technique to suppress unnecessary RREQ transmissions.
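A toy sketch of the waypoint idea in WPR follows; the fixed waypoint spacing is an assumption made purely for illustration, not the dissertation's actual selection policy:

```python
def split_route(route, segment_length=3):
    """Toy Way Point Routing split: pick every segment_length-th node as a
    waypoint and cut the route into segments at those nodes. Waypoints would
    run the inter-segment protocol; nodes inside each segment run the
    intra-segment protocol."""
    waypoints = route[::segment_length]
    if route[-1] not in waypoints:
        waypoints.append(route[-1])
    segments = []
    for start, end in zip(waypoints, waypoints[1:]):
        i, j = route.index(start), route.index(end)
        segments.append(route[i:j + 1])
    return waypoints, segments

route = ["S", "a", "b", "W1", "c", "d", "W2", "e", "D"]
wps, segs = split_route(route)
print(wps)   # ['S', 'W1', 'W2', 'D']
print(segs)  # [['S','a','b','W1'], ['W1','c','d','W2'], ['W2','e','D']]
```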
|
45 |
Microservices in data intensive applications
Remeika, Mantas; Urbanavicius, Jovydas, January 2018 (links)
The volumes of data that Big Data applications have to process are constantly increasing, which requires the development of highly scalable systems. The microservices architecture is considered one of the solutions to the scalability problem. However, the literature on practices for building scalable data-intensive systems is still lacking. This thesis aims to investigate and present the benefits and drawbacks of using a microservices architecture in big data systems. Moreover, it presents other practices used to increase scalability, including containerization, shared-nothing architecture, data sharding, load balancing, clustering, and stateless design. Finally, an experiment comparing the performance of a monolithic application and a microservices-based application was performed. The results show that, as the load increases, the microservices-based application performs better than the monolith. However, to cope with the constantly increasing amount of data, additional techniques should be used together with microservices.
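One of the listed practices, data sharding, can be sketched briefly (the hash function and shard count are illustrative choices, not those of the thesis):

```python
import hashlib

def shard_for(key, shard_count):
    """Toy hash-based sharding: map a record key to one of shard_count shards.
    A stable hash keeps the mapping consistent across stateless service
    instances, so any instance can route a request to the right shard."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % shard_count

for user in ["alice", "bob", "carol"]:
    print(user, "-> shard", shard_for(user, shard_count=4))
```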
|
46 |
Adaptive power control in wireless networks for scalable and fair capacity distributions.
January 2006 (has links)
Ho Wang Hei. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 93-94). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Motivation and Contributions --- p.1 / Chapter 1.1.1 --- Scalability of Network Capacity with Power Control --- p.1 / Chapter 1.1.2 --- Trade-off between network capacity and fairness with Power Control --- p.3 / Chapter 1.2 --- Related Work --- p.4 / Chapter 1.3 --- Organization of the Thesis --- p.6 / Chapter 2 --- Background --- p.8 / Chapter 2.1 --- Hidden- and Exposed-node Problems --- p.8 / Chapter 2.1.1 --- HN-free Design (HFD) --- p.9 / Chapter 2.1.2 --- Non-Scalable Capacity in 802.11 caused by EN --- p.11 / Chapter 2.2 --- Shortcomings of Minimum-Transmit-Power Approach --- p.13 / Chapter 3 --- Simultaneous Transmissions Constraints with Power Control --- p.15 / Chapter 3.1 --- Physical-Collision Constraints --- p.16 / Chapter 3.1.1 --- Protocol-Independent Physical-Collision Constraints --- p.17 / Chapter 3.1.2 --- Protocol-Specific Physical-Collision Constraints --- p.17 / Chapter 3.2 --- Protocol-Collision-Prevention Constraints --- p.18 / Chapter 3.2.1 --- Transmitter-Side Carrier-Sensing Constraints --- p.18 / Chapter 3.2.2 --- Receiver-Side Carrier-Sensing Constraints --- p.19 / Chapter 4 --- Graph Models for Capturing Transmission Constraints and Hidden-node Problems --- p.20 / Chapter 4.1 --- Link-Interference Graph from Physical-Collision Constraints --- p.21 / Chapter 4.2 --- Protocol-Collision-Prevention Graphs --- p.22 / Chapter 4.3 --- Ideal Protocol-Collision-Prevention Graphs --- p.22 / Chapter 4.4 --- Definition of HN and EN and their Investigation using Graph Model --- p.23 / Chapter 4.5 --- Attacking Cases --- p.26 / Chapter 5 --- Scalability of Network Capacity with Adaptive Power Control --- p.27 / Chapter 5.1 --- Selective Disregard of NAVs (SDN) --- p.27 / Chapter 5.2 --- Scalability of Network Capacity: Analytical Discussion --- p.29 / Chapter 5.3 --- Adaptive Power Control for SDN --- p.31 / Chapter 5.3.1 --- Per-iteration Power Adjustment --- p.32 / Chapter 5.3.2 --- Power Control Scheduling Strategy --- p.35 / Chapter 5.3.3 --- Power Exchange Algorithm --- p.39 / Chapter 5.3.4 --- Comparison of Scheduling Strategies --- p.41 / Chapter 5.4 --- Scalability of Network Capacity: Numerical Results --- p.43 / Chapter 6 --- Decoupled Adaptive Power Control (DAPC) --- p.45 / Chapter 6.1 --- Per-iteration Power Adjustment --- p.45 / Chapter 6.2 --- Power Exchange Algorithm --- p.47 / Chapter 6.3 --- Implementation of DAPC --- p.48 / Chapter 6.4 --- Deadlock Problem in DAPC --- p.50 / Chapter 7 --- Progressive-Uniformly-Scaled Power Control (PUSPC): Deadlock-free Design --- p.53 / Chapter 7.1 --- Algorithm of PUSPC --- p.53 / Chapter 7.2 --- Deadlock-free property of PUSPC --- p.60 / Chapter 7.3 --- Deadlock Resolution of DAPC using PUSPC --- p.62 / Chapter 8 --- Incremental Power Adaptation --- p.65 / Chapter 8.1 --- Incremental Power Adaptation (IPA) --- p.65 / Chapter 8.2 --- Maximum Allowable Power in EPA --- p.68 / Chapter 8.3 --- Numerical Results of IPA --- p.71 / Chapter 9 --- Numerical Results and the Trade-off between EN and HN --- p.78 / Chapter 10 --- Conclusion --- p.83 / Appendix I: Proof of the Correct Operation of PE Algorithm for APC for SDN --- p.86 / Appendix II: Proof of the Correct Operation of PE Algorithm for DAPC --- p.89 / Appendix III: Scalability of the Communication Cost of PE Algorithm --- p.91 / Bibliography --- p.93
|
47 |
Analysis and Management of Security State for Large-Scale Data Center Networks
January 2018 (has links)
With the increasing complexity of computing systems and the rise in the number of risks and vulnerabilities, it is necessary to provide a scalable security situation awareness tool to assist the system administrator in protecting critical assets and managing the security state of the system. There are many methods for security state analysis and management, for instance using a firewall to manage the security state, or graphical analysis tools such as attack graphs.
Attack graphs are powerful graphical security analysis tools, as they provide a visual representation of all possible attack scenarios that an attacker may follow to exploit system vulnerabilities. The attack graph's scalability, however, is a major concern, since enumerating all possible attack scenarios is considered an NP-complete problem. There has been much research work trying to come up with a scalable solution for the attack graph. Nevertheless, attack-graph-based solutions have not been practical for real-time security analysis.
In this thesis, a new framework, namely the 3S (Scalable Security States) analysis framework, is proposed, which presents a new approach that utilizes Software-Defined Networking (SDN)-based distributed firewall capabilities and the concept of a stateful data plane to construct scalable attack graphs in near real time, a practical approach to using attack graphs for real-time security decisions. The goal of the proposed work is to control reachability information between different datacenter segments in order to reduce the dependencies among vulnerabilities and restrict the attack graph analysis to a relatively small scope. The proposed framework is based on SDN's programmable capabilities to adjust the distributed firewall policies dynamically according to security situations at run time. It applies white-list-based security policies to limit the attacker's ability to move across or exploit different segments by only allowing uni-directional vulnerability dependency links between segments. Specifically, several test cases are presented with various attack scenarios, analyzing how the distributed firewall and the stateful SDN data plane can significantly reduce the security state construction and analysis effort. The proposed approach achieves an improvement of over 61% compared with prior models in which SDN and the distributed firewall are not in use. / Dissertation/Thesis / Masters Thesis Computer Engineering 2018
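A hypothetical sketch of how white-listed, uni-directional segment flows restrict which attack edges need to be considered (the hosts, segments, and policy below are invented for illustration):

```python
from itertools import product

# Hypothetical inputs: which datacenter segment each host belongs to, which
# hosts expose an exploitable vulnerability, and which one-way segment-to-segment
# flows the (SDN-managed) distributed firewall white-lists.
segment_of = {"web1": "dmz", "app1": "app", "db1": "data"}
vulnerable = {"web1", "app1", "db1"}
allowed_flows = {("dmz", "app"), ("app", "data")}   # uni-directional only

def attack_edges(segment_of, vulnerable, allowed_flows):
    """Enumerate attacker moves (src -> dst) that the firewall policy still
    permits: a vulnerable destination reachable from the source's segment.
    Keeping flows uni-directional prevents cycles between segments, which is
    what keeps the resulting graph small."""
    edges = []
    for src, dst in product(segment_of, repeat=2):
        if src == dst or dst not in vulnerable:
            continue
        if (segment_of[src], segment_of[dst]) in allowed_flows:
            edges.append((src, dst))
    return edges

print(attack_edges(segment_of, vulnerable, allowed_flows))
# [('web1', 'app1'), ('app1', 'db1')] -- db1 cannot reach back into dmz or app
```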
|
48 |
Avaliação de escalabilidade e desempenho da camada de transporte de mensagens em plataformas multiagente / Scalability and performance comparison between message transport systems of multiagent platforms
Rodrigues, Henrique Donâncio Nunes, 12 August 2019 (has links)
This work lies in the field of multiagent systems (MAS) composed of intelligent agents that are able to use Internet communication protocols. A multiagent platform is a piece of software or a framework capable of managing multiple aspects of agent execution and their interactions. Many MAS platforms have been developed in recent years, all of them compliant with standards for developing interoperable systems at different levels. In recent years, new programming languages have been defined and new protocols have been adopted for communication in distributed systems. These facts have also influenced the multiagent community, with the proposition of new platforms to support the development of multiagent systems. In addition, the adoption of agents as a paradigm for developing large-scale complex distributed systems is seen as an interesting solution in the era of big data. Therefore, a comparison between existing platforms and their support for efficiently developing and deploying large-scale multiagent systems can benefit the developer community interested in choosing which platform best fits their projects. The objective of this work is to evaluate multiagent platforms with respect to scalability, performance, and compatibility with other technologies, in order to ease the choice for developers who want to design large-scale multiagent systems. To choose the MAS platforms for the proposed comparison, we consider open-source platforms that are actively used by the multiagent community. Moreover, such MAS platforms must be able to offer a distributed deployment, an essential characteristic of scalable systems. After narrowing the list of MAS platforms according to these criteria, their message transport systems are analyzed using benchmarks for scalability and performance analysis, considering different communication scenarios. Finally, a realistic scenario is presented in which a scalable MAS can be adopted as a solution. / This work resides in the field of multiagent systems (MAS) composed of intelligent agents that are able to use Internet communication protocols. A multiagent platform is a piece of software or a framework capable of managing multiple aspects of agent execution and their interactions. In recent years, many MAS platforms have been developed, all of them compliant with interoperable system development standards at different levels. Also, new programming languages have been defined and new protocols have been adopted for communication in distributed systems. These facts have also influenced the multiagent community, with the proposition of new platforms to support the development of multiagent systems. In addition, the adoption of agents as a paradigm for the development of large-scale complex distributed systems is seen as an interesting solution in the era of big data. Therefore, a comparison between existing platforms and their support for efficiently developing and deploying large-scale multiagent systems can benefit the developer community interested in choosing which platform best fits their projects. The purpose of this work is to evaluate multiagent platforms for scalability, performance, and compatibility with other technologies, in order to facilitate the choice of developers who want to design large-scale multiagent systems.
In order to choose the MAS platforms for the proposed comparison, we consider open-source platforms that are actively used by the multiagent community. Moreover, these MAS platforms should be able to provide a distributed deployment, an essential characteristic of scalable systems. After narrowing the list of MAS platforms according to these criteria, their message transport systems are analyzed using benchmarks for scalability and performance comparison, considering different communication scenarios. Finally, a realistic scenario is presented where a scalable MAS can be adopted as a solution.
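As a minimal illustration of a message round-trip benchmark (in-process queues stand in for a platform's transport here; real measurements would target the platforms' actual message transport systems):

```python
import time
from queue import Queue
from threading import Thread

def pingpong(messages=10_000):
    """Minimal round-trip benchmark sketch: two 'agents' exchange messages over
    in-process queues standing in for a message transport system."""
    to_b, to_a = Queue(), Queue()

    def agent_b():
        for _ in range(messages):
            to_a.put(to_b.get())      # echo every message back

    Thread(target=agent_b, daemon=True).start()
    start = time.perf_counter()
    for i in range(messages):
        to_b.put(i)
        to_a.get()
    elapsed = time.perf_counter() - start
    print(f"{messages} round trips in {elapsed:.2f}s "
          f"({messages / elapsed:.0f} round trips/s)")

pingpong()
```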
|
49 |
Elasca: Workload-Aware Elastic Scalability for Partition Based Database Systems
Rafiq, Taha, January 2013 (has links)
Providing the ability to increase or decrease allocated resources on demand as the transactional load varies is essential for database management systems (DBMS) deployed on today's computing platforms, such as the cloud. The need to maintain consistency of the database, at very large scales, while providing high performance and reliability makes elasticity particularly challenging. In this thesis, we exploit data partitioning as a way to provide elastic DBMS scalability. We assert that the flexibility provided by a partitioned, shared-nothing parallel DBMS can be used to implement elasticity. Our idea is to start with a small number of servers that manage all the partitions, and to elastically scale out by dynamically adding new servers and redistributing database partitions among these servers as the load varies. Implementing this approach requires (a) efficient mechanisms for addition/removal of servers and migration of partitions, and (b) policies to efficiently determine the optimal placement of partitions on the given servers as well as plans for partition migration.
This thesis presents Elasca, a system that implements both of these features in an existing shared-nothing DBMS (namely VoltDB) to provide automatic elastic scalability. Elasca consists of a mechanism for enabling elastic scalability and a workload-aware optimizer for determining optimal partition placement and migration plans. Our optimizer minimizes the computing resources required and balances load effectively without compromising system performance, even in the presence of variations in the intensity and skew of the load. The results of our experiments show that Elasca is able to achieve performance close to that of a fully provisioned system while saving 35% of resources on average. Furthermore, Elasca's workload-aware optimizer performs up to 79% less data movement than a greedy approach to resource minimization, and also balances load much more effectively.
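Elasca's optimizer is workload-aware and also plans migrations; the much simpler greedy sketch below only conveys the flavor of the partition-placement problem (server capacity and partition loads are invented values):

```python
def place_partitions(partition_load, capacity):
    """Toy partition-placement heuristic: assign partitions (heaviest first) to
    the least-loaded server that still has room, adding a server only when
    needed. This is not Elasca's optimizer, just an illustration of the
    scale-out decision it automates."""
    servers = [[]]                      # start with a single server
    load = [0.0]
    for part, demand in sorted(partition_load.items(), key=lambda kv: -kv[1]):
        target = min(range(len(servers)), key=lambda s: load[s])
        if load[target] + demand > capacity:
            servers.append([])          # scale out: add a new server
            load.append(0.0)
            target = len(servers) - 1
        servers[target].append(part)
        load[target] += demand
    return servers, load

placement, load = place_partitions(
    {"p1": 0.5, "p2": 0.4, "p3": 0.3, "p4": 0.2, "p5": 0.1}, capacity=0.8)
print(placement)
print(load)
```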
|
50 |
Flexible Computing with Virtual Machines
Lagar Cavilla, Horacio Andres, 30 March 2011 (has links)
This thesis is predicated upon a vision of the future of computing with a separation of functionality between core and edges, very
similar to that governing the Internet itself. In this vision, the core of our computing infrastructure is made up of vast server farms with an abundance of storage and processing cycles. Centralization of
computation in these farms, coupled with high-speed wired or wireless connectivity, allows for pervasive access to a highly-available and well-maintained repository for data, configurations, and applications. Computation in the edges is concerned with provisioning application state and user data to rich clients, notably mobile devices equipped with powerful displays and graphics processors.
We define flexible computing as systems support for applications that dynamically leverage the resources available in the core
infrastructure, or cloud. The work in this thesis focuses on two instances of flexible computing that are crucial to the
realization of the aforementioned vision. Location flexibility aims to, transparently and seamlessly, migrate applications between
the edges and the core based on user demand. This enables performing the interactive tasks on rich edge clients and the computational tasks on powerful core servers. Scale flexibility is the ability of
applications executing in cloud environments, such as parallel jobs or
clustered servers, to swiftly grow and shrink their footprint according to execution demands.
This thesis shows how we can use system virtualization to implement systems that provide scale and location flexibility. To that effect we build and evaluate two system prototypes: Snowbird and SnowFlock. We present techniques for manipulating virtual machine state that turn running software into a malleable entity which is easily manageable, is decoupled from the underlying hardware, and is capable of dynamic relocation and scaling. This thesis demonstrates that virtualization technology is a powerful and suitable tool to
enable solutions for location and scale flexibility.
|