131 |
Scalability and performance management of internet applications in the cloud
Dawoud, Wesam, January 2013 (has links)
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly with increases in workload. However, the overhead of provisioning resources exposes the application to periods of under-provisioning and performance degradation. Moreover, performance interference caused by consolidation in the cloud environment complicates the performance management of Internet applications.
In this dissertation, we propose two approaches to mitigate the impact of the resource-provisioning overhead. The first employs control theory to scale resources vertically and cope quickly with workload changes. It assumes that the provider has knowledge of, and control over, the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side one that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution that finds the scaling thresholds achieving the highest performance for the lowest increase in cost. The second approach also employs a proposed time-series forecasting algorithm to scale the application proactively and avoid periods of under-provisioning. Furthermore, to mitigate the impact of interference on application performance, we developed a system that finds and eliminates the VMs suffering from performance interference. It is a lightweight solution that requires no provider involvement.
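As an illustration of the proactive, threshold-based idea, the sketch below pairs a simple forecast with scale-out/scale-in thresholds so that provisioning starts before load crosses the limit. It is a minimal sketch under assumptions: Holt's linear-trend method stands in for the dissertation's forecasting algorithm, and the threshold values are hypothetical rather than the optimized ones found by the multi-goal search.

```python
# Minimal sketch of proactive threshold-based scaling (illustrative only):
# forecast utilization a few steps ahead and act on the prediction, so the
# resource-provisioning overhead is absorbed before the spike arrives.
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=3):
    """Forecast `horizon` steps ahead with Holt's linear-trend smoothing."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

def scaling_decision(cpu_history, scale_out_at=70.0, scale_in_at=30.0):
    """Decide on the *predicted* utilization, not the current sample."""
    predicted = holt_forecast(cpu_history)
    if predicted > scale_out_at:
        return "scale out"   # provision ahead of the predicted spike
    if predicted < scale_in_at:
        return "scale in"    # release capacity ahead of the predicted lull
    return "hold"

print(scaling_decision([40, 48, 55, 61, 66]))  # rising trend -> "scale out"
```

Acting on the forecast rather than the last sample is what shrinks the under-provisioning window that a purely reactive policy leaves open.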
To evaluate our approaches and the designed algorithms at large scale, we developed a simulator called ScaleSim, in which we implemented scalability components that act like the scalability components of Amazon EC2. The current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable-application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from a real environment, and the workload is generated from the access logs of the 1998 World Cup website. The results show that optimizing the scaling thresholds and adopting proactive scalability can mitigate 88% of the impact of the resource-provisioning overhead with only a 9% increase in cost.
|
132 |
Scalable Collaborative Filtering Recommendation Algorithms on Apache Spark
Casey, Walker Evan, 01 January 2014 (has links)
Collaborative filtering based recommender systems use information about a user's preferences to make personalized predictions about content, such as topics, people, or products, that the user might find relevant. As the volume of accessible information and the number of active users on the Internet continue to grow, it becomes increasingly difficult to compute recommendations quickly and accurately over a large dataset. In this study, we introduce an algorithmic framework built on top of Apache Spark for parallel computation of the neighborhood-based collaborative filtering problem, which allows the algorithm to scale linearly with a growing number of users. We also investigate several variants of this technique, including user- and item-based recommendation approaches, correlation- and vector-based similarity calculations, and selective down-sampling of user interactions. Finally, we provide an experimental comparison of these techniques on the MovieLens dataset of 10 million movie ratings.
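As a concrete illustration of the neighborhood-based computation on Spark, the sketch below derives item-item cosine similarities from co-rated pairs using PySpark RDDs. The data and names are assumptions for illustration; the thesis's framework, its user-based variant, and its down-sampling step are not reproduced.

```python
# Illustrative item-item similarity on Spark: group ratings by user, emit
# co-rated item pairs, then reduce each pair's co-rating vectors to a cosine.
from itertools import combinations
import math

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("item-item-cf").getOrCreate()
sc = spark.sparkContext

# (user, item, rating) triples, e.g. parsed from the MovieLens ratings file.
ratings = sc.parallelize([
    ("u1", "m1", 4.0), ("u1", "m2", 5.0),
    ("u2", "m1", 3.0), ("u2", "m2", 4.0), ("u2", "m3", 1.0),
])

# Group each user's ratings so co-rated item pairs can be generated per user.
by_user = ratings.map(lambda r: (r[0], (r[1], r[2]))).groupByKey()

def item_pairs(user_items):
    # Emit ((item_a, item_b), (rating_a, rating_b)) for every co-rated pair.
    for (i1, r1), (i2, r2) in combinations(sorted(user_items), 2):
        yield ((i1, i2), (r1, r2))

pairs = by_user.flatMap(lambda kv: item_pairs(kv[1]))

def cosine(co_ratings):
    # Cosine similarity between two items over their shared raters.
    dot = sum(a * b for a, b in co_ratings)
    na = math.sqrt(sum(a * a for a, _ in co_ratings))
    nb = math.sqrt(sum(b * b for _, b in co_ratings))
    return dot / (na * nb) if na and nb else 0.0

similarities = pairs.groupByKey().mapValues(lambda v: cosine(list(v)))
print(similarities.collect())
```

Because pair generation is per-user and the similarity reduction is per-pair, both stages parallelize across the cluster, which is the property that lets the computation grow with the user base.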
|
133 |
Analysis and Coding of High Quality Audio Signals
Ning, Daryl, January 2003 (has links)
Digital audio is becoming an ever larger part of our daily lives. Unfortunately, the excessive bitrate of the raw digital signal makes it an extremely expensive representation, while applications such as digital audio broadcasting, high-definition television, and Internet audio require high-quality audio at low bitrates. The field of audio coding addresses this issue of reducing the bitrate of digital audio while maintaining high perceptual quality. Developing an efficient audio coder requires a detailed analysis of the audio signals themselves: it is important to find a representation that can concisely model any general audio signal. In this thesis, we propose two new high-quality audio coders based on two different representations, the sinusoidal-wavelet representation and the warped linear predictive coding (WLPC)-wavelet representation. Audio coders must also be flexible in their application: with the increasing popularity of Internet audio, it is advantageous to address issues related to real-time audio delivery. Bitstream scalability is targeted in this thesis, and a third audio coder capable of bitstream scalability is therefore also proposed. The performance of each proposed coder was evaluated by comparison with the MPEG layer III coder.

The first coder is based on a hybrid sinusoidal-wavelet representation, which assumes that each frame of audio can be modelled as a sum of sinusoids plus a noisy residual. The discrete wavelet transform (DWT) decomposes the residual into subbands that approximate the critical bands of human hearing, and a perceptually derived bit-allocation algorithm minimises the audible distortion introduced by quantising the DWT coefficients. Listening tests showed that the coder delivers near-transparent quality for a range of critical audio signals at 64 kbps and outperforms the MPEG layer III coder at the same bitrate. This coder, however, is only useful for high-quality coding and is difficult to scale to lower rates.

The second coder is based on a hybrid WLPC-wavelet representation. Here, the spectrum of the audio signal is estimated by an all-pole filter using warped linear prediction (WLP). WLP operates on a warped frequency domain whose resolution can be adjusted to approximate that of the human auditory system, which makes the inherent noise shaping of the synthesis filter even better suited to audio coding. The excitation to this filter is transformed using the DWT and perceptually encoded. Listening tests showed that near-transparent coding is achieved at 64 kbps, with the coder slightly superior to the MPEG layer III coder at the same bitrate.

The third coder is similar to the WLPC-wavelet coder, but modified to achieve bitstream scalability. A noise model for high-frequency components keeps the overall bitrate low, and a two-stage quantisation scheme for the DWT coefficients is implemented: the first stage uses fixed-rate scalar and vector quantisation to provide a coarse approximation of the coefficients, allowing low-bitrate, low-quality versions of the input signal to be embedded in the overall bitstream, while the second stage adds detail to the coefficients and hence enhances the quality of the output signal. Listening tests showed that signal quality improves gracefully as the bitrate increases from 16 kbps to 80 kbps, and this coder performs comparably to the MPEG layer III coder operating at a similar (but fixed) bitrate.
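As a rough illustration of the wavelet analysis step these coders share, the sketch below decomposes an audio frame into octave subbands with a DWT and applies a crude uniform quantiser. It assumes the PyWavelets library and synthetic input; a real coder would derive per-band step sizes from a psychoacoustic masking model rather than use the fixed step shown here.

```python
# Decompose a frame into DWT subbands, coarsely quantise, and reconstruct.
import numpy as np
import pywt

fs = 44100
t = np.arange(2048) / fs
frame = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(t.size)

# A 5-level DWT yields 6 octave-spaced subbands (roughly 0-0.7 kHz up to
# 11-22 kHz at fs = 44.1 kHz), a crude stand-in for critical bands.
coeffs = pywt.wavedec(frame, wavelet="db8", level=5)
for i, band in enumerate(coeffs):
    print(f"subband {i}: {band.size} coefficients")

# Uniform quantisation stands in for perceptual bit allocation; a real coder
# would shape the step size per band so the quantisation noise stays masked.
step = 0.05
quantised = [np.round(band / step) * step for band in coeffs]
reconstructed = pywt.waverec(quantised, wavelet="db8")
print("max reconstruction error:", np.max(np.abs(reconstructed - frame)))
```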
|
134 |
Scalable state machine replication / Replicação escalável de máquina de estados
Bezerra, Carlos Eduardo Benevides, January 2016 (has links)
Redundancy provides fault tolerance: a service can run on multiple servers that replicate one another, so that the service remains available even if servers crash. One way to implement such a replicated service is with techniques like state machine replication (SMR). SMR provides fault tolerance while being linearizable, that is, clients cannot distinguish the behaviour of the replicated system from that of a single-site, unreplicated one. However, a fully replicated, linearizable system comes at a cost, namely scalability; by scalability we mean that adding servers increases the maximum system throughput, at least for some workloads. Even with a careful setup, and with optimizations that spare servers unnecessary redundant work, at some point the throughput of a system replicated with SMR cannot be increased by adding servers; in fact, adding replicas may even degrade performance. One way to achieve scalability is to partition the service state and then allow partitions to work independently.
Having a partitioned, yet linearizable and reasonably performant, service is not trivial, however, and that is the research topic addressed here. To allow systems to scale while ensuring linearizability, we propose and implement the following ideas: (i) Scalable State Machine Replication (S-SMR), (ii) Optimistic Atomic Multicast (Opt-amcast), and (iii) Fast S-SMR (Fast-SSMR). S-SMR is an execution model that allows the throughput of the system to scale linearly with the number of servers without sacrificing consistency. To provide faster responses to commands, we developed Opt-amcast, which allows messages to be delivered twice: one delivery guarantees atomic order (conservative delivery), while the other is fast but does not always guarantee atomic order (optimistic delivery). The implementation of Opt-amcast that we propose is Ridge, a protocol that combines low latency with high throughput. Fast-SSMR is an extension of S-SMR that uses the optimistic delivery of Opt-amcast: while a command is being atomically ordered, some precomputation can be done based on its fast, optimistically ordered delivery, improving response time.
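The sketch below shows the two-delivery idea in miniature: a replica speculatively executes a command when it is delivered optimistically, and keeps the result only if the conservative, atomically ordered delivery confirms the guess. It is a toy with an assumed key-value command format, not the Ridge protocol or the Fast-SSMR implementation.

```python
# Toy replica: precompute on optimistic delivery, commit on conservative.
import copy

class SpeculativeReplica:
    def __init__(self):
        self.state = {}          # committed state
        self.speculative = None  # state after speculative execution
        self.pending = None      # command executed speculatively

    def apply(self, state, command):
        key, value = command     # commands are (key, value) writes here
        state[key] = value
        return state

    def on_optimistic_delivery(self, command):
        # Work on a copy so a wrong order guess can simply be discarded.
        self.pending = command
        self.speculative = self.apply(copy.deepcopy(self.state), command)

    def on_conservative_delivery(self, command):
        if command == self.pending:
            self.state = self.speculative   # guess was right: reuse the work
        else:
            self.state = self.apply(self.state, command)  # redo in final order
        self.pending = self.speculative = None

r = SpeculativeReplica()
r.on_optimistic_delivery(("x", 1))
r.on_conservative_delivery(("x", 1))  # fast path taken
print(r.state)                        # {'x': 1}
```

When the optimistic and atomic orders agree, the response is ready as soon as the conservative delivery arrives, which is the saving Fast-SSMR exploits.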
|
135 |
Scalability of the Bitcoin and Nano protocols: a comparative analysis
Bowin, Hampus; Johansson, Daniel, January 2018 (has links)
In the past year, cryptocurrencies have gained a lot of attention because of their rising prices. This attention has increased the number of people trading and investing in different cryptocurrencies, which has led to an increased number of transactions flowing through the different networks and has revealed scalability issues in some of them, especially in the most popular cryptocurrency, Bitcoin. Many people are working on solutions to this problem; one proposed solution replaces the blockchain with a DAG structure. In this report, the scalability of Bitcoin's protocol is compared with that of the protocol used in the newer cryptocurrency Nano, in terms of throughput and latency. To perform this comparison, we conducted an experiment in which tests were run with an increasing number of nodes, each test sending a different number of transactions per second from every node. Our results show that Nano's protocol scales better in both throughput and latency, and we argue that the reason is that the Bitcoin protocol uses a blockchain as a global data structure, unlike Nano, which uses a block-lattice structure in which each node has its own local blockchain.
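The sketch below shows one way such measurements reduce to the two compared metrics, assuming a hypothetical log of per-transaction send and confirmation timestamps; it is illustrative, not the authors' actual test harness.

```python
# Reduce per-transaction timestamps to throughput and latency figures.
from statistics import mean

# (tx_id, send_time_s, confirm_time_s) as a test harness might record them.
log = [
    ("tx1", 0.0, 1.9),
    ("tx2", 0.5, 2.1),
    ("tx3", 1.0, 2.2),
    ("tx4", 1.5, 4.0),
]

latencies = [confirm - send for _, send, confirm in log]
duration = max(c for _, _, c in log) - min(s for _, s, _ in log)

print(f"throughput:   {len(log) / duration:.2f} tx/s")
print(f"mean latency: {mean(latencies):.2f} s")
```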
|
136 |
Joint-e: um framework para avaliação de desempenho e escalabilidade de apis de persistência em ontologias / Joint-e: a framework to evaluate performance and scalability of ontology persistence APIs
Soares, Endhe Elias, 23 May 2014 (has links)
The Semantic Web (SW) is becoming an important research topic in computer science, one reason being the possibility of representing information semantically by means of ontologies, which helps in building applications that can use data and thus become more scalable and intelligent. As a result, many new applications have been developed using Semantic Web technologies such as RDF, SPARQL, and ontologies themselves. Because software development with these technologies is complicated and costly, the SW community has been producing tools and APIs (application programming interfaces) to support programmers in developing semantic applications. Among the most important are the APIs that provide mechanisms for manipulating ontologies. Currently, these APIs follow two main approaches: (i) RDF triples and (ii) object-oriented programming (OOP). With triple-based APIs, developers must manipulate ontologies as raw triples, which makes the development process more complicated; OOP APIs instead promote manipulation of ontologies as objects, which eases development. Although several APIs have been developed to manipulate ontologies at the object level, most have not been evaluated, especially with respect to quality attributes such as performance and scalability. Moreover, the large number and variability of APIs call for a generic approach to these issues, because building an evaluation system for each API is costly and allows no reuse of the solution or its modules. This work therefore presents an architecture-centered framework, named JOINT-E (Java Ontology Integration Toolkit-Evaluator), with which developers can evaluate APIs against a set of predefined performance and scalability metrics. The framework also enables analysis and comparison of the data with statistical support, increasing the credibility and reliability of the results. To validate the work, an experiment with three scenarios was created, testing and evaluating the main APIs used by developers (Alibaba and Jastor). A qualitative analysis is also presented, showing the statistical results and highlighting the advantages and disadvantages of each API; the results show that Jastor surpasses Alibaba on many measures. Finally, a survey of developers was conducted to validate the framework; it showed that the work meets a real developer demand.
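To give a flavour of the measurements such a framework automates, the sketch below times a repeated stand-in persistence operation and reports the mean and standard deviation, the kind of statistics used to compare APIs. Everything here is hypothetical Python for illustration; JOINT-E itself evaluates Java ontology APIs such as Alibaba and Jastor.

```python
# Illustrative micro-benchmark harness: repeat an operation, report stats.
import statistics
import time

def benchmark(operation, repetitions=30):
    """Run `operation` repeatedly and collect wall-clock latencies in ms."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        operation()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.mean(samples), statistics.stdev(samples)

def insert_individuals(n=1000):
    # Stand-in for an ontology write; a real run would call the API under
    # test to persist n individuals and measure that instead.
    store = {}
    for i in range(n):
        store[f"individual-{i}"] = {"type": "ex:Person", "id": i}

mean_ms, stdev_ms = benchmark(insert_individuals)
print(f"insert 1000 individuals: {mean_ms:.2f} ms +/- {stdev_ms:.2f} ms")
```

Repeating the run at growing workload sizes (1,000, 10,000, 100,000 individuals) turns the same harness into a scalability measurement, which is how performance and scalability metrics can share one evaluation pipeline.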
|
137 |
Miniaturisation extrême de mémoires STT-MRAM : couche de stockage à anisotropie de forme perpendiculaire / Ultimate scalability of STT-MRAM: storage layer with perpendicular shape anisotropy
Perrissin Fabert, Nicolas, 31 August 2018 (has links)
Most current STT-MRAM development focuses on out-of-plane magnetized magnetic tunnel junctions (MTJs) that exploit the perpendicular magnetic anisotropy (PMA) arising at magnetic metal/oxide interfaces. This interfacial anisotropy reconciles the large anisotropy required to ensure sufficient memory retention with a low spin-transfer-torque switching current density, thanks to weak spin-orbit coupling. However, interfacial PMA alone is too weak to ensure ten-year retention at up to 100°C in sub-20 nm devices: pushing the anisotropy higher forces the use of ultrathin CoFeB storage layers (below about 1.4 nm), which exhibit increased Gilbert damping and reduced tunnel magnetoresistance (TMR). For deeply sub-20 nm nodes, new materials with large bulk PMA and low damping still have to be found.
Furthermore, because this PMA is an interfacial effect, it is very sensitive to the structural and chemical properties of the magnetic metal/MgO interfaces, contributing to dot-to-dot variability; with conventional nanofabrication techniques, these interfaces can be damaged, notably during etching. To solve these problems in very small feature-size STT-MRAM, we propose a totally novel approach: MTJ stacks in which the storage-layer anisotropy is controlled solely by its out-of-plane shape anisotropy, i.e. by giving the storage layer a cylindrical shape with a large enough aspect ratio (thickness/diameter typically > 1). In such a structure, for purely magnetostatic reasons, the storage-layer magnetization lies out of plane. The geometry of conventional 2D thin layers is thus replaced by a 3D geometry. This innovative approach has several advantages: (i) it creates a strong and robust source of perpendicular anisotropy, much less sensitive to interfacial defects and thermal fluctuations; (ii) it allows the use of well-known materials with mastered growth and low magnetic damping, such as Permalloy in combination with FeCoB at the interface with the MgO tunnel barrier; and (iii) it yields extreme scalability of the memory cell, down to the sub-10 nm node, as the same materials can be used at very small nodes.
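As a back-of-the-envelope view of why a tall cylinder works, the standard macrospin estimates below relate the shape-anisotropy barrier to retention. They are textbook formulas under simplifying assumptions (uniform magnetization, coherent reversal), not the thesis's detailed model.

```latex
% Shape-anisotropy energy barrier of a uniformly magnetized cylinder
% (axis z, demagnetizing factors N_z + 2N_xy = 1): the easy axis turns
% out of plane once N_z < N_xy, which happens when the aspect ratio
% thickness/diameter approaches 1.
\[
  \Delta E \,=\, \tfrac{1}{2}\,\mu_0 M_s^2 \,(N_{xy} - N_z)\, V
\]
% Retention criterion: the thermal stability factor must exceed
% ln(t_ret / tau_0), about 40 per bit for ten years with tau_0 ~ 1 ns
% (and higher still once a whole gigabit array must retain its data).
\[
  \Delta \,=\, \frac{\Delta E}{k_B T} \,\ge\, \ln\!\frac{t_{\mathrm{ret}}}{\tau_0} \,\approx\, 40
\]
```

Because the barrier grows with the volume term rather than with an interface term, shrinking the diameter can be compensated by making the pillar taller, which is the scalability argument of the thesis.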
|
138 |
Unveiling the interplay between timeliness and scalability in cloud monitoring systems / Desvelando a relação mútua entre escalabilidade e oportunidade em sistemas de monitoramento de nuvens computacionais
Rodrigues, Guilherme da Cunha, January 2016 (has links)
Cloud computing is a suitable solution for professionals, companies, research centres, and institutions that need on-demand access to computational resources. Clouds must rely on proper management of their structure to provide such resources with adequate quality of service, as established by Service Level Agreements (SLAs), and cloud monitoring is a critical management function in achieving this. Cloud monitoring requirements are properties that a cloud monitoring system must meet to perform its functions properly, and several have been identified, such as timeliness, elasticity, and scalability.
However, such requirements usually influence one another, positively or negatively, and this mutual influence has prevented the development of complete cloud monitoring solutions. This thesis therefore investigates the mutual influence between timeliness and scalability and proposes a mathematical model to estimate it, with the aim of enhancing cloud monitoring systems. The methodology is based on monitoring parameters such as the monitoring topology, the amount of monitoring data, and the sampling frequency, and it considers network bandwidth and response time as key metrics. The evaluation compares the mathematical model's results with outcomes obtained via simulation. The main contributions are divided into two axes, basic and key. The basic contributions are: (i) a discussion of the cloud monitoring structure and the introduction of the concept of cloud monitoring focus; (ii) an examination of the concept of cloud monitoring requirement and a proposal to divide such properties into two groups, cloud monitoring requirements and cloud monitoring abilities; and (iii) an analysis of challenges and trends in cloud monitoring, pointing out research gaps that include the mutual influence between cloud monitoring requirements, which is at the core of the key contributions. The key contributions are: (i) a discussion of timeliness and scalability, including the methods currently used to cope with their mutual influence and the relation between these requirements and monitoring parameters; (ii) the identification of the monitoring parameters that are essential to the relation between timeliness and scalability; and (iii) a mathematical model, based on monitoring parameters, to estimate the mutual influence between timeliness and scalability.
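As a toy illustration of how these monitoring parameters interact, the sketch below estimates how many monitored nodes a single collector link can sustain at a given sampling frequency. The parameter values are assumptions, and the formula is far simpler than the thesis's model; it only shows the direction of the trade-off.

```python
# Aggregate monitoring traffic at a central collector: nodes * sample size
# * sampling frequency must fit within the collector's link capacity.
def max_nodes(link_capacity_bps, sample_bytes, freq_hz):
    """Largest node count the link sustains at a sampling frequency."""
    return int(link_capacity_bps // (sample_bytes * 8 * freq_hz))

link = 1_000_000_000        # 1 Gb/s collector link (assumed)
sample = 2_000              # 2 kB per monitoring sample (assumed)
for freq in (0.1, 1.0, 10.0):   # samples per second per node
    print(f"{freq:>4} Hz -> up to {max_nodes(link, sample, freq):,} nodes")
```

Raising the frequency improves timeliness but divides the number of nodes the same link supports, one face of the interplay the thesis models; changing the monitoring topology (e.g. a tree of aggregators) shifts the constraint rather than removes it.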
|
140 |
Transparency analysis of distributed file systems: With a focus on InterPlanetary File System
Wennergren, Oscar; Vidhall, Mattias; Sörensen, Jimmy, January 2018 (has links)
IPFS claims to be a replacement for HTTP and aims at global use. However, our study shows that in terms of scalability, performance, and security, IPFS is inadequate. This conclusion results from our experimental and qualitative study of the transparency of IPFS version 0.4.13. As a distributed file system, IPFS should fulfil all aspects of transparency, but according to our study this is not the case. From our small-scale analysis, we speculate that nested files are the main cause of the performance issues, and that replication amplifies these problems even further.
|