Enabling and Achieving Self-Management for Large Scale Distributed Systems: Platform and Design Methodology for Self-Management. Al-Shishtawy, Ahmad, January 2010.
Autonomic computing is a paradigm that aims at reducing administrative overhead by using autonomic managers to make applications self-managing. To deal better with large-scale dynamic environments, and to improve scalability, robustness, and performance, we advocate distributing management functions among several cooperative autonomic managers that coordinate their activities to achieve the management objectives. Programming autonomic management, in turn, requires programming-environment support and higher-level abstractions to become feasible.

In this thesis we present an introductory part and a number of papers that summarize our work in the area of autonomic computing. We focus on enabling and achieving self-management for large-scale and/or dynamic distributed applications. We start by presenting our platform, called Niche, for programming self-managing component-based distributed applications. Niche supports a network-transparent view of the system architecture, which simplifies the design of application self-* code, and provides a concise and expressive API for that code. The implementation of the framework relies on the scalability and robustness of structured overlay networks. We have also developed a distributed file storage service, called YASS, to illustrate and evaluate Niche.

After introducing Niche we present a methodology and design space for designing the management part of a distributed self-managing application in a distributed manner. We define design steps that include the partitioning of management functions and the orchestration of multiple autonomic managers. We illustrate the proposed design methodology by applying it to the design and development of an improved version of our distributed storage service YASS as a case study.

We continue by presenting a generic policy-based management framework which has been integrated into Niche. Policies are sets of rules that govern system behaviour and reflect business goals or system-management objectives. Policy-based management is introduced to simplify management and reduce overhead by letting such policies govern system behaviour. A prototype of the framework is presented, and two generic policy languages (policy engines and corresponding APIs), namely SPL and XACML, are evaluated using our self-managing file storage application YASS as a case study.

Finally, we present a generic approach to achieving robust services that is based on finite-state-machine replication with dynamic reconfiguration of replica sets. We contribute a decentralized algorithm that maintains the set of resources hosting service replicas in the presence of churn. We use this approach to implement robust management elements as robust services that can operate despite churn.
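To make the last idea concrete, the sketch below shows, in Python, one maintenance round for the replica set of a management element under churn. It is an illustration only, not the algorithm from the thesis; the node representation, the failure detector, and the helper names (alive, pick_fresh_node, REPLICATION_DEGREE) are all hypothetical.

    # Illustrative sketch (not Niche's actual algorithm): keep a replica set at a
    # target size by dropping churned members and recruiting replacements.
    import random

    REPLICATION_DEGREE = 3          # desired number of replicas per management element

    def alive(node):
        """Placeholder failure detector; a real system would use overlay heartbeats."""
        return node.get("up", True)

    def pick_fresh_node(overlay, exclude):
        """Placeholder resource discovery over the structured overlay."""
        candidates = [n for n in overlay if n["id"] not in exclude and alive(n)]
        return random.choice(candidates) if candidates else None

    def maintain_replica_set(replicas, overlay):
        """One maintenance round: drop failed replicas, recruit replacements."""
        replicas = [r for r in replicas if alive(r)]
        while len(replicas) < REPLICATION_DEGREE:
            fresh = pick_fresh_node(overlay, exclude={r["id"] for r in replicas})
            if fresh is None:
                break                        # not enough resources available right now
            # In a real system the new replica would receive the current
            # state-machine state before joining the replica set.
            replicas.append(fresh)
        return replicas

    overlay = [{"id": i, "up": i != 2} for i in range(6)]   # node 2 has churned away
    replicas = maintain_replica_set([overlay[0], overlay[1], overlay[2]], overlay)
    print([r["id"] for r in replicas])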
An Efficient Computation of Convex Closure on Abstract Events. Bedasse, Dwight Samuel, January 2005.
The behaviour of distributed applications can be modeled as the occurrence of events and how these events relate to each other. Event data collected according to this event model can be visualized using process-time diagrams constructed from a collection of traces and events. One of the main characteristics of a distributed system is the large number of events involved, especially in practical situations. This large number of events, and hence large process-time diagrams, makes distributed-system observation difficult for the user. However, event-predicate detection, a search mechanism able to detect and locate arbitrary predicates within a process-time diagram or event collection, can help the user make sense of this large amount of data. Ping Xie used the convex-abstract-event concept, developed by Thomas Kunz, to search for hierarchical event predicates. However, his algorithm for computing the convex closure used to construct compound events, and especially hierarchical compound events (i.e., compound events that contain other compound events), is inefficient. In one case it took, on average, close to four hours to search the collection of event data for a specific hierarchical event predicate; in another case, it took nearly one hour. This dissertation discusses an efficient algorithm, an extension of Ping Xie's, that employs a caching scheme to build compound and hierarchical compound events from matched sub-patterns. In both cases cited above, execution times were reduced by over 94%; the searches now take, on average, less than four minutes.
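The sketch below illustrates the two ingredients named above in Python: a convex closure over a happened-before order (adding every event that lies between two members of a set), and a cache so that closures of already-matched sub-patterns are reused by enclosing, hierarchical patterns. It is a simplified illustration under an assumed transitive precedence relation, not the dissertation's algorithm, and the toy event data is invented.

    from functools import lru_cache

    # Hypothetical toy precedence: happened_before[a] = events that come after a.
    happened_before = {"a": {"b", "c", "d"}, "b": {"d"}, "c": {"d"}, "d": set()}
    EVENTS = set(happened_before)

    def precedes(x, y):
        return y in happened_before[x]

    @lru_cache(maxsize=None)
    def convex_closure(events):
        """events is a frozenset; the result is cached for reuse by enclosing patterns."""
        closure = set(events)
        for e in EVENTS - closure:
            # keep e if it lies between two members of the original set
            if any(precedes(a, e) for a in events) and any(precedes(e, b) for b in events):
                closure.add(e)
        return frozenset(closure)

    # A hierarchical compound event reuses the cached closure of its sub-pattern.
    sub = convex_closure(frozenset({"a", "d"}))      # pulls in b and c
    outer = convex_closure(sub | {"d"})
    print(sorted(sub), sorted(outer))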
A method for the architectural design of distributed control systems for large, civil jet engines: a systems engineering approach. Bourne, Duncan, January 2011.
The design of distributed control systems (DCSs) for large, civil gas turbine engines is a complex architectural challenge. To date, the majority of research into DCSs has focused on the contributing technologies and high-temperature electronics rather than on the architecture of the system itself. This thesis proposes a method for the architectural design of distributed systems that uses a genetic algorithm to generate, evaluate and refine designs. The proposed designs are analysed for their architectural quality, lifecycle value and commercial benefit. The method is presented along with results proving the concept. Whilst the method described here is applied exclusively to DCSs for jet engines, the principles and methods could be adapted for a broad range of complex systems.
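The skeleton below sketches, in Python, the generate, evaluate and refine loop of such a genetic algorithm. The design encoding, the population parameters and especially the fitness function are hypothetical placeholders; the thesis's own measures of architectural quality, lifecycle value and commercial benefit are not reproduced here.

    import random

    N_NODES = 8            # hypothetical: candidate smart-node positions on the engine
    POP, GENS, MUT = 30, 50, 0.1

    def random_design():
        # one gene per node: which control functions are allocated to it (0 = none)
        return [random.randint(0, 3) for _ in range(N_NODES)]

    def fitness(design):
        """Placeholder for architectural quality / lifecycle value / commercial benefit."""
        return -sum(design)   # purely illustrative: prefer fewer allocated functions

    def crossover(a, b):
        cut = random.randrange(1, N_NODES)
        return a[:cut] + b[cut:]

    def mutate(d):
        return [random.randint(0, 3) if random.random() < MUT else g for g in d]

    population = [random_design() for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP // 2]                      # keep the better half
        children = [mutate(crossover(*random.sample(parents, 2))) for _ in parents]
        population = parents + children

    print("best design:", max(population, key=fitness))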
Computational analysis of CpG site DNA methylation. Ghorbani, Mohammadmersad, January 2013.
Epigenetics is the study of factors that can modify DNA and be passed to the next generation without any change to the DNA sequence. DNA methylation is one category of epigenetic change: the attachment of a methyl group (CH3) to DNA. Most of the time it occurs at sequences in which a cytosine is followed by a guanine, known as CpG sites, through the addition of a methyl group to the cytosine residue. As science and technology progress, new data become available about individuals' DNA methylation profiles under different conditions, and new features are discovered that may play a role in DNA methylation. The availability of new data on DNA methylation and other features of DNA presents a challenge to bioinformatics and an opportunity to discover new knowledge from existing data.

In this research, multiple data series were used to assign CpG sites to methylation classes: a) CpG sites that are never methylated; b) CpG sites that are always methylated; c) CpG sites methylated in cancer/disease samples and unmethylated in normal samples; and d) CpG sites methylated in normal samples and unmethylated in cancer/disease samples. After identification of these sites and their classes, an analysis was carried out to find the features that best classify them. A matrix of features was generated using four applications from the EMBOSS software suite. The feature matrix was also generated using the gUse/WS-PGRADE portal workflow system; to do this, each of the four applications was grid-enabled and ported to the BOINC platform, and the gUse portal was connected to the BOINC project via the 3G Bridge. Each node in the workflow created a portion of the matrix, and these portions were then combined into the final matrix.

This final feature matrix was used in a hill-climbing workflow, whose hill-climbing node was a Java program ported to the BOINC platform. The hill-climbing search looked for a subset of features that classify the CpG sites better, using five different measurements and three classification methods: support vector machine, naïve Bayes and the J48 decision tree. Using this approach, the hill-climbing search found models that contain fewer than half the features yet give better classification results. It has also been demonstrated that the gUse/WS-PGRADE workflow system provides a modular way of generating features, so that a new feature-generator application can be added without changing other parts, and that grid-enabled applications can speed up both feature generation and feature-subset selection. The approach used in this research for distributed, workflow-based feature generation is not restricted to this study and can be applied in other studies that involve feature generation; it does, however, require multiple binaries to generate portions of the features. The grid-enabled hill-climbing search application can likewise be used in other contexts, as it only requires the same feature-matrix format.
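A much-simplified, sequential Python sketch of hill-climbing feature-subset selection is shown below. In the thesis this search ran as a grid-enabled Java/BOINC workflow and scored subsets with SVM, naïve Bayes and J48; here evaluate() is a purely illustrative stand-in for such a classifier's cross-validated accuracy, and the feature indices are made up.

    import random

    N_FEATURES = 20
    random.seed(0)

    def evaluate(subset):
        """Placeholder for cross-validated classification accuracy on CpG-site data."""
        if not subset:
            return 0.0
        # purely illustrative scoring: pretend even-indexed features are informative
        return sum(1.0 for f in subset if f % 2 == 0) / (1 + 0.1 * len(subset))

    def hill_climb(max_iters=200):
        current = set(random.sample(range(N_FEATURES), 5))
        best = evaluate(current)
        for _ in range(max_iters):
            neighbour = set(current)
            f = random.randrange(N_FEATURES)
            neighbour.symmetric_difference_update({f})     # flip one feature in or out
            score = evaluate(neighbour)
            if score > best:                               # accept only improvements
                current, best = neighbour, score
        return current, best

    subset, score = hill_climb()
    print(f"selected {len(subset)} features, score {score:.3f}")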
Monitoring of SLD DNS servers. Šťastný, Petr, January 2011.
This thesis directly follows on from the author's bachelor thesis. It covers the necessary theory of HTTP, SMTP and some other protocols and services. This knowledge is then used to derive a methodology for building additional tests that verify the availability and functionality of the basic Internet services of a domain name. The methodology is implemented as an application that uses distributed processing to analyse a large number of domains, and the results obtained are compiled into statistical outputs. One chapter is also devoted to an overview of attacks on DNS and of the security options for DNS servers and domain records.
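The snippet below sketches the flavour of such availability tests in Python: probing a domain's basic services (web and mail) with TCP connections, fanned out over a thread pool. It is only an illustration; the thesis application performed full protocol-level checks and distributed the work across many workers, and the domain names here are placeholders.

    import socket
    from concurrent.futures import ThreadPoolExecutor

    SERVICES = {"http": 80, "https": 443, "smtp": 25}

    def probe(domain, timeout=5.0):
        """Reachability-only check; a real test would speak HTTP/SMTP after connecting."""
        results = {}
        for name, port in SERVICES.items():
            try:
                with socket.create_connection((domain, port), timeout=timeout):
                    results[name] = "open"
            except OSError:
                results[name] = "unreachable"
        return domain, results

    domains = ["example.com", "example.org"]            # stand-ins for SLD lists
    with ThreadPoolExecutor(max_workers=8) as pool:
        for domain, results in pool.map(probe, domains):
            print(domain, results)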
Composing and connecting devices in animal telemetry network. Krishna, Ashwin, January 1900.
Master of Science, Department of Computing and Information Sciences, Venkatesh P. Ranganath

As the Internet of Things (IoT) continues to grow, the need for services that span multiple application domains will keep increasing in order to realise the numerous possibilities enabled by IoT. Today, however, heterogeneity among devices leads to interoperability issues when building a system of systems and often gives rise to closed ecosystems. These interoperability issues are driven by the inability of devices and apps from different vendors to communicate with each other, which forces users to stick to one particular vendor, leading to vendor lock-in. To achieve interoperability, users have to do the heavy lifting, at times impossible, of connecting heterogeneous devices themselves.
As we slowly move towards systems of systems and the IoT, there is a real need to support heterogeneity and interoperability. A recent effort in the Santos Lab developed the Medical Device Coordination Framework (MDCF), a step towards addressing these issues in the space of human medical systems. This raised the question of whether a similar solution could be employed in animal science.
In this effort, borrowing observations from MDCF and knowledge from on-field experience, we have created a demonstration showcasing how a combination of precise component descriptions (via a DSL) and communication patterns can be used in software development and deployment to overcome barriers caused by heterogeneity and interoperability, and to enable an open ecosystem of apps and devices in the space of animal telemetry.
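The fragment below gives a hypothetical flavour of how such component descriptions support composition: a device advertises the channels it publishes, an app declares what it requires, and a simple check decides whether the two can be wired together. The field names and values are invented for illustration; they are not the project's (or MDCF's) actual DSL.

    # Hypothetical device and app descriptors for an animal-telemetry ecosystem.
    GPS_COLLAR = {
        "device": "gps-collar-x1",
        "publishes": [{"channel": "location", "type": "GeoFix", "rate_hz": 1}],
    }

    HERD_MONITOR_APP = {
        "app": "herd-monitor",
        "requires": [{"channel": "location", "type": "GeoFix", "min_rate_hz": 0.5}],
    }

    def compatible(device, app):
        """True if every channel the app requires is published by the device."""
        for need in app["requires"]:
            match = any(
                pub["channel"] == need["channel"]
                and pub["type"] == need["type"]
                and pub["rate_hz"] >= need["min_rate_hz"]
                for pub in device["publishes"]
            )
            if not match:
                return False
        return True

    print(compatible(GPS_COLLAR, HERD_MONITOR_APP))   # True: the two can be composed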
Scalable Methodology for Performance-based Selection of Security Services for Distributed Systems. Kraus, Petr, 01 January 2011.
Distributed systems are shared by a large number of users that generate task-based workloads. The sharing of hardware and software by multiple workloads mandates security mechanisms that protect the artifacts of individual tasks. Additionally, these systems must meet user-based performance expectations, a factor that must be addressed during the security-service selection process. Current performance-based security-service selection methodologies use flat GSPN (Generalized Stochastic Petri Net) models that suffer from exponential evaluation complexity as the model size increases. Due to this limitation, these methodologies cannot evaluate models representing the scale of current distributed systems.
To address the evaluation-complexity problem, the hierarchical methodology presented in this report was designed to avoid the system-size limitations of current flat GSPN-based methodologies. The methodology relies only on general performance models capable of modeling platform-independent system designs, and uses a divide-and-conquer approach to evaluate the entire system model. Using model-refactoring techniques, the input model is transformed into a hierarchy of subsystem models, with abstraction used to isolate performance measurement at the component level. This further increases the effectiveness of the performance evaluation by avoiding duplicate evaluation of identical components; as a result, increasing the number of alternative security-service components leads to only linear growth in the complexity of evaluating the whole system model. The limiting factor of the hierarchical methodology is therefore the size of the largest component rather than the size of the system as a whole.
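The toy below illustrates, in Python, the caching idea behind that claim: each distinct component type is evaluated once, the cached results are composed into a system-level estimate, and adding more instances of the same security services adds no solver work. The additive composition and the evaluate_component() stub are stand-ins; they do not solve a GSPN.

    from functools import lru_cache
    import time

    @lru_cache(maxsize=None)
    def evaluate_component(component_type):
        """Placeholder for an expensive component-level GSPN evaluation."""
        time.sleep(0.01)                      # simulate solver cost
        return {"ipsec": 4.0, "tls": 2.5, "none": 1.0}[component_type]   # ms per task

    def evaluate_system(components):
        """Compose cached component results into a system estimate (trivially additive here)."""
        return sum(evaluate_component(c) for c in components)

    # 1000 components but only 3 distinct types: only 3 expensive evaluations happen.
    system = ["tls", "ipsec", "none"] * 333 + ["tls"]
    print("estimated per-task overhead:", evaluate_system(system))
    print("distinct evaluations:", evaluate_component.cache_info().misses)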
The experimental results show that the hierarchical model-based methodology scales beyond the system-model sizes that can be evaluated using current flat GSPN-based performance-evaluation methodologies. This scalability improvement implies that the hierarchical technique can evaluate models containing up to 50 individual components using the current GSPN tools, and the contribution of the technique will continue to grow with subsequent advances in GSPN model-evaluation techniques.
The GNU Taler system: practical and provably secure electronic payments. Dold, Florian, 25 February 2019.
We describe the design and implementation of GNU Taler, an electronic payment system based on an extension of Chaumian online e-cash with efficient change. In addition to anonymity for customers, it provides the novel notion of income transparency, which guarantees that merchants can reliably receive a payment from an untrusted payer only when their income from the payment is visible to the tax authorities. Income transparency is achieved by the introduction of a refresh protocol, which gives anonymous change for a partially spent coin without introducing a tax-evasion loophole.
In addition to income transparency, the refresh protocol can be used to implement Camenisch-style atomic swaps and to preserve anonymity in the presence of protocol aborts and crash faults with data loss by participants. Furthermore, we show the provable security of our income-transparent anonymous e-cash, which, in addition to the usual anonymity and unforgeability properties of e-cash, also formally models conservation of funds and income transparency. Our implementation of GNU Taler is usable by non-expert users and integrates with the modern Web architecture. The payment platform addresses a range of practical issues, such as tipping customers, providing refunds, integrating with banks and know-your-customer (KYC) checks, as well as Web-platform security and reliability requirements. On a single machine, we achieve transaction rates that rival those of global, commercial credit card processors. We increase the robustness of the exchange (the component that keeps bank money in escrow in exchange for e-cash) by adding an auditor component, which verifies the correct operation of the system and makes it possible to detect a compromise or misbehavior of the exchange early.

Just as bank accounts have reasons to exist besides bank notes, e-cash only serves as part of a whole payment-system stack. Distributed ledgers have recently gained immense popularity as a potential replacement for parts of the traditional financial industry. While proof-of-work cryptocurrencies such as Bitcoin have yet to scale well enough to replace established payment systems, other, more efficient blockchain systems with more classical consensus algorithms might still have promising applications in the financial industry. We design, implement and analyze the performance of Byzantine Set Union Consensus (BSC), a Byzantine consensus protocol that agrees on a (super-)set of elements at once, instead of sequentially agreeing on the individual elements of a set. While BSC is interesting in itself, it can also be used as a building block for permissioned blockchains in which, just as in Nakamoto-style consensus, whole blocks of transactions are agreed upon at once, increasing the transaction rate.
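As background for the e-cash side of the abstract, the toy below shows the RSA blinding step on which Chaum-style coins, and hence Taler's coin signatures, are built. It illustrates the blinding arithmetic only, with insecure textbook-sized parameters; it is not the Taler withdraw or refresh protocol.

    # Toy Chaum-style RSA blind signature (illustration only; requires Python 3.8+).
    p, q, e = 61, 53, 17
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)                    # signer's private exponent

    m = 65                                 # in reality, a hash of the coin's public key
    r = 7                                  # customer's secret blinding factor, gcd(r, n) = 1

    blinded = (m * pow(r, e, n)) % n       # customer blinds the message
    blind_sig = pow(blinded, d, n)         # exchange signs without learning m
    sig = (blind_sig * pow(r, -1, n)) % n  # customer unblinds the signature

    assert sig == pow(m, d, n)             # identical to a direct signature on m
    assert pow(sig, e, n) == m             # anyone can verify the coin
    print("valid coin signature:", sig)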
A scalable communication architecture for immersive visualization systems. Belloc, Olavo da Rosa, 21 November 2016.
The complexity of immersive visualization systems can vary tremendously depending on their application. Some simple tools might only require a conventional virtual-reality headset as the visualization infrastructure. However, more complex applications, such as simulators and other training tools, might require a distributed infrastructure containing several computers and screens. Some training applications and simulators invariably make use of physical interaction peripherals, designed to faithfully reproduce the elements found in real scenarios. Furthermore, the training area may be shared by two or more users.
These requirements usually impose the use of complex and distributed imaging systems, intended to cover almost the entire field of view of the users involved. Because of the characteristics of this type of system, the applications developed for these infrastructures are inherently complex: they must take specific aspects of the infrastructure itself into account to carry out the distribution and synchronization of the virtual scene. This complexity hampers the development, maintenance and interoperability of these tools. This work presents a communication architecture that promotes the use of immersive systems by allowing applications to use complex and distributed infrastructures in a simple and transparent way. The proposed architecture replaces the OpenGL driver to achieve graphics distribution transparently. Although this idea has already been discussed in the literature, this document presents a set of techniques to overcome the inherent limitations of the approach and ultimately achieve significant performance gains, with consistent results across a broad range of infrastructures. The techniques presented here include the use of modern features of the OpenGL standard to reduce the communication overhead between CPU and GPU. One of the features evaluated was indirect rendering, in which the application stores the rendering commands in the graphics card's dedicated memory. Along with this feature, the work also investigated the use of a culling algorithm on the GPU itself, which allowed this optimization to be used even on systems with more complex screen layouts. The results show that the application can render its content on a wide range of immersive systems, with higher resolution and more visible geometry, without degrading its performance. The tests were conducted on different infrastructures and with scenes of varying size. In the most complex cases, the proposed techniques reduce the average rendering time by up to 86% compared with the traditional approaches.
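The sketch below illustrates, in plain Python with no real OpenGL, the driver-interception idea described above: calls issued by an unmodified application are captured by a stand-in driver object, serialized once per frame, and broadcast to the render nodes driving each screen. All class and function names are hypothetical, and the networking is reduced to appending to in-memory lists.

    import json

    class InterceptedGL:
        """Records every 'GL' call instead of executing it locally."""
        def __init__(self):
            self.frame = []

        def __getattr__(self, name):                     # e.g. gl.glDrawArrays(...)
            def call(*args):
                self.frame.append({"cmd": name, "args": args})
            return call

        def end_frame(self):
            packet, self.frame = json.dumps(self.frame), []
            return packet                                # one network packet per frame

    def broadcast(packet, render_nodes):
        """Stand-in for sending the frame's command stream to every display node."""
        for node in render_nodes:
            node.append(packet)                          # a real system would use sockets

    gl = InterceptedGL()
    gl.glClear("COLOR_BUFFER_BIT")
    gl.glDrawArrays("TRIANGLES", 0, 36)
    nodes = [[], [], []]                                 # three hypothetical tile renderers
    broadcast(gl.end_frame(), nodes)
    print(len(nodes[0]), "packet(s) queued for node 0")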
The use of CORBA architecture in the construction of distributed virtual environments. Sementille, Antonio Carlos, 17 September 1999.
Virtual Reality applications immerse the user in a simulated environment, called a Virtual Environment. Simulating Virtual Environments is an intensive process, which can be drastically limited if restricted to a single computer. It is possible, through distribution, to enlarge the size and reach of these systems, allowing multiple users to interact among themselves and with the environment. Such virtual environments are known as Distributed Virtual Environments. Their construction is a complex task, especially when aspects such as communication structuring at the process level, scalability, interoperability and reuse of components are taken into consideration. These aspects are also emphasized by distributed-object technology, whose foremost representative is currently CORBA (the Common Object Request Broker Architecture). In this context, this thesis presents a study of, and a methodology for, the construction of Distributed Virtual Environments that use the CORBA architecture as a high-level infrastructure for communication and synchronization among their objects. The thesis also presents the theoretical and practical elements of this approach through the implementation of three prototypes, which formed a comparative basis for studying the viability of the ideas used.
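The schematic Python sketch below conveys the object-level view such an approach relies on: a shared virtual entity exposes operations that remote participants invoke through a location-transparent proxy. In the thesis this interface would be declared in CORBA IDL and invoked through an ORB; the classes here are hypothetical stand-ins with no real middleware behind them.

    class SharedEntity:
        """Server-side object; CORBA would generate the skeleton from an IDL interface."""
        def __init__(self, name):
            self.name = name
            self.position = (0.0, 0.0, 0.0)

        def set_position(self, x, y, z):          # the operation remote peers invoke
            self.position = (x, y, z)

    class EntityProxy:
        """Client-side stub; CORBA would marshal the call and ship it over IIOP."""
        def __init__(self, remote_entity):
            self._remote = remote_entity          # stand-in for a remote object reference

        def set_position(self, x, y, z):
            self._remote.set_position(x, y, z)    # location-transparent invocation

    avatar = SharedEntity("avatar-1")
    proxy = EntityProxy(avatar)                   # another participant's view of the avatar
    proxy.set_position(1.0, 2.0, 0.5)
    print(avatar.name, avatar.position)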