111 |
Implementation of a Prototype for Body-Coupled Communication Using Embedded Electronics : Implementation of a distributed system of sensors and actuators using BodyCom development board. Maleev, Andrey, January 2018 (has links)
A wireless body network with sensors and actuators is a topical subject at present, because healthcare services cannot meet people's requirements for personal healthcare. Such a network can be used to monitor the health status of, e.g., elderly people and provide drug delivery without external human interaction. In this project we implement a prototype of a distributed system of sensors and actuators that uses the human body as a transmission line for communication (capacitive body-coupled communication) as a solution to this problem. Similar systems have been implemented earlier using radio-based wireless communication, which consumes more power and has critical security issues compared to capacitive body-coupled communication. This document describes how the system is implemented, with a focus on robust gathering of sensor data from several sensors on a single node using capacitive body-coupled communication, and on actuator control with user interaction.
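The sensor-gathering step described above can be pictured as a simple poll-and-check loop run by the gathering node over the shared body-coupled channel. The Python sketch below is only an illustration under assumed framing (opcode, sensor addresses, checksum); it is not the BodyCom board's actual protocol.

import struct

SENSOR_IDS = [0x01, 0x02, 0x03]                  # assumed addresses of the sensor nodes

def checksum(payload: bytes) -> int:
    return sum(payload) & 0xFF                    # single-byte additive checksum (assumption)

def build_poll(sensor_id: int) -> bytes:
    body = struct.pack("BB", 0xA0, sensor_id)     # 0xA0 = assumed "poll" opcode
    return body + bytes([checksum(body)])

def parse_reply(frame: bytes):
    # Expected reply (assumption): sensor id (1 byte), signed 16-bit value, checksum (1 byte).
    if len(frame) != 4 or checksum(frame[:-1]) != frame[-1]:
        return None                               # corrupted or truncated frame: drop it
    (value,) = struct.unpack(">h", frame[1:3])
    return frame[0], value

def gather(channel):
    readings = {}
    for sid in SENSOR_IDS:
        channel.send(build_poll(sid))             # channel: driver for the body-coupled link
        frame = channel.receive(timeout=0.05)     # assumed blocking read with a timeout
        reply = parse_reply(frame) if frame else None
        if reply:
            readings[reply[0]] = reply[1]
    return readings

Polling each sensor in turn keeps the single shared channel free of collisions, which is one simple way to make the gathering robust.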
|
112 |
Distributed Implementations of Component-based Systems with Prioritized Multiparty Interactions : Application to the BIP Framework. / Implantations distribuées de modèles à base de composants communicants par interactions multiparties avec priorités : application au langage BIP. Quilbeuf, Jean, 16 September 2013 (has links)
Les nouveaux systèmes ont souvent recours à une implémentation distribuée du logiciel, pour des raisons d'efficacité et à cause de l'emplacement physique de certains capteurs et actuateurs. S'assurer de la correction d'un logiciel distribué est difficile car cela impose de considérer tous les enchevêtrements possibles des actions exécutées par des processus distincts. Cette thèse propose une méthode pour générer, à partir d'un modèle d'application haut niveau, une implémentation distribuée correcte et efficace. Le modèle de l'application comporte des composants communiquant au moyen d'interactions multiparties avec priorités. L'exécution d'une interaction multipartie, qui correspond à un pas de la sémantique, change de façon atomique l'état de tous les composants participant à l'interaction. On définit une implantation distribuée comme un ensemble de processus communiquant par envoi de message asynchrone. La principale difficulté est de produire une implémentation correcte et efficace des interactions multiparties avec priorités, en utilisant uniquement l'envoi de message comme primitive. La méthode se fonde sur un flot de conception rigoureux qui raffine progressivement le modèle haut niveau en un modèle bas niveau, à partir duquel le code pour une plateforme particulière est généré. Tous les modèles intermédiaires apparaissant dans le flot sont exprimés avec la même sémantique que le modèle original. À chaque étape du flot, les interactions complexes sont remplacées par des constructions utilisant des interactions plus simples. En particulier, le dernier modèle obtenu avant la génération du code ne contient que des interactions modélisant l'envoi de message. La correction de l'implémentation est obtenue par construction. L'utilisation des interactions multiparties comme primitives dans le modèle de l'application permet de réduire très significativement l'ensemble des états atteignables, par rapport à un modèle équivalent mais utilisant des primitives de communication plus simples. Les propriétés essentielles du système sont vérifiées à ce niveau d'abstraction. Chaque transformation constituante du flot de conception est suffisamment simple pour être complètement formalisée et prouvée, en termes d'équivalence observationnelle ou d'équivalence de trace entre les modèles avant et après transformation. L'implémentation ainsi obtenue est correcte par rapport au modèle original, ce qui évite une coûteuse vérification a posteriori. Concernant l'efficacité, la performance de l'implémentation peut être optimisée en choisissant les paramètres adéquats pour les transformations, ou en augmentant la connaissance des composants. Cette dernière solution requiert une analyse du modèle de départ afin de calculer la connaissance qui est réutilisée pour les étapes ultérieures du flot de conception. Les différentes transformations et optimisations constituant le flot de conception ont été implémentées dans le cadre de BIP. Cette implémentation a permis d'évaluer les différentes possibilités ainsi que l'influence des différents paramètres sur la performance de l'implémentation obtenue avec plusieurs exemples. Le code généré utilise les primitives fournies par les sockets POSIX, MPI ou les pthreads pour envoyer des messages entre les processus. / Distributed software is often required for new systems, because of efficiency and the physical distribution of sensors and actuators. Ensuring the correctness of a distributed implementation is hard due to the interleaving of actions belonging to distinct processes.
This thesis proposes a method for generating a correct and efficient distributed implementation from a high-level model of an application. The input model is described as a set of components communicating through prioritized multiparty interactions. Such primitives change the state of all components involved in an interaction during a single atomic execution step. We assume that a distributed implementation is a set of processes communicating through asynchronous message-passing. The main challenge is to produce a correct and efficient distributed implementation of prioritized multiparty interactions, relying only on message-passing. The method relies on a rigorous design flow refining the high-level model of the application into a low-level model, from which code for a given platform is generated. All intermediate models appearing in the flow are expressed using the same semantics as the input model. Complex interactions are replaced with constructs using simpler interactions at each step of the design flow. In particular, the last model obtained before code generation contains only interactions modeling asynchronous message passing. The correctness of the implementation is obtained by construction. Using multiparty interactions drastically reduces the set of reachable states, compared to an equivalent model expressed with lower-level primitives. Essential properties of the system are checked at this abstraction level. Each transformation of the design flow is simple enough to be fully formalized and proved by showing observational equivalence or trace equivalence between the input and output models. The obtained implementation is correct with respect to the original model, which avoids an expensive a posteriori verification. Performance can be optimized through an adequate choice of the transformation parameters, or by augmenting the knowledge of components. The latter solution requires analyzing the original model to compute the knowledge, which is reused at subsequent steps of the decentralization. The various transformations and optimizations constituting the design flow have been implemented using the BIP framework. The implementation has been used to evaluate the different possibilities, as well as the influence of the parameters of the design flow, on several examples. The generated code uses either Unix sockets, MPI or pthreads primitives for communication between processes.
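To give a concrete, if simplified, picture of what implementing a multiparty interaction on top of message-passing alone involves, the Python sketch below shows a toy coordinator: components post asynchronous offers, and an interaction fires once every participant is ready. The component names and interactions are assumptions for illustration; the actual BIP transformations additionally handle conflicts and priorities.

import queue

INTERACTIONS = {"sync_ab": {"A", "B"}, "sync_bc": {"B", "C"}}     # assumed toy model

def coordinator(offer_queue, notify_queues, interactions=INTERACTIONS):
    """Consume asynchronous offers and notify every participant of a chosen interaction."""
    ready = {}                                    # component name -> set of offered interactions
    while True:
        component, offers = offer_queue.get()     # e.g. ("B", {"sync_ab", "sync_bc"})
        if component is None:                     # sentinel used to stop the loop
            return
        ready[component] = set(offers)
        for name, participants in interactions.items():
            if all(name in ready.get(p, set()) for p in participants):
                for p in participants:            # the whole interaction fires as one step
                    notify_queues[p].put(name)
                    ready.pop(p, None)
                break

offers = queue.Queue()
notifications = {c: queue.Queue() for c in "ABC"}
offers.put(("A", {"sync_ab"}))
offers.put(("B", {"sync_ab", "sync_bc"}))
offers.put((None, None))
coordinator(offers, notifications)                # both A and B are notified of "sync_ab"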
|
113 |
Modelo de virtualização distribuída aplicado ao gerenciamento e replicação de cluster multiuso / Aguiar, César de Souza. January 2008 (has links)
Orientador: Roberto Spolon Ulson / Banca: João Ângelo Martini / Banca: Paulo Sérgio da Silva / Resumo: Este trabalho apresenta um modelo de boot remoto para computadores commodity utilizando máquinas virtuais e sistemas de arquivos distribuídos e paralelos. O modelo proposto pode substituir o boot local com disco rígido por um boot através da rede de comunicação, aumentando assim a flexibilidade e manutenibilidade do parque de máquinas, além de permitir que dezenas de sistemas operacionais distintos sejam inicializados sem a necessidade de um disco rígido nos clientes, reduzindo dessa forma o custo em hardware e diminuindo a complexidade de instalação e manutenção de software, implantando um único ponto centralizado de gerenciamento. O projeto analisa maneiras de otimizar a transmissão de blocos de dados com técnicas de localidade de dados, sistemas de arquivos distribuídos e balanceamento de carga para implementar um ambiente robusto e de virtualização distribuída. O modelo também auxilia implementações de clusters multiuso e LAN grids para computadores commodity, provendo ferramentas para aproveitar recursos computacionais ociosos em conjuntos de computadores conectados. Neste estudo foram analisados diferentes modelos de sistemas de arquivos distribuídos, detalhando suas principais características e utilizações, e foram realizados experimentos com a virtualização distribuída juntamente com balanceamento de carga. A implantação de um sistema de arquivos híbrido através da integração de PVFS2 com pNFS trouxe melhorias de até 16% na velocidade de operações de leitura e permitiu maior escalabilidade da solução, assim como o gerenciamento de cache que permitiu a melhora de até 37% na velocidade de boot do middleware. Os resultados obtidos também viabilizaram o uso da solução para um grande número de computadores e possibilitaram o boot escalável de imagens virtuais remotamente. / Abstract: This work presents a remote boot model for commodity computers using virtual machines and distributed and parallel file systems. The proposed model can replace the local hard-disk boot with a boot over the communication network, thereby increasing the flexibility and maintainability of the group of machines and allowing dozens of different operating systems to be initialized without the need for a hard disk on the clients, thus reducing the hardware cost and the complexity of software installation and maintenance by establishing a single centralized point of management. The project examines ways to optimize data block transmission with techniques of data locality, distributed file systems and load balancing to implement a robust environment for distributed virtualization. The model also supports implementations of multiuse clusters and LAN grids for commodity computers, providing tools to take advantage of idle computing resources in connected computers. In this study, different models of distributed file systems were analyzed, detailing their main characteristics and uses, and experiments were conducted with distributed virtualization along with load balancing, which showed improvements in the overall performance of the system. The deployment of a hybrid file system integrating PVFS2 with pNFS brought improvements of up to 16% in the speed of read operations and allowed greater scalability of the solution, and the cache management allowed an improvement of up to 37% in the boot speed of the middleware.
The results also made possible the use of the solution for a large number of computers and allowed a scalable boot of virtual images remotely. / Mestre
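As a rough illustration of the kind of cache management that sped up the middleware boot here, the sketch below places a small read-through LRU block cache in front of a remote read function (e.g. one backed by PVFS2/pNFS). The block size, capacity and fetch callback are assumptions, not the actual middleware code.

from collections import OrderedDict

BLOCK_SIZE = 64 * 1024                            # assumed block granularity

class BlockCache:
    def __init__(self, fetch_block, capacity=1024):
        self.fetch_block = fetch_block            # callable: block index -> bytes (remote read)
        self.capacity = capacity
        self.cache = OrderedDict()                # block index -> bytes, kept in LRU order

    def read(self, offset: int, length: int) -> bytes:
        data = b""
        first, last = offset // BLOCK_SIZE, (offset + length - 1) // BLOCK_SIZE
        for idx in range(first, last + 1):
            if idx in self.cache:
                self.cache.move_to_end(idx)       # hit: refresh LRU position
            else:
                self.cache[idx] = self.fetch_block(idx)    # miss: fetch from the distributed FS
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)         # evict the least recently used block
            data += self.cache[idx]
        start = offset - first * BLOCK_SIZE
        return data[start:start + length]

Blocks that many clients request during boot (kernel, common libraries) stay hot in the cache, so repeated boots avoid most remote reads.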
|
114 |
Arthron: uma ferramenta para gerenciamento e transmissão de mídia em performances artístico-tecnológicas. Melo, Erick Augusto Gomes de, 05 November 2010 (has links)
Previous issue date: 2010-11-05 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Developing and implementing a shared, distributed media spectacle requires dedicated staff and good planning so that errors can be minimized. Besides the necessary technological apparatus, it is imperative that everyone in the team be in complete synergy, and the artists have to worry about one more important point, not regarding the artistic aspects but regarding the unavoidable technical aspects related to delays caused by the network during the transmission of live media streams. It is in this context of entanglement between Art and Technology that it became necessary to develop a tool able to support, in a systematic way, the performance of such shows. This software tool, named Arthron, offers the user a simple interface for simultaneously manipulating different sources and pre-recorded or live media streams. Thus, users can remotely add, remove, set the presentation format and adjust the exhibition scheduling in time (when to present?) and space (where to present?) of
media streams in a technological-artistic performance. / Para elaboração e execução de um espetáculo midiático compartilhado e distribuído
necessita-se de pessoal especializado e de um bom planejamento para que os erros
possam ser minimizados. Além do aparato tecnológico necessário, é imprescindível
que toda equipe esteja em plena sinergia e os artistas tenham que se preocupar com
mais um ponto importante, não no tocante a parte artística, mas que influencia
bastante nesse tipo de espetáculo, que são os inevitáveis atrasos causados pela rede
durante a transmissão dos fluxos de mídia ao vivo. É nesse contexto de
entrelaçamento entre a Arte e a Tecnologia que surgiu a necessidade do
desenvolvimento de uma ferramenta que apoiasse de forma sistemática a realização
de espetáculos desse tipo. A principal funcionalidade dessa ferramenta em software,
chamada Arthron, é oferecer ao usuário uma interface simples para manipulação de
diferentes fontes e fluxos de mídia simultâneos pré-gravados ou ao vivo. Dessa
forma, o usuário pode, remotamente, adicionar, remover, configurar o formato de
apresentação e programar exibição no tempo (quando apresentar?) e no espaço
(onde apresentar?) dos fluxos de mídia em um espetáculo artístico-tecnológico.
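The scheduling in time and space that Arthron exposes can be illustrated with a tiny model: each schedule entry says which stream goes to which output during which interval. The Python sketch below uses made-up stream and output names and is not Arthron's actual data model.

from dataclasses import dataclass

@dataclass
class ScheduleEntry:
    stream: str        # a live or pre-recorded source
    output: str        # where to present, e.g. "main-screen"
    start: float       # when to present, seconds from the start of the performance
    end: float

def active_streams(schedule, t):
    """Return {output: stream} for every entry active at time t."""
    return {e.output: e.stream for e in schedule if e.start <= t < e.end}

schedule = [
    ScheduleEntry("camera-1-live", "main-screen", 0, 120),
    ScheduleEntry("prerecorded-intro", "side-projector", 0, 30),
    ScheduleEntry("camera-2-live", "side-projector", 30, 120),
]
print(active_streams(schedule, 45))   # {'main-screen': 'camera-1-live', 'side-projector': 'camera-2-live'}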
|
115 |
Modelo de virtualização distribuída aplicado ao gerenciamento e replicação de cluster multiuso. Aguiar, César de Souza [UNESP], 09 May 2008 (has links) (PDF)
Este trabalho apresenta um modelo de boot remoto para computadores commodity utilizando máquinas virtuais e sistemas de arquivos distribuídos e paralelos. O modelo proposto pode substituir o boot local com disco rígido por um boot através da rede de comunicação, aumentando assim a flexibilidade e manutenibilidade do parque de máquinas, além de permitir que dezenas de sistemas operacionais distintos sejam inicializados sem a necessidade de um disco rígido nos clientes, reduzindo dessa forma o custo em hardware e diminuindo a complexidade de instalação e manutenção de software, implantando um único ponto centralizado de gerenciamento. O projeto analisa maneiras de otimizar a transmissão de blocos de dados com técnicas de localidade de dados, sistemas de arquivos distribuídos e balanceamento de carga para implementar um ambiente robusto e de virtualização distribuída. O modelo também auxilia implementações de clusters multiuso e LAN grids para computadores commodity, provendo ferramentas para aproveitar recursos computacionais ociosos em conjuntos de computadores conectados. Neste estudo foram analisados diferentes modelos de sistemas de arquivos distribuídos, detalhando suas principais características e utilizações, e foram realizados experimentos com a virtualização distribuída juntamente com balanceamento de carga. A implantação de um sistema de arquivos híbrido através da integração de PVFS2 com pNFS trouxe melhorias de até 16% na velocidade de operações de leitura e permitiu maior escalabilidade da solução, assim como o gerenciamento de cache que permitiu a melhora de até 37% na velocidade de boot do middleware. Os resultados obtidos também viabilizaram o uso da solução para um grande número de computadores e possibilitaram o boot escalável de imagens virtuais remotamente. / This work presents a remote boot model for commodity computers using virtual machines and distributed and parallel file systems. The proposed model can replace the local hard-disk boot with a boot over the communication network, thereby increasing the flexibility and maintainability of the group of machines and allowing dozens of different operating systems to be initialized without the need for a hard disk on the clients, thus reducing the hardware cost and the complexity of software installation and maintenance by establishing a single centralized point of management. The project examines ways to optimize data block transmission with techniques of data locality, distributed file systems and load balancing to implement a robust environment for distributed virtualization. The model also supports implementations of multiuse clusters and LAN grids for commodity computers, providing tools to take advantage of idle computing resources in connected computers. In this study, different models of distributed file systems were analyzed, detailing their main characteristics and uses, and experiments were conducted with distributed virtualization along with load balancing, which showed improvements in the overall performance of the system. The deployment of a hybrid file system integrating PVFS2 with pNFS brought improvements of up to 16% in the speed of read operations and allowed greater scalability of the solution, and the cache management allowed an improvement of up to 37% in the boot speed of the middleware.
The results also made possible the use of the solution for a large number of computers and allowed a scalable boot of virtual images remotely.
|
116 |
Scheduling on Clouds considering energy consumption and performance trade-offs : from modelization to industrial applications / Ordonnancement sur Clouds avec arbitrage entre la performance et la consommation d'énergie : de la modélisation aux applications industrielles. Balouek-Thomert, Daniel, 05 December 2016 (has links)
L'utilisation massive des services connectés dans les entreprises et les foyers a conduit à un développement majeur des "Clouds" ou informatique en nuage. Les Clouds s'imposent maintenant comme un modèle économique attractif où le client paye pour utiliser des ressources ou des services à la demande sans avoir à se préoccuper de la maintenance ou du coût réel de l'infrastructure. Ce développement rencontre cependant un obstacle majeur du point de vue des fournisseurs de ce type d'architecture : la consommation électrique des moteurs du cloud, les "datacenters" ou centres de données. Cette thèse s'intéresse à l'efficacité énergétique des Clouds en proposant un framework d'ordonnancement extensible et multi-critères dans le but d'augmenter le rendement d'une infrastructure hétérogène d'un point de vue énergétique. Nous proposons une approche basée sur un curseur capable d'agréger les préférences de l'opérateur et du client pour la création de politiques d'ordonnancement. L'enjeu est de dimensionner au plus juste le nombre de serveurs et composants actifs tout en respectant les contraintes d'exploitation, et ainsi réduire les impacts environnementaux liés à une consommation superflue. Ces travaux ont été validés de façon expérimentale sur la plateforme Grid'5000 par leur intégration au sein de l'intergiciel DIET et font l'objet d'un transfert industriel au sein de la plateforme NUVEA que nous proposons. Cette plate-forme fournit un accompagnement pour l'opérateur et l'utilisateur allant de l'audit à l'optimisation des infrastructures. / Modern society relies heavily on the use of computational resources. Over the last decades, the number of connected users and devices has dramatically increased, leading to the consideration of decentralized on-demand computing as a utility, commonly named "The Cloud". Numerous fields of application such as High Performance Computing (HPC), medical research, movie rendering, industrial factory processes or smart city management benefit from recent advances in on-demand computation. The maturity of Cloud technologies led to a democratization and to an explosion of connected services for companies, researchers, techies and even mere mortals, using those resources in a pay-per-use fashion. In particular, the Cloud Computing paradigm has been widely adopted in companies. A significant reason is that the hardware running the cloud and processing the data does not reside at the company's physical site, which means that the company does not have to build computer rooms (known as CAPEX, CAPital EXpenditures) or buy equipment, nor to fill and maintain that equipment over a normal life-cycle (known as OPEX, Operational EXpenditures). This thesis revolves around the energy efficiency of Cloud platforms by proposing an extensible and multi-criteria framework, which intends to improve the efficiency of a heterogeneous platform from an energy-consumption perspective. We propose an approach based on user involvement, using the notion of a cursor offering the ability to aggregate cloud operator and end-user preferences to establish scheduling policies. The objective is the right sizing of active servers and computing equipment while considering operational constraints, thus reducing the environmental impact associated with energy wastage. This research work has been validated through experiments and simulations on the Grid'5000 platform, the biggest shared network in Europe dedicated to research. It has been integrated into the DIET middleware, and an industrial valorisation has been done in the NUVEA commercial platform, designed during this thesis. This platform constitutes an audit and optimization tool for large-scale infrastructures, for operators and end users.
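The cursor idea can be pictured as a single knob in [0, 1] that weights the operator's energy concern against the client's performance concern when ranking candidate servers. The metrics and numbers in the Python sketch below are assumptions for illustration, not values from DIET or NUVEA.

def score(server, cursor):
    """cursor = 0 -> pure performance, cursor = 1 -> pure energy saving; lower score is better."""
    perf = server["expected_runtime"] / server["runtime_budget"]
    energy = server["watts"] / server["watts_max"]
    return cursor * energy + (1.0 - cursor) * perf

def place(task_budget, servers, cursor):
    """Pick the best server that still meets the task's runtime budget."""
    candidates = [s for s in servers if s["expected_runtime"] <= task_budget]
    return min(candidates, key=lambda s: score(s, cursor)) if candidates else None

servers = [
    {"name": "big-node", "expected_runtime": 50, "runtime_budget": 100, "watts": 400, "watts_max": 500},
    {"name": "low-power", "expected_runtime": 90, "runtime_budget": 100, "watts": 120, "watts_max": 500},
]
print(place(100, servers, cursor=0.8)["name"])    # an energy-leaning cursor picks "low-power"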
|
117 |
Détection d'évènements complexes dans les flux d'évènements massifs / Complex event detection over large event streams. Braik, William, 15 May 2017 (has links)
La détection d’évènements complexes dans les flux d’évènements est un domaine qui a récemment fait surface dans le ecommerce. Notre partenaire industriel Cdiscount, parmi les sites ecommerce les plus importants en France, vise à identifier en temps réel des scénarios de navigation afin d’analyser le comportement des clients. Les objectifs principaux sont la performance et la mise à l’échelle : les scénarios de navigation doivent être détectés en moins de quelques secondes, alors que des millions de clients visitent le site chaque jour, générant ainsi un flux d’évènements massif. Dans cette thèse, nous présentons Auros, un système permettant l’identification efficace et à grande échelle de scénarios de navigation conçu pour le eCommerce. Ce système s’appuie sur un langage dédié pour l’expression des scénarios à identifier. Les règles de détection définies sont ensuite compilées en automates déterministes, qui sont exécutés au sein d’une plateforme Big Data adaptée au traitement de flux. Notre évaluation montre qu’Auros répond aux exigences formulées par Cdiscount, en étant capable de traiter plus de 10,000 évènements par seconde, avec une latence de détection inférieure à une seconde. / Pattern detection over streams of events is gaining more and more attention, especially in the field of eCommerce. Our industrial partner Cdiscount, which is one of the largest eCommerce companies in France, aims to use pattern detection for real-time customer behavior analysis. The main challenges to consider are efficiency and scalability, as the detection of customer behaviors must be achieved within a few seconds, while millions of unique customers visit the website every day, thus producing a large event stream. In this thesis, we present Auros, a system for large-scale and efficient pattern detection for eCommerce. It relies on a domain-specific language to define behavior patterns. Patterns are then compiled into deterministic finite automata, which are run on a Big Data streaming platform. Our evaluation shows that our approach is efficient and scalable, and fits the requirements of Cdiscount.
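The detection principle, a navigation pattern compiled into a deterministic finite automaton and run over the click stream of each customer, can be sketched in a few lines of Python. The event names and the pattern below are invented for illustration; the real Auros DSL and compiler are richer.

PATTERN_DFA = {                      # state -> {event type: next state}
    0: {"view_product": 1},
    1: {"add_to_cart": 2, "view_product": 1},
    2: {"checkout": 3},              # state 3 = pattern detected
}
ACCEPTING = {3}

def feed(states, event):
    """Advance one customer's DFA state with one event; return True on detection."""
    customer, event_type = event
    state = states.get(customer, 0)
    state = PATTERN_DFA.get(state, {}).get(event_type, state)   # unmatched events leave the state unchanged
    states[customer] = state
    return state in ACCEPTING

states = {}
stream = [("c1", "view_product"), ("c1", "add_to_cart"), ("c2", "view_product"), ("c1", "checkout")]
for ev in stream:
    if feed(states, ev):
        print("pattern detected for", ev[0])      # -> pattern detected for c1

Because each event costs only a table lookup per customer, per-event work stays constant, which is what makes throughputs of thousands of events per second attainable.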
|
118 |
Microclouds : an approach for a network-aware energy-efficient decentralised cloud / Microclouds : une approche pour un cloud décentralisé prenant en compte les ressources réseau et efficace en énergie. Cuadrado-Cordero, Ismael, 09 February 2017 (has links)
L'architecture actuelle du cloud, reposant sur des datacenters centralisés, limite la qualité des services offerts par le cloud du fait de l'éloignement de ces datacenters par rapport aux utilisateurs. En effet, cette architecture est peu adaptée à la tendance actuelle promouvant l'ubiquité du cloud computing. De plus, la consommation énergétique actuelle des data centers, ainsi que du cœur de réseau, représente 3% de la production totale d'énergie, tandis que selon les dernières estimations, seulement 42,3% de la population serait connectée. Dans cette thèse, nous nous intéressons à deux inconvénients majeurs des clouds centralisés: la consommation d'énergie ainsi que la faible qualité de service offerte. D'une part, du fait de son architecture centralisée, le cœur de réseau consomme plus d'énergie afin de connecter les utilisateurs aux datacenters. D'autre part, la distance entre les utilisateurs et les datacenters entraîne une utilisation accrue du réseau mondial à large bande, menant à des expériences utilisateurs de faible qualité, particulièrement pour les applications interactives. Une approche semi-centralisée peut offrir une meilleur qualité d'expérience aux utilisateurs urbains dans des réseaux clouds mobiles. Pour ce faire, cette approche confine le traffic local au plus proche de l'utilisateur, tout en maintenant les caractéristiques centralisées s’exécutant sur les équipements réseaux et utilisateurs. Dans cette thèse, nous proposons une nouvelle architecture de cloud distribué, basée sur des "microclouds". Des "microclouds" sont créés de manière dynamique, afin que les ressources utilisateurs provenant de leurs ordinateurs, téléphones ou équipements réseaux puissent être mises à disposition dans le cloud. De ce fait, les microclouds offrent un système dynamique, passant à l'échelle, tout en évitant d’investir dans de nouvelles infrastructures. Nous proposons également un exemple d'utilisation des microclouds sur un cas typique réel. Par simulation, nous montrons que notre approche permet une économie d'énergie pouvant atteindre 75%, comparée à une approche centralisée standard. En outre, nos résultats indiquent que cette architecture passe à l'échelle en terme du nombre d'utilisateurs mobiles, tout en offrant une bien plus faible latence qu'une architecture centralisée. Pour finir, nous analysons comment inciter les utilisateurs à partager leur ressources dans les clouds mobiles et proposons un nouveau mécanisme d'enchère adapté à l'hétérogénéité et la forte dynamicité de ces systèmes. Nous comparons notre solution aux autres mécanismes d’enchère existants dans des cas d'utilisations typiques au sein des clouds mobiles, et montrons la pertinence de notre solution. / The current datacenter-centralized architecture limits the cloud to the location of the datacenters, generally far from the user. This architecture collides with the latest trend of ubiquity of Cloud computing. Also, current estimated energy usage of data centers and core networks adds up to 3% of the global energy production, while according to latest estimations only 42,3% of the population is connected. In the current work, we focused on two drawbacks of datacenter-centralized Clouds: Energy consumption and poor quality of service. On the one hand, due to its centralized nature, energy consumption in networks is affected by the centralized vision of the Cloud. That is, backbone networks increase their energy consumption in order to connect the clients to the datacenters. 
On the other hand, distance leads to increased utilization of the broadband Wide Area Network and poor user experience, especially for interactive applications. A distributed approach can provide a better Quality of Experience (QoE) for large urban populations in mobile cloud networks. To do so, the cloud should confine local traffic close to the user, running on the user and network devices. In this work, we propose a novel distributed cloud architecture based on microclouds. Microclouds are dynamically created and allow users to contribute resources from their computers, mobile and network devices to the cloud. This way, they provide a dynamic and scalable system without the need for an extra investment in infrastructure. We also provide a description of a realistic mobile cloud use case, and the adaptation of microclouds to it. Through simulations, we show overall savings of up to 75% of the energy consumed in standard centralized clouds with our approach. Also, our results indicate that this architecture is scalable with the number of mobile devices and provides a significantly lower latency than regular datacenter-centralized approaches. Finally, we analyze the use of incentives for Mobile Clouds, and propose a new auction system adapted to the high dynamism and heterogeneity of these systems. We compare our solution to other existing auction systems in a Mobile Cloud use case, and show the suitability of our solution.
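As a generic illustration of the incentive question, not the specific auction designed in the thesis, the sketch below shows a sealed-bid reverse second-price auction in Python: devices announce the price at which they would lend capacity, the cheapest offer wins, and it is paid the second-cheapest price, which encourages truthful bids.

def reverse_second_price(bids):
    """bids: {device: asking price}. Return (winner, payment) or None if fewer than two bids."""
    if len(bids) < 2:
        return None
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner = ranked[0][0]
    payment = ranked[1][1]             # the winner is paid the second-lowest ask
    return winner, payment

print(reverse_second_price({"phone-a": 3.0, "laptop-b": 2.0, "router-c": 5.0}))   # ('laptop-b', 3.0)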
|
119 |
Paralelní a distribuované zpracování rozsáhlých textových dat / Parallel and Distributed Processing of Large Textual Data. Matoušek, Martin, January 2017 (has links)
This master's thesis deals with task scheduling and resource allocation in parallel and distributed environments. It describes the design and implementation of an application for executing data processing with optimal resource usage.
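The abstract stays at a high level; as a generic illustration of scheduling tasks onto workers with balanced resource usage, here is the classic longest-processing-time greedy heuristic in Python. The task durations and worker count are assumptions, not taken from the thesis.

import heapq

def lpt_schedule(durations, n_workers):
    """Assign tasks (given by duration) to workers, longest first, always to the least-loaded worker."""
    heap = [(0.0, w) for w in range(n_workers)]        # (current load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for task, duration in sorted(enumerate(durations), key=lambda x: -x[1]):
        load, worker = heapq.heappop(heap)
        assignment[worker].append(task)
        heapq.heappush(heap, (load + duration, worker))
    return assignment

print(lpt_schedule([5, 3, 8, 2, 7], n_workers=2))      # {0: [2, 1, 3], 1: [4, 0]}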
|
120 |
Inteligentní agenti v bezdrátových sítích / Intelligent Agents in Wireless Networks. Kružliak, Miroslav, January 2010 (has links)
This master's thesis deals with the synchronization of sensor nodes in a wireless sensor network. Event ordering is achieved through the implementation of logical clocks: Lamport's algorithm is used for synchronization, ordering the events within the given system. The thesis also evaluates how appropriate this principle is for synchronization. The implementation has been carried out in the agent-oriented language AgentSpeak on the Jason platform. The Samson environment has been used and modified to observe the synchronization's behaviour and for testing purposes.
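Lamport's algorithm, mentioned above, amounts to three small rules: increment the local counter on each local event and on each send, and on receipt jump to the maximum of the local and received timestamps plus one, which orders events consistently with causality. The Python sketch below shows the textbook rule only; it is not the AgentSpeak/Jason implementation evaluated in the thesis.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        self.time += 1
        return self.time                   # timestamp to attach to the outgoing message

    def receive(self, msg_time):
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.send()              # a's clock becomes 1
print(b.receive(t))       # b jumps to max(0, 1) + 1 = 2, so the send is ordered before the receive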
|