71

Internet des Objets centré service autocontrôlé / Self-controlled service-centric Internet of Things

Lemoine, Frédéric 03 July 2019
In the digital era, the number of connected objects continues to grow and diversify. To support this increasing complexity, we seek to bring as much automation as possible to the Internet of Things in order to guarantee end-to-end quality of service (QoS). To this end, a self-controlled service component is proposed to integrate the object into the digital ecosystem. Thanks to the calibration of each service, which makes its behaviour known, automated composition becomes possible. We have illustrated the feasibility of our approach through a case study. We have also shown how connected objects can assemble themselves, cooperating to achieve a common objective while meeting global QoS requirements.
72

Modélisation d'une architecture orientée service et basée composant pour une couche de transport autonome, dynamique et hautement configurable / Modeling a service oriented and component based architecture for an autonomous, dynamic and highly configurable transport layer

Dugué, Guillaume 24 September 2014
The massive development of the Internet and its use by the general public, and the subsequent evolution of networks and distributed applications, have led to numerous proposals, standardized or not, for new Transport protocols and to changes in existing ones (notably TCP), intended to take into account the new quality of service (QoS) needs of applications and the new characteristics of the underlying networks. However, no matter how relevant these new solutions are, they have not met the success they should have: TCP remains overwhelmingly used in spite of its well-known conceptual limits. Therefore, while applications and underlying networks have evolved tremendously, the protocol solutions used at the Transport level remain suboptimal and lead to lower performance in terms of QoS than what the newer solutions could deliver. In this context, this document first analyses the reasons for this situation by identifying five problem areas, which we express in terms of complexity (of use), extensibility, configurability, dependence and deployment. On this basis, and in response to the general problem, the contribution of this thesis is not to propose yet another Transport protocol, but to redefine the architecture and operation of the Transport layer and its interactions with applications. This new Transport layer, which we call the Autonomic Transport Layer (ATL), aims at the transparent integration of existing and future protocol solutions from the point of view of the upper and lower layers of the protocol stack, while simplifying its use by offering application developers a higher level of abstraction of the network (in the broad sense). To relieve developers of the complexity of using the many possible Transport-level solutions, our solution integrates autonomy principles that give it the power to decide which Transport protocol(s) to invoke without external intervention, and the dynamicity to adapt the chosen solution during a communication, so that it always delivers the best QoS level to applications whatever the evolution of the application and network context. After a state of the art confronting current solutions with the identified problem areas, this document presents the fundamental principles of the ATL and its global architecture, following a methodology based on the UML 2.0 formalism. Two fundamental use cases are then introduced to describe the ATL from a behavioural point of view. Finally, we present several performance measurements attesting to the usefulness of a solution such as the ATL.
73

The Application of Autonomic Computing for the Protection of Industrial Control Systems

Cox, Donald Patrick January 2011
Critical infrastructures are defined as the basic facilities, services and utilities needed to support the functioning of society. For over three thousand years, civil engineers have built these infrastructures to ensure that needed services and products are available to make mankind more comfortable, secure and productive. Modern infrastructure control systems are vulnerable to disruption from natural disaster, accident, negligent operation and intentional cyber assaults from malicious agents. Many critical processes within our infrastructures are continuous (e.g., electric power) and cannot be interrupted without consequence to industry and the public. Failure to protect the critical infrastructure from cyber assaults will result in physical, economic and social impacts, extending from the local to the national level. Cyber weapons have shown that harm to infrastructures can occur before system operators have time to determine the source. We present the thesis that infrastructure control systems can employ autonomic computing technology to detect anomalies and mitigate process disruption. Specifically, we focus on: 1) autonomic computing algorithms that can be integrated into control systems and networks to detect and respond to anomalies; 2) autonomic technology capable of detecting and blocking infrastructure controller commands that, if executed, would result in process disruption; 3) the design and construction of a prototype Autonomic Critical Infrastructure Protection (ACIP) appliance for the integration and testing of autonomic algorithms; and 4) the design and construction of a test bed capable of modeling critical infrastructures and related control systems and processes for the purpose of testing and demonstrating new autonomic technologies. We report on the development of a new, multi-dimensional ontology that organizes cyber assault methodologies correlated with perpetrator motivation and goals. Using this ontology, we create a theoretical framework to identify the integration points for protective technology within infrastructure control systems. We have created a unique modeling and simulation test bed for critical infrastructure systems and processes, and a prototype autonomic computing appliance. Through this work, we have developed an expanded understanding of autonomic computing theory and its application to control systems. We also, through experimentation, prove the thesis and establish a roadmap for future research.
74

Self-Management for Large-Scale Distributed Systems

Al-Shishtawy, Ahmad January 2012
Autonomic computing aims at making computing systems self-managing by using autonomic managers in order to reduce the obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems, motivated by the increasing complexity of computing systems and their management. In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed the following four challenges in achieving self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottlenecks, and scale. We present the results of our research on addressing these challenges. Niche implements the autonomic computing architecture proposed by IBM in a fully decentralized way. It supports a network-transparent view of the system architecture, simplifying the design of distributed self-management, and provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We proceed by presenting a methodology for designing the management part of a distributed self-managing application, with design steps that include the partitioning of management functions and the orchestration of multiple autonomic managers. In the second part, we discuss robustness of management and data consistency, which are necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time-consuming and error-prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn. Our approach is based on replicating a management element using finite state machine replication with a reconfigurable replica set; our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store, built on a peer-to-peer network, that supports multiple consistency levels. The store enables a trade-off between high availability and data consistency. Using majority quorums avoids potential drawbacks of master-based consistency control, namely a single point of failure and a potential performance bottleneck. In the third part, we investigate self-management for Cloud-based storage systems, focusing on elasticity control using elements of control theory and machine learning. We have conducted research on a number of different designs of an elasticity controller, including a state-space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a Cloud-based key-value store using a state-space model that enables trading off performance for cost, and we describe the steps in designing such a controller. We conclude by presenting the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores that combines feedforward and feedback control.
75

Langage de modélisation spécifique au domaine pour les architectures logicielles auto-adaptatives / Domain-specific modeling language for self-adaptive software system architectures

Křikava, Filip 22 November 2013
The vision of Autonomic Computing and Self-Adaptive Software Systems aims at realizing software that autonomously manages itself in the presence of varying environmental conditions. Feedback Control Loops (FCLs) provide generic mechanisms for self-adaptation; however, incorporating them into software systems raises many challenges. The first part of this thesis addresses the integration challenge, i.e., forming the architectural connection between the underlying adaptable software and the adaptation engine. We propose a domain-specific modeling language, FCDL, for integrating adaptation mechanisms into software systems through external FCLs. It raises the level of abstraction, making FCLs amenable to automated analysis and implementation code synthesis. The language supports composition, distribution and reflection, thereby enabling the coordination and composition of multiple distributed FCLs using varied control mechanisms. Its use is facilitated by a modeling environment, ACTRESS, that provides support for modeling, verification and complete code generation. The suitability of our approach is illustrated on three real-world adaptation scenarios built end to end. The second part of this thesis focuses on model manipulation as the underlying facility for implementing ACTRESS. We propose an internal Domain-Specific Language (DSL) approach whereby Scala is used to implement a family of DSLs, SIGMA, for model consistency checking and model transformations. The DSLs have expressiveness and features similar to existing approaches, while additionally benefiting from Scala's versatility, performance and tool support. To conclude, we discuss further work and research directions arising from the application of MDE to self-adaptive software systems.
76

Gestion autonomique d'applications dynamiques sûres et résilientes / Autonomic Management of Reliable and Resilient Dynamic Applications

Calmant, Thomas 19 October 2015
Service-oriented architectures (SOA) are considered the most advanced way to develop and integrate modular and flexible applications. Many SOA platforms are available to software developers and architects, the two most evolved of them being SCA and OSGi. An application based on one of these platforms can be assembled with only the components required for the execution of its tasks, which helps decrease its resource consumption and increase its maintainability. Furthermore, these platforms allow plug-in components to be added at runtime, even if they were not known during the early stages of the product's development. They thus make it possible to update, extend and adapt the features of the base product, or of the technical services required to put it into production, continuously and without service interruption. These capabilities are notably used in the DevOps paradigm and, more generally, to implement the continuous deployment of artifacts. However, the extensibility provided by these platforms can decrease the overall reliability of the system: a strong tendency in software development is the assembly of third-party components, which may be of unknown or even questionable quality. In case of errors, performance degradation, etc., it is difficult to identify the components or combinations of components at fault. It becomes essential for the software producer to determine the responsibility of the various components involved in a malfunction. This thesis aims to provide a platform, Cohorte, for designing and executing software products that are extensible and resilient to the malfunctions of unqualified extensions. The components of such products may be developed in various programming languages and deployed (added, updated and withdrawn) continuously and without service interruption. Our proposal adopts the principle of isolating components considered unstable or insecure. The choice of components to isolate may be decided by the development and operations teams, based on their expertise, or determined from a combination of indicators. The latter evolve over time to reflect the reliability of the components: for example, components can be considered reliable after a quarantine period, while an update may degrade their stability. It is therefore essential to revisit the initial isolation choices in order, in the first case, to limit the cost of communications between components and, in the second case, to maintain the reliability of the critical core of the product.
77

Computação em nuvem elástica auxiliada por agentes computacionais e baseada em histórico para web services / Elastic cloud computing aided by history-based computational agents for web services

Dias, Ariel da Silva 15 December 2014
Effective management of cloud computing resources is directly linked to correctly managing the performance of the applications hosted in virtual machines (VMs), creating an environment able to control each VM and resize its memory, disk, CPU and other resources individually in response to the workload. This work also considers, as part of effective management, assessing the return on the investment made in contracting an IaaS service. In this Master's thesis, the management of cloud computing infrastructure is proposed through two models that facilitate self-adaptive resource provisioning in a virtualized environment: resource allocation using a model that predicts the future workload, and self-adaptive capacity management using computational agents that continuously monitor the VMs. In addition, a return-on-investment measure is proposed, relating the amount the client pays for the IaaS service to how much of it is actually used; the rate of the amount spent, in monetary units, is thus accounted for in each period. To realize this proposal, algorithms were developed that form the core of the whole management system. Experiments were also conducted, and the results show the self-management capability of the virtual machines, with dynamic reconfiguration of the infrastructure through history-based predictions as well as reconfiguration and monitoring using computational agents. After analysing and evaluating the experimental results, it is possible to state that reconfiguration of resources with computational agents improved significantly over reconfiguration based on future workload prediction.
78

Mecanismos de autoconfiguração e auto-otimização para arquiteturas virtualizadas que visam a provisão de qualidade de serviço / Mechanisms of self-configuration and self-optimization for virtualized architectures aiming at the provision of quality of service

Nakamura, Luis Hideo Vasconcelos 19 April 2017
This PhD project involves research on autonomic computing, focusing on the development of self-configuration and self-optimization mechanisms for virtualized architectures that aim to ensure the provision of quality of service (QoS). These mechanisms make use of autonomic elements that are aided by an ontology. Semantic Web tools are therefore used so that the ontology can represent a knowledge base holding information about the computational resources. This information is used by optimization algorithms that, based on rules predefined by the administrator, decide on a new system configuration aimed at optimizing performance. Configuration and optimization usually involve software elements that must be managed by Information Technology (IT) professionals, and part of this management consists of routine tasks, for example monitoring, reconfiguration and performance checks. These tasks take time and therefore generate costs and strain for the professionals. This project thus aims to automate some of these routine tasks, facilitating the work of IT professionals and allowing them to focus on more critical tasks. To achieve this goal, a study was carried out and distributed mechanisms based on Autonomic Computing and the Semantic Web were created, allowing the automatic configuration and optimization of resources. The individual results of each mechanism indicate that it is possible to achieve a satisfactory level of self-configuration and self-optimization for virtualized architectures. The self-configuration mechanism achieved better results with the resource-monitoring approach than with predictions, and the self-optimization mechanism proved that its methodology and algorithm are applicable in the search for an optimized configuration that meets the agreed SLA.
79

Generic monitoring and reconfiguration for service-based applications in the cloud / Supervision et reconfiguration génériques des applications à base de service dans le nuage

Mohamed, Mohamed 07 November 2014
Cloud Computing is an emerging paradigm in Information Technology (IT). One of its major assets is the provisioning of resources based on a pay-as-you-go model. Cloud resources are situated in a highly dynamic environment. However, each provisioned resource comes with functional properties but may not offer non-functional properties such as monitoring, reconfiguration, security, accountability, etc. In such a dynamic environment, non-functional properties are of critical importance for maintaining the service level of resources and making them respect the contracts between providers and consumers. In our work, we are interested in the monitoring, reconfiguration and autonomic management of Cloud resources. In particular, we focus on service-based applications, and we then push our work further to treat Cloud resources in general. Consequently, this thesis contains two major contributions. On the one hand, we extend the Service Component Architecture (SCA) standard in order to add the description of monitoring and reconfiguration requirements to components. In this context, we propose a list of transformations that dynamically add monitoring and reconfiguration facilities to components, even if they were designed without them. This alleviates the developers' task and lets them focus on the business logic of their components. To be in line with the scalability of Cloud environments, we use a micro-container-based approach for the deployment of components. On the other hand, we extend the Open Cloud Computing Interface (OCCI) standard to dynamically add monitoring and reconfiguration facilities to Cloud resources while remaining agnostic to their service level. This extension entails the definition of new OCCI Resources, Links and Mixins to dynamically add monitoring and reconfiguration facilities to any Cloud resource. We then extend these two contributions to couple monitoring and reconfiguration, in order to add self-management capabilities to SCA-based applications and Cloud resources. The solutions we propose are generic, fine-grained and based on de facto standards (i.e., SCA and OCCI). In this thesis manuscript, we give implementation details as well as the experiments we carried out to evaluate our proposals.
80

An autonomic communication framework for wireless sensor networks

Sun, Jingbo January 2009
Sensor networks use a group of collaborating sensor nodes to collect information about real world phenomena. Sensor nodes use low-power short-range radio links to communicate with each other. Communication between sensor nodes shows significant variation over time and space. This can lead to unreliable and unpredictable network performance. These dynamic and lossy characteristics of wireless links pose major challenges for building reliable sensor networks and raise new issues that data delivery protocols must address. This thesis addresses the problems of designing protocols to overcome time-varying environmental conditions that lead to unpredictable network performance. The goal is to provide reliable data delivery in sensor networks and to minimise energy use. The major contributions of this thesis are: measuring the performance of wireless links in field trials on a time scale of weeks; systematic analysis of strengths and weaknesses of existing data delivery protocols; and the design, implementation and testing of a novel autonomic communication framework. We have measured link quality over time in experiments in unattended outdoor environments. Most previous work focused on spatial properties and experiments were not extensive, only lasting for a few hours. Besides common phenomena found in other work, such as the variation of network performance over time and the existence of asymmetric links, we find that links are independent over long time scales, and performance patterns of links are different. We also analyse the performance of data delivery protocols that use different techniques to improve reliability in sensor networks. Through systematic analysis of strengths and weaknesses of existing data delivery strategies, we find that networks using a single technique can only perform well for a limited range of link conditions. Different strategies are required in different operating conditions. Based on these experimental and theoretical studies, a novel autonomic communication framework (ACF) for wireless sensor networks is proposed. Nodes in this ACF are able to change their behaviour to adapt to time-varying environments so that optimal network performance can be achieved. Our framework provides a holistic solution for reliable data delivery to overcome time-varying wireless links. Our implementation and experimental evaluations demonstrate that this holistic framework is effective for reliable and energy-efficient data delivery in realistic sensor network settings.
