71

Optimistic Adaptation of Decentralised Role-based Software Systems

Matusek, Daniel 17 May 2023 (has links)
The complexity of computer networks has been rising over the last decades. Increasing interconnectivity between devices, the growing complexity of performed tasks and strong collaboration between nodes drive this phenomenon; Internet-of-Things devices, whose relevance has risen in recent years, are one example. The increasing number of devices requiring updates and supervision makes maintenance more difficult, and human intervention is costly and time-consuming. Self-adaptive software systems (SAS) can overcome this. SAS are a subset of autonomous systems that monitor themselves and their environment in order to adapt to changes without human interaction. The literature proposes different approaches for engineering SAS, including techniques for executing adaptations on multiple devices based on plans generated in reaction to changes; among these, decentralised approaches can also be found. To the best of our knowledge, however, no approach for engineering a SAS exists that tolerates errors during the execution of an adaptation in a decentralised setting. While some approaches for role-based execution reset the application after a single failure during the adaptation process, others make no assumptions about errors or do not consider an erroneous environment. In a real-world environment, errors will likely occur at run-time and may disturb the adaptation process.

This work aims to perform adaptations on role-based systems in a decentralised way under a relaxed consistency constraint, i.e., errors during the adaptation phase are tolerated. This increases the availability of nodes, since no rollbacks are required after a failure. Moreover, a class of applications such as drone swarms benefits from a relaxed consistency model, since the parts of the system that adapted successfully can already operate in the adapted configuration instead of waiting for other peers to apply the changes in a later iteration. Eliminating the need for atomic adaptation execution also makes asynchronous execution of adaptations possible: the adaptation process can then be supervised over a long period, ensuring that every peer takes the planned actions as soon as its internal task execution allows.

To enable this relaxed-consistency adaptation execution, we develop a decentralised adaptation execution protocol that supports the notion of eventual consistency. As soon as devices reconnect after network congestion, or restore their internal state after local failures, the protocol coordinates the recovery process among multiple devices to attempt to restore a globally consistent state. By superseding the need for a central instance, every peer that receives information about failing peers can start the recovery process. The developed approach can restore a consistent global configuration even if almost all peers fail. Moreover, it supports asynchronous adaptations, i.e., peers can execute planned adaptations as soon as they are ready, which increases overall availability when single nodes adapt late. The protocol is evaluated with a proof-of-concept implementation, run in five different experiments with thousands of iterations to show its applicability and reliability.
The execution time of the protocol and the number of exchanged messages were measured to compare the protocol across different error cases and system sizes and to show the scalability of the approach. The developed solution was also compared to a blocking approach to demonstrate its feasibility relative to an atomic alternative, and its applicability in a real-world scenario is described in an empirical study of a fire-extinguishing drone swarm. The results show that an optimistic approach to adaptation is suitable and that specific scenarios benefit from the improved availability, since no rollbacks are required: systems can continue their work regardless of failures of participating nodes in large-scale systems.

Table of contents:
1. Introduction
   1.1. Motivational Use-Case
   1.2. Problem Definition
   1.3. Objectives
   1.4. Research Questions
   1.5. Contributions
   1.6. Outline
2. Foundation
   2.1. Role Concept
   2.2. Self-Adaptive Software Systems
   2.3. Terminology for Role-Based Self-Adaptation
   2.4. Consistency Preservation and Consistency Models
   2.5. Summary
3. Related Work
   3.1. Role-Based Approaches
   3.2. Actor Model of Computation and Akka
   3.3. Adaptation Execution in Self-Adaptive Software Systems
   3.4. Change Consistency in Distributed Systems
   3.5. Comparison of the Evaluated Approaches
4. The Decentralised Consistency Compensation Protocol
   4.1. System and Error Model
   4.2. Requirements to the Concept
   4.3. The Usage of Roles in Adaptations
   4.4. Protocol Overview
   4.5. Protocol Description
   4.6. Protocol Corner- and Error Cases
   4.7. Summary
5. Prototypical Implementation
   5.1. Technology Overview
   5.2. Reused Artifacts
   5.3. Implementation Details
   5.4. Setup of the Prototypical Implementation
   5.5. Summary
6. Evaluation
   6.1. Evaluation Methodology
   6.2. Evaluation Setup
   6.3. Experiment Overview
   6.4. Default Case: Successful Adaptation
   6.5. Compensation on Disconnection of Peers
   6.6. Recovery from Failed Adaptation
   6.7. Impact of Early Activation of Adaptations
   6.8. Comparison with a Blocking Approach
   6.9. Empirical Study: Fire Extinguishing Drones
   6.10. Summary
7. Conclusion and Future Work
   7.1. Recap of the Research Questions
   7.2. Discussion
   7.3. Future Work
A. Protocol Buffer Definition
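The abstract's key property is that any peer learning of a failure may initiate recovery without a central coordinator, while peers that already adapted keep operating in their new configuration. The thesis code is not reproduced here; the following is a minimal Python sketch of that idea only, with invented names and data structures (Peer, recover, the step strings), not the actual Decentralised Consistency Compensation Protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Peer:
    """Illustrative peer: applies planned adaptation steps asynchronously and,
    on learning of a failed peer, may itself initiate recovery (no coordinator)."""
    name: str
    planned: list = field(default_factory=list)   # adaptation steps still to apply
    applied: list = field(default_factory=list)   # steps already active locally
    alive: bool = True

    def step(self):
        # Eventual consistency: apply the next step as soon as local work allows,
        # without waiting for other peers (no atomic global commit).
        if self.alive and self.planned:
            self.applied.append(self.planned.pop(0))

def recover(peers, failed_name):
    """Any peer that hears about a failure starts recovery: survivors re-plan
    so the failed peer's pending steps are redistributed among them."""
    failed = next(p for p in peers if p.name == failed_name)
    failed.alive = False
    survivors = [p for p in peers if p.alive]
    for i, step in enumerate(failed.planned):
        survivors[i % len(survivors)].planned.append(step)
    failed.planned.clear()

peers = [Peer("a", ["r1->r2"]), Peer("b", ["r3->r4"]), Peer("c", ["r5->r6"])]
peers[0].step()        # peer a already operates in its adapted configuration
recover(peers, "b")    # b fails; its pending steps are redistributed
for p in peers:
    p.step()
print({p.name: p.applied for p in peers})
```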
72

Optimal Production Planning for Small-Scale Hydropower

Towle, Anna-Linnea January 2018 (has links)
As more and more renewable energy sources like wind and solar power are added to the electric grid, reliable sources of power like hydropower become more important. Hydropower is abundant in Scandinavia and helps to maintain a stable and reliable grid despite the added irregularity of wind and solar power and larger fluctuations in demand. Aside from the reliability aspect of hydropower, power producers want to maximise their profit from sold electricity. In Sweden, power is bid into the spot market at Nord Pool every day, and a final spot price is decided within the electricity market. The electricity price differs for each hour of the day, so it is more profitable to generate power during some hours than others. Many other factors can change when it is most profitable for a hydropower plant to operate, such as the amount of local water inflow. Hydropower production is an ideal case for optimisation models, and they are already widely used throughout the industry. Although the optimisation calculations are done by a computer, a lot of manual work from the spot traders goes into specifying the inputs to the model, such as local inflow, price forecasts and, perhaps most importantly, market strategy. Because of the large amount of work needed for each hydropower plant, many of the smaller plants are not optimised at all but are left to run on an automatic control that typically tries to maintain a constant water level; at Fortum this is called VNR, for vattennivåreglering (water level regulation).

The purpose of this thesis is to develop an optimisation algorithm for a small hydropower plant, using the Fortum-owned and operated Båthusströmmen as a test case. An optimisation model is built in Fortum's current modelling system and tested for 2016. In addition, a mathematical model is built and tested using GAMS. It is found that optimising the plant instead of running it on VNR would have increased profit by about 15-16% for the year 2016. This is a significant improvement and a strong motivation to begin optimising the small hydropower plants. Since the main reason many small hydropower plants are not optimised is that it takes too much of the employees' time, a second phase of this thesis was conducted in conjunction with two other students, Jenny Möller and Johan Wiklund. The focus of this portion was to develop a centralised controller to automatically optimise the production schedule and communicate with the central database. This would completely remove the workload from the spot traders, as well as increase the overall profit of the plant. This thesis describes the results from both the Fortum model and the GAMS model, as well as the mathematical formulation of the GAMS model. The basic structure of the automatic controller is also presented, and more can be read in the thesis by Möller and Wiklund (Möller & Wiklund, 2018).

Keywords: optimisation, hydropower planning, self-regulating, automatic control, optimal planning
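The thesis builds its optimisation models in Fortum's in-house system and in GAMS, neither of which appears in the abstract. As a hedged illustration of the underlying planning problem — maximising revenue over hourly spot prices subject to reservoir limits — here is a small linear program using scipy; all prices, inflows, volumes and the efficiency constant are invented toy values, and the real models are certainly richer.

```python
import numpy as np
from scipy.optimize import linprog

T = 24
price = 30 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))  # toy hourly spot prices
inflow = 5.0                          # local inflow per hour (toy units)
v0, vmin, vmax = 100.0, 80.0, 120.0   # initial volume and reservoir limits
qmax = 12.0                           # maximum discharge per hour
eta = 0.9                             # energy produced per unit of discharge (toy)

# Decision variables: hourly discharge q[0..T-1].
# Maximise revenue sum_t price[t] * eta * q[t]  ->  minimise the negative.
c = -price * eta

# Volume after hour t is v0 + (t+1)*inflow - cumsum(q)[t]; keep it in [vmin, vmax].
L = np.tril(np.ones((T, T)))                          # L @ q = cumulative discharge
b_hi = v0 + inflow * np.arange(1, T + 1) - vmin       #  L q <= v0 + k*inflow - vmin
b_lo = -(v0 + inflow * np.arange(1, T + 1) - vmax)    # -L q <= -(v0 + k*inflow - vmax)
A_ub = np.vstack([L, -L])
b_ub = np.concatenate([b_hi, b_lo])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, qmax)] * T, method="highs")
print("profit:", round(-res.fun, 1))
print("schedule:", np.round(res.x, 1))
```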
73

Formalisation et évaluation de stratégies d’élasticité multi-couches dans le Cloud / Formalization and evaluation of cross-layer elasticity strategies in the Cloud

Khebbeb, Khaled 29 June 2019 (has links)
The elasticity property allows Cloud systems to adapt to their incoming workload by provisioning and de-provisioning computing resources in an autonomic manner as demand rises and drops. Because of the unpredictable nature of the workload and the numerous factors that impact elasticity, providing accurate action plans to ensure a Cloud system's elasticity while preserving high-level policies (performance, cost, etc.) is a particularly challenging task. This thesis provides a thorough specification and implementation of Cloud systems, relying on bigraphs as a formal model, over two aspects: structural and behavioural. Structurally, the goal is to define a correct model of the Cloud system's back-end structure. This part is supported by the specification capabilities of the bigraph formalism, specifically its "sorting" mechanisms and construction rules, which allow the designer's desiderata to be expressed. The behavioural part consists of modelling, implementing and validating generic elasticity strategies that describe Cloud systems' auto-adaptive behaviours (horizontal and vertical scaling, migration, etc.) in a cross-layer manner, i.e., at both service and infrastructure levels. These tasks are supported by the dynamic aspects of the Bigraphical Reactive Systems (BRS) formalism, through reaction rules. The introduced elasticity strategies guide the conditional triggering of the defined reaction rules, describing Cloud systems' auto-scaling behaviours across layers. The encoding of these specifications and their implementation are defined in rewriting logic via the Maude language, and their correctness is formally verified through model checking supported by linear temporal logic (LTL). To validate these contributions quantitatively, we propose a queuing-based approach to evaluate, analyse and discuss elasticity strategies in Cloud systems through different simulated execution scenarios. We explore the definition of a "good" strategy through a case study that considers the changing nature of the input workload, and we propose an original way to compose several cross-layer elasticity strategies so as to guarantee different high-level policies.
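The strategies themselves are specified in bigraphs and Maude, which the abstract does not reproduce. The sketch below is only a toy threshold strategy in Python, with invented thresholds, service rates and arrival pattern, to make the monitor-analyse-scale loop of such a simulation concrete.

```python
import random

def autoscale(instances, queue_len, service_rate=10, up=0.8, down=0.3):
    """Toy elasticity strategy for the service tier: scale out when the
    observed load saturates capacity, scale in when it falls low.
    Thresholds and rates are invented, not from the thesis."""
    utilisation = min(queue_len / (instances * service_rate), 1.0)
    if utilisation > up:
        return instances + 1          # scale out
    if utilisation < down and instances > 1:
        return instances - 1          # scale in
    return instances

random.seed(1)
instances, queue = 1, 0
for t in range(50):
    # queuing-style step: random arrivals minus what the instances can serve
    queue = max(0, queue + random.randint(0, 15) - instances * 10)
    instances = autoscale(instances, queue)
    if t % 10 == 0:
        print(f"t={t:2d} queue={queue:3d} instances={instances}")
```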
74

Étude et conception de systèmes miniaturisés « intelligents » pour l’amortissement non-linéaire de vibration / Study and design of "smart" miniaturized systems for non-linear vibration damping

Viant, Jean-Nicolas 06 July 2011 (has links)
Mechanical vibration damping has many applications in acoustic control and stress reduction in industry (machine tools), civil engineering (self-supporting structures) and aeronautics (stress reduction during manoeuvres). Current research tends mainly towards methods based on piezoelectric materials bonded to the surface of the structures to be treated. A promising technique, developed at the LGEF at INSA Lyon, is vibration damping of a mechanical structure by the SSDI method (Synchronized Switch Damping on an Inductor). This semi-active damping technique uses a non-linear process to invert the voltage across a piezoelectric element, which acts as sensor and actuator at the same time. The aim of this work is to integrate the electronics that process the voltage across the piezoelectric elements in a microelectronic technology, so that the electronic controller can ultimately be embedded on the piezoelectric patch. An analysis of published damping techniques situates this work and identifies the key points of the SSDI technique. In the second chapter, several models are developed to compare design options and guide the architectural choices. The third chapter presents the design of an ASIC in a technology with a high-voltage option, comprising a high-voltage function for processing the piezoelectric signal and a low-voltage chain for analysis, decision and control. The first performs the inversion of the piezoelectric voltage by means of a passive RLC energy-conversion circuit. The second focuses on detecting the voltage extrema in order to optimise the damping; a self-adaptive voltage divider with over-voltage protection and a peak-voltage detector perform this operation. These functions are characterised in simulations and measurements. The operation of the ASIC is then tested on a mechanical structure, and the performance is described and interpreted in Chapter 4. The multi-mode behaviour and the wide dynamic range of the processed mechanical signals are advances with respect to the published literature.
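To make the SSDI principle concrete — invert the piezoelectric voltage at each displacement extremum through a lossy RLC loop — here is a toy numerical sketch. The inversion factor, vibration frequency and signal are invented, and the actual ASIC behaviour is analog and far more involved.

```python
import numpy as np

# Minimal SSDI sketch: in open circuit the piezo voltage follows the structure's
# displacement; at each detected extremum the switch briefly closes and the RLC
# loop inverts the voltage with losses (factor gamma). All values are invented.
gamma = 0.8                      # |V_after| = gamma * |V_before| at each inversion
t = np.linspace(0, 0.1, 5000)
u = np.sin(2 * np.pi * 100 * t)  # displacement of a toy 100 Hz mode

v, offset = np.zeros_like(t), 0.0
for i in range(1, len(t) - 1):
    v[i] = u[i] + offset                          # voltage tracks displacement
    if (u[i] - u[i - 1]) * (u[i + 1] - u[i]) < 0:  # local extremum detected
        offset = -gamma * v[i] - u[i]             # invert the voltage
v[-1] = u[-1] + offset
print("peak piezo voltage after build-up:", round(np.abs(v).max(), 2))
```

The printed peak exceeds the displacement amplitude: the repeated inversions build up a voltage in phase opposition with the velocity, which is what produces the damping effect.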
75

Uma abordagem dirigida por modelos para desenvolvimento de middlewares auto-adaptativos para transmissão de fluxo de dados baseado em restrições de QoS / A model-driven approach for the development of self-adaptive middleware for data-stream transmission based on QoS restrictions

Silva, Andre Gustavo Pereira da 15 March 2010 (has links)
The use of middleware technology in various types of systems, in order to abstract low-level details related to the distribution of application logic, is increasingly common. Among the many systems that can benefit from these components, we highlight distributed systems, where communication between software components located on different physical machines must be enabled. An important issue related to the communication between distributed components is the provision of mechanisms for managing quality of service. This work presents a metamodel for modelling component-based middleware that provides an application with an abstraction of the communication between the components involved in a data stream, regardless of their location. Another feature of the metamodel is the possibility of self-adaptation of the communication mechanism, either by updating the values of its configuration parameters or by replacing it with another mechanism if the specified quality-of-service restrictions are not being met. To this end, the state of the communication is monitored (applying techniques such as a feedback control loop) and related performance metrics are analysed. The Model-Driven Development (MDD) paradigm was used to generate the implementation of a middleware that serves as proof of concept of the metamodel, together with the configuration and reconfiguration policies related to the dynamic adaptation processes; to this end, the metamodel associated with the process of configuring a communication was defined. The MDD application also includes the definition of the following transformations: from the architectural model of the middleware to Java code, and from the configuration model to XML.
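The metamodel itself is not shown in the abstract. As a hedged illustration of the feedback control loop it describes — monitor a QoS metric, retune the current communication mechanism, and replace it if the restriction still fails — here is a toy Python sketch; the class and mechanism names and all numbers are hypothetical.

```python
class Mechanism:
    """Toy communication mechanism with one tunable parameter (buffer size)."""
    def __init__(self, name, base_rate):
        self.name, self.buffer, self.base_rate = name, 1, base_rate

    def throughput(self):               # invented model: bigger buffer helps a bit
        return self.base_rate * (1 + 0.1 * self.buffer)

def control_loop(mechanisms, required_mbps, max_buffer=8):
    """Feedback loop: measure, compare against the QoS restriction, then either
    retune the current mechanism's parameters or replace the mechanism."""
    current = mechanisms[0]
    while True:
        measured = current.throughput()          # monitor
        if measured >= required_mbps:            # analyse: restriction satisfied
            return current
        if current.buffer < max_buffer:          # execute: retune parameters...
            current.buffer += 1
        else:                                    # ...or swap in another mechanism
            nxt = mechanisms.index(current) + 1
            if nxt == len(mechanisms):
                raise RuntimeError("no mechanism satisfies the QoS restriction")
            current = mechanisms[nxt]

chosen = control_loop([Mechanism("udp-stream", 10), Mechanism("rtp-stream", 25)], 30)
print(chosen.name, "buffer =", chosen.buffer)
```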
76

Algoritmo auto-adaptativo para proteção de sobrecorrente instantânea / Self-adaptive algorithm for instantaneous overcurrent protection

SOUZA JÚNIOR, Francisco das Chagas. 13 August 2018 (has links)
A self-adaptive technique is proposed that makes the determination of instantaneous overcurrent relay settings for distribution systems an automatic task, requiring neither human intervention nor any interruption of the electricity supply or of grid monitoring. Using a distributed architecture formed by three layers connected through a communication channel, topological modifications such as the connection or disconnection of lines, and changes in the load or generation profiles of the electric power system, are automatically reflected in the settings of the protective devices. The proposed method uses the load current as the main input for determining the settings of the instantaneous overcurrent units in medium-voltage distribution networks, with and without distributed generation. By computing the network (Thevenin) equivalents online, the proposed technique needs only low levels of human intervention to carry out coordination and selectivity studies. The results obtained demonstrate the technical viability of the proposed methodology and contribute to the state of the art in the development of smart grids.
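As a hedged illustration of the kind of computation such an online layer might perform, the sketch below derives an instantaneous (ANSI 50) pickup from a Thevenin equivalent, using the common practice of setting the pickup above the remote-end fault current so the unit only sees close-in faults. The impedances, voltage level and the 1.3 margin are invented, not the thesis parameters.

```python
# Toy sketch of the self-adaptive setting idea: when topology or load changes,
# recompute the Thevenin equivalent online and refresh the instantaneous pickup
# automatically. All values below are illustrative assumptions.
def fault_current(v_phase, z_thevenin, z_line):
    """Three-phase bolted fault at the remote end of the protected line."""
    return v_phase / abs(z_thevenin + z_line)

def pickup_setting(v_phase, z_thevenin, z_line, i_load_max, margin=1.3):
    i_fault_remote = fault_current(v_phase, z_thevenin, z_line)
    pickup = margin * i_fault_remote      # reach limited to the protected line
    if pickup <= i_load_max:
        raise ValueError("setting would trip on load current")
    return pickup

v = 13800 / 3 ** 0.5   # phase voltage of an assumed 13.8 kV feeder
print(round(pickup_setting(v, z_thevenin=0.5 + 1.2j,
                           z_line=0.8 + 2.0j, i_load_max=400.0)), "A")
```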
77

Um paradigma baseado em algoritmos genéticos para o aprendizado de regras Fuzzy / A genetic-algorithm-based paradigm for learning fuzzy rules

Castro, Pablo Alberto Dalbem de 24 May 2004 (has links)
The construction of the knowledge base of fuzzy systems has benefited greatly from automatic methods that extract the necessary knowledge from data sets representing examples of the problem. Evolutionary computation, especially genetic algorithms, has been the focus of a great number of studies that treat the automatic generation of knowledge bases as a search and optimisation process, using different approaches. This work presents a methodology to learn fuzzy rule bases from examples by means of genetic algorithms using the Pittsburgh approach. The methodology consists of two stages: the genetic learning of the rule base, followed by the genetic optimisation of the rule base obtained, in order to exclude redundant and unnecessary rules. The first stage uses a self-adaptive genetic algorithm that dynamically changes the crossover and mutation rates, ensuring genetic diversity and avoiding premature convergence. The membership functions are defined beforehand by the fuzzy clustering algorithm FC-Means and remain fixed during the whole learning process. The application domain is multidimensional pattern classification, where the attributes and, sometimes, the classes are fuzzy and therefore represented by linguistic values. The performance of the proposed methodology is evaluated by computational simulations on several real-world pattern classification problems. The tests focused on the accuracy of the generated fuzzy rules in different situations. The dynamic change of the algorithm parameters showed that better results can be obtained, and the use of "don't care" conditions made it possible to generate a small number of comprehensible and compact rules.
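The dissertation's self-adaptive operator is not reproduced in the abstract. The following toy sketch shows one common way such self-adaptation can work — raising the crossover and mutation rates when population diversity drops — with an update rule and constants that are illustrative assumptions only.

```python
import random

def diversity(pop):
    """Fraction of gene positions that are not unanimous across the population."""
    varied = sum(1 for genes in zip(*pop) if len(set(genes)) > 1)
    return varied / len(pop[0])

def adapt_rates(pop, pc, pm, low=0.25, step=0.05):
    """Self-adaptive update: more exploration when the population converges."""
    if diversity(pop) < low:
        return min(1.0, pc + step), min(0.5, pm + step)
    return max(0.5, pc - step), max(0.01, pm - step)

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
pc, pm = 0.8, 0.05
for gen in range(5):
    pc, pm = adapt_rates(pop, pc, pm)
    # ... selection, crossover with rate pc and mutation with rate pm go here ...
    print(f"gen {gen}: diversity={diversity(pop):.2f} pc={pc:.2f} pm={pm:.2f}")
```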
78

Contrôle modal autoadaptatif de vibrations de structures évolutives / Self-adaptive modal control of vibrations for time-varying structures

Deng, Fengyan 30 May 2012 (has links)
The drive for lighter, cheaper structures results in increasingly flexible structures, which makes them more sensitive to vibration. Vibration control therefore becomes a major issue in many industrial applications, and the limits of materials now call for active control more and more frequently. The evolution of structures over time (ageing, boundary conditions, architecture, etc.) raises the problem of control robustness. Moreover, actuation, which appears more and more frequently in mechanical systems, introduces not only an additional source of vibration but also a means of control and of changing the structure's architecture. The thesis focuses on self-adaptive active vibration control that automatically maintains the performance and stability of time-varying structures; it must therefore do without knowledge of the causes and details of the changes. The proposed method relies on a modal formulation that limits the number of control components and targets the modes to be controlled, thereby limiting the control energy. It is thus necessary to reconstruct the modal model characteristics needed to update the control, while fixing only a model structure. Because it does not depend on the causes of structural change and uses only a model structure, the method can be generalised to any application in structural mechanics. The method is based on an identifier exploiting both the excitation and the response of the structure, and it takes into account the limits imposed by the controller. The model forms the link that must be established between identifier and controller to allow updating. Moreover, a compromise between the vibration-attenuation objective and the identification performance is necessary because of the identification/control coupling that appears in the closed loop; this compromise is also conditioned by the hardware used. The method is demonstrated on a discrete time-varying structure exhibiting a mode-shape inversion during its evolution, which destabilises a fixed controller. The choices made to address the compromises above led to a classical controller (LQG) and an identifier based on the subspace method (N4SID). This application to a simple structure made it possible to characterise several physical limits: bandwidth, modal density, rate of change, etc. The proposed self-adaptive modal control proves robust in performance and effective when the update is systematic. A conditional variant, still based on analysing the structure's response, is finally proposed to optimise the update process so as to track the changes more efficiently.
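As a minimal sketch of the identification-then-update loop described above, the code below recomputes a discrete state-feedback gain each time a "new" model is available. The identifier is faked with a known, slowly drifting matrix (the thesis uses N4SID for identification and an LQG controller rather than the plain LQR shown here); all matrices are toy values.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Discrete LQR gain from the discrete algebraic Riccati equation."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def identified_model(k):
    """Stand-in for the identifier: the 'structure' slowly drifts with k."""
    A = np.array([[0.9 + 0.001 * k, 0.1],
                  [0.0,             0.8]])
    B = np.array([[0.0], [1.0]])
    return A, B

Q, R = np.eye(2), np.array([[1.0]])
x = np.array([[1.0], [0.5]])
for k in range(3):
    A, B = identified_model(k)    # re-identification step
    K = lqr_gain(A, B, Q, R)      # control-update step
    x = (A - B @ K) @ x           # one closed-loop step with the updated gain
    print(f"update {k}: gain={np.round(K.ravel(), 3)} "
          f"state norm={np.linalg.norm(x):.3f}")
```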
79

Définition d'un substrat computationnel bio-inspiré : déclinaison de propriétés de plasticité cérébrale dans les architectures de traitement auto-adaptatif / Design of a bio-inspired computing substrata : hardware plasticity properties for self-adaptive computing architectures

Rodriguez, Laurent 01 December 2015 (has links)
The increasing degree of on-chip parallelism that comes with ever-growing integration density raises a number of challenges, such as routing information, which runs into a data-bottleneck problem, and the sheer difficulty of exploiting massive, growing parallelism with modern computing paradigms, most of which derive from a sequential history. To relieve the designer of this complexity as much as possible, we follow a bio-inspired approach and define a new type of architecture based on the concept of self-adaptation. Mimicking brain plasticity, this architecture becomes able to adapt to its internal and external environment in a homeostatic way. It belongs to the family of embodied computing, because the computing substrate is no longer thought of as a black box programmed for a given task, but is shaped by its environment and by the applications it supports. In this work, we propose a model of self-organising neural map, DMADSOM (Distributed Multiplicative Activity-Dependent SOM), based on the principle of dynamic neural fields (DNF), to bring the concept of plasticity to the architecture. The originality of this model is that it adapts to the data of each stimulus without requiring continuity between consecutive stimuli; this behaviour generalises the application cases of this type of network, since the activity is still computed according to dynamic neural field theory. DNF networks are not directly portable to today's hardware technologies because of their dense connectivity, and we propose several solutions to this problem. The first is to minimise the connectivity and approximate the network's behaviour by learning on the remaining lateral connections, which shows good behaviour in some application cases. To go beyond these limitations, starting from the observation that when a signal propagates step by step over a grid topology, the propagation time represents the distance travelled, we also propose two methods that efficiently emulate the entire wide connectivity of the neural fields in a way close to hardware technologies. The first substrate computes the potentials transmitted over the network by successive iterations, letting the data propagate in all directions; with a particular weighting of the iterations, it can compute all the lateral potentials of the map in a minimum number of iterations. The second uses a spike representation of the potentials, which travel over the grid without cycles, and reconstructs all the lateral potentials over the propagation iterations; it is highly customisable and achieves very low complexity while still computing the lateral potentials. The network supported by these substrates is able to characterise the statistical densities of the data processed by the architecture and to control, in a distributed manner, the allocation of the computation cells.
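The first substrate described above — emulating the wide DNF lateral connectivity by letting potentials hop between grid neighbours over successive iterations, with propagation time standing in for distance — can be sketched as follows. The kernel, decay factor and grid size are invented; the thesis's particular iteration weighting is not reproduced.

```python
import numpy as np

def lateral_potential(activity, iterations=5, decay=0.6):
    """Approximate DNF lateral potentials with repeated neighbour-to-neighbour
    hops instead of a full all-to-all weight matrix. Farther cells contribute
    at later iterations and with smaller weight (toy weighting)."""
    pot = activity.copy()
    spread = activity.copy()
    for _ in range(iterations):
        # one hop over the 4-neighbourhood, zero padding at the borders
        padded = np.pad(spread, 1)
        spread = decay * 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1]
                                 + padded[1:-1, :-2] + padded[1:-1, 2:])
        pot += spread
    return pot

activity = np.zeros((7, 7))
activity[3, 3] = 1.0   # a single active cell
print(np.round(lateral_potential(activity), 3))
```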
80

Conception sûre et optimale de systèmes dynamiques critiques auto-adaptatifs soumis à des événements redoutés probabilistes / Safe and optimal design of dynamical, critical self-adaptive systems subject to probabilistic undesirable events

Sprauel, Jonathan 19 February 2016 (has links)
This study takes place in the broad field of Artificial Intelligence, specifically at the intersection of two domains: automated planning and formal verification in probabilistic environments. In this context, it raises the question of mastering the complexity entailed by integrating new technologies into critical systems: how can we guarantee that adding intelligence to a system, in the form of autonomy, does not come at the expense of safety? To address this issue, this study aims to develop a tool-supported process for designing critical self-adaptive systems, which brings together formal modelling methods for engineering knowledge and algorithms for safe and optimal planning of the system's decisions.
