51 |
Multidisciplinary Dynamic System Design Optimization of Hybrid Electric Vehicle Powertrains. Houshmand, Arian. January 2016 (has links)
No description available.
|
52 |
Decentralized estimation and control for large-scale systems: application to electrical networks. Bel Haj Frej, Ghazi. 30 September 2017 (has links)
This thesis focuses on decentralized estimation and control for large-scale systems. The objective is to develop software sensors that produce reliable estimates of the variables needed to stabilize interconnected nonlinear systems. Decomposing such a large system into a set of n interconnected subsystems is a prerequisite for model simplification. Then, taking into account the nature of each subsystem as well as the interconnection functions, observer-based decentralized control laws are synthesized. Each control law is associated with a subsystem and stabilizes it locally, so that the stability of the overall system is ensured. The existence of observer and controller gain matrices stabilizing the system depends on the feasibility of an LMI optimization problem. The LMI formulation, based on the Lyapunov approach, is derived by applying the differential mean value theorem (DMVT) to the nonlinear interconnection functions, which are assumed to be bounded and uncertain; this yields non-restrictive synthesis conditions. Observer-based decentralized control schemes are proposed for nonlinear interconnected systems in both continuous and discrete time, and robust H-infinity decentralized controllers are provided for interconnected nonlinear systems subject to perturbations and parametric uncertainties. The effectiveness of the proposed schemes is verified through simulation of a power system model with three interconnected generators.
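As a hedged illustration of the observer-based stabilization idea described in this abstract (the thesis derives the gains from LMI feasibility conditions; the sketch below substitutes simple pole placement on an invented two-state subsystem), one can simulate a Luenberger observer driving a state-feedback loop:

```python
import numpy as np
from scipy.signal import place_poles
from scipy.integrate import solve_ivp

# Toy 2-state subsystem (illustrative values, not from the thesis)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# State-feedback and observer gains by pole placement (stand-in for the LMIs)
K = place_poles(A, B, [-3.0, -4.0]).gain_matrix
L = place_poles(A.T, C.T, [-6.0, -7.0]).gain_matrix.T

def dynamics(t, z):
    x, e = z[:2], z[2:]
    u = -K @ (x - e)          # control acts on the estimate xhat = x - e
    dx = A @ x + B @ u
    de = (A - L @ C) @ e      # separation principle: error dynamics are autonomous
    return np.concatenate([dx.ravel(), de.ravel()])

sol = solve_ivp(dynamics, (0, 10), [1.0, 0.0, 0.5, -0.5], rtol=1e-8)
print(np.abs(sol.y[:, -1]).max() < 1e-3)  # both state and estimation error decay
```

The separation between controller and observer-error dynamics mirrors the role of the local observer-based law in each subsystem; the LMI/DMVT machinery of the thesis replaces the pole-placement step when the interconnections are nonlinear and uncertain.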
|
53 |
Scalable analysis of stochastic process algebra models. Tribastone, Mirco. January 2010 (has links)
The performance modelling of large-scale systems using discrete-state approaches is fundamentally hampered by the well-known problem of state-space explosion, which causes exponential growth of the reachable state space as a function of the number of components that constitute the model. Because they are mapped onto continuous-time Markov chains (CTMCs), models described in the stochastic process algebra PEPA are no exception. This thesis presents a deterministic continuous-state semantics of PEPA which employs ordinary differential equations (ODEs) as the underlying mathematics for the performance evaluation. This is suitable for models consisting of large numbers of replicated components, as the ODE problem size is insensitive to the actual population levels of the system under study. Furthermore, the ODE is given an interpretation as the fluid limit of a properly defined CTMC model when the initial population levels go to infinity. This framework allows the use of existing results which give error bounds to assess the quality of the differential approximation. Performance indices such as throughput, utilisation, and average response time are interpreted deterministically as functions of the ODE solution and are related to corresponding reward structures in the Markovian setting. The differential interpretation of PEPA provides a framework that is conceptually analogous to established approximation methods in queueing networks based on mean-value analysis, as both approaches aim at reducing the computational cost of the analysis by providing estimates for the expected values of the performance metrics of interest. The relationship between these two techniques is examined in more detail in a comparison between PEPA and the Layered Queueing Network (LQN) model. General patterns of translation of LQN elements into corresponding PEPA components are applied to a substantial case study of a distributed computer system.
This model is analysed using stochastic simulation to gauge the soundness of the translation. Furthermore, it is subjected to a series of numerical tests to compare execution runtimes and accuracy of the PEPA differential analysis against the LQN mean-value approximation method. Finally, this thesis discusses the major elements concerning the development of a software toolkit, the PEPA Eclipse Plug-in, which offers a comprehensive modelling environment for PEPA, including modules for static analysis, explicit state-space exploration, numerical solution of the steady-state equilibrium of the Markov chain, stochastic simulation, the differential analysis approach herein presented, and a graphical framework for model editing and visualisation of performance evaluation results.
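The fluid-limit idea can be sketched on a toy client-server population model (invented rates and populations, not a PEPA model): the ODE for the expected number of waiting clients tracks the mean of the underlying CTMC, simulated here with Gillespie's algorithm:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
N, S, lam, mu = 200, 50, 1.0, 5.0   # clients, servers, think rate, service rate (toy values)

# Fluid ODE for w(t) = expected number of waiting/served clients
ode = lambda t, w: lam * (N - w) - mu * np.minimum(w, S)
fluid = solve_ivp(ode, (0, 5), [0.0])

# One Gillespie (SSA) trajectory of the underlying CTMC, sampled at time T
def ssa(T=5.0):
    t, w = 0.0, 0
    while True:
        r_arr, r_srv = lam * (N - w), mu * min(w, S)
        t += rng.exponential(1.0 / (r_arr + r_srv))
        if t >= T:
            return w
        w += 1 if rng.random() < r_arr / (r_arr + r_srv) else -1

samples = [ssa() for _ in range(50)]
print(abs(np.mean(samples) - fluid.y[0, -1]) / N < 0.05)  # fluid limit tracks the CTMC mean
```

The ODE has a single state variable whatever the population size, which is the scalability argument made in the abstract; the error bounds mentioned there quantify how fast the stochastic trajectories concentrate around this deterministic limit as N grows.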
|
54 |
Architecture design for a large-scale wireless sensor network. Koné, Cheick Tidjane. 18 October 2011 (has links)
This thesis considers large-scale wireless sensor networks (LSWSNs) on the order of a million nodes. The questions addressed are: how can the correct operation and the performance of such a network be predicted before deployment, given that no simulator can handle networks of more than 100,000 nodes? How should the network be configured to guarantee performance, scalability, robustness and longevity? The solution proposed in this thesis is based on a two-tiered heterogeneous WSN architecture in which the lower tier is composed of sensors and the upper tier of collectors. The first contribution is a multi-channel self-organization algorithm that partitions the lower tier into several disjoint sub-networks, with one collector and one frequency channel per sub-network, while respecting the principle of frequency reuse. The second contribution is the optimization of collector deployment, since the number of collectors determines the number of sub-networks. The problems addressed are the optimization of sink locations for a predetermined number of sinks, and the minimization of the number of sinks (or of the financial cost) for a predetermined number of hops in the sub-networks. An intuitive and appropriate solution that balances network performance and cost is to partition the lower tier into sub-networks balanced in hop count. To this end, the sinks are placed on a regular geographical grid (square, triangular, etc.). Theoretical studies and simulations of the topology models show, depending on application requirements (node density, traffic load) and physical constraints (radio range, surveillance zone), how to choose and compute the best deployment solutions.
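A back-of-the-envelope sketch of the square-grid sizing problem described above (all figures invented for illustration): with sinks at cell centres, the worst-placed sensor sits at a cell corner, which bounds the admissible cell side for a given hop budget:

```python
import math

def sinks_for_square_grid(L, radio_range, max_hops):
    """Number of sinks on a square grid so every sensor reaches its sink
    within max_hops (worst case: cell corner to cell centre)."""
    # farthest sensor in a cell of side d is d*sqrt(2)/2 from the sink,
    # and each hop covers at most radio_range, so d <= max_hops * r * sqrt(2)
    d = max_hops * radio_range * math.sqrt(2)
    per_side = math.ceil(L / d)
    return per_side ** 2

# 10 km x 10 km zone, 100 m radio range, at most 5 hops per sub-network
print(sinks_for_square_grid(10_000, 100, 5))  # -> 225
```

This is only the geometric lower layer of the problem; the thesis additionally weighs financial cost against hop balance and compares square against triangular and other grid topologies.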
|
55 |
Design, optimization and validation of start-up sequences of energy production systems. Tica, Adrian. 01 June 2012 (links) (PDF)
This thesis focuses on the application of model predictive control (MPC) approaches to optimize the start-up of combined-cycle power plants. Start-up optimization is a difficult problem that poses significant challenges. The development of the proposed approaches is progressive. In the first part, a physical model of the plant is developed and adapted for optimization purposes, using a methodology that transforms Modelica model components into optimization-oriented models; applying this methodology, a library suitable for optimization has been built. In the second part, based on the developed model, an optimization procedure is proposed to improve the performance of the start-up phases. It optimizes, in continuous time, the load profile of the turbines by searching within specific sets of functions: the optimal profile is assumed to be described by a parameterized function whose parameters are computed by solving a constrained optimal control problem. In the last part, the open-loop optimization procedure is integrated into a receding-horizon control strategy. This strategy is robust against perturbations and model errors, and improves the trade-off between computation time and optimality of the solution. Nevertheless, the approach leads to a significant computation time; to obtain real-time implementable results, a hierarchical model predictive control structure with two layers, working at different time scales and over different prediction horizons, is proposed.
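The receding-horizon principle can be reduced to a minimal sketch (a toy thermal-stress proxy, not the Modelica plant model of the thesis): at every step the steepest admissible ramp slope for the remaining load increase is recomputed against a perturbed constraint:

```python
import numpy as np

# Receding-horizon sketch: pick the steepest ramp slope such that a
# thermal-stress proxy stress = k * slope stays below stress_max.
# All values are invented for illustration.
k, stress_max, target, dt = 2.0, 10.0, 100.0, 1.0

load, history = 0.0, []
rng = np.random.default_rng(1)
for step in range(60):
    if load >= target:
        break
    limit = stress_max * (1 - 0.2 * rng.random())   # perturbed constraint
    slope = limit / k                               # time-optimal: constraint active
    load = min(target, load + slope * dt)
    history.append(load)

print(load == target, len(history))
```

Re-solving at every step is what makes the scheme robust to the perturbed limit, which is the point made in the abstract; the thesis replaces the one-parameter ramp with a parameterized profile optimized over a full prediction horizon.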
|
56 |
A Design Space Exploration Process for Large Scale, Multi-Objective Computer Simulations. Zentner, John Marc. 07 July 2006
The primary contributions of this thesis are associated with the development of a new method for exploring the relationships between inputs and outputs of large-scale computer simulations. The proposed design space exploration procedure uses a hierarchical partitioning method to help mitigate the curse of dimensionality often associated with the analysis of large-scale systems. Closely coupled with the use of a partitioning approach is the problem of how to partition the system. This thesis therefore also introduces and discusses a quantitative method developed to help the user find a set of good partitions for creating partitioned metamodels of large-scale systems.
The new hierarchically partitioned metamodeling scheme, the lumped parameter model (LPM), was developed to address two primary limitations of current partitioning methods for large-scale metamodeling. First, the LPM was formulated to remove the need to rely on variable redundancies between partitions to account for potentially important interactions. By using a hierarchical structure, the LPM addresses the impact of neglected direct interactions by accounting for them indirectly, via the interactions that occur between the lumped parameters in intermediate- to top-level mappings. Second, the LPM was developed to allow hierarchical modeling of black-box analyses that lack intermediaries around which to partition the system.
The second contribution of this thesis is a graph-based partitioning method for large-scale, black-box systems. It combines the graph and sparse-matrix decomposition methods used in the electrical engineering community with the results of a screening test to create a quantitative method for partitioning such systems. An ANOVA of the screening-test results can be used to determine the sparsity structure of the large-scale system; with this information known, the sparse-matrix and graph-theoretic partitioning schemes can then be used to create candidate sets of partitions for use with the lumped parameter model.
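A hedged sketch of the screening-then-partitioning idea (the interaction matrix below is invented; the thesis derives it from ANOVA of an actual screening test): thresholding weak interactions and reading partitions off the resulting sparse graph:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Hypothetical interaction strengths between 6 inputs of a black-box
# simulation (values invented for illustration)
S = np.array([
    [0.0, 0.9, 0.0, 0.0, 0.0, 0.0],
    [0.9, 0.0, 0.8, 0.0, 0.0, 0.0],
    [0.0, 0.8, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.7, 0.0],
    [0.0, 0.0, 0.0, 0.7, 0.0, 0.6],
    [0.0, 0.0, 0.0, 0.0, 0.6, 0.0],
])

# Keep only strong interactions, then read partitions off the sparse graph
adj = csr_matrix(S > 0.5)
n_parts, labels = connected_components(adj, directed=False)
print(n_parts, labels.tolist())  # -> 2 [0, 0, 0, 1, 1, 1]
```

Each connected component becomes a candidate partition for a sub-metamodel; inputs in different components interact only weakly, which is exactly the sparsity the screening test is meant to reveal.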
|
57 |
Distributed learning in large populations. Fox, Michael Jacob. 14 August 2012
Distributed learning is the iterative process of decision-making in the presence of other decision-makers. In recent years, researchers across fields as disparate as engineering, biology, and economics have identified mathematically congruous problem formulations at the intersection of their disciplines. In particular, stochastic processes, game theory, and control theory have been brought to bear on certain very basic and universal questions. What sorts of environments are conducive to distributed learning? Are there any generic algorithms offering non-trivial performance guarantees for a large class of models?
The first half of this thesis makes contributions to two particular problems in distributed learning, self-assembly and language. Self-assembly refers to the emergence of high-level structures via the aggregate behavior of simpler building blocks. A number of algorithms have been suggested that are capable of generic self-assembly of graphs. That is, given a description of the objective they produce a policy with a corresponding performance guarantee. These guarantees have been in the form of deterministic convergence results. We introduce the notion of stochastic stability to the self-assembly problem. The stochastically stable states are the configurations the system spends almost all of its time in as a noise parameter is taken to zero. We show that in this framework simple procedures exist that are capable of self-assembly of any tree under stringent locality constraints. Our procedure gives an asymptotically maximum yield of target assemblies while obeying communication and reversibility constraints. We also present a slightly more sophisticated algorithm that guarantees maximum yields for any problem size. The latter algorithm utilizes a somewhat more presumptive notion of agents' internal states. While it is unknown whether an algorithm providing maximum yields subject to our constraints can depend only on the more parsimonious form of internal state, we are able to show that such an algorithm would not be able to possess a unique completing rule--- a useful feature for analysis.
We then turn our attention to the problem of distributed learning of communication protocols, or, language. Recent results for signaling game models establish the non-negligible possibility of convergence, under distributed learning, to states of unbounded efficiency loss. We provide a tight lower bound on efficiency and discuss its implications. Moreover, motivated by the empirical phenomenon of linguistic drift, we study the signaling game under stochastic evolutionary dynamics. We again make use of stochastic stability analysis and show that the long-run distribution of states has support limited to the efficient communication systems. We find that this behavior is insensitive to the particular choice of evolutionary dynamic, a fact that is intuitively captured by the game's potential function corresponding to average fitness. Consequently, the model supports conclusions similar to those found in the literature on language competition. That is, we expect monomorphic language states to eventually predominate. Homophily has been identified as a feature that potentially stabilizes diverse linguistic communities. We find that incorporating homophily in our stochastic model gives mixed results. While the monomorphic prediction holds in the small noise limit, diversity can persist at higher noise levels or as a metastable phenomenon.
The contributions of the second half of this thesis relate to more basic issues in distributed learning. In particular, we provide new results on the problem of distributed convergence to Nash equilibrium in finite games. A recently proposed class of games known as stable games have the attractive property of admitting global convergence to equilibria under many learning dynamics. We show that stable games can be formulated as passive input-output systems. This observation enables us to identify passivity of a learning dynamic as a sufficient condition for global convergence in stable games. Notably, dynamics satisfying our condition need not exhibit positive correlation between the payoffs and their directions of motion. We show that our condition is satisfied by the dynamics known to exhibit global convergence in stable games. We give a decision-theoretic interpretation for passive learning dynamics that mirrors the interpretation of stable games as strategic environments exhibiting self-defeating externalities. Moreover, we exploit the flexibility of the passivity condition to study the impact of applying various forecasting heuristics to the payoffs used in the learning process. Finally, we show how passivity can be used to identify strategic tendencies of the players that allow for convergence in the presence of information lags of arbitrary duration in some games.
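The stochastic-stability analysis used above for self-assembly and signaling can be sketched on a toy two-state perturbed chain (transition rates invented for illustration): as the noise parameter goes to zero, the stationary distribution concentrates on the state that takes more "mutations" to leave:

```python
import numpy as np

def stationary(P):
    # left eigenvector of P for eigenvalue 1, normalised to a distribution
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Toy perturbed dynamic over {inefficient, efficient} conventions: leaving
# the efficient one needs two independent mutations (rate eps**2), leaving
# the inefficient one needs only one (rate eps).
def chain(eps):
    return np.array([[1 - eps,  eps],
                     [eps**2,   1 - eps**2]])

for eps in (0.1, 0.01, 0.001):
    pi = stationary(chain(eps))
    print(round(pi[1], 4))   # mass on the efficient state -> 1 as eps -> 0
```

The efficient state here is the unique stochastically stable state: it carries almost all stationary mass in the small-noise limit, which is the form of the conclusions drawn in the thesis for maximum-yield assemblies and efficient communication systems.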
|
58 |
Efficient large electromagnetic simulation based on hybrid TLM and modal approach on grid computing and supercomputers. Alexandru, Mihai. 14 December 2012 (has links)
In the context of information and communications technology (ICT), a major challenge is to create ever smaller systems embedding ever more intelligence, in hardware and software, with increasingly complex communicating architectures. This requires robust design methodologies to shorten the development cycle and the prototyping phase; the design and optimization of the physical communication layer is therefore paramount. The complexity of these systems makes them difficult to optimize, because of the explosion in the number of unknown parameters, and the methods and tools developed in past years will eventually be inadequate for the problems that lie ahead. Communicating objects will very often be integrated into cluttered environments containing metal structures and dielectrics of sizes larger or smaller than the wavelength; the designer must anticipate the presence of such obstacles in the propagation channel to establish correct link budgets and an optimal design of the communicating object. For example, wave propagation in an airplane cabin, from sensors or an antenna towards the cockpit, is greatly affected by the metal structure of the seats inside the cabin, and even by the passengers; this perturbation must be taken into account to predict correctly the power budget between the antenna and a possible receiver. This work addresses theoretical and computational electromagnetics in order to propose software tools for the rigorous calculation of electromagnetic scattering inside very large structures, or of antenna radiation near oversized objects. Such a calculation involves the numerical solution of very large systems that are inaccessible to traditional resources; the solution is based on grid computing and supercomputers. The aim of this work is the electromagnetic modeling of oversized structures by several numerical methods, using new hardware and software resources to carry out high-performance computations. The numerical modeling is based on a hybrid approach that combines the Transmission-Line Matrix (TLM) method with the mode-matching approach: the former is applied to homogeneous volumes, while the latter describes complex planar structures. To accelerate the simulation, a parallel implementation of the TLM algorithm in a distributed computing paradigm is proposed: the subdomain of the structure discretized with TLM is divided into several parts called tasks, each computed in parallel by different processors, which communicate during the simulation through a message-passing library. An extension of the modal approach to several modes has been developed to handle planar structures of increasing complexity. The results demonstrate the benefits of grid computing combined with the hybrid approach for solving electrically large structures, by matching the size of the problem to the number of computing resources used. The study highlights the role of the parallelization scheme, cluster versus grid, with respect to the size of the problem and its distribution. Moreover, a model has been developed to predict computing performance on the grid, based on a hybrid approach that combines a history-based prediction with a prediction derived from the application profile; the predicted values are in good agreement with the measured values. The analysis of the simulation performance has made it possible to extract practical rules for estimating the resources required for a given problem. Using all these tools, the propagation of the electromagnetic field inside a complex oversized structure, such as an airplane cabin, has been computed both on the grid and on a supercomputer. The advantages and disadvantages of the two environments are discussed.
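The task-decomposition scheme can be sketched in miniature (a 1D leapfrog wave stencil standing in for the 3D TLM mesh, with explicit halo copies standing in for the message-passing library): the decomposed run reproduces the monolithic one exactly:

```python
import numpy as np

# 1D wave equation, leapfrog stencil, split into two "tasks" that exchange
# one halo cell per step (illustrative stand-in for the TLM tasks).
N, steps, c = 64, 40, 0.5
u_prev = np.exp(-0.1 * (np.arange(N) - N // 2) ** 2)  # initial pulse
u = u_prev.copy()

def step(u, u_prev):
    new = u.copy()
    new[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + c**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return new

# Reference: whole domain on one "processor"
a, b = u.copy(), u_prev.copy()
for _ in range(steps):
    a, b = step(a, b), a

# Decomposed: two halves with one-cell halos, exchanged before every step
h = N // 2
L_u, L_p = u[:h + 1].copy(), u_prev[:h + 1].copy()   # task 0 owns cells [0, h)
R_u, R_p = u[h - 1:].copy(), u_prev[h - 1:].copy()   # task 1 owns cells [h, N)
for _ in range(steps):
    L_u[-1], R_u[0] = R_u[1], L_u[-2]                # halo exchange ("messages")
    L_u, L_p = step(L_u, L_p), L_u
    R_u, R_p = step(R_u, R_p), R_u

glued = np.concatenate([L_u[:-1], R_u[1:]])
print(np.allclose(glued, a))  # decomposition reproduces the monolithic run
```

Only the one-cell interface values cross task boundaries each step, which is why the scheme scales: communication volume grows with the interface area, not with the subdomain volume.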
|
59 |
Macroscopic Analysis of Large-scale Systems: Epistemic Emergence and Spatiotemporal Aggregation. Lamarche-Perrin, Robin. 14 October 2013 (has links)
L'analyse des systèmes de grande taille est confrontée à des difficultés d'ordre syntaxique et sémantique : comment observer un million d'entités distribuées et asynchrones ? Comment interpréter le désordre résultant de l'observation microscopique de ces entités ? Comment produire et manipuler des abstractions pertinentes pour l'analyse macroscopique des systèmes ? Face à l'échec de l'approche analytique, le concept d'émergence épistémique - relatif à la nature de la connaissance - nous permet de définir une stratégie d'analyse alternative, motivée par le constat suivant : l'activité scientifique repose sur des processus d'abstraction fournissant des éléments de description macroscopique pour aborder la complexité des systèmes. Cette thèse s'intéresse plus particulièrement à la production d'abstractions spatiales et temporelles par agrégation de données. Afin d'engendrer des représentations exploitables lors du passage à l'échelle, il apparaît nécessaire de contrôler deux aspects essentiels du processus d'abstraction. Premièrement, la complexité et le contenu informationnel des représentations macroscopiques doivent être conjointement optimisés afin de préserver les détails pertinents pour l'observateur, tout en minimisant le coût de l'analyse. Nous proposons des mesures de qualité (critères internes) permettant d'évaluer, de comparer et de sélectionner les représentations en fonction du contexte et des objectifs de l'analyse. Deuxièmement, afin de conserver leur pouvoir explicatif, les abstractions engendrées doivent être cohérentes avec les connaissances mobilisées par l'observateur lors de l'analyse. Nous proposons d'utiliser les propriétés organisationnelles, structurelles et topologiques du système (critères externes) pour contraindre le processus d'agrégation et pour engendrer des représentations viables sur les plans syntaxique et sémantique. 
Par conséquent, l'automatisation du processus d'agrégation nécessite de résoudre un problème d'optimisation sous contraintes. Nous proposons dans cette thèse un algorithme de résolution générique, s'adaptant aux critères formulés par l'observateur. De plus, nous montrons que la complexité de ce problème d'optimisation dépend directement de ces critères. L'approche macroscopique défendue dans cette thèse est évaluée sur deux classes de systèmes. Premièrement, le processus d'agrégation est appliqué à la visualisation d'applications parallèles de grande taille pour l'analyse de performance. Il permet de détecter les anomalies présentes à plusieurs niveaux de granularité dans les traces d'exécution et d'expliquer ces anomalies à partir des propriétés syntaxiques du système. Deuxièmement, le processus est appliqué à l'agrégation de données médiatiques pour l'analyse des relations internationales. L'agrégation géographique et temporelle de l'attention médiatique permet de définir des évènements macroscopiques pertinents sur le plan sémantique pour l'analyse du système international. Pour autant, nous pensons que l'approche et les outils présentés dans cette thèse peuvent être généralisés à de nombreux autres domaines d'application. / The analysis of large-scale systems faces syntactic and semantic difficulties: How to observe millions of distributed and asynchronous entities? How to interpret the disorder that results from the microscopic observation of such entities? How to produce and handle relevant abstractions for the systems' macroscopic analysis? Faced with the failure of the analytic approach, the concept of epistemic emergence - related to the nature of knowledge - allows us to define an alternative strategy. This strategy is motivated by the observation that scientific activity relies on abstraction processes that provide macroscopic descriptions to broach the systems' complexity. 
This thesis is more specifically interested in the production of spatial and temporal abstractions through data aggregation. In order to generate scalable representations, the control of two essential aspects of the aggregation process is necessary. Firstly, the complexity and the information content of macroscopic representations should be jointly optimized in order to preserve the relevant details for the observer, while minimizing the cost of the analysis. We propose several measures of quality (internal criteria) to evaluate, compare and select the representations depending on the context and the objectives of the analysis. Secondly, in order to preserve their explanatory power, the generated abstractions should be consistent with the background knowledge exploited by the observer for the analysis. We propose to exploit the systems' organisational, structural and topological properties (external criteria) to constrain the aggregation process and to generate syntactically and semantically consistent representations. Consequently, the automation of the aggregation process requires solving a constrained optimization problem. We propose a generic algorithm that adapts to the criteria expressed by the observer. Furthermore, we show that the complexity of this optimization problem directly depend on these criteria. The macroscopic approach supported by this thesis is evaluated on two classes of systems. Firstly, the aggregation process is applied to the visualisation of large-scale distributed applications for performance analysis. It allows the detection of anomalies at several scales in the execution traces and the explanation of these anomalies according to the system syntactic properties. Secondly, the process is applied to the aggregation of news for the analysis of international relations. The geographical and temporal aggregation of media attention allows the definition of semantically consistent macroscopic events for the analysis of the international system. 
Furthermore, we believe that the approach and the tools presented in this thesis can be extended to a wider class of application domains.
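The joint optimization described in the abstract (complexity of the macroscopic representation traded off against its information content, under a structural contiguity constraint) can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the thesis's algorithm: it partitions a one-dimensional series into contiguous segments by dynamic programming, where each segment costs a fixed complexity penalty `lam` plus its within-segment information loss (sum of squared deviations from the segment mean).

```python
from itertools import accumulate

def aggregate(series, lam):
    """Partition `series` into contiguous segments, trading off
    representation complexity (segment count, weighted by `lam`)
    against information loss (within-segment squared deviation).
    Returns the optimal list of (start, end) index pairs."""
    n = len(series)
    # Prefix sums allow O(1) evaluation of any segment's loss.
    ps = [0.0] + list(accumulate(series))
    ps2 = [0.0] + list(accumulate(x * x for x in series))

    def loss(i, j):
        # Information lost by merging series[i:j] into its mean.
        s, s2, m = ps[j] - ps[i], ps2[j] - ps2[i], j - i
        return s2 - s * s / m

    best = [0.0] * (n + 1)  # best[j] = optimal cost of series[:j]
    cut = [0] * (n + 1)     # cut[j] = start of the last segment
    for j in range(1, n + 1):
        best[j], cut[j] = min(
            (best[i] + loss(i, j) + lam, i) for i in range(j)
        )
    # Backtrack the segment boundaries.
    segs, j = [], n
    while j > 0:
        segs.append((cut[j], j))
        j = cut[j]
    return segs[::-1]
```

With a small penalty, homogeneous runs are merged while the salient change-point is preserved: `aggregate([1, 1, 1, 9, 9, 9], 0.5)` yields the two segments `[(0, 3), (3, 6)]`, whereas a large penalty forces a single coarse segment.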
|
60 |
Contribution à la coordination de commandes MPC pour systèmes distribués appliquée à la production d'énergie / Contribution to MPC coordination of distributed and power generation systems
Sandoval Moreno, John Anderson 28 November 2014 (has links)
Cette thèse porte principalement sur la coordination des systèmes distribués, avec une attention particulière pour les systèmes de production d'électricité multi-énergies. Aux fins de l'optimalité, ainsi que du respect des contraintes, la commande prédictive (MPC, Model Predictive Control) est choisie comme outil sous-jacent, tandis que les éoliennes, piles à combustible, panneaux photovoltaïques et centrales hydroélectriques sont considérés comme les sources d'énergie à contrôler et à coordonner. En premier lieu, une application de la commande MPC à un microréseau électrique est proposée, illustrant comment assurer une performance appropriée pour chaque unité de génération et de soutien. Dans ce contexte, une attention particulière est accordée à la production de puissance maximale par une éolienne, au moyen d'une commande originale basée sur un observateur lorsque la mesure de la vitesse du vent n'est pas disponible. Ensuite, les principes de la commande distribuée coordonnée, dans une formulation à base de MPC, sont considérés dans le contexte des systèmes de grande taille. Ici, une nouvelle approche de coordination par les prix avec contraintes est proposée pour la gestion des contrôleurs MPC locaux, chacun d'eux étant typiquement associé à une unité de génération. En outre, le calcul des ensembles invariants est utilisé pour l'analyse des performances du système en boucle fermée, à la fois pour le schéma MPC centralisé et pour celui de coordination par les prix. Finalement, deux cas d'étude dans le contexte des systèmes de génération d'électricité sont inclus, illustrant la pertinence de la stratégie de commande coordonnée proposée. / This thesis is mainly about the coordination of distributed systems, with special attention to multi-energy electric power generation ones.
For purposes of optimality, as well as constraint enforcement, Model Predictive Control (MPC) is chosen as the underlying tool, while wind turbines, fuel cells, photovoltaic panels, and hydroelectric plants are mostly considered as the power sources to be controlled and coordinated. In the first place, an application of MPC to a micro-grid system is proposed, illustrating how to ensure appropriate performance for each generation and support unit. In this context, special attention is paid to maximum power production by a wind turbine, via an original observer-based control when no wind speed measurement is available. Then, the principles of distributed-coordinated control, under an MPC-based formulation, are considered in the context of larger-scale systems. Here, a new approach to price-driven coordination with constraints is proposed for the management of local MPC controllers, each of them typically associated with one power generation unit. In addition, the computation of invariant sets is used for the performance analysis of the closed-loop control system, for both the centralized MPC and the price-driven coordination schemes. Finally, a couple of case studies in the field of power generation systems are included, illustrating the relevance of the proposed coordination control strategy.
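The price-driven coordination scheme summarized above can be sketched in a few lines. The example below is a deliberately simplified stand-in, not the thesis's method: each "local MPC controller" is reduced to a one-step quadratic cost, and a coordinator adjusts a single price by subgradient iteration until the decentralised responses satisfy the coupling constraint (total production equals demand).

```python
def local_response(price, a, u_max):
    """Local controller's reaction to the announced price: it minimises
    its own cost a*u**2 - price*u (a one-step stand-in for a full MPC
    problem), subject to 0 <= u <= u_max.  The unconstrained optimum
    is u = price / (2*a), then clipped to the admissible range."""
    return min(max(price / (2 * a), 0.0), u_max)

def coordinate(a, u_max, demand, step=0.5, iters=200):
    """Coordinator: raise the price when total production falls short
    of demand, lower it when production exceeds demand, letting the
    units re-optimise locally at each iteration."""
    price = 0.0
    for _ in range(iters):
        u = [local_response(price, ai, mi) for ai, mi in zip(a, u_max)]
        price += step * (demand - sum(u))  # subgradient price update
    return price, u
```

For two units with costs `u**2` and `2*u**2` and a demand of 3, the iteration settles at the market-clearing price 4, splitting production as `[2.0, 1.0]`; the cheaper unit takes the larger share, which is the intended behaviour of the coordination mechanism.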
|