  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Design, optimization and validation of start-up sequences of energy production systems.

Tica, Adrian 01 June 2012 (has links) (PDF)
This thesis focuses on the application of model predictive control approaches to optimize the start-up of combined cycle power plants. In general, start-up optimization is a difficult problem that poses significant challenges. The proposed approaches are developed progressively. In the first part, a physical model of the plant is developed and adapted to optimization purposes, using a methodology that transforms Modelica model components into optimization-oriented models. By applying this methodology, a library suitable for optimization purposes has been built. In the second part, based on the developed model, an optimization procedure is proposed to improve the performance of the start-up phases. The proposed solution optimizes, in continuous time, the load profiles of the turbines by searching within specific sets of functions. The optimal profile is derived by assuming that it can be described by a parameterized function whose parameters are computed by solving a constrained optimal control problem. In the last part, the open-loop optimization procedure is integrated into a receding horizon control strategy. This strategy is robust against perturbations and model errors, and improves the trade-off between computation time and optimality of the solution. Nevertheless, the control approach leads to significant computation times. In order to obtain real-time implementable results, a hierarchical model predictive control structure with two layers, working at different time scales and over different prediction horizons, is proposed.
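The receding-horizon idea summarized in this abstract can be sketched in a few lines. This is a toy illustration only: the plant model, cost function, candidate ramp rates, and limits below are invented, not the combined-cycle model or the parameterized load profiles of the thesis.

```python
# Hedged sketch: receding-horizon optimization of a turbine load profile.
# Model, cost, and constraint values are illustrative, not from the thesis.

def predict(load, rate, horizon):
    """Predict the load trajectory under a constant ramp rate."""
    return [min(100.0, load + rate * k) for k in range(1, horizon + 1)]

def cost(trajectory, target=100.0):
    """Penalize distance from full load over the prediction horizon."""
    return sum((target - x) ** 2 for x in trajectory)

def receding_horizon(load=0.0, horizon=10, max_rate=5.0, steps=40):
    """At each step, re-optimize the ramp rate over the horizon and
    apply only the first control move (the receding-horizon principle)."""
    candidates = [0.5 * i for i in range(int(2 * max_rate) + 1)]
    history = [load]
    for _ in range(steps):
        # Constraint: ramp rate bounded (a proxy for thermal-stress limits).
        best = min(candidates, key=lambda r: cost(predict(load, r, horizon)))
        load = min(100.0, load + best)   # apply first control move only
        history.append(load)
    return history

profile = receding_horizon()
```

Re-optimizing at every step, rather than committing to the open-loop profile, is what gives robustness against perturbations and model errors.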
52

A Design Space Exploration Process for Large Scale, Multi-Objective Computer Simulations

Zentner, John Marc 07 July 2006 (has links)
The primary contributions of this thesis are associated with the development of a new method for exploring the relationships between inputs and outputs of large scale computer simulations. The proposed design space exploration procedure uses a hierarchical partitioning method to help mitigate the curse of dimensionality often associated with the analysis of large scale systems. Closely coupled with the use of a partitioning approach is the problem of how to partition the system. This thesis therefore also introduces and discusses a quantitative method developed to aid the user in finding a set of good partitions for creating partitioned metamodels of large scale systems. The new hierarchically partitioned metamodeling scheme, the lumped parameter model (LPM), was developed to address two primary limitations of current partitioning methods for large scale metamodeling. First, the LPM was formulated to remove the need to rely on variable redundancies between partitions to account for potentially important interactions. By using a hierarchical structure, the LPM addresses the impact of neglected direct interactions by accounting for them indirectly through the interactions that occur between the lumped parameters in intermediate- to top-level mappings. Second, the LPM was developed to allow hierarchical modeling of black-box analyses that offer no intermediaries around which to partition the system. The second contribution of this thesis is a graph-based partitioning method for large scale, black-box systems. It combines the graph and sparse matrix decomposition methods used by the electrical engineering community with the results of a screening test to create a quantitative method for partitioning large scale, black-box systems. An ANOVA of the screening test results can be used to determine the sparsity structure of the large scale system.
With this information known, the sparse matrix and graph theoretic partitioning schemes can then be used to create potential sets of partitions to use with the lumped parameter model.
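The graph-based partitioning step described above can be illustrated on a toy screening result. The sensitivity values, threshold, and variable names below are invented; in the thesis, the effect strengths would come from an ANOVA of an actual screening test.

```python
# Hedged sketch: partitioning a black-box system from screening results.
# Inputs that strongly affect a common output are connected in a graph;
# the connected components become candidate partitions.

from itertools import combinations

# sensitivity[input][output]: illustrative effect strengths
sensitivity = {
    "x1": {"y1": 0.9, "y2": 0.0},
    "x2": {"y1": 0.7, "y2": 0.1},
    "x3": {"y1": 0.0, "y2": 0.8},
    "x4": {"y1": 0.05, "y2": 0.6},
}

def partitions(sens, threshold=0.2):
    """Build the interaction graph, then return its connected
    components as candidate partitions."""
    adj = {x: set() for x in sens}
    for a, b in combinations(sens, 2):
        if any(sens[a][y] > threshold and sens[b][y] > threshold
               for y in sens[a]):
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), []
    for x in sorted(adj):              # depth-first search per component
        if x in seen:
            continue
        stack, comp = [x], set()
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

blocks = partitions(sensitivity)
```

Here {x1, x2} and {x3, x4} separate cleanly because the sensitivity matrix is sparse; a denser matrix would force larger blocks.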
53

Distributed learning in large populations

Fox, Michael Jacob 14 August 2012 (has links)
Distributed learning is the iterative process of decision-making in the presence of other decision-makers. In recent years, researchers across fields as disparate as engineering, biology, and economics have identified mathematically congruous problem formulations at the intersection of their disciplines. In particular, stochastic processes, game theory, and control theory have been brought to bear on certain very basic and universal questions. What sorts of environments are conducive to distributed learning? Are there any generic algorithms offering non-trivial performance guarantees for a large class of models? The first half of this thesis makes contributions to two particular problems in distributed learning: self-assembly and language. Self-assembly refers to the emergence of high-level structures via the aggregate behavior of simpler building blocks. A number of algorithms have been suggested that are capable of generic self-assembly of graphs. That is, given a description of the objective, they produce a policy with a corresponding performance guarantee. These guarantees have been in the form of deterministic convergence results. We introduce the notion of stochastic stability to the self-assembly problem. The stochastically stable states are the configurations in which the system spends almost all of its time as a noise parameter is taken to zero. We show that in this framework simple procedures exist that are capable of self-assembly of any tree under stringent locality constraints. Our procedure gives an asymptotically maximum yield of target assemblies while obeying communication and reversibility constraints. We also present a slightly more sophisticated algorithm that guarantees maximum yields for any problem size. The latter algorithm utilizes a somewhat more presumptive notion of agents' internal states.
While it is unknown whether an algorithm providing maximum yields subject to our constraints can depend only on the more parsimonious form of internal state, we are able to show that such an algorithm would not be able to possess a unique completing rule, a useful feature for analysis. We then turn our attention to the problem of distributed learning of communication protocols, or, language. Recent results for signaling game models establish the non-negligible possibility of convergence, under distributed learning, to states of unbounded efficiency loss. We provide a tight lower bound on efficiency and discuss its implications. Moreover, motivated by the empirical phenomenon of linguistic drift, we study the signaling game under stochastic evolutionary dynamics. We again make use of stochastic stability analysis and show that the long-run distribution of states has support limited to the efficient communication systems. We find that this behavior is insensitive to the particular choice of evolutionary dynamic, a fact that is intuitively captured by the game's potential function corresponding to average fitness. Consequently, the model supports conclusions similar to those found in the literature on language competition. That is, we expect monomorphic language states to eventually predominate. Homophily has been identified as a feature that potentially stabilizes diverse linguistic communities. We find that incorporating homophily in our stochastic model gives mixed results. While the monomorphic prediction holds in the small noise limit, diversity can persist at higher noise levels or as a metastable phenomenon. The contributions of the second half of this thesis relate to more basic issues in distributed learning. In particular, we provide new results on the problem of distributed convergence to Nash equilibrium in finite games.
A recently proposed class of games known as stable games have the attractive property of admitting global convergence to equilibria under many learning dynamics. We show that stable games can be formulated as passive input-output systems. This observation enables us to identify passivity of a learning dynamic as a sufficient condition for global convergence in stable games. Notably, dynamics satisfying our condition need not exhibit positive correlation between the payoffs and their directions of motion. We show that our condition is satisfied by the dynamics known to exhibit global convergence in stable games. We give a decision-theoretic interpretation for passive learning dynamics that mirrors the interpretation of stable games as strategic environments exhibiting self-defeating externalities. Moreover, we exploit the flexibility of the passivity condition to study the impact of applying various forecasting heuristics to the payoffs used in the learning process. Finally, we show how passivity can be used to identify strategic tendencies of the players that allow for convergence in the presence of information lags of arbitrary duration in some games.
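The stable-game property exploited above can be checked numerically on a small example. Rock-paper-scissors is a standard stable (in fact null stable, zero-sum) game; this sketch only verifies the defining inequality by sampling and is not code from the thesis.

```python
# Hedged sketch: checking the stable-game condition numerically.
# A game with payoff field F is stable when (y - x) . (F(y) - F(x)) <= 0
# for all mixed strategies x, y. For rock-paper-scissors the payoff
# matrix is antisymmetric, so the inner product is exactly zero.

import random

A = [[0, -1, 1],   # rock-paper-scissors payoff matrix
     [1, 0, -1],
     [-1, 1, 0]]

def payoff_field(x):
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

def random_mixed_strategy():
    w = [random.random() for _ in range(3)]
    s = sum(w)
    return [v / s for v in w]

def stable_game_violations(trials=1000, tol=1e-9):
    """Count sampled strategy pairs violating the stability inequality."""
    bad = 0
    for _ in range(trials):
        x, y = random_mixed_strategy(), random_mixed_strategy()
        fx, fy = payoff_field(x), payoff_field(y)
        inner = sum((y[i] - x[i]) * (fy[i] - fx[i]) for i in range(3))
        if inner > tol:
            bad += 1
    return bad

violations = stable_game_violations()
```

The passivity result described in the abstract generalizes exactly this kind of sign condition from the game to the interconnection of game and learning dynamic.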
54

Efficient large electromagnetic simulation based on hybrid TLM and modal approach on grid computing and supercomputer / Parallélisation, déploiement et adaptation automatique de la simulation électromagnétique sur une grille de calcul

Alexandru, Mihai 14 December 2012 (has links)
Dans le contexte des Sciences de l’Information et de la Technologie, un des challenges est de créer des systèmes de plus en plus petits embarquant de plus en plus d’intelligence au niveau matériel et logiciel avec des architectures communicantes de plus en plus complexes. Ceci nécessite des méthodologies robustes de conception afin de réduire le cycle de développement et la phase de prototypage. Ainsi, la conception et l’optimisation de la couche physique de communication est primordiale. La complexité de ces systèmes rend difficile leur optimisation notamment à cause de l’explosion du nombre des paramètres inconnus. Les méthodes et outils développés ces dernières années seront à terme inadéquats pour traiter les problèmes qui nous attendent. Par exemple, la propagation des ondes dans une cabine d’avion à partir des capteurs ou même d’une antenne, vers le poste de pilotage est grandement affectée par la présence de la structure métallique des sièges à l’intérieur de la cabine, voir les passagers. Il faut, donc, absolument prendre en compte cette perturbation pour prédire correctement le bilan de puissance entre l’antenne et un possible récepteur. Ces travaux de recherche portent sur les aspects théoriques et de mise en oeuvre pratique afin de proposer des outils informatiques pour le calcul rigoureux de la réflexion des champs électromagnétiques à l’intérieur de très grandes structures . Ce calcul implique la solution numérique de très grands systèmes inaccessibles par des ressources traditionnelles. La solution sera basée sur une grille de calcul et un supercalculateur. La modélisation électromagnétique des structures surdimensionnées par plusieurs méthodes numériques utilisant des nouvelles ressources informatiques, hardware et software, pour dérouler des calculs performants, représente le but de ce travail. La modélisation numérique est basée sur une approche hybride qui combine la méthode Transmission-Line Matrix (TLM) et l’approche modale. 
La TLM est appliquée aux volumes homogènes, tandis que l’approche modale est utilisée pour décrire les structures planaires complexes. Afin d’accélérer la simulation, une implémentation parallèle de l’algorithme TLM dans le contexte du paradigme de calcul distribué est proposé. Le sous-domaine de la structure qui est discrétisé avec la TLM est divisé en plusieurs parties appelées tâches, chacune étant calculée en parallèle par des processeurs différents. Pour accomplir le travail, les tâches communiquent entre elles au cours de la simulation par une librairie d’échange de messages. Une extension de l’approche modale avec plusieurs modes différents a été développée par l’augmentation de la complexité des structures planaires. Les résultats démontrent les avantages de la grille de calcul combinée avec l’approche hybride pour résoudre des grandes structures électriques, en faisant correspondre la taille du problème avec le nombre de ressources de calcul utilisées. L’étude met en évidence le rôle du schéma de parallélisation, cluster versus grille, par rapport à la taille du problème et à sa répartition. En outre, un modèle de prédiction a été développé pour déterminer les performances du calcul sur la grille, basé sur une approche hybride qui combine une prédiction issue d’un historique d’expériences avec une prédiction dérivée du profil de l’application. Les valeurs prédites sont en bon accord avec les valeurs mesurées. L’analyse des performances de simulation a permis d’extraire des règles pratiques pour l’estimation des ressources nécessaires pour un problème donné. En utilisant tous ces outils, la propagation du champ électromagnétique à l’intérieur d’une structure surdimensionnée complexe, telle qu’une cabine d’avion, a été effectuée sur la grille et également sur le supercalculateur. Les avantages et les inconvénients des deux environnements sont discutés. 
/ In the context of Information and Communications Technology (ICT), a major challenge is to create ever smaller systems embedding more and more intelligence, hardware and software, including increasingly complex communicating architectures. This requires robust design methodologies to reduce the development cycle and prototyping phase. Thus, the design and optimization of the physical communication layer is paramount. The complexity of these systems makes them difficult to optimize, because of the explosion in the number of unknown parameters. The methods and tools developed in past years will eventually be inadequate to address the problems that lie ahead. Communicating objects will very often be integrated into cluttered environments with all kinds of metallic and dielectric structures, of sizes larger or smaller than the wavelength. The designer must anticipate the presence of such obstacles in the propagation channel to properly establish link budgets and achieve an optimal design of the communicating object. For example, wave propagation in an airplane cabin, from sensors or even an antenna towards the cockpit, is greatly affected by the presence of the metal structure of the seats inside the cabin, or even by the passengers. This perturbation must therefore be taken into account to correctly predict the power balance between the antenna and a possible receiver. More generally, this work addresses theoretical and computational electromagnetics in order to propose informatics tools for the rigorous calculation of electromagnetic scattering inside very large structures, or of the radiation of antennas placed near oversized objects. This calculation involves the numerical solution of very large systems inaccessible to traditional resources. The solution is based on grid computing and supercomputers.
Electromagnetic modeling of oversized structures by means of different numerical methods, using new hardware and software resources to carry out high-performance calculations, is the aim of this work. The numerical modeling is based on a hybrid approach which combines the Transmission-Line Matrix (TLM) and mode matching methods. The former is applied to homogeneous volumes while the latter is used to describe complex planar structures. In order to accelerate the simulation, a parallel implementation of the TLM algorithm in the context of the distributed computing paradigm is proposed. The subdomain of the structure which is discretized with TLM is divided into several parts called tasks, each one being computed in parallel by a different processor. To achieve this, the tasks communicate with each other during the simulation through a message passing library. An extension of the modal approach to various modes has been developed to handle the increasing complexity of the planar structures. The results prove the benefits of combining grid computing with the hybrid approach to solve electrically large structures, by matching the size of the problem to the number of computing resources used. The study highlights the role of the parallelization scheme, cluster versus grid, with respect to the size of the problem and its distribution. Moreover, a prediction model for computing performance on the grid, based on a hybrid approach that combines history-based prediction with application profile-based prediction, has been developed. The predicted values are in good agreement with the measured values. The analysis of the simulation performance has made it possible to extract practical rules for estimating the resources required for a given problem. Using all these tools, the propagation of the electromagnetic field inside a complex oversized structure, such as an airplane cabin, has been simulated on the grid and also on a supercomputer.
The advantages and disadvantages of the two environments are discussed.
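The task decomposition described in this abstract can be mimicked in a few lines. This is only a schematic of the halo-exchange pattern between tasks, not a TLM solver: the "physics" is just pulses shifting along a 1D line, and all sizes are illustrative.

```python
# Hedged sketch of the task decomposition behind a parallel TLM run.
# A 1D line carries left- and right-going pulses; the line is split into
# tasks that exchange boundary values each step, mimicking the message
# passing between processors (names and sizes are illustrative).

def step(right, left):
    """Propagate pulses one node right / left inside a task, returning
    the new arrays and the values leaving each boundary."""
    out_r, out_l = right[-1], left[0]
    right = [0.0] + right[:-1]
    left = left[1:] + [0.0]
    return right, left, out_r, out_l

def simulate(n_tasks=4, nodes_per_task=8, steps=12):
    # One pulse injected at the left end, travelling right.
    tasks = [{"R": [0.0] * nodes_per_task, "L": [0.0] * nodes_per_task}
             for _ in range(n_tasks)]
    tasks[0]["R"][0] = 1.0
    for _ in range(steps):
        msgs_r, msgs_l = [0.0] * n_tasks, [0.0] * n_tasks
        for k, t in enumerate(tasks):
            t["R"], t["L"], out_r, out_l = step(t["R"], t["L"])
            if k + 1 < n_tasks:
                msgs_r[k + 1] = out_r     # send to right neighbour
            if k - 1 >= 0:
                msgs_l[k - 1] = out_l     # send to left neighbour
        for k, t in enumerate(tasks):     # receive halo values
            t["R"][0] = msgs_r[k]
            t["L"][-1] = msgs_l[k]
    return tasks

tasks = simulate()
```

In the real implementation each task would run on its own processor and the two message arrays would be MPI-style sends and receives; the point is that only boundary values cross task borders each step.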
55

Analyse macroscopique des grands systèmes : émergence épistémique et agrégation spatio-temporelle / Macroscopic Analysis of Large-scale Systems : Epistemic Emergence and Spatiotemporal Aggregation

Lamarche-Perrin, Robin 14 October 2013 (has links)
L'analyse des systèmes de grande taille est confrontée à des difficultés d'ordre syntaxique et sémantique : comment observer un million d'entités distribuées et asynchrones ? Comment interpréter le désordre résultant de l'observation microscopique de ces entités ? Comment produire et manipuler des abstractions pertinentes pour l'analyse macroscopique des systèmes ? Face à l'échec de l'approche analytique, le concept d'émergence épistémique - relatif à la nature de la connaissance - nous permet de définir une stratégie d'analyse alternative, motivée par le constat suivant : l'activité scientifique repose sur des processus d'abstraction fournissant des éléments de description macroscopique pour aborder la complexité des systèmes. Cette thèse s'intéresse plus particulièrement à la production d'abstractions spatiales et temporelles par agrégation de données. Afin d'engendrer des représentations exploitables lors du passage à l'échelle, il apparaît nécessaire de contrôler deux aspects essentiels du processus d'abstraction. Premièrement, la complexité et le contenu informationnel des représentations macroscopiques doivent être conjointement optimisés afin de préserver les détails pertinents pour l'observateur, tout en minimisant le coût de l'analyse. Nous proposons des mesures de qualité (critères internes) permettant d'évaluer, de comparer et de sélectionner les représentations en fonction du contexte et des objectifs de l'analyse. Deuxièmement, afin de conserver leur pouvoir explicatif, les abstractions engendrées doivent être cohérentes avec les connaissances mobilisées par l'observateur lors de l'analyse. Nous proposons d'utiliser les propriétés organisationnelles, structurelles et topologiques du système (critères externes) pour contraindre le processus d'agrégation et pour engendrer des représentations viables sur les plans syntaxique et sémantique. 
Par conséquent, l'automatisation du processus d'agrégation nécessite de résoudre un problème d'optimisation sous contraintes. Nous proposons dans cette thèse un algorithme de résolution générique, s'adaptant aux critères formulés par l'observateur. De plus, nous montrons que la complexité de ce problème d'optimisation dépend directement de ces critères. L'approche macroscopique défendue dans cette thèse est évaluée sur deux classes de systèmes. Premièrement, le processus d'agrégation est appliqué à la visualisation d'applications parallèles de grande taille pour l'analyse de performance. Il permet de détecter les anomalies présentes à plusieurs niveaux de granularité dans les traces d'exécution et d'expliquer ces anomalies à partir des propriétés syntaxiques du système. Deuxièmement, le processus est appliqué à l'agrégation de données médiatiques pour l'analyse des relations internationales. L'agrégation géographique et temporelle de l'attention médiatique permet de définir des évènements macroscopiques pertinents sur le plan sémantique pour l'analyse du système international. Pour autant, nous pensons que l'approche et les outils présentés dans cette thèse peuvent être généralisés à de nombreux autres domaines d'application. / The analysis of large-scale systems faces syntactic and semantic difficulties: How to observe millions of distributed and asynchronous entities? How to interpret the disorder that results from the microscopic observation of such entities? How to produce and handle relevant abstractions for the systems' macroscopic analysis? Faced with the failure of the analytic approach, the concept of epistemic emergence - related to the nature of knowledge - allows us to define an alternative strategy. This strategy is motivated by the observation that scientific activity relies on abstraction processes that provide macroscopic descriptions to broach the systems' complexity. 
This thesis is more specifically interested in the production of spatial and temporal abstractions through data aggregation. In order to generate scalable representations, the control of two essential aspects of the aggregation process is necessary. Firstly, the complexity and the information content of macroscopic representations should be jointly optimized in order to preserve the relevant details for the observer, while minimizing the cost of the analysis. We propose several measures of quality (internal criteria) to evaluate, compare and select the representations depending on the context and the objectives of the analysis. Secondly, in order to preserve their explanatory power, the generated abstractions should be consistent with the background knowledge exploited by the observer for the analysis. We propose to exploit the systems' organisational, structural and topological properties (external criteria) to constrain the aggregation process and to generate syntactically and semantically consistent representations. Consequently, the automation of the aggregation process requires solving a constrained optimization problem. We propose a generic algorithm that adapts to the criteria expressed by the observer. Furthermore, we show that the complexity of this optimization problem directly depends on these criteria. The macroscopic approach supported by this thesis is evaluated on two classes of systems. Firstly, the aggregation process is applied to the visualisation of large-scale distributed applications for performance analysis. It allows the detection of anomalies at several scales in the execution traces and the explanation of these anomalies according to the system's syntactic properties. Secondly, the process is applied to the aggregation of news for the analysis of international relations. The geographical and temporal aggregation of media attention allows the definition of semantically consistent macroscopic events for the analysis of the international system.
Furthermore, we believe that the approach and the tools presented in this thesis can be extended to a wider class of application domains.
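The constrained optimization view of aggregation described above can be illustrated with a toy dynamic program over contiguous temporal partitions. The cost terms below (within-interval variance plus a fixed price per interval) are invented stand-ins for the thesis's internal quality criteria, not its actual measures.

```python
# Hedged sketch of the complexity/information trade-off in temporal
# aggregation: partition a series into contiguous intervals, balancing
# information loss against the number of parts.

def loss(values):
    """Information lost by replacing an interval with its mean."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values)

def best_aggregation(series, alpha=0.5):
    """Dynamic programming over contiguous partitions: minimize total
    within-interval loss plus a cost alpha per interval."""
    n = len(series)
    best = [0.0] + [float("inf")] * n   # best[i]: optimal cost of series[:i]
    cut = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            c = best[j] + loss(series[j:i]) + alpha
            if c < best[i]:
                best[i], cut[i] = c, j
    parts, i = [], n
    while i > 0:                         # recover the chosen intervals
        parts.append(series[cut[i]:i])
        i = cut[i]
    return parts[::-1]

parts = best_aggregation([1.0, 1.1, 0.9, 5.0, 5.2, 4.8])
```

A small alpha keeps many fine-grained intervals; a large alpha merges everything, which is exactly the observer-dependent trade-off the thesis formalizes.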
56

Contribution à la coordination de commandes MPC pour systèmes distribués appliquée à la production d'énergie / Contribution to MPC coordination of distributed and power generation systems

Sandoval Moreno, John Anderson 28 November 2014 (has links)
Cette thèse porte principalement sur la coordination des systèmes distribués, avec une attention particulière pour les systèmes de production d'électricité multi-énergies. Aux fins de l'optimalité, ainsi que l'application des contraintes, la commande prédictive (MPC, Model Predictive Control) est choisie comme l'outil sous-jacent, tandis que les éoliennes, piles à combustible, panneaux photovoltaïques et les centrales hydroélectriques sont considérés comme les sources d'énergie à être contrôlées et coordonnées. En premier lieu, une application de la commande MPC dans un microréseau électrique est proposée, illustrant comment assurer une performance appropriée pour chaque unité de génération et de soutien. Dans ce contexte, une attention particulière est accordée à la production de puissance maximale par une éolienne, en prenant une commande basée sur un observateur quand la mesure de la vitesse du vent est disponible. Ensuite, les principes de contrôle distribué coordonné, en considérant une formulation à base de la commande MPC, sont pris en considération pour le contexte des systèmes de grande taille. Ici, une nouvelle approche pour la coordination par prix avec des contraintes est proposée pour la gestion des contrôleurs MPC locaux, chacun d'eux étant typiquement associé à une unité de génération. En outre, le calcul des espaces invariants a été utilisé pour l'analyse de la performance du système en boucle fermée, à la fois pour les schémas MPC centralisée et coordination par prix. Finalement, deux cas d'études dans le contexte des systèmes de génération d'électricité sont inclus, en illustrant la pertinence de la stratégie de commande coordonnée proposée. / This thesis is mainly about the coordination of distributed systems, with special attention to multi-energy electric power generation systems.
For purposes of optimality, as well as constraint enforcement, Model Predictive Control (MPC) is chosen as the underlying tool, while wind turbines, fuel cells, photovoltaic panels, and hydroelectric plants are mostly considered as the power sources to be controlled and coordinated. In the first place, an application of MPC to a micro-grid system is proposed, illustrating how to ensure appropriate performance for each generator and support unit. In this context, special attention is paid to maximum power production by a wind turbine, via an original observer-based control when no wind speed measurement is available. Then, the principles of distributed-coordinated control, considering an MPC-based formulation, are developed for the context of larger scale systems. Here, a new approach to price-driven coordination with constraints is proposed for the management of local MPC controllers, each of them typically being associated with one power generation unit. In addition, the computation of invariant sets is used for the performance analysis of the closed-loop control system, for both the centralized MPC and price-driven coordination schemes. Finally, a couple of case studies in the field of power generation systems are included, illustrating the relevance of the proposed coordination control strategy.
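The price-driven coordination scheme described above can be sketched with a classic dual-decomposition loop: local units best-respond to a price, and a coordinator adjusts the price until a shared demand is met. The quadratic costs, capacities, and step size below are invented for illustration and are not the thesis's controllers.

```python
# Hedged sketch of price-driven coordination between local units.
# Each unit minimizes a private quadratic cost given a price on its
# output; a coordinator adjusts the price until total output meets a
# shared demand (numbers are illustrative, not from the thesis).

def local_response(a, b, price, p_min=0.0, p_max=60.0):
    """Unit cost a*p^2 + b*p - price*p  ->  optimal p = (price - b)/(2a),
    clipped to the unit's capacity limits."""
    p = (price - b) / (2 * a)
    return min(p_max, max(p_min, p))

def coordinate(units, demand, step=0.05, iters=2000):
    """Subgradient price update: raise the price when supply falls
    short of demand, lower it when supply exceeds demand."""
    price = 0.0
    for _ in range(iters):
        total = sum(local_response(a, b, price) for a, b in units)
        price += step * (demand - total)
    return price, [local_response(a, b, price) for a, b in units]

units = [(0.5, 1.0), (0.25, 2.0)]     # (a, b) per generation unit
price, outputs = coordinate(units, demand=30.0)
```

The coordinator needs only the aggregate output, never the units' private cost models, which is what makes price coordination attractive for distributed MPC.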
57

Modélisation et analyse structurelle du fonctionnement dynamique des systèmes électriques / Modeling and structural analysis of power systems dynamic model

Belhocine, Mohamed 25 November 2016 (has links)
Cette thèse traite des problématiques relatives à la modélisation, l’analyse et la simulation des systèmes électriques interconnectés. Elle a été principalement motivée par le besoin de comprendre et de clarifier les niveaux de modélisation de certains composants électriques comme les lignes de transmission ainsi que leur habilité à reproduire les différents phénomènes qui présentent un intérêt dans le système. En effet, pour faciliter les études et les simulations, les modèles les plus détaillés et les plus complexes sont généralement remplacés par des modèles plus simples selon le type des phénomènes visés. Usuellement, la structure dynamique de l’ensemble du système n’est pas prise en compte dans l’étape de simplification, ce qui fait que, dans la plupart du temps, des validations expérimentales sont nécessaires. C’est pourquoi un cadre structurel et une vision systémique sont proposés dans cette thèse afin d’apporter des solutions plus générales et plus fiables qui s’adaptent mieux à l’approximation des systèmes électriques. Le cadre proposé il nous a permis : d’une part, d’expliquer par le biais de la réduction l’approximation du modèle à paramètres distribués d’une ligne par un modèle plus simple de dimension finie appelé π et d’autre part, d’étendre les investigations à une échelle plus grande où une nouvelle méthodologie de troncature est proposée pour l’approximation des modèles dynamiques de grande taille. Elle se distingue des méthodes standards par l’efficacité d’évaluation des dynamiques du système même dans des situations particulières ainsi que la prise en compte de divers besoins pratiques. Il s’agit d’une approche mixte qui combine, d’une part, le principe de la troncature modale et, d’autre part, l’énergie de la réponse impulsionnelle afin de préserver les structures dynamique et physique du système. 
A cela s’ajoute également la question du nombre et choix des entrées des systèmes à paramètres distribués qui est posée ici d’un point de vue de l’automatique afin de lui donner plus d’utilité pratique, surtout dans les simulateurs numériques. L’approche utilisée à cet effet est algébrique, et la contribution est une étude assez détaillée de l’état de l’art sur le sujet qui a finalement affiné d’avantage la problématique et conduit à de nouvelles pistes plus prometteuses car la question est jusqu’alors partiellement résolue. Plus précisément, les investigations doivent se concentrer désormais sur les commandes dites aux bords ou frontières.D’un point de vue pratique, les résultats obtenus dans cette thèse peuvent être exploités dans l’avenir pour améliorer et facilité les techniques actuelles de simulation et d’analyse en adaptant, par exemple, chaque modèle utilisé aux phénomènes à reproduire. / This thesis was motivated by the need to better understand the connection between the models used in simulation of the power systems dynamics and the phenomena which have to be analyzed or reproduced by the simulation. Indeed, to study and to simulate the behavior of the interconnected power systems, the sophisticated models such as the one of the transmission lines are generally replaced by simple ones. Usually, a dynamic structure of the whole system is not taken into account in a simplification step. As a consequence, experimental validations are generally needed to assess the result of the approximation. For this reason, a structural framework and a systemic viewpoint are proposed to make the solutions more general and more appropriate to the approximation of power systems. First, this allows explaining the link between the distributed parameters model of the transmission lines and the finite dimensional one called π model based on the model reduction. 
Next, a novel mixed approximation methodology for large-scale dynamic models is proposed, which makes it possible to better assess the dynamics of the system in different situations and to take into account several practical needs. This methodology is based on a mixture of modal truncation and the energy of the impulse response, so that the dynamical and physical structures of the system remain unchanged. Moreover, in the context of automatic control theory, the issue of the number and choice of input variables for distributed parameter systems is discussed. To address this issue, an algebraic approach is applied. Here, the main contribution is a detailed study of the state of the art, which refines the problem and points to new, more promising directions, since the question is so far only partially solved. More precisely, further investigations should focus on the so-called boundary control variables. For practical applications, the results presented in this study can be exploited to further improve numerical simulations and behavioral studies of large-scale power systems.
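The mixed truncation criterion above (modal structure ranked by impulse-response energy) can be sketched for a diagonal realization, where each mode's L2 impulse-response energy has a closed form. The modes below are invented; this is only the ranking idea, not the thesis's algorithm.

```python
# Hedged sketch of ranking modes by impulse-response energy before
# truncation. For a diagonal (modal) realization, the impulse response
# of mode i is c_i * exp(a_i * t) * b_i, whose L2 energy is
# (c_i * b_i)^2 / (2 * |a_i|). Values here are illustrative.

def mode_energy(a, b, c):
    """L2 energy of one stable first-order mode (pole a < 0)."""
    return (b * c) ** 2 / (2.0 * abs(a))

def truncate_by_energy(modes, keep):
    """Keep the `keep` modes with the largest impulse-response energy,
    so the retained states are still physical modes of the system."""
    ranked = sorted(modes, key=lambda m: mode_energy(*m), reverse=True)
    return ranked[:keep]

# (pole, input gain, output gain) per mode
modes = [(-0.1, 1.0, 1.0),    # slow, strongly coupled: high energy
         (-10.0, 1.0, 1.0),   # fast: low energy
         (-1.0, 0.1, 0.1)]    # weakly coupled: negligible energy

reduced = truncate_by_energy(modes, keep=1)
```

Unlike balanced truncation, this keeps the states in modal coordinates, which is why the physical interpretation of the reduced model survives.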
58

The Interrelationships between Technical Standards and Industry Structures: Actor-Network Based Case Studies of the Mobile Wireless and Television Industries in the US and the UK

Tilson, David Albert 04 April 2008 (has links)
No description available.
59

Nonlinear Impulsive and Hybrid Dynamical Systems

Nersesov, Sergey G 23 June 2005 (has links)
Modern complex dynamical systems typically possess a multiechelon hierarchical hybrid structure characterized by continuous-time dynamics at the lower-level units and logical decision-making units at the higher levels of the hierarchy. Hybrid dynamical systems involve an interacting countable collection of dynamical systems defined on subregions of the partitioned state space. Thus, in addition to traditional control systems, hybrid control systems involve supervising controllers which serve to coordinate the (sometimes competing) actions of the lower-level controllers. A subclass of hybrid dynamical systems is the class of impulsive dynamical systems, which consist of three elements, namely, a continuous-time differential equation, a difference equation, and a criterion for determining when the states of the system are to be reset. One of the main topics of this dissertation is the development of stability analysis and control design for impulsive dynamical systems. Specifically, we generalize Poincaré's theorem to dynamical systems possessing left-continuous flows to address the stability of limit cycles and periodic orbits of left-continuous, hybrid, and impulsive dynamical systems. For nonlinear impulsive dynamical systems, we present partial stability results, that is, stability with respect to part of the system's state. Furthermore, we develop an adaptive control framework for a general class of impulsive systems, as well as an energy-based control framework for hybrid port-controlled Hamiltonian systems. Extensions of stability theory for impulsive dynamical systems with respect to the nonnegative orthant of the state space are also addressed in this dissertation. Furthermore, we design optimal output feedback controllers for set-point regulation of linear nonnegative dynamical systems. Another main topic that has been addressed in this research is the stability analysis of large-scale dynamical systems.
Specifically, we extend the theory of vector Lyapunov functions by constructing a generalized comparison system whose vector field can be a function of the comparison system states as well as the nonlinear dynamical system states. Furthermore, we present a generalized convergence result which, in the case of a scalar comparison system, specializes to the classical Krasovskii-LaSalle invariant set theorem. Moreover, we develop vector dissipativity theory for large-scale dynamical systems based on vector storage functions and vector supply rates. Finally, using a large-scale dynamical systems perspective, we develop a system-theoretic foundation for thermodynamics. Specifically, using compartmental dynamical system energy flow models, we place the universal energy conservation, energy equipartition, temperature equipartition, and entropy nonconservation laws of thermodynamics on a system-theoretic basis.
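The three-element structure of an impulsive dynamical system (a continuous-time differential equation, a difference equation, and a reset criterion) can be sketched with the classic bouncing-ball example. The dynamics, restitution coefficient, and step size below are illustrative assumptions for this sketch, not taken from the dissertation.

```python
def simulate_impulsive(x0, t_end, dt=1e-4, g=9.81, e=0.8):
    """Simulate an impulsive system: ball height h, velocity v.

    Continuous dynamics:  h' = v,  v' = -g   (between impacts)
    Reset criterion:      h <= 0 and v < 0   (impact detected)
    Difference equation:  v+ = -e * v        (restitution reset)
    """
    h, v = x0
    t = 0.0
    impacts = 0
    while t < t_end:
        # continuous-time flow (explicit Euler step)
        h += v * dt
        v += -g * dt
        # reset map applied when the reset criterion fires
        if h <= 0.0 and v < 0.0:
            h = 0.0
            v = -e * v
            impacts += 1
        t += dt
    return h, v, impacts

h, v, impacts = simulate_impulsive((1.0, 0.0), t_end=5.0)
print(impacts)
```

Each impact dissipates energy through the reset map, so the impact times accumulate; this Zeno-like behavior is exactly the kind of phenomenon that motivates the left-continuous flow framework studied in the dissertation.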
60

Commande prédictive distribuée. Approches appliquées à la régulation thermique des bâtiments. / Distributed model predictive control. Approaches applied to building temperature

Morosan, Petru-daniel 30 September 2011 (has links)
The increasing requirements on the energy efficiency of buildings, the evolution of the energy market, recent technical developments, and the particular features of heating systems have made MPC the best candidate for the thermal control of intermittently occupied buildings. This thesis presents a methodology based on distributed predictive control, aiming at a compromise between the optimality, simplicity, and flexibility of the implementation of the proposed solution. The development of the approach is progressive: starting from the single-zone case, it is then extended to the multi-zone and/or multi-source case, taking into account the thermal couplings between adjacent zones. After a quadratic formulation of the MPC criterion, a linear formulation is retained to better satisfy the economic control objectives. To distribute the computational load, linear decomposition methods (such as Dantzig-Wolfe and Benders) are employed. The efficiency of the proposed distributed algorithms is illustrated by various simulations. / The increasing requirements on the energy efficiency of buildings, the evolution of the energy market, technical developments, and the characteristics of heating systems have made MPC the best candidate for the thermal control of intermittently occupied buildings. This thesis presents a methodology based on distributed model predictive control, aiming at a compromise between optimality, on the one hand, and simplicity and flexibility of implementation of the proposed solution, on the other hand. The development of the approach is gradual: the single-zone case is considered first, then the basic ideas of the solution are extended to the multi-zone and/or multi-source case, including the thermal coupling between adjacent zones. We first consider a quadratic formulation of the MPC cost function, then pass to a linear criterion in order to better satisfy the economic control objectives.
Linear decomposition methods (such as Dantzig-Wolfe and Benders) are thus the mathematical tools used to distribute the computational load among the local controllers. The efficiency of the distributed algorithms is illustrated by simulations.
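The single-zone receding-horizon idea can be sketched minimally as follows. The first-order zone model, its coefficients, the comfort setpoint, and the brute-force search over constant inputs are all illustrative assumptions; the thesis solves the multi-zone problem with proper linear programming and decomposition methods instead.

```python
import numpy as np

# Hypothetical first-order thermal zone: T[k+1] = a*T[k] + b*u[k] + (1-a)*T_out
a, b, T_out = 0.9, 0.2, 5.0            # illustrative coefficients
T_ref, u_max = 20.0, 10.0              # comfort setpoint, heater limit
N = 10                                 # prediction horizon

def predict(T0, u_seq):
    """Roll the zone model forward over an input sequence."""
    T, traj = T0, []
    for u in u_seq:
        T = a * T + b * u + (1 - a) * T_out
        traj.append(T)
    return np.array(traj)

def mpc_step(T0, candidates=np.linspace(0.0, u_max, 101)):
    """Pick the constant input over the horizon minimizing tracking error
    plus a small energy penalty (a crude stand-in for a QP/LP solver)."""
    best_u, best_cost = 0.0, np.inf
    for u in candidates:
        traj = predict(T0, [u] * N)
        cost = np.sum((traj - T_ref) ** 2) + 0.01 * N * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u          # only the first input is applied (receding horizon)

T = 12.0                   # cold start
for _ in range(60):        # closed loop: re-optimize at every step
    u = mpc_step(T)
    T = a * T + b * u + (1 - a) * T_out
print(round(T, 2))
```

Re-solving the finite-horizon problem at every step and applying only the first input is what gives MPC its robustness to disturbances and model errors; the distributed variants in the thesis split exactly this per-step optimization across zone-level controllers.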
