361 |
Modèles de connaissance à paramètres identifiables expérimentalement pour les systèmes de refroidissement dessiccatif couplés à un système solaire / Knowledge models with identifiable parameters of solar desiccant cooling systems
Ghazal, Roula, 12 April 2013 (has links)
La Centrale de traitement d’Air par Dessiccation (CAD) offre un contrôle complet de la température et de l'humidité dans les locaux climatisés. Son élément clé est la roue dessicante qui permet la dessiccation de l’air et une régénération continue. À travers cette étude, nous nous intéressons au développement d’une méthodologie pour obtenir un modèle dynamique de la roue utilisable dans les algorithmes de contrôle avancés de la CAD. La roue dessicante peut être considérée comme un système de type multi-entrées/multi-sorties (MIMO). La seconde partie de ce mémoire concerne l'identification expérimentale des paramètres des modèles d’état de la roue dessicante pour deux types de modèles : boîte noire et boîte grise. Dans le cas de la boîte noire, tous les paramètres du modèle sont identifiés expérimentalement. Dans le cas de la boîte grise, certains paramètres sont dérivés de considérations physiques et les paramètres restants sont identifiés en utilisant les mesures expérimentales des entrées et des sorties. Les paramètres du modèle boîte grise ont une signification physique. En comparaison avec les modèles boîte noire, les modèles boîte grise sont moins précis sur le domaine sur lequel les paramètres ont été identifiés, mais beaucoup plus précis en dehors de ce domaine. Comme les paramètres ont une signification physique, leurs valeurs ne varient pas de manière significative avec le point de fonctionnement utilisé pour l’identification. Dans l’approche boîte grise, les valeurs des paramètres obtenues pour les modèles linéaires sont presque identiques pour tous les modèles locaux du côté dessiccation et pour tous les modèles locaux du côté régénération ; cela nous a permis de considérer qu’un modèle local est valable pour tout le domaine de variation des variables d’entrée. Le modèle final de la roue dessicante se compose de deux modèles globaux : un pour le côté de la dessiccation et l'autre pour le côté de la régénération.
La troisième partie de ce travail consiste en l'identification des coefficients de transfert de masse et de chaleur au sein de la roue dessicante en utilisant un modèle boîte grise. Le coefficient de transfert de masse, le coefficient de transfert convectif et le nombre de Nusselt ont été obtenus en écrivant les paramètres du modèle d’état en fonction d’une seule variable et en exprimant les paramètres en fonction des caractéristiques géométriques et des propriétés de matériaux de la roue. Ce travail contribue au développement d’un modèle d’état utilisable pour la synthèse des algorithmes de contrôle pour la roue dessicante. / The Desiccant Air Unit (DAU) offers complete control of air temperature and humidity in the conditioned space. Its key component is the desiccant wheel, which provides the functions of air desiccation and regeneration. The aim of this study is to develop a methodology for obtaining a dynamic model of the desiccant wheel which can be used for the model-based control algorithms of the DAU. The desiccant wheel can be regarded as a multi-input/multi-output (MIMO) system. The first part of the thesis is devoted to the modeling of the desiccant wheel based on energy and mass balance equations. The resulting set of equations is formulated as a second-order state-space system without delay. The second part of this thesis concerns the experimental identification of the parameters of the state-space model of the desiccant wheel by using a black-box and a gray-box approach. In the case of the black-box, all the parameters of the model are identified experimentally. The identified parameters have values which minimize the difference between the output of the model and the experimental values. The parameters of the black-box model do not have physical significance. Although precise in the range of variation of the inputs in which the parameters were identified, this model gives significant errors in other domains of variation of the inputs.
The parameters of the gray-box model are physically significant. Compared with the black-box models, the gray-box model was less accurate for the domains for which the parameters were identified, but it was notably more robust when applied to other ranges of the inputs. Since the parameters are related to physical properties, their values do not vary significantly with changes of the operating point used for identification. For the gray-box approach, the parameter values obtained for the linear models are almost identical for all local models on the desiccation side and all the local models on the regeneration side, suggesting that a local model may be valid over the complete range of input variables. Using the above results, a final model of the desiccant wheel was developed, comprising two global models: one for the desiccation side and another for the regeneration side. The third part of the thesis deals with the identification of mass and heat transfer coefficients of the air within the desiccant wheel using a gray-box model. The mass transfer coefficient, the convective heat transfer coefficient and the Nusselt number were obtained by defining the variable parameters of the model as a function of a single variable and by expressing the constant parameters as a function of the geometric and material properties of the wheel. This work contributes to the development of a state-space model used for the synthesis of control algorithms for the desiccant wheel.
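The black-box identification step described in this abstract can be sketched numerically. The two-state model below and the assumption that the states themselves are measured are illustrative only, not the wheel model identified in the thesis:

```python
import numpy as np

# Black-box flavoured sketch: every entry of the discrete-time state-space
# matrices in x[k+1] = A x[k] + B u[k] is fitted to recorded data by least
# squares. A_true/B_true stand in for the "experiment"; nothing here is the
# thesis's identified desiccant-wheel model.
rng = np.random.default_rng(0)
A_true = np.array([[0.90, 0.05], [0.02, 0.85]])
B_true = np.array([[0.10], [0.30]])

N = 200
u = rng.normal(size=(N, 1))                 # excitation signal
x = np.zeros((N + 1, 2))
for k in range(N):
    x[k + 1] = A_true @ x[k] + B_true @ u[k]

# Regress x[k+1] on z[k] = [x[k], u[k]] to recover [A B] jointly
Z = np.hstack([x[:-1], u])                  # shape (N, 3)
Theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]
```

On noise-free data the regression recovers the matrices exactly; the gray-box variant of the abstract would instead fix the physically derived entries and fit only the rest.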
|
362 |
Improvements in Genetic Approach to Pole Placement in Linear State Space Systems Through Island Approach PGA with Orthogonal Mutation Vectors
Cassell, Arnold, 01 January 2012 (has links)
This thesis describes a genetic approach for shaping the dynamic responses of linear state space systems through pole placement. It further compares this approach with an island-approach parallel genetic algorithm (PGA) that incorporates orthogonal mutation vectors to increase sub-population specialization and decrease convergence time.
Both approaches generate a gain vector K. The vector K is used in state feedback for altering the poles of the system so as to meet step response requirements such as settling time and percent overshoot. To obtain the gain vector K with the proposed genetic approaches, a pair of ideal, desired poles is calculated first. Those poles serve as the basis from which an initial population is created. In the island approach, those poles serve as a basis for n populations, where n is the dimension of the necessary K vector.
Each member of the population is tested for its fitness (the degree to which it matches the criteria). A new population is created each “generation” from the results of the previous iteration, until the criteria are met or a certain number of generations have passed. Several case studies are provided to illustrate that the new approach works and to compare the performance of the two approaches.
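The generate-score-select-mutate loop described above can be conveyed with a minimal elitist genetic search for the gain vector K. This is a plain GA sketch, not the island PGA with orthogonal mutation vectors studied in the thesis; the plant matrices, desired poles and GA settings are invented for illustration:

```python
import numpy as np

# Minimal elitist genetic search for a state-feedback gain K placing the
# closed-loop poles of x' = (A - B K) x near desired locations.
rng = np.random.default_rng(1)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
target = sorted([-5.0, -4.0])                  # desired closed-loop poles

def fitness(K):
    # negative total distance between closed-loop poles and the targets
    eig = sorted(np.linalg.eigvals(A - B @ K.reshape(1, 2)), key=lambda z: z.real)
    return -sum(abs(e - t) for e, t in zip(eig, target))

pop = rng.normal(scale=10.0, size=(40, 2))     # initial population of K vectors
for _ in range(200):                           # one loop pass = one "generation"
    scores = np.array([fitness(k) for k in pop])
    elite = pop[np.argsort(scores)[-10:]]      # keep the 10 fittest members
    parents = elite[rng.integers(0, 10, size=30)]
    pop = np.vstack([elite, parents + rng.normal(scale=0.5, size=(30, 2))])

K_best = pop[np.argmax([fitness(k) for k in pop])]
```

For this plant the exact answer is K = [18, 6]; the search converges close to it, while the island variant would run n such populations with orthogonal mutation directions.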
|
363 |
Essays on Bayesian analysis of state space models with financial applications
Gingras, Samuel, 05 1900 (has links)
Cette thèse est organisée en trois chapitres où sont développées des méthodes de simulation a posteriori pour l'inférence bayésienne dans des modèles espace-état, ainsi que des modèles économétriques pour l'analyse de données financières.
Au chapitre 1, nous considérons le problème de simulation a posteriori dans les modèles espace-état univariés et non-Gaussiens. Nous proposons une nouvelle méthode de Monte-Carlo par chaînes de Markov (MCMC) mettant à jour le vecteur de paramètres de la dynamique d’état ainsi que la séquence de variables d’état conjointement dans un bloc unique. La proposition MCMC est tirée en deux étapes: la distribution marginale du vecteur de paramètres de la dynamique d’état est construite en utilisant une approximation du gradient et du Hessien du logarithme de sa densité a posteriori, pour laquelle le vecteur de variables d’état a été intégré. La distribution conditionnelle de la séquence de variables d’état, étant donné la proposition du vecteur de paramètres, est telle que décrite dans McCausland (2012). Le calcul du gradient et du Hessien approximatif combine des sous-produits de calcul du tirage d’état avec une quantité modeste de calculs supplémentaires. Nous comparons l’efficacité numérique de notre simulation a posteriori à celle de la méthode Ancillarity-Sufficiency Interweaving Strategy (ASIS) décrite dans Kastner & Frühwirth-Schnatter (2014), en utilisant un modèle de volatilité stochastique Gaussien et le même panel de 23 taux de change quotidiens utilisé dans ce même article. Pour calculer la moyenne a posteriori du paramètre de persistance de la volatilité, notre efficacité numérique est de 6 à 27 fois plus élevée; pour la volatilité du paramètre de volatilité, elle est de 18 à 53 fois plus élevée. Nous analysons dans un second exemple des données de compte de transaction avec un modèle Poisson et Gamma-Poisson dynamique. Malgré la nature non Gaussienne des données de compte, nous obtenons une efficacité numérique élevée, guère inférieure à celle rapportée dans McCausland (2012) pour une méthode d’échantillonnage impliquant un calcul préliminaire de la forme de la distribution a posteriori statique des paramètres.
Au chapitre 2, nous proposons un nouveau modèle de durée conditionnelle stochastique (SCD) pour l’analyse de données de transactions financières en haute fréquence. Nous identifions certaines caractéristiques indésirables des densités de durée conditionnelles paramétriques existantes et proposons une nouvelle famille de densités conditionnelles flexibles pouvant correspondre à une grande variété de distributions avec des fonctions de taux de probabilité modérément variable. Guidés par des considérations théoriques issues de la théorie des files d’attente, nous introduisons des déviations non-paramétriques autour d’une distribution exponentielle centrale, qui, selon nous, est un bon modèle de premier ordre pour les durées financières, en utilisant une densité de Bernstein. La densité résultante est non seulement flexible, dans le sens qu’elle peut s’approcher de n’importe quelle densité continue sur [0, ∞) de manière arbitraire, à condition qu’elle se compose d’un nombre suffisamment grand de termes, mais également susceptible de rétrécissement vers la distribution exponentielle. Grâce aux tirages très efficaces des variables d’état, l’efficacité numérique de notre simulation a posteriori se compare très favorablement à celles obtenues dans les études précédentes. Nous illustrons nos méthodes à l’aide des données de cotation d’actions négociées à la Bourse de Toronto. Nous constatons que les modèles utilisant notre densité conditionnelle avec moins de quatre termes offrent le meilleur ajustement. La variation régulière trouvée dans les fonctions de taux de probabilité, ainsi que la possibilité qu’elle ne soit pas monotone, aurait été impossible à saisir avec une spécification paramétrique couramment utilisée.
Au chapitre 3, nous présentons un nouveau modèle de durée stochastique pour les temps de transaction dans les marchés d’actifs. Nous soutenons que les règles largement acceptées pour l’agrégation de transactions apparemment liées induisent une inférence erronée concernant les durées entre des transactions non liées: alors que deux transactions exécutées au cours de la même seconde sont probablement liées, il est extrêmement improbable que toutes les paires de transactions le soient, dans un échantillon typique. En plaçant une incertitude sur les transactions liées dans notre modèle, nous améliorons l’inférence pour la distribution de la durée entre les transactions non liées, en particulier près de zéro. Nous proposons un modèle en temps discret pour les temps de transaction censurés permettant des valeurs nulles excessives résultant des durées entre les transactions liées. La distribution discrète des durées entre les transactions indépendantes découle d’une densité flexible susceptible de rétrécissement vers une distribution exponentielle. Dans un exemple empirique, nous constatons que la fonction de taux de probabilité conditionnelle sous-jacente pour des durées (non censurées) entre transactions non liées varie beaucoup moins que celles trouvées dans la plupart des études; une distribution discrète pour les transactions non liées basée sur une distribution exponentielle fournit le meilleur ajustement pour les trois séries analysées. Nous prétendons que c’est parce que nous évitons les artefacts statistiques qui résultent de règles déterministes d’agrégation des échanges et d’une distribution paramétrique inadaptée. / This thesis is organized into three chapters which develop posterior simulation methods for Bayesian inference in state space models and econometric models for the analysis of financial data.
In Chapter 1, we consider the problem of posterior simulation in state space models with non-linear non-Gaussian observables and univariate Gaussian states. We propose a new Markov Chain Monte Carlo (MCMC) method that updates the parameter vector of the state dynamics and the state sequence together as a single block. The MCMC proposal is drawn in two steps: the marginal proposal distribution for the parameter vector is constructed using an approximation of the gradient and Hessian of its log posterior density, with the state vector integrated out. The conditional proposal distribution for the state sequence given the proposal of the parameter vector is the one described in McCausland (2012). Computation of the approximate gradient and Hessian combines computational by-products of the state draw with a modest amount of additional computation. We compare the numerical efficiency of our posterior simulation with that of the Ancillarity-Sufficiency Interweaving Strategy (ASIS) described in Kastner & Frühwirth-Schnatter (2014), using the Gaussian stochastic volatility model and the panel of 23 daily exchange rates from that paper. For computing the posterior mean of the volatility persistence parameter, our numerical efficiency is 6-27 times higher; for the volatility of volatility parameter, 18-53 times higher. We analyse transaction counts in a second example using dynamic Poisson and Gamma-Poisson models. Despite non-Gaussianity of the count data, we obtain high numerical efficiency that is not much lower than that reported in McCausland (2012) for a sampler that involves pre-computing the shape of a static posterior distribution of parameters.
In Chapter 2, we propose a new stochastic conditional duration model (SCD) for the analysis of high-frequency financial transaction data. We identify undesirable features of existing parametric conditional duration densities and propose a new family of flexible conditional densities capable of matching a wide variety of distributions with moderately varying hazard functions. Guided by theoretical considerations from queuing theory, we introduce nonparametric deviations around a central exponential distribution, which we argue is a sound first-order model for financial durations, using a Bernstein density. The resulting density is not only flexible, in the sense that it can approximate any continuous density on [0,∞) arbitrarily closely, provided it consists of a large enough number of terms, but also amenable to shrinkage towards the exponential distribution. Thanks to highly efficient draws of the state variables, the numerical efficiency of our posterior simulation compares very favourably with those obtained in previous studies. We illustrate our methods using quotation data on equities traded on the Toronto Stock Exchange. We find that models with our proposed conditional density having fewer than four terms provide the best fit. The smooth variation found in the hazard functions, together with the possibility of it being non-monotonic, would have been impossible to capture using commonly used parametric specifications.
In Chapter 3, we introduce a new stochastic duration model for transaction times in asset markets. We argue that widely accepted rules for aggregating seemingly related trades mislead inference pertaining to durations between unrelated trades: while any two trades executed in the same second are probably related, it is extremely unlikely that all such pairs of trades are, in a typical sample. By placing uncertainty about which trades are related within our model, we improve inference for the distribution of durations between unrelated trades, especially near zero. We propose a discrete model for censored transaction times allowing for zero-inflation resulting from clusters of related trades. The discrete distribution of durations between unrelated trades arises from a flexible density amenable to shrinkage towards an exponential distribution. In an empirical example, we find that the underlying conditional hazard function for (uncensored) durations between unrelated trades varies much less than what most studies find; a discrete distribution for unrelated trades based on an exponential distribution provides a better fit for all three series analyzed. We claim that this is because we avoid statistical artifacts that arise from deterministic trade-aggregation rules and an unsuitable parametric distribution.
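The Chapter 2 construction, nonparametric Bernstein deviations around a central exponential distribution, can be illustrated numerically. The mixing weights below are arbitrary; only the structure (a Bernstein density applied to the exponential CDF) follows the abstract:

```python
import numpy as np
from math import comb

# Sketch of a Bernstein-type duration density: deviations around an
# exponential base with mean lam. Uniform weights w_j = 1/m recover the
# exponential exactly, which is the shrinkage target named in the abstract.
lam, m = 1.0, 4
w = np.array([0.1, 0.4, 0.3, 0.2])            # illustrative weights, sum to 1

def density(t):
    u = 1.0 - np.exp(-t / lam)                # exponential CDF: [0, inf) -> [0, 1)
    bern = sum(w[j] * m * comb(m - 1, j) * u**j * (1.0 - u)**(m - 1 - j)
               for j in range(m))             # Bernstein density evaluated at u
    return bern * np.exp(-t / lam) / lam      # change of variables to durations

t = np.linspace(0.0, 50.0, 200_001)
f = density(t)
mass = float(np.sum(f[:-1] + f[1:]) * (t[1] - t[0]) / 2.0)  # trapezoid ≈ 1
```

Whatever the weights, the result is a proper density on [0, ∞), and increasing m lets it approximate any continuous duration density, matching the flexibility claim above.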
|
364 |
Quadrocopter - stabilizace pomocí inerciálních snímačů / Quadrocopter - Sensory Subsystem
Bradáč, František, January 2011 (has links)
This diploma thesis deals with the processing of measured data from an inertial navigation system so that the data can be used for stabilization. It first gives general information about aerial vehicles known as copters, with emphasis on the four-rotor construction called a quadrocopter. A mathematical model of the quadrocopter is then derived in state-space form, the particular implementation of the university-developed quadrocopter is described, and the design of the data-processing algorithm is presented together with measured results. Finally, the achieved results are discussed.
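As a hedged illustration of a quadrocopter model in state-space form of the kind the thesis derives, the linearized altitude channel alone can be simulated in a few lines. The mass and thrust values are assumptions, not those of the university-built platform:

```python
import numpy as np

# Linearized altitude channel of a generic quadrocopter in state-space form:
# state x = [z, z_dot], input u = F - m*g (excess thrust), so x' = A x + B u.
m = 0.5                                        # assumed mass in kg
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0 / m]])

dt, n_steps = 0.01, 200                        # simulate 2 s with forward Euler
x = np.zeros(2)
for _ in range(n_steps):
    x = x + dt * (A @ x + B @ np.array([0.1]))  # constant 0.1 N excess thrust
# analytic check: z(2 s) = 0.5 * (0.1 / 0.5) * 2**2 = 0.4 m (Euler slightly under)
```

The full model in the thesis couples attitude and translation; this double-integrator slice only shows what "state-space form" means for one axis.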
|
365 |
Reimagining Human-Machine Interactions through Trust-Based Feedback
Kumar Akash (8862785), 17 June 2020 (has links)
<div>Intelligent machines, and more broadly, intelligent systems, are becoming increasingly common in the everyday lives of humans. Nonetheless, despite significant advancements in automation, human supervision and intervention are still essential in almost all sectors, ranging from manufacturing and transportation to disaster-management and healthcare. These intelligent machines<i> interact and collaborate</i> with humans in a way that demands a greater level of trust between human and machine. While a lack of trust can lead to a human's disuse of automation, over-trust can result in a human trusting a faulty autonomous system which could have negative consequences for the human. Therefore, human trust should be <i>calibrated </i>to optimize these human-machine interactions. This calibration can be achieved by designing human-aware automation that can infer human behavior and respond accordingly in real-time.</div><div><br></div><div>In this dissertation, I present a probabilistic framework to model and calibrate a human's trust and workload dynamics during his/her interaction with an intelligent decision-aid system. More specifically, I develop multiple quantitative models of human trust, ranging from a classical state-space model to a classification model based on machine learning techniques. Both models are parameterized using data collected through human-subject experiments. Thereafter, I present a probabilistic dynamic model to capture the dynamics of human trust along with human workload. This model is used to synthesize optimal control policies aimed at improving context-specific performance objectives that vary automation transparency based on human state estimation. I also analyze the coupled interactions between human trust and workload to strengthen the model framework. Finally, I validate the optimal control policies using closed-loop human subject experiments. 
The proposed framework provides a foundation toward widespread design and implementation of real-time adaptive automation based on human states for use in human-machine interactions.</div>
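One way to picture a "classical state-space model" of trust like the one mentioned above is a scalar linear-Gaussian sketch with a Kalman filter. The dynamics, noise levels and reliability cue below are assumptions for illustration only, not the dissertation's fitted trust model:

```python
import numpy as np

# Illustrative scalar trust state-space model with a Kalman filter.
# Dynamics: T[k+1] = a*T[k] + b*(u[k] - 0.5) + w[k], where u[k] marks whether
# the automation performed reliably; observations y[k] = T[k] + v[k] stand in
# for noisy behavioural or self-report measurements.
rng = np.random.default_rng(2)
a, b, q, r = 0.95, 0.4, 0.01, 0.05
T_true, T_hat, P = 0.2, 0.0, 1.0
errs_prior, errs_post = [], []
for _ in range(2000):
    u = 1.0 if rng.random() < 0.8 else 0.0     # reliable experience 80% of the time
    T_true = a * T_true + b * (u - 0.5) + rng.normal(0.0, np.sqrt(q))
    y = T_true + rng.normal(0.0, np.sqrt(r))
    T_hat, P = a * T_hat + b * (u - 0.5), a * a * P + q      # predict
    errs_prior.append(abs(T_true - T_hat))
    K = P / (P + r)                                          # measurement update
    T_hat, P = T_hat + K * (y - T_hat), (1.0 - K) * P
    errs_post.append(abs(T_true - T_hat))
```

Real-time estimates of this kind are what a trust-calibrating controller would act on, e.g. by varying automation transparency as the dissertation describes.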
|
366 |
Model-based co-design of sensing and control systems for turbo-charged, EGR-utilizing spark-ignited engines
Xu Zhang (9976460), 01 March 2021 (has links)
<div>Stoichiometric air-fuel ratio (AFR) and air/EGR flow control are essential control problems in today’s advanced spark-ignited (SI) engines to enable effective application of the three-way-catalyst (TWC) and generation of required torque. External exhaust gas recirculation (EGR) can be used in SI engines to help mitigate knock, reduce enrichment and improve efficiency [1]. However, the introduction of the EGR system increases the complexity of stoichiometric engine-out lambda and torque management, particularly for high BMEP commercial vehicle applications. This thesis develops advanced frameworks for sensing and control architecture designs to enable robust air handling system management, stoichiometric cylinder air-fuel ratio (AFR) control and three-way-catalyst emission control.</div><div><br></div><div><div>The first work in this thesis derives a physically-based, control-oriented model for turbocharged SI engines utilizing cooled EGR and flexible VVA systems. The model includes the impacts of modulating any combination of 11 actuators, including the throttle valve, bypass valve, fuel injection rate, waste-gate, high-pressure (HP) EGR, low-pressure (LP) EGR, number of firing cylinders, and intake and exhaust valve opening and closing timings. A new cylinder-out gas composition estimation method, based on information about the cylinder charge flow, injected fuel amount, residual gas mass and intake gas compositions, is proposed in this model. This method can be implemented in the control-oriented model as a critical input for estimating the exhaust manifold gas compositions. A new flow-based turbine-out pressure modeling strategy is also proposed in this thesis as a necessary input for estimating the LP EGR flow rate. Incorporated with these two sub-models, the control-oriented model is capable of capturing the dynamics of pressure, temperature and gas compositions in the manifolds and the cylinder.
Thirteen physical parameters, including intake, boost and exhaust manifolds’ pressures, temperatures, unburnt and burnt mass fractions as well as the turbocharger speed, are defined as state variables. The outputs such as flow rates and AFR are modeled as functions of selected states and inputs. The control-oriented model is validated with a high fidelity SI engine GT-Power model for different operating conditions. The novelty in this physical modeling work includes the development and incorporation of the cylinder-out gas composition estimation method and the turbine-out pressure model in the control-oriented model.</div></div><div><br></div><div><div>The second part of the work outlines a novel sensor selection and observer design algorithm for linear time-invariant systems with both process and measurement noise based on <i>H</i>2 optimization to optimize the tradeoff between the observer error and the number of required sensors. The optimization problem is relaxed to a sequence of convex optimization problems that minimize the cost function consisting of the <i>H</i>2 norm of the observer error and the weighted <i>l</i>1 norm of the observer gain. An LMI formulation allows for efficient solution via semi-definite programming. The approach is applied here, for the first time, to a turbo-charged spark-ignited (SI) engine using exhaust gas recirculation to determine the optimal sensor sets for real-time intake manifold burnt gas mass fraction estimation. Simulation with the candidate estimator embedded in a high fidelity engine GT-Power model demonstrates that the optimal sensor sets selected using this algorithm have the best <i>H</i>2 estimation performance. Sensor redundancy is also analyzed based on the algorithm results.
This algorithm is applicable to any type of modern internal combustion engine to reduce the system design time and experimental effort typically required for selecting optimal sensor sets.</div></div><div><br></div><div><div>The third study develops a model-based sensor selection and controller design framework for robust control of air-fuel-ratio (AFR), air flow and EGR flow for turbocharged stoichiometric engines using low pressure EGR, waste-gate turbo-charging, intake throttling and variable valve timing. Model uncertainties, disturbances, transport delays, sensor and actuator characteristics are considered in this framework. Based on the required control performance and candidate sensor sets, the framework synthesizes an H∞ feedback controller and evaluates the viability of the candidate sensor set through analysis of the structured singular value μ of the closed-loop system in the frequency domain. The framework can also be used to understand if relaxing the controller performance requirements enables the use of a simpler (less costly) sensor set. The sensor selection and controller co-design approach is applied here, for the first time, to turbo-charged engines using exhaust gas recirculation. High fidelity GT-Power simulations are used to validate the approach. The novelty of the work in this part can be summarized as follows: (1) A novel control strategy is proposed for stoichiometric SI engines using low pressure EGR to simultaneously satisfy both the AFR and air/EGR-path control performance requirements; (2) A parametric method to simultaneously select the sensors and design the controller is proposed for the first time for internal combustion engines.</div></div><div><br></div><div><div>In the fourth part of the work, a novel two-loop estimation and control strategy is proposed to reduce the emissions of the three-way-catalyst (TWC).
In the outer loop, an FOS estimator consisting of a TWC model and an extended Kalman filter is used to estimate the current TWC fractional oxygen state (FOS), and a robust controller is used to control the TWC FOS by manipulating the desired engine λ. The outer-loop estimator and controller are combined with an existing inner-loop controller. The inner-loop controller controls the engine λ based on the desired λ value, and the control inaccuracies are considered and compensated by the outer-loop robust controller. This control strategy achieves good emission-reduction performance and has advantages over the constant-λ control strategy and the conventional two-loop switch-type control strategy.</div></div>
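The trade-off that the H2 / weighted-l1 sensor-selection algorithm of the second part optimizes can be pictured with a brute-force stand-in: score every candidate sensor subset on a toy system by the steady-state Kalman filtering error it permits. The LMI relaxation in the thesis exists precisely to avoid this combinatorial enumeration; all numbers below are assumed:

```python
import numpy as np
from itertools import combinations

# Toy 3-state system with three candidate sensors (each reads one state).
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.7]])
C_all = np.eye(3)
Q, r = 0.05 * np.eye(3), 0.1         # process / measurement noise covariances

def steady_state_trace(rows):
    # Iterate the discrete-time filter Riccati recursion to (near) convergence
    # and report trace(P): the estimation-error cost of this sensor subset.
    C = C_all[list(rows)]
    R = r * np.eye(len(rows))
    P = np.eye(3)
    for _ in range(500):
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        P = A @ (P - K @ C @ P) @ A.T + Q
    return float(np.trace(P))

scores = {rows: steady_state_trace(rows)
          for n in (1, 2, 3) for rows in combinations(range(3), n)}
# more sensors never hurt: the full set attains the smallest error trace,
# and the gap to cheaper subsets quantifies the accuracy/cost trade-off
```

Comparing, say, the best single sensor against the full set mirrors the thesis's question of whether relaxed performance requirements permit a simpler (less costly) sensor set.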
|
367 |
Entwicklung einer Erregereinheit zur Erzeugung hochfrequenter Schwingungen beim Drahtsägen
Krüger, Thomas, 14 November 2014
Bei der Fertigung von Siliziumwafern durch Zerteilen eines Siliziumblockes kommt das Drahttrennläppverfahren zur Anwendung. Es wird eine Erregereinheit entwickelt, die den Siliziumblock während des Schneidprozesses zu Schwingungen anregt. Die Verwendung von Piezoaktoren ermöglicht mehrachsige Schwingungen mit variabler Frequenz und Amplitude. Wesentliche Bestandteile der Arbeit sind experimentelle Untersuchungen an den Aktoren und der gesamten Erregereinheit sowie die Modellierung des Gesamtsystems mit Hilfe linearer Einzelmodelle. Es zeigt sich, dass die Aktoren bei dynamischen Anwendungen linear beschrieben werden können, während das Gesamtmodell besonders in den Resonanzbereichen aufgrund montagebedingter Einflüsse Schwächen aufweist. Abschließend wird der Einfluss der Schwingungsanregung beim Drahtsägen untersucht. Aus den Versuchen geht hervor, dass im getesteten Frequenz- und Amplitudenbereich sowohl hohe Erregerfrequenzen als auch –amplituden geringere Schnittkräfte zur Folge haben. / In the production of silicon wafers by slicing a silicon ingot, the wire slurry-sawing (lapping) process is used. An excitation unit is developed that excites the silicon ingot to vibrate during the cutting process. The use of piezo actuators enables multi-axis vibrations with variable frequency and amplitude. Essential parts of the work are experimental investigations of the actuators and of the complete excitation unit, as well as the modelling of the overall system from linear sub-models. It is shown that the actuators can be described linearly in dynamic applications, whereas the overall model shows weaknesses, particularly in the resonance regions, due to assembly-related influences. Finally, the influence of the vibration excitation during wire sawing is investigated. The experiments show that, within the tested frequency and amplitude range, both high excitation frequencies and high excitation amplitudes result in lower cutting forces.
|
368 |
Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems
Peiyi Zhang (11166777), 26 July 2021 (has links)
<p>The Ensemble Kalman filter (EnKF) has achieved great successes in data assimilation in atmospheric and oceanic sciences, but its failure to converge to the correct filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as the particle filter or the sequential importance sampler, do not scale well to the dimension of the system and the sample size of the datasets. In this dissertation, we address these difficulties in a coherent way.</p><p><br></p><p> </p><p>In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from the stochastic gradient Langevin-type algorithms, which make it scalable with respect to both the dimension and sample size. We prove that the LEnKF converges to the correct filtering distribution in Wasserstein distance under the big-data scenario in which the dynamic system consists of a large number of stages and has a large number of samples observed at each stage, and thus it can be used for uncertainty quantification. We reformulate the Bayesian inverse problem as a dynamic state estimation problem based on subsampling techniques and the Langevin diffusion process. We illustrate the performance of the LEnKF using a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.</p><p><br></p><p> </p><p>In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF algorithm was developed for Gaussian dynamic systems containing no unknown parameters.
We propose the so-called stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly based on the state variables simulated by the LEnKF under the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF algorithm (Extended LEnKF) by introducing a latent Gaussian measurement variable to dynamic systems. These two extensions inherit the scalability of the LEnKF algorithm with respect to the dimension and sample size. The numerical results indicate that they outperform other existing methods in both state/parameter estimation and uncertainty quantification.</p>
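For context, a single analysis step of the classical stochastic EnKF that the LEnKF reformulates can be sketched on a toy linear-Gaussian problem (perturbed-observation form; all numbers are illustrative):

```python
import numpy as np

# One stochastic-EnKF analysis step: 2-dimensional state, only the first
# coordinate observed. The LEnKF replaces this update with a Langevin-type
# move, but the forecast-analysis structure it inherits looks like this.
rng = np.random.default_rng(3)
n_ens = 200
H = np.array([[1.0, 0.0]])                       # observation operator
R = np.array([[0.25]])                           # observation noise variance
ens = rng.normal(0.0, 1.0, size=(n_ens, 2))      # forecast ensemble ~ N(0, I)
y = np.array([1.5])                              # the observation

Pf = np.cov(ens.T)                               # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
y_pert = y + rng.normal(0.0, np.sqrt(R[0, 0]), size=(n_ens, 1))
analysis = ens + (y_pert - ens @ H.T) @ K.T      # updated (analysis) ensemble
# exact posterior mean of the observed coordinate is 1.5 / (1 + 0.25) = 1.2
```

In this linear-Gaussian setting the analysis ensemble samples the correct posterior up to Monte Carlo error; the convergence failure motivating the LEnKF arises in the nonlinear, non-Gaussian settings the dissertation targets.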
|
369 |
Contrôle d'un système multi-terminal HVDC (MTDC) et étude des interactions entre les réseaux AC et le réseau MTDC. / Control of a multi-terminal HVDC (MTDC) system and study of the interactions between the MTDC and the AC grids.
Akkari, Samy, 29 September 2016 (has links)
La multiplication des projets HVDC de par le monde démontre l'engouement toujours croissant pour cette technologie de transport de l'électricité. La grande majorité de ces transmissions HVDC correspondent à des liaisons point-à-point et se basent sur des convertisseurs AC/DC de type LCC ou VSC à 2 ou 3 niveaux. Les travaux de cette thèse se focalisent sur l'étude, le contrôle et la commande de systèmes HVDC de type multi-terminal (MTDC), avec des convertisseurs de type VSC classique ou modulaire multi-niveaux. La première étape consiste à obtenir les modèles moyens du VSC classique et du MMC. La différence fondamentale entre ces deux convertisseurs, à savoir la possibilité pour le MMC de stocker et de contrôler l'énergie des condensateurs des sous-modules, est détaillée et expliquée. Ces modèles et leurs commandes sont ensuite linéarisés et mis sous forme de représentations d'état, puis validés en comparant leur comportement à ceux de modèles de convertisseurs plus détaillés à l'aide de logiciels de type EMT. Une fois validés, les modèles d'état peuvent être utilisés afin de générer le modèle d'état de tout système de transmissions HVDC, qu'il soit point-à-point ou MTDC. La comparaison d'une liaison HVDC à base de VSCs classiques puis de MMCs est alors réalisée. Leurs valeurs propres sont étudiées et comparées, et les modes ayant un impact sur la tension DC sont identifiés et analysés. Cette étude est ensuite étendue à un système MTDC à 5 terminaux, et son analyse modale permet à la fois d'étudier la stabilité du système, mais aussi de comprendre l'origine de ses valeurs propres ainsi que leur impact sur la dynamique du système. La méthode de décomposition en valeurs singulières permet ensuite d'obtenir un intervalle de valeurs possibles pour le paramètre de"voltage droop", permettant ainsi le contrôle du système MTDC tout en s'assurant qu'il soit conforme à des contraintes bien définies, comme l'écart maximal admissible en tension DC. 
Finally, a frequency-droop scheme, allowing the converters to take part in regulating the frequency of the AC grids to which they are connected, is studied. The frequency droop is used jointly with the voltage droop in order to guarantee proper operation of both the AC and the DC sides. However, using the two droops together creates an undesirable coupling between the two controls. These interactions are mathematically quantified and a correction to the frequency-droop parameter is proposed. These results are then validated by EMT simulations and by tests on the MTDC platform of the L2EP laboratory. / HVDC transmission systems are widely used worldwide, mostly in the form of back-to-back and point-to-point HVDC, using either thyristor-based LCC or IGBT-based VSC. With the recent deployment of the INELFE HVDC link between France and Spain, and the commissioning in China of a three-terminal HVDC transmission system using Modular Multilevel Converters (MMCs), a modular design of voltage source converters, the focus of the scientific community has shifted to the analysis and control of MMC-based HVDC transmission systems. In this thesis, the average-value models of both a standard 2-level VSC and an MMC are proposed, and the most interesting difference between the two converter technologies, namely the control of the stored energy in the MMC, is emphasised and explained. These models are then linearised, expressed in state-space form and validated by comparing their behaviour to that of more detailed models in EMT programs. Afterwards, these state-space representations are used in the modelling of HVDC transmission systems, either point-to-point or Multi-Terminal HVDC (MTDC). A modal analysis is performed on an HVDC link, for both 2-level VSCs and MMCs. The modes of the two systems are specified and compared, and the independent control of the DC voltage and the DC current in the case of an MMC is illustrated. 
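The modal analysis described in the abstract can be illustrated with a deliberately simplified sketch: linearise a small DC-side model, form its state matrix, and inspect the eigenvalues (modes). All parameter values below are invented placeholders, not figures from the thesis, and the two-state model is far cruder than the full converter models it alludes to.

```python
import numpy as np

# Minimal modal-analysis sketch on a toy DC link.
# State x = [v_dc, i_dc]: a DC-bus capacitor regulated by a
# proportional voltage controller, feeding a resistive-inductive cable.
# All values are illustrative assumptions.
C = 100e-6   # equivalent DC-bus capacitance [F]
L = 50e-3    # cable inductance [H]
R = 0.5      # cable resistance [ohm]
kp = 2.0     # proportional gain of the DC-voltage controller

# Linearised dynamics dx/dt = A @ x:
#   C dv/dt = -kp*v - i   (controller current minus cable current)
#   L di/dt =  v - R*i    (cable voltage drop)
A = np.array([
    [-kp / C, -1.0 / C],
    [ 1.0 / L, -R / L],
])

modes = np.linalg.eigvals(A)
stable = all(m.real < 0 for m in modes)
print("modes:", modes)
print("stable:", stable)  # prints: stable: True
```

Eigenvalues with negative real parts indicate the linearised model is stable; the mode closest to the imaginary axis dominates the DC-voltage response time, which is the kind of information the thesis extracts from its (much larger) state-space models.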
This analysis is extended to the scope of a 5-terminal HVDC system in order to perform a stability analysis, understand the origin of the system dynamics and identify the dominant DC-voltage mode that dictates the DC-voltage response time. Using the singular value decomposition method on the MTDC system, the voltage-droop gains of the controllers are then designed so that the system operates within physical constraints, such as the maximum DC-voltage deviation and the maximum admissible current in the power electronics. Finally, a supplementary droop, the frequency-droop control, is proposed so that MTDC systems also participate in the frequency regulation of the onshore AC grids. However, this controller interacts with the voltage-droop controller. This interaction is mathematically quantified and a corrected frequency-droop gain is proposed. The control is then illustrated with an application on the physical converters of the Twenties project mock-up.
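As a hedged illustration of the voltage-droop principle underlying the gain design above (not the thesis' SVD-based procedure), the steady-state sharing of a power imbalance among droop-controlled stations follows directly from the droop law ΔP_i = -k_i·ΔV_dc, under the simplifying assumption of a lossless DC grid with a single common voltage. The gains and power figures are invented for the example.

```python
# Steady-state droop sharing sketch: after a station trips and removes
# delta_P of infeed, the droop-controlled stations jointly compensate it.
# From sum_i(-k_i * dV) = -delta_P it follows that dV = delta_P / sum(k_i).
droop_gains = [20.0, 20.0, 10.0]  # MW per kV, three droop stations (assumed)
delta_P = -500.0                   # MW lost by a non-droop station (assumed)

dV = delta_P / sum(droop_gains)          # common DC-voltage deviation [kV]
shares = [-k * dV for k in droop_gains]  # MW picked up by each station

print(f"DC-voltage deviation: {dV:.1f} kV")  # prints: -10.0 kV
print("power shares [MW]:", shares)          # prints: [200.0, 200.0, 100.0]
```

The same relation explains the design trade-off the abstract mentions: larger droop gains reduce the DC-voltage deviation but demand larger current swings from the converters, which is why the gains must be bounded by the physical constraints of the power electronics.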
|
370 |
On-the-Fly Dynamic Dead Variable Analysis Self, Joel P. 22 March 2007 (has links) (PDF)
State explosion in model checking continues to be the primary obstacle to widespread use of software model checking. The large input ranges of the variables used in software are the main cause of state explosion, and as software grows in size and complexity the problem only becomes worse. Model-checking research into data abstraction as a way of mitigating state explosion has therefore become increasingly important. Data abstractions aim to reduce the effect of large input ranges. This work focuses on a static program analysis technique called dead variable analysis, whose goal is to discover variable assignments that are never used. When applied to model checking, this allows us to ignore the entire input range of dead variables and thus reduce the size of the explored state space. Prior research into dead variable analysis for model checking does not make full use of the dynamic run-time information that is present during model checking. We present an algorithm for intraprocedural dead variable analysis that uses dynamic run-time information to find more dead variables on-the-fly and further reduce the size of the explored state space. We introduce a definition of the maximal state-space reduction possible through an on-the-fly dead variable analysis and then show that our algorithm produces a maximal reduction in the absence of non-determinism.
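As a purely static illustration of the underlying idea (the thesis' contribution is the dynamic, on-the-fly variant that exploits run-time information), a classic backward liveness pass over a straight-line program marks each assignment whose defined variable is not live immediately afterwards. The tiny four-statement program below is hypothetical.

```python
# Backward liveness sketch: each statement is (defined_var, set_of_used_vars).
# An assignment is dead if its defined variable is not live just after it;
# a model checker could then ignore that variable's value at that point.
program = [
    ("x", {"a"}),   # 0: x = a
    ("t", {"x"}),   # 1: t = x + 1   <- t is never read again: dead assignment
    ("x", {"b"}),   # 2: x = b
    ("r", {"x"}),   # 3: r = x * 2
]

live = {"r"}        # variables assumed live at program exit
dead_assignments = []
for i in reversed(range(len(program))):
    defined, uses = program[i]
    if defined not in live:
        dead_assignments.append(i)
    # standard transfer function: live_in = (live_out - def) | use
    live = (live - {defined}) | uses

print("dead assignments at statements:", sorted(dead_assignments))  # prints: [1]
```

The static pass above must conservatively assume every path is feasible; the on-the-fly analysis described in the abstract sharpens this by using the concrete states seen during model checking, which is why it can find dead variables that a purely static pass misses.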
|