341

Model-based federation of systems of modelling / Fédération dirigée par les modèles des systèmes de modélisation

Kamdem Simo, Freddy 26 September 2017
The engineering of complex systems and systems of systems often leads to complex modelling activities (MA). The challenges exhibited by MA include understanding the context in which they are carried out and their impact on the life cycles of the models they produce, and ultimately providing support for mastering them. Addressing these challenges with a formal approach is the central aim of this thesis. After discussing related work in systems engineering in general, and in the co-engineering of the system to be made (the product) and the system for making it (the project) in particular, we develop a methodology named MODEF that aims to master the operation of MA. MODEF consists in: (1) characterizing MA as a system in its own right (and more globally as a federation of systems); (2) iteratively architecting this system through the modelling of the conceptual content of the models produced by MA and their life cycles, and of the tasks carried out within MA and their effects on these life cycles; (3) specifying expectations over these life cycles; and (4) analysing the models of MA against these expectations (and possibly task constraints) - to check how far the expectations are achievable - via the synthesis of the acceptable behaviours. From a practical perspective, exploiting the results of the analysis makes it possible to monitor the modelling tasks by exposing their impact on the models they produce: it yields insightful data on how the MA unfold, end to end, and on how they could unfold, from which preventive or corrective actions can be taken. We illustrate this on two case studies (the operation of a supermarket and the modelling of the functional coverage of a system). From a foundational perspective, the formal semantics of the three kinds of models involved and the formalism for expectations are first given; the analysis and exploitation algorithms are then presented, and the approach is briefly compared with model-checking and system-synthesis approaches. Finally, two enablers that ease the implementation of MODEF are presented. The first is a modular implementation of MODEF's building blocks. The second is a federated architecture (FA) of models that aims to ease working with formal models in practice. FA is formalised within the abstract framework of category theory; to bridge the gap between abstraction and implementation, basic data structures and algorithms for using FA in practice are proposed. Several perspectives on the different components of MODEF conclude this work.
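A rough illustration of step (4), under assumptions of our own: if model life cycles are represented as a finite transition system and expectations as a predicate over states, the acceptable behaviours can be synthesized by a simple reachability computation. The sketch below is a generic filter of this kind, not MODEF's actual analysis algorithm; all names and the toy life cycle are illustrative.

```python
from collections import deque

def synthesize_acceptable_states(initial, transitions, expectation):
    """Explore a finite transition system and keep only the states that are
    reachable through states satisfying `expectation`.

    initial     : iterable of initial states (hashable)
    transitions : dict state -> iterable of successor states
    expectation : predicate state -> bool (the expectation over life cycles)
    Returns the set of acceptable reachable states.
    """
    acceptable = set(s for s in initial if expectation(s))
    queue = deque(acceptable)
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, ()):
            if expectation(nxt) and nxt not in acceptable:
                acceptable.add(nxt)
                queue.append(nxt)
    return acceptable

# Toy model life cycle: states are (model, status) pairs.
lifecycle = {
    ("M1", "draft"): [("M1", "reviewed")],
    ("M1", "reviewed"): [("M1", "released"), ("M1", "obsolete")],
}
ok = synthesize_acceptable_states(
    initial=[("M1", "draft")],
    transitions=lifecycle,
    expectation=lambda s: s[1] != "obsolete",  # expectation: never obsolete
)
print(sorted(ok))
```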
342

Previsão de inflação utilizando modelos de séries temporais / Inflation forecasting using time series models

Bonno, Simone Jager Patrocinio 23 January 2014
This work compares time series models for short-term forecasting of Brazilian inflation, as measured by the Consumer Price Index (IPCA). SARIMA (Box-Jenkins) models and structural state-space models estimated with the Kalman filter were considered. The models were estimated on the monthly IPCA series from March 2003 to March 2012, the SARIMA models in EViews and the structural models in STAMP. For out-of-sample validation, one-step-ahead forecasts for April 2012 to March 2013 were evaluated against the main criteria of predictive ability proposed in the literature. The study concludes that, although the structural model decomposes the series into directly interpretable components that can be studied separately and incorporates explanatory variables in a simple way, the SARIMA model forecast Brazilian inflation better over the period and horizon considered. Another positive aspect is that a SARIMA model is straightforward to implement, and forecasts from it are obtained simply and directly.
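A minimal sketch of the kind of comparison described here, using statsmodels: a SARIMA model and a structural (local level plus seasonal) state-space model estimated by the Kalman filter are refit on an expanding window and compared on rolling one-step-ahead forecasts. The series is synthetic and the model orders are illustrative; they stand in for the IPCA data and the specifications actually used in the dissertation.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.statespace.structural import UnobservedComponents

# Synthetic monthly "inflation" series stands in for the IPCA data.
rng = np.random.default_rng(0)
idx = pd.date_range("2003-03-01", periods=121, freq="MS")
y = pd.Series(0.45 + 0.1 * np.sin(2 * np.pi * np.arange(121) / 12)
              + rng.normal(0, 0.15, 121), index=idx)

train, test = y[:-12], y[-12:]            # hold out the last 12 months
preds = {"SARIMA": [], "Structural": []}

for t in range(len(test)):                # rolling one-step-ahead forecasts
    history = y[: len(train) + t]
    sarima = SARIMAX(history, order=(1, 0, 1),
                     seasonal_order=(1, 0, 0, 12)).fit(disp=False)
    struct = UnobservedComponents(history, level="local level",
                                  seasonal=12).fit(disp=False)
    preds["SARIMA"].append(sarima.forecast(1).iloc[0])
    preds["Structural"].append(struct.forecast(1).iloc[0])

for name, p in preds.items():
    rmse = np.sqrt(np.mean((np.array(p) - test.values) ** 2))
    print(f"{name}: one-step-ahead RMSE = {rmse:.4f}")
```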
343

Modelagem computacional de dados e controle inteligente no espaço de estado / State space computational data modelling and intelligent control

Del Real Tamariz, Annabell 15 July 2005
Advisor: Celso Pascoli Bottura. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. / This study presents contributions to state-space computational modelling of multivariable data, with both time-invariant and time-varying discrete linear systems. The MOESP_AOKI algorithm is proposed for deterministic-stochastic modelling of noisy data. Using multilayer recurrent neural networks, algorithms are proposed for solving the discrete-time algebraic Riccati equation, as well as the associated discrete-time algebraic Riccati inequality, via linear matrix inequalities. A gain-scheduling adaptive control scheme based on neural networks is proposed for discrete multivariable time-varying systems identified by the MOESP_VAR algorithm, which is also proposed in this thesis. In summary, an intelligent control structure for discrete multivariable time-varying systems, through an approach that may be called ILPV (Intelligent Linear Parameter Varying), is proposed and implemented. An intelligent LPV controller for data computationally modelled by the MOESP_VAR algorithm is built, implemented and tested with good results. / Doctorate in Electrical Engineering (Automation).
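The thesis solves the discrete-time algebraic Riccati equation with recurrent neural networks and LMIs; purely as a point of reference for what is being solved, the sketch below computes the same DARE with a standard SciPy routine on an illustrative system (this is not the neural approach proposed in the thesis).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time system (not from the thesis).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # input weighting

# Discrete-time algebraic Riccati equation:
#   P = A'PA - A'PB (R + B'PB)^-1 B'PA + Q
P = solve_discrete_are(A, B, Q, R)

# Associated optimal (LQR) state-feedback gain.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("P =\n", P)
print("K =", K)

# Residual check that P satisfies the DARE.
res = A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A) + Q - P
print("DARE residual norm:", np.linalg.norm(res))
```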
344

Modèles de connaissance à paramètres identifiables expérimentalement pour les systèmes de refroidissement dessiccatif couplés à un système solaire / Knowledge models with identifiable parameters of solar desiccant cooling systems

Ghazal, Roula 12 April 2013
A Desiccant Air Unit (DAU) offers complete control of air temperature and humidity in the conditioned space. Its key component is the desiccant wheel, which provides the functions of air desiccation and regeneration. The aim of this study is to develop a methodology for obtaining a dynamic model of the desiccant wheel which can be used in model-based control algorithms for the DAU. The desiccant wheel can be regarded as a multi-input/multi-output (MIMO) system. The first part of the thesis is devoted to the modelling of the desiccant wheel based on energy and mass balance equations; the resulting set of equations is formulated as a second-order state-space system without delay. The second part of the thesis concerns the experimental identification of the parameters of the state-space model of the desiccant wheel using black-box and gray-box approaches.
In the black-box case, all the parameters of the model are identified experimentally: the identified parameters take the values that minimize the difference between the model output and the experimental measurements, and they have no physical significance. Although precise within the range of input variation in which the parameters were identified, this model gives significant errors in other input domains. The parameters of the gray-box model are physically significant. Compared with the black-box models, the gray-box model was less accurate in the domains for which the parameters were identified, but notably more robust when applied to other ranges of the inputs. Since the parameters are related to physical properties, their values do not vary significantly with changes of the operating point used for identification. For the gray-box approach, the parameter values obtained for the linear models are almost identical for all local models on the desiccation side and for all local models on the regeneration side, suggesting that a local model may be valid over the complete range of input variables. Using these results, a final model of the desiccant wheel was developed, comprising two global models: one for the desiccation side and another for the regeneration side. The third part of the thesis deals with the identification of the mass and heat transfer coefficients of the air within the desiccant wheel using a gray-box model. The mass transfer coefficient, the convective heat transfer coefficient and the Nusselt number were obtained by defining the variable parameters of the model as a function of a single variable and by expressing the constant parameters as a function of the geometric and material properties of the wheel. This work contributes to the development of a state-space model usable for the synthesis of control algorithms for the desiccant wheel.
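A minimal sketch of gray-box output-error identification of the kind described above: a second-order discrete state-space structure is assumed known, and a few physically meaningful parameters are fitted by least squares to measured input-output data. The model structure, parameter names and data below are illustrative, not the wheel model identified in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(theta, u, x0=np.zeros(2)):
    """Second-order discrete state-space model with two unknown parameters
    (a, b); the rest of the structure is assumed known (gray box)."""
    a, b = theta
    A = np.array([[1.0 - a, a],
                  [0.0,     1.0 - b]])
    B = np.array([[0.0], [b]])
    C = np.array([[1.0, 0.0]])
    x, y = x0.copy(), []
    for uk in u:
        y.append((C @ x).item())
        x = A @ x + (B * uk).ravel()
    return np.array(y)

# Illustrative "measured" data generated from true parameters plus noise.
rng = np.random.default_rng(1)
u = np.sin(0.05 * np.arange(400))
y_meas = simulate([0.08, 0.03], u) + rng.normal(0, 0.01, 400)

# Output-error identification: minimize the model/measurement mismatch.
fit = least_squares(lambda th: simulate(th, u) - y_meas,
                    x0=[0.5, 0.5], bounds=([0, 0], [1, 1]))
print("identified parameters:", fit.x)
```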
345

Improvements in Genetic Approach to Pole Placement in Linear State Space Systems Through Island Approach PGA with Orthogonal Mutation Vectors

Cassell, Arnold 01 January 2012
This thesis describes a genetic approach for shaping the dynamic response of linear state-space systems through pole placement, and compares it with an island-approach parallel genetic algorithm (PGA) that incorporates orthogonal mutation vectors to increase sub-population specialization and decrease convergence time. Both approaches generate a gain vector K, used in state feedback to alter the poles of the system so as to meet step-response requirements such as settling time and percent overshoot. To obtain the gain vector K with the proposed genetic approaches, a pair of ideal, desired poles is calculated first. Those poles serve as the basis from which an initial population is created. In the island approach, they serve as the basis for n populations, where n is the dimension of the required K vector. Each member of the population is tested for its fitness (the degree to which it matches the criteria). A new population is created each generation from the results of the previous iteration, until the criteria are met or a certain number of generations have passed. Several case studies are provided to illustrate that the new approach works, and to compare the performance of the two approaches.
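A minimal single-population sketch of the idea (without island parallelism or orthogonal mutation vectors): candidates are gain vectors K, and fitness measures how closely the closed-loop poles of A - BK match the desired poles. The system matrices, desired poles and GA settings are illustrative.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
desired = np.array([-2 + 2j, -2 - 2j])   # target closed-loop poles

def fitness(K):
    poles = np.linalg.eigvals(A - B @ K.reshape(1, -1))
    # Negative total distance between sorted actual and desired poles.
    return -np.sum(np.abs(np.sort_complex(poles) - np.sort_complex(desired)))

rng = np.random.default_rng(0)
pop = rng.normal(0, 5, size=(40, 2))                 # initial population of K vectors
for gen in range(200):
    scores = np.array([fitness(k) for k in pop])
    elite = pop[np.argsort(scores)[-10:]]            # keep the 10 best
    children = elite[rng.integers(0, 10, 30)] + rng.normal(0, 0.5, (30, 2))  # mutate
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(k) for k in pop])]
print("best K:", best)
print("closed-loop poles:", np.linalg.eigvals(A - B @ best.reshape(1, -1)))
```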
346

Essays on bayesian analysis of state space models with financial applications

Gingras, Samuel 05 1900
This thesis is organized in three chapters that develop posterior simulation methods for Bayesian inference in state-space models and econometric models for the analysis of financial data.

In Chapter 1, we consider the problem of posterior simulation in state-space models with non-linear, non-Gaussian observables and univariate Gaussian states. We propose a new Markov chain Monte Carlo (MCMC) method that updates the parameter vector of the state dynamics and the state sequence together as a single block. The MCMC proposal is drawn in two steps: the marginal proposal distribution for the parameter vector is constructed using an approximation of the gradient and Hessian of its log posterior density, with the state vector integrated out; the conditional proposal distribution for the state sequence, given the proposal of the parameter vector, is the one described in McCausland (2012). Computation of the approximate gradient and Hessian combines computational by-products of the state draw with a modest amount of additional computation. We compare the numerical efficiency of our posterior simulation with that of the Ancillarity-Sufficiency Interweaving Strategy (ASIS) described in Kastner & Frühwirth-Schnatter (2014), using the Gaussian stochastic volatility model and the panel of 23 daily exchange rates from that paper. For computing the posterior mean of the volatility persistence parameter, our numerical efficiency is 6-27 times higher; for the volatility of volatility parameter, 18-53 times higher. In a second example we analyse transaction counts using dynamic Poisson and Gamma-Poisson models. Despite the non-Gaussianity of the count data, we obtain high numerical efficiency, not much lower than that reported in McCausland (2012) for a sampler that involves pre-computing the shape of a static posterior distribution of parameters.

In Chapter 2, we propose a new stochastic conditional duration (SCD) model for the analysis of high-frequency financial transaction data. We identify undesirable features of existing parametric conditional duration densities and propose a new family of flexible conditional densities capable of matching a wide variety of distributions with moderately varying hazard functions. Guided by theoretical considerations from queuing theory, we introduce nonparametric deviations around a central exponential distribution, which we argue is a sound first-order model for financial durations, using a Bernstein density. The resulting density is not only flexible, in the sense that it can approximate any continuous density on [0, ∞) arbitrarily closely provided it consists of a large enough number of terms, but also amenable to shrinkage towards the exponential distribution. Thanks to highly efficient draws of the state variables, the numerical efficiency of our posterior simulation compares very favourably with those obtained in previous studies. We illustrate our methods using quotation data on equities traded on the Toronto Stock Exchange. We find that models using our proposed conditional density with fewer than four terms provide the best fit. The smooth variation found in the hazard functions, together with the possibility of them being non-monotonic, would have been impossible to capture with commonly used parametric specifications.

In Chapter 3, we introduce a new stochastic duration model for transaction times in asset markets. We argue that widely accepted rules for aggregating seemingly related trades mislead inference pertaining to durations between unrelated trades: while any two trades executed in the same second are probably related, it is extremely unlikely that all such pairs of trades are, in a typical sample. By placing uncertainty about which trades are related within our model, we improve inference for the distribution of durations between unrelated trades, especially near zero. We propose a discrete model for censored transaction times allowing for the zero-inflation that results from clusters of related trades. The discrete distribution of durations between unrelated trades arises from a flexible density amenable to shrinkage towards an exponential distribution. In an empirical example, we find that the underlying conditional hazard function for (uncensored) durations between unrelated trades varies much less than what most studies find; a discrete distribution for unrelated trades based on an exponential distribution provides a better fit for all three series analysed. We claim that this is because we avoid the statistical artifacts that arise from deterministic trade-aggregation rules and an unsuitable parametric distribution.
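The Chapter 2 construction can be illustrated in a few lines: transform a duration with the exponential CDF, evaluate a Bernstein (mixture-of-Beta) density on the transformed value, and multiply by the exponential density; uniform weights recover the exponential exactly, which is what makes shrinkage towards it natural. The weights and number of terms below are made up for illustration, not estimated values from the thesis.

```python
import numpy as np
from scipy.stats import beta, expon

def bernstein_exp_pdf(x, weights, scale=1.0):
    """Density of a duration built as nonparametric Bernstein deviations
    around a central Exponential(scale) distribution.

    With K terms and weights w_k summing to 1, the density is
        f(x) = [ sum_k w_k * Beta(u; k, K-k+1) ] * Exp(x; scale),
    where u = 1 - exp(-x/scale) is the exponential CDF transform.
    Uniform weights (w_k = 1/K) give back the exponential density exactly.
    """
    weights = np.asarray(weights, dtype=float)
    K = len(weights)
    u = expon.cdf(x, scale=scale)
    bern = sum(w * beta.pdf(u, k, K - k + 1)
               for k, w in enumerate(weights, start=1))
    return bern * expon.pdf(x, scale=scale)

x = np.linspace(0.01, 5, 5)
print(bernstein_exp_pdf(x, weights=[0.25, 0.25, 0.25, 0.25]))  # equals expon.pdf(x)
print(bernstein_exp_pdf(x, weights=[0.4, 0.3, 0.2, 0.1]))      # a tilted alternative
```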
347

Quadrocopter - stabilizace pomocí inerciálních snímačů / Quadrocopter - Sensory Subsystem

Bradáč, František January 2011
This diploma thesis deals with the processing of measured data from an inertial navigation system so that it can be used for stabilization. It first gives general information about aerial vehicles known as copters, with emphasis on the four-rotor construction called a quadrocopter. A mathematical model of the quadrocopter in state-space form is then derived, the particular implementation of the university-developed quadrocopter is described, and the design of the data-processing algorithm is presented together with measured results. Finally, the achieved results are discussed.
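As one common way of turning raw inertial measurements into an attitude estimate usable for stabilization, the sketch below implements a generic complementary filter fusing gyroscope and accelerometer data. It illustrates the kind of processing involved, not the specific algorithm designed in the thesis; the sample rate, filter gain and synthetic data are assumptions.

```python
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    """Fuse gyroscope angular rate (rad/s) and accelerometer-derived angle (rad)
    into a single attitude estimate: high-pass the integrated gyro, low-pass
    the accelerometer: angle = alpha*(angle + gyro*dt) + (1-alpha)*accel_angle.
    """
    angle = 0.0
    estimates = []
    for w, a in zip(gyro_rate, accel_angle):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        estimates.append(angle)
    return np.array(estimates)

# Synthetic data: slow 0.2 rad tilt with gyro bias and accelerometer noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
true_angle = 0.2 * np.sin(0.5 * t)
gyro = np.gradient(true_angle, 0.01) + 0.02 + rng.normal(0, 0.05, t.size)  # biased, noisy rate
accel = true_angle + rng.normal(0, 0.05, t.size)                           # noisy angle

est = complementary_filter(gyro, accel)
print("RMS attitude error:", np.sqrt(np.mean((est - true_angle) ** 2)))
```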
348

Reimagining Human-Machine Interactions through Trust-Based Feedback

Kumar Akash (8862785) 17 June 2020
Intelligent machines, and more broadly, intelligent systems, are becoming increasingly common in the everyday lives of humans. Nonetheless, despite significant advancements in automation, human supervision and intervention are still essential in almost all sectors, ranging from manufacturing and transportation to disaster management and healthcare. These intelligent machines interact and collaborate with humans in a way that demands a greater level of trust between human and machine. While a lack of trust can lead to a human's disuse of automation, over-trust can result in a human trusting a faulty autonomous system, which could have negative consequences for the human. Therefore, human trust should be calibrated to optimize these human-machine interactions. This calibration can be achieved by designing human-aware automation that can infer human behavior and respond accordingly in real-time.

In this dissertation, I present a probabilistic framework to model and calibrate a human's trust and workload dynamics during his/her interaction with an intelligent decision-aid system. More specifically, I develop multiple quantitative models of human trust, ranging from a classical state-space model to a classification model based on machine learning techniques. Both models are parameterized using data collected through human-subject experiments. Thereafter, I present a probabilistic dynamic model to capture the dynamics of human trust along with human workload. This model is used to synthesize optimal control policies aimed at improving context-specific performance objectives that vary automation transparency based on human state estimation. I also analyze the coupled interactions between human trust and workload to strengthen the model framework. Finally, I validate the optimal control policies using closed-loop human subject experiments. The proposed framework provides a foundation toward widespread design and implementation of real-time adaptive automation based on human states for use in human-machine interactions.
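A classical state-space trust model of the kind mentioned above can be sketched as a scalar linear-Gaussian system whose hidden state is trust, driven by automation performance and observed through noisy self-reports, with a Kalman filter providing the real-time estimate. The dynamics, noise levels and variable names below are illustrative, not the dissertation's fitted model.

```python
import numpy as np

# Illustrative scalar trust dynamics:  T[k+1] = a*T[k] + b*perf[k] + w,
# observed via noisy self-reports:     y[k]   = T[k] + v.
a, b = 0.9, 0.5
q, r = 0.01, 0.25          # process / measurement noise variances

rng = np.random.default_rng(2)
n = 100
perf = (rng.random(n) > 0.2).astype(float)   # automation success (1) / failure (0)
T = np.zeros(n)
y = np.zeros(n)
for k in range(1, n):
    T[k] = a * T[k - 1] + b * perf[k - 1] + rng.normal(0, np.sqrt(q))
    y[k] = T[k] + rng.normal(0, np.sqrt(r))

# Kalman filter estimate of the hidden trust state.
T_hat, P = 0.0, 1.0
est = []
for k in range(n):
    # predict using the previous automation performance
    T_pred = a * T_hat + b * (perf[k - 1] if k > 0 else 0.0)
    P_pred = a * P * a + q
    # update with the noisy self-report
    K = P_pred / (P_pred + r)
    T_hat = T_pred + K * (y[k] - T_pred)
    P = (1 - K) * P_pred
    est.append(T_hat)

print("RMS trust estimation error:", np.sqrt(np.mean((np.array(est) - T) ** 2)))
```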
349

Model-based co-design of sensing and control systems for turbo-charged, EGR-utilizing spark-ignited engines

Xu Zhang (9976460) 01 March 2021
Stoichiometric air-fuel ratio (AFR) and air/EGR flow control are essential control problems in today's advanced spark-ignited (SI) engines to enable effective application of the three-way catalyst (TWC) and generation of the required torque. External exhaust gas recirculation (EGR) can be used in SI engines to help mitigate knock, reduce enrichment and improve efficiency [1]. However, the introduction of the EGR system increases the complexity of stoichiometric engine-out lambda and torque management, particularly for high-BMEP commercial vehicle applications. This thesis develops advanced frameworks for sensing and control architecture designs to enable robust air handling system management, stoichiometric cylinder air-fuel ratio (AFR) control and three-way-catalyst emission control.

The first work in this thesis derives a physically based, control-oriented model for turbocharged SI engines utilizing cooled EGR and flexible VVA systems. The model includes the impacts of modulation to any combination of 11 actuators, including the throttle valve, bypass valve, fuel injection rate, waste-gate, high-pressure (HP) EGR, low-pressure (LP) EGR, number of firing cylinders, and intake and exhaust valve opening and closing timings. A new cylinder-out gas composition estimation method, based on the cylinder charge flow, injected fuel amount, residual gas mass and intake gas compositions, is proposed in this model and serves as a critical input for estimating the exhaust manifold gas compositions. A new flow-based turbine-out pressure modeling strategy is also proposed as a necessary input for estimating the LP EGR flow rate. Incorporating these two sub-models, the control-oriented model captures the dynamics of pressure, temperature and gas compositions in the manifolds and the cylinder. Thirteen physical quantities, including the intake, boost and exhaust manifold pressures, temperatures, and unburnt and burnt mass fractions as well as the turbocharger speed, are defined as state variables. Outputs such as flow rates and AFR are modeled as functions of selected states and inputs. The control-oriented model is validated against a high-fidelity SI engine GT-Power model for different operating conditions. The novelty in this physical modeling work includes the development and incorporation of the cylinder-out gas composition estimation method and the turbine-out pressure model in the control-oriented model.

The second part of the work outlines a novel sensor selection and observer design algorithm for linear time-invariant systems with both process and measurement noise, based on H2 optimization, to optimize the trade-off between the observer error and the number of required sensors. The optimization problem is relaxed to a sequence of convex optimization problems that minimize a cost function consisting of the H2 norm of the observer error and the weighted l1 norm of the observer gain. An LMI formulation allows for efficient solution via semi-definite programming. The approach is applied here, for the first time, to a turbocharged spark-ignited (SI) engine using exhaust gas recirculation to determine the optimal sensor sets for real-time intake manifold burnt gas mass fraction estimation. Simulation with the candidate estimator embedded in a high-fidelity engine GT-Power model demonstrates that the optimal sensor sets selected using this algorithm have the best H2 estimation performance. Sensor redundancy is also analyzed based on the algorithm results. This algorithm is applicable to any type of modern internal combustion engine to reduce the system design time and experimental effort typically required for selecting optimal sensor sets.

The third study develops a model-based sensor selection and controller design framework for robust control of air-fuel ratio (AFR), air flow and EGR flow for turbocharged stoichiometric engines using low-pressure EGR, waste-gate turbocharging, intake throttling and variable valve timing. Model uncertainties, disturbances, transport delays, and sensor and actuator characteristics are considered in this framework. Based on the required control performance and candidate sensor sets, the framework synthesizes an H-infinity feedback controller and evaluates the viability of the candidate sensor set through analysis of the structured singular value μ of the closed-loop system in the frequency domain. The framework can also be used to understand whether relaxing the controller performance requirements enables the use of a simpler (less costly) sensor set. The sensor selection and controller co-design approach is applied here, for the first time, to turbocharged engines using exhaust gas recirculation. High-fidelity GT-Power simulations are used to validate the approach. The novelty of the work in this part is twofold: (1) a novel control strategy is proposed for stoichiometric SI engines using low-pressure EGR to simultaneously satisfy both the AFR and air/EGR-path control performance requirements; (2) a parametric method to simultaneously select the sensors and design the controller is proposed for the first time for internal combustion engines.

In the fourth part of the work, a novel two-loop estimation and control strategy is proposed to reduce the emissions of the three-way catalyst (TWC). In the outer loop, an FOS estimator consisting of a TWC model and an extended Kalman filter estimates the current TWC fractional oxygen state (FOS), and a robust controller regulates the TWC FOS by manipulating the desired engine λ. The outer-loop estimator and controller are combined with an existing inner-loop controller, which controls the engine λ based on the desired λ value; the inner-loop control inaccuracies are considered and compensated by the outer-loop robust controller. This control strategy achieves good emission reduction performance and has advantages over both a constant-λ control strategy and the conventional two-loop switch-type control strategy.
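The second-part formulation can be sketched as a convex program: a Lyapunov-type LMI bounds the steady-state observer error covariance (an H2-type measure), and a weighted group-l1 penalty on the columns of the transformed observer gain drives whole sensors to zero. The continuous-time relaxation below, written with CVXPY, uses assumed system and noise matrices and a single fixed weight; the thesis's exact discrete-time formulation and reweighting scheme may differ.

```python
import numpy as np
import cvxpy as cp

# Illustrative LTI system x' = A x + w,  y = C x + v  (not the engine model).
A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 1.0],
              [0.3, 0.0, -1.5]])
C = np.eye(3)                       # three candidate sensors
W = 0.1 * np.eye(3)                 # process noise intensity
V = 0.05 * np.eye(3)                # measurement noise intensity
lam = 5.0                           # sparsity weight (error vs. sensor count trade-off)

n, m = A.shape[0], C.shape[0]
X = cp.Variable((n, n), symmetric=True)   # X = P^-1, P bounds the error covariance
Z = cp.Variable((n, m))                   # Z = X @ L (transformed observer gain)
G = cp.Variable((n, n), symmetric=True)   # epigraph variable, G >= X^-1

Wh, Vh = np.linalg.cholesky(W), np.linalg.cholesky(V)
lyap = cp.bmat([[X @ A + A.T @ X - Z @ C - C.T @ Z.T, X @ Wh, Z @ Vh],
                [Wh.T @ X, -np.eye(n), np.zeros((n, m))],
                [Vh.T @ Z.T, np.zeros((m, n)), -np.eye(m)]])
constraints = [X >> 1e-6 * np.eye(n),
               lyap << 0,
               cp.bmat([[G, np.eye(n)], [np.eye(n), X]]) >> 0]

# trace(G) upper-bounds trace(P); the column norms of Z act as a group penalty,
# so a zero column of Z removes the corresponding sensor.
cost = cp.trace(G) + lam * cp.sum(cp.norm(Z, 2, axis=0))
cp.Problem(cp.Minimize(cost), constraints).solve(solver=cp.SCS)

L = np.linalg.solve(X.value, Z.value)     # recover the observer gain
print("column norms of L (near-zero => sensor can be dropped):",
      np.round(np.linalg.norm(L, axis=0), 4))
```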
350

Entwicklung einer Erregereinheit zur Erzeugung hochfrequenter Schwingungen beim Drahtsägen / Development of an excitation unit for generating high-frequency vibrations in wire sawing

Krüger, Thomas 14 November 2014
In the manufacture of silicon wafers by slicing a silicon ingot, the wire lapping (slurry wire sawing) process is used. An excitation unit is developed that excites the silicon ingot to vibrate during the cutting process. The use of piezo actuators enables multi-axis vibrations with variable frequency and amplitude. Essential parts of the work are experimental investigations of the actuators and of the complete excitation unit, as well as the modelling of the overall system from linear sub-models. It is shown that the actuators can be described linearly in dynamic applications, while the overall model shows weaknesses, particularly in the resonance regions, due to assembly-related influences. Finally, the influence of the vibration excitation on wire sawing is investigated. The experiments show that, within the tested frequency and amplitude range, both high excitation frequencies and high excitation amplitudes lead to lower cutting forces.
