61

Essays on Bayesian analysis of state space models with financial applications

Gingras, Samuel 05 1900 (has links)
This thesis is organized in three chapters that develop posterior simulation methods for Bayesian inference in state space models, and econometric models for the analysis of financial data.

In Chapter 1, we consider the problem of posterior simulation in state space models with non-linear, non-Gaussian observables and univariate Gaussian states. We propose a new Markov chain Monte Carlo (MCMC) method that updates the parameter vector of the state dynamics and the state sequence together as a single block. The MCMC proposal is drawn in two steps: the marginal proposal distribution for the parameter vector is constructed using an approximation of the gradient and Hessian of its log posterior density, with the state vector integrated out; the conditional proposal distribution for the state sequence, given the proposed parameter vector, is the one described in McCausland (2012). Computation of the approximate gradient and Hessian combines computational by-products of the state draw with a modest amount of additional computation. We compare the numerical efficiency of our posterior simulation with that of the Ancillarity-Sufficiency Interweaving Strategy (ASIS) described in Kastner & Frühwirth-Schnatter (2014), using the Gaussian stochastic volatility model and the panel of 23 daily exchange rates from that paper. For computing the posterior mean of the volatility persistence parameter, our numerical efficiency is 6-27 times higher; for the volatility of volatility parameter, 18-53 times higher. In a second example, we analyse transaction counts using dynamic Poisson and Gamma-Poisson models. Despite the non-Gaussianity of the count data, we obtain high numerical efficiency, not much lower than that reported in McCausland (2012) for a sampler that involves pre-computing the shape of a static posterior distribution of parameters.

In Chapter 2, we propose a new stochastic conditional duration (SCD) model for the analysis of high-frequency financial transaction data. We identify undesirable features of existing parametric conditional duration densities and propose a new family of flexible conditional densities capable of matching a wide variety of distributions with moderately varying hazard functions. Guided by theoretical considerations from queuing theory, we introduce nonparametric deviations, using a Bernstein density, around a central exponential distribution, which we argue is a sound first-order model for financial durations. The resulting density is not only flexible, in the sense that it can approximate any continuous density on [0, ∞) arbitrarily closely provided it consists of a large enough number of terms, but also amenable to shrinkage towards the exponential distribution. Thanks to highly efficient draws of state variables, the numerical efficiency of our posterior simulation compares very favourably with those obtained in previous studies. We illustrate our methods using quotation data on equities traded on the Toronto Stock Exchange. We find that models with our proposed conditional density having fewer than four terms provide the best fit. The smooth variation found in the hazard functions, together with the possibility of non-monotonicity, would have been impossible to capture using commonly used parametric specifications.

In Chapter 3, we introduce a new stochastic duration model for transaction times in asset markets. We argue that widely accepted rules for aggregating seemingly related trades mislead inference pertaining to durations between unrelated trades: while any two trades executed in the same second are probably related, it is extremely unlikely that all such pairs of trades are, in a typical sample. By placing uncertainty about which trades are related within our model, we improve inference for the distribution of durations between unrelated trades, especially near zero. We propose a discrete model for censored transaction times allowing for zero-inflation resulting from clusters of related trades. The discrete distribution of durations between unrelated trades arises from a flexible density amenable to shrinkage towards an exponential distribution. In an empirical example, we find that the underlying conditional hazard function for (uncensored) durations between unrelated trades varies much less than what most studies find; a discrete distribution for unrelated trades based on an exponential distribution provides a better fit for all three series analyzed. We claim that this is because we avoid statistical artifacts that arise from deterministic trade-aggregation rules and unsuitable parametric distributions.
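The efficiency comparisons above rest on the notion of relative numerical efficiency: the ratio of the i.i.d. sampling variance to the MCMC asymptotic variance of a posterior-mean estimate. A minimal sketch of how such a figure can be computed from raw draws (the autocorrelation truncation rule here is a simple illustrative choice, not the estimator used in the thesis):

```python
import numpy as np

def relative_numerical_efficiency(draws, max_lag=100):
    """Estimate relative numerical efficiency (RNE) of an MCMC sample:
    the ratio of i.i.d. variance to the asymptotic variance of the sample
    mean, from empirical autocorrelations (simple positive-sum truncation)."""
    x = np.asarray(draws, dtype=float)
    x = x - x.mean()
    n = len(x)
    var = x @ x / n
    if var == 0.0:
        return 1.0
    rho_sum = 0.0
    for lag in range(1, min(max_lag, n - 1)):
        rho = (x[:-lag] @ x[lag:]) / ((n - lag) * var)
        if rho < 0.0:          # stop at the first negative autocorrelation
            break
        rho_sum += rho
    return 1.0 / (1.0 + 2.0 * rho_sum)

rng = np.random.default_rng(0)
iid = rng.normal(size=20000)           # independent draws: RNE near 1
ar = np.empty(20000)                   # persistent AR(1) chain: RNE well below 1
ar[0] = 0.0
for t in range(1, 20000):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()
print(relative_numerical_efficiency(iid))  # close to 1
print(relative_numerical_efficiency(ar))   # small, reflecting high persistence
```

A sampler with RNE k times higher than another needs roughly k times fewer draws for the same Monte Carlo precision, which is the sense in which the "6-27 times higher" figures are reported.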
62

Reimagining Human-Machine Interactions through Trust-Based Feedback

Kumar Akash (8862785) 17 June 2020 (has links)
<div>Intelligent machines, and more broadly, intelligent systems, are becoming increasingly common in the everyday lives of humans. Nonetheless, despite significant advancements in automation, human supervision and intervention are still essential in almost all sectors, ranging from manufacturing and transportation to disaster management and healthcare. These intelligent machines <i>interact and collaborate</i> with humans in a way that demands a greater level of trust between human and machine. While a lack of trust can lead to a human's disuse of automation, over-trust can result in a human trusting a faulty autonomous system, which could have negative consequences for the human. Therefore, human trust should be <i>calibrated</i> to optimize these human-machine interactions. This calibration can be achieved by designing human-aware automation that can infer human behavior and respond accordingly in real-time.</div><div><br></div><div>In this dissertation, I present a probabilistic framework to model and calibrate a human's trust and workload dynamics during his/her interaction with an intelligent decision-aid system. More specifically, I develop multiple quantitative models of human trust, ranging from a classical state-space model to a classification model based on machine learning techniques. Both models are parameterized using data collected through human-subject experiments. Thereafter, I present a probabilistic dynamic model to capture the dynamics of human trust along with human workload. This model is used to synthesize optimal control policies, aimed at improving context-specific performance objectives, that vary automation transparency based on human state estimation. I also analyze the coupled interactions between human trust and workload to strengthen the model framework. Finally, I validate the optimal control policies using closed-loop human subject experiments. 
The proposed framework provides a foundation toward widespread design and implementation of real-time adaptive automation based on human states for use in human-machine interactions.</div>
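As a rough illustration of the kind of probabilistic trust inference described above, the sketch below runs a two-state hidden Markov filter over observed reliance decisions. All numbers here (the states, transition matrix, and observation probabilities) are invented for the example; they are not the dissertation's fitted models:

```python
import numpy as np

# Hypothetical two-state trust model: the hidden state is low or high trust,
# and the observation each round is whether the human relied on the automation.
STATES = ["low_trust", "high_trust"]
T = np.array([[0.8, 0.2],      # illustrative transition probabilities
              [0.1, 0.9]])
P_RELY = np.array([0.3, 0.9])  # assumed P(rely on automation | trust state)

def update_belief(belief, relied):
    """One HMM filtering step: predict the trust state forward,
    then condition on the observed reliance decision."""
    predicted = belief @ T
    likelihood = P_RELY if relied else 1.0 - P_RELY
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])  # uninformative initial belief
for relied in [True, True, True, False, True]:
    belief = update_belief(belief, relied)
print(dict(zip(STATES, belief.round(3))))  # mostly-reliant behavior -> high trust
```

A human-aware controller of the kind described would act on such a belief in real time, e.g. by increasing automation transparency when the inferred trust drops.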
63

Model-based co-design of sensing and control systems for turbo-charged, EGR-utilizing spark-ignited engines

Xu Zhang (9976460) 01 March 2021 (has links)
<div>Stoichiometric air-fuel ratio (AFR) and air/EGR flow control are essential control problems in today’s advanced spark-ignited (SI) engines to enable effective application of the three-way catalyst (TWC) and generation of required torque. External exhaust gas recirculation (EGR) can be used in SI engines to help mitigate knock, reduce enrichment and improve efficiency [1]. However, the introduction of the EGR system increases the complexity of stoichiometric engine-out lambda and torque management, particularly for high BMEP commercial vehicle applications. This thesis develops advanced frameworks for sensing and control architecture designs to enable robust air handling system management, stoichiometric cylinder air-fuel ratio (AFR) control and three-way catalyst emission control.</div><div><br></div><div><div>The first work in this thesis derives a physically-based, control-oriented model for turbocharged SI engines utilizing cooled EGR and flexible VVA systems. The model includes the impacts of modulation to any combination of 11 actuators, including the throttle valve, bypass valve, fuel injection rate, waste-gate, high-pressure (HP) EGR, low-pressure (LP) EGR, number of firing cylinders, and intake and exhaust valve opening and closing timings. A new cylinder-out gas composition estimation method, based on the inputs’ information of cylinder charge flow, injected fuel amount, residual gas mass and intake gas compositions, is proposed in this model. This method can be implemented in the control-oriented model as a critical input for estimating the exhaust manifold gas compositions. A new flow-based turbine-out pressure modeling strategy is also proposed in this thesis as a necessary input to estimate the LP EGR flow rate. With these two sub-models incorporated, the control-oriented model is able to capture the dynamics of pressure, temperature and gas compositions in the manifolds and the cylinder. 
Thirteen physical parameters, including intake, boost and exhaust manifolds’ pressures, temperatures, unburnt and burnt mass fractions as well as the turbocharger speed, are defined as state variables. The outputs such as flow rates and AFR are modeled as functions of selected states and inputs. The control-oriented model is validated against a high fidelity SI engine GT-Power model for different operating conditions. The novelty in this physical modeling work includes the development and incorporation of the cylinder-out gas composition estimation method and the turbine-out pressure model in the control-oriented model.</div></div><div><br></div><div><div>The second part of the work outlines a novel sensor selection and observer design algorithm for linear time-invariant systems with both process and measurement noise based on <i>H</i>2 optimization to optimize the tradeoff between the observer error and the number of required sensors. The optimization problem is relaxed to a sequence of convex optimization problems that minimize the cost function consisting of the <i>H</i>2 norm of the observer error and the weighted <i>l</i>1 norm of the observer gain. An LMI formulation allows for efficient solution via semi-definite programming. The approach is applied here, for the first time, to a turbo-charged spark-ignited (SI) engine using exhaust gas recirculation to determine the optimal sensor sets for real-time intake manifold burnt gas mass fraction estimation. Simulation with the candidate estimator embedded in a high fidelity engine GT-Power model demonstrates that the optimal sensor sets selected using this algorithm have the best <i>H</i>2 estimation performance. Sensor redundancy is also analyzed based on the algorithm results. 
This algorithm is applicable to any type of modern internal combustion engine to reduce the system design time and experimental effort typically required for selecting optimal sensor sets.</div></div><div><br></div><div><div>The third study develops a model-based sensor selection and controller design framework for robust control of air-fuel-ratio (AFR), air flow and EGR flow for turbocharged stoichiometric engines using low pressure EGR, waste-gate turbo-charging, intake throttling and variable valve timing. Model uncertainties, disturbances, transport delays, sensor and actuator characteristics are considered in this framework. Based on the required control performance and candidate sensor sets, the framework synthesizes an H∞ feedback controller and evaluates the viability of the candidate sensor set through analysis of the structured</div><div>singular value μ of the closed-loop system in the frequency domain. The framework can also be used to understand if relaxing the controller performance requirements enables the use of a simpler (less costly) sensor set. The sensor selection and controller co-design approach is applied here, for the first time, to turbo-charged engines using exhaust gas recirculation. High fidelity GT-Power simulations are used to validate the approach. The novelty of the work in this part can be summarized as follows: (1) A novel control strategy is proposed for stoichiometric SI engines using low pressure EGR to simultaneously satisfy both the AFR and air/EGR-path control performance requirements; (2) A parametric method to simultaneously select the sensors and design the controller is proposed for internal combustion engines for the first time.</div></div><div><br></div><div><div>In the fourth part of the work, a novel two-loop estimation and control strategy is proposed to reduce the emissions of the three-way catalyst (TWC). 
In the outer loop, an FOS estimator consisting of a TWC model and an extended Kalman filter is used to estimate the current TWC fractional oxygen state (FOS), and a robust controller is used to control the TWC FOS by manipulating the desired engine λ. The outer loop estimator and controller are combined with an existing inner loop controller. The inner loop controller controls the engine λ based on the desired λ value, and the control inaccuracies are considered and compensated for by the outer loop robust controller. This control strategy achieves good emission reduction performance and has advantages over the constant λ control strategy and the conventional two-loop switch-type control strategy.</div></div>
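The weighted l1 penalty on the observer gain in the second study is what trades estimation performance against the number of sensors: driving rows of the gain to zero drops the corresponding sensors. The toy sketch below illustrates that sparsity-promoting l1 mechanism on a plain regularized least-squares problem solved by proximal gradient (ISTA). It is not the thesis's LMI/semi-definite formulation, and the problem data are synthetic:

```python
import numpy as np

def ista(A, b, lam, steps=500):
    """Proximal-gradient (ISTA) solver for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Larger lam drives more coefficients exactly to zero, analogous to a
    weighted-l1 penalty de-selecting sensors."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - b) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 8))
x_true = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0])   # only two "sensors" matter
b = A @ x_true + 0.01 * rng.normal(size=100)

x_small = ista(A, b, lam=0.001)   # weak penalty: dense solution
x_large = ista(A, b, lam=50.0)    # strong penalty: sparse solution
print(np.count_nonzero(np.abs(x_small) > 1e-6),
      np.count_nonzero(np.abs(x_large) > 1e-6))
```

Sweeping the penalty weight traces out the tradeoff curve between fit quality and the number of retained coefficients, which is the role the weighted l1 term plays in the sensor-selection algorithm.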
64

Langevinized Ensemble Kalman Filter for Large-Scale Dynamic Systems

Peiyi Zhang (11166777) 26 July 2021 (has links)
<p>The Ensemble Kalman filter (EnKF) has achieved great success in data assimilation in the atmospheric and oceanic sciences, but its failure to converge to the correct filtering distribution precludes its use for uncertainty quantification. Other existing methods, such as particle filters or sequential importance samplers, do not scale well with the dimension of the system or the sample size of the dataset. In this dissertation, we address these difficulties in a coherent way.</p><p><br></p><p> </p><p>In the first part of the dissertation, we reformulate the EnKF under the framework of Langevin dynamics, which leads to a new particle filtering algorithm, the so-called Langevinized EnKF (LEnKF). The LEnKF algorithm inherits the forecast-analysis procedure from the EnKF and the use of mini-batch data from stochastic gradient Langevin-type algorithms, which make it scalable with respect to both dimension and sample size. We prove that the LEnKF converges to the correct filtering distribution in Wasserstein distance in the big-data scenario where the dynamic system consists of a large number of stages and a large number of samples is observed at each stage; it can therefore be used for uncertainty quantification. We also reformulate the Bayesian inverse problem as a dynamic state estimation problem using subsampling and the Langevin diffusion process. We illustrate the performance of the LEnKF with a variety of examples, including the Lorenz-96 model, high-dimensional variable selection, Bayesian deep learning, and Long Short-Term Memory (LSTM) network learning with dynamic data.</p><p><br></p><p> </p><p>In the second part of the dissertation, we focus on two extensions of the LEnKF algorithm. Like the EnKF, the LEnKF algorithm was developed for Gaussian dynamic systems containing no unknown parameters. 
We propose the so-called stochastic approximation LEnKF (SA-LEnKF) for simultaneously estimating the states and parameters of dynamic systems, where the parameters are estimated on the fly, based on the state variables simulated by the LEnKF, under the framework of stochastic approximation. Under mild conditions, we prove the consistency of the resulting parameter estimator and the ergodicity of the SA-LEnKF. For non-Gaussian dynamic systems, we extend the LEnKF algorithm (Extended LEnKF) by introducing a latent Gaussian measurement variable into the dynamic system. These two extensions inherit the scalability of the LEnKF algorithm with respect to dimension and sample size. Numerical results indicate that they outperform other existing methods in both state/parameter estimation and uncertainty quantification.</p>
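For readers unfamiliar with the forecast-analysis structure the LEnKF inherits, a minimal perturbed-observation EnKF analysis step looks roughly like the sketch below. This is a textbook EnKF update with arbitrary toy dimensions, not the LEnKF itself:

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """Perturbed-observation EnKF analysis step for a linear observation
    y = H x + noise. ensemble: (n_members, dim); returns analysis ensemble."""
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                         # sample forecast covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    # Each member assimilates an independently jittered copy of the observation.
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)
    innov = y_pert - ensemble @ H.T
    return ensemble + innov @ K.T

rng = np.random.default_rng(2)
H = np.array([[1.0, 0.0]])                        # observe the first coordinate
R = np.array([[0.01]])
prior = rng.normal(loc=[5.0, 0.0], scale=2.0, size=(500, 2))
post = enkf_update(prior, np.array([3.0]), H, R, rng)
print(post[:, 0].mean())   # pulled close to the precise observation 3.0
```

The LEnKF replaces this exact-Gaussian update logic with Langevin-dynamics draws on mini-batches, which is what yields convergence to the correct filtering distribution rather than the EnKF's Gaussian approximation.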
65

Modélisation à haut niveau de systèmes hétérogènes, interfaçage analogique /numérique / High level modeling of heterogeneous systems, analog/digital interfacing.

Cenni, Fabio 06 April 2012 (has links)
The objective of this thesis is the modeling of heterogeneous systems, i.e., systems that integrate different physical domains (mechanical, chemical, optical or magnetic) and therefore include analog and mixed-signal (AMS) parts. The aim is to provide a methodology based on high-level modeling to assist both the design and the verification of AMS systems. A study of different techniques for extracting behavioral models of analog devices at different levels of abstraction and computational cost is presented; three main approaches are identified. These techniques have been validated through the virtual prototyping of several applications from different domains: a low-noise amplifier (LNA), a surface acoustic wave (SAW) chemical sensor, and a CMOS video sensor modeled at several abstraction levels and integrated within an industrial platform. The flows developed are based on the AMS extensions of the SystemC (IEEE 1666) standard, but the methodologies can be implemented using other analog hardware description languages (VHDL-AMS, Verilog-AMS) typically used for mixed-signal microelectronics design.
66

Odhad Letových Parametrů Malého Letounu / Light Airplane Flight Parameters Estimation

Dittrich, Petr Unknown Date (has links)
This thesis focuses on the estimation of flight parameters for a small aircraft, specifically the Evektor SportStar RTC. The flight parameters are estimated using the Equation Error Method, the Output Error Method, and recursive least squares methods. The work examines the characteristics of the aerodynamic parameters of longitudinal motion and verifies whether the estimated flight parameters match the measured data and thus provide the basis for a sufficiently accurate aircraft model. The estimated flight parameters are further compared with a priori values obtained using the Tornado and AVL programs and the software version of the Datcom collection. The differences between the a priori values and the estimated flight parameters are compared with the corrections published for subsonic flight conditions of the F-18 Hornet aircraft model.
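Of the estimation methods mentioned, recursive least squares is the simplest to sketch. Below is a generic RLS update run on synthetic data; the regressors, coefficients and noise level are illustrative and unrelated to the SportStar RTC flight data:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares update. theta: current parameter estimate,
    P: covariance, phi: regressor vector, y: new measurement,
    lam: forgetting factor (1.0 = ordinary RLS)."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # correct with the prediction residual
    P = (P - np.outer(k, Pphi)) / lam        # covariance downdate
    return theta, P

rng = np.random.default_rng(3)
true_theta = np.array([0.8, -0.4])           # hypothetical aerodynamic coefficients
theta = np.zeros(2)
P = 1e3 * np.eye(2)                          # large P: uninformative start
for _ in range(200):
    phi = rng.normal(size=2)
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_step(theta, P, phi, y)
print(theta.round(2))   # converges near the true coefficients
```

A forgetting factor slightly below 1 would let the estimates track slowly varying parameters, which is often useful when identifying aerodynamic models from flight segments.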
67

Development of a Cost-Effective, Reliable and Versatile Monitoring System for Solar Power Installations in Developing Countries : A Minor Field Study as a Master Thesis of the Master Programme in Engineering Physics, Electrical Engineering

Trella, Fredrik, Paakkonen, Nils January 2016 (has links)
This report is the result of a Minor Field Study (MFS), funded to the greatest extent by the Swedish International Development Cooperation Agency (SIDA), in an attempt to design a system for evaluating smaller solar power systems in developing countries. The study was for the greater part conducted in Nairobi, Kenya in close collaboration with the University of Nairobi. The aim was to develop a system that would use easily available components and keep the costs to a minimum, yet deliver adequate performance. The system would measure certain parameters of a solar power system and also relevant environmental data in order to evaluate the performance of the system. Due to the specific competence of the collaborating group at the University of Nairobi, a Freescale Kinetis K64 microcontroller with an ARM Cortex processor was selected as the core of the design. Components were selected, schematics were drawn, a circuit board was designed and manufactured, and software was written. After 12 weeks a somewhat satisfying proof-of-concept was reached at the end of the field study in Kenya. The project however proved how difficult it is to go from first idea to a functional proof-of-concept within a limited timeframe, and also in an East African country. The final proof-of-concept was tested at Mpala Research Centre in Kenya and, despite containing some flaws, proved that it would indeed be possible to design a working system on the principles discussed in this report. The system is open-source, so anyone may use and modify it.
