1

Localisation robuste multi-capteurs et multi-modèles / A robust multisensors and multiple model localisation system

Ndjeng Ndjeng, Alexandre 14 September 2009 (has links)
Many research efforts have been devoted in recent years to providing an accurate, high-integrity solution to the problem of outdoor vehicle localization. These efforts are mainly grounded in probabilistic estimation theory: they combine multi-sensor fusion with single-model Kalman filtering, through variants adapted to nonlinear systems, the single complex model being assumed to describe the entire dynamics of the vehicle. This thesis proposes a multiple-model approach instead. The study derives from a modular analysis of the vehicle dynamics, i.e. the evolution space is treated as a discrete one: several simple models, each dedicated to a particular manoeuvre, are generated, which improves robustness to modelling defects. The approach is a variant of the IMM algorithm that accounts for the asynchronism of the embedded sensors in the state-estimation process. To this end, a new constrained modelling is developed, which allows the likelihood of each model to be updated even in the absence of exteroceptive sensor measurements. The performance of such a system nevertheless requires good-quality sensor data; several operations are presented to correct sensor bias and measurement noise and to account for the road bank angle. The methodology is validated through a comparison with the probabilistic fusion algorithms EKF, UKF, DD1, DD2 and particle filtering, based on standard accuracy and confidence measures and on statistical consistency and credibility criteria, first on synthetic scenarios and then on real data.
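The interaction step at the heart of an IMM estimator can be sketched in a few lines. The following is a minimal, hypothetical illustration with two scalar random-walk models (a low-noise "cruise" model and a high-noise "manoeuvre" model) — not the constrained, asynchronous variant developed in the thesis; all parameter values are invented for the demo.

```python
import math

def kf_step(x, P, z, q, r):
    """One predict/update cycle of a scalar random-walk Kalman filter.

    Returns the updated state, variance, and the Gaussian likelihood of
    the innovation, which drives the model probabilities."""
    x_pred, P_pred = x, P + q            # predict: random walk, process noise q
    S = P_pred + r                       # innovation variance
    K = P_pred / S                       # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    lik = math.exp(-0.5 * (z - x_pred) ** 2 / S) / math.sqrt(2 * math.pi * S)
    return x_new, P_new, lik

def imm_step(states, mu, z, trans, qs, r):
    """One IMM cycle: mix, filter per model, update model probabilities."""
    n = len(states)
    # predicted model probabilities
    c = [sum(trans[i][j] * mu[i] for i in range(n)) for j in range(n)]
    # mixed initial conditions for each model-matched filter
    mixed = []
    for j in range(n):
        w = [trans[i][j] * mu[i] / c[j] for i in range(n)]
        x0 = sum(w[i] * states[i][0] for i in range(n))
        P0 = sum(w[i] * (states[i][1] + (states[i][0] - x0) ** 2) for i in range(n))
        mixed.append((x0, P0))
    # model-matched filtering
    new_states, liks = [], []
    for j in range(n):
        xj, Pj, lik = kf_step(mixed[j][0], mixed[j][1], z, qs[j], r)
        new_states.append((xj, Pj))
        liks.append(lik)
    # model probability update and combined output
    mu_new = [liks[j] * c[j] for j in range(n)]
    total = sum(mu_new)
    mu_new = [m / total for m in mu_new]
    x_hat = sum(mu_new[j] * new_states[j][0] for j in range(n))
    return new_states, mu_new, x_hat

# demo: quiet measurements, then a sudden jump (a "manoeuvre")
states = [(0.0, 1.0), (0.0, 1.0)]        # (mean, variance) per model
mu = [0.5, 0.5]                          # model probabilities
trans = [[0.95, 0.05], [0.05, 0.95]]     # model transition matrix
for z in [0.1, -0.05, 0.0, 5.0]:
    states, mu, x_hat = imm_step(states, mu, z, trans, qs=[0.01, 1.0], r=0.25)
```

After the jump, the probability of the high-noise model dominates and the combined estimate follows the measurement — the behaviour that a single fixed model cannot reproduce for both regimes at once.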
2

An Integrated Affine Jump Diffusion Framework to Manage Power Portfolios in a Deregulated Market

Culot, Michel F.J. 24 January 2003 (has links)
Electricity markets around the world have gone through, or are currently in, a deregulation phase. As a result, power companies that formerly enjoyed a monopoly are now facing risks. In order to cover (hedge) these risks, futures markets have emerged in parallel with the spot markets, and markets for more complex derivative products have since appeared to better hedge the risk exposures of power suppliers and consumers. An Affine Jump Diffusion (AJD) framework is presented here to coherently model the dynamics of the spot price of electricity and of all the futures contracts. The non-storability of electricity indeed makes it impossible to use the spot commodity in hedging strategies. Futures contracts, however, are standard financial contracts that can be held and used in hedging strategies. We thus propose to consider the set of futures contracts as the primary commodities to be modelled, and we jointly estimate the parameters of the spot and futures prices from their historical time series. The estimation is done by Maximum Likelihood, using a Kalman Filter recursive algorithm adapted to account for non-Gaussian errors. This procedure has been applied to the German European Energy index (EEX) based in Frankfurt for electricity, to the Brent for crude oil, and to the NBP for natural gas. The AJD framework is very powerful because the characteristic function of the underlying stochastic variables can be obtained simply by solving a system of complex-valued ODEs. We took advantage of this feature and developed a novel approach to estimating expectations of arbitrary functions of random variables that does not require the probability density function of the stochastic variables but only their characteristic function. This approach, relying on the Parseval identity, provides closed-form solutions for options whose payoff functions have an analytical Fourier transform.
In particular, European calls, puts and spread options can be computed, as well as the value of multi-fuel power plants, which can be viewed as an option to exchange the most economic fuel of the moment against electricity. A numerical procedure has also been developed for options whose payoff functions do not have an analytical Fourier transform. This numerical approach uses a Fast Fourier Transform of the payoff function and can be used in Dynamic Programming algorithms to price contracts with endogenous exercise strategies. Finally, it is shown that the (mathematical) partial derivatives of these contracts, often referred to as the Greeks, can also be computed at low cost. This makes it possible to build hedging strategies that shape the risk profile of a given producer or consumer.
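The characteristic-function pricing idea can be illustrated with a Gil-Pelaez-style inversion. As a stand-in for the AJD characteristic functions (which the thesis obtains by solving complex-valued ODEs), the sketch below uses the known Black-Scholes characteristic function of the log-price, so the Fourier price can be checked against the closed-form formula; all parameter values are illustrative.

```python
import math, cmath

def cf_lognormal(u, s0, r, sigma, T):
    """Characteristic function of ln(S_T) under Black-Scholes dynamics.

    Stand-in for an AJD characteristic function, which would come from
    solving a system of complex-valued ODEs."""
    m = math.log(s0) + (r - 0.5 * sigma ** 2) * T
    return cmath.exp(1j * u * m - 0.5 * sigma ** 2 * T * u * u)

def call_via_cf(s0, K, r, sigma, T, umax=100.0, n=4000):
    """European call price by Gil-Pelaez inversion of the CF alone."""
    k = math.log(K)
    du = umax / n
    phi_mi = cf_lognormal(-1j, s0, r, sigma, T)   # equals E[S_T]
    p1 = p2 = 0.0
    for i in range(n):
        u = (i + 0.5) * du                        # midpoint rule, avoids u = 0
        twist = cmath.exp(-1j * u * k)
        p2 += (twist * cf_lognormal(u, s0, r, sigma, T) / (1j * u)).real * du
        p1 += (twist * cf_lognormal(u - 1j, s0, r, sigma, T)
               / (1j * u * phi_mi)).real * du
    P1 = 0.5 + p1 / math.pi                       # P(S_T > K) under stock measure
    P2 = 0.5 + p2 / math.pi                       # risk-neutral P(S_T > K)
    return s0 * P1 - K * math.exp(-r * T) * P2

def call_closed_form(s0, K, r, sigma, T):
    """Black-Scholes closed form, used here only as a cross-check."""
    sd = sigma * math.sqrt(T)
    d1 = (math.log(s0 / K) + (r + 0.5 * sigma ** 2) * T) / sd
    d2 = d1 - sd
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s0 * N(d1) - K * math.exp(-r * T) * N(d2)

price_cf = call_via_cf(100.0, 100.0, 0.05, 0.2, 1.0)
price_bs = call_closed_form(100.0, 100.0, 0.05, 0.2, 1.0)
```

The appeal of the approach is that only `cf_lognormal` would need replacing to price under a jump-diffusion model: the inversion routine itself never touches a density.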
3

Robust Analysis of M-Estimators of Nonlinear Models

Neugebauer, Shawn Patrick 16 August 1996 (has links)
Estimation of nonlinear models finds applications in every field of engineering and the sciences. Much work has been done to build solid statistical theories for its use and interpretation. However, there has been little analysis of the tolerance of nonlinear model estimators to deviations from assumptions and normality. We focus on analyzing the robustness properties of M-estimators of nonlinear models by studying the effects of deviations from assumptions and normality on these estimators. We discuss St. Laurent and Cook's Jacobian Leverage and identify the relationship of the technique to the robustness concept of influence. We derive influence functions for M-estimators of nonlinear models and show that influence of position becomes, more generally, influence of model. The result shows that, for M-estimators, we must bound not only influence of residual but also influence of model. Several examples highlight the unique problems of nonlinear model estimation and demonstrate the utility of the influence function. / Master of Science
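The practical consequence of bounding the influence of residuals can be sketched with a Huber M-estimator fitted by iteratively reweighted least squares. This toy example uses a linear model rather than a nonlinear one and is only meant to show how a bounded psi-function tames a single gross outlier; the data and tuning constant are illustrative.

```python
def huber_weight(r, c=1.345):
    """Bounded-influence weight: rho'(r)/r for the Huber loss."""
    a = abs(r)
    return 1.0 if a <= c else c / a

def fit_line(xs, ys, c=1.345, iters=50):
    """Fit y = a + b*x by iteratively reweighted least squares.

    With c=1.345 this is a Huber M-estimator; with a huge c every weight
    is 1 and the loop reduces to ordinary least squares."""
    a = b = 0.0
    for _ in range(iters):
        res = [y - (a + b * x) for x, y in zip(xs, ys)]
        # robust scale estimate (MAD), floored to avoid division by ~0
        s = max(1.4826 * sorted(abs(r) for r in res)[len(res) // 2], 1e-6)
        w = [huber_weight(r / s, c) for r in res]
        # weighted least-squares normal equations for (a, b)
        Sw = sum(w)
        Sx = sum(wi * x for wi, x in zip(w, xs))
        Sy = sum(wi * y for wi, y in zip(w, ys))
        Sxx = sum(wi * x * x for wi, x in zip(w, xs))
        Sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = Sw * Sxx - Sx * Sx
        a, b = (Sy * Sxx - Sx * Sxy) / det, (Sw * Sxy - Sx * Sy) / det
    return a, b

# y = 2 + 3x with one gross outlier at the last point
xs = list(range(10))
ys = [2.0 + 3.0 * x for x in xs]
ys[-1] = 100.0
a_h, b_h = fit_line(xs, ys)               # robust (Huber) fit
a_ls, b_ls = fit_line(xs, ys, c=1e9)      # plain least squares
```

The least-squares slope is dragged far above 3 by the single outlier, while the bounded weight function leaves the Huber fit essentially on the true line — the influence-function point made above, in miniature.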
4

OPTIMAL CONTROL OF PROJECTS BASED ON KALMAN FILTER APPROACH FOR TRACKING & FORECASTING THE PROJECT PERFORMANCE

Bondugula, Srikant 2009 May 1900 (has links)
Traditional scheduling tools like Gantt charts and CPM, while useful in planning and executing complex construction projects with multiple interdependent activities, haven't been of much help in implementing effective control systems for the same projects when they deviate from their desired or assumed behavior. Further, in case of such deviations, project managers often make decisions guided either by the prospect of short-term gains or by the intention of forcing the project back onto the original schedule or plan, inadvertently increasing the overall project cost. Many deterministic project control methods have been proposed for calculating optimal resource schedules based on time-cost as well as time-cost-quality trade-off analysis. What is needed, however, is a project control system that optimizes the effort or cost required for controlling the project while incorporating the stochastic, dynamic nature of the construction-production process. Such a system must also include a method for updating and revising the beliefs or models representing the project dynamics using actual progress data. This research develops such an optimal project control method: a Kalman Filter forecasting method updates the assumed project dynamics model and forecasts the Estimated Cost at Completion (EAC) and the Estimated Duration at Completion (EDAC), taking into account the inherent uncertainties in project progress and progress measurements. A controller is then formulated that iteratively calculates the optimal resource allocation schedule minimizing either the EAC alone or the EAC and EDAC together, using the evolutionary optimization algorithm Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
The developed framework is applied to a hypothetical project and tested for robustness in updating the assumed initial project dynamics model and yielding the optimal control policy under hypothetical uncertainties in project progress and progress measurements. Based on these tests and demonstrations, it is first concluded that a project dynamics model based on the project Gantt chart, with spatial interdependencies of sub-tasks and triangular progress rates, is a good representation of a typical construction project; second, it is shown that CMA-ES in conjunction with the Kalman Filter estimation and forecasting method provides a robust framework that can be applied to any kind of complex construction process to yield optimal control policies.
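A drastically simplified version of the Kalman-filter forecasting idea: a scalar filter tracks the project's productivity rate from noisy daily progress measurements and extrapolates an estimated duration at completion. This is a hypothetical sketch, not the dynamics model of the thesis; all numbers are invented.

```python
def kf_update(x, P, z, q, r):
    """Scalar Kalman filter step for a random-walk productivity rate."""
    P_pred = P + q                       # predict: rate drifts with variance q
    K = P_pred / (P_pred + r)            # gain against measurement noise r
    return x + K * (z - x), (1.0 - K) * P_pred

# hypothetical project: 100 units of work, daily progress measured with noise
total_work = 100.0
daily_progress = [2.1, 1.9, 2.2, 1.8, 2.0, 2.05, 1.95]   # invented data

rate, var = 1.0, 1.0        # prior belief about productivity (units/day)
done = 0.0
for z in daily_progress:
    done += z
    rate, var = kf_update(rate, var, z, q=0.01, r=0.04)

elapsed = len(daily_progress)
edac = elapsed + (total_work - done) / rate   # estimated duration at completion
```

In the full framework this forecast would feed the controller, which reallocates resources to minimize the EAC/EDAC rather than merely report them.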
5

Robust estimation of factor models in finance /

Bailer, Heiko Manfred. January 2005 (has links)
Thesis (Ph. D.)--University of Washington, 2005. / Vita. Includes bibliographical references (p. 171).
6

Design Optimization Using Model Estimation Programming

Brimhall, Richard Kay 01 May 1967 (has links)
Model estimation programming provides a method for obtaining extreme solutions subject to constraints. Functions which are continuous with continuous first and second derivatives in the neighborhood of the solution are approximated using quadratic polynomials (termed estimating functions) derived from computed or experimental data points. Using the estimating functions, an approximation problem is solved by a numerical adaptation of the method of Lagrange. The method is not limited by the concavity of the objective function. Beginning with an initial array of data observations, an initial approximate solution is obtained. Using this approximate solution as a new datum point, the coefficients for the estimating function are recalculated with a constrained least squares fit which forces intersection of the functions and their estimating functions at the last three observations. The constraining of the least squares estimate provides a sequence of approximate solutions which converge to the desired extremal. A digital computer program employing the technique is used extensively by Thiokol Chemical Corporation's Wasatch Division, especially for vehicle design optimization where flight performance and hardware constraints must be satisfied simultaneously.
7

Multiple Model Estimation for Channel Equalization and Space-Time Block Coding

Kamran, Ziauddin M. 09 1900 (has links)
This thesis investigates the application of multiple model estimation algorithms to the problem of channel equalization for digital data transmission and of channel tracking for space-time block coded systems with non-Gaussian additive noise. Recently, a network of Kalman filters (NKF) has been reported for the equalization of digital communication channels, based on approximating the a posteriori probability density function of a sequence of delayed symbols by a weighted Gaussian sum. A serious drawback of this approach is that the number of Gaussian terms in the sum increases exponentially through iterations. In this thesis, we first show that the NKF-based equalizer can be further improved by considering the interactions between the parallel filters in an efficient way. To this end, we resort to the Interacting Multiple Model (IMM) estimator widely used in the area of multiple target tracking. The IMM is a very effective approach when the system exhibits discrete uncertainties in the dynamic or measurement model as well as continuous uncertainties in state values. A computationally feasible implementation based on a weighted-sum-of-Gaussians approximation of the density functions of the data signals is introduced. Next, we present an adaptive multiple model blind equalization algorithm based on the IMM estimator to estimate the channel and the transmitted sequence corrupted by intersymbol interference and noise. It is shown through simulations that the proposed IMM-based equalizer offers substantially improved performance relative to a blind equalizer based on a (static, non-interacting) network of extended Kalman filters, and it obviates the exponential growth of the state complexity caused by increasing channel memory length.
The proposed approaches avoid the exponential growth of the number of terms used in the weighted Gaussian sum approximation of the plant noise, making them practical for real-time processing. Finally, we consider the problem of channel estimation and tracking for space-time block coded systems contaminated by additive non-Gaussian noise. In many practical wireless channels in which space-time block coding techniques may be applied, the ambient noise is likely to have an impulsive component that gives rise to larger tail probabilities than predicted by the Gaussian model. Although Kalman filters are often used in practice to track channel variation, they are notoriously sensitive to the heavy-tailed outliers and model mismatches resulting from the presence of impulsive noise; non-Gaussian noise environments require modifying standard filters to perform acceptably. Based on the coding/decoding technique, we propose a robust IMM algorithm for estimating time-selective fading channels when the measurements are perturbed by impulsive noise, modeled by a two-term Gaussian mixture distribution. Simulations demonstrate that the proposed method yields substantially improved performance compared to the conventional Kalman filter algorithm using clipping or localization to handle impulses in the observations. It is also shown that the IMM-based approach performs robustly even when the prior information about the impulsive noise is not known exactly. / Thesis / Master of Applied Science (MASc)
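The two-term Gaussian mixture noise model admits a simple posterior test for whether a given residual came from the impulsive component; a robust filter can then down-weight such measurements instead of clipping them. A minimal sketch with invented mixture parameters:

```python
import math

def gauss(x, var):
    """Zero-mean Gaussian density."""
    return math.exp(-0.5 * x * x / var) / math.sqrt(2.0 * math.pi * var)

def impulse_posterior(r, eps=0.1, var_nom=1.0, kappa=100.0):
    """Posterior probability that residual r came from the impulsive term
    of the mixture (1-eps)*N(0, var_nom) + eps*N(0, kappa*var_nom)."""
    p_imp = eps * gauss(r, kappa * var_nom)
    p_nom = (1.0 - eps) * gauss(r, var_nom)
    return p_imp / (p_imp + p_nom)

def effective_variance(r, var_nom=1.0, eps=0.1, kappa=100.0):
    """Posterior-weighted measurement variance: a robust update can feed
    this inflated variance to the Kalman gain for likely impulses."""
    w = impulse_posterior(r, eps, var_nom, kappa)
    return (1.0 - w) * var_nom + w * kappa * var_nom
```

A small residual is attributed to the nominal noise and filtered normally, while a large one is flagged as impulsive and effectively ignored by the gain — the soft alternative to hard clipping mentioned above.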
8

Adaptive Estimation and Detection Techniques with Applications

Ru, Jifeng 10 August 2005 (has links)
Hybrid systems have been identified as one of the main directions in control theory and have attracted increasing attention in recent years due to their huge diversity of engineering applications. Multiple-model (MM) estimation is the state-of-the-art approach to many hybrid estimation problems. Existing MM methods with fixed structure usually perform well for problems that can be handled by a small set of models; however, their performance is limited when the number of models required for satisfactory accuracy is large, due to the time evolution of the true mode over a large continuous space. In this research, variable-structure multiple model (VSMM) estimation was investigated, further developed and evaluated. A fundamental solution for on-line adaptation of model sets was developed, along with several VSMM algorithms. These algorithms have been successfully applied to fault detection and identification as well as to target tracking in this thesis. In particular, an integrated framework to detect, identify and estimate failures is developed based on VSMM; it can handle sequential failures and multiple failures of sensors or actuators. Fault detection and target maneuver detection can be formulated as change-point detection problems in statistics, where the quickest possible detection of such mode changes in a hybrid system is of great importance. Traditional maneuver detectors based on simplistic models are not optimal and are computationally demanding due to their requirement of batch processing. In this thesis, a general sequential testing procedure is proposed for maneuver detection based on advanced sequential tests. It uses a likelihood marginalization technique to cope with the difficulty that the target accelerations are unknown; the approach essentially exploits a priori information about the accelerations in typical tracking engagements and thus allows improved detection performance.
The proposed approach is applicable to change-point detection problems under similar formulation, such as fault detection.
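A standard sequential change-point detector of the kind alluded to here is the CUSUM test, which accumulates log-likelihood-ratio increments and raises an alarm when the statistic crosses a threshold. The sketch below assumes known pre- and post-change means (the thesis marginalizes over unknown accelerations instead); the data and threshold are illustrative.

```python
def cusum_alarm(samples, mu0, mu1, sigma, h):
    """One-sided CUSUM: accumulate log-likelihood-ratio increments for a
    mean shift mu0 -> mu1 in Gaussian noise; alarm when the statistic
    crosses threshold h. Returns the alarm index, or None if no alarm."""
    g = 0.0
    for k, x in enumerate(samples):
        # log LR of N(mu1, sigma^2) vs N(mu0, sigma^2) for sample x
        g = max(0.0, g + (mu1 - mu0) / sigma ** 2 * (x - 0.5 * (mu0 + mu1)))
        if g > h:
            return k
    return None

quiet = [0.1, -0.2, 0.3, 0.0, -0.1]
shifted = quiet + [2.1, 1.8, 2.2, 1.9]   # mean jumps from ~0 to ~2 at index 5
no_alarm = cusum_alarm(quiet, 0.0, 2.0, 1.0, 4.0)
alarm_at = cusum_alarm(shifted, 0.0, 2.0, 1.0, 4.0)
```

The threshold `h` trades false alarms against detection delay; the detector fires two samples after the change here, and raising `h` would delay it further.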
9

Empirical study on the Korean treasury auction focusing on the revenue comparison in multiple versus single price auction

Kang, Boo-Sung 12 April 2006 (has links)
This dissertation seeks an empirical answer to the question of the revenue ranking between the multiple price auction and the single price auction, and empirical clues about the efficiency ranking between the two. Under the assumptions of symmetric bidders and private independent values (PIV), I derive the optimal bidding conditions for both auction formats. Following the structural model estimation approach, I estimate the underlying distribution of the market clearing price using a nonparametric resampling strategy and recover the bidders' unknown true valuations corresponding to each observed bid point. With these estimated valuations, I calculate the upper bound of the revenue that would have been obtained under a Vickrey auction, to perform a counterfactual comparison with the actual revenue. I find that, ex post, the multiple price auction yields more revenue to the Korean Treasury than the alternative. I also investigate the efficiency ranking by comparing the number of bids switched and the change in surplus that would occur if the bidders reported their true valuations as their bids; the multiple price auction is again superior to the alternative in efficiency, which supports the current theoretical prediction. Finally, I investigate the robustness of my model and empirical results by relaxing the previous assumptions. I first extend the model and estimation to the case of asymmetric bidders, where the bidders are divided into two groups based on their size; this shows that the model and estimation framework remain valid and that the empirical findings are very similar to the symmetric case. I also test for the presence of a common value (CV) component in the bidders' valuation function, proposing a simple regression model that adopts the idea of a policy experiment. The results are generally inconclusive, but I find some evidence supporting PIV for relatively high bid prices and CV for lower bid prices.
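The nonparametric resampling step can be sketched as a bootstrap over observed bids: resample the bid set with replacement, recompute the market clearing price, and collect the draws as an estimate of its distribution. The bid schedule and supply below are invented toy values, not Korean Treasury data.

```python
import random

def clearing_price(bids, supply):
    """Market clearing price: serve bids from the highest price down until
    cumulative quantity covers supply; returns the marginal bid's price."""
    filled = 0.0
    for price, qty in sorted(bids, key=lambda b: -b[0]):
        filled += qty
        if filled >= supply:
            return price
    return None            # under-subscribed auction

def bootstrap_clearing_prices(bids, supply, n_draws=500, seed=0):
    """Nonparametric bootstrap: resample bids with replacement and
    recompute the clearing price to estimate its distribution."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        resample = [rng.choice(bids) for _ in bids]
        cp = clearing_price(resample, supply)
        if cp is not None:
            draws.append(cp)
    return draws

bids = [(10.0, 5.0), (9.0, 5.0), (8.0, 5.0), (7.0, 5.0)]  # (price, quantity)
draws = bootstrap_clearing_prices(bids, supply=12.0)
```

The empirical distribution of `draws` stands in for the unknown distribution of the clearing price, which is the object the first-order bidding conditions are inverted against.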
10

Modelling and Trajectory Planning for a Small-Scale Surface Ship

Zetterqvist, Gustav, Steen, Fabian January 2021 (has links)
Autonomous ships are one way to increase safety at sea and to decrease the environmental impact of marine travel and shipping. For this application, a good representation of the environment and a physical model of the ship are vital components, and by optimizing the trajectory of the ship, a good trade-off between time duration and energy consumption can be found. In this thesis, a three-degree-of-freedom model that describes the dynamics of a small-scale surface ship is estimated. Using optimal control theory and a grey-box model, the parameters are estimated by defining an optimal control problem (OCP); the optimal solution is found by transcribing the problem into a nonlinear program and solving it with an interior-point algorithm. The identification method is tested and validated on simulated data as well as on data from real-world experiments, and the performance of the estimated models is assessed using cross-validation. In a second track of this thesis, a trajectory is created in two steps. The first is path planning, to find a shortest geometric path between two points; in the second step, the path is converted to a trajectory and optimized to become dynamically feasible. For this purpose, a roadmap is generated from a modified version of the generalized Voronoi diagram. To find an initial path in the roadmap, the A-star algorithm is utilized, and a few different methods are examined for connecting the start and goal positions to the map. An initial trajectory is created by mapping a straight-line trajectory to the initial path, thus connecting time, position and velocity. The final trajectory is found by solving a discrete OCP initialized with the initial trajectory; the OCP contains spatial constraints that ensure the vessel does not collide with static obstacles. The suggested estimation method resulted in models that could be used for trajectory planning to generate a dynamically feasible trajectory for both simulated and real data.
The trajectory generated by the trajectory planner resulted in a collision-free trajectory, satisfying the dynamics of the estimated model, such that the trade-off between time duration and energy consumption is well balanced. Future work consists of implementation of a controller to see if the planned trajectory can be followed by the small-scale ship.
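The initial-path search can be illustrated with an A-star search under a Manhattan-distance heuristic. For simplicity the sketch below searches a toy occupancy grid rather than the roadmap built from a generalized Voronoi diagram described above.

```python
import heapq

def astar(grid, start, goal):
    """A-star on a 4-connected grid ('#' = blocked) with a Manhattan
    heuristic; returns the list of cells on a shortest path, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    best_g = {}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path                  # heuristic is consistent: optimal
        if best_g.get(cell, float("inf")) <= g:
            continue                     # already expanded with a better cost
        best_g[cell] = g
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                nxt = (nr, nc)
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = ["....",
        ".##.",
        "...."]
path = astar(grid, (0, 0), (2, 3))
```

The resulting cell sequence plays the role of the initial geometric path, onto which a straight-line trajectory would then be mapped and refined by the discrete OCP.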