21

A unified discrepancy-based approach for balancing efficiency and robustness in state-space modeling estimation, selection, and diagnosis

Hu, Nan 01 December 2016 (has links)
Due to its generality and flexibility, the state-space model has become one of the most popular models in modern time-domain analysis for the description and prediction of time series data. The model is often used to characterize processes that can be conceptualized as "signal plus noise," where the realized series is viewed as the manifestation of a latent signal that has been corrupted by observation noise. In the state-space framework, parameter estimation is generally accomplished by maximizing the innovations Gaussian log-likelihood. The maximum likelihood estimator (MLE) is efficient when the normality assumption is satisfied. However, in the presence of contamination, the MLE suffers from a lack of robustness. Basu, Harris, Hjort, and Jones (1998) introduced a discrepancy measure (BHHJ) with a non-negative tuning parameter that regulates the trade-off between robustness and efficiency. In this manuscript, we propose a new parameter estimation procedure based on the BHHJ discrepancy for fitting state-space models. As the tuning parameter is increased, the estimation procedure becomes more robust but less efficient. We investigate the performance of the procedure in an illustrative simulation study. In addition, we propose a numerical method to approximate the asymptotic variance of the estimator, and we provide an approach for choosing an appropriate tuning parameter in practice. We justify these procedures theoretically and investigate their efficacy in simulation studies. Based on the proposed parameter estimation procedure, we then develop a new model selection criterion in the state-space framework. The traditional Akaike information criterion (AIC), where the goodness-of-fit is assessed by the empirical log-likelihood, is not robust to outliers. Our new criterion comprises a goodness-of-fit term based on the empirical BHHJ discrepancy and a penalty term based on both the tuning parameter and the dimension of the candidate model. We present a comprehensive simulation study to investigate the performance of the new criterion. In instances where the time series data is contaminated, our proposed model selection criterion is shown to perform favorably relative to AIC. Lastly, using the BHHJ discrepancy based on the chosen tuning parameter, we propose two versions of an influence diagnostic in the state-space framework. Specifically, our diagnostics help to identify cases that influence the recovery of the latent signal, thereby providing initial guidance and insight for further exploration. We illustrate the behavior of these measures in a simulation study.
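For reference, the BHHJ measure used throughout this abstract is the density power divergence of Basu et al. (1998); a standard statement, between a true density g and a model density f_θ, with β > 0 denoting the tuning parameter (this notation is ours, not necessarily the thesis's):

```latex
% BHHJ (density power) divergence with tuning parameter \beta > 0
d_\beta(g, f_\theta) = \int \left\{ f_\theta^{1+\beta}(z)
  - \left(1 + \tfrac{1}{\beta}\right) g(z)\, f_\theta^{\beta}(z)
  + \tfrac{1}{\beta}\, g^{1+\beta}(z) \right\} dz
```

As β → 0 the divergence tends to the Kullback-Leibler divergence, so minimizing it recovers the MLE; larger β downweights observations that are improbable under the model, which is exactly the robustness-efficiency trade-off described above.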
22

Control of plane Poiseuille flow: a theoretical and computational investigation

McKernan, John January 2006 (has links)
Control of the transition of laminar flow to turbulence would result in lower drag and reduced energy consumption in many engineering applications. A spectral state-space model of linearised plane Poiseuille flow with wall transpiration actuation and wall shear measurements is developed from the Navier-Stokes and continuity equations, and optimal controllers are synthesized and assessed in simulations of the flow. The polynomial-form collocation model with control by rate of change of wall-normal velocity is shown to be consistent with previous interpolating models with control by wall-normal velocity. Previous methods of applying the Dirichlet and Neumann boundary conditions to Chebyshev series are shown to be not strictly valid. A partly novel method provides the best numerical behaviour after preconditioning. Two test cases representing the earliest stages of the transition are considered, and linear quadratic regulators (LQR) and estimators (LQE) are synthesized. Finer discretisation is required for convergence of the estimators. A novel estimator covariance weighting improves estimator transient convergence. Initial conditions which generate the highest subsequent transient energy are calculated. Non-linear open- and closed-loop simulations, using an independently derived finite-volume Navier-Stokes solver modified to work in terms of perturbations, agree with linear simulations for small perturbations. Although the transpiration considered is zero net mass flow, large amounts of fluid are required locally. At larger perturbations the flow saturates. State feedback controllers continue to stabilise the flow, but estimators may overshoot and occasionally output feedback destabilises the flow. Actuation by simultaneous wall-normal and tangential transpiration is derived. There are indications that control via tangential actuation produces a lower peak transient energy, although it requires larger control effort. State feedback controllers are also synthesized which minimise upper bounds on the peak transient energy and control effort. The performance of these controllers is similar to that of the optimal controllers.
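As a generic illustration of the LQR synthesis step (a sketch only: the matrices below are hypothetical placeholders, not the thesis's spectral collocation model, whose A and B come from the discretised Navier-Stokes equations):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical small linearised system dx/dt = A x + B u
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

# LQR weights: Q penalises perturbation energy, R penalises control effort
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form the gain
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # state feedback u = -K x

print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

The same Riccati machinery, applied to the dual problem with the measurement equation in place of the input, yields the LQE gains.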
23

Simulation and Characterization of Cathode Reactions in Solid Oxide Fuel Cells

Williams, Robert Earl, Jr. 05 July 2007 (has links)
In this study, we have developed a dense La0.85Sr0.15MnO3-δ (LSM)–Ce0.9Gd0.1O1.95 (GDC) composite electrode system for studying the surface modification of cathodes. The LSM and GDC grains in the composite were well defined and distinguished using energy dispersive x-ray (EDX) analysis. The specific three-phase boundary (TPB) length per unit electrode surface area was systematically controlled by adjusting the LSM to GDC volume ratio of the composite from 40% up to 70%. The TPB length for each tested sample was determined through stereological techniques and used to correlate the cell performance and degradation with the specific TPB length per unit surface area. An overlapping-spheres percolation model was developed to estimate the activity of the TPB lines on the surface of the dense composite electrodes. The model suggested that the majority of the TPB lines would be active, and their length maximized, if the volume percent of the electrolyte material was kept in the range of 47–57%. Additionally, other insights into the processing conditions that maximize the amount of active TPB length were garnered from both the stereology calculations and the percolation simulations. Steady-state current-voltage measurements as well as electrochemical impedance measurements on numerous samples under various environmental conditions were completed. The apparent activation energy for the reduction reaction was found to lie between 31 kJ/mol and 41 kJ/mol depending upon the experimental conditions. The exchange current density i0 was found to vary with the partial pressure of oxygen differently over two separate regions, following one approximate power-law dependence at relatively low partial pressures and a different one at relatively high partial pressures. This led to the conclusion that a change in the rate-limiting step occurs over this range. A method for deriving the electrochemical properties from proposed reaction mechanisms was also presented. State-space modeling was used because it is a robust approach for these particular types of problems, owing to its relative ease of implementation and its ability to efficiently handle large systems of differential algebraic equations. This method combined theoretical development with previously obtained experimental results to predict the electrochemical performance data. The simulations agreed well with the experimental data and allowed for testing of operating conditions not easily reproducible in the lab (e.g. precise control and differentiation of low oxygen partial pressures).
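The apparent activation energy quoted above is the kind of quantity obtained from an Arrhenius fit of the exchange current density against inverse temperature; a minimal sketch with synthetic data (the values below are illustrative, not the thesis's measurements):

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

# Synthetic i0 measurements at several temperatures (hypothetical values)
T = np.array([973.0, 1023.0, 1073.0, 1123.0])   # K
i0 = np.array([0.020, 0.025, 0.030, 0.036])     # A/cm^2

# Arrhenius form: i0 = A * exp(-Ea / (R T))  =>  ln i0 = ln A - (Ea/R) * (1/T)
slope, _ = np.polyfit(1.0 / T, np.log(i0), 1)
Ea = -slope * R_GAS

print(f"apparent activation energy: {Ea / 1000:.1f} kJ/mol")  # ~36 kJ/mol here
```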
24

Exponential Smoothing for Forecasting and Bayesian Validation of Computer Models

Wang, Shuchun 22 August 2006 (has links)
Despite their success and widespread usage in industry and business, exponential smoothing (ES) methods have received little attention from the statistical community. We investigate three types of statistical models that have been found to underpin ES methods: ARIMA models, state space models with multiple sources of error (MSOE), and state space models with a single source of error (SSOE). We establish the relationships among the three classes of models and conclude that the class of SSOE state space models is broader than the other two and provides a formal statistical foundation for ES methods. To better understand ES methods, we investigate their behavior for time series generated from different processes, focusing mainly on time series of ARIMA type. ES methods forecast a time series using only the series' own history. To include covariates in ES methods for better forecasting, we propose a new forecasting method, Exponential Smoothing with Covariates (ESCov). ESCov uses an ES method to model what is left unexplained in a time series by the covariates. We establish the optimality of ESCov, identify the SSOE state space models underlying it, and derive analytically the variances of its forecasts. Empirical studies show that ESCov outperforms ES methods and regression with ARIMA errors. We suggest a model selection procedure for choosing appropriate covariates and ES methods in practice. Computer models have been commonly used to investigate complex systems for which physical experiments are highly expensive or very time-consuming. Before using a computer model, we need to address an important question: "How well does the computer model represent the real system?" The process of addressing this question is called computer model validation, which generally involves the comparison of computer outputs and physical observations. In this thesis, we propose a Bayesian approach to computer model validation. This approach integrates computer outputs and physical observations to give a better prediction of the real system output, and this prediction is then used to validate the computer model. We investigate the impacts of several factors on the performance of the proposed approach and propose a generalization of it.
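As a concrete instance of the SSOE correspondence: simple exponential smoothing is equivalent to the local-level SSOE model y_t = level_{t-1} + e_t, level_t = level_{t-1} + alpha * e_t, a standard result; a minimal sketch on a synthetic series:

```python
import numpy as np

def simple_es(y, alpha):
    """Simple exponential smoothing in SSOE state-space form:
    y_t = level_{t-1} + e_t;  level_t = level_{t-1} + alpha * e_t.
    Returns the one-step-ahead forecasts."""
    level = y[0]
    forecasts = [level]
    for obs in y[1:]:
        e = obs - level              # the single source of error
        level = level + alpha * e
        forecasts.append(level)
    return np.array(forecasts)

rng = np.random.default_rng(0)
y = 10.0 + np.cumsum(rng.normal(0.0, 0.5, 100))  # synthetic random-walk series
print("next forecast:", simple_es(y, alpha=0.3)[-1])
```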
26

A state-space approach in analyzing longitudinal neuropsychological outcomes

Chua, Alicia S. 06 October 2021 (has links)
Longitudinal assessments are crucial in evaluating the disease state and trajectory of patients with neurodegenerative diseases. Neuropsychological outcomes measured over time often have non-linear trajectories with autocorrelated residuals and skewed distributions. These issues make statistical analysis and interpretation of longitudinal cognitive outcomes difficult and controversial, and convenient transformations (e.g. logarithmic) often fail to resolve the violations of the assumptions underlying common statistical modelling techniques. We propose the Adjusted Local Linear Trend (ALLT) model, an extended state space model, in lieu of the commonly used linear mixed-effects model (LMEM) for modeling longitudinal neuropsychological outcomes. Our model can utilize information from the stochasticity of the data while accounting for subject-specific trajectories, the inclusion of covariates, and unequally spaced time intervals. The first step of model fitting involves a likelihood maximization step to estimate the unknown variances in the model before passing these values into the Kalman filter and Kalman smoother recursive algorithms. Results from simulation studies showed that the ALLT model attains lower bias, lower standard errors, and higher power than the LMEM, particularly in short longitudinal studies with equally spaced time intervals. The ALLT model also outperforms the LMEM when data are missing completely at random (MCAR), missing at random (MAR) and, in certain cases, even missing not at random (MNAR). In terms of model selection, likelihood-based inference is applicable to the ALLT model. Although the likelihood ratio test statistic for ALLT is not asymptotically Chi-squared with k degrees of freedom, where k is the number of parameters lost during estimation, we were able to derive an approximation to its asymptotic distribution using a power transformation toward Gaussianity, facilitating model selection for ALLT. In light of these findings, we believe our proposed model will shed light on longitudinal data analysis, not only in the neuropsychological realm but for longitudinal data on a broader scale.
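The ALLT specifics aside, the standard local linear trend model it extends has state (level, slope): y_t = level_t + obs noise, level_t = level_{t-1} + slope_{t-1} + noise, slope_t = slope_{t-1} + noise. A minimal Kalman filter sketch under assumed noise variances (illustrative only):

```python
import numpy as np

T_mat = np.array([[1.0, 1.0],   # level_t = level_{t-1} + slope_{t-1} + noise
                  [0.0, 1.0]])  # slope_t = slope_{t-1} + noise
Z = np.array([[1.0, 0.0]])      # y_t = level_t + observation noise
Q = np.diag([0.1, 0.01])        # state noise variances (assumed)
H = np.array([[1.0]])           # observation noise variance (assumed)

def kalman_filter(y):
    x, P = np.zeros(2), np.eye(2) * 1e4   # vague initial state
    out = []
    for obs in y:
        x = T_mat @ x                     # predict
        P = T_mat @ P @ T_mat.T + Q
        S = Z @ P @ Z.T + H               # update
        K = P @ Z.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([obs]) - Z @ x)).ravel()
        P = (np.eye(2) - K @ Z) @ P
        out.append(x.copy())
    return np.array(out)

y = np.array([10.0, 10.6, 11.1, 11.9, 12.4, 13.2])
print("filtered [level, slope] at final time:", kalman_filter(y)[-1])
```

Unequally spaced intervals, covariates, and the likelihood maximization of the variances are the ALLT additions on top of this skeleton.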
27

State space modeling of run-off triangles

Kohout, Marek January 2021 (has links)
The main goal of this Diploma thesis is to describe an approach to modeling run-off triangles in non-life insurance (calculation of the IBNR reserve) based on state space models, and to apply the method to selected run-off triangles. In contrast to Atherino et al. (2010), the KFAS package in R is used for the modeling in the numerical study at the end of the thesis. The study surveys various possibilities for adjusting the data and the model, applied to the same run-off triangles, in order to assess the added value of these steps (logarithmic transformation of the input data, interventions for outliers, etc.). Special attention is devoted to a lognormal modification of the basic state space model. An integral part of the numerical study is residual diagnostics of the models and a simulation approach to the IBNR reserves.
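For orientation, the classical chain-ladder method is the deterministic benchmark that state space approaches to run-off triangles aim to improve upon; a minimal IBNR sketch on a hypothetical cumulative triangle (not data from the thesis):

```python
import numpy as np

# Hypothetical cumulative triangle: rows = accident years, cols = development years
tri = np.array([
    [100.0, 150.0, 170.0, 175.0],
    [110.0, 168.0, 188.0, np.nan],
    [120.0, 180.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
n = tri.shape[0]

# Volume-weighted chain-ladder development factors
factors = []
for j in range(n - 1):
    seen = ~np.isnan(tri[:, j + 1])
    factors.append(tri[seen, j + 1].sum() / tri[seen, j].sum())

# Project the lower triangle to ultimate and compute the IBNR reserve
proj = tri.copy()
for i in range(n):
    for j in range(n - 1):
        if np.isnan(proj[i, j + 1]):
            proj[i, j + 1] = proj[i, j] * factors[j]

latest = sum(tri[i, n - 1 - i] for i in range(n))  # latest observed diagonal
print("IBNR reserve:", round(proj[:, -1].sum() - latest, 1))
```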
28

Achieving shrinkage in a time-varying parameter model framework

Bitto, Angela, Frühwirth-Schnatter, Sylvia January 2019 (has links) (PDF)
Shrinkage for time-varying parameter (TVP) models is investigated within a Bayesian framework, with the aim of automatically reducing time-varying parameters to static ones if the model is overfitting. This is achieved by placing the double gamma shrinkage prior on the process variances. An efficient Markov chain Monte Carlo scheme is developed, exploiting boosting based on the ancillarity-sufficiency interweaving strategy. The method is applicable to TVP models for univariate as well as multivariate time series. Applications include a TVP generalized Phillips curve for EU area inflation modeling and a multivariate TVP Cholesky stochastic volatility model for joint modeling of the returns from the DAX-30 index.
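A minimal simulation of the setting (notation assumed here, not taken from the paper): y_t = x_t' beta_t + eps_t with random-walk coefficients beta_t = beta_{t-1} + w_t, w_t ~ N(0, diag(theta)). A process variance theta_j shrunk to zero freezes the j-th coefficient at its initial value, i.e. reduces it to a static one, which is what the double gamma prior is designed to do automatically:

```python
import numpy as np

rng = np.random.default_rng(1)
T, p = 200, 2
theta = np.array([0.05, 0.0])  # second process variance shrunk to zero -> static coefficient

# Random-walk coefficient paths beta_t = beta_{t-1} + w_t, started at (1.0, -0.5)
innovations = rng.normal(0.0, np.sqrt(theta), size=(T, p))
beta = np.array([1.0, -0.5]) + np.cumsum(innovations, axis=0)

# Observations y_t = x_t' beta_t + eps_t
X = rng.normal(size=(T, p))
y = np.sum(X * beta, axis=1) + rng.normal(0.0, 0.3, size=T)

# The first coefficient wanders over time; the second stays fixed at -0.5
print("std of each coefficient path:", beta.std(axis=0).round(3))
```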
29

Essays in mathematical finance : modeling the futures price

Blix, Magnus January 2004 (has links)
This thesis consists of four papers dealing with the futures price process. In the first paper, we propose a two-factor futures volatility model designed for the US natural gas market, but applicable to any futures market where volatility decreases with maturity and varies with the seasons. A closed-form analytical expression for European call options is derived within the model and used to calibrate the model to implied market volatilities. The result is used to price swaptions and calendar spread options on the futures curve. In the second paper, a financial market is specified where the underlying asset is driven by a d-dimensional Wiener process and an M-dimensional Markov process. On this market, we provide necessary and, in the time-homogeneous case, sufficient conditions for the futures price to possess a semi-affine term structure. Next, the case when the Markov process is unobservable is considered. We show that the pricing problem in this setting can be viewed as a filtering problem, and we present explicit solutions for futures. Finally, we present explicit solutions for options on futures in both the observable and unobservable cases. The third paper is an empirical study of the SABR model, one of the latest contributions to the field of stochastic volatility models. By Monte Carlo simulation we test the accuracy of the approximation the model relies on, and we investigate the stability of the parameters involved. Further, the model is calibrated to market implied volatility, and its dynamic performance is tested. In the fourth paper, co-authored with Tomas Björk and Camilla Landén, we consider HJM-type models for the term structure of futures prices, where the volatility is allowed to be an arbitrary smooth functional of the present futures price curve. Using a Lie algebraic approach we investigate when the infinite-dimensional futures price process can be realized by a finite-dimensional Markovian state space model, and we give general necessary and sufficient conditions, in terms of the volatility structure, for the existence of a finite-dimensional realization. We study a number of concrete applications, including the model developed in the first paper of this thesis. In particular, we provide necessary and sufficient conditions for when the induced spot price is a Markov process. We prove that the only HJM-type futures price models with spot-price-dependent volatility structures that generically possess a spot price realization are the affine ones. These models are thus the only generic spot price models from a futures price term structure point of view. / Diss. Stockholm : Handelshögskolan, 2004
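In the spirit of the third paper's exercise, the SABR dynamics dF = sigma * F^beta dW1, dsigma = nu * sigma dW2, corr(dW1, dW2) = rho can be simulated to benchmark the model's analytical approximation; a minimal Monte Carlo sketch under hypothetical parameters:

```python
import numpy as np

def sabr_mc_call(F0, K, T, sigma0, nu, beta, rho, n_paths=100_000, n_steps=200):
    """Monte Carlo price of a European call on a forward under SABR:
    dF = sigma * F**beta * dW1,  dsigma = nu * sigma * dW2,  corr(dW1, dW2) = rho."""
    rng = np.random.default_rng(42)
    dt = T / n_steps
    F = np.full(n_paths, F0)
    sigma = np.full(n_paths, sigma0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        F = np.maximum(F + sigma * F**beta * np.sqrt(dt) * z1, 0.0)  # Euler step, floored at 0
        sigma *= np.exp(nu * np.sqrt(dt) * z2 - 0.5 * nu**2 * dt)    # exact lognormal vol step
    return np.maximum(F - K, 0.0).mean()

# Hypothetical parameters, loosely in futures-market territory
print("MC call price:", round(sabr_mc_call(5.0, 5.0, 0.5, 0.4, 0.5, 0.7, -0.3), 4))
```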
30

Computational methods for analysis and modeling of time-course gene expression data

Wu, Fangxiang 31 August 2004
Genes encode proteins, some of which in turn regulate other genes. Such interactions make up gene regulatory relationships or (dynamic) gene regulatory networks. With advances in gene expression measurement technology and in genome sequencing, it has become possible to measure the expression levels of thousands of genes in a cell simultaneously, at a series of time points over a specific biological process. Such time-course gene expression data may provide a snapshot of most (if not all) of the interesting genes and may lead to a better understanding of gene regulatory relationships and networks. However, inferring either gene regulatory relationships or networks puts a high demand on powerful computational methods capable of sufficiently mining the large quantities of time-course gene expression data while reducing their complexity to make them comprehensible. This dissertation presents several computational methods, developed during the author's doctoral study, for inferring gene regulatory relationships and gene regulatory networks from time-course gene expression data. Cluster analysis plays an important role in inferring gene regulatory relationships, for example, in uncovering new regulons (sets of co-regulated genes) and their putative cis-regulatory elements. Two dynamic model-based clustering methods, namely Markov chain model (MCM)-based clustering and autoregressive model (ARM)-based clustering, are developed for time-course gene expression data. However, gene regulatory relationships based on cluster analysis are static and thus do not describe the dynamic evolution of gene expression over an observation period. The gene regulatory network is believed to be a time-varying system. Consequently, a state-space model for dynamic gene regulatory networks from time-course gene expression data is developed. To account for the complex time-delayed relationships in gene regulatory networks, the state-space model is extended to include time delays. Finally, a method based on genetic algorithms is developed to infer these time-delayed relationships. Validation of all the developed methods is based on experimental data available from well-cited public databases.
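A minimal sketch of the linear state-space formulation for dynamic gene networks (x_t: hidden regulatory state, y_t: observed expression levels; the matrices and dimensions below are illustrative assumptions, not the dissertation's fitted model):

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_genes, n_states = 50, 5, 2

# Dynamics x_{t+1} = A x_t + w_t; expression readout y_t = C x_t + v_t
A = np.array([[0.9, 0.1],
              [-0.2, 0.8]])
C = rng.normal(size=(n_genes, n_states))

x = np.zeros((T, n_states))
y = np.zeros((T, n_genes))
x[0] = rng.normal(size=n_states)
for t in range(T):
    y[t] = C @ x[t] + rng.normal(0.0, 0.2, n_genes)
    if t < T - 1:
        x[t + 1] = A @ x[t] + rng.normal(0.0, 0.1, n_states)

# Naive check: recover A from the (here known) states by least squares;
# with hidden states one would use EM / Kalman smoothing instead
A_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T
print("estimated dynamics matrix:\n", A_hat.round(2))
```

A time-delayed variant replaces A x_t with a sum over lagged states, which is the extension the dissertation pursues.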
