771

Stochastic Modeling and Simulation of Multiscale Biochemical Systems

Chen, Minghan 02 July 2019 (has links)
Numerous challenges arise in modeling and simulation as biochemical networks are discovered with increasing complexity and unknown mechanisms. With the improvement in experimental techniques, biologists are able to quantify genes and proteins and their dynamics in a single cell, which calls for quantitative stochastic models for gene and protein networks at cellular levels that match well with the data and account for cellular noise. This dissertation studies a stochastic spatiotemporal model of the Caulobacter crescentus cell cycle. A two-dimensional model based on a Turing mechanism is investigated to illustrate the bipolar localization of the protein PopZ. However, stochastic simulations are often impeded by expensive computational cost for large and complex biochemical networks. The hybrid stochastic simulation algorithm is a combination of differential equations for traditional deterministic models and Gillespie's stochastic simulation algorithm (SSA) for stochastic models. The hybrid method can significantly improve the efficiency of stochastic simulations for biochemical networks with multiscale features, which contain both species populations and reaction rates of widely varying magnitude. The populations of some reactant species might be driven negative if they are involved in both deterministic and stochastic systems. This dissertation investigates the negativity problem of the hybrid method, proposes several remedies, and tests them with several models including a realistic biological system. As a key factor that affects the quality of biological models, parameter estimation in stochastic models is challenging because the amount of empirical data must be large enough to obtain statistically valid parameter estimates. To optimize system parameters, a quasi-Newton algorithm for stochastic optimization (QNSTOP) was studied and applied to a stochastic budding yeast cell cycle model by matching multivariate probability distributions between simulated results and empirical data. Furthermore, to reduce model complexity, this dissertation simplifies the fundamental cooperative binding mechanism by a stochastic Hill equation model with optimized system parameters. Considering that many parameter vectors generate similar system dynamics and results, this dissertation proposes a general α-β-γ rule to return an acceptable parameter region of the stochastic Hill equation based on QNSTOP. Different objective functions are explored targeting different features of the empirical data. / Doctor of Philosophy / Modeling and simulation of biochemical networks faces numerous challenges as biochemical networks are discovered with increased complexity and unknown mechanisms. With improvement in experimental techniques, biologists are able to quantify genes and proteins and their dynamics in a single cell, which calls for quantitative stochastic models, or numerical models based on probability distributions, for gene and protein networks at cellular levels that match well with the data and account for randomness. This dissertation studies a stochastic model, in space and time, of the life cycle of the bacterium Caulobacter. A two-dimensional model based on a natural pattern mechanism is investigated to illustrate the changes in space and time of a key protein population. However, stochastic simulations are often complicated by the expensive computational cost for large and sophisticated biochemical networks.
The hybrid stochastic simulation algorithm is a combination of traditional deterministic models, or analytical models with a single output for a given input, and stochastic models. The hybrid method can significantly improve the efficiency of stochastic simulations for biochemical networks that contain both species populations and reaction rates with widely varying magnitude. The populations of some species may become negative in the simulation under some circumstances. This dissertation investigates negative population estimates from the hybrid method, proposes several remedies, and tests them with several cases including a realistic biological system. As a key factor that affects the quality of biological models, parameter estimation in stochastic models is challenging because the amount of observed data must be large enough to obtain valid results. To optimize system parameters, the quasi-Newton algorithm for stochastic optimization (QNSTOP) was studied and applied to a stochastic (budding) yeast life cycle model by matching different distributions between simulated results and observed data. Furthermore, to reduce model complexity, this dissertation simplifies the fundamental molecular binding mechanism by the stochastic Hill equation model with optimized system parameters. Considering that many parameter vectors generate similar system dynamics and results, this dissertation proposes a general α-β-γ rule to return an acceptable parameter region of the stochastic Hill equation based on QNSTOP. Different optimization strategies are explored targeting different features of the observed data.
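Both abstracts above revolve around Gillespie's stochastic simulation algorithm (SSA) and a stochastic Hill equation. As a rough illustration only (the dissertation's actual reaction networks, species, and rate constants are not given in this record), a minimal SSA sketch with a Hill-type production propensity might look like the following; all names and numbers here are assumptions made for the example.

```python
import numpy as np

def hill(x, k, n):
    """Hill function: fractional activation at copy number x."""
    return x**n / (k**n + x**n)

def ssa(x0, t_end, seed=0):
    """Gillespie's direct method for a toy one-species model:
    Hill-regulated production (plus a small basal rate) and first-order decay.
    Rate constants are illustrative, not fitted values."""
    rng = np.random.default_rng(seed)
    k_prod, k_dec, K, n_hill = 10.0, 0.1, 20.0, 4
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = np.array([1.0 + k_prod * hill(x, K, n_hill),  # production
                      k_dec * x])                          # degradation
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)                # exponential waiting time
        x += 1 if rng.random() < a[0] / a0 else -1    # pick which reaction fires
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = ssa(x0=5, t_end=200.0)
```

A hybrid scheme of the kind the abstract describes would additionally partition reactions between such a discrete simulation and ODE integration, which is beyond this sketch.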
772

Computational Techniques for the Analysis of Large Scale Biological Systems

Ahn, Tae-Hyuk 27 August 2012 (has links)
An accelerated pace of discovery in biological sciences is made possible by a new generation of computational biology and bioinformatics tools. In this dissertation we develop novel computational, analytical, and high performance simulation techniques for biological problems, with applications to the yeast cell division cycle, and to the RNA-Sequencing of the yellow fever mosquito. The cell cycle system exhibits stochastic effects when small numbers of molecules react with each other. Consequently, the stochastic effects of the cell cycle are important, and the evolution of cells is best described statistically. The stochastic simulation algorithm (SSA), the standard stochastic method for chemical kinetics, is often slow because it accounts for every individual reaction event. This work develops a stochastic version of a deterministic cell cycle model, in order to capture the stochastic aspects of the evolution of the budding yeast wild-type and mutant strain cells. In order to efficiently run large ensembles to compute statistics of cell evolution, the dissertation investigates parallel simulation strategies, and presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms. This work also proposes new accelerated stochastic simulation algorithms based on a fully implicit approach and on stochastic Taylor expansions. Next Generation RNA-Sequencing, a high-throughput technology to sequence cDNA in order to get information about a sample's RNA content, is becoming an efficient genomic approach to uncover new genes and to study gene expression and alternative splicing. This dissertation develops efficient algorithms and strategies to find new genes in Aedes aegypti, which is the most important vector of dengue fever and yellow fever. We report the discovery of a large number of new gene transcripts, and the identification and characterization of genes that showed male-biased expression profiles. This basic information may open important avenues to control mosquito-borne infectious diseases. / Ph. D.
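The abstract emphasizes running large ensembles of stochastic realizations in parallel to compute statistics of cell evolution. The dissertation's budding yeast model, implicit solvers, and load-balancing analysis are not reproduced here; the hedged sketch below only illustrates the embarrassingly parallel ensemble idea on a toy birth-death process with made-up rate constants.

```python
import numpy as np
from multiprocessing import Pool

def one_run(seed, t_end=50.0, k_birth=1.0, k_death=0.02, x0=0):
    """One SSA realization of a birth-death process; returns the final copy number."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while t < t_end:
        a_birth, a_death = k_birth, k_death * x
        a0 = a_birth + a_death
        t += rng.exponential(1.0 / a0)
        x += 1 if rng.random() < a_birth / a0 else -1
    return x

if __name__ == "__main__":
    # Independent realizations scatter naturally over worker processes.
    with Pool() as pool:
        finals = pool.map(one_run, range(10_000))
    print(np.mean(finals), np.var(finals))   # ensemble statistics of the final state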
773

Threat Assessment and Proactive Decision-Making for Crash Avoidance in Autonomous Vehicles

Khattar, Vanshaj 24 May 2021 (has links)
Threat assessment and reliable motion prediction of surrounding vehicles are some of the major challenges encountered in autonomous vehicles' safe decision-making. Predicting a threat in advance can give an autonomous vehicle enough time to avoid crashes or near-crash situations. Most vehicles on roads are human-driven, making it challenging to predict their intentions and movements due to inherent uncertainty in their behaviors. Moreover, different driver behaviors pose different kinds of threats. Various driver behavior predictive models have been proposed in the literature for motion prediction. However, these models cannot be trusted entirely due to the human drivers' highly uncertain nature. This thesis proposes a novel trust-based driver behavior prediction and stochastic reachable set threat assessment methodology for various dangerous situations on the road. This trust-based methodology allows autonomous vehicles to quantify the degree of trust in their predictions to generate the probabilistically safest trajectory. This approach can be instrumental in near-crash scenarios where no collision-free trajectory exists. Three different driving behaviors are considered: Normal, Aggressive, and Drowsy. Hidden Markov Models are used for driver behavior prediction. A "trust" in the detected driver is established by combining four driving features: longitudinal acceleration, lateral acceleration, lane deviation, and velocity. A stochastic reachable set-based approach is used to model these three different driving behaviors. Two measures of threat are proposed: Current Threat and Short-Term Prediction Threat, which quantify the present and future probability of a crash. The proposed threat assessment methodology resulted in a low rate of false positives and false negatives. This probabilistic threat assessment methodology is used to address the second challenge in autonomous vehicle safety: crash avoidance decision-making. This thesis presents a fast, proactive decision-making methodology based on Stochastic Model Predictive Control (SMPC). A proactive decision-making approach exploits the surrounding human-driven vehicles' intent to assess the future threat, which helps generate a safe trajectory in advance, unlike reactive decision-making approaches that do not account for the surrounding vehicles' future intent. The crash avoidance problem is formulated as a chance-constrained optimization problem to account for uncertainty in the surrounding vehicle's motion. These chance constraints always ensure a minimum probabilistic safety of the autonomous vehicle by keeping the probability of a crash below a predefined risk parameter. This thesis proposes a tractable and deterministic reformulation of these chance constraints using a convex hull formulation for a fast real-time implementation. The controller's performance is studied for different risk parameters used in the chance-constraint formulation. Simulation results show that the proposed control methodology can avoid crashes in most hazardous situations on the road. / Master of Science / Unexpected situations frequently arise on the road and lead to crashes. In an NHTSA study, it was reported that around 94% of car crashes could be attributed to driver errors and misjudgments. These errors include drinking and driving, fatigue, and reckless driving. Full self-driving cars can significantly reduce the frequency of such accidents.
Testing of self-driving cars has recently begun on certain roads, and it is estimated that one in ten cars will be self-driving by the year 2030. This means that these self-driving cars will need to operate in human-driven environments and interact with human-driven vehicles. Therefore, it is crucial for autonomous vehicles to understand the way humans drive on the road in order to avoid collisions and interact safely with human-driven vehicles. Detecting a threat in advance and generating a safe trajectory for crash avoidance are some of the major challenges faced by autonomous vehicles. We have proposed a reliable decision-making algorithm for crash avoidance in autonomous vehicles. Our framework addresses two core challenges encountered in crash avoidance decision-making in autonomous vehicles: 1. The outside challenge: reliable motion prediction of surrounding vehicles to continuously assess the threat to the autonomous vehicle. 2. The inside challenge: generating a safe trajectory for the autonomous vehicle in case of a predicted future threat. The outside challenge is to predict the motion of surrounding vehicles. This requires building a reliable model through which the future evolution of their position states can be predicted. Building these models is not trivial, as the surrounding vehicles' motion depends on human driver intentions and behaviors, which are highly uncertain. Various driver behavior predictive models have been proposed in the literature. However, most do not quantify trust in their predictions. We have proposed a trust-based driver behavior prediction method which combines all sensor measurements to output the probability (trust value) of a certain driver being "drowsy", "aggressive", or "normal". This method allows the autonomous vehicle to choose how much to trust a particular prediction. Once a picture is painted of the surrounding vehicles, we can generate safe trajectories in advance, which is the inside challenge. Most existing approaches use stochastic optimal control methods, which are computationally expensive and impractical for fast real-time decision-making in crash scenarios. We have proposed a fast, proactive decision-making algorithm to generate crash avoidance trajectories based on Stochastic Model Predictive Control (SMPC). We reformulate the SMPC probabilistic constraints as deterministic constraints using a convex hull formulation, allowing for faster real-time implementation. This deterministic SMPC implementation ensures in real time that the vehicle maintains a minimum probabilistic safety.
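The thesis reformulates SMPC chance constraints deterministically via a convex hull formulation, which is not reproduced here. For intuition only, the sketch below shows the more common Gaussian tightening of a single linear chance constraint; the function name `tighten` and all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def tighten(a, mu, Sigma, b, eps):
    """Deterministic surrogate for the chance constraint P(a @ x <= b) >= 1 - eps,
    assuming x ~ N(mu, Sigma): feasible iff a @ mu + z_{1-eps} * sqrt(a @ Sigma @ a) <= b."""
    z = norm.ppf(1.0 - eps)
    lhs = a @ mu + z * np.sqrt(a @ Sigma @ a)
    return lhs <= b, lhs

# Hypothetical numbers: keep a predicted longitudinal gap above 2 m with 95% confidence.
a = np.array([-1.0])          # encode gap >= 2 as -gap <= -2
mu = np.array([6.0])          # predicted mean gap [m]
Sigma = np.array([[4.0]])     # prediction variance [m^2]
feasible, lhs = tighten(a, mu, Sigma, b=-2.0, eps=0.05)
print(feasible, lhs)
```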
774

Stochastic Terrain and Soil Modeling for Off-Road Mobility Studies

Lee, Richard Chan 01 June 2009 (has links)
For realistic predictions of vehicle performance in off-road conditions, it is critical to incorporate in the simulation accurate representations of the variability of the terrain profile. It is not practically feasible to measure the terrain at a sufficiently large number of points, or, if measured, to use such data directly in the simulation. Dedicated modeling techniques and computational methods that realistically and efficiently simulate off-road operating conditions are thus necessary. Many studies have recently been conducted to identify effective and appropriate ways to reduce experimental data in order to preserve only the essential information needed to re-create the main terrain characteristics for future use. This thesis focuses on modeling terrain profiles using the finite difference approach for solving linear second-order stochastic partial differential equations. We currently use this approach to model non-stationary terrain profiles in two dimensions (i.e., surface maps). Certain assumptions are made for the values of the model coefficients to obtain the terrain profile through the fast computational approach described, while preserving the stochastic properties of the original terrain topology. The technique developed is illustrated by recreating the stochastic properties of a terrain profile sample measured experimentally. To further analyze off-road conditions, stochastic soil properties are incorporated into the terrain topology. Soil models can be developed empirically by measuring soil data at several points, or they can be created by using mathematical relations such as Bekker's pressure-sinkage equation for homogeneous soils. In this thesis, based on a previously developed stochastic soil model, the polynomial chaos method is incorporated into the soil model. In a virtual proving ground, the wheel-soil interaction has to be simulated in order to analyze vehicle maneuverability over different soil types. Simulations have been created on a surface map for different case studies: stepping with a rigid plate, a rigid wheel, and a flexible wheel, and rolling of a rigid wheel and a flexible wheel. These case studies had various combinations of stochastic or deterministic terrain profile, stochastic or deterministic soil model, and an object to run across the surface (e.g., deterministic terrain profile, stochastic soil model, rolling rigid wheel). This thesis develops a comprehensive terrain and soil simulation environment for off-road mobility studies. Moreover, the technique developed to simulate stochastic terrain profiles can be employed to simulate other stochastic systems modeled by PDEs. / Master of Science
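The abstract cites Bekker's pressure-sinkage equation for homogeneous soils, p = (k_c/b + k_phi) z^n. A minimal sketch follows; the parameter values are illustrative placeholders, not soil data from the thesis.

```python
def bekker_pressure(z, b, k_c, k_phi, n):
    """Bekker pressure-sinkage relation p = (k_c / b + k_phi) * z**n,
    with sinkage z [m] and contact-patch width b [m]."""
    return (k_c / b + k_phi) * z**n

# Illustrative placeholder parameters (not measured soil values).
p = bekker_pressure(z=0.05, b=0.20, k_c=1_000.0, k_phi=150_000.0, n=1.1)
print(p)   # contact pressure [Pa] at 5 cm sinkage
```

In the stochastic soil models the abstract describes, coefficients such as k_c, k_phi, and n would themselves be treated as random quantities rather than fixed numbers.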
775

Data-driven minimum entropy control for stochastic nonlinear systems using the cumulant-generating function

Zhang, Qichun, Zhang, J., Wang, H. 27 September 2022 (has links)
This paper presents a novel minimum entropy control algorithm for a class of stochastic nonlinear systems subjected to non-Gaussian noises. Entropy control can be considered an optimization problem for attenuating system randomness, but the mean value has to be considered separately. To overcome this disadvantage, a new representation of the system stochastic properties was given using the cumulant-generating function, based on the moment-generating function, in which the mean value and the entropy were reflected by the shape of the cumulant-generating function. Based on samples of the system output and control input, a time-variant linear model was identified, and the minimum entropy optimization was transformed into system stabilization. Then, an optimal control strategy was developed to achieve randomness attenuation, and the boundedness of the controlled system output was analyzed. The effectiveness of the presented control algorithm was demonstrated by a numerical example. In this paper, a data-driven minimum entropy design is presented without prior knowledge of the system model; entropy optimization is achieved by the system stabilization approach, in which the stochastic distribution control and minimum entropy are unified using the same identified structure; and a potential framework is obtained since all the existing system stabilization methods can be adopted to achieve the minimum entropy objective.
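The paper's identified time-variant linear model and control law are not given in this record. As a hedged illustration of the underlying idea only, the cumulant-generating function K(t) = log E[exp(t X)] can be estimated from output samples, and its derivatives at zero recover the mean and variance that the shape-based design works with; the sample distribution and step sizes below are arbitrary.

```python
import numpy as np

def cgf(samples, t):
    """Empirical cumulant-generating function K(t) = log E[exp(t * X)]."""
    return np.log(np.mean(np.exp(t * samples)))

# Arbitrary non-Gaussian samples standing in for a system output.
rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=10_000)

h = 1e-3
mean_est = (cgf(x, h) - cgf(x, -h)) / (2 * h)                  # K'(0)  = mean
var_est = (cgf(x, h) - 2 * cgf(x, 0.0) + cgf(x, -h)) / h**2    # K''(0) = variance
print(mean_est, var_est)   # compare with 3.0 and 4.5 for this gamma distribution
```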
776

Stochastic Volatility Models and Simulated Maximum Likelihood Estimation

Choi, Ji Eun 08 July 2011 (has links)
Financial time series studies indicate that the lognormal assumption for the return of an underlying security is often violated in practice. This is due to the presence of time-varying volatility in the return series. The most common departures are due to a fat left-tail of the return distribution, volatility clustering or persistence, and asymmetry of the volatility. To account for these characteristics of time-varying volatility, many volatility models have been proposed and studied in the financial time series literature. Two main conditional-variance model specifications are the autoregressive conditional heteroscedasticity (ARCH) and the stochastic volatility (SV) models. The SV model, proposed by Taylor (1986), is a useful alternative to the ARCH family (Engle (1982)). It incorporates time-dependency of the volatility through a latent process, which is an autoregressive model of order 1 (AR(1)), and successfully accounts for the stylized facts of the return series implied by the characteristics of time-varying volatility. In this thesis, we review both ARCH and SV models but focus on the SV model and its variations. We consider two modified SV models. One is an autoregressive process with stochastic volatility errors (AR--SV) and the other is the Markov regime switching stochastic volatility (MSSV) model. The AR--SV model consists of two AR processes. The conditional mean process is an AR(p) model, and the conditional variance process is an AR(1) model. One notable advantage of the AR--SV model is that it better captures volatility persistence by considering the AR structure in the conditional mean process. The MSSV model consists of the SV model and a discrete Markov process. In this model, the volatility can switch from a low level to a high level at random points in time, and this feature better captures the volatility movement. We study the moment properties and the likelihood functions associated with these models. In spite of the simple structure of the SV models, it is not easy to estimate parameters by conventional estimation methods such as maximum likelihood estimation (MLE) or the Bayesian method because of the presence of the latent log-variance process. Of the various estimation methods proposed in the SV model literature, we consider the simulated maximum likelihood (SML) method with the efficient importance sampling (EIS) technique, one of the most efficient estimation methods for SV models. In particular, the EIS technique is applied in the SML to reduce the Monte Carlo sampling error. It increases the accuracy of the estimates by determining an importance function with a conditional density function of the latent log variance at time t given the latent log variance and the return at time t-1. Initially we perform an empirical study to compare the estimation of the SV model using the SML method with EIS and the Markov chain Monte Carlo (MCMC) method with Gibbs sampling. We conclude that SML has a slight edge over MCMC. We then introduce the SML approach in the AR--SV models and study the performance of the estimation method through simulation studies and real-data analysis. In the analysis, we use the AIC and BIC criteria to determine the order of the AR process and perform model diagnostics for the goodness of fit. In addition, we introduce the MSSV models and extend the SML approach with EIS to estimate this new model.
Simulation studies and empirical studies with several return series indicate that this model is reasonable when there is a possibility of volatility switching at random time points. Based on our analysis, the modified SV, AR--SV, and MSSV models capture the stylized facts of financial return series reasonably well, and the SML estimation method with the EIS technique works very well in the models and the cases considered.
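The basic SV model described above has a standard discrete-time form: the latent log-variance h_t follows an AR(1), h_t = mu + phi (h_{t-1} - mu) + sigma_eta eta_t, and returns are y_t = exp(h_t / 2) eps_t. A short simulation sketch follows; the parameter values are illustrative, not estimates from the thesis.

```python
import numpy as np

def simulate_sv(T, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate the basic SV model: the latent log-variance h follows an AR(1),
    and returns are y_t = exp(h_t / 2) * eps_t with standard normal shocks."""
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    h[0] = mu + sigma_eta / np.sqrt(1.0 - phi**2) * rng.standard_normal()  # stationary start
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
    y = np.exp(h / 2.0) * rng.standard_normal(T)
    return y, h

returns, log_var = simulate_sv(1000)
```

Estimating mu, phi, and sigma_eta from the observed returns alone, with h latent, is exactly the difficulty that the SML/EIS machinery in the thesis addresses.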
777

Stochastic Analysis Of Flow And Solute Transport In Heterogeneous Porous Media Using Perturbation Approach

Chaudhuri, Abhijit 01 1900 (has links)
Analysis of flow and solute transport problems in porous media is affected by uncertainty inherent both in the boundary conditions and in the spatial variability of system parameters. Experimental investigations reveal that the parameters may vary across scales by several orders of magnitude. This affects the solute plume characteristics in field-scale problems and causes uncertainty in the prediction of concentration. The main focus of the present thesis is to analyze the probabilistic behavior of solute concentration in three-dimensional (3-D) heterogeneous porous media. The framework for the probabilistic analysis has been developed using a perturbation approach for both a spectral-based analytical method and a finite element based numerical method. The results of the probabilistic analysis are presented either in terms of solute plume characteristics or the prediction uncertainty of the concentration. After providing a brief introduction on the role of stochastic analysis in subsurface hydrology in chapter 1, a detailed review of the literature is presented in chapter 2 to establish the existing state of the art in research on the probabilistic analysis of flow and transport in simple and complex heterogeneous porous media. The literature review is mainly focused on methods of solution of the stochastic differential equations. The perturbation-based spectral method is often used for probabilistic analysis of flow and solute transport problems. Using this analytical method, a nonlocal equation is solved to derive expressions for the spatial plume moments, which represent the solute movement and spreading in an average sense. In chapter 3 of the present thesis, local dispersivity is also assumed to be a random space function, along with hydraulic conductivity. For various correlation coefficients of the random parameters, results in terms of the field-scale effective dispersivity are presented to demonstrate the effect of local dispersivity variation in space. The randomness of local dispersivity is found to reduce the effective field-scale dispersivity. The transverse effective macrodispersivity is affected more than the longitudinal effective macrodispersivity by the random spatial variation of local dispersivity. The reduction in effective field-scale longitudinal dispersivity is larger for a positive correlation coefficient. The applicability of the analytical method discussed above is limited to simple boundary conditions, and the spectral solution for the statistical moments of concentration as a function of space and time requires higher-dimensional integration. The perturbation-based stochastic finite element method (SFEM) is an alternative method for performing probabilistic analysis of concentration, though its use is not common in the literature of stochastic subsurface hydrology. The perturbation-based SFEM, which uses the FEM for spatial discretization of the steady-state flow equation and a Laplace transform for the solute transport equation, is developed in chapter 4. The SFEM is formulated using a Taylor series of the dependent variable up to the second-order term. This results in a second-order accurate mean and first-order accurate standard deviation of concentration. In this study the governing medium properties, viz. hydraulic conductivity, dispersivity, molecular diffusion, porosity, sorption coefficient and decay coefficient, are considered to vary randomly in space.
The accuracy of results and computational efficiency of the SFEM are compared with the Monte Carlo simulation method (MCSM) for both 1-D and 3-D problems. The comparison of results obtained by the SFEM and MCSM indicates that the SFEM is capable of providing reasonably accurate mean and standard deviation of concentration. The Laplace transform based SFEM is simpler and advantageous since it does not require any stability criterion for choosing the time step. However, it is not applicable to nonlinear transport problems or unsteady flow conditions. In those situations, a finite difference method is adopted for the time discretization. The first part of chapter 5 deals with the formulation of the time-domain SFEM for the linear solute transport problem. Later the SFEM is extended to a problem which involves uncertainty in both the system parameters and the boundary/source conditions. For the flow problem, the randomness in the boundary condition is attributed to the random spatial variation of recharge at the top of the domain. The random recharge is modeled using a mean, standard deviation and 2-D spatial correlation function. It is observed that even for the deterministic recharge case, the spatial behavior of the prediction uncertainty of concentration is affected significantly by the variation of the flow field. When the effect of randomness of the recharge condition is included, the standard deviation of concentration increases further. For solute transport, the concentration input at the source is modeled as a time-varying random process. Two types of random source condition are considered: first, the amount of solute mass released at uniform time intervals is random; second, the source is treated as a Poisson process. For the case of multiple random mass releases, the stochastic response function of the stochastic system is obtained using the SFEM. Comparing the results for the two types of random sources, it is found that the prediction uncertainty is larger when the source is modeled as a Poisson process. Probabilistic analysis of nonlinear solute transport problems using the MCSM often requires a large computational cost. The formulation of the alternative efficient method, the SFEM, for the nonlinear solute transport problem is presented in chapter 6. A general Langmuir-Freundlich isotherm is considered to model the equilibrium mass transfer between the aqueous and sorbed phases. In the SFEM formulation, which uses the Taylor series expansion, the zeroth-order derivatives of concentration are obtained by solving nonlinear algebraic equations; the higher-order derivatives are obtained by solving linear equations. During transport, nonlinear sorbing solutes are characterized by sharp solute fronts with a traveling wave behavior, and because of this the prediction uncertainty is significantly higher. The comparison of accuracy and computational efficiency of the SFEM with the MCSM for 1-D and 3-D problems reveals that the performance of the SFEM for the nonlinear problem is good and similar to that for the linear problem. In chapter 7, the nonlinear SFEM is extended to the probabilistic analysis of a biodegrading solute, which is modeled by a set of PDEs coupled with nonlinear Monod-type source/sink terms. In this study, the biodegradation of a single solute by a single class of microorganisms, coupled with dynamic microbial growth, is attempted using this method.
The temporal behavior of the mean and standard deviation of the substrate concentration is not monotonic; they show peaks before reaching a lower steady-state value. A comparison between the SFEM and MCSM for the mean and standard deviation of concentration is made for various stochastic cases of the 1-D problem. In most of the cases the results compare reasonably well. The probabilistic behavior of the substrate concentration is analyzed for different correlation coefficients between the physical parameters (hydraulic conductivity, porosity, dispersivity and diffusion coefficient) and the biological parameters (maximum substrate utilization rate and the coefficient of cell decay). It is observed that a positive correlation between the two sets of parameters results in a lower mean and a significantly higher standard deviation of substrate concentration. In the previous chapters, the stochastic analysis pertaining to the prediction uncertainty of concentration was presented for simple problems where the system parameters are modeled as statistically homogeneous random fields. Experimental investigations in a small watershed point towards a complex geological substratum. It has been observed through 2-D electrical resistivity imaging that the interface between the layers of the highly conductive weathered zone and the low-conductivity clay is very irregular and complex in nature. In chapter 8, a theoretical model based on a stochastic approach is developed to simulate the complex geological structure of the weathered zone, using the 2-D electrical image. The statistical parameters of the hydraulic conductivity field are estimated using data obtained from the Magnetic Resonance Sounding (MRS) method. Due to the large complexity in the distribution of the weathered zone, the stochastic analysis of seepage flux has been carried out using the MCSM. A better characterization of the domain, based on sufficient experimental data and a suitable model of the random conductivity field, may allow use of the more efficient SFEM. The flow domain is modeled as (i) an unstructured random field consisting of a single material with spatial heterogeneity, and (ii) a structured random field, informed by the 2-D electrical imaging, which is composed of two layers with different heterogeneous random hydraulic properties. The simulations show that the prediction uncertainty of the seepage flux is comparatively lower when the structured modeling framework is used rather than the unstructured one. Finally, in chapter 9 the important conclusions drawn from the various chapters are summarized.
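The SFEM formulation itself is not reproduced here. As a small illustration of the Monte Carlo simulation baseline (MCSM) that the thesis compares against, the sketch below draws statistically homogeneous lognormal hydraulic conductivity fields with an exponential ln K covariance and collects statistics of a derived quantity; the grid, covariance parameters, and the harmonic-mean summary are all assumptions made for the example.

```python
import numpy as np

def lognormal_k_field(n, dx, mean_lnk=0.0, var_lnk=1.0, corr_len=5.0, rng=None):
    """One realization of a statistically homogeneous lognormal conductivity field
    on a uniform 1-D grid, with exponential covariance of ln K."""
    rng = rng or np.random.default_rng()
    x = np.arange(n) * dx
    cov = var_lnk * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))   # factor the covariance matrix
    return np.exp(mean_lnk + chol @ rng.standard_normal(n))

rng = np.random.default_rng(0)
keff = []
for _ in range(500):                        # Monte Carlo loop over realizations
    k = lognormal_k_field(200, 0.5, rng=rng)
    keff.append(len(k) / np.sum(1.0 / k))   # harmonic mean = effective 1-D (series) conductivity
print(np.mean(keff), np.std(keff))          # mean and uncertainty of the summary quantity
```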
778

Fractional Stochastic Dynamics in Structural Stability Analysis

Deng, Jian January 2013 (has links)
The objective of this thesis is to develop a novel methodology of fractional stochastic dynamics to study stochastic stability of viscoelastic systems under stochastic loadings. Numerous structures in civil engineering are driven by dynamic forces, such as seismic and wind loads, which can be described satisfactorily only by using probabilistic models, such as white noise processes, real noise processes, or bounded noise processes. Viscoelastic materials exhibit time-dependent stress relaxation and creep; it has been shown that fractional calculus provides a unique and powerful mathematical tool to model such a hereditary property. Investigation of stochastic stability of viscoelastic systems with fractional calculus frequently leads to a parametrized family of fractional stochastic differential equations of motion. Parametric excitation may cause parametric resonance or instability, which is more dangerous than ordinary resonance as it is characterized by exponential growth of the response amplitudes even in the presence of damping. The Lyapunov exponents and moment Lyapunov exponents provide not only the information about stability or instability of stochastic systems, but also how rapidly the response grows or diminishes with time. Lyapunov exponents characterize sample stability or instability. However, this sample stability cannot assure the moment stability. Hence, to obtain a complete picture of the dynamic stability, it is important to study both the top Lyapunov exponent and the moment Lyapunov exponent. Unfortunately, it is very difficult to obtain the accurate values of these two exponents. One has to resort to numerical and approximate approaches. The main contributions of this thesis are: (1) A new numerical simulation method is proposed to determine moment Lyapunov exponents of fractional stochastic systems, in which three steps are involved: discretization of fractional derivatives, numerical solution of the fractional equation, and an algorithm for calculating Lyapunov exponents from small data sets. (2) A higher-order stochastic averaging method is developed and applied to investigate stochastic stability of fractional viscoelastic single-degree-of-freedom structures under white noise, real noise, or bounded noise excitation. (3) For two-degree-of-freedom coupled non-gyroscopic and gyroscopic viscoelastic systems under random excitation, the Stratonovich equations of motion are set up, and then decoupled into four-dimensional Ito stochastic differential equations, by making use of the method of stochastic averaging for the non-viscoelastic terms and the method of Larionov for viscoelastic terms. An elegant scheme for formulating the eigenvalue problems is presented by using Khasminskii and Wedig's mathematical transformations from the decoupled Ito equations. Moment Lyapunov exponents are approximately determined by solving the eigenvalue problems through Fourier series expansion. Stability boundaries, critical excitations, and the stability index are obtained. The effects of various parameters on the stochastic stability of the system are discussed. Parametric resonances are studied in detail. Approximate analytical results are confirmed by numerical simulations.
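The thesis's own discretization scheme for contribution (1) is not detailed in the abstract. A common Grünwald-Letnikov approximation is sketched below purely as an illustration of what discretizing a fractional derivative involves; the test function and grid are arbitrary.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights: w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_derivative(f, alpha, h):
    """Order-alpha fractional derivative of samples f on a uniform grid of spacing h:
    D^alpha f(t_k) ~ h**(-alpha) * sum_j w_j * f(t_{k-j})."""
    w = gl_weights(alpha, len(f) - 1)
    return np.array([h**(-alpha) * np.dot(w[:k + 1], f[k::-1]) for k in range(len(f))])

# Sanity check against a known result: the half derivative of f(t) = t is 2*sqrt(t/pi).
t = np.linspace(0.0, 1.0, 201)
approx = gl_derivative(t, alpha=0.5, h=t[1] - t[0])
print(approx[-1], 2.0 * np.sqrt(1.0 / np.pi))
```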
780

Stochastic task scheduling in time-critical information delivery systems /

Britton, Matthew Scott. January 2003 (has links) (PDF)
Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 2003. / "January 2003." Includes bibliographical references (leaves 120-129).
