1

Effective Stochastic Models of Neuroscientific Data with Application to Weakly Electric Fish

Melanson, Alexandre 23 April 2019 (has links)
Neural systems are often stochastic, non-linear, and non-autonomous. The complex manifestation of these aspects hinders the interpretation of neuroscientific data. Neuroscience thus benefits from the inclusion of theoretical models in its methodology. Detailed biophysical models of neural systems, however, are often plagued by high-dimensional and poorly constrained parameter spaces. As an alternative, data-driven effective models can often explain the core dynamical features of a dataset with few underlying assumptions. By lumping high-dimensional fluctuations into low-dimensional stochastic terms, observed time-series can be well-represented by stochastic dynamical systems. Here, I apply this approach to two datasets from weakly electric fish. The rate of electrosensory sampling of freely behaving fish displays spontaneous transitions between two preferred values: an active exploratory state and a resting state. I show that, over a long timescale, this rate can be modelled with a stochastic double-well system where a slow external agent modulates the relative depth of the wells. On a shorter timescale, however, fish exhibit abrupt and transient increases in sampling rate not consistent with a diffusion process. I develop and apply a novel inference method to construct a jump-diffusion process that fits the observed fluctuations. This same technique is successfully applied to intrinsic membrane voltage noise in pyramidal neurons of the primary electrosensory processing area, which display abrupt depolarization events along with diffusive fluctuations. I then characterize a novel sensory acquisition strategy whereby fish adopt a rhythmic movement pattern coupled with stochastic oscillations of their sampling rate. Lastly, in the context of differentiating between self-generated and external electrosensory signals, I model the sensory signature of communication signals between fish. 
This analysis provides supporting evidence for the presence of a sensory ambiguity associated with these signals.
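The double-well-plus-jumps picture described above can be illustrated with a minimal Euler-Maruyama simulation. This is a generic sketch, not the thesis's fitted model: the quartic potential, noise level, jump rate, and exponentially distributed jump sizes are all illustrative assumptions.

```python
import numpy as np

def simulate_jump_diffusion(T=200.0, dt=0.01, sigma=0.6, jump_rate=0.05,
                            jump_mean=1.5, seed=0):
    """Euler-Maruyama simulation of a double-well jump-diffusion:
    dx = -V'(x) dt + sigma dW + jumps, with V(x) = x^4/4 - x^2/2."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0] = 1.0  # start in the right-hand well
    for i in range(1, n):
        drift = x[i-1] - x[i-1]**3          # -V'(x) for the quartic potential
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal()
        # rare, abrupt upward excursions: Poisson jump counts with
        # exponentially distributed jump sizes
        n_jumps = rng.poisson(jump_rate * dt)
        jump = rng.exponential(jump_mean) * n_jumps
        x[i] = x[i-1] + drift * dt + diffusion + jump
    return x

x = simulate_jump_diffusion()
```

Fitting such a model to data, as in the thesis, would additionally require inferring the drift, diffusion, and jump terms from the observed increments rather than fixing them a priori.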
2

THE DYNAMICAL SYSTEMS APPROACH TO MACROECONOMICS

Reis, Carneiro da Costa 10 1900 (has links)
The aim of this thesis is to provide mathematical tools for an alternative to the mainstream study of macroeconomics, with a focus on debt-driven dynamics.

We start with a survey of the literature on formalizations of Minsky's Financial Instability Hypothesis in the context of stock-flow consistent models.

We then study a family of macroeconomic models that date back to the Goodwin model. In particular, we propose a stochastic extension where noise is introduced in the productivity. Besides proving existence and uniqueness of solutions, we show that orbits must loop around a specific point indefinitely.

Subsequently, we analyze the Keen model, where private debt is introduced. We demonstrate that there are two key equilibrium points, intuitively denoted the good and bad equilibria. Analytical stability analysis is followed by a numerical study of the basin of attraction of the good equilibrium.

Assuming low interest rate levels, we derive an approximate system through perturbation techniques, which can be solved analytically. The zero-order solution, in particular, is shown to converge to a limit cycle. The first-order solution, on the other hand, is shown to explode, rendering its use dubious for long-term assessments.

Alternatively, we propose an extension of the Keen model that relaxes the assumption of instantaneous completion of investment projects. Using distributed time delays, we verify the existence of the key good and bad equilibrium points, followed by their stability analysis. Through bifurcation theory, we verify the existence of limit cycles for certain mean completion times, which are absent in the original Keen model.

Finally, we examine the Keen model under government intervention, where we introduce a general form for the government policy. Besides performing stability analysis, we prove several results concerning the persistence of both profits and employment. In economic terms, we demonstrate that when the government is responsive enough, total economic meltdowns are avoidable. / Doctor of Philosophy (PhD)
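The looping orbits proved for the stochastic Goodwin extension have a deterministic skeleton that is easy to simulate. The sketch below uses a linear Phillips curve and purely illustrative parameter values, neither of which is taken from the thesis:

```python
import numpy as np

def simulate_goodwin(T=200.0, dt=0.01, alpha=0.02, beta=0.01, nu=3.0,
                     gamma=0.5, rho=0.6, omega0=0.85, lam0=0.9):
    """Goodwin growth cycle with a linear Phillips curve Phi(lam) = rho*lam - gamma:
        d(omega)/dt = omega * (rho*lam - gamma - alpha)     # wage share
        d(lam)/dt   = lam * ((1 - omega)/nu - alpha - beta) # employment rate
    Integrated with classical RK4 to preserve the closed orbits around the
    interior equilibrium (omega*, lam*) = (1 - nu*(alpha+beta), (gamma+alpha)/rho)."""
    def f(y):
        omega, lam = y
        return np.array([omega * (rho * lam - gamma - alpha),
                         lam * ((1 - omega) / nu - alpha - beta)])
    n = int(T / dt)
    traj = np.empty((n, 2))
    traj[0] = (omega0, lam0)
    for i in range(1, n):
        y = traj[i-1]
        k1 = f(y)
        k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2)
        k4 = f(y + dt * k3)
        traj[i] = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj
```

The stochastic version studied in the thesis would replace the constant productivity growth rate with a noisy one, turning the ODE system into an SDE whose orbits still wind around the interior equilibrium.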
3

Design and Analysis of Stochastic Dynamical Systems with Fokker-Planck Equation

Kumar, Mrinal 2009 December 1900 (has links)
This dissertation addresses design and analysis aspects of stochastic dynamical systems using the Fokker-Planck equation (FPE). A new numerical methodology based on the partition-of-unity meshless paradigm is developed to tackle the greatest hurdle in the successful numerical solution of FPE, namely the curse of dimensionality. A local variational form of the Fokker-Planck operator is developed with provision for h- and p-refinement. The resulting high-dimensional weak-form integrals are evaluated using quasi-Monte Carlo techniques. Spectral analysis of the discretized Fokker-Planck operator, followed by spurious mode rejection, is employed to construct a new semi-analytical algorithm that obtains near real-time approximations of the transient FPE response of high-dimensional nonlinear dynamical systems in terms of a reduced subset of admissible modes. Numerical evidence is provided showing that the curse of dimensionality associated with FPE is broken by the proposed technique, while providing problem-size reduction of several orders of magnitude. In addition, a simple modification of the norm in the variational formulation is shown to improve the quality of approximation significantly while keeping the problem size fixed. Norm modification is also employed as part of a recursive methodology for tracking the optimal finite domain on which to solve FPE numerically. The basic tools developed to solve FPE are applied to problems in nonlinear stochastic optimal control and nonlinear filtering. A policy iteration algorithm for stochastic dynamical systems is implemented in which successive approximations of a forced backward Kolmogorov equation (BKE) are shown to converge to the solution of the corresponding Hamilton-Jacobi-Bellman (HJB) equation. Several examples, including a four-state missile autopilot design for pitch control, are considered.
Application of the FPE solver to nonlinear filtering is considered, with special emphasis on situations involving long durations of propagation between measurement updates; the measurement update is implemented as a weak form of Bayes' rule. A nonlinear filter is formulated that provides complete probabilistic state information conditioned on the measurements. Examples with long propagation times are considered to demonstrate the benefits of the FPE-based approach to filtering.
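For one-dimensional gradient systems the stationary FPE admits a closed-form solution, which makes a useful sanity check for any numerical FPE solver. The sketch below verifies the stationary density of an Ornstein-Uhlenbeck process against its known variance; it is an illustrative example, not part of the dissertation's solver.

```python
import numpy as np

def stationary_density(theta=1.0, sigma=0.5, xmax=4.0, n=4001):
    """Closed-form stationary solution of the 1-D Fokker-Planck equation
    for the Ornstein-Uhlenbeck process dx = -theta*x dt + sigma dW:
        p_s(x) ~ exp(-2 V(x) / sigma^2),  V(x) = theta * x^2 / 2,
    i.e. a Gaussian with variance sigma^2 / (2*theta)."""
    x = np.linspace(-xmax, xmax, n)
    dx = x[1] - x[0]
    p = np.exp(-theta * x**2 / sigma**2)   # unnormalized stationary density
    p /= p.sum() * dx                      # normalize on the grid
    var = np.sum(x**2 * p) * dx            # grid estimate of the variance
    return x, p, var

_, _, var = stationary_density()
# analytic variance: sigma^2 / (2*theta) = 0.25 / 2 = 0.125
```

Checks like this against analytically solvable cases are standard practice before trusting a solver on nonlinear, high-dimensional problems where no closed form exists.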
4

New Algorithms for Uncertainty Quantification and Nonlinear Estimation of Stochastic Dynamical Systems

Dutta, Parikshit 2011 August 1900 (has links)
Recently there has been growing interest in characterizing and reducing uncertainty in stochastic dynamical systems. This drive arises out of the need to manage uncertainty in complex, high-dimensional physical systems. Traditional techniques of uncertainty quantification (UQ) use local linearization of the dynamics and assume Gaussian probability evolution. But several difficulties arise when these UQ models are applied to real-world problems, which are generally nonlinear in nature. Hence, to improve performance, robust algorithms that can work efficiently in a nonlinear, non-Gaussian setting are desired. The main focus of this dissertation is to develop UQ algorithms for nonlinear systems where uncertainty evolves in a non-Gaussian manner. The algorithms developed are then applied to state estimation of real-world systems. The first part of the dissertation focuses on using polynomial chaos (PC) for uncertainty propagation, achieving the estimation task through higher-order moment updates and Bayes' rule. The second part mainly deals with Frobenius-Perron (FP) operator theory, how it can be used to propagate uncertainty in dynamical systems, and then using it to estimate states via a Bayesian update. Finally, a method to represent the process noise in a stochastic dynamical system using a finite-term Karhunen-Loève (KL) expansion is proposed. The uncertainty in the resulting approximated system is propagated using the FP operator. The performance of the PC-based estimation algorithms was compared with the extended Kalman filter (EKF) and unscented Kalman filter (UKF), and the FP operator-based techniques were compared with particle filters, when applied to a Duffing oscillator and the hypersonic reentry of a vehicle into the atmosphere of Mars. It was found that the accuracy of the PC-based estimators is higher than that of the EKF or UKF, and the FP operator-based estimators were computationally superior to the particle filtering algorithms.
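The polynomial chaos machinery referred to above can be sketched in a few lines for a scalar random variable. The example below projects f(ξ) = ξ², with ξ ~ N(0,1), onto probabilists' Hermite polynomials via Gauss-Hermite quadrature; the function and truncation order are illustrative choices, not the dissertation's test cases.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pc_coefficients(f, order=4, quad_pts=20):
    """Galerkin projection of f(xi), xi ~ N(0,1), onto probabilists'
    Hermite polynomials He_k via Gauss-Hermite quadrature:
        a_k = E[f(xi) * He_k(xi)] / E[He_k(xi)^2],  with E[He_k^2] = k!."""
    nodes, weights = hermegauss(quad_pts)     # weight function exp(-x^2/2)
    weights = weights / np.sqrt(2 * np.pi)    # rescale to the N(0,1) measure
    coeffs = []
    for k in range(order + 1):
        basis_k = hermeval(nodes, [0] * k + [1])   # He_k evaluated at nodes
        a_k = np.sum(weights * f(nodes) * basis_k) / math.factorial(k)
        coeffs.append(a_k)
    return np.array(coeffs)

# f(xi) = xi^2 has the exact expansion He_0 + He_2,
# so the coefficients are approximately [1, 0, 1, 0, 0]:
# mean = a_0 = 1, variance = sum_{k>=1} a_k^2 * k! = 2.
a = pc_coefficients(lambda x: x**2)
```

The moments recovered from the coefficients match the exact values E[ξ²] = 1 and Var[ξ²] = 2, which is the kind of consistency a PC-based estimator relies on before the Bayesian update step.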
5

Stability and variability of open-ocean deep convection in deterministic and stochastic simple models

Kuhlbrodt, Till January 2002 (has links)
Deep convection is an essential part of the circulation in the North Atlantic Ocean. It influences the northward heat transport achieved by the thermohaline circulation. Understanding its stability and variability is therefore necessary for assessing climatic changes in the area of the North Atlantic.

This thesis aims at improving the conceptual understanding of the stability and variability of deep convection. Observational data from the Labrador Sea show phases with and without deep convection. A simple two-box model is fitted to these data. The results suggest that the Labrador Sea has two coexisting stable states, one with regular deep convection and one without deep convection. This bistability arises from a positive salinity feedback that is due to the net freshwater input into the surface layer. The convecting state can easily become unstable if the mean forcing shifts to warmer or less saline conditions.

The weather-induced variability of the external forcing is included in the box model by adding a stochastic forcing term. It turns out that deep convection is then switched "on" and "off" frequently. The mean residence time in either state is a measure of its stochastic stability. The stochastic stability depends smoothly on the forcing parameters, in contrast to the deterministic (non-stochastic) stability, which may change abruptly. The mean and the variance of the stochastic forcing both have an impact on the frequency of deep convection. For instance, a decline in convection frequency due to a surface freshening may be compensated for by an increased heat flux variability.

With a further simplified box model, some stochastic stability features are studied analytically. A new effect is described, called wandering monostability: even if deep convection is not a stable state due to changed forcing parameters, the stochastic forcing can still trigger convection events frequently. The analytical expressions explicitly show how wandering monostability and other effects depend on the model parameters. This dependence is always exponential for the mean residence times, but for the probability of long nonconvecting phases it is exponential only if this probability is small. It is to be expected that wandering monostability is relevant in other parts of the climate system as well.

All in all, the results demonstrate that the stability of deep convection in the Labrador Sea reacts very sensitively to the forcing. The presence of variability is crucial for understanding this sensitivity. Small changes in the forcing can already significantly lower the frequency of deep convection events, which presumably strongly affects the regional climate.

Note: for this dissertation the author received the 2003 Carl Ramsauer Prize, awarded by the Physikalische Gesellschaft zu Berlin for the best dissertation at each of the four universities Freie Universität Berlin, Humboldt-Universität zu Berlin, Technische Universität Berlin, and Universität Potsdam.
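The interplay of mean forcing and noise described above can be reproduced qualitatively with a one-variable bistable toy model. This is not the thesis's two-box model: the cubic drift, the parameter values, and the reading of x > 0 as the "convecting" state are all illustrative assumptions.

```python
import numpy as np

def residence_fraction(mu, sigma, T=2000.0, dt=0.01, seed=1):
    """Fraction of time a bistable system spends in the x > 0 state under
    the Euler-Maruyama discretization of  dx = (x - x^3 + mu) dt + sigma dW.
    Here mu plays the role of the mean forcing and sigma that of the
    weather-induced variability."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0] = 1.0  # start in the "convecting" well
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    for i in range(1, n):
        x[i] = x[i-1] + (x[i-1] - x[i-1]**3 + mu) * dt + noise[i]
    return float(np.mean(x > 0))
```

Sweeping mu at fixed sigma shows the occupation of the x > 0 state varying smoothly with the mean forcing, and raising sigma at fixed mu shifts the occupation toward 1/2, which mirrors the compensation of a surface freshening by increased heat-flux variability discussed in the abstract.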
6

Efficient Spectral-Chaos Methods for Uncertainty Quantification in Long-Time Response of Stochastic Dynamical Systems

Hugo Esquivel (10702248) 06 May 2021 (has links)
Uncertainty quantification techniques based on the spectral approach have been studied extensively in the literature to characterize and quantify, at low computational cost, the impact that uncertainties may have on large-scale engineering problems. One such technique is generalized polynomial chaos (gPC), which uses a time-independent orthogonal basis to expand a stochastic process in the space of random functions. The method uses a specific Askey-chaos system that is concordant with the measure defined on the probability space in order to ensure exponential convergence to the solution. For nearly two decades, this technique has been used widely by researchers in the area of uncertainty quantification to solve stochastic problems with the spectral approach. However, a major drawback of the gPC method is that it cannot resolve problems that feature strong nonlinear dependencies over the probability space as time progresses. This drawback arises from the time-independent nature of the random basis, which unavoidably loses its optimality as soon as the probability distribution of the system's state starts to evolve dynamically in time.

Another technique is time-dependent generalized polynomial chaos (TD-gPC), which uses a time-dependent orthogonal basis to better represent the stochastic part of the solution space (a.k.a. the random function space, or RFS) in time. The development of this technique was motivated by the fact that the probability distribution of the solution changes with time, which in turn requires that the random basis be updated frequently during the simulation to ensure that the mean-square error is kept orthogonal to the discretized RFS. Though this technique works well for problems that feature strong nonlinear dependencies over the probability space, the TD-gPC method has a serious issue: it suffers from the curse of dimensionality at the RFS level. This is because in all gPC-based methods the RFS is constructed using a tensor product of vector spaces, with each of these representing a single RFS over one of the dimensions of the probability space. As a result, the higher the dimensionality of the probability space, the more vector spaces are needed in the construction of a suitable RFS. To reduce the dimensionality of the RFS (and thus its associated computational cost), gPC-based methods require versatile sparse tensor products within their numerical schemes to alleviate, to some extent, the curse of dimensionality at the RFS level. This curse of dimensionality in the TD-gPC method therefore points to the need for a more compelling spectral method that can quantify uncertainties in the long-time response of dynamical systems at much lower computational cost.

In this work, a novel numerical method based on the spectral approach is proposed to resolve the curse-of-dimensionality issue mentioned above. The method is called flow-driven spectral chaos (FSC) because it uses a novel concept called enriched stochastic flow maps to track the evolution of a finite-dimensional RFS efficiently in time. The enriched stochastic flow map pushes not only the system's state forward in time (as would a traditional stochastic flow map) but also its first few time derivatives. The push is performed this way so that the random basis can be constructed using the system's enriched state as a germ during the simulation, guaranteeing exponential convergence to the solution. It is worth noting that this exponential convergence is achieved in the FSC method with only a small number of random basis vectors, even when the dimensionality of the probability space is considerably high. This is for two reasons: (1) the cardinality of the random basis does not depend upon the dimensionality of the probability space, and (2) the cardinality is bounded from above by M+n+1, where M is the order of the stochastic flow map and n is the order of the governing stochastic ODE. The boundedness of the random basis from above is what makes the FSC method curse-of-dimensionality free at the RFS level. For instance, for a dynamical system that is governed by a second-order stochastic ODE (n=2) and driven by a stochastic flow map of fourth order (M=4), the maximum number of random basis vectors to consider within the FSC scheme is just 7, regardless of whether the dimensionality of the probability space is as low as 1 or as high as 10,000.

With the aim of reducing the complexity of the presentation, this dissertation includes three levels of abstraction for the FSC method, namely: a specialized version of the FSC method for dealing with structural dynamical systems subjected to uncertainties (Chapter 2), a generalized version for dealing with dynamical systems governed by (nonlinear) stochastic ODEs of arbitrary order (Chapter 3), and a multi-element version for dealing with dynamical systems that exhibit discontinuities over the probability space (Chapter 4). This dissertation also includes an implementation of the FSC method to address the dynamics of large-scale stochastic structural systems more effectively (Chapter 5). The implementation is done via a modal decomposition of the spatial function space as a means of substantially reducing the number of degrees of freedom in the system, and thus saving computational runtime.
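The contrast between the combinatorial growth of a gPC basis and the fixed FSC bound M+n+1 can be checked with elementary counting. A total-degree truncation is assumed for the gPC basis below, which is a common convention rather than something specified in the abstract.

```python
from math import comb

def gpc_basis_size(d, p):
    """Number of multivariate polynomials of total degree <= p in d
    stochastic dimensions (total-degree gPC basis): C(d + p, p)."""
    return comb(d + p, p)

def fsc_basis_bound(M, n):
    """Upper bound on the FSC random-basis cardinality: M + n + 1,
    where M is the flow-map order and n the order of the stochastic ODE."""
    return M + n + 1

# gPC at polynomial order 4: the basis grows combinatorially with d
sizes = [gpc_basis_size(d, 4) for d in (1, 10, 100)]   # [5, 1001, 4598126]
# FSC for a second-order ODE (n=2) and fourth-order flow map (M=4):
bound = fsc_basis_bound(4, 2)                          # 7, independent of d
```

The numbers reproduce the abstract's claim: at most 7 basis vectors in the FSC scheme regardless of the stochastic dimension, versus millions of total-degree gPC basis functions already at d = 100.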
7

Stochastic Dynamical Systems : New Schemes for Corrections of Linearization Errors and Dynamic Systems Identification

Raveendran, Tara January 2013 (has links) (PDF)
This thesis essentially deals with the development and numerical explorations of a few improved Monte Carlo filters for nonlinear dynamical systems with a view to estimating the associated states and parameters (i.e. the hidden states appearing in the system or process model) based on the available noisy partial observations. The hidden states are characterized, subject to modelling errors, by the weak solutions of the process model, which is typically in the form of a system of stochastic ordinary differential equations (SDEs). The unknown system parameters, when included as pseudo-states within the process model, are made to evolve as Wiener processes. The observations may also be modelled by a set of measurement SDEs or, when collected at discrete time instants, their temporally discretized maps. The proposed Monte Carlo filters aim at achieving robustness (i.e. insensitivity to variations in the noise parameters) and higher accuracy in the estimates whilst retaining the important feature of applicability to large dimensional nonlinear filtering problems. The thesis begins with a brief review of the literature in Chapter 1. The first development, reported in Chapter 2, is that of a nearly exact, semi-analytical, weak and explicit linearization scheme called Girsanov Corrected Linearization Method (GCLM) for nonlinear mechanical oscillators under additive stochastic excitations. At the heart of the linearization is a temporally localized rejection sampling strategy that, combined with a resampling scheme, enables selecting from and appropriately modifying an ensemble of locally linearized trajectories whilst weakly applying the Girsanov correction (the Radon- Nikodym derivative) for the linearization errors. 
Through their numerical implementations for a few workhorse nonlinear oscillators, the proposed variants of the scheme are shown to exhibit significantly higher numerical accuracy over a much larger range of the time step size than is possible with the local drift-linearization schemes on their own. The above scheme for linearization correction is exploited and extended in Chapter 3, wherein novel variations within a particle filtering algorithm are proposed to weakly correct for the linearization or integration errors that occur while numerically propagating the process dynamics. Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated in two steps. Once the likelihood, an exponential martingale, is split into a product of two factors, correction owing to the first factor is implemented via rejection sampling in the first step. The second factor, being directly computable, is accounted for via two schemes: one employing resampling and the other a gain-weighted innovation term added to the drift field of the process SDE, thereby overcoming the excessive sample dispersion caused by resampling. The proposed strategies, employed as add-ons to existing particle filters (the bootstrap and auxiliary SIR filters in this work), are found to non-trivially improve the convergence and accuracy of the estimates and also yield reduced mean square errors of such estimates vis-à-vis those obtained through the parent filtering schemes. In Chapter 4, we explore the possibility of using the unscented transformation on Gaussian random variables, as employed within a scaled Gaussian sum stochastic filter, as a means of applying nonlinear stochastic filtering theory to higher-dimensional system identification problems. As an additional strategy to reconcile the evolving process dynamics with the observation history, the proposed filtering scheme also modifies the process model via the incorporation of gain-weighted innovation terms.
The reported numerical work on the identification of dynamic models of dimension up to 100 is indicative of the potential of the proposed filter in realizing the stated aim of successfully treating relatively larger-dimensional filtering problems. We propose in Chapter 5 an iterated gain-based particle filter that is consistent with the form of the nonlinear filtering (Kushner-Stratonovich) equation, in our attempt to treat larger-dimensional filtering problems with enhanced estimation accuracy. A crucial aspect of the proposed filtering set-up is that it retains the simplicity of implementation of the ensemble Kalman filter (EnKF). The numerical results obtained via EnKF-like simulations, with or without a reduced-rank unscented transformation, also indicate substantively improved filter convergence. The final contribution, reported in Chapter 6, is an iterative, gain-based filter bank incorporating an artificial diffusion parameter; it may be viewed as an extension of the iterative filter in Chapter 5. While the filter bank helps in exploring the phase space of the state variables better, the iterative strategy based on the artificial diffusion parameter, which is lowered to zero over successive iterations, helps improve the mixing property of the associated iterative update kernels; these aspects gather importance for highly nonlinear filtering problems, including those involving a significant initial mismatch between the process states and the measured ones. Numerical evidence of remarkably enhanced filter performance is exemplified by target tracking and structural health assessment applications. The thesis concludes in Chapter 7 by summarizing these developments and briefly outlining future research directions.
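As background for the filters discussed above, a minimal bootstrap (SIR) particle filter for a scalar linear-Gaussian toy model is sketched below. It deliberately omits the thesis's contributions (Girsanov corrections, gain-weighted innovations, iterated updates); the model and parameters are illustrative assumptions.

```python
import numpy as np

def bootstrap_pf(ys, n_particles=500, sigma_proc=0.3, sigma_obs=0.5, seed=2):
    """Minimal bootstrap (SIR) particle filter for the scalar model
        x_k = 0.9 * x_{k-1} + process noise,   y_k = x_k + observation noise.
    Returns the filtered posterior mean at each time step."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)  # crude Gaussian prior
    means = []
    for y in ys:
        # propagate each particle through the process model
        particles = 0.9 * particles + sigma_proc * rng.standard_normal(n_particles)
        # weight particles by the Gaussian observation likelihood
        w = np.exp(-0.5 * ((y - particles) / sigma_obs) ** 2)
        w /= w.sum()
        means.append(np.sum(w * particles))
        # multinomial resampling to combat weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)
```

On this linear-Gaussian model the filtered mean should track the latent state with a mean square error well below the raw observation variance, which is the baseline behavior the thesis's add-on corrections are designed to improve in nonlinear, higher-dimensional settings.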
