791 |
Hybrid numerical methods for stochastic differential equations. Chinemerem, Ikpe Dennis, 02 1900.
In this dissertation we obtain an efficient hybrid numerical method for the solution of stochastic differential equations (SDEs). Specifically, our method chooses between two numerical methods (Euler and Milstein) over a particular discretization interval depending on the value of the simulated Brownian increment driving the stochastic process. This is thus a new adaptive method in the numerical analysis of stochastic differential equations. Mauthner (1998) and Hofmann et al. (2000) have developed general frameworks for adaptive schemes for the numerical solution of SDEs [30, 21]. The former presents a Runge-Kutta-type method based on stepsize control, while the latter considers a one-step adaptive scheme where the method is likewise adapted through stepsize control. Lamba, Mattingly and Stuart [28] considered an adaptive Euler scheme based on controlling the drift component of the time-step method. Here we seek to develop a hybrid algorithm that switches between the Euler and Milstein schemes at each time step over the entire discretization interval, depending on the outcome of the simulated Brownian motion increment. The bias of the hybrid scheme, as well as its order of convergence, is studied. We also carry out a comparative analysis of the performance of the hybrid scheme relative to the basic Euler and Milstein schemes. / Mathematical Sciences / M.Sc. (Applied Mathematics)
|
792 |
The asymptotic stability of stochastic kernel operators. Brown, Thomas John, 06 1900.
A stochastic operator is a positive linear contraction, P : L^1 → L^1, such that ||Pf||_1 = ||f||_1 for f ≥ 0. It is called asymptotically stable if the iterates P^n f of each density f converge in norm to a fixed density. Pf(x) = ∫ K(x, y) f(y) dy, where K(·, y) is a density, defines a stochastic kernel operator. A general probabilistic/deterministic model for biological systems is considered. This leads to the LMT operator

Pf(x) = -∂/∂x ∫₀^∞ H(Q(λ(x)) - Q(y)) f(y) dy,

where -H′(x) = h(x) is a density. Several particular examples of cell cycle models are examined. An operator overlaps supports if, for all densities f and g, P^n f ∧ P^n g ≠ 0 for some n. If the operator is partially kernel, has a positive invariant density and overlaps supports, it is asymptotically stable. It is found that if h(x) > 0 for x ≥ x₀ ≥ 0 and

∫_{x₀}^∞ x^α h(x) dx < liminf_{x→∞} (Q(λ(x))^α - Q(x)^α) for some α ∈ (0, 1],

then P is asymptotically stable, and an opposite condition implies P is sweeping. Many known results for cell cycle models follow from this. / Mathematical Science / M. Sc. (Mathematics)
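Asymptotic stability can be observed numerically by discretising a stochastic kernel operator and iterating it. The kernel below (a Gaussian centred at y/2) is an invented illustration, not one of the thesis's cell cycle models; the point is that two very different initial densities are driven to the same fixed density.

```python
import numpy as np

# Discretise Pf(x) = \int K(x, y) f(y) dy on a grid over [0, L].
# Illustrative kernel: K(x, y) ~ exp(-(x - y/2)^2 / 2), renormalised so
# that each column is a density in x (so the operator is stochastic).
L, n = 10.0, 400
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
K = np.exp(-0.5 * (x[:, None] - 0.5 * x[None, :]) ** 2)
K /= K.sum(axis=0, keepdims=True) * dx    # each column integrates to 1

def apply_P(f):
    return K @ f * dx                     # one application of P

# Two very different initial densities...
f = np.exp(-x)
f /= f.sum() * dx
g = np.zeros(n)
g[-40:] = 1.0
g /= g.sum() * dx
for _ in range(50):
    f, g = apply_P(f), apply_P(g)

# ...end up essentially identical: the hallmark of asymptotic stability.
gap = np.sum(np.abs(f - g)) * dx
print(gap)
```

Because each column of the discretised kernel integrates to one, every application of P preserves total mass, mirroring ||Pf||_1 = ||f||_1 for f ≥ 0.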
|
793 |
Energy Aware Management of 5G Networks. Liu, Chang, January 1900.
Doctor of Philosophy / Department of Electrical and Computer Engineering / Balasubramaniam Natarajan / The number of wireless devices is predicted to skyrocket from about 5 billion in 2015 to 25 billion by 2020. Therefore, traffic volume demand is envisioned to explode in the very near future. The proposed fifth generation (5G) of mobile networks is expected to be a mixture of network components with different sizes, transmit powers, back-haul connections and radio access technologies. While there are many interesting problems within the 5G framework, we address the challenges of energy-related management in heterogeneous 5G networks. Based on the 5G architecture, in this dissertation, we present some fundamental methodologies to analyze and improve the energy efficiency of 5G network components using mathematical tools from optimization, control theory and stochastic geometry.
Specifically, the main contributions of this research include:
• We design power-saving modes in small cells to maximize energy efficiency. We first derive performance metrics for heterogeneous cellular networks with sleep modes based on stochastic geometry. Then we quantify the energy efficiency and maximize it with quality-of-service constraint based on an analytical model. We also develop a simple sleep strategy to further improve the energy efficiency according to traffic conditions.
• We conduct a techno-economic analysis of heterogeneous cellular networks powered by both on-grid electricity and renewable energy. We propose a scheme to minimize the electricity cost based on a real-time pricing model.
• We provide a framework to uncover desirable system design parameters that offer the best gains in terms of ergodic capacity and average achievable throughput for device-to-device underlay cellular networks. We also suggest a two-phase scheme to optimize the ergodic capacity while minimizing the total power consumption.
• We investigate the modeling and analysis of simultaneous information and energy transfer in Internet of things and evaluate both transmission outage probability and power outage probability. Then we try to balance the trade-off between the outage performances by careful design of the power splitting ratio.
This research provides valuable insights related to the trade-offs between energy-conservation and system performance in 5G networks. Theoretical and simulation results help verify the performance of the proposed algorithms.
|
794 |
Model-based Assessment of Heat Pump Flexibility. Wolf, Tobias, January 2016.
Today's energy production is changing from scheduled to intermittent generation due to the increasing injection of energy from renewable sources. This shift requires flexibility in both energy generation and demand. Electric heat pumps combined with thermal storages were found to have a large potential to provide demand flexibility, which is analysed in this work. A three-fold method is set up to generate thermal load profiles, to simulate heat pump pools and to assess heat pump flexibility. The thermal profile generation, based on a combination of physical and behavioural models, is successfully validated against measurement data. A randomised system sizing procedure was implemented for the simulation of heat pump pools. The parameter randomisation yields correct seasonal performance factors, full load hours and average operation cycles per day compared to 87 monitored systems. The flexibility assessment analyses the electric load deviation of a representative heat pump pool in response to five different on/off signals. The flexibility is induced by the capacity of the thermal storages and analysed by four parameters. Generally, on signals are more powerful than off signals. A generic assessment by ambient temperature shows that flexibility is highest on heating days when the additional space-heating storage is activated: superheating the storage to its maximal temperature provides a flexible energy of more than 400 kWh per 100 heat pumps in a temperature range between -10 and +13 °C.
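The storage-induced flexibility mentioned above amounts to the sensible heat that can be banked by superheating a water tank. The sketch below uses invented tank sizes and temperature limits (not the thesis's monitored systems) to show that a pool of 100 heat pumps with modest buffers indeed lands in the several-hundred-kWh range.

```python
# Flexible energy of a hot-water buffer superheated from its current
# temperature to its maximum: E = m * c_p * (T_max - T_cur).
# Tank volume and temperatures below are illustrative assumptions.
C_P = 4.186e3                            # J/(kg K), specific heat of water

def flexible_energy_kwh(volume_l, t_cur, t_max):
    mass = volume_l                      # 1 litre of water is about 1 kg
    joules = mass * C_P * (t_max - t_cur)
    return joules / 3.6e6                # J -> kWh

# 100 heat pumps, each with a hypothetical 300 l space-heating buffer
# superheated from 50 degC to 65 degC:
pool = 100 * flexible_energy_kwh(300, 50.0, 65.0)
print(pool)
```

With these assumed parameters the pool stores roughly 5 kWh per system, i.e. on the order of the ">400 kWh per 100 heat pumps" figure reported in the abstract.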
|
795 |
Two variable and linear temporal logic in model checking and games. Lenhardt, Rastislav, January 2013.
Model checking linear-time properties expressed in first-order logic has non-elementary complexity, and thus various restricted logical languages are employed. In the first part of this dissertation we consider two such restricted specification logics on words: linear temporal logic (LTL) and two-variable first-order logic (FO2). LTL is more expressive but FO2 can be more succinct, and hence it is not clear which should be easier to verify. We take a comprehensive look at the issue, giving a comparison of verification problems for FO2, LTL, and various sublogics thereof across a wide range of models. In particular, we look at unary temporal logic (UTL), a subset of LTL that is expressively equivalent to FO2. We give three logic-to-automata translations which can be used to give upper bounds for FO2 and UTL and various sublogics. We apply these to get new bounds for model checking both non-deterministic systems (hierarchical and recursive state machines, games) and for probabilistic systems (Markov chains, recursive Markov chains, and Markov decision processes). Our results give a unified approach to understanding the behaviour of FO2, LTL, and their sublogics. We further consider the problem of computing maximal probabilities for interval Markov chains (and recursive interval Markov chains, stochastic context-free grammars) to satisfy LTL specifications. Using again our automata constructions we describe an expectation-maximisation algorithm to solve this problem in practice. Our algorithm can be seen as a variant of the classical Baum-Welch algorithm on hidden Markov models. We also introduce a publicly available on-line tool Tulip to perform such analysis. Finally, we investigate the extension of our techniques from words to trees. We show that the parallel between the complexity of FO2 satisfiability on general and on restricted structures breaks down as we move from words to trees, since trees allow one to encode alternating exponential time computation.
|
796 |
On Stochastic Volatility Models as an Alternative to GARCH Type Models. Nilsson, Oscar, January 2016.
For the purpose of modelling and prediction of volatility, the family of Stochastic Volatility (SV) models is an alternative to the extensively used ARCH-type models. SV models differ in their assumption that volatility itself follows a latent stochastic process. This reformulation of the volatility process, however, makes model estimation distinctly more complicated for SV-type models, which in this paper is conducted through Markov chain Monte Carlo methods. The aim of this paper is to assess the standard SV model and the SV model assuming t-distributed errors, and to compare the results with their corresponding GARCH(1,1) counterparts. The data examined cover daily closing prices of the Swedish stock index OMXS30 for the period 2010-01-05 to 2016-03-02. The evaluation shows that both SV models outperform the two GARCH(1,1) models, with the SV model assuming t-distributed errors giving the smallest forecast errors.
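The standard SV model referred to above treats log-variance as a latent AR(1) process. A minimal simulation sketch follows; the parameter values are illustrative defaults, not estimates from the OMXS30 data.

```python
import numpy as np

def simulate_sv(n, mu=-9.0, phi=0.97, sigma_eta=0.15, seed=1):
    """Simulate the standard SV model
        y_t = exp(h_t / 2) * eps_t,            eps_t ~ N(0, 1)
        h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t,
    where h_t is the latent log-variance.  Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
    y = np.exp(h / 2) * rng.normal(size=n)     # returns given the latent path
    return y, h

# Roughly six years of daily returns, as in the OMXS30 sample length.
y, h = simulate_sv(1500)
print(y.std(), np.exp(h / 2).mean())
```

In contrast to GARCH(1,1), where today's variance is a deterministic function of past data, h_t here carries its own noise term, which is what makes the likelihood intractable and motivates the MCMC estimation used in the paper.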
|
797 |
Error in the invariant measure of numerical discretization schemes for canonical sampling of molecular dynamics. Matthews, Charles, January 2013.
Molecular dynamics (MD) computations aim to simulate materials at the atomic level by approximating molecular interactions classically, relying on the Born-Oppenheimer approximation and semi-empirical potential energy functions as an alternative to solving the difficult time-dependent Schrödinger equation. An approximate solution is obtained by discretization in time, with an appropriate algorithm used to advance the state of the system between successive timesteps. Modern MD simulations treat complex systems with as many as a trillion individual atoms in three spatial dimensions. Many applications use MD to compute ensemble averages of molecular systems at constant temperature. Langevin dynamics approximates the effects of weakly coupling an external energy reservoir to a system of interest by adding the stochastic Ornstein-Uhlenbeck process to the system momenta, where the resulting trajectories are ergodic with respect to the canonical (Boltzmann-Gibbs) distribution. By solving the resulting stochastic differential equations (SDEs), we can compute trajectories that sample the accessible states of a system at a constant temperature by evolving the dynamics in time. The complexity of the classical potential energy function requires the use of efficient discretization schemes to evolve the dynamics. In this thesis we provide a systematic evaluation of splitting-based methods for the integration of Langevin dynamics. We focus on the weak properties of methods for configurational sampling in MD, given as the accuracy of averages computed via numerical discretization. Our emphasis is on the application of discretization algorithms to high performance computing (HPC) simulations of a wide variety of phenomena, where configurational sampling is the goal.
Our first contribution is to give a framework for the analysis of stochastic splitting methods in the spirit of backward error analysis, which provides, in certain cases, explicit formulae required to correct the errors in observed averages. A second contribution of this thesis is the investigation of the performance of schemes in the overdamped limit of Langevin dynamics (Brownian or Smoluchowski dynamics), showing the inconsistency of some numerical schemes in this limit. A new method is given that is second-order accurate (in law) but requires only one force evaluation per timestep. Finally we compare the performance of our derived schemes against those in common use in MD codes, by comparing the observed errors introduced by each algorithm when sampling a solvated alanine dipeptide molecule, based on our implementation of the schemes in state-of-the-art molecular simulation software. One scheme is found to give exceptional results for the computed averages of functions purely of position.
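One well-known splitting of Langevin dynamics of the kind studied here is the BAOAB ordering (half kick, half drift, exact Ornstein-Uhlenbeck step, half drift, half kick). The sketch below applies it to a 1D harmonic test potential; it is offered as an example of a splitting scheme, not necessarily the exact scheme singled out in the thesis, and all parameters are illustrative.

```python
import numpy as np

def baoab_step(q, p, force, dt, gamma, kT, m, rng):
    """One step of the BAOAB splitting of Langevin dynamics:
    B (half kick), A (half drift), O (exact Ornstein-Uhlenbeck on p),
    then A and B again.  One-dimensional sketch."""
    p += 0.5 * dt * force(q)                               # B
    q += 0.5 * dt * p / m                                  # A
    c = np.exp(-gamma * dt)                                # O: exact OU update
    p = c * p + np.sqrt((1.0 - c * c) * m * kT) * rng.normal()
    q += 0.5 * dt * p / m                                  # A
    p += 0.5 * dt * force(q)                               # B
    return q, p

# Sample U(q) = q^2 / 2 at kT = 1; the exact configurational average
# is <q^2> = kT = 1, so the long-run estimate should land near 1.
rng = np.random.default_rng(0)
force = lambda x: -x
q, p, acc = 0.0, 0.0, 0.0
nsteps, burn = 200_000, 10_000
for i in range(nsteps):
    q, p = baoab_step(q, p, force, dt=0.2, gamma=1.0, kT=1.0, m=1.0, rng=rng)
    if i >= burn:
        acc += q * q
print(acc / (nsteps - burn))
```

Comparing the estimated ⟨q²⟩ against the known canonical value is exactly the kind of "error in the invariant measure" measurement the thesis systematises via backward error analysis.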
|
798 |
COUPLING STOCHASTIC AND DETERMINISTIC HYDROLOGIC MODELS FOR DECISION-MAKING. Mills, William Carlisle, 06 1900.
Many planning decisions related to the land phase of the hydrologic cycle involve uncertainty due to stochasticity of rainfall inputs and uncertainty in the state and knowledge of hydrologic processes. Consideration of this uncertainty in planning requires quantification in the form of probability distributions. The needed probability distributions must, in many cases, be obtained by transforming distributions of rainfall input and hydrologic state through deterministic models of hydrologic processes.

Probability generating functions are used to derive a recursive technique that provides the necessary probability transformation for situations where the hydrologic output of interest is the cumulative effect of a random number of stochastic inputs. The derived recursive technique is observed to be quite accurate from a comparison of probability distributions obtained independently by the recursive technique and by an exact analytic method for a simple problem that can be solved analytically.

The assumption of Poisson occurrence of rainfall events, which is inherent in the derivation of the recursive technique, is examined and found reasonable for practical application. Application of the derived technique is demonstrated with two important hydrology-related problems. It is first demonstrated for computing probability distributions of annual direct runoff from a watershed, using the USDA Soil Conservation Service (SCS) direct runoff model and stochastic models for rainfall event depth and watershed state. The technique is also demonstrated for obtaining probability distributions of annual sediment yield. For this demonstration, the deterministic transform model consists of a parametric event-based sediment yield model and the SCS models for direct runoff volume and peak flow rate. The stochastic rainfall model consists of a marginal Weibull distribution for rainfall event duration and a conditional log-normal distribution for rainfall event depth, given duration. The stochastic state model is the same as used for the direct runoff application.

Probability distributions obtained with the recursive technique for both the direct runoff and sediment yield demonstration examples appear reasonable when compared to available data. It is therefore concluded that the recursive technique, derived from probability generating functions, is a feasible transform method that can be useful for coupling stochastic models of rainfall input and state to deterministic models of hydrologic processes, to obtain probability distributions of outputs that are cumulative effects of random numbers of stochastic inputs.
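A well-known recursion in the same spirit, also obtained from probability generating functions, is the Panjer recursion for a compound Poisson sum of non-negative integer inputs. The sketch below is a discrete analogue of the transform described above, not the dissertation's own technique, and the rainfall numbers are invented for illustration.

```python
import numpy as np

def compound_poisson_pmf(lam, severity, smax):
    """Distribution of S = X_1 + ... + X_N with N ~ Poisson(lam) and i.i.d.
    non-negative integer inputs X_i with pmf `severity`.  Uses the
    Panjer-type recursion derived from probability generating functions:
        g(0) = exp(-lam * (1 - f(0))),
        g(s) = (lam / s) * sum_{j=1}^{s} j * f(j) * g(s - j)."""
    f = np.asarray(severity, dtype=float)
    g = np.zeros(smax + 1)
    g[0] = np.exp(-lam * (1.0 - f[0]))
    for s in range(1, smax + 1):
        jmax = min(s, len(f) - 1)
        j = np.arange(1, jmax + 1)
        g[s] = (lam / s) * np.sum(j * f[j] * g[s - j])
    return g

# Example: a Poisson(2) number of rainfall events per year, each depositing
# 1, 2 or 3 depth units with probabilities 0.5, 0.3, 0.2.
g = compound_poisson_pmf(2.0, [0.0, 0.5, 0.3, 0.2], smax=40)
print(g.sum())            # total mass (tail truncated at 40)
print(np.arange(41) @ g)  # mean; analytically lam * E[X] = 2 * 1.7 = 3.4
```

The recursion avoids enumerating the random number of events explicitly: each output probability is built from previously computed ones, exactly the computational advantage the abstract claims for the generating-function approach.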
|
799 |
Convergent algorithms in simulation optimization. Hu, Liujia, 27 May 2016.
It is frequently the case that deterministic optimization models could be made more practical by explicitly incorporating uncertainty. The resulting stochastic optimization problems are in general more difficult to solve than their deterministic counterparts, because the objective function cannot be evaluated exactly and/or because there is no explicit relation between the objective function and the corresponding decision variables. This thesis develops random search algorithms for solving optimization problems with continuous decision variables when the objective function values can be estimated with some noise via simulation. Our algorithms will maintain a set of sampled solutions, and use simulation results at these solutions to guide the search for better solutions. In the first part of the thesis, we propose an Adaptive Search with Resampling and Discarding (ASRD) approach for solving continuous stochastic optimization problems. Our ASRD approach is a framework for designing provably convergent algorithms that are adaptive both in seeking new solutions and in keeping or discarding already sampled solutions. The framework is an improvement over the Adaptive Search with Resampling (ASR) method of Andradottir and Prudius in that it spends less effort on inferior solutions (the ASR method does not discard already sampled solutions). We present conditions under which the ASRD method is convergent almost surely and carry out numerical studies aimed at comparing the algorithms. Moreover, we show that whether it is beneficial to resample or not depends on the problem, and analyze when resampling is desirable. Our numerical results show that the ASRD approach makes substantial improvements on ASR, especially for difficult problems with large numbers of local optima. In traditional simulation optimization problems, noise is only involved in the objective functions. However, many real world problems involve stochastic constraints. 
Such problems are more difficult to solve because of the added uncertainty about feasibility. The second part of the thesis presents an Adaptive Search with Discarding and Penalization (ASDP) method for solving continuous simulation optimization problems involving stochastic constraints. Rather than addressing feasibility separately, ASDP utilizes the penalty function method from deterministic optimization to convert the original problem into a series of simulation optimization problems without stochastic constraints. We present conditions under which the ASDP algorithm converges almost surely from inside the feasible region, and under which it converges to the optimal solution but without feasibility guarantee. We also conduct numerical studies aimed at assessing the efficiency and tradeoff under the two different convergence modes. Finally, in the third part of the thesis, we propose a random search method named Gaussian Search with Resampling and Discarding (GSRD) for solving simulation optimization problems with continuous decision spaces. The method combines the ASRD framework with a sampling distribution based on a Gaussian process that not only utilizes the current best estimate of the optimal solution but also learns from past sampled solutions and their objective function observations. We prove that our GSRD algorithm converges almost surely, and carry out numerical studies aimed at studying the effects of utilizing the Gaussian sampling strategy. Our numerical results show that the GSRD framework performs well when the underlying objective function is multi-modal. However, it takes much longer to sample solutions, especially in higher dimensions.
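The core loop shared by the algorithms above (sample new solutions, resample promising ones to sharpen their noisy estimates, discard inferior ones) can be sketched as follows. This is a much-simplified illustration in the spirit of ASRD, not the thesis's algorithm; the pool size, the resample-the-best rule, and the test function are all invented.

```python
import random

def noisy_search(obj, lo, hi, iters=2000, pool=20, seed=0):
    """Illustrative random search with resampling and discarding for
    minimising a 1D objective observed with noise.  `obj(x)` returns a
    noisy estimate of the objective value at x."""
    rng = random.Random(seed)
    # each entry: [x, sum of observations, number of observations]
    sols = []
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        sols.append([x, obj(x), 1])
        # resample the current best to sharpen its estimate
        best = min(sols, key=lambda s: s[1] / s[2])
        best[1] += obj(best[0])
        best[2] += 1
        # discard the worst solutions once the pool is full
        if len(sols) > pool:
            sols.sort(key=lambda s: s[1] / s[2])
            del sols[pool:]
    best = min(sols, key=lambda s: s[1] / s[2])
    return best[0]

# Minimise f(x) = (x - 2)^2 observed with additive N(0, 0.5) noise.
noise = random.Random(42)
xstar = noisy_search(lambda x: (x - 2.0) ** 2 + noise.gauss(0, 0.5), -10, 10)
print(xstar)
```

Discarding keeps effort away from inferior solutions, while averaging repeated observations at the incumbent counters the simulation noise; the thesis's contribution is proving when such a scheme converges almost surely.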
|
800 |
Non-linear dynamic modelling for panel data in the social sciences. Ranganathan, Shyam, January 2015.
Non-linearities and dynamic interactions between state variables are characteristic of complex social systems and processes. In this thesis, we present a new methodology to model these non-linearities and interactions from the large panel datasets available for some of these systems. We build macro-level statistical models that can verify theoretical predictions, and use polynomial basis functions so that each term in the model represents a specific mechanism. This bridges the existing gap between macro-level theories supported by statistical models and micro-level mechanistic models supported by behavioural evidence. We apply this methodology to two important problems in the social sciences: the demographic transition and the transition to democracy. The demographic transition is an important problem for economists and development scientists. Research has shown that economic growth reduces mortality and fertility rates, which in turn results in faster economic growth. We build a non-linear dynamic model and show how this data-driven model extends existing mechanistic models. We also show policy applications for our models, especially in setting development targets for the Millennium Development Goals or the Sustainable Development Goals. The transition to democracy is an important problem for political scientists and sociologists. Research has shown that economic growth and overall human development transform socio-cultural values and drive political institutions towards democracy. We model the interactions between the state variables and find that changes in institutional freedoms precede changes in socio-cultural values. We show applications of our models in studying development traps. This thesis comprises a comprehensive summary and seven papers. Papers I and II describe two similar but complementary methodologies to build non-linear dynamic models from panel datasets.
Papers III and IV deal with the demographic transition and policy applications. Papers V and VI describe the transition to democracy and applications. Paper VII describes an application to sustainable development.
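The idea of fitting a polynomial-basis dynamic model to panel data, with each term standing for a mechanism, can be sketched on simulated data. Everything below (the logistic-growth generating process, panel dimensions, noise level) is an invented illustration, not one of the thesis's datasets.

```python
import numpy as np

# Fit dx/dt ~ b0 + b1*x + b2*x^2 from a simulated panel: 50 units
# observed over 30 periods, generated by noisy logistic growth
# dx = 0.2*x*(1 - x) + eps, i.e. true coefficients (0, 0.2, -0.2).
rng = np.random.default_rng(3)
n_units, T = 50, 30
x = np.empty((n_units, T))
x[:, 0] = rng.uniform(0.05, 0.95, n_units)
for t in range(T - 1):
    x[:, t + 1] = x[:, t] + 0.2 * x[:, t] * (1 - x[:, t]) \
                  + rng.normal(0, 0.01, n_units)

# Stack (x_t, dx_t) pairs across all units and periods, then regress
# the first differences on the polynomial basis [1, x, x^2].
xs = x[:, :-1].ravel()
dx = np.diff(x, axis=1).ravel()
A = np.vstack([np.ones_like(xs), xs, xs**2]).T
coef, *_ = np.linalg.lstsq(A, dx, rcond=None)
print(coef)   # roughly [0, 0.2, -0.2]
```

Each recovered coefficient maps back to a mechanism (here, growth proportional to x and saturation proportional to x²), which is the macro-to-micro bridge the thesis argues for.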
|