81

Un métamodèle de calcul en temps continu pour les systèmes d'aide à la décision appliqués à la planification financière / A continuous-time computation metamodel for decision-support systems applied to financial planning

Hélard, Davy 01 December 2015 (has links)
In the scope of Business Intelligence, physico-financial planning aims to support multiple actors of a community, coming from different domains, in converging their concerns toward a shared objective. A major difficulty in modeling such planning is that each actor states his or her concerns on a different time scale. Integrating them into a unique model that represents a common state of reality becomes costly and awkward to manage when the construction of these models is based on the discrete modeling techniques used by current business planning tools. This doctorate thesis proposes a novel solution to these issues: a metamodel based on a continuous-time calculus. The developed approach allows multiple actors to integrate the business logics of their planning domains into a shared model and to observe it on different time scales. The advantages of our continuous-time solution over discrete-time solutions (representative of the state of the art) are presented through a case study. The metamodel was implemented in a real industrial setting at MGDIS (a company specialized in business planning for local governments), under a CIFRE agreement, following an innovative service-oriented architecture: this architecture, based on a style conceived in this thesis, separates modeling from evaluation in order to maximize the parallelization of model evaluation for large volumes of data. The solution was designed as a real-scale prototype for real-scale physico-financial planning problems. Besides the case study, it was validated against real requirements by MGDIS experts in physico-financial planning modeling, who gave a positive assessment of its applicability.
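The central idea of this abstract, a single continuous-time model read on different time scales, can be sketched in a few lines: model a financial flow as a rate function of continuous time and let each actor integrate it over whatever windows suit their domain. This is only an illustrative sketch under our own assumptions (a linear `rate` function, trapezoidal integration); it is not MGDIS's implementation:

```python
def integrate(rate, t0, t1, n=1000):
    """Trapezoidal integral of a continuous-time flow rate over [t0, t1].

    `rate` is any function of continuous time; actors on different time
    scales query the same model with different windows [t0, t1].
    """
    h = (t1 - t0) / n
    s = 0.5 * (rate(t0) + rate(t1))
    s += sum(rate(t0 + i * h) for i in range(1, n))
    return s * h

# Hypothetical flow: spending rate growing linearly over the year.
spend_rate = lambda t: 100.0 + 10.0 * t

yearly_view = integrate(spend_rate, 0.0, 1.0)          # one actor's view
monthly_view = sum(integrate(spend_rate, m / 12.0, (m + 1) / 12.0)
                   for m in range(12))                 # another actor's view
```

By construction the monthly and yearly readings of the same model agree, which is exactly the cross-scale consistency a continuous-time metamodel provides and a fixed discrete bucketing does not.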
82

Efficient Path and Parameter Inference for Markov Jump Processes

Boqian Zhang (6563222) 15 May 2019 (has links)
Markov jump processes are continuous-time stochastic processes widely used in a variety of applied disciplines. Inference typically proceeds via Markov chain Monte Carlo (MCMC), the state of the art being a uniformization-based auxiliary variable Gibbs sampler. This was designed for situations where the process parameters are known, and Bayesian inference over unknown parameters is typically carried out by incorporating it into a larger Gibbs sampler. This strategy of sampling parameters given the path, and the path given parameters, can result in poor Markov chain mixing.

In this thesis, we focus on the problem of path and parameter inference for Markov jump processes.

In the first part of the thesis, a simple and efficient MCMC algorithm is proposed to address the problem of path and parameter inference for Markov jump processes. Our scheme brings Metropolis-Hastings approaches for discrete-time hidden Markov models to the continuous-time setting, resulting in a complete and clean recipe for parameter and path inference in Markov jump processes. In our experiments, we demonstrate superior performance over Gibbs sampling, a more naive Metropolis-Hastings algorithm we propose, as well as another popular approach, particle Markov chain Monte Carlo. We also show our sampler inherits geometric mixing from an 'ideal' sampler that is computationally much more expensive.

In the second part of the thesis, a novel collapsed variational inference algorithm is proposed. Our variational inference algorithm leverages ideas from discrete-time Markov chains, and exploits a connection between Markov jump processes and discrete-time Markov chains through uniformization. Our algorithm proceeds by marginalizing out the parameters of the Markov jump process, and then approximating the distribution over the trajectory with a factored distribution over segments of a piecewise-constant function. Unlike MCMC schemes that marginalize out transition times of a piecewise-constant process, our scheme optimizes the discretization of time, resulting in significant computational savings. We apply our ideas to synthetic data as well as a dataset of check-in recordings, where we demonstrate superior performance over state-of-the-art MCMC methods.
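The uniformization construction that links a Markov jump process to a discrete-time chain, on which both parts of the thesis rely, can be sketched as follows. This is a generic forward sampler, not the thesis's inference algorithm; the two-state rate matrix and the 1.5x dominating-rate safety factor are our own illustrative choices:

```python
import math
import random

def sample_mjp_path_uniformization(Q, x0, T, rng=random.Random(0)):
    """Sample a Markov jump process path on [0, T] via uniformization.

    Q: rate matrix (list of lists), x0: initial state, T: time horizon.
    Returns a list of (time, state) jump records starting with (0.0, x0).
    """
    n = len(Q)
    # The uniformization rate must dominate every exit rate -Q[i][i].
    Omega = 1.5 * max(-Q[i][i] for i in range(n))
    # Transition matrix of the subordinated discrete-time chain:
    # B = I + Q / Omega (rows sum to 1; diagonal entries are self-loops).
    B = [[(1.0 if i == j else 0.0) + Q[i][j] / Omega for j in range(n)]
         for i in range(n)]
    path = [(0.0, x0)]
    t, x = 0.0, x0
    while True:
        t += rng.expovariate(Omega)      # candidate times form a Poisson(Omega) process
        if t > T:
            break
        u, acc = rng.random(), 0.0
        for j in range(n):               # draw the next state from row B[x]
            acc += B[x][j]
            if u <= acc:
                if j != x:               # self-loops are "virtual" jumps, not recorded
                    path.append((t, j))
                x = j
                break
    return path
```

The auxiliary-variable samplers discussed in the abstract work by alternating between such candidate-time grids and the discrete-time chain defined by B.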
83

Continuous Time and Discrete Time Fractional Order Adaptive Control for a Class of Nonlinear Systems

Aburakhis, Mohamed Khalifa I, Dr 26 September 2019 (has links)
No description available.
84

Performance analysis of the general packet radio service

Lindemann, Christoph, Thümmler, Axel 17 December 2018 (has links)
This paper presents an efficient and accurate analytical model for the radio interface of the general packet radio service (GPRS) in a GSM network. The model is utilized for investigating how many packet data channels should be allocated for GPRS under a given amount of traffic in order to guarantee appropriate quality of service. The presented model constitutes a continuous-time Markov chain. The Markov model represents the sharing of radio channels by circuit switched GSM connections and packet switched GPRS sessions under a dynamic channel allocation scheme. In contrast to previous work, the Markov model explicitly represents the mobility of users by taking into account arrivals of new GSM and GPRS users as well as handovers from neighboring cells. Furthermore, we take into account TCP flow control for the GPRS data packets. To validate the simplifications necessary for making the Markov model amenable to numerical solution, we provide a comparison of the results of the Markov model with a detailed simulator on the network level.
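A minimal building block of this kind of loss-system analysis is the Erlang-B blocking probability for c fully shared channels. The paper's Markov model is far richer (dynamic GSM/GPRS channel sharing, handovers from neighboring cells, TCP flow control), so treat this only as an illustrative sketch of the simplest special case:

```python
def erlang_b_blocking(offered_load, channels):
    """Blocking probability of an M/M/c/c loss system (Erlang B formula),
    computed with the standard numerically stable recursion
    B(A, c) = A * B(A, c-1) / (c + A * B(A, c-1)), B(A, 0) = 1."""
    b = 1.0
    for c in range(1, channels + 1):
        b = offered_load * b / (c + offered_load * b)
    return b
```

For example, two channels offered two Erlangs of traffic block 40% of arrivals; dimensioning questions like "how many packet data channels for a given quality of service" invert this kind of relation, with the paper's chain replacing the simple M/M/c/c model.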
85

Contributions to the theory of dynamic risk measures

Schlotter, Ruben 27 May 2021 (has links)
This thesis aims to fill the gap between static and dynamic risk measures. It presents a theory of dynamic risk measures based directly on classical, static risk measures, which allows a direct connection between the static, discrete-time, and continuous-time settings. Unlike the existing literature, this approach leads to an interpretable counterpart of the well-understood static risk measures. As a key concept, the notion of divisible families of risk measures is introduced. These families of risk measures admit a dynamic version in continuous time. Moreover, divisibility allows the definition of the risk generator, a nonlinear extension of the classical infinitesimal generator. Based on this extension, we derive a nonlinear version of Dynkin's lemma as well as risk-averse Hamilton–Jacobi–Bellman equations.
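As a concrete instance of the classical static risk measures the thesis starts from, an empirical Conditional Value-at-Risk can be computed in a few lines. This simple sorting-based estimator is our own illustrative choice, not the thesis's construction:

```python
def cvar(losses, alpha):
    """Empirical Conditional Value-at-Risk at level alpha: the average of
    the worst (1 - alpha) share of losses. CVaR is a standard coherent
    static risk measure, of the kind the thesis lifts to continuous time."""
    xs = sorted(losses, reverse=True)            # largest losses first
    k = max(1, int(round((1 - alpha) * len(xs))))  # size of the tail
    return sum(xs[:k]) / k
```

A divisible family in the thesis's sense would, roughly, supply one such measure per time step in a way that composes consistently as the step shrinks; the static measure above is the object being made dynamic.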
86

General methods and properties for evaluation of continuum limits of discrete time quantum walks in one and two dimensions

Manighalam, Michael 07 June 2021 (has links)
Models of quantum walks that admit continuous-time and continuous-spacetime limits have recently led to quantum simulation schemes for simulating fermions in relativistic and non-relativistic regimes (Di Molfetta and Arrighi, 2020). This work continues the study of the relationships between discrete-time quantum walks (DTQW) and their ostensible continuum counterparts by developing a more general framework than that of (Di Molfetta and Arrighi, 2020) for evaluating the continuous-time limit of these discrete quantum systems. Under this framework, we prove two constructive theorems concerning which internal discrete transitions ("coins") admit nontrivial continuum limits in 1D+1. We additionally prove that the continuous-space limit of the continuous-time limit of the DTQW can only yield massless states which obey the Dirac equation. We also demonstrate that for general coins the continuous-time limit of the DTQW can be identified with the canonical continuous-time quantum walk (CTQW) when the coin is allowed to transition through the continuum limit process. Finally, we introduce the plastic quantum walk, a quantum walk which admits both continuous-time and continuous-spacetime limits, and, as a novel result, we use our 1D+1 results to obtain necessary and sufficient conditions on which DTQWs admit plasticity in 2D+1, showing the resulting Hamiltonians. We consider coin operators as general four-parameter unitary matrices, with parameters that are functions of the lattice step size 𝜖. This dependence on 𝜖 encapsulates all functions of 𝜖 for which a Taylor series expansion in 𝜖 is well defined, making our results very general.
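A standard 1D DTQW of the kind whose continuum limits are studied here can be simulated directly. The sketch below fixes a Hadamard coin and a symmetric initial state, both our own illustrative choices rather than the general four-parameter 𝜖-dependent coins of the thesis:

```python
import math

def dtqw_1d(steps, coin=None):
    """Simulate a discrete-time quantum walk on the integer line.

    State: dict mapping position -> (left, right) complex amplitudes.
    Each step applies the coin to the internal space, then shifts the
    left component to x-1 and the right component to x+1.
    Returns the position probability distribution after `steps` steps.
    """
    h = 1.0 / math.sqrt(2.0)
    coin = coin or ((h, h), (h, -h))                    # Hadamard coin (default)
    state = {0: (1 / math.sqrt(2), 1j / math.sqrt(2))}  # symmetric initial state
    for _ in range(steps):
        new = {}
        for x, (l, r) in state.items():
            nl = coin[0][0] * l + coin[0][1] * r        # coin acts on (l, r)
            nr = coin[1][0] * l + coin[1][1] * r
            al, ar = new.get(x - 1, (0.0, 0.0))         # shift left component
            new[x - 1] = (al + nl, ar)
            al, ar = new.get(x + 1, (0.0, 0.0))         # shift right component
            new[x + 1] = (al, ar + nr)
        state = new
    return {x: abs(l) ** 2 + abs(r) ** 2 for x, (l, r) in state.items()}
```

The continuum-limit analysis in the thesis asks how such a coin must depend on the lattice step 𝜖 for this evolution to converge to a continuous-time (or continuous-spacetime) dynamics as 𝜖 goes to zero.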
87

A Comparison of Computational Efficiencies of Stochastic Algorithms in Terms of Two Infection Models

Banks, H. Thomas, Hu, Shuhua, Joyner, Michele, Broido, Anna, Canter, Brandi, Gayvert, Kaitlyn, Link, Kathryn 01 July 2012 (has links)
In this paper, we investigate three particular algorithms: a stochastic simulation algorithm (SSA), and explicit and implicit tau-leaping algorithms. To compare these methods, we used them to analyze two infection models: a Vancomycin-resistant enterococcus (VRE) infection model at the population level, and a Human Immunodeficiency Virus (HIV) within-host infection model. While the first has a low species count and few transitions, the second is more complex, with a comparable number of species involved. The relative efficiency of each algorithm is determined based on computational time and the degree of precision required. The numerical results suggest that all three algorithms have similar computational efficiency for the simpler VRE model, and the SSA is the best choice due to its simplicity and accuracy. In addition, we have found that with the larger and more complex HIV model, implementation and modification of tau-leaping methods are preferred.
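The SSA (Gillespie's direct method) compared in the paper can be sketched on a toy infection model. The SIS-type model and rate constants below are our own illustrative choices, not the paper's VRE or HIV models:

```python
import random

def gillespie_sis(S0, I0, beta, gamma, T, rng=random.Random(1)):
    """Gillespie direct-method SSA for a toy SIS infection model.

    Reactions: infection S + I -> 2I at rate beta*S*I/N,
               recovery  I -> S      at rate gamma*I.
    Returns the trajectory as a list of (time, S, I) records.
    """
    N = S0 + I0
    t, S, I = 0.0, S0, I0
    traj = [(t, S, I)]
    while t < T and I > 0:
        a1 = beta * S * I / N        # infection propensity
        a2 = gamma * I               # recovery propensity
        a0 = a1 + a2
        t += rng.expovariate(a0)     # exact exponential waiting time
        if t > T:
            break
        if rng.random() * a0 < a1:   # choose a reaction proportionally
            S, I = S - 1, I + 1
        else:
            S, I = S + 1, I - 1
        traj.append((t, S, I))
    return traj
```

Tau-leaping methods trade this event-by-event exactness for speed by firing Poisson-distributed bundles of reactions over a fixed step tau, which is why they pay off on the larger HIV model but not on the small VRE model.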
88

A test for non-Gaussian distributions on the Johannesburg Stock Exchange and its implications for forecasting models based on historical growth rates

Corker, Lloyd A January 2002 (has links)
Master of Commerce / If share price fluctuations follow a simple random walk, then forecasting models based on historical growth rates have little ability to forecast acceptable share price movements over a certain period. The simple random walk description of share price dynamics is obtained when a large number of investors have equal probability to buy or sell based on their own opinion. This simple random walk description of the stock market is in essence the Efficient Market Hypothesis (EMH). The EMH is the central concept around which financial modelling is based, which includes the Black-Scholes model and other important theoretical underpinnings of capital market theory like mean-variance portfolio selection, arbitrage pricing theory (APT), the security market line, and the capital asset pricing model (CAPM). These theories, which postulate that risk can be reduced to zero, set the foundation for option pricing and are a key component in financial software packages used for pricing and forecasting in the financial industry. The model used by Black and Scholes and the other models mentioned above are Gaussian, i.e. they exhibit a random nature. This Gaussian property, and the existence of expected returns and continuous time paths (also Gaussian properties), allow the use of stochastic calculus to solve complex Black-Scholes models. However, if the markets are not Gaussian, then the idea that risk can be reduced to zero can lead to a misleading and potentially disastrous sense of security on the financial markets. This study tests the null hypothesis, that share prices on the JSE follow a random walk, by means of graphical techniques such as symmetry plots and quantile-quantile plots to analyse the test distributions. In both graphical techniques, evidence for the rejection of normality was found. Evidence leading to the rejection of the hypothesis was also found through nonparametric or distribution-free methods, at a 1% level of significance, for the Anderson-Darling and runs tests.
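Diagnostics of this kind are easy to reproduce on simulated data. The sketch below implements two of the checks mentioned, an excess-kurtosis estimate (fat tails are the classic non-Gaussian signature of returns) and a Wald-Wolfowitz runs test; it runs on synthetic draws rather than JSE data and is not the thesis's code:

```python
import math
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: 0 for Gaussian data, positive for fat tails."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / (m2 ** 2) - 3.0

def runs_test_z(xs):
    """Wald-Wolfowitz runs test z-statistic on signs around the median.

    Large |z| rejects randomness of the sequence (a random-walk check
    applies this to successive price changes)."""
    med = sorted(xs)[len(xs) // 2]
    signs = [x > med for x in xs if x != med]
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)) / \
          ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mu) / math.sqrt(var)

rng = random.Random(0)
normal_sample = [rng.gauss(0.0, 1.0) for _ in range(20000)]
# Laplace draws stand in for fat-tailed returns (excess kurtosis 3).
laplace_sample = [rng.choice((-1, 1)) * rng.expovariate(1.0)
                  for _ in range(20000)]
```

On the Gaussian sample the excess kurtosis hovers near zero, while the fat-tailed sample shows the pronounced positive excess kurtosis that, in the thesis, motivates rejecting Gaussian models for JSE returns.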
89

Hierarchical Approximation Methods for Option Pricing and Stochastic Reaction Networks

Ben Hammouda, Chiheb 22 July 2020 (has links)
In biochemically reactive systems with small copy numbers of one or more reactant molecules, stochastic effects dominate the dynamics. In the first part of this thesis, we design novel, efficient simulation techniques for reliable and fast estimation of various statistical quantities for stochastic biological and chemical systems in the framework of Stochastic Reaction Networks. In the first work, we propose a novel hybrid multilevel Monte Carlo (MLMC) estimator for systems characterized by having simultaneously fast and slow timescales. Our hybrid multilevel estimator uses a novel split-step implicit tau-leap scheme at the coarse levels, where the explicit tau-leap method is not applicable due to numerical instability. In a second work, we address another challenge in this context, the high-kurtosis phenomenon observed at the deep levels of the MLMC estimator. We propose a novel approach that combines the MLMC method with a pathwise-dependent importance sampling technique for simulating the coupled paths. Our theoretical estimates and numerical analysis show that our method improves the robustness and complexity of the multilevel estimator with negligible additional cost. In the second part of this thesis, we design novel methods for pricing financial derivatives. Option pricing is usually challenging due to (1) the high dimensionality of the input space and (2) the low regularity of the integrand in the input parameters. We address these challenges by developing techniques for smoothing the integrand to uncover the available regularity, and then approximating the resulting integrals using hierarchical quadrature methods combined with Brownian bridge construction and Richardson extrapolation. In the first work, we apply our approach to efficiently price options under the rough Bergomi model. This model exhibits several numerical and theoretical challenges that make classical numerical pricing methods either inapplicable or computationally expensive. In a second work, we design a numerical smoothing technique for cases where analytic smoothing is impossible. Our analysis shows that adaptive sparse-grid quadrature combined with numerical smoothing outperforms the Monte Carlo approach. Furthermore, our numerical smoothing improves the robustness and complexity of the MLMC estimator, particularly when estimating density functions.
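The multilevel Monte Carlo idea behind the first part can be sketched on a toy problem: estimating E[S_T] for geometric Brownian motion with coupled coarse/fine Euler paths, so that level corrections have small variance and few samples are needed at fine levels. The model, parameters, and crude sample allocation below are our own assumptions, far simpler than the thesis's reaction-network and rough Bergomi settings:

```python
import math
import random

def mlmc_gbm_mean(L, M0, S0=1.0, mu=0.05, sigma=0.2, T=1.0,
                  rng=random.Random(2)):
    """Minimal MLMC estimator of E[S_T] under GBM dS = mu*S dt + sigma*S dW.

    Level l uses 2**l Euler steps; for l > 0 the coarse path reuses the
    fine path's Brownian increments (summed in pairs), which couples the
    two and shrinks the variance of the correction P_l - P_{l-1}.
    """
    est = 0.0
    for l in range(L + 1):
        n_f = 2 ** l
        N_l = max(M0 // 2 ** l, 100)   # crude geometric sample allocation
        dt = T / n_f
        acc = 0.0
        for _ in range(N_l):
            s_f, s_c, inc_c = S0, S0, 0.0
            for k in range(n_f):
                dW = rng.gauss(0.0, math.sqrt(dt))
                s_f += mu * s_f * dt + sigma * s_f * dW   # fine Euler step
                inc_c += dW
                if l > 0 and k % 2 == 1:                  # coarse step: 2*dt,
                    s_c += mu * s_c * (2 * dt) + sigma * s_c * inc_c
                    inc_c = 0.0                           # summed increments
            acc += s_f - (s_c if l > 0 else 0.0)          # telescoping term
        est += acc / N_l
    return est
```

The telescoping sum E[P_0] + sum of E[P_l - P_{l-1}] recovers the finest-level estimate at a fraction of its single-level cost; the thesis's contributions (implicit split-step tau-leap coarse levels, importance sampling against high kurtosis, numerical smoothing) all refine this basic construction.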
90

Performance Analysis and Sampled-Data Controller Synthesis for Bounded Persistent Disturbances / 有界持続的外乱に対する性能解析およびサンプル値制御器設計

Kim, Jung Hoon 23 March 2015 (has links)
Kyoto University / New-system doctoral program / Doctor of Engineering, Kou No. 18993 (Kogaku-haku No. 4035) / Department of Electrical Engineering, Graduate School of Engineering, Kyoto University / Examining committee: Prof. Tomomichi Hagiwara (chair), Prof. Tetsuji Matsuo, Assoc. Prof. Eiko Furutani / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
