191

Option Pricing Under the Markov-switching Framework Defined by Three States

Castoe, Minna, Raspudic, Teo January 2020 (has links)
An exact solution for the valuation of European-style options can be obtained using the Black-Scholes model. However, the model rests on assumptions that are inconsistent with real markets, notably that the volatility of the stock price is constant. In this thesis, the Black-Scholes model is extended to a model where the volatility is fully stochastic and changes over time, modelled by a Markov chain with three states: high, medium and low. Under this model, we price options of both the European and the American type using Monte Carlo simulation.
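The simulation scheme described in the abstract can be sketched in a few lines of Python. The sketch below prices a European call under a three-state regime-switching volatility; all numeric values (volatilities, transition matrix, rates) are assumptions for illustration, not parameters from the thesis, and the American case would additionally need an exercise rule such as Longstaff-Schwartz regression.

```python
import numpy as np

# Minimal sketch: Monte Carlo pricing of a European call under a
# three-state Markov-switching volatility. All numeric values are
# illustrative assumptions, not parameters from the thesis.
rng = np.random.default_rng(0)

S0, K, r, T = 100.0, 100.0, 0.02, 1.0
n_steps, n_paths = 252, 100_000
dt = T / n_steps

vols = np.array([0.4, 0.2, 0.1])           # high, medium, low volatility states
P = np.array([[0.90, 0.08, 0.02],          # assumed one-step transition matrix
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
P_cum = P.cumsum(axis=1)

state = np.zeros(n_paths, dtype=int)       # start every path in the high state
logS = np.full(n_paths, np.log(S0))

for _ in range(n_steps):
    sigma = vols[state]
    z = rng.standard_normal(n_paths)
    logS += (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    # advance each path's volatility state by one Markov step (inverse CDF)
    u = rng.random(n_paths)
    state = (u[:, None] > P_cum[state]).sum(axis=1)

payoff = np.maximum(np.exp(logS) - K, 0.0)  # European call payoff
price = np.exp(-r * T) * payoff.mean()
print(f"European call estimate: {price:.4f}")
```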
192

Joint Posterior Inference for Latent Gaussian Models and extended strategies using INLA

Chiuchiolo, Cristian 06 June 2022 (has links)
Bayesian inference is particularly challenging on hierarchical statistical models, where computational complexity becomes a significant issue. Sampling-based methods like the popular Markov chain Monte Carlo (MCMC) can provide accurate solutions, but they carry a high computational burden. An attractive alternative is the Integrated Nested Laplace Approximations (INLA) approach, which is faster when applied to the broad class of Latent Gaussian Models (LGMs). The method computes fast and empirically accurate deterministic approximations of the posterior marginals of the model's unknown parameters. In the first part of this thesis, we discuss how to extend the software's applicability to joint posterior inference by constructing a new class of joint posterior approximations, which also add marginal corrections for location and skewness. As these approximations result from a combination of a Gaussian copula and internally pre-computed accurate Gaussian approximations, we name this class Skew Gaussian Copula (SGC). By computing moments and the correlation structure of a mixture representation of these distributions, we achieve new fast and accurate deterministic approximations for linear combinations in a subset of the model's latent field. The same mixture approximates a full joint posterior density through Monte Carlo sampling on the hyperparameter set. We construct highly skewed examples based on Poisson and Binomial hierarchical models and verify these new approximations against INLA and MCMC. The new skewness correction from the Skew Gaussian Copula is more consistent with the outcomes provided by the default INLA strategies. In the last part, we propose an extension of the parametric fit employed by the Simplified Laplace Approximation strategy in INLA when approximating posterior marginals. By default, the strategy matches log derivatives from a third-order Taylor expansion of each Laplace Approximation marginal with those derived from Skew Normal distributions. We consider a fourth-order term and adapt an Extended Skew Normal distribution to produce a more accurate fit when skewness is large. We run similarly skewed data simulations with Poisson and Binomial likelihoods and show that the posterior marginal results from the new extended strategy are more accurate and coherent with the MCMC ones than those of its original version.
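The copula construction at the heart of the SGC class can be conveyed with a toy Python sketch: dependence is carried by a Gaussian copula while each marginal is pushed through a skew-normal quantile function. This is only a schematic of the idea, not the approximation INLA computes; the correlation matrix and skewness parameters below are assumptions.

```python
import numpy as np
from scipy import stats

# Toy illustration of a skewed Gaussian-copula construction:
# correlation comes from a Gaussian copula, skewness from the marginals.
rng = np.random.default_rng(1)

# Assumed latent correlation (in INLA this would come from a
# pre-computed Gaussian approximation).
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])

# Step 1: draw correlated Gaussians and map them to uniforms.
z = rng.multivariate_normal(mean=np.zeros(2), cov=R, size=50_000)
u = stats.norm.cdf(z)

# Step 2: push the uniforms through skew-normal marginal quantile
# functions (skewness parameters chosen purely for illustration).
x1 = stats.skewnorm.ppf(u[:, 0], a=5.0)    # strongly right-skewed marginal
x2 = stats.skewnorm.ppf(u[:, 1], a=-2.0)   # mildly left-skewed marginal

print("sample skewness:", stats.skew(x1), stats.skew(x2))
print("sample correlation:", np.corrcoef(x1, x2)[0, 1])
```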
193

A Method for Reconstructing Historical Destructive Earthquakes Using Bayesian Inference

Ringer, Hayden J. 04 August 2020 (has links)
Seismic hazard analysis is concerned with estimating risk to human populations due to earthquakes and the other natural disasters that they cause. In many parts of the world, earthquake-generated tsunamis are especially dangerous. Assessing the risk for seismic disasters relies on historical data that indicate which fault zones are capable of supporting significant earthquakes. Due to the nature of geologic time scales, the era of seismological data collection with modern instruments has captured only a part of the Earth's seismic hot zones. However, non-instrumental records, such as anecdotal accounts in newspapers, personal journals, or oral tradition, provide limited information on earthquakes that occurred before the modern era. Here, we introduce a method for reconstructing the source earthquakes of historical tsunamis based on anecdotal accounts. We frame the reconstruction task as a Bayesian inference problem by making a probabilistic interpretation of the anecdotal records. Utilizing robust models for simulating earthquakes and tsunamis provided by the software package GeoClaw, we implement a Metropolis-Hastings sampler for the posterior distribution on source earthquake parameters. In this work, we present our analysis of the 1852 Banda Arc earthquake and tsunami as a case study for the method. Our method is implemented as a Python package, which we call tsunamibayes. It is available, open-source, on GitHub: https://github.com/jwp37/tsunamibayes.
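The sampler at the core of the method is standard Metropolis-Hastings. A minimal random-walk version is sketched below, with the GeoClaw forward model replaced by a stand-in log-posterior; in tsunamibayes the evaluation would involve running a tsunami simulation and scoring it against the anecdotal accounts.

```python
import numpy as np

# Generic random-walk Metropolis-Hastings loop. The log-posterior
# below is a placeholder; the real method scores GeoClaw simulations
# against probabilistically interpreted anecdotal records.
rng = np.random.default_rng(2)

def log_posterior(theta):
    # Placeholder: a correlated Gaussian in a (magnitude, depth)-like space.
    mu = np.array([8.5, 20.0])
    cov = np.array([[0.04, 0.1], [0.1, 25.0]])
    d = theta - mu
    return -0.5 * d @ np.linalg.solve(cov, d)

def metropolis_hastings(theta0, step, n_samples):
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_new = log_posterior(proposal)
        if np.log(rng.random()) < lp_new - lp:   # accept/reject step
            theta, lp = proposal, lp_new
        chain[i] = theta
    return chain

chain = metropolis_hastings([8.0, 15.0], step=np.array([0.1, 2.0]),
                            n_samples=20_000)
print("posterior means:", chain[5_000:].mean(axis=0))  # discard burn-in
```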
194

Optimization of Grid Connection Capacity for Onshore Wind Farms / Optimering av nätkapacitet för landbaserad vindkraft

Wall, Patrik January 2022 (has links)
This thesis investigates whether the profitability of a wind farm can be increased by reducing its contracted grid capacity. Two years of SCADA data are cleaned of non-performance and partial-performance periods and used to estimate a wake-reduced annual power time series. Stochastic models of production losses are applied to translate the wake-reduced annual power time series. Ice losses are modelled with a 3-state Markov chain whose statistical properties are calculated from ice events identified in the SCADA signal with the IEA Task 19 IceLoss algorithm. An ice loss factor of 86% is estimated for Juktan during 2019. The results indicate that profitability can be increased by reducing the contracted grid capacity. Furthermore, the optimized grid capacity is shown to have low sensitivity to power price and ice losses. This finding is valuable since the power market and the weather are inherently difficult to predict; it follows that the prediction uncertainties of these inputs are less significant when calculating the optimized grid capacity.
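A 3-state ice chain of the kind described could look like the sketch below. The states, hourly transition probabilities, and per-state loss factors are illustrative assumptions; in the thesis they are estimated from ice events identified in the SCADA data.

```python
import numpy as np

# Sketch of a 3-state Markov chain ice-loss model. Transition
# probabilities and per-state loss factors are assumed values for
# illustration, not the estimates from the thesis.
rng = np.random.default_rng(3)

states = ["no ice", "light ice", "heavy ice"]
P = np.array([[0.95, 0.04, 0.01],    # assumed hourly transition probabilities
              [0.10, 0.80, 0.10],
              [0.02, 0.18, 0.80]])
loss_factor = np.array([1.00, 0.75, 0.30])   # fraction of power retained

n_hours = 24 * 90                    # one winter season at hourly resolution
state = 0
retained = np.empty(n_hours)
for t in range(n_hours):
    retained[t] = loss_factor[state]
    state = rng.choice(3, p=P[state])

print(f"mean retained power over the season: {retained.mean():.2%}")
```

Multiplying the simulated per-hour retention factors into the wake-reduced power time series then yields an ice-adjusted production series of the kind the optimization works with.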
195

A review of two financial market models: the Black-Scholes-Merton and the Continuous-time Markov chain models

Ayana, Haimanot, Al-Swej, Sarah January 2021 (has links)
The objective of this thesis is to review two popular mathematical models of the financial derivatives market: the classical Black-Scholes-Merton (BSM) model and the continuous-time Markov chain (CTMC) model. We study the CTMC model as presented by the mathematician Ragnar Norberg. The thesis demonstrates how the fundamental results of financial engineering work in both models. To review the two models, we consider the construction of the main financial market components and the approach used for pricing contingent claims. In addition, the steps used in solving the first-order partial differential equations in both models are explained. The main similarity between the models is that the financial market components are the same; their contingent claims are similar, and the driving processes of both models have the Markov property. One notable difference is that the driving process in the BSM model is Brownian motion, while in the CTMC model it is a Markov chain. We believe that the thesis can motivate other students and researchers to undertake a deeper and more advanced comparative study of the two models.
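For reference, the closed-form BSM price of a European call — the exact solution on one side of the comparison — is a few lines of Python (parameter values chosen for illustration):

```python
import numpy as np
from scipy.stats import norm

# Closed-form Black-Scholes-Merton price for a European call.
def bs_call(S0, K, r, sigma, T):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(f"BSM call price: {bs_call(100.0, 100.0, 0.02, 0.2, 1.0):.4f}")
```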
196

Exploring stellar magnetic activities with Bayesian inference / ベイズ推論による恒星磁気活動の探究

Ikuta, Kai 23 March 2021 (has links)
Kyoto University / Doctorate by coursework (new system) / Doctor of Science / Kou No. 23006 / Ri-Haku No. 4683 / Shinsei||Ri||1672 (University Library) / Division of Physics and Astronomy, Graduate School of Science, Kyoto University / (Chief examiner) Associate Professor Daisaku Nogami, Professor Kiyoshi Ichimoto, Professor Kouji Ohta / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Science / Kyoto University / DFAM
197

Consecutive Covering Arrays and a New Randomness Test

Godbole, A. P., Koutras, M. V., Milienos, F. S. 01 May 2010 (has links)
A k × n array with entries from an "alphabet" A = {0, 1, ..., q - 1} of size q is said to form a t-covering array (resp. orthogonal array) if each t × n submatrix of the array contains, among its columns, at least one (resp. exactly one) occurrence of each t-letter word from A (we must thus have n = q^t for an orthogonal array to exist and n ≥ q^t for a t-covering array). In this paper, we continue the agenda laid down in Godbole et al. (2009), in which the notion of consecutive covering arrays was defined and motivated; a detailed study of these arrays for the special case q = 2 was also carried out by the same authors. In the present article, we first use a Markov chain embedding method to exhibit, for general values of q, the probability distribution function of the random variable W = W_{k,n,t}, defined as the number of sets of t consecutive rows for which the submatrix in question is missing at least one word. We then use the Chen-Stein method (Arratia et al., 1989, 1990) to provide upper bounds on the total variation error incurred while approximating L(W) by a Poisson distribution Po(λ) with the same mean as W. Last but not least, the Poisson approximation is used as the basis of a new statistical test to detect run-based discrepancies in an array of q-ary data.
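A quick Monte Carlo check of the quantity W and its Poisson approximation can be written in Python. The parameters below are illustrative, and the exact distribution in the paper comes from the Markov chain embedding rather than from simulation.

```python
import numpy as np

# Monte Carlo check: for a random k x n q-ary array, W counts the sets
# of t consecutive rows whose submatrix misses at least one of the q^t
# possible t-letter words among its columns. Parameters are illustrative.
rng = np.random.default_rng(4)
q, k, n, t = 2, 20, 12, 3
n_trials = 5_000

def count_W(arr):
    W = 0
    for i in range(k - t + 1):
        window = arr[i:i + t]                       # t x n submatrix
        # encode each column as an integer word in base q
        words = (window * q ** np.arange(t)[:, None]).sum(axis=0)
        if np.unique(words).size < q ** t:          # some word is missing
            W += 1
    return W

samples = np.array([count_W(rng.integers(0, q, size=(k, n)))
                    for _ in range(n_trials)])
lam = samples.mean()
print(f"estimated E[W] = {lam:.3f}; P(W = 0) ~ {np.mean(samples == 0):.3f} "
      f"vs Poisson approximation {np.exp(-lam):.3f}")
```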
198

Modeling the Spread of Infectious Disease Using Genetic Information Within a Marked Branching Process

Leman, Scotland C., Levy, Foster, Walker, Elaine S. 20 December 2009 (has links)
Accurate assessment of disease dynamics requires a quantification of many unknown parameters governing disease transmission processes. While infection control strategies within hospital settings are stringent, some disease will be propagated due to human interactions (patient-to-patient or patient-to-caregiver-to-patient). In order to understand infectious transmission rates within the hospital, it is necessary to isolate the amount of disease that is endemic to the outside environment. While discerning the origins of disease is difficult when using ordinary spatio-temporal data (locations and times of disease detection), genotypes that are common to pathogens with common sources aid in distinguishing nosocomial infections from independent arrivals of the disease. The purpose of this study was to demonstrate a Bayesian modeling procedure for identifying nosocomial infections and to quantify the rate of these transmissions. We demonstrate our method using a 10-year history of Moraxella catarrhalis. Results show the degree to which pathogen-specific genotypic information impacts inferences about the nosocomial rate of infection.
199

Peptide Refinement by Using a Stochastic Search

Lewis, Nicole H., Hitchcock, David B., Dryden, Ian L., Rose, John R. 01 November 2018 (has links)
Identifying a peptide on the basis of a scan from a mass spectrometer is an important yet highly challenging problem. To identify peptides, we present a Bayesian approach which uses prior information about the average relative abundances of bond cleavages and the prior probability of any particular amino acid sequence. The proposed scoring function is composed of two overall distance measures, which quantify how close an observed spectrum is to the theoretical scan for a peptide. Our use of this scoring function, which approximates a likelihood, has connections to the generalization of the Bayesian framework presented by Bissiri and co-workers. A Markov chain Monte Carlo algorithm is employed to simulate candidate choices from the posterior distribution of the peptide sequence. The true peptide is estimated as the peptide with the largest posterior density.
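The flavor of such a stochastic search can be conveyed with a toy Metropolis-type sampler over sequences, in which proposals mutate one residue at a time. The scoring function below is a stand-in for the paper's spectrum-distance score, and the "hidden target" exists only to make the toy runnable.

```python
import numpy as np

# Toy stochastic search over peptide sequences: Metropolis-type moves
# that mutate one residue at a time. The score is a stand-in; the paper
# scores sequences by distance between observed and theoretical spectra.
rng = np.random.default_rng(5)
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
TARGET = list("PEPTIDE")                     # hidden truth for the toy score

def log_score(seq):
    # Stand-in score: rewards positions matching the hidden target.
    return 2.0 * sum(a == b for a, b in zip(seq, TARGET))

seq = [rng.choice(AMINO_ACIDS) for _ in range(len(TARGET))]
best, best_score = list(seq), log_score(seq)
for _ in range(20_000):
    prop = list(seq)
    prop[rng.integers(len(prop))] = rng.choice(AMINO_ACIDS)  # mutate one residue
    if np.log(rng.random()) < log_score(prop) - log_score(seq):
        seq = prop
    if log_score(seq) > best_score:
        best, best_score = list(seq), log_score(seq)

print("highest-scoring sequence found:", "".join(best))
```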
