831

Using Markov chain to describe the progression of chronic disease

Davis, Sijia January 1900 (has links)
Master of Science / Department of Statistics / Abigail Jager / A discrete-time Markov chain with stationary transition probabilities is often used to investigate treatment programs and health care protocols for chronic disease. Suppose patients with a certain chronic disease are observed over equally spaced time intervals. If we classify the disease into n distinct health states, then movement through these health states over time represents a patient's disease history, and a discrete-time Markov chain can describe that movement through the transition probabilities between health states. The purpose of this study was to investigate the case in which the observation interval coincided with the cycle length of the Markov chain, as well as the case in which it did not. In particular, we were interested in how the estimated transition matrix behaves as the ratio of observation interval to cycle length changes. Our results suggest that for small sample sizes, more estimation problems arose as the observation interval lengthened, and the deviation from the known transition probability matrix grew as the observation interval lengthened. With increasing sample size, there were fewer estimation problems and the deviation from the known transition probability matrix was reduced.
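A minimal sketch of the setup this abstract describes (the states, transition probabilities, and sample sizes below are invented for illustration): when a chain with one-step matrix P is recorded only every k cycles, the transition-count estimate targets P^k rather than P, and short thinned paths can leave rows of the count matrix empty, which is the kind of estimation problem reported above for small samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-state one-step transition matrix (cycle length = 1).
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.05, 0.05, 0.90]])

def simulate(P, n_steps, start=0):
    """Simulate one patient history as a discrete-time Markov chain path."""
    states = [start]
    for _ in range(n_steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

def estimate(path, n_states):
    """Row-normalized transition counts: the MLE of the transition matrix."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(path[:-1], path[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

path = simulate(P, 30_000)
k = 3                                   # observation interval = 3 cycles
P_hat = estimate(path[::k], 3)          # chain recorded every k-th cycle only
print(np.round(P_hat, 3))               # approximates P^k, not P itself
print(np.round(np.linalg.matrix_power(P, k), 3))
```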
832

A comparison of stochastic claim reserving methods

Mann, Eric M. January 1900 (has links)
Master of Science / Department of Statistics / Haiyan Wang / Estimating unpaid liabilities for insurance companies is an extremely important aspect of insurance operations. Consistent underestimation can leave companies needing to strengthen reserves, which can lead to lower profits, downgraded credit ratings, and, in the worst case, insolvency. Consistent overestimation can lead to inefficient capital allocation and a higher overall cost of capital. Because these estimates matter so much and the underlying unpaid liabilities are so variable, a multitude of methods has been developed to estimate them. This paper compares several actuarial and statistical methods to determine which produce relatively more accurate estimates of unpaid liabilities. The Chain Ladder Method is introduced first for those unfamiliar with it. Several Generalized Linear Model (GLM) methods, various Generalized Additive Model (GAM) methods, the Bornhuetter-Ferguson Method, and a Bayesian method that links the Chain Ladder and Bornhuetter-Ferguson methods are then presented, all of them connected in some way to the Chain Ladder Method. Historical data from multiple lines of business compiled by the National Association of Insurance Commissioners is used to compare the methods across different loss functions, to identify which methods produce estimates with the minimum loss, and to better understand the methods' relative strengths and weaknesses.
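For readers unfamiliar with it, here is a minimal, self-contained sketch of the Chain Ladder Method on an invented cumulative runoff triangle (the figures are illustrative, not NAIC data):

```python
import numpy as np

# Invented cumulative runoff triangle: rows = accident years,
# columns = development years; NaN marks cells not yet observed.
C = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [ 900., 1700., np.nan, np.nan],
    [1200., np.nan, np.nan, np.nan],
])

n = C.shape[1]
# Volume-weighted development factors f_j = sum C[:, j+1] / sum C[:, j],
# using only the rows where both entries are observed.
f = []
for j in range(n - 1):
    mask = ~np.isnan(C[:, j]) & ~np.isnan(C[:, j + 1])
    f.append(C[mask, j + 1].sum() / C[mask, j].sum())

# Square the triangle: roll each row forward with the factors.
proj = C.copy()
for i in range(C.shape[0]):
    for j in range(n - 1):
        if np.isnan(proj[i, j + 1]):
            proj[i, j + 1] = proj[i, j] * f[j]

ultimates = proj[:, -1]
latest = np.array([row[~np.isnan(row)][-1] for row in C])
print("development factors:", np.round(f, 3))
print("reserve estimates:  ", np.round(ultimates - latest, 1))
```

Broadly, the stochastic methods the paper compares wrap distributional assumptions around this deterministic projection, so that the variability of the reserve estimate can be quantified as well.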
833

Integrating Pricing and Inventory Control: Is it Worth the Effort?

Gimpl-Heersink, Lisa, Rudloff, Christian, Fleischmann, Moritz, Taudes, Alfred 05 1900 (has links) (PDF)
In this paper we first show that the gains achievable by integrating pricing and inventory control are usually small for classical demand functions. We then introduce reference price models and demonstrate that for this class of demand functions the benefits of integration with inventory control are substantially increased due to the price dynamics. We also provide some analytical results for this more complex model. We thus conclude that integrated pricing/inventory models could repeat the success of revenue management in practice if reference price effects are included in the demand model and the properties of this new model are better understood. (authors' abstract)
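A toy simulation of the reference-price effect the authors exploit; the linear demand form, the adaptation rule, and every parameter below are assumptions for illustration, not the paper's model:

```python
import numpy as np

# Assumed linear demand with a reference-price term:
# D(p, r) = a - b*p + g*(r - p); the reference price r is an
# exponential smooth of past posted prices.
a, b, g, theta = 100.0, 2.0, 1.5, 0.8

def demand(p, r):
    return max(a - b * p + g * (r - p), 0.0)

r = 20.0
prices = [18.0, 18.0, 25.0, 25.0, 25.0]   # a price increase at t = 2
for t, p in enumerate(prices):
    print(f"t={t}  p={p:5.1f}  r={r:5.2f}  demand={demand(p, r):6.2f}")
    r = theta * r + (1 - theta) * p        # reference-price memory update
```

After the price rise at t = 2, demand first drops sharply and then partially recovers as the reference price adapts; this dependence of demand on price history is what makes the integrated pricing/inventory problem genuinely dynamic.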
834

Analytical Estimation of Value at Risk Under Thick Tails and Fast Volatility Updating

Telfah, Ahmad 16 May 2003 (has links)
Despite its recent advent, value at risk (VaR) has become the most widely used technique for measuring future expected risk for both financial and non-financial institutions. VaR, the worst expected loss over a given horizon at a given confidence level, depends crucially on the distributional properties of trading revenues. Existing VaR models do not adequately capture some empirical aspects of financial data, such as tail thickness, which is vital in VaR calculations. Tail thickness in financial variables arises chiefly from stochastic volatility and event risk (jumps). These two sources are not fully separate; under event risk, volatility updates faster than under normal market conditions. Generally, tail thickness is associated with hyper volatility updating. The existing VaR literature accounts for tail thickness only partially, by including either stochastic volatility or jump diffusion but not both, and it does not account for the fast volatility updating associated with tail thickness. This dissertation fills the gap by developing analytical VaR models that account for the total (maximum) tail thickness and the associated fast volatility updating. These aspects are achieved by assuming that trading revenues evolve according to a mixed non-affine stochastic volatility-jump diffusion process. The mixture of stochastic volatility and jump diffusion accounts for the maximum tail thickness, whereas the non-affine structure of the stochastic volatility captures the fast volatility updating. The non-affine structure assumes that volatility dynamics are non-linearly related to the square root of current volatility, rather than following the traditional linear (affine) relationship. VaR estimates are obtained by deriving the conditional characteristic function and then inverting it numerically via Fourier inversion to infer the cumulative distribution function. Applying the developed VaR models to a sample of six U.S. banks over 1995-2002 shows that VaR models based on the non-affine stochastic volatility and jump diffusion process produce more reliable VaR estimates than the banks' own VaR models. The developed models could significantly predict the losses those banks incurred during the Russian crisis and the near collapse of LTCM in 1998, when the banks' own VaR models failed.
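The dissertation's models rest on a non-affine stochastic volatility-jump diffusion; as a simpler stand-in that follows the same inversion route, the sketch below computes VaR by Gil-Pelaez Fourier inversion of the characteristic function of a Gaussian-plus-compound-Poisson revenue distribution (a Merton-style mixture; all parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

# Illustrative one-period trading-revenue distribution: a Gaussian diffusion
# part plus compound Poisson jumps with Gaussian sizes.
mu, sigma = 0.001, 0.010                 # diffusion drift and volatility
lam, mu_j, sigma_j = 0.5, -0.02, 0.03    # jump intensity, mean size, size std

def cf(u):
    """Characteristic function of the revenue distribution."""
    diffusion = 1j * u * mu - 0.5 * sigma**2 * u**2
    jumps = lam * (np.exp(1j * u * mu_j - 0.5 * sigma_j**2 * u**2) - 1.0)
    return np.exp(diffusion + jumps)

# Gil-Pelaez inversion:
# F(x) = 1/2 - (1/pi) * int_0^inf Im[e^{-iux} cf(u)] / u du
u = np.linspace(1e-6, 2000.0, 400_001)
phi = cf(u)

def cdf(x):
    integrand = np.imag(np.exp(-1j * u * x) * phi) / u
    return 0.5 - trapezoid(integrand, u) / np.pi

alpha = 0.01                             # 99% confidence level
quantile = brentq(lambda x: cdf(x) - alpha, -0.2, 0.05)
print(f"99% VaR = {-quantile:.4f}")      # loss reported as a positive number
```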
835

Indifference pricing of natural gas storage contracts.

Löhndorf, Nils, Wozabal, David January 2017 (has links) (PDF)
Natural gas markets are incomplete due to physical limitations and low liquidity, but most valuation approaches for natural gas storage contracts assume a complete market. We propose an alternative approach based on indifference pricing which does not require this assumption but entails the solution of a high-dimensional stochastic-dynamic optimization problem under a risk measure. To solve this problem, we develop a method combining stochastic dual dynamic programming with a novel quantization method that approximates the continuous process of natural gas prices by a discrete scenario lattice. In a computational experiment, we demonstrate that our solution method can handle the high dimensionality of the optimization problem and that solutions are near-optimal. We then compare our approach with rolling intrinsic valuation, which is widely used in the industry, and show that the rolling intrinsic value is sub-optimal under market incompleteness, unless the decision-maker is perfectly risk-averse. We strengthen this result by conducting a backtest using historical data that compares both trading strategies. The results show that up to 40% more profit can be made by using our indifference pricing approach.
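The authors' quantization method is their own; as a rough stand-in, the sketch below builds a scenario lattice by simulating an assumed mean-reverting log-price and collapsing each stage's simulated marginal distribution onto a handful of equal-probability quantile nodes (all dynamics and parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed mean-reverting log-price dynamics (parameters are illustrative).
kappa, mu, sigma, dt = 2.0, np.log(20.0), 0.5, 1 / 12
n_paths, n_stages, n_nodes = 10_000, 12, 5

logp = np.full(n_paths, mu)
lattice = []  # per stage: a short list of node prices, each with mass 1/n_nodes
for t in range(n_stages):
    logp = logp + kappa * (mu - logp) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    # Quantile-based quantization of the stage-t marginal distribution.
    edges = np.quantile(logp, np.linspace(0, 1, n_nodes + 1))
    nodes = [np.exp(logp[(logp >= lo) & (logp <= hi)].mean())
             for lo, hi in zip(edges[:-1], edges[1:])]
    lattice.append(nodes)

print("stage 6 price nodes:", np.round(lattice[5], 2))
```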
836

Approximate replication of high-breakdown robust regression techniques

Zeileis, Achim, Kleiber, Christian January 2008 (has links) (PDF)
This paper demonstrates that even regression results obtained by techniques close to the standard ordinary least squares (OLS) method can be difficult to replicate if a stochastic model fitting algorithm is employed. / Series: Research Report Series / Department of Statistics and Mathematics
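A minimal illustration of the replication problem, using a crude random-subset, LTS-style estimator as a stand-in for genuine high-breakdown methods such as FAST-LTS: the fitted coefficients change from run to run unless the random number generator is seeded.

```python
import numpy as np

def lts_fit(X, y, trim=0.75, n_subsets=500, rng=None):
    """Crude LTS-style estimator: fit OLS on random subsets and keep the
    fit with the smallest trimmed sum of squared residuals."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    h = int(trim * n)
    best, best_obj = None, np.inf
    for _ in range(n_subsets):
        idx = rng.choice(n, size=p + 1, replace=False)
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        resid2 = np.sort((y - X @ beta) ** 2)
        obj = resid2[:h].sum()
        if obj < best_obj:
            best, best_obj = beta, obj
    return best

rng = np.random.default_rng(42)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=100)
y[:10] += 10.0  # gross outliers that defeat plain OLS

print(lts_fit(X, y))           # unseeded: varies from run to run
print(lts_fit(X, y, rng=0))    # seeded: reproducible
print(lts_fit(X, y, rng=0))    # identical to the previous call
```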
837

Data-based stochastic model reduction for the Kuramoto–Sivashinsky equation

Lu, Fei, Lin, Kevin K., Chorin, Alexandre J. 01 February 2017 (has links)
The problem of constructing data-based, predictive, reduced models for the Kuramoto–Sivashinsky equation is considered, under circumstances where one has observation data only for a small subset of the dynamical variables. Accurate prediction is achieved by developing a discrete-time stochastic reduced system, based on a NARMAX (Nonlinear Autoregressive Moving Average with eXogenous input) representation. The practical issue, with the NARMAX representation as with any other, is to identify an efficient structure, i.e., one with a small number of terms and coefficients. This is accomplished here by estimating coefficients for an approximate inertial form. The broader significance of the results is discussed.
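A minimal NARX-style identification sketch, simplified from the NARMAX setting (no moving-average term, and the series below is a surrogate, not Kuramoto–Sivashinsky data): build a small dictionary of lagged and nonlinear terms and fit the coefficients by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate observed scalar series (in the paper this would be one
# observed dynamical variable of the Kuramoto-Sivashinsky system).
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.9 * x[t-1] - 0.2 * x[t-2] - 0.1 * x[t-1]**3 \
           + 0.05 * rng.standard_normal()

# NARX design matrix: lagged linear and polynomial candidate terms.
def features(x, t):
    return [x[t-1], x[t-2], x[t-1]**2, x[t-1]**3, x[t-1] * x[t-2]]

Phi = np.array([features(x, t) for t in range(2, n)])
target = x[2:]
coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print(np.round(coef, 3))  # most weight should land on the true terms
```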
838

Sparse representations and quadratic approximations in path integral techniques for stochastic response analysis of diverse systems/structures

Psaros Andriopoulos, Apostolos January 2019 (has links)
Uncertainty propagation in engineering mechanics and dynamics is a highly challenging problem that requires the development of analytical/numerical techniques for determining the stochastic response of complex engineering systems. Although Monte Carlo simulation (MCS) has been the most versatile technique for addressing this problem, it can become computationally daunting for high-dimensional systems or when computing very low probability events. There is thus a demand for more computationally efficient methodologies. Recently, a Wiener path integral (WPI) technique, whose origins can be found in theoretical physics, has been developed in the field of engineering dynamics for determining the response transition probability density function (PDF) of nonlinear oscillators subject to non-white, non-Gaussian and non-stationary excitation processes. In the present work, the WPI technique is enhanced, extended and generalized in three main respects: versatility, computational efficiency and accuracy. Specifically, the need for increasingly sophisticated modeling of excitations has recently led to the use of fractional calculus, which can be construed as a generalization of classical calculus. Motivated by these developments, the WPI technique is extended herein to account for stochastic excitations modeled via fractional-order filters. To this end, relying on a variational formulation and on the most probable path approximation yields a deterministic fractional boundary value problem to be solved numerically for the oscillator joint response PDF. Further, appropriate multi-dimensional bases are constructed for approximating the non-stationary joint response PDF in a computationally efficient manner. Two distinct approaches are pursued: the first employs expansions based on Kronecker products of bases (e.g., wavelets), while the second utilizes representations based on positive definite functions. Next, the localization capabilities of the WPI technique are exploited for determining PDF points in the joint space-time domain to be used for evaluating the expansion coefficients at relatively low computational cost. Subsequently, compressive sampling procedures are employed in conjunction with group sparsity concepts and appropriate optimization algorithms to decrease the associated computational cost even further. It is shown that this enhancement renders the technique capable of readily treating relatively high-dimensional stochastic systems. More importantly, the gain in computational efficiency becomes more pronounced as the number of stochastic dimensions increases, rendering the proposed sparse representation approach indispensable, especially for high-dimensional systems. Next, a quadratic approximation of the WPI is developed to enhance the accuracy of the technique. Concisely, following a functional series expansion, higher-order terms are accounted for, which is equivalent to considering not only the most probable path but also fluctuations around it. These fluctuations are incorporated into a state-dependent factor by which the exponential part of each PDF value is multiplied. This localization of the state-dependent factor yields superior accuracy compared to the standard most probable path WPI approximation, in which the factor is constant and state-invariant.
An additional advantage relates to efficient structural reliability assessment, and in particular to the direct estimation of low probability events (e.g., failure probabilities) without possessing the complete transition PDF. Overall, the developments in this thesis render the WPI technique a potent tool for determining, in a reliable manner and at minimal computational cost, the stochastic response of nonlinear oscillators subject to an extended range of excitation processes. Several numerical examples, pertaining both to nonlinear dynamical systems subject to external excitations and to a special class of engineering mechanics problems with stochastic media properties, are considered to demonstrate the reliability of the developed techniques. In all cases, the degree of accuracy and the computational efficiency exhibited are assessed through comparisons with pertinent MCS data.
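One ingredient of the sparse-representation enhancement, shown in isolation: recovering sparse expansion coefficients from relatively few sampled function values via an L1-penalized fit. Plain Lasso is used here in place of the thesis's group-sparsity formulation, and the cosine dictionary and coefficients are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Target function with a sparse cosine-basis expansion (illustrative).
n_basis, n_samples = 100, 30
true_coef = np.zeros(n_basis)
true_coef[[3, 17, 42]] = [1.0, -0.7, 0.4]

def basis(t):
    """Cosine dictionary evaluated at sample points t."""
    return np.cos(np.outer(t, np.arange(n_basis)) * np.pi)

t_samples = rng.uniform(0, 1, n_samples)   # few, cheap point evaluations
y = basis(t_samples) @ true_coef

fit = Lasso(alpha=1e-3, max_iter=50_000).fit(basis(t_samples), y)
print(np.nonzero(np.abs(fit.coef_) > 0.05)[0])  # ideally {3, 17, 42}
```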
839

Construction and Approximation of Stable Lévy Motion with Values in Skorohod Space

Saidani, Becem 12 August 2019 (has links)
Under an appropriate regular variation condition, the affinely normalized partial sums of a sequence of independent and identically distributed random variables converge weakly to a non-Gaussian stable random variable. A functional version of this result is also known to hold, the limit process being a stable Lévy process. In this thesis, we develop an explicit construction of the α-stable Lévy motion with values in D([0, 1]), considering the cases α < 1 and α > 1. The case α < 1 is the simplest, since we can work with the uniform topology of the sup-norm on D([0, 1]) and the construction follows more or less by classical techniques. The case α > 1 required more work. In particular, we encountered two problems: one was related to the construction of a modification of this process (for all time) that is right-continuous and has left limits with respect to the J1 topology. This problem was solved using the Itô-Nisio theorem. The other problem was more difficult, and we only managed to solve it by developing a criterion for tightness of probability measures on the space of càdlàg functions on [0, T] with values in D([0, 1]), equipped with a generalization of Skorohod's J1 topology. In parallel with the construction of the infinite-dimensional process Z, we focus on the functional extension of Roueff and Soulier [29]. This part of the thesis was completed using point process methods, which gave the convergence of the truncated sum. The case α > 1 again required more work, due to the presence of centering. For this case, we developed an ad hoc result on the continuity of addition for functions on [0, T] with values in D([0, 1]), tailored to our problem.
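A small numerical companion to the finite-dimensional picture behind the thesis (the D([0, 1])-valued construction itself does not fit in a few lines): increments drawn from scipy's stable law build a stable Lévy motion on a grid, and normalized partial sums of iid heavy-tailed variables approximate the same stable law up to a scale constant. The symmetric case is used so that, even for α > 1, no centering is needed.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha = 1.5   # the harder alpha > 1 regime discussed in the thesis

# Direct simulation: an alpha-stable Levy motion on [0, 1] via iid stable
# increments over a uniform grid (self-similarity fixes the scale dt^(1/alpha)).
n = 1000
increments = levy_stable.rvs(alpha, beta=0.0, scale=(1 / n) ** (1 / alpha),
                             size=n, random_state=rng)
Z = np.cumsum(increments)

# Partial-sum picture: sums of iid symmetric Pareto-tailed variables,
# normalized by m^(1/alpha), approximate a symmetric alpha-stable law
# (up to a scale constant; symmetry removes the need for centering).
m = 100_000
u = rng.uniform(size=m)
signs = rng.choice([-1.0, 1.0], size=m)
X = signs * u ** (-1 / alpha)          # P(|X| > x) = x^(-alpha) for x > 1
S = X.sum() / m ** (1 / alpha)
print(f"normalized partial sum: {S:.3f};  Z(1) = {Z[-1]:.3f}")
```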
840

The optimal assignment problem: an investigation into current solutions, new approaches and the doubly stochastic polytope

Vermaak, Frans-Willem 23 May 2011 (has links)
MSc (Eng), Faculty of Engineering and the Built Environment, University of the Witwatersrand, 2010 / This dissertation presents two important results: a novel algorithm that approximately solves the optimal assignment problem, and a novel method of projecting matrices into the doubly stochastic polytope while preserving the optimal assignment. The optimal assignment problem is a classical combinatorial optimisation problem that has fuelled extensive research in the last century. The problem concerns a matching or assignment of elements in one set to those in another set in an optimal manner. It finds typical application in logistical optimisation, such as the matching of operators and machines, but there are numerous other applications. In this document, a process of iterative weighted normalisation applied to the benefit matrix associated with the assignment problem is considered. This process is derived from applying the Computational Ecology Model to the assignment problem and is referred to as the OACE (Optimal Assignment by Computational Ecology) algorithm. This simple process of iterative weighted normalisation converges towards a matrix that is easily converted to a permutation matrix corresponding to the optimal assignment, or to an assignment close to optimality. The document also considers a method of projecting a matrix into the doubly stochastic polytope while preserving the optimal assignment. Various methods of projecting square matrices into the doubly stochastic polytope exist, but none preserves the optimal assignment. This novel result could prove instrumental in solving assignment problems and promises applications in other optimisation algorithms, similar to those that Sinkhorn's algorithm enjoys.
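A compact illustration of the iterative-normalisation idea, using entropic (Sinkhorn) scaling as a close relative of the OACE process rather than the dissertation's own algorithm: alternately normalising the rows and columns of exp(B/τ) projects it into the doubly stochastic polytope, and for a small temperature τ the limit concentrates near the permutation matrix of the optimal assignment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
B = rng.uniform(size=(6, 6))            # illustrative benefit matrix

# Sinkhorn scaling of exp(B / tau): alternate row and column normalisation.
tau = 0.02
M = np.exp(B / tau)
for _ in range(2000):
    M /= M.sum(axis=1, keepdims=True)   # rows sum to 1
    M /= M.sum(axis=0, keepdims=True)   # columns sum to 1

approx = M.argmax(axis=1)               # read off the near-permutation
rows, exact = linear_sum_assignment(B, maximize=True)
print("sinkhorn:", approx)
print("exact:   ", exact)
```

The temperature trades accuracy against conditioning: a smaller τ tracks the optimal permutation more closely, but makes exp(B/τ) numerically harder to scale.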
