  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Estimating Optimal Checkpoint Intervals Using GPSS Simulation

Savatovic, Anita, Cakic, Mejra January 2007 (has links)
In this project we illustrate how queueing simulation may be used to find the optimal interval for checkpointing problems and compare the results with theoretical computations for simple systems that may be treated analytically. We consider a relatively simple model of an internet banking facility. From time to time, the application server breaks down. The information at the time of the breakdown has to be passed on to the backup server before service may be resumed. To make the changeover as efficient as possible, information on the state of users' accounts is saved at regular intervals. This is known as checkpointing. Firstly, we use GPSS (a queueing simulation tool) to find, by simulation, an optimal checkpointing interval that maximises the efficiency of the server. Two measures of efficiency are considered: the availability of the server and the average time a customer spends in the system. Secondly, we investigate how far queueing theory can go toward providing an analytic solution to the problem and see whether or not this is in line with the results obtained through simulation. The analysis shows that checkpointing is not necessary if breakdowns occur frequently and log reading after failure does not take much time. Otherwise, checkpointing is necessary, and the analysis shows how GPSS may be used to obtain the optimal checkpointing interval. Relatively complicated systems, for which no analytic tools are available, may be simulated. In simple cases, where theoretical methods may be used, the results from our simulations correspond with the theoretical calculations.
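The checkpointing trade-off described above (frequent checkpoints cost overhead, infrequent ones lengthen log replay after a failure) can be sketched with a few lines of Monte Carlo in place of a full GPSS model. Everything below, including the failure rate, checkpoint cost, and replay factor, is a hypothetical parameterisation, not the model from the thesis:

```python
import random

def simulate_availability(interval, mtbf=100.0, checkpoint_cost=1.0,
                          replay_factor=0.5, horizon=1_000_000.0, seed=1):
    """Crude Monte Carlo availability of a server that checkpoints every
    `interval` time units.  Failures arrive as a Poisson process with mean
    time between failures `mtbf`; each checkpoint costs `checkpoint_cost`
    time units; after a failure, the log since the last checkpoint is
    replayed at `replay_factor` recovery time units per unit of lost work.
    All parameter values are hypothetical."""
    rng = random.Random(seed)
    useful, clock = 0.0, 0.0
    cycle = interval + checkpoint_cost
    while clock < horizon:
        fail_in = rng.expovariate(1.0 / mtbf)
        full_cycles = int(fail_in // cycle)
        since_last = fail_in - full_cycles * cycle
        progress = min(since_last, interval)  # work in the partial cycle
        useful += full_cycles * interval + progress
        # After the failure, the log since the last checkpoint is replayed.
        clock += fail_in + replay_factor * progress
    return useful / clock

# Scan a few candidate intervals; the winner is of the same order as
# Young's classical approximation sqrt(2 * checkpoint_cost * mtbf), about 14.
best = max((10, 14, 20, 30, 50), key=simulate_availability)
```

A proper GPSS model would also track queueing customers; this sketch only measures server availability, the first of the two efficiency criteria above.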
102

Rank Estimation in Elliptical Models : Estimation of Structured Rank Covariance Matrices and Asymptotics for Heteroscedastic Linear Regression

Kuljus, Kristi January 2008 (has links)
This thesis deals with univariate and multivariate rank methods for making statistical inference. It is assumed that the underlying distributions belong to the class of elliptical distributions. The class of elliptical distributions is an extension of the normal distribution and includes distributions with both lighter and heavier tails than the normal distribution. In the first part of the thesis the rank covariance matrices defined via the Oja median are considered. The Oja rank covariance matrix has two important properties: it is affine equivariant, and it is proportional to the inverse of the regular covariance matrix. We employ these two properties to study the problem of estimating rank covariance matrices that have a certain structure. The second part, which is the main part of the thesis, is devoted to rank estimation in linear regression models with symmetric heteroscedastic errors. We are interested in the asymptotic properties of rank estimates. Asymptotic uniform linearity of a linear rank statistic in the case of heteroscedastic variables is proved. The asymptotic uniform linearity property enables us to study the asymptotic behaviour of rank regression estimates and rank tests. Existing results are generalized, and it is shown that the Jaeckel estimate is consistent and asymptotically normally distributed for heteroscedastic symmetric errors as well.
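The Jaeckel estimate mentioned above minimises a rank-based dispersion of the regression residuals. As a rough one-dimensional illustration with hypothetical toy data and Wilcoxon scores (the thesis works in far greater generality, including heteroscedastic errors), one can recover a slope by grid search:

```python
import math, random

def jaeckel_dispersion(b, x, y):
    """Jaeckel's rank dispersion of the residuals y - b*x for slope b,
    using Wilcoxon scores a(i) = sqrt(12) * (i / (n + 1) - 1/2)."""
    n = len(x)
    resid = [yi - b * xi for xi, yi in zip(x, y)]
    order = sorted(range(n), key=lambda i: resid[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return sum(math.sqrt(12.0) * (ranks[i] / (n + 1) - 0.5) * resid[i]
               for i in range(n))

# Toy data: true slope 2 with symmetric errors (hypothetical example).
rng = random.Random(0)
x = [i / 10 for i in range(100)]
y = [2.0 * xi + rng.gauss(0.0, 1.0) for xi in x]

# The Jaeckel estimate minimises the dispersion; a coarse grid search is
# enough for this one-dimensional illustration.
grid = [1.5 + 0.01 * k for k in range(101)]
b_hat = min(grid, key=lambda b: jaeckel_dispersion(b, x, y))
```

The dispersion is convex and piecewise linear in the slope, which is why a simple search suffices here.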
103

Estimation of wood fibre length distributions from censored mixture data

Svensson, Ingrid January 2007 (has links)
The motivating forestry background for this thesis is the need for fast, non-destructive, and cost-efficient methods to estimate fibre length distributions in standing trees, in order to evaluate the effect of silvicultural methods and breeding programs on fibre length. Increment cores are a commonly used non-destructive sampling method in forestry. An increment core is a cylindrical wood sample taken with a special borer, and the methods proposed in this thesis are especially developed for data from increment cores. Nevertheless, the methods can also be used for data from other sampling frames, for example sticks with the shape of an elongated rectangular box. This thesis proposes methods to estimate fibre length distributions based on censored mixture data from wood samples. Due to the sampling procedure, wood samples contain cut (censored) and uncut observations. Moreover, the samples consist not only of the fibres of interest but also of other cells (fines). When the cell lengths are determined by an automatic optical fibre analyser, it is not practically possible to distinguish between cut and uncut cells or between fines and fibres. Thus the resulting data come from a censored version of a mixture of the fine and fibre length distributions in the tree. The methods proposed in this thesis can handle this lack of information. Two parametric methods are proposed to estimate the fine and fibre length distributions in a tree. The first method is based on grouped data. The probabilities that the length of a cell from the sample falls into different length classes are derived, with the censoring caused by the sampling frame taken into account. These probabilities are functions of the unknown parameters, and ML estimates are found from the corresponding multinomial model. The second method is a stochastic version of the EM algorithm based on the individual length measurements.
The method is developed for the case where the distributions of the true lengths of the cells at least partially appearing in the sample belong to exponential families. The cell length distribution in the sample, and the conditional distribution of the true length of a cell at least partially appearing in the sample given its observed length, are derived; both distributions are necessary in order to use the stochastic EM algorithm. Consistency and asymptotic normality of the stochastic EM estimates are proved. The methods are applied to real data from increment cores taken from Scots pine trees (Pinus sylvestris L.) in northern Sweden and are further evaluated through simulation studies. Both methods work well for sample sizes commonly obtained in practice.
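As a heavily simplified illustration of fitting a fines/fibres mixture, here is a textbook EM algorithm for a two-component exponential mixture. It ignores the censoring that is central to the thesis, and the component means and mixing proportion below are made up:

```python
import math, random

def em_two_exponentials(data, iters=200):
    """EM for a two-component exponential mixture: a simplified stand-in
    for the fines/fibres mixture (censoring is ignored here)."""
    mean = sum(data) / len(data)
    p, m1, m2 = 0.5, mean / 2, mean * 2  # crude starting values
    for _ in range(iters):
        # E-step: posterior probability that each cell is a "fine".
        w = []
        for x in data:
            f1 = p * math.exp(-x / m1) / m1
            f2 = (1 - p) * math.exp(-x / m2) / m2
            w.append(f1 / (f1 + f2))
        # M-step: update the mixing weight and the component means.
        s = sum(w)
        p = s / len(data)
        m1 = sum(wi * x for wi, x in zip(w, data)) / s
        m2 = sum((1 - wi) * x for wi, x in zip(w, data)) / (len(data) - s)
    return p, m1, m2

# Hypothetical cell population: 40% fines (mean 0.3 mm), 60% fibres (mean 2.5 mm).
rng = random.Random(1)
data = [rng.expovariate(1 / 0.3) if rng.random() < 0.4 else rng.expovariate(1 / 2.5)
        for _ in range(2000)]
p_hat, m_fines, m_fibres = em_two_exponentials(data)
```

The stochastic EM of the thesis replaces the exact E-step with simulation from the conditional distribution of the true length, which is what makes the censored case tractable.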
104

Latent variable models for longitudinal twin data

Dominicus, Annica January 2006 (has links)
Longitudinal twin data provide important information for exploring sources of variation in human traits. In statistical models for twin data, unobserved genetic and environmental factors influencing the trait are represented by latent variables. In this way, trait variation can be decomposed into genetic and environmental components. With repeated measurements on twins, latent variables can be used to describe individual trajectories, and the genetic and environmental variance components are assessed as functions of age. This thesis contributes to statistical methodology for analysing longitudinal twin data by (i) exploring the use of random change point models for modelling variance as a function of age, (ii) assessing how nonresponse in twin studies may affect estimates of genetic and environmental influences, and (iii) providing a method for hypothesis testing of genetic and environmental variance components. The random change point model, in contrast to linear and quadratic random effects models, is shown to be very flexible in capturing variability as a function of age. Approximate maximum likelihood inference through first-order linearization of the random change point model is contrasted with Bayesian inference based on Markov chain Monte Carlo simulation. In a set of simulations based on a twin model for informative nonresponse, it is demonstrated how the effect of nonresponse on estimates of genetic and environmental variance components depends on the underlying nonresponse mechanism. This thesis also reveals that the standard procedure for testing variance components is inadequate, since the null hypothesis places the variance components on the boundary of the parameter space. The asymptotic distribution of the likelihood ratio statistic for testing variance components in classical twin models is derived, resulting in a mixture of chi-square distributions. 
Statistical methodology is illustrated with applications to empirical data on cognitive function from a longitudinal twin study of aging.
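The boundary problem described above has a concrete practical consequence: when a single variance component is tested against zero, the likelihood ratio statistic is asymptotically a 50:50 mixture of a point mass at zero and a chi-square distribution with one degree of freedom, not a plain chi-square(1). A minimal sketch of the corrected p-value:

```python
import math

def mixture_pvalue(lrt):
    """P-value for testing one variance component whose null value lies
    on the boundary of the parameter space: the LRT statistic is
    asymptotically a 50:50 mixture of a point mass at zero and a
    chi-square(1) distribution."""
    if lrt <= 0.0:
        return 1.0
    # Survival function of chi-square(1): P(X > t) = erfc(sqrt(t / 2)).
    return 0.5 * math.erfc(math.sqrt(lrt / 2.0))
```

Using the naive chi-square(1) reference distribution would double these p-values, making the test conservative; for several variance components the correct reference is a more general mixture of chi-squares.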
105

Arbitrage-free market models for interest rate options and future options: the multi-strike case

Ye, Hui, Ellanskaya, Anastasia January 2010 (has links)
This work mainly studies modeling and existence issues for martingale models of option markets with one stock and a collection of European call options for one fixed maturity and infinitely many strikes. In particular, we study Dupire's and Schweizer-Wissel's models, especially the latter. The two types of models take completely different pricing approaches: a martingale approach (in Dupire's model) and a market approach (in Schweizer-Wissel's model). After arguing that Dupire's model has several shortcomings compared to Schweizer-Wissel's model, we extend the latter to obtain variants for the case of options on interest rate indexes and futures options. Our models are based on the newly introduced definitions of local implied volatilities and a price level proposed by Schweizer and Wissel. We obtain explicit expressions for option prices as functions of the local implied volatilities and the price levels in our model variants. The absence of dynamic arbitrage in the market for such models can then be described in terms of drift restrictions on the models' coefficients. Finally, we demonstrate the application of such models with a simple example of an investment portfolio, showing how Schweizer-Wissel's model works in general.
107

Reliability calculations for complex systems / Tillförlitlighetsberäkningar för komplexa system

Lenz, Malte, Rhodin, Johan January 2011 (has links)
Functionality for efficient computation of properties of system lifetimes was developed, based on the Mathematica framework. The model of these systems consists of a system structure and the components' independent lifetime distributions. The components are assumed to be non-repairable. In this work a very general implementation was created, allowing a large number of lifetime distributions from Mathematica for all the component distributions. Any system structure with a monotone increasing structure function can be used. Special effort has been made to produce results quickly when the exponential distribution is used for the component distributions. Standby systems have also been modeled in similar generality; both warm and cold standby components are supported. During development, a large collection of examples was used to test functionality and efficiency, and a number of these examples are presented. The implementation was evaluated on large real-world system examples and was found to be efficient. New results are presented for standby systems, especially for the case of mixed warm and cold standby components.
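The combination of a monotone structure function with independent exponential component lifetimes can be sketched directly. This is a generic brute-force illustration of the idea, not the Mathematica implementation from the thesis, and the failure rates and structure are hypothetical:

```python
import math
from itertools import product

def system_reliability(structure, rates, t):
    """Reliability at time t for independent, non-repairable components
    with exponential lifetimes and any monotone structure function,
    by brute-force enumeration of component states (fine for small n)."""
    rel = 0.0
    for states in product((0, 1), repeat=len(rates)):
        if structure(states):
            prob = 1.0
            for s, lam in zip(states, rates):
                surv = math.exp(-lam * t)
                prob *= surv if s else 1.0 - surv
            rel += prob
    return rel

def two_of_three(states):
    """Hypothetical example: the system works if at least 2 of 3 components do."""
    return sum(states) >= 2

r = system_reliability(two_of_three, [0.1, 0.1, 0.1], 5.0)
# Closed form for comparison: 3 p^2 - 2 p^3 with p = exp(-0.5).
```

Enumeration is exponential in the number of components; the efficiency results in the thesis rest on exploiting the exponential distribution's structure rather than on enumeration.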
108

Parameter Estimation of the Pareto-Beta Jump-Diffusion Model in Times of Catastrophe Crisis

Reducha, Wojciech January 2011 (has links)
Jump diffusion models are being used more and more often in financial applications. Consisting of a Brownian motion (with drift) and a jump component, such models have a number of parameters that have to be set at some level. Maximum Likelihood Estimation (MLE) turns out to be suitable for this task; however, it is computationally demanding. For a complicated likelihood function it is seldom possible to find derivatives analytically. The global maximum of a likelihood function defined for a jump diffusion model can, however, be obtained by numerical methods. I chose the Bound Optimization BY Quadratic Approximation (BOBYQA) method, which proved effective in this case. However, the results of the estimation proved hard to interpret.
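As a rough illustration of derivative-free likelihood maximisation for a jump-diffusion model, the sketch below uses a generic Merton-type Poisson-mixture-of-normals density and a bounded grid search standing in for BOBYQA. The density and all parameter values are hypothetical, not the Pareto-Beta specification of the thesis:

```python
import math, random

def loglik_jump_intensity(increments, lam, dt=1 / 252, mu=0.0, sigma=0.2,
                          jump_sd=0.05, truncate=5):
    """Log-likelihood of i.i.d. jump-diffusion increments, modelled as a
    Poisson mixture of normals (a generic Merton-type stand-in, not the
    Pareto-Beta density of the thesis), with the Poisson sum truncated."""
    ll = 0.0
    for x in increments:
        dens = 0.0
        for k in range(truncate + 1):
            weight = math.exp(-lam * dt) * (lam * dt) ** k / math.factorial(k)
            var = sigma ** 2 * dt + k * jump_sd ** 2
            dens += weight * math.exp(-(x - mu * dt) ** 2 / (2 * var)) \
                / math.sqrt(2 * math.pi * var)
        ll += math.log(dens)
    return ll

# Simulate increments with jump intensity 50 per year, then recover it with a
# bounded derivative-free search (a plain grid here, standing in for BOBYQA).
rng = random.Random(2)
dt, true_lam = 1 / 252, 50.0
data = []
for _ in range(2000):
    k = sum(rng.random() < true_lam * dt / 10 for _ in range(10))  # crude Poisson draw
    data.append(rng.gauss(0.0, math.sqrt(0.2 ** 2 * dt + k * 0.05 ** 2)))
lam_hat = max((10.0, 30.0, 50.0, 80.0, 120.0),
              key=lambda l: loglik_jump_intensity(data, l))
```

A real BOBYQA run (for example via the nlopt or Py-BOBYQA packages) would optimise all parameters jointly within bounds instead of scanning one parameter on a grid.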
109

Problem of hedging of a portfolio with a unique rebalancing moment

Mironenko, Georgy January 2012 (has links)
The paper deals with the problem of finding an optimal one-time rebalancing strategy for the Bachelier model, and makes some remarks on the analogous problem within the Black-Scholes model. The problem is studied on a finite time interval under a mean-square criterion of optimality. The methods of the paper are based on results for the optimal stopping problem and the standard mean-square criterion. The solution of the problem considered in the paper lets us interpret how and, more importantly for us, when an investor should rebalance the portfolio in order to hedge it in the best way.
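The one-time rebalancing problem can be illustrated by Monte Carlo under the Bachelier model: delta-hedge a sold call, rebalance the hedge exactly once at time s, and measure the mean-square terminal error. The contract and market parameters below are hypothetical, and this sketch only evaluates a fixed rebalancing time rather than solving the optimal stopping problem:

```python
import math, random

def bachelier_call(S, K, sigma, tau):
    """Bachelier (normal model) call price with zero interest rate."""
    d = (S - K) / (sigma * math.sqrt(tau))
    Phi = 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))
    phi = math.exp(-d * d / 2.0) / math.sqrt(2.0 * math.pi)
    return (S - K) * Phi + sigma * math.sqrt(tau) * phi

def bachelier_delta(S, K, sigma, tau):
    d = (S - K) / (sigma * math.sqrt(tau))
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

def mean_square_error(s, n_paths=20000, S0=100.0, K=100.0,
                      sigma=10.0, T=1.0, seed=3):
    """Mean-square terminal error of delta-hedging a sold call when the
    hedge is rebalanced exactly once, at time s (parameters hypothetical)."""
    rng = random.Random(seed)
    c0 = bachelier_call(S0, K, sigma, T)
    d0 = bachelier_delta(S0, K, sigma, T)
    total = 0.0
    for _ in range(n_paths):
        Ss = S0 + sigma * math.sqrt(s) * rng.gauss(0.0, 1.0)
        ST = Ss + sigma * math.sqrt(T - s) * rng.gauss(0.0, 1.0)
        ds = bachelier_delta(Ss, K, sigma, T - s)  # rebalance at time s
        wealth = c0 + d0 * (Ss - S0) + ds * (ST - Ss)
        total += (wealth - max(ST - K, 0.0)) ** 2
    return total / n_paths
```

Scanning s over a grid shows the value of the single rebalancing; the paper's point is that the optimal rebalancing moment is generally a stopping time, not a fixed date.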
110

Perturbed Renewal Equations with Non-Polynomial Perturbations

Ni, Ying January 2010 (has links)
This thesis deals with a model of a nonlinearly perturbed continuous-time renewal equation with non-polynomial perturbations. The characteristics, namely the defect and the moments, of the distribution function generating the renewal equation are assumed to have expansions with respect to a non-polynomial asymptotic scale $\{\varphi_{\mathbf{n}}(\varepsilon) = \varepsilon^{\mathbf{n} \cdot \mathbf{w}}, \mathbf{n} \in \mathbf{N}_0^k\}$ as $\varepsilon \to 0$, where $\mathbf{N}_0$ is the set of non-negative integers, $\mathbf{N}_0^k \equiv \mathbf{N}_0 \times \cdots \times \mathbf{N}_0$ with the product taken $k$ times, $1 \leq k < \infty$, and $\mathbf{w}$ is a $k$-dimensional parameter vector satisfying certain properties. For the one-dimensional case, i.e. $k = 1$, this model reduces to the well-studied model of a nonlinearly perturbed renewal equation with polynomial perturbations. The goal of the present study is to obtain exponential asymptotics for the solution to the perturbed renewal equation, in the form of exponential asymptotic expansions, and to present possible applications. The thesis is based on three papers which study this model successively. Paper A investigates the two-dimensional case, i.e. $k = 2$. The corresponding exponential asymptotic expansion for the solution to the perturbed renewal equation is given. The asymptotic results are applied to an example of a perturbed risk process, which leads to diffusion-approximation-type asymptotics for the ruin probability. Numerical experiments on this example of a perturbed risk process are conducted in paper B, where Monte Carlo simulation is used to study the accuracy and properties of the asymptotic formulas.
Paper C presents the asymptotic results for the more general case where the dimension $k$ satisfies $1 \leq k < \infty$; these are applied to the asymptotic analysis of the ruin probability in an example of perturbed risk processes with this general type of non-polynomial perturbations. All the proofs of the theorems stated in paper C are collected in its supplement, paper D.
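The kind of numerical experiment described for paper B can be sketched with a crude Monte Carlo estimate of the ruin probability for a classical risk process with exponential claims. The parameters are hypothetical, and this plain model omits the perturbations that the thesis studies:

```python
import random

def ruin_probability(u, premium_rate=1.2, claim_rate=1.0, claim_mean=1.0,
                     horizon=200.0, n_paths=5000, seed=4):
    """Crude Monte Carlo finite-horizon ruin probability for a classical
    risk process with exponential claims (all parameters hypothetical):
    surplus(t) = u + premium_rate * t - total claims by time t."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(claim_rate)  # next claim arrival
            if t > horizon:
                break
            claims += rng.expovariate(1.0 / claim_mean)
            if u + premium_rate * t - claims < 0.0:
                ruined += 1
                break
    return ruined / n_paths

# For these parameters the classical infinite-horizon formula gives
# psi(u) = (1 / 1.2) * exp(-u / 6), so psi(0) is about 0.83.
```

Comparing such Monte Carlo estimates against asymptotic expansions of the ruin probability is precisely the role that simulation plays in paper B.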
