About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium or country archive and want to be added, details can be found on the NDLTD website.
91

The optimality of a dividend barrier strategy for Levy insurance risk processes, with a focus on the univariate Erlang mixture

Ali, Javid January 2011 (has links)
In insurance risk theory, the surplus of an insurance company is modelled to monitor and quantify its risks. With the outgo of claims and inflow of premiums, the insurer needs to determine what financial portfolio ensures the soundness of the company’s future while satisfying the shareholders’ interests. It is usually assumed that the net profit condition (i.e., the expectation of the process is positive) is satisfied, which then implies that the process would drift towards infinity. To correct this unrealistic behaviour, the surplus process was modified to include the payout of dividends until the time of ruin. Under this more realistic surplus process, a topic of growing interest is determining which dividend strategy is optimal, where optimality is in the sense of maximizing the expected present value of dividend payments. This problem dates back to the work of Bruno de Finetti (1957), who showed that if the surplus process is modelled as a random walk with ±1 step sizes, the optimal dividend payment strategy is a barrier strategy. Such a strategy pays out as dividends any excess of the surplus above some threshold. Since then, other examples where a barrier strategy is optimal include the Brownian motion model (Gerber and Shiu (2004)) and the compound Poisson model with exponential claims (Gerber and Shiu (2006)). In this thesis, we focus on the optimality of a barrier strategy in the more general Lévy risk models. The risk process will be formulated as a spectrally negative Lévy process, a continuous-time stochastic process with stationary and independent increments which provides an extension of the classical Cramér-Lundberg model. This includes the Brownian and the compound Poisson risk processes as special cases. In this setting, results are expressed in terms of “scale functions”, a family of functions known only through their Laplace transform.
Loeffen (2008) gives a sufficient condition on the jump distribution of the process for a barrier strategy to be optimal. This condition was then improved upon by Loeffen and Renaud (2010), who considered a more general control problem. The first chapter provides a brief review of the theory of spectrally negative Lévy processes and scale functions. In Chapter 2, we define the optimal dividends problem and survey existing results in the literature. When the surplus process is given by the Cramér-Lundberg process with a Brownian motion component, we provide a sufficient condition on the parameters of this process for the optimality of a dividend barrier strategy. Chapter 3 focuses on the case where the claims distribution is a univariate mixture of Erlang distributions with a common scale parameter. Analytical results for the Value-at-Risk and Tail-Value-at-Risk, and for the Euler risk contribution to the Conditional Tail Expectation, are provided. Additionally, we give some results for the scale function and the optimal dividends problem. In the final chapter, we propose an expectation-maximization (EM) algorithm similar to that in Lee and Lin (2009) for fitting the univariate Erlang mixture to data. This algorithm is implemented, and numerical results on the goodness of fit to sample data and on the optimal dividends problem are presented.
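To fix ideas about the kind of fitting procedure the final chapter describes, here is a minimal EM sketch for a univariate mixture of Erlang distributions with a common scale parameter. This is an independent illustration, not the thesis code or the exact Lee-Lin algorithm; the fixed integer shape vector, the moment-matching initial scale, and the closed-form M-step updates are our own simplifying assumptions.

```python
import math
import numpy as np

def erlang_pdf(x, shape, scale):
    # Erlang density with integer shape r and common scale theta:
    # f(x) = x^(r-1) exp(-x/theta) / ((r-1)! theta^r)
    return x ** (shape - 1) * np.exp(-x / scale) / (math.factorial(shape - 1) * scale ** shape)

def em_erlang_mixture(x, shapes, n_iter=300):
    """EM for a univariate Erlang mixture with a common scale parameter.

    The shapes are held fixed; the E-step computes component responsibilities,
    and the M-step has closed-form updates for the weights and the scale.
    """
    x = np.asarray(x, dtype=float)
    shapes = np.asarray(shapes)
    m = len(shapes)
    weights = np.full(m, 1.0 / m)
    scale = x.mean() / shapes.mean()   # moment-matching initial scale
    for _ in range(n_iter):
        # E-step: posterior probability that observation i came from component j
        dens = np.column_stack([w * erlang_pdf(x, int(r), scale)
                                for w, r in zip(weights, shapes)])
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weights are mean responsibilities; the common scale is
        # total observed time over total responsibility-weighted shape
        weights = resp.mean(axis=0)
        scale = x.sum() / (resp @ shapes).sum()
    return weights, scale
```

A useful property of these updates is that the fitted mixture mean matches the sample mean exactly at every iteration, which gives a quick sanity check on any implementation.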
92

A Study of Designs in Clinical Trials and Schedules in Operating Rooms

Hung, Wan-Ping 20 January 2011 (has links)
The design of clinical trials is one of the important problems in medical statistics. Its main purpose is to determine the methodology and the sample size required for a study testing the safety and efficacy of drugs. It is also part of the Food and Drug Administration approval process. In this thesis, we first study the comparison of the efficacy of drugs in clinical trials. We focus on the two-sample comparison of proportions to investigate testing strategies based on a two-stage design. The properties and advantages of the procedures from the proposed testing designs are demonstrated by numerical results, where comparison with the classical method is made under the same sample size. A real example discussed in Cardenal et al. (1999) is provided to explain how the methods may be used in practice. Some figures are also presented to illustrate the pattern changes of the power functions of these methods. In addition, the proposed procedure is compared with the Pocock (1977) and O'Brien and Fleming (1979) tests based on the standardized statistics. In the second part of this work, the operating room scheduling problem is considered, which is also important in medical studies. The national health insurance system has been in operation in Taiwan for more than ten years. The Bureau of National Health Insurance continues to improve the system and to establish a reasonable fee ratio for people in different income ranges. In line with these adjustments, hospitals must pay more attention to controlling running costs. A major share of a hospital's revenue is generated by its surgery center operations, so effective operating room management is necessary to maintain financial balance. For this topic, this study focuses on the model fitting of operating times and on operating room scheduling.
Log-normal and mixture log-normal distributions are found to be statistically acceptable for describing these operating times. The procedure is illustrated through an analysis of thirteen operations performed in the gynecology department of a major teaching hospital in southern Taiwan. The fitted distributions are selected through information criteria and by bootstrapping the log-likelihood ratio test, and the best-fitting distributions are used to evaluate the performance of operation combinations that occurred in the real daily schedules. Moreover, we classify the operations into three categories, with three stages for each operation. Based on this classification, a strategy for efficient scheduling is proposed, and the benefits of rescheduling under the proposed strategy are compared with the original schedule observed.
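As a sketch of the first model-fitting step, the log-normal MLE and its information criterion can be computed directly from the logged operating times. This is only an illustration of the single log-normal case, assuming complete (uncensored) times; the thesis's mixture log-normal fits and bootstrap likelihood-ratio tests are not reproduced here.

```python
import math
import numpy as np

def fit_lognormal(times):
    """MLE for a log-normal model of operating times, plus AIC.

    The log of a log-normal variable is normal, so the MLEs are the
    sample mean and the ML (divide-by-n) variance of log(times).
    """
    logs = np.log(np.asarray(times, dtype=float))
    n = len(logs)
    mu, sigma2 = logs.mean(), logs.var()
    # log-likelihood of the log-normal density at the MLE; the -sum(logs)
    # term is the Jacobian from working on the log scale
    loglik = (-n / 2 * math.log(2 * math.pi * sigma2)
              - logs.sum()
              - ((logs - mu) ** 2).sum() / (2 * sigma2))
    aic = 2 * 2 - 2 * loglik            # two free parameters: mu, sigma
    return mu, math.sqrt(sigma2), aic
```

Competing candidates (e.g. a two-component mixture) would be fitted the same way and compared on AIC, with the caveat that mixture comparisons need the bootstrap test the abstract mentions because the usual chi-square asymptotics fail on the boundary.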
93

MULTI-STATE MODELS FOR INTERVAL CENSORED DATA WITH COMPETING RISK

Wei, Shaoceng 01 January 2015 (has links)
Multi-state models are often used to evaluate the effect of death as a competing event for the development of dementia in longitudinal studies of the cognitive status of elderly subjects. In this dissertation, both a multi-state Markov model and a semi-Markov model are used to characterize the flow of subjects from intact cognition to dementia, with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. First, a multi-state Markov model with three transient states, intact cognition, mild cognitive impairment (M.C.I.) and global impairment (G.I.), and one absorbing state, dementia, is used to model the cognitive panel data. A Weibull model and a Cox proportional hazards (Cox PH) model are used to fit the time to death based on age at entry and APOE4 status, and a shared random effect correlates this survival time with the transition model. Second, we apply a semi-Markov process in which the waiting times are assumed Weibull distributed, except for transitions from the baseline state, which are exponentially distributed, and we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher-order integrals needed for likelihood-based estimation. At the end of this dissertation we extend a non-parametric “local EM algorithm” to obtain a smooth estimator of the cause-specific hazard function (CSH) in the presence of competing risks. All the proposed methods are justified by simulation studies and by applications to the Nun Study, a longitudinal study of late-life cognition in a cohort of 461 subjects.
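The competing-risk structure of such a model can be illustrated with a small discrete-time absorbing Markov chain: three transient cognitive states and two absorbing states (dementia and death), with absorption probabilities computed from the fundamental matrix. The transition probabilities below are invented for illustration and are not estimates from the Nun Study.

```python
import numpy as np

# States: 0 intact, 1 M.C.I., 2 G.I. (transient); 3 dementia, 4 death (absorbing).
# All probabilities here are made up for illustration only.
P = np.array([
    [0.80, 0.12, 0.03, 0.00, 0.05],
    [0.05, 0.70, 0.15, 0.04, 0.06],
    [0.00, 0.05, 0.70, 0.17, 0.08],
    [0.00, 0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

def absorption_probs(P, transient, absorbing):
    """Probability of ending in each absorbing state, per starting state.

    Uses the standard fundamental-matrix identity: with Q the transient
    block and R the transient-to-absorbing block, B = (I - Q)^{-1} R.
    """
    Q = P[np.ix_(transient, transient)]
    R = P[np.ix_(transient, absorbing)]
    N = np.linalg.inv(np.eye(len(transient)) - Q)
    return N @ R

B = absorption_probs(P, transient=[0, 1, 2], absorbing=[3, 4])
# B[i] = (P(dementia), P(death)) when starting from transient state i.
```

This is the discrete-time analogue of the quantities the continuous-time model targets: each row of B splits a subject's eventual fate between dementia and death, and rows sum to one because absorption is certain.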
94

Parametric Potential-Outcome Survival Models for Causal Inference

Gong, Zhaojing January 2008 (has links)
Estimating causal effects in clinical trials is often complicated by treatment noncompliance and missing outcomes. In time-to-event studies, estimation is further complicated by censoring, a type of missing outcome whose mechanism may be non-ignorable. While new estimators have recently been proposed to account for noncompliance and missing outcomes, few studies have specifically considered time-to-event outcomes, where even the intention-to-treat (ITT) estimator is potentially biased for estimating causal effects of assigned treatment. In this thesis, we develop a series of parametric potential-outcome (PPO) survival models for the analysis of randomised controlled trials (RCTs) with time-to-event outcomes and noncompliance. Both ignorable and non-ignorable censoring mechanisms are considered. We approach model-fitting from a likelihood-based perspective, using the EM algorithm to locate maximum likelihood estimators. We are not aware of any previous work that addresses these complications jointly. In addition, we give new formulations of the average causal effect (ACE) and the complier average causal effect (CACE) to suit survival analysis. To illustrate the likelihood-based method proposed in this thesis, the HIP breast cancer trial data (Baker, 1998; Shapiro, 1988) were re-analysed using specific PPO-survival models, namely the Weibull- and log-normal-based PPO-survival models, which assume that the failure-time and censoring-time distributions both follow Weibull or log-normal distributions. Furthermore, an extended PPO-survival model is also derived in this thesis, which permits investigation of causal effects after accommodating certain pre-treatment covariates. This is an important contribution to the potential-outcomes, survival and RCT literature. For comparison, the Frangakis-Rubin (F-R) model (Frangakis and Rubin, 1999) is also applied to the HIP breast cancer trial data. To date, the F-R model has not been applied to any time-to-event data in the literature.
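The likelihood machinery underlying such parametric survival models is easiest to see in the exponential special case (a Weibull with shape 1), where ignorable right censoring gives a closed-form MLE. This is only a sketch of the censored-likelihood idea, not the thesis's PPO model, which adds compliance strata and potential outcomes on top of it.

```python
import numpy as np

def exponential_mle(times, events):
    """MLE of the exponential rate under ignorable right censoring.

    events[i] is 1 if the failure was observed at times[i], 0 if the
    subject was censored then. The likelihood is
        prod_i f(t_i)^{d_i} * S(t_i)^{1 - d_i},
    which for f(t) = lam * exp(-lam t) maximizes at
        lam_hat = (number of observed events) / (total time at risk).
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    return events.sum() / times.sum()
```

For Weibull or log-normal failure and censoring times the same likelihood has no closed form, which is where the EM iterations described above come in.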
96

Design and analysis of response selective samples in observational studies

Grünewald, Maria January 2011 (has links)
Outcome-dependent sampling may increase efficiency in observational studies. It is, however, not always obvious how to sample efficiently, or how to analyze the resulting data without introducing bias. This thesis describes a general framework for efficiency calculations in multi-stage sampling, with a focus on what is sometimes referred to as ascertainment sampling. A method for correcting for the sampling scheme in the analysis of ascertainment samples is also presented. Simulation-based methods are used to overcome computational issues in both the efficiency calculations and the analysis of data. (At the time of the doctoral defense, Paper 1 was submitted but not yet published.)
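A concrete instance of why a correction for the sampling scheme is needed: if cases are oversampled relative to controls, naive estimates are biased, while inverse-probability (Horvitz-Thompson) weighting recovers the population quantity. The design below, with a 10% outcome rate and sampling probabilities 1.0 for cases and 0.2 for controls, is made up for illustration and is not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population: binary outcome y with P(y = 1) = 0.1. We keep every case
# and each control with probability 0.2 (a hypothetical design).
y = rng.binomial(1, 0.1, size=200_000)
p_sample = np.where(y == 1, 1.0, 0.2)
kept = rng.random(y.size) < p_sample

y_s = y[kept]
w = 1.0 / p_sample[kept]              # inverse-probability weights

naive = y_s.mean()                    # biased: cases are overrepresented
weighted = np.average(y_s, weights=w) # weighting undoes the sampling scheme

# naive is roughly 0.1 / (0.1 + 0.9 * 0.2), about 0.36,
# while weighted is close to the true 0.1.
```

The same weighting idea generalizes to likelihood-based corrections for ascertainment sampling, where each contribution is adjusted by the probability that the observation was ascertained at all.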
97

Models and estimation algorithms for nonparametric finite mixtures with conditionally independent multivariate component densities

Hoang, Vy-Thuy-Lynh 20 April 2017 (has links)
Recently, several authors have proposed models and estimation algorithms for finite nonparametric multivariate mixtures, whose identifiability is typically not obvious. Among the models considered, the assumption that coordinates are independent conditional on the subpopulation from which each observation is drawn has been the subject of increasing attention, in view of the theoretical and practical developments it allows, particularly with the multiplicity of variables coming into play in the modern statistical framework. In this work we first consider a more general model assuming independence, conditional on the component, of multivariate blocks of coordinates instead of univariate coordinates, allowing for any dependence structure within these blocks. Consequently, the density functions of these blocks are completely multivariate and nonparametric. We present identifiability arguments and introduce, for estimation in this model, two methodological algorithms whose computational procedures resemble a true EM algorithm but include an additional density-estimation step: a fast algorithm showing empirical efficiency without theoretical justification, and a smoothed algorithm possessing a monotonicity property, as any EM algorithm does, but more computationally demanding. We also discuss computationally efficient methods for estimation and propose some strategies. Next, we consider a multivariate extension of the mixture models used in the framework of multiple hypothesis testing, allowing a new multivariate version of False Discovery Rate control.
We propose a constrained version of our previous algorithm, specifically designed for this model. The behavior of the EM-type algorithms we propose is studied numerically through several Monte Carlo experiments and on high-dimensional real data, and compared with existing methods in the literature. Finally, the code for our new algorithms is progressively implemented as new functions in the publicly available package mixtools for the R statistical software.
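The "EM with a density-estimation step" idea can be sketched in miniature for a univariate two-component nonparametric mixture: the M-step's parametric density update is replaced by a weighted kernel density estimate per component. This toy, with a fixed bandwidth and a median-split initialization, is our own illustration and not the mixtools npEM implementation.

```python
import numpy as np

def gauss_kde(x_eval, data, weights, bw):
    """Weighted Gaussian kernel density estimate evaluated at x_eval."""
    u = (x_eval[:, None] - data[None, :]) / bw
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return (k * weights[None, :]).sum(axis=1) / (weights.sum() * bw)

def np_em_two_components(x, n_iter=50, bw=0.3):
    """EM-like iteration with a nonparametric density-estimation step.

    Responsibilities play the role of the E-step; the component densities
    are re-estimated each pass as responsibility-weighted KDEs.
    """
    x = np.asarray(x, dtype=float)
    # crude initialization: split the sample around its median
    hard = (x > np.median(x)).astype(float)
    resp = np.column_stack([1 - hard, hard])
    for _ in range(n_iter):
        lam = resp.mean(axis=0)                        # mixing proportions
        f = np.column_stack([gauss_kde(x, x, resp[:, j], bw)
                             for j in range(2)])       # density-estimation step
        post = lam * f
        resp = post / post.sum(axis=1, keepdims=True)  # E-step
    return lam, resp
```

On well-separated components this converges quickly; the smoothed variant described above differs in where the kernel smoothing enters, which is what buys the monotonicity property.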
98

Estimation of piecewise affine models in state space

Rui, Rafael January 2016 (has links)
This thesis focuses on the state estimation and parameter identification problems for piecewise affine models. Piecewise affine models are obtained when the state domain or the input domain is partitioned into regions and, for each region, a linear or affine submodel is used to describe the system dynamics. We propose a recursive state estimation algorithm and a parameter identification algorithm for a class of piecewise affine models. The proposed state estimator is Bayesian and uses a Kalman filter in each submodel; the cumulative distribution function is used to compute the posterior distribution of the state as well as the probability of each submodel. The proposed identification method uses the Expectation-Maximization (EM) algorithm to identify the model parameters. We use the cumulative distribution function to compute the probability of each submodel based on the system measurements. Subsequently, we use the Kalman smoother to estimate the state and compute a surrogate function for the likelihood function, which is then used to estimate the model parameters. The proposed estimator was used to estimate the state of a nonlinear model for vibrations caused by clearances. Numerical simulations were performed in which the proposed method was compared with the extended Kalman filter and the particle filter. The identification algorithm was used to identify the model parameters of the JAS 39 Gripen aircraft as well as of the nonlinear model for vibrations caused by clearances.
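The per-submodel Kalman filtering idea can be sketched for a scalar system with two hypothetical affine regimes. The thesis weights submodels by CDF-based probabilities; this simplified sketch instead hard-selects the active submodel from the current state estimate's region, and all parameters below are invented for illustration.

```python
import numpy as np

# Two hypothetical scalar affine submodels x' = a*x + c + w, y = x + v,
# selected by the region the current state estimate falls in.
MODELS = {                      # region: (a, c), made-up values
    "neg": (0.9, -0.5),         # active when the estimate is < 0
    "pos": (0.9, +0.5),         # active when the estimate is >= 0
}
Q, R = 0.01, 0.1                # process and measurement noise variances

def pwa_kalman_step(x_hat, p, y):
    """One predict/update cycle of a scalar Kalman filter with the
    affine submodel chosen by the region of the state estimate."""
    a, c = MODELS["neg"] if x_hat < 0 else MODELS["pos"]
    # predict with the active affine submodel
    x_pred = a * x_hat + c
    p_pred = a * p * a + Q
    # standard scalar Kalman update (observation matrix H = 1)
    k = p_pred / (p_pred + R)
    x_new = x_pred + k * (y - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

Feeding a constant measurement drives the filter to a fixed point that balances the submodel's affine drift against the data, which is a quick way to check the update algebra.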
100

NIG distribution in modelling stock returns with assumption about stochastic volatility : Estimation of parameters and application to VaR and ETL

Kucharska, Magdalena, Pielaszkiewicz, Jolanta Maria January 2009 (has links)
We model Normal Inverse Gaussian (NIG) distributed log-returns under an assumption of stochastic volatility. We consider different methods of parametrization of returns and, following the paper of Lindberg [21], we assume that the volatility is a linear function of the number of trades. Going beyond Lindberg's paper, we suggest daily stock volumes and traded amounts as alternative measures of volatility. As an application of the models, we perform Value-at-Risk and Expected Tail Loss predictions using Lindberg's volatility model and our own suggested model. These applications are new and not described in the literature. For a better understanding of our calculations, programs and simulations, basic information on the properties of the Normal Inverse Gaussian and Inverse Gaussian distributions is provided. Practical applications of the models are implemented on Nasdaq-OMX data, where we calculate Value-at-Risk and Expected Tail Loss for Ericsson B stock over the period 1999 to 2004.
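For reference, the two risk measures named here can be written down empirically in a few lines. This sketch computes them from a sample of returns; the thesis instead derives them from the fitted NIG model with its volatility proxy, so this is only a definition-level illustration.

```python
import numpy as np

def var_etl(returns, alpha=0.95):
    """Empirical Value-at-Risk and Expected Tail Loss at level alpha.

    Losses are the negated returns; VaR is the alpha-quantile of the
    loss distribution, and ETL is the mean loss at or beyond the VaR.
    """
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)
    etl = losses[losses >= var].mean()
    return var, etl
```

By construction ETL is at least as large as VaR, since it averages only the losses in the tail beyond the VaR threshold.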
