101
Estimação de modelos afins por partes em espaço de estados [Estimation of piecewise affine state-space models]. Rui, Rafael, January 2016.
This thesis focuses on the state estimation and parameter identification problems for piecewise affine models. Piecewise affine models are obtained when the state domain or the input domain is partitioned into regions and, for each region, a linear or affine submodel is used to describe the system dynamics. We propose a recursive state estimation algorithm and a parameter identification algorithm for a class of piecewise affine models. The proposed state estimator is Bayesian and runs a Kalman filter in each submodel; the cumulative distribution function is used to compute the posterior distribution of the state as well as the probability of each submodel. The proposed identification method uses the Expectation Maximization (EM) algorithm to identify the model parameters. We use the cumulative distribution function to compute the probability of each submodel based on the system measurements. Subsequently, we use the Kalman smoother to estimate the state and to compute a surrogate for the likelihood function, which is then used to estimate the model parameters. The proposed estimator was used to estimate the state of a nonlinear model of vibrations caused by clearances. Numerical simulations were performed in which we compared the proposed method to the extended Kalman filter and the particle filter. The identification algorithm was used to identify the model parameters of the JAS 39 Gripen aircraft as well as the nonlinear model of vibrations caused by clearances.
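As an illustration of the filter-bank idea, the sketch below runs one Kalman filter per affine submodel and fuses the results. It is a generic Gaussian-sum filter under assumed scalar dynamics, not the thesis's CDF-based estimator: submodel probabilities here come from measurement likelihoods rather than from the cumulative distribution function, and all numbers are illustrative.

```python
# Hedged sketch: Gaussian-sum filtering for a two-regime piecewise affine
# (PWA) scalar system. Not the thesis's CDF-based estimator.
import numpy as np

rng = np.random.default_rng(0)

# Two affine submodels x' = a_i x + b_i + w, active by the sign of x.
A = np.array([0.9, 0.6]); B = np.array([0.5, -0.5])
Q, R = 0.1, 0.2             # process / measurement noise variances
C = 1.0                     # y = C x + v

def step_truth(x):
    i = 0 if x < 0 else 1
    return A[i] * x + B[i] + rng.normal(0, np.sqrt(Q))

x_true, x_hat, P = -1.0, 0.0, 1.0
for t in range(50):
    x_true = step_truth(x_true)
    y = C * x_true + rng.normal(0, np.sqrt(R))
    # Run one Kalman predict/update per submodel.
    means, vars_, logw = [], [], []
    for i in range(2):
        xp = A[i] * x_hat + B[i]          # predicted mean
        Pp = A[i] ** 2 * P + Q            # predicted variance
        S = C ** 2 * Pp + R               # innovation variance
        K = Pp * C / S                    # Kalman gain
        means.append(xp + K * (y - C * xp))
        vars_.append((1 - K * C) * Pp)
        # Submodel weight from the Gaussian measurement likelihood.
        logw.append(-0.5 * ((y - C * xp) ** 2 / S + np.log(2 * np.pi * S)))
    w = np.exp(logw - np.max(logw)); w /= w.sum()
    # Collapse the mixture to one Gaussian (moment matching).
    x_hat = float(np.dot(w, means))
    P = float(np.dot(w, vars_) + np.dot(w, (np.array(means) - x_hat) ** 2))

print(f"final truth {x_true:+.3f}, estimate {x_hat:+.3f}")
```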
102
Estimation of wood fibre length distributions from censored mixture data. Svensson, Ingrid, January 2007.
The motivating forestry background for this thesis is the need for fast, non-destructive, and cost-efficient methods to estimate fibre length distributions in standing trees in order to evaluate the effect of silvicultural methods and breeding programs on fibre length. Increment cores are a commonly used non-destructive sampling method in forestry. An increment core is a cylindrical wood sample taken with a special borer, and the methods proposed in this thesis are especially developed for data from increment cores. Nevertheless, the methods can be used for data from other sampling frames as well, for example for sticks with the shape of an elongated rectangular box.

This thesis proposes methods to estimate fibre length distributions based on censored mixture data from wood samples. Due to sampling procedures, wood samples contain cut (censored) and uncut observations. Moreover, the samples consist not only of the fibres of interest but of other cells (fines) as well. When the cell lengths are determined by an automatic optical fibre-analyser, there is no practical possibility to distinguish between cut and uncut cells or between fines and fibres. Thus the resulting data come from a censored version of a mixture of the fine and fibre length distributions in the tree. The methods proposed in this thesis can handle this lack of information.

Two parametric methods are proposed to estimate the fine and fibre length distributions in a tree. The first method is based on grouped data. The probabilities that the length of a cell from the sample falls into different length classes are derived, with the censoring caused by the sampling frame taken into account. These probabilities are functions of the unknown parameters, and ML estimates are found from the corresponding multinomial model.

The second method is a stochastic version of the EM algorithm based on the individual length measurements. The method is developed for the case where the distributions of the true lengths of the cells at least partially appearing in the sample belong to exponential families. The cell length distribution in the sample, and the conditional distribution of the true length of a cell at least partially appearing in the sample given its length in the sample, are derived. Both distributions are necessary in order to use the stochastic EM algorithm. Consistency and asymptotic normality of the stochastic EM estimates are proved.

The methods are applied to real data from increment cores taken from Scots pine trees (Pinus sylvestris L.) in northern Sweden and further evaluated through simulation studies. Both methods work well for sample sizes commonly obtained in practice.
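For intuition on the EM machinery underlying the second method, here is a plain EM fit of a two-component log-normal mixture (think fines versus fibres) under the simplifying assumption of fully observed, uncensored lengths; the censored sampling-frame densities derived in the thesis are not reproduced, and all parameter values are illustrative.

```python
# Hedged sketch: EM for a two-component mixture on the log-length scale,
# illustrating the E- and M-steps that a (stochastic) EM builds on.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Simulated log-lengths: short fines and longer fibres.
z = np.concatenate([rng.normal(-1.0, 0.3, 300),   # "fines"
                    rng.normal(0.8, 0.4, 700)])   # "fibres"

pi, mu, sd = 0.5, np.array([-0.5, 0.5]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior probability that each cell is a fine.
    d0 = pi * norm.pdf(z, mu[0], sd[0])
    d1 = (1 - pi) * norm.pdf(z, mu[1], sd[1])
    r = d0 / (d0 + d1)
    # M-step: weighted ML updates of the component parameters.
    pi = r.mean()
    mu = np.array([np.average(z, weights=r), np.average(z, weights=1 - r)])
    sd = np.sqrt([np.average((z - mu[0])**2, weights=r),
                  np.average((z - mu[1])**2, weights=1 - r)])

print(f"mixing weight {pi:.2f}, means {mu.round(2)}, sds {sd.round(2)}")
```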
103
Code-aided synchronization for digital burst communications. Herzet, Cédric, 21 April 2006.
This thesis deals with the synchronization of digital communication systems. Synchronization (from the Greek syn (together) and chronos (time)) denotes the task of making two systems run at the same time. In communication systems, synchronizing the transmitter and the receiver requires accurately estimating a number of parameters such as the carrier frequency and phase offsets, the timing epoch...
In the early days of digital communications, synchronizers used to operate in either data-aided (DA) or non-data-aided (NDA) modes. However, with the recent advent of powerful coding techniques, these conventional synchronization modes have been shown to be unable to properly synchronize state-of-the-art receivers.
In this context, we investigate in this thesis a new family of synchronizers referred to as code-aided (CA) synchronizers. The idea behind CA synchronization is to exploit the structure of the code used to protect the data in order to improve the estimation quality achieved by the synchronizers. In the first part of the thesis, we address the issue of turbo synchronization, i.e., the iterative synchronization of continuous parameters. In particular, we derive several mathematical frameworks enabling a systematic derivation of turbo synchronizers and a deeper understanding of their behavior. In the second part, we focus on the so-called CA hypothesis testing problem. More particularly, we derive optimal solutions to this problem and propose efficient implementations of the corresponding algorithms. Finally, in the last part of this thesis, we derive theoretical lower bounds on the performance of turbo synchronizers.
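A minimal sketch of the turbo-synchronization idea, under assumed BPSK signalling: soft symbol decisions (a plain tanh here, standing in for the soft information a decoder would feed back) iteratively refine a carrier-phase estimate. This is a generic EM-style phase estimator, not any specific algorithm from the thesis.

```python
# Hedged sketch: code-aided carrier-phase estimation for BPSK.
# Assumed model: y[k] = s[k] * exp(j*phi) + n[k], s[k] in {-1, +1}.
import numpy as np

rng = np.random.default_rng(2)
N, phi_true, sigma2 = 512, 0.4, 0.2
s = rng.choice([-1.0, 1.0], N)
y = s * np.exp(1j * phi_true) + \
    np.sqrt(sigma2 / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))

phi = 0.0                       # initial phase estimate
for it in range(10):
    # E-step: posterior mean of each BPSK symbol given the current phase.
    s_soft = np.tanh(2.0 * np.real(y * np.exp(-1j * phi)) / sigma2)
    # M-step: phase that best aligns the observations with the soft symbols.
    phi = np.angle(np.sum(y * s_soft))

print(f"true phase {phi_true:.3f}, estimate {phi:.3f}")
```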
105
The optimality of a dividend barrier strategy for Lévy insurance risk processes, with a focus on the univariate Erlang mixture. Ali, Javid, January 2011.
In insurance risk theory, the surplus of an insurance company is modelled to monitor and quantify its risks. With the outgo of claims and inflow of premiums, the insurer needs to determine what financial portfolio ensures the soundness of the company’s future while satisfying the shareholders’ interests. It is usually assumed that the net profit condition (i.e. the expectation of the process is positive) is satisfied, which then implies that this process would drift towards infinity. To correct this unrealistic behaviour, the surplus process was modified to include the payout of dividends until the time of ruin.
Under this more realistic surplus process, a topic of growing interest is determining which dividend strategy is optimal, where optimality is in the sense of maximizing the expected present value of dividend payments. This problem dates back to the work of Bruno De Finetti (1957) where it was shown that if the surplus process is modelled as a random walk with ± 1 step sizes, the optimal dividend payment strategy is a barrier strategy. Such a strategy pays as dividends any excess of the surplus above some threshold. Since then, other examples where a barrier strategy is optimal include the Brownian motion model (Gerber and Shiu (2004)) and the compound Poisson process model with exponential claims (Gerber and Shiu (2006)).
In this thesis, we focus on the optimality of a barrier strategy in the more general Lévy risk models. The risk process is formulated as a spectrally negative Lévy process, a continuous-time stochastic process with stationary, independent increments which extends the classical Cramér-Lundberg model. This includes the Brownian and the compound Poisson risk processes as special cases. In this setting, results are expressed in terms of “scale functions”, a family of functions known only through their Laplace transform. Loeffen (2008) gives a sufficient condition on the jump distribution of the process for a barrier strategy to be optimal. This condition was then improved upon by Loeffen and Renaud (2010) while considering a more general control problem.
The first chapter provides a brief review of the theory of spectrally negative Lévy processes and scale functions. In Chapter 2, we define the optimal dividends problem and review existing results in the literature. When the surplus process is given by the Cramér-Lundberg process with a Brownian motion component, we provide a sufficient condition on the parameters of this process for the optimality of a dividend barrier strategy.
Chapter 3 focuses on the case when the claims distribution is given by a univariate mixture of Erlang distributions with a common scale parameter. Analytical results for the Value-at-Risk and Tail-Value-at-Risk, and the Euler risk contribution to the Conditional Tail Expectation, are provided. Additionally, we give some results for the scale function and the optimal dividends problem. In the final chapter, we propose an expectation maximization (EM) algorithm similar to that in Lee and Lin (2009) for fitting the univariate Erlang mixture to data. This algorithm is implemented, and numerical results on the goodness of fit to sample data and on the optimal dividends problem are presented.
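A sketch of the EM fit in the spirit of Lee and Lin (2009), under the simplifying assumption that the Erlang shape parameters are fixed and known; only the mixing weights and the common scale parameter are updated, both in closed form. Data and shapes are illustrative.

```python
# Hedged sketch: EM for a mixture of Erlangs with a common scale.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(3)
theta_true = 2.0
x = np.concatenate([gamma.rvs(a=1, scale=theta_true, size=400, random_state=rng),
                    gamma.rvs(a=4, scale=theta_true, size=600, random_state=rng)])

shapes = np.array([1, 4])            # fixed Erlang shapes r_j (assumed known)
alpha = np.array([0.5, 0.5])         # mixing weights
theta = 1.0                          # common scale
for _ in range(300):
    # E-step: responsibility of each Erlang component for each claim.
    dens = alpha * gamma.pdf(x[:, None], a=shapes, scale=theta)
    z = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form updates; theta = sum(z*x) / sum(z*r).
    alpha = z.mean(axis=0)
    theta = (z * x[:, None]).sum() / (z * shapes).sum()

print(f"weights {alpha.round(3)}, common scale {theta:.3f} (true {theta_true})")
```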
106
A Study of Designs in Clinical Trials and Schedules in Operating Rooms. Hung, Wan-Ping, 20 January 2011.
The design of clinical trials is one of the important problems in medical statistics. Its main purpose is to determine the methodology and the sample size required for a testing study to examine the safety and efficacy of drugs. It is also part of the Food and Drug Administration approval process. In this thesis, we first study the comparison of the efficacy of drugs in clinical trials. We focus on the two-sample comparison of proportions to investigate testing strategies based on two-stage designs. The properties and advantages of the procedures from the proposed testing designs are demonstrated by numerical results, where comparison with the classical method is made under the same sample size. A real example discussed in Cardenal et al. (1999) is provided to explain how the methods may be used in practice. Some figures are also presented to illustrate the pattern changes of the power functions of these methods. In addition, the proposed procedure is compared with the Pocock (1977) and O'Brien and Fleming (1979) tests based on the standardized statistics.
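To make the two-stage idea concrete, the following Monte Carlo sketch estimates the size and power of a two-stage two-sample test of proportions with early stopping at an interim look. The boundaries c1 and c2 are illustrative O'Brien-Fleming-type values, not the thesis's proposed design.

```python
# Hedged sketch: simulated operating characteristics of a two-stage
# two-sample test of proportions with an interim early-stopping boundary.
import numpy as np

rng = np.random.default_rng(4)

def z_stat(x, y, n):
    # Pooled two-sample z statistic for proportions, n subjects per arm.
    p_hat = (x + y) / (2 * n)
    se = np.sqrt(2 * p_hat * (1 - p_hat) / n)
    return (x / n - y / n) / se if se > 0 else 0.0

def power(p1, p2, n1, n2, c1, c2, reps=20_000):
    rejections = 0
    for _ in range(reps):
        # Stage 1: n1 subjects per arm.
        x1, y1 = rng.binomial(n1, p1), rng.binomial(n1, p2)
        if abs(z_stat(x1, y1, n1)) >= c1:   # early rejection at interim
            rejections += 1
            continue
        # Stage 2: n2 further subjects per arm, test on the pooled data.
        x2, y2 = rng.binomial(n2, p1), rng.binomial(n2, p2)
        rejections += abs(z_stat(x1 + x2, y1 + y2, n1 + n2)) >= c2
    return rejections / reps

# Size under H0 (p1 = p2) and power under an assumed alternative.
print("size :", power(0.3, 0.3, 50, 50, c1=2.797, c2=1.977))
print("power:", power(0.3, 0.5, 50, 50, c1=2.797, c2=1.977))
```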
In the second part of this work, the operating room scheduling problem is considered, which is also important in medical studies. The national health insurance system has been running for more than ten years in Taiwan. The Bureau of National Health Insurance continues to improve the system and to establish a reasonable fee ratio for people in different income ranges. In line with these adjustments, hospitals must pay more attention to controlling running costs. A major source of hospital revenue is generated by surgery center operations. In order to maintain financial balance, effective operating room management is necessary.
For this topic, this study focuses on the model fitting of operating times and on operating room scheduling. Log-normal and mixture log-normal distributions are found to be statistically acceptable for describing these operating times. The procedure is illustrated through the analysis of thirteen operations performed in the gynecology department of a major teaching hospital in southern Taiwan. The best-fitting distributions, selected through information criteria and by bootstrapping the log-likelihood ratio test, are used to evaluate the performance of operating combinations on daily schedules observed in the real data. Moreover, we classify the operations into three categories, with three stages for each operation. Based on this classification, a strategy for efficient scheduling is proposed, and the benefits of rescheduling according to this strategy are compared with the original schedules observed.
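A sketch of the model-comparison step: fit a single log-normal and a two-component log-normal mixture (via EM on the log scale) to operating times, then compare by BIC. The data below are simulated stand-ins for the hospital records.

```python
# Hedged sketch: log-normal vs two-component log-normal mixture, by BIC.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
log_t = np.log(np.concatenate([rng.lognormal(3.5, 0.25, 150),
                               rng.lognormal(4.5, 0.30, 100)]))
n = log_t.size

# Single log-normal: ML fit is the sample mean/sd of the log times.
mu, sd = log_t.mean(), log_t.std()
bic1 = -2 * norm.logpdf(log_t, mu, sd).sum() + 2 * np.log(n)

# Two-component mixture via EM on the log scale.
pi, m, s = 0.5, np.array([3.0, 5.0]), np.array([0.5, 0.5])
for _ in range(500):
    d = np.column_stack([pi * norm.pdf(log_t, m[0], s[0]),
                         (1 - pi) * norm.pdf(log_t, m[1], s[1])])
    r = d[:, 0] / d.sum(axis=1)
    pi = r.mean()
    m = np.array([np.average(log_t, weights=r),
                  np.average(log_t, weights=1 - r)])
    s = np.sqrt([np.average((log_t - m[0])**2, weights=r),
                 np.average((log_t - m[1])**2, weights=1 - r)])
d = np.column_stack([pi * norm.pdf(log_t, m[0], s[0]),
                     (1 - pi) * norm.pdf(log_t, m[1], s[1])])
bic2 = -2 * np.log(d.sum(axis=1)).sum() + 5 * np.log(n)

print(f"BIC log-normal {bic1:.1f} vs mixture {bic2:.1f} (smaller is better)")
```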
107
Multi-state models for interval censored data with competing risk. Wei, Shaoceng, 01 January 2015.
Multi-state models are often used to evaluate the effect of death as a competing event to the development of dementia in a longitudinal study of the cognitive status of elderly subjects. In this dissertation, both a multi-state Markov model and a semi-Markov model are used to characterize the flow of subjects from intact cognition to dementia, with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk.
Firstly, a multi-state Markov model with three transient states: intact cognition, mild cognitive impairment (M.C.I.) and global impairment (G.I.) and one absorbing state: dementia is used to model the cognitive panel data. A Weibull model and a Cox proportional hazards (Cox PH) model are used to fit the time to death based on age at entry and the APOE4 status. A shared random effect correlates this survival time with the transition model.
Secondly, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed, except for transitions from the baseline state, which are exponentially distributed, and we assume that no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher-order integration needed for the likelihood-based estimation.
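To illustrate why QMC helps here, the sketch below compares plain Monte Carlo with scrambled Sobol points on a moderate-dimensional toy integral; the integrand is a stand-in, not the semi-Markov likelihood.

```python
# Hedged sketch: quasi-Monte Carlo (Sobol) vs plain Monte Carlo.
import numpy as np
from scipy.stats import qmc

dim, n = 6, 2**12
f = lambda u: np.prod(3 * u**2, axis=1)   # integrates to 1 over [0,1]^dim

rng = np.random.default_rng(6)
mc = f(rng.random((n, dim))).mean()       # plain Monte Carlo estimate

sobol = qmc.Sobol(d=dim, scramble=True, seed=6)
qmc_est = f(sobol.random(n)).mean()       # scrambled Sobol estimate

print(f"plain MC error {abs(mc - 1):.2e}, Sobol QMC error {abs(qmc_est - 1):.2e}")
```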
At the end of this dissertation we extend a non-parametric “local EM algorithm” to obtain a smooth estimator of the cause-specific hazard function (CSH) in the presence of competing risk.
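A sketch of the smoothing goal for exactly observed competing-risks data: kernel-smoothed Nelson-Aalen increments give a smooth cause-specific hazard estimate. The thesis's local EM additionally handles interval censoring, which is not reproduced here; all data below are simulated.

```python
# Hedged sketch: kernel-smoothed cause-specific hazard, exact event times.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
t1 = rng.exponential(1 / 0.3, n)       # latent time to cause 1 (dementia-like)
t2 = rng.exponential(1 / 0.2, n)       # latent time to cause 2 (death-like)
t = np.minimum(t1, t2); cause = np.where(t1 < t2, 1, 2)

def smooth_csh(grid, t, cause, k, bw):
    """Kernel-smoothed Nelson-Aalen increments for cause k."""
    events = np.sort(t[cause == k])
    # Number at risk just before each event time.
    at_risk = np.array([(t >= s).sum() for s in events])
    incr = 1.0 / at_risk                       # Nelson-Aalen jump sizes
    # Epanechnikov kernel smoothing of the jumps.
    u = (grid[:, None] - events[None, :]) / bw
    kern = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0) / bw
    return (kern * incr).sum(axis=1)

grid = np.linspace(0.5, 2.0, 7)
print(np.round(smooth_csh(grid, t, cause, k=1, bw=0.4), 3))  # near 0.3
```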
All the proposed methods are justified by simulation studies and applications to the Nun Study data, a longitudinal study of late life cognition in a cohort of 461 subjects.
108
Parametric Potential-Outcome Survival Models for Causal Inference. Gong, Zhaojing, January 2008.
Estimating causal effects in clinical trials is often complicated by treatment noncompliance and missing outcomes. In time-to-event studies, estimation is further complicated by censoring. Censoring is a type of missing outcome whose mechanism may be non-ignorable. While new estimators have recently been proposed to account for noncompliance and missing outcomes, few studies have specifically considered time-to-event outcomes, where even the intention-to-treat (ITT) estimator is potentially biased for estimating causal effects of assigned treatment.
In this thesis, we develop a series of parametric potential-outcome (PPO) survival models for the analysis of randomised controlled trials (RCTs) with time-to-event outcomes and noncompliance. Both ignorable and non-ignorable censoring mechanisms are considered. We approach model-fitting from a likelihood-based perspective, using the EM algorithm to locate maximum likelihood estimators. We are not aware of any previous work that addresses these complications jointly. In addition, we give new formulations for the average causal effect (ACE) and the complier average causal effect (CACE) to suit survival analysis. To illustrate the likelihood-based method proposed in this thesis, the HIP breast cancer trial data (Baker, 1998; Shapiro, 1988) were re-analysed using specific PPO-survival models based on the Weibull and log-normal distributions, which assume that the failure time and censored time distributions both follow Weibull or log-normal distributions. Furthermore, an extended PPO-survival model is also derived, which permits investigation of causal effects after accommodating certain pre-treatment covariates. This is an important contribution to the potential-outcomes, survival and RCT literature. For comparison, the Frangakis-Rubin (F-R) model (Frangakis and Rubin, 1999) is also applied to the HIP breast cancer trial data. To date, the F-R model has not been applied to any time-to-event data in the literature.
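The basic likelihood building block of the Weibull-based models is maximum likelihood for a Weibull time-to-event distribution under independent right censoring, sketched below; compliance classes and the EM layer over the potential outcomes are omitted, and the data are simulated for illustration.

```python
# Hedged sketch: Weibull MLE with right censoring
# (log f for events, log S for censored observations).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n, shape_true, scale_true = 500, 1.5, 2.0
t_event = scale_true * rng.weibull(shape_true, n)
t_cens = rng.exponential(3.0, n)
t = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)    # 1 = event observed

def neg_loglik(params):
    k, lam = np.exp(params)                  # log-parameterised for positivity
    log_f = np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k
    log_S = -(t / lam) ** k
    return -(delta * log_f + (1 - delta) * log_S).sum()

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("shape, scale =", np.exp(fit.x).round(3))   # near (1.5, 2.0)
```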
110
Design and analysis of response selective samples in observational studies. Grünewald, Maria, January 2011.
Outcome-dependent sampling may increase efficiency in observational studies. It is, however, not always obvious how to sample efficiently, and how to analyze the resulting data without introducing bias. This thesis describes a general framework for efficiency calculations in multistage sampling, with focus on what is sometimes referred to as ascertainment sampling. A method for correcting for the sampling scheme in the analysis of ascertainment samples is also presented. Simulation-based methods are used to overcome computational issues in both efficiency calculations and analysis of data. (At the time of the doctoral defense, Paper 1 was unpublished, with status: Submitted.)
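A sketch of one standard correction for response-selective sampling: inverse-probability-weighted logistic regression with known sampling fractions. This is a generic weighted-likelihood device, not the thesis's ascertainment-corrected likelihood; the data and sampling fractions are illustrative.

```python
# Hedged sketch: IPW logistic regression under outcome-dependent sampling.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
N = 50_000
x = rng.normal(size=N)
p = 1 / (1 + np.exp(-(-2.0 + 1.0 * x)))      # true model: b0=-2, b1=1
y = rng.binomial(1, p)

# Outcome-dependent sampling: keep all cases, 10% of controls.
keep = (y == 1) | (rng.random(N) < 0.1)
xs, ys = x[keep], y[keep]
w = np.where(ys == 1, 1.0, 1 / 0.1)          # inverse sampling probabilities

def nll(beta, weights):
    # Weighted Bernoulli negative log-likelihood (logaddexp for stability).
    eta = beta[0] + beta[1] * xs
    return -(weights * (ys * eta - np.logaddexp(0, eta))).sum()

naive = minimize(nll, [0, 0], args=(np.ones(ys.size),)).x
ipw = minimize(nll, [0, 0], args=(w,)).x
print("naive   :", naive.round(2))   # intercept biased by the sampling
print("weighted:", ipw.round(2))     # near (-2.0, 1.0)
```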