21

Sur les tests lisses d'ajustement dans le contexte des séries chronologiques

Tagne Tatsinkou, Joseph Francois 12 1900 (has links)
Most models in classical statistics rest on an assumption about the distribution of the data, or about a distribution underlying the data. The validity of this assumption is what allows us to draw inferences, to construct confidence intervals, or to assess the reliability of the model. Goodness-of-fit testing aims to verify that such an assumption is consistent with the available data. In this thesis, we propose goodness-of-fit tests for normality in the setting of univariate and vector time series. We restrict attention to a class of linear time series, namely autoregressive moving average models (ARMA, or VARMA in the vector case). In the first project, for the univariate case, we propose a generalization of the work of Ducharme and Lafaye de Micheaux (2004) to the case where the mean is unknown and estimated. We estimate the parameters by a method that is rarely used in the literature and yet asymptotically efficient: we show rigorously that the estimator proposed by Brockwell and Davis (1991, Section 10.8) converges almost surely to the true, unknown value of the parameter. In addition, we provide a rigorous proof of the invertibility of the variance-covariance matrix of the test statistic, based on certain linear algebra properties. The result also applies when the mean is assumed known and equal to zero. Finally, we propose an AIC-type method for selecting the dimension of the family of alternatives, and we study the asymptotic properties of this method. The tool proposed here is based on a specific family of orthogonal polynomials, namely the Legendre polynomials. In the second project, for the vector case, we propose a goodness-of-fit test for autoregressive moving average models with a structured parameterization. Structured parameterization makes it possible to reduce the large number of parameters in these models, or to account for particular constraints; the standard unparameterized case is included. The proposed test applies to an arbitrary family of orthogonal functions, which we illustrate in the particular cases of the Legendre and Hermite polynomials. In the Hermite case, we show that the resulting test is invariant under affine transformations and is in fact a generalization of many existing tests in the literature. This second project can be seen as a generalization of the first in three directions: from the univariate to the multivariate setting; the choice of an arbitrary family of orthogonal functions; and the possibility of specifying relations or constraints in the VARMA formulation. In each project we carry out a simulation study to assess the level and power of the proposed tests and to compare them with existing tests. Applications to real data are also provided: we apply the tests to the forecasting of the annual mean global temperature (univariate) and to Canadian labour market data (bivariate). This work has been presented at several conferences (see, for example, Tagne, Duchesne and Lafaye de Micheaux (2013a, 2013b, 2014) for details). A paper based on the first project has also been submitted to a peer-reviewed journal (see Duchesne, Lafaye de Micheaux and Tagne (2016)). 
/ Several phenomena in the natural and social sciences are modelled under distributional assumptions, among which the normal distribution is the most popular. The validity of that assumption is needed to set up forecast intervals or to check the adequacy of the underlying model. Goodness-of-fit procedures are tools for assessing the adequacy of the assumptions underlying the data. Autoregressive moving average time series models are often used to describe the mathematical behaviour of phenomena from the natural and social sciences, especially in finance. These models rest on several assumptions, including normally distributed innovations. The normality assumption may be helpful for some testing procedures; furthermore, stronger conclusions can be drawn from the fitted model if the white noise can be assumed Gaussian. In this work, goodness-of-fit tests for checking normality of the innovations of autoregressive moving average time series models are proposed for both the univariate and multivariate cases (ARMA and VARMA models). In the first project, a smooth test of normality for ARMA time series models with unknown mean, based on a least squares type estimator, is proposed, and the asymptotic null distribution of the test statistic is derived. The result extends the paper of Ducharme and Lafaye de Micheaux (2004), where the mean was assumed known and equal to zero. We use the least squares type estimator proposed by Brockwell and Davis (1991, Section 10.8) and provide a rigorous proof that it is almost surely convergent. We show that the covariance matrix of the test is nonsingular regardless of whether the mean is known. We also study a data-driven approach for choosing the dimension of the family, and we give a finite-sample approximation of the null distribution. Finally, the finite-sample and asymptotic properties of the proposed test statistic are studied in a small simulation study. In the second project, goodness-of-fit tests for checking multivariate normality of the innovations of vector autoregressive moving average time series models are proposed. Since these time series models may involve a large number of parameters, structured parameterization of the functional form is allowed. The methodology again relies on the smooth test paradigm and on families of functions that are orthonormal with respect to the multivariate normal density. It is shown that the smooth tests converge asymptotically to convenient chi-square distributions. An important special case makes use of Hermite polynomials, and in that situation we demonstrate that the tests are invariant under linear transformations; with Legendre polynomials, we observed that the test is not invariant under linear transformations. A consistent data-driven method is discussed for choosing the family order from the data. In a simulation study, exact levels are studied and the empirical powers of the smooth tests are compared with those of other methods. Finally, applications to real data are provided, specifically Canadian labour market data and annual global temperature. This work has been presented at several meetings (see, for example, Tagne, Duchesne and Lafaye de Micheaux (2013a, 2013b, 2014) for more details). A paper based on the first project has been submitted to a refereed journal (see Duchesne, Lafaye de Micheaux and Tagne (2016)).
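As a concrete illustration of the smooth-test paradigm described above, here is a minimal sketch of a Neyman-type smooth test of normality applied to ARMA residuals, using orthonormal (probabilists') Hermite polynomials; the thesis itself uses Legendre polynomials in the univariate project and a general orthonormal family in the vector case. The sketch does not reproduce the thesis's statistic, its estimator of Brockwell and Davis, or its data-driven choice of the family dimension; the function name, the default model order, the family size K and the chi-square reference with K - 2 degrees of freedom are assumptions made for the example.

```python
import numpy as np
from math import factorial
from scipy.stats import chi2
from statsmodels.tsa.arima.model import ARIMA

def smooth_normality_test(x, order=(1, 0, 1), K=6):
    """Neyman-type smooth test of normality for the residuals of a fitted ARMA model."""
    resid = ARIMA(x, order=order, trend="c").fit().resid
    z = (resid - resid.mean()) / resid.std(ddof=1)      # standardized residuals
    n = len(z)
    stat = 0.0
    for j in range(3, K + 1):                           # skip j = 1, 2: mean and variance are estimated
        coeffs = np.zeros(j + 1)
        coeffs[j] = 1.0
        hj = np.polynomial.hermite_e.hermeval(z, coeffs) / np.sqrt(factorial(j))  # orthonormal He_j
        stat += n * hj.mean() ** 2                      # component of the smooth statistic
    return stat, chi2.sf(stat, df=K - 2)                # asymptotic chi-square reference
```

For instance, smooth_normality_test(series, order=(1, 0, 1)) returns the statistic and an asymptotic p-value for an ARMA(1,1) fit of series.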
22

Distribuição generalizada de chuvas máximas no Estado do Paraná. / Local and regional frequency analysis by lh-moments and generalized distributions

Pansera, Wagner Alessandro 07 December 2013 (has links)
The purpose of hydrologic frequency analysis is to relate the magnitude of events to their frequency of occurrence through a probability distribution. The generalized probability distributions can be used in the study of extreme hydrological events: generalized extreme value, generalized logistic and generalized Pareto. There are several methodologies for estimating the parameters of probability distributions; however, L-moments are frequently used because of their computational convenience. The reliability of quantiles with high return periods can be increased by using LH-moments, or higher-order L-moments. L-moments have been widely studied, but the literature on LH-moments is limited, so further research is needed. In this study, therefore, LH-moments were studied under two approaches commonly used in hydrology: (i) local frequency analysis (LFA) and (ii) regional frequency analysis (RFA). A database of 227 rainfall stations (annual daily maxima) in Paraná State, covering 1976 to 2006, was assembled. The LFA was subdivided into two steps: (i) Monte Carlo simulations and (ii) application of the results to the database. The main result of the Monte Carlo simulations was that LH-moments make the 0.99 and 0.995 quantiles less biased. The simulations also supported the development of an algorithm for performing LFA with the generalized distributions; the algorithm was applied to the database and yielded fits for all 227 series studied. In the RFA, the 227 stations were divided into 11 groups and regional growth curves were obtained; local quantiles were then derived from the regional growth curves. The difference between the local quantiles obtained by LFA and those obtained by RFA was quantified, and can be approximately 33 mm for return periods of 100 years.
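To make the local frequency analysis step concrete, the sketch below computes the first four sample L-moments of a series of annual maxima and a 100-year return level from a GEV fitted by maximum likelihood. It only illustrates the baseline L-moment idea: the thesis works with LH-moments and with all three generalized distributions, and estimates parameters from those moments rather than by the maximum likelihood shortcut used here; the function names are assumptions made for the example.

```python
import numpy as np
from scipy.stats import genextreme

def sample_lmoments(x):
    """First four sample L-moments via probability-weighted moments (Hosking)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3, l4

def return_level(annual_maxima, T=100):
    """T-year return level from a GEV fitted by maximum likelihood (illustration only)."""
    c, loc, scale = genextreme.fit(np.asarray(annual_maxima, dtype=float))
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
```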
24

Spectrum Sensing Techniques For Cognitive Radio Applications

Sanjeev, G 01 1900 (has links) (PDF)
Cognitive Radio (CR) has received tremendous research attention over the past decade, both in academia and in industry, as it is envisioned as a promising solution to the problem of spectrum scarcity. A CR is a device that senses the spectrum for occupancy by licensed users (also called primary users), and transmits its data only when the spectrum is sensed to be available. For efficient utilization of the spectrum while also guaranteeing adequate protection of the licensed user from harmful interference, the CR should be able to sense the spectrum for primary occupancy quickly as well as accurately. This makes Spectrum Sensing (SS) one of the most important tasks in a CR. SS is naturally formulated as a binary hypothesis test, where the goal is to test whether the primary user is inactive (the null or noise-only hypothesis) or not (the alternate or signal-present hypothesis). Computational simplicity, robustness to uncertainties in the knowledge of various noise, signal, and fading parameters, and the ability to handle interference or other sources of non-Gaussian noise are some of the desirable features of an SS unit in a CR. In many practical applications, CR devices can exploit known structure in the primary signal. In the IEEE 802.22 CR standard, the primary signal is a wideband signal, but with a strong narrowband pilot component. In other applications, such as military communications and Bluetooth, the primary signal uses a Frequency Hopping (FH) transmission. These applications can significantly benefit from detection schemes that are tailored to the corresponding primary signals. This thesis develops novel detection schemes and rigorous performance analyses for these primary signals in the presence of fading. For example, in the case of wideband primary signals with a strong narrowband pilot, this thesis answers the further question of whether to use the entire wideband signal for detection, or to filter out the pilot signal and use narrowband detection. The question is interesting because the fading characteristics of wideband and narrowband signals are fundamentally different, so it is not obvious which detection scheme will perform better in practical fading environments. At the other end of the gamut of SS algorithms, when the CR has no knowledge of the structure or statistics of the primary signal, and when the noise variance is known, Energy Detection (ED) is known to be optimal for SS. However, the performance of the ED is not robust to uncertainties in the noise statistics or under different possible primary signal models. In this case, a natural way to pose the SS problem is as a Goodness-of-Fit Test (GoFT), where the idea is to either accept or reject the noise-only hypothesis. This thesis designs GoFTs and studies their performance when the noise statistics can be non-Gaussian and heavy-tailed. The techniques are also extended to the cooperative SS scenario, where multiple CR nodes record observations using multiple antennas and perform decentralized detection. In this thesis, we study all the issues listed above by considering both single and multiple CR nodes, and we evaluate their performance in terms of (a) probability of detection error, (b) the sensing-throughput tradeoff, and (c) probability of rejecting the null hypothesis. We propose various SS strategies, compare their performance against existing techniques, and discuss their relative advantages and performance tradeoffs. 
The main contributions of this thesis are as follows. The question of whether to use pilot-based narrowband sensing or wideband sensing is answered using a novel, analytically tractable metric proposed in this thesis, called the error exponent with a confidence level. Under a Bayesian framework, obtaining closed-form expressions for the optimal detection threshold is difficult; near-optimal detection thresholds are obtained for most of the commonly encountered fading models. For an FH primary, using the Fast Fourier Transform (FFT) Averaging Ratio (FAR) algorithm, the sensing-throughput tradeoff is derived in closed form. A GoFT technique based on the statistics of the number of zero-crossings in the observations is proposed, which is robust to uncertainties in the noise statistics and outperforms existing GoFT-based SS techniques. A multi-dimensional GoFT based on stochastic distances is studied, which provides better performance compared to some of the existing techniques; a special case, namely a test based on the Kullback-Leibler distance, is shown to be robust to some uncertainties in the noise process. All of the theoretical results are validated using Monte Carlo simulations. In the case of FH SS, an implementation of SS using the FAR algorithm on a commercial off-the-shelf platform is presented, and the performance recorded using the hardware is shown to corroborate well with the theoretical and simulation-based results. The results in this thesis thus provide a bouquet of SS algorithms that could be useful under different CR SS scenarios.
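As a point of reference for the energy detection discussion above, here is a minimal sketch of a basic energy detector with known noise variance, posed as the binary hypothesis test described in the abstract. It does not implement the pilot-based, FAR or zero-crossing detectors developed in the thesis, and the chi-square threshold assumes real-valued Gaussian noise samples; the function name and default false-alarm probability are choices made for the example.

```python
import numpy as np
from scipy.stats import chi2

def energy_detect(y, noise_var, pfa=0.05):
    """Energy detector with known noise variance; True means 'primary user present'."""
    n = len(y)
    stat = np.sum(np.abs(np.asarray(y)) ** 2) / noise_var   # ~ chi2(n) under noise-only, real samples
    threshold = chi2.ppf(1.0 - pfa, df=n)                   # threshold for a target false-alarm probability
    return stat > threshold, stat
```

The threshold is set so that, under the noise-only hypothesis, the probability of declaring the primary present equals the chosen false-alarm probability pfa.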
25

On New Constructive Tools in Bayesian Nonparametric Inference

Al Labadi, Luai 22 June 2012 (has links)
Bayesian nonparametric inference requires the construction of priors on infinite-dimensional spaces such as the space of cumulative distribution functions and the space of cumulative hazard functions. Well-known priors on the space of cumulative distribution functions are the Dirichlet process, the two-parameter Poisson-Dirichlet process and the beta-Stacy process, while the beta process is a popular prior on the space of cumulative hazard functions. This thesis is divided into three parts. In the first part, we tackle the problem of sampling from the above-mentioned processes. Sampling from these processes plays a crucial role in many applications in Bayesian nonparametric inference; however, obtaining exact samples from these processes is impossible, and the existing algorithms are either slow or very complex and may be difficult for many users to apply. We derive new approximation techniques for simulating the above processes. These new approximations provide simple, yet efficient, procedures for simulating these important processes. We compare the efficiency of the new approximations to several other well-known approximations and demonstrate a significant improvement. In the second part, we develop explicit expressions for calculating the Kolmogorov, Lévy and Cramér-von Mises distances between the Dirichlet process and its base measure. The derived expressions for each distance are used to select the concentration parameter of a Dirichlet process. We also propose a Bayesian goodness-of-fit test for simple and composite hypotheses, for non-censored and censored observations. Illustrative examples and simulation results are included. Finally, we describe the relationship between frequentist and Bayesian nonparametric statistics. We show that, when the concentration parameter is large, the two-parameter Poisson-Dirichlet process and its corresponding quantile process share many asymptotic properties with the frequentist empirical process and the frequentist quantile process. Some of these properties are the functional central limit theorem, the strong law of large numbers and the Glivenko-Cantelli theorem.
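For readers unfamiliar with simulating a Dirichlet process, the sketch below shows the classical truncated stick-breaking (Sethuraman) construction, that is, the kind of approximate sampler the thesis aims to improve upon. It is not the new approximation proposed in the thesis; the truncation level N and the final-weight adjustment are illustrative choices.

```python
import numpy as np

def dp_stick_breaking(a, base_sampler, N=500, rng=None):
    """Atoms and weights of a truncated DP(a, H) draw via stick breaking."""
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, a, size=N)                               # stick-breaking fractions
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))  # w_k = v_k * prod_{j<k} (1 - v_j)
    w[-1] = 1.0 - w[:-1].sum()                                 # truncation fix: weights sum to one
    atoms = base_sampler(N, rng)                               # i.i.d. draws from the base measure H
    return atoms, w

# Example with base measure H = N(0, 1):
# atoms, w = dp_stick_breaking(a=5.0, base_sampler=lambda n, rng: rng.normal(size=n))
```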
28

Application Of The Empirical Likelihood Method In Proportional Hazards Model

He, Bin 01 January 2006 (has links)
In survival analysis, proportional hazards models are the most commonly used, and the Cox model is the most popular among them. These models were developed to facilitate the statistical analyses frequently encountered in medical research or reliability studies. In analyzing real data sets, checking the validity of the model assumptions is a key component. However, the presence of complicated types of censoring, such as double censoring and partly interval-censoring, in survival data makes model assessment difficult, and the existing goodness-of-fit tests do not extend directly to these complicated types of censored data. In this work, we use the empirical likelihood (Owen, 1988) approach to construct goodness-of-fit tests and provide estimates for the Cox model with various types of censored data. Specifically, the problems under consideration are the two-sample Cox model and the stratified Cox model with right-censored data, doubly censored data and partly interval-censored data. Related computational issues are discussed, and some simulation results are presented. The procedures developed in this work are applied to several real data sets with some discussion.
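To make the empirical likelihood paradigm concrete, the following sketch computes Owen's (1988) empirical likelihood ratio test for a univariate mean with uncensored data. It only illustrates the EL machinery; the thesis applies empirical likelihood to the Cox model under right censoring, double censoring and partly interval-censoring, which requires a considerably more involved construction. The function name and the bracketing tolerance are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_mean(x, mu):
    """-2 log empirical likelihood ratio and chi-square(1) p-value for H0: E[X] = mu."""
    d = np.asarray(x, dtype=float) - mu
    if d.min() >= 0.0 or d.max() <= 0.0:
        return np.inf, 0.0                                   # mu lies outside the convex hull of the data
    g = lambda lam: np.sum(d / (1.0 + lam * d))              # score equation for the Lagrange multiplier
    eps = 1e-8
    lam = brentq(g, -1.0 / d.max() + eps, -1.0 / d.min() - eps)
    stat = 2.0 * np.sum(np.log(1.0 + lam * d))               # -2 log R_n(mu)
    return stat, chi2.sf(stat, df=1)
```

Under the null hypothesis, the statistic is asymptotically chi-square with one degree of freedom, which is the Wilks-type calibration the empirical likelihood approach exploits.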
29

CURE RATE AND DESTRUCTIVE CURE RATE MODELS UNDER PROPORTIONAL ODDS LIFETIME DISTRIBUTIONS

FENG, TIAN January 2019 (has links)
Cure rate models, introduced by Boag (1949), are very commonly used when modelling lifetime data involving long-term survivors. Applications of cure rate models can be seen in biomedical science, industrial reliability, finance, manufacturing, demography and criminology. In this thesis, cure rate models are discussed under a competing cause scenario, with the assumption of proportional odds (PO) lifetime distributions for the susceptibles, and statistical inferential methods are then developed based on right-censored data. In Chapter 2, a flexible cure rate model is discussed by assuming that the number of competing causes for the event of interest follows the Conway-Maxwell (COM) Poisson distribution and that the corresponding lifetimes of the non-cured or susceptible individuals can be described by the PO model. This provides a natural extension of the work of Gu et al. (2011), who considered a geometric number of competing causes. Under right censoring, maximum likelihood estimators (MLEs) are obtained by the use of the expectation-maximization (EM) algorithm. An extensive Monte Carlo simulation study is carried out for various scenarios, and model discrimination between some well-known cure models, such as the geometric, Poisson and Bernoulli, is also examined. The goodness-of-fit and model diagnostics of the model are also discussed. A cutaneous melanoma dataset is used to illustrate the models as well as the inferential methods. Next, in Chapter 3, the destructive cure rate models, introduced by Rodrigues et al. (2011), are discussed under the PO assumption. Here, the initial number of competing causes is modelled by a weighted Poisson distribution, with special focus on the exponentially weighted Poisson, length-biased Poisson and negative binomial distributions. Then, a damage distribution is introduced for the number of initial causes that do not get destroyed. An EM-type algorithm for computing the MLEs is developed. An extensive simulation study is carried out for various scenarios, and model discrimination between the three weighted Poisson distributions is also examined. All the models and methods of estimation are evaluated through a simulation study. A cutaneous melanoma dataset is used to illustrate the models as well as the inferential methods. In Chapter 4, frailty cure rate models are discussed under a gamma frailty, wherein the initial number of competing causes is described by a Conway-Maxwell (COM) Poisson distribution and the lifetimes of the non-cured individuals can be described by the PO model. The detailed steps of the EM algorithm are then developed for this model, and an extensive simulation study is carried out to evaluate the performance of the proposed model and the estimation method. A cutaneous melanoma dataset as well as simulated data are used for illustrative purposes. Finally, Chapter 5 outlines the work carried out in the thesis and suggests some problems of further research interest. / Thesis / Doctor of Philosophy (PhD)
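As background for the competing-cause structure discussed above, the sketch below writes down the population survival functions of two classical cure rate models, the Bernoulli (mixture) model and the Poisson (promotion-time) model, together with a simple proportional-odds form for the susceptible survival. It is purely illustrative: it does not implement the COM-Poisson or destructive models, the EM algorithm, or any of the inferential methods of the thesis, and the function signatures are assumptions made for the example.

```python
import numpy as np

def po_survival(t, beta_x, baseline_odds):
    """Proportional-odds survival for susceptibles: survival odds scaled by exp(beta_x)."""
    odds = np.exp(beta_x) * baseline_odds(t)                 # S/(1 - S) under the PO assumption
    return odds / (1.0 + odds)

def mixture_cure_survival(t, cure_prob, S_s):
    """Bernoulli competing cause: cured with prob. cure_prob, otherwise susceptible survival S_s(t)."""
    return cure_prob + (1.0 - cure_prob) * S_s(t)

def promotion_time_survival(t, theta, F_s):
    """Poisson(theta) competing causes: S_pop(t) = exp(-theta * F_s(t)); cure fraction exp(-theta)."""
    return np.exp(-theta * F_s(t))
```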
30

Statistical Inference

Chou, Pei-Hsin 26 June 2008 (has links)
In this paper, we investigate the important properties of the three major parts of statistical inference: point estimation, interval estimation and hypothesis testing. For point estimation, we consider two methods of finding estimators, namely moment estimators and maximum likelihood estimators, and three methods of evaluating estimators: mean squared error, best unbiased estimators, and sufficiency and unbiasedness. For interval estimation, we consider the general confidence interval, confidence intervals in one sample, confidence intervals in two samples, sample sizes and finite population correction factors. For hypothesis testing, we consider the theory of testing hypotheses, testing in one sample, testing in two samples, and three methods of finding tests: the uniformly most powerful test, the likelihood ratio test and the goodness-of-fit test. Many examples are used to illustrate their applications.
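As one concrete example of the interval estimation topics listed above, the sketch below computes a normal-approximation confidence interval for a population mean with the finite population correction factor. It is a minimal illustration under simple random sampling without replacement; the function name and the 95% default level are choices made for the example.

```python
import numpy as np
from scipy.stats import norm

def mean_ci_fpc(x, N, conf=0.95):
    """Normal-approximation CI for a population mean when sampling n of N units without replacement."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    fpc = np.sqrt((N - n) / (N - 1.0))                        # finite population correction factor
    se = x.std(ddof=1) / np.sqrt(n) * fpc
    z = norm.ppf(0.5 + conf / 2.0)
    return x.mean() - z * se, x.mean() + z * se
```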
