  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Jackknife Empirical Likelihood for the Accelerated Failure Time Model with Censored Data

Bouadoumou, Maxime K 15 July 2011 (has links)
Kendall and Gehan estimating functions are used to estimate the regression parameter in the accelerated failure time (AFT) model with censored observations. The AFT model is an appealing method for survival analysis because it maintains a direct association between the covariates and the survival time. The jackknife empirical likelihood method is used because it overcomes the computational difficulty of nonlinear constraints by circumventing their construction: it turns the statistic of interest into a sample mean of jackknife pseudo-values. A U-statistic approach is used to construct confidence intervals for the regression parameter. We conduct a simulation study comparing the Wald-type procedure, empirical likelihood, and jackknife empirical likelihood in terms of coverage probability and average confidence-interval length. The jackknife empirical likelihood method performs best and overcomes the under-coverage problem of the Wald-type method. A real data set is also used to illustrate the proposed methods.
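For readers unfamiliar with the construction, the jackknife pseudo-values referred to above are the standard ones used in jackknife empirical likelihood (in the spirit of Jing, Yuan and Zhou's formulation); the display below is a generic sketch of the idea, not a formula reproduced from the thesis.

```latex
% Jackknife pseudo-values for an estimator \hat\theta_n = T(X_1,\dots,X_n):
\hat V_i = n\,\hat\theta_n - (n-1)\,\hat\theta_{n-1}^{(-i)}, \qquad i = 1,\dots,n,
% where \hat\theta_{n-1}^{(-i)} is the estimate computed with X_i deleted.
% Ordinary empirical likelihood for a mean is then applied to \hat V_1,\dots,\hat V_n:
R(\theta) = \max\Big\{\, \prod_{i=1}^{n} n p_i : p_i \ge 0,\ \textstyle\sum_i p_i = 1,\ \sum_i p_i \hat V_i = \theta \,\Big\},
% and -2\log R(\theta_0) is asymptotically \chi^2_1 under mild conditions,
% which avoids maximizing over nonlinear constraints directly.
```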
52

Empirical Likelihood Method for Ratio Estimation

Dong, Bin 22 February 2011 (has links)
Empirical likelihood, pioneered by Thomas and Grunkemeier (1975) and Owen (1988), is a powerful nonparametric method of statistical inference that has been widely used in the statistical literature. In this thesis, we investigate the merits of empirical likelihood for various problems arising in ratio estimation.

First, motivated by the smooth empirical likelihood (SEL) approach proposed by Zhou & Jing (2003), we develop empirical likelihood estimators for diagnostic test likelihood ratios (DLRs) and derive the asymptotic distributions of suitable likelihood ratio statistics under certain regularity conditions. To avoid the bandwidth selection problem that arises in smooth estimation, we also propose an empirical likelihood estimator for the same DLRs based on non-smooth estimating equations (NEL). Via simulation studies, we compare the statistical properties of these empirical likelihood estimators (SEL, NEL) to certain natural competitors and identify situations in which SEL and NEL provide superior estimation capabilities.

Next, we derive an empirical likelihood estimator of a baseline cumulative hazard ratio with respect to covariate adjustments under two nonproportional hazards model assumptions. Under typical regularity conditions, we show that suitable empirical likelihood ratio statistics each converge in distribution to a χ² random variable. Through simulation studies, we investigate the advantages of this empirical likelihood approach compared with the usual normal approximation. Two examples from previously published clinical studies illustrate the use of the empirical likelihood methods we describe.

Empirical likelihood has obvious appeal in deriving point and interval estimators for time-to-event data. However, when this method and its asymptotic critical value are used to construct simultaneous confidence bands for survival or cumulative hazard functions, very large sample sizes are typically needed to achieve reliable coverage accuracy. We propose a bootstrap method to recalibrate the critical value of the sampling distribution of the sample log-likelihood ratios. Via simulation studies, we compare our EL-based bootstrap estimator for the survival function with the EL-HW and EL-EP bands proposed by Hollander et al. (1997), and we apply this method to obtain a simultaneous confidence band for the cumulative hazard ratios in the two clinical studies mentioned above.

While copulas have been a popular statistical tool for modeling dependent data in recent years, selecting a parametric copula is a nontrivial task that may lead to model misspecification, because different copula families involve different correlation structures. This observation motivates us to use empirical likelihood to estimate a copula nonparametrically. With this EL-based estimator of a copula, we derive a goodness-of-fit test for assessing a specific parametric copula model. By means of simulations, we demonstrate the merits of our EL-based testing procedure, and we illustrate the method using data from Wieand et al. (1989).

In the final chapter of the thesis, we provide a brief introduction to several areas of future research involving the empirical likelihood approach.
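The χ² calibration mentioned here is the classical Wilks-type result for empirical likelihood. As a self-contained illustration (not code from the thesis, and using an illustrative Newton solver and simulated data), the sketch below profiles Owen's empirical likelihood ratio for a univariate mean and inverts it into a confidence interval.

```python
import numpy as np
from scipy.stats import chi2

def el_log_ratio(x, theta, n_iter=50):
    """-2 log empirical likelihood ratio for the mean of a univariate sample.

    Solves sum_i (x_i - theta) / (1 + lam*(x_i - theta)) = 0 for the Lagrange
    multiplier lam by damped Newton steps, keeping all implied weights positive.
    """
    z = np.asarray(x, dtype=float) - theta
    if z.min() >= 0 or z.max() <= 0:          # theta outside the convex hull of the data
        return np.inf
    lam = 0.0
    for _ in range(n_iter):
        denom = 1.0 + lam * z
        grad = np.mean(z / denom)
        hess = -np.mean(z**2 / denom**2)
        if abs(grad) < 1e-12:
            break
        step = grad / hess
        new_lam = lam - step
        while np.any(1.0 + new_lam * z <= 1e-10):   # damp so all weights stay positive
            step *= 0.5
            new_lam = lam - step
        lam = new_lam
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=80)
# 95% EL confidence interval: all theta with -2 log R(theta) <= chi2_{1, 0.95}
cutoff = chi2.ppf(0.95, df=1)
grid = np.linspace(x.min() + 1e-3, x.max() - 1e-3, 400)
mask = np.array([el_log_ratio(x, t) <= cutoff for t in grid])
ci = grid[mask]
print("95% EL interval for the mean: [%.3f, %.3f]" % (ci.min(), ci.max()))
```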
53

Some questions in risk management and high-dimensional data analysis

Wang, Ruodu 04 May 2012 (has links)
This thesis addresses three topics in statistics and probability, with applications in risk management. First, for testing problems in high-dimensional (HD) data analysis, we present a novel method for formulating empirical likelihood tests and jackknife empirical likelihood tests by splitting the sample into subgroups. New tests are constructed for the equality of two HD means, for the coefficients in HD linear models, and for HD covariance matrices. Second, we propose jackknife empirical likelihood methods for constructing interval estimates of important quantities in actuarial science and risk management, such as risk-distortion measures, Spearman's rho, and parametric copulas. Lastly, we introduce the theory of completely mixable (CM) distributions. We give properties of CM distributions, show that several classes of distributions are CM, and use the new technique to find bounds for the sum of individual risks with given marginal distributions but unspecified dependence structure. The result partially solves a problem that had been a challenge for decades and leads directly to bounds on quantities of interest in risk management, such as the variance, the stop-loss premium, the prices of European options, and the Value-at-Risk associated with a joint portfolio.
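For concreteness, a distortion risk measure is a functional of the loss distribution of the form rho_g(X) = integral of g(P(X > x)) dx for a concave distortion function g (taking nonnegative losses for simplicity). The sketch below is a generic plug-in estimator of such a measure with the Tail Value-at-Risk distortion; it illustrates the kind of statistic for which jackknife empirical likelihood intervals would be built, and is not code from the thesis.

```python
import numpy as np

def distortion_risk(losses, g):
    """Plug-in estimate of a distortion risk measure for nonnegative losses.

    rho_g = integral of g(S_n(x)) dx, which for the empirical survival
    function S_n reduces to an L-statistic in the order statistics.
    """
    x = np.sort(np.asarray(losses, dtype=float))
    n = x.size
    upper = g((n - np.arange(n)) / n)        # g(S_n just below x_(i))
    lower = g((n - np.arange(n) - 1) / n)    # g(S_n at and above x_(i))
    return float(np.sum(x * (upper - lower)))

# Distortion g(u) = min(u / (1 - alpha), 1) gives the Tail Value-at-Risk.
alpha = 0.95
tvar_distortion = lambda u: np.minimum(u / (1.0 - alpha), 1.0)

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
print("TVaR_0.95 plug-in estimate:", distortion_risk(losses, tvar_distortion))
# A jackknife empirical likelihood interval would be obtained by recomputing this
# statistic with each observation deleted and applying EL to the pseudo-values.
```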
54

Empirical likelihood and extremes

Gong, Yun 17 January 2012 (has links)
In 1988, Owen introduced empirical likelihood as a nonparametric method for constructing confidence intervals and regions. Since then, empirical likelihood has been studied extensively in the literature owing to its generality and effectiveness. It is well known that empirical likelihood has several attractive advantages compared with competitors such as the bootstrap: it determines the shape of confidence regions automatically using only the data; it straightforwardly incorporates side information expressed through constraints; and it is Bartlett correctable. The main part of this thesis extends the empirical likelihood method to several interesting and important statistical inference situations. The thesis has four components. The first component (Chapter II) proposes a smoothed jackknife empirical likelihood method to construct confidence intervals for the receiver operating characteristic (ROC) curve, in order to overcome the computational difficulty caused by nonlinear constraints in the maximization problem. The second component (Chapters III and IV) proposes smoothed empirical likelihood methods for interval estimation of the conditional Value-at-Risk when the volatility is modeled by an ARCH/GARCH model and by nonparametric regression, respectively, with applications in financial risk management. The third component (Chapter V) derives the empirical likelihood for intermediate quantiles, which play an important role in the statistics of extremes. Finally, the fourth component (Chapters VI and VII) presents two additional results: in Chapter VI, we show that, when the third moment is infinite, the Student's t-statistic may be preferable to the sample mean standardized by the true standard deviation; in Chapter VII, we present a method for testing a subset of parameters in a given parametric model of stationary processes.
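As background for the first component, the ROC curve at false-positive rate p is ROC(p) = 1 - G(F^{-1}(1 - p)), where F and G are the score distributions of the non-diseased and diseased groups. The sketch below computes a kernel-smoothed plug-in estimate of ROC(p); the Gaussian kernel and Silverman-type bandwidth are illustrative assumptions, not choices taken from the thesis, and the jackknife EL interval itself is not implemented here.

```python
import numpy as np
from scipy.stats import norm

def smoothed_roc(neg, pos, p, bandwidth=None):
    """Kernel-smoothed estimate of ROC(p) = 1 - G(F^{-1}(1 - p)).

    neg: scores of the non-diseased group (distribution F)
    pos: scores of the diseased group (distribution G)
    The indicator 1{pos_j > c} is replaced by a Gaussian-kernel survival
    function, the kind of smoothing that makes EL constraints differentiable.
    """
    neg, pos = np.asarray(neg, float), np.asarray(pos, float)
    c = np.quantile(neg, 1.0 - p)                  # empirical (1-p) quantile of F
    h = bandwidth or 1.06 * pos.std() * pos.size ** (-0.2)
    return float(np.mean(norm.sf((c - pos) / h)))  # smoothed estimate of P(pos > c)

rng = np.random.default_rng(2)
neg = rng.normal(0.0, 1.0, 200)     # non-diseased scores
pos = rng.normal(1.0, 1.0, 200)     # diseased scores
print("ROC(0.1) estimate:", smoothed_roc(neg, pos, p=0.10))
```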
55

Treatment Comparison in Biomedical Studies Using Survival Function

Zhao, Meng 03 May 2011 (has links)
In this dissertation, we study the statistical evaluation of treatment comparisons by comparing the survival experiences of two treatment groups. We construct confidence intervals and simultaneous confidence bands for the ratio and odds ratio of two survival functions through both parametric and nonparametric approaches. We first construct empirical likelihood confidence intervals and simultaneous confidence bands for the odds ratio of two survival functions to address small-sample efficacy and sufficiency. The empirical log-likelihood ratio is developed, and the corresponding asymptotic distribution is derived. Simulation studies show that the proposed empirical likelihood band outperforms the normal approximation band in small samples, in the sense that its coverage probabilities are closer to the chosen nominal levels. Furthermore, in order to incorporate prognostic factors for the adjustment of survival functions in the comparison, we construct simultaneous confidence bands for the ratio and odds ratio of survival functions based on both the Cox model and the additive risk model. We develop these bands by approximating the limiting distribution of the cumulative hazard functions by zero-mean Gaussian processes whose distributions can be generated through Monte Carlo simulations. Simulation studies are conducted to evaluate the performance of the proposed methods. Real applications to published clinical trial data sets are also presented for further illustration. Finally, the population attributable fraction function is studied to measure the impact of risk factors on disease incidence in the population. We develop semiparametric estimation of attributable fraction functions for cohort studies with potentially censored event times under the additive risk model.
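The Monte Carlo device described above, namely approximating the limiting zero-mean Gaussian process by simulation, is commonly implemented as a multiplier (perturbation) resampling scheme. The sketch below applies that idea to a single-sample Nelson-Aalen cumulative hazard to obtain an unweighted simultaneous band; it is a simplified illustration and omits the covariate adjustment, ratio and odds-ratio transformations, and Cox/additive-risk modelling used in the dissertation.

```python
import numpy as np

def nelson_aalen(time, event):
    """Nelson-Aalen cumulative hazard at the observed times (ties ignored for simplicity)."""
    order = np.argsort(time)
    t, d = np.asarray(time)[order], np.asarray(event)[order]
    n = t.size
    at_risk = n - np.arange(n)                        # number still at risk at each time
    inc = np.where(d == 1, 1.0 / at_risk, 0.0)        # hazard increment at each time
    return t, np.cumsum(inc), inc

def multiplier_band(time, event, alpha=0.05, n_rep=1000, seed=0):
    """Simultaneous band for the cumulative hazard via Gaussian-multiplier resampling.

    Each subject's hazard increment is perturbed by an independent N(0,1) weight;
    the (1 - alpha) quantile of the sup of the perturbed process gives the band width.
    """
    rng = np.random.default_rng(seed)
    t, L, inc = nelson_aalen(time, event)
    sups = np.empty(n_rep)
    for b in range(n_rep):
        g = rng.standard_normal(t.size)               # one multiplier per subject
        w = np.cumsum(g * inc)                        # perturbed process at event times
        sups[b] = np.max(np.abs(w))
    c = np.quantile(sups, 1.0 - alpha)
    return t, L, np.maximum(L - c, 0.0), L + c        # lower band clipped at zero

rng = np.random.default_rng(3)
T = rng.exponential(1.0, 150)                         # latent event times
C = rng.exponential(2.0, 150)                         # independent censoring times
time, event = np.minimum(T, C), (T <= C).astype(int)
t, L, lo, hi = multiplier_band(time, event)
print("band half-width:", (hi - lo)[len(t) // 2] / 2)
```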
57

Empirical likelihood and mean-variance models for longitudinal data

Li, Daoji January 2011 (has links)
Improving estimation efficiency has always been an important aspect of statistical modelling. Our goal is to develop new statistical methodologies yielding more efficient estimators in the analysis of longitudinal data. In this thesis, we consider two different approaches to improving estimation efficiency: empirical likelihood, and jointly modelling the mean and variance. In Part I of this thesis, empirical likelihood-based inference for longitudinal data within the framework of the generalized linear model is investigated. The proposed procedure takes into account the within-subject correlation without directly estimating nuisance parameters in the correlation matrix, and it retains optimality even if the working correlation structure is misspecified. The proposed approach yields more efficient estimators than conventional generalized estimating equations and achieves the same asymptotic variance as methods based on quadratic inference functions. The second part of this thesis focuses on joint mean-variance models. We propose a data-driven approach to modelling the mean and variance simultaneously, yielding estimates of the mean regression parameters that are more efficient than those from the conventional generalized estimating equations approach, even if the within-subject correlation structure in our joint mean-variance models is misspecified. Joint mean-variance models are investigated in both parametric and semiparametric forms. Extensive simulation studies are conducted to assess the performance of the proposed approaches. Three longitudinal data sets, the Ohio children's wheeze status data (Ware et al., 1984), the cattle data (Kenward, 1987), and the CD4+ data (Kaslow et al., 1987), are used to demonstrate our models and approaches.
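For context, the conventional generalized estimating equations baseline that both parts of the thesis are compared against can be fitted with the statsmodels package; the sketch below uses simulated longitudinal data and an exchangeable working correlation, and shows only this baseline, not the proposed empirical likelihood or joint mean-variance estimators.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated longitudinal data: 100 subjects, 5 visits each, with a shared subject effect.
rng = np.random.default_rng(4)
n_sub, n_vis = 100, 5
subject = np.repeat(np.arange(n_sub), n_vis)
time = np.tile(np.arange(n_vis, dtype=float), n_sub)
b = rng.normal(0.0, 0.8, n_sub)                       # random subject effect -> within-subject correlation
y = 1.0 + 0.5 * time + b[subject] + rng.normal(0.0, 1.0, subject.size)
df = pd.DataFrame({"y": y, "time": time, "subject": subject})

# Conventional GEE fit with an exchangeable working correlation structure.
model = sm.GEE.from_formula("y ~ time", groups="subject", data=df,
                            family=sm.families.Gaussian(),
                            cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```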
58

Statistical Analysis of Skew Normal Distribution and its Applications

Ngunkeng, Grace 01 August 2013 (has links)
No description available.
59

EFFICIENT INFERENCE AND DOMINANT-SET BASED CLUSTERING FOR FUNCTIONAL DATA

Xiang Wang (18396603) 03 June 2024 (has links)
This dissertation addresses three related problems in functional data analysis: (1) to perform efficient inference for the functional mean model while accounting for within-subject correlation, we propose a refined and bias-corrected empirical likelihood method; (2) to identify functional subjects potentially drawn from different populations, we propose a dominant-set based unsupervised clustering method built on a similarity matrix; (3) to learn the similarity matrix from various similarity metrics for functional data clustering, we propose a modularity-guided, dominant-set based semi-supervised clustering method.

In the first problem, the empirical likelihood method is used to draw inference on the mean function of functional data by constructing a refined and bias-corrected estimating equation. The proposed estimating equation not only improves efficiency but also enables practically feasible empirical likelihood inference by properly incorporating within-subject correlation, which had not been achieved by previous studies.

In the second problem, the dominant-set based unsupervised clustering method maximizes within-cluster similarity and applies to functional data with a flexible choice of similarity measures between curves. The method is a hierarchical bipartition procedure under a penalized optimization framework, with the tuning parameter selected by maximizing the modularity of the resulting two clusters; it is inspired by the concept of a dominant set in graph theory and is solved by replicator dynamics from game theory. The approach is robust not only to imbalanced group sizes but also to outliers, overcoming a limitation of many existing clustering methods.

In the third problem, a metric-based semi-supervised clustering method is proposed, in which the similarity metric is learned by modularity maximization and then passed to the dominant-set based clustering procedure described above. In the semi-supervised setting, where some cluster memberships are known, the goal is to determine the best linear combination of candidate similarity metrics as the final metric so as to enhance clustering performance. Besides the global-metric algorithm, a second algorithm learns an individual metric for each cluster, which permits overlapping cluster membership and distinguishes the method from many existing approaches. The method applies to functional data with a variety of similarity metrics between curves and inherits the robustness to imbalanced group sizes that is intrinsic to the dominant-set based clustering approach.

In all three problems, the advantages of the proposed methods are demonstrated through extensive empirical investigations using simulations as well as real data applications.
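The dominant-set extraction underlying the second and third problems is usually computed with replicator dynamics on the similarity matrix (in the spirit of Pavan and Pelillo's formulation). The sketch below is a generic illustration on a toy similarity matrix between simulated curves; the Gaussian similarity kernel and the support threshold are assumptions made for illustration, not details taken from the dissertation.

```python
import numpy as np

def dominant_set(A, tol=1e-8, max_iter=2000):
    """Extract a dominant set from a nonnegative similarity matrix A via
    replicator dynamics: x <- x * (A x) / (x' A x).

    Returns the indices whose converged weight is non-negligible, plus the weights.
    """
    A = np.asarray(A, dtype=float).copy()
    np.fill_diagonal(A, 0.0)                    # no self-similarity
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                     # start from the barycenter
    for _ in range(max_iter):
        Ax = A @ x
        new_x = x * Ax / (x @ Ax)
        if np.linalg.norm(new_x - x, 1) < tol:
            x = new_x
            break
        x = new_x
    return np.flatnonzero(x > 1.0 / (10 * n)), x   # heuristic support threshold

# Toy similarity matrix: two tight groups of curves plus one noisy outlier curve.
rng = np.random.default_rng(5)
curves = np.vstack([rng.normal(0, 0.1, (5, 20)) + np.sin(np.linspace(0, 3, 20)),
                    rng.normal(0, 0.1, (5, 20)) + np.cos(np.linspace(0, 3, 20)),
                    rng.normal(0, 1.0, (1, 20))])
dist = np.linalg.norm(curves[:, None, :] - curves[None, :, :], axis=2)
A = np.exp(-dist**2 / np.median(dist)**2)      # one possible similarity measure between curves
members, weights = dominant_set(A)
print("dominant set members:", members)
```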
60

DSGE Estimation Using Generalized Empirical Likelihood and Generalized Minimum Contrast

Boaretto, Gilberto Oliveira 05 March 2018 (has links)
The objective of this work is to investigate the performance of moment-based estimators from the generalized empirical likelihood (GEL) and generalized minimum contrast (GMC) families in the estimation of dynamic stochastic general equilibrium (DSGE) models, focusing on robustness under misspecification, which is recurrent in this type of model. As benchmarks we use the generalized method of moments (GMM), maximum likelihood (ML), and Bayesian inference (BI). We work with a real business cycle (RBC) model, which can be considered the core of DSGE models, presents similar difficulties, and, because of its smaller number of parameters, makes the results easier to analyse. Through Monte Carlo experiments, we verify whether the studied estimators deliver satisfactory results in terms of mean, median, bias, mean squared error, and mean absolute error, and we examine the distribution of the estimates generated by each estimator. Among the main results: (i) the empirical likelihood (EL) estimator, as well as its version with smoothed moment conditions (SEL), and Bayesian inference (BI) obtained, in that order, the best performances, including under misspecification; (ii) the continuous updating empirical likelihood (CUE), minimum Hellinger distance (HD), and exponential tilting (ET) estimators and their smoothed versions showed intermediate comparative performance; (iii) the performance of the exponentially tilted empirical likelihood (ETEL) and exponential tilting Hellinger distance (ETHD) estimators and their smoothed versions was severely compromised by atypical estimates; (iv) GEL/GMC estimators with and without smoothing of the moment conditions performed very similarly; (v) the GMM estimator, especially in the over-identified case, and the ML estimator performed considerably below most of their competitors.
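As a minimal illustration of the evaluation criteria listed above (mean, median, bias, mean squared error, mean absolute error), the sketch below scores a vector of Monte Carlo estimates of a single parameter against its true value; it is generic and does not implement the DSGE/RBC estimators compared in the thesis.

```python
import numpy as np

def monte_carlo_summary(estimates, true_value):
    """Summarize Monte Carlo estimates of a scalar parameter."""
    e = np.asarray(estimates, dtype=float)
    return {
        "mean": e.mean(),
        "median": np.median(e),
        "bias": e.mean() - true_value,
        "mse": np.mean((e - true_value) ** 2),
        "mae": np.mean(np.abs(e - true_value)),
    }

# Toy example: 500 replications of a noisy estimator of a parameter equal to 0.95.
rng = np.random.default_rng(6)
estimates = 0.95 + rng.normal(0.0, 0.02, 500) + rng.standard_t(3, 500) * 0.005
print(monte_carlo_summary(estimates, true_value=0.95))
```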
