1

Search for VH → leptons + bb̄ with the ATLAS experiment at the LHC

Debenedetti, Chiara January 2014
The search for a Higgs boson decaying to a bb̄ pair is one of the key analyses ongoing at the ATLAS experiment. Although H → bb̄ has the largest branching ratio of any Standard Model Higgs boson decay, a large dataset is necessary to perform this analysis because of the very large backgrounds affecting the measurement. To discriminate the electroweak H → bb̄ signal from the large QCD backgrounds, the associated production of the Higgs with a leptonically decaying W or Z boson is used. Different techniques have been proposed to enhance the signal-over-background ratio in the VH(bb̄) channel, from dedicated kinematic cuts, to a single large-radius jet identifying the two collimated b's in the high-transverse-momentum regime of the Higgs, to multivariate techniques. The high-pT approach, using a large-radius jet to identify the b's coming from the Higgs decay, has been tested against an analysis based on kinematic cuts for a dataset of 4.7 fb⁻¹ at √s = 7 TeV, and compatible results were found in the same transverse-momentum range. Using a kinematic-cut-based approach, the VH(bb̄) signal search has been performed on the full LHC Run 1 dataset: 4.7 fb⁻¹ at √s = 7 TeV and 20.7 fb⁻¹ at √s = 8 TeV. Several backgrounds to this analysis, such as Wbb̄ production, have not yet been measured in data, so an accurate study of their theoretical description has been performed, comparing the predictions of various Monte Carlo generators at different orders. The complexity of the analysis requires a profile likelihood fit with several categories and almost 200 parameters, accounting for all the systematic uncertainties coming from experimental or modelling limitations, to extract the result. To validate the fit model, the ability to extract a signal is tested on the resonant VZ(bb̄) background: a 4.8σ excess compatible with the Standard Model rate expectation has been measured, with a best-fit value μ_VZ = 0.93 +0.22/−0.21. The full LHC Run 1 dataset result for the VH(bb̄) process is an observed (expected) limit of 1.4 (1.3) × SM, with a best-fit value of 0.2 ± 0.5 (stat) ± 0.4 (sys) for a Higgs boson of mass 125 GeV.
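As a schematic illustration of the statistical machinery behind such a fit, the sketch below profiles a single nuisance parameter out of a one-bin counting likelihood and scans the signal strength μ. The yields, the 5% background-normalization uncertainty, and the one-bin setup are hypothetical stand-ins for the almost-200-parameter ATLAS fit, not its actual inputs.

```python
# Minimal profile-likelihood signal-strength scan (hypothetical one-bin analysis).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, poisson

n_obs = 1100          # observed events (hypothetical)
s_sm = 50.0           # expected SM signal yield (hypothetical)
b_nom = 1000.0        # nominal background yield (hypothetical)
sigma_b = 0.05        # 5% background-normalization uncertainty (hypothetical)

def nll(mu, theta):
    """Negative log likelihood: Poisson count times Gaussian constraint on theta."""
    expected = mu * s_sm + b_nom * (1.0 + sigma_b * theta)
    return -(poisson.logpmf(n_obs, expected) + norm.logpdf(theta))

def profiled_nll(mu):
    """Minimize over the nuisance parameter theta at fixed mu."""
    res = minimize_scalar(lambda th: nll(mu, th), bounds=(-5, 5), method="bounded")
    return res.fun

# Scan mu; 2*(NLL - NLL_min) < 1 bounds an approximate 68% interval.
mus = np.linspace(-1, 5, 121)
curve = np.array([profiled_nll(m) for m in mus])
mu_hat = mus[curve.argmin()]
inside = mus[2 * (curve - curve.min()) < 1.0]
print(f"mu_hat = {mu_hat:.2f}, 68% interval ~ [{inside.min():.2f}, {inside.max():.2f}]")
```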
2

On approximate likelihood in survival models

Läuter, Henning January 2006
We give a common framework for different estimates in survival models. For models with nuisance parameters we approximate the profile likelihood and derive estimates, in particular for the proportional hazards model.
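As a minimal illustration of profiling a nuisance parameter in a survival model (a generic example, not the approximation developed in the abstract above), the sketch below uses uncensored Weibull data, where the scale parameter has a closed-form maximizer at each fixed shape and can be profiled out exactly:

```python
# Profile likelihood for the Weibull shape, scale profiled out analytically.
import numpy as np

rng = np.random.default_rng(0)
t = rng.weibull(1.5, size=200) * 2.0   # simulated survival times, true shape k = 1.5

def profile_loglik(k):
    """Weibull log likelihood with lambda-hat(k)^k = mean(t^k) plugged in."""
    lam_k = np.mean(t**k)
    n = len(t)
    return n * np.log(k) - n * np.log(lam_k) + (k - 1) * np.log(t).sum() - n

ks = np.linspace(0.8, 2.5, 200)
lp = np.array([profile_loglik(k) for k in ks])
print(f"profile MLE of shape k ~ {ks[lp.argmax()]:.2f}")
```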
3

A New Third Compartment Significantly Improves Fit and Identifiability in a Model for Ace2p Distribution in Saccharomyces cerevisiae after Cytokinesis.

Järvstråt, Linnea January 2011
Asymmetric cell division is an important mechanism for the differentiation of cells during embryogenesis and cancer development. Saccharomyces cerevisiae divides asymmetrically and is therefore used as a model system for understanding the mechanisms behind asymmetric cell division. Ace2p is a transcription factor in yeast that localizes primarily to the daughter nucleus during cell division. The distribution of Ace2p is visualized using a fusion protein with yellow fluorescent protein (YFP) and confocal microscopy. Systems biology provides a new approach to investigating biological systems through the use of quantitative models, and the localization of Ace2p in yeast during cell division has been modelled using ordinary differential equations. Herein such modelling is evaluated. A 2-compartment model for the localization of Ace2p in yeast post-cytokinesis, proposed in earlier work, was found to be insufficient when new data were included in the model evaluation. Ace2p localization in the dividing yeast cell pair before cytokinesis was investigated with a similar approach and was likewise found not to explain the data to a significant degree. A 3-compartment model is therefore proposed, and its improvement over the 2-compartment model is statistically significant. Simulations of the 3-compartment model predict a fast decrease in the amount of Ace2p in the cytosol close to the nucleus during the first seconds after each bleaching of the fluorescence; experimental investigation of the cytosol close to the nucleus could test whether these fast dynamics are present after each bleaching. The parameters in the model have been estimated using the profile likelihood approach in combination with global optimization by simulated annealing, and confidence intervals have been found for the parameters of the 3-compartment model of Ace2p localization post-cytokinesis. In conclusion, the profile likelihood approach has proven a good method for estimating parameters, and the new 3-compartment model allows for reliable parameter estimates in the post-cytokinesis situation. A new Matlab implementation of the profile likelihood method is appended.
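To make the workflow concrete, here is a minimal sketch of the same pattern: a linear 3-compartment ODE model fit to noisy recovery data, followed by a profile scan over one rate constant. The compartment structure, rate values, exchange rate, and data below are hypothetical illustrations, not the thesis's model.

```python
# Fit a toy linear 3-compartment ODE and profile one rate constant.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def rhs(t, y, k1, k2):
    nuc, peri, cyt = y                  # nucleus, perinuclear cytosol, bulk cytosol
    return [k1 * peri - k2 * nuc,                       # nuclear import/export
            k2 * nuc - k1 * peri + 0.5 * (cyt - peri),  # exchange with bulk cytosol
            0.5 * (peri - cyt)]

def simulate(k1, k2, t_eval):
    sol = solve_ivp(rhs, (0, t_eval[-1]), [0.0, 1.0, 1.0],
                    args=(k1, k2), t_eval=t_eval, rtol=1e-8)
    return sol.y[0]                     # observed: nuclear fluorescence

t_obs = np.linspace(0, 10, 30)
rng = np.random.default_rng(1)
data = simulate(0.8, 0.3, t_obs) + rng.normal(0, 0.02, t_obs.size)

def ssr(params):                        # Gaussian errors -> sum of squared residuals
    return np.sum((simulate(*np.exp(params), t_obs) - data)**2)

fit = minimize(ssr, np.log([1.0, 0.5]), method="Nelder-Mead")

# Profile k1: re-optimize k2 at each fixed k1; the rise of this curve around
# its minimum yields a likelihood-based confidence interval for k1.
for k1 in [0.6, 0.8, 1.0]:
    prof = minimize(lambda p: ssr([np.log(k1), p[0]]), [fit.x[1]],
                    method="Nelder-Mead")
    print(f"k1 = {k1:.1f}: profile SSR = {prof.fun:.4f}")
```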
4

Modified Profile Likelihood Approach for Certain Intraclass Correlation Coefficient

Liu, Huayu 20 April 2011
In this paper we consider the problem of constructing confidence intervals and lower bounds for the intraclass correlation coefficient in an interrater reliability study where the raters are randomly selected from a population of raters. The likelihood function of the interrater reliability is derived and simplified, and the profile-likelihood-based approach is readily available for computing confidence intervals for the interrater reliability. Unfortunately, the confidence intervals computed using the profile likelihood function are in general too narrow to have the desired coverage probabilities. From a practical point of view, a conservative approach, if it is at least as precise as any existing method, is preferred, since it gives correct results with a probability higher than claimed. Under this rationale, we propose the so-called modified profile likelihood approach. A simulation study shows that the proposed method in general performs better than currently used methods.
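The profile-likelihood interval construction that the paper starts from can be sketched as follows for a balanced one-way random-effects model. The simulated data, the 30-subjects-by-4-raters design, and the grid resolution are illustrative choices made here, not the paper's setup.

```python
# Profile-likelihood confidence interval for the intraclass correlation (ICC).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal, chi2

k, n = 30, 4                             # 30 subjects, 4 raters each (hypothetical)
rng = np.random.default_rng(2)
subj = rng.normal(0, 1.0, size=(k, 1))   # true sigma_a = sigma_e = 1 -> ICC = 0.5
y = 5.0 + subj + rng.normal(0, 1.0, size=(k, n))

def loglik(mu, s2, rho):
    """Compound-symmetry Gaussian likelihood; subjects are independent."""
    cov = s2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
    return multivariate_normal.logpdf(y, mean=np.full(n, mu), cov=cov).sum()

def profiled(rho):
    """Maximize over (mu, total variance) at fixed ICC rho."""
    res = minimize(lambda p: -loglik(p[0], np.exp(p[1]), rho),
                   [y.mean(), 0.0], method="Nelder-Mead")
    return -res.fun

rhos = np.linspace(0.01, 0.95, 95)
lp = np.array([profiled(r) for r in rhos])
cut = lp.max() - chi2.ppf(0.95, df=1) / 2        # likelihood-ratio cutoff
ci = rhos[lp > cut]
print(f"ICC MLE ~ {rhos[lp.argmax()]:.2f}, "
      f"95% profile CI ~ [{ci.min():.2f}, {ci.max():.2f}]")
```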
5

Standard two-stage and nonlinear mixed effects modelling for determination of cell-to-cell variation of transport parameters in Saccharomyces cerevisiae

Janzén, David January 2012
Interest in cell-to-cell variation has increased at a steady pace in recent years. Several studies have shown that a large portion of the variation observed in nature originates from the fact that all biochemical reactions are in some respect stochastic. Interestingly, nature has evolved highly advanced frameworks specialized in dealing with stochasticity in order to still produce the delicate signalling pathways present in even very simple single-cell organisms. One such simple organism is Saccharomyces cerevisiae, the organism studied in this thesis. More particularly, the distribution of transport rates in S. cerevisiae has been studied with a mathematical modelling approach. It is shown that a two-compartment model can adequately describe the flow of a yellow fluorescent protein (YFP) between the cytosol and the nucleus. A profile likelihood (PLH) analysis shows that the parameters in the two-compartment model are identifiable and well-defined under the experimental YFP data. Furthermore, the results from this model show that the distribution of transport rates in the 80 studied cells is lognormal. Also, contrary to prior beliefs, no significant difference in YFP transport rates is seen between recently divided mother and daughter cells. The modelling is performed using both the standard two-stage (STS) approach and a nonlinear mixed effects model (NONMEM). A methodological comparison between these two very different approaches, STS and NONMEM, is also presented. STS is today the conventional approach in studies of cell-to-cell variation. However, this thesis shows that NONMEM, which was originally developed for population pharmacokinetic/pharmacodynamic (PK/PD) studies, is at least as good as, and in some cases better than, STS in studies of cell-to-cell variation. Finally, a new approach to studies of cell-to-cell variation is suggested that combines STS, NONMEM and PLH. In particular, it is shown that this combination of methods would be especially useful when the data are sparse; by applying it, the uncertainty in the estimate of the variability could be greatly reduced.
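A minimal sketch of the STS idea described above: fit a transport rate to each cell independently (stage 1), then study the distribution of the per-cell estimates (stage 2). The single-exponential recovery model and the simulated cells below are hypothetical stand-ins for the thesis's two-compartment YFP model.

```python
# Standard two-stage (STS) estimation of cell-to-cell variation in a rate constant.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import shapiro

def recovery(t, k):                     # nuclear signal approaching equilibrium
    return 1.0 - np.exp(-k * t)

rng = np.random.default_rng(3)
t = np.linspace(0, 20, 25)
true_k = rng.lognormal(mean=np.log(0.3), sigma=0.4, size=80)   # 80 cells

estimates = []
for k in true_k:                        # stage 1: independent per-cell fits
    y = recovery(t, k) + rng.normal(0, 0.03, t.size)
    k_hat, _ = curve_fit(recovery, t, y, p0=[0.5])
    estimates.append(k_hat[0])

log_k = np.log(estimates)               # stage 2: population distribution
print(f"median rate {np.exp(log_k.mean()):.3f}, sd of log-rates {log_k.std():.3f}")
print(f"Shapiro-Wilk on log-rates (lognormality check): p = {shapiro(log_k).pvalue:.2f}")
```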
6

Maximum Likelihood Estimators for ARMA and ARFIMA Models. A Monte Carlo Study.

Hauser, Michael A. January 1998
We analyze by simulation the properties of two time-domain and two frequency-domain estimators for low-order autoregressive fractionally integrated moving average Gaussian models, ARFIMA(p,d,q). The estimators considered are the exact maximum likelihood for demeaned data (EML), the associated modified profile likelihood (MPL), and the Whittle estimator with (WLT) and without (WL) tapered data. The length of the series is 100. The estimators are compared in terms of pile-up effect, mean square error, bias, and empirical confidence level. The tapered version of the Whittle likelihood turns out to be a reliable estimator for ARMA and ARFIMA models: its small losses in performance for "well-behaved" models are compensated sufficiently in more "difficult" models. The modified profile likelihood is an alternative to the WLT but is computationally more demanding; it is either equivalent to or more favorable than the EML, and for fractionally integrated models in particular it clearly dominates the EML. The WL has serious deficiencies for large ranges of parameters and so cannot be recommended in general. The EML, on the other hand, should be used only with care for fractionally integrated models, due to its potentially large negative bias in the fractional integration parameter. In general, one should proceed with caution for ARMA(1,1) models with almost canceling roots, and, in particular, in the case of the EML and the MPL, for inference in the vicinity of a moving average root of +1. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
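For concreteness, the sketch below implements the untapered Whittle estimator (the WL above) for the fractional integration parameter of an ARFIMA(0,d,0) series of length 100, with the innovation variance profiled out of the Whittle objective. The simulation scheme — a truncated MA(∞) expansion of (1−B)^(−d) — is a simple illustration, not the paper's design.

```python
# Whittle estimation of d for a simulated ARFIMA(0,d,0) series of length 100.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
n, d_true = 100, 0.3

# Simulate via truncated MA(inf) expansion: psi_j = psi_{j-1} * (j-1+d)/j.
m = 2000
psi = np.ones(m)
for j in range(1, m):
    psi[j] = psi[j - 1] * (j - 1 + d_true) / j
eps = rng.normal(size=n + m)
x = np.array([psi @ eps[i:i + m][::-1] for i in range(n)])

# Periodogram at Fourier frequencies, excluding frequency zero.
lam = 2 * np.pi * np.arange(1, n // 2 + 1) / n
I = np.abs(np.fft.fft(x)[1:n // 2 + 1])**2 / (2 * np.pi * n)

def whittle(d):
    """Whittle objective with innovation variance profiled out."""
    g = np.abs(2 * np.sin(lam / 2.0))**(-2 * d)    # ARFIMA(0,d,0) spectral shape
    return np.log(np.mean(I / g)) + np.mean(np.log(g))

d_hat = minimize_scalar(whittle, bounds=(-0.49, 0.49), method="bounded").x
print(f"Whittle estimate of d: {d_hat:.3f} (true {d_true})")
```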
7

Dimension Reduction and Covariance Structure for Multivariate Data, Beyond Gaussian Assumption

Maadooliat, Mehdi August 2011
Storage and analysis of high-dimensional datasets are always challenging. Dimension reduction techniques are commonly used to reduce the complexity of the data and obtain its informative aspects. Principal Component Analysis (PCA) is one of the most commonly used dimension reduction techniques. However, PCA does not work well when there are outliers or when the data distribution is skewed. Gene expression index estimation is an important problem in bioinformatics. Some of the popular methods in this area are based on PCA and thus may not work well when there is non-Gaussian structure in the data. To address this issue, a likelihood-based data transformation method with a computationally efficient algorithm is developed. Also, a new multivariate expression index is studied and its performance is compared with that of the commonly used univariate expression index. As an extension of the gene expression index estimation problem, a general procedure that integrates data transformation with PCA is developed; in particular, this general method can handle missing data and data with functional structure. It is well known that PCA can be obtained by the eigendecomposition of the sample covariance matrix. Another focus of this dissertation is the study of the covariance (or correlation) structure under a non-Gaussian assumption. An important issue in modeling the covariance matrix is the positive-definiteness constraint. The modified Cholesky decomposition of the inverse covariance matrix has been considered in the literature to address this issue. An alternative Cholesky decomposition of the covariance matrix itself is considered and used to construct an estimator of the covariance matrix under a multivariate-t assumption. The advantage of this alternative Cholesky decomposition is the decoupling of the correlations and the variances.
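The decoupling idea in the last two sentences can be sketched directly: write the covariance as Sigma = D R D, with D the diagonal matrix of standard deviations and R a correlation matrix built from a row-normalized Cholesky-type factor, so that any unconstrained parameter vector maps to a valid positive-definite covariance. The construction below is a generic version of such a parameterization, not necessarily the dissertation's exact decomposition.

```python
# Unconstrained parameterization of a covariance with variances/correlations decoupled.
import numpy as np

def build_cov(log_sd, lower_entries, p):
    """Map unconstrained parameters to a p x p positive-definite covariance."""
    L = np.eye(p)
    L[np.tril_indices(p, k=-1)] = lower_entries
    L /= np.linalg.norm(L, axis=1, keepdims=True)  # unit rows -> unit diagonal of R
    R = L @ L.T                                    # correlation matrix, PD by construction
    D = np.diag(np.exp(log_sd))                    # positive standard deviations
    return D @ R @ D

p = 3
sigma = build_cov(np.array([0.0, 0.5, -0.2]),      # log standard deviations
                  np.array([0.3, -0.4, 0.8]), p)   # p(p-1)/2 free correlation parameters
print(np.linalg.eigvalsh(sigma))                   # all positive: valid covariance
```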
8

Bayesian, Frequentist, and Information Geometry Approaches to Parametric Uncertainty Quantification of Classical Empirical Interatomic Potentials

Kurniawan, Yonatan 20 December 2021
Uncertainty quantification (UQ) is an increasingly important part of materials modeling. In this paper, we consider the problem of quantifying parametric uncertainty in classical empirical interatomic potentials (IPs). Previous work based on local sensitivity analysis using the Fisher Information has shown that IPs are sloppy, i.e., are insensitive to coordinated changes of many parameter combinations. We confirm these results and further explore the non-local statistics in the context of sloppy model analysis using both Bayesian (MCMC) and Frequentist (profile likelihood) methods. We interface these tools with the Open Knowledgebase of Interatomic Models (OpenKIM) and study three models based on the Lennard-Jones, Morse, and Stillinger-Weber potentials, respectively. We confirm that IPs have global properties similar to those of sloppy models from fields such as systems biology, power systems, and critical phenomena. These models exhibit a low effective dimensionality in which many of the parameters are unidentifiable, i.e., do not encode any information when fit to data. Because the inverse problem in such models is ill-conditioned, unidentifiable parameters present challenges for traditional statistical methods. In the Bayesian approach, Monte Carlo samples can depend on the choice of prior in subtle ways. In particular, they often "evaporate" parameters into high-entropy, sub-optimal regions of the parameter space. For profile likelihoods, confidence regions are extremely sensitive to the choice of confidence level. To get a better picture of the relationship between data and parametric uncertainty, we sample the Bayesian posterior at several sampling temperatures and compare the results with those of Frequentist analyses. In analogy to statistical mechanics, we classify samples as either energy-dominated, i.e., characterized by identifiable parameters in constrained (ground state) regions of parameter space, or entropy-dominated, i.e., characterized by unidentifiable (evaporated) parameters. We complement these two pictures with information geometry to illuminate the underlying cause of this phenomenon. In this approach, a parameterized model is interpreted as a manifold embedded in the space of possible data with parameters as coordinates. We calculate geodesics on the model manifold and find that IPs, like other sloppy models, have bounded manifolds with a hierarchy of widths, leading to low effective dimensionality in the model. We show how information geometry can motivate new, natural parameterizations that improve the stability and interpretation of UQ analysis and further suggest simplified, less-sloppy models.
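The local sensitivity analysis that the sloppiness results build on can be sketched in a few lines: form the Fisher information J^T J from the Jacobian of model predictions with respect to (log) parameters, then inspect its eigenvalue hierarchy. The Morse-type potential, data points, and parameter values below are hypothetical illustrations, not taken from OpenKIM.

```python
# Local sloppiness analysis: eigenvalue spectrum of the Fisher information J^T J.
import numpy as np

def morse(r, params):
    D, a, r0 = params                     # depth, width, equilibrium distance
    return D * (1.0 - np.exp(-a * (r - r0)))**2 - D

r_data = np.linspace(0.9, 2.5, 40)        # hypothetical dimer separations
theta = np.array([1.0, 2.0, 1.2])         # assumed best-fit parameters

# Central-difference Jacobian of predictions w.r.t. log-parameters
# (step of eps*theta_j and division by 2*eps gives theta_j * df/dtheta_j).
eps = 1e-6
J = np.empty((r_data.size, theta.size))
for j in range(theta.size):
    step = np.zeros_like(theta)
    step[j] = eps * theta[j]
    J[:, j] = (morse(r_data, theta + step) - morse(r_data, theta - step)) / (2 * eps)

fim = J.T @ J
eigs = np.linalg.eigvalsh(fim)[::-1]
print("FIM eigenvalues:", eigs)           # a wide spread signals sloppy directions
print("condition number:", eigs[0] / eigs[-1])
```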
9

Statistical inference for rankings in the presence of panel segmentation

Xie, Lin January 1900
Doctor of Philosophy / Department of Statistics / Paul Nelson / Panels of judges are often used to estimate consumer preferences for m items such as food products. Judges can either evaluate each item on several ordinal scales and indirectly produce an overall ranking, or directly report a ranking of the items. A complete ranking orders all the items from best to worst; a partial ranking, as we use the term, only reports the best q out of m items. Direct ranking, the subject of this report, does not require the widespread but questionable practice of treating ordinal measurements as though they were on ratio or interval scales. Here, we develop and study segmentation models in which the panel may consist of relatively homogeneous subgroups, the segments: judges within a subgroup tend to agree among themselves and to differ from judges in other subgroups. We develop and study the statistical analysis of mixture models where it is not known to which segment a judge belongs, or, in some cases, how many segments there are. Viewing segment membership indicator variables as latent data, an E-M algorithm was used to find the maximum likelihood estimators of the parameters specifying a mixture of Mallows' (1957) distance models for complete and partial rankings. A simulation study was conducted to evaluate the behavior of the E-M algorithm in terms of the fraction of data sets for which the algorithm fails to converge, the sensitivity of the convergence rate to initial values, and the performance of the maximum likelihood estimators in terms of bias and mean square error, where applicable. A Bayesian approach was developed and credible set estimators were constructed, with simulation used to evaluate the performance of these credible sets as confidence sets. A method for predicting segment membership from covariates measured on a judge was derived using a logistic model applied to a mixture of Mallows probability distance models, and the effects of covariates on segment membership were assessed. Likelihood sets for the parameters specifying mixtures of Mallows distance models were constructed and explored.
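A toy version of the E-M scheme described above is sketched below for a two-segment mixture of Mallows models with Kendall distance, under simplifying assumptions: complete rankings only, central rankings updated by a weighted Borda heuristic, and each dispersion found by a one-dimensional search. It illustrates the structure of the algorithm, not the report's full treatment of partial rankings or covariates.

```python
# E-M for a two-segment mixture of Kendall-Mallows models (toy version).
import numpy as np
from itertools import combinations
from scipy.optimize import minimize_scalar

def kendall(p, q):
    """Kendall distance between two rank vectors (p[i] = rank of item i)."""
    return sum(1 for i, j in combinations(range(len(p)), 2)
               if (p[i] - p[j]) * (q[i] - q[j]) < 0)

def log_z(theta, m):
    """Log normalizing constant of the Kendall-Mallows model on m items."""
    return sum(np.log((1 - np.exp(-(j + 1) * theta)) / (1 - np.exp(-theta)))
               for j in range(1, m))

rng = np.random.default_rng(5)
m, n = 5, 120
centers_true = [np.arange(m), np.arange(m)[::-1]]

def noisy(center):
    r = center.copy()
    for _ in range(rng.integers(0, 4)):          # a few random rank transpositions
        i, j = rng.choice(m, size=2, replace=False)
        r[i], r[j] = r[j], r[i]
    return r

data = np.array([noisy(centers_true[j % 2]) for j in range(n)])

w, thetas = np.array([0.5, 0.5]), np.array([1.0, 1.0])
centers = [data[0].copy(), data[1].copy()]
for _ in range(25):
    # E-step: posterior segment responsibilities for each judge.
    logp = np.array([[np.log(w[g]) - thetas[g] * kendall(x, centers[g])
                      - log_z(thetas[g], m) for g in range(2)] for x in data])
    resp = np.exp(logp - logp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weights, weighted-Borda centers, 1-d search for each dispersion.
    w = resp.mean(axis=0)
    for g in range(2):
        centers[g] = np.argsort(np.argsort(resp[:, g] @ data))
        dbar = (resp[:, g] @ [kendall(x, centers[g]) for x in data]) / resp[:, g].sum()
        thetas[g] = minimize_scalar(lambda t: t * dbar + log_z(t, m),
                                    bounds=(0.01, 10), method="bounded").x

print("weights:", w.round(2), "dispersions:", thetas.round(2))
print("centers:", [c.tolist() for c in centers])
```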
10

Point estimation and hypothesis testing based on profile likelihoods

Silva, Michel Ferreira da 20 May 2005
The profile likelihood function is not a genuine likelihood function, and profile maximum likelihood estimators are typically inefficient and inconsistent. Additionally, the null distribution of the likelihood ratio test statistic can be poorly approximated by the asymptotic chi-squared distribution in finite samples when there are nuisance parameters. It is thus important to obtain adjustments to the likelihood function. Several authors, including Barndorff-Nielsen (1983, 1994), Cox and Reid (1987, 1992), McCullagh and Tibshirani (1990) and Stern (1997), have proposed modifications to the profile likelihood function; these adjustments incorporate a term into the profile likelihood prior to estimation and are defined in such a way as to reduce the score and information biases. In this dissertation, we review several profile likelihood adjustments and also approximations to the adjustments proposed by Barndorff-Nielsen (1983, 1994), as described in Severini (2000a). We present derivations and the main properties of the different adjustments, and we obtain these adjustments for likelihood-based inference in the two-parameter exponential family. Monte Carlo simulation results are presented to assess the performance of the maximum likelihood estimators and of the likelihood ratio tests based on these functions. We also consider models that do not belong to the two-parameter exponential family: the GA0(alfa, gama, L) family of distributions, which is commonly used to model radar image data, and the Weibull model, which is widely used in reliability engineering, the latter under both noncensored and censored data. Again, extensive numerical results are provided. It is noteworthy that, in the context of the GA0(alfa, gama, L) model, we have evaluated the approximation of the null distribution of the signed likelihood ratio statistic by the standard normal distribution. Additionally, we have obtained distributional results for the Weibull case concerning the maximum likelihood estimators and the likelihood ratio statistics, both for noncensored and censored data; these are presented in an appendix.
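The bias these adjustments target can be seen in the simplest possible case: a normal sample with the mean as nuisance parameter and the variance as the parameter of interest, where the Cox-Reid adjustment (subtracting half the log of the observed information for the nuisance parameter) moves the maximizer from the biased SS/n to the REML-like SS/(n-1). A minimal sketch:

```python
# Profile vs. Cox-Reid adjusted profile likelihood for a normal variance.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
x = rng.normal(10.0, 2.0, size=15)
n, ss = x.size, np.sum((x - x.mean())**2)

def neg_profile(s2):
    """Negative profile log likelihood (mean profiled out)."""
    return 0.5 * n * np.log(s2) + ss / (2 * s2)

def neg_adjusted(s2):
    """Cox-Reid: add 0.5*log j_mumu, where j_mumu = n/s2 is the nuisance information."""
    return neg_profile(s2) + 0.5 * np.log(n / s2)

for f, name in [(neg_profile, "profile"), (neg_adjusted, "Cox-Reid adjusted")]:
    s2_hat = minimize_scalar(f, bounds=(0.1, 50), method="bounded").x
    print(f"{name:>18}: sigma2_hat = {s2_hat:.3f}")
print(f"SS/n = {ss/n:.3f}, SS/(n-1) = {ss/(n-1):.3f}")
```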
