51 |
Item Discrimination, Model-Data Fit, and Type I Error Rates in DIF Detection using Lord's χ², the Likelihood Ratio Test, and the Mantel-Haenszel Procedure. Price, Emily A. 11 June 2014 (has links)
No description available.
52 |
Statistical Models for Count Data from Multiple Sclerosis Clinical Trials and their Applications. Rettiganti, Mallikarjuna Rao 17 December 2010 (has links)
No description available.
53 |
Adaptive Weights Clustering and Community Detection. Besold, Franz Jürgen 19 April 2023 (has links)
This thesis presents a theoretical study of two novel algorithms for clustering and community detection: AWC (Adaptive Weights Clustering) and AWCD (Adaptive Weights Community Detection). Most importantly, we discuss rates of consistency. For AWC, we focus on the asymptotics of the depth ε of the gap between clusters, i.e. the relative difference between the density level of the clusters and the density level of the area between them. We show that AWC is consistent with a nearly optimal rate. This extends the low-dimensional results of Efimov, Adamyan and Spokoiny (2019) to the manifold model while also considering much more general assumptions on the underlying density and the shape of clusters. In particular,
we also consider the case of two points in the same cluster that are relatively close to the
boundary. Moreover, we provide finite-sample guarantees as well as the optimal choice of the tuning parameter λ.
For AWCD, we consider the asymptotics of the difference θ − ρ between the two
Bernoulli parameters of a symmetric stochastic block model. As it turns out, the resulting regime of strong consistency is far from optimal. However, we propose two major modifications to the algorithm: Firstly, we discuss an approach to minimize the bias of
the involved estimates. Secondly, we suggest increasing the starting neighborhood guess
of the algorithm by taking into account paths of minimal path length k. Using these
modifications, we are able to show that AWCD achieves a nearly optimal rate of strong
consistency. We partially extend these results to more general stochastic block models.
For both problems, we illustrate and validate the theoretical study through a wide
range of numerical experiments.
To summarize, this thesis closes the gap between the practical and theoretical studies
for AWC and AWCD. In particular, after some modifications, both algorithms exhibit a
nearly optimal performance on relevant models.
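To make the community detection setting concrete, here is a minimal Python sketch (our own illustration, not the AWCD algorithm or the author's code) that simulates a symmetric two-block stochastic block model with within-block edge probability θ and between-block probability ρ, and recovers the blocks with a simple spectral baseline for comparison.

```python
import numpy as np

def sample_sbm(n_per_block, theta, rho, rng):
    """Symmetric two-block SBM: edge probability theta within a block, rho across blocks."""
    n = 2 * n_per_block
    labels = np.repeat([0, 1], n_per_block)
    probs = np.where(labels[:, None] == labels[None, :], theta, rho)
    upper = np.triu(rng.random((n, n)) < probs, k=1)   # sample each pair once, no self-loops
    adj = (upper | upper.T).astype(float)
    return adj, labels

def spectral_two_blocks(adj):
    """Split nodes by the sign of the eigenvector of the second largest eigenvalue
    (a standard spectral baseline, used here only for illustration)."""
    _, vecs = np.linalg.eigh(adj)
    return (vecs[:, -2] > 0).astype(int)

rng = np.random.default_rng(0)
adj, truth = sample_sbm(n_per_block=100, theta=0.30, rho=0.10, rng=rng)
guess = spectral_two_blocks(adj)
accuracy = max(np.mean(guess == truth), np.mean(guess != truth))  # labels are defined only up to a swap
print(f"correctly recovered nodes: {accuracy:.2%}")
```

As the difference θ − ρ shrinks, recovery degrades, which is exactly the regime studied in the thesis.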
54 |
Hur påverkar avrundningar tillförlitligheten hos parameterskattningar i en linjär blandad modell? / How does rounding affect the reliability of parameter estimates in a linear mixed model? Stoorhöök, Li; Artursson, Sara January 2016 (has links)
Previous studies show that blood pressure in pregnant women drops during the second trimester and then rises at a later stage of the pregnancy. High blood pressure during pregnancy can entail health risks, which makes blood pressure measurements relevant. However, uncertainty arises because different people within the healthcare system handle the blood pressure measurements in different ways. Some of the healthcare staff round off measured values and others do not, which can make it difficult to interpret the development of the blood pressure. In this thesis, a dataset containing blood pressure values of pregnant women is analysed by fitting nine different linear regression models with mixed effects. A simulation study is then carried out to investigate how measurement problems caused by rounding affect parameter estimates and model selection in a linear mixed model. The conclusion is that the rounding of the blood pressure values does not affect the type I error but does affect the power. However, this does not pose a problem for the further analysis of the blood pressure values in the real dataset.
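A minimal Python sketch of the kind of simulation described above (our own illustration with made-up coefficients, not the authors' code): longitudinal blood pressure measurements with a random intercept per woman are generated, rounded to the nearest 5 mmHg as some staff would do, and a linear mixed model is fitted to both versions to compare the fixed-effect estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_women, n_visits = 200, 8
ids = np.repeat(np.arange(n_women), n_visits)
week = np.tile(np.linspace(10, 38, n_visits), n_women)          # gestational week at each visit
u = rng.normal(0.0, 6.0, n_women)[ids]                          # random intercept per woman
bp = 112 - 0.9 * week + 0.02 * week**2 + u + rng.normal(0.0, 4.0, ids.size)

data = pd.DataFrame({"id": ids, "week": week, "week2": week**2, "bp": bp})
data["bp_rounded"] = 5 * np.round(data["bp"] / 5)               # rounding to the nearest 5 mmHg

for outcome in ["bp", "bp_rounded"]:
    fit = smf.mixedlm(f"{outcome} ~ week + week2", data, groups=data["id"]).fit()
    print(outcome, fit.params[["week", "week2"]].round(4).to_dict())
```

Repeating this over many simulated datasets, and testing the fixed effects each time, is one way to compare type I error and power with and without rounding.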
55 |
Non- and semiparametric models for conditional probabilities in two-way contingency tables / Modèles non-paramétriques et semiparamétriques pour les probabilités conditionnelles dans les tables de contingence à deux entrées. Geenens, Gery 04 July 2008 (has links)
This thesis is mainly concerned with the estimation of conditional probabilities in two-way contingency tables, that is, probabilities of the type P(R=i, S=j | X=x) for (i,j) in {1, ..., r} × {1, ..., s}, where R and S are the two categorical variables forming the contingency table, with r and s levels respectively, and X is a vector of explanatory variables possibly associated with R, S, or both. Analyzing such a conditional distribution is often of interest, as it allows one to go further than the usual unconditional study of the behavior of the variables R and S. First, one can check for a possible effect of these covariates on the distribution of the individuals through the cells of the table, and second, one can carry out the usual analyses of contingency tables, such as independence tests, while taking into account, and in some sense removing, this effect. This helps, for instance, to identify the external factors which could be responsible for a possible association between R and S. It also gives the possibility to adjust for possible heterogeneity in the population of interest when analyzing the table.
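As a rough illustration of what such conditional probabilities look like, here is a generic kernel-smoothing sketch in Python (not the estimators developed in the thesis; all names are ours): each observation is weighted by a Gaussian kernel in the covariate X, so that the weighted cell frequencies estimate P(R=i, S=j | X=x).

```python
import numpy as np

def conditional_cell_probs(R, S, X, x0, bandwidth, r_levels, s_levels):
    """Kernel-weighted (Nadaraya-Watson type) estimate of P(R=i, S=j | X=x0)."""
    w = np.exp(-0.5 * ((X - x0) / bandwidth) ** 2)
    w /= w.sum()
    table = np.zeros((len(r_levels), len(s_levels)))
    for a, i in enumerate(r_levels):
        for b, j in enumerate(s_levels):
            table[a, b] = np.sum(w * ((R == i) & (S == j)))
    return table                                                # entries sum to one

# toy data: the association between R and S gets stronger as x grows
rng = np.random.default_rng(2)
n = 2000
X = rng.uniform(0.0, 1.0, n)
R = rng.integers(0, 2, n)
S = np.where(rng.random(n) < 0.5 + 0.4 * X, R, 1 - R)           # S copies R more often for large x

for x0 in (0.1, 0.9):
    est = conditional_cell_probs(R, S, X, x0, bandwidth=0.05, r_levels=[0, 1], s_levels=[0, 1])
    print(f"x = {x0}:\n{est.round(3)}")
```

Comparing the two estimated tables shows the covariate-driven association that an unconditional analysis of the pooled table would miss.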
56 |
Estimação e teste de hipótese baseados em verossimilhanças perfiladas / "Point estimation and hypothesis test based on profile likelihoods". Silva, Michel Ferreira da 20 May 2005 (links)
The profile likelihood function is not a genuine likelihood function, and profile maximum likelihood estimators are typically inefficient and inconsistent. Additionally, the null distribution of the likelihood ratio test statistic can be poorly approximated by the asymptotic chi-squared distribution in finite samples when there are nuisance parameters. It is thus important to obtain adjustments to the likelihood function. Several authors, including Barndorff-Nielsen (1983, 1994), Cox and Reid (1987, 1992), McCullagh and Tibshirani (1990) and Stern (1997), have proposed modifications to the profile likelihood function. They are defined in such a way as to reduce the score and information biases. In this dissertation, we review several profile likelihood adjustments and also approximations to the adjustments proposed by Barndorff-Nielsen (1983, 1994), also described in Severini (2000a). We present derivations and the main properties of the different adjustments. We also obtain adjustments for likelihood-based inference in the two-parameter exponential family.
Numerical results on estimation and testing are provided. We also consider models that do not belong to the two-parameter exponential family: the GA0(α, γ, L) family, which is commonly used to model radar image data, and the Weibull model, which is useful for reliability studies, the latter under both noncensored and censored data. Again, extensive numerical results are provided. It is noteworthy that, in the context of the GA0(α, γ, L) model, we have evaluated the approximation of the null distribution of the signed likelihood ratio statistic by the standard normal distribution. Additionally, we have obtained distributional results for the Weibull case concerning the maximum likelihood estimators and the likelihood ratio statistics for both noncensored and censored data.
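For the Weibull model mentioned above, the unadjusted profile likelihood is easy to write down, since the scale can be profiled out in closed form for a fixed shape; the adjustments reviewed in the dissertation modify exactly this function. Below is a hedged Python sketch (our own illustration, complete data only, no adjustment term) that computes the profile MLE of the shape and the signed likelihood ratio statistic for a hypothesis on it.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def weibull_profile_loglik(k, x):
    """Profile log-likelihood of the Weibull shape k; the scale is profiled out in closed form."""
    n = x.size
    scale_hat = np.mean(x ** k) ** (1.0 / k)        # MLE of the scale for a fixed shape k
    return n * np.log(k) - n * k * np.log(scale_hat) + (k - 1) * np.sum(np.log(x)) - n

rng = np.random.default_rng(3)
x = 2.0 * rng.weibull(a=1.5, size=40)               # complete (noncensored) sample: shape 1.5, scale 2

res = minimize_scalar(lambda k: -weibull_profile_loglik(k, x), bounds=(0.05, 20.0), method="bounded")
k_hat = res.x

k0 = 1.0                                            # H0: shape = 1, i.e. exponential data
r = np.sign(k_hat - k0) * np.sqrt(2.0 * (weibull_profile_loglik(k_hat, x)
                                         - weibull_profile_loglik(k0, x)))
print(f"profile MLE of the shape: {k_hat:.3f}; signed LR statistic: {r:.3f}; p ~ {2 * norm.sf(abs(r)):.3f}")
```

With many nuisance parameters or small samples, the chi-squared (or standard normal) reference distribution for such statistics can be poor, which is what motivates the adjusted versions studied in the dissertation.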
57 |
Small population bias and sampling effects in stochastic mortality modelling. Chen, Liang January 2017 (links)
Pension schemes are facing increasing difficulties in matching their liabilities with assets, mainly due to faster mortality improvements in their underlying populations, better environments and medical treatments, and historically low interest rates. Given that most pension schemes are much smaller than the national population, modelling and forecasting the longevity risk of small populations has become an urgent task for both practitioners and academic researchers. This thesis starts with a systematic analysis of the influence of population size on the uncertainty of mortality estimates and forecasts from a stochastic mortality model, based on a parametric bootstrap methodology with England and Wales males as the benchmark population. Population size has a significant effect on the uncertainty of mortality estimates and forecasts, and the volatilities of small populations are over-estimated by the maximum likelihood estimators. A Bayesian model is then developed to improve the estimation of the volatilities and the prediction of mortality rates for small populations by employing information from a larger population through informative prior distributions. The new model is validated on simulated small-population death scenarios. The Bayesian methodology produces smoothed estimates of the mortality rates. Moreover, a methodology is introduced that uses the information from the large population to obtain unbiased volatility estimates under the chosen prior settings. Finally, an empirical study is carried out on the Scottish mortality dataset.
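The population-size effect described above can be illustrated with a much simpler parametric bootstrap than the stochastic mortality model used in the thesis. A minimal sketch, assuming Poisson death counts for a single age group (all names and numbers are our own):

```python
import numpy as np

def bootstrap_sd_log_rate(exposure, true_rate, n_boot, rng):
    """Parametric bootstrap of a crude death rate: deaths ~ Poisson(exposure * rate).
    Returns the bootstrap standard deviation of the estimated log rate."""
    deaths = rng.poisson(exposure * true_rate)
    rate_hat = max(deaths, 0.5) / exposure                      # guard against zero observed deaths
    boot_deaths = rng.poisson(exposure * rate_hat, size=n_boot)
    return np.std(np.log(np.maximum(boot_deaths, 0.5) / exposure))

rng = np.random.default_rng(4)
for exposure in (1_000, 10_000, 1_000_000):                     # pension scheme vs. national population
    sd = bootstrap_sd_log_rate(exposure, true_rate=0.01, n_boot=2000, rng=rng)
    print(f"exposure {exposure:>9,}: bootstrap sd of the log death rate ~ {sd:.3f}")
```

The sampling uncertainty of the estimated rate grows sharply as the exposure shrinks, which is the basic reason small-scheme mortality estimates and volatilities are so noisy.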
58 |
Path Extraction of Low SNR Dim Targets from Grayscale 2-D Image Sequences. Erguven, Sait 01 September 2006 (links) (PDF)
In this thesis, an algorithm for visual detection and tracking of very low SNR targets, i.e. dim targets, is developed. Image processing of a single frame cannot be used for this purpose because the intensity spectra of the background and the target are too close. Therefore, change detection of super pixels, groups of pixels that provide sufficient statistics for likelihood ratio testing, is proposed. Super pixels identified as transition points are marked on a binary difference matrix and grouped by the 4-Connected Labeling method. Each label is processed to find its motion vector in the next frame by the Label Destruction and Centroids Mapping techniques. Candidate centroids are passed through the Distribution Density Function Maximization and Maximum Histogram Size Filtering methods to find the target-related motion vectors. Noise-related mappings are eliminated by Range and Maneuver Filtering. The geometrical centroids obtained in each frame form the observed target path, which is fed into the Optimum Decoding Based Smoothing Algorithm to smooth and estimate the real target path. The Optimum Decoding Based Smoothing Algorithm is based on quantization of the possible states, i.e. the observed target path centroids, and on the Viterbi Algorithm. According to the system and observation models, metric values of all possible target paths are computed using the observation and transition probabilities. The path with the maximum metric value at the last frame is taken as the estimated target path.
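The final smoothing step relies on the Viterbi algorithm, which, given quantized states and observation/transition probabilities, selects the state sequence with the largest total metric. Here is a generic Python sketch (a toy one-dimensional example with invented probabilities, not the thesis's implementation):

```python
import numpy as np

def viterbi(obs, log_init, log_trans, log_emit):
    """Return the state sequence maximizing the total log-probability (path metric)."""
    T, K = len(obs), log_init.size
    metric = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    metric[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        for j in range(K):
            cand = metric[t - 1] + log_trans[:, j]              # metric of reaching state j from each state
            back[t, j] = np.argmax(cand)
            metric[t, j] = cand[back[t, j]] + log_emit[j, obs[t]]
    path = [int(np.argmax(metric[-1]))]
    for t in range(T - 1, 0, -1):                               # backtrack along the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy demo: three quantized positions, observations are noisy readings of the true position
log_init = np.log(np.full(3, 1 / 3))
log_trans = np.log(np.array([[0.85, 0.10, 0.05],
                             [0.10, 0.80, 0.10],
                             [0.05, 0.10, 0.85]]))              # smooth motion: mostly stay in place
log_emit = np.log(np.array([[0.70, 0.20, 0.10],
                            [0.15, 0.70, 0.15],
                            [0.10, 0.20, 0.70]]))
obs = [0, 0, 1, 0, 1, 2, 2, 1, 2]
print(viterbi(obs, log_init, log_trans, log_emit))
```

In the thesis's setting, the states would be the quantized observed-path centroids and the metric would combine the system (motion) and observation models.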
59 |
Sequential probability ratio tests based on grouped observations. Eger, Karl-Heinz; Tsoy, Evgeni Borisovich 26 June 2010 (links) (PDF)
This paper deals with sequential likelihood ratio tests based on grouped observations. It is demonstrated that the method of conjugated parameter pairs, known from the non-grouped case, can be extended to the grouped case, yielding Wald-like approximations for the OC and ASN functions. For near hypotheses, so-called F-optimal groupings are recommended. As an example, an SPRT based on grouped observations for the parameter of an exponentially distributed random variable is considered.
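A rough Python sketch of such an SPRT (our own illustration with arbitrary group boundaries and error levels, not the approximations derived in the paper): each observation of an exponential variable is recorded only through the interval it falls into, and the cumulative log-likelihood ratio of the cell probabilities is compared with the usual Wald boundaries.

```python
import numpy as np

def sprt_grouped_exponential(stream, cuts, lam0, lam1, alpha=0.05, beta=0.05):
    """Wald SPRT for the rate of an exponential variable observed only through
    the group (interval between consecutive cut points) each value falls into."""
    edges = np.concatenate(([0.0], cuts, [np.inf]))

    def cell_probs(lam):
        return np.diff(1.0 - np.exp(-lam * edges))              # P(group k | rate lam)

    log_ratio = np.log(cell_probs(lam1) / cell_probs(lam0))
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        cell = np.searchsorted(edges, x, side="right") - 1      # which group the observation falls into
        llr += log_ratio[cell]
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "no decision", len(stream)

rng = np.random.default_rng(5)
data = rng.exponential(scale=1 / 2.0, size=500)                 # true rate 2.0
print(sprt_grouped_exponential(data, cuts=np.array([0.25, 0.5, 1.0, 2.0]), lam0=1.0, lam1=2.0))
```

Coarser groupings lose information and lengthen the test on average, which is why the choice of grouping (the F-optimal groupings above) matters.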
60 |
Model-Based Optimization of Clinical Trial Designs. Vong, Camille January 2014 (links)
High attrition rates in the drug development pipeline have been recognized as a reason to shift gears towards new methodologies that allow earlier and correct decisions, and the optimal use of all information accrued throughout the process. The quantitative science of pharmacometrics, using pharmacokinetic-pharmacodynamic models, was identified as one of the strategies core to this renaissance. Coupled with Optimal Design (OD), these models constitute an attractive toolkit to usher new agents to marketing approval more rapidly and successfully. The general aim of this thesis was to investigate how the use of novel pharmacometric methodologies can improve the design and analysis of clinical trials within drug development. The implementation of a Monte-Carlo Mapped Power method made it possible to rapidly generate multiple hypotheses and to compute the corresponding sample sizes within 1% of the time usually necessary for more traditional model-based power assessment. By allowing statistical inference across all available data and integrating a mechanistic interpretation of the models, the performance of this new methodology in proof-of-concept and dose-finding trials highlighted the possibility of drastically reducing the number of healthy volunteers and patients exposed to experimental drugs. This thesis furthermore addressed the benefits of OD in planning trials with bioanalytical limits and toxicity constraints, through the development of novel optimality criteria that foremost pinpoint information and safety aspects. The use of these methodologies showed better estimation properties and robustness in the ensuing data analysis and reduced the number of patients exposed to severe toxicity seven-fold. Finally, predictive tools for maximum tolerated dose selection in Phase I oncology trials were explored for a combination therapy whose main dose-limiting toxicity is hematological. In this example, Bayesian and model-based approaches provided the incentive for a paradigm change away from the traditional rule-based "3+3" design algorithm. Throughout this thesis, several examples have shown the possibility of streamlining clinical trials with more model-based design and analysis support. Ultimately, efficient use of the data can increase the probability of a successful trial and strengthen its ethical conduct.
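The power computations referred to above can be illustrated with a plain Monte Carlo evaluation (a simplified sketch, not the Monte-Carlo Mapped Power method itself and not a pharmacometric model; all parameters are invented): a two-arm trial with a normal endpoint is simulated repeatedly, and the fraction of simulations in which a likelihood ratio test detects the treatment effect estimates the power for a given sample size.

```python
import numpy as np
from scipy import stats

def mc_power(n_per_arm, effect, sd, n_sim, rng, alpha=0.05):
    """Monte Carlo power of a likelihood ratio test for a treatment effect
    on a normal endpoint in a two-arm parallel trial."""
    crit = stats.chi2.ppf(1 - alpha, df=1)
    hits = 0
    for _ in range(n_sim):
        placebo = rng.normal(0.0, sd, n_per_arm)
        active = rng.normal(effect, sd, n_per_arm)
        pooled = np.concatenate([placebo, active])
        rss0 = np.sum((pooled - pooled.mean()) ** 2)                     # common-mean model
        rss1 = (np.sum((placebo - placebo.mean()) ** 2)
                + np.sum((active - active.mean()) ** 2))                 # separate-means model
        hits += pooled.size * np.log(rss0 / rss1) > crit                 # -2 log LR vs chi2(1)
    return hits / n_sim

rng = np.random.default_rng(6)
for n in (20, 40, 80):
    print(f"n per arm {n:>3}: estimated power {mc_power(n, effect=5.0, sd=10.0, n_sim=2000, rng=rng):.2f}")
```

Scanning the sample size until the estimated power reaches the target value gives a brute-force, model-based sample size calculation; the methods in the thesis achieve the same goal far more efficiently.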