61

Evaluation of network inference algorithms and their effects on network analysis for the study of small metabolomic data sets

Greenyer, Haley 24 May 2022 (has links)
Motivation: Alzheimer’s Disease (AD) is a highly prevalent neurodegenerative disease that causes gradual cognitive decline. As documented in the literature, evidence has recently mounted for the role of metabolic dysfunction in AD, and metabolomic data have therefore been used increasingly in AD studies. Metabolomic disease studies often suffer from small sample sizes and inflated false discovery rates. It is therefore of great importance to identify the algorithms best suited to inferring metabolic networks from small-cohort disease studies. For future benchmarking, and for the development of new metabolic network inference methods, it is similarly important to identify appropriate performance measures for small sample sizes.

Results: The performance of 13 network inference algorithms, including correlation-based, regression-based, information-theoretic, and hybrid methods, was assessed through benchmarking and structural network analyses. Benchmarking was performed on simulated data with known structures across six sample sizes using three summative performance measures: area under the Receiver Operating Characteristic curve, area under the Precision-Recall curve, and the Matthews Correlation Coefficient. Structural analyses commonly applied in disease studies, including betweenness, closeness, and eigenvector centrality, were applied to the simulated data, and differential network analysis was additionally applied to experimental AD data. Based on the performance-measure benchmarking and network analysis results, I identified Probabilistic Context Likelihood Relatedness of Correlation with Biweight Midcorrelation (PCLRCb), a novel variation of the PCLRC algorithm, as best suited for the prediction of metabolic networks from small-cohort disease studies. Additionally, I identified the Matthews Correlation Coefficient as the best measure with which to evaluate the performance of metabolic network inference methods across small sample sizes.
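To make the evaluation step concrete, here is a minimal sketch (not code from the thesis) of how the Matthews Correlation Coefficient can score a predicted network against a known structure; the symmetric 0/1 adjacency-matrix representation and the function name are illustrative assumptions.

```python
import numpy as np

def mcc_from_adjacency(true_adj, pred_adj):
    """Matthews Correlation Coefficient over the upper triangle of two
    symmetric 0/1 adjacency matrices, so each undirected edge is counted
    once and self-loops are ignored."""
    idx = np.triu_indices(true_adj.shape[0], k=1)
    t, p = true_adj[idx], pred_adj[idx]
    tp = np.sum((t == 1) & (p == 1))
    tn = np.sum((t == 0) & (p == 0))
    fp = np.sum((t == 0) & (p == 1))
    fn = np.sum((t == 1) & (p == 0))
    denom = np.sqrt(float(tp + fp) * float(tp + fn)
                    * float(tn + fp) * float(tn + fn))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0
```

Unlike AUROC and AUPRC, which integrate over thresholds, MCC is computed from a single thresholded network, which matches how an inferred network is actually used in downstream structural analyses.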
62

Calculating power for the Finkelstein and Schoenfeld test statistic

Zhou, Thomas J. 07 March 2022 (has links)
The Finkelstein and Schoenfeld (FS) test is a popular generalized pairwise comparison approach for analyzing prioritized composite endpoints (i.e., components assessed in order of clinical importance). Power and sample size estimation for the FS test, however, is generally done via simulation studies. This simulation approach can be extremely computationally burdensome, a burden that grows with the number of composite endpoint components and with the sample size. We propose an analytic solution for calculating power and sample size for commonly encountered two-component hierarchical composite endpoints. The power formulas are derived assuming underlying population-level distributions for each component outcome, providing a computationally efficient and practical alternative to the standard simulation approach. The proposed analytic approach is extended to derive conditional power formulas, which are used in combination with the promising zone methodology to perform sample size re-estimation in adaptive clinical trials. Prioritized composite endpoints with more than two components are also investigated. Extensive Monte Carlo simulation studies demonstrate that the performance of the proposed analytic approach is consistent with that of the standard simulation approach. We also demonstrate through simulations that the proposed methodology possesses generally desirable properties, including robustness to mis-specified underlying distributional assumptions. We illustrate the proposed formulas by calculating power and sample size for the Transthyretin Amyloidosis Cardiomyopathy Clinical Trial (ATTR-ACT) and the EMPULSE trial of empagliflozin for acute heart failure.
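The FS test's building block is a hierarchical comparison of every treatment-control patient pair: compare on the most important component first and fall through to the next only if the pair is tied or indeterminate. A minimal sketch of that scoring step, illustrative only (it ignores the censoring rules of the actual FS comparison and is not the analytic power formulas proposed here):

```python
def fs_pair_score(trt, ctl, tie_tol=0.0):
    """Score one treatment/control pair on a two-component hierarchy.

    Each subject is a dict with a primary outcome 'time' (e.g., time to
    event, larger is better) and a secondary outcome 'score'.  Returns
    +1 if treatment wins, -1 if control wins, 0 if indeterminate.
    Illustrative only: the real FS comparison also handles censoring.
    """
    if abs(trt["time"] - ctl["time"]) > tie_tol:
        return 1 if trt["time"] > ctl["time"] else -1
    if abs(trt["score"] - ctl["score"]) > tie_tol:
        return 1 if trt["score"] > ctl["score"] else -1
    return 0

def fs_statistic(treatment, control):
    """Sum of pairwise scores over all treatment x control pairs."""
    return sum(fs_pair_score(t, c) for t in treatment for c in control)
```

Because the statistic sums over all pairs, simulating its distribution scales poorly with sample size, which is exactly the burden the analytic power formulas are meant to avoid.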
63

Statistical Inference for Generalized Yule Coefficients in 2 × 2 Contingency Tables

Bonett, Douglas G., Price, Robert M. 01 February 2007 (has links)
The odds ratio is one of the most widely used measures of association for 2 × 2 tables. A generalized Yule coefficient transforms the odds ratio into a correlation-like scale with a range from -1 to 1. Yule's Y, Yule's Q, Digby's H, and a new coefficient are special cases of a generalized Yule coefficient. The new coefficient is shown to be similar in value to the phi coefficient. A confidence interval and sample size formula for a generalized Yule coefficient are proposed. The proposed confidence interval is shown to perform much better than the Wald intervals that are implemented in statistical packages.
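For reference, the family has the closed form G = ((ad)^e - (bc)^e) / ((ad)^e + (bc)^e) for cell counts a, b, c, d: e = 1 gives Yule's Q, e = 1/2 Yule's Y, and e = 3/4 Digby's H. The paper's new coefficient is a further member of this family and is not reproduced here. A minimal sketch:

```python
import numpy as np

def generalized_yule(table, expo):
    """Generalized Yule coefficient ((ad)^e - (bc)^e) / ((ad)^e + (bc)^e)
    for a 2x2 table [[a, b], [c, d]].  expo=1 -> Yule's Q,
    expo=0.5 -> Yule's Y, expo=0.75 -> Digby's H.  A small continuity
    correction (+0.5 per cell) could be added for empty cells."""
    (a, b), (c, d) = np.asarray(table, dtype=float)
    ad, bc = (a * d) ** expo, (b * c) ** expo
    return (ad - bc) / (ad + bc)

table = [[35, 15], [10, 40]]
print(generalized_yule(table, 1.0))   # Yule's Q
print(generalized_yule(table, 0.5))   # Yule's Y
print(generalized_yule(table, 0.75))  # Digby's H
```

Smaller exponents pull the coefficient toward zero, which is why the choice of e controls how closely the transformed odds ratio tracks a correlation-like measure such as phi.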
64

Inferential Methods for the Tetrachoric Correlation Coefficient

Bonett, Douglas G., Price, Robert M. 01 January 2005 (has links)
The tetrachoric correlation describes the linear relation between two continuous variables that have each been measured on a dichotomous scale. The treatment of the point estimate, standard error, interval estimate, and sample size requirement for the tetrachoric correlation is cursory and incomplete in modern psychometric and behavioral statistics texts. A new and simple method of accurately approximating the tetrachoric correlation is introduced. The tetrachoric approximation is then used to derive a simple standard error, confidence interval, and sample size planning formula. The new confidence interval is shown to perform far better than the confidence interval computed by SAS. A method to improve the SAS confidence interval is proposed. All of the new results are computationally simple and are ideally suited for textbook and classroom presentations.
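For context, the classical "cosine-pi" approximation obtains the tetrachoric correlation from the sample odds ratio alone; Bonett and Price's approximation refines the exponent using the table margins, and that refinement is not reproduced here. A minimal sketch of the classical version:

```python
import numpy as np

def tetrachoric_cos_pi(table):
    """Classical 'cosine-pi' approximation to the tetrachoric correlation
    from a 2x2 table [[a, b], [c, d]]: r = cos(pi / (1 + sqrt(OR))).
    Bonett and Price refine the exponent 1/2 using the table margins;
    that refinement is not reproduced here."""
    (a, b), (c, d) = np.asarray(table, dtype=float) + 0.5  # continuity corr.
    odds_ratio = (a * d) / (b * c)
    return np.cos(np.pi / (1.0 + np.sqrt(odds_ratio)))
```

The formula behaves sensibly at the extremes: an odds ratio of 1 gives r = cos(pi/2) = 0, and the limits OR -> infinity and OR -> 0 give r -> 1 and r -> -1 respectively.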
65

Sample size re-estimation for superiority clinical trials with a dichotomous outcome using an unblinded estimate of the control group outcome rate

Bliss, Caleb Andrew 22 January 2016 (has links)
Superiority clinical trials are often designed with a planned interim analysis for the purpose of sample size re-estimation (SSR) when limited information is available at the start of the trial to estimate the required sample size. Typically these trials are designed with a two-arm internal pilot in which subjects are enrolled in both treatment arms prior to the interim analysis. Circumstances may sometimes call for a trial with a single-arm internal pilot (enrolling subjects only in the control group). For a dichotomous outcome, Herson and Wittes proposed an SSR method (HW-SSR) that can be applied to single-arm internal pilot trials using an unblinded estimate of the control group outcome rate. Previous evaluations of the HW-SSR method reported conflicting results regarding its impact on the two-sided Type I error rate and power of the final hypothesis test. In this research we evaluate the HW-SSR method under the null and alternative hypotheses in various scenarios to investigate the one-sided Type I error rate and power of trials with a two-arm internal pilot. We find that the one-sided Type I error rate is sometimes inflated and that power is sometimes reduced. We propose a new method, the Critical Value and Power Adjusted Sample Size Re-estimation (CVPA-SSR) algorithm, which adjusts the critical value cutoff used in the final Z-test and the power critical value used in the interim SSR formula to preserve the nominal Type I error rate and the desired power. We conduct simulations for trials with single-arm and two-arm internal pilots to confirm that the CVPA-SSR algorithm does preserve the nominal Type I error rate and the desired power. We also investigate the robustness of the CVPA-SSR algorithm for trials with single-arm and two-arm internal pilots when the assumptions used in designing the trial are incorrect: no Type I error inflation is observed, but significant over- or under-powering of the trial occurs when the treatment effect used to design the trial is misspecified.
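As a hedged illustration of how such operating characteristics are typically checked, the skeleton below estimates the empirical one-sided Type I error of a re-estimated trial by Monte Carlo. This is a generic sketch under simplifying assumptions (pilot subjects not reused, standard two-proportion sample size formula), not the HW-SSR or CVPA-SSR algorithms themselves.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2016)

def required_n(p_c, delta, alpha=0.025, power=0.9):
    """Per-arm n for a one-sided two-proportion z-test (normal approx.)."""
    p_t = p_c + delta
    p_bar = (p_c + p_t) / 2
    num = (norm.ppf(1 - alpha) * np.sqrt(2 * p_bar * (1 - p_bar))
           + norm.ppf(power) * np.sqrt(p_c * (1 - p_c) + p_t * (1 - p_t)))
    return int(np.ceil((num / delta) ** 2))

def one_trial(p_true, delta, n_pilot=50):
    """Re-estimate n from an unblinded pilot estimate of the control
    rate, then run the full trial under the null and test at z_0.975.
    Simplified: pilot subjects are not carried into the final analysis."""
    p_hat = rng.binomial(n_pilot, p_true) / n_pilot
    n = max(required_n(max(p_hat, 0.01), delta), n_pilot)
    x_c = rng.binomial(n, p_true)   # control
    x_t = rng.binomial(n, p_true)   # treatment, no true effect
    p_pool = (x_c + x_t) / (2 * n)
    se = np.sqrt(2 * p_pool * (1 - p_pool) / n)
    return (x_t - x_c) / n / se > norm.ppf(0.975)

reps = 20000
print("empirical one-sided type I error:",
      sum(one_trial(0.30, 0.10) for _ in range(reps)) / reps)
```

Any excess of the printed rate over the nominal 0.025 is the kind of inflation the critical-value adjustment in CVPA-SSR is designed to remove.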
66

A density for a Generalized Likelihood-Ratio Test When the Sample Size is a Random Variable

Neville, Raymond H. 01 May 1966 (has links)
The main objective of this work will be to examine the hypothesis that all the treatment means are the same and equal to some unknown quantity, when we know that the variance is the same for each sample, and to determine whether the conventional method for making this test (the F-test) is applicable when the sample sizes are assumed to be random variables. This hypothesis can be tested by using a likelihood-ratio test. To do this, a density function or distribution has to be found for this ratio, thus permitting us to make probability statements about the occurrence of this ratio under the null hypothesis.
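The applicability question can also be probed numerically. A small sketch, under assumptions of mine (normal errors, truncated-Poisson group sizes), that estimates the empirical size of the usual one-way ANOVA F-test when the group sample sizes are random:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1966)

def f_test_random_n(k=4, mean_n=10, reps=20000, alpha=0.05):
    """Empirical rejection rate of the one-way ANOVA F-test when each
    group's sample size is random (Poisson, truncated at 2) and all
    treatment means are equal (the null hypothesis is true)."""
    rejections = 0
    for _ in range(reps):
        sizes = np.maximum(rng.poisson(mean_n, size=k), 2)
        groups = [rng.normal(0.0, 1.0, size=n) for n in sizes]
        stat, pval = f_oneway(*groups)
        rejections += pval < alpha
    return rejections / reps

# If the F-test remains applicable, this should be close to 0.05.
print("empirical size:", f_test_random_n())
```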
67

An approach to conditional power and sample size re-estimation in the presence of within-subject correlated data in adaptive design superiority clinical trials

Mahoney, Taylor Fitzgerald 22 June 2022 (has links)
A common approach to adapting the design of a clinical trial based on interim results is sample size re-estimation (SSR). SSR allows an increase in the trial's sample size in order to maintain, at the desired nominal level, the power to reject the null hypothesis conditional on the interim observed treatment effect and its variance (i.e., the conditional power). There are several established approaches to SSR for clinical studies with independent and identically distributed observations; however, no established methods exist for trials in which more than one observation is collected per subject and within-subject correlation is present. Without accurately accounting for the within-subject correlation in SSR, a sponsor may incorrectly estimate the trial's conditional power to obtain statistical significance at the final analysis and hence risk overestimating or underestimating the number of patients required to complete the trial as planned. In this dissertation, we propose an extension of Mehta and Pocock's promising zone approach to SSR that accounts for the within-subject correlation in the data for a variety of superiority clinical trials. We consider trials with continuous and binary primary endpoints, and we further explore cases where patients contribute both the same and varying numbers of observations to the analysis of the primary endpoint. Using a simulation study, we show that in each case our proposed conditional power formula accurately calculates conditional power, and that our proposed SSR methodology preserves the nominal Type I error rate under the null hypothesis and maintains adequate power under the alternative hypothesis. Additionally, we demonstrate the robustness of our methodology to mis-specification of a variety of distributional assumptions regarding the underlying population from which the data arise.
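For orientation, the standard conditional-power formula under the current trend for a one-sided z-test is CP = Phi((z_t * sqrt(t) + theta_hat * (1 - t) - z_{1-alpha}) / sqrt(1 - t)) with theta_hat = z_t / sqrt(t), where t is the information fraction at the interim look; the dissertation's formulas additionally adjust the information for within-subject correlation. A minimal sketch, with a generic design-effect adjustment included as one illustrative way such correlation can enter (not necessarily the author's formulation):

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z_interim, t, alpha=0.025):
    """Conditional power under the current trend for a one-sided z-test.

    t is the information fraction at the interim look; the drift is
    estimated from the interim statistic as theta_hat = z_interim/sqrt(t).
    Standard formula; it does not itself adjust for correlated data.
    """
    theta_hat = z_interim / np.sqrt(t)
    num = z_interim * np.sqrt(t) + theta_hat * (1 - t) - norm.ppf(1 - alpha)
    return norm.cdf(num / np.sqrt(1 - t))

def effective_n(n_subjects, m_obs, rho):
    """Generic design-effect adjustment: effective sample size for m
    exchangeably correlated (correlation rho) observations per subject."""
    return n_subjects * m_obs / (1 + (m_obs - 1) * rho)

print(conditional_power(z_interim=1.5, t=0.5))  # about 0.59
```

The design-effect term makes the practical point concrete: ignoring a positive rho overstates the information accrued, and hence overstates conditional power at the interim look.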
68

Sample Size Calculations in Simple Linear Regression: A New Approach

Guan, Tianyuan 04 October 2021 (has links)
No description available.
69

Why and How to Report Distributions of Optima in Experiments on Heuristic Algorithms

Fitton, N V. January 2001 (has links)
No description available.
70

A MULTIVARIATE STATISTICAL ANALYSIS ON THE SAMPLING UNCERTAINTIES OF GEOMETRIC AND DIMENSIONAL ERRORS FOR CIRCULAR FEATURES

ACHARYA, SRIKANTH B. 13 July 2005 (has links)
No description available.
