1

Topics in ordinal logistic regression and its applications

Kim, Hyun Sun 15 November 2004 (has links)
Sample size calculation methods for ordinal logistic regression are proposed for testing statistical hypotheses. The work was motivated by the need for statistical analysis of red imported fire ant data. The proposed methods use an approximation based on the moment-generating function, and several correction methods are also suggested. When a prior data set is available, an empirical method is explored. Application of the proposed methodology to the fire ant mating flight data is demonstrated, and the proposed sample size and power calculation methods are applied to hypothesis testing problems. Simulation studies illustrate their performance and compare them with existing methods.
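The thesis's moment-generating-function approximation is not reproduced here, but the power of a two-group ordinal comparison can be checked by simulation. The sketch below (hypothetical category probabilities and effect size) relies on the fact that the Wilcoxon rank-sum test is asymptotically the score test of the proportional-odds model for a two-group comparison:

```python
import numpy as np
from scipy import stats

def simulate_power(n_per_group, base_probs, log_or, n_sim=2000, alpha=0.05, seed=1):
    """Estimate power of a two-group ordinal comparison by simulation.

    Group 0 follows `base_probs`; group 1 shifts the cumulative logits
    by `log_or` (the proportional-odds assumption). The Wilcoxon
    rank-sum test is used as the test statistic."""
    rng = np.random.default_rng(seed)
    base_probs = np.asarray(base_probs, float)
    # cumulative logits of group 0, shifted by the log odds ratio for group 1
    cum = np.cumsum(base_probs)[:-1]
    logit = np.log(cum / (1 - cum))
    cum1 = 1 / (1 + np.exp(-(logit + log_or)))
    probs1 = np.diff(np.concatenate(([0.0], cum1, [1.0])))
    k = len(base_probs)
    rejections = 0
    for _ in range(n_sim):
        y0 = rng.choice(k, size=n_per_group, p=base_probs)
        y1 = rng.choice(k, size=n_per_group, p=probs1)
        p = stats.mannwhitneyu(y0, y1, alternative="two-sided").pvalue
        rejections += p < alpha
    return rejections / n_sim
```

Scanning `n_per_group` upward until the estimated power crosses a target such as 0.8 gives a simulation-based sample size, useful as a cross-check on closed-form approximations.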
3

Inferring condition specific regulatory networks with small sample sizes : a case study in Bacillus subtilis and infection of Mus musculus by the parasite Toxoplasma gondii

Pacini, Clare January 2017 (has links)
Modelling interactions between genes and their regulators is fundamental to understanding how, for example, a disease progresses, or the impact of inserting a synthetic circuit into a cell. We use an existing method to infer regulatory networks under multiple conditions: the Joint Graphical Lasso (JGL), a shrinkage-based Gaussian graphical model. We apply this method to two data sets: first, a publicly available set of microarray experiments perturbing the gram-positive bacterium Bacillus subtilis under multiple experimental conditions; second, a set of RNA-seq samples of mouse (Mus musculus) embryonic fibroblasts (MEFs) infected with different strains of the parasite Toxoplasma gondii. In both cases we infer a subset of the regulatory networks using relatively small sample sizes.

For the Bacillus subtilis analysis we focused on the use of these regulatory networks in synthetic biology and found examples of transcriptional units active only under a subset of conditions; this information can be useful when designing circuits to have condition-dependent behaviour. We developed methods for large-network decomposition that make use of the condition information, and showed that our method identifies single transcriptional units from the larger network with greater specificity. By annotating these results with known information we identified novel connections, and found supporting evidence for a selection of them in publicly available experimental results.

Biological data collection is typically expensive, and because of the relatively small sample size of our MEF data set we developed a novel empirical Bayes method for reducing the false discovery rate when estimating block-diagonal covariance matrices. Using these methods we inferred regulatory networks for the host infected with either the ME49 or the RH strain of the parasite, enabling the identification of known and novel regulatory mechanisms.
The Toxoplasma gondii parasite has been shown to subvert host function using mechanisms similar to those used by cancers, and through our analysis we identified genes, networks and ontologies associated with cancer, including connections that have not previously been associated with T. gondii infection. Finally, a Shiny application was developed as an online resource giving access to the inferred Bacillus subtilis networks, with interactive methods for exploring them, including expansion of sub-networks and large-network decomposition.
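The JGL itself is implemented in the R package JGL rather than in Python's scikit-learn; as a minimal sketch of the underlying model, the per-condition graphical lasso below (toy data, and no joint penalty linking the conditions) recovers a conditional-dependence network for each condition separately:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

def make_data(n=40, p=5):
    """Toy expression matrix: n samples x p genes, one planted edge."""
    X = rng.normal(size=(n, p))
    X[:, 1] += 0.8 * X[:, 0]        # gene 1 "regulated" by gene 0
    return X

networks = {}
for condition in ["control", "infected"]:
    X = make_data()
    model = GraphicalLasso(alpha=0.1).fit(X)
    # nonzero off-diagonal precision entries = conditional dependencies
    prec = model.precision_
    edges = (np.abs(prec) > 1e-6) & ~np.eye(prec.shape[0], dtype=bool)
    networks[condition] = edges
```

In the joint setting, the fused or group penalty of the JGL additionally shrinks the condition-specific precision matrices towards each other, borrowing strength across conditions exactly where sample sizes are small.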
4

A Bayesian cost-benefit approach to sample size determination and evaluation in clinical trials

Kikuchi, Takashi January 2011 (has links)
Current practice for sample size computation in clinical trials is largely based on frequentist or classical methods. These methods have the drawback of requiring a point estimate of the variance of the treatment effect and are based on arbitrary settings of type I and type II error rates. They also do not directly address the question of achieving the best balance between the costs of the trial and the possible benefits of a new medical treatment, and they fail to consider the important fact that the number of users depends on the evidence for improvement over the current treatment.

A novel Bayesian approach, Behavioral Bayes (or BeBay for short) (Gittins and Pezeshk, 2000a,b, 2002a,b; Pezeshk, 2003), assumes that the number of patients switching to the new treatment depends on the strength of the evidence provided by clinical trials, and takes a value between zero and the number of potential patients in the country. The better the new treatment, the more patients switch to it and the greater the resulting benefit. The model defines the optimal sample size as the one that maximises the expected net benefit resulting from the trial. Gittins and Pezeshk use a simple form of benefit function for paired comparisons between two medical treatments and assume that the variance of the efficacy is known.

The research in this thesis generalises these original conditions by introducing a logistic benefit function to account for differences in efficacy and safety between two drugs. The model is also extended to the more general cases of unpaired comparisons and unknown variance. The expected net benefit defined by Gittins and Pezeshk is based on the efficacy of the new drug only; it does not consider the incidence of adverse reactions and their effect on patients' preferences. Here we include the costs of treating adverse reactions and calculate the total benefit in terms of how much the new drug can reduce societal expenditure.
We describe how our model may be used for the design of phase III clinical trials, cluster randomised clinical trials and bridging studies, in some detail and using illustrative examples based on published studies. For phase III trials we allow the possibility of unequal treatment group sizes, which often occur in practice. Bridging studies are those carried out to extend the range of applicability of an established drug, for example to new ethnic groups. Throughout, the objective of our procedures is to optimise the cost-benefit balance in terms of national health care. BeBay is the leading methodology for determining sample sizes on this basis. It explicitly takes account of the roles of three decision makers: patients and doctors, pharmaceutical companies, and the health authority.
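Gittins and Pezeshk's benefit function is not reproduced here; the sketch below only illustrates the central idea, with invented numbers throughout (population size, costs, and a logistic switching curve standing in for the BeBay benefit function): choose the per-arm sample size that maximises expected benefit minus trial cost.

```python
import numpy as np

def expected_net_benefit(n, delta=0.3, sigma=1.0, n_patients=1e6,
                         benefit_per_patient=100.0, cost_per_subject=2e3,
                         fixed_cost=1e6, k=1.5):
    """Illustrative expected net benefit of a two-arm trial with n per arm.

    The share of future patients switching to the new drug rises with
    the strength of evidence, modelled here as a logistic function of
    the expected z statistic (a stand-in for the BeBay benefit function)."""
    z = delta / (sigma * np.sqrt(2.0 / n))      # expected test statistic
    switching_share = 1.0 / (1.0 + np.exp(-k * (z - 2.0)))
    benefit = n_patients * switching_share * benefit_per_patient
    cost = fixed_cost + cost_per_subject * 2 * n
    return benefit - cost

sizes = np.arange(10, 2001)
enb = np.array([expected_net_benefit(n) for n in sizes])
optimal_n = sizes[np.argmax(enb)]
```

Because the switching curve saturates while sampling cost grows linearly, the expected net benefit has an interior maximum: past a point, extra patients add cost but barely move the evidence.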
5

Therapeutic Assessment as Preparation for Psychotherapy

Vance, Jeffrey Michael 08 1900 (has links)
This study examined the impact of therapeutic assessment (TA) on participants recruited from the UNT Psychology Clinic's waiting list. Using a pretest-posttest design, participants completed measures before and after their assessment. UNT Psychology Clinic archive data were used to compare this sample with clients who received traditional information-gathering assessments with implicit measures, those receiving assessments relying only on self-report measures, and those who did not receive an assessment before beginning psychotherapy. The findings vary with the criteria examined. Because of the small sample in the experimental group, null hypothesis tests did not reach statistical significance. However, the TA group's scores on the Outcome Questionnaire-45 (OQ) and the Working Alliance Inventory (WAI) indicated better outcomes than those without a TA, with large effect sizes. Furthermore, those who received a TA were more likely than those without one to score below the clinically significant cutoff level on the OQ. The study raises issues for consideration in what is deemed "effective" in therapeutic efficacy research.
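The effect-size comparison can be illustrated with a standardized mean difference; the scores below are invented for illustration and are not from the study:

```python
import numpy as np

def cohens_d(pre, post):
    """Standardized mean change, pooling pre/post standard deviations."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
    return (pre.mean() - post.mean()) / pooled_sd

# hypothetical OQ-45 scores (lower = better) before and after assessment
pre  = [88, 75, 92, 81, 70, 95]
post = [70, 62, 85, 66, 58, 80]
d = cohens_d(pre, post)          # d > 0.8 is conventionally a large effect
```

For a pre-post design, standardizing by the change-score standard deviation is a common alternative; the pooled form above matches the between-group convention.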
6

A Permutation-Based Confidence Distribution for Rare-Event Meta-Analysis

Andersen, Travis 18 April 2022 (has links)
Confidence distributions (CDs), which provide evidence across all levels of significance, are receiving increasing attention, especially in meta-analysis. Meta-analyses allow independent study results to be combined to produce one overall conclusion and are particularly useful in public health and medicine. For studies with rare binary outcomes, many traditional meta-analysis methods fail (Sutton et al. 2002; Efthimiou 2018; Liu et al. 2018; Liu 2019; Hunter and Schmidt 2000; Kontopantelis et al. 2013). Zabriskie et al. (2021b) develop a permutation-based method to analyze such data when study treatment effects vary beyond what is expected by chance. In this work, we prove that this method can be considered a CD. Additionally, we develop two new metrics to assess a CD's relative performance.
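Zabriskie et al.'s permutation statistic is not reproduced here, but the general recipe for turning a permutation test into a confidence distribution can be sketched: impose each candidate effect value as the null, compute a one-sided permutation p-value, and read the resulting p-value curve as the CD. The two small samples below are invented:

```python
import numpy as np

def permutation_pvalue(x, y, shift, n_perm=999, rng=None):
    """One-sided permutation p-value for H0: mean(x) - mean(y) = shift."""
    rng = rng or np.random.default_rng(0)
    y_shifted = np.asarray(y, float) + shift   # impose the null value
    combined = np.concatenate([np.asarray(x, float), y_shifted])
    n = len(x)
    observed = np.mean(x) - np.mean(y_shifted)
    count = 1                                  # include the observed split
    for _ in range(n_perm):
        rng.shuffle(combined)
        stat = combined[:n].mean() - combined[n:].mean()
        count += stat >= observed
    return count / (n_perm + 1)

# the CD is the p-value curve H(theta) over a grid of candidate effects
x = np.array([2.1, 2.5, 3.0, 2.8, 2.2])
y = np.array([1.0, 1.4, 0.8, 1.6, 1.2])
grid = np.linspace(-1, 4, 41)
cd = [permutation_pvalue(x, y, t) for t in grid]
```

The curve runs from near 0 (effect values the data rule out from below) to near 1, and its quantiles give confidence intervals at any level, which is what makes a CD an "all levels of significance" object.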
7

適應性計數值損失函數管制圖之設計 / Design of the Adaptive Loss Function Control Chart for Binomial Data

李宜臻, Lee, I-Chen Unknown Date (has links)
This thesis proposes an algorithm for a new control chart, the loss function control chart, based on the Taguchi loss function with an adaptive scheme for binomial data. The loss function control chart can monitor cost variation in a process because the loss function is built into its design, giving an economic view of production cost. The research provides designs of the loss function control chart with a specified variable sampling interval (VSI), an optimal VSI, a variable sample size (VSS) and variable parameters (VP), respectively. Numerical analyses show that the specified-VSI, optimal-VSI, optimal-VSS and optimal-VP loss function charts significantly outperform the fixed-parameters (Fp) loss function chart, and that costs can be controlled systematically.
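As an illustration of the chart's ingredients rather than the thesis's actual design procedure, the sketch below sets Monte-Carlo control and warning limits for a quadratic (Taguchi-style) loss statistic on binomial data and applies a simple two-interval VSI rule; the cost constant and interval lengths are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

def vsi_loss_chart(p0, n, k_cost=100.0, n_sim=100_000):
    """Monte-Carlo control/warning limits for a Taguchi-style loss
    statistic on the sample fraction nonconforming (illustrative)."""
    p_hat = rng.binomial(n, p0, size=n_sim) / n
    loss = k_cost * (p_hat - p0) ** 2          # quadratic loss around target
    ucl = np.quantile(loss, 0.9973)            # control limit
    wl = np.quantile(loss, 0.95)               # warning limit
    return ucl, wl

def next_interval(loss_value, ucl, wl, long_h=4.0, short_h=0.5):
    """VSI rule: sample sooner when the loss statistic looks suspicious."""
    if loss_value > ucl:
        return 0.0          # signal: investigate now
    return short_h if loss_value > wl else long_h

ucl, wl = vsi_loss_chart(p0=0.05, n=100)
```

The adaptive step is the whole point of a VSI design: in-control samples earn the long interval, warning-zone samples trigger tighter surveillance, so average time to signal drops without raising the sampling cost when the process is stable.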
