  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Adjusting for covariate effects in biomarker studies using the subject-specific threshold ROC curve /

Janes, Holly, January 2005 (has links)
Thesis (Ph. D.)--University of Washington, 2005. / Vita. Includes bibliographical references (p. 173-178).
102

Data-driven approach for control performance monitoring and fault diagnosis

Yu, Jie, January 1900 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2007. / Vita. Includes bibliographical references.
103

The comparative bioavailability and in vitro assessment of solid oral dosage forms of paracetamol

Braae, Karen 02 April 2013 (has links)
The dissolution profiles of eight lots of paracetamol tablets representing seven different tablet brands are determined in a USP rotating basket assembly and a stationary basket-rotating paddle apparatus. The in vitro data are expressed in terms of dissolution parameters and inter-tablet differences are assessed statistically using analysis of variance (ANOVA) and the Scheffé test. Highly significant differences are observed between a number of the tablets at the 95% confidence level. Representative tablets from the dissolution rate study and a control dose of paracetamol dissolved in water are subsequently investigated in a 4 × 4 Latin square design bioavailability trial. Serum and urine samples are collected and assayed for paracetamol alone (serum) and together with its metabolites (urine) by means of high pressure liquid chromatography. The in vivo data are expressed in terms of bioavailability parameters and differences between the test doses are assessed by means of ANOVA. No significant differences are observed between the dosage forms at the 95% confidence level.
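The in vitro comparison described above (one-way ANOVA followed by a Scheffé post-hoc test) can be sketched in a few lines. The data below are invented for illustration, not the thesis's dissolution measurements, and only three hypothetical brands are compared rather than the eight lots of the study:

```python
import numpy as np
from scipy import stats

# Hypothetical percent-dissolved values at a fixed time point for three brands.
rng = np.random.default_rng(0)
brand_a = rng.normal(85, 3, size=6)   # fast-dissolving brand
brand_b = rng.normal(84, 3, size=6)   # similar to A
brand_c = rng.normal(70, 3, size=6)   # clearly slower

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(brand_a, brand_b, brand_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

# Scheffe's criterion: means i and j differ if
# |mean_i - mean_j| > sqrt((k-1) * F_crit * MSE * (1/n_i + 1/n_j))
groups = [brand_a, brand_b, brand_c]
k, n = len(groups), 6
N = k * n
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)
f_crit = stats.f.ppf(0.95, k - 1, N - k)
margin = np.sqrt((k - 1) * f_crit * mse * (2 / n))
print("A vs C differ:", abs(brand_a.mean() - brand_c.mean()) > margin)
```

The Scheffé margin is the same for every pairwise comparison here because the groups are equally sized, which is also why it is a convenient post-hoc companion to a balanced ANOVA.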
104

Detection copy number variants profile by multiple constrained optimization

Zhang, Yue 04 September 2017 (has links)
Copy number variation (CNV), caused by genome rearrangement, generally refers to increases or decreases in the copy number of large genome segments longer than 1 kb. Such variations mainly appear as sub-microscopic deletions and duplications. Copy number variation is an important component of genome structural variation and is one of the pathogenic factors of human diseases. Next-generation sequencing (NGS) is a popular CNV detection technology that has been widely used in many fields of life science research; it offers the advantages of high throughput and low cost. By tailoring NGS technology, it is possible to sequence individual cells. Such single-cell sequencing can reveal the gene expression status and genomic variation profile of a single cell, and is promising for the study of tumors, developmental biology, neuroscience, and other fields. However, two challenging problems are encountered in CNV detection for NGS data. First, since single-cell sequencing requires a special genome amplification step to accumulate enough sample material, a large amount of bias is introduced, making the calling of copy number variants rather challenging. The performance of many popular copy number calling methods, designed for bulk sequencing, is not consistent, and these methods cannot be applied to single-cell sequencing data directly. Second, genome data for multiple samples must be analyzed simultaneously, so that similar cells are assembled and subgrouped accurately and efficiently. The high level of noise in single-cell sequencing data negatively affects the reliability of sequence reads and leads to inaccurate patterns of variation. To handle the problem of reliably finding CNVs in NGS data, in this thesis we first establish a workflow for analyzing NGS and single-cell sequencing data.
CNV identification is formulated as a quadratic optimization problem with both sparsity and smoothness constraints. Tailored from the alternating direction minimization (ADM) framework, an efficient numerical solution is designed accordingly. The proposed model was tested extensively to demonstrate its performance. It is shown that the proposed approach can successfully reconstruct CNVs, especially somatic copy number alteration patterns, from raw data. Compared with existing counterparts, it achieved superior or comparable performance in CNV detection. To tackle the issue of recovering the hidden blocks within multiple single-cell DNA-sequencing samples, we present a permutation-based model that rearranges the samples so that similar ones are positioned adjacently. The permutation is guided by the total variation (TV) norm of the recovered copy number profiles, and is continued until the TV norm is minimized, at which point similar samples are stacked together to reveal block patterns. Accordingly, an efficient numerical scheme for finding this permutation is designed, tailored from the alternating direction method of multipliers. Application of this method to both simulated and real data demonstrates its ability to recover the hidden structures of single-cell DNA sequences.
105

Dimension reduction in the regressions through weighted variance estimation

Yang, Yani 01 January 2009 (has links)
No description available.
106

Improved tree species discrimination at leaf level with hyperspectral data combining binary classifiers

Dastile, Xolani Collen January 2011 (has links)
The purpose of the present thesis is to show that hyperspectral data can be used for discrimination between different tree species. The data set used in this study contains the hyperspectral measurements of leaves of seven savannah tree species. The data is high-dimensional and shows large within-class variability combined with small between-class variability, which makes discrimination between the classes challenging. We employ two classification methods: k-nearest neighbour and feed-forward neural networks. For both methods, direct 7-class prediction results in high misclassification rates. However, binary classification works better. We construct binary classifiers for all possible binary classification problems and combine them with Error Correcting Output Codes. In particular, we show that the use of 1-nearest neighbour binary classifiers results in no improvement compared to a direct 1-nearest neighbour 7-class predictor. In contrast to this negative result, the use of neural network binary classifiers improves accuracy by 10% compared to a direct neural network 7-class predictor, and error rates become acceptable. This can be further improved by choosing only suitable binary classifiers for combination.
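The combination scheme above (binary classifiers fused via Error Correcting Output Codes) can be sketched with a toy 1-nearest-neighbour example. The three classes, their 2-D features, and the one-vs-rest code matrix below are all invented for illustration; the thesis works with seven species and high-dimensional hyperspectral bands:

```python
import numpy as np

def nn1_predict(X_train, y_train, x):
    """Plain 1-NN: return the label of the closest training point."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]

# One-vs-rest code matrix: 3 classes -> 3 binary dichotomies coded +1/-1.
code = np.array([[ 1, -1, -1],
                 [-1,  1, -1],
                 [-1, -1,  1]])

# Toy training data: three tight clusters, 20 points each.
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((20, 2)) for c in centers])
y = np.repeat([0, 1, 2], 20)

def ecoc_predict(x):
    # Run each binary 1-NN on the relabelled data, then pick the class
    # whose codeword is nearest in Hamming distance to the output vector.
    outputs = np.array([nn1_predict(X, code[y, b], x)
                        for b in range(code.shape[1])])
    return int(np.argmin([np.sum(outputs != code[c]) for c in range(3)]))

print(ecoc_predict(np.array([3.9, 0.1])))  # point near the class-1 cluster
```

The Hamming-distance decoding is what gives ECOC its error-correcting character: a single wrong binary classifier need not flip the final class if the codewords are well separated, which is also why the thesis can improve accuracy by dropping unsuitable binary classifiers.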
107

Development of an antiretroviral solid dosage form using multivariate analysis

Nqabeni, Luxolo January 2007 (has links)
The aim of pharmaceutical development is to design a quality product and a manufacturing process that delivers the product in a reproducible manner. The development of a new or generic formulation is based on a large number of experiments. Statistics provides many tools for studying the conditions of formulations and processes, and enables us to optimize them while minimizing experimentation. The purpose of this study was to apply design of experiments (DOE) methodology and multivariate analysis to the development and optimization of tablet formulations containing 150 mg lamivudine manufactured by direct compression.
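A DOE study of the kind described above typically starts from a coded design matrix. The sketch below builds a two-level full factorial layout; the factor names are invented placeholders, not the actual formulation variables of the lamivudine study:

```python
from itertools import product

# Hypothetical formulation factors, each coded at a low (-1) and high (+1) level.
factors = {
    "binder_pct": (-1, +1),
    "lubricant_pct": (-1, +1),
    "compression_force": (-1, +1),
}

# Full factorial: every combination of factor levels is one experimental run.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for run, combo in enumerate(design, 1):
    print(run, combo)
print(f"{len(design)} runs for a 2^{len(factors)} full factorial design")
```

Running each row of this matrix and regressing the measured responses (e.g. hardness, dissolution) on the coded levels is what lets DOE estimate main effects and interactions from comparatively few experiments.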
108

Isotropy test and variance estimation for high order statistics of spatial point process

Ma, Tingting 01 January 2011 (has links)
No description available.
109

Generalized Semiparametric Approach to the Analysis of Variance

Pathiravasan, Chathurangi Heshani Karunapala 01 August 2019 (has links) (PDF)
The one-way analysis of variance (ANOVA) relies on several assumptions and is used to compare the means of two or more independent groups of a factor. To relax the normality assumption in one-way ANOVA, recent studies have considered exponential distortion, or tilt, of a reference distribution. The reason for the exponential distortion had not been investigated before; thus the main objective of this study is to examine it closely. In doing so, a new generalized semiparametric approach to one-way ANOVA is introduced. The proposed method compares not only the means but also the variances of distributions of any type. Simulation studies show that the proposed method performs favorably compared with classical ANOVA. The method is demonstrated on meteorological radar data and credit limit data. The asymptotic distribution of the proposed estimator was determined in order to test the hypothesis of equality of one-sample multivariate distributions. The power comparison for one-sample multivariate distributions reveals a significant power improvement in the proposed chi-square test over Hotelling's T-squared test for non-normal distributions. A bootstrap paradigm is incorporated for testing equidistribution of multiple samples. As far as power comparison simulations for multiple large samples are concerned, the proposed test outperforms existing parametric, nonparametric, and semiparametric approaches for non-normal distributions.
110

The Robustness of O'Brien's r Transformation to Non-Normality

Gordon, Carol J. (Carol Jean) 08 1900 (has links)
A Monte Carlo simulation technique was employed in this study to determine if the r transformation, a test of homogeneity of variance, affords adequate protection against Type I error over a range of equal sample sizes and numbers of groups when samples are obtained from normal and non-normal distributions. Additionally, this study sought to determine if the r transformation is more robust than Bartlett's chi-square to deviations from normality. Four populations were generated representing normal, uniform, symmetric leptokurtic, and skewed leptokurtic distributions. For each sample size (6, 12, 24, 48), number of groups (3, 4, 5, 7), and population distribution condition, the r transformation and Bartlett's chi-square were calculated. This procedure was replicated 1,000 times; the actual significance level was determined and compared to the nominal significance level of .05. On the basis of the analysis of the generated data, the following conclusions are drawn. First, the r transformation is generally robust to violations of normality when the size of the samples tested is twelve or larger. Second, in the instances where a significant difference occurred between the actual and nominal significance levels, the r transformation produced (a) conservative Type I error rates if the kurtosis of the parent population was 1.414 or less and (b) an inflated Type I error rate when the index of kurtosis was three. Third, the r transformation should not be used if the sample size is smaller than twelve. Fourth, the r transformation is more robust to non-normality in all instances, but the Bartlett test is superior in controlling Type I error when samples are from a population with a normal distribution. In light of these conclusions, the r transformation may be used as a general utility test of homogeneity of variances when either the distribution of the parent population is unknown or is known to be non-normal, and the size of the equal samples is at least twelve.
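The Monte Carlo design above can be sketched compactly. O'Brien's r transformation is not available in SciPy, so the sketch below estimates the actual Type I error rate of Bartlett's chi-square test only, under a normal and a heavy-tailed parent distribution; the group count, sample size, and replication count are scaled down from the study's:

```python
import numpy as np
from scipy import stats

def type1_rate(sampler, k=3, n=12, reps=2000, alpha=0.05, seed=42):
    """Fraction of replications in which Bartlett's test rejects H0,
    when H0 (equal variances) is in fact true."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        groups = [sampler(rng, n) for _ in range(k)]  # identical populations
        if stats.bartlett(*groups).pvalue < alpha:
            rejections += 1
    return rejections / reps

normal_rate = type1_rate(lambda rng, n: rng.standard_normal(n))
heavy_rate = type1_rate(lambda rng, n: rng.standard_t(df=3, size=n))
print(f"normal parent: {normal_rate:.3f}, heavy-tailed t(3) parent: {heavy_rate:.3f}")
```

Under the normal parent the empirical rate sits near the nominal .05, while under the leptokurtic t(3) parent it inflates markedly, which is exactly the sensitivity to non-normality that motivated the study's comparison with the r transformation.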
