About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
51

Deconstructing heterogeneity in adolescent sexual behaviour: a person-centered, developmental systems approach

Howard, Andrea Louise Dalton
This study examined heterogeneity in adolescents' experimentation with partnered sexual behaviours. Participants were 88 high school students in Edmonton, Alberta (M age = 16.59, SD = .95). Students completed online surveys once every two months from December 2008 through December 2009. Surveys tracked students' reports of seven sexual behaviours ranging in intimacy from holding hands to intercourse. Growth mixture models were used to sort students' trajectories of sexual behaviours across months into latent classes based on similar profiles. The best-fitting model revealed three distinct classes, labeled inexperienced, experimenting, and experienced. Students classified as inexperienced primarily reported only lower-intimacy, non-genital sexual behaviours across months, and many reported no sexual behaviours. Students classified as experimenting and experienced reported similar levels of higher-intimacy sexual behaviours across months. Most experimenting students' behaviours appeared to increase gradually from less to more intimate, whereas experienced students appeared to make abrupt month-to-month transitions between lower- and higher-intimacy behaviours. Demographic, personal, peer, and family variables provided additional information that sharpened distinctions among classes and explained residual within-class heterogeneity. The probability of being classified as inexperienced was highest for students who were younger, reported fewer sexually experienced friends, and reported lower parent behavioural control. Students who reported higher parent behavioural control had the highest probability of being classified as experimenting. Relations between trajectories of sexual behaviour intimacy and risk factors (e.g., later pubertal timing, fewer problem behaviours) and protective-enhancing resources (e.g., higher psychosocial maturity, more intimate friendships) varied across classes. This study shows that there are multiple pathways of experimentation with sexual behaviour in adolescence. Results are consistent both with studies that emphasize the potential for sex in adolescence to be high-risk, and with studies and arguments that emphasize the potential for sex in adolescence to play an important preparatory role toward healthy adult sexual functioning. Theoretical arguments and discussion are guided by a person-centered, developmental systems approach.
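As a rough illustration of the latent-class idea behind growth mixture modelling (a simplified stand-in, not the author's actual model, which also includes within-class growth factors), the sketch below summarises each student's monthly trajectory by a per-person intercept and slope and then fits a Gaussian mixture over those summaries, choosing the number of classes by BIC. All data and parameter values are simulated and hypothetical.

```python
# Hypothetical two-stage stand-in for a growth mixture model.
# Stage 1: summarise each adolescent's trajectory by an OLS intercept/slope.
# Stage 2: cluster the per-person summaries with a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_students, n_waves = 88, 7          # number of survey waves is illustrative
months = np.arange(n_waves)

# Simulate three latent trajectory classes of an "intimacy level" score.
true_class = rng.choice(3, size=n_students, p=[0.4, 0.35, 0.25])
intercepts = np.array([0.5, 2.0, 5.0])[true_class]
slopes = np.array([0.05, 0.6, 0.1])[true_class]
y = intercepts[:, None] + slopes[:, None] * months + rng.normal(0, 0.5, (n_students, n_waves))

# Stage 1: per-student growth summaries (intercept, slope).
summaries = np.array([np.polyfit(months, y_i, deg=1)[::-1] for y_i in y])

# Stage 2: pick the number of latent classes by BIC and inspect class profiles.
fits = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(summaries)
        for k in (1, 2, 3, 4)}
best_k = min(fits, key=lambda k: fits[k].bic(summaries))
labels = fits[best_k].predict(summaries)
print("chosen number of classes:", best_k)
print("class means (intercept, slope):\n", np.round(fits[best_k].means_, 2))
```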
52

Linear clustering with application to single nucleotide polymorphism genotyping

Yan, Guohua
Single nucleotide polymorphisms (SNPs) have become increasingly popular for a wide range of genetic studies. High-throughput genotyping technologies usually involve a statistical genotype calling algorithm. Most calling algorithms in the literature, using methods such as k-means and mixture models, rely on elliptical structures of the genotyping data; they may fail when the minor allele homozygous cluster is small or absent, or when the data have extreme tails or linear patterns. We propose an automatic genotype calling algorithm by further developing a linear grouping algorithm (Van Aelst et al., 2006). The proposed algorithm clusters unnormalized data points around lines rather than around centroids. In addition, we associate a quality value, the silhouette width, with each DNA sample as well as with each whole plate. This algorithm shows promise for genotyping data generated from TaqMan technology (Applied Biosystems). A key feature of the proposed algorithm is that it applies to unnormalized fluorescent signals when the TaqMan SNP assay is used. The algorithm could also potentially be adapted to other fluorescence-based SNP genotyping technologies such as the Invader assay. Motivated by the SNP genotyping problem, we propose a partial likelihood approach to linear clustering which explores potential linear clusters in a data set. Instead of fully modelling the data, we assume only that the signed orthogonal distance from each data point to a hyperplane is normally distributed. Its relationships with several existing clustering methods are discussed. Some existing methods for determining the number of components in a data set are adapted to this linear clustering setting. Several simulated and real data sets are analyzed for comparison and illustration purposes. We also investigate some asymptotic properties of the partial likelihood approach. A Bayesian version of this methodology is helpful if some clusters are sparse but there is strong prior information about their approximate locations or properties. We propose a Bayesian hierarchical approach which is particularly appropriate for identifying sparse linear clusters. We show that the sparse cluster in SNP genotyping datasets can be successfully identified after a careful specification of the prior distributions.
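A rough sketch of the linear-clustering idea, in which each cluster is a line and only the signed orthogonal distance of a point to that line is modelled as Gaussian. This is an illustrative EM-style re-implementation under stated assumptions, not the authors' algorithm or code, and the random initialisation and fixed iteration count are simplistic placeholders.

```python
# Illustrative EM for "linear clustering": each component is a line, and only the
# signed orthogonal distance of a point to that line is modelled as N(0, sigma^2).
import numpy as np

def fit_line(X, w):
    """Weighted total-least-squares line: returns a unit normal n and offset c (n . x = c)."""
    mu = np.average(X, axis=0, weights=w)
    C = np.cov((X - mu).T, aweights=w, bias=True)
    eigvals, eigvecs = np.linalg.eigh(C)
    n = eigvecs[:, 0]                      # direction of smallest variance is the line normal
    return n, n @ mu

def linear_cluster_em(X, K=2, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    N = len(X)
    resp = rng.dirichlet(np.ones(K), size=N)          # random soft initialisation
    for _ in range(n_iter):
        # M-step: per-cluster line, residual variance, and mixing weight
        params = []
        for k in range(K):
            w = resp[:, k] + 1e-12
            n, c = fit_line(X, w)
            d = X @ n - c
            sigma2 = np.average(d ** 2, weights=w)
            params.append((n, c, sigma2, w.sum() / N))
        # E-step: responsibilities from the orthogonal-distance likelihood
        logp = np.column_stack([
            np.log(pi) - 0.5 * np.log(2 * np.pi * s2) - (X @ n - c) ** 2 / (2 * s2)
            for n, c, s2, pi in params
        ])
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
    return resp.argmax(axis=1), params
```

On two-channel fluorescence data such as TaqMan output, the genotype clusters lie roughly along lines, so three components (two homozygous and one heterozygous) would be the natural starting point; handling a sparse or missing minor-allele cluster is what the Bayesian hierarchical version in the thesis addresses.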
53

Control of spring weed vegetation with saflufenacil

Mellendorf, Tracy 01 January 2009
Field and greenhouse studies were conducted in 2007 and 2008 to evaluate the foliar efficacy of saflufenacil on horseweed (Conyza canadensis (L.) Cronq.). In the field, saflufenacil applied alone at the lowest rate (25 g/ha) resulted in less control than all other herbicide treatments that included saflufenacil. The addition of glyphosate to 25 g/ha of saflufenacil increased the level of control over either herbicide applied alone. However, adding glyphosate to saflufenacil at 50 g/ha or greater was not beneficial because saflufenacil alone provided at least 95% control. Overall, horseweed height at the time of herbicide application had very little effect on the efficacy of saflufenacil applied alone or in combination with glyphosate. Application variables can enhance the foliar activity of saflufenacil. In the greenhouse, saflufenacil combined with glyphosate provided greater control than saflufenacil applied alone on both glyphosate-susceptible and -resistant horseweed populations. Regardless of horseweed population or the addition of glyphosate, saflufenacil had greater activity when crop oil concentrate rather than nonionic surfactant was used as the adjuvant. Decreasing the light level within 24 hours of herbicide application resulted in greater saflufenacil activity. Applying saflufenacil in a pH 5 spray solution resulted in greater activity than at pH 7 or pH 9. Although effects of saflufenacil applied under different temperatures were evident at early evaluation timings, there were no lasting effects on its efficacy. Saflufenacil had significant activity on both glyphosate-susceptible and -resistant horseweed. Under conditions where complete control of horseweed is not achieved, such as low application rates, large target weeds, and varying environmental conditions, application variables including glyphosate tank-mixtures, crop oil concentrate, low spray solution pH, and low light level may increase the level of horseweed control from saflufenacil.
55

Complete Bayesian analysis of some mixture time series models

Hossain, Shahadat January 2012
In this thesis we consider some finite mixture time series models in which each component follows a well-known process, e.g. an AR, ARMA, or ARMA-GARCH process, with either normal-type or Student-t-type errors. We develop MCMC methods and use them in the Bayesian analysis of these mixture models. We introduce some new models, such as mixtures of Student-t ARMA components and mixtures of Student-t ARMA-GARCH components, with complete Bayesian treatments. Moreover, we use the component precision (instead of the variance) with an additional hierarchical level, which makes our model more consistent with the MCMC moves. We have implemented the proposed methods in R and give examples with real and simulated data.
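As a rough illustration of the kind of model involved (not the thesis code, which is in R and fully Bayesian), the sketch below simulates from a two-component mixture of AR(1) processes with Student-t errors, drawing the latent component label independently at each time step; all parameter values are invented.

```python
# Hypothetical sketch: simulate a two-component mixture autoregressive series
# with Student-t errors; all parameter values are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
T = 500
weights = np.array([0.7, 0.3])     # mixing proportions
phi = np.array([0.9, -0.5])        # AR(1) coefficients per component
mu = np.array([0.0, 2.0])          # component intercepts
scale = np.array([0.5, 1.5])       # error scales
nu = 5                             # Student-t degrees of freedom

y = np.zeros(T)
for t in range(1, T):
    k = rng.choice(2, p=weights)                   # latent component at time t
    eps = scale[k] * rng.standard_t(nu)
    y[t] = mu[k] + phi[k] * y[t - 1] + eps

# A Bayesian analysis would place priors on (weights, mu, phi, precision = 1/scale^2)
# and sample the latent labels and parameters with MCMC (e.g. Gibbs/Metropolis steps).
```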
56

General blending models for mixture experiments : design and analysis

Brown, Liam John January 2014
It is felt that the position of the Scheffé polynomials as the primary, and sometimes sole, recourse for practitioners of mixture experiments leads to a lack of enquiry regarding the type of blending behaviour used to describe the response, and that this could be detrimental to achieving experimental objectives. Consequently, a new class of models and new experimental designs are proposed, allowing a more thorough exploration of the experimental region with respect to different blending behaviours, especially those not associated with established models for mixtures, in particular the Scheffé polynomials. The proposed General Blending Models for Mixtures (GBMM) are a powerful tool allowing a broad range of blending behaviour to be described, including that of the Scheffé polynomials (and their reparameterisations) and Becker's models. The potential benefits to be gained from their application include greater model parsimony and increased interpretability. Through this class of models it is possible for a practitioner to reject the assumptions inherent in choosing to model with the Scheffé polynomials and instead adopt a more open approach, flexible to many different types of behaviour. These models are presented alongside a fitting procedure, implementing a stepwise regression approach to the estimation of partially linear models with multiple nonlinear terms. The new class of models has been used to develop designs which allow the response surface to be explored fully with respect to the range of blending behaviours the GBMM may describe. These designs may additionally be targeted at exploring deviation from the behaviour described by the established models and, as such, may be thought of as possessing an enhanced optimality with respect to those models. They possess good properties with respect to standard optimality criteria, but are also designed to be robust against model uncertainty.
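For context, the Scheffé quadratic polynomial for a q-component mixture (proportions summing to one) has the canonical form E[y] = Σ_i β_i x_i + Σ_{i<j} β_ij x_i x_j, with no intercept. The sketch below builds that model matrix for three components and fits it by ordinary least squares; the design points follow a simplex-lattice pattern and the responses are made up.

```python
# Illustrative fit of a three-component Scheffé quadratic polynomial:
# y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3 (no intercept).
# Design points and responses are hypothetical.
import numpy as np
from itertools import combinations

# Simplex-lattice-style design: pure blends, binary 50:50 blends, centroid.
X = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
    [1/3, 1/3, 1/3],
])
y = np.array([4.1, 5.0, 3.2, 6.3, 3.9, 4.8, 5.1])   # made-up responses

def scheffe_quadratic(X):
    cross = np.column_stack([X[:, i] * X[:, j]
                             for i, j in combinations(range(X.shape[1]), 2)])
    return np.hstack([X, cross])

M = scheffe_quadratic(X)
beta, *_ = np.linalg.lstsq(M, y, rcond=None)
print(np.round(beta, 3))   # [b1, b2, b3, b12, b13, b23]
```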
57

Meta-analysis of safety data: approximation of arcsine transformation and application of mixture distribution modeling

Cheng, Hailong 23 September 2015
Meta-analysis is frequently used in the analysis of safety data. In dealing with rare events, commonly used risk measures such as the odds ratio or the risk difference, or their variances, can become undefined when no events are observed in a study. The use of an arcsine transformation and the arcsine difference (AD) as the treatment effect was shown to have desirable statistical properties (Rücker, 2009). However, the interpretation of the AD remains challenging, and this may hamper its utility. To convert the AD to a risk measure similar to the risk difference, two previously proposed linear approximation methods, along with new linear and non-linear methods, were discussed and evaluated. The existing approximation methods generally provide satisfactory approximation when the event proportions are between 0.15 and 0.85. We propose a new linear approximation method, the modified rationalized arcsine unit (MRAU), which improves the approximation when proportions fall outside the range from 0.15 to 0.85. However, the MRAU can still lead to under- or over-estimation depending on the underlying proportion. We then propose a non-linear approximation method based on a Taylor series expansion (TRAUD), which shows the best approximation across the full range of risk levels. However, the variance for TRAUD is less easily estimated and requires bootstrap estimation. Results from simulation studies confirm these findings under a wide array of scenarios. In the second section, heterogeneity in meta-analysis is discussed along with current methods that address the issue. To explore the nature of heterogeneity, finite mixture model methods (FMM) are presented and their application in meta-analysis discussed. The estimates derived from the components in FMM indicate that, even with a pre-specified protocol, the studies included in a meta-analysis may come from different distributions that can cause heterogeneity. The estimated number of components may suggest the existence of multiple sub-populations that a simple overall effect estimate would neglect. We propose that, in the analysis of safety data, the estimates of the number of components and their respective means can provide valuable information for better patient care. In the final section, the application of the approximation methods and the use of FMM are demonstrated in the analysis of two published meta-analysis examples from the medical literature.
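A small worked example of the arcsine difference itself (the MRAU and TRAUD approximations back to the risk-difference scale are thesis-specific and not reproduced here). The per-arm variance 1/(4n) is the standard large-sample value for an arcsine-transformed proportion, and the event counts below are invented; note that the AD remains defined even when an arm has zero events.

```python
# Arcsine difference (AD) for a single two-arm study with rare events.
# AD = arcsin(sqrt(p_trt)) - arcsin(sqrt(p_ctl)); Var(AD) ~ 1/(4*n_trt) + 1/(4*n_ctl).
# Event counts below are invented; the AD stays defined even with zero events.
import math

def arcsine_difference(events_trt, n_trt, events_ctl, n_ctl):
    ad = math.asin(math.sqrt(events_trt / n_trt)) - math.asin(math.sqrt(events_ctl / n_ctl))
    var = 1.0 / (4 * n_trt) + 1.0 / (4 * n_ctl)
    return ad, var

ad, var = arcsine_difference(events_trt=0, n_trt=200, events_ctl=3, n_ctl=210)
se = math.sqrt(var)
print(f"AD = {ad:.4f}, 95% CI = ({ad - 1.96*se:.4f}, {ad + 1.96*se:.4f})")
```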
58

Estimation of individual treatment effect via Gaussian mixture model

Wang, Juan 21 August 2020
In this thesis, we investigate the estimation of treatment effects from a Bayesian perspective, through which one can first obtain the posterior distribution of the unobserved potential outcome from observed data, and then obtain the posterior distribution of the treatment effect. We mainly consider how to represent the joint distribution of the two potential outcomes, one from the treated group and one from the control group, which can give an indirect impression of their correlation, since the estimation of the treatment effect depends on the correlation between the two potential outcomes. The first part of this thesis illustrates the effectiveness of adapting Gaussian mixture models to the treatment effect problem. We apply the mixture models Gaussian Mixture Regression (GMR) and Gaussian Mixture Linear Regression (GMLR) as potentially simple and powerful tools to investigate the joint distribution of the two potential outcomes. For GMR, we consider a joint distribution of the covariate and the two potential outcomes. For GMLR, we consider a joint distribution of the two potential outcomes, which depend linearly on the covariate. Through developing an EM algorithm for GMLR, we find that GMR and GMLR are effective in estimating means and variances, but not in capturing the correlation between the two potential outcomes. In the second part of this thesis, GMLR is modified to capture an unobserved covariance structure (the correlation between outcomes) that can be explained by latent variables introduced through an important model assumption. We propose a much more efficient Pre-Post EM Algorithm to implement the proposed GMLR model with unobserved covariance structure in practice. Simulation studies show that the Pre-Post EM Algorithm performs well not only in estimating means and variances, but also in estimating covariance.
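A hedged sketch of the Gaussian Mixture Regression idea on fully simulated data in which both potential outcomes are visible (never the case in practice, which is precisely why the correlation is hard to learn): fit a Gaussian mixture to (x, y0, y1) and read an average treatment effect off the component means. The data-generating values and the two-component choice are arbitrary assumptions, not the thesis specification.

```python
# Illustrative GMR-style sketch on fully simulated potential outcomes.
# In real data y0 and y1 are never jointly observed; this only shows the mechanics.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
y0 = 1.0 + 0.5 * x + rng.normal(scale=1.0, size=n)                             # control outcome
y1 = 2.5 + 0.5 * x + 0.6 * (y0 - y0.mean()) + rng.normal(scale=0.8, size=n)    # treated outcome

data = np.column_stack([x, y0, y1])
gm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(data)

# Average treatment effect implied by the fitted mixture: weighted difference of
# the y1 and y0 coordinates of the component means.
ate_hat = np.sum(gm.weights_ * (gm.means_[:, 2] - gm.means_[:, 1]))
print(f"estimated ATE ~ {ate_hat:.3f}  (true ATE = 1.5)")
```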
59

A Nonlinear Mixture Autoregressive Model For Speaker Verification

Srinivasan, Sundararajan 30 April 2011
In this work, we apply a nonlinear mixture autoregressive (MixAR) model to supplant the Gaussian mixture model (GMM) for speaker verification. MixAR is a statistical model that is a probabilistically weighted combination of components, each of which is an autoregressive filter in addition to a mean. The probabilistic mixing and the data-dependent weights are responsible for the nonlinear nature of the model. Our experiments with synthetic as well as real speech data from standard speech corpora show that the MixAR model outperforms the GMM, especially under unseen noisy conditions. Moreover, MixAR did not require delta features and used 2.5x fewer parameters to achieve performance comparable to or better than that of a GMM using static as well as delta features. MixAR also suffered less from overfitting than the GMM when training data were sparse. However, MixAR performance deteriorated more quickly than that of the GMM when the duration of the evaluation data was reduced. This could place limits on the minimum amount of evaluation data required when using the MixAR model for speaker verification.
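A toy simulation of the data-dependent mixing that distinguishes a MixAR-style model from a plain mixture: here the probability of each AR(1) component at time t is a logistic function of the previous sample. The specific gating function and all parameter values are assumptions for illustration, not the formulation used in the thesis.

```python
# Toy simulation of a nonlinear mixture autoregressive (MixAR-style) process:
# two AR(1) components whose mixing weights depend on the previous sample.
# The logistic gating and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
T = 1000
phi = np.array([0.95, 0.3])       # AR(1) coefficients
mu = np.array([0.0, 3.0])         # component means
sigma = np.array([0.3, 0.8])      # component noise scales
a, b = 2.0, -1.0                  # gating parameters

x = np.zeros(T)
for t in range(1, T):
    w1 = 1.0 / (1.0 + np.exp(-(a * x[t - 1] + b)))   # data-dependent weight of component 1
    k = 0 if rng.random() < w1 else 1
    x[t] = mu[k] + phi[k] * (x[t - 1] - mu[k]) + sigma[k] * rng.normal()
```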
60

Longitudinal Data Clustering Via Kernel Mixture Models

Zhang, Xi January 2021
Kernel mixture models are proposed to cluster univariate, independent multivariate, and dependent bivariate longitudinal data. The Gaussian distribution in finite mixture models is replaced by Gaussian and gamma kernel functions, and the expectation-maximization algorithm is used to estimate bandwidths and compute log-likelihood scores. For dependent bivariate longitudinal data, a bivariate Gaussian copula is used to capture the correlation between the two attributes. We then use AIC, BIC, and ICL to select the best model. In addition, we introduce a kernel distance-based clustering method for comparison with the kernel mixture models. A simulation study illustrates the performance of the mixture models: in terms of misclassification rates, the gamma kernel mixture model performs better than the kernel distance-based clustering method. Finally, the two approaches are applied to COVID-19 data, and sixty countries are classified into ten clusters based on growth rates and death rates.
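A rough univariate sketch of the kernel-mixture idea, in which each cluster's density is a weighted Gaussian kernel density estimate with its own bandwidth and an EM-style loop alternates between responsibilities and bandwidths. The bandwidth rule, the toy data, and the overall simplification are assumptions; the thesis models (including the gamma kernel and the copula for bivariate data) are more involved.

```python
# Rough sketch of a univariate Gaussian-kernel mixture clustering step.
# Each cluster's density is a weighted kernel density estimate with its own
# bandwidth; an EM-style loop alternates responsibilities and bandwidths.
# This is a simplified stand-in for the thesis models, not their implementation.
import numpy as np

def weighted_kde(eval_pts, data, w, h):
    """Weighted Gaussian KDE evaluated at eval_pts."""
    z = (eval_pts[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (k * w[None, :]).sum(axis=1) / (w.sum() * h)

def kernel_mixture_em(x, K=2, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    resp = rng.dirichlet(np.ones(K), size=len(x))
    for _ in range(n_iter):
        pis, dens = [], []
        for k in range(K):
            w = resp[:, k] + 1e-12
            # Silverman-style bandwidth from cluster k's weighted spread (a heuristic choice)
            n_eff = w.sum() ** 2 / (w ** 2).sum()
            sd = np.sqrt(np.average((x - np.average(x, weights=w)) ** 2, weights=w))
            h = 1.06 * sd * n_eff ** (-1 / 5)
            dens.append(weighted_kde(x, x, w, h))
            pis.append(w.sum() / len(x))
        f = np.column_stack(dens) * np.array(pis)
        resp = f / f.sum(axis=1, keepdims=True)
    return resp.argmax(axis=1)

# Hypothetical use: cluster per-country average growth rates into two groups.
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0.02, 0.01, 30), rng.normal(0.10, 0.03, 30)])
print(np.bincount(kernel_mixture_em(x, K=2)))
```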
