
Implementation and Application of the Curds and Whey Algorithm to Regression Problems

Kidd, John 01 May 2014 (has links)
A common multivariate statistical problem is the prediction of two or more response variables using two or more predictor variables. The simplest model for this situation is the multivariate linear regression model. The standard least squares estimation for this model involves regressing each response variable separately on all the predictor variables. Breiman and Friedman found a way to take advantage of correlations among the response variables to increase the predictive accuracy for each of the response variables with an algorithm they called Curds and Whey. In this report, I describe an implementation of the Curds and Whey algorithm in the R language and environment for statistical computing, apply the algorithm to some simulated and real data sets, and discuss the R package I developed for Curds and Whey.
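
The report's implementation is an R package, but the core procedure can be sketched briefly. Below is a minimal Python sketch of the population version of Curds and Whey, assuming centred, well-conditioned data with n > p (Breiman and Friedman also give a GCV-corrected variant that this sketch omits): regress each response on all predictors, then shrink the OLS predictions along the canonical coordinates between the responses and the predictors.

```python
import numpy as np

def curds_and_whey(X, Y):
    """Population-version Curds and Whey shrinkage (minimal sketch)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)              # centre so intercepts drop out
    Yc = Y - Y.mean(axis=0)

    # OLS: each response regressed on all predictors
    B_ols, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
    Y_hat = Xc @ B_ols

    # Canonical correlation analysis between responses and predictors
    Sxx, Syy = Xc.T @ Xc / n, Yc.T @ Yc / n
    Syx = Yc.T @ Xc / n
    M = np.linalg.solve(Syy, Syx) @ np.linalg.solve(Sxx, Syx.T)
    evals, vecs = np.linalg.eig(M)
    c2 = np.clip(evals.real, 0.0, 1.0)   # squared canonical correlations
    T = vecs.real.T                      # rows map y into canonical coordinates

    # Shrink each canonical coordinate, then map back: B = T^{-1} D T
    r = p / n
    d = c2 / (c2 + r * (1.0 - c2))
    B_cw = np.linalg.solve(T, np.diag(d) @ T)
    return Y_hat @ B_cw.T + Y.mean(axis=0)
```

The shrinkage factors d_i = c_i^2 / (c_i^2 + (p/n)(1 - c_i^2)) down-weight canonical directions in which the responses are only weakly related to the predictors, which is where the gain in predictive accuracy comes from.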

Comparison of Transition Matrices Between Metropolitan and Non-metropolitan Areas in the State of Utah Using Juvenile Court Data

Song, Sung-ik 01 May 1974 (has links)
The purpose of this paper is to use Markov chains to study youths referred to the juvenile court in the metropolitan and non-metropolitan areas of the state of Utah. Two computer programs were written: one to create case histories for each person referred to the court, and one to test the significance of the difference among several transition matrices. Another program, written by Soo Hong Uh, was used to analyze realizations of a Markov chain up to the 4th order; a third program, originally written by David White, was used to interpret Markov chains. The paper is divided into six chapters: introduction and thesis goals, definition of SMSA (Standard Metropolitan Statistical Area), statistical background, methodology, analysis, and summary and conclusions.
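
The abstract does not name the test statistic, so the sketch below assumes the standard likelihood-ratio (G^2) test of homogeneity for first-order chains; the integer state coding and the case-history format are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def transition_counts(sequences, n_states):
    """Count one-step transitions across integer-coded case histories."""
    C = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            C[a, b] += 1
    return C

def row_normalize(C):
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

def lr_test_homogeneity(C_metro, C_nonmetro):
    """G^2 test: one shared transition matrix vs. one per area."""
    def loglik(C, P):
        m = (C > 0) & (P > 0)
        return float((C[m] * np.log(P[m])).sum())
    P1, P2 = row_normalize(C_metro), row_normalize(C_nonmetro)
    Pp = row_normalize(C_metro + C_nonmetro)
    G2 = 2 * (loglik(C_metro, P1) + loglik(C_nonmetro, P2)
              - loglik(C_metro, Pp) - loglik(C_nonmetro, Pp))
    k = C_metro.shape[0]
    df = k * (k - 1)   # one extra free matrix: k rows of k-1 free probabilities
    return G2, chi2.sf(G2, df)
```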

Physically Based Preconditioning Techniques Applied to the First Order Particle Transport and to Fluid Transport in Porous Media

Rigley, Michael 01 May 2014 (has links)
Physically based preconditioning is applied to linear systems resulting from solving the first-order formulation of the particle transport equation and from solving the homogenized form of the simple flow equation for porous media flows. The first-order formulation of the particle transport equation is solved in two ways. The first uses a least squares finite element method, resulting in a symmetric positive definite linear system that is solved by a preconditioned conjugate gradient method. The second uses a discontinuous finite element method, resulting in a non-symmetric linear system that is solved by a preconditioned biconjugate gradient stabilized method. The flow equation is solved using a mixed finite element method. Specifically, four levels of improvement are applied: homogenization of the porous media domain, a projection method for the mixed finite element method that simplifies the linear system, physically based preconditioning, and implementation of the linear solver in parallel on graphics processing units. The conjugate gradient solver for the least squares finite element method is also run in parallel on graphics processing units. The physically based preconditioner is shown to perform well in each case, both in the speed-ups gained and in comparison with several algebraic preconditioners.
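
As a point of reference for the solver plumbing only (the thesis's preconditioner is physics-specific and not reproduced here), a preconditioned conjugate gradient solve with a user-supplied preconditioner looks like this in SciPy, with a simple Jacobi preconditioner standing in for the physically based operator:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in symmetric positive definite system, in place of a least
# squares finite element discretization
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# A physically based preconditioner applies (an approximation of) the
# inverse of a simplified physics operator; here plain Jacobi stands in
M = spla.LinearOperator((n, n), matvec=lambda r: r / A.diagonal())

x, info = spla.cg(A, b, M=M)   # preconditioned conjugate gradient
assert info == 0               # info == 0 signals convergence
```

The non-symmetric systems from the discontinuous method would use spla.bicgstab in the same way.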

Beetles, Fungi and Trees: A Story for the Ages? Modeling and Projecting the Multipartite Symbiosis Between the Mountain Pine Beetle, Dendroctonus ponderosae, and Its Fungal Symbionts, Grosmannia clavigera and Ophiostoma montium

Addison, Audrey L 01 May 2014 (has links)
As data collection and modeling improve, ecologists increasingly discover that interspecies dynamics greatly affect the success of individual species, and models accounting for the dynamics of multiple species are becoming more important. In this work, we explore the relationship between the mountain pine beetle (MPB, Dendroctonus ponderosae Hopkins) and two mutualistic fungi, Grosmannia clavigera and Ophiostoma montium. These species are involved in a multipartite symbiosis, critical to the survival of MPB, in which each species benefits. Extensive phenological modeling has been done to determine how temperature affects the timing of life events and cold-weather mortality of MPB. The fungi have also been closely studied to determine how they interact with MPB and how they differ in virulence, response to temperature, and nutritional benefits to developing beetles. Overall, researchers consider G. clavigera to be the superior mutualist: beetles developing near G. clavigera are larger, produce more brood, and have higher survival rates. Regarding temperature preferences, G. clavigera is considered "cool-loving," growing at cooler temperatures than O. montium. These findings lead researchers to wonder 1) why G. clavigera has not displaced O. montium from the mutualism (if it is the superior mutualist) and 2) what will happen to the MPB-fungus mutualism in the face of a warming climate. In this work we present two models connecting fungal growth in a tree to predictions of MPB emergence: a stochastic, individual-based model and a deterministic, tree-based model. We begin by exploring whether variability in temperature can act as a stabilizing mechanism and find that temperature variability, due to MPB periodically transitioning between different thermal environments, is the most likely explanation for the continued presence of both fungi in the mutualism. We then parameterize and validate the second model using attack and emergence observations of MPB and the fungi they carry; in the process, we test several submodels to learn more about specific MPB-fungi interactions. Finally, drawing on previous fungal growth experiments, we test and parameterize several growth rate curves using Bayesian techniques to determine whether the inclusion of prior knowledge can lead to more realistic fits.
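
The abstract does not specify which growth rate curves were fitted. Purely as an illustration, one common thermal performance form is the Briere curve; a least-squares fit to entirely hypothetical assay data (a stand-in for the Bayesian fitting the dissertation actually describes) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def briere(T, a, T_min, T_max):
    """Briere thermal performance curve: zero outside (T_min, T_max)."""
    out = a * T * (T - T_min) * np.sqrt(np.clip(T_max - T, 0.0, None))
    return np.where((T > T_min) & (T < T_max), out, 0.0)

# Hypothetical fungal colony-extension rates (mm/day) at assay temps (C)
T_obs = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 32.0])
g_obs = np.array([0.3, 1.1, 2.4, 3.6, 3.9, 1.8, 0.2])

popt, _ = curve_fit(briere, T_obs, g_obs, p0=[0.01, 2.0, 33.0], maxfev=10000)
```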

Assessing the Psychometric Properties of Newly Developed Behavior and Attitude Twitter Scales: A Validity and Reliability Study

Amiruzzaman, Md 04 December 2019 (has links)
No description available.

Bivariate Functional Normalization of Methylation Array Data

Yacas, Clifford January 2021 (has links)
DNA methylation plays a key role in disease analysis, especially for studies that compare known large-scale differences in CpG sites, such as cancer/normal studies or between-tissue studies. However, before any analysis can be done, normalization and preprocessing of the methylation data are required. A useful preprocessing pipeline for large-scale comparisons is Functional Normalization (FunNorm; Fortin et al., 2014), implemented in the minfi package in R. FunNorm preprocesses the data using the univariate quantiles of the methylated and unmethylated signal values in the raw data. Although FunNorm has been shown to outperform other preprocessing and normalization procedures for these types of studies, it does not take the correlation between the methylated and unmethylated signals into account; the focus of this paper is to improve upon FunNorm by accounting for that correlation. The concept of a bivariate quantile is used in this study to take the correlation between the methylated and unmethylated signals into consideration, and the partial least squares method is then applied to the bivariate quantiles found. The raw datasets used for this research were collected from the European Molecular Biology Laboratory - European Bioinformatics Institute (EMBL-EBI) website. The results from this preprocessing algorithm were then compared and contrasted with the results from FunNorm. Drawbacks, limitations, and future research are then discussed. / Thesis / Master of Science (MSc)
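
For context, the univariate building block that FunNorm-style pipelines apply separately to the methylated and unmethylated channels can be sketched as follows; this is a simplification for illustration, not the minfi implementation and not the bivariate extension this thesis develops.

```python
import numpy as np

def quantile_map_to_reference(signal, reference_quantiles):
    """Replace each sample's sorted values by shared reference quantiles.

    signal: probes x samples array for one channel (methylated or
    unmethylated); reference_quantiles: sorted, length = number of probes.
    """
    out = np.empty_like(signal, dtype=float)
    for j in range(signal.shape[1]):
        order = np.argsort(signal[:, j])
        out[order, j] = reference_quantiles  # rank r -> r-th reference quantile
    return out
```

Treating the two channels jointly, as the thesis proposes, amounts to replacing these per-channel ranks with a notion of bivariate quantile.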

A Study on Modelling Spatial-Temporal Human Mobility Patterns for Improving Personalized Weather Warning

Xu, Yue 12 July 2018 (has links)
Understanding human mobility patterns is important for severe weather warning, since these patterns can help identify where people are in time and space when flash floods, tornados, high winds, and hurricanes are occurring or are predicted to occur. A GIS (Geographic Information Science) data model was proposed to describe spatial-temporal human activity. Based on this model, a metric was designed to represent the spatial-temporal activity intensity of human mobility, and an index was generated to quantitatively describe the change in human activities. By analyzing high-resolution human mobility data, the paper verified that daily human mobility patterns can be clearly described with the proposed methods. This research was part of a National Science Foundation grant on next-generation severe weather warning systems. Data was collected from a specialized mobile app for severe weather warning, called CASA Alerts, which is being used to analyze different aspects of human behavior in response to severe weather warnings. The data set for this research uses GPS location data from more than 300 app users over a 14-month period (location was reported at 2-minute intervals, or upon a 100 m change in location). A targeted weather warning strategy was proposed as a result of this research, and future research questions were discussed.
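
The abstract does not define the intensity metric itself. As a hypothetical stand-in, one standard summary of the spatial spread of a user's GPS trace is the radius of gyration:

```python
import numpy as np

def radius_of_gyration(lat, lon):
    """Radius of gyration (km) of a GPS trace, via a local tangent plane."""
    R = 6371.0                                # mean Earth radius, km
    phi, lam = np.radians(lat), np.radians(lon)
    phi0, lam0 = phi.mean(), lam.mean()       # trace centroid
    x = R * np.cos(phi0) * (lam - lam0)
    y = R * (phi - phi0)
    return float(np.sqrt(np.mean(x**2 + y**2)))
```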

Elements of Local Public Health Infrastructure That Correlate with Best Practice Activities: A Preliminary Analysis

Mengzhou Chen (12563353) 19 April 2023 (has links)
Public health infrastructure (PHI) serves as the core foundation for essential public health and its services. However, the U.S. PHI has been weakened over the past few decades by understaffing, underfunding, limited resources and partnerships, and outdated data and information systems. The recent COVID-19 pandemic exacerbated this vulnerability, resulting in increased health disparities and worse overall health outcomes for the nation. The goal of this study was to identify elements of local PHI that are associated with the completion of 20 key public health activities while adjusting for state differences. Cross-sectional secondary data were acquired and linked from two national surveys of local health departments: the National Profile of Local Health Departments survey and the National Longitudinal Survey of Public Health Systems. In total, 20 multivariable logistic regression models were created to analyze the relationships between variables, with state fixed effects included to control for state differences. It was found that state differences affected the correlations of infrastructure variables. Several staffing elements, the ability to provide certain services, and participation in certain types of actions were strongly correlated with the completion of best practice activities. These findings will add to the discussion of what the minimum necessary elements of PHI may be.
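
One of the 20 models might be sketched as below; the file and every variable name are hypothetical, chosen only to show a logistic regression with state fixed effects in Python.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical linked survey data: one row per local health department
df = pd.read_csv("lhd_linked.csv")

# Completion of one best-practice activity (0/1) regressed on
# infrastructure elements, with state fixed effects via C(state)
model = smf.logit(
    "activity_completed ~ ftes_per_100k + provides_epi_services"
    " + participates_in_coalitions + C(state)",
    data=df,
).fit()
print(model.summary())
```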

Estimation and Uncertainty Quantification in Tensor Completion with Side Information

Somnooma Hilda Marie Bernadette Ibriga (11206167) 30 July 2021 (has links)
This work aims to provide solutions to two significant issues in the effective use and practical application of tensor completion as a machine learning method. The first addresses the challenge of designing fast and accurate recovery methods for tensor completion in the presence of highly sparse and highly missing data. The second takes on the need for robust uncertainty quantification methods for the recovered tensor.

Covariate-assisted Sparse Tensor Completion

In the first part of the dissertation, we aim to provably complete a sparse and highly missing tensor in the presence of covariate information along tensor modes. Our motivation originates from online advertising, where users' click-through rates (CTR) on ads over various devices form a CTR tensor that can have up to 96% missing entries and many zeros among its non-missing entries. These features make the standalone tensor completion method unsatisfactory. However, besides the CTR tensor, additional ad features or user characteristics are often available. We propose Covariate-assisted Sparse Tensor Completion (COSTCO) to incorporate covariate information in the recovery of the sparse tensor. The key idea is to jointly extract latent components from both the tensor and the covariate matrix to learn a synthetic representation. Theoretically, we derive the error bound for the recovered tensor components and explicitly quantify the improvements in both the reveal probability condition and the tensor recovery accuracy due to covariates. Finally, we apply COSTCO to an advertisement dataset from a major internet platform, consisting of a CTR tensor and an ad covariate matrix, leading to a 23% accuracy improvement over the baseline methodology. An important by-product of our method is that clustering analysis on the ad latent components from COSTCO reveals interesting new ad clusters linking different product industries, clusters that are not formed by existing clustering methods. Such findings could be directly useful for better ad planning procedures.

Uncertainty Quantification in Covariate-assisted Tensor Completion

In the second part of the dissertation, we propose a framework for uncertainty quantification for the imputed tensor factors obtained from completing a tensor with covariate information. We characterize the distribution of the non-convex estimator obtained from the COSTCO algorithm down to fine scales, and this distributional theory in turn allows us to construct provably valid and tight confidence intervals for the unseen tensor factors. The proposed inferential procedure enjoys several important features: (1) it is fully adaptive to noise heteroscedasticity, (2) it is data-driven and automatically adapts to unknown noise distributions, and (3) in the high missing-data regime, the inclusion of side information in the tensor completion model yields tighter confidence intervals than those obtained from standalone tensor completion methods.
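
As a toy illustration of the joint-factorization idea only (not the authors' algorithm; the dimensions, names, and plain gradient scheme are all assumptions), a shared ad factor can couple a partially observed CTR tensor to an ad covariate matrix:

```python
import numpy as np

def costco_sketch(T_obs, mask, A_cov, rank, n_iter=500, lam=1.0, lr=0.01):
    """Jointly factor a users x ads x devices tensor (observed where
    mask == 1) and an ads x features covariate matrix through a shared
    ad factor V. Toy gradient descent on a squared-error objective."""
    I, J, K = T_obs.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(I, rank))              # user factors
    V = rng.normal(scale=0.1, size=(J, rank))              # shared ad factors
    W = rng.normal(scale=0.1, size=(K, rank))              # device factors
    B = rng.normal(scale=0.1, size=(A_cov.shape[1], rank)) # covariate loadings
    for _ in range(n_iter):
        # Residual of the CP reconstruction on observed entries
        R = mask * (np.einsum("ir,jr,kr->ijk", U, V, W) - T_obs)
        # Residual of the covariate fit to the shared ad factor
        S = A_cov @ B - V
        U -= lr * np.einsum("ijk,jr,kr->ir", R, V, W)
        W -= lr * np.einsum("ijk,ir,jr->kr", R, U, V)
        V -= lr * (np.einsum("ijk,ir,kr->jr", R, U, W) - lam * S)
        B -= lr * lam * (A_cov.T @ S)
    return U, V, W, B
```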

Solving the Differential Equation for the Probit Function Using a Variant of the Carleman Embedding Technique.

Alu, Kelechukwu Iroajanma 07 May 2011 (has links) (PDF)
The probit function is the inverse of the cumulative distribution function associated with the standard normal distribution. It is of great utility in statistical modelling. The Carleman embedding technique has been shown to be effective in solving first order and, less efficiently, second order nonlinear differential equations. In this thesis, we show that solutions to the second order nonlinear differential equation for the probit function can be approximated efficiently using a variant of the Carleman embedding technique.
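
For context: writing w(p) = Φ⁻¹(p), differentiating Φ(w(p)) = p twice and using φ'(x) = −xφ(x) gives the second order nonlinear equation w'' = w(w')², with w(1/2) = 0 and w'(1/2) = √(2π). The check below integrates this ODE with a standard SciPy solver and compares against SciPy's probit; it verifies the equation itself, not the Carleman embedding technique.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

# y = (w, w'); the probit ODE is w'' = w * (w')^2
sol = solve_ivp(
    lambda p, y: [y[1], y[0] * y[1] ** 2],
    t_span=(0.5, 0.99),
    y0=[0.0, np.sqrt(2 * np.pi)],
    dense_output=True, rtol=1e-10, atol=1e-12,
)
p = np.linspace(0.5, 0.99, 5)
print(np.abs(sol.sol(p)[0] - norm.ppf(p)).max())  # tiny, ~1e-8 or less
```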
