101

A Modified Cluster-Weighted Approach to Nonlinear Time Series

Lyman, Mark Ballatore 11 July 2007 (has links) (PDF)
In many applications involving data collected over time, it is important to get timely estimates and adjustments of the parameters associated with a dynamic model. When the dynamics of the model must be updated, time and computational simplicity are important issues. When the dynamic system is not linear, the problems of adaptation and response to feedback are exacerbated. The non-linear system may be approximated by linear models at various levels or “states”: the approximation is linear within a state, and the process transitions from state to state over time. The transition probabilities are parametrized as a Markov chain, and the within-state dynamics are modeled by an AR time series model. In order to make the estimates available almost instantaneously, least squares and weighted least squares estimates are used. This is a modification of the cluster-weighted models proposed by Gershenfeld, Schoner, and Metois (1999). A simulation study compares the models and explores the adequacy of least squares estimators.
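A minimal Python sketch of the within-state least squares idea, assuming the state sequence is already known and the within-state dynamics are AR(1); the function name and this simplified setup are illustrative, not the thesis's modified cluster-weighted estimator:

```python
import numpy as np

def fit_state_ar1(y, states, n_states, weights=None):
    """Within-state AR(1) fits by (weighted) least squares plus an
    empirical Markov transition matrix, assuming the state sequence
    is already known (a simplification for illustration)."""
    y = np.asarray(y, dtype=float)
    states = np.asarray(states)
    if weights is None:
        weights = np.ones_like(y)
    coefs = np.zeros((n_states, 2))              # intercept and AR(1) slope per state
    for s in range(n_states):
        t = np.where(states[1:] == s)[0] + 1     # time points assigned to state s
        X = np.column_stack([np.ones(len(t)), y[t - 1]])
        w = weights[t]
        XtW = X.T * w                            # weighted least squares: (X'WX)^{-1} X'W y
        coefs[s] = np.linalg.solve(XtW @ X, XtW @ y[t])
    trans = np.zeros((n_states, n_states))       # empirical transition frequencies
    for a, b in zip(states[:-1], states[1:]):
        trans[a, b] += 1
    trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1)
    return coefs, trans
```

Because each within-state fit is a closed-form least squares solve, the parameters can be refreshed almost instantly as new observations arrive, which is the computational motivation described above.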
102

Clustering Methods for Delineating Regions of Spatial Stationarity

Collings, Jared M. 30 November 2007 (has links) (PDF)
This paper further investigates data extracted using functional magnetic resonance imaging (fMRI), which measures blood flow to areas of the brain following the application of a stimulus. As a precursor to detailed spatial analysis of this kind of data, the paper develops methods of grouping data based on the conditions necessary for spatial statistical analysis; its purpose is to examine and develop methods that can be used to delineate regions of stationarity. One of the major assumptions used in spatial estimation is that the data field is homogeneous with respect to the mean and the covariance function, so any spatial estimation presupposes that these criteria are met. For analyses that may be considered new or experimental, however, there is no evidence that these assumptions will hold.
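One simple way to group locations by local first- and second-moment behavior, sketched in Python with k-means on windowed summaries; the window size, feature choice, and use of k-means are assumptions for illustration, not the clustering methods developed in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def stationarity_regions(field, k=3, seed=0):
    """Group pixels of a 2-D data field by local mean and variance, so that
    each resulting group is roughly homogeneous in its first two moments."""
    rows, cols = field.shape
    feats = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = field[i - 1:i + 2, j - 1:j + 2]   # 3x3 neighborhood
            feats.append([window.mean(), window.var()])
    feats = np.asarray(feats)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(feats)
    return labels.reshape(rows - 2, cols - 2)          # one region label per interior pixel
```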
103

Modeling Transition Probabilities for Loan States Using a Bayesian Hierarchical Model

Monson, Rebecca Lee 30 November 2007 (has links) (PDF)
A Markov chain model can be used to model loan defaults because loans move through delinquency states as the borrower fails to make monthly payments. Each entry of the transition matrix is the probability that a borrower in a given state one month moves to a particular delinquency state the next month. In order to use this model, it is necessary to know the transition probabilities, which are unknown quantities. A Bayesian hierarchical model is postulated because there may not be sufficient data for some rare transitions. The hierarchical model takes advantage of similarities between types or families of loans to improve estimation, especially for those probabilities with little associated data. The transition probabilities are estimated using MCMC and the Metropolis-Hastings algorithm.
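A hedged Python sketch of Metropolis-Hastings sampling for one row of the transition matrix, where the Dirichlet `prior` stands in for information pooled from similar loan families; the proposal and concentration value are illustrative choices, not the thesis's hierarchical model:

```python
import numpy as np
from scipy.stats import dirichlet

def mh_transition_row(counts, prior, n_iter=4000, conc=200.0, seed=0):
    """Metropolis-Hastings sampler for one row of a delinquency-state
    transition matrix.  Target: the Dirichlet(prior + counts) posterior,
    with `prior` playing the role of pooled family-level information."""
    rng = np.random.default_rng(seed)
    alpha_post = np.asarray(prior, float) + np.asarray(counts, float)
    p = alpha_post / alpha_post.sum()            # start at the posterior mean
    draws = []
    for _ in range(n_iter):
        prop = rng.dirichlet(conc * p)           # Dirichlet proposal centered at current p
        log_accept = (dirichlet.logpdf(prop, alpha_post)
                      - dirichlet.logpdf(p, alpha_post)
                      + dirichlet.logpdf(p, conc * prop)    # Hastings correction
                      - dirichlet.logpdf(prop, conc * p))
        if np.log(rng.uniform()) < log_accept:
            p = prop
        draws.append(p)
    return np.mean(draws[n_iter // 2:], axis=0)  # posterior-mean estimate after burn-in
```

For a full transition matrix, the same sampler would be run row by row, with each row's `prior` informed by estimates for the corresponding loan family.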
104

Ordinal Regression to Evaluate Student Ratings Data

Bell, Emily Brooke 07 July 2008 (has links) (PDF)
Student evaluations are the most common and often the only method used to evaluate teachers. In these evaluations, which typically occur at the end of every term, students rate their instructors on criteria accepted as constituting exceptional instruction in addition to an overall assessment. This presentation explores factors that influence student evaluations using the teacher ratings data of Brigham Young University from Fall 2001 to Fall 2006. This project uses ordinal regression to model the probability of an instructor receiving a good, average, or poor rating. Student grade, instructor status, class level, student gender, total enrollment, term, GE class status, and college are used as explanatory variables.
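A small Python sketch of the kind of cumulative-logit (proportional-odds) fit such an ordinal regression might use; the variable names, coding of rating categories, and optimizer are assumptions, not the project's actual model:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def fit_proportional_odds(X, y, n_cats):
    """Cumulative-logit ordinal regression fit by maximum likelihood.
    y takes values 0..n_cats-1 (e.g. poor / average / good ratings);
    X holds explanatory variables such as student grade or class level."""
    n, p = X.shape

    def nll(params):
        beta = params[:p]
        # strictly increasing cutpoints via cumulative sums of exponentials
        cuts = np.cumsum(np.r_[params[p], np.exp(params[p + 1:])])
        eta = X @ beta
        # P(Y <= j) = logistic(cut_j - eta); pad the CDF with 0 and 1
        cdf = np.column_stack([np.zeros(n)]
                              + [expit(c - eta) for c in cuts]
                              + [np.ones(n)])
        probs = np.diff(cdf, axis=1)[np.arange(n), y]
        return -np.sum(np.log(np.clip(probs, 1e-12, None)))

    res = minimize(nll, np.zeros(p + n_cats - 1), method='BFGS')
    return res.x[:p], res.x[p:]   # coefficients, raw cutpoint parameters
```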
105

Assessing Multivariate Heritability through Nonparametric Methods

Carper, Benjamin Alan 17 July 2008 (has links) (PDF)
The similarities between generations of living subjects are often quantified by heritability. By distinguishing genotypic variation, or variation due to parental pairings, from phenotypic variation, or normal intraspecies variation, the heritability of traits can be estimated. Due to the multivariate nature of many traits, such as size and shape, computation of heritability can be difficult. Also, assessment of the variation of the heritability estimate is extremely difficult. This study uses nonparametric methods, namely the randomization test and the bootstrap, to obtain both a measure of the extremity of the observed heritability and an assessment of the uncertainty.
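A simplified, univariate Python sketch of the randomization test and bootstrap applied to a heritability-type statistic; the between-family variance share used here is only a stand-in for the multivariate heritability measure studied in the thesis:

```python
import numpy as np

def heritability_stat(values, families):
    """Share of total variation attributable to families: a crude
    univariate stand-in for a heritability measure."""
    grand = values.mean()
    groups = [values[families == f] for f in np.unique(families)]
    between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    return between / np.sum((values - grand) ** 2)

def randomization_and_bootstrap(values, families, n_rep=2000, seed=0):
    """Randomization p-value (shuffle family labels) and bootstrap interval
    (resample observations) for the heritability statistic."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, float)
    families = np.asarray(families)
    observed = heritability_stat(values, families)
    perm = np.array([heritability_stat(values, rng.permutation(families))
                     for _ in range(n_rep)])
    idx = np.arange(len(values))
    boot = np.array([heritability_stat(values[s], families[s])
                     for s in (rng.choice(idx, len(idx)) for _ in range(n_rep))])
    p_value = np.mean(perm >= observed)          # extremity of the observed heritability
    return observed, p_value, np.percentile(boot, [2.5, 97.5])
```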
106

Utilizing Universal Probability of Expression Code (UPC) to Identify Disrupted Pathways in Cancer Samples

Withers, Michelle Rachel 03 March 2011 (has links) (PDF)
Understanding the role of deregulated biological pathways in cancer samples has the potential to improve cancer treatment, making it more effective by selecting treatments that reverse the biological cause of the cancer. One of the challenges with pathway analysis is identifying a deregulated pathway in a given sample. This project develops the Universal Probability of Expression Code (UPC), a profile of a single deregulated biological pathway, and projects it into a cancer cell to determine if it is present. One of the benefits of this method is that rather than use information from a single over-expressed gene, it provides a profile of multiple genes, which has been shown by Sjoblom et al. (2006) and Wood et al. (2007) to be more effective. The UPC uses a novel normalization and summarization approach to characterize a deregulated pathway using only data from the array (Mixture model-based analysis of expression arrays, MMAX), making it applicable to all microarray platforms, unlike other methods. When compared to both Affymetrix's PMA calls (Hubbell, Liu, and Mei 2002) and Barcoding (Zilliox and Irizarry 2007), it performs comparably.
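A rough Python sketch of the mixture-model flavor of this approach: a two-component normal mixture fit by EM gives each probe a probability of belonging to the higher, "expressed" component. The initialization and the use of log intensities are assumptions; this is not the published UPC/MMAX procedure:

```python
import numpy as np
from scipy.stats import norm

def expression_probabilities(log_intensities, n_iter=100):
    """Fit a two-component normal mixture by EM and return, for each probe,
    the posterior probability of the higher ('expressed') component."""
    x = np.asarray(log_intensities, float)
    lo, hi = np.percentile(x, [25, 75])
    mu = np.array([lo, hi])                       # initialize the two component means
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of the two components for each probe
        dens = w * norm.pdf(x[:, None], mu, sd)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        w = resp.mean(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / resp.sum(axis=0))
    return resp[:, 1]
```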
107

Assessment of aCGH Clustering Methodologies

Baker, Serena F. 18 October 2010 (has links) (PDF)
Array comparative genomic hybridization (aCGH) is a technique for identifying duplications and deletions of DNA at specific locations across a genome. Potential objectives of aCGH analysis are the identification of (1) altered regions for a given subject, (2) altered regions across a set of individuals, and (3) clinically relevant clusters of hybridizations. aCGH analysis can be particularly useful when it identifies previously unknown clusters with clinical relevance. This project focuses on the assessment of existing aCGH clustering methodologies. Three methodologies are considered: hierarchical clustering, weighted clustering of called aCGH data, and clustering based on probabilistic recurrent regions of alteration within subsets of individuals. Assessment is conducted first through the analysis of aCGH data obtained from patients with ovarian cancer and then through simulations. Performance assessment for the data analysis is based on cluster assignment correlation with clinical outcomes (e.g., survival). For each method, 1,000 simulations are summarized with Cohen's kappa coefficient, interpreted as the proportion of correct cluster assignments beyond random chance. Both the data analysis and the simulation results suggest that hierarchical clustering tends to find more clinically relevant clusters when compared to the other methods. Additionally, these clusters are composed of more patients who belong in the clusters to which they are assigned.
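A hedged Python sketch of one assessment step: Ward hierarchical clustering of aCGH-like profiles scored against known groups with Cohen's kappa, aligning the arbitrary cluster labels by their best-agreeing permutation. The alignment step and distance choice are illustrative assumptions, not the thesis's exact assessment:

```python
import numpy as np
from itertools import permutations
from scipy.cluster.hierarchy import linkage, fcluster

def clustering_kappa(profiles, true_groups, k):
    """Cluster rows of `profiles` with Ward linkage into k clusters and score
    agreement with `true_groups` using Cohen's kappa."""
    found = fcluster(linkage(profiles, method='ward'),
                     t=k, criterion='maxclust') - 1        # labels 0..k-1
    true = np.asarray(true_groups)
    best = -np.inf
    for perm in permutations(range(k)):                    # align arbitrary labels
        relabeled = np.array([perm[c] for c in found])
        agree = np.mean(relabeled == true)                 # observed agreement
        chance = sum(np.mean(relabeled == j) * np.mean(true == j)
                     for j in range(k))                    # agreement expected by chance
        best = max(best, (agree - chance) / (1 - chance))
    return best
```

In a simulation, this kappa would be computed for each of the competing clustering methods on each simulated data set and then summarized across replications.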
108

Assessing the Effect of Wal-Mart in Rural Utah Areas

Nelson, Angela 06 July 2011 (has links) (PDF)
Walmart and other “big box” stores seek to expand in rural markets, possibly due to cheap land and lack of zoning laws. In August 2000, Walmart opened a store in Ephraim, a small rural town in central Utah. It is of interest to understand how Walmart's entrance into the local market changes the sales tax revenue base for Ephraim and for the surrounding municipalities. It is thought that small “Mom and Pop” stores go out of business because they cannot compete with Walmart's prices, leading to a decrease in variety, selection, convenience, and most importantly, sales tax revenue base in areas surrounding Ephraim. This shift in sales tax base is assessed using mixed models. It is found that the entrance of Walmart in Sanpete County has a significant effect on sales tax revenue, specifically in the retail industry. A method of calculating the loss for each city is discussed and a sensitivity analysis is performed. This project also documents how the data set was assembled: the assumptions made to clean the data are discussed, and the definitions of areas and industries are explained and defended.
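A minimal Python sketch of a mixed model of this general form, fit to synthetic data; the column names, city list, and single pre/post fixed effect are invented for illustration and are not the project's data or model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly sales-tax data: several cities observed before and
# after a big-box entry date (all values are simulated).
rng = np.random.default_rng(1)
rows = []
for city in ["Ephraim", "Manti", "Mt. Pleasant", "Gunnison"]:
    base = rng.normal(100, 10)                    # city-specific revenue level
    for month in range(-24, 24):                  # two years pre, two years post
        post = int(month >= 0)
        revenue = base + 5 * post * (city == "Ephraim") + rng.normal(0, 3)
        rows.append({"city": city, "month": month, "post": post,
                     "revenue": revenue})
df = pd.DataFrame(rows)

# Mixed model: fixed effect for the post-entry period, random intercept per city.
result = smf.mixedlm("revenue ~ post", df, groups=df["city"]).fit()
print(result.summary())
```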
109

Comparative Rankings: Ascertaining Pre- and Post-Test Differences in a Survey Instrument

Reiss, Elayne 01 January 2003 (has links)
Surveys provide some of the most vital information to statisticians, allowing them a glimpse into the minds of respondents. It is therefore imperative to analyze surveys properly, to ensure that the conclusions reached truly address the analytical goals. To confront this issue squarely, this thesis analyzes a particular set of surveys collected from a group of students at a local elementary school before and after the implementation of a program called Conscious Discipline, which is designed to combat behavioral issues in the classroom. With the goal of determining whether or not the students' attitudes toward their school environment changed, three analysis methods are considered. The final method, which uses Kendall's Tau to compare a ranked set of survey responses from an expert to the responses of the students, is judged to address the goal most effectively and is explored at length. The heart of the investigation is a program that generates the distribution of the test statistic. With the distribution in place, the test statistic is calculated for the survey data and compared against that distribution to determine whether Conscious Discipline has truly made a difference for this particular group of students.
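A generic Python sketch of generating the permutation distribution of Kendall's Tau and comparing the observed statistic to it; the function name and the one-sided p-value convention are assumptions, not the thesis's program:

```python
import numpy as np
from scipy.stats import kendalltau

def tau_permutation_test(expert_ranks, student_ranks, n_perm=10000, seed=0):
    """Permutation distribution of Kendall's tau between an expert's ranking
    and a respondent's ranking, with the observed tau and its one-sided
    p-value."""
    rng = np.random.default_rng(seed)
    observed, _ = kendalltau(expert_ranks, student_ranks)
    null = np.array([kendalltau(expert_ranks,
                                rng.permutation(student_ranks))[0]
                     for _ in range(n_perm)])         # tau under random ordering
    p_value = np.mean(null >= observed)
    return observed, p_value
```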
110

A Subset Selection Rule for Three Normal Populations

Culpepper, Bert 01 July 1982 (has links) (PDF)
No description available.
