  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Using Box-Scores to Determine a Position's Contribution to Winning Basketball Games

Page, Garritt L. 16 August 2005 (has links) (PDF)
Basketball is a sport that has become increasingly popular worldwide. At the professional level, it is a game in which each of the five positions has a specific responsibility that requires unique skills. It would therefore be valuable for coaches to know which skills at each position are most conducive to winning. Knowing which skills to develop for each position would let coaches optimize each player's ability by customizing practice around drills that build the most important skills for that position, in turn improving the team's overall performance. Using Bayesian hierarchical modeling and NBA box-score performance categories, this project determines how each position needs to perform in order for its team to be successful.
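The hierarchical pooling idea behind such a model can be sketched with a toy empirical-Bayes shrinkage estimator (a minimal stand-in for a full Bayesian hierarchical model, not the thesis's actual method; the position names, numbers, and the between-group variance tau2 are illustrative):

```python
import statistics

def shrink_estimates(groups, tau2=1.0):
    """Empirical-Bayes style partial pooling: shrink each group's
    sample mean toward the grand mean, weighting by the group's
    sampling variance relative to tau2 (the assumed between-group
    variance). Groups with noisier means are shrunk more."""
    grand = statistics.mean(x for g in groups.values() for x in g)
    shrunk = {}
    for name, xs in groups.items():
        m = statistics.mean(xs)
        se2 = statistics.variance(xs) / len(xs)  # variance of the sample mean
        w = tau2 / (tau2 + se2)                  # weight placed on the data
        shrunk[name] = w * m + (1 - w) * grand
    return shrunk
```

Each per-position estimate ends up between that position's raw mean and the overall mean, which is exactly the borrowing of strength a hierarchical model provides when per-group data are thin.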
2

A Latent Health Factor Model for Estimating Estuarine Ecosystem Health

Wu, Margaret 05 1900 (has links)
Assessment of the “health” of an ecosystem is often of great interest to those involved in the monitoring and conservation of ecosystems. Traditionally, scientists have quantified the health of an ecosystem using multimetric indices that are semi-qualitative. Recently, a statistics-based index called the Latent Health Factor Index (LHFI) was devised to address many inadequacies of the conventional indices. Relying on standard modelling procedures, unlike the conventional indices, accords the LHFI many advantages: the LHFI is less arbitrary, and it allows for straightforward model inference and for formal statistical prediction of health at a new site (using only supplementary environmental covariates). In contrast, conventional indices offer no formal statistical prediction, so proper estimation of health at a new site requires benthic data, which are expensive and time-consuming to gather. As the LHFI modelling methodology is relatively new, it has so far been demonstrated (and validated) only on freshwater ecosystems. The goal of this thesis is to apply the LHFI modelling methodology to estuarine ecosystems, particularly to the previously unassessed system in Richibucto, New Brunswick. Specifically, the aims of this thesis are threefold: firstly, to investigate whether the LHFI is even applicable to estuarine systems, since estuarine and freshwater metrics, or indicators of health, are quite different; secondly, to determine the appropriate form of the LHFI model if the technique is applicable; and thirdly, to assess the health of the Richibucto system. Note that the second objective includes determining which covariates may have a significant impact on estuarine health. As scientists have previously used the AZTI Marine Biotic Index (AMBI) and the Infaunal Trophic Index (ITI) as measurements of estuarine ecosystem health, this thesis investigates LHFI models using metrics from these two indices simultaneously.
Two sets of models were considered in a Bayesian framework and implemented using Markov chain Monte Carlo techniques, the first using only metrics from AMBI, and the second using metrics from both AMBI and ITI. Both sets of LHFI models were successful in that they were able to make distinctions between health levels at different sites.
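The core of a latent factor model of this kind can be illustrated with the conditional posterior of a single site's latent health score under normal assumptions (a hand-derived sketch, not the thesis's actual model; the loadings and noise variances are treated as known here, whereas the full Bayesian model would infer them):

```python
def latent_health_posterior(y, loadings, noise_vars):
    """Conditional posterior of a site's latent health score h when
    each observed metric satisfies y_j = loadings_j * h + noise_j,
    noise_j ~ N(0, noise_vars_j), with a standard normal prior on h.
    Standard Gaussian conjugacy gives a normal posterior:
    precision = 1 + sum(l_j^2 / v_j), mean = sum(l_j y_j / v_j)/precision.
    Returns (posterior mean, posterior variance)."""
    prec = 1.0 + sum(l * l / v for l, v in zip(loadings, noise_vars))
    mean = sum(l * yj / v for l, yj, v in zip(loadings, y, noise_vars)) / prec
    return mean, 1.0 / prec
```

Metrics with larger loadings or smaller noise variances pull the latent score harder, which is how the index aggregates heterogeneous benthic metrics into one health estimate.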
4

Modeling Transition Probabilities for Loan States Using a Bayesian Hierarchical Model

Monson, Rebecca Lee 30 November 2007 (has links) (PDF)
A Markov chain model can be used to model loan defaults because loans move through delinquency states as the borrower fails to make monthly payments. Each entry of the transition matrix gives the probability that a borrower in a given state one month moves to a particular delinquency state the next month. Using this model requires the transition probabilities, which are unknown quantities. A Bayesian hierarchical model is postulated because there may not be sufficient data to estimate some rare transition probabilities. A hierarchical model exploits similarities between types or families of loans to improve estimation, especially for those probabilities with little associated data. The transition probabilities are estimated using MCMC with the Metropolis-Hastings algorithm.
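A minimal sketch of the estimation problem, with full MCMC replaced by the closed-form posterior mean under a symmetric Dirichlet prior on each row of the transition matrix (the state names and prior weight are hypothetical, and the real hierarchical model shares strength across loan families rather than using one flat prior):

```python
from collections import Counter

STATES = ["current", "30dpd", "60dpd", "default"]  # hypothetical loan states

def estimate_transition_matrix(histories, prior=1.0):
    """Posterior-mean transition probabilities under a symmetric
    Dirichlet(prior) on each row: the prior acts as pseudo-counts,
    keeping rarely observed transitions away from zero probability
    when data are sparse."""
    counts = Counter()
    for h in histories:
        for a, b in zip(h, h[1:]):
            counts[(a, b)] += 1
    matrix = {}
    for a in STATES:
        row_total = sum(counts[(a, b)] for b in STATES) + prior * len(STATES)
        matrix[a] = {b: (counts[(a, b)] + prior) / row_total for b in STATES}
    return matrix
```

Rows with no observed transitions fall back to the uniform prior, which is the single-level analogue of the shrinkage a hierarchy provides.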
5

Classification Analysis for Environmental Monitoring: Combining Information across Multiple Studies

Zhang, Huizi 29 September 2006 (has links)
Environmental studies often employ data collected over large spatial regions. Although it is convenient, the conventional single-model approach may fail to accurately describe the relationships between variables. Two alternative modeling approaches are available: one applies separate models for different regions; the other applies hierarchical models. The separate modeling approach has two major difficulties: first, we often do not know the underlying clustering structure of the entire data; second, it usually ignores possible dependence among clusters. To deal with the first problem, we propose a model-based clustering method to partition the entire data into subgroups according to the empirical relationships between the response and the predictors. To deal with the second, we propose Bayesian hierarchical models. We illustrate the use of the Bayesian hierarchical model under two situations. First, we apply the hierarchical model based on the empirical clustering structure. Second, we integrate the model-based clustering result to help determine the clustering structure used in the hierarchical model. The problem is one of classification, since the response is categorical rather than continuous, and logistic regression models are used to model the relationships between variables. / Ph. D.
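The logistic regression component can be illustrated with a toy one-predictor fit by gradient ascent on the log-likelihood (a sketch of the model family only; the dissertation's actual analysis is Bayesian and hierarchical, and the data here are made up):

```python
import math

def fit_logistic(xs, ys, lr=0.1, steps=2000):
    """Plain gradient-ascent logistic regression with an intercept b0
    and one slope b1, for a binary response ys in {0, 1}. The update
    uses the average score (y - p) per observation."""
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1
```

Fitting one such model per cluster, with the coefficients tied together by a common prior, is the essence of the hierarchical approach described above.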
6

Statistical methods for the analysis of corrosion data for integrity assessments

Tan, Hwei-Yang January 2017 (has links)
In the oil and gas industry, statistical methods have been used for corrosion analysis across asset systems such as pipelines, storage tanks, and so on. However, few industrial standards and guidelines provide comprehensive stepwise procedures for applying statistical approaches to corrosion analysis. For example, the UK HSE (2002) report "Guidelines for the use of statistics for analysis of sample inspection of corrosion" demonstrates how statistical methods can be used to evaluate corrosion samples, but the methods explained in the document are very basic and do not consider risk factors such as pressure, temperature, design, and external conditions in the analyses. Furthermore, the common industrial practice of applying linear approximations to localised corrosion such as pitting is often considered inappropriate, as pit growth is not uniform. The aim of this research is to develop an approach that models the stochastic behaviour of localised corrosion and to demonstrate how influencing factors can be linked to the corrosion analyses, in order to predict the remaining useful life of components in oil and gas plants. This research addresses a challenge in industry practice. Non-destructive testing (NDT) and inspection techniques have improved in recent years, making more and more data available to asset operators. However, these data must be processed to extract meaningful information. Increasing computer power has enabled the use of statistics for such data processing. Statistical software such as R and OpenBUGS allows users to explore new and pragmatic statistical methods (e.g. regression models and stochastic models) and to make full use of the data available in the field. In this thesis, Chapter 2 carries out extreme value analysis to determine the maximum defect depth of an offshore conductor pipe and simulates the defect depth using geometric Brownian motion.
In Chapter 3, we introduce a Weibull density regression that is based on a gamma transformation proportional hazards model to analyse the corrosion data of piping deadlegs. The density regression model takes multiple influencing factors into account; this model can be used to extrapolate the corrosion density of inaccessible deadlegs with data available from other piping systems. In Chapter 4, we demonstrate how the corrosion prediction models in Chapters 2 and 3 could be used to predict the remaining useful life of these components. Chapter 1 sets the background to the techniques used, and Chapter 5 presents concluding remarks based on the application of the techniques.
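The geometric-Brownian-motion simulation of defect depth mentioned for Chapter 2 might look like the following sketch (the drift, volatility, and starting depth are placeholders, not estimates from the thesis):

```python
import math
import random

def simulate_gbm(d0, mu, sigma, dt, n_steps, rng=None):
    """Sample one geometric-Brownian-motion path for a pit depth
    starting at d0, using the exact log-normal step
    d_{t+dt} = d_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z).
    GBM keeps depths strictly positive, unlike a linear growth model."""
    rng = rng or random.Random(0)
    path = [d0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path
```

Repeating the simulation many times gives a distribution of depths at each future time, from which a remaining-useful-life estimate can be read off against a critical depth.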
7

Species trees from gene trees: reconstructing Bayesian posterior distributions of a species phylogeny using estimated gene tree distributions

Liu, Liang 14 September 2006 (has links)
No description available.
8

Bayesian hierarchical modelling of dual response surfaces

Chen, Younan 08 December 2005 (has links)
Dual response surface methodology (Vining and Myers (1990)) has been successfully used as a cost-effective approach to improving the quality of products and processes since Taguchi (1985) introduced the idea of robust parameter design for quality improvement in the United States in the mid-1980s. The original procedure uses the mean and the standard deviation of the characteristic to form a dual response system in a linear model structure, and estimates the model coefficients using least squares methods. In this dissertation, a Bayesian hierarchical approach is proposed to model the dual response system so that the inherent hierarchical variance structure of the response can be modeled naturally. The Bayesian model is developed for both univariate and multivariate dual response surfaces, and for both fully replicated and partially replicated dual response surface designs. To evaluate its performance, the Bayesian method has been compared with the original method under a wide range of scenarios, and it shows higher efficiency and more robustness. In applications, the Bayesian approach retains all the advantages of the original dual response surface modelling method. Moreover, the Bayesian analysis allows inference on the uncertainty of the model parameters, and thus can give practitioners complete information on the distribution of the characteristic of interest. / Ph. D.
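The first step of the original procedure, forming the two responses from replicated observations at each design point, can be sketched as follows (the design points and data are made up; the subsequent least squares or Bayesian modelling of the two surfaces is not shown):

```python
import statistics

def dual_responses(replicates):
    """For each design point (a tuple of factor settings), form the
    dual responses: the sample mean and sample standard deviation of
    the replicated observations at that point. These two quantities
    become the responses of the mean and variability surfaces."""
    return {x: (statistics.mean(ys), statistics.stdev(ys))
            for x, ys in replicates.items()}
```

The Bayesian hierarchical approach described above instead models the replicate-level variation directly, rather than collapsing it to a point estimate of the standard deviation first.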
9

ACCOUNTING FOR MATCHING UNCERTAINTY IN PHOTOGRAPHIC IDENTIFICATION STUDIES OF WILD ANIMALS

Ellis, Amanda R. 01 January 2018 (has links)
I consider statistical modelling of data gathered by photographic identification in mark-recapture studies and propose a new method that incorporates the inherent uncertainty of photographic identification into the estimation of abundance, survival, and recruitment. A hierarchical model is proposed which accepts as data the scores assigned to pairs of photographs by pattern recognition algorithms and allows for uncertainty in matching photographs based on these scores. The new models incorporate latent capture histories that are treated as unknown random variables informed by the data, in contrast to past models in which the capture histories were fixed. The methods properly account for uncertainty in the matching process and avoid the need for researchers to confirm matches visually, which may be a time-consuming and error-prone process. Through simulation and application to data from a photographic identification study of whale sharks, I show that the proposed method produces estimates similar to those obtained when the true matches between photographic pairs are known. I then extend the method to incorporate auxiliary information that predetermines matches and non-matches between pairs of photographs in order to reduce computation time when fitting the model. Additionally, methods previously applied to record linkage problems in survey statistics are borrowed to predetermine matches and non-matches based on scores that are deemed extreme. I fit the new models in the Bayesian paradigm via Markov chain Monte Carlo and custom code that is available by request.
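The record-linkage-style predetermination of extreme scores can be sketched as a simple thresholding step (the thresholds and score values are illustrative, not those used in the dissertation; in the actual method only the pairs left uncertain are resolved by the hierarchical model):

```python
def predetermine_matches(scores, lo, hi):
    """Split photo pairs by similarity score: at or above hi the pair
    is fixed as a match, at or below lo it is fixed as a non-match,
    and anything in between is left for the model to resolve. Fixing
    the extremes shrinks the latent space the MCMC must explore."""
    fixed_match, fixed_non, uncertain = [], [], []
    for pair, s in scores.items():
        if s >= hi:
            fixed_match.append(pair)
        elif s <= lo:
            fixed_non.append(pair)
        else:
            uncertain.append(pair)
    return fixed_match, fixed_non, uncertain
```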
10

Renormalization group and phase transitions in spin, gauge, and QCD like theories

Liu, Yuzhi 01 July 2013 (has links)
In this thesis, we study several different renormalization group (RG) methods, including the conventional Wilson renormalization group, Monte Carlo renormalization group (MCRG), exact renormalization group (ERG, sometimes called functional RG), and tensor renormalization group (TRG). We use the two-dimensional nearest-neighbor Ising model to introduce many conventional yet important concepts. We then generalize the model to Dyson's hierarchical model (HM), which has rich phase properties depending on the strength of the interaction. The partition function zeros (Fisher zeros) of the HM in the complex temperature plane are calculated, and their connection with the complex RG flows is discussed. The two-lattice matching method is used both to construct the complex RG flows and to calculate the discrete beta functions. The motivation for calculating the discrete beta functions for various HM models is to test the matching method and to show how physically relevant fixed points emerge from the complex domain. We notice that the critical exponents calculated from the HM depend on the blocking parameter b. This motivated us to analyze the connection between the discrete and continuous RG transformations. We demonstrate numerical calculations of the ERG equations, discuss the relation between the Litim and Wilson-Polchinski equations, and examine the effect of the cut-off functions in the ERG calculation. We then apply methods developed in the spin models to more complicated and more physically relevant lattice gauge theories and lattice quantum chromodynamics (QCD)-like theories. The finite size scaling (FSS) technique is used to analyze the Binder cumulant of the SU(2) lattice gauge model. We calculate the critical exponents nu and omega of the model and show that it is in the same universality class as the three-dimensional Ising model. Motivated by walking technicolor theory, we study strongly coupled gauge theories with conformal or near-conformal properties.
We compare the distribution of Fisher zeros for lattice gauge models with four and twelve light fermion flavors. We also briefly discuss the scaling of the zeros and its connection with the infrared fixed point (IRFP) and the mass anomalous dimension. Conventional numerical simulations suffer from critical slowing down in the critical region, which prevents one from simulating large systems. In order to reach the continuum limit in lattice gauge theories, one needs either large volumes or clever extrapolations. TRG is a new computational method that can handle exponentially large systems and works well even in the critical region. We formulate the TRG blocking procedure for the two-dimensional O(2) (or XY) and O(3) spin models and discuss possible applications and generalizations of the method to other spin and lattice gauge models. We begin the thesis with an introduction and the historical background of the RG in general.
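A Metropolis sweep for the two-dimensional nearest-neighbor Ising model, the thesis's starting point, can be sketched as follows (the lattice size and inverse temperature in the usage are arbitrary choices, and this local-update sampler is exactly the kind that suffers the critical slowing down TRG is meant to avoid):

```python
import math
import random

def ising_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D nearest-neighbor Ising model
    (J = 1, zero field) on an n x n lattice with periodic boundaries:
    n*n single-spin flip attempts, each accepted with probability
    min(1, exp(-beta * dE)) where dE = 2 * s_ij * (sum of neighbors)."""
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2 * spins[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
    return spins
```

At large beta (low temperature) an ordered lattice stays ordered, since every flip raises the energy and is almost never accepted; near the critical point, successive sweeps decorrelate very slowly.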
