971

The Effect Touches, Post Touches, and Dribbles Have on Offense for Men's Division I Basketball

Jackson, Kim T. 04 March 2009
The purposes of this study were to evaluate the effects that touches per play, post touches per play, and dribbles to end a play (DEP) have on points per play, field goal percentage, turnovers, and fouls. This was done to provide empirical evidence on anecdotal theories held by coaches concerning ball movement, dribbles, and post touches. The data collected were analyzed using Bayesian hierarchical models. This study reports several intriguing trends. First, exceeding nine passes and three dribbles to end a play results in a decrease in points per play and field goal percentage. Second, taking up to three dribbles into a shot was more productive and efficient than shooting with no dribbles. Third, post play does not have as large an effect on offensive basketball as previously expected. Lastly, offensive rebounds appear to have a universally positive effect on offensive basketball. This study supported some anecdotal beliefs about basketball but not others, reinforcing the case for statistically grounded study of the anecdotal beliefs held about the game.
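The partial-pooling idea behind a Bayesian hierarchical model of this kind can be sketched in a few lines. In this hypothetical normal-normal example with known variances (all numbers invented, not the thesis's data), each team's raw points-per-play estimate is shrunk toward the league mean, with less shrinkage for teams observed over more plays:

```python
# Hypothetical sketch of partial pooling in a hierarchical model: a
# team's points-per-play estimate is shrunk toward the league mean,
# more strongly when the team has few observed plays. All numbers
# below are invented for illustration.

def shrunk_estimate(team_mean, n_plays, league_mean, tau2, sigma2):
    """Posterior mean of a team's true points-per-play under a
    normal-normal hierarchical model with known variances.
    tau2  : between-team variance of the true rates
    sigma2: within-team (play-to-play) variance
    """
    precision_data = n_plays / sigma2   # weight of the team's own data
    precision_prior = 1.0 / tau2        # weight of the league-wide prior
    w = precision_data / (precision_data + precision_prior)
    return w * team_mean + (1 - w) * league_mean

# A team observed over many plays keeps most of its raw estimate...
big_sample = shrunk_estimate(1.10, n_plays=2000, league_mean=0.95,
                             tau2=0.01, sigma2=0.25)
# ...while a small-sample team is pulled toward the league mean.
small_sample = shrunk_estimate(1.10, n_plays=20, league_mean=0.95,
                               tau2=0.01, sigma2=0.25)
```

Both estimates land between the raw team mean and the league mean; the small-sample one sits much closer to the league mean.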
972

Analyzing the Effectiveness of Safety Measures Using Bayesian Methods

Thurgood, Daniel J. 13 July 2010
Recent research has shown that traditional safety evaluation methods are often inadequate for accurately determining the effectiveness of roadway safety measures. In recent years, advanced statistical methods, particularly hierarchical Bayesian techniques, have been applied in traffic safety studies because they can account for the shortcomings of traditional methods. Hierarchical Bayesian modeling is a powerful tool for expressing rich statistical models that reflect a given problem more fully than a simpler model could. This paper uses a hierarchical Bayesian model to analyze the effectiveness of two types of road safety measures: raised medians and cable barriers. Several sites where these safety measures have been implemented in the last 10 years were evaluated using available crash data, determining the effect each measure has on crash frequency and severity at the selected locations. The results of this study show that installing a raised median is an effective technique for reducing overall crash frequency and severity on Utah roadways. The cable barrier analysis showed that cable barriers were effective in decreasing cross-median crashes and crash severity.
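A toy version of the Bayesian machinery behind such before/after evaluations can be sketched with a conjugate model: crash counts treated as Poisson, a Gamma prior on the crash rate, and a Monte Carlo comparison of the before and after posteriors. The crash counts and prior below are invented, not the Utah data:

```python
import random

# Hypothetical before/after sketch: Poisson crash counts with a
# conjugate Gamma(shape, rate) prior on the crash rate. All counts
# and prior values are invented for illustration.

def gamma_posterior(prior_shape, prior_rate, crashes, years):
    """Gamma(shape, rate) posterior for a Poisson crash rate."""
    return prior_shape + crashes, prior_rate + years

random.seed(0)

# Invented data: 30 crashes in 5 years before, 12 in 5 years after.
a_before, b_before = gamma_posterior(1.0, 0.1, crashes=30, years=5)
a_after, b_after = gamma_posterior(1.0, 0.1, crashes=12, years=5)

# Posterior probability that the safety measure reduced the rate.
draws = 10_000
reduced = sum(
    random.gammavariate(a_after, 1 / b_after)      # scale = 1/rate
    < random.gammavariate(a_before, 1 / b_before)
    for _ in range(draws)
)
prob_reduction = reduced / draws
```

The full hierarchical models in the thesis additionally share information across sites; this conjugate single-site version only illustrates the posterior-comparison step.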
973

An Introduction to Bayesian Methodology via WinBUGS and PROC MCMC

Lindsey, Heidi Lula 06 July 2011
Bayesian statistical methods have long been computationally out of reach because the analysis often requires integration of high-dimensional functions. Recent advancements in computational tools for applying Markov chain Monte Carlo (MCMC) methods are making Bayesian data analysis accessible to all statisticians. Two such computer tools are WinBUGS and SAS 9.2's PROC MCMC. Bayesian methodology is introduced through discussion of fourteen statistical examples, with code and computer output, to demonstrate the power of these computational tools in a wide variety of settings.
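The kind of sampling that WinBUGS and PROC MCMC automate can be illustrated with a hand-rolled random-walk Metropolis sampler for the mean of normal data with known variance. This is a generic sketch of the MCMC idea, not either tool's syntax:

```python
import math
import random

# Minimal random-walk Metropolis sampler for the mean of normal data
# with known variance and a vague normal prior. Illustrative only.

def log_posterior(mu, data, sigma=1.0, prior_sd=10.0):
    """Normal log-likelihood (known sigma) plus a vague normal log-prior."""
    loglik = sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)
    logprior = -0.5 * (mu / prior_sd) ** 2
    return loglik + logprior

def metropolis(data, n_iter=5000, step=0.5, seed=42):
    rng = random.Random(seed)
    mu, samples = 0.0, []
    for _ in range(n_iter):
        proposal = mu + rng.gauss(0, step)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < (log_posterior(proposal, data)
                                     - log_posterior(mu, data)):
            mu = proposal
        samples.append(mu)
    return samples[n_iter // 2:]  # discard the first half as burn-in

data = [2.1, 1.7, 2.5, 2.0, 1.9, 2.3]
samples = metropolis(data)
posterior_mean = sum(samples) / len(samples)
```

With a nearly flat prior, the posterior mean should sit close to the sample mean of about 2.08.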
974

Predicting Maximal Oxygen Consumption (VO2max) Levels in Adolescents

Shepherd, Brent A. 09 March 2012
Maximal oxygen consumption (VO2max) is considered by many to be the best overall measure of an individual's cardiovascular health. Collecting the measurement, however, requires subjecting an individual to prolonged periods of intense exercise until their maximal level is reached, the point at which the body uses no additional oxygen from the air despite increased exercise intensity. Collecting VO2max data also requires expensive equipment and imposes considerable subject discomfort to obtain accurate results. Because of this inherent difficulty, the measurement is often avoided despite its usefulness. In this research, we propose a set of Bayesian hierarchical models to predict VO2max levels in adolescents, ages 12 through 17, using less extreme measurements. Two models are developed separately, one that uses submaximal exercise data and one that uses physical fitness questionnaire data. The best submaximal model was found to include age, gender, BMI, heart rate, rate of perceived exertion, treadmill miles per hour, and an interaction between age and heart rate. The second model, designed for those with physical limitations, uses age, gender, BMI, and two separate questionnaire results measuring physical activity levels and functional ability levels, as well as an interaction between the physical activity level score and gender. Both models use separate model variances for males and females.
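The structure of the submaximal model's linear predictor, including the age-by-heart-rate interaction the abstract mentions, can be sketched as follows. Every coefficient value here is an invented placeholder, not a fitted estimate from the thesis:

```python
# Hypothetical linear predictor with an age x heart-rate interaction.
# All coefficients are invented placeholders for illustration only.

COEF = {
    "intercept": 50.0, "age": -0.5, "male": 5.0, "bmi": -0.8,
    "heart_rate": -0.05, "rpe": -0.3, "mph": 2.0,
    "age_x_heart_rate": -0.002,
}

def predict_vo2max(age, male, bmi, heart_rate, rpe, mph):
    """Evaluate the linear predictor, interaction term included."""
    return (COEF["intercept"]
            + COEF["age"] * age
            + COEF["male"] * male              # 1 = male, 0 = female
            + COEF["bmi"] * bmi
            + COEF["heart_rate"] * heart_rate
            + COEF["rpe"] * rpe
            + COEF["mph"] * mph
            + COEF["age_x_heart_rate"] * age * heart_rate)

estimate = predict_vo2max(age=15, male=1, bmi=20,
                          heart_rate=150, rpe=12, mph=5)
```

In the thesis the coefficients carry posterior distributions rather than point values, so predictions come with uncertainty intervals; this sketch shows only the mean structure.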
975

Bayesian Pollution Source Apportionment Incorporating Multiple Simultaneous Measurements

Christensen, Jonathan Casey 12 March 2012
We describe a method to estimate pollution profiles and contribution levels for distinct prominent pollution sources in a region, based on daily pollutant concentration measurements from multiple measurement stations over a period of time. In an extension of existing work, we estimate common source profiles but distinct contribution levels based on measurements from each station. In addition, we explore the possibility of extending existing work to allow adjustments for synoptic regimes, large-scale weather patterns which may affect the amount of pollution measured from individual sources as well as the amounts of particular pollutants. For both extensions we propose Bayesian methods to estimate pollution source profiles and contributions.
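The receptor-model structure described here can be sketched forwards: every station's expected pollutant vector mixes the same shared source profiles, weighted by station-specific contribution levels (measurement noise omitted). The profiles and contributions below are invented for illustration:

```python
# Hypothetical forward sketch of the mixing model: shared source
# profiles, station-specific contributions. All values are invented.

# Two sources x three pollutants; each row is a composition profile.
PROFILES = [
    [0.7, 0.2, 0.1],  # e.g. a traffic-like source (invented)
    [0.1, 0.3, 0.6],  # e.g. an industrial-like source (invented)
]

def expected_concentrations(contributions):
    """Mix the shared profiles with one station's contribution levels."""
    n_pollutants = len(PROFILES[0])
    return [
        sum(c * PROFILES[k][j] for k, c in enumerate(contributions))
        for j in range(n_pollutants)
    ]

# Same profiles, different contribution levels at two stations.
station_a = expected_concentrations([10.0, 2.0])
station_b = expected_concentrations([3.0, 8.0])
```

The Bayesian task is the inverse problem: given noisy station measurements, recover the shared profiles and per-station contributions, with priors enforcing nonnegativity.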
976

Hitters vs. Pitchers: A Comparison of Fantasy Baseball Player Performances Using Hierarchical Bayesian Models

Huddleston, Scott D. 17 April 2012
In recent years, fantasy baseball has seen an explosion in popularity. Major League Baseball, with its long, storied history and the enormous quantity of data available, naturally lends itself to the modern-day recreational activity known as fantasy baseball. Fantasy baseball is a game in which participants manage an imaginary roster of real players and compete against one another using those players' real-life statistics to score points. Early forms of fantasy baseball began in the early 1960s, but beginning in the 1990s, the game was revolutionized by the advent of powerful computers and the Internet. The data used in this project come from an actual fantasy baseball league which uses a head-to-head, points-based scoring system. The data consist of the weekly point totals accumulated over the first three-fourths of the 2011 regular season by the top 110 hitters and top 70 pitchers in Major League Baseball. The purpose of this project is to analyze the relative value of pitchers versus hitters in this league using hierarchical Bayesian models. Three models will be compared: one which differentiates between hitters and pitchers, another which also differentiates between starting pitchers and relief pitchers, and a third which makes no distinction whatsoever between hitters and pitchers. The models will be compared using the deviance information criterion (DIC). The best model will then be used to predict weekly point totals for the last fourth of the 2011 season, and posterior predictive densities will be compared to actual weekly scores.
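The model-comparison step can be sketched directly from the standard definition of DIC: DIC = Dbar + pD, where Dbar is the posterior mean deviance and pD = Dbar - D(theta-hat) is the effective number of parameters. The deviance draws below are invented, not the project's MCMC output:

```python
# Sketch of the deviance information criterion: DIC = Dbar + pD, with
# pD = Dbar - D(posterior mean). All deviance values are invented.

def dic(deviance_draws, deviance_at_posterior_mean):
    d_bar = sum(deviance_draws) / len(deviance_draws)
    p_d = d_bar - deviance_at_posterior_mean   # effective n. of parameters
    return d_bar + p_d

# Invented MCMC deviance draws for two competing models:
model_split = dic([210.0, 214.0, 212.0, 216.0], 208.0)   # hitters vs pitchers
model_pooled = dic([230.0, 228.0, 233.0, 229.0], 227.0)  # no distinction

best = "split" if model_split < model_pooled else "pooled"
```

Lower DIC wins, balancing fit (Dbar) against complexity (pD); here the invented numbers favor the split model.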
977

Species Identification and Strain Attribution with Unassembled Sequencing Data

Francis, Owen Eric 18 April 2012
Emerging sequencing approaches have revolutionized the way we can collect DNA sequence data for applications in bioforensics and biosurveillance. In this research, we present an approach to construct a database of known biological agents and use it to develop a statistical framework that analyzes raw reads from next-generation sequence data for species identification and strain attribution. Our method capitalizes on a Bayesian statistical framework that accommodates information on sequence quality and mapping quality, and provides posterior probabilities of matches to a known database of target genomes. Importantly, our approach also incorporates the possibility that multiple species are present in the sample or that the target strain is not contained in the reference database at all. Furthermore, our approach can accurately discriminate between very closely related strains of the same species with very little coverage of the genome and without the need for genome assembly, a time-consuming and labor-intensive step. We demonstrate our approach using genomic data from a variety of known bacterial agents of bioterrorism and agents impacting human health.
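The core Bayesian step can be illustrated as follows: per-candidate read likelihoods are combined with priors and normalized into posterior probabilities over the reference database, with an explicit "unknown" category for targets not in the database, as the abstract describes. All likelihood and prior values here are invented:

```python
import math

# Hypothetical sketch: posterior probabilities over candidate genomes
# plus an explicit "unknown" category. Likelihoods/priors are invented.

def posterior_over_genomes(log_likelihoods, log_lik_unknown, priors):
    """Normalize likelihood x prior over the candidates plus 'unknown'."""
    names = list(log_likelihoods) + ["unknown"]
    logs = [log_likelihoods[n] + math.log(priors[n])
            for n in log_likelihoods]
    logs.append(log_lik_unknown + math.log(priors["unknown"]))
    m = max(logs)                          # log-sum-exp for stability
    weights = [math.exp(v - m) for v in logs]
    total = sum(weights)
    return dict(zip(names, (w / total for w in weights)))

post = posterior_over_genomes(
    {"B. anthracis": -120.0, "B. cereus": -125.0},
    log_lik_unknown=-140.0,
    priors={"B. anthracis": 1 / 3, "B. cereus": 1 / 3, "unknown": 1 / 3},
)
```

Because raw read likelihoods are tiny, everything is kept on the log scale and normalized via log-sum-exp; a five-unit log-likelihood gap already makes one strain overwhelmingly probable.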
978

Joint Weibull Models for Survival and Longitudinal Data with Dynamic Predictions

Uvasheva, Dilyara 22 August 2022
Patients previously diagnosed with prostate cancer usually undergo routine clinical monitoring that involves measuring prostate-specific antigen (PSA). The trajectory of this biomarker over time serves as an indication of cancer recurrence: if the PSA value begins to increase, the cancer is considered more likely to recur, and the patient is advised to start treatment. Patient follow-up can stop for two reasons, and this poses a challenge: one is the start of salvage hormone therapy, the other is actual recurrence of the cancer. When analyzing such data, we need to account for informative dropout; neglecting it may bias estimation of the PSA trajectory. Hormone therapy thus serves as a censoring event, a defining feature of survival analysis. Motivated by the PSA data, we describe the dropout mechanism efficiently using a joint model. The survival submodel is based on the Weibull distribution, and we use Bayesian inference to fit the model; specifically, we use the R-INLA package, a much faster alternative to MCMC-based inference. The fact that our joint model with a linear bivariate Gaussian association structure is a latent Gaussian model (LGM) allows us to use this inferential tool. Based on this work, we are then able to develop dynamic predictions of prostate cancer recurrence. Making accurate prognoses for cancer data is clinically impactful and could ultimately contribute to the development of precision medicine.
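The Weibull survival submodel with right censoring can be sketched in a few lines: observed recurrences contribute the log-density, while censored records (here, censoring when salvage hormone therapy starts) contribute only the log-survival. Shape, scale, and follow-up times below are invented:

```python
import math

# Minimal Weibull survival likelihood with right censoring.
# Shape/scale and follow-up times are invented for illustration.

def weibull_log_survival(t, shape, scale):
    return -((t / scale) ** shape)

def weibull_log_density(t, shape, scale):
    return (math.log(shape / scale)
            + (shape - 1) * math.log(t / scale)
            + weibull_log_survival(t, shape, scale))

def log_likelihood(records, shape, scale):
    """records: (time, event) pairs; event=1 recurrence, 0 censored."""
    return sum(
        weibull_log_density(t, shape, scale) if event
        else weibull_log_survival(t, shape, scale)
        for t, event in records
    )

# Invented follow-up times in years since diagnosis.
data = [(2.0, 1), (5.5, 0), (3.1, 1), (7.0, 0)]
ll = log_likelihood(data, shape=1.5, scale=6.0)
```

The thesis's joint model additionally links this survival part to the longitudinal PSA trajectory through a shared latent structure; this sketch covers only the censored Weibull likelihood.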
979

One-Stage and Bayesian Two-Stage Optimal Designs for Mixture Models

Lin, Hefang 31 December 1999
In this research, Bayesian two-stage D-D optimal designs for mixture experiments with or without process variables under model uncertainty are developed. A Bayesian optimality criterion is used in the first stage to minimize the determinant of the posterior variances of the parameters. The second-stage design is then generated according to an optimality procedure that incorporates the model improved with first-stage data. Our results show that the Bayesian two-stage D-D optimal design is more efficient than both the Bayesian one-stage D-optimal design and the non-Bayesian one-stage D-optimal design in most cases. We also use simulations to investigate the ratio between the sample sizes of the two stages and to identify the smallest adequate sample size for the first stage. In addition, we discuss D-optimal second- or higher-order designs, and show that Ds-optimal designs are a reasonable alternative to D-optimal designs.
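The D-optimality criterion itself is easy to illustrate: among candidate designs, prefer the one maximizing det(X'X) for the assumed model. The toy comparison below uses a two-component first-order Scheffé mixture model (E[y] = b1*x1 + b2*x2 with x1 + x2 = 1) and two invented four-run designs:

```python
# Illustrative D-optimality comparison for a tiny two-component
# Scheffe first-order mixture model. Designs are invented.

def xtx(design):
    """Information matrix X'X for a list of design rows."""
    p = len(design[0])
    return [[sum(row[i] * row[j] for row in design) for j in range(p)]
            for i in range(p)]

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Four runs at the pure blends vs. four runs at the 50/50 blend.
vertices = [(1.0, 0.0), (0.0, 1.0), (1.0, 0.0), (0.0, 1.0)]
centroids = [(0.5, 0.5), (0.5, 0.5), (0.5, 0.5), (0.5, 0.5)]

d_vertices = det2(xtx(vertices))    # spread over the pure blends
d_centroids = det2(xtx(centroids))  # all runs at one point: singular
```

The vertex design yields a positive determinant, while piling all runs on the centroid makes X'X singular (determinant zero), so both coefficients cannot be estimated; D-optimality formalizes exactly this preference. The Bayesian two-stage version replaces X'X with the posterior information after the first-stage data.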
980

Dynamic reliability assessment of flare systems by combining fault tree analysis and Bayesian networks

Kabir, Sohag, Taleb-Berrouane, M., Papadopoulos, Y. 24 September 2019
Flaring is a combustion process commonly used in the oil and gas industry to dispose of flammable waste gases. Flare flameout occurs when these gases escape unburnt from the flare tip, causing the discharge of flammable and/or toxic vapor clouds. The toxic gases released during this process have the potential to initiate safety hazards and cause serious harm to the ecosystem and human health. Flare flameout can be caused by environmental conditions, equipment failure, and human error. However, to better understand its causes, a rigorous analysis of the behavior of flare systems under failure conditions is required. In this article, we used fault tree analysis (FTA) and the dynamic Bayesian network (DBN) to assess the reliability of flare systems. In this study, we analyzed 40 different combinations of basic events that can cause flare flameout to determine the event with the highest impact on system failure. In the quantitative analysis, we use both constant and time-dependent failure rates of system components. The results show that combining these two approaches allows for robust probabilistic reasoning on flare system reliability, which can help improve the safety and asset integrity of process facilities. The proposed DBN model constitutes a significant step toward improving the safety and reliability of flare systems in the oil and gas industry.
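The FTA side of such an analysis can be sketched compactly: basic events with constant failure rates have time-dependent unreliability F(t) = 1 - exp(-lambda*t), and independent events combine through AND gates (product) and OR gates (complement of the product of complements). The events, rates, and gate structure below are invented, not the paper's 40 analyzed combinations:

```python
import math

# Toy fault-tree evaluation with time-dependent basic-event
# probabilities. Events, rates, and structure are invented.

def unreliability(rate, t):
    """F(t) = 1 - exp(-lambda * t) for a constant failure rate."""
    return 1.0 - math.exp(-rate * t)

def and_gate(probs):
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

t = 8760.0  # one year, in hours
pilot_out = unreliability(2e-5, t)      # e.g. pilot flame failure
ignitor_fail = unreliability(1e-5, t)   # e.g. re-ignition failure
gas_surge = unreliability(5e-5, t)      # e.g. purge-gas loss

# Invented top event: flameout if (pilot out AND ignitor fails) OR surge.
p_flameout = or_gate([and_gate([pilot_out, ignitor_fail]), gas_surge])
```

The DBN extension in the paper goes further by letting these event probabilities evolve over time slices and by supporting diagnostic (backward) reasoning, which static FTA cannot do.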
