About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Contributions to the Analysis of Experiments Using Empirical Bayes Techniques

Delaney, James Dillon 10 July 2006 (has links)
Specifying a prior distribution for the large number of parameters in the linear statistical model is a difficult step in the Bayesian approach to the design and analysis of experiments. Here we address this difficulty by proposing the use of functional priors and then by working out important details for three- and higher-level experiments. One of the challenges presented by higher-level experiments is that a factor can be either qualitative or quantitative. We propose appropriate correlation functions and coding schemes so that the prior distribution is simple and the results easily interpretable. The prior incorporates well-known experimental design principles such as effect hierarchy and effect heredity, which helps to automatically resolve the aliasing problems experienced in fractional designs. The second part of the thesis focuses on the analysis of optimization experiments. Designed experiments whose primary purpose is to determine optimal settings for all of the factors in some predetermined set are not uncommon. Here we distinguish between the two concepts of statistical significance and practical significance. We perform estimation via an empirical Bayes data analysis methodology detailed in the recent literature, but then propose an alternative to the usual next step of determining optimal factor-level settings. Instead of implementing variable or model selection techniques, we propose an objective function that assists in our goal of finding the ideal settings for all factors over which we experimented. The usefulness of the new approach is illustrated through the analysis of some real experiments as well as simulation.
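A minimal, hypothetical sketch of the last idea, optimizing a predicted response over every combination of factor settings rather than first selecting a submodel, is given below; the shrunken, empirical-Bayes-style effect estimates and the two-level, three-factor layout are assumptions for illustration only and do not come from the thesis.

```python
import itertools

# Hypothetical shrunken (empirical-Bayes-style) effect estimates for a
# two-level, three-factor experiment in coded units: main effects plus one interaction.
intercept = 10.0
beta = {"A": 1.8, "B": -0.4, "C": 0.9, "AB": 0.6}

def predicted_response(a, b, c):
    """Predicted response at coded settings a, b, c, each in {-1, +1}."""
    return (intercept
            + beta["A"] * a
            + beta["B"] * b
            + beta["C"] * c
            + beta["AB"] * a * b)

# Objective-function approach: evaluate every combination of factor settings
# and keep the best one, instead of pruning factors with a selection step first.
settings = list(itertools.product([-1, 1], repeat=3))
best = max(settings, key=lambda s: predicted_response(*s))
print("best coded settings (A, B, C):", best)
print("predicted response at best settings:", predicted_response(*best))
```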
2

Effect Sizes, Significance Tests, and Confidence Intervals: Assessing the Influence and Impact of Research Reporting Protocol and Practice

Hess, Melinda Rae 30 October 2003 (has links)
This study addresses research reporting practices and protocols by bridging the gap between the theoretical and conceptual debates typically found in the literature and more realistic applications using data from published research. Specifically, the practice of using findings of statistical analysis as the primary, and often only, basis for results and conclusions of research is investigated by computing effect sizes and confidence intervals and considering how their use might impact the strength of inferences and conclusions reported. Using a sample of published manuscripts from three peer-reviewed journals, central quantitative findings were expressed as dichotomous hypothesis test results, point estimates of effect sizes, and confidence intervals. Studies using three different types of statistical analyses were considered for inclusion: t-tests, regression, and Analysis of Variance (ANOVA). The differences in the substantive interpretations of results from these accomplished and published studies were then examined as a function of these different analytical approaches. Both quantitative and qualitative techniques were used to examine the findings. General descriptive statistical techniques were employed to capture how many studies and analyses might have different interpretations if alternative methods of reporting findings were used in addition to traditional tests of statistical significance. Qualitative methods were then used to gain a sense of the impact of these other forms of reporting findings on the wording used in the research conclusions. It was discovered that non-significant results were more prone to need evidence of effect size than significant results. Regardless of the significance test outcome, the addition of information from confidence intervals tended to heavily impact the findings resulting from significance tests. The results were interpreted in terms of improving reporting practices in applied research. Issues noted in this study relevant to the primary focus are discussed in general, with implications for future research. Recommendations are made regarding editorial and publishing practices, both for primary researchers and for editors.
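The three reporting forms compared in the study (a dichotomous significance test, a point estimate of effect size, and a confidence interval) can be illustrated for a single two-group comparison with the minimal sketch below; the simulated scores, group sizes, and seed are hypothetical and are not data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group1 = rng.normal(loc=52.0, scale=10.0, size=40)   # hypothetical outcome scores
group2 = rng.normal(loc=48.0, scale=10.0, size=40)

# 1. Dichotomous hypothesis-test result (significant / not significant)
t_stat, p_value = stats.ttest_ind(group1, group2)

# 2. Point estimate of effect size: Cohen's d with a pooled standard deviation
n1, n2 = len(group1), len(group2)
pooled_sd = np.sqrt(((n1 - 1) * group1.var(ddof=1) +
                     (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (group1.mean() - group2.mean()) / pooled_sd

# 3. 95% confidence interval for the mean difference
diff = group1.mean() - group2.mean()
se = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"p = {p_value:.3f}, d = {cohens_d:.2f}, "
      f"95% CI for the difference = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Reporting all three together, rather than the p-value alone, is the practice whose impact the study assesses.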
3

Assessing Adverse Impact: An Alternative to the Four-Fifths Rule

Ercan, Seydahmet 06 September 2012 (has links)
The current study examines the behavior of four adverse impact (AI) measurements: the 4/5ths rule, two tests of significance (ZD and ZIR), and a newly developed AI measurement (Lnadj). Prompted by the Office of Federal Contract Compliance Programs' manual, which notes that assessments of AI are sensitive when the sample size is very large (Office of Federal Contract Compliance Programs, 2002), Lnadj was developed and is proposed as an alternative practical significance test to the 4/5ths rule. The results indicated that, unlike the 4/5ths rule and other tests for adverse impact, Lnadj is an index of practical significance that is less sensitive to differences across selection conditions that are not supposed to affect tests of adverse impact. Furthermore, Lnadj decreases Type I error rates when there is a small d value and Type II error rates when there is a moderate to large d value.
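The Lnadj statistic is specific to this study and is not reproduced here, but the 4/5ths rule and a two-proportion z test of the difference in selection rates (in the spirit of the ZD significance test) can be sketched as follows; the applicant and hiring counts are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical selection data: applicants and hires by group.
majority_applicants, majority_hired = 200, 100
minority_applicants, minority_hired = 50, 18

rate_majority = majority_hired / majority_applicants   # 0.50
rate_minority = minority_hired / minority_applicants   # 0.36

# 4/5ths (80%) rule: adverse impact is flagged when the ratio of the lower
# selection rate to the higher selection rate falls below 0.80.
impact_ratio = rate_minority / rate_majority
four_fifths_flag = impact_ratio < 0.80

# Two-sample z test for the difference in selection rates
# (a statistical-significance counterpart to the practical 4/5ths rule).
pooled = (majority_hired + minority_hired) / (majority_applicants + minority_applicants)
se = np.sqrt(pooled * (1 - pooled) *
             (1 / majority_applicants + 1 / minority_applicants))
z = (rate_majority - rate_minority) / se
p_value = 2 * (1 - stats.norm.cdf(abs(z)))

print(f"impact ratio = {impact_ratio:.2f}, 4/5ths flag = {four_fifths_flag}, "
      f"z = {z:.2f}, p = {p_value:.3f}")
```

With these made-up counts the 4/5ths rule flags adverse impact (ratio 0.72) while the z test does not reach significance at the 0.05 level, the kind of divergence between practical and statistical criteria that motivates the study.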
4

Effect sizes, significance tests, and confidence intervals [electronic resource] : assessing the influence and impact of research reporting protocol and practice / by Melinda Rae Hess.

Hess, Melinda Rae. January 2003 (has links)
Includes vita. / Title from PDF of title page. / Document formatted into pages; contains 223 pages. / Thesis (Ph.D.)--University of South Florida, 2003. / Includes bibliographical references. / Text (Electronic thesis) in PDF format. / ABSTRACT: This study addresses research reporting practices and protocols by bridging the gap between the theoretical and conceptual debates typically found in the literature and more realistic applications using data from published research. Specifically, the practice of using findings of statistical analysis as the primary, and often only, basis for results and conclusions of research is investigated by computing effect sizes and confidence intervals and considering how their use might impact the strength of inferences and conclusions reported. Using a sample of published manuscripts from three peer-reviewed journals, central quantitative findings were expressed as dichotomous hypothesis test results, point estimates of effect sizes, and confidence intervals. Studies using three different types of statistical analyses were considered for inclusion: t-tests, regression, and Analysis of Variance (ANOVA). / ABSTRACT: The differences in the substantive interpretations of results from these accomplished and published studies were then examined as a function of these different analytical approaches. Both quantitative and qualitative techniques were used to examine the findings. General descriptive statistical techniques were employed to capture how many studies and analyses might have different interpretations if alternative methods of reporting findings were used in addition to traditional tests of statistical significance. Qualitative methods were then used to gain a sense of the impact of these other forms of reporting findings on the wording used in the research conclusions. It was discovered that non-significant results were more prone to need evidence of effect size than significant results. / ABSTRACT: Regardless of the significance test outcome, the addition of information from confidence intervals tended to heavily impact the findings resulting from significance tests. The results were interpreted in terms of improving reporting practices in applied research. Issues noted in this study relevant to the primary focus are discussed in general, with implications for future research. Recommendations are made regarding editorial and publishing practices, both for primary researchers and for editors. / System requirements: World Wide Web browser and PDF reader. / Mode of access: World Wide Web.
5

Statistical Methods for Non-Linear Profile Monitoring

Quevedo Candela, Ana Valeria 02 January 2020 (has links)
There has been increasing interest and extensive research in monitoring a process over time whose characteristics are represented mathematically in functional forms such as profiles. Most of the current techniques require all of the data for each profile to determine the state of the process. Thus, quality engineers in industries such as agriculture, aquaculture, and chemicals cannot make corrections to the current profile that are essential for correcting their processes at an early stage. In addition, the focus of most of the current techniques is on the statistical significance of the parameters or features of the model instead of the practical significance, which often relates to the actual quality characteristic. The goal of this research is to provide alternatives to address these two main concerns. First, we study the use of a Shewhart-type control chart to monitor within profiles, where the central line is the predictive mean profile and the control limits are formed based on the prediction band. Second, we study a statistic based on a non-linear mixed model, recognizing that the model leads to correlations among the estimated parameters. / Doctor of Philosophy / Checking the stability over time of a process whose quality is best expressed by a relationship between a quality characteristic and other variables involved in the process has received increasing attention. The goal of this research is to provide alternative methods to determine the state of such a process. Both methods presented here are compared to the current methodologies. The first method allows a process to be monitored while the data are still being collected. The second is based on the quality characteristic of the process and takes full advantage of the model structure. Both methods appear to be more robust than the current most well-known method.
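As a loose, hypothetical illustration of the first approach, a Shewhart-type chart whose central line is a predictive mean profile and whose limits come from a pointwise prediction band, checked point by point as the current profile arrives, consider the sketch below; the simulated logistic-shaped profiles, shift size, and limit multiplier are assumptions, not the thesis's implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical historical data: 20 in-control profiles observed at 15 time points.
t = np.linspace(0, 14, 15)
true_profile = 100 / (1 + np.exp(-(t - 7)))              # logistic-shaped mean curve
historical = true_profile + rng.normal(0, 3, size=(20, 15))

# Central line: predictive mean profile; limits: pointwise prediction band.
n = historical.shape[0]
mean_profile = historical.mean(axis=0)
sd_profile = historical.std(axis=0, ddof=1)
t_crit = stats.t.ppf(0.99865, df=n - 1)                  # roughly a 3-sigma tail
half_width = t_crit * sd_profile * np.sqrt(1 + 1 / n)
lcl, ucl = mean_profile - half_width, mean_profile + half_width

# The current profile is checked as each point arrives, so a signal can be
# raised before the whole profile has been collected.
current = true_profile + rng.normal(0, 3, size=15)
current[8:] += 12                                         # simulated upward shift
for i, y in enumerate(current):
    if not (lcl[i] <= y <= ucl[i]):
        print(f"signal at time index {i}: y = {y:.1f} "
              f"is outside ({lcl[i]:.1f}, {ucl[i]:.1f})")
        break
```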
6

The use of effect sizes in credit rating models

Steyn, Hendrik Stefanus 12 1900 (has links)
The aim of this thesis was to investigate the use of effect sizes to report the results of statistical credit rating models in a more practical way. Rating systems in the form of statistical probability models, such as logistic regression models, are used to forecast the behaviour of clients and to guide business in rating clients as "high" or "low" risk borrowers. Therefore, model results were reported in terms of statistical significance as well as in business language (practical significance), which business experts can understand and interpret. In this thesis, statistical results were expressed as effect sizes, such as Cohen's d, that put the results into standardised and measurable units which can be reported practically. These effect sizes indicated the strength of correlations between variables, the contribution of variables to the odds of defaulting, the overall goodness-of-fit of the models, and the models' discriminating ability between high and low risk customers. / Statistics / M. Sc. (Statistics)
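A hypothetical sketch of this pairing, a logistic regression coefficient and p-value for statistical significance alongside an odds ratio and Cohen's d as practically interpretable effect sizes, is given below; the simulated scorecard variable, coefficients, and sample size are assumptions, not results from the thesis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical scorecard variable (say, a debt-to-income ratio) and defaults.
n = 2000
x = rng.normal(0.35, 0.10, size=n)
true_logit = -4.0 + 6.0 * x                       # assumed true relationship
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Statistical significance: logistic regression coefficient and its p-value.
result = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
beta = result.params[1]
odds_ratio = np.exp(beta)                         # contribution to the odds of defaulting

# Practical significance: Cohen's d for the variable between defaulters
# and non-defaulters, a standardised, unit-free measure.
defaulters, non_defaulters = x[y == 1], x[y == 0]
n1, n0 = len(defaulters), len(non_defaulters)
pooled_sd = np.sqrt(((n1 - 1) * defaulters.var(ddof=1) +
                     (n0 - 1) * non_defaulters.var(ddof=1)) / (n1 + n0 - 2))
cohens_d = (defaulters.mean() - non_defaulters.mean()) / pooled_sd

print(f"beta = {beta:.2f} (p = {result.pvalues[1]:.3g}), "
      f"odds ratio per unit of x = {odds_ratio:.2f}, Cohen's d = {cohens_d:.2f}")
```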
