671 |
Essays on Adaptive Experimentation: Bringing Real-World Challenges to Multi-Armed Bandits - Qin, Chao, January 2024 (has links)
Classical randomized controlled trials have long been the gold standard for estimating treatment effects. However, adaptive experimentation, especially through multi-armed bandit algorithms, aims to improve efficiency beyond traditional randomized controlled trials. While there is a vast literature on multi-armed bandits, a simple yet powerful framework in reinforcement learning, real-world challenges can hinder the successful implementation of adaptive algorithms. This thesis seeks to bridge this gap by integrating real-world challenges into multi-armed bandits.
The first chapter examines two competing priorities that practitioners often encounter in adaptive experiments: maximizing total welfare through effective treatment assignments and swiftly conducting experiments to implement population-wide treatments. We propose a unified model that simultaneously accounts for within-experiment performance and post-experiment outcomes. We provide a sharp theory of optimal performance that not only unifies canonical results from the literature on regret minimization and best-arm identification but also uncovers novel insights. Our theory reveals that familiar algorithms, such as the recently proposed top-two Thompson sampling algorithm, can optimize a broad class of objectives if a single scalar parameter is appropriately adjusted. Furthermore, we demonstrate that substantial reductions in experiment duration can often be achieved with minimal impact on total regret.
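The role of the single scalar parameter mentioned above can be made concrete with a small sketch of top-two Thompson sampling for Bernoulli rewards. The sketch below is illustrative only: the Beta(1, 1) priors, the variable names, and the default beta = 0.5 are assumptions for the example, not details taken from the thesis.

```python
import numpy as np

def top_two_thompson_step(successes, failures, beta=0.5, rng=None):
    """One arm-selection step of top-two Thompson sampling for Bernoulli rewards.

    successes / failures hold posterior counts per arm (Beta(1, 1) priors assumed);
    beta is the scalar tuning parameter: the probability of playing the sampled
    leader rather than a challenger."""
    rng = rng or np.random.default_rng()
    theta = rng.beta(successes + 1, failures + 1)   # one posterior draw per arm
    leader = int(np.argmax(theta))
    if rng.random() < beta:
        return leader
    # Resample until a different arm looks best: that arm is the challenger.
    while True:
        theta = rng.beta(successes + 1, failures + 1)
        challenger = int(np.argmax(theta))
        if challenger != leader:
            return challenger

# Toy run with three arms and simulated Bernoulli feedback.
rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.6])
s = np.zeros(3)
f = np.zeros(3)
for _ in range(500):
    arm = top_two_thompson_step(s, f, beta=0.5, rng=rng)
    reward = float(rng.random() < true_means[arm])
    s[arm] += reward
    f[arm] += 1.0 - reward
```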
The second chapter studies the fundamental tension between the distinct priorities of non-adaptive and adaptive experiments: robustness to exogenous variation and efficient information gathering. We introduce a novel multi-armed bandit model that incorporates nonstationary exogenous factors, and propose deconfounded Thompson sampling, a more robust variant of the prominent Thompson sampling algorithm. We provide bounds on both within-experiment and post-experiment regret of deconfounded Thompson sampling, illustrating its resilience to exogenous variation and the delicate balance it strikes between exploration and exploitation. Our proofs leverage inverse propensity weights to analyze the evolution of the posterior distribution, a departure from established methods in the literature. Hinting that new understanding is indeed necessary, we demonstrate that a deconfounded variant of the popular upper confidence bound algorithm can fail completely.
|
672 |
Statistical modelling by neural networks - Fletcher, Lizelle, 30 June 2002 (has links)
In this thesis the two disciplines of Statistics and Artificial Neural Networks
are combined into an integrated study of a data set from a weather modification
experiment.
An extensive literature study on artificial neural network methodology has
revealed the strongly interdisciplinary nature of the research and the applications
in this field.
As artificial neural networks are becoming increasingly popular with data
analysts, statisticians are becoming more involved in the field. A recursive
algorithm is developed to optimize the number of hidden nodes in a feedforward
artificial neural network, demonstrating how existing statistical techniques
such as nonlinear regression and the likelihood-ratio test can be applied in
innovative ways to develop and refine neural network methodology.
This pruning algorithm is an original contribution to the field of artificial
neural network methodology that simplifies the process of architecture selection,
thereby reducing the number of training sessions needed to find
a model that fits the data adequately.
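A rough sketch of the kind of likelihood-ratio comparison of nested network architectures described here, treating a one-hidden-layer network as a nonlinear regression model with Gaussian errors. The use of scikit-learn's MLPRegressor, the significance level, and the degrees-of-freedom bookkeeping are illustrative assumptions, not the thesis's actual recursive algorithm.

```python
import numpy as np
from scipy import stats
from sklearn.neural_network import MLPRegressor

def gaussian_log_lik(y, y_hat):
    """Profile log-likelihood of a Gaussian regression model at the MLE of sigma^2."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)

def prune_hidden_nodes(X, y, max_nodes=10, alpha=0.05, seed=0):
    """Shrink the hidden layer while a likelihood-ratio test accepts the smaller model."""
    d = X.shape[1]
    h = max_nodes
    while h > 1:
        big = MLPRegressor(hidden_layer_sizes=(h,), max_iter=5000,
                           random_state=seed).fit(X, y)
        small = MLPRegressor(hidden_layer_sizes=(h - 1,), max_iter=5000,
                             random_state=seed).fit(X, y)
        lr = 2 * (gaussian_log_lik(y, big.predict(X)) -
                  gaussian_log_lik(y, small.predict(X)))
        # Removing one hidden node drops its d input weights, its bias and one output weight.
        df = d + 2
        if stats.chi2.sf(max(lr, 0.0), df) < alpha:
            return h   # the smaller network fits significantly worse; keep h nodes
        h -= 1
    return h
```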
In addition, a statistical model to classify weather modification data is developed
using both a feedforward multilayer perceptron artificial neural network
and discriminant analysis. The two models are compared and the effectiveness
of applying an artificial neural network model to a relatively small
data set is assessed.
The formulation of the problem, the approach that has been followed to
solve it and the novel modelling application all combine to make an original
contribution to the interdisciplinary fields of Statistics and Artificial Neural
Networks as well as to the discipline of meteorology. / Mathematical Sciences / D. Phil. (Statistics)
|
673 |
A new approach to pricing real options on swaps: a new solution technique and extension to the non-a.s. finite stopping realm - Chu, Uran, 07 June 2012 (has links)
This thesis consists of extensions of results on a perpetual American swaption problem.
Companies routinely plan to swap uncertain future benefits for uncertain future costs.
Our work explores the choice of timing policies associated
with the swap in the form of an optimal stopping problem. In this thesis, we have shown
that the condition given by Hu and Oksendal (1998) to guarantee that the optimal
stopping time is a.s. finite is in fact both necessary and sufficient. We have
extended the solution to the problem from a region in the parameter space where optimal
stopping times are a.s. finite to a region where optimal stopping times are non-a.s. finite,
and have successfully calculated the probability of never stopping in this latter region. We
have identified the joint distribution for stopping times and stopping locations in both the
a.s. and non-a.s. finite stopping cases. We have also derived an integral formula for
the inner product of a generalized hyperbolic distribution with the Cauchy distribution.
We have also applied our results to a back-end forestry harvesting model in which
stochastic costs are assumed to grow exponentially without bound over time. / Graduation date: 2013
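For orientation only, a generic statement of the kind of perpetual swap-timing problem studied here; the processes X_t (uncertain benefit) and Y_t (uncertain cost) and the discount rate rho are placeholders, and the thesis's exact dynamics, payoff and conditions may differ:

    V(x, y) \;=\; \sup_{\tau} \, \mathbb{E}^{x,y}\!\left[ e^{-\rho \tau} \left( X_{\tau} - Y_{\tau} \right) \right],

where the supremum is taken over stopping times tau; the condition referred to above separates the parameter region in which the optimizing tau is a.s. finite from the region in which stopping may never occur.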
|
675 |
The use of classification methods for gross error detection in process data - Gerber, Egardt, 12 1900 (has links)
Thesis (MScEng)-- Stellenbosch University, 2013. / ENGLISH ABSTRACT: All process measurements contain some element of error. Typically, a distinction is made between
random errors, with zero expected value, and gross errors with non-zero magnitude. Data Reconciliation
(DR) and Gross Error Detection (GED) comprise a collection of techniques designed to attenuate
measurement errors in process data in order to reduce the effect of the errors on subsequent use of the
data. DR proceeds by finding the optimum adjustments so that reconciled measurement data satisfy
imposed process constraints, such as material and energy balances. The DR solution is optimal under
the assumed statistical random error model, typically Gaussian with zero mean and known covariance.
The presence of outliers and gross errors in the measurements or imposed process constraints invalidates
the assumptions underlying DR, so that the DR solution may become biased. GED is required to detect,
identify and remove or otherwise compensate for the gross errors. Typically GED relies on formal
hypothesis testing of constraint residuals or measurement adjustment-based statistics derived from the
assumed random error statistical model.
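The description of DR and the measurement test above can be illustrated with a minimal linear-constraint sketch, assuming Gaussian errors with known covariance. The constraint matrix, covariance values and measurements below are invented for the example, and the thesis's process unit also involves nonlinear constraints, which this sketch does not cover.

```python
import numpy as np
from scipy import stats

def reconcile_and_test(y, A, Sigma, alpha=0.05):
    """Reconcile measurements y subject to A @ x = 0 and flag suspect measurements.

    Returns the reconciled estimates, the measurement-test statistics, and flags."""
    r = A @ y                                   # constraint residuals
    S = A @ Sigma @ A.T                         # covariance of the residuals
    adj = Sigma @ A.T @ np.linalg.solve(S, r)   # weighted least-squares adjustments
    x_hat = y - adj                             # reconciled values satisfy A @ x_hat = 0
    V_adj = Sigma @ A.T @ np.linalg.solve(S, A @ Sigma)   # covariance of the adjustments
    z = np.abs(adj) / np.sqrt(np.diag(V_adj))             # measurement-test statistics
    crit = stats.norm.ppf(1 - alpha / 2)
    return x_hat, z, z > crit

# Toy example: one mass balance y1 = y2 + y3, i.e. A = [1, -1, -1].
A = np.array([[1.0, -1.0, -1.0]])
Sigma = np.diag([0.1, 0.1, 0.1])
y = np.array([10.5, 6.0, 3.0])    # stream 1 carries a gross error
x_hat, z, flags = reconcile_and_test(y, A, Sigma)
```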
Classification methodologies are methods by which observations are classified as belonging to one of
several possible groups. For the GED problem, artificial neural networks (ANNs) have historically
been applied to classify a data set as either containing or not containing a gross error.
The hypothesis investigated in this thesis is that classification methodologies, specifically classification
trees (CT) and linear or quadratic classification functions (LCF, QCF), may provide an alternative to the
classical GED techniques.
This hypothesis is tested via the modelling of a simple steady-state process unit with associated
simulated process measurements. DR is performed on the simulated process measurements in order to
satisfy one linear and two nonlinear material conservation constraints. Selected features from the DR
procedure and process constraints are incorporated into two separate input vectors for classifier
construction. The performance of the classification methodologies developed on each input vector is
compared with the classical measurement test in order to address the posed hypothesis.
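As a sketch of the classifier comparison being proposed, the snippet below uses scikit-learn's decision tree and linear/quadratic discriminant analysis as stand-ins for the CT, LCF and QCF classifiers; the feature matrix X (e.g. DR adjustments and constraint residuals) and the gross-error labels y are placeholders, not the thesis's actual input vectors.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y, cv=5):
    """X: features derived from the DR procedure; y: 1 if the simulated data set
    contains a gross error, 0 otherwise. Returns mean cross-validated accuracy."""
    models = {
        "classification tree (CT)": DecisionTreeClassifier(max_depth=4),
        "linear classification function (LCF)": LinearDiscriminantAnalysis(),
        "quadratic classification function (QCF)": QuadraticDiscriminantAnalysis(),
    }
    return {name: cross_val_score(m, X, y, cv=cv).mean() for name, m in models.items()}
```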
General trends in the results are as follows:
- The power to detect and/or identify a gross error is a strong function of the gross error magnitude as well as location for all the classification methodologies as well as the measurement test.
- For some locations there exist large differences between the power to detect a gross error and the
power to identify it correctly. This is consistent over all the classifiers and their associated
measurement tests, and indicates significant smearing of gross errors.
- In general, the classification methodologies have higher power for equivalent type I error than
the measurement test.
- The measurement test is superior for small magnitude gross errors, and for specific locations,
depending on which classification methodology it is compared with.
There is significant scope to extend the work to more complex processes and constraints, including
dynamic processes with multiple gross errors in the system. Further investigation into the optimal
selection of input vector elements for the classification methodologies is also required. / AFRIKAANSE OPSOMMING: All process measurements contain some degree of measurement error. The error component of a process measurement is often expressed as consisting of a random error with zero expected value, together with a gross error of significant magnitude. Data Reconciliation (DR) and Gross Error Detection (GED) are a collection of techniques aimed at reducing the effect of such errors in process data on the subsequent use of the data. DR is performed by making optimal adjustments to the original process measurements so that the adjusted measurements satisfy certain process models, typically mass and energy balances. The DR solution is optimal provided the statistical assumptions regarding the random error component in the process data are valid. It is typically assumed that the error component is normally distributed with zero expected value and a given covariance matrix.
When gross errors are present in the data, the DR results can be biased. GED is therefore needed to detect and identify gross errors. GED usually relies on the statistical properties of the measurement adjustments made by the DR procedure, or of the residuals of the model equations, to test formal hypotheses about the presence of gross errors.
Classification techniques are used to determine the class membership of observations. For the GED problem, artificial neural networks have historically been applied to solve the detection and identification problems. The hypothesis of this thesis is that classification techniques, specifically classification trees (CT) and linear as well as quadratic classification functions (LCF and QCF), can be applied successfully to solve the GED problem.
The hypothesis is investigated by means of a simulation of a simple steady-state process unit that is subject to one linear and two nonlinear equations. Artificial process measurements are generated using random numbers so that the error component of each measurement is known. DR is applied to the artificial data, and the DR results are used to construct two different input vectors for the classification techniques. The performance of the classification methods is compared with the measurement test of classical GED in order to address the stated hypothesis. The underlying trends in the results are as follows:
- The ability to detect and identify a gross error is strongly dependent on the magnitude as well as the location of the error, for all the classification techniques as well as the measurement test.
- For certain locations of the gross error there is a large difference between the ability to detect the error and the ability to identify it, which points to smearing of the error. All the classification techniques as well as the measurement test exhibit this property.
- In general, the classification methods are more successful than the measurement test.
- The measurement test is more successful for relatively small gross errors, as well as for certain locations of the gross error, depending on the classification technique in question.
There are several ways in which the scope of this investigation could be extended. More complex, non-steady-state processes with strongly nonlinear process models and multiple gross errors could be investigated. There is also the possibility of improving the performance of the classification methods through the appropriate choice of input vector elements.
|
676 |
Exploring family resilience in urban Shona Christian families in Zimbabwe - Muchesa, Oleander, 02 1900 (has links)
This study addresses the factors that assist families towards adaptation during adversity and contribute to family resilience. The study aimed to identify, describe and explore the family resilience factors that enable urban Shona Christian families to withstand life crises in the midst of a society facing economic hardship and to bounce back from these challenges. The study also sought to reach out to families facing challenges who are struggling to adapt and recover. The Resiliency Model of Family Stress, Adjustment and Adaptation was used as the theoretical framework for this study (McCubbin, Thompson & McCubbin, 2001).
A quantitative method was employed. A total of 106 participants, including parents and adolescents from 53 families, independently completed six questionnaires, including a biographical questionnaire. The questionnaires measured family adaptation and aspects of family functioning in accordance with the Resiliency Model of Family Stress, Adjustment and Adaptation. The data collected were subjected to correlation and regression analysis, computed in SPSS, to identify the family resilience factors that assisted families in family adaptation.
The results showed that family adaptation was fostered, first, by the family’s internal strengths: affirming and less incendiary communication, passive appraisal, and control over life events and hardships; and, secondly, by the family’s external strengths: seeking spiritual support, social support from within the community, and mobilising the family to acquire community resources and accept help from others. These findings could be used to develop interventions that promote family resilience and establish the potential of family members within a family when facing adversity. / Psychology / M.A. (Social Science)
|
677 |
A brief introduction to basic multivariate economic statistical process control - Mudavanhu, Precious, 12 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2012. / ENGLISH ABSTRACT: Statistical process control (SPC) plays a very important role in monitoring and improving
industrial processes to ensure that products produced or shipped to the customer meet the
required specifications. The main tool that is used in SPC is the statistical control chart. The
traditional way of statistical control chart design assumed that a process is described by a
single quality characteristic. However, according to Montgomery and Klatt (1972) industrial
processes and products can have more than one quality characteristic and their joint effect
describes product quality. Process monitoring in which several related variables are of
interest is referred to as multivariate statistical process control (MSPC). The most vital and
commonly used tool in MSPC, as in SPC, is the statistical control chart. The
design of a control chart requires the user to select three parameters: the sample size,
n, the sampling interval, h, and the control limits, k. Several authors have developed control charts
based on more than one quality characteristic, among them Hotelling (1947), who
pioneered the use of multivariate process control techniques through the development of the
T²-control chart, well known as Hotelling's T²-control chart.
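As a small illustration of the chart statistic under discussion, here is a sketch of the Hotelling T² statistic for one subgroup against known (Phase II) in-control parameters; the in-control mean, covariance, subgroup size and false-alarm rate below are invented for the example.

```python
import numpy as np
from scipy import stats

def hotelling_t2(sample, mu0, Sigma0):
    """T^2 for one subgroup of size n against known in-control mean and covariance."""
    n = sample.shape[0]
    diff = sample.mean(axis=0) - mu0
    return n * diff @ np.linalg.solve(Sigma0, diff)

# With known parameters, T^2 is compared against the chi-square quantile chi2_{p, 1 - alpha}.
p, alpha = 2, 0.005
ucl = stats.chi2.ppf(1 - alpha, df=p)

rng = np.random.default_rng(1)
mu0 = np.zeros(p)
Sigma0 = np.array([[1.0, 0.4], [0.4, 1.0]])
subgroup = rng.multivariate_normal(mu0 + np.array([0.0, 1.5]), Sigma0, size=5)
signal = hotelling_t2(subgroup, mu0, Sigma0) > ucl   # True flags an out-of-control signal
```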
Since the introduction of the control chart technique, the most common and widely used
method of control chart design has been the statistical design. However, according to Montgomery
(2005), the design of a control chart also has economic implications. Costs are incurred
during the design and operation of a control chart: costs of sampling and testing, costs
associated with investigating an out-of-control signal and possibly correcting any
assignable cause found, costs associated with the production of nonconforming products, and so on.
This paper gives an overview of the different methods and techniques that have been
employed to develop economic statistical models for MSPC.
The first multivariate economic model presented in this paper is the economic design of
Hotelling's T²-control chart to maintain current control of a process, developed by
Montgomery and Klatt (1972). This is followed by the work done by Kapur and Chao (1996),
in which the concept of creating a specification region for the multiple quality characteristics,
together with the use of a multivariate quality loss function, is implemented to minimize total
loss to both the producer and the customer. Another approach, by Chou et al. (2002), is also
presented, in which a procedure is developed that simultaneously monitors the process mean
and covariance matrix through the use of a quality loss function. The procedure is based on the test statistic -2 ln L, and the cost model is based on the ideas of Montgomery and Klatt (1972) as well
as Kapur and Chao (1996). One example of the use of the variable sample size
technique in the economic and economic statistical design of the control chart will also be
presented. Specifically, an economic and economic statistical design of the T²-control chart
with two adaptive sample sizes (Faraz et al., 2010) will be presented. Faraz et al. (2010)
developed a cost model of a variable sample size T²-control chart for the economic and
economic statistical design using Lorenzen and Vance's (1986) model.
There are several other approaches to the multivariate economic statistical process control
(MESPC) problem, but in this project the focus is on cases based on the Phase II stage
of the process, where the mean vector and the covariance matrix have been fairly well
established and can be taken as known, but both are subject to assignable causes. This latter
aspect is often ignored by researchers. Nevertheless, the article by Faraz et al. (2010) is
included to give more insight into how more sophisticated approaches may fit in with
MESPC, even if only the mean vector may be subject to an assignable cause.
Keywords: control chart; statistical process control; multivariate statistical process control;
multivariate economic statistical process control; multivariate control chart; loss function. / AFRIKAANSE OPSOMMING: Statistical process control (SPC) plays a very important role in monitoring and improving industrial processes to ensure that products that are manufactured or shipped to customers do meet the required specifications. The principal technique used in SPC is the statistical control chart. The traditional way in which statistical control charts were designed assumes that a process is described by only a single quality variable. Montgomery and Klatt (1972), however, argue that industrial processes and products can have more than one quality characteristic and that these jointly describe the quality of a product. Process monitoring in which several related variables may be of interest is known as multivariate statistical process control (MSPC). The most important and commonly used technique in MSPC is likewise the statistical control chart, as is the case in SPC. The design of a control chart requires the user to choose three parameters: the sample size, n, the sampling interval, h, and the control limits, k. Several authors have developed control charts based on more than one quality characteristic, among them Hotelling, who pioneered the use of multivariate process control techniques with the development of the T²-control chart, commonly known as Hotelling's T²-control chart (Hotelling, 1947).
Since the introduction of the control chart technique, its statistical design has been the most common approach and it has been used in that form. Nevertheless, according to Montgomery and Klatt (1972) and Montgomery (2005), the design of the control chart also has economic implications. There are costs involved in the design and operation of the control chart: costs of sampling and testing, costs associated with investigating an out-of-control signal and possible correction if an assignable cause of such a signal is found, costs associated with the production of nonconforming products, and so on. In the univariate case, the treatment of these economic aspects has already been investigated in depth. This assignment gives an overview of some of the different methods or techniques that have been established for different economic statistical models for MSPC. In particular, attention is given to the cases in which the mean vector as well as the covariance matrix is subject to potential shifts, in contrast to the tendency to regard only the mean vector, in isolation, as being subject to possible shifts.
|
678 |
Applications of Box-Jenkins methods of time series analysis to the reconstruction of drought from tree rings - Meko, David Michael, January 1981 (has links)
The lagged responses of tree-ring indices to annual climatic or hydrologic series are examined in this study. The objectives are to develop methods to analyze the lagged responses of individual tree-ring indices, and to improve upon conventional methods of adjusting for the lag in response in regression models to reconstruct annual climatic or hydrologic series. The proposed methods are described and applied to test data from Oregon and Southern California.
Transfer-function modeling is used to estimate the dependence of the current ring on past years' climate and to select negative lags for reconstruction models. A linear system is assumed; the input is an annual climatic variable, and the output is a tree-ring index. The estimated impulse response function weights the importance of past and current years' climate on the current year's ring. The identified transfer function model indicates how many past years' rings are necessary to account for the effects of past years' climate.
Autoregressive-moving-average (ARMA) modeling is used to screen out climatically insensitive tree-ring indices, and to estimate the lag in response to climate unmasked from the effects of autocorrelation in the tree-ring and climatic series. The climatic and tree-ring series are each prewhitened by ARMA models, and crosscorrelations between the ARMA residuals are estimated. The absence of significant crosscorrelations implies low sensitivity. Significant crosscorrelations at lags other than zero indicate lag in response. This analysis can also aid in selecting positive lags for reconstruction models.
An alternative reconstruction method that makes use of the ARMA residuals is also proposed. The basic concept is that random (uncorrelated in time) shocks of climate induce annual random shocks of tree growth, with autocorrelation in the tree-ring index resulting from inertia in the system. The steps in the method are (1) fit ARMA models to the tree-ring index and the climatic variable, (2) regress the ARMA residuals of the climatic variable on the ARMA residuals of the tree-ring index, (3) substitute the long-term prewhitened tree-ring index into the regression equation to reconstruct the prewhitened climatic variable, and (4) build autocorrelation back into the reconstruction with the ARMA model originally fit to the climatic variable.
The trial applications on test data from Oregon and Southern California showed that the lagged response of tree rings to climate varies greatly from site to site. Sensitive tree-ring series commonly depend significantly only on one past year's climate (regional rainfall index). Other series depend on three or more past years' climate. Comparison of reconstructions by conventional lagging of predictors with reconstructions by the random-shock method indicates that while the lagged models may reconstruct the amplitude of severe, long-lasting droughts better than the random-shock model, the random-shock model generally has a flatter frequency response. The random-shock model may therefore be more appropriate where the persistence structure is of prime interest. For the most sensitive series with small lag in response, the choice of reconstruction method makes little difference in the properties of the reconstruction. The greatest divergence is for series whose impulse response weights from the transfer function analysis do not die off rapidly with time.
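A compact sketch of the four-step random-shock reconstruction described above, substituting simple least-squares AR(1) fits for full Box-Jenkins ARMA identification; the AR(1) simplification, the variable names and any data fed to it are assumptions made for illustration, not the thesis's models.

```python
import numpy as np

def fit_ar1(x):
    """Least-squares AR(1) fit: x_t = c + phi * x_{t-1} + e_t. Returns (c, phi, residuals)."""
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    c, phi = np.linalg.lstsq(X, x[1:], rcond=None)[0]
    resid = x[1:] - (c + phi * x[:-1])
    return c, phi, resid

def random_shock_reconstruction(climate_cal, tree_cal, tree_long):
    # 1. Prewhiten both calibration series (AR(1) here, ARMA in the thesis).
    c_c, phi_c, e_climate = fit_ar1(climate_cal)
    c_t, phi_t, _ = fit_ar1(tree_cal)
    e_tree_cal = tree_cal[1:] - (c_t + phi_t * tree_cal[:-1])
    # 2. Regress climate shocks on tree-ring shocks over the calibration period.
    b1, b0 = np.polyfit(e_tree_cal, e_climate, 1)
    # 3. Prewhiten the long tree-ring index and predict the climate shocks.
    e_tree_long = tree_long[1:] - (c_t + phi_t * tree_long[:-1])
    e_hat = b0 + b1 * e_tree_long
    # 4. Re-colour: rebuild autocorrelation with the climate AR(1) model.
    recon = np.empty(len(e_hat))
    prev = climate_cal.mean()
    for t, shock in enumerate(e_hat):
        prev = c_c + phi_c * prev + shock
        recon[t] = prev
    return recon
```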
|
679 |
The modifiable areal unit phenomenon: an investigation into the scale effect using UK census data - Manley, David J., January 2006 (has links)
The Modifiable Areal Unit Phenomenon (MAUP) has traditionally been regarded as a problem in the analysis of spatial data organised in areal units. However, the approach adopted here is that the MAUP provides an opportunity to gain information about the data under investigation. Crucially, attempts to remove the MAUP from spatial data are regarded as attempts to remove the geography. Therefore, the work seeks to provide an insight into the causes of, and information behind, the MAUP. The data used are from the 1991 Census of Great Britain, chosen over 2001 data due to the availability of individual-level data, which are of key importance to the methods employed. The methods seek to provide evidence of the magnitude of the MAUP, and more specifically the scale effect, in the GB Census. This evidence is built on by using correlation analysis to demonstrate the statistical significance of the MAUP. Having established the relevance of the MAUP in the context of current geographical research, the factors that contribute to the incidence of the MAUP are considered, and it is noted that a wide range of influences are important. These include the population size and density of an area, along with the proportion of a variable. This discussion also recognises the importance of homogeneity as an influential factor, something that is referenced throughout the work. Finally, a search is made for spatial processes. This uses spatial autocorrelation and multilevel modelling to investigate the impact that spatial processes have on the scale effect in a range of SAR Districts, such as Glasgow, Reigate and Huntingdonshire. The research is brought together not to solve the MAUP but to provide an insight into the factors that cause the MAUP, and to demonstrate the usefulness of the MAUP as a concept rather than a problem.
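The scale effect itself can be demonstrated with synthetic individual-level data: aggregating the same individuals into progressively larger areal units typically strengthens the correlation between two variables. The simulation below is purely illustrative and uses invented data, not the 1991 Census data analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_units, per_unit = 512, 40            # 512 small zones, 40 individuals each

# Individual-level data: a weak relationship plus an area-level component,
# so aggregation filters out individual noise and inflates the correlation.
area_effect = rng.normal(size=n_units).repeat(per_unit)
x = area_effect + rng.normal(scale=3.0, size=n_units * per_unit)
y = 0.5 * area_effect + rng.normal(scale=3.0, size=n_units * per_unit)

def corr_at_scale(x, y, zones_per_unit):
    """Aggregate consecutive zones into larger units and correlate the unit means."""
    groups = n_units * per_unit // (zones_per_unit * per_unit)
    xm = x.reshape(groups, -1).mean(axis=1)
    ym = y.reshape(groups, -1).mean(axis=1)
    return np.corrcoef(xm, ym)[0, 1]

print("individual level:", np.corrcoef(x, y)[0, 1])
for k in (1, 4, 16, 64):               # coarser and coarser areal units
    print(f"{k} zones per unit:", corr_at_scale(x, y, k))
```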
|
680 |
Comparing outcome measures derived from four research designs incorporating the retrospective pretest - Nimon, Kim F., 08 1900 (has links)
Over the last five decades, the retrospective pretest has been used in behavioral science research to battle key threats to the internal validity of posttest-only control-group and pretest-posttest-only designs. The purpose of this study was to compare outcome measures resulting from four research design implementations incorporating the retrospective pretest: (a) pre-post-then, (b) pre-post/then, (c) post-then, and (d) post/then. The study analyzed the interaction effect of pretest sensitization and post-intervention survey order on two subjective measures: (a) a control measure not related to the intervention and (b) an experimental measure consistent with the intervention. The validity of the subjective measurement outcomes was assessed by correlating them with objective performance measurement outcomes. A Situational Leadership® II (SLII) training workshop served as the intervention. The Work Involvement Scale of the self version of the Survey of Management Practices served as the subjective control measure. The Clarification of Goals and Objectives Scale of the self version of the Survey of Management Practices served as the subjective experimental measure. The Effectiveness Scale of the self version of the Leader Behavior Analysis II® served as the objective performance measure. This study detected differences in measurement outcomes from SLII participant responses to an experimental and a control measure. In the case of the experimental measure, differences were found in the magnitude and direction of the validity coefficients. In the case of the control measure, differences were found in the magnitude of the treatment effect between groups. These differences indicate that, for this study, the pre-post-then design produced the most valid results for the experimental measure. For the control measure in this study, the pre-post/then design produced the most valid results. Across both measures, the post/then design produced the least valid results.
|