A number of randomization statistical procedures have been developed to analyze the results of single-case multiple-baseline intervention investigations. In a previous simulation study, comparisons of these procedures revealed distinct differences in their ability to detect immediate abrupt intervention effects of moderate size, with some procedures (typically those with randomized intervention start points) exhibiting power that was both respectable and superior to that of other procedures (typically those with single fixed intervention start points). In Investigation 1 of the present follow-up simulation study, we found that when the same randomization-test procedures were applied to either delayed abrupt or immediate gradual intervention effects: (1) the power of every procedure was severely diminished; and (2) in contrast to the previous study's results, the single fixed intervention start-point procedures generally outperformed those with randomized intervention start points. In Investigation 2 we further demonstrated that if researchers can successfully anticipate the specific alternative effect types, they can formulate adjusted versions of the original randomization-test procedures that recapture a substantial proportion of the lost power.
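The abstract does not spell out the procedures themselves, so the following is only a minimal illustrative sketch of one member of the family it refers to: a randomization test for a multiple-baseline design in which each case's intervention start point is randomly selected from a set of admissible start points, with a one-tailed test of a mean baseline-to-intervention shift. The function names, the choice of test statistic, and the simulated "immediate abrupt" effect are assumptions for illustration, not a reproduction of the authors' specific procedures.

```python
import itertools
import numpy as np

def phase_effect(series, start):
    """Mean difference between the intervention (B) phase, beginning at
    index `start`, and the baseline (A) phase that precedes it."""
    return series[start:].mean() - series[:start].mean()

def multiple_baseline_randomization_test(cases, admissible_starts, actual_starts):
    """One-tailed randomization test for a multiple-baseline design in which
    each case's intervention start point was randomly selected from its own
    set of admissible start points (hypothetical helper, for illustration).

    cases             : list of 1-D numpy arrays (one series per case)
    admissible_starts : list of candidate start-index lists, one per case
    actual_starts     : the start index actually (randomly) assigned to each case
    """
    # Observed test statistic: average baseline-to-intervention mean shift.
    observed = np.mean([phase_effect(y, s) for y, s in zip(cases, actual_starts)])

    # Reference distribution: the same statistic under every combination of
    # admissible start points (the full randomization distribution).
    ref = np.array([
        np.mean([phase_effect(y, s) for y, s in zip(cases, combo)])
        for combo in itertools.product(*admissible_starts)
    ])

    # One-tailed p-value: proportion of the reference distribution at least
    # as large as the observed statistic (an increase is assumed here).
    return observed, np.mean(ref >= observed)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_points, shift = 20, 2.0                     # assumed series length and effect size
    admissible = [[5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    actual = [rng.choice(s) for s in admissible]  # randomized intervention start points

    cases = []
    for start in actual:
        y = rng.normal(0.0, 1.0, n_points)
        y[start:] += shift                        # simulated immediate abrupt effect
        cases.append(y)

    stat, p = multiple_baseline_randomization_test(cases, admissible, actual)
    print(f"observed statistic = {stat:.2f}, randomization p = {p:.3f}")
```

Under this kind of sketch, a delayed abrupt or gradual effect would leave the early intervention-phase observations near baseline levels, shrinking the observed statistic relative to the reference distribution; that is the intuition behind the power losses the abstract reports.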
Identifier | oai:union.ndltd.org:arizona.edu/oai:arizona.openrepository.com:10150/625957
Date | 08 1900
Creators | Levin, Joel R., Ferron, John M., Gafurov, Boris S. |
Contributors | University of Arizona |
Publisher | PERGAMON-ELSEVIER SCIENCE LTD |
Source Sets | University of Arizona |
Language | English |
Detected Language | English |
Type | Article |
Rights | © 2017 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved. |
Relation | http://linkinghub.elsevier.com/retrieve/pii/S0022440517300171 |