This study investigated the results of administering two intelligence tests, the Wechsler Intelligence Scale for Children - Third Edition (WISC-III) and the Stanford-Binet Intelligence Scale - Fourth Edition (SB-IV), to each of 33 Australian children with an intellectual disability. The experiment used a counterbalanced design in which the tests, the order of presentation of the tests, the gender of the subjects, and the gender of the test administrators were factors. The 33 volunteer subjects, 14 males and 19 females, aged between 6 and 16 years and known to have an intellectual disability, were allocated randomly for the assessments. The test administrators were students in the Clinical and Organisational Masters Program at the University of South Australia. It was hypothesised that there would be a difference between the IQs on the two tests; that on average the WISC-III FSIQ would be lower than the SB-IV TC; and that there would be a positive relationship between the WISC-III FSIQ and the SB-IV TC. Statistical analysis of the data found the two tests' overall scores to be significantly different, while the counterbalanced factors and their interactions did not reach significance. A significant 4-point difference was found between the mean WISC-III FSIQs and SB-IV TCs. A Pearson product-moment correlation revealed a strong positive correlation (r = .83) between the WISC-III FSIQ and the SB-IV TC. This finding supported the concurrent validity of the tests in this special population sample. It was suggested that while the two tests measured similar theoretical constructs of intelligence, they were not identical and therefore their results were not interchangeable. Variable patterns of results were found among subtest scores from the two tests, and the implications for field work were discussed. The differences between raw WISC-III FSIQ and SB-IV TC scores were calculated, and a z transformation was applied to the difference scores.
The resulting difference distribution and cumulative percentages were then suggested as a reference table for practitioners. Studies that examined clerical errors in scoring intelligence test protocols were reviewed. The manually scored test protocols in this study were rescored using a computer scoring programme, and 27 errors were found and corrected. From the results of the experiment several suggestions were made: that agencies using large numbers of intelligence tests, or which test the same child over time, should decide to use the same test, wherever possible, for comparison; that all intelligence test protocols be computer scored as a checking mechanism; and that all professional staff should be aware of the possible differences which can occur between intelligence scores as a result of norming and other differences. Thesis (MSocSc)--University of South Australia, 1999.
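The statistical steps described in the abstract (difference scores between paired test results, a z transformation of those differences, cumulative percentages for a practitioner reference table, and a Pearson product-moment correlation) can be sketched as follows. This is a minimal illustration only: the paired scores below are hypothetical placeholder data, not the thesis's actual 33-subject dataset, and the variable names are assumptions.

```python
import math
import statistics

# Hypothetical paired scores (placeholder data; the thesis's actual
# WISC-III FSIQ and SB-IV TC values are not reproduced here).
wisc_fsiq = [62, 58, 70, 65, 55, 60, 68, 63]
sb_tc     = [66, 60, 75, 68, 58, 65, 71, 69]

# Difference scores: WISC-III FSIQ minus SB-IV TC for each child.
diffs = [w - s for w, s in zip(wisc_fsiq, sb_tc)]

# z-transform the difference scores (mean 0, SD 1).
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)
z_scores = [(d - mean_d) / sd_d for d in diffs]

# Cumulative percentages over the sorted differences: the kind of
# distribution suggested as a reference table for practitioners.
sorted_d = sorted(diffs)
n = len(sorted_d)
cum_pct = [(i + 1) / n * 100 for i in range(n)]

# Pearson product-moment correlation between the two sets of scores.
mw = statistics.mean(wisc_fsiq)
ms = statistics.mean(sb_tc)
num = sum((w - mw) * (s - ms) for w, s in zip(wisc_fsiq, sb_tc))
den = math.sqrt(sum((w - mw) ** 2 for w in wisc_fsiq)
                * sum((s - ms) ** 2 for s in sb_tc))
r = num / den

for d, p in zip(sorted_d, cum_pct):
    print(f"difference {d:+d}: cumulative {p:.1f}%")
print(f"Pearson r = {r:.2f}")
```

With real data, the cumulative-percentage column tells a practitioner how unusual a given WISC-III/SB-IV discrepancy is relative to the sample's difference distribution.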
Identifer | oai:union.ndltd.org:ADTP/173438 |
Date | January 1999 |
Creators | Hansen, Daryl P |
Source Sets | Australasian Digital Theses Program |
Language | English |
Detected Language | English |
Rights | © 1999 Daryl P Hansen |