In most mature welfare states, policy evaluations are sponsored by the very organisations that designed and implemented the intervention in the first place. Research on clinical trials has consistently shown that this type of arrangement creates a moral hazard and may lead to overestimates of the effect of the treatment. Yet no one has so far investigated whether social interventions are subject to such 'confirmation bias'. The objective of this study was twofold. Firstly, it assessed the scientific credibility of a sample of government-sponsored pilot evaluations. Three common research prescriptions were considered: (a) the proportionality of timescales; (b) the representativeness of pilot sites; and (c) the completeness of outcome reporting. Secondly, it examined whether the known commitment of the government to a reform was associated with less credible evaluations.

These questions were answered using a 'meta-research' methodology, which departs from the traditional interviews and surveys of agents that have dominated the literature so far. I developed the new PILOT dataset for that specific purpose. PILOT includes data systematically collected from over 230 pilot and experimental evaluations spanning 13 years of government-commissioned research in the UK (1997-2010) and four government departments (Department for Work and Pensions, Department for Education, Home Office and Ministry of Justice). PILOT was instrumental in (a) modelling pilot duration using event history analysis; (b) modelling pilot site selection using logistic regression; and (c) systematically selecting six evaluation reports for qualitative content analysis. A total of 17 interviews with policy researchers were also conducted to inform the case study and the overall research design.

The results show little overt evidence of crude bias or 'bad' design. On average, government-sponsored pilots (a) were based on timescales proportional to the scope of the research; (b) were not primarily designed with the aim of warranting representativeness; and (c) were rather comprehensively analysed in evaluation reports. In addition, the results indicate that the known commitment of the government to a reform had no significant effect on the selection of pilot sites or on the reporting of outcomes. However, it was associated with significantly shorter pilots.

In conclusion, there is some evidence that the known commitment of a government to a reform is associated with less credible evaluations; however, this effect is only tangible in the earlier stages of the research cycle. In this respect, sponsorship bias would appear to be more limited than in the context of industry-sponsored clinical trials. Policy recommendations are provided, as this project was severely hindered by important 'black box' issues and by the poor quality of evaluation reports.
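Purely as an illustration of the modelling strategy summarised above (event history analysis for pilot duration, logistic regression for pilot site selection), the minimal Python sketch below shows how such models are typically fitted with standard statistical tooling. The toy data and all variable names (duration_months, observed, govt_commitment, deprivation_score, selected) are hypothetical assumptions for the sake of the example and are not drawn from the actual PILOT dataset or its coding scheme, which are described in the thesis itself.

    # Minimal sketch of the two PILOT-style models named in the abstract.
    # Toy data and column names are hypothetical illustrations only.
    import pandas as pd
    from lifelines import CoxPHFitter   # event history (survival) analysis
    import statsmodels.api as sm        # logistic regression

    # (a) Pilot duration: event history analysis (Cox proportional hazards).
    # One row per pilot; 'observed' = 1 if the pilot ended within the study window.
    pilots = pd.DataFrame({
        "duration_months": [12, 24, 18, 36, 9, 30, 15, 28],
        "observed":        [1, 1, 1, 0, 1, 1, 1, 0],
        "govt_commitment": [1, 0, 0, 1, 1, 0, 1, 0],  # known commitment to the reform
    })
    cph = CoxPHFitter()
    cph.fit(pilots, duration_col="duration_months", event_col="observed")
    cph.print_summary()  # a hazard ratio above 1 for govt_commitment would indicate shorter pilots

    # (b) Pilot site selection: logistic regression.
    # One row per candidate area; 'selected' = 1 if the area was chosen as a pilot site.
    areas = pd.DataFrame({
        "selected":          [1, 0, 1, 0, 1, 0, 0, 1],
        "deprivation_score": [0.8, 0.3, 0.4, 0.7, 0.9, 0.2, 0.6, 0.5],
    })
    X = sm.add_constant(areas[["deprivation_score"]])
    logit_res = sm.Logit(areas["selected"], X).fit(disp=False)
    print(logit_res.summary())

In this framing, the duration model treats the end of a pilot as the event of interest, and the site-selection model treats selection as a binary outcome; any substantive covariates and findings are, of course, those reported in the thesis, not in this sketch.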
Identifier | oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:634508
Date | January 2014
Creators | Vaganay, Arnaud
Publisher | London School of Economics and Political Science (University of London)
Source Sets | Ethos UK
Detected Language | English
Type | Electronic Thesis or Dissertation
Source | http://etheses.lse.ac.uk/1040/