Over the last few years policy-makers in Education have looked to Medicine to see how ‘what works’ is best evaluated. A press release on 3 May 2013 by Michael Gove and the National College for Teaching and Leadership pronounced that ‘New randomised controlled trials will drive forward evidence-based research’ (https://www.gov.uk/government/news/new-randomised-controlled-trials-will-drive-forward-evidence-based-research). Since then the government has been investing large sums of money in funding randomised controlled trials (RCTs), principally but not exclusively through the evaluation work of the Education Endowment Foundation (EEF) (https://educationendowmentfoundation.org.uk/evaluation/about-eef-evaluation/). Organisations like the EEF are ‘seeking proposals for evidence-based, scalable ideas’ from schools or academics (https://educationendowmentfoundation.org.uk/apply-for-funding/) and then designing and running national RCTs, evaluated by commissioned panels. Another example is the Closing the Gap: Test and Learn trials organised through CfBT, Curee and the University of Durham (for further information see http://www.curee.co.uk/CTG).
Are these trials ethical, and in what specific ways should ethical thinking inform reflection on their use?
It is instructive to look at the guides recommended for running such trials. The handbook recommended on the EEF website, by Torgerson and Torgerson (no date given), does not include any reference to ‘ethics’ or ‘respect’. Connolly (2015) points to an established textbook by Cohen, Manion and Morrison, which presents an argument that randomising treats participants as ‘manipulable, controllable and inanimate’ (p. 314). A guide produced by NFER (Hutchison and Styles, 2010) recognises that such trials can be argued to be ‘unethical’, because an intervention is withheld from the control group, or to be ‘perceived as unethical’, which in turn affects participation rates. The authors accept that the second issue is an important one to consider, and that a full explanation will need to be provided to the participating population, but argue that ‘if we do not know whether the intervention works or is even detrimental to outcomes, we are in an ethically neutral scenario when randomising. If we know it works (for example, through the use of previous randomised trials and a meta-analysis) then no evaluation is necessary, and we should not be randomising.’ (p. 5).
RCTs are currently coming under scrutiny. Paul Connolly from Queen’s University Belfast (2015) presented a keynote speech at the BERA annual conference in Belfast (now available as a recorded lecture and as an article in a special issue of Research Intelligence), and Mark Boylan from Sheffield Hallam University led a discussion, ‘Evaluating the impact of professional learning: policy, practice and tensions in the use and misuse of randomised controlled trials’, for the UCET CPD committee in October 2015.
These discussions have focused mainly on misconceptions about RCTs and on educating audiences in their rationale, while centring less on the ethics of these trials, despite this being one of the main arguments levelled against them. At one level, any ethical appraisal of RCTs should weigh the costs and benefits to students/pupils. Boylan noted that utilitarian arguments, in which individuals are deprived of an intervention for the benefit of the greater good, are often used to justify the control aspect of these designs. But what does it feel like to be in these control groups? How can these groups benefit from anything that is presumably considered worth trialling? Practitioner researchers are often faced with similar decisions and find ways to try one new practice before another, without necessarily withholding any. They think hard about equity in their teaching and their responsibility to try what they believe (and are enquiring into) best supports children’s learning. Some of these larger trials require interventions to be withheld more substantially (perhaps from a whole school or whole classes) until that cycle’s data have been collected and analysed. One issue with this is that these large trials often find negligible positive effect sizes when reported for whole populations. This was true of one trial I was involved in supporting in schools, where I supported a school in the experimental group. Luckily, the school collected rich qualitative data to help make sense of the class-size effects fed back to them from the main study’s quantitative data sets. This allowed the teachers to isolate where the main positive effects were found and the factors affecting them. The large-scale study was of little use to them (or to the control schools) without this rich, fine-grained analysis. Connolly (2015) reports that qualitative data are increasingly being included in RCTs (34% of the 746 trials in his systematic survey of RCTs in the UK since 1980) and that multivariable sub-group analysis is needed (found in 46% of the studies). This raises the question of whether these studies are designed in ways that benefit individuals (pupils, classes and schools).
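To illustrate the point about whole-population effect sizes, here is a minimal sketch, using entirely hypothetical scores (not data from the trial described above), of how an aggregate effect size close to zero can mask sub-groups for which an intervention appears to work well or badly; this is why fine-grained, sub-group analysis mattered so much to the teachers involved.

import statistics

def cohens_d(treatment, control):
    # Cohen's d: difference in means divided by the pooled standard deviation.
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical test scores for two sub-groups (e.g. two schools or classes).
group_a_treat = [58, 61, 63, 65, 67, 70]   # sub-group A appears to gain
group_a_ctrl  = [50, 52, 55, 57, 59, 60]
group_b_treat = [48, 50, 52, 54, 55, 57]   # sub-group B appears to lose out
group_b_ctrl  = [55, 58, 60, 62, 64, 66]

print("Sub-group A effect:", round(cohens_d(group_a_treat, group_a_ctrl), 2))
print("Sub-group B effect:", round(cohens_d(group_b_treat, group_b_ctrl), 2))
print("Whole-population effect:",
      round(cohens_d(group_a_treat + group_b_treat, group_a_ctrl + group_b_ctrl), 2))

In this deliberately exaggerated example the two sub-groups show sizeable effects in opposite directions, yet the whole-population figure is close to zero; reported only at the population level, the trial would tell neither sub-group anything useful.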
At another level, this issue relates to whose studies these are. Whose data are the data collected in these trials? Are these studies being carried out by the teaching profession or on the teaching profession? Do teachers feel empowered by the data collected in these studies? Who gives consent for them to take place: class teachers, headteachers, governing bodies, parent communities? As the studies are randomised, schools and classes are selected according to the designers’ criteria. Doesn’t the utilitarian argument then create pressure to consent? Is there fully informed consent by all stakeholders? Perhaps this is where we come back to needing a fuller education about RCTs, as at the BERA and UCET events of the last two months. But are teachers, parents, governors and students/pupils included in this conversation about the place of RCTs in improving the practices of the teaching profession? Are they able to offer up their data voluntarily? Do they benefit appropriately (thinking here about the dissemination of results in meaningful ways)?
Cohen, L., Manion, L. and Morrison, K. (2011) Research Methods in Education. London: Routledge.
Connolly, P. (2015) The Trials of Evidence-Based Practice in Education. https://www.youtube.com/watch?v=svuMXlAsaCE and https://www.bera.ac.uk/researchers-resources/publications/research-intelligence-conference-special (p. 6).
Hutchison, D. and Styles, B. (2010) A Guide to Running Randomised Controlled Trials for Educational Researchers. Slough: NFER.
Torgerson, C.J. and Torgerson, D.J. (no date) Randomised trials in education: An introductory handbook. Universities of Durham and York.