Just how big an improvement on a scale that measures the severity of a disease is clinically meaningful? One point? Two points? Five points? And is a change from 3 to 2 equivalent to a change from 23 to 22? How do you go about finding out? These are issues I had been grappling with in a Research Design Service North East (RDS NE) client meeting immediately before the recent seminar on “Target difference for sample size calculations” by Jenni Hislop (@jennimhislop) from the Health Economics Group at Newcastle University (@HealthEconNCL), part of the RDS NE Methodology Munch series.
It seems logical that the target difference for a trial should be both deemed important (to patients and clinicians) and realistic (an achievable difference based on currently available information). The concept of the MCID (minimal clinically important difference) represents a (usually) patient-centred approach to determining the smallest difference an intervention should provide on a single measure to be considered effective. Ideally it should be based on the smallest benefit patients would identify as valuable. This seems to me a reasonable way to think about the target difference for a trial.
Jenni reviewed the ways in which the target difference has been identified in published studies. Some methods focused on important differences, others on differences that would be realistic to expect in practice, and some could be applied either way. By far the most commonly reported method is the ‘Anchor’ method. This is typically based much more on importance than on what is realistic, and takes the view that the target difference in the primary outcome should be based on the patient’s or clinician’s view of what is important. A further six methods for specifying a target difference were identified: Distribution, Health economic, Opinion-seeking, Pilot study, Review of the evidence base, and Standardised effect size. The two statistical methods (Distribution and Standardised effect size) could be used to identify detectable differences, which may be smaller than the minimum difference considered clinically important. For more details on each of these you can read the full HTA report, or the summary papers covering the reviewed methods, guidance for their use, and details of their current use in practice.
I think the ‘take-home’ message for me was that trials must be designed to detect a difference in the primary outcome that is meaningful to the patients involved – not merely statistically significant – and realistically achievable. Failure to do this has ethical implications. Recruiting an unnecessarily large sample potentially exposes more patients than necessary to risk, while recruiting too small a sample risks failing to detect a relevant difference. A number of methods to guide specification of a target difference exist – the DELTA project provides practical guidance in this area, including advice on how to report how decisions on the target difference for a trial were made and why it is important to report this. You can find out more about their ongoing work here.
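To see why the choice of target difference matters so much, it helps to look at the standard sample size formula for comparing two means. The sketch below is purely illustrative (it is not from the seminar or the HTA report): it uses the usual normal-approximation formula for a two-arm trial with a continuous outcome, and the target difference, standard deviation, significance level, and power shown are hypothetical values chosen for the example.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(target_diff, sd, alpha=0.05, power=0.9):
    """Per-group sample size for a two-arm parallel trial comparing means,
    using the standard normal-approximation formula (illustrative only)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = z.inv_cdf(power)           # desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / target_diff) ** 2)

# Halving the target difference roughly quadruples the required sample:
print(sample_size_per_group(5, 10))    # 85 per group
print(sample_size_per_group(2.5, 10))  # 337 per group
```

Because the target difference appears squared in the denominator, even a modest change in it dramatically changes how many patients must be recruited – which is exactly why an unjustified target difference has ethical as well as statistical consequences.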
If you are a researcher in the North East region and would like support with a research funding application that involves specifying the target difference, you can get in touch with us via our website.
This is a guest post by Louise Hayes, Health Research Methodologist, Research Design Service North East