The COMPare (CEBM Outcome Monitoring Project) team is monitoring clinical trials for switched outcomes. Through increased awareness of misreported outcomes, individual accountability, and feedback to specific journals, we hope to fix the ongoing problem of outcome switching.
Why outcome switching matters
Before carrying out a clinical trial, all outcomes that will be measured (e.g. blood pressure after one year of treatment) should be pre-specified in a trial protocol, and on a clinical trial registry.
This is because if researchers measure many outcomes, some of them are likely to give a positive result by random chance (a false positive). A pre-specified outcome is much less likely to give a false-positive result.
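As a rough illustration (the numbers below are our own worked example, not taken from any particular trial): at the conventional 5% significance threshold, a trial measuring 20 unrelated outcomes will, more often than not, find at least one "positive" result purely by chance.

```python
# Illustrative only: the chance of at least one false-positive result when
# several independent outcomes are each tested at a 5% significance threshold.
alpha = 0.05  # conventional significance threshold
for n_outcomes in (1, 5, 20):
    p_any_false_positive = 1 - (1 - alpha) ** n_outcomes
    print(f"outcomes measured: {n_outcomes:2d}  ->  "
          f"chance of at least one false positive: {p_any_false_positive:.0%}")

# Output:
# outcomes measured:  1  ->  chance of at least one false positive: 5%
# outcomes measured:  5  ->  chance of at least one false positive: 23%
# outcomes measured: 20  ->  chance of at least one false positive: 64%
```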
Once the trial is complete, the trial report should include all pre-specified outcomes. Where reported outcomes differ from those pre-specified, this must be declared in the report, along with an explanation of the timing and reason for the change. This ensures a fair picture of the trial results.
However, in reality, pre-specified outcomes are often left unreported, while outcomes that were not pre-specified are reported, without being declared as novel. This is an extremely common problem that distorts the evidence we use to make real-world clinical decisions.
Our approach
COMPare (CEBM Outcome Monitoring Project) takes a new approach. We are monitoring all trials published in the top five medical journals (NEJM, JAMA, The Lancet, Annals of Internal Medicine, BMJ).
We are analysing each trial for outcome switching by comparing the protocol (or the registry entry, if a pre-trial protocol is unavailable) with the trial report. For any trial where we find that outcomes have been switched, we are writing letters to the journal to correct the record.
Detailed methodology
Our results table contains a row for every trial. Each row links to our letter and shows the letter’s publication status. We also show the proportion of pre-specified outcomes correctly reported (for a correctly reported paper this should be 100%) and the number of novel, undeclared, non-pre-specified outcomes added in the paper (for a correctly reported paper this should be 0).
Each row also contains a link to our full assessment sheet for that trial, which is filled out by the two coders who audited the trial and then checked at our weekly project meeting. The sheet lists each pre-specified outcome and whether it was reported, along with any new outcomes added in the paper.
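As a minimal sketch of how those two summary figures relate to an assessment sheet (the outcome names and structure below are made up for illustration; this is not the spreadsheet or code we actually use):

```python
# Minimal sketch with made-up outcome names and a simplified structure;
# not our actual tooling.
prespecified_reported = {
    "Blood pressure at 12 months": True,
    "All-cause mortality at 12 months": True,
    "Quality of life score at 6 months": False,  # pre-specified but unreported
}
# Outcomes reported in the paper that were neither pre-specified nor declared as new.
novel_undeclared = ["Blood pressure at 6 weeks"]

n_prespecified = len(prespecified_reported)
n_reported = sum(prespecified_reported.values())

# The two figures shown in the results table:
proportion_correctly_reported = n_reported / n_prespecified  # 100% for a correctly reported paper
n_novel_undeclared = len(novel_undeclared)                   # 0 for a correctly reported paper

print(f"Pre-specified outcomes correctly reported: {proportion_correctly_reported:.0%}")
print(f"Novel, undeclared outcomes added: {n_novel_undeclared}")
```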
We share this raw data online so that all of our data and working are fully transparent. We are happy to receive feedback on any scoring.
Some trials have perfect outcome reporting and no letter is required: these are posted immediately to the table. Otherwise, we add trials to the table either (i) four weeks after sending our letter or (ii) when the journal publishes or refuses to publish the letter (whichever comes first).
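In other words, the posting rule can be sketched as follows (an illustration only, with arbitrary example dates; the full protocol remains the authoritative description):

```python
from datetime import date, timedelta
from typing import Optional

def ready_to_post(perfect_reporting: bool,
                  letter_sent: Optional[date],
                  journal_responded: bool,
                  today: date) -> bool:
    """Illustrative sketch of when a trial appears in the results table.

    Trials with perfect outcome reporting need no letter and are posted at once.
    Otherwise a trial is added four weeks after our letter is sent, or as soon as
    the journal publishes or refuses to publish the letter, whichever comes first.
    """
    if perfect_reporting:
        return True
    if letter_sent is None:
        return False  # no letter sent yet
    if journal_responded:  # the journal has published, or refused to publish
        return True
    return today >= letter_sent + timedelta(weeks=4)

# Example with arbitrary dates: letter sent 1 November, no journal decision yet,
# checked on 1 December -- more than four weeks later, so the trial is posted.
print(ready_to_post(False, date(2015, 11, 1), False, date(2015, 12, 1)))  # True
```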
This procedure is specified in detail in our full protocol (PDF).
Questions and feedback
If you have any questions about our methodology, please read our full protocol first, as your question may be answered there. We are happy to receive feedback: [email protected].