This is a long post on a response we have received from the editors of Annals of Internal Medicine, one of the world’s leading medical journals. We hope you will read it, as their approach raises some extremely important issues.
The COMPare project sets out to address the problem of outcome switching in clinical trials. We wanted to move on from simple studies reporting the prevalence of this issue, and instead explore what happens when you try to correct the record on individual trials. From the outset, the key question of this work was: how will journals respond? In advance of our first academic publication on the project, consistent with our open data approach, we are now beginning to share the responses we have had from journals. We think they provide important qualitative information on the reasons why this problem has persisted for so long. They are also highly varied: the BMJ, for example, demonstrated best practice by immediately issuing a correction when alerted to misreporting of pre-specified outcomes in their journal. Other journals have not responded in the same way.
This blog post is about a long response to COMPare written by the editors of Annals, after we sent a series of correction letters on trials misreported in that journal. The editors have published their response as a letter in the online edition of the journal, and it will soon appear in the print edition, alongside some of our letters. We believe their response reveals a set of important misunderstandings about the issue of outcome switching, from the editors of the journal with the 4th highest impact factor in medicine.
The COMPare team have submitted a response to Annals for publication in the journal. We are waiting to hear if they will print it. Because the rest of this post is long, and detailed, we start with a summary:
1. The Annals editors argue that switching outcomes is acceptable if there’s a good reason to do so. We agree, and so does CONSORT, but both we and CONSORT say you must tell the reader you’ve switched those outcomes. Throughout their comment the editors seem confused on this key issue. They argue that they should be entitled to permit undeclared outcome switching on the pages of Annals according to their own skilful judgement. This conflicts with their own prior commitment to good reporting standards, and with the standards readers will expect.
2. The editors say that registry information is often poor quality, with outcomes poorly pre-specified. We agree, but think inadequate pre-specification should be pointed out in a trial’s results paper, not glossed over. Trialists should be reminded and motivated to maintain their registry entries, since registers were developed specifically to address selective outcome reporting.
3. The editors argue our coding is unfair because we ignore protocols that are either unpublished or published after a trial began. These protocols cannot be an adequate source of pre-specified outcomes. Throughout their comment, the editors repeatedly make arguments that rely on “pre-specified” outcomes being extracted from documents dated after the trial began. We are concerned and surprised that the Annals editors do not recognise and understand the importance of this.
4. Further to their misunderstandings on timing of pre-specification, the editors claim to have found one small error in our coding: they apparently think the pre-specified primary outcome “glycaemic control” can be reasonably reported as HOMA-IR. We are keen for feedback but explain in detail why they are wrong in our full response below.
5. Lastly, their letter is anonymous. We find this concerning, especially since one deputy editor of Annals is current secretary of ICMJE and the figurehead for this sector’s approach to trial registration and selective reporting. We urge the editors to correct the record on trials they have misreported; clarify their support for CONSORT; and implement new processes to ensure that undeclared outcome switching does not persist in Annals at the current rate.
For the rest of this post, we will go through the Annals editors’ letter line by line, in order to set out their misunderstandings, their errors, their arguments, and our concerns, in detail. Be aware that this is long. However, we feel that doing this thoroughly is important, because Annals editors are in a position of both direct power, and significant cultural influence, over trial reporting.
Here we go.
The COMPare Project recently commented on Gepner and colleagues’ article and mentioned MacPherson and associates’ trial on their Web site. This watchdog group aims to monitor the reporting of pre-specified outcomes in all clinical trials published in 5 top-tier medical journals, including Annals. Because these comments express concerns about “switched” or incompletely reported outcomes, we would like to describe our editorial process and potential reasons for the discrepancies noted by COMPare. We attempted to post a public comment on the COMPare website to express our concerns about their assessments, but there were no means for doing so.
Our project sets out to correct the record on incorrect outcome reporting in clinical trial reports published in the top 5 medical journals, by submitting correction letters to the journal that printed the error. We submit letters to ensure that those using the results of trials to inform their own clinical decision making and research are able to access information on the reporting error. All our raw data is posted on our site, and we are happy to review coding in the light of commentary from authors. Our email address, firstname.lastname@example.org, is openly available on our website for anyone wishing to comment on our assessments of trials. Since the completion of phase one (writing the correction letters) we have moved on to phase two (discussing journals’ responses, and publishing commentary on specific aspects of coding individual trials, to illustrate the mechanics of misreporting by using concrete examples). We will gladly publish comments on our analyses, but we think it is most appropriate that informed discussion on individual trials switching their pre-specified outcomes should take place in the pages of the journal that published the trial.
We share COMPare’s overarching goals to assure the validity and reporting quality of biomedical studies, but we differ on how to best achieve those aims.
For the avoidance of doubt, our “overarching goals” are very specific: to identify and correct the record where trials have switched outcomes, to document the responses to our efforts to do so, and to improve standards of outcome reporting.
We routinely ask authors of clinical trials to submit their protocols with their manuscripts, and we also examine trial registries for the initial and final information entered about trials. We review both because registries include only extracted information, do not routinely monitor whether the data in the registry match the protocol, and may not be updated when the protocol changes. We therefore rely primarily on the protocol for details about pre-specified primary and secondary outcomes, study interventions and procedures, and statistical analysis.
This seems odd and internally contradictory. The editors say that registry entries are flawed because they may not be kept up to date with changes. But it is the initial pre-specified outcomes that are of key interest when assessing outcome switching, not subsequent changes (although changes to those pre-specified outcomes should also be logged on the registry, and declared and discussed in the trial report). The Annals editors say that they use the trial registry entry to check for switched outcomes (although our letters show multiple undeclared discrepancies); yet in the same paragraph they say that registries are unreliable and cannot be used. Later in their response they make further concerning comments about registries, which we discuss in more detail below.
To be consistent with CONSORT recommendations, we ask authors to describe, either in the manuscript or in an appendix, any major differences between the trial registry and protocol, including changes to trial endpoints or procedures.
As the letters from COMPare have repeatedly pointed out, authors have specifically not declared or explained the outcomes they have switched, in trials published in Annals, and the editors have published the trial reports anyway. That is why we have written to the journal: to correct the record. It is good that the editors of Annals have re-asserted their commitment to the principle that changes to pre-specified outcomes should be declared in the trial publication. However, we feel the editors of Annals should now correct the record on the multiple misreported trials, identified by COMPare, that are not consistent with these CONSORT guidelines; and set out what new processes they will put in place to ensure that outcome switching does not persist in trial reports in Annals at the high current rate.
Next, the Annals editors go on to critique the methods of COMPare. For a brief recap, please consider reading our operations manual and FAQ. In short: we identify the trials in the top 5 journals; we identify the pre-specified outcomes; then we look to see if those have been correctly reported; and finally we write a correction letter and share our coding sheet online. “Correctly reported” means either that the results are given, or that there is a clear explanation of what has been switched, and why. We prefer to use protocols, as this is where CONSORT states outcomes should be pre-specified. If there is no protocol available from before the trial start date, we look for a registry entry from before the trial start date. If there is neither a protocol nor a registry entry from before the trial start date, then there are no pre-specified outcomes to be assessed, all reported outcomes are novel outcomes, and all should be declared as such.
According to COMPare’s protocol, abstractors are to look first for a protocol that is published before the trial’s start date. If they find no such publication, they are supposed to review the initial trial registry data. Thus, COMPare’s review excludes most protocols published after the start of a trial and unpublished protocols or amendments and ignores amendments or updates to the registry or protocols that occurred after a trial’s start date.
Some of this paragraph is correct, but some of it is odd. It is correct to say that we look first for a protocol. It is also correct to say that we ignore protocols published after the start of a trial. The Annals editors imply criticism of this approach, but our reasons are clear: protocols published after the start of a trial, by definition, cannot contain pre-specified outcomes. It is also correct to say that we exclude unpublished protocols (and amendments): because they are unpublished, readers cannot access them when reading a trial.
The Annals editors’ last point is extremely odd. Consistent with CONSORT, we are happy to accept all amendments that are clearly declared as amendments in the published paper reporting the results. Furthermore, it seems the Annals editors agree, as they say themselves two paragraphs earlier in this same letter: “To be consistent with CONSORT recommendations, we ask authors to describe, either in the manuscript or in an appendix, any major differences between the trial registry and protocol, including changes to trial endpoints or procedures”. But they do not do this, as we have shown in our letters.
The initial trial registry data, which often include outdated, vague or erroneous entries, serve as COMPare’s “gold standard”.
This is the second of two extremely concerning statements on registries from the editors of Annals. Trial registers were only established after a long battle, and they have a clear purpose: to create a time-stamped public record of key information on each clinical trial, including its pre-specified outcomes, so that this can be assessed against results publications, in order to spot discrepancies and omissions. The International Committee of Medical Journal Editors statement on trial registration states very clearly that: “the purpose of clinical trial registration is to prevent selective publication and selective reporting of research outcomes” (our italics). Darren Taichman, deputy editor of Annals, is the current secretary of ICMJE. There are no named authors to the Annals reply, which is signed solely “The Editors”: it would be useful to know if Prof Taichman is among the authors, and if the secretary of ICMJE agrees with the Annals editors’ views on trial registration.
There is an additional issue to discuss here. The initial trial registry data is not “COMPare’s gold standard”: as is clear in our operations manual, our FAQ, and elsewhere, we only use a registry as a second best option, if a protocol pre-dating the trial start date is not available. It is particularly surprising to see the Annals editors incorrectly state that COMPare’s gold standard is the registry, because two paragraphs earlier, in the same letter, the same Annals editors have said (correctly) the precise opposite (“According to COMPare’s protocol, abstractors are to look first for a protocol that is published before the trial’s start date. If they find no such publication, they are supposed to review the initial trial registry data.”). This is a concerning inconsistency.
The editors of Annals now go on to discuss COMPare’s coding of specific trials in some detail. We welcome feedback on the coding of individual trials, and are happy to correct our data in the light of new information. Unfortunately here the editors of Annals again make a series of internally inconsistent and factually incorrect statements.
Our review indicates problems with COMPare’s methods. For one trial COMPare apparently considered the protocol published well after data collection ended.
This is not true. We used the registry entry for this trial, because the protocol was published after the trial started, and after data collection ended. We cannot understand why the Annals editors would say otherwise. The registry entry and protocol are both clearly linked on the data sheet for this trial, which is shared in full, as is all underlying raw data from COMPare. Furthermore, the pre-specified outcomes in our publicly available assessment sheet on this trial are clearly consistent with the registry entry that pre-dates the trial start date, and not with the protocol that was published well after it began.
However, they did not consider the protocol published 2 years before MacPherson and associate’s primary trial was published.
This protocol was indeed published 2 years before the trial was published: but it was published more than a year after the start date of the trial, and this is the key date. If a protocol is published after the start of a trial then by definition it cannot contain pre-specified outcomes. That is why, consistent with our operations manual, this trial’s protocol could not be used. We are concerned and surprised to see any confusion from the editors of Annals on this issue: outcomes can only be regarded as pre-specified if they pre-date the start date of the trial. Any changes subsequent to that should be declared and discussed in the trial report.
The protocol for MacPherson’s trial was more specific in describing the timing of the primary outcome (assessment of neck pain at 12 months) than the registry (assessment of neck pain at 3, 6 and 12 months), yet COMPare deemed the authors’ presentation of the 12 month assessment as primary within the trial publication to be “incorrect”.
We didn’t use this protocol, because, as discussed, it is from after the trial began. Therefore by definition this protocol cannot contain pre-specified outcomes. We are again surprised and concerned to see any confusion on this issue by the editors of Annals.
Similarly, the group’s assessment of Gepner and colleague’s trial included an erroneous assumption about one of the pre-specified primary outcomes, glycemic control, which the authors operationalized differently than the abstractors.
The primary outcome for this trial was pre-specified as “Glycaemic control”. This is a vague and poorly pre-specified outcome. Item 6a of the CONSORT statement explains that “all outcome measures, whether primary or secondary, should be identified and completely defined” and that details of how and when they were assessed should be reported in the trial publication.
Consistent with our general commitment to good faith, we gave the authors the benefit of the doubt in assuming that “Glycaemic control” meant HbA1c, which allowed us to say that they had successfully reported one pre-specified outcome (although HbA1c was only reported as a secondary outcome in the Annals paper on the trial). When the Annals editors say that “Glycaemic control” was “operationalized differently” by the authors, we can only assume they are suggesting that either “homeostasis model assessment of insulin resistance (HOMA-IR)” or “fasting plasma glucose level” should have been regarded as correct reporting of “glycaemic control”. Neither of these is as credible a candidate for this vaguely specified outcome as HbA1c. HOMA-IR is a complex and uncommon composite outcome (defined as “a method for assessing β-cell function and insulin resistance (IR) from basal (fasting) glucose and insulin or C-peptide concentrations”).
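To illustrate why HOMA-IR is a composite estimate rather than a direct measure of glycaemic control, here is a minimal sketch of the standard HOMA-IR approximation (the function name and input values are illustrative; the formula is the widely used Matthews approximation discussed in the Wallace et al. reference below):

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uu_ml: float) -> float:
    """Approximate HOMA-IR from two fasting measurements.

    Standard approximation: (fasting glucose [mmol/L] x fasting insulin [uU/mL]) / 22.5.
    Note that it combines glucose AND insulin, so it is a composite index of
    insulin resistance, not a direct measure of glycaemic control like HbA1c.
    """
    return (fasting_glucose_mmol_l * fasting_insulin_uu_ml) / 22.5

# Illustrative fasting values: glucose 5.0 mmol/L, insulin 9.0 uU/mL
print(round(homa_ir(5.0, 9.0), 1))  # 2.0
```

The point of the sketch is simply that HOMA-IR depends on fasting insulin as well as glucose, which is why we consider it a less credible reading of the vague pre-specified outcome “glycaemic control” than HbA1c.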
For context: HbA1c and fasting insulin, which were both reported as secondary outcomes, showed no statistically significant change at p&lt;0.05; while both HOMA-IR and fasting glucose gave statistically significant results and were reported as primary outcomes in Annals, despite neither being mentioned at all in the list of pre-specified outcomes written before trial commencement.
Furthermore, the protocol for that trial clearly listed the secondary outcomes that COMPare deemed as being not prespecified.
This protocol was not published before the trial began, and it is not available anywhere online.
On the basis of our long experience reviewing research articles, we have learned that prespecified outcomes or analytic methods can be suboptimal or wrong. Regardless of prespecification, we sometimes require the published article to improve on the pre-specified methods or not emphasize an end point that misrepresents the health impact of an intervention. Although prespecification is important in science, it is not an altar at which to worship. Prespecification can be misused to sanctify both inappropriate endpoints, such as biomarkers, when actual health outcomes are available and methods that are demonstrably inferior.
This is another extremely concerning set of views from the editors of Annals. The CONSORT guidelines are clear: switching outcomes can often be justifiable, but the decision to switch pre-specified outcomes must be declared and explained in the paper reporting the results. We reflect this in our coding scheme: a trial scores full marks if it switches outcomes, as long as the switches are discussed. This is for good reasons. Where outcome switching is not declared, the reader of the paper is deprived of vitally important context, not least for interpreting the p-values reported, which may need modification in the light of multiple (undeclared) analyses. This is not an “altar at which to worship”, but a fundamental principle of good reporting, and good statistical analysis. For the editors of Annals to say that they know best about which endpoints should be reported, without declaring to their readers that they have been modified since pre-specification, is also extraordinary. It represents a significant break with all that is currently recognised as best practice, and surely requires extensive explanation and discussion, if not formal consultation with readers who will assume that Annals is respecting the globally recognised norms that they have elsewhere endorsed. In short: if pre-specified outcomes are subsequently regarded as inappropriate, and they are changed, then, as CONSORT says, this should be declared and discussed.
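The statistical point about undeclared multiple analyses can be made concrete. Under the simplifying assumption of independent tests with no true effect, the chance of at least one spuriously “significant” result at p&lt;0.05 grows quickly with the number of outcomes examined (a rough sketch only; real trial outcomes are typically correlated, which changes the exact numbers but not the direction of the problem):

```python
def familywise_error(k: int, alpha: float = 0.05) -> float:
    """Probability of at least one false-positive result among k independent
    tests at significance level alpha, assuming no true effect:
    P(at least one p < alpha) = 1 - (1 - alpha) ** k
    """
    return 1 - (1 - alpha) ** k

# With 5 independent outcomes tested at alpha = 0.05, there is already
# roughly a 23% chance of at least one spuriously significant result.
for k in (1, 5, 10, 20):
    print(k, round(familywise_error(k), 3))
```

This is why a reader needs to know how many outcomes were actually examined: a p-value reported for an outcome silently selected from many candidates does not mean what the same p-value would mean for a single pre-specified outcome.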
The Centre for Evidence-Based Medicine Outcome Monitoring Project’s assessments appear to be based on the premise that trials are or can be perfectly designed at the outset, that the initial trial registry fully represents the critical aspects of trial conduct, all primary and secondary end points are reported in a single trial publication, and that any changes investigators make to a trial protocol or analytic procedures after the trial start date indicate bad science.
This is entirely untrue. Consistent with CONSORT – and as clearly laid out in our operations manual, our website, and our other publications – we regard changes to pre-specified outcomes as normal, we simply expect them to be flagged up in the trial report. This is also expected by CONSORT, and the Annals editors have claimed in this very letter that they comply with this expectation that changes should be declared. We are simply writing letters to Annals, and to other journals, where they have failed to adhere to this important commitment.
In reality, many changes to trial protocols or reports occur for justifiable reasons: institutional review board recommendations, advances in statistical methods, low event or accrual rates, problems with data collection and changes requested during peer review.
We recognise that changes can be valid. We do not regard deviations from pre-specified outcomes as unacceptable. However, these changes do affect the significance of results, and therefore must be declared in full in the trial publication.
The Centre for Evidence Based Medicine’s rigid evaluations and the labeling of any discrepancies as possible evidence of research misconduct may have the undesired effect of undermining the work of responsible investigators, peer reviewers and journal editors to improve both the conduct and reporting of science.
We do not regard outcome switching as evidence of misconduct. Our impression is that it reflects a structural and cultural failure to take outcome switching seriously, as demonstrated by this reply from the Annals editors. However, we do believe that permissiveness around outcome switching, where it is done thoughtlessly, will give cover to those who deliberately switch outcomes to exaggerate their results.
We have led or participated in many efforts to improve the transparency and accuracy of scientific reporting. We will continue to encourage authors of clinical trials to make their protocols available to others and to update their trial registry information.
While these commitments are good in general, they do not address the main issue that we have raised: undeclared outcome switching in prominent clinical trial reports. Indeed, updating trial registry information without declaring this in the publication is precisely the sort of bad practice we aim to stop.
We respect COMPare’s effort to draw attention to the importance of accurate and complete public description of clinical trial procedures. But we don’t believe their approach – that purports to draw a simple methodological line between “good” and “bad” reporting (or editing) – serves our common cause well. Until the COMPare Project’s methodology is modified to provide a more accurate, complete and nuanced evaluation of published trial reports, we caution readers and the research community against considering COMPare’s assessments as an accurate reflection of the quality of the conduct or reporting of clinical trials.

The Editors
We are puzzled that this problematic letter from Annals has no named signatories, and is signed only “The Editors”, but we are also very happy to receive feedback and adjust our coding in the light of new information. We have set out our methods clearly, and consulted extensively on them before commencing coding. From the content of their letter it would seem that the editors of Annals have gone through COMPare’s assessments on trials in various journals, not just Annals, in some detail, in order to identify flaws. We are grateful for their review. However, as discussed above, their review of our data has yielded only: two complaints about the documents used to identify pre-specified outcomes, which demonstrate the Annals editors’ difficulty in understanding what constitutes a “pre-specified” outcome, rather than any shortcoming in COMPare’s coding; and a disagreement on whether the poorly pre-specified outcome “glycaemic control” is best interpreted as meaning HbA1c, or an uncommonly used composite outcome estimating β-cell function and insulin resistance (IR) from fasting glucose and insulin or C-peptide concentrations.
However, more broadly we recognise and are glad of Annals’ support for improving trial methods and reporting. It is because of this enduring and well recognised commitment from Annals that we hope the editors will reconsider their response to our correction letters, and their policies on outcome switching.
Specifically we urge the editors to do three positive things. Firstly, we hope they will correct the record on the many misreported trials identified by COMPare. Secondly, we hope they will address the confusion in their letter by clarifying their support for the CONSORT guidelines on outcome switching, and explaining that trialists must pre-specify outcomes in a registry or publicly accessible protocol (which pre-dates commencement of the trial) and then either report those outcomes or explain any changes in the trial report. Lastly, and most importantly: we hope that the editors will implement new processes to ensure that outcome switching does not persist in Annals at the current rate; and then share these methods so that other journals can learn from Annals’ experience of addressing this problem.
Ben Goldacre, Henry Drysdale, Carl Heneghan, Kamal Mahtani, Eirion Slade, Ioan Milosevic, Aaron Dale, Philip Hartley, and Anna Powell-Smith (the COMPare team).
Annals Editors. Discrepancies Between Prespecified and Reported Outcomes. Ann Intern Med. Published online 22 December 2015: http://annals.org/article.aspx?articleid=2478526, last accessed 20/01/2016.
Moher D, Hopewell S, Schulz K, et al. CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010; 340:c869.
Ben Goldacre, Henry Drysdale and Carl Heneghan. The Centre for Evidence Based Medicine Outcome Monitoring Project (COMPare) Protocol: http://compare-trials.org/wp-content/uploads/2016/01/protocol.pdf, last accessed 20/01/2016.
ICMJE statement on clinical trial registration: http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trial-registration.html, last accessed 19/01/2016.
Lind M, et al. Design and methods of a randomised double-blind trial of adding liraglutide to control HgA1c in patients with impaired glycaemic control treated with multiple daily insulin injections (MDI Liraglutide trial). Prim Care Diabetes 2015; 9:15-22.
MacPherson H, et al. Alexander technique lessons, acupuncture sessions, or usual care for patients with chronic neck pain (ATLAS) study: study protocol for a randomised controlled trial. Trials 2013; 14:209.
MacPherson H, et al. Alexander technique lessons or acupuncture sessions for persons with chronic neck pain: a randomized trial. Ann Intern Med. 2015; 163:653-62.
The Cardiovascular Diabetes and Ethanol (CASCADE) trial registry entry. ClinicalTrials.gov: NCT00784433: https://clinicaltrials.gov/ct2/show/NCT00784433, last accessed 19/01/2016.
Gepner et al. Effects of Initiating Moderate Alcohol Intake on Cardiometabolic Risk in Adults With Type 2 Diabetes. Ann Intern Med. 2015; 163(8):569-579.
Wallace T, et al. Use and Abuse of HOMA Modeling. Diabetes Care 2004; 27(6):1487-1495.