Here we publish some correspondence with JAMA, who have declined to correct the record on a series of trials that they have misreported in their pages.
First, one paragraph of background, for those unfamiliar with the problem of outcome switching, and COMPare’s efforts to address it. It is well established that academic journals routinely permit outcome switching in the trial reports that they publish, despite public commitments to address this problem. This is widely recognised as bad, for reasons we have already outlined. For six weeks from October 2015, the COMPare team analysed every RCT published in the top 5 medical journals to check if they had correctly reported their pre-specified outcomes. Where we found discrepancies between the outcomes that were pre-specified and those reported, we wrote a letter to that journal to correct the record. Our intention was to test whether the structures of science are working as they should, and specifically to see whether academic journals are transparent and self-correcting in the face of legitimate concerns being raised about misreporting of trial results. The responses we have received from journals have been extremely varied: from full public engagement and transparent corrections in the BMJ, to inaccurate and concerning responses from the Annals of Internal Medicine and the New England Journal of Medicine.
JAMA is a very widely read journal, with an impact factor of around 35. So far we have analysed 13 of their trials. Of 22 primary outcomes pre-specified across these papers, 17 were correctly reported. However, we also found 87 additional outcomes reported that were not pre-specified in the protocol or trial registry, and only 34 of these were declared as novel. Furthermore, 70 of the 105 pre-specified secondary outcomes spread across these publications were never reported. Our full results and raw data are available here. Only 2 of the trials we assessed from JAMA were perfectly reported.
We therefore sent 11 letters to JAMA to correct the record on the individual trials in its pages that misreported their pre-specified outcomes. None of these letters have been published. For an example of the kinds of problems we identified in trial reports published by JAMA, we have recently published this blog post describing the process of grading a JAMA trial, and the problems uncovered. This assessment found that 16 of 37 pre-specified outcomes were not reported or mentioned in the trial publication, and that 6 non-pre-specified outcomes were added to the report without being declared as such. The correction letter sent to JAMA on 8th December 2015 was rejected: by declining to publish critical feedback explaining how the results of a clinical trial were misreported, JAMA's editors are keeping readers unaware of these shortcomings.
At COMPare, we set out to establish why outcome switching persists in academic journals, despite public commitments to best practice. The responses we have received from journals in the course of the project shed important light on the answers to this question, and while we have focused on outcome switching specifically, we think the findings from our project have broader implications for how problems in scientific research are managed.
In accordance with our commitment to open science, because we are citing and analysing journal correspondence in our forthcoming research paper, and in order to drive forward discussions on how to fix the problem of outcome switching, we are sharing our correspondence with the editors of JAMA below, in full.
COMPare to JAMA, 12/11/2015:
(One of the 11 letters we sent to JAMA describing discrepancies)
Dear Editor,
Your recent publication Lemiale et al [1] reports outcomes that are different to those initially registered [2].
Two non-pre-specified outcomes were reported without declaration that they were not pre-specified (“Mortality at 6 months” and “Good performance status in 6 month survivors”). One pre-specified outcome was left unreported (“performance status (OMS scale ranging from 0-perfect status- to 4-moribund)”).
In addition, one pre-specified secondary outcome (“Echec du bras de randomisation”, i.e. failure of the randomised arm) was listed only in the French-language protocol, and did not appear in the list of pre-specified outcomes in the registry entry: the results for this outcome were not reported in the results table, but were reported in free text.
JAMA has endorsed the CONSORT guidelines on best practice in trial reporting [3]. In order to reduce the risk of selective outcome reporting, CONSORT includes a commitment that all pre-specified primary and secondary outcomes should be reported; and that, where new outcomes are reported, it should be made clear that these were added at a later date, with an explanation of when and for what reason.
This letter has been sent as part of the COMPare project [4]. We aim to review all trials published from now in a sample of top journals, including JAMA. Where outcomes have been incorrectly reported we are writing letters to correct the record, and to audit the extent of this problem, in the hope that this will reduce its prevalence. We are maintaining a website at COMPare-Trials.org where we will be posting the submission date and publication date of this letter, alongside a summary of the data on each trial and journal. We hope that you will publish this letter so that anyone using the results of this trial to inform decision-making is aware of the discrepancies.
Many thanks,
Ioan Milosevic, Henry Drysdale and Ben Goldacre on behalf of the COMPare project team.
References:
[1] Lemiale V et al., Effect of Noninvasive Ventilation vs Oxygen Therapy on Mortality Among Immunocompromised Patients With Acute Respiratory Failure – A Randomized Clinical Trial, JAMA 2015; 314(16): 1711-1719
[2] Trial registry entry: https://clinicaltrials.gov/ct2/show/record/NCT01915719
[4] COMPare project website www.COMPare-Trials.org
JAMA to COMPare, 09/12/2015:
(identical to the 10 other rejection emails)
Dear Mr Milosevic:
Thank you for your recent letter to the editor. Unfortunately, because of the many submissions we receive and our space limitations in the Letters section, we are unable to publish your letter in JAMA.
After considering the opinions of our editorial staff, we determined your letter did not receive a high enough priority rating for publication in JAMA. We are able to publish only a small fraction of the letters submitted to us each year, which means that published letters must have an extremely high rating.
You are welcome to contact the corresponding author of the article, although we cannot guarantee a response. We do appreciate you taking time to write to us and thank you for the opportunity to look at your letter.
Sincerely yours,
Jody W. Zylke, MD
Deputy Editor, JAMA
Letters Section Editor
JAMA to COMPare, 09/12/2015:
Dear Dr Milosevic and COMPare Project Team:
Thank you for your recent letters about several randomized clinical trials published in JAMA. We agree that it is important for researchers to pre-specify primary and secondary outcomes before conducting a trial and to report outcomes accurately in their publications. In fact, we carefully monitor this during editorial review.
However, the letters you submitted are for the most part not consistent with the approach outlined in your website, “comparing the clinical trials registry and trial protocol with the trial report.” Most of the letters have noted discrepancies between the trial registry and trial report, but it appears that you have not always checked for discrepancies with the trial protocols, which have been included as a supplement with each trial published in JAMA since mid-2014.
We carefully check for discrepancies between the protocol and the manuscript, because the protocol ordinarily is the document submitted to the IRB and funding agency. In our experience, the trial registration may not always accurately reflect the protocol, especially if clearly documented, justified, and approved revisions to the protocol have occurred. Inaccuracies in the trial registration documents are more of an issue for the individuals overseeing the trial registries.
Also, please note that authors are not always required to report all secondary outcomes and all pre-specified exploratory or other outcomes in a single publication, as it is not always feasible given the length restrictions to include all outcomes in the primary report.
In addition, some of the information in your letters is vague, containing only numbers and not specific outcomes, making it difficult to understand the specific issues or reply to them. Moreover, the last 2 paragraphs of the letters you have submitted, concerning CONSORT and the COMPare project, are identical.
We are willing to evaluate letters that point out major discrepancies between the protocol (considering all protocol revisions and amendments, and the final statistical analysis plan) and the published article. Examples may include a primary outcome reported as a secondary outcome or a secondary outcome elevated to a primary outcome. We will not ordinarily consider letters that simply note discrepancies with the trial registration or point out unpublished secondary outcomes. In addition, letters must be specific about the discrepancies and must provide supporting documentation (ie, copies of the sections of the article and final protocol that differ) in which the discrepancies in outcomes are clearly highlighted and for which the information about study outcomes is not taken out of context with the entire protocol. Also, letters that contain material or language that is duplicative of other letters or material will not be considered for publication.
Thank you for your interest in attempting to help ensure accurate reporting in the medical literature.
Sincerely,
Jody Zylke, MD
Letters Editor, JAMA
COMPare to JAMA, 01/02/2016:
Dear JAMA / Dr Zylke,
Many thanks for your reply to our letter submission; we appreciate the feedback on our process and the consideration you have given to our arguments. With regard to your specific concerns:
On the source of pre-specified outcomes. We try, wherever possible, to use outcomes as pre-specified in the protocol. However, it is often the case that the protocol version supplied is dated after trial commencement, without a detailed history of changes that would allow us to check how the endpoints were altered, or justification for those changes. For example, in the article by Wechsler et al., one entry in the version history records a change to version 3 dated March 8, 2013 (two years after trial commencement): “Change in analytic model for secondary endpoints”, with the reason given as: “Incorporating Dr Israel’s comments.” It is for these reasons that we have used the registry entry in this case, as that provides an exact change log, allowing us to see the endpoints as they were at trial commencement. Furthermore, CONSORT requires that deviations from the initially pre-specified outcomes are declared and discussed in the paper reporting the trial’s results (not in multiple versions of earlier lengthy protocols). We feel this is reasonable, because otherwise all readers would have to search through large numbers of additional documents to identify any changes, which, in our now considerable experience, are unlikely to be adequately declared and explained in any case.
On the issue of trial registries. We use the registry entry to identify pre-specified outcomes where there is no publicly accessible protocol dating from before trial commencement. We are extremely concerned by your comment that “the trial registration may not always accurately reflect the protocol”, and that inaccuracies “are more of an issue for the individuals overseeing the trial registries”. Trial registers were specifically set up to address the selective non-publication of whole trials and of individual outcomes; they are strongly endorsed by the ICMJE, and are often the only place where pre-specified outcomes from before trial commencement can be found. We feel the onus is on trialists and journal editors to ensure that all pre-specified outcomes are reported correctly, and that where there are discrepancies or inaccuracies, these are clearly highlighted in the report. Where outcomes are pre-specified so unclearly that correct reporting cannot be ascertained, this too could be pointed out in a trial report.
On the issue of space. We understand that space restrictions may not allow all secondary or exploratory outcomes to be reported in one document, but would hope that these omissions are at least mentioned, with a reason given, as (again) stated by CONSORT, endorsed by JAMA. Declaring any switched or unreported outcomes allows the reader to place the reported outcomes in their appropriate statistical and interpretative context. Furthermore, many reports we have analysed throughout our study have added several novel non-pre-specified outcomes, in addition to removing pre-specified outcomes. We therefore feel ‘space restrictions’ is not a reasonable justification for undeclared non-reporting of pre-specified outcomes.
With regard to duplication of material, we are happy to edit the paragraphs in question, but feel it is important for readers to understand the reason for the letter. We also feel that your response to our letters – and specifically the discrepancies between your policy as described to us in your letter and your endorsement of CONSORT – deserves to be a matter of clear and open public discussion. It is likely that many readers will be surprised to read the views you have expressed on the acceptability of silent undeclared outcome switching, the acceptability of taking pre-specified outcomes from after trial commencement, and the role of trial registers.
We are confident that JAMA is committed to the highest standards in reporting clinical trials. However, we are also aware that studies have repeatedly demonstrated a very high prevalence of discrepancies between pre-specified and reported outcomes [1,2], despite public statements by journals that they police this issue. Our findings are consistent with these previous studies; but, rather than simply publishing an anonymised overall prevalence figure for outcome switching, we have begun writing on every individual trial. This is for two reasons: firstly, to correct the record on individual trials in individual journals; and secondly, to create a public discussion on best practice and policies around misreported outcomes, with real worked examples where outcome switching has occurred.
We therefore strongly urge you to publish our letters, so that JAMA readers are made aware of the discrepancies between pre-specified and reported outcomes in these individual trials, and of the wider issues.
Yours,
Ioan Milosevic, Ben Goldacre, Henry Drysdale
[1] Fleming, Padhraig S., Despina Koletsi, Kerry Dwan, and Nikolaos Pandis. “Outcome Discrepancies and Selective Reporting: Impacting the Leading Journals?” PLoS ONE 10, no. 5 (May 21, 2015): e0127495. doi:10.1371/journal.pone.0127495.
[2] Jones, Christopher W., Lukas G. Keil, Wesley C. Holland, Melissa C. Caughey, and Timothy F. Platts-Mills. “Comparison of Registered and Published Outcomes in Randomized Controlled Trials: A Systematic Review.” BMC Medicine 13 (2015): 282. doi:10.1186/s12916-015-0520-3.
This correspondence speaks for itself. As we have shown in detail, trial reports published in JAMA repeatedly misreport their pre-specified outcomes, and JAMA have so far acted to deprive readers of this important information. There are two additional issues to raise that are not covered in the reply above.
Firstly, JAMA state that they monitor pre-specification and reporting of outcomes: “We agree that it is important for researchers to pre-specify primary and secondary outcomes before conducting a trial and to report outcomes accurately in their publications. In fact, we carefully monitor this during editorial review.” However, as our 13 analyses of their publications have shown, trials published in JAMA routinely misreport their pre-specified outcomes. If you have any doubts, we strongly urge you to read through this example of how JAMA misreported a clinical trial. These repeated failures strongly suggest that JAMA’s internal mechanisms, whatever they may be, are not working, and warrant review. It would be useful if JAMA could set out what their internal checking processes consist of, so that others can learn from their experience.
Secondly, there is some useful feedback from JAMA, although it is hard to act on, paradoxically, because of restrictions imposed by JAMA and other journals themselves. JAMA say: “In addition, some of the information in your letters is vague, containing only numbers and not specific outcomes, making it difficult to understand the specific issues or reply to them.” We would be keen to include a list of all misreported outcomes in every letter. Sadly we are unable to do so, because of the limits journals impose on the word length of letters submitted in response to research they have published. For JAMA, this limit is 400 words; for NEJM, it is 175 words. Both limits restrict public discussion of methodological flaws, and both prevent all misreported outcomes from being enumerated in a letter, because the scale of misreporting in these journals is so extensive that the unreported and non-pre-specified outcomes simply cannot be listed within the journals’ own word limits.
However, all the raw data from the COMPare project is already shared in full at COMPare-trials.org. Our openly shared study database includes every raw scoring sheet for every trial, and lists every single pre-specified and reported outcome for every single trial that COMPare has analysed. This open data resource is already signposted in every letter, and has already been accessed extensively by both authors and journal editors seeking to find flaws in our findings. However, we can try to make the accessibility of that resource even clearer. If anyone has any additional suggestions on how to address the conflict between JAMA wanting more detail, but restricting word length in correspondence, we would be keen to hear them.
Ben Goldacre and Henry Drysdale, on behalf of the COMPare Trials Project.
G.Y. says
Thank you for reporting on this important topic.
You write, “CONSORT includes a commitment that all pre-specified primary and secondary outcomes should be reported; and that, where new outcomes are reported, it should be made clear that these were added at a later date, with an explanation of when and for what reason.”
Your wording perhaps implies a stronger safeguard than CONSORT (Section 6b) actually recommends. CONSORT (disappointingly) requires pre-specified outcomes to be reported, BUT if new outcomes are reported, there should be an explanation given (and the original analyses need not be reported).
This allows authors to ditch their pre-specified analyses and replace them with others, as long as they give a reason for it – no matter how ridiculous.
The PACE trial of rehabilitative therapies for chronic fatigue syndrome is the classic exemplar of this. All the main outcome analyses and “recovery” analyses that were specified in the protocol were abandoned. The reasons given were weak (“to allow more sensitive analysis”) or wrong (one new threshold was created by applying a mean and SD to clearly non-normal population data from the wrong age range).
The new analyses led to a threshold for clinical effectiveness and recovery for physical function of 60/100 on the SF-36 scale that was so low that it was below the level of disability required for trial entry (65/100) and similar to the mean score for patients with Class II congestive heart failure.
Nevertheless, key analyses based on this ludicrous threshold (and another like it, for fatigue) are the basis for claims that the therapies were successful. They were published without question in The Lancet and Psychological Medicine – presumably because the authors had offered “an explanation”.
Over 12,000 patients have signed a petition demanding retraction of these analyses, and have been ignored. Both the patients and 42 scientists, in an open letter, have requested per-protocol reanalysis. 27 ME/CFS organisations, representing tens of thousands of patients, have written to the study authors asking them to release the data for independent reanalysis, while the authors appeal a decision by the Information Commissioner. The study authors have persistently refused to do the original analyses themselves.
This is the situation that we’re now in, because CONSORT is weak on this issue. Per-protocol analyses should be reported, no matter what. Additional analyses should be allowed, with reasons given.
A strengthened CONSORT should be law, not guidelines, with stiff legal penalties for breaches. It’s unacceptable that patients risk their health in trials and get treated the way that patients have been treated in PACE.