Secondary Outcome Discrepancies Exceeded Primary Outcome Differences
Among the 66 total primary outcomes reported in the 48 trials, 83% focused on efficacy and the rest on safety. The published primary and secondary outcomes matched completely between ClinicalTrials.gov and PubMed in 15% of the studies: 6 industry-funded trials (15% of industry-funded trials) and 1 nonindustry-funded trial (11%).
Nearly a quarter of the trials (23%, 11 trials), however, had discrepancies between prespecified and published primary outcomes. Among the 25 differing primary outcomes, 8 were prespecified but absent from the final publication, 14 were published but not prespecified, and 3 prespecified primary outcomes became secondary outcomes when published. The rate of these discrepancies did not differ between industry-funded and nonindustry-funded studies.
Positive findings showed no apparent influence either: 45 of the 66 published primary outcomes were positive, but only half of the newly added primary outcomes were positive. Among prespecified primary outcomes, the proportion of positive findings was 68%, a nonsignificant difference.
That lack of influence either from industry funding or from positive findings was reassuring, said Dr Luykx, though the lack of industry influence was not necessarily surprising.
“There are very good clinical trials from both the industry and from academia, but we were pretty surprised by the directionality of results. We would have expected to find more outcome bias there,” he told Psychiatry Advisor. “That’s an indication that scientific misconduct is not at the root of our findings, and I think that’s a very optimistic message.”
He added that this finding also suggests ClinicalTrials.gov may help prevent scientific misconduct, although their study was not designed to answer that question.
It was among the secondary outcomes that the largest discrepancies occurred, but, again, without apparent influence from industry funding or positive publication bias. Of 284 published secondary outcomes, an average of 5.9 per RCT, 81% did not match between prespecified protocols and final published results. Just under a third (29%) of prespecified secondary outcomes were not reported in publications, and 54% of all reported secondary outcomes were not in the original protocols. In 4 studies, at least 1 secondary outcome became a primary outcome.
“Finally, we found a significant positive correlation between the number of prespecified secondary outcomes and the number of nonreported secondary outcomes in the accompanying publication…indicating that researchers registering large numbers of secondary outcomes are least likely to fully adhere to their ClinicalTrials.gov records,” the authors reported.
Most of the prespecified safety and tolerability secondary outcomes (72%) did appear in the final publications, as did 335 additional safety and tolerability outcomes. Overall, 5.5 times more safety and tolerability outcomes were published than were prespecified.
Although there were no red flags for scientific misconduct, Dr Luykx believes the study findings indicate a need for more explanation and transparency in published studies, something journal editors should begin requesting.
“If scientific journals would place more emphasis on comparing the protocols with the publications and having the researchers state what changed from the initial protocol to the publications, then you would be able to address these differences and very honestly address why they changed,” Dr Luykx said. “You would get publications that would reflect reality a little bit more and a more balanced view of the research as it was being performed.”
Otherwise, he suggested, the value of registration with ClinicalTrials.gov is being underused, and clinicians reading the papers may miss out on valuable context about the trials.
“If you address these issues in the publication, then you would get a situation that is more reflective of clinical practice,” Dr Luykx told Psychiatry Advisor. “Conducting a trial is similar to treating patients; things happen that you didn’t expect. So RCTs should reflect clinical practice more and address these issues that occur when researchers conduct a trial.”
Of course, clinicians do not have time to keep up with all research, much less compare published findings to ClinicalTrials.gov registered protocols, but there may be occasional articles where the extra effort is worthwhile, Dr Luykx said.
“Clinicians can read the methods and then look up the RCT on ClinicalTrials.gov to briefly review the outcome measures,” he said. “I wouldn’t say to do that with every RCT you read as a clinician, but if it’s a practice-changing article, as a clinician, you could go back to the protocol and see if all the outcome measures are as they’re stated.”
Lancee M, Lemmens CMC, Kahn RS, Vinkers CH, Luykx JJ. Outcome reporting bias in randomized-controlled trials investigating antipsychotic drugs. Transl Psychiatry. 2017;7(9):e1232.