Quality of scientific literature: A report from the 2017 Peer Review Congress
Hridey Manghwani, Manager, Corporate Communications, Editage, is attending the 2017 Peer Review Congress. This is Hridey’s individual report of the second day of the conference.
After some interesting sessions on the quality of reporting, it was time to discuss the quality of scientific literature. The session kicked off with an excellent presentation from Dr. Harold Sox of the Patient-Centered Outcomes Research Institute (PCORI), whose group has assessed scientific quality in a series of comparative effectiveness research (CER) studies. A series of CER studies funded by PCORI during its very first funding cycle (2013) were analyzed. Original applications were used to examine PI-related variables potentially associated with quality and with adherence to PCORI’s methodological criteria, established in 2012 and updated recently. Twenty more reports will be presented after completion of peer review. The study hopes to provide measures of specific methodological shortcomings after following up on 300 studies through July 2019.
The next presentation, by Matthew J. Page, focused on the drawbacks of the statistical methods used in systematic reviews of therapeutic studies. His group analyzed the interpretation of the statistical methods in a broad set of 32 Cochrane and 78 non-Cochrane studies. The choice of statistical model was not clinically justified in over 70% of the studies. Moreover, pooled results were not interpreted as an average effect, and prediction intervals were not reported for many studies. In many cases, overlapping confidence intervals were not explained and reasons for asymmetry in funnel plots were not given. The conclusions of the study clearly highlighted the need for better statistical analysis methods. These conclusions raised many questions about whether the guidelines in the Cochrane handbook were clear, and whether they were being understood and followed. During the Q&A, Leslie Citrome, Editor-in-Chief of The International Journal of Clinical Practice, shared that his decision to reject a manuscript or send it for peer review is based on the study design and the quality of the data. However, he depends on the peer reviewers to vet the statistical methods used in a systematic review. He said a tool to analyze the statistical methods would make his life much easier as a journal editor.
Next up was Marc Dewey, an associate editor at Radiology, who spoke about reproducibility issues in imaging journals such as Radiology. He went on to share the results of a survey of authors and reviewers about the reporting guidelines and checklists for contributors to Radiology, covering studies submitted between 2016 and 2017. The results of his survey revealed that most authors used these guidelines when writing the manuscript. Almost 80% of authors and 50% of reviewers found the guideline checklists to be useful. However, discussions after Marc’s presentation indicated that there is a lack of awareness about the CONSORT and EQUATOR guidelines for clinical trials, and many authors did not cite these guidelines in their manuscripts. Many of these guidelines are meant to be used by scientists at the time they design their study, but were being consulted only at the manuscript preparation stage.
Jeannine Botos from JNCI-Oxford University Press then took the podium to discuss the use of standard reporting guidelines (SRGs) among JNCI authors, and how it related to editorial outcomes and reviewer ratings. JNCI rejects more than 75% of submitted manuscripts before peer review. Although use of SRGs was not associated with editorial outcomes or reviewer ratings, reviewer ratings for adherence to guidelines and clarity of presentation were associated with the editorial decision after peer review.
Emily Sena from the University of Edinburgh discussed the effect of an intervention on improving compliance with the ARRIVE guidelines for reporting of in vivo animal research. The ARRIVE guidelines were published in 2010 and endorsed by all major UK funders and over 1,000 journals. However, endorsement did not mean that these journals were enforcing the guidelines. In fact, the results of this study opened the door for changes to editorial policies to include an ARRIVE checklist that would be checked when a manuscript is submitted.
All presentations in this session highlighted the poor quality of studies, especially systematic reviews of clinical trials. Developing numerous different guidelines will only confuse authors further and will not lead to better adherence. Should they even be called ‘guidelines’? Why not just call them rules and enforce them at the time of manuscript submission? Some journals likely don’t have the bandwidth to check whether the guidelines were being followed. They also cannot shift the burden onto peer reviewers, who are already underappreciated and not compensated for their efforts. Is there a solution? Most solutions only seem to lead to a different set of problems.
The next session, on Trial Registration, began with An-Wen Chan from the University of Toronto, who compared protocols, registries, and published articles to understand the association between trial registration and the reporting of trials. In addition to often being unregistered and unpublished, clinical studies are often discrepant in the reporting of primary outcomes. Journal editors, legislators, funders, regulators, and ethics committees must mandate trial registration and provide public access to all protocols. By increasing transparency, biased reporting of trial results can be curbed.
Constance Zou from the Yale School of Medicine discussed the influence of the FDA Amendments Act (FDAAA) on registration, reporting of results, and publication. Her study concluded that the FDAAA mitigated selective publication and reporting of clinical trial results, and improved the availability of evidence for physicians and patients to make informed decisions regarding care for neuropsychiatric illnesses.
Rebecca J. Williams from ClinicalTrials.gov then shared her team’s evaluation of results posted on ClinicalTrials.gov and their relationship to the peer-reviewed literature. They concluded that, for completed or terminated trials registered on ClinicalTrials.gov, between 33% and 57% had reported data that had not been cited in any PubMed article. This clearly suggested that ClinicalTrials.gov is a unique source of results for many trials.
Day Two ended with the feeling that trial registration, and the way it is done, must be improved. Transparency and public access will hopefully lead to cleaner scientific literature in the future.
We’ll bring you the reports from Day Three soon. Stay tuned!
- Biases in scientific publication: A report from the 2017 Peer Review Congress
- Research integrity and misconduct in science: A report from the 2017 Peer Review Congress
- Quality of reporting in scientific publications: A report from the 2017 Peer Review Congress