Can we fix the reproducibility crisis that is plaguing science?
In my 30 years of working in the field of analytical chemistry, I have reviewed many manuscripts for a multitude of journals. Most of these manuscripts contained little or no analytical method validation, meaning they lacked evidence of repeatability, reproducibility, robustness, ruggedness, and most of the other requirements for demonstrating that a method is valid. Remarkably, most of these papers were submitted to analytically oriented journals, as opposed to biologically oriented ones such as Science, Nature, Cell, or BioTechniques. It therefore comes as no surprise that the existing and emerging literature lacks reproducibility and other aspects of analytical method validation.
Perhaps the essence of good scientific research and publication is that the work, when successfully finished and published, will be reproducible in the hands of others who are similarly trained and equipped in their own laboratories. The authors should, at the very least, ensure that their research is repeatable in their own lab and reproducible in other labs before it is submitted for publication. If this crucial element is not met, then the findings should not be published. The original work should follow accepted scientific protocols, such as good laboratory practices (GLP), and ideally be conducted in a GLP facility with quality control and quality assurance. And there must be sufficient hard data (numbers), with at least three replicates for each and every measurement, all of it statistically treated and tabulated.
Why is this so significant? More and more manuscripts are rejected because they do not contain elements of repeatability or reproducibility, and such papers would only add to the existing literature that is irreproducible. If my own experience is representative of others reviewing the current, analytically oriented literature, then it is clear that journals and editors must place more demands on authors when a manuscript is being evaluated. The manuscript should include evidence of repeatability, reproducibility, robustness, ruggedness, and the other aspects of true and complete analytical method validation. If these elements are not evident, then such work is likely to fail once published; other researchers will not be able to reproduce the findings, and we will continue to face the situation we now face: a general lack of reproducibility across many, or most, scientific publications.
Most of the analytical manuscripts I receive contain either partial or no analytical method validation. This means they lack robustness, ruggedness, repeatability, reproducibility, limits of detection, limits of quantitation, calibration plots demonstrating linearity of quantitation, statistical treatment of data (where the number of replicates, n, must be at least three), stability of reagents, quality control, quality assurance, good laboratory practices, and so forth. A majority of the papers that lack elements of reproducibility come from academia, as compared with those from industrial or government labs, in fields such as analytical instrumentation, pharmaceuticals, and biopharmaceuticals. Industrial labs and firms must meet the regulatory requirements laid down by the U.S. Food & Drug Administration, the European Medicines Agency, and their Japanese counterpart, which means producing high-quality method validation evidence; academia, by contrast, is still not compelled or required to do so by most journals or editors. Most academics choose to save the time, money, and effort of pursuing analytical method validation, which, had it been followed all along, might have averted the current crisis of irreproducibility.
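To make the statistical criteria above concrete, here is a minimal Python sketch, using entirely hypothetical triplicate calibration data, that fits a calibration line, checks linearity via R², and estimates limits of detection and quantitation from the residual standard deviation using the common ICH Q2 conventions (LOD = 3.3σ/S, LOQ = 10σ/S, where S is the calibration slope). It is an illustration of the kind of evidence a validated method reports, not a substitute for a full validation protocol.

```python
import statistics

def fit_line(x, y):
    # ordinary least-squares slope and intercept
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def validation_summary(conc, signal):
    slope, intercept = fit_line(conc, signal)
    residuals = [s - (slope * c + intercept) for c, s in zip(conc, signal)]
    # residual standard deviation of the regression (n - 2 degrees of freedom)
    s_y = (sum(r * r for r in residuals) / (len(conc) - 2)) ** 0.5
    ss_tot = sum((s - statistics.fmean(signal)) ** 2 for s in signal)
    return {
        "slope": slope,
        "intercept": intercept,
        "r2": 1 - sum(r * r for r in residuals) / ss_tot,  # linearity check
        "lod": 3.3 * s_y / slope,  # ICH Q2 detection limit convention
        "loq": 10.0 * s_y / slope,  # ICH Q2 quantitation limit convention
    }

# hypothetical calibration standards: n = 3 replicates at each level
conc = [1, 1, 1, 5, 5, 5, 10, 10, 10, 20, 20, 20]
signal = [2.1, 2.0, 2.2, 10.3, 10.1, 10.4,
          20.6, 20.2, 20.5, 41.0, 40.7, 41.2]
summary = validation_summary(conc, signal)
```

A full validation report would also tabulate the percent relative standard deviation (%RSD) of the replicates at each concentration level, alongside robustness and ruggedness data.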
Considering the above arguments, how can science rectify these issues and produce literature that will be fully reproducible? Does the fault reside with authors, reviewers, editors, journal publishers, funding agencies, or somewhere else? It is the authors who are ultimately responsible for submitting valid, repeatable, reproducible, and honest results and data. For their part, reviewers must be more rigorous when evaluating submissions that do not contain evidence of repeatability, reproducibility, method validation, and the other criteria that lend credibility to the work. Some journals provide online guidance and instructions to reviewers so that only the best papers reach publication. Editors should weigh reviewers' comments and recommendations before arriving at a decision; perhaps they should not even forward to reviewers manuscripts that contain no evidence of repeatability or reproducibility. Ultimately, it is the publishers who must change their policies on what every submission must contain, to better ensure its validity, repeatability, and reproducibility once published.
Encouragingly, journals have been approaching the problem of irreproducibility and retraction (which usually follows it) in ways intended to prevent such instances. Some journals, such as Nature, BioTechniques, and The Analyst, ask authors to state certain goals up front in the research description, such as evidence of repeatability and reproducibility and the number of replicates for each experiment. Some of these guidelines specify analytical method validation criteria that should be provided in the body of the paper. Journal editors and reviewers may need to be more circumspect about what they approve for final publication, especially when there is little or no evidence of analytical method validation, as this leads to failed attempts to repeat and reproduce the studies. The overall purpose should be to ensure that the final manuscript contains enough data, detail, and method validation to be reproducible in the hands of its readers after publication.
A lack of reproducibility and replicability slows the pace at which science progresses. Beyond that, it can adversely affect funding patterns, as irreproducible research is a major drain on science spending. Such research can also jeopardize health care policies and even erode public trust in science. Publishing good, high-quality science should therefore be a priority for all of science's major stakeholders.