
There’s a common refrain in engineering: When it comes to the triple goal of “faster, better, cheaper,” you can only ever achieve two of the three. This wisdom applies as well to the rapidly moving field of scientific preprints — fast-turnaround, web-based publications of research findings that have not yet been subjected to review by outside experts.

Preprints are all the rage today among scientists conducting research on the novel coronavirus and the related issues of Covid-19 diagnosis, treatment, and vaccine development. That’s understandable, and in many respects commendable, as a way to quickly share knowledge about this global health crisis.


Preprint servers such as bioRxiv and medRxiv post research reports almost immediately and with only modest oversight, allowing researchers around the world to rapidly build on each other’s work. In contrast, although many scientific journals have significantly accelerated their vetting process for papers related to Covid-19, they more typically take months to review submissions in order to check whether the experimental protocols were appropriately designed, whether statistical analyses were conducted properly, and more.

The results posted on preprint servers may not be perfect — they may even be flat-out wrong. But the preprint philosophy is that errors will get fixed over time as the scientific community crowdsources and opines on the findings, and that the pros of quick sharing among scientists outweigh the cons of sometimes sharing erroneous findings.

If this were just about scientists sharing their work with each other, no problem. But especially when the topic is of great public interest, such as reassuring or frightening findings about a deadly pandemic, preliminary findings don’t tend to stay under cover. Scientists, who gain professional prestige by being first to discover something or generate new knowledge, increasingly share their preprint results on social media, where a single influencer can amplify a tentative observation into a market-moving headline seen by millions around the world. Journalists have their own incentives to rush stories based on preprints into production, partly because they serve a news-consuming public that demands new content by the hour (desperate for a hint of normalcy, how many of us hang breathlessly on every promise of a scientific “breakthrough”?) and partly because the competition to be first is fiercer than ever in today’s shrinking and economically strained world of journalism. Bottom line: preliminary, minimally reviewed information about Covid-19 is spreading as fast as the virus itself.


We know from social science research that when unconfirmed but widely shared first-draft results later prove to be wrong, it can be very difficult for people to “unlearn” what they thought was true. Misinformation, it turns out, is a lot like toothpaste: not impossible to put back in the tube, but pretty difficult. So what’s the solution?

One approach floated by a number of scientists in a recent New York Times op-ed calls for pools of independent scientists to conduct “rapid reviews” of preprints soon after they are posted publicly. These reviewers could notify the authors about any apparent errors for quick correction, hopefully before journalists and others widely cover or share the initial report. It’s an attractive and seemingly simple option until you consider the “faster, better, cheaper” conundrum.

Research, especially clinical research, is inherently complicated, and it takes time to consider the many places a study may have gone astray. At SciLine — the service that one of us (R.W.) directs, based at the nonprofit American Association for the Advancement of Science — we’ve found that, when asked, most scientists are generously willing to be connected to reporters seeking expertise on deadline on virtually any science-related topic. But if a reporter wants the scientist to comment on a published report, whether in a conventional journal or on a preprint server, they appreciate some extra time — we like to give them a day — to read the paper closely and consider its details before commenting. Without that, “better” gets sacrificed for “faster.”

There are possible ways to get both “better” and “faster.” Imagine, for example, a coterie of scientists whose job is to be permanently on call, like lawyers on retainer, ready to sift through a new research report as soon as it is posted publicly and conduct a quick but careful critique. They could alert the scientists who did the work about mistakes that need quick correction. And they could post their comments on a clearinghouse website for journalists and others to consult before sharing the new finding or covering it for their news organizations. But then we are trading away “cheaper,” because academic scientists are not paid to sit around and comment on others’ work. Someone would have to fund a bank of on-call scientist reviewers.

And how many scientists would that take? Given the range of research disciplines being brought to bear just against Covid-19, including virology, epidemiology, behavioral psychology — even remote learning and pedagogy — and the additional need to examine each report’s statistics and methodology, we could soon be talking about a large stable of paid experts. Again, there goes “cheaper.”

Clearly there is no easy solution here, though we would argue that the ideal approach would put a small toll on each of the three legs of engineering excellence. If, for example, scientists were to share their work more collaboratively for feedback before posting their findings, they might get some great input and avoid embarrassing errors in exchange for a small loss of “faster.” Journalists could commit to an extra layer of due diligence, such as consulting a few more knowledgeable sources than usual when reporting on preprints. And if those journalists emphasized in their stories the uncertainty that’s always implicit in such fast-release findings, they could help minimize the need for follow-up stories that contradict earlier news, exactly the kind of “coffee is good for you/coffee is bad for you” whiplash that encourages the public to distrust journalism and science at a time when support for both is more important than ever.

As for the public, which is clinging to every shred of news in these trying times, perhaps we can all somehow rise a bit above the fray and remember that, just as journalism is said to be the first rough draft of history, so science is a long process of constantly trying to prove itself wrong, sometimes with great, if perhaps painful, success.

Let’s revel in the knowledge that preprints today are helping researchers share — especially with each other — their latest advances with great ease and speed. At the same time, let’s impose some discipline on our own proclivities to celebrate prematurely or sink into despair. And let’s keep in mind that despite the turbulence at its leading edge, science does a great job, over time, of getting things right.

Rick Weiss is the director of SciLine, a philanthropically funded free service for journalists and scientists based at the American Association for the Advancement of Science. Jonathan Moreno is a professor of medical ethics and health policy at the University of Pennsylvania.
