The general wisdom is that negative results are harder to publish than results showing that an experimental manipulation had a statistically significant effect (“p < 0.05”). Anecdotally, the paper of mine that had the longest, toughest slog to publication was one with negative results.
Is the solution to this problem to create another journal? No.
First, we already have journals in biology that specifically say in their titles that they exist to publish negative results. We have the Journal of Negative Results in BioMedicine (started 2002) and Journal of Negative Results - Ecology & Evolutionary Biology (started 2004).
Second, we have journals that, while not specifically created to accept negative results, specifically include publication of negative results in their editorial mandate. Usually, this is phrased as “reviewed only for technical soundness, not perceived importance,” and these have become known as “megajournals” (regardless of how many papers they actually publish). This format, pioneered by PLOS ONE, is still quite new. Several megajournals are less than five years old.
The age of these journals matters when talking about publishing negative results. In my experience, many academics take a long time to realize when the publishing landscape has changed. For example, I have been in many discussions with scientists who are actively publishing and active on social media, yet who mistakenly believe that “open access” is synonymous with “article processing charge” (APC). It is not: plenty of open access journals charge authors nothing.
It takes time to change academics’ publishing habits. Five years is not enough to see how the creation of these journals affects the publication of negative results.
And more journals are on the way. The Society for the Study of Evolution has Evolution Letters coming, and the Society for Integrative and Comparative Biology has an open access journal coming (though it seems likely these will review for “impact,” not only for technical soundness).
I do realize that some journals are better at upholding this editorial standard than others. For example, PLOS ONE reviewers have sometimes sent back reviews weighing the “importance” of the findings, even though the journal tells them not to do that.
In biology, you probably have at least six perfectly respectable journals that happily publish negative results. This is why I contend that we do not need to create new journals for negative results. We need to use the ones we have.
I think the underlying problem with discussions of negative results is that we talk about “negative results” as though they were all the same, scientifically: “no effect.” But not all negative results are equivalent; some are more interesting than others. Below is a crude first attempt to rank them.
- Negative results that refute strongly held hypotheses. Physicists hypothesized that space contained an aether. Nope. Harry Whittington thought the Burgess Shale fossil, Opabinia, was an arthropod. Nope. That was just a big old bunch of negative results. But they were clearly recognized as important in getting us off the wrong path.
- Negative results that fail to replicate an effect. These are tricky. We all recognize that replication is important, but how we react to these results differs. Sometimes, failure to replicate is seen as important in demonstrating incorrect claims (like Rosie Redfield and others showing that GFAJ-1 bacteria, sometimes referred to as “arsenic life,” did indeed have phosphorus in their DNA rather than arsenic, as initially claimed). Sometimes, failure to replicate can be dismissed as technical incompetence. (The “Tiger Woods” explanation.)
- “Hey, I wonder if...” (HIWI*) negative results. These are negative results where no strong hypothesis predicted the experimental outcome. Like asking, “What is the effect of gamma rays on man-in-the-moon marigolds?” Well, do you have any reason to believe that gamma rays would affect the marigolds differently than they affect other organisms? If you don’t, negative results are deeply uninteresting.
In other words, the fact that results are negative has very little bearing on how people view their importance. The importance of the hypothesis underlying those negative results plays a much bigger role in whether people are likely to find them interesting.
That is, even if you have another journal specifically for negative results, people are still going to think some results are more interesting and publishable than others. Authors whose negative results fall into the HIWI category (which may be a lot of those experiments) are still going to have a rough ride to publication, even at journals that consider negative results.
External links
Garraway L. 2017. Remember why we work on cancer. Nature 543(7647): 613–615. http://dx.doi.org/10.1038/543613a (Source of the “Tiger Woods” metaphor)
* In my head, “HIWI” rhymes with “Wi-Fi.”
This post prompted by Twitter discussion with Anthony Caravaggi.