07 June 2010

Paper trail, or: Did they say that? Peer-reviewed journal edition

Everybody makes mistakes. But the peer-reviewed scientific literature tries to reduce mistakes by having fairly rigorous rules for citation. Citing original sources increases transparency and greatly facilitates fact-checking.

For instance, in one of our recent papers, we pointed out that a reference given in another paper did not support the point being made (as far as we could tell). Probably most practicing scientists have a story like that. But how common is that sort of error?

A new paper by Todd and company claims that the error rate for peer-reviewed scientific journals in marine biology is about one in four.

That’s a bit of a surprise.

To come up with this number, they looked at one reference given in support of a claim in each of three articles from two issues of each of 33 journals (198 references total – a pity they couldn’t have found a way to squeeze in two more to make it a round number). The authors say the references were selected at random, but not quite: they only picked assertions that were supported by a single reference, so a bit of selection is going on.
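For the curious, the sampling arithmetic works out as a quick Python sketch (using only the numbers quoted above; the variable names are mine, not the paper's):

    # Back-of-the-envelope check of the sampling design described above.
    journals = 33   # journals surveyed
    issues = 2      # issues examined per journal
    articles = 3    # articles checked per issue, one reference each

    total_refs = journals * issues * articles
    print(total_refs)  # 198 -- two short of a round 200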

Although the title of the article claims one quarter of citations are “inappropriate,” the picture is not as bleak as the title suggests. In fact, only 6% of references did not support the claim being made, a figure closer to one in twenty than one in four.

The “one in four” comes from their claim that 75% of references support the claim made in the paper citing them. The remaining 25% that they deem “inappropriate,” however, fall into several categories, and not all of those categories are equally problematic.

About 10% of citations are ambiguous, although Todd and colleagues say they tended to give the benefit of the doubt to those doing the citing. Annoying, but perhaps not damaging to the scientific literature.

The remaining citations they deem “inappropriate” (7.6%) are cases where the claim is supported not by original data in the paper being cited, but by another paper that the cited paper itself cites. These are almost certainly people citing review articles. Todd and company argue that this is “inappropriate,” but this is surely debatable. Off the top of my head, here are some positive reasons to cite reviews:

  • Review articles are often more cohesive than original articles
  • Easier to read text with a single citation than a long list of citations
  • Saves space
  • Can save time and effort for readers to track down only one article instead of a long series
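Putting the post's percentages together is a useful sanity check. A minimal sketch, assuming the rounded figures quoted above (the paper's exact values may differ slightly, so treat the derived counts as approximate):

    # Rough tally of the citation categories quoted above.
    # Percentages are the rounded figures from the post; they need not
    # sum to exactly 100% because of rounding.
    total_refs = 198

    categories = {
        "supports the claim": 75.0,          # deemed appropriate
        "ambiguous": 10.0,                   # benefit of the doubt given
        "secondary (review) citation": 7.6,  # data are in a paper the cited paper cites
        "does not support the claim": 6.0,   # the genuinely worrying cases
    }

    for label, pct in categories.items():
        print(f"{label}: ~{round(total_refs * pct / 100)} of {total_refs}")

    # The three "inappropriate" categories sum to ~23.6%, the source of
    # the "one in four" headline -- but only the last ~6% flatly fail to
    # support the claim being made.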

Todd and company did look at the impact factor of the journal each citing paper appeared in, with the unstated hypothesis being that “good” journals (i.e., higher impact factors) would have fewer inappropriate citations than “bad” journals. No relationship, which I think goes to show that reviewers, and the review process, are a pretty homogeneous lot. (Every journal has a reviewer 2 who hates the paper.)

When discussion of this broke out on Twitter, Carin Bondar’s initial response was, “Impact factor is screwed!” I don’t think so. Impact factor would be screwed only if there were some sort of systematic bias in which journals are being inappropriately cited. As far as I can tell, everything in this paper suggests the errors are occurring at random. (Impact factors can be abused, but they provide a rough measure for authors to figure out whether a journal has any credibility, which is a major problem in this day of experimentation in scientific publishing.)

Journal editors and reviewers should be the gatekeepers here. Reviewers should have some understanding of the relevant literature for the paper they are reviewing. But there are too few people to review too many papers, and it may be unreasonable to expect reviewers to do more than a spot check. Maybe journals, particularly those published by some of the extremely profitable academic publishers, could hire professional fact checkers. That a journal had full-time fact checkers could easily be as much of a selling point as a good impact factor.

More than anything else, this article shows how easy it is to go and fact-check these things, which suggests that the citation system works. We just don’t take enough advantage of the opportunities.

Reference

Todd P, Guest J, Lu J, & Chou L. 2010. One in four citations in marine biology papers is inappropriate. Marine Ecology Progress Series 408: 299-303. DOI: 10.3354/meps08587

Photo by gadl on Flickr, used under a Creative Commons license.

2 comments:

  1. Great post Zen! I still wonder about the randomness of incorrect citations though...if a certain 'high impact' journal has a higher number of citations, there will naturally be a higher number of incorrect ones. I guess we'd need to assume that the proportions would be the same, but how many people cite a popular paper just because that is the 'norm'?! I find this so interesting!

    Also...good point about using the review papers. Cites within cites are a major issue for which there are no clear instructions or guidelines.

    I still think impact factors are in trouble...maybe not entirely screwed :)

  2. Review papers may save time, but mainly they should help you figure out which original papers to concentrate on reading. Especially if misrepresentation is as frequent as this paper apparently claims, you're taking a risk by not checking the original data.

    In citing, you should distinguish between claims/data from references you've actually read and papers which only refer to such claims. In the first case, cite as normal, in the second, cite as (Author A, cited in Author B), or refer as (See references in Author B).

    Guidelines exist, Carin, but a lot of people don't follow them ;-)

