20 October 2010

Science editorials and buyer beware

(With sincere acknowledgments to Sts. Murray and Kern)

If you are a science reader, you hope that scientific journals are grounded in fact. As scholars publishing in scientific journals, we value empirical evidence, presented by identified authors and vetted by peer review. We know, however, that journals are interested in retaining their readership and may resort to various manipulations to increase their so-called “Impact Factor.” By extension, editors could inflate Impact Factors by deliberately publishing controversial articles, since mundane articles would reduce impact. The practice of seeking out papers likely to interest the general media is well known. I firmly believe that this system does not serve science well and has a tendency to corrupt the scientific literature.

The lay public – people outside the scientific profession – receives a murky, mixed message from scientific journals. The usual empirical papers may be mingled with opinion, conjecture, and policy recommendations, sometimes only peripherally related to science. We science scholars should be very concerned about how readers outside the field might perceive these ideas.

Given the above, the current phenomenon of “editorials” should be of serious concern to scientists. Editorials allegedly present the journal’s “position” (more properly, random stuff). As journals increasingly move onto the internet, these unvetted rantings can find a much larger audience than was possible in the past. The problems with credibility are obvious: a) they often have no references; b) they are often anonymous; c) in some journals, they refer to breaking events from the previous week, which hardly makes it plausible that they went through a thorough peer review process. Who fact-checks these? They could be written to promote an agenda – to encourage insane working hours, to denigrate trainees for their lack of “passion,” to make sweeping generalizations about the quality and reliability of writers who use particular kinds of software, etc. – without the constraint of truth. And where are the journal’s feedback loops that publish criticism of positions bordering on the laughable or outright barmy? A letter to the editor, often limited to a tiny fraction of the original article’s length?

I would warn everyone to beware. But I’m a blogger with no published cancer research, so I am clearly unreliable and uncommitted.
