03 November 2009

Rating journals and articles

Richard Smith argues that having measurements (or, as fashion calls them, “metrics”) for individual scientific articles means that impact factors for scientific journals are going to go away very soon.

I don’t think it’s going to be quite that easy.

Scientists have had "article-level" measurements for a long time: the number of citations an article receives in other papers. To be fair, Smith mentions citations:

Citations are used to calculate the impact factor, but these citations come from only one (expensive) database. It’s better to use more than one database.

Putting aside the impact factor calculation for a second, has Smith not seen that Google Scholar has citation information? I also do not know why Smith claims more databases are better (apart from the possibility that one might be free). What we want is the actual number of times an article has been cited, so in principle every database should report the same citation count for a given article.
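For anyone who hasn't seen it spelled out, the standard two-year impact factor is nothing more than a ratio: citations received in one year to a journal's papers from the previous two years, divided by the number of citable items the journal published in those two years. Here's a rough sketch of that arithmetic in Python, with made-up numbers for a hypothetical journal:

```python
# A minimal sketch of the standard two-year impact factor arithmetic.
# The figures below are invented for illustration, not real data.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Impact factor for year Y: citations received in Y to items published
    in Y-1 and Y-2, divided by the number of citable items from Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 420 citations in 2009 to its 2007-2008 papers,
# of which there were 150 citable items.
print(impact_factor(420, 150))  # 2.8
```

Note that every number in that ratio is a journal-level aggregate; nothing in it tells you how any individual article performed, which is exactly the complaint behind article-level metrics.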

But as long as there are multiple venues for researchers to publish their research, editors and publishers will use some kind of measure of their publications to establish credibility. Does it matter if a journal promotes itself by a high impact factor, a high number of combined article downloads, or some other aggregate number?

And researchers need some sort of way to determine a journal’s credibility. Otherwise, we’re left with individuals pulling hoaxes to figure out if a journal is the real deal or not.*

More and faster measurements of articles are interesting and valuable, though the underlying question remains unanswered: What measurement of an article should we be most interested in? Page views are different from downloads, which are different from blog mentions, and all are different from citations. In an open access arena, you can probably crank up the number of page views and downloads by publishing anything with "dinosaur" in the title. (And I say this because I love me some dinosaurs!) In terms of scientific impact, an article read by 500 people might be more influential than one read by 5,000 people, if it's the right 500 people; i.e., if it hits the target audience squarely.

In the end, all of these numbers are about money: grant money, salary money, merit money, and so on. If it weren’t for that, nobody would care about impact factors – or any other measurement of a scientific article’s worth – in the first place.

* Though to hear some of the advocates for Open Access projects like PLoS One tell it, you could be forgiven for thinking that what they want to see is a situation where there are no other journals.
