Academic articles are now evaluated according to essentially the same metrics as Buzzfeed posts and Instagram selfies.
The problem with their thesis is that article metrics are not new. They even discuss this:
Indexing platforms like Scopus, Web of Science and Google Scholar record how many other articles or books cite your article. The idea is that if a paper is good, it is worth talking about. The only thing is, citation rankings count positive and negative references equally.
I’m pretty sure that it has been literally decades since I first read articles about using citation data as a metric for article impact. And one of the concerns raised back then was about mindless counting of citations. “But,” people would object, “what if an article got a lot of citations because people were saying how bad it was?”
This is not a hypothetical scenario. Go into Google Scholar, and look at the number of citations for the retracted paper that tried to link vaccination to autism. 2,624 citations. Or the “arsenic life” paper, which has been discredited, though not retracted. 437 citations. By comparison, my most cited papers are in the low tens of citations.
The defence for using citations as metrics was that negative citations rarely happen. (I seem to recall seeing some empirical data backing that, but it’s been a long time.) But it was certainly much harder for people to dig down and find whether citations were positive or negative before so much of scientific publishing moved online. (Yes, I remember those days when journals were only on paper. ‘Cause I be old.)
Indeed, one of the advantages of the Altmetric applet is that it is trivial to go in and see what people are saying. Click on the recorded tweets, and you can see comments like, “So hard to read it without being outraged,” “Apparently not a parody: paper making 'the case for colonialism'. How does a respected journal allow this? Shocked.” and simply, “Seriously?!” Hard to find any tweets saying something like, “This is a thoughtful piece.”
It wouldn’t surprise me if the Altmetric folks are working on code that will pick out the valence of words in tweets about a paper: “excellent” versus “outraged,” say. Some papers are already analyzing “mood” in tweets (e.g., Suguwara and colleague 2017).
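To make “picking out valence” concrete, here is a minimal sketch of lexicon-based scoring in Python. The word lists, the scoring rule, and the sample tweets are all invented for illustration; I have no idea what, if anything, Altmetric actually runs, and real sentiment analysis is considerably more sophisticated than this.

```python
# A toy, lexicon-based valence scorer for tweets about a paper.
# The word lists and example tweets are made up for illustration;
# this is not how Altmetric (or any real service) works.

POSITIVE = {"excellent", "thoughtful", "important", "insightful"}
NEGATIVE = {"outraged", "shocked", "parody", "awful", "discredited"}

def valence(tweet: str) -> int:
    """Crude score: +1 for each positive word, -1 for each negative word."""
    words = {w.strip(".,!?:;'\"").lower() for w in tweet.split()}
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "So hard to read it without being outraged",
    "Apparently not a parody: paper making 'the case for colonialism'. Shocked.",
    "This is a thoughtful piece.",
]

for t in tweets:
    print(f"{valence(t):+d}  {t}")
```

Even something this crude separates the outraged tweets from the approving one; doing it well (handling sarcasm, negation, and context) is of course much harder.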
So the issue that Roelofs and Gallien are discussing is not a new kind of issue, although the scale at which it is happening may be new. But Roelofs and Gallien fail to show even a single example of the thing they claim is such a danger: someone, anyone, arguing that a paper is good, a worthy piece of scholarship, because of its altmetric score.
It is fair to point out that any metric needs to be used thoughtfully. But I am not seeing the chilling effects they are so worried about, or even much potential for them.
Partisan news sites can play to outrage and be rewarded for it because of their business models: they generate revenue from ads and clickthroughs. Academic journals are not like such news sites. They are not ad supported. They do not generate revenue from clicks and eyeballs. Academic journals exist in a reputation economy: they rely on their reputations to attract submissions, peer reviewers, and editors.
For me, a bigger problem than journals being rewarded for criticism with high altmetric scores is that journals can be so effectively insulated from criticism by publisher bundling (a.k.a. “big deals”). It’s almost impossible for a library to cancel a subscription to a single journal. (And yes, I know there are arguments for cancelling all subscriptions.)
External links
Clickbait and impact: how academia has been hacked
If we had a ten-year-old infrastructure (as opposed to a 20+ year old one), we could simply choose what kind of citation we wanted to make and aggregate them all automatically, instead of building smart algorithms that guess correctly 80% of the time.