“We need to change incentives!”
Ah, how many times have I heard some variation of that phrase in discussions of scientific publishing.
With the creation of UTRGV, my department was forced to create new evaluation documents for annual review, for merit and tenure, and so on. Creating policy documents sounds dull, but I was quite excited by this. You don’t get many opportunities to scrape away all the junk that accumulated over the past few decades that nobody could be bothered to change.
I argued to change our department’s incentives structure. I had a few things I wanted to accomplish.
- I wanted us to reward open access publication and data sharing.
- I wanted to broaden the range of things that could be considered scholarly products to include more than journal articles.
- I wanted our evaluation document to reflect that the current world of scientific publishing is largely online.
My arguments did not convince my colleagues. Mostly.
People voted in favour of rewarding editing a book (which was previously missing from our list) and getting a patent. Progress!
People did not vote in favour of rewarding the sharing of datasets (e.g., on Figshare) or computer code (e.g., on GitHub), although those votes were close. Promising.
The discussion over rewarding publication was revealing.
Previously, we had given multipliers based on whether a paper was published in a regional, national, or international journal. I proposed that instead, we give more weight to an open access journal article, and less weight to an article that appeared in a print-only journal (i.e., not available online).
There were two arguments against rewarding open access papers.
The first was “But it costs money.” I pointed out that many open access journals charge nothing or offer fee waivers. I was also not sure why “I have to pay” was seen as a problem, since one of the legacy departments has long rewarded people for each scientific society they belong to, and that is an out-of-pocket expense to get a reward, too.
The second objection was prestige. I provided links and papers supporting the benefits of open access, the pitfalls of the Impact Factor, and the inadequacy of reprint requests compared to genuine open access. But they were not swayed.
Ultimately, asking my colleagues to imagine a world where a PLOS ONE paper was worth more in an evaluation than a Nature paper felt like asking them to picture a reddish shade of green. They just couldn’t imagine it.
The department voted against the new multipliers.
So the next time you hear, “We just have to change incentives for scientists,” remember that these existing incentives are often ones that many scientists actually want. They are in a cage of their own making and could leave at any time, but won’t.
Photo by Amber Case on Flickr; used under a Creative Commons license.
Zen, thank you so much for pushing for this; congratulations on the small advances you won.
I am so sorry that a majority of your colleagues were so backward-thinking that they chose to blow this unique opportunity to make a real difference.
It certainly shows what we're up against.
Yes, thanks for pushing this. Departments/institutions are very different. I'm on a similar committee, and we won't let impact factor or other such nonsense enter. I'm aware I'm just lucky, though. In my former university it was much as you described.
However, at a review panel with funders, I was able to at least get people thinking and hesitating by challenging them on the notion of journal rank:
http://blogarchive.brembs.net/news.php?item.911.5
I told them that the available evidence I had seen suggested that GlamPapers are actually weaker than other papers, and asked each of them specifically and individually whether they had any evidence to the contrary that I might be unaware of. None of them had any such evidence, so the journal rank argument was significantly weakened for the remainder of the site visit.
Ex-scientists, however, who had become decision-makers, simply dismissed the evidence right off the bat, by essentially stating that their experience of having done everything right trumps any evidence anyone could offer them:
http://bjoern.brembs.net/2015/07/evidence-resistant-science-leaders/
I'm curious, when you presented your colleagues with the data that GlamPapers are methodologically weaker than other papers, did they present contradicting evidence, dismiss the evidence or argue in some other, weaselly way such as "but there is nothing else we can do"?