Showing posts with label open access. Show all posts

27 June 2021

The paradox of MDPI

One of the most puzzling trends in scientific publishing for the last couple of years has been the status of the open access publisher MDPI.

On the one hand, some people I know and respect have published their papers there. I’ve reviewed for some journals, and have seen that authors do make requested changes and there is some real peer review going on.

On the other hand, few other publishers today seem so actively engaged in pissing off the people they work with. Scientists complain about constant requests to review, particularly in areas far outside their domain expertise – an easily avoided and amateurish mistake. 

And MDPI’s boss seems like a dick.

A few people have been trying to make sense of this paradox. Dan Brockington wrote a couple of analyses over the last two years (here, here) that were broadly supportive of what MDPI has done.

Today, I stumbled across this post by Paolo Crosetto that takes a long view of MDPI’s record. It prompted another analysis by Brockington here.

Both are longish reads, but are informed by lots of data, and both are nuanced, avoiding simple “good or bad” narratives. I think one of the most interesting graphs is this one in Crosetto’s post on processing turnarounds:

Graph of time from submission to acceptance at MDPI journals.  2016 shows wide variation from journal to journal; 2020 data shows little variation.

There used to be variation in how long it took to get a paper accepted in an MDPI journal. Now there is almost no spread at all in how long it takes to get a paper accepted in an MDPI journal. That sort of change seems highly unlikely to happen just by accident. It looks a lot like a top-down directive from the publisher, putting a thumb on the decision-making process, not the result of editors running their journals independently.

Both Crosetto and Brockington acknowledge that there is good research in some journals. 

The question seems to be whether that good reputation is being thrown away by the publisher’s pursuit of more articles, particularly in “Special Issues.” Crosetto suspects that MDPI is scared and wants to extract as much money (or “rent,” as he calls it) from as many people as fast as possible. Brockington says that this may or may not be a problem. It all depends on something rather unpredictable: scientists’ reactions. 

Scientists may be super annoyed by the spammy emails, but they might be happier about fast turnaround times (which people want to an unrealistic degree) with a high chance of acceptance. 

If the last decade or so in academic publishing has taught us anything, it’s that there seems to be no upper limit for scientists’ desire for venues in which to publish their work.

PLOS ONE blew open the doors and quickly became the world’s biggest journal by a long way. But even though it published tens of thousands of papers in a single year, PLOS ONE clones cropped up and even managed to surpass it in the number of papers published per year. 

MDPI is hardly alone in presenting bigger menus for researchers choosing where to publish. Practically every publisher is expanding its list of journals at a decent clip. I remember when Nature was one journal, not a brand slapped across the titles of over 50 journals.

MDPI is becoming a case study in graylisting. As much as we crave clear categories for journals as “real” (whitelists) or “predatory” (blacklists), the reality can be complicated.

Update, 1 July 2021: A poll I ran on Twitter indicates deep skepticism of MDPI, with lots of people saying they would not publish there.

Would you submit an article to an MDPI journal?

I have done: 9.4%
I would do: 3.9%
I would not: 50%
Show results: 36.7%

Update, 21 August 2021: A new paper by Oviedo-García analyzes MDPI’s publishing practices. It makes note of many of the features in the blog posts above: the burgeoning number of special issues, the consistently short review times across all journals. Oviedo-García basically calls MDPI a predatory publisher.

This earned a response from MDPI, which unsurprisingly disagrees.

External links

An open letter to MDPI publishing

MDPI journals: 2015 to 2019

Is MDPI a predatory publisher?

MDPI journals: 2015 to 2020 

Oviedo-García MÁ. 2021. Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI). Research Evaluation: in press. https://doi.org/10.1093/reseval/rvab020

Comment on: 'Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI)' from Oviedo-García

Related posts 

My resolve not to shoot the hostage is tested

Graylists for academic publishing

08 December 2020

My resolve not to shoot the hostage is tested

I’ve written before about how refusing to review a paper because you don’t like a journal hurts authors more than editors or publishers. I called refusing to review “shooting the hostage.”

I am being sorely tested in my resolve not to shoot the hostage.

MDPI is a publisher already short of good will because of its amateurish practices. Last week, its president seemed intent on burning any remaining good will by spouting some pretty fascist-sounding rhetoric.

When I got an invitation to review yesterday, I legitimately couldn’t do it because I’m moving. But it was a lot easier to say “No” than it would have been otherwise.

24 October 2019

How academic publishing is like a really nice bra

In my jackdaw meanderings around the internet, I stumbled on this thread from Cora Harrington.

Sometimes I like to look at lace prices on sites like Sophie Hallette. It’s good for giving perspective on how, even if the cost of lingerie was just fabrics (and it’s not because people should be paid for their labor), many items would still be expensive.

She gives many examples, of which I will show just one (emphasis added):

The Chloris reembroidered lace is around $1600/meter.


And that isn’t the most expensive one. Cora concludes:

When someone says “There’s no way x could cost that much,” keep in mind that there are fabrics - literally just the fabrics - that can cost 4 figures per meter.

And the labor - the expertise - involved in knowing how to handle these fabrics is worth many, many times more.

This made me think a lot about academic publishing, because I am always fascinated by people who say that undergraduate textbooks or journal subscriptions or article processing fees for open access publishing cost “too much.” When someone says something costs “too much,” that means they have some notion in their head of what the “right” price is.

But as this example shows, people don’t always have a clear conception of the costs involved. And people complaining about costs sometimes tend to assume that the labour involved is simple, quick, and not worth paying a decent wage for.

This is not to say prices can’t be too high. But at least as far as academic publishing goes, I’ve only seen one attempt to work out what costs are. That is, apart from publishers themselves, who have conflicts of interest in calculating and disclosing costs.

11 May 2019

A pre-print experiment, part 3: Someone did notice

In 2016, I wrote a grumpy blog post about my worries that posting preprints is probably strongly subject to the Matthew effect. It was a reaction to Twitter anecdotes about researchers (usually famous) posting preprints and immediately getting lots of attention and feedback on their work. I wanted to see if someone less famous (i.e., me) could get attention for a preprint without personally spruiking it extensively on social media.

I felt my preprint was ignored (until I wrote aforementioned grumpy blog post). But here we are a few years later, and I’m re-evaluating that conclusion.

A new article about bioRxiv is out (Abdill and Blekhman 2019), and it includes Rxivist, a website that tracks data about manuscripts in bioRxiv. Having posted a paper in bioRxiv, that means my paper is tracked in Rxivist.

It’s always interesting to be a data point in someone else’s paper.

The search function is a little wonky, but I did find my paper, and was surprised (click to enlarge).


Rxivist showed that there has been a small but consistent number of downloads (Downloaded 421 times). Not only that, but the paper is faring pretty well compared to others on the site.
  • Download rankings, all-time:
    • Site-wide: 17,413 out of 49,290
    • In ecology: 542 out of 2,046
  • Since beginning of last month:
    • Site-wide: 19,899 out of 49,290
My little sand crab natural history paper is in the top half of papers in bioRxiv?

I did not expect that. Not at all.
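Those rankings do put the paper comfortably in the top half. A quick sanity check of the figures quoted above (the numbers are copied from the Rxivist listing, so treat them as a snapshot):

```python
# Percentile positions implied by the Rxivist rankings quoted above.
# Figures are a snapshot from the Rxivist listing for the sand crab paper.
site_rank, site_total = 17_413, 49_290   # all-time download rank, site-wide
eco_rank, eco_total = 542, 2_046         # all-time download rank, in ecology

site_fraction = site_rank / site_total   # fraction of papers ranked at or above it
eco_fraction = eco_rank / eco_total

print(f"Site-wide: top {site_fraction:.0%} of bioRxiv papers")  # top ~35%
print(f"In ecology: top {eco_fraction:.0%}")                    # top ~26%
```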

I know there is an initial spike because I wrote my grumpy blog post and did an interview about preprints that got some attention, but even so. I know there aren’t hundreds of people doing research on sand crabs around the world, so hundreds of downloads is a much wider reach than I expected.

And some of the biggest months (October 2018) are after the final, official paper was published in Journal of Coastal Research. The final paper is open access on the journal website, too, so it’s not as though people are downloading the preprint to circumvent paywalls. (Though in researching this blog post, I learned a secondary site, BioOne, is not treating the paper as open access. Sigh.) (Update, 14 May 2019: BioOne fixed the open access problem!)

I am feeling much better about those numbers now than in the first few months after I posted the paper. I never would have anticipated that long tail of downloads years after the final paper is out.

And Rxivist certainly does a better job of providing metrics than the journal article does:


There’s an Altmetric score but nothing else. It’s nice that the Altmetric score for the preprint and published paper are directly comparable (and I’m happy to see the score of 24 for the paper is a little higher than the preprint at 13!), but I miss the data that Rxivist provides.

Other journals provide top 10 lists (and I’ve been happy to be on those a couple of times), but they tend to be very minimal. You often don’t know the underlying formula for how they generate those lists. The Journal of Coastal Research has a top 50 articles page that shows raw download numbers for those articles, and if you are not in that list, you have no idea how your article is doing.

While I still never got any feedback on my article before publication, I don’t feel like posting that preprint was a waste of time like I once did.

References

Abdill RJ, Blekhman R. 2019. Tracking the popularity and outcomes of all bioRxiv preprints. eLife 8: e45133. https://doi.org/10.7554/eLife.45133

Faulkes Z. 2016. The long-term sand crab study: phenology, geographic size variation, and a rare new colour morph in Lepidopa benedicti (Decapoda: Albuneidae). BioRXiv https://doi.org/10.1101/041376

Faulkes Z. 2017. The phenology of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae). Journal of Coastal Research 33(5): 1095-1101. https://doi.org/10.2112/JCOASTRES-D-16-00125.1 (BioOne site is paywalled; open access at https://www.jcronline.org/doi/full/10.2112/JCOASTRES-D-16-00125.1)

Related posts

A pre-print experiment: will anyone notice?
A pre-print experiment, continued

Fiddly bits and increments

External links

Sand crab paper on Rxivist

12 April 2019

“Open access” is not synonymous with “author pays”

Wingfield and Millar have a well-meaning but misleading article, “The open access research model is hurting academics in poorer countries.” They say:

The open access model merely changes who pays. So rather than individuals or institutions paying to have access to publications, increasingly, academics are expected to pay for publishing their research in these “open access” journals. ... The bottom line is that payment has been transferred from institutions and individuals paying to have access to researchers having to pay to have their work published.

The first sentence is correct. The second is even correct: there are now more journals that require article processing charges than there used to be. Importantly, though, the phenomenon of authors paying is not new. “Page charges” existed long before open access.

But they lose all nuance in the third sentence and commit a category error. They are confusing “freedom to read” with “business model.” These two things are not the same.

There are many counter examples to their central premise. SciELO journals are open access, but have no article processing fees. I could go on.

I am not saying that there is not a concern about the effects of article processing charges. It isn’t even restricted to scientists in “poorer countries.” Michael Hendricks, a biologist at one of Canada’s major research universities (hardly a “poorer country” by any measure, and not even a “poorer institution” by any measure) is concerned about the cost of article processing charges. He wrote:

US$2500 is 1% of an R01 modular budget. It is 2.5% of the average CIHR Project grant. It’s 10% of the average NSERC grant.

Add to that the vastly differing support across universities for article processing charges (ours is $0). There is no way around the fact that shifting publication costs from libraries to PIs imposes a massively different burden according to PI, field of science, nation, and institution.

The solution is that universities should pay article processing charges by cancelling subscriptions (with huge $ savings). But they generally aren’t. The only way I see to force the issue is for funders to make article processing charges ineligible, which will be seen as an attack on open access.

It’s a real problem: library subscription costs are staying the same or going up. At the same time, more and more grant money is being spent on article processing charges. The public paying even more for science dissemination than they were is not what we want. Funders and/or universities have to stop this.

But looking back up to the counter-example, SciELO, shows something important. It shows that you can create open access journals with alternative business models that are not “author pays.” It’s unusual, maybe even difficult, but it’s not impossible.

That’s a line we should be pursuing. Not dumping on open access because people can’t distinguish between “common” and “necessary.”

External links

The open access research model is hurting academics in poorer countries

28 March 2018

Innovation must be accompanied by education


When Apple launched the iPod, the company had to put a lot of effort into educating people about digital music.

Mr. Jobs pulled the white, rectangular device out of the front pocket of his jeans and held it up for the audience. Polite applause. Many looked like they didn’t get it.

That was just fine with Mr. Jobs. They’d understand soon enough.

Apple had to inform the mass market that digital downloads could be legal (remember Napster?). They had to let people know how much music you could have with you. They had to let people know about the iTunes store. Without all those pieces of the puzzle, the iPod would have tanked.

I was reminded of this scene when Timothy Verstynen asked:

Why can’t we have a scientific journal where, instead of PDFs, papers are published as @ProjectJupyter notebooks (say using Binders), with full access to the data & code used to generate the figures/main results? What current barriers are preventing that?

I follow scientific publishing at a moderate level. I write about it. I’m generally interested in it. And I have no idea what Jupyter notebooks and binders are. If I don’t know about it, I can guarantee that nobody else in my department will have the foggiest idea.

This is a recurring problem with discussions around reforming or innovating in scientific publishing. The level of interest and innovation and passion around new publication ideas just doesn’t reach a wide community.

I think that this is because those people interested might undervalue the importance of educating other scientists about their ideas. Randy Olson talks a lot about how scientists are cheapskates with their communications budgets. They just don’t think it’s important, and assume the superiority of the ideas will carry the day.

I’ve talked with colleagues about open access many times, and discover over and over that people have huge misconceptions about what open access is and how it works. And open access is something that has been around for a decade and has been written about a lot.

Publishing reformers drop the iPod, but don’t do the legwork to tell people how the iPod works.

So to answer Timothy’s initial question: the current barrier is ignorance.

29 January 2018

Goodbye, Storify


Storify is shutting down soon. Which is a shame. There was a point where I, and others, were using it a lot. It was a nice way to compile lots of internet resources into a single coherent timeline.

This has some relevance to matters of scientific publishing. On lots of sites like Quora, I see variations of, “Why can’t scientific articles be free to read?” Heck, here are some:


Online services — like Storify — may contribute to the lack of understanding that publishing is not free, regardless of whether the reader pays or not. People make stories, it costs them nothing but their sign-up information, and they wonder why scientific publishing can’t be the same.


People do not understand that services that are called “free” are only free to them, not free across the board. Someone is paying bills. Preprint servers get millions of dollars in support to keep them running.
 
Or, you have operations that are not able to make a go of it, and close up shop, like Storify is now doing. Or like Google Reader did. (That one still makes me sad.)

The closing of Storify shows one of the reasons “free” is not a good way to think about scientific publishing. “Free to read,” sure. But as much as I love me some free to user online services (like Blogger, which has powered my writing here for over a decade and a half), they’re not a good model for scholarly publication.

I am playing with Wakelet as a replacement for Storify.

Hat tip to Carl Zimmer for the news about Storify.

24 September 2017

Paying to publish and Poynder

Richard Poynder and Michael Eisen got into it on Twitter over the weekend over open access publishing. Poynder wrote:


My view is that PLOS legitimised a deeply-flawed business model: pay-to-publish.

Hm. The problem is that many journals used “pay to publish” before PLOS journals came along. They were called “page charges.” You can still find many journals with page charges that are not open access. Cofactor has a list here.

These seem to indicate that asking scientists to bear some of the cost of publication is not inherently problematic. At least, I certainly don’t recall any serious discussion about them as deeply flawed. There probably should have been. But people accepted page charges as a normal, routine part of some corners of academic publishing. Saying PLOS legitimized that model is questionable.

PLOS ONE revolutionized academic publishing. But what was revolutionary was its editorial policy of not screening for “importance.” That led to it publishing a lot of papers and generating a lot of money. It was through that combination that PLOS ONE paved the way for many imitators, including bad journals (documented in Stinging the Predators).

To me, the bigger problem is that “pay to publish” is very often equated – wrongly – with “open access.” The business model used to support publishing is not closely related to whether people can freely read the paper.

External links

Journals that charge authors (and not for open access publication)

18 September 2017

A pre-print experiment, continued


Over a year ago, I uploaded a preprint to bioRxiv. When people upload preprints, bioRxiv sensibly adds a disclaimer: “This article is a preprint and has not been peer-reviewed.”

A little over a week ago, the final, paginated version of the paper that arose from the preprint was published. Now, bioRxiv is supposed to update its notice automatically to say, “Now published in (journal name and DOI).”

Perhaps because the final paper was substantially different than the preprint – in particular, the title changed – bioRxiv didn’t catch it. I had to email bioRxiv’s moderators through the contact form asking them to make the update.

The preprint was making more work for me. Again. It wasn’t a lot of work, I admit, but people advocating preprints often talk about them as though they take effectively zero time. They don’t. You have to pay attention to them to ensure things are being done properly. I want people to cite the final paper when it’s available, not the preprint.

Some journals are talking about using bioRxiv as their submission platform. This would be a good step, because it would remove work duplication.

I’m glad I’ve been through the preprint experience. But I am still not sold on its benefits to me as a routine part of my workflow. It seems all the advantages that I might gain from preprints can be achieved by other methods, notably publishing in open access journals with a good track record of peer review and fast production times.

Related posts

A pre-print experiment: will anyone notice?

26 July 2017

World’s worst... scientific papers


I have a new project to share! Just for fun, I spent the last few days making another little ebook, similar to what I did with Presentation Tips.

Stinging the Predators is a collection of deliberately horrible papers that were created to punk predatory journals. There have been six such pranks in the last two years. The most recent, which sort of triggered this project, was on Neuroskeptic’s blog last Saturday. Thinking about all the “sting” papers I’d seen over the years, it occurred to me that fake papers were practically their own emerging genre. And what better way to draw attention to a genre than with a curated anthology?

I collected all the sting papers I knew about. There turned out to be thirteen, and collecting them convinced me that it was useful to have all these examples in one place. Each paper has a short new introduction, and links to articles about it. I rounded off the collection with some short essays, some of which appeared here on the blog before, and a couple of which were new.

Once I got started with this project, I couldn’t let it go. I promised myself I would only let myself work on it for a few days, and then get back to work on writing that could be published by other people.

The ebook is available on figshare and on DoctorZen.net.

Update, 28 July 2017: After I posted the first version, I was reminded of another sting paper on Google Plus (see? It’s not a ghost town). I found another abstract after that. I decided to make a quick turnaround from version 1 to 2. There are now fifteen entries in this anthology.

The easy to remember link is http://bit.ly/StingPred. (Capitalization matters! “stingpred” will not work.)

Update, 31 July 2017: I know, two revisions in less than a week? I learned of another sting paper, and another conference abstract, bringing the collection to 169 pages of mostly rubbish. (Some will probably say all of it is rubbish.)

Update, 7 August 2017: This little project is featured in Times Higher Education today and the Improbable Research blog!

External links

Stinging the Predators on Figshare
Predatory Journals Hit By ‘Star Wars’ Sting
Worst ever research papers revealed
“All these papers were deliberately bad”

03 July 2017

American Society of Parasitologists, Day 5

For the last day at the Parasitologists conference, I mostly sat in on taxonomy talks. Now, I love taxonomists and admire the work that they do to no end, but I think it’s fair to say that their talks do not always have the most compelling narratives. So most of my notes for talks I saw were very short.

Sara Brandt: Schistosome taxonomy. Thinks snail ecology plays the biggest role in determining the schistosome relationships.

Santos Portugal (@jsportugal3): Tick phylogeny.

Tim Ruhnke: Cestode tapeworm phylogeny.

Veronica Mantovani Bueno: More cestode tapeworm phylogeny. The revision of the taxonomy of host skates and rays led to big changes in interpretation of the taxonomy and ecology of their cestode parasites. There seem to be very relaxed associations between host and parasite. Some of the cestodes she studies have very similar DNA sequences, but dramatically different morphology.

Anna Phillips: new medicinal leech. #CollectionsAreEssential

Carlos Ruiz: I came in late and missed the start of this talk, but it involved possible new copepod species.

Jackson Roberts: Turtle blood flukes, of which he described one new species. A bunch of stuff is coming about flukes in South American turtles.

Bret Warren: Looking at flukes in sturgeon. I learned that Lake Winnebago has a sturgeon fishery, which is spearfishing in winter, through holes in ice. That alone was worth the price of admission. Here’s a video of this great tradition:


Carlos Ruiz again (this was sprung on him about 10 minutes before the talk): Myxozoans are parasitic jellyfish. In this case, they cause “whirling disease” in fish. Very tough to get rid of. Started with reports from anglers noticing strange fish. State natural resources came on board to get samples.

After the contributed talks, the moment I had been waiting for: poster session! I had a poster that I was very happy with. I’ll show it on the Better Posters blog after the paper is published. (I’m writing it now!)


I was also super pleased to be reunited with my SICB symposium partner in crime, Kelly Weinersmith, who had new progeny with her.


Because the diversity of parasite research is so wide, it can be hard to detect commonalities across a conference (of which I saw less than half, at best). But there was a recurring theme at this meeting.

Parasitology, like much of biology, has been transformed by molecular biology. The techniques are making it possible to answer questions that would have been very difficult to answer without them. For instance, “Is this species of parasite in this intermediate host the same species in this definitive host?”

But parasitologists emphatically do not want molecular biology to take over their field.

Several speakers referenced the #CollectionsAreEssential hashtag on Twitter, which was prompted by the possible loss of NSF funding supporting museum collections. Museum collections are constantly under threat, and constantly proving useful to current science.

Several people noted that DNA sequence data needs to be connected to “ground truths”: you have to be able to see the organism whose DNA you are sequencing.

The recurring theme of this meeting was that for parasitology to remain a viable field, never mind a vibrant one, organismal biology has to remain strong. This is going to be a challenge, because many people find the “Sequence it all and let algorithms sort it out” approach enticing.

One last note that is tangential to the conference, but relevant to a recent post on publishing costs: The Journal of Parasitology has very competitive article processing charges, particularly for open access. Even nonmembers can publish open access for $1,000, about the same as PeerJ, which is one of the most cost-effective open access megajournals.

Related posts

American Society of Parasitologists, Day 1 and 2
American Society of Parasitologists, Day 3
American Society of Parasitologists, Day 4

27 June 2017

The problem is scientists, not publishers

Because I hate people who just retweet something interesting and say, “Thread,” I’m compiling Jason Hoyt’s series of tweets about the state of scientific publishing into a blog post. Jason’s thread was initiated by this article in The Guardian, “Is the staggeringly profitable business of scientific publishing bad for science?”

For context, Jason is founder and CEO of PeerJ, which I have published in, and will do so again. I have lightly edited the tweets for clarity and emphasis.

This will be controversial, but the problem is scientists, not publishers. While the article may get the history of the problem accurate, it is going to perpetuate several myths about the current root of the issue. Scientists continuing to blame publishers, rather than the root, is pretty damn unscientific.

Plenty of cheap or even free publishing solutions exist. PeerJ even provides lifetime open access publishing for almost nothing, but very few scientists care about price when deciding where to publish. Scientists care about impressing grant, tenure, and hiring committees, made up of other scientists, and the committees care about Impact Factor as a vanity metric for quality. It is the tenure/hiring, NIH, NSF, and grant committees, not publishers, that are the ones in power and need to make the changes. Demanding publishers do so will do little.

So why aren’t the pitchforks out against the committees? There is only one group that can lead that charge, and it isn’t the publishers. Why aren’t committees looking at the merits of the article rather than the journal it is published in? Why aren’t committees more proactive in saying publish in cheaper open access alternatives like PeerJ?

PeerJ started out at $99 for lifetime publishing. That would have saved governments and funders $9 billion a year. Yet no funders or committees have approached PeerJ since it was launched five years ago with any support, acknowledgement, or promotion. Instead, Nobel laureates and funders launch an elitist journal (I believe Jason is referring to eLife. - ZF), perpetuate the Impact Factor, whilst hypocritically blaming Cell, Nature, and Science journals. Instead, they fund elitist “non-profit” journals that charge $2,000 an article yet still only cover half their costs. This makes no sense.

Then scientists wonder why PeerJ had the gall to raise lifetime open access publishing from $99 to $399. The world doesn’t want nice things.

One of the myths is that academics do all the work. And daily I see an academic complain about journals and propose starting their own. Well – I am one of those academics who started their own. And let me say a peer-reviewed journal does not run itself. At best you’d get a few pubs out per year with only volunteers. And the quality would be shite.

For starters – authors demand peer review to be timely. Counter to that, the reviewers don’t want to be rushed and get angry. Without anyone chasing reviewers, the world would never see reviews hit the light of day. That’s fine then, the world doesn’t need millions of papers, just the ones people want to actually review without chasing. The problem with that is the literature is full of papers now highly cited that were rejected many times. So who is going to chase the reviewers for a million manuscripts? Volunteers? Nope. You have paid staff. So how do you pay the staff? Either through grants, subscriptions, or open access fees. So now the journal you started in protest of commercial publishers is in the same boat.

“But certainly you could do it cheaper!” you say. Cheaper than $99 for lifetime publishing like PeerJ? Don’t forget long-term archiving storage, a stupid typeset PDF, because that’s what readers demand, etcetera, etcetera.

“Well, screw it,” they say. “We’ll do preprints and have ‘overlay’ journals for post-pub peer review.” Except preprints aren’t free either. arXiv costs more than $1 million a year to operate as a non-profit. And again, who is going to chase the reviewers for the preprint post-pub reviews? Volunteers? Volunteers for over 1 million preprints?

So again, when I see people complain about high cost of publishing, I have to laugh. More like cry. We have the solutions already, but little uptake. Who is to blame then? When I read tweets from academics that they won’t bother reading low Impact Factor journals, who is to blame? (By the way, how unscientific is skipping a literature review just because the journal has a lower Impact Factor, for fuck’s sake?)

The world doesn’t want nice things. We built a quid pro quo system of cheap open access with PeerJ. We asked $99 lifetime members, if invited, to do a peer review to support the community. People complained about the quid pro quo. The world could still have cheap publishing – if it is willing.

Elsevier and others are more than happy to keep taking the blame for the system. It’s a misdirection. As long as scientists don’t start protesting tenure and hiring committees, then Elsevier’s profit margins are safe. Every time there is a new Elsevier boycott, it lasts a week, and then everyone forgets. They know this. And they’re just cogs in the system like everyone else.

Update: Shortly after I published this, this appeared in my Twitter timeline, which is in line with many of Jason’s points. Butch Brodie promoted Evolution Letters by saying it was open access, had low publication costs for Society for the Study of Evolution members, and those costs go back to society initiatives.


The article processing fee for Evolution Letters is $1,800 if you’re a member of the Society for the Study of Evolution. If you’re not, it’ll cost you $2,500. And you know that not all of that fee will go directly back to the society. Some of it is going to the publisher.

PeerJ is also open access and is cheaper: $1,095. You pay a 64% premium to have your article in a society journal.

I suggest publishing in PeerJ, donating a few hundred bucks to your scientific society directly, and leaving yourself a few bucks for dinner and a movie.

Update, 10 October 2017: I learned that the Evolution Letters publisher gets 50% of the article processing fee, and the two sponsoring societies each get 25%. So, as a Society for the Study of Evolution member, $450 is going to the society if I publish in the journal.

Which means my claim before was correct. I can publish in PeerJ for $1,095 and donate $505 to the Society for a total of $1,600. The Society gets more money and I keep more money.
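The arithmetic in the paragraphs above is easy to check. A minimal sketch, using only the figures quoted in this post (the 25% society share is the split reported in the 10 October 2017 update):

```python
# Route 1: publish in Evolution Letters at the Society member rate.
# Route 2: publish in PeerJ and donate directly to the Society.
# All dollar figures are the ones quoted in this post.
EVOLUTION_LETTERS_MEMBER_APC = 1800  # US$
SOCIETY_SHARE = 0.25                 # fraction of the APC that reaches the society
PEERJ_APC = 1095                     # US$
DONATION = 505                       # US$, direct gift in route 2

society_gets_route1 = EVOLUTION_LETTERS_MEMBER_APC * SOCIETY_SHARE  # 450.0
author_pays_route1 = EVOLUTION_LETTERS_MEMBER_APC                   # 1800

society_gets_route2 = DONATION                 # 505
author_pays_route2 = PEERJ_APC + DONATION      # 1600

# The "64% premium" mentioned earlier:
premium = EVOLUTION_LETTERS_MEMBER_APC / PEERJ_APC - 1  # ~0.64

print(f"Society gets ${society_gets_route2 - society_gets_route1:.0f} more via route 2")
print(f"Author keeps ${author_pays_route1 - author_pays_route2} via route 2")
```

Both numbers come out in favour of the PeerJ-plus-donation route: the Society is $55 ahead and the author is $200 ahead.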

Also, this is a good time to link to this description of the editorial process at American Naturalist. I don’t think all journals put this amount of effort into editorial, and I’m not sure all authors want that much input (some might deem it “interference”).

External links

How much does it cost to run a small scholarly publisher?
On pastrami and the business of PLOS
An efficient journal
The Journal of Open Source Software costs
The secret lives of manuscripts

Related posts

The cages we scientists make for ourselves
 

19 June 2017

Beware simple narratives in academic publishing

NeuroLogica blog has an article examining the loss of Jeffrey Beall’s list of dubious publishers. This post presents a nice, clean narrative: a good guys versus bad guys story. Jeffrey Beall is the good guy and predatory publishers are the bad guys. You can practically hear the movie trailer voiceover. “In a world where lawless predatory journals abound, one librarian has the courage to name and shame them. Fighting strongarm tactics from the publishers and spineless university administration, he fights to save the world from ever more dodgy science.”

But the reality is more complicated, I’ll argue.

The NeuroLogica post says:

Traditional journals earn their money from subscriptions and advertising.

This suggests that libraries – the main customers for traditional commercial publishers – are going in and making journal subscription decisions on a case by case basis (like you would with magazine subscriptions). But many libraries don’t have that option. Instead, most libraries get journals through “big deals” from publishers, where large numbers of journals are bundled together in a single indivisible package. These “big deals” don’t have a standard price (plotted graphically here) and librarians are bound by confidentiality agreements not to discuss them.

Subscription publishers have incentives to create more journals to justify increasing the price tag on their “big deals.” It’s not clear that incentives to create more journals have substantially different results than incentives to accept more papers.

Many journals do not run ads at all. Some do, but don’t run many.

And, as one commenter noted, this description overlooks page charges entirely.

NeuroLogica continues:

In 2013 Science magazine published the results of a sting in which a fake and terrible paper was submitted to over 300 open access journals. Sixty percent of the journals published the bogus paper, which should not have made it past even the flimsiest peer-review.

The implication here is that zero percent of subscription journals would have accepted the fake paper. But we don’t know, because no subscription journals were sent the fake paper. But some of the journals that accepted the fake paper were listed in Web of Science, which is supposed to be a vetted database of “best of the best” scientific journals. This suggests more subscription journals might have fallen for this fake paper than we would like to think.

The game of “How did this get published?” is one that scientists played long before the phrase “open access” was coined.

Predatory journals contribute to a blurring of the lines between science and pseudoscience, essentially flooding the world with low quality and bogus studies and promoting the borderline academics who produce them.

One of the biggest academic publishers in the world, Elsevier, publishes a subscription journal called Homeopathy. It doesn’t get much more pseudoscientific than that.

Beall was providing an invaluable service by pointing out practices among some journals that violated the spirit and the process of quality control in science.

Granted, but we should not overlook that “Beall’s list” was written and maintained by one person. His decisions were based on criteria that were not objective or transparent. For example, Beall once included the then-new publisher Hindawi on his list of predatory publishers, then later removed it for no readily apparent reason.

This seems like an opportune moment to note that there is a new service from a Texas company called Cabell’s that will attempt to provide both a journal blacklist and a journal whitelist. This is an established company (founded 1978), but their lists are new. I think this is a very interesting development worth watching.

External links

Open Access Predatory Journals

Cabell’s: ‘Our journal Blacklist differs from Jeffrey Beall’s’

Related posts
 
Open access of vanity press, the Science “sting” edition
How much harm is done by predatory journals? 
Time for a new list of junk journals

25 January 2017

Time for a new list of junk journals


Earlier this month, Jeffrey Beall (rhymes with “wheel”) removed his blog and well-known list of probable “predatory” open access publishers. I was too slow in writing a blog post about it, but this excellent one by Neuroskeptic covers a lot of territory I would have covered.

In my view the demise of Beall’s project is a sad day for science. While his work was sometimes controversial, he was just about the only person who seemed to take predatory publishing seriously and who tried to do something about it. ...

On the other hand, I don’t think he should have been doing this job alone. ... The one-man nature of Beall’s operation left him open to charges of being arbitrary and opaque in how he decided where to draw the line between legitimate and predatory publishing. I think he made the right calls the vast majority of the time, but then again, he has not been transparent about why he shut down the site.

I haven’t forgotten that Beall once argued (badly) that open access was motivated by “anti-corporate” sentiment, and that weakened his credibility.

While I have argued before that I don’t think junk journals are that big a problem, it’s not zero harm, either. Like many predators, junk journals prey on the weak: researchers who are disconnected from a professional community and don’t have a clear understanding of why some journals are not legitimate.

In discussions about our new tenure and promotion requirements in our department, the worry about “What if someone just publishes in predatory journals?” was brought up repeatedly. I argued that this was not that big a problem, and that we had criticized colleagues for publishing in dodgy journals. Beall’s list was one of the resources that we could point at to back up criticisms.

People want resources to help them find their way in the wild west of scientific publishing in the early twenty-first century. And while the Directory of Open Access Journals is valuable, it has a problem: it’s a whitelist. Beall’s list was a blacklist, and somewhere along the way, Beall mentioned something important:

No one lies about being on a blacklist.

We can’t spend all our time sending obviously bad manuscripts to junk journals to punk them. I kind of hate to say it, but we could use a journal blacklist. Maybe even one that would call out legitimate publishers who don’t clean out their stable as they should.

Related posts

How much harm is done by predatory journals?
Dubious journals from major scientific publishers: Homeopathy

External links

Predatory Publishers: Why I’ll Miss Jeffrey Beall
Beall’s litter
This 'predatory' science journal published our ludicrous editorial mocking its practices 

21 June 2016

Evolution 2016, Day 4


Yes, I saw some cool science yesterday, including some cool and contradictory results on how predators shape brain evolution (big brains are favoured in high predator environments in guppies but not killifish?), but let me jump right to the big news.

The Society for the Study of Evolution's flagship journal, Evolution, will be moving to an online only journal, with all papers becoming open access two years after publication. Decades of papers, including many classics, will be free to read in early 2017.

The Society is also launching a new online, open access, "high impact" journal in early 2017, Evolution Letters.

Let me be among the first to congratulate the Society for the Study of Evolution for moving their publications toward a superior and more modern way of scientific communication.

And I think I am among the first to pat the Society on the back, because, judging from the reaction in my Twitter feed, these announcements are widely regarded as bad moves. 

People are mad that Evolution won't be immediate open access, that the two year embargo is too long for NIH funded researchers, and that the journal is still being published by Wiley, one of the biggest for profit publishers.

People (including, it must be said, myself) worry that Evolution Letters might as well be titled Evolution Rejects. The perception is that the journal will be a dumping ground for those papers that are not considered novel enough for the flagship journal.

I've been critical before about the creation of new journals that serve no editorial purpose. I worry Evolution Letters will be one of those. It's not being created to define an emerging field of research, but as part of a business plan. But I have no doubt that it will have an audience. Scientific manuscripts expand to fill the available journals.

I worry that the "54-40 or fight" attitude to open access might be a little counterproductive. While it's important to shift that Overton window, criticizing good but imperfect progress might discourage people from trying to make any progress at all.

08 June 2016

The cages we scientists make for ourselves

 
“We need to change incentives!”

Ah, how many times I have heard some variation of that phrase in describing scientific publishing.

With the creation of UTRGV, my department was forced to create new evaluation documents for annual review, for merit and tenure, and so on. Creating policy documents sounds dull, but I was quite excited by this. You don’t get many opportunities to scrape away all the junk that accumulated over the past few decades that nobody could be bothered to change. This is not an opportunity that comes along every day.

I argued to change our department’s incentives structure. I had a few things I wanted to accomplish.

  • I wanted us to reward open access publication and data sharing.
  • I wanted to broaden the range of things that could be considered scholarly products to include more than journal articles.
  • I wanted our evaluation document to reflect that the current world of scientific publishing is largely online.

My arguments did not convince my colleagues. Mostly.

People voted in favour of rewarding people for editing a book (which was previously missing from our list), or getting a patent. Progress!

People did not vote in favour of rewarding the sharing of datasets (e.g., on Figshare) or computer code (e.g., on GitHub), although those votes were close. Promising.

The discussion over rewarding publication was revealing.

Previously, we had given multipliers for whether a paper was published in a regional, national, or international journal. I proposed that instead, we give more weight for an open access journal article, and less weight for an article that appeared in a print-only journal (i.e., not available online).

There were two arguments against rewarding open access papers.

The first was “But it costs money.” I pointed out that many open access journals charge nothing, or have fee waivers. I was also not sure why “I have to pay” was seen as a problem, since one of the legacy departments has long rewarded people for each scientific society they belong to, and that’s an out of pocket expense to get a reward, too.

The second objection was prestige. I provided links and papers to support arguments about the benefits of open access, the pitfalls of Impact Factor, and why reprint requests don’t cut it compared to genuine open access. But they were not swayed.

Ultimately, asking my colleagues to imagine a world where a PLOS ONE paper was worth more in an evaluation than a Nature paper was like asking them to picture a reddish shade of green. They just couldn’t imagine it.

The department voted against the new multipliers.

So the next time you hear, “We just have to change incentives for scientists,” remember that these existing incentives are often ones that many scientists actually want. They are in a cage of their own making and could leave at any time, but won’t.

Photo by Amber Case on Flickr; used under a Creative Commons license.

17 March 2016

A pre-print experiment: will anyone notice?


In late February, there was a lot of chatter on my Twitter feed from the #ASAPBio meeting about using pre-prints in biology. This has been the accepted practice in physics for decades.

My previous experience with pre-prints was underwhelming. I’d rather have one definitive version of record. And I’d like the benefits of it being reviewed and edited before release. Besides, my research is so far from glamorous that I’m not convinced a pre-print makes a difference.

Following the ASAPbio meeting, I saw congratulatory tweets like this:

Randy Schekman strikes again: yet another #nobelpreprint - Richard Sever
Marty Chalfie on bioRxiv! That’s Nobel #2 today - Richard Sever
Yay, Hopi Hoekstra (@hopihoekstra) just published on @biorxivpreprint - Leslie Voshall

Similarly, a New York Times article on pre-prints that appeared several weeks later focused on the Nobel laureates. I admit I got annoyed by tweets and articles about Nobel winners and Ivy League professors and HHMI labs and established professors at major research universities using pre-prints. I wasn’t the only one:

 I wish this article didn’t erase the biologists who have been posting to arXiv for years.

If pre-prints are going to become the norm in biology, they can’t just work for the established superstars. Pre-prints have to have benefits for the rank and file out there. It can’t just be “more work.”

For example, I think one of the reasons PLOS ONE was a success was that it provided benefits not just for superstars, but for regular scientists doing solid but incremental work: a venue that didn’t screen for importance. That was a huge change. In contrast, new journals that try to cater to science superstars by publishing “high impact” science (PLOS Biology, eLife, Science Advances), while not failures, have not taken off in the same way that PLOS ONE did.

I decided I would try an experiment.

I don’t do the most glamorous scientific research, but I do have a higher than average social media profile for a scientist. (I have more Twitter followers than my university does.) So I thought, “Let’s put up a pre-print on bioRxiv and see if anyone comments.”

I spent the better part of a morning (Thursday, 25 February 2016) uploading the pre-print. Since I had seen people whinging about “put your figures in the main body of the text, not at the end of the paper,” I had to spend time reformatting the manuscript so it looked kind of nice. I also made sure my Twitter handle was on the front page, to make it easy for people to let me know they’d seen my paper.

I was a little annoyed that I had to go through one of those clunky manuscript submission systems like the ones I use for journals. I had to take a few stabs at converting the document into a PDF. bioRxiv has a PDF converter built into it, but the results were unsatisfactory. There were several image conversion problems. One picture looked like it came out of a printer running low on ink. Lines on some of the graphs looked like they had been dipped in chocolate. Converting the file to PDF on my desktop looked much better. I uploaded that, only to find that even that had to go through a PDF conversion process that chewed up some more time.

bioRxiv preprints are vetted by actual people, so I waited a few hours (three hours and thirty-nine minutes) to get back a confirmation email, and the paper was up on bioRxiv shortly after. All in all, pretty quick.

I updated the “Non-peer reviewed papers” section of my home page. I put a little “New!” icon next to the link and everything. But I didn’t go out and promote it. I deliberately didn’t check it on bioRxiv to ensure that my own views wouldn’t get counted. Because the point was to see whether anyone would notice without active promotion.

I waited. I wasn’t sure how long to wait.

After a day, my article had an Altmetric score of 1. The @biorxivpreprint account and three other accounts that looked like bots tweeted the paper, apparently because they trawl and tweet every bioRxiv paper. (By the way, “Bat_papers” Twitter account? There are no bats in my paper.) The four Twitter accounts combined had fewer followers than me. Looking at the Altmetric page did remind me, however, that I need to make the title of my paper more Twitter friendly. It was way longer than 140 characters.

Four days later (29 February 2016), I got a Google Scholar alert in my inbox alerting me to the presence of my pre-print. Again, this was an automated response. That was another way people could have found my paper.

Three weeks have gone by now. And that’s all the action I’ve seen on the pre-print. Even with a New York Times article bringing attention to pre-prints and bioRxiv, nobody noticed mine. Instead, the attention is focused on the “established labs,” as Arturo Casadevall calls them. The cool kids.

I learned that for rank and file biologists, posting work on pre-prints is probably just another task to do whose tangible rewards compared to a journal article are “few to none.” Like Kim Kardashian posting a selfie, pre-prints will probably only get attention if a person who is already famous does it.

Update, 18 March 2016: This post has been getting quite a bit of interest (thank you!), and I think as a result, the Altmetric score on the article referenced herein has jumped from 1 to 11 (though mostly due to being included in this blog post).

Related posts

The science of asking
Mission creep in scientific publishing

Reference

Faulkes Z. 2016. The long-term sand crab study: phenology, geographic size variation, and a rare new colour morph in Lepidopa benedicti (Decapoda: Albuneidae). bioRxiv. http://dx.doi.org/10.1101/041376

External links

The selfish scientist’s guide to preprint posting
Handful of Biologists Went Rogue and Published Directly to Internet
Taking the online medicine (The Economist article)

Picture from here.

26 February 2016

Open access paper, closed data


A new paper in Science Advances by Kricheli-Katz and Regev (2016) is interesting in several ways. It examines gender bias using eBay auction data.

I’ve used data from eBay myself (Faulkes 2015), and it was hard going. I visited the website roughly daily and pulled auction data into a spreadsheet by hand for each listing. Nothing was automated.

Kricheli-Katz and Regev, however, got data directly from eBay. And they got over one million transactions to analyze. That’s an awesome sample that gives them a lot of statistical power. I was curious to know more, but found this text in several places in the paper.

According to our agreement with eBay, we cannot use, reproduce, or access the data.

I raised my eyebrows at that a bit. All sorts of questions bubbled into my mind. The acknowledgements section provided more detail, however:

eBay has provided written assurance that researchers wishing to replicate the work would be afforded access to the data under the same conditions as the authors.

Okay. That’s more helpful information, and I wish it was in the main body of the text, rather than buried at the end of the acknowledgements. On the Science Advances webpage, it is the second to last thing on the page (the last is the copyright notice).

Clearly, social media companies realize that their data is scientifically valuable and want to use that for their own gain. OKCupid used to write amazing blog posts based on their data that gave you a hint of the rich questions you can tackle using large social media datasets. (Although OKCupid also got flak for running unregulated experiments on its users, and rightfully so.)

The eBay disclaimer runs counter to the major trends we are seeing in scientific publication: more transparency, more access to raw data. I am not sure how confident I am with eBay’s promise to share the data with other researchers. From the point of view of reading this paper, eBay is mostly a big black box.

Update, 12 April 2017: I emailed the eBay representative listed as a contact in the paper on 26 February 2016, inquiring about working with eBay’s research lab.

I never got a reply.

References

Kricheli-Katz T, Regev T. 2016. How many cents on the dollar? Women and men in product markets. Science Advances 2(2): e1500599. http://dx.doi.org/10.1126/sciadv.1500599

Faulkes Z. 2015. Marmorkrebs (Procambarus fallax f. virginalis) are the most popular crayfish in the North American pet trade. Knowledge and Management of Aquatic Ecosystems 416: 20. http://dx.doi.org/10.1051/kmae/2015016

Picture from here.

19 February 2016

Steampunk open access


Consider a world much like our own. Scientists do research, and publish their findings in academic journals, made of paper and ink and glue and leather.

Except... the concept of open access took hold in the early days of academic publishing. Authors paid article processing fees to cover printing costs. The resulting print journals and books were sent to libraries and archives free of charge. People who wanted copies merely had to ask.

This system continues until the late twentieth century, when the Internet becomes readily available to researchers and the public alike. Now, all those centuries of research (still freely available, mind you) are in an antiquated, inconvenient format. People want to get digital copies of research papers that interest them.

Who would put in all the hours of work to digitize those open access articles? Who would pay the costs? Who would create the websites? Who would assign the DOIs?

There used to be a complaint that younger scientists’ reference lists rarely went back before the 1990s because papers weren’t online. Imagine how much worse the problem would be in the steampunk open access scenario.

In our world, a lot of the world’s scientific back catalog got digitized by those bad guys, commercial publishers.

It is short-sighted to think that electronic text, and particularly PDFs, are the final form of academic communication. When the next transition of formats occurs, how will open access papers make the jump?

Photo by Don Shall on Flickr; used under a Creative Commons license.

28 January 2016

The cost of selectivity


Scientific Reports and Nature Communications are both published by the same company, Nature Publishing Group. Both are online only, open access journals.

But Eigenfactor pointed out that Scientific Reports charges an article processing fee of US$1,495, while Nature Communications charges nearly three and a half times as much, US$5,200.
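To put the gap in plain numbers, a trivial check using the charges as quoted above:

```python
# Article processing charges as quoted above, in US$.
scientific_reports_apc = 1495
nature_communications_apc = 5200

ratio = nature_communications_apc / scientific_reports_apc
print(f"Nature Communications charges {ratio:.1f}x the Scientific Reports APC")  # 3.5x
```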

Why the price difference? Since they are both at the same publisher, it’s obviously not a simple infrastructure difference, like one journal having a physical print run, different manuscript submission systems, and so on. Both appear to offer the same services to authors.

There seem to be two factors that might explain the price difference: staff and selectivity.

Scientific Reports seems to be run by editors who are working scientists. Presumably they are not drawing most of their salary from the publisher, and are working mostly on a volunteer basis, which is common for scientific journals. The editors of Nature Communications are Nature Publishing Group staffers, who presumably are getting salary from the publisher. I wonder what the expected and actual differences in outcomes are between these two editorial schemes.

Neither journal seems to report what percent of manuscripts are ultimately published, but the criterion for Scientific Reports is that research be “scientifically valid and technically sound.” On the other hand, Nature Communications says it publishes “important advances of significance to specialists,” so clearly it is setting itself up as the more exclusive club.

Why be selective in a purely online journal? There is no limit to the number of pages, and I expect the cost of server storage per paper is fairly trivial. The selectivity is no doubt to increase journal Impact Factor, which in turn drives prestige and desirability. And at first glance, it seems to be working: the journals’ web pages report Scientific Reports Impact Factor is about 5, and Nature Communications is 11 and change.

But... the blog for the Frontiers journals (owned by Nature, incidentally) has a post claiming there is no relationship between Impact Factor and rejection rates. The problem that James Hardcastle and Anna Sharman pointed out is that while they archived data on Figshare, the data does not include journal names, so it’s not verifiable.

As far as I can tell, the only revenue stream for these journals is their article processing charges. As I mentioned before, this means that published papers are subsidizing the costs for the rejected ones. When I started this post, I thought the comparison of these two journals might give a glimpse into just how big that subsidy is. But it’s hard to disentangle from the differences in editorial management.

I’m intrigued by all this because the open access “baby journals” that share the name of a paywalled glamour magazine (Science, Nature, Cell) seem to be able to charge prices that are well above the market for most open access journals. To reuse yesterday’s graph, they all break the axis:


I’m curious as to why this pricing scheme survives. Do people confuse Nature Communications with Nature? Is the reputation of the publisher just that strong that it commands a premium, even for a relatively new journal? Is there no competition on value or services to the authors? Do people really expect ten times the prestige because they paid ten times the cost?

Related posts

Fluctuating publication costs

External links

Selecting for impact: new data debunks old beliefs