Showing posts with label open access. Show all posts

25 November 2015

Academic boycotts

Mark Carrigan’s website poses this question as part of a lead-up to a roundtable in early December:

Why have researchers been so ready to campaign against for-profit academic publishers such as Elsevier, Springer, Wiley-Blackwell, and Taylor & Francis/Informa, but not against for-profit platforms such as Academia.edu, ResearchGate, and Google Scholar?

Because “someone’s making money” isn’t what bugs researchers. What bugs researchers is being impeded from the business of doing their research.

Paywalls and subscription fees obstruct academics trying to do research.

Platforms like Google Scholar facilitate research. I could not do my job half as effectively if Google Scholar didn’t exist. Academia.edu and ResearchGate haven’t been as useful yet, but I have never felt the frustration using them that I feel when I hit the “pay now” screen for an article I want to read.

It’s also worth noting that the original question contains an assumption: that campaigns against for-profit publishers are major academic movements. But those calls for boycotts have... well... not exactly left those businesses struggling. They are still highly profitable, and show no visible signs of worrying that academics will stop submitting papers to their journals.

External links

As Academia.edu Grows, Some Scholars Voice Concerns

23 November 2015

Getting what’s paid for, scientific publishing edition


I like open access, but I think for-profit publishers can continue to have a role in the scientific publishing ecosystem. But boy oh boy, some academic publishers do make it hard for a body to support them.

Recently, two independent blog posts from two separate scientists described the same problem with two separate publishers: both scientists were downloading a lot of papers through their academic libraries for research purposes, and the publishers threatened the libraries.

In both cases, the libraries had fully paid subscriptions to these journals.

Case one from Chris Hartgerink:

I started ‘bulk’ downloading research papers from, for instance, Sciencedirect. I was doing this for scholarly purposes and took into account potential server load by limiting the amount of papers I downloaded per minute to 9. I had no intention to redistribute the downloaded materials, had legal access to them because my university pays a subscription, and I only wanted to extract facts from these papers. ...

Elsevier notified my university that this was a violation of the access contract, that this could be considered stealing of content, and that they wanted it to stop.

Case two:

I was frequently obstructed by BioOne. My IP address kept getting blocked, stopping me from downloading any further papers from this publisher. I should note here that my institution (NHMUK) pays BioOne to provide access to all their papers – my access is both legitimate and paid-for. ...

I swiftly found out that downloading more than 100 full text articles in a single session is automatically deemed “excessive” and “a violation of permissible activity”.

My reaction is to give these publishers some high level side eye.

This is mind boggling. It’s so completely at odds with people’s understanding of what you should get from paying a subscription to an online resource. If your institution’s subscription fees are paid up, you should be able to access the resource. End of story.

Additional: Nature News picked up the first half of this story and covered Elsevier’s actions. Meanwhile, Elsevier has attempted to fix the situation, but Chris Hartgerink says Elsevier’s solution is not a very good one. It imposes several restrictions on the license, and doesn’t include images, which are necessary for the research.

External links

Traditional Publishers: please stop blocking research
Elsevier stopped me doing my research

23 September 2015

Ancient legacies promoting ancient legacies


In all the excitement about the discovery of the new fossils of Homo naledi, many of my friends in the science community have remarked on this discovery being published in the journal eLife, a new open access journal, rather than Science or Nature.

“Look, this shows that you don’t have to publish short articles in those closed access journals to get lots of attention!”

What I haven’t heard many people point out is that the discovery of Homo naledi had the advantage of being publicized by a well-oiled, well established, recognized print brand: National Geographic.

The style of coverage for Homo naledi was almost exactly what you would see for Science or Nature: simultaneous press releases, probably embargoes, the cover of a magazine, and so on. The only difference is that National Geographic isn’t a peer reviewed journal, but I’m not sure that is a difference a lot of the non-scientist crowd (maybe even including many in journalism) would recognize. I would wager that for many, National Geographic carries the same authority as Science or Nature.

Don’t get me wrong: it’s great that this kind of scientific research is in an open access journal with an unlimited page count. See this post by team member John Hawks which shows how this publication compares to the scientific arguments over other fossils: short papers, long waits for descriptions, etc. And the scans of the fossils that people can print on 3-D printers are something pretty new to scientific publication. All of that is important for the science, but I’m asking more about the outreach.

If this same amount of attention had been garnered by the eLife articles alone – or, to head into complete fantasy, a bioRxiv or PeerJ pre-print (say) – then it would be safer to say the landscape for scientific publicity, news, and outreach has changed significantly. Right now, it’s just showing how much muscle the established media brands still have.

External links

New species of human relative discovered in South African cave
Is Homo naledi just a primitive version of Homo erectus?
Cover image from here.

19 August 2015

Journal fees: would you rather pay a little on a bet or a lot on a sure thing?



I’m looking for a place to publish one of my next manuscripts, and I’m considering which journal to send it to. One that I’m considering is unusual: it has a modest (less than $100), but non-refundable, submission fee.

Thinking about this fee was deterring me from submitting there. But then I realized that lots of journals have article processing fees that are much steeper (over $1,000).

If an appropriate journal had an article processing fee that was less than $100, I’d probably jump at submitting there. Yet I was more annoyed by the prospect of a submission fee than an article processing fee, even though both cost money.

The psychology is interesting. Psychologists Kahneman and Tversky got famous studying problems like this and how people evaluate different costs, and showed that people were far from the rational optimizers that economic theory often treated them as. And I am certainly not a rational optimizer. I am in academia, after all.

A submission fee carries the dread of “it could all be for nothing.” Rejection is common – I daresay the norm – in academic publishing. Even journals that review only for technical competence reject a substantial fraction of papers: PLOS ONE, for instance, typically rejects about 30% of submissions. An article processing fee for accepted papers feels more like you’re getting something for your buck.
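To make the trade-off concrete, here is a back-of-envelope expected-cost sketch. The numbers are illustrative assumptions (a ~70% acceptance rate, roughly matching the PLOS ONE figure above, and hypothetical fee amounts), and it assumes you keep resubmitting to journals with similar fees until acceptance:

```python
# Hypothetical comparison: non-refundable submission fee vs. article
# processing charge (APC). Fee amounts and acceptance rate are
# illustrative assumptions, not any particular journal's prices.
def expected_cost_per_accepted_paper(fee, acceptance_rate, per_submission=True):
    """Expected total fees paid per accepted paper.

    per_submission=True: a non-refundable submission fee, paid on every
    try; with acceptance probability p, you expect 1/p submissions
    (geometric distribution) before one is accepted.
    per_submission=False: an APC, paid only once, on acceptance.
    """
    if per_submission:
        return fee / acceptance_rate
    return fee

sub_fee = expected_cost_per_accepted_paper(100, 0.70, per_submission=True)
apc = expected_cost_per_accepted_paper(1500, 0.70, per_submission=False)
print(f"${sub_fee:.0f} expected per accepted paper (submission fee)")
print(f"${apc:.0f} per accepted paper (APC)")
```

On these assumptions, a $100 submission fee costs about $143 in expectation per accepted paper — still far below a four-figure APC — which is part of why the irrational sting of the submission fee is interesting.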

Plus, there’s the fact that a submission fee is just not the norm in academic publishing. Running contrary to expectation doesn’t help my willingness to shell out the fee.

Logically, journals that ask only for article processing fees are using those fees to subsidize the editorial costs for all the rejected papers. For some journals, a modest submission fee might be one way to bring down the costs of article processing fees, which remains a road block for many researchers, particularly those who want to publish in open access journals.

Plus, all of this raises the question of why more journals’ costs aren’t in line with, say, PeerJ’s.

Photo by Ben Husmann on Flickr; used under a Creative Commons license.

17 August 2015

Journal articles as a revenue source

From last week, news came that the Ecological Society of America has contracted Wiley, one of the “big five” academic publishers, to publish ESA journals.

I have decidedly mixed feelings about this. A big part of the mixed feelings comes from a couple of years back, when ESA spoke out against open access. At the time, someone from the society wrote:

This is perhaps a good example of the inherent conflict between the interests of those who believe research publications should make their content freely available to all and the reality that there are significant costs associated with publishing scholarly research journals.


I appreciate that there are costs in publishing that need to be covered. Heck, I can even see a case for charging for access to scientific research. I can understand that a scientific society might not want to maintain the infrastructure needed to publish journals. As Gavin Simpson’s blog post noted:

Somewhere in Ithaca there is a single computer running DOS(!) that performs a critical part of the current journal publishing platform used by ESA
 
Still, there are a couple of things that bug me about the Wiley deal. First, this is another example of the ongoing pattern of scientific publication becoming increasingly concentrated among a few publishers. I’ve said before that I think a healthy publishing ecosystem, like a biological ecosystem, has lots of diversity. Second, this sounds like ESA sees publications as a revenue stream.

Gavin Simpson wrote:

The payment to ESA from Wiley in the 2015–16 budget is $1,350,357. ... (I)n 2016–17 the payment from Wiley will be $2,700,714(.) ... What is, I think, indicative is that the senior ESA staff and academics were clearly anticipating significant improvements in the “profit” generated by the Society’s journals that can be directed towards activities the Society does on behalf of its members and its support for ecology.

Again, I have mixed feelings about all this. I understand that scientific societies can do good work and need revenue. But it can also be the case that societies like that can become insular and self-perpetuating, more concerned about their own continued existence than serving others. I’m not saying ESA is at that point at all... just that societies could have incentives that are not in the general interest, or the interest of their field, but only in their interest.

So I am not sure that charging for publications is the way to go about raising revenue for a professional scientific society.

Related posts

ESA still not supporting open access

External links

ESA’s publishing deal with Wiley 

References

Larivière V, Haustein S, Mongeon P. 2015. The oligopoly of academic publishers in the digital era. PLOS ONE 10: e0127502. http://dx.doi.org/10.1371/journal.pone.0127502

01 August 2015

Comments for second half of July 2015

Dr. Becca talks about how to get yourself noticed as a scientist, in a good way.

Richard Poynder has a piece looking at one scientific publisher’s policies on embargoes, which concludes with a sour analysis of the effect of the open access movement: pretty much none, he reckons.

Small Pond Science looks at that “Facebook for scientists” site, ResearchGate. Funny how showing that people read your papers makes you pay attention.

05 June 2015

Top ten again for crayfish nociception!

Just a brief moment of celebration as I point out that our latest paper on crayfish nociception is in the top ten most read Biology Open articles for a second straight month!

I’m very pleased.

01 May 2015

Fix journals you have before you make new ones

In an editorial, Society for Neuroscience president Steve Hyman takes on issues of replication and rigor. Near the end, he writes:

With the launch of eNeuro, SfN aims to alter some of the troubling patterns in publication.

I still don’t see why the society needs a new journal to alter those patterns, when it could just change the editorial policies of the journal it already has.

If you think publishing negative results is important, if you think publishing replications is important, change the editorial policies, priorities, and format of Journal of Neuroscience. Cordoning these results into eNeuro sends a clear signal that replications and negative results are second rate science.

Additional, 6 May 2015: Ivan Oransky notes that the Journal of Neuroscience has some “troubling patterns in publication,” like not explaining editorial decisions on retractions and publication bans.

13 April 2015

Why can’t I cite Mythbusters?

I’m sorry, Adam and Jamie. I tried.

In our most recent paper on nociception, one of the major points is that not all animals react to potentially nasty stimuli the same way. And it turns out that there’s a very nice demonstration of that idea on Mythbusters. In the 2008 Shark Week “Jawsome Special,” the Mythbusters did a segment called “Spicy Salsa Shark Shield.”

They showed that sharks were not deterred by the presence of capsaicin-laden material.


I included this in the references as:

Dallow, A. and Lentle, T. (2008). Mythbusters: Shark Week Special 2, Episode 102. Discovery Channel, USA: Beyond Entertainment Limited.

The journal copy editors wrote:

Only peer-reviewed references are permitted in the reference list.

This policy is nowhere to be found in the journal’s instructions to authors.

I took it out, because there were other references that made the point about how responses to noxious stimuli varied from species to species. It wasn’t worth fighting over, and I think I’d have lost.

Regardless of the esteem you hold for the work done by Mythbusters, the journal’s citation policing raises bigger issues. A current trend in academic publishing is to broaden the kinds of research products that people can get academic credit for. Why should only publications “count,” and not sharing a database, or writing useful code?

Consider figshare, the cloud-based data archiving service. One of its big selling points as a data repository is that it generates digital object identifiers (DOIs) for items submitted there. The DOI itself is not the selling point, but figshare strongly implies that this makes whatever is archived there citable in scholarly publications. Here’s the top of their “about” page: making things citable is the first item on the list (my emphasis):

figshare is a repository where users can make all of their research outputs available in a citable, shareable and discoverable manner.

But all those hopes can be dashed by a single sentence from a journal. “Nope, we only take peer reviewed papers.”

Interestingly, the journal let us keep in a couple of conference abstracts. The abstracts were published in a peer-reviewed society journal, but the abstracts themselves were not peer-reviewed.

I’ve noted before that some journals have had a tradition of allowing people to cite “grey literature,” like conference abstracts, newspaper articles, web pages, or tweets. I think that is a positive thing. I worry that this sort of “journals citing (and therefore promoting) journals” policy might become more common as journals compete for scientific products. Such policies could hamper the development of academic publishing innovations.

Even worse, a “journal only” policy has the potential to force authors into intellectual dishonesty. “We got this idea from another lab’s data on figshare, but we can’t say that in the paper, so we’ll just have to say something else.”

Authors and possibly reviewers should be determining what is a legitimate citation on a case by case basis, rather than a journal setting a blanket policy favourable to itself.

06 April 2015

How much harm is done by predatory journals?

There is a cottage industry of people who feel the need to show, “There are journals that will publish crap!” And it’s getting tiring.

The Scholarly Kitchen did this to Bentham journals a few years ago; we had the Bohannon “sting” in Science and the angry “Get me off your fucking mailing list” paper. A recent entry in this pageant is the “Cuckoo for Cocoa Puffs?” paper. A new editorial calls predatory journals “publication pollution.”

To listen to some of these, you could be forgiven for thinking that publishing a paper in one of these journals is practically academic misconduct: a career-ending, unrecoverable event.

I talk to a lot of working scientists, both online and in person. In all that time, how many scientists have I heard of who submitted to one of these journals and were not satisfied with their experience?

Three. One experience is described in two posts (here and here), and a couple of others were tweeted at me when I asked for examples. Two were “my friend” stories, not personal accounts. For the amount of handwringing over predatory publishers, this is a vanishingly small number.

Of course, these numbers are probably underreported, because nobody wants to admit they published in a junk journal. It’s like admitting you got taken in by an email from someone claiming to be a Nigerian prince. It’s embarrassing to admit when you should have known better.

Let’s say that someone pays and publishes a paper in a predatory journal. Who is harmed, how much are they harmed, and what recourse is there to address the harm?

The author

An author who publishes in such a journal has paid the article processing charge. Okay, that sucks. But presumably the author knew she or he was going to be getting an invoice, and would not have gone that route if she or he was utterly unable to pay.

Assuming that the author has not gone into great financial hardship, let’s say the paper is published online, but without proper peer review. What are the possible outcomes, and what harms might arise?

If the paper is competent, the author could be harmed because people will not read the paper because of the journal it is in. But the paper is available, and other researchers can use and cite it if they so choose. People cite non-reviewed material all the time (conference abstracts, non-journal articles).

If an author realizes that this was a non-peer reviewed venue, what can she or he do about it? The author can try to retract the paper. If the journal refuses, the author can try to publish it elsewhere. Real journal editors might be sympathetic to the plight of authors who made a mistake in choosing a publishing venue.

An author could choose not to list the paper on her or his CV. Other professionals do similar things. Actor Peter MacNicol never listed the movie Dragonslayer on his list of films.

Ultimately, I don’t see severe harm done to an honest author who publishes in the wrong journal. It’s reasonable to ask if that harm couldn’t have been avoided with a little due diligence. Authors should know the principle “Caveat emptor” applies as much to journals as other services.

The public

Another argument is that the harm of publishing in predatory journals is that the public or the unwary will be confused, because the findings could be untrue. Let’s examine a few scenarios of how findings could be false.

The research was not done well. This is no different from research published in other journals. There are many, many cases of research that was poorly done, but published anyway. This is why post-publication peer review is important. This is why replication is important. Scientists perform post-publication peer review all the time. It is our job. This is what we do.

The researchers are malicious. It is possible that someone with an agenda might try to give dubious information some sort of veneer of respectability by publishing it in a predatory journal. But... why? There are many easier ways for people with an agenda to spread lies than publishing in a crummy journal.

Professional climate denier Marc Morano has never published a scientific article. Neither has dubious diet critic the Food Babe. They don’t need to, when they’ve found so many media platforms that give them a much bigger audience. It’s not clear how an article in a junk journal would spread untrue information more effectively than a blog, an infomercial, an appearance on a cable news network sympathetic to certain ideas and ideologies, or any of the other hundreds of ways people can spread lies.

This raises the question of how the general public finds out about research of any sort, including the dodgy stuff. Most members of the general public are not scouring academic journals. For false research findings to spread significantly, they would have to spread through the general media or social media.

General media. Science journalists who have any baseline competence should understand scientific publishing enough to realize that not every research article in every scientific journal is true. Publishing in a little known journal should raise an immediate red flag and warrant investigation before filing a story. If any journalist doesn’t do that, you have “churnalism,” and in my mind, that’s a separate – and much bigger – problem than a junk journal.

Social media. So far, I know of no cases where an article from an alleged “predatory” journal has gone viral. But let’s say it does. One of the powers of social media is that when something goes viral, it gets a lot of attention, and relevant experts can talk about it. They are probably going to comment, and be asked to comment, and can explain why such and such a paper is problematic. One of the wonderful things about the dress was that it gave lots of experts a chance to explain what we know about the visual system.

Other scientists

I am not sure I see much potential harm to other scientists if a paper is published in a crappy journal. The entire point of calling a journal “predatory” is to say that it has no standing in the scientific community. If a journal is already being ignored by a scientific community, how is it supposed to affect that community?

Evaluating articles is what we working professionals are supposed to be doing. Like, all the time. I suppose there is a minor harm in the opportunity cost of time spent debunking papers in junk journals. But more likely, papers in bogus journals will suffer the same fate as a lot of other articles: they’ll just be ignored.

Another argument might be that the general scientific community is harmed because there is reduced public trust in science. As I outlined above, I can’t see that happening.

The major reason scientists get their panties in a bunch about predatory journals is not because junk “predatory” journals have done much demonstrable harm to anyone, other than authors who are out their processing fees. I see lots of hand waving about the “purity and integrity of the scientific record,” which is never how it’s been. The scientific literature has always been messy. We always have to verify, replicate, and often correct published results.

Stephen Curry wrote:

“The danger of this model is that upfront fees provide short term incentives for journals to accept papers from anyone who has the money to pay, regardless of their scientific value or accuracy.” Is there any evidence that this is a serious risk? As the author himself notes, no journal will build a reputation for quality by publishing any old rubbish. This is a bit of a straw man argument.

Some people have claimed that these predatory journals exploit scientists in developing countries. It reminds me a little of someone on Twitter who recounted asking at a historical tour, “Were slaves kept here?” The guide answered, “Yes, they had good houses and were well cared for.” The problem wasn’t whether they had decent housing, the problem was they were slaves.

The problem for researchers in developing countries is not predatory journals. The problems such researchers have are bad infrastructure, lack of support, and poor mentoring, which prevent them from putting together papers that could be published in mainstream scientific journals. They may also be working under incentives that do not reward them for discriminating between journals. And I am still waiting to hear from the waves of dissatisfied scientists from developing countries who feel they got ripped off.

I also noticed this when I tried to read a new entry in the “OMG predatory journals” collection:


It’s not quite an open access irony award winner... but it’s close. You want to complain about scientific publishing? Let’s talk about the regular, routine obstruction to reading the scientific literature faced by even a professional working scientist at an expanding university with ever-increasing research expectations. That routinely affects me, in a way predatory journals never have.

Open access is a new business model. Who benefits from constantly crying wolf about “predatory” journals? Established journals from established publishers, whose business model includes, in part, asking over US$30 to read an editorial.

We should be worried about parasites as well as predators in the scientific publishing ecosystem.

Additional, 8 April 2015: There is a little bit of data indicating that these junk journals are not being read here. Hat tip to Lenny Teytelman.

Related posts

Science Online 2013: Open access or vanity press appetizer
Open access or vanity press, the Science sting edition

External links

Why A Fake Article Titled "Cuckoo for Cocoa Puffs?" Was Accepted By 17 Medical Journals
Comment on “Open Access must be open at both ends”
Beyond Beall’s List: We need a better understanding of predatory publishing without overstating its size and danger.
Some perspective on “predatory” open access journals
Science’s Big Scandal
Science and medicine have a 'publication pollution' problem
Academic journals in glass houses... 

10 March 2015

Tuesday Crustie: Young but old

This is a baby crab, therefore young.


This is also a reconstruction of a 150 million year old fossil, therefore old. Haug and colleagues show this is the oldest baby crab in the fossil record (150 million years!). That it’s the oldest may not be as impressive when you consider that it’s only the second baby crab ever found in the fossil record. So it had a fifty-fifty chance of being the oldest!

What a spectacular find.

For more on this discovery, read here. I’d like to direct you to the original paper, but despite Nature Communications now being an “open access journal,” not all of its papers are open access – including this one. Sigh.

Reference

Haug JT, Martin JW, Haug C. 2015. A 150-million-year-old crab larva and its implications for the early rise of brachyuran crabs. Nature Communications 6: 6417. http://dx.doi.org/10.1038/ncomms7417


16 February 2015

The Journal of Funding Agency

An argument in scientific publishing is, “Who pays?” For many journals run by traditional, for-profit publishers, usually the library pays. For many open access online journals, the author pays.

Scientists don’t want to pay out of pocket. This is a legitimate concern, because the article processing charges can be thousands of dollars (though not all are). Many have argued that funding agencies should ultimately be the ones who pay, because they are sponsoring the research, and they have a vested interest in seeing the research published as widely as possible.

Many agencies have taken up this cause, and have policies that require open access publication.

Still... this seems a long and needlessly complicated path for the money to take. Researchers have to write grants and budget for an unknown number of papers, and the money then has to make its way to the journal.

Why don’t funding agencies start their own open access journals?

The rule would be simple: If you have research supported by the funding agency, it’s free to publish open access in that agency’s journal.

If your research is supported by other agencies, you’d pay an article fee.

I wonder if funding agencies might actually save money by having their own publishing arms. They wouldn’t have to worry about the budgeting for the publication fees. It would simplify both the writing and review of grant proposals.

Most funding agencies already have the infrastructure to publish. After all, they publish reports and calls for proposals all the time. They have connections to peer reviewers, because they use them to review grant proposals.

Some government agencies have had their own journals for a long time. Canada’s NRC Research Press is one example. I don’t know those journals’ payment systems, although I think most use the “library pays” subscription model. It has the potential to be a “house publisher” for scientists with Canadian federal funding.

HHMI, the Wellcome Trust, and the Max Planck Society got into the publishing end of things with eLife. But they are just “supporting” the journal, rather than running it in house. There may be advantages to this, mainly editorial independence.

Update, 23 March 2017: The Gates Foundation is taking up this idea. They are creating something called Gates Open Research. The news article is interesting because it variously called this a publishing “venture” and “platform” rather than a journal.

External links

Gates Foundation announces open-access publishing venture

Photo by Steven Depolo on Flickr; used under a Creative Commons license.

20 January 2015

Who paid for my open access articles?


A recurring concern from some researchers about open access is the cost to authors. This is an area of persistent misconceptions and a lot of fear. It’s a legitimate question of whether article processing charges create a Matthew effect, with labs with grants gaining an unfair advantage over those without grants. Or, worse, shutting out contributors entirely.

This interests me, because by most standards, I am a scientific “have not.” And yet I’ve published many of my articles open access for some years now. I did not have stand-alone research grants in that time. How did I do it?

It’s a mix.

The most common situation was that the journal did not levy an article processing charge. In other words, these papers were free to me. (In fairness, one – Aquatic Biosystems – was a limited time “free to publish” offer; they normally do charge a fee.)


While I personally did not have grant support, our institution has had undergraduate training grants (notably from HHMI). Those external grants picked up the tab for a couple of papers with undergraduate co-authors:


Lately, I’ve been fortunate to have my chair agree to support the article processing charges of a couple of papers from departmental funds.

  • Carreon N, Faulkes Z. 2014. Position of larval tapeworms, Polypocephalus sp., in the ganglia of shrimp, Litopenaeus setiferus. Integrative and Comparative Biology 54(2): 143-148. http://dx.doi.org/10.1093/icb/icu043
  • Faulkes Z. 2015. A bomb set to drop: parthenogenetic Marmorkrebs for sale in Ireland, a European location without non-indigenous crayfish. Management of Biological Invasions 6(1): 111-114. http://dx.doi.org/10.3391/mbi.2015.6.1.09
  • Faulkes Z. 2018. Resolving authorship disputes by mediation and arbitration. Research Integrity and Peer Review 3: 12. https://doi.org/10.1186/s41073-018-0057-z

I paid the costs of a few papers out of my own pocket. The two PeerJ papers were covered by a single lifetime membership, which was $99 at the time.

  • Faulkes Z. 2010. The spread of the parthenogenetic marbled crayfish, Marmorkrebs (Procambarus sp.), in the North American pet trade. Aquatic Invasions 5(4): 447-450. http://dx.doi.org/10.3391/ai.2010.5.4.16
  • Feria TP, Faulkes Z. 2011. Forecasting the distribution of Marmorkrebs, a parthenogenetic crayfish with high invasive potential, in Madagascar, Europe, and North America. Aquatic Invasions 6(1): 55-67. http://dx.doi.org/10.3391/ai.2011.6.1.07
  • Faulkes Z. 2015. Motor neurons in the escape response circuit of white shrimp (Litopenaeus setiferus). PeerJ 3: e1112. http://dx.doi.org/10.7717/peerj.1112
  • Faulkes Z. 2017. Filtering out parasites: Sand crabs (Lepidopa benedicti) are infected by more parasites than sympatric mole crabs (Emerita benedicti). PeerJ 5: e3852. https://doi.org/10.7717/peerj.3852

I paid the costs for two using indirect costs recovered from an external undergraduate training grant that I was awarded.


Update, 29 September 2019: One paper was paid for mostly by the indirect costs mentioned above, with my department kicking in the remainder.

  • DeLeon H III, Garcia J Jr., Silva DC, Quintanilla O, Faulkes Z, Thomas JM III. Culturing embryonic cells from the parthenogenetic clonal marble crayfish Marmorkrebs Procambarus virginalis Lyko, 2017 (Decapoda: Astacidea: Cambaridae). Journal of Crustacean Biology: in press. https://doi.org/10.1093/jcbiol/ruz063

Finally, I don’t know how the article processing fees for the papers below were paid; my co-authors handled those.


Looking at this list, I’m willing to bet that some researchers will say, “But Zen, even if you didn’t have traditional research grants to pick up the tab, you’ve still had a lot of support to pay for open access.” True. It’s hard to say if the number of open access papers would have been much different if, say, my department declined to pay for papers. I might have tried other journals, might have dipped into my pocket again, might have tried to find other pots of money.

From this perspective, the issue that might stop some researchers (retirees and amateurs, say) from publishing open access would not be “lack of grants,” but being disconnected from larger institutions. Being part of an institution brings a lot of infrastructure, and diverse resources that go way beyond who has external grants.

All of that said, several of my articles in “traditional” subscription-based journals also had page charges (one journal asked me for $320 for its 2.75 year publication process). It’s interesting to me that people don’t very often bring up those page charges as barriers to publication.

Additional, 17 March 2015: Updated list.

Additional, 2 August 2015: I added two new papers to the main list: one had no publication fees, the other was modest (PeerJ) and paid for out of pocket.

Additional, 18 November 2015: Added newest paper to list (reasonably low fees that I will pay for out of pocket).

Additional, 20 January 2016: Updated the list. One paper that I thought might cost me something ended up costing nothing. I also completed some citations.

Additional, 7 February 2017: Updated the list. Second time I dipped into some funds I have from indirect costs.

Additional, 10 June 2018: Updated the list, adding two more papers. And here’s a graph!


Additional, 18 June 2018: Updated the list, with one more paper. Graph above not updated yet.

Additional, 24 September 2018: While tweeting out this link again, I am reminded that I need to add Hilda Bastian’s important blog post about open access fees.

(N)ot having an APC doesn’t mean the journal is accessible to everybody. It has to accept work from your field. You have to be eligible to publish in it – an APC is not the only possible access issue. You have to write in its language of publication. And you want it to be accessible to people in your field. In mine, that means you really want it to be indexed in PubMed. And having DOIs is critical for citations to be counted by key systems.

Hilda does a little analysis showing that this winnows the options down to a small number of journals for most people.

Additional, 9 November 2018: Updated the list, with one more paper. Graph updated!


Additional, 29 September 2019: Updated the list, with one more paper.

Related posts

Waiving publication fees
The journal ecosystem

Photo by penguincakes on Flickr; used under a Creative Commons license.

26 December 2014

The science of asking

Holidays are a time for opening things (like presents)! Today, I want to share my latest experiences in open science, involving the recent paper I co-authored (Byrnes et al., 2014) about science crowdfunding and #SciFund. I was very pleased to see it land on the front page of PLOS ONE when it was released:


While I’ve been a supporter of open access, I have never quite gotten on board with what some have called “open notebook” science: posting the data as you go. For me, there are too many distractions and dead ends in an ongoing project. I would much rather wait until I have a complete story, all ready to be tied up in a bow in a published paper.

The #SciFund project, however, was much different. I got involved shortly after round one closed. I think it was the morning after it ended. Someone (for the life of me, I can’t find the post) posted an initial analysis of the round one projects: how much money they’d raised, donors, the project description length, and so on. I took that, gathered even more data, and shared it as a spreadsheet on Google Docs. Someone (Jarrett Byrnes, I think) then took that data, and archived it on Figshare.

I did the same after round two. And again on round three. And four. I stayed up quite late a couple of times so that I could collect the social media data (tweets and Facebook likes) from the Rockethub website as soon as the projects closed out. And I archived all that data, again, on Figshare.

So this time, all the data was public from the start.

Then, Jarrett and I blogged about what we were seeing in the data on the #SciFund blog. For instance, here’s one by me comparing the three rounds, and here’s one where Jarrett admits that the model he developed to explain success in Round 1 didn’t explain success in Rounds 2 and 3:

(M)y first response was to freak out a little bit on the inside. I mean, have we been wrong this entire time? Have I been talking out of my scientific butt? Or did I get something deeply and fundamentally wrong.

The published paper reminded me of how long in the making this thing was. It was submitted in the middle of June 2012, and was published December 2014. This is the second time this year I’ve had a paper that took well over two years to make it to print. Unlike the first case, which was due mostly to delays on the editorial side, the journal and the authors both contributed to this delay. First, like the Southwestern Naturalist situation, the PLOS ONE editor initially handling the article went AWOL on us, and we had to find a new editor. Second, we authors were not always prompt making our revisions. Coordinating four authors can be tricky, and I can testify that we all worked on this thing. There are no gift authorships here!

After we submitted the manuscript to PLOS ONE for review, we posted that reasonably complete draft as a pre-print on the PeerJ pre-print server.

What did I learn from this experience with doing the analysis out in the open?

Journal pre-publication peer review still matters

Plenty of people had chances to comment on our work, particularly after it was deposited in the PeerJ pre-print server. We did get comments on the pre-print, but the journal reviewers’ comments were more comprehensive.

Maybe we just got lucky with our reviewer. But others have also expressed the opinion that people are most likely to act as peer reviewers when they are being asked to do it for a journal.

Publishing a peer-reviewed journal article still matters

By the time the PLOS ONE paper came out, I’d spent several years blogging about #SciFund here at NeuroDojo, on the #SciFund blog, and talking a lot about it on Twitter and other social media. The pre-print is very similar to the final published PLOS ONE paper. I worried nobody would pay attention to the PLOS ONE paper, because there was not a lot there that we had not already talked about. Or so I thought.

Boy, was I wrong. The altmetrics for this article quickly rose, and are now tied with my article about post-publication peer review from much earlier this year.

The word of mouth was helped by Jai organizing a press release through University of California Santa Barbara. (I tried to interest my university in putting out a press release. Silence from them.) That helped generate a reasonable number of pointers to this article on Twitter.

We also tried making a video abstract. It has a couple of hundred views now, which is not horrible.


And we did a panel discussion the week after the paper was released, too:


Following the panel discussion, the #SciFund paper rated an article on the Nature website.

But even my regular followers, people I thought might have looked at the pre-print, were commenting on the published paper. The pre-print didn’t get the traction that the final published paper did.

An excellent example of this is that we had one detailed comment picking apart Figure 8 in the PLOS ONE paper. Someone could have made this comment at the pre-print stage – this is one of the usual arguments for making pre-prints available. But the PLOS ONE comments feature isn’t used that often, so my reaction to the criticism was kind of summed up thus:


We’re still trying to figure out how to respond formally. Should we try to issue an erratum to the figure? Just post a corrected figure in the comments? But here is a new version of the figure:


Being open and sharing data is a good thing to do. But my experience with this paper suggests to me that the “screw journals” approach is not ready for prime time yet. And this was a project that, in theory, should have been a good one to try the “just blog it all as you go” method of sharing science. #SciFund was born and raised online. It exists because social media exists. I would have expected this paper to have reached its audience well before the final PLOS ONE paper came out. But all the blogging, tweeting, and pre-prints did not equal the impact of the actual journal article.

I am pleased this article is finally out. But we still have analysis from round four of #SciFund, and we are starting to eye round five, so I don’t think this will be my last crowdfunding paper.

Reference

Byrnes JEK, Ranganathan J, Walker BLE, Faulkes Z. 2014. To crowdfund research, scientists must build an audience for their work. PLOS ONE 9(12): e110329. http://dx.doi.org/10.1371/journal.pone.0110329

External links

Crowdfunding 101 (Press release)
Crowdfunding works for science
Secret to crowdfunding success: build a fanbase
Do pre-prints count for anything?

Hat tip to Amanda Palmer, fellow traveler in crowdfunding, whose book title inspired this post’s title.

25 August 2014

Scientists must take the lead on improving academic publishing

I was having a discussion last week with some colleagues who are not in the sciences about their publishing experience. The conversation echoed an online one I had with Rebecca Schuman on Twitter after she published an article in Slate about academic publishing. The academic publishing process she described was not one I recognized. I barely recognized the publishing process in that conversation, either.

One colleague described the process of submitting to a reputable business journal.

  1. Start with a non-refundable $350 submission fee.
  2. Don’t expect any reviews in less than six months. In fact, even six months is optimistic.
  3. If your manuscript is given a recommendation of “revise and resubmit,” that will cost you another $150 submission fee.

I hear the litany of complaints above, and it seems to me that this is a ripe opportunity for an academic publisher. Surely someone can provide a better service to scholars than this. I wonder, “Where is the answer to PeerJ for business, the humanities, and other academic disciplines?”

The answer to why new journals are having a tough time breaking into those markets, of course, is “prestige.” Certain journals are viewed as “the best” (suspected translation: high Impact Factor), which means people will put up with a huge amount of inconvenience and strife to get published in them.

We scientists may whinge about our journals, but they seem to be doing a better job across the board than those in other academic fields. There are exceptions (grrr), but they are recognized as exceptions. Scientists have been slowly changing how academic publishing is done. And yes, it’s been too slow and too little, but we’ve made progress. We need to keep pushing so that our colleagues in other departments and colleges will have some of the same options in publishing that we are starting to enjoy.

Additional, 20 September 2014: There is a very interesting comparison of open access issues in sciences and humanities here. Hat tip to Mike Taylor.

External links

Revise and resubmit

Picture from here.

12 August 2014

The sticker price on AAAS’s Zune journal


We now have the first look at the American Association for the Advancement of Science’s promised open access journal, Science Advances.

Wow, that’s expensive.

They want $3,000 as an article processing fee. I have no idea what services they could offer that would justify a price that is double that of PLOS ONE and thirty times that of PeerJ.

It’s as if they don’t want it to succeed, as if their publisher thinks that the open access model of scientific publishing is fundamentally flawed...
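The price comparison can be checked with quick arithmetic. This little sketch uses the $3,000 Science Advances fee from above; the $1,350 PLOS ONE fee and the $99 PeerJ lifetime membership (treated as a one-paper cost) are my recollection of 2014 prices, not figures from AAAS.

```python
# Back-of-the-envelope check of the APC comparison.
# Assumed 2014 prices: PLOS ONE ~$1,350 per article;
# PeerJ $99 lifetime membership, counted as one paper.
science_advances = 3000
plos_one = 1350
peerj = 99

print(f"vs PLOS ONE: {science_advances / plos_one:.1f}x")  # roughly double
print(f"vs PeerJ:    {science_advances / peerj:.1f}x")     # roughly thirty times
```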

Update, 13 August 2014: A couple of other journals in the same market as Science Advances (online, open access) have made a couple of interesting announcements today.

eNeuro is specifically soliciting “Negative Results, Failure to Replicate, and Confirmation” papers. This matters, because replications are difficult to publish, but important.

eLife has just announced “Research advances”, which lets authors publish “significant additions” to their articles that are already out.

I like these. The more journals try these sorts of innovations, the faster they might be able to shake the label of being Zune journals. Will Science Advances try anything this innovative?

And while I’m here, note that a $3,000 sticker price for the default options (less than 10 page article, more restrictive copyright license) makes Science Advances one of the more expensive open access journals. I’ve placed it on the graph seen on this site.


Meanwhile, an open letter started by Jon Tennant to AAAS (I signed) has gotten a write-up on The Scientist.

Update, 14 August 2014: I realized the image of Science Advances cost didn’t show the range of possible expenses, so I updated it. I had to extend the X axis.


Just for the heck of it, I did the same for eNeuro (costs described here):


Still on the high end, but slightly closer to market value than Science Advances.

Related posts

AAAS creates another Zune journal 
Will AAAS get burned in the (scholarly) kitchen?
Opinion: OA Advocates Slam Science Advances

Hat tip to Jon Tennant. 

06 August 2014

Will AAAS get burned in the (scholarly) kitchen?

Yesterday, a single tweet announced that Kent Anderson would be the publisher for the American Association for the Advancement of Science (AAAS). This has now been confirmed by AAAS (longer press release here). I would like to congratulate Anderson for landing what must be a plum gig.

And I wish AAAS luck, because they will need it.

Anderson has a lot of knowledge about the scientific publishing industry, that much is clear. He has served as president of the Society for Scholarly Publishing.

But he also has a definite point of view about scientific publishing, which he has put forward at the blog The Scholarly Kitchen. If I may try to summarize some of his positions (The Scholarly Kitchen is a group blog, so I may be misremembering other people’s positions as his; correct me if I’m wrong):

Anderson argues that subscription journals from “for profit” publishers are the One True Path of scientific publishing. This, he says, is the only method of disseminating scientific information that has been sustainable (e.g., here).

He has been critical of open access publishing.  (Honestly, I wonder if he has a voodoo doll wearing a PLOS ONE t-shirt.) In this interview, he says, “I think (open access) is structurally flawed.”

He has been extremely critical of PubMed directing traffic away from publishers (e.g., here and here). That huge numbers of working biomedical scientists find PubMed invaluable to their research and use it every day seems not to matter much in Anderson’s point of view.

At every turn, Anderson has advocated positions that favour the interests of publishers over either the needs of working scientists or the general public.

It’s okay to have a point of view. It’s okay to defend publishers (I do it myself sometimes). But it seems odd to put Anderson, for-profit publisher booster, in charge of the publishing arm of a not-for-profit, member-driven scientific society, particularly at a time when the society is launching open access journals. Anderson has a reputation, and it’s not of someone who is interested in innovative publishing.

Anderson’s appointment is not a good move, public relations wise, for AAAS. But it will give other publishers an opportunity to contrast their positions with those espoused by Anderson. What publishers will support giving researchers and readers valuable tools that they want?

I once thought that Science magazine was ripe for takeover by its members to become an open access journal. Anderson’s appointment suggests that an open access takeover is exactly what AAAS fears, and that they want someone to dig a moat and man the barricades.

Edit: Struck “for profit” in a couple of places, as the subscription model is probably more central to Anderson’s argument. And, as noted in the comments, Anderson has worked with non-profits.


Related posts

Occupy Science (the journal)
AAAS creates another Zune journal

External links

Interview with the Scholarly Kitchen’s Kent Anderson 
Meet Kent Anderson, anti-#openaccess campaigner, publisher of Science

05 August 2014

Better a deluge than a drought

Another prominent opinion piece is crying again that there is too much low-quality research.

This annoys me so much. It presumes “quality” can be judged immediately and accurately, and that researchers should all be of the same mind about what the “right” research questions to ask are.

I wonder if, in other fields of creative endeavor, people write editorials calling for less work. “People are releasing more songs than ever, but have you noticed that there are still only 52 number one songs on Billboard magazine each year? We haven’t had any increases in number one songs since the 1950s!”

Would anyone ask a musician, “Why don’t you just write hits?” without expecting to get punched in the face? No, because there is some understanding that not everything is going to be a hit.

Let’s look at a few bits of the article here.

Estimates vary wildly, but probably between a quarter and a third of all research papers in the natural sciences go uncited.

“Uncited research” is research that has not been cited yet. This is a great strength of scientific literature: you can go back and look at the old stuff. Some projects have “long tails,” and it’s not possible to know when someone will stumble across something you have created and find out something relevant to what they are doing. For example, Hill (1979) was not cited for almost two decades. Failure? Maybe. But because there is no statute of limitations on when we can cite papers, it eventually was cited. (Yes, I’ve cited it.)

Scientific papers are love letters to the future. We write them in hope that not only will they be useful within the first few years of publication, but that they may be useful to researchers living long after we are not.

Some works will only reach a small audience. That does not automatically make them less worthy, or less influential.

To use the music analogy again:


In 1968, The Velvet Underground were releasing records that very few people bought. But their work lasted, and regularly shows up on “Best of all time” lists.

In comparison, the 1968 Grammy winner for Record of the Year that year was “Up, Up, and Away.”


It’s a breezy, catchy, even memorable tune, but... I bet it doesn’t show up on many “Best of all time” lists now. I wager not many people would know the name of the band now.

This in turn leads to the bane of every scientist's existence: far too many papers to read in far too little time.

Not my bane. It has never been easier for me to find papers that are relevant to my interests, thanks to Google Scholar and similar tools.

One reason is the rise of author-pays open-access publishing. This has the benefit of allowing anyone to read the research in question without paying the publisher, but also has the disadvantage of giving publishers a strong commercial incentive to issue as much content as possible. ...
(S)ubscription business models at least help to concentrate the minds of publishers on the poor souls trying to keep up with their journals.

Elsevier has almost 3,000 technical journals, Springer has 2,200, and Wiley has 1,500, most of them subscription journals. That, to me, does not suggest that subscription-based publishers are trying to keep the literature down to a manageable size.

Subscription publishers have incentives to publish more scientific literature, just like open access publishers do. If each journal tends to be profitable, then publishers have an incentive to make more journals. The more journals they can put in their “big deal” packages, the more they might make.

The incentive to publish is not coming from publishers. The incentive to publish comes from administrations, funding agencies, hiring committees, tenure and promotion committees. They all count publications. This seems indisputable. Indeed, the article goes on to admit this:

On one hand funders and employers should encourage scientists to issue smaller numbers of more significant research papers. This could be achieved by placing even greater emphasis on the impact of a researcher's very best work and less on their aggregate activity.

How are we going to evaluate “best” work? Unfortunately, the typical way that “very best work” is evaluated now is the journal Impact Factor (van Dijk et al. 2014). The problems of using Impact Factor to assess individual work are many, to put it mildly (Brembs et al. 2013).

In the end, we get a bait and switch! Instead of what the piece initially calls for (publish less), it ends with a call to publish even more. Now we are supposed to publish data in addition to our papers:

On the other they should require scientists to share all of their results as far as practically possible. But most of these should not appear in the form of traditional scholarly papers, which are too laborious for both the author and the reader to fulfil such a role. Rather, less significant work should be issued in a form that is simple, standardised and easy for computers to index, retrieve, merge and analyse. Humans would interact with them only when looking for aggregated information on very specific topics.

So the issue is filter failure, not information overload.

Whatever the shortcomings of traditional journal articles are, they realize the awesome power of narrative. This is, I think, the reason why scientific journals have never just published the data, as I wrote before:

If science is purely and solely about “the facts,” why do we publish scientific papers at all? Why not just upload methods and datasets? If you have the data and the methods to generate them, isn’t that all you need to assess the “facts” in play?

(T)here is an inherent connection between stories and experimental science: they are both about causes. A satisfying story is built around causal connections. Without those causal connections, you have a series of disconnected events that makes about as much sense as a random inkblot.

If we struggle with too many papers now, we will struggle even more with too many datasets.

Ignorance is a much, much bigger problem than too much knowledge.

References

Brembs B, Button K, Munafò M. 2013. Deep Impact: unintended consequences of journal rank. Frontiers in Human Neuroscience 7: 291. http://dx.doi.org/10.3389/fnhum.2013.00291

Hill GW. 1979. Biogenic sedimentary structures produced by the mole crab Lepidopa websteri Benedict. Texas Journal of Science 31(1): 43-51.

van Dijk D, Manor O, Carey LB. 2014. Publication metrics and success on the academic job market. Current Biology 24(11): R516-R517. http://www.cell.com/current-biology/abstract/S0960-9822(14)00477-1

Related posts

I’m a pebble in the avalanche
Balkanizing small universities 
Storytelling is dead, long live narrative

External links

Stop the deluge of science research

Photo by Broo_am (Andy B) on Flickr; used under a Creative Commons license.

06 June 2014

Self-fulfilling prophecies, academic publishing edition


From the “The Top 100 Things I’d Do If I Ever Became An Evil Overlord”:

I will dress in bright and cheery colors, and so throw my enemies into confusion.

In other words, if you are an Evil Overlord, you don’t need to act the part so obviously.

The academic publisher Taylor & Francis would do well to heed this lesson. If you want to talk about a self-fulfilling prophecy, read this article. We learn that a journal article, critical of the business practices of academic publishers, was delayed and held up by, wait for it, the academic publisher of the journal in question.

(T)he non-appearance of the journal in September was followed, two months later, by a letter from a senior manager at Taylor & Francis demanding that more than half of the proposition article be cut. ... (W)hen the edition was finally published, Taylor & Francis unilaterally added a long disclaimer to each article warning that “the accuracy of the content should not be relied upon”.

It is rare to have such obvious meddling over such an obvious conflict of interest.

Despite my joking about publishers as Evil Overlords, this isn’t just about academic publishers. This is about the conflicts of interest that can arise any time you have gatekeepers. Open access publishing in and of itself wouldn’t solve this problem. (The article in question is open access.) Nor would having journals published by scientific societies solve this problem. An editor could be just as difficult and unreasonable as a publisher. So can reviewers.

Hat tip to Brembs.

Update, 24 June 2014: Taylor & Francis have apologized for their behaviour... albeit in a weak way:

Professor Macdonald said it had taken “hours” to agree a version of the letter with which he was satisfied and which avoided words the publisher was unwilling to use, such as “sorry”, “mistake” or “censorship”.

I’m not sure if it counts as an apology if you’re not willing to say “sorry.” C’mon, Taylor & Francis. Say, “Sorry.” It won’t kill you.

The article critical of publishers has become the most read thing in the journal Prometheus ever.

Hat tip to Mike Taylor on this update.

Reference

Harvie D, Lightfoot G, Lilley S, Weir K. 2013. Publisher, be damned! From price gouging to the open road. Prometheus: Critical Studies in Innovation 31(3): 229-239.

External links

Peter’s Evil Overlord list
Resignations threat over Taylor & Francis ‘censorship’
Taylor & Francis apologises after ‘censorship’ debate

Picture from here.

15 April 2014

If I’m to be skeptical of open access publishing, I’d like to have good reasons

A.A. Agrawal (whose alliterative name I envy) has penned a letter saying that people should be skeptical of open access publishing. In brief, his reasons are:

1. “Some are for profit!” Just like most traditional scientific publishers. While Agrawal says this creates a conflict of interest, the profit incentive can also move journals to provide more and better services for authors at lower prices; see this interview with open access publisher Ahmed Hindawi.

Hindawi’s argument seems to be that in a subscription market librarians have no choice but to buy access to an entire journal in order to provide their institution’s researchers with access to any single article in it — since no other journal can substitute for the one in which the desired article has been published.

In an OA world, by contrast, authors will have the choice of taking their papers to a number of different publishers (i.e. shop around). And since the paper will be freely available to all once it is published, there can be no monopoly on access. This, says Hindawi, will drive prices down.

2. “Most don’t copy edit.” Data or citation needed.

3. “There isn’t a citation advantage.” This is the point that is most annoying, because it shows poor scholarship. Agrawal cites one study that finds no citation advantage, but there have been many others. It’s worth having a look through this summary of many projects testing this hypothesis. Not all of them show an open access advantage, but many do. Picking one study alone, even if it’s a good one, smacks of confirmation bias.

4. “People care about journal prestige.” Alas, the “prestige” argument is probably one of the main reasons that people will not consider particular open access journals. However, it ignores the prospects for article-level metrics. And I am not impressed with an argument that can be rephrased as, “Your peers are lazy.”

Hat tip to Prof-like Substance.

Reference

Agrawal AA. 2014. Four more reasons to be skeptical of open-access publishing. Trends in Plant Science 19(3): 133.

Reposted at Always Researching, with a link here but not credit (yet). Hat tip to Ross Mounce for noticing.