03 October 2013

Open access or vanity press, the Science “sting” edition

Earlier this year, I co-moderated a panel at Science Online about open access. To me, it was about the reputation economy in science, and how new journals and models could gain credibility in a changing publishing environment. Today, Science magazine published an article that is being described as a “sting” operation to test whether a fatally flawed paper could get published in open access journals.

The quick reaction from the online community: why wasn’t this investigation extended to subscription-based journals? This is a completely valid criticism. As published, the article seems to have had a predetermined end point: to show that open access publication is harmful to science.

I may have more to say about this later, but for now, I’m collecting reactions in the external links below.

Additional, 4 October 2013: Here’s a general problem with this sort of investigation. If you have studied statistics, you know there are two kinds of errors you can make: a “miss” (Type II error) or a “false alarm” (Type I error).

The Science article, like every case of a hoax article accepted by a journal, only gets at the “misses” of peer review: the crazy papers that should have been rejected, but weren’t.

Nobody has a way of investigating the “false alarms” of peer review: decent papers that should have been published, but were rejected by reviewers or editors. I think everyone has stories of prominent, significant papers that were initially rejected but went on to have a big impact in the field. The problem is that there is no agreement on what makes a paper acceptable for publication, particularly given that many journals screen for “importance,” which is completely subjective.

If you could do the experiment – “Here is a paper that is completely publishable, competent science. Now let’s see how many journals reject it.” – you might find that the rate of mistaken rejections is far higher than mistaken acceptances.
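The asymmetry between the two experiments can be made concrete with a toy sketch. This is a hypothetical illustration, not anything from the sting itself: the `error_rates` function and the 60-of-100 rejection figure are invented; the 157-of-304 acceptance count comes from reports of the sting’s results.

```python
# Peer review framed as a binary classifier, scored on both error types:
#   "false alarm" (Type I): rejecting a sound paper
#   "miss" (Type II):       accepting a fatally flawed paper

def error_rates(sound_rejected, sound_total, flawed_accepted, flawed_total):
    """Return (false_alarm_rate, miss_rate) for a batch of decisions."""
    return sound_rejected / sound_total, flawed_accepted / flawed_total

# A sting like Bohannon's measures only the miss rate: 157 of the 304
# targeted journals reportedly accepted the fatally flawed paper.
_, miss = error_rates(0, 1, 157, 304)
print(f"miss rate: {miss:.0%}")

# The experiment nobody runs: submit a competent, publishable paper
# widely and count mistaken rejections (the 60/100 below is invented).
false_alarms, _ = error_rates(60, 100, 0, 1)
print(f"false-alarm rate (hypothetical): {false_alarms:.0%}")
```

The point of the sketch is only that a sting which submits one kind of paper can ever measure one of the two rates.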

Related posts

Science Online 2013: “Open access or vanity press?” appetizer

External links

Who’s afraid of peer review? (Science magazine’s “sting”)
Live Chat: Exploring the 'Wild West' of Open Access (Recorded 10 October 2013; video of chat still available for review)

Science reporter spoofs hundreds of open access journals with fake papers
I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals
New “sting” of weak open-access journals
2009 reflection on the 2013 Bohannon sting.
Flawed sting operation singles out open access journals
John Bohannon’s peer-review sting against Science
Which is it?
Open access publishing hoax: what Science magazine got wrong
Science Magazine rejects data, publishes anecdote
How embarrassing was the ‘journal sting’ for Science magazine?
Open access “sting” reveals deception, missed opportunities
Who’s afraid of open access?
Academic publishing: Science’s Sokal moment
"Open access spam" and how journals sell scientific reputation
What Science’s “Sting Operation” reveals: open access fiasco or peer review hellhole?
Science Mag sting of OA journals: is it about Open Access or about peer review?
What’s “open” got to do with it?
Heads up new Science is a special issue on scholarly communication
Glam mag fucks up, news at 11
The troubled peer review system, the open access wars, & the blurry line between human subjects research & investigative journalism
On John Bohannon article in Science
What Science — and the Gonzo Scientist — got wrong: open access will make research better
Stones, glass houses, etc.
Predatoromics of science communication
Science gone bad, or the day after the sting
Fake cancer study spotlights bogus science journals
Science Magazine Conducts Sting Operation on OA Publishers: This article notes that at least four websites – I hesitate to call them journals – published the faked paper, against the wishes of the author. Two have since been withdrawn.
A publishing sting, but what was stung?
Science's open access challenge
Unscientific spoof paper accepted by 157 "black sheep" open access journals - but the Bohannon study has severe flaws itself
Critics say sting on open-access journals misses larger point
Hoax reveals ‘Wild West’ of open-access science journals
Fake science journals create a scholarly Wild West for fast money
The Wild West world of open-access journals
What hurts science – rejection of good or acceptance of bad?
It may have been a flawed #OpenAccess "Sting" but WE ROCKED IT so submit to our journal ...
Sting operation demonstrates the value of journal Impact Factors
About Science's open access “sting”
A veritable sting
The open access sting: a missed opportunity?
Science’s straw man sting
“Truthiness” isn’t quite truth, and “sciencey” isn’t quite science, even if published in Science: Mike Taylor’s “Anti-tutorial: how to design and execute a really bad study”
Science magazine’s open-access sting lacks bite
The good, the bad, and the ugly: Open access, peer review, investigative reporting, and pit bulls
Who is afraid of Peer review: Sting Operation of The Science: Some analysis of the metadata
The real peer review: post-publication
The Bohannon “Sting”; Can we trust AAAS/Science or is this PRISM reemerging from the grave?
Who was stung – open access or peer-review?
Why a Harvard scientist wrote a bogus paper and submitted it for publication - CBC Radio interview with Bohannon
Bohannon and Science: bogus articles and PR spin instead of peer review
Peer review quality is independent of open access
Fallout from John Bohannon’s “Who’s afraid of peer review”
Lessons from the faux journal investigation - 15 October 2013
Fallout from Science’s publisher sting: Journal closes in Croatia - Retraction Watch, 17 October 2013
The Science magazine hoax (mBio wasn’t fooled, in case you’re wondering) - mBiosphere, added 18 October 2013
John Bohannon’s Open Access sting paper annoys many, scares the easily scared, accomplishes relatively little - Melville House, added 20 October 2013
DOAJ's response to the recent article in Science entitled “Who’s Afraid of Peer Review?” - Directory of Open Access Journals, added 21 October 2013
Second response to the Bohannon article 2013-10-18 - Directory of Open Access Journals, added 21 October 2013

5 comments:

  1. There have been a few "stings" where people have submitted the text of highly regarded past papers to various journals, only to have them rejected, generally without the referee/editor noticing that it was the same text as a famous experiment. Those are usually cited as examples of the obliviousness of editors/referees, but they might speak somewhat to the "false rejection" issue.

    ReplyDelete
  2. For me, it is more important to find a few sources of light in this ocean of darkness. People are busy pointing out the weaknesses of the study, how it should have been conducted, and so on. Some consider it a ‘designed study to produce a designed baby’, and I AGREE with all of them. Yes, all of that is true. But in this huge quarrel and cacophony, are we not neglecting some orphan babies born from this study (yes, they were born accidentally, not designed or expected, as many big discoveries are)?

    I have made a simple analysis of the raw data from John Bohannon’s report.

    Bohannon used very few words to praise or highlight the journals/publishers who passed the test. He only mentioned PLOS ONE and Hindawi, who are already accepted by academics for their high reputation. At the least, I expected Bohannon to include a table highlighting the journals/publishers who passed the test. I spent a little time analyzing the data. Surprisingly, I found some errors in how Bohannon indicated the category of each publisher (DOAJ / Beall’s list). I have noted some of these errors, though I could not finish cross-checking all 304 publishers/journals. Bohannon used DOAJ and Beall’s list as his main criteria for selecting journals, but errors in this category data may indicate that he spent more time collecting the raw data than analyzing or curating it.

    I found that more members of Beall’s list are present in Bohannon’s study than he reported.

    Table 1: 20 journals/publishers that rejected the paper after substantial review (may be considered white-listed)
    Table 2: 8 journals/publishers that rejected the paper after superficial review (may be considered white-listed, borderline)
    Table 3: 16 journals/publishers that accepted the paper after substantial review (may be considered blacklisted, borderline)
    Table 4: journals/publishers that accepted the paper after superficial or no review (may be considered confirmed blacklisted)
    Table 5: journals/publishers that rejected the paper but recorded no review details (labeling avoided)

    Link to my post: http://wp.me/p3eOrO-Z


    Akbar Khan
    India

    ReplyDelete
  4. Pre-Green Fool's-Gold and Post-Green Fair-Gold OA

I would be surprised if there were no subscription journals that accepted the Bohannon sting paper for publication too. But I would be even more surprised if the same proportion of field-, age-, size- and impact-factor-matched subscription journals accepted it as did the pay-to-publish OA journals ("Gold OA").

    Subscription journals have to maintain enough of an appearance of peer review to sustain their subscriptions. Pay-to-publish Gold OA journals just have to maintain enough of an appearance of peer review to attract authors (and maybe pay-to-publish is enough to attract many authors in our publish-or-perish world without even the appearance of peer review, especially along with the fashionable allure -- or excuse -- of the journal's being OA).

    This problem would not be remedied by just lowering Gold OA journal publication fees. Nor is it a systemic problem of peer review. It is a problem of peer review for Gold OA journals at a time when there is still far too little OA and most journals are still subscription journals, most authors are still confused about OA, many think that OA means Gold OA journals, and, most important, there are not yet enough effective mandates from research funders and institutions that require authors to make their papers OA by depositing them in their institutional OA repositories ("Green OA") regardless of where they were published.

    If it were mandatory to make all papers Green OA, authors would simply deposit their peer-reviewed final drafts in their institutional OA repositories, free for all, immediately upon acceptance for publication. They would not have to pay to publish in Gold OA journals. Once all journal articles were being made Green OA in this way, institutions would cancel all their journal subscriptions, which would in turn force all journals to cut costs and convert to Gold OA publishing at a much lower fee than is being charged now by OA journals: post-Green Fair Gold instead of today's pre-Green Fool's Gold.

    But, most important, the only remaining service that journals (all of them having become Gold OA) would be performing then, post-Green, would be peer review. All access-provision and archiving would be done by the Green OA repositories (so no more print or PDF editions). And for just peer review, journals would no longer be charging for publishing (which would then just amount to a tag certifying that the article had been accepted by journal J): they would be charging for the peer review.

    And each round of peer review (which peers do for free, by the way, so the only real cost is the editor who evaluates the submission, picks the referees, and adjudicates the referee reports, plus the referee tracking and communication software) would be paid for on a "no-fault" basis, per round of peer review, whether the outcome was acceptance, rejection, or revision and resubmission for another (paid) round of peer review.

    Unlike with today's Fool's Gold junk journals that were caught by Bohannon's sting, not only will no-fault post-Green, Fair-Gold peer-review remove any incentive to accept lower quality papers (and thereby reduce the reputation of the journal) -- because the journal is paid in any case -- but it will help make Fair-Gold OA costs even lower, per round of peer review, because it will not wrap the costs of the rejected or multiply revised and re-refereed papers into the cost of each accepted paper, as they do now.
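The pricing point in this comment can be illustrated with invented numbers. This is only a sketch of the argument, not anything from the comment itself: the $200 cost per review round and the 25% acceptance rate are assumptions chosen for the arithmetic.

```python
# Hypothetical cost model for "no-fault" peer-review fees.
# Conventional Gold OA: the review costs of rejected papers are folded
# into the fee charged for each accepted paper.
# No-fault: every submission pays per round of review, accepted or not.

COST_PER_REVIEW_ROUND = 200.0  # editor time + software, per round (assumed)

def fee_per_accepted_paper(acceptance_rate):
    """Conventional model: accepted authors subsidize the rejected."""
    return COST_PER_REVIEW_ROUND / acceptance_rate

def no_fault_fee():
    """No-fault model: each round is billed at cost, win or lose."""
    return COST_PER_REVIEW_ROUND

# A selective journal accepting 25% of submissions:
print(fee_per_accepted_paper(0.25))  # 800.0 -- each acceptance carries 3 rejections
print(no_fault_fee())                # 200.0 per round, regardless of outcome
```

The more selective the journal, the bigger the gap: under the conventional model, higher rejection rates raise the fee per accepted paper, while under no-fault billing the per-round price is unaffected.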

    So post-Green Fair Gold will not only reduce costs but it will raise peer-review standards.

    None of this is possible, however, unless Green OA is effectively mandated by all institutions and funders first.

    ReplyDelete
  5. Harnad, S. (2007) The Green Road to Open Access: A Leveraged Transition. In: Anna Gacs. The Culture of Periodicals from the Perspective of the Electronic Age. L'Harmattan. 99-106.


    ________ (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).

    ________ (2013) The Science Peer-Review "Sting": Where the Fault Lies. Open Access Archivangelism 1059.

    ReplyDelete

Comments are moderated. Real names and pseudonyms are welcome. Anonymous comments are not and will be removed.