Look, I am not one for conspiracy theories.
But if I were a major academic publisher, and I saw a lot of business going to new, upstart publishers that had even slightly questionable editorial practices, I’m not sure I would back away from a little aggressive corporate public relations. I might set up a website that claimed to be providing a service by listing dodgy journals and publishers, and then – Bam! 💥 – label a bunch of stuff coming from the competition as “predatory.”
Meanwhile, today I learned about a website called Predatory Reports in Scientific Publishing. I noticed it because they recently stuck a bunch of MDPI journals on their blacklist.
Their Twitter account has been around for a little over a year. It’s interesting that I never heard of them before they targeted MDPI, which fits the narrative of people who are already (legitimately) grumpy with MDPI.
Looking around their website, I clicked the “About” page and thought this was an overly honest image:
An empty conference room.
The written description is no more informative. This group is an “association of scientists and researchers.” No names, no emails, no indication of how many people are involved, nothing.
Now, there may be a reason for this. The debates about anonymity in science are long, but usually come down to, “There are some cases where it’s appropriate.”
For instance, the creator of the original blacklist of predatory journals, Jeffrey Beall, tended to have people threaten to sue him with alarming regularity. You may recall he took down his list and never explained why.
But a far more important question is how this “association” decides what journals to include on their list. There is zero description of criteria or process here.

Compare this to another organization that tries to identify predatory publishers, Cabell’s. Their list is a commercial product, but their “About predatory reports” page at least promises that the full report includes, “A complete record of when, what and why a journal is put in Predatory Reports.”
I looked at one of their blog posts about SCRIP to try to glean how they are assessing publishers. They quote Beall’s work, they quote Cabell’s report, and have a list of references at the end. It looks like just a haphazard compilation of other people’s writings.
This is a step up from how I thought the list might be generated (just by using vibes), but there doesn’t seem to be anything new here. I don’t see a lot of value.
For an association that wants to “build trust in scientific research and publications,” they don’t seem to have given much thought to how a blacklist made through a completely opaque process, by a group of utterly unknown people, looks more like a star chamber than a trustworthy organization.