Showing posts with label publication. Show all posts

09 December 2020

Pick where you publish wisely

Saw a complaint about how PLOS had created predatory journals and made publication inaccessible through its article processing charge (APC) business model.

When PLOS ONE started, there was no guarantee the APC model would work. It works because scientists choose to publish there, fully aware of the costs.

It’s not a journal’s fault that scientists use it. Scientists have options. If you don’t like a business model, don’t submit there. Then, convince colleagues.

21 August 2020

Age is irrelevant to bad research papers

Despite its decisive drubbing in Kitzmiller v. Dover, intelligent design just keeps showing up like the proverbial bad penny. The latest poking of the intelligent design helmet out of the foxhole is a paper in the PNAS, spotted on Twitter.

I don’t like that an intelligent design paper was published in a journal. But nor do I like comments about the age of the author.

Guess that’s what happens when a paper about evolution by an 87-year-old physicist is reviewed by another physicist and a complex systems theorist. (here)

And:

Author is 87. (here)

That the author is in his 80s is trotted out as though it’s an explanation. How? How is the author’s age at all relevant? I don’t know of any data that show people slide into intelligent design beliefs as they get older.

Don’t judge work by the age of authors. That’s ageist.

08 July 2020

Update on the Better Posters book

Better Posters book cover

The Better Posters book is inching closer to reality!

The book now has a:


Most of the discussion about the content of the book will be over at the Better Posters blog, although I will occasionally talk about the creation and backstory of the book here.

My only disappointment is that the ISBN is not a prime number. Divisible by 379. Damnit.
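For anyone who wants to run the same check on their own book, here is a quick trial-division sketch in Python (the factorization functions are mine, written for illustration; no real ISBN appears here):

```python
def is_prime(n):
    """Trial division: True if n has no divisor other than 1 and itself."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def smallest_factor(n):
    """Smallest prime factor of n (returns n itself when n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

is_prime(379)                  # → True: 379 is prime
smallest_factor(379 * 101)     # → 101: a composite built from 379 is caught
```

A 13-digit ISBN is small enough that trial division up to its square root (a few million steps at most) settles the question instantly.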


29 May 2020

When manuscript formatting is a deal breaker

Academics have bugbears. And a very common one is manuscript formatting. “We shouldn't have to spend our time fiddling with a word processor to meet a journal’s formatting requirements, we should be collecting data! Grumble grumble grumble.”

I get it. I get why people like to complain about this. But to be honest, of all the things to complain about in academia, I always thought this did not warrant the amount of whinging it got.

Most journals have clear instructions. They are usually not terribly difficult to follow if you’ve had a bit of practice. With modern reference management software, switching between reference styles – the most time-consuming, tedious part of formatting – can be as easy as pushing a couple of buttons.

But today I was reminded that everyone has their limits.

I was looking for a home for a manuscript I was working on, and came across a journal I thought would be a very appropriate home. I looked at the instructions for authors, and came to a screeching halt here.

Do not insert references using software such as EndNote, which adds a layer of embedded coding to the manuscript; use only MS Word’s built-in endnote function to insert references.

I was surprised. I’d never seen a journal prohibit authors from using a tool that so obviously exists to make writing for journals easier.

I’ve used EndNote for years, and I have no desire to learn how to use Word’s referencing tools. Too far from my writing workflow. I will not be submitting my paper to this journal.

I don’t want to piss off editors by submitting a manuscript that doesn’t meet their manuscript guidelines. I like following instructions. But with other relevant journals out there, it seems like the easiest thing for me to do is to...

"Just walk away" - Lord Humungous from Mad Max 2 / The Road Warrior

Just walk away.

I’m not alone here. To my surprise, a lot of people are willing to walk away from submitting to a journal purely because of the formatting requirements, based on this Twitter poll.

20 March 2020

Find me on Amazon!

I am getting ready for the forthcoming release of the Better Posters book! I now have my own author’s page on Amazon!

Find me at: https://www.amazon.com/author/zenfaulkes

My Kindle edition of Presentation Tips and the Freshwater Crayfish anthology I helped edit are already there.

30 January 2020

Times Higher Education feature article on authorship disputes


I’m busy copyediting the Better Posters book and grading and teaching, but I wanted to stick my head out of my hole to point to a great feature article in Times Higher Education on the subject of authorship disputes.

I have a few quotes in this article. It’s clearly an outcome of the paper on authorship disputes I published over a year ago now. (Sometimes, you’re so busy with one project you forget about the “long tail” of earlier projects.) I was also lucky that I’ve talked to journalist Jack Grove before and was in his email contact list.

I’m rather amused that while I chose to illustrate these conflicts with a picture of chess pieces, the Times chose... hockey. As a Canadian, I can do nothing but approve.

External links

What can be done to resolve academic authorship disputes?
Whose Paper is it Anyway? A Discussion on Authorship (Illustration)

Related posts

You think you deserved authorship, but didn’t get it. Now what? 
How wasting time on the internet led to my new authorship disputes paper


27 November 2019

Is this a real journal?

A student of mine went to a conference, then got an email from an unknown journal. The student asked me if this was normal and whether the journal was legit. Here’s the process I went through to evaluate the journal and try to help the student.

I googled the journal title. First thing I noticed was the domain name. The publisher's name is not a correctly spelled English word, which either means the publisher is trying to be gimmicky or using a non-English spelling. Neither makes a good first impression.

The sidebar lists journal information, and I see “Year first Published: 2019”. So even if this is a legitimate journal, it has no track record and probably no reputation. And journals are all about reputation.

Nor does the journal info sidebar say anything about the journal being indexed anywhere, like Web of Knowledge or Scopus. Most aspiring legitimate journals at least mention indexing, whether or not they currently have it, because most authors want their work to be findable in academic searches.

The second paragraph of the journal description has a glaringly obvious typo about the type of research the journal publishes (“-olog” instead of “-ology”). This suggests that someone is not paying attention to the home page. This could be because they are a fly-by-night operation that is only interested in charging authors, or that they’re new or inexperienced and can’t be bothered to proofread.

So this looks like either a scam (likely) or something made by careless amateurs. Neither’s good.

24 October 2019

How academic publishing is like a really nice bra

In my jackdaw meanderings around the internet, I stumbled on this thread from Cora Harrington.

Sometimes I like to look at lace prices on sites like Sophie Hallette. It’s good for giving perspective on how, even if the cost of lingerie was just fabrics (and it’s not because people should be paid for their labor), many items would still be expensive.

She gives many examples, of which I will show just one (emphasis added):

The Chloris reembroidered lace is around $1600/meter.


And that isn’t the most expensive one. Cora concludes:

When someone says “There’s no way x could cost that much,” keep in mind that there are fabrics - literally just the fabrics - that can cost 4 figures per meter.

And the labor - the expertise - involved in knowing how to handle these fabrics is worth many, many times more.

This made me think a lot about academic publishing. Because I am always fascinated by people who say that undergraduate textbooks or journal subscriptions or article processing fees for open access publishing cost “too much.” When someone says something costs “too much,” that means they have some notion in their head of what the “right” price is.

But as this example shows, people don’t always have a clear conception of the costs involved. And people complaining about costs sometimes tend to assume that the labour involved is simple, quick, and not worth paying a decent wage for.

This is not to say prices can’t be too high. But at least as far as academic publishing goes, I’ve only seen one attempt to work out what costs are. That is, apart from publishers themselves, who have conflicts of interest in calculating and disclosing costs.

04 October 2019

Who co-authored the most read paper in JCB? Me.

Screenshot of Journal of Crustacean Biology advance articles and "Most read" list, with crayfish cell culture at top of "Most read"

Yes, I know there are all kinds of problems with mystery metrics. Yes, I know this reflects the new paper I co-authored being, well, a new paper with no paywall. Yes, I know that this won’t necessarily reflect the long-term impact of the paper.

Still. It feels nice.

Far too often, publishing academic papers feels like shouting into a vacuum. Or the most agonizing of slow burns, where it takes years to know if other people will pick up on what you’ve done. So a little short term feedback like this is pleasant.

26 July 2019

“Follow the rules like everyone else” is not punishment

Because I curate a collection of stings and hoaxes, I have been following the so-called “grievance studies” affair by Helen Pluckrose, James Lindsay, and assistant professor Peter Boghossian (the only academic of the trio). They sent hoax papers to journals. Many people have sent hoax papers to journals (hence my anthology), but Pluckrose and colleagues described it as an experiment and published it.

Inside Higher Ed reports:

Boghossian was ordered last year to take research compliance training; he has not yet done so, the letter states. Because Boghossian has not completed Protection of Human Subjects training, he is forbidden from engaging in research involving human subjects or any other sponsored research.

In other words, “Follow the same rules as everyone else.”

Just by way of comparison, and to give you an idea of what research with humans normally entails, I did an online survey for a couple of research papers (here’s one). That’s less intrusive than what Boghossian and colleagues did. I had to:

  • Go through “research with human subjects” training.
  • Submit a proposal to an institutional review board and have it approved.
  • Include detailed descriptions of the potential benefits and risks to anyone viewing the survey.

So “Take training before you do more research” is what anyone should do.

But some reporting makes it sound like Boghossian is being treated arbitrarily (emphasis added).


My prediction is that this is going to become a talking point in the American culture wars, with some trying to paint Boghossian’s letter as a dire consequence that has a chilling effect on academic freedom, is political correctness gone mad, continue buzzwords until exhausted.

Unfortunately, the language of the letter Boghossian got was pretty severe, which will contribute to the impression that the consequences for Boghossian are bad.

And it is bad, of course. It’s embarrassing to get called out for your actions and told you didn’t do the right thing by this institution and your profession.

But I bet a lot of people wish their punishment for something was a letter saying, “Follow the rules.” I’m sure some teenagers would like that more than being grounded.

19 July 2019

Audiopapers

Photograph of microphone

Corina Newsome! This is your fault! You had to go and say:

Can we get scientific journal articles on audiobook? Please?

There is a long thread that follows about possible solutions. But two things emerge:

  1. Software to read papers aloud automatically doesn’t do a very good job.
  2. Quite a few people want these.

Following my long standing tradition of, “What the heck, I’ll have a go,” I’d like to present my first audiopaper! It’s a reading of my paper from last year on authorship disputes.

I decided to do this because I wanted to get more mileage out of a mic I’d bought for a podcast interview (forthcoming), and because I still have this discussion in the back of my head.

I often tell students, “Always plot the data”, since different patterns can give the same summary stats. How could I help visually impaired students do something similar?

And the answer is that while there have been experiments in sonification of data, it seems to have stayed experimental and never moved into simple practical use. It got me thinking about how little we do for visually impaired researchers.
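The “same summary stats, different patterns” point is exactly what Anscombe’s quartet demonstrates. A quick sketch using the first two y-series of the quartet (the published values): one series is roughly linear, the other is a clean curve, yet their means and variances agree.

```python
from statistics import mean, variance

# First two y-series of Anscombe's quartet. Plotted, y1 is a noisy
# line and y2 is a smooth parabola, but the summary stats match.
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

print(round(mean(y1), 2), round(mean(y2), 2))          # prints: 7.5 7.5
print(round(variance(y1), 2), round(variance(y2), 2))  # prints: 4.13 4.13
```

Sighted readers see the difference instantly on a scatterplot; the numbers alone hide it completely, which is why plotting (or some non-visual equivalent) matters.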

I picked my authorship disputes paper for a few reasons.

  1. There are no bothersome figures to worry about describing.
  2. The topic probably has wider appeal than my data driven papers.
  3. The paper is open access, so I wouldn’t run afoul of any copyright issues.
  4. The paper is reasonably short.

I wrote a little intro and a little outro. I pulled out my mic, fired up Audacity, and got reading. My first problem was finding a position for the mic where I could still see the computer screen so I could read from my paper.

I broke it into sections (slightly more sections than there are headings in the paper). I think it took between one and two hours to read the whole thing. It’s not quite a single take, but it’s close.

I’ve since figured out that I can probably do longer sessions, because I worked out how to identify sections I want to edit out because I stumbled or mispronounced words. After I screw up a sentence, I snap my fingers three times. This creates three sharp spikes in the playback visualization that are easy to see. That makes it easy to find the mistake, then edit the gaffe and the finger snaps out of the recording.

Screen shot of sound recording in Audacity comparing speech and finger snaps.
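The snap trick could even be automated. Here is a rough sketch (my own illustration, not part of any Audacity feature) of scanning a mono waveform for those spikes, assuming samples normalized to the range −1 to 1:

```python
def find_spikes(samples, threshold=0.8):
    """Return indices where the signal first jumps above the threshold.

    Three such indices close together would mark a triple finger snap.
    """
    spikes = []
    prev_above = False
    for i, s in enumerate(samples):
        above = abs(s) > threshold
        if above and not prev_above:  # rising edge only, so one hit per snap
            spikes.append(i)
        prev_above = above
    return spikes

# Quiet speech-level samples, then three loud snaps, then quiet again:
signal = [0.05] * 50 + [0.95, 0.1, 0.0] * 3 + [0.05] * 50
find_spikes(signal)  # → [50, 53, 56]
```

Ordinary speech rarely pegs the meter, so a simple amplitude threshold is usually enough to separate deliberate snaps from the narration around them.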

I learned that it can be surprisingly hard to say “screenplay” correctly. And I curse my past self who wrote tongue twisters like “collaborative creator credit.”

Editing the recording also took about an hour. Besides cutting out my stumbles and finger snaps, I cut out some longer pauses and occasional little background sounds. The recording was a bit quiet, so I increased the gain a few decibels.

Will I do more of these? It completely depends on the response to this experiment. I probably picked my single easiest paper to read and turn into an audio recording. It would only get harder from here. And I have other projects that I should be working on.

If people like this effort, I’ll see about doing more, maybe with better production. (I wanted to put in some music, but that was taking too long for a one off.)

External links

Resolving authorship disputes by mediation and arbitration on Soundcloud

10 June 2019

Journal shopping in reverse: unethical, impolite, or expected?

A recent article describes a practice unknown to me: some authors submit a paper for review, and if the reviews are positive, withdraw it and try again at another “higher impact” or “better” journal.

It is entirely normal for authors to go “journal shopping” when reviews are bad: submit the article, and if the reviewers don’t like it, resubmit it to another journal. But this is the first time I’d heard of the process going the other way. It would never even occur to me to do this.

Nancy Gough tweeted her agreement with this article, and said that this behaviour was unethical. And she got an earful. Frankly, online reaction to this article seemed to be summed up as, “I know you are, but what am I?”

A lot of the reaction that I saw (though I didn’t see all of it) seemed to be, “Journals exploit us, so we should exploit journals!” or “Journals should pay us for our time.” This seemed to be directed at for-profit publishers, but people seemed to be lumping journals from for-profit publishers and non-profit journals from scientific societies together.

The “people in glass houses shouldn’t throw stones” argument has a point, but I’m not sure it addresses the actual issue. Publishers didn’t create the norms of refereeing and peer review. That was us, guys. Scientists. We created the idea that there are research communities. We created the idea that reviewing papers is a service to that community.

I don’t know that I would call “withdraw after positive reviews and resubmit to a journal perceived as better” unethical, but I think it’s a dick move.

Like asking someone to a dance and then never dancing with them. Sure, there are no rules against it, but it’s not too much to expect a little reciprocity. The “Me first, me only” attitude drags.

Since the whole behaviour is “glam humping” and impact factor chasing, this seems a good time to link out to a couple of articles that point out the many ways that impact factor is deeply flawed: here and here.

I’ve written before about grumpiness about peer review being due in part to an eroded sense of research community. I guess people don’t want to see journals as part of the research community, but they are.

Related posts

A sense of community

08 June 2019

Shoot the hostage, preprint edition

It takes a certain kind of academic to refuse to review papers not because of a lack of expertise, a lack of time, or a conflict of interest, but because they don’t like how other authors have decided to disseminate their results.

I’ve been declining reviews for manuscripts that aren’t posted as preprints for the last couple of months (I get about 1-2 requests to review per week). I’ve been emailing the authors for every paper I decline to suggest posting.

This isn’t a new tactic, and I’ve made my thoughts on it known. But this takes review refusal to a new level. This individual isn’t just informing the editor he won’t review, but chases down the authors to tell them how to do their job.

I’m sure the emails are meant as helpful, and may be well crafted and polite. Still. Does advocating for preprints have to be done right then?

I see reviewing as service, as something you do to help make your research community function, and to build trust and reciprocity. I don’t think of reviewing as an opportunity to chastise your colleagues for their publication decisions. But I guess some people are unconcerned whether they are seen as “generous” in their community or... something else.

And I am still struggling to work out if there are any conditions where I think it would genuinely be worth it to, say, refuse to review.

Additional, 9 June 2019: I ran a poll on Twitter. 18% described this as “Collegial peer pressure.” The other 82% described it as “Asinine interference.”


Related posts

Shoot the hostage

11 May 2019

A pre-print experiment, part 3: Someone did notice

In 2016, I wrote a grumpy blog post about my worries that posting preprints is probably strongly subject to the Matthew effect. It was a reaction to Twitter anecdotes about researchers (usually famous) posting preprints and immediately getting lots of attention and feedback on their work. I wanted to see if someone less famous (i.e., me) could get attention for a preprint without personally spruiking it extensively on social media.

I felt my preprint was ignored (until I wrote aforementioned grumpy blog post). But here we are a few years later, and I’m re-evaluating that conclusion.

A new article about bioRxiv is out (Abdill and Blekhman 2019), and with it comes Rxivist, a website that tracks data about manuscripts in bioRxiv. Since I have a paper on bioRxiv, it is tracked in Rxivist.

It’s always interesting to be a data point in someone else’s paper.

The search function is a little wonky, but I did find my paper, and was surprised (click to enlarge).


Rxivist showed that there has been a small but consistent number of downloads (Downloaded 421 times). Not only that, but the paper is faring pretty well compared to others on the site.
  • Download rankings, all-time:
    • Site-wide: 17,413 out of 49,290
    • In ecology: 542 out of 2,046
  • Since beginning of last month:
    • Site-wide: 19,899 out of 49,290
My little sand crab natural history paper is in the top half of papers in bioRxiv?

I did not expect that. Not at all.

I know there is an initial spike because I wrote my grumpy blog post and did an interview about preprints that got some attention, but even so. I know there aren’t hundreds of people doing research on sand crabs around the world, so hundreds of downloads is a much wider reach than I expected.

And some of the biggest months (October 2018) are after the final, official paper was published in Journal of Coastal Research. The final paper is open access on the journal website, too, so it’s not as though people are downloading the preprint because they are circumventing paywalls. (Though in researching this blog post, I learned a secondary site, BioOne, is not treating the paper as open access. Sigh.) (Update, 14 May 2019: BioOne fixed the open access problem!)

I am feeling much better about those numbers now than in the first few months after I posted the paper. I never would have anticipated that long tail of downloads years after the final paper is out.

And Rxivist certainly does a better job of providing metrics than the journal article does:


There’s an Altmetric score but nothing else. It’s nice that the Altmetric score for the preprint and published paper are directly comparable (and I’m happy to see the score of 24 for the paper is a little higher than the preprint at 13!), but I miss the data that Rxivist provides.

Other journals provide top 10 lists (and I’ve been happy to be on those a couple of times), but they tend to be very minimal. You often don’t know the underlying formula for how they generate those lists. The Journal of Coastal Research has a top 50 articles page that shows raw download numbers for those articles, and if you are not in that list, you have no idea how your article is doing.

While I still never got any feedback on my article before publication, I don’t feel like posting that preprint was a waste of time like I once did.

References

Abdill RJ, Blekhman R. 2019. Tracking the popularity and outcomes of all bioRxiv preprints. eLife 8: e45133. https://doi.org/10.7554/eLife.45133

Faulkes Z. 2016. The long-term sand crab study: phenology, geographic size variation, and a rare new colour morph in Lepidopa benedicti (Decapoda: Albuneidae). bioRxiv https://doi.org/10.1101/041376

Faulkes Z. 2017. The phenology of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae). Journal of Coastal Research 33(5): 1095-1101. https://doi.org/10.2112/JCOASTRES-D-16-00125.1 (BioOne site is paywalled; open access at https://www.jcronline.org/doi/full/10.2112/JCOASTRES-D-16-00125.1)

Related posts

A pre-print experiment: will anyone notice?
A pre-print experiment, continued

Fiddly bits and increments

External links

Sand crab paper on Rxivist

05 February 2019

Second letter in Science!

I have yet another story of a publication that started because I was wasting time on the Internet. I say again: blogging is one of the best ways for an academic to work out ideas.

This new publication is my second brush with the realm of glamour magazines in my career. It’s a letter again and not a research article, but I’ll take it.

Blog readers and maybe some of my Twitter followers might recognize the arguments. They are the same ones I made in this blog post previously. Somewhere along the way, I found myself referencing it in tweets so often that I thought, “Maybe I can bring this to a wider audience.” More people read the glamour magazines than my blog. I chose to try for Science because it seemed to me that GRE discussions were most relevant to the US.

While the letter is short, it actually expanded from what I originally submitted. Letters editor Jennifer Sills pushed me to expand the last paragraph to include a few sentences about possible solutions. This was a good push, and the letter is better because of it.  I’ll quote Clay Shirky again (emphasis added):

(W)hat are the parent professions needed around writing? Publishing isn’t one of them. Editing, we need, desperately.

While blogging is one of the best ways I have found to develop and work through academic ideas, an editor who genuinely edits is invaluable in fine tuning and honing ideas.

References

Faulkes Z. 2019. #GRExit's unintended consequences. Science 363(6425): 356. https://doi.org/10.1126/science.aaw1012

Related posts

Letter in Science!
I come to bury the GRE, not to praise it
Publishing may be a button, but publishing isn’t all we need

10 July 2018

Ego

Hypothesis:

Complaints about peer review are often made by people who believe that their work is so infallible and perfect that it cannot be made better by peer review.

It trickles through in complaints about how long peer review is taking (when the review time may be reasonable). It trickles through when asking what reviewers could possibly say about a manuscript. It trickles through when questioning the value of journals organizing peer review.

This is a dangerous habit of mind for a scientist. As David Brin likes to say, “Criticism is the only known antidote to error.”

Sure, most scientists are professionals who are trained to produce competent science. It’s not surprising that most papers pass peer review, and that the improvement is not always that large. But there shouldn’t be an expectation that everything a scientist does is going to be worth publishing as is. Everyone makes mistakes.

As Rose Eveleth said yesterday:

Sure, some editors are annoying, but you know what is worse? Literally any writer’s raw copy.

This is a message a lot of scientists need to hear.

External links

How long should peer review take?

14 June 2018

Another preprint complication

While I knew some journals won’t publish papers that had previously been posted as preprints, I didn’t know that some journals are picky about which preprint servers they will accept.

Jens Joschinski wrote:

Some journals (well, @ASNAmNat) will not accept papers posted at @PeerJPreprints or other commercial services.

This makes no sense to me. What does the business model of the preprint server have to do with anything regarding later publication?

There’s a list of journal policies. Thanks to Jessica Polka.

But frankly, every little bit of legwork just makes me less inclined to post preprints. I’ll still do it if I think I have some compelling reasons to do so, but doing this routinely as part of my regular publication practices? Maybe not.

11 June 2018

Does bioRxiv have different rules for different scientists?

Last year, I submitted a preprint to bioRxiv. I was underwhelmed by the experience.

But I am a great believer in the saying, “Never try something once and say, ‘It did not work.’” (Sometimes attributed to Thomas Edison, I think.) I submitted another manuscript over the weekend which I thought might be a little more suited to preprinting, so after I submitted it to the journal, I went and uploaded it to bioRxiv. It was the weekend, so it sat until Monday. Today, I received a reply. My preprint was rejected.

bioRxiv is intended for the posting of research papers, not commentaries(.)

How interesting.

I like that this demonstrates that preprint servers are not a “dumping ground” where anyone can slap up any old thing.

My paper is not a research paper. I don’t deny that. Following that rule, bioRxiv made a perfectly understandable decision.

But the whole reason I thought this new paper might be appropriate to send to bioRxiv was I had seen papers like “Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication” on the site before. I opened up that PDF and looked at it again. There’s no “Methods” section. There are no graphs of data. There’s no data that I can find at all.

How is that a research paper? And how is that not a commentary? Maybe I’m missing something.

But although the paper above doesn’t have data, what it does have is a lead author who is a former editor-in-chief of Science and the current president of the National Academy of Sciences of the US, Marcia McNutt. The paper was submitted in May 2017, some time after McNutt became president of the National Academy in 2016.

And while she is the only one to have “National Academy of Sciences” listed in the authors’ affiliations, the rest of the author list is nothing to sneeze at. It boasts other people with “famous scientist” credentials, like Nobel laureate and eLife editor Randy Schekman. Most of the authors are involved in big science journals.

One of my criticisms of preprints is that they would make the Matthew Effect for publication worse. People who are in well-known labs at well-known institutions would receive the lion’s share of attention. People who are not would have just another expectation with minimal benefits.

But this feels even worse. This feels like there’s one set of rules for the rank and file scientists (“No commentaries!”) and another set of rules for scientists with name recognition (“Why yes, we’d love to have your commentary.”).

I like the idea of preprints, but this is leaving a sour taste in my mouth.

Update, 12 June 2018: The manuscript found a home at a different preprint server, PeerJ Preprints.

Related posts

A pre-print experiment: will anyone notice?
A pre-print experiment, continued

External links

Twitter thread
Transparency in authors' contributions and responsibilities to promote integrity in scientific publication

28 March 2018

Innovation must be accompanied by education


When Apple launched the iPod, the company had to put a lot of effort into educating people about digital music.

Mr. Jobs pulled the white, rectangular device out of the front pocket of his jeans and held it up for the audience. Polite applause. Many looked like they didn’t get it.

That was just fine with Mr. Jobs. They’d understand soon enough.

Apple had to inform the mass market that digital downloads could be legal (remember Napster?). They had to let people know how much music you could have with you. They had to let people know about the iTunes store. Without all those pieces of the puzzle, the iPod would have tanked.

I was reminded of this scene when Timothy Verstynen asked:

Why can’t we have a scientific journal where, instead of PDFs, papers are published as @ProjectJupyter notebooks (say using Binders), with full access to the data & code used to generate the figures/main results? What current barriers are preventing that?

I follow scientific publishing at a moderate level. I write about it. I’m generally interested in it. And I have no idea what Jupyter notebooks and binders are. If I don’t know about it, I can guarantee that nobody else in my department will have the foggiest idea.

This is a recurring problem with discussions around reforming or innovating in scientific publishing. The level of interest and innovation and passion around new publication ideas just doesn’t reach a wide community.

I think this is because the people who are interested might undervalue the importance of educating other scientists about their ideas. Randy Olson talks a lot about how scientists are cheapskates with their communications budgets. They just don’t think it’s important, and assume the superiority of the ideas will carry the day.

I’ve talked with colleagues about open access many times, and discover over and over that people have huge misconceptions about what open access is and how it works. And open access is something that has been around for a decade and has been written about a lot.

Publishing reformers drop the iPod, but don’t do the legwork to tell people how the iPod works.

So to answer Timothy’s initial question: the current barrier is ignorance.

27 September 2017

Scientific societies need to compete on services


There’s an editorial in the journal mBio asking members to publish in society journals.

The editorial contains some nice data outlining the changes in the publishing scene. But the argument the editorial advances sounds like special pleading.

Just as the prefixes “Nature” and “Cell” seem to bring gravitas to many journal titles for some scientists and represent implied signals for authors and readers about the quality of the papers that they publish, journals published by professional scientific societies should carry the same authority. After all, they have a long tradition of authoritative leadership and management and are edited by some of the most accomplished scientists in their fields. Professional societies provide legitimacy to the journals they publish. When an author submits a paper to a scientific society journal, or when someone reads a paper published in a scientific society journal, they can be assured that the journal is legitimate and has a decades-long track record of quality.

This paragraph may be right that journals from societies “should” have authority. But you cannot assert authority or credibility. Credibility is determined by what other people say or do. The editors should be asking themselves the hard question of why they lost that credibility.

For instance, my reaction is similar to one I had for Evolution Letters. Let’s do a price check. It costs at least twice as much to publish in the society journal mBio ($2,250-$3,000) as in PeerJ ($1,095). It’s not clear to me what I get for shelling out all that extra cash.

The editorial tries to claim societies offer superior editorial services.

Scientific society journals are managed and edited by scientists actively working in the fields covered by the journals. ... Although there are certain challenges in using academic editors (12), they bring the experience, expertise, and authority that enables professional societies to refine their missions and set the standards of their fields as they evolve.

This one is tough, because all I can say is that I have published with many journals, and I have not seen consistent differences in the editorial services between those run by scientific societies and those that are not. A Twitter poll suggests society journals may have a very slight edge.

The editorial ends with the very generic claim that publishing in a society journal “helps the society.” That may be, but people will only stick with sub-par services out of sentiment for so long.

Society journals are carriage makers that see the Model Ts on the road. And they don’t know how to adapt. Their only argument is, “But the horse is a noble animal. You like horses, right?” If scientific societies don’t figure this out quickly, they will be relegated to a tiny little niche in the academic publishing industry.

Related posts

The problem is scientists, not publishers


References

Schloss PD, Johnston M, Casadevall A. 2017. Support science by publishing in scientific society journals. mBio 8(5): e01633-17. https://doi.org/10.1128/mBio.01633-17