26 July 2021

The currency of science is currency

Coins and bills from many countries.

 I used to tell students, “Your publications are gold. Peer reviewed journal articles are the currency of science.”

I don’t think I’m going to tell them that any more.

Kat Milligan-Myhre brought this blog post to my attention, bemoaning that administration at the University of Wisconsin sees money as an end, not a means to an end. This aligns somewhat with my own experience in the US. I want to talk a little about how we got to this point.

In the early 2000s, two things happened.

Graph showing basic research expenditures from US government agencies from 1976-2020, with NIH having the largest budgets.

First, in the US, the National Institutes of Health (NIH) doubled their budget throughout much of the 1990s. Universities responded to that incentive and built infrastructure and hired faculty in biomedicine. But that budget doubling stopped in the 2000s. When you account for inflation, the budget shrank over the next few years. Universities had invested so deeply that they couldn’t back out, and were determined to get that money. 

In my experience, many university administrators were not truly aware that the budget had stopped increasing and did not realize how competitive grants had become. They had spent the better part of a decade hearing how big the NIH budgets were and couldn’t face the new reality.

You may object that this is only biomedicine and only in the US. True. But the NIH had the biggest basic research budget, and trends in the US tend to get reflected elsewhere.

Plus, something else happened in the 2000s. Academic publishing embraced the Internet and stopped relying on paper copies.

I point, as I often do, to the debut of PLOS ONE in 2006 as a significant turning point in academic publishing. It only reviewed for “technical soundness,” not “quality.” Because of that, people complained that “they will publish anything” (even when this was clearly not so). Nevertheless, it certainly expanded the range of papers that were publishable.

Not only did PLOS ONE publish more papers, but other publishers copied the model. The number of peer-reviewed journals expanded significantly.

And we also got journals that only pretended to be peer-reviewed, adding confusion to the mix.

People evaluating faculty had often counted the number of publications because it is simple and does not require deep knowledge of the content of the papers. But as a whole swathe of new journals arrived, I am willing to bet that more and more faculty were able to push out papers somewhere.

From the point of view of someone attempting to evaluate faculty (because “excellence” and everything), publication count became far less informative: there was less variation between individuals, and because administrators often aren’t active researchers themselves, they worried about noise and whether they could trust that the journals were “real” journals.

But the grant process was still exclusive, and – importantly – still run by scientists. Grants still had the imprimatur of peer review.

Even if you put aside the desire for money, I could see how an administrator might prefer to switch from a simple metric that has lost much of its signal and is potentially corrupted (number of publications) to a different simple metric that is more exclusive and is still perceived as having integrity (number of grant dollars).

At this point, maybe we should update our vocabulary. Instead of “professor” or “researcher,” we should call people “research fundraisers.”

From now on, I’ll probably be advising students to look for competitive opportunities like scholarships and society grants as much as I advise them to publish.

External links

wtf uw 2: the new wisconsin idea is money

23 July 2021

ComSciCon 2021

I’m excited to be presenting at ComSciCon 2021! I’ll be part of a panel and workshop on creative storytelling on 5 August 2021.

It’s a bit intimidating to be sharing space with the other presenters.

I look forward to contributing!

External links

ComSciCon


22 July 2021

Mission declined

Should you choose to accept it

Terry McGlynn tweeted (in reply to an article I couldn’t see): 

Want more people to accept and understand evolution? 
Your mission, should you choose to accept it, is to emphasize that religion and evolution are compatible.

My problem with this is that “religion” is not one thing. There are thousands of religions. There are even many branches of what is ostensibly a single religion.

Many people are not concerned with whether some scientific claim like evolution is compatible with some religions. They are concerned, often deeply so, with whether a claim is compatible with their religion. I do not see how saying, “The Catholic church is okay with evolution” is supposed to convince a Protestant.

Trying to convince someone that “religion and evolution are compatible” means trying to convince people to change their religion. I am not prepared to wade into theological disputes between religions.

I do not want that mission.

19 July 2021

Damn it.

Dr. Kristine Lowe
One of the things I was most proud of doing in my time at The University of Texas-Pan American was chairing the search committee that recommended hiring Dr. Kristine Lowe.

Kristi died yesterday.

Damn it.

Kristi was great with students and had a lot of them go through her lab.

Given that we had, like, 60% women as our students, I think her presence was so important for our department. At the time, there was only one woman on tenure / tenure track in the entire department.

Eventually, she started to step more into administrative and leadership roles and had been chair of the department for several years. Unsurprisingly, given the composition of the department she came into years ago, she was the first woman to chair it.

She was friendly, supportive, and always willing to work with you. She was a good colleague and I hate losing her.

Damn it.

04 July 2021

You don’t have to use bad data

A routine case of a bad paper attracting a lot of criticism and then getting retracted.

The one thing that I wanted to comment on was one of the authors trying to defend their work.

We are happy to concede that the data we used… are far from perfect, and we said so in our paper. But we did not use them incorrectly. We used imperfect data correctly. We are not responsible for the validity and correctness of the data, but for the correctness of the analysis. We contend that our analysis was correct. We agree with LAREB that their data is not good enough. But this is not our fault(.)

My head is kind of spinning from this argument. If you know the data are bad, you could, you know, leave them alone and not write an entire academic paper that depends on them. Especially when it concerns an ongoing public health crisis.

The data may not be your fault, but that does not mean you are without fault.

External links

https://retractionwatch.com/2021/07/02/journal-retracts-paper-claiming-two-deaths-from-covid-19-vaccination-for-every-three-prevented-cases/


01 July 2021

Ant bites

In two days, two insightful pieces of writing have dropped that feel like bookends to each other. Both deal with the effects of social media – or, to be more specific, Twitter – on individuals who get on the wrong end of anger.

First is a retrospective and analysis by Emily VanDerWerff of how Twitter controversy about a single science fiction short story effectively crushed the writer’s desire to ever write again. And that was probably the smallest effect the controversy had on author Isabel Fall.

Second is a description of how social media dynamics are still not grasped by journalism as a field. Charlie Warzel brings some useful terms that I hadn’t seen before, like “context collapse,” to the discussion.

Both remind us that human beings are used to dealing with small social networks. We aren’t ready for the level of attention that you can get if you become the center of a viral online discussion. VanDerWerff writes:

But in any internet maelstrom that gets held up as a microcosm of the Way We Live Today, one simple factor often gets washed away: These things happened to someone. And the asymmetrical nature of the harm done to that person is hard to grasp until you’ve been that person. A single critical tweet about the matter was not experienced by Isabel Fall as just one tweet. She experienced it as part of a tsunami that nearly took her life.

Warzel says:

Many leaders at big news organizations don’t think in terms of “attack vectors” or amplifier accounts, they think in terms of editorial bias and newsworthiness. They don’t fully understand the networked nature of fandoms and communities or the way that context collapse causes legitimate reporting to be willfully misconstrued and used against staffers. They might grasp, but don’t fully understand, how seemingly mundane mentions on a cable news show like Tucker Carlson’s can then lead to intense, targeted harassment on completely separate platforms.

The “Ant bites” of the title of this post?

When you tweet something, it can feel like you have the power of an ant. And a single ant is usually inconsequential. “Squished like an ant.”

But in 1998, Joe Straczynski wrote a warning (in the Usenet newsgroup rec.arts.sf.tv.babylon5.moderated):

(E)ven a whole human being can be eaten by ants.

It’s easy to make the mistake of tweeting at or about someone and thinking you’re just making conversation. Sure, if you were in a room with that person and knew them, it’d probably be fine. But you forget that you probably see only the tiniest sliver of that person’s experience. Your tiny little comment might be part of a much bigger pattern for the recipient. A single ant bite. But the person on the other end might be getting eaten alive by ants.

I am thinking back to a lot of online controversies in science around, say, a decade ago. I think we probably underestimated how rough those could be on researchers. Nobody had a “social media IQ” then. The good news was that the online communities were smaller then, so the anthill might not have delivered quite as many bites as it could now.

External links

How Twitter can ruin a life

What newsrooms still don’t understand about the internet


27 June 2021

The paradox of MDPI

One of the most puzzling trends in scientific publishing for the last couple of years has been the status of the open access publisher MDPI.

On the one hand, some people I know and respect have published their papers there. I’ve reviewed for some journals, and have seen that authors do make requested changes and there is some real peer review going on.

On the other hand, few other publishers today seem so actively engaged in pissing off the people they work with. Scientists complain about constant requests to review, particularly in areas far outside their domain expertise – an easily avoided and amateurish mistake. 

And MDPI’s boss seems like a dick.

A few people have been trying to make sense of this paradox. Dan Brockington wrote a couple of analyses over the last two years (here, here) that were broadly supportive of what MDPI has done.

Today, I stumbled across this post by Paolo Crosetto that takes a long view of MDPI’s record. It prompted another analysis by Brockington here.

Both are longish reads, but are informed by lots of data, and both are nuanced, avoiding simple “good or bad” narratives. I think one of the most interesting graphs is this one in Crosetto’s post on processing turnarounds:

Graph of time from submission to acceptance at MDPI journals.  2016 shows wide variation from journal to journal; 2020 data shows little variation.

There used to be variation in how long it took to get a paper accepted in an MDPI journal. Now there is almost no variation in how long it takes. That sort of change seems highly unlikely to happen by accident. It looks a lot like a top-down directive from the publisher, putting a thumb on the decision-making process, not a result of editors running their journals independently.

Both Crosetto and Brockington acknowledge that there is good research in some journals. 

The question seems to be whether the good reputation is getting thrown away by the publisher’s pursuit of more articles, particularly in “Special Issues.” Crosetto suspects that MDPI is scared and wants to extract as much money (or “rent,” as he calls it) from as many people as fast as possible. Brockington says this may or may not be a problem. It all depends on something rather unpredictable: scientists’ reactions.

Scientists may be super annoyed by the spammy emails, but they might be happier about fast turnaround times (which people want to an unrealistic degree) with a high chance of acceptance.

If the last decade or so in academic publishing has taught us anything, it’s that there seems to be no upper limit for scientists’ desire for venues in which to publish their work.

PLOS ONE blew open the doors and quickly became the world’s biggest journal by a long way. But even though it published tens of thousands of papers in a single year, PLOS ONE clones cropped up and some even managed to surpass it in the number of papers published per year.

MDPI is hardly alone in presenting bigger menus for researchers to choose where to publish. Practically every publisher is expanding its list of journals at a decent clip. I remember when Nature was one journal, not a brand slapped across the titles of over 50 journals.

MDPI is becoming a case study in graylisting. As much as we crave clear categories for journals as “real” (whitelists) or “predatory” (blacklists), the reality can be complicated.

Update, 1 July 2021: A poll I ran on Twitter indicates deep skepticism of MDPI, with lots of people saying they would not publish there.

Would you submit an article to an MDPI journal?
 
I have done: 9.4%
I would do: 3.9%
I would not: 50%
Show results: 36.7%

External links

An open letter to MDPI publishing

MDPI journals: 2015 to 2019

Is MDPI a predatory publisher?

MDPI journals: 2015 to 2020 

Related posts 

My resolve not to shoot the hostage is tested

Graylists for academic publishing