09 December 2021

University tells scholars what journals to publish in

University of South Bohemia logo and MDPI logo

The latest news around the controversial publisher MDPI, from Katarina Sam on Twitter:

Our uni made official statement about publications in MDPI journals: Such papers will not be funded, supported, and considered as a valid scientific result. We were also recommended not to do any services for them.

From her Google Scholar page, Katarina appears to be at the University of South Bohemia (which is an awesome name, incidentally). When I search for “University of South Bohemia MDPI,” I can’t find any official statement. The first page of hits is a list of MDPI articles where one or more authors have an affiliation with the University of South Bohemia. Searching the university website also returns no policy statement, just a few articles published in MDPI journals.

I am interested in the policy statement because this seems to me to be very weird and very bad.

I was under the impression that the ability to choose which journals to publish in was part and parcel of academic freedom. Indeed, one of the arguments against open access mandates from funding agencies and others was that they compromised academic freedom. But I think people made a fuss there because such mandates meant they wouldn’t be able to publish in glamour mags like Nature and Science.

Here, I am less sure people are going to make a fuss because a lot of people... dislike MDPI. 

I am very, very nervous about an institution trying to ban its faculty from using, not a single journal, but an entire publishing company.

I think MDPI will be outraged because the people in charge seem to have thin skins, but I don’t think they will be harmed much. MDPI clearly has authors who value their services.

I’m more concerned by the harm this precedent sets.

Update: Charles University has a vice-dean writing blog posts about MDPI. Not so much policy, but an expression of concern, I suppose.

But what’s the point of noting that MDPI is a “Chinese company (with a postal address in Basel, of course).” How is national origin relevant to the quality of an academic publisher? What is being implied here?

More edits: The National Publications Committee of Norway (they have one of those?) announced a new listing for dubious journals: “level X.” The article begins by singling out an MDPI journal that got added to the list.

I was about to tweet earlier today that “I wonder if the push against MDPI was because an institution was unhappy about paying open access fees.” And sure enough:

The phenomenon Røeggen refers to is open publishing with author payment. - Many are worried about the phenomenon. Not about open publishing itself, which I find has great support in Norwegian research, but about the solution we have ended up with, where the institutions have to pay when the authors publish, says Røeggen. ...

In recent years, the requirement for open publication, through national guidelines and the so-called Plan S, has created a significant shift. Now more and more money is being paid to publish, instead of paying to be able to read the journals.

Emphasis added. 

And I’ve said it before, but I’ll say it again: It is weird to me that academics will bitch and moan about how long journal reviews take (“Careers are on the line here! People’s lives are put on hold because reviews take so long!”), but then consistent and prompt review is used as a reason to suspect a journal.

Questions have been asked about whether the rapid pace from submission of manuscripts to publication is in line with research quality.

MDPI is not alone here, though. Frontiers and the more traditional publisher Taylor & Francis also have journals receiving the “level X” status. Norway is also avoiding the “carpet bombing” approach taken by the University of South Bohemia.

Still. This is not a great week for MDPI specifically and perhaps for open access more generally.

Related posts

The paradox of MDPI

My resolve not to shoot the hostage is tested

External links

The same week that the researchers' article was published, the journal ended up on the gray zone list (Automatic translation of Norwegian title; article in Norwegian, naturally)

30 November 2021

PolicyViz interview

The real reason to write a book is to do interviews.

I’ve long noticed that I know the basic arguments of many books I’ve never read, because of the interviews authors gave arising from the book.

So I was very excited to talk about the Better Posters book with Jon Schwabish (author of the excellent Better Data Visualizations, which I reviewed here) on the PolicyViz podcast. The episode is now available wherever you listen to podcasts!

Jon is a great person to talk to, and his questions got me thinking about some new topics that I hadn’t considered before.

This season, Jon has been experimenting with a video version of the podcast. I already knew of my bad speaking habits as an interviewer on audio (I go on tangents way too easily, I start sentences without knowing where they’ll land), but now I get to see entirely new bad habits (looking away from the camera, shifting my weight).

I mean seriously, why am I looking to my right so much? There’s nothing there...

If you are not interested in my voice or my face (and I can’t say I’d blame you), the show notes boast a complete transcript.

External links

PolicyViz podcast Episode #206: Zen Faulkes show notes 

22 November 2021

UK eyes new crustacean legislation

The Guardian is reporting that there are potential new animal welfare regulations that would affect decapod crustaceans and cephalopods. The London School of Economics, whose report is being used to justify the move, seems rather more confident than The Guardian, and is basically saying this is a done deal and that it will happen.

I am a little concerned by the backstory here, particularly based on this:

The study, conducted by experts from the London School of Economics (LSE) concluded there was “strong scientific evidence decapod crustaceans and cephalopod molluscs are sentient”. ...

Zac Goldsmith, the animal welfare minister, said: “The UK has always led the way on animal welfare and our action plan for animal welfare goes even further by setting out our plans to bring in some of the strongest protections in the world for pets, livestock and wild animals.

“The animal welfare sentience bill provides a crucial assurance that animal wellbeing is rightly considered when developing new laws. The science is now clear that crustaceans and molluscs can feel pain and therefore it is only right they are covered by this vital piece of legislation.”

See, I want to know what Minister Goldsmith knows that I don’t. Because I follow the scientific literature on this topic, and the science on whether crustaceans “feel pain” is nowhere near as clear as Goldsmith claims. We are only barely getting a handle on whether crustaceans have nociceptors.

And “sentience”? Yeah, I don’t think there is a generally agreed upon set of criteria for that, either.

A cursory glance at the London School of Economics report shows that none of the authors have stated experience in crustacean biology. (One studies cephalopod cognition.) A major review on this topic by Diggles (2018) is not included. Some of the references in the report are dated 2021, so leaving out a 2018 paper is a puzzling omission. 

At first blush, though, this report looks more comprehensive than the documents used to argue for legislation in Switzerland. But I’ve only glanced at it so far, and will need some more time to read it in detail.

References

Diggles BK. 2018. Review of some scientific issues related to crustacean welfare. ICES Journal of Marine Science: fsy058. http://dx.doi.org/10.1093/icesjms/fsy058

Related posts

Switzerland’s lobster laws are not paragons of science-based policy

External links

Boiling of live lobsters could be banned in UK under proposed legislation

Review of the Evidence of Sentience in Cephalopod Molluscs and Decapod Crustaceans

Octopuses, crabs and lobsters to be recognised as sentient beings under UK law following LSE report findings

19 November 2021

Do not make professors guess a student’s childhood

I was filling in recommendation forms for students today, and was gobsmacked by this question:

English Competency: For students whose first language is not English, please rank the applicant’s ability and comment on the applicant’s English competency in the box provided below.

Wow, that’s a bad question. Wait, let me upgrade that. That’s a freaking terrible question.

Why am I only asked to assess the English competency of students “whose first language is not English”? I know a lot of students who are native speakers whose linguistic skills are not good.

More to the point, how can I possibly know what a student’s first language is?

Maybe a student will mention this to me, but probably not. It’s not in a student’s records for a class. I am quite confident it is not part of a student’s university record.

(And this was a non-optional part of a form, which is also weird, because presumably I am supposed to skip it for native English speakers?)

The only way anyone could complete this part of the form is by making assumptions. So this question is code for:

“Does this student speak with an accent?”

“Does this student’s name look European?”

“Does this person have black or brown skin?”

The question singles out some people as needing extra “assessment”, but it’s based on the recommender’s stereotypes about who a “non native English speaker” is.

If you’re going to ask a question about language proficiency, ask, “Rate this applicant’s proficiency in communication” for every single applicant. Don’t even mention the language. Because there are some people who will never speak English who should be afforded the opportunity to have an education. (I’m thinking of people who sign, for one.)

Update, 23 November 2021: In this case, a happy ending! The program changed the question so that every recommender is simply asked to comment on language skills for every applicant.

08 November 2021

The University of Austin: Stop it, you’re just embarrassing yourself.

Spotted on Twitter this morning (hat tip to Michael Hendricks):

We got sick of complaining about how broken higher education is. So we decided to do something about it. Announcing a new university dedicated to the fearless pursuit of truth:

It offers no degrees. It has no accreditation. This is its physical location:

Residential house in Austin, Texas, that does not look like a university campus.

But they offer “Forbidden courses” where students can have “spirited discussions” about “provocative questions.”

Presumably for a tuition fee. Since this wouldn’t lead to any degree or course credits, it’s not at all clear why a student would do this when they could just have a drunken argument about “provocative questions” in any bar.

Having been through the creation of a new university (in Texas, no less), I can say with confidence:

This is trash.

This is probably one of two things.

One possibility is that it’s a wild mix of huge egos and a cash grab. It will come to nothing besides separating a few suckers from their money. It reminds me of a “university” created by a former US president that was sued and paid out a settlement of $25 million.

Or maybe it’s a pure criminal operation.

Everything about this stinks like the kind of stink that makes you involuntarily gag and fight the urge to vomit.

Update: Sarah Jones reminds me:

I’m not convinced this experiment is going to last, but they seem to have money and as a general rule I think it’s wise to take the right as seriously as it takes itself.

This is true. Being badly wrong has not prevented many ideas from having amazing longevity.

31 October 2021

Science Twitter calaveras

Thanks Namnezia!

Skeleton wearing t-shirt with crayfish standing in front of poster
Poor ol' Zen Faulkes, 

the mob confused him for Guy Fawkes. 

Said "You got the wrong guy, I study crayfish!" 

But they thought he was being all selfish. 

From the bonfire he yelled "No, really, look at my poster!" 

That didn't work, they thought he was another imposter.


25 October 2021

How to fix an author in 10 ways

Wow, it’s been far too long since I’ve had a new paper with my name on it.

I got an email out of the blue asking if I would be interested in participating in writing this paper. It arose from my earlier paper on authorship disputes (Faulkes 2018) and why I think we should have more alternative dispute resolution in academic publishing.

I said yes, obviously.

You will notice that there are an equal number of “strategies” and “authors.” This is not coincidence. For the most part, each person on the author list tackled one section of the paper.

I’m section #8. 😉

My section was originally something like “Seek arbitration.” This was obviously inspired by the title of my previous paper, but I wanted to make something that was a little more expansive and wasn’t as close to what I’d written before.

After we each wrote our sections, all the authors read through and left comments for each other. Steven Cooke did the work to smooth out the rough edges and harmonize the contributions of all the authors.

I made one other contribution. I think after the first draft went for review, Steven Cooke suggested it would be nice to have some sort of figure in the paper. I made this very quick and dirty concept figure in PowerPoint:

Flow chart with causes of disputes on left, dispute in middle, and solutions for disputes on right.

The final version in the paper is much better! It added more elements (four solutions instead of three) and used a lot of icons to make it much more visual.

So I am extremely pleased to have been part of this paper. I hope people find it useful.

References

Cooke SJ, Young N, Donaldson MR, Nyboer EA, Roche DG, Madliger CL, Lennox RJ, Chapman JM, Faulkes Z, Bennett JR. 2021. Ten strategies for avoiding and overcoming authorship conflicts in academic publishing. FACETS 6: 1753-1770. https://doi.org/10.1139/facets-2021-0103

Faulkes, Z. 2018. Resolving authorship disputes by mediation and arbitration. Research Integrity and Peer Review 3: 12. https://doi.org/10.1186/s41073-018-0057-z

 

24 October 2021

Science isn’t the only one fighting recommendation algorithms and bemoaning education

This article about crises in American evangelical churches resonates with crises we see in science communication.

The churches’ problems? People aren’t getting enough education and social media’s recommendation algorithms are too influential.

“What we’re seeing is massive discipleship failure caused by massive catechesis failure,” James Ernest, the vice president and editor in chief at Eerdmans, a publisher of religious books, told me. Ernest was one of several figures I spoke with who pointed to catechism, the process of instructing and informing people through teaching, as the source of the problem. “The evangelical Church in the U.S. over the last five decades has failed to form its adherents into disciples. So there is a great hollowness.” ...

“Culture catechizes,” Alan Jacobs, a distinguished professor of humanities in the honors program at Baylor University, told me. ... Our current political culture, Jacobs argued, has multiple technologies and platforms for catechizing—television, radio, Facebook, Twitter, and podcasts among them. People who want to be connected to their political tribe—the people they think are like them, the people they think are on their side—subject themselves to its catechesis all day long, every single day, hour after hour after hour. ...

(W)hen people’s values are shaped by the media they consume, rather than by their religious leaders and communities, that has consequences. “What all those media want is engagement, and engagement is most reliably driven by anger and hatred,” Jacobs argued. “They make bank when we hate each other.(”)

And wow, does that ever sound familiar.

The clergy bemoaning the lack of education in religious instruction puts a twist on the long-running arguments about teaching creationism in public schools. It suggests the reason some fundamentalists fought so hard on those issues is that, at some level, they saw their own catechesis was failing.

Related posts

Recommendation algorithms are the biggest problem in science communication today

External links

The evangelical church is breaking apart


13 September 2021

The rise of TikTok in misinformation

Ben Collins has a Twitter thread about how misinformation about medication to get worms out of horses has become the cause du jour for many.

Can’t stress how wild the ivermectin Facebook groups have become. So many people insisting to each other to never go to an ER, in part because they might not get ivermectin, but sometimes because they fear nurses are killing them on purpose “for the insurance money.” ... It’s just a constant stream of DIY vitamin therapies and new, seemingly random antiviral drugs every day — but not the vaccine.

This is distressing, but I wanted to home in on this comment in the thread.

The ivermectin Facebook groups also offer a window into how pervasive antivaxx COVID “treatment” videos are on TikTok.

The groups serve as a de facto aggregator for antivaxx TikTok, a space that is enormous but inherently unquantifiable to researchers.

When I last wrote about the dangers of recommendation algorithms (in pre-pandemic 2019), I focused on YouTube. TikTok existed then (it started in 2016), but it wasn’t included in Pew Research’s list of social media platforms until this year.

Graph showing use of social media platforms in the US. 81% use Youtube, 61% use Facebook. No other platform is used by more than 50% of Americans. 21% of Americans use TikTok.

Even today, TikTok isn’t used by one in four Americans; it’s more like one in five. It’s impressive that it’s pulled close to Twitter, which has been around far longer. And also frightening that it is having this outsized effect that is leading people to try... anything.

Everything is a miracle cure, or it isn't, but every drug is worth a shot. Except, of course, the thing that works: the vaccine. Anything pro-vaccine is instantly written off as “propaganda.”

There are lots of issues raised here that I can’t process all at once. But I think Collins’s comment that TikTok is unmeasurable for researchers strikes at something important. Could a requirement for more data transparency in how TikTok selects what videos to show someone help? Not sure.

But we may be at the start of an arms race between social media platforms using data to show things to viewers, and researchers trying to “break the code” to figure out just what the heck people are actually seeing.

Related posts

Recommendation algorithms are the biggest problem in science communication today

External links

Social Media Use in 2021 - Pew Research

Naming the animals in research papers

This is Bruce. 

Bruce, a kea with no upper beak, holding an object with his tongue and lower beak.

Bruce has been making the news rounds because of a new paper demonstrating that he uses pebbles to groom. Bruce is a kea, a parrot that normally has a large upper beak, which Bruce does not have. In the picture above, you can see him using his tongue and remaining lower beak to pick up an object.

What I want to talk about is not the tool use (although that is cool), but that I know this bird was given a name. Because I found this paper within days of finding another paper about an unusual bird: an Australian musk duck named Ripper. 

Ripper’s claim to fame was that he was able to imitate sounds, like creaking metal and even human voices. Ripper seems to have picked up the phrase, “You bloody fool” from humans around him. 

This is interesting because vocal learning is found in only a few lineages and hasn’t been documented in ducks before.

But what interested me in both papers is that they repeatedly refer to these birds by the names that humans gave them. Not just once in the methods as an aside, but all the way through.

I can see the value of using a given name in news articles and blog posts like the one I’m writing. And maybe it makes scanning the paper a little easier. But the kea paper refers to “Bruce” 62 times; the duck paper refers to “Ripper” 40 times. The extensive use of these names in the journal articles gives me pause.

It’s been clear for a long time that the effort to keep animals at arm’s length to avoid humanizing them (a position taken furthest, perhaps, by B.F. Skinner and other behaviourists in American psychology) is a lost cause. The approach of people like Jane Goodall (who named her chimps rather than just giving them numbers) has won.

But these two approaches sit on opposite ends of a continuum. And quite often, there’s a pendulum swing in attitudes. I wonder if the pendulum has maybe swung a little too far towards our willingness to humanize animals in the scientific literature.

It’s easy to slip into teleology (assuming everything has a purpose) and anthropomorphism (thinking animals are like humans). And constantly referring to animals’ names throughout a paper seems to make that even easier. 

I’m not saying that the names we give animals should never be mentioned in papers. But maybe it could be once or twice instead of dozens of times. 

And hey, these animals didn’t get to pick their names. Maybe that duck was thinking, “I say ‘Bloody fool’, and they name me ‘Ripper’ on top of that? Could I be any more of a cliché Australian?”

A Twitter poll suggests I am not alone in being wary of this practice.

References

Bastos APM, Horváth K, Webb JL, Wood PM, Taylor AH. 2021. Self-care tooling innovation in a disabled kea (Nestor notabilis). Scientific Reports 11(1): 18035. https://doi.org/10.1038/s41598-021-97086-w

ten Cate C, Fullagar PJ. 2021. Vocal imitations and production learning by Australian musk ducks (Biziura lobata). Philosophical Transactions of the Royal Society B: Biological Sciences 376(1836): 20200243. https://doi.org/10.1098/rstb.2020.0243

02 September 2021

Gender’s role in authorship disputes: Women try to prevent fights but still lose out

Black king and white queen chess pieces.
Sometimes, I hate being right.

Ni and colleagues have an important new paper on authorship conflicts that confirms something I sort of predicted back in 2018: people with less professional and social power get hosed by authorship disputes. In 2018, I wrote, “loss of credit due to authorship disputes may be a little recognized factor driving underrepresented individuals out of scientific careers”.

Ni and colleagues confirmed that women get into more authorship disputes than men. This is despite women more often trying to negotiate authorship in advance. (To me, the most surprising and depressing finding.)

To me, this supports the argument that we need more ways to help authors resolve disputes and support authors with less power. I suggested mediation and arbitration could be more common. I don’t care if people buy into that idea, but I hope that people can see that the status quo harms our efforts to create a more equitable scientific culture.

A caveat about the new paper. For some analyses, the paper guesses at gender by using an algorithm on names. This has problems. (Edit: Several authors tweeted a clarification for how this algorithm was used and that it was tangential to the main findings. Ni, LaRiviere, Sugimoto) This suggests journals should capture more data about authors than name (usually in English only), institution, and email. Or maybe that could become part of an ORCID profile. Actually, I like that idea more.

A logical follow up to this new paper would be to look at how authorship disputes shake out in other historically excluded groups. I’ll bet a dollar that white men consistently experience fewer authorship disputes than anyone else.

The Physics Today website has more analysis. (I was interviewed for this, but didn’t end up with any quotes in the article.) So too does Science magazine news.

Additional, 3 September 2021: I just want to emphasize why I think authorship disputes are so important for us to talk about.

There is a lot of research on gender differences in science. We know that women tend to get cited less than men, women tend to win fewer awards than men, and women tend to be invited as speakers less often than men.

But we know all these things because they are public.

It was a major win when researchers got access to journal records of submissions, because that is something typically not disclosed and very informative.

When we analyze citations and the like, we kind of assume that they reflect the work done. Maybe not perfectly, but reasonably.

Authorship disputes are kind of insidious because they are generally private, and affect what gets into the public record in the first place. It is an unrecognized and unacknowledged filter affecting people’s careers.

References

Ni C, Smith E, Yuan H, Larivière V, Sugimoto CR. 2021. The gendered nature of authorship. Science Advances 7(36): https://www.science.org/doi/10.1126/sciadv.abe4639

Faulkes Z. 2018. Resolving authorship disputes by mediation and arbitration. Research Integrity and Peer Review 3: 12. https://doi.org/10.1186/s41073-018-0057-z

External links

Women in science face authorship disputes more often than men

Are women researchers shortchanged on authorship? New study highlights gender disparities

Pic from here.

29 August 2021

Kill the chalk talk?

A “chalk talk” is an unscripted presentation that allows a job candidate to outline something “on the spot,” live. In many departments, this is supposed to be a chance for a candidate to talk about their research and how they plan to create an ongoing, funded research program.

Since chalkboards were mostly abandoned years ago, the term probably only survives because it rhymes. In business and industry job interviews in engineering and computing, these are called whiteboard interviews. But there, some have pointed out that whiteboarding is a problematic interview practice.

It’s got nothing to do with the skills needed to succeed at the job.

Wayne Brady
Asking a scientist to try to create a grant proposal on the spot is like asking a film actor to do an improv comedy scene as part of an audition. If that were the norm, Wayne Brady would probably be the biggest star in Hollywood. Sure, some actors are great at improv, but that’s generally not how film works. Films have scripts and preparation time.

Chalk talks are likely to be one of those common practices that contribute to academia’s low diversity. It seems likely that there could be many people who are thoughtful but slow to process questions who might fare poorly in a chalk talk. On the other hand, there may be people who are exceedingly good at oral explanations but who are undisciplined writers. 

And if this is all allegedly about grants, grants live or die on the page, not the stage. Besides, job candidates always send a research statement as part of the application package. So making someone explain it all again feels more like hazing than a useful way of assessing a candidate.

Departments don’t need these. I witnessed a fair number of academic job interviews in at least three institutions in as many countries. None of them asked a job candidate to do a so-called “chalk talk.” 

Update, 30 August 2021: Unsurprisingly, some people disagree with this. Jason Rasgon offered this defense (I’ve stitched together a few tweets):

Chalk talks have some value for a very narrow slice of institutions (generally soft money Medical or professional schools). I agree, mostly useless for other places. But when I was soft money I saw many faculty applicants who had a written document who couldn't articulate a plan of attack at the chalk talk. Several of them were still hired, they didn't last. When you’re soft money (the institution pays little to no salary, you have to pay your own salary by your own grants - ZF), you don’t have luxury of taking your time to organically develop a research program. You have to hit ground sprinting from day 1. Chalk talk should be easy, as you should have all this developed (and maybe submitted already). If you don’t, you’ll have problems. The people I saw have issues knew in general what they wanted to do but hadn't sat down and worked out the specifics.

Michael Eisen writes (again, stitching tweets together):

I’ve attended over 100 faculty candidate chalk talks, and would say they've played a significant role in hiring decisions maybe a dozen times, all in the favor of the candidate, because they gave us insights into strengths that did not manifest in their application. In contrast to the dominant opinion, which seems to be that chalk talks are about gotchas or testing peoples' ability to think on their feet, in my estimate their major role in the process is to give people an opportunity to address weaknesses in the rest of their application. When the format and purpose is clearly articulated, and when individual faculty are not allowed to derail the process (which can happen), they are an immensely useful part of the process. ... (T)he call for eliminating chalk talks is a copout that doesn't actually address the problem, and is part of a bad trend to provide candidates with fewer ways to demonstrate their qualities

I’m thinking about these points. I appreciate what they are getting at, but I think the qualifiers in these are hugely important. “When the purpose is clearly articulated” is my big one, but “when not derailed” is also up there.

We want job candidates to interact with people and answer questions about their research, but it seems unclear what people are trying to assess and whether this is the best format to do it.

Eisen is right that if you remove part of the interview process, you have less information about the candidate. So re-evaluating the chalk talk concept should also include “Can we come up with something that more directly evaluates the relevant skills?”

If the goal is to evaluate someone’s research, maybe those statements of research interest that are required of every job application should play a bigger role?

A written statement better reflects grant writing. It’s faster and more time-efficient to read a few pages than to listen to an hour-long talk. And it’s less stressful for the candidate.

If the research statement that was part of the initial application is too short, maybe creating a beefed-up version can be part of the shortlist interview process.

More additional: Later, Eisen wrote:

i actually wish we had prospective candidates teach a class, but that's obviously a bit too much to ask of them

For many institutions, asking a candidate to prepare a research seminar and a guest lecture in a class is probably more helpful than asking a job candidate to effectively give two research talks (seminar and chalk talk).

Andy Fraser wrote:

I have seen candidates with stellar CVs that gave extremely polished research talks that apparently could not cogently discuss a single part of their research proposal or who reacted with hostility to questions. I don't know how else I would have found that out.

In light of the excellent commentary, I have added a question mark to the title of this post. 😉

External links

It’s time to end whiteboard interviews for software engineers

14 August 2021

Are we trying to solve the wrong problem with anonymity in peer review?

One of academia’s evergreen topics, anonymity in peer review, has cropped up again on Twitter. 

This is something I’ve changed my mind about a lot since I’ve been blogging. But today, the thought struck me that we put too much emphasis on whether review should be anonymous or not.

We do not put enough emphasis on people not being assholes about getting reviews.

The arguments around anonymity are, as far as I can see, driven by fear of retribution. “If I write a bad review of someone, they will be petty and have the ability to sink something of mine later.”

As much as I appreciate the argument that people are horrible (they are) and you have to account for things that people regularly do (people are often petty and vindictive), it seems to me there is a lot of room to move the dial on behaviour. To say, “We won’t put up with your little revenge plots.”

Do I know how to do this? No, not yet. I’m just blogging here.

It’s also likely that a lot of bad behaviour here is driven by the sense of research being a zero sum game. Which, in the current funding climate, it is dangerously close to being. If there was not so much competition and windows of opportunity were not so small, people would worry less about whether some jerk is going to give them a bad review on a grant because they were critical of a manuscript.

10 August 2021

Thanks to you, ComSciCon!

Zen Faulkes on Zoom screen with Godzilla in background.

Just briefly wanted to follow up on my appearance last week at ComSciCon. The storytelling panel I was part of was excellent (despite my presence) and I was also able to work with a small group on short journalism pitches.
There was not a coherent thread or panel hashtag, so here are tweets from the storytelling session. (Darn it, I miss Storify.) These are in roughly chronological order, and I would normally cut and paste the tweet text, but I have some other tasks to do and it would take too long to make it pretty right now.

Thanks to the organizers for posting a non-derpy Zoom screenshot of me!

26 July 2021

The currency of science is currency

Coins and bills from many countries.

 I used to tell students, “Your publications are gold. Peer reviewed journal articles are the currency of science.”

I don’t think I’m going to tell them that any more.

Kat Milligan-Myhre brought this blog post to my attention, bemoaning that administration at the University of Wisconsin sees money as an end, not a means to an end. This aligns somewhat with my own experience in the US. I want to talk a little about how we got to this point.

In the early 2000s, two things happened.

Graph showing basic research expenditures from US goverment agencies from 1976-2020, with NIH having the largest budgets.

First, in the US, the National Institutes of Health (NIH) budget doubled, starting in the late 1990s. Universities responded to that incentive and built infrastructure and hired faculty in biomedicine. But that budget doubling stopped in the 2000s. When you account for inflation, the budget shrank over the next few years. Universities had invested so deeply that they couldn’t back out, and were determined to get that money.

In my experience, many university administrators were not truly aware that the budget had stopped increasing and did not realize how competitive grants had become. They had spent the better part of a decade hearing how big the NIH budgets were and couldn’t face the new reality.

You may object that this is only biomedicine and only in the US. True. But the NIH had the biggest basic research budget, and trends in the US tend to get reflected elsewhere.

Plus, something else happened in the 2000s. Academic publishing embraced the Internet and stopped relying on paper copies.

I point, as I often do, to the debut of PLOS ONE in 2006 as a significant turning point in academic publishing. It only reviewed for “technical soundness,” not “quality.” Because of that, people complained that “they will publish anything” (even when this was clearly not so). Nevertheless, it certainly expanded the range of papers that were publishable.

It not only published more papers, but other publishers copied the model. The number of peer-reviewed journals expanded significantly. 

And we also got journals that only pretended to be peer-reviewed adding confusion to the mix.

People evaluating faculty had often counted the number of publications because it is simple and does not require deep knowledge of the content of the papers. But as a whole swathe of new journals arrived, I am willing to bet that more and more faculty were able to push out papers somewhere.

From the point of view of someone attempting to evaluate faculty (because “excellence” and everything), publication counts became less informative: there was less variation between candidates. And because administrators often aren’t active researchers themselves, they worried about noise and whether they could trust that the journals were “real” journals.

But the grant process was still exclusive, and – importantly – still run by scientists. Grants still had the imprimatur of peer review.

Even if you put aside the desire for money, I could see how an administrator might prefer to switch from a simple metric that has lost much of its signal and is potentially corrupted (number of publications) to a different simple metric that is more exclusive and is still perceived as having integrity (number of grant dollars).

At this point, maybe we should update our vocabulary. Instead of “professor” or “researcher,” we should call people “research fundraisers.”

From now on, I’ll probably be advising students to look for competitive opportunities like scholarships and society grants as much as I advise them to publish.

External links

wtf uw 2: the new wisconsin idea is money

23 July 2021

ComSciCon 2021

I’m excited to be presenting at ComSciCon 2021! I’ll be part of a panel and workshop on creative storytelling on 5 August 2021.

It’s a bit intimidating to be sharing space with:

I look forward to contributing!

External links

ComSciCon


22 July 2021

Mission declined

Should you choose to accept it

Terry McGlynn tweeted (in reply to an article I couldn’t see): 

Want more people to accept and understand evolution? 
Your mission, should you choose to accept it, is to emphasize that religion and evolution are compatible.

My problem with this is that “religion” is not one thing. There are thousands of religions. There are even many branches of what is ostensibly a single religion.

For many people, the concern is not whether some scientific claim like evolution is compatible with some religions. They are concerned, often deeply so, with whether a claim is compatible with their religion. I don’t see how saying, “The Catholic church is okay with evolution” is supposed to convince a Protestant.

Trying to convince someone that “religion and evolution are compatible” means trying to convince people to change their religion. I am not prepared to wade into theological disputes between religions.

I do not want that mission.

19 July 2021

Damn it.

Dr. Kristine Lowe
One of the things I was most proud of doing in my time at The University of Texas-Pan American was chairing the search committee that recommended hiring Dr. Kristine Lowe.

Kristi died yesterday.

Damn it.

Kristi was great with students and had a lot of them go through her lab.

Given that we had, like, 60% women as our students, I think her presence was so important for our department. At the time, there was only one woman in a tenured or tenure-track position in the entire department.

Eventually, she started to step more into administrative and leadership roles, and had been chair of the department for several years. Unsurprisingly, given the department composition she came into years ago, she was the first woman to chair the department.

She was friendly, supportive, and always willing to work with you. She was a good colleague and I hate losing her.

Damn it.

04 July 2021

You don’t have to use bad data

A routine case of a bad paper attracting a lot of criticism and then getting retracted.

The one thing that I wanted to comment on was one of the authors trying to defend their work.

We are happy to concede that the data we used… are far from perfect, and we said so in our paper. But we did not use them incorrectly. We used imperfect data correctly. We are not responsible for the validity and correctness of the data, but for the correctness of the analysis. We contend that our analysis was correct. We agree with LAREB that their data is not good enough. But this is not our fault(.)

My head is kind of spinning from this argument. If you know the data are bad, you could, you know, leave them alone and not write an entire academic paper that depends on them. Especially when it concerns an ongoing public health crisis.

The data may not be your fault, but that does not mean you are without fault.

External links

Journal retracts paper claiming two deaths from COVID-19 vaccination for every three prevented cases (https://retractionwatch.com/2021/07/02/journal-retracts-paper-claiming-two-deaths-from-covid-19-vaccination-for-every-three-prevented-cases/)


01 July 2021

Ant bites

In two days, two insightful pieces of writing have dropped that feel like bookends to each other. Both deal with the effects of social media – or, to be more specific, Twitter – on individuals who get on the wrong end of anger.

First is a retrospective and analysis by Emily VanDerWerff of how Twitter controversy about a single science fiction short story effectively crushed the writer’s desire to ever write again. And that was probably the smallest effect the controversy had on author Isabel Fall.

Second is a description of how social media dynamics are still not grasped by journalism as a field. Charlie Warzel brings some useful terms that I hadn’t seen before, like “context collapse,” to the discussion.

Both remind us that human beings are used to dealing with small social networks. We aren’t ready for the level of attention that you can get if you become the center of a viral online discussion. VanDerWerff writes:

But in any internet maelstrom that gets held up as a microcosm of the Way We Live Today, one simple factor often gets washed away: These things happened to someone. And the asymmetrical nature of the harm done to that person is hard to grasp until you’ve been that person. A single critical tweet about the matter was not experienced by Isabel Fall as just one tweet. She experienced it as part of a tsunami that nearly took her life.

Warzel says:

Many leaders at big news organizations don’t think in terms of “attack vectors” or amplifier accounts, they think in terms of editorial bias and newsworthiness. They don’t fully understand the networked nature of fandoms and communities or the way that context collapse causes legitimate reporting to be willfully misconstrued and used against staffers. They might grasp, but don’t fully understand, how seemingly mundane mentions on a cable news show like Tucker Carlson’s can then lead to intense, targeted harassment on completely separate platforms.

The “Ant bites” of the title of this post?

When you tweet something, it can feel like you have the power of an ant. And a single ant is usually inconsequential. “Squished like an ant.”

But in 1998, Joe Straczynski wrote a warning (in the usenet newsgroup rec.arts.sf.tv.babylon5.moderated).

(E)ven a whole human being can be eaten by ants.

It’s easy to make the mistake of tweeting at or about someone and think you’re just making conversation. Sure, if you were in a room with a person and knew them, it’d probably be fine. But you forget that you probably see only the tiniest sliver of that person’s experience. Your tiny little comment might be part of a much bigger pattern for the recipient. A single ant bite. But the person on the other end might be getting eaten alive by ants.

I am thinking back to a lot of online controversies in science around, say, a decade ago. I think we probably underestimated how rough those could be on researchers. Nobody had a “social media IQ” then. The good news was that the online communities were smaller then, so the anthill might not have delivered quite as many bites as it could now.

External links

How Twitter can ruin a life

What newsrooms still don’t understand about the internet

 

27 June 2021

The paradox of MDPI

One of the most puzzling trends in scientific publishing for the last couple of years has been the status of the open access publisher MDPI.

On the one hand, some people I know and respect have published their papers there. I’ve reviewed for some of its journals, and have seen that authors do make requested changes and that there is some real peer review going on.

On the other hand, few other publishers today seem so actively engaged in pissing off the people they work with. Scientists complain about constant requests to review, particularly in areas far outside their domain expertise – an easily avoided and amateurish mistake. 

And MDPI’s boss seems like a dick.

A few people have been trying to make sense of this paradox. Dan Brockington wrote a couple of analyses over the last two years (here, here) that were broadly supportive of what MDPI has done.

Today, I stumbled across this post by Paolo Crosetto that takes a long view of MDPI’s record. It prompted another analysis by Brockington here.

Both are longish reads, but are informed by lots of data, and both are nuanced, avoiding simple “good or bad” narratives. I think one of the most interesting graphs is this one in Crosetto’s post on processing turnarounds:

Graph of time from submission to acceptance at MDPI journals.  2016 shows wide variation from journal to journal; 2020 data shows little variation.

There used to be variation in how long it took to get a paper accepted in an MDPI journal. Now there is almost no spread in how long it takes. That sort of change seems highly unlikely to happen just by accident. It looks a lot like a top-down directive from the publisher, putting a thumb on the decision-making process, not the result of editors running their journals independently.

Both Crosetto and Brockington acknowledge that there is good research in some journals. 

The question seems to be whether that good reputation is getting thrown away by the publisher’s pursuit of more articles, particularly in “Special Issues.” Crosetto suspects that MDPI is scared and wants to extract as much money (or “rent,” as he calls it) from as many people as fast as possible. Brockington says that this may or may not be a problem. It all depends on something rather unpredictable: scientists’ reactions.

Scientists may be super annoyed by the spammy emails, but they might be happier about fast turnaround times (which people want to an unrealistic degree) with a high chance of acceptance.

If the last decade or so in academic publishing has taught us anything, it’s that there seems to be no upper limit for scientists’ desire for venues in which to publish their work.

PLOS ONE blew open the doors and quickly became the world’s biggest journal by a long way. But even though it published tens of thousands of papers in a single year, PLOS ONE clones cropped up and some even managed to surpass it in papers published per year.

MDPI is hardly alone in presenting bigger menus for researchers to choose where to publish. Practically every publisher is expanding its list of journals at a decent clip. I remember when Nature was one journal, not a brand slapped across the titles of over 50 journals.

MDPI is becoming a case study in graylisting. As much as we crave clear categories for journals as “real” (whitelists) or “predatory” (blacklists), the reality can be complicated.

Update, 1 July 2021: A poll I ran on Twitter indicates deep skepticism of MDPI, with lots of people saying they would not publish there.

Would you submit an article to an MDPI journal?

I have done: 9.4%
I would do: 3.9%
I would not: 50%
Show results: 36.7%

Update, 21 August 2021: A new paper by Oviedo-García analyzes MDPI’s publishing practices. It makes note of many of the features in the blog posts above: the burgeoning number of special issues, the consistently short review times across all journals. Oviedo-García basically calls MDPI a predatory publisher.

This earned a response from MDPI, which unsurprisingly disagrees.

Update, 7 March 2022: Mark Hanson lays out more issues with MDPI in this Twitter thread. A few points that he brings up that I have not seen before:

Articles in MDPI journals have unusually high numbers of self-citations.

His blog post is also worth checking out.

Update, 13 October 2023: https://twitter.com/eggersnsf/status/1557273726571487232 shows data indicating that MDPI is shooting ahead of Frontiers, PLOS, and Hindawi in articles published. So clearly they are offering some sort of service that people want.

Update, 10 November 2022: Dan Brockington has an updated analysis of MDPI journals.

External links

An open letter to MDPI publishing

MDPI journals: 2015 to 2019

Is MDPI a predatory publisher?

MDPI journals: 2015 to 2020 

Oviedo-García MÁ. 2021. Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI). Research Evaluation: in press. https://doi.org/10.1093/reseval/rvab020

Comment on: 'Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI)' from Oviedo-García

Related posts 

My resolve not to shoot the hostage is tested

Graylists for academic publishing

25 June 2021

What the American furore over critical race theory means for science education

People shouting at school board meeting
 

Well, that escalated quickly.

In the last few months, “critical race theory” has gone from an academic idea mostly (but not exclusively) taught in law school to a flashpoint item that American right wingers are willing to go to the mattresses over. 

There has been ample shouting in school board meetings.

Nikole Hannah-Jones was denied tenure after winning a Pulitzer Prize and a MacArthur award for the 1619 Project. Not because her colleagues voted against her, but because the normally quiet Board of Trustees voted against her.

Critical race theory is not my area of expertise (please forgive me, social science and humanities scholars, for musing here), but watching this feels very much like something I am familiar with and have watched closely for a long time: the fight over teaching evolution.

I noted on Twitter that a lot of the same forces that traditionally mobilize against the teaching of evolution are now mobilizing against the teaching of critical race theory. Laws are successfully getting passed in state legislatures. And while one side is willing to show up in large numbers and shout at school boards, the other... has not been so active.

There are a lot of reasons for the lopsided reaction, but I can’t help wondering if the people who are not freaking out over critical race theory aren’t bringing the same fire to the fight because they look at the laws being passed and think, “That’s going to court and the laws will get overturned.”

A few things make this sound plausible on the face of it. 

Critical race theory isn’t taught in K-12 schools, so prohibiting the teaching of something that isn’t taught is bizarre.

But more to my point here, this is what happened with the teaching of evolution. There was often widespread political and public support for laws that tried to regulate the teaching of evolution. But time and again in the US, the courts have said, “No, those anti-evolution laws are unconstitutional.”

I do not feel extremely confident that an American court case would strike down the laws prohibiting critical race theory. 

While regulating or banning the teaching of evolution and banning the teaching of critical race theory are both pet projects of the political right in the US, the arguments over evolution are inextricably intertwined with religious beliefs. A government promoting particular religions (mainly fundamentalist Protestant Christianity) runs afoul of the establishment clause of the first amendment of the US constitution. That has been a key legal aspect of all the cases.

The opposition to critical race theory is not as clearly driven by religion, which makes the legal issues very different. Even if the end result – states dictating what can and cannot be taught – looks very much the same.

This is why academics of any stripe, including my fellow scientists, need to be paying very close attention to how these fights over critical race theory play out. 

University instructors have mostly taken it for granted that they can teach subjects as they see fit. The fights over critical race theory are a test case. Climate science is another area where many on the right would probably love to dictate how it is taught. Ditto anything around sexual and gender diversity.

If those laws are not smacked down fast and hard, whether in the courts or by political and public action, this could be the start of a sustained squeeze on how universities teach in the US. And America’s reputation for its universities in the world will suffer.

Picture from here.

06 June 2021

The week of IAmSciComm, 6 June 2021!

I have just taken over the reins of the @IAmSciComm rotating curator Twitter account! This is my second time hosting, and I am gratified to be asked back.

Here is a rough schedule for the week.


  • Monday, 7 June: Show me a poster, graphic, or dataviz! 
  • Tuesday, 8 June: Why streaks matter!
  • Wednesday, 9 June: From blog to book! 
  • Thursday, 10 June: Posters for everyone! 
  • Friday, 11 June: Posters reviewed!
  • Saturday, 12 June: The randomizer!

Join me, won’t you?

Related posts

The IAmSciComm threads

External links

IAmSciComm home page

04 June 2021

Experiments don’t always lead to papers

Drugmonkey tweeted:

Telling academic trainees in the biomedical sciences to put their heads down, do a lot of experiments and the papers will just emerge as a side-product is the worst sort of gaslighting.

He’s right, as he often is, but it bears examining why that’s the case.

First, not all experiments should be published. Experiments can have design flaws like uncontrolled variables, small sample sizes, and all the other points of weakness that we learn to find and attack in journal club in graduate school.

Second, even if an experiment is technically sound and therefore publishable in theory, it may not be publishable in practice. In many fields, it’s almost impossible to publish a single experiment, because the bar for publication is high. People usually want to see a problem tackled by multiple experiments. The amount of data that is expected in a publishable paper has increased and probably will continue to do so.

The bar for what is considered “publishable” is malleable. We have seen that in the last two years with the advent of COVID-19. There was an explosion of scientific papers, many of which probably would not have been publishable if there wasn’t a pandemic going on. People were starved for information and researchers responded in force. You have to understand what is interesting to your research community.

Third, experimental design is a very different skill from writing and scientific publication. 

Fourth, it’s not a given that everyone feels the same drive to publish. Different people have different work priorities. For instance, I saw a lot of my colleagues who had big lab groups with a lot of students who churned through regularly. Those labs generated a lot of posters and a lot of master’s theses. According to our department guidelines, theses were supposed to represent publishable work.

But all of that didn’t turn into papers consistently. I think people got positive feedback for having lots of students (and looking “busy”) and came to view “I have a lot of students” as their personal measure of success. Or they just got into the habit of thinking, “I’ll write up that work later,” or, “Just one more experiment so we can get it in a better journal.”

I had fewer students and fewer master’s theses written than my colleagues, but I published substantially more papers. I say this not to diss my colleagues or brag on myself; it’s just a fact. I made publication a priority.

Publishing papers requires very intentional, deliberate planning. It requires understanding the state of the art in the research literature. It requires setting aside time to write the papers. It requires understanding what journals publish what kinds of results. Just doing experiments in the lab will not cause papers to fall down like autumn leaves being shaken loose from trees.

24 May 2021

It’s Book Day! Better Posters is here!

It’s been a long time coming.

Today is the official release date for the Better Posters book.

Better Posters book in box.

The Better Posters blog started in 2009, inspired in large part by Garr Reynolds’s Presentation Zen blog. Reynolds’s blog made the transition to book in late 2007. As my blog kept going, I quietly entertained the hope that maybe I might be able to write a book that did for conference posters something like what Reynolds (and many others) did for oral presentations.

Somewhere along the way, I even wrote out a partial outline of what a book might contain.

Flash forward to sometime in 2017. Nigel Massen from Pelagic Publishing contacts me about the potential for... a book on conference posters! I send him my crappy outline. Despite its crappiness, Nigel assures me that all books start with a crappy outline. I manage to convince him I can actually do this.

I start writing it in earnest in the first few months of 2018, and submit it on time on Halloween 2019. So yes, the writing and organizing and figure making and emailing people to ask if I can show their posters took well over a year. (In my defense, I was teaching a full course load while writing the book, too).

But proving that if it wasn’t for the last minute, nothing would get done, I was furiously making some fairly significant changes even on deadline day.

The original plan was to release the book in the first quarter of 2020, which would be in time for the normal summer conference season for academics.

I don’t know if we would have made that target, but the COVID-19 pandemic derailed the plan. There was just no way to release the book during a pandemic. The book got pushed back a couple of times until today.

After more than three years, it’s a little hard to believe that other people can now read this thing. It doesn’t quite feel like the books that inspired it. When you have to write 80 thousand words, you just get too tired to emulate anyone else, and what you get represents you and your voice, for good or bad.

I have more to say on the creation of this book, but for now, I will just say that if you get it, I hope you find it useful.

If you’re interested, the book is available in both paperback and ebook versions for Kobo, Kindle, and Nook.

If you cannot buy a copy of the book yourself, you might recommend it to your university library or local community library.

Photo by Anita Davelos.