18 May 2024

You need how many letters to prove you should get tenure?!

Many academics badly underestimate how much diversity there is in how universities do business. This includes me: I was caught flat-footed by a new paper that examines a common part of the tenure process for academics.

When someone is going up for tenure and promotion, it’s common to ask people from outside the university to write a letter describing how this person fits into the research community. These external letters can be a good safety valve to prevent a department from glossing over problems with one of their faculty.

When I was at UTRGV, the “external letter” requirement was just getting implemented. One of the key questions was how many letters to ask for. At a panel this week in DC, someone mentioned the practice and talked about the difficulty of getting “three letters.”

Three letters turns out to be on the low end. The most common minimum number of letters was five.

Some universities require a minimum – I say again, a minimum – of ten external letters.

All I can think of is, “How insecure do you have to be in your decision to hire someone that you need external validation from ten other people?”

References 

Hannon L, Bergey M. Policy variation in the external evaluation of research for tenure at U.S. universities. Research Evaluation: rvae018. https://doi.org/10.1093/reseval/rvae018

03 May 2024

Potential problems of paying peer reviewers

First, some backstory.

I wrote this essay for a journal. It was provisionally accepted, but it needed revision. Then I changed jobs, and I was never able to sit down and get the revisions done. A few other things I observed (unrelated to this article) made me reluctant to resubmit this to the journal I originally sent it to. But it’s hard to find a journal, or even a preprint server, interested in opinion articles like this. So I didn’t revise and resubmit it.

But because this topic makes the rounds in academic social media routinely, I thought this was worth sharing in some form, even this first unrevised version.

Because this has been sitting on my hard drive for a while, the article has, to some degree, been overtaken by events. In particular, my comments about using AI for peer review were seen as unrealistic when I submitted this – but that was before the release of ChatGPT.

This article is archived on Figshare: https://doi.org/10.6084/m9.figshare.25746996

Potential problems of paying peer reviewers

Zen Faulkes

School of Interdisciplinary Science, McMaster University (Note, 3 May 2024: I am no longer affiliated with McMaster. Please don’t bug them.)

Pre-publication peer review organized by academic journal editors, funding agencies, and so on (simply called “peer review” from here on) is viewed as a hallmark of quality academic publication. But there are concerns that “reviewer fatigue” is causing trouble for peer review. Authors complain that they receive too many requests to review manuscripts, and editors complain about the difficulty of finding willing and qualified peer reviewers (Fox 2017, Goldman 2015, Vesper 2018). Therefore, many academics suggest that publishers should pay researchers for peer review. This idea appears popular: a poll asking, “Do you think reviewers should be paid for reviewing?” found 85.6% of respondents voted yes, compared to 11.3% voting no, and the remaining 3.0% unsure (n=1,234, with 6.2% opting “Show results” instead of voting) (Academic Chatter 2022).

It is easy to see the appeal of paid peer review. It provides a new incentive for people to agree to review. Reading papers thoroughly and then crafting coherent, useful reviews of the manuscript is work. It takes time, effort, and has opportunity costs for the reviewer. Professional work deserves to be compensated, and many are galled that they are expected to “work for free.” The argument is that publishers are exploiting academics who are foolish enough to provide their work for free. The feeling of exploitation is made worse because most academic publishers are for-profit businesses that are extremely profitable. Academics often describe publishers’ profits as “obscene” (Stafford and Brand 2021, Taylor 2012). Even non-profit publishers are criticized for the salaries of those working for the company (Eisen 2016).

It is understandable that academics would resent businesses that make money from scholarly goods and services, which are often publicly funded, when those academics are struggling financially. Compensation for postdoctoral researchers and contingent faculty jobs can be below the poverty line (Gee 2017, Semeniuk 2022). One UK funding agency sent an email suggesting doctoral students could work extra jobs to make ends meet, and listed examples such as “Babysitting” and “Fruit picking” (Lowe 2022). Money is a problem for many academics. But when so many problems can be solved with money, it is easy to forget that not all problems can be solved with money.

Money cannot buy time

The main reason prospective reviewers decline invitations is lack of time (Tite and Schroter 2007). When time is the issue, paying for reviews is unlikely to get people to accept the review (Tite and Schroter 2007).

Some aspects of peer review might be improved by paying reviewers, but the overall process of scholarly communication might not be improved.

Payment suggests corruption

Researchers have a long tradition of providing academic knowledge, and other goods and services, at minimal to no cost: “We don’t do it for the money.” This tradition dates from before science was professionalized in the 20th century, when it was mainly practiced by the wealthy. That arrangement was seen as a way of ensuring research integrity: wealthy researchers were seen as honest because they had no financial incentive to lie about their findings.

Unsurprisingly, one of the most common tactics of anti-science campaigns and conspiracy theorists is to argue that scientists are corrupted by money. “Shills for big pharma” (Blaskiewicz 2013, Smith 2017), “getting rich on grant money” (Merzdorf et al. 2019), “companies stifle world-changing technology / medical treatments because it would cut into profits” (Demeo 1996), and exhortations to “Follow the money!” are but a few of the variations. While it is always difficult to measure the persuasiveness of arguments “in the wild,” that these arguments are used so often over so many years suggests people truly are persuaded by them.

If reviewers were paid, anti-science campaigners could easily point to reviewer payments as evidence that academic publishing is merely a money-making game for insiders (researchers).

Unqualified reviewers

Peer review is valued because it is conducted by fellow experts in a field. Because there are costs to reviewing and little to no recognition, there are few incentives for reviewers to review a paper that is not professionally relevant to them.

Paying for reviews creates a generalized incentive for reviewers to say yes to any paper they are invited to review, regardless of their knowledge of the topic. Because reviewers’ identities are typically confidential, authors who receive poor reviews may want to argue that they had a reviewer who only wanted the money.

Unhelpful reviewers

The overall quality of reviews varies. Some reviews are detailed and contain multiple specific, actionable items that can improve the paper. Other reviews are cursory and contain only vague statements that cannot be easily addressed by authors. Still other reviews may be substantive but make unreasonable requests for new experiments and analyses.

Worse, there are many examples of egregiously bad reviews, running the gamut from unhelpful to rude, racist, and sexist commentary unrelated to the content of the article (Silbiger and Stubler 2019, Woolson 2015).

Fair compensation for review

Whether a journal should pay for a perfunctory or insulting review is a specific example of a much larger issue: determining what is fair compensation for a review. Academic manuscripts differ wildly in length and complexity. The amount of effort that people must put into writing reviews also varies. An experienced researcher may be able to assess and review a manuscript very quickly and respond with a concise but informative review. 

Because so much discontent about volunteer peer review concerns academic work being undervalued, those arguing that peer reviewers should be paid should also make concrete suggestions for pay scales.

Alternatives to reviewer payment

Paid peer review aims to solve two problems: unwillingness to review (“reviewer fatigue”) and uncompensated labour. A more radical solution to these problems would be to take humans out of the peer review process. There are at least two ways this could be done.

The first solution would be to hand over the bulk of peer review to specialist artificial intelligence (AI) expert systems that can review manuscripts. Given recent advances in AI systems that can process and manipulate natural language text, such as GPT-3, this may be viable in the near future (Schulz et al. 2022).

The second solution – less speculative but more radical – is to do away with peer review entirely (in the limited sense of prepublication review organized by editors). There are strong criticisms that pre-publication peer review does not provide the quality control and improvement that researchers want (Smith 2010). Ongoing post-publication peer review might provide an alternative to the journal centered form of peer review that is currently the norm.

If neither of these suggestions is embraced by the research community in the near future (which seems likely), it may be possible to address issues with voluntary peer review by more careful accounting of service obligations.

The main components of an academic’s job are often described as research, teaching, and service. Academics in universities are often explicitly told how much of their time should be allocated to each of those three parts of the job, and service is often the smallest piece of that budget. It is unsurprising that peer review, a service commitment to the research community, suffers when institutions explicitly show that service is the smallest part of the job description. Nevertheless, researchers with service obligations might account for time spent on peer reviews more carefully to argue for release from other service commitments (e.g., committee work). Similarly, administrators might work to ensure that service obligations are not overlooked or satisfied by trivial “box checking” services.

Service loads in peer reviewing are also distributed unevenly. Journal editors often ask people to review whose jobs have no expectation of service. This includes early career researchers such as graduate students and postdocs, researchers who hold jobs outside academia, and retired academics. This is hardly fair to those individuals, and people in these positions have the best case for being paid for review because it is literally not their job.

Journal editors might also be able to better balance the service load by broadening the pool of who is asked to review. Many who are willing to provide peer review are rarely, if ever, asked to do so (Vesper 2018). In particular, researchers in “emerging economies” are probably underused as potential peer reviewers (Vesper 2018).

References

Academic Chatter. 2022. Twitter. https://twitter.com/academicchatter/status/1543229417060696066

Blaskiewicz R. 2013. The Big Pharma conspiracy theory. Medical Writing 22(4): 259-261. https://doi.org/10.1179/2047480613Z.000000000142

Demeo S. 1996. The corporate suppression of inventions, conspiracy theories, and an ambivalent American dream. Science as Culture 6(2): 194-219. https://doi.org/10.1080/09505439609526464

Eisen M. 2016. On pastrami and the business of PLOS. https://www.michaeleisen.org/blog/?p=1883

Fox CW. 2017. Difficulty of recruiting reviewers predicts review scores and editorial decisions at six journals of ecology and evolution. Scientometrics 113(1): 465-477. https://doi.org/10.1007/s11192-017-2489-5

Gee A. 2017. Facing poverty, academics turn to sex work and sleeping in cars. https://www.theguardian.com/us-news/2017/sep/28/adjunct-professors-homeless-sex-work-academia-poverty

Goldman HV. 2015. More delays in peer review: Finding reviewers willing to contribute. https://www.editage.com/insights/more-delays-in-peer-review-finding-reviewers-willing-to-contribute

Lowe A. 2022. Twitter. https://twitter.com/adriana_lowe/status/1549754463619108865

Merzdorf J, Pfeiffer LJ, & Forbes B. 2019. Heated discussion: Strategies for communicating climate change in a polarized era. Journal of Applied Communications 103: 3. link.gale.com/apps/doc/A600269487/AONE?u=anon~a35eccce&sid=bookmark-AONE&xid=f37daf5b

Schulz R, Barnett A, Bernard R, Brown NJL, Byrne JA, Eckmann P, Gazda MA, Kilicoglu H, Prager EM, Salholz-Hillel M, ter Riet G, Vines T, Vorland CJ, Zhuang H, Bandrowski A, & Weissgerber TL. 2022. Is the future of peer review automated? BMC Research Notes 15(1): 203. https://doi.org/10.1186/s13104-022-06080-6

Semeniuk I. 2022. Prominent researchers urge Ottawa to increase top science scholarships above poverty line. https://www.theglobeandmail.com/canada/article-prominent-researchers-urge-ottawa-to-increase-scholarships-for-top/

Silbiger NJ & Stubler AD. 2019. Unprofessional peer reviews disproportionately harm underrepresented groups in STEM. PeerJ 7: e8247. https://doi.org/10.7717/peerj.8247

Smith R. 2010. Classical peer review: an empty gun. Breast Cancer Research 12(Suppl 4): S13. https://doi.org/10.1186/bcr2742

Smith TC. 2017. Vaccine rejection and hesitancy: A review and call to action. Open Forum Infectious Diseases 4(3): ofx146. https://doi.org/10.1093/ofid/ofx146

Stafford T & Brand CO. 2021. Commercial involvement in academic publishing is key to research reliability and should face greater public scrutiny. https://doi.org/10.31222/osf.io/rjmvh

Taylor M. 2012. Academic publishers have become the enemies of science. https://www.theguardian.com/science/2012/jan/16/academic-publishers-enemies-science

Tite L & Schroter S. 2007. Why do peer reviewers decline to review? A survey. Journal of Epidemiology and Community Health 61(1): 9-12. https://jech.bmj.com/content/jech/61/1/9.full.pdf

Vesper I. 2018. Peer reviewers unmasked: largest global survey reveals trends. Nature. https://doi.org/10.1038/d41586-018-06602-y

Woolson C. 2015. Sexist review causes Twitter storm. Nature 521: 9. https://doi.org/10.1038/521009f

29 April 2024

Open access: What is a paper for, anyway?

Brian McGill at Dynamic Ecology blog has an interesting overview of publishing trends. The paragraph that seems to have gotten the most traction is this one: 

Open access has been a disaster. Scientists never really wanted it. We have ended up here for two reasons. First, pipe dreaming academics who believed in the mirage of “Diamond OA” (nobody pays and it is free to publish). Guess what – publishing a paper costs money – $500-$2000 depending on how much it is subsidized by volunteer scientists. We don’t really want Bill Gates etc. to pay for diamond OA. And universities and especially libraries are already overextended. There is no free publishing. The second and, in my opinion most to blame, are the European science grant funders who banded together and came up with Plan S and other schemes to force their scientists to only publish OA. At least in Europe the funding agencies mostly held scientists harmless by paying, and because of the captive audience, publishers went to European countries first for Read and Publish agreements. So European scientists haven’t been hurt too badly. But North America has so far refused to go down the same path, leaving North American scientists without grants (a majority of them) with an ever shrinking pool of subscription-based journals to publish in. And scientists from less rich countries are hurt even worse. Let’s get honest. How long before every university in Africa is covered by a Read and Publish agreement from the for profit companies?

What is interesting about this assessment is that he calls the open access situation a “disaster” on the basis of one very narrow measure: “How does it affect writing scientists?” By “writing scientists,” I mean what are usually called “principal investigators” (PIs), faculty who are busy running a lab and need publications for career advancement.

Two things.

First, most of the paragraph is concerned about how article processing charges affect scientists without grants who need to publish. I emphasize “charges” because, as I have said before, we need to separate open access – a description of who can read scientific articles – from the business models used to support open access. McGill is complaining about the latter, and isn’t addressing the former.

I do agree that many researchers have unrealistic expectations about the costs of publication. I agree that there has not been enough discussion about alternative business models for open access.

Second, journal articles do not exist merely for the benefit of scientists who need publications to get promotion or tenure. There are not only people who write journal articles; there are people who read them. You should consider the sizable benefits of more people being able to read scientific papers before judging the success of open access.

Article processing charges do create barriers for researchers with limited resources. But the research of hypothetical African scientists is impeded by not being able to read the scientific literature, not just by being unable to publish in the scientific literature.

If we are concerned about African researchers not being able to pay article processing charges, should we not also be concerned about whether African researchers can buy journal articles, or whether African research libraries can afford journal subscriptions?

I see increased ability to read the world’s scholarly literature as a good thing. I don’t see it as an unalloyed good that must be pursued above all else. But it should be in the mix as we’re taking stock of open access.

20 March 2024

The precarious nature of scientific careers (inspired by Sydney Sweeney)

I recently read this article on actress Sydney Sweeney:

https://defector.com/the-money-is-in-all-the-wrong-places

(I liked you in Madame Web, Sydney, don’t let the haters hate.)

I did not expect that a story riffing off the career of a successful Hollywood actress would resonate so much.

It resonated because the article pointed out that “success” has been so eroded that it is almost meaningless now. Sweeney is doing well for herself, but she dare not stop grinding.

Academia looks like this to me. Even if you achieve “success” — landing one of those increasingly rare tenure track faculty jobs — the grinding not only does not stop, it probably intensifies. Just like Sweeney doesn’t feel she can stop taking ads for Instagram posts or take off a few months to have a kid, how many researchers are subjecting themselves to long hours to write grant proposals and publish papers because they are told that if they stop swimming, they’ll drown?

Actors and scientists are far from alone in this predicament. It’s widespread. But I think it’s worth asking, “What does success look like?” And now, it looks like “success” has razor thin margins of error.

18 March 2024

Contamination of the scientific literature by ChatGPT

Mushroom cloud from atomic bomb
I’ve written a little bit about how often notions of “purity” come up in discussions of scientific publishing.

I see a lot of hand-waving about the “purity and integrity of the scientific record,” which is never how it’s been. The scientific literature has always been messy.

But in the last couple of weeks, I’m starting to think that maybe this is a more useful metaphor than it has been in the past. And the reason is, maybe unsurprisingly, generative AI. 

Part of my thinking was inspired by this article about the “enshittification” of the internet. People are complaining about searching for anything online, because so many search results are dominated by low-quality content designed to attract clicks, not to be accurate. And increasingly, that content is generated by AI. Which was trained on online text. So we have a positive feedback loop of crap.

(G)enerative artificial intelligence is poison for an internet dependent on algorithms.

But it’s not just the big platforms like Amazon and Etsy and Google that are being overwhelmed by AI content. Academic journals are turning out to be susceptible to enshittification.

Right after that article appeared, science social media was widely sharing examples of published papers in academic journals with clear, obvious signs of being blindly pasted from generative AI large language models like ChatGPT. Guillaume Cabanac has provided many examples of ChatGPT giveaways like, “Certainly, here is a possible introduction to your topic:” or “regenerate response” or apologizing that “I am a large language model so cannot...”.

It’s not clear how widespread this problem is, but that even these most obvious examples are not getting screened out by routine quality control is concerning.

And another preprint making the rounds shows more subtle telltale signs that a lot of reviewers are using ChatGPT to write their reviews.

So we have machines writing articles that machines are reviewing, and humans seem to be hellbent on taking themselves out of this loop no matter what the consequences. I can’t remember where I first heard the saying, but “It is not enough that a machine knows the answer” feels like an appropriate reminder.

The word that springs to mind with all of this is “contaminated.” Back to the article that started this post:

After the world’s governments began their above-ground nuclear weapons tests in the mid-1940s, radioactive particles made their way into the atmosphere, permanently tainting all modern steel production, making it challenging (or impossible) to build certain machines (such as those that measure radioactivity). As a result, we’ve a limited supply of something called “low-background steel,” pre-war metal that oftentimes has to be harvested from ships sunk before the first detonation of a nuclear weapon, including those dating back to the Roman Empire.

Just as the use of atomic bombs in the atmosphere created a dividing line of “before” and “after” widespread contamination by low-level radiation, the release of ChatGPT is deepening another dividing line. The scientific literature has been contaminated with ChatGPT. Admittedly, this contamination might turn out to be at a level so low that it is not even harmful, just as most of us don’t really think about the atmospheric radiation from years of above-ground testing of atomic bombs.

While I said before that it isn’t helpful to talk about “purity” of the academic literature, I think this is truly a different situation than we have encountered before. We’re not talking about messiness because research is messy, or because humans are sloppy. We’re talking about an external technology that is impinging on how articles are written and reviewed. It is a different problem, one that might warrant describing it as “contamination.”

(I say generative AI is deepening the dividing line because the problems language AI are creating were preceded by the widespread release and availability of Photoshop and other image editing software. Both have eroded our confidence that what we see in scientific papers represents the real work of human beings.)

Related posts

How much harm is done by predatory journals? 

External links

Are we watching the internet die?

12 March 2024

Scientists will not “just”: Individual scientists can’t solve systemic problems

Undark has an interview with Paul Sutter about the problems of science. Now before I get into my reaction, I want to say that this interview was conducted because Sutter has a book, Rescuing Science, that covers these topics. I haven’t read the book. Maybe it has more nuance than this short interview.

Sutter has a lot of complaints about the current state of science, but his big one?

(A)n inability for scientists to meaningfully engage with the public.

The interview tries to peel back the layers of why this is. Sutter, like many academics, blames the incentives.

We, as a community of scientists, are so obsessed with publishing papers (that) this is causing an environment where scientific fraud can flourish unchecked.

Sutter goes on to critique journal Impact Factor and h-index and peer review and a lot of the usual suspects. But Sutter’s solution to these big systemic problems might be summed up as, “Scientists need to get out more.” He wants scientists to do more public communication. Lots more.

Scientists should be the face of science. How do we increase diversity and representation within science? How about showing people what scientists actually look like. How do people actually understand the scientific method? What if scientists actually explained how they’re applying it in their everyday job.

This is a very familiar view to me. It’s why I started blogging here more than twenty years ago. Much of what I have achieved professionally, I can credit in part or in whole, to blogging and otherwise being Very Online. But I try to have a clear-eyed view of what I was able to achieve in building trust in science: probably not much.

I see three problems.

First, it feels like Sutter is saying, “If only scientists would just talk to non-scientists more.” I am reminded of this:

If your solution to a human problem involves the phrase “If only everyone would just...”, you don’t have a solution. Never in the History of Ever has everyone “just” anything... and we’re not likely to start now. 

(This tweet is from Laura Hunter. I remember seeing some version of this on Twitter, but can’t recall if I saw Laura Hunter’s tweet specifically.)

Second, lots of excellent scientists are not great at communicating to non-specialist audiences. They can’t stop using words like “canonical” in interviews. They aren’t trained in this.

Third, Sutter points out that the interests of science journalists are not always aligned with the interests of scientists. But that is a feature, not a bug. Journalists are not supposed to be stenographers, or cheerleaders, for science. And Sutter spends a lot of time criticizing journalism while the profession is practically collapsing in front of our eyes. Local news outlets are vanishing. Online publications are shuttering and laying off hundreds of staffers. Popular Science is gone.

Wait, I have more than three.

Fourth, this sounds like trying to fix traffic jams (systemic problems) by asking people to drive carefully (individual actions). That doesn’t have a good record of success. 

In the interview, Sutter doesn’t come to grips with the systemic issues of money and power that are being leveraged to make coordinated attacks on science.

I’ve said it before: Individual scientists – who are struggling to write grant after grant to keep their labs afloat – are not on a level playing field with international media corporations backed by billions of dollars. Or tech corporations that write recommendation algorithms. Or religions that have a several thousand year head start in winning hearts and shaping culture.

I am guessing that if Sutter were to point to the sort of people who exemplify his preferred method of restoring trust in science by getting out and talking publicly, he might point to Anthony Fauci or Peter Hotez – both of whom were excellent communicators throughout the COVID-19 pandemic. But we have seen the power asymmetry: Fauci and Hotez were physically threatened for publicly talking about science.

Anti-science is large, well-funded, and organized. Single scientists with a blog – or even a few invitations to speak on national platforms – are overmatched.

External links

Paul M. Sutter Thinks We’re Doing Science (and Journalism) Wrong

Rescuing Science

11 March 2024

How Godzilla movies reflect scientific research

Leonard Maltin
I have a very specific memory of film critic Leonard Maltin on Entertainment Tonight reviewing Godzilla 1985 (the first American release of 1984’s Gojira or The Return of Godzilla). Maltin said something like, “Many remakes fail because they stray too far from the original. Godzilla 1985 doesn’t have that problem. It’s still the same cheap Japanese monster movie.”

That stung so much that here I am remembering it almost 40 years later.

I can’t think of anything that better summarized the attitude towards Godzilla for so long.

So last night’s Oscar win for Godzilla Minus One feels like vindication for a lifelong fan like me. 

Godzilla -1 effects team with Oscars

For decades, Godzilla movies were the butt of jokes. And deservedly so, I have to say. As much as I count myself as a Godzilla fan, I have no desire to watch Son of Godzilla or All Monsters Attack (Godzilla’s Revenge) ever again. (Shudder.)

But fandom is a funny thing. You love the things you love, and it still kind of stings when you hear them derided.

But somewhere in the years after Maltin snubbed the 30th anniversary movie, something shifted in people’s attitude towards Godzilla.

Those of us who watched a few dubbed movies as kids remembered Godzilla as we grew up. I heard in the 1990s that there was a new “high tech” series of Godzilla movies being made in Japan. The Internet removed friction for finding out fannish stuff. You could find retrospectives about the making of the series in English.

And say what you will about the American Godzilla made in 1998, Hollywood wouldn’t have shelled out the cash to try to make that movie a big summer blockbuster if there wasn’t some sort of name recognition.

After all those years in the wilderness, what films could show was finally catching up with the visions of Godzilla that fans held in their heads.

And it occurred to me that this is sometimes how science works.

You have an idea. You get it out there. 

Maybe it’s derided as cheap and mostly dismissed. But you try again.

And other people pick up some aspect of it. And maybe sometimes the results are embarrassing, with offshoots that are not as good as the original.

And sometimes, if you wait and keep trying, that original idea somehow stands the test of time. Other people come around and start to recognize that it was a good idea. And you end up with something that gets better than you ever thought it could.

Related posts

“Why do you love monsters?”