29 November 2024

A journal paying for reviews

The BMJ will start paying £50 for reviews.

The catch? It’s not for peer review; it’s for reviews from patients and the public. They still expect peer reviews to be done as part of service.

Their announcement post does not describe how they will address several of what I consider potential problems. While my previous post was about paying for reviews from academics, I think many of the same issues will also apply to public and patient review.

I’m going to sign up as a public reviewer. Because why not? I could use 50 quid. And I’d like to see what this looks like from the inside.

Related posts

Potential problems of paying peer reviewers

External links

The BMJ will remunerate patient and public reviewers

25 November 2024

“Neurosurgery on Saturn” paper shows academics’ blind spots

The planet Saturn.
In the last few days, a bunch of people on Bluesky discovered a paper that has been out for a few months, “Practice of neurosurgery on Saturn.”

Looking through the social media posts about this paper, a lot of people played that favourite academic game, “How did this get published?” Many people suggested it’s a hoax. Academic hoaxes are a particular interest of mine, and I am always looking for the next entry in the Stinging the Predators collection of academic hoaxes.

I didn’t think this was a hoax? Hoaxers usually reveal the prank almost immediately, and this paper had been out for months.

My hunch seems to have been correct. The lead author has an extensive Google Scholar page and said on PubPeer: “it is clear that the document focuses on fictitious and hypothetical situations.” I am not clear about what the point of the article is, but never mind. It’s filed under “Letters to the editor,” which I think is an arena where researchers and journals can be allowed a little leeway.

But this is a good example of something that I think is decidedly lacking in many discussions about academic publishing and academic integrity. In none of the posts I read did anyone do any actual investigation.

Nobody looked to see if the authors were real.

Nobody emailed the authors. It seems to just be happenstance that the lead author saw the PubPeer comments and replied.

Nobody emailed the journal (although editors are often notoriously slow to reply to these sorts of things).

In collecting academic hoaxes, I’ve noticed a similar pattern. People create hoaxes to show that there are bad journals out there that accept anything for money. But by and large, that is where it stops.

People know predatory journals are out there, but nobody is actively digging behind the scenes to see how they work. How do people decide to start running them? How much money do they make? Why would a scam artist only in it for the money do apparently counterintuitive things like waiving the article processing charges? (There are multiple instances of that in the Stinging the Predators collection.)

A recent paper came out that made a similar point about the lack of investigation into paper mills (Byrne et al. 2024).

Academics treat too many of these problems around dubious publishing as some sort of black box that cannot be opened. They only study the outputs. I think someone needs to approach these sorts of problems more like an investigative journalist or an undercover law enforcement officer might.

Go in and find the people who are doing all this dishonest stuff. Get them talking. I want to hear what some of the people organizing predatory journals or paper mills or citation rings have to say. Why do they do what they do?

I don’t expect academics themselves to do this. This kind of investigative journalism is expensive and time-consuming, and it is being done less and less. But without this kind of insight, we will probably never be able to understand and curb these problems.

References

Byrne JA, Abalkina A, Akinduro-Aje O, Christopher J, Eaton SE, Joshi N, Scheffler U, Wise NH, Wright J. 2024. A call for research to address the threat of paper mills. PLoS Biology 22(11): e3002931. https://doi.org/10.1371/journal.pbio.3002931

Mostofi K, Peyravi M. 2024. Practice of neurosurgery on Saturn. International Journal of Surgery Case Reports 122: 110139. https://doi.org/10.1016/j.ijscr.2024.110139

External links

PubPeer commentary on “Practice of neurosurgery on Saturn”

Altmetric page for “Practice of neurosurgery on Saturn”

Google Scholar page for Keyvan Mostofi

Photo by Steve Hill on Flickr, used under a Creative Commons licence. 

Generative AI: Two out of three is bad

There’s an old joke: “Fast, cheap, good: Pick two.”

Generative AI is fast and cheap. It will not be good.

I would like “good” to be a higher priority.

Edit, 1 December 2024: I think academics, under the intense pressure to be productive, prioritize those three characteristics in that order. The first two might switch (grad students are more likely to prioritize “cheap”), but I think “good” is regularly coming last.



22 November 2024

Clearing out the vault by posting preprints

Old books on a shelf
My last jobs didn’t have any expectations for research, so my publication rate slowed significantly. I wanted to do something about that.

A few months ago, I published an article on paying peer reviewers that I had submitted to a journal but never finished revising, both here on the blog and on Figshare. This got me digging around in folders on my hard drive, and I got to thinking about other articles that I had written and submitted but that were never accepted. At the time, I had simply run out of steam to revise and resubmit them in ways that would satisfy an editor and a couple of reviewers.

So in recent weeks, I have taken to submitting some of those manuscripts to a preprint server.

In the past, I was a little cool on the value of preprints. I always thought they had some uses, but I was skeptical that they would replace journals, which was the stated wish of some of the strongest preprint advocates. I was always worried about the Matthew effect: that famous labs would benefit from preprints and everyone else would just have more work to do.

But since I last wrote about preprints, the attention landscape has changed. More people in biology have made scanning through preprint servers part of their routine. I was surprised when a reporter emailed me about one of these preprints and wanted to chat for an interview. I don’t think that would have happened a few years ago.

Whether these will be the “final” public versions of these little projects, I can’t say. I have other projects, not yet written up at all, that I want to try to get into a journal. But I am glad that every one of these preprints received at least a little attention on Bluesky.

Here’s the list of my recent preprints, all on bioRxiv.

Update, 26 November 2024: I’ve now had two journalists reach out to me because of these preprints. I’ve never had that high a level of interest from the dozens of peer-reviewed journal articles I published.

Related posts

A pre-print experiment: will anyone notice?

A pre-print experiment, continued  

A pre-print experiment, part 3: Someone did notice  

Photo by umjadedoan on Flickr. Used under a Creative Commons license.

18 November 2024

The cat that fooled Google Scholar, the newest hoax in my collection

I finally got a chance to update my collection of academic hoaxes!

Stinging the Predators 23.0: Now with a cat! (The Internet loves cats, right?) http://bit.ly/StingPred

I am now up to 42 academic hoaxes, which is triple the number that I started with in version 1.0 in 2017.

The latest hoax targets not a predatory journal, but an academic search engine. While this is unusual, it is not the first hoax that was pulled to show how easy it is to manipulate Google Scholar. (One of the things that has been interesting to me as this project has continued is that many hoaxers feel compelled to make the same point again.)

And one other thing that has been rewarding is that this collection, which I’ve only ever had on Figshare and promoted on my socials and personal websites, has been viewed tens of thousands of times and has been cited by scholars writing in proper journals a few times.

External links

Faulkes Z. 2024. Stinging the Predators: A Collection of Papers That Should Never Have Been Published (version 23). Figshare. https://doi.org/10.6084/m9.figshare.5248264



14 November 2024

Okay, stop. Saying “science isn’t political” will not keep science safe from political attacks

Advice from people with experience fighting powerful fascist opponents: “Do not comply in advance.”

In a new Science editorial, National Academies of Sciences, Engineering, and Medicine president Marcia McNutt starts complying well in advance. 

The editorial begins with blatant concern trolling:

I had become ever more concerned that science has fallen victim to the same political divisiveness tearing at the seams of American society.

Okay, stop. McNutt provides zero examples of supposed “divisiveness” in the scientific community. A recent Nature article showed exactly the opposite: the scientific community was strongly united in its preferred American election outcome, with 86% favouring the losing candidate. I guess McNutt sees that as bad because it makes it harder for her to play nice with the administration of the winning candidate.

(The scientific community) must take a critical look at what responsibility it bears in science becoming politically contentious(.)

Okay, stop. Again, McNutt provides zero examples of scientists making science politically contentious. On the other hand, I can point to many examples of politicians who waded into scientific debates about the reality of whole branches of science (evolution, geology, climate science) and health care.

No, the problem (according to McNutt) is that we scientists don’t explain ourselves well. We don’t tell politicians and citizens about how we are just disinterested third parties.

(S)cience, at its most basic, is apolitical.

Okay, stop. I know this is a popular claim, but it’s time to put this into the ground and bury it six feet deep. There is a very good Nature podcast series that takes this claim apart. Claiming “We’re not political” is a fiction that favours those who are politically privileged. McNutt would have had a much stronger case if she had said that science is not partisan. Or that reality is apolitical.

But science is an organized profession done by humans, so science is political.

It is strange to say that science is apolitical when I see the two entangled all the time.

McNutt continues:

Whether conservative or liberal, citizens ignore the nature of reality at their peril.

Okay, stop. Many elected politicians and citizens are just fine with ignoring the nature of reality – as long as they personally are not affected. And those who are personally affected, in genuine peril, may not be able to generate enough political clout to change policy to make themselves safe on their own. They need allies.

McNutt’s argument that “the arc of the scientific universe is long, but it bends towards truth” is callous. Her view seems to be that scientists should passively sit on the sidelines, presenting data but never advocating, while watching people repeat the same mistakes about discredited claims that actively harm people, over and over.

I can’t help but wonder where McNutt has been for the last decade or so when she can, apparently in all seriousness, write a sentence like this about climate science: 

It is up to society and its elected leadership to decide how to balance these options, including the use of renewable energy, climate adaptation, carbon capture, or even various interventions that reflect sunlight back into space.

Okay, stop.

Does McNutt understand that the incoming elected leadership repeatedly stated that their option is, “Climate science is all a hoax. We don’t need to do anything”?

Does McNutt understand that elected leaders can use their power to take actions that are not supported by the majority of society or scientific evidence?

Should scientists simply accept that their elected leadership is condemning millions to ever greater misery every day this denial of reality goes on?

McNutt is just trying to get her organization out of the line of fire of an incoming government that is more overtly hostile to science than maybe any other American government ever. 

The NAS stands ready, as it always has, to advise the incoming administration.

While it is the job of a civil servant to work with a new boss after every election, most scientists are not civil servants. They are not obligated to support a new government. They can, and should, do much more than just provide data and hope that elected leaders eventually come around to face scientific reality. If anyone is not coming around to face reality – the political reality, in this case – it’s McNutt.

External links

Science is neither red nor blue

The US election is monumental for science, say Nature readers — here’s why

“Stick to the science”: When science gets political (Three part podcast series)

06 November 2024

Untitled post 2024

You get hurt, hurt ‘em back. You get killed…

Walk it off.

 Captain America, Avengers: Age of Ultron

05 November 2024

Gen AI fatalism

Generative AI fatalism (or Gen AI fatalism): The assertion that generative AI must be accepted, because its widespread adoption is inevitable and cannot be stopped.

A new article by James Zou about ChatGPT in peer review is a particularly spectacular example of gen AI fatalism. Zou notes that many peer reviews show signs of being created by large language models, like ChatGPT. He lists many problems and only trivial potential advantages, and he calls for more human interaction in the peer review process.

Since Zou has nothing very positive to say about generative AI in peer review, the fatalism on display is stark.

The tidal wave of LLM use in academic writing and peer review cannot be stopped.

No! 

This is a mere assertion that has no basis behind it. Technology adoption and use is not some sort of physical law. It’s not a constant like the speed of light. It’s not the expansion of the universe driven by the Big Bang, dark matter, and dark energy.

The adoption and use of technology is a choice. Technologies fail in the marketplace all the time. We abandon technologies all the time.

If generative AI is causing problems in the peer review process, we should say that. We should work to educate our colleagues about the inherent problems with generative AI in the review process.

I suspect that people use ChatGPT for the simple reason that they don’t want to do peer review. They do not believe they are adequately rewarded for the task. So, as is so often the case, we need to look at academic incentives. We need to look at why we peer review and what it can accomplish.

Creating journal policies about using ChatGPT is little more than papering over holes in a boat. I would welcome the equivalent of pulling the boat into dry dock and doing a complete refit.

Reference

https://doi.org/10.1038/d41586-024-03588-8

Stay out of my academic searches, gen AI

Something I had long dreaded came to pass yesterday.


Google Scholar landing page with “New! AI outlines in Scholar PDF Reader: skim the bullets, deep read what you need”
Google Scholar introduced generative AI.

“New! AI outlines in Scholar PDF Reader: skim the bullets, deep read what you need.”

Andrew Thaler had the perfect riposte to this new feature.

If only scholarly publications came with a short synopsis of the paper right up front, written by the authors to highlight the important and salient points of the study.

We could give it a nifty name, like “abstract.”

Exactly! Not only do researchers already outline papers, many journals require two such outlines: an abstract and some sort of plain English summary.

I don’t need this. I don’t want this. No more generative AI infiltrating into every nook and cranny of the Web, please.

Punching fascists: A part of our heritage

Apropos of nothing, here are two pictures I thought I’d posted back in November 2016, but I can’t seem to find any more.

They’re a couple of panels of Canadian World War II comic character Johnny Canuck punching Hitler.

Johnny Canuck punching Hitler.

Johnny Canuck punching Hitler.


04 November 2024

Research on crustacean nociceptors missing useful evidence

Crab, Carcinus maenas
A new paper by Kasiouras and colleagues attempts to provide evidence for nociceptive neurons in a decapod crustacean, a shore crab. 

The authors took crabs, poked them and dropped some acid on them, and recorded the neural responses. Neurons did indeed respond, in dose- and intensity-specific ways. From this, the authors conclude these are potential nociceptive responses.

I am unconvinced. There is a big difference between showing that neurons can respond to a possibly noxious stimulus and showing that those neurons are responding as nociceptors.

First:

The same category of stimuli can be detected both by nociceptors and by “regular” sensory neurons that are not nociceptors. For example, an increase in temperature can stimulate both nociceptors and thermoreceptive sensory neurons. Mechanical pressure can stimulate both mechanoreceptors and nociceptors. That there are two neural pathways is probably why we distinguish a burn (painful) from mere heat, or a pinch from a strong touch.

The results would be more convincing if the authors showed that neurons responded in ways that are typical of other nociceptors. Many (though not all) nociceptors have a few common properties.

  1. They respond to several different types of stimuli. For example, they react to high temperature and acid and mechanical pressure and a chemical like capsaicin. (The technical term is that they are polymodal or multimodal sensory neurons.)
  2. They respond to repeated stimulation with increased activity. Most sensory neurons “get used to” the same sensory stimuli over time, but many nociceptors do the opposite. (The technical term is that they show sensitization.)

The authors couldn’t do either of these, because they were recording from the whole nerve cord and did not pick out the activity of single neurons. Sometimes it is possible to recognize the activity of single neurons in this sort of record with spike sorting techniques, but that was not done in this paper.
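For readers unfamiliar with the technique, spike sorting in its simplest form is threshold detection followed by waveform clustering. Here is a minimal sketch in Python (every parameter is an arbitrary illustration, not a value from the paper):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(trace, fs, thresh_sd=5.0, win=30, n_units=2):
    """Crude spike sorting: threshold detection, then waveform clustering."""
    # Robust noise estimate from the median absolute signal, then a
    # negative-going threshold (extracellular spikes are often negative).
    noise = np.median(np.abs(trace)) / 0.6745
    thresh = -thresh_sd * noise
    crossings = np.where((trace[1:] < thresh) & (trace[:-1] >= thresh))[0]
    # Keep only crossings with a full waveform window around them.
    crossings = crossings[(crossings > win) & (crossings < len(trace) - win)]
    waves = np.array([trace[i - win:i + win] for i in crossings])
    # Reduce each waveform to a few principal components and cluster them,
    # so spikes with similar shapes get assigned to the same putative unit.
    features = PCA(n_components=3).fit_transform(waves)
    units = KMeans(n_clusters=n_units, n_init=10).fit_predict(features)
    return crossings / fs, units  # spike times (s) and putative unit labels
```

Real spike sorting is far harder than this sketch (overlapping spikes, drifting waveforms, choosing the number of units), which is presumably part of why the authors did not attempt it.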

Second:

Species respond to potentially noxious stimuli in different ways. Some species respond to capsaicin with nociceptive behaviours, but others do not. No stimulus is guaranteed to trigger nociceptors in all cases.

This paper used mechanical touch and acetic acid, but because the paper did no behavioural experiments, it’s not clear if the crabs perform nociceptive behaviour in response to the level of stimuli presented.

Another paper used acetic acid as a potentially noxious stimulus with shore crabs, and crabs do respond to it (Elwood et al. 2017), but that paper was criticized (Diggles 2018; not cited in the current study) for not considering the possibility that acetic acid caused a feeding (gustatory) response rather than a nociceptive response. Acetic acid is the acid in vinegar, after all.

The results would be more convincing if the electrophysiological recordings were connected back to the crabs’ behaviour. For example, the authors could have tried an experiment to show that a touch of low intensity caused one sort of behaviour, but a touch of higher intensity caused a different behaviour that looks like nociceptive behaviour. Then they could have examined how the neural activity differed between those two kinds of tactile stimulation.

I am glad to see more labs trying to establish the presence or absence of nociceptors in crustaceans. There is still more work to do to demonstrate their existence and, if they exist, to characterize their physiology.

References

Diggles BK. 2018. Review of some scientific issues related to crustacean welfare. ICES Journal of Marine Science: fsy058. https://doi.org/10.1093/icesjms/fsy058

Elwood RW, Dalton N, Riddell G. 2017. Aversive responses by shore crabs to acetic acid but not to capsaicin. Behavioural Processes 140: 1-5. https://doi.org/10.1016/j.beproc.2017.03.022

Kasiouras E, Hubbard PC, Gräns A, Sneddon LU. 2024. Putative nociceptive responses in a decapod crustacean: The shore crab (Carcinus maenas). Biology 13(11): 851. https://doi.org/10.3390/biology13110851

Picture by Dunnock_D on Flickr; used under a Creative Commons license.

02 November 2024

Pursuing integrity over excellence in research assessment

I was reading yet another “Scientists behaving badly” article. This one was about Jonathan Pruitt, who used to work where I used to work (different departments). And, as is usual in these articles, there is a section about how institutions assess research:

Many attribute the rising rate of retractions to academia’s high-pressure publish-or-perish norms, where the only route to a coveted and increasingly scarce permanent job, along with the protections of tenure and better pay, is to produce as many journal articles as possible. That research output isn’t just about publishing papers but also how often they’re cited. Some researchers might try to bring up their h-index (which measures scientific impact, based on how many times one is published and cited), or a journal might hope that sharing particularly enticing results will enhance its impact factor (calculated using the number of citations a journal receives and the number of articles it publishes within a two-year period).
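As an aside, the h-index in that passage is simple to compute: it is the largest number h such that h of a researcher’s papers have been cited at least h times each. A quick sketch in Python:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

# Five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```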

It finally occurred to me that indicators like citations and Impact Factor are all symptoms of a larger mindset.

The major watchword for administrators and funders for decades has been “excellence.” Some prefer a near synonym, “impact.” Everyone wanted to reward excellent research, which is an easy sell. Nobody wants to admit that they want to support average research — even though that’s what most research is, by definition. But most of science progresses by millimetres on the basis of average research. Even poor research can have some nuggets of data or ideas that can be useful to others.

I suggest a new overarching principle to guide assessment: integrity. We should be paying attention to, and rewarding, research and researchers that act in the most trustworthy ways and that build the most solid and dependable research. That can be assessed by practices like data sharing and code sharing, and by evidence of community use and replication.

The pursuit of excellence has proved too fickle and too likely to make people act sketchy to become famous. Let’s change the target.

Update, 28 November 2024: Blog post by Grace Gottleib on how to assess integrity.

External links

A rock-star researcher spun a web of lies—and nearly got away with it


17 October 2024

Write in Gallifreyan

Oh, this is fun. Here is “Doctor Zen” in Gallifreyan writing from Doctor Who.

"Doctor Zen" in Gallifreyan

I have to be careful, or I’ll spend hours translating things into this script.

External links

Gallifreyan translator

11 October 2024

What is misinformation for?

A new article on how many people in the US are increasingly hostile to reality has much to contemplate, but I wanted to briefly muse on this:

So much of the conversation around misinformation suggests that its primary job is to persuade. But as Michael Caulfield, an information researcher at the University of Washington, has argued, “The primary use of ‘misinformation’ is not to change the beliefs of other people at all. Instead, the vast majority of misinformation is offered as a service for people to maintain their beliefs in face of overwhelming evidence to the contrary.” This distinction is important, in part because it assigns agency to those who consume and share obviously fake information.

I see the point, and agree with it to some extent, but I think this underestimates the persuasive power of misinformation.

It neglects the “rabbit hole” effect that misinformation has had on fostering conspiracy theories and radicalization. It neglects the slow corrosion that has been happening in political discourse. It’s not just that political parties (particularly in the US) are polarized, but that some have gone ever more extreme.

I can see a connection between Caulfield’s “misinformation helps maintain beliefs” and persuasion. People’s beliefs are informed by different points of view. Without countervailing points of view, those existing beliefs can become more certain and more readily drift to ever more extreme versions of that belief.

Misinformation is often better described as straight-up propaganda, though. But we seem to have lost that word through fear of calling lies, lies.

External links

I’m running out of ways to explain how bad this is

09 October 2024

“Equal contribution” statements don’t mean much: Nobel prize edition

This is not a post about the Nobel prizes. It is a post about authorship.

The Nobel Prize for chemistry was given to two people for protein folding. I told students in my introductory biology classes for years that whoever could solve that problem should book a ticket to Stockholm, because it would get a Nobel, and I’m pleased to see I was right on that count.

Screenshot of Nature article "Highly accurate protein structure prediction with AlphaFold" with expanded credit showing that 19 authors were credited as making equal contributions to the paper.
On Bluesky, Michael Hoffman pointed out that the key paper about AlphaFold has an equal contribution statement:

(T)he AlphaFold paper has 19 authors who “contributed equally” but only two of them (Demis Hassabis and John M. Jumper - ZF) get part of the Nobel Prize 🤔 

So why those two people out of all the 19 who made, allegedly, equal contributions? The paper has a “Contributions” statement:

J.J. and D.H. led the research.

I don’t think there has ever been a clearer demonstration that “equal contribution” statements don’t mean much of anything except to maybe the people involved. And their relatives.

Also worth noting that among the 19 equal contributors were, I believe, two women. (A guess based on given names, which is not ideal, I know. Still.)

More generally, authorship is a terrible way of assigning credit. I have argued, and will continue to argue, that the CRediT system of identifying specific contributions should be adopted across the board.

References

Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, Tunyasuvunakool K, Bates R, Žídek A, Potapenko A, Bridgland A, Meyer C, Kohl SAA, Ballard AJ, Cowie A, Romera-Paredes B, Nikolov S, Jain R, Adler J, Back T, Petersen S, Reiman D, Clancy E, Zielinski M, Steinegger M, Pacholska M, Berghammer T, Bodenstein S, Silver D, Vinyals O, Senior AW, Kavukcuoglu K, Kohli P,  Hassabis D. 2021. Highly accurate protein structure prediction with AlphaFold. Nature 596: 583–589. https://doi.org/10.1038/s41586-021-03819-2

03 October 2024

Losing your academic email cuts you off from the scientific community

I haven’t had a job in a university for a while, and I’m realizing how much I cannot do and how many opportunities I am missing because I don’t have a university email address.

One of the biggest issues is Google Scholar.

Google Scholar still has my last institutional email from last year. I could leave my email blank, but I don’t want to, because “Unverified profiles can’t appear in search results.” That is bad for me professionally – I want people to be able to find my papers in Google Scholar search. It’s also, it must be said, bad for research more generally. I wonder how many people realize that profile searches are filtered by institutional emails.

If journal editors find my Google Scholar profile, they will only see my old email. If they send a request to that email for me to review a manuscript, they won’t get a response. Given how many editors complain that they “just can’t find people willing to peer review some articles,” I wonder how many potential reviewers are lost because they have changed email addresses.

Other examples:

PubPeer won’t accept a Gmail address in its signup.

ResearchGate warns you about deleting an institutional email but allows you to do it.


20 September 2024

My pitch for Space: 2099

September 13, 1999. Today, we remember the brave souls who vanished from our lives when the moon was flung out of Earth orbit.

We just passed the 25th anniversary of Breakaway Day, September 13, 1999, when 311 people on Moonbase Alpha were lost when the moon was blown out of Earth orbit by a nuclear explosion.

At least, that was what happened in the TV series Space: 1999.

Because of this (fictitious) anniversary, and seeing a panel about The Eagle Obsession, a documentary about Space: 1999, I’ve been thinking about this show quite a bit. Which is strange, because I never loved the show. I’m a fan of the Eagle transporter, but not a fan of the show.

I was never a fan because the characters couldn’t win. The moment the people on Moonbase Alpha found a home, that was the end of the show. This is not a format I’ve ever found appealing. And it is one that gets used a lot in science fiction. Lost in Space, Land of the Giants, The Time Tunnel, Battlestar Galactica (both versions), Land of the Lost, and Star Trek: Voyager, to name a few. At least the Battlestar Galactica reboot and Voyager got final episodes that tried to wrap things up, but shows almost never got proper conclusions in the 1970s.

The best the characters of Space: 1999 could hope for was to survive — to earn a temporary reprieve from execution by an uncaring universe. 

There has been talk from time to time about rebooting the show. It’s mentioned in the panel. There were some plans that went so far as getting some promo material out. Space: 2099.

Space: 2099

The tagline for Space: 2099: “Man’s giant leap was just a stumble in the dark.” It’s a good line, but it’s still pretty bleak.

So I was turning over in my mind about what a reboot would look like that I would actually enjoy. I thought, “Humans alone on a hostile alien environment, cut off from help… wait. That’s The Martian.” 

I enjoyed The Martian about as much as I did not enjoy Space: 1999. There were two key differences. The first was hope. This was clear in the poster tagline, “Bring him home.” Getting Watney back home was going to be almost impossible, but the story lives in the “almost.”

The Martian with tagline "Bring him home."

The second key difference was Mark Watney’s character, who might be described as “an excellent player playing excellently.” (I’ve heard the movie described as “competence porn.”) The audience pulled for him as he solved problem after problem.

I would like to imbue some of that more hopeful attitude into Moonbase Alpha. Something a little closer to The Martian or For All Mankind than Werner Herzog. 

If I were to pitch a Space: 2099 reboot:

A disaster cuts off Moonbase Alpha from Earth. But Moonbase Alpha is crewed not just by 300 random humans, but by 300 explorers. They are resilient. They are adaptable. They are highly skilled. They know they are in the most dangerous environment that any humans have ever been in, but they will not. give. up. They will not settle for mere existence on an airless rock. They want to go home – but if they don’t make it, they are determined to create a base where they can thrive and live as richly as they did on Earth.

P.S.—I have one of the Eagle stories on The Eagle Obsession website! I submitted one of my old blog posts about why the Eagle is my favourite spaceship.

Related posts

Space: 1999 is now further in the past than it was set in the future 

My favourite spaceship (It’s the Eagle)

Eagles and Falcon

16 September 2024

Big blue sky

Bluesky is my main social media platform now.

Bluesky user DoctorZen #265,499 - First 10% Certified Bluesky Elder

I mean, they called me an “elder”! Not just in a “Wow, you old” way. In a nice way!

Yes, Threads has picked up, and I still poke around Mastodon sometimes, but Bluesky has left me feeling... nice? As of right now, it probably has more of the online science crowd. The developers seem to be making decent decisions. And it’s not overrun with ads and company accounts. I’m sure those will come eventually, but not yet.

I will probably start to occasionally pull from Bluesky and write about it the way I have done before, but I don’t want to add another tag to my blog. I will be using “Twitter” as the tag for my social media posts here on the blog, because from my current point of view, that tag is now more about the microblogging format than about the specific platform.

P.S.— The Bluesky game now is to see whose enrollment number is a prime. Mine isn’t, but 265,499 is 13 squared times 1,571. Pretty good.
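For anyone who wants to play along, a few lines of Python will settle any enrollment number (a throwaway sketch; trial division is plenty fast at this scale):

```python
def factorize(n):
    """Trial-division factorization; a single factor means n is prime."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factorize(265_499))  # [13, 13, 1571]: not prime, but 13 squared times a prime
```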

Update, 21 November 2024: Several new articles are out reporting that Bluesky is doing very well at attracting academics to the platform.

Like ‘old Twitter’: The scientific community finds a new home on Bluesky

Bluesky is the new destination for X/Twitter’s health and science community. Here’s why

Update, 22 November 2024: When both Nature and Science have an article on a topic in the same week, it’s officially a noteworthy trend. Honestly, that combination of news articles is probably going to be the proverbial last spike that completes the railroad bringing academic holdouts onto Bluesky.

‘A place of joy’: why scientists are joining the rush to Bluesky

Update, 29 November 2024: More coverage of #AcademicSky:

https://www.thetransmitter.org/community/huge-influx-of-neuroscientists-migrates-to-bluesky/

https://mikeyoungacademy.dk/bluesky-is-emerging-as-the-new-platform-for-science/


14 September 2024

Scholarly publishers sued

Complaining about academic publishing is somewhere between “hobby” and “righteous cause” for many researchers. So I imagine that many will be cheering loudly at the news that six major academic publishers are getting sued.

People are likely to focus on the three things the case says make for unfair business practices:

  1. That peer reviewers aren’t paid, and that this is enforced by a “tit for tat” system where journals won’t publish your paper if you don’t review. (I know of zero well-documented cases of this.)
  2. That journals will not consider work submitted or published elsewhere (the Ingelfinger rule, I think).
  3. Submitted papers can’t be shared while under review.

If I wanted to sue academic publishers, I’m not sure these are the lines of attack I would use. These claims seem hard to stick to the publishers.

The first two points are about practices that go back decades, well before the consolidation of so much academic publishing into a few companies. Publishers can say, “These were practices established by the community that we adopted.”

And there are many journals not run by these companies that do the same things. I suspect that “We don’t consider work under review elsewhere” is common across all publishing, not just academic journals. Publishers can say, “If all these other publishers have these practices, we are just in line with industry standards. And by the way, why are we being selectively prosecuted?”

The third point, that journals “prohibit scholars from sharing advancements in submitted manuscripts while those under peer review” seems to pretend that preprint servers don’t exist.

But these counterexamples are beside the point, because the legal question here isn’t whether journals do these things or whether they are bad for research. The legal question is whether the publishers conspired to create those conditions.

I think that will be hard to show.

Now, maybe the plaintiffs can produce something like internal memos or emails between the publishers trying to kill proposals to pay peer reviewers. The academic equivalent of the tobacco industry’s “Doubt is our product” memo. That would be truly devastating. And, in all honesty, I wouldn’t put it past publishers to have some of these emails buried on servers someplace.

This lawsuit will be interesting to watch. Maybe the plaintiffs aren’t expecting to win, but are doing some consciousness raising. Even if they lose, this lawsuit might do some good by getting academics talking about publication, and maybe by prodding publishers to do better work.

External links

Academic Journal Publishers Antitrust Litigation

Prodding the behemoth with a stick 

10 August 2024

Banzai!

The Adventures of Buckaroo Banzai: Celebrating 40 years across the 8th Dimension

Yes, this post is a not-too-subtle acknowledgement of the fact that I be old, but I don’t care. Buckaroo Banzai is one of those movies that shaped such a huge part of my mental furniture that I have to acknowledge that today is the 40th anniversary.

The attraction for me is the lead character. I wanted to grow up to be Buckaroo Banzai. Still do. I always loved the portrayal of someone smart and willing to go full bore into pursuing passions: science, music, and helping others. And maybe that I also had a name that evoked Japan increased my identification with the character a bit.

Just before he utters one of the film’s most famous lines, “No matter where you go...”, there’s another moment that underscores the character:

“Don’t be mean. We don’t have to be mean.”

It reminds me a lot of a line that is often shared around academic social media: Lots of people are smart. Distinguish yourself by being kind.

19 July 2024

Dissertation thank yous

There is a lot of jadedness in academia right now.

But there is still a lot to celebrate and a lot of sense of achievement. Particularly for graduate students.

So I just love this look at doctoral dissertation acknowledgements. I was particularly struck by this reflection:

No matter how impenetrable the thesis title, the project’s success always seems to come down to the same simple thing: other people.

While graduate programs tend to focus on students, it’s a great reminder that humans are a social species and we are deeply affected by those around us.

External links

The unexpected poetry of PhD acknowledgements

18 May 2024

You need how many letters to prove you should get tenure?!

Many academics badly underestimate how much diversity there is in how universities do business. This includes me: I was caught flat-footed by a new paper that examines a common part of the tenure process for academics.

When someone is going up for tenure and promotion, it’s common to ask people from outside the university to write a letter describing how this person fits into the research community. These external letters can be a good safety valve to prevent a department from glossing over problems with one of their faculty.

When I was at UTRGV, the “external letter” requirement was just getting implemented. One of the key factors was how many letters to ask for. Speaking at a panel this week in DC, someone mentioned the practice, and talked about the difficulty in getting “three letters.”

Three letters turns out to be at the low end. The most common minimum number of letters was five.

Some universities require a minimum – I say again, a minimum – of ten external letters.

All I can think of is, “How insecure do you have to be in your decision to hire someone that you need external validation from ten other people?”

References 

Hannon L, Bergey M. 2024. Policy variation in the external evaluation of research for tenure at U.S. universities. Research Evaluation: rvae018. https://doi.org/10.1093/reseval/rvae018

03 May 2024

Potential problems of paying peer reviewers

First, some backstory.

I wrote this essay for a journal. It was provisionally accepted, but it needed revision. Then I changed jobs, and I was never able to sit down and get the revisions done. A few other things I observed (unrelated to this article) made me reluctant to resubmit this to the journal I originally sent it to. But it’s hard to find a journal, or even a preprint server, interested in opinion articles like this. So I didn’t revise and resubmit it.

But because this topic makes the rounds in academic social media routinely, I thought this was worth sharing in some form, even this first unrevised version.

Because this has been sitting on my hard drive for a while, the article has, to some degree, been overtaken by events. In particular, my comments about using AI for peer review were seen as unrealistic when I submitted this – but that was before the release of ChatGPT.

This article is archived on Figshare: https://doi.org/10.6084/m9.figshare.25746996

Potential problems of paying peer reviewers

Zen Faulkes

School of Interdisciplinary Science, McMaster University (Note, 3 May 2024: I am no longer affiliated with McMaster. Please don’t bug them.)

Pre-publication peer review organized by academic journal editors, funding agencies, and so on (simply called “peer review” from here on) is viewed as a hallmark of quality academic publication. But there are concerns that “reviewer fatigue” is causing trouble for peer review. Authors complain that they receive too many requests to review manuscripts, and editors complain about the difficulty of finding willing and qualified peer reviewers (Fox 2017, Goldman 2015, Vesper 2018). Therefore, many academics suggest that publishers should pay researchers for peer review. This idea appears popular: a poll asking, “Do you think reviewers should be paid for reviewing?” found 85.6% of respondents voted yes, compared to 11.3% voting no, and the remaining 3.0% unsure (n=1,234, with 6.2% opting “Show results” instead of voting) (Academic Chatter 2022).

It is easy to see the appeal of paid peer review. It provides a new incentive for people to agree to review. Reading papers thoroughly and then crafting coherent, useful reviews of the manuscript is work. It takes time, effort, and has opportunity costs for the reviewer. Professional work deserves to be compensated, and many are galled that they are expected to “work for free.” The argument is that publishers are exploiting academics who are foolish enough to provide their work for free. The feeling of exploitation is made worse because most academic publishers are for-profit businesses that are extremely profitable. Academics often describe publishers’ profits as “obscene” (Stafford and Brand 2021, Taylor 2012). Even non-profit publishers are criticized for the salaries of those working for the company (Eisen 2016).

It is understandable that academics would resent businesses that make money from scholarly goods and services, which are often publicly funded, when those academics are struggling financially. Compensation for postdoctoral researchers and contingent faculty jobs can be below the poverty line (Gee 2017, Semeniuk 2022). One UK funding agency sent an email suggesting doctoral students could work extra jobs to make ends meet, and listed examples such as “Babysitting” and “Fruit picking” (Lowe 2022). Money is a problem for many academics. But when so many problems can be solved with money, it is easy to forget that not all problems can be solved with money.

Money cannot buy time

The main reason prospective reviewers decline invitations is lack of time (Tite and Schroter 2007). When time is the issue, paying for reviews is unlikely to persuade people to accept (Tite and Schroter 2007).

Some aspects of peer review might be improved by paying reviewers, but the overall process of scholarly communication might not be improved.

Payment suggests corruption

Researchers have a long tradition of providing academic knowledge, and other goods and services, at minimal to no cost: “We don’t do it for the money.” This tradition dates from before science was professionalized in the 20th century, when it was mainly practiced by the wealthy. That was seen as a way of ensuring research integrity: wealthy researchers were seen as honest, because they would have no financial incentive to lie about their findings.

Unsurprisingly, one of the most common tactics of anti-science campaigns and conspiracy theorists is to argue that scientists are corrupted by money. “Shills for big pharma” (Blaskiewicz 2013, Smith 2017), “getting rich on grant money” (Merzdorf et al. 2019), “companies stifle world-changing technology / medical treatments because it would cut into profits” (Demeo 1996), and exhortations to “Follow the money!” are but a few of the variations of this. While it is always difficult to measure the persuasiveness of arguments “in the wild,” that these arguments have been used so often over so many years suggests people truly are persuaded by them.

If reviewers were paid, anti-science campaigners could easily point to reviewer payments as evidence that academic publishing is merely a money-making game for insiders (researchers).

Unqualified reviewers

Peer review is valued because it is conducted by fellow experts in a field. Because there are costs to reviewing and little to no recognition, there are few incentives for reviewers to review a paper that is not professionally relevant to them.

Paying for reviews creates a generalized incentive for reviewers to say yes to any paper they are invited to review, regardless of their knowledge of the topic. Because reviewers’ identities are typically confidential, authors who receive poor reviews may want to argue that they had a reviewer who only wanted the money.

Unhelpful reviewers

The overall quality of reviews varies. Some reviews are detailed and contain multiple specific, actionable items that can improve the paper. Other reviews are cursory and contain only vague statements that cannot be easily addressed by authors. Still other reviews may be substantive but make unreasonable requests for new experiments and analyses.

Worse, there are many examples of egregiously bad reviews, running the gamut from unhelpful to rude, racist, and sexist commentary unrelated to the content of the article (Silbiger and Stubler 2019, Woolson 2015).

Fair compensation for review

Whether a journal should pay for a perfunctory or insulting review is a specific example of a much larger issue: determining what is fair compensation for a review. Academic manuscripts differ wildly in length and complexity. The amount of effort that people must put into writing reviews also varies. An experienced researcher may be able to assess and review a manuscript very quickly and respond with a concise but informative review. 

Because so much discontent about volunteer peer review concerns academic work being undervalued, those arguing that peer reviewers should be paid should also make concrete suggestions for pay scales.

Alternatives to reviewer payment

Paid peer review aims to solve two problems: unwillingness to review (“reviewer fatigue”) and uncompensated labour. A more radical solution to these problems would be to take humans out of the peer review process. There are at least two ways this could be done.

The first solution would be to hand over the bulk of peer review to specialist artificial intelligence (AI) expert systems that can review manuscripts. Given recent advances in AI systems that can process and manipulate natural language texts, such as GPT-3, this may be viable in the near future (Schulz et al. 2022).
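As a sketch of what such a system might look like now, using the OpenAI Python SDK as one example (the model name and prompt are placeholder assumptions; this illustrates the idea rather than endorsing it):

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def draft_review(manuscript_text: str) -> str:
    """Ask a large language model for a first-pass referee report."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model would do
        messages=[
            {"role": "system",
             "content": "You are a peer reviewer. Assess the novelty, methods, "
                        "and clarity of the manuscript, and list major and "
                        "minor requested revisions."},
            {"role": "user", "content": manuscript_text},
        ],
    )
    return response.choices[0].message.content
```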

The second solution – less speculative but more radical – is to do away with peer review entirely (in the limited sense of prepublication review organized by editors). There are strong criticisms that pre-publication peer review does not provide the quality control and improvement that researchers want (Smith 2010). Ongoing post-publication peer review might provide an alternative to the journal centered form of peer review that is currently the norm.

If neither of these suggestions is embraced by the research community in the near future (which seems likely), it may be possible to address issues with voluntary peer review through more careful accounting of service obligations.

The main components of an academic’s job are often described as research, teaching, and service. Academics in universities are often explicitly told how much of their time should be allocated to each of those three parts of the job, but service is often the smallest piece of that budget. It is unsurprising that peer review, a service commitment to the research community, suffers when institutions explicitly show that service is the smallest part of the job description. Nevertheless, researchers with service obligations might be able to account for time spent on peer reviews more carefully to argue for release of other service commitments (e.g., committee work). Similarly, administrators might work to ensure that service obligations are not overlooked or satisfied by trivial “box checking” services.

Service loads in peer reviewing are also distributed unevenly. Journal editors often ask people to review whose jobs have no expectation of service. This includes early career researchers such as grad students and post docs, researchers who hold jobs outside academia, and retired academics. This is hardly fair to those individuals, and people in these positions have the best case for being paid for review because it is literally not their job.

Journal editors might also be able to better balance the service load by broadening the pool of who is asked to review. Many who are willing to provide peer review are rarely, if ever, asked to do so (Vesper 2018). In particular, researchers in “emerging economies” are probably underused as potential peer reviewers (Vesper 2018).

References

Academic Chatter. 2022. Twitter. https://twitter.com/academicchatter/status/1543229417060696066

Blaskiewicz R. 2013. The Big Pharma conspiracy theory. Medical Writing 22(4): 259-261. https://doi.org/10.1179/2047480613Z.000000000142

Demeo S. 1996. The corporate suppression of inventions, conspiracy theories, and an ambivalent American dream. Science as Culture 6(2): 194-219. https://doi.org/10.1080/09505439609526464

Eisen M. 2016. On pastrami and the business of PLOS. https://www.michaeleisen.org/blog/?p=1883

Fox CW. 2017. Difficulty of recruiting reviewers predicts review scores and editorial decisions at six journals of ecology and evolution. Scientometrics 113(1): 465-477. https://doi.org/10.1007/s11192-017-2489-5

Gee A. 2017. Facing poverty, academics turn to sex work and sleeping in cars. https://www.theguardian.com/us-news/2017/sep/28/adjunct-professors-homeless-sex-work-academia-poverty

Goldman HV. 2015. More delays in peer review: Finding reviewers willing to contribute. https://www.editage.com/insights/more-delays-in-peer-review-finding-reviewers-willing-to-contribute

Lowe A. 2022. Twitter. https://twitter.com/adriana_lowe/status/1549754463619108865

Merzdorf J, Pfeiffer LJ, & Forbes B. 2019. Heated discussion: Strategies for communicating climate change in a polarized era. Journal of Applied Communications 103: 3. link.gale.com/apps/doc/A600269487/AONE?u=anon~a35eccce&sid=bookmark-AONE&xid=f37daf5b

Schulz R, Barnett A, Bernard R, Brown NJL, Byrne JA, Eckmann P, Gazda MA, Kilicoglu H, Prager EM, Salholz-Hillel M, ter Riet G, Vines T, Vorland CJ, Zhuang H, Bandrowski A, & Weissgerber TL. 2022. Is the future of peer review automated? BMC Research Notes 15(1): 203. https://doi.org/10.1186/s13104-022-06080-6

Semeniuk I. 2022. Prominent researchers urge Ottawa to increase top science scholarships above poverty line. https://www.theglobeandmail.com/canada/article-prominent-researchers-urge-ottawa-to-increase-scholarships-for-top/

Silbiger NJ & Stubler AD. 2019. Unprofessional peer reviews disproportionately harm underrepresented groups in STEM. PeerJ 7: e8247. https://doi.org/10.7717/peerj.8247

Smith R. 2010. Classical peer review: an empty gun. Breast Cancer Research 12(Suppl 4): S13. https://doi.org/10.1186/bcr2742

Smith TC. 2017. Vaccine rejection and hesitancy: A review and call to action. Open Forum Infectious Diseases 4(3): ofx146. https://doi.org/10.1093/ofid/ofx146

Stafford T & Brand CO. 2021. Commercial involvement in academic publishing is key to research reliability and should face greater public scrutiny. https://doi.org/10.31222/osf.io/rjmvh

Taylor M. 2012. Academic publishers have become the enemies of science. https://www.theguardian.com/science/2012/jan/16/academic-publishers-enemies-science

Tite L & Schroter S. 2007. Why do peer reviewers decline to review? A survey. Journal of Epidemiology and Community Health 61(1): 9-12. https://jech.bmj.com/content/jech/61/1/9.full.pdf

Vesper I. 2018. Peer reviewers unmasked: largest global survey reveals trends. Nature. https://doi.org/10.1038/d41586-018-06602-y

Woolson C. 2015. Sexist review causes Twitter storm. Nature 521: 9. https://doi.org/10.1038/521009f

29 April 2024

Open access: What is a paper for, anyway?

Brian McGill at Dynamic Ecology blog has an interesting overview of publishing trends. The paragraph that seems to have gotten the most traction is this one: 

Open access has been a disaster. Scientists never really wanted it. We have ended up here for two reasons. First, pipe dreaming academics who believed in the mirage of “Diamond OA” (nobody pays and it is free to publish). Guess what – publishing a paper costs money – $500-$2000 depending on how much it is subsidized by volunteer scientists. We don’t really want Bill Gates etc. to pay for diamond OA. And universities and especially libraries are already overextended. There is no free publishing. The second and, in my opinion most to blame, are the European science grant funders who banded together and came up with Plan S and other schemes to force their scientists to only publish OA. At least in Europe the funding agencies mostly held scientists harmless by paying, and because of the captive audience, publishers went to European countries first for Read and Publish agreements. So European scientists haven’t been hurt too badly. But North America has so far refused to go down the same path, leaving North American scientists without grants (a majority of them) with an ever shrinking pool of subscription-based journals to publish in. And scientists from less rich countries are hurt even worse. Let’s get honest. How long before every university in Africa is covered by a Read and Publish agreement from the for profit companies?

What is interesting about this assessment is that he calls the open access situation a “disaster” on the basis of one very narrow measure: “How does it affect writing scientists?” By “writing scientists,” I mean what are usually called “principal investigators” (PIs): faculty who are busy running a lab and need publications for career advancement.

Two things.

First, most of the paragraph is concerned about how article processing charges affect scientists without grants who need to publish. I emphasize “charges” because, as I have said before, we need to separate open access – a description of who can read scientific articles – from the business models used to support open access. McGill is complaining about the latter, and isn’t addressing the former.

I do agree that many researchers have unrealistic expectations about the costs of publication. I agree that there has not been enough discussion about alternative business models for open access.

Second, journal articles do not exist merely for the benefit of scientists who need publications to get promotion or tenure. There are not only people who write articles; there are people who read them. You should consider the sizable benefits of more people being able to read scientific papers before judging the success of open access.

Article processing charges do create barriers for researchers with limited resources. But the research of hypothetical African scientists is impeded by not being able to read the scientific literature, not just by being unable to publish in the scientific literature.

If we are concerned about African researchers not being able to pay article processing charges, should we not also be concerned about African researchers being able to buy journal articles, or African research libraries being able to buy journal subscriptions?

I see increased ability to read the world’s scholarly literature as a good thing. I don’t see it as an unalloyed good that must be pursued above all else. But it should be in the mix as we’re taking stock of open access.

20 March 2024

The precarious nature of scientific careers (inspired by Sydney Sweeney)

I recently read this article on actress Sydney Sweeney:

https://defector.com/the-money-is-in-all-the-wrong-places

(I liked you in Madame Web, Sydney, don’t let the haters hate.)

I did not expect that a story riffing off the career of a successful Hollywood actress would resonate so much.

Because the article was pointing out that “success” has been so eroded that it is almost meaningless now. Sweeney is doing well for herself, but she dare not stop grinding.

Academia looks like this to me. Even if you achieve “success” — landing one of those increasingly rare tenure track faculty jobs — the grinding not only does not stop, it probably intensifies. Just like Sweeney doesn’t feel she can stop taking ads for Instagram posts or take off a few months to have a kid, how many researchers are subjecting themselves to long hours to write grant proposals and publish papers because they are told that if they stop swimming, they’ll drown?

Actors and scientists are far from alone in this predicament. It’s widespread. But I think it’s worth asking, “What does success look like?” And right now, “success” comes with razor-thin margins of error.

18 March 2024

Contamination of the scientific literature by ChatGPT

Mushroom cloud from atomic bomb
I’ve written a little bit about how often notions of “purity” come up in discussions of scientific publishing.

I see lots of hand-waving about the “purity and integrity of the scientific record,” which is never how it has been. The scientific literature has always been messy.

But in the last couple of weeks, I’m starting to think that maybe this is a more useful metaphor than it has been in the past. And the reason is, maybe unsurprisingly, generative AI. 

Part of my thinking was inspired by this article about the “enshittification” of the internet. People are complaining about searching for anything online, because search results are increasingly dominated by low-quality content designed to attract clicks, not to be accurate. And increasingly, that content is generated by AI. Which was trained on online text. So we have a positive feedback loop of crap.

(G)enerative artificial intelligence is poison for an internet dependent on algorithms.

But it’s not just the big platforms like Amazon and Etsy and Google that are being overwhelmed by AI content. Academic journals are turning out to be susceptible to enshittification.

Right after that article appeared, science social media was widely sharing examples of published papers in academic journals with clear, obvious signs of being blindly pasted from generative AI large language models like ChatGPT. Guillaume Cabanac has provided many examples of ChatGPT giveaways like, “Certainly, here is a possible introduction to your topic:” or “regenerate response” or apologizing that “I am a large language model so cannot...”.

It’s not clear how widespread the problem is, but it is concerning that even these most obvious examples are not being caught by routine quality control.
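To show how little it would take to catch the most blatant cases, here is a minimal Python sketch of a phrase screen. This is my own toy illustration, not any journal’s actual workflow, and the phrase list is only a sample of reported giveaways:

```python
# A toy phrase screen for obvious LLM boilerplate in manuscript text.
# My own illustration, not any journal's actual workflow; the phrase
# list is just a sample of giveaways reported in published papers.

TELLTALE_PHRASES = [
    "certainly, here is a possible introduction",
    "regenerate response",
    "as an ai language model",
    "i am a large language model",
]

def flag_llm_boilerplate(text: str) -> list[str]:
    """Return the telltale phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

# Example: a manuscript opening pasted straight from a chatbot.
manuscript = "Certainly, here is a possible introduction to your topic: ..."
print(flag_llm_boilerplate(manuscript))
# ['certainly, here is a possible introduction']
```

A real screen would obviously need more than string matching, but the point stands: these particular failures were trivially detectable.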

And another preprint making the rounds shows more subtle telltale signs that a lot of reviewers are using ChatGPT to write their reviews.

So we have machines writing articles that machines are reviewing, and humans seem hellbent on taking themselves out of this loop no matter what the consequences. I can’t remember where I first heard the saying, but “It is not enough that a machine knows the answer” feels like an appropriate reminder.

The word that springs to mind with all of this is “contaminated.” Back to the article that started this post:

After the world’s governments began their above-ground nuclear weapons tests in the mid-1940s, radioactive particles made their way into the atmosphere, permanently tainting all modern steel production, making it challenging (or impossible) to build certain machines (such as those that measure radioactivity). As a result, we’ve a limited supply of something called “low-background steel,” pre-war metal that oftentimes has to be harvested from ships sunk before the first detonation of a nuclear weapon, including those dating back to the Roman Empire.

Just as the use of atomic bombs in the atmosphere created a dividing line between “before” and “after” widespread contamination by low-level radiation, the release of ChatGPT is deepening another dividing line. The scientific literature has been contaminated with ChatGPT. Admittedly, this contamination may turn out to be at a level so low that it is not even harmful, just as most of us don’t really think about the atmospheric radiation from years of above-ground atomic bomb tests.

While I said before that it isn’t helpful to talk about the “purity” of the academic literature, I think this is truly a different situation than we have encountered before. We’re not talking about messiness because research is messy, or because humans are sloppy. We’re talking about an external technology that is impinging on how articles are written and reviewed. That is a different problem, one that might warrant the word “contamination.”

(I say generative AI is deepening the dividing line because the problems that language AI is creating were preceded by the widespread release and availability of Photoshop and other image editing software. Both have eroded our confidence that what we see in scientific papers represents the real work of human beings.)

Related posts

How much harm is done by predatory journals? 

External links

Are we watching the internet die?

12 March 2024

Scientists will not “just”: Individual scientists can’t solve systemic problems

Undark has an interview with Paul Sutter about the problems of science. Now before I get into my reaction, I want to say that this interview was conducted because Sutter has a book, Rescuing Science, that covers these topics. I haven’t read the book. Maybe it has more nuance than this short interview.

Sutter has a lot of complaints about the current state of science, but his big one?

(A)n inability for scientists to meaningfully engage with the public.

The interview tries to peel back the layers of why this is. Sutter, like many academics, blames the incentives.

We, as a community of scientists, are so obsessed with publishing papers (that what) this is causing is an environment where scientific fraud can flourish unchecked.

Sutter goes on to critique journal Impact Factor and h-index and peer review and a lot of the usual suspects. But Sutter’s solution to these big systemic problems might be summed up as, “Scientists need to get out more.” He wants scientists to do more public communication. Lots more.

Scientists should be the face of science. How do we increase diversity and representation within science? How about showing people what scientists actually look like. How do people actually understand the scientific method? What if scientists actually explained how they’re applying it in their everyday job.

This is a very familiar view to me. It’s why I started blogging here more than twenty years ago. Much of what I have achieved professionally I can credit, in part or in whole, to blogging and otherwise being Very Online. But I try to have a clear-eyed view of what I was able to achieve in building trust in science: probably not much.

I see three problems.

First, it feels like Sutter is saying, “If only scientists would just talk to non-scientists more.” I am reminded of this:

If your solution to a human problem involves the phrase “If only everyone would just...”, you don’t have a solution. Never in the History of Ever has everyone “just” anything... and we’re not likely to start now. 

(This tweet is from Laura Hunter. I remember seeing some version of this on Twitter, but can’t recall if I saw Laura Hunter’s tweet specifically.)

Second, lots of excellent scientists are not great at communicating to non-specialist audiences. They can’t stop using words like “canonical” in interviews. They aren’t trained in this.

Third, Sutter points out that the interests of science journalists are not always aligned with the interests of scientists. But that is a feature, not a bug. Journalists are not supposed to be stenographers, or cheerleaders, for science. And Sutter spends a lot of time criticizing journalism while the profession is practically collapsing in front of our eyes. Local news outlets are vanishing. Online publications are shuttering and laying off hundreds of staffers. Popular Science is gone.

Wait, I have more than three.

Fourth, this sounds like trying to fix traffic jams (systemic problems) by asking people to drive carefully (individual actions). That doesn’t have a good record of success. 

In the interview, Sutter doesn’t come to grips with the systemic issues of money and power that are being leveraged to make coordinated attacks on science.

I’ve said it before: Individual scientists – who are struggling to write grant after grant to keep their labs afloat – are not on a level playing field with international media corporations backed by billions of dollars. Or tech corporations that write recommendation algorithms. Or religions that have a several-thousand-year head start in winning hearts and shaping culture.

I am guessing that if Sutter were to point to the sort of people who exemplify his preferred method of restoring trust in science by getting out and talking publicly, he might point to Anthony Fauci or Peter Hotez – both of whom were excellent communicators throughout the COVID-19 pandemic. But we have seen the power asymmetry: Fauci and Hotez were physically threatened for publicly talking about science.

Anti-science is large, well-funded, and organized. Single scientists with a blog – or even a few invitations to speak on national platforms – are overmatched.

External links

Paul M. Sutter Thinks We’re Doing Science (and Journalism) Wrong

Rescuing Science

11 March 2024

How Godzilla movies reflect scientific research

Leonard Maltin
I have a very specific memory of film critic Leonard Maltin on Entertainment Tonight reviewing Godzilla 1985 (the first American release of 1984’s Gojira or The Return of Godzilla). Maltin said something like, “Many remakes fail because they stray too far from the original. Godzilla 1985 doesn’t have that problem. It’s still the same cheap Japanese monster movie.”

That stung so much that here I am remembering it almost 40 years later.

I can’t think of anything that better captured the attitude people held towards Godzilla for so long.

So last night’s Oscar win for Godzilla Minus One feels like vindication for a lifelong fan like me. 

Godzilla -1 effects team with Oscars

For decades, Godzilla movies were the butt of jokes. And deservedly so, I have to say. As much as I count myself as a Godzilla fan, I have no desire to watch Son of Godzilla or All Monsters Attack (Godzilla’s Revenge) ever again. (Shudder.)

But fandom is a funny thing. You love the things you love, and it still kind of stings when you hear them derided.

But somewhere in the years after Maltin snubbed the 30th anniversary movie, something shifted in people’s attitude towards Godzilla.

Those of us who watched a few dubbed movies as kids remembered Godzilla as we grew up. I heard in the 1990s that there was a new “high tech” series of Godzilla movies being made in Japan. The Internet removed friction for finding out fannish stuff. You could find retrospectives about the making of the series in English.

And say what you will about the American Godzilla made in 1998, Hollywood wouldn’t have shelled out the cash to try to make that movie a big summer blockbuster if there wasn’t some sort of name recognition.

After all those years in the wilderness, what films could show was finally catching up with the visions of Godzilla that fans held in their heads.

And it occurred to me that this is sometimes how science works.

You have an idea. You get it out there. 

Maybe it’s derided as cheap and mostly dismissed. But you try again.

And other people pick up some aspect of it. And maybe sometimes the results are embarrassing, with offshoots that are not as good as the original.

And sometimes, if you wait and keep trying, that original idea somehow stands the test of time. Other people come around and start to recognize that it was a good idea. And you end up with something that gets better than you ever thought.

Related posts

“Why do you love monsters?”

10 March 2024

How can we fix Daylight Saving Time?

I’ve mentioned this on social media before, but I might as well get it into the blog for archival purposes.

We just switched to Daylight Saving Time overnight. Nobody likes this switch: you lose an hour, it messes with your sleep, and you have to change the microwave clock, which we can never remember how to do.

It occurred to me that the fundamental problem with the change is that an hour is too big a step in one go. But nobody notices when timekeepers throw in a leap second. (Yes, that is a real thing I learned about years ago.)

Instead of hour-long time shifts twice a year, why not make more but smaller changes? Say, 10 minutes every month. Six 10-minute shifts forward and six shifts back add up to the same 60 minutes in each direction over the course of a year.
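To make the arithmetic concrete, here is a toy Python sketch. The six-months-forward, six-months-back schedule is my own assumption about how the shifts would be spread; it’s an illustration, not a formal proposal:

```python
# Toy model: twelve 10-minute clock changes a year instead of two
# 60-minute ones. Assumes six forward shifts in spring and summer
# and six backward shifts in autumn and winter (my assumption).

offsets = [+10] * 6 + [-10] * 6  # minutes shifted each month

total = 0
for month, step in enumerate(offsets, start=1):
    total += step
    print(f"Month {month:2d}: {step:+d} min change, {total:+d} min from standard time")

# The offset peaks at +60 minutes – the same as Daylight Saving Time
# today – and the clock is back to standard time by year's end.
```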

“Zen, that may be easier on sleep cycles, but that means ten more annoying clock changes every year.”

Use new technology instead of being stuck with old technology. More and more clocks set themselves automatically using an Internet signal or a radio signal from an atomic clock. We could get even more if we mandated self-setting clocks in new equipment.

Build the future, not the past.

21 February 2024

I told you transcript changes didn’t affect grade inflation

Way back when, I blogged about a Texas proposal to include average course grades next to a student’s earned grades on the student transcript. The argument was that this could be a way to curb grade inflation. I was skeptical. 

This never came to pass in Texas, but what I didn’t know at the time was that this was already the practice at Cornell University.

A practice they just stopped.

It turned out that – surprise! – showing average class grades didn’t stop grade inflation. In fact, showing class averages probably increased grade inflation. Because with easy access to average course grades, students preferentially took the classes seen to be “easy A’s”.

I have to admit I didn’t see that possibility, but it tracks.

Related posts

The “Texas transcript” is a good idea, but won’t solve grade inflation

External links

Cornell Discontinues Median Grade Visibility on Transcripts 15 Years After Inception  


19 February 2024

Rats, responsibility, and reputations in research, or: That generative AI figure of rat “dck”

Say what you will about social media, it is a very revealing way to learn what your colleagues think.

Last week, science Twitter could not stop talking about this figure:

Figure generated by AI showing a rat with impossibly large genitalia. The figure has labels, but none of the letters form actual words.

There were two more multi-part figures that are less obviously comical but equally absurd.

The paper these figures were in has now been retracted, but I found the one above in this tweet by CJ Houldcroft. You can also find them in Elizabeth Bik’s blog.

This is clearly a “cascade of fail” situation with lots of blame to go around. But the discussion made me wonder where people put responsibility for this. I ran a poll on Twitter and a poll on Mastodon asking who came out looking the worst. The combined results from 117 respondents were:

Publisher: 31.6%
Editor: 30.8%
Peer reviewers: 25.6%
Authors: 12.0% 

I can understand these results to some degree, and yet they still blow my mind 🤯 a little.

People know the name of the publisher, and many folks have been criticizing Frontiers as a publisher for a while. Critics will see this as more confirmation that Frontiers is performing poorly. So Frontiers looks bad.

The editor and peer reviewers look bad because, as the saying goes, “You had one job.” They are supposed to be responsible for quality control and they didn’t do that. (Though one reviewer said he didn’t think the figures were his problem, which will get its own post over on the Better Posters blog later.)

But I am still surprised that the authors are getting off so lightly in this discussion. It almost feels like blaming the fire department instead of the arsonist.

At the surface level, the authors did nothing technically wrong. The journal allows AI-generated figures if they are disclosed, and the authors disclosed them. But the figures are so horribly and obviously wrong that even submitting them feels to me more like misconduct than sloppiness.

And as is so often the case, when you pull at one end of a thread, it’s interesting to see what starts to unravel.

Last author Ding-Jun Hao (whose name also appears in papers as Dingjun Hao) has had multiple papers retracted before this one (read PubPeer comments on one retracted paper), which a pseudonymous commenter on Twitter claimed was the work of a papermill. Said commenter further claimed that another paper is from a different papermill.

Lead author Xinyu Guo appears to have been author on another retracted paper.

I’ve been reminded of this quote from a former journal editor:

“Don’t blame the journal for a bad paper. Don’t blame the editor for a bad paper. Don’t blame the reviewers for a bad paper. Blame the authors for having the temerity to put up bad research for publication.” - Fred Schram in 2011, then editor of Journal of Crustacean Biology

Why do people think the authors don’t look so bad in this fiasco?

I wonder if other working scientists relate all too well to the pressure to publish, and think, “Who among us has not been tempted to use shortcuts like generative AI to get more papers out?”

I wonder if people think, “They’re from China, and China has a problem with academic misconduct.” Here’s an article from nine years ago about China trying to control its academic misconduct issues.

I wonder if people just go, “Never heard of them.” Hard to damage your reputation if you don’t have one.

But this strategy may finally be too risky. China has announced new measures to improve academic integrity issues, which could include any retracted paper requiring an explanation. And the penalties listed could be severe. Previous investigations of retractions in China resulted in “salary cuts, withdrawal of bonuses, demotions and timed suspensions from applying for research grants and rewards.”

Related posts

The Crustacean Society 2011: Day 3

References

[Retracted] Guo X, Dong L, Hao D, 2024. Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway. Frontiers in Cell and Developmental Biology 11:1339390. https://doi.org/10.3389/fcell.2023.1339390

Retraction notice for Guo et al.

External links

Scientific journal publishes AI-generated rat with gigantic penis in worrying incident

Study featuring AI-generated giant rat penis retracted, journal apologizes 

The rat with the big balls and the enormous penis – how Frontiers published a paper with botched AI-generated images

China conducts first nationwide review of retractions and research misconduct (2024)

China pursues fraudsters in science publishing (2015)