22 November 2024

Clearing out the vault by posting preprints

Old books on a shelf
My last jobs didn’t have any expectations for research, so my publication rate slowed significantly. I wanted to do something about that.

A few months ago, I published a submitted but never published article on paying peer reviewers here on the blog and on Figshare. This got me digging around in folders on my hard drive and thinking about other articles that I had written and submitted, but that weren’t accepted. At the time, I just ran out of steam to revise and resubmit them in ways that would satisfy an editor and a couple of reviewers.

So in recent weeks, I have taken to submitting some of those manuscripts to a preprint server.

In the past, I was a little cool to the value of preprints. I always thought they had some uses, but I was skeptical that they would replace journals, which was the stated wish of some of the strongest preprint advocates.

But since I last wrote about preprints, the attention landscape has changed. More people in biology have made scanning through preprint servers part of their routine. I was surprised when a reporter emailed me about one of these preprints and wanted to chat for an interview. I don’t think that would have happened a few years ago.

Whether these will be the “final” public versions of these little projects, I can’t say. I have other projects that I want to get out that have not been written up at all yet, which I want to try to get into a journal. But I am glad that every one of these received at least a little attention on Bluesky.

Here’s the list of my recent preprints, all on bioRxiv.

Related posts

A pre-print experiment: will anyone notice?

A pre-print experiment, continued  

A pre-print experiment, part 3: Someone did notice  

Photo by umjadedoan on Flickr. Used under a Creative Commons license.

18 November 2024

The cat that fooled Google Scholar, the newest hoax in my collection

I finally got a chance to update my collection of academic hoaxes!

Stinging the Predators 23.0: Now with a cat! (The Internet loves cats, right?) http://bit.ly/StingPred

I am now up to 42 academic hoaxes, which is triple the number that I started with in version 1.0 in 2017.

The latest hoax targets not a predatory journal, but an academic search engine. While this is unusual, it is not the first hoax that was pulled to show how easy it is to manipulate Google Scholar. (One of the things that has been interesting to me as this project has continued is that many hoaxers feel compelled to make the same point again.)

And one other thing that has been rewarding is that this collection, which I’ve only ever had on Figshare and promoted on my socials and personal websites, has been viewed tens of thousands of times and has been cited a few times by scholars writing in proper journals.

External links

Faulkes Z. 2024. Stinging the Predators: A Collection of Papers That Should Never Have Been Published (version 23). Figshare. https://doi.org/10.6084/m9.figshare.5248264



14 November 2024

Okay, stop. Saying “science isn’t political” will not keep science safe from political attacks

Advice from people with experience fighting powerful fascist opponents: “Do not comply in advance.”

In a new Science editorial, National Academies of Sciences, Engineering, and Medicine president Marcia McNutt starts complying well in advance. 

The editorial begins with blatant concern trolling:

I had become ever more concerned that science has fallen victim to the same political divisiveness tearing at the seams of American society.

Okay, stop. McNutt provides zero examples of supposed “divisiveness” of the scientific community. A recent Nature article showed exactly the opposite. The scientific community was strongly united in its preferred American election outcome: 86% favoured the losing candidate. I guess McNutt sees that as bad because it makes it harder for her to play nice with the administration of the winning candidate.

(The scientific community) must take a critical look at what responsibility it bears in science becoming politically contentious(.)

Okay, stop. Again, McNutt provides zero examples of scientists making science politically contentious. On the other hand, I can point to many examples of politicians who waded into scientific debates about the reality of whole branches of science (evolution, geology, climate science) and health care.

No, the problem (according to McNutt) is that we scientists don’t explain ourselves well. We don’t tell politicians and citizens about how we are just disinterested third parties.

(S)cience, at its most basic, is apolitical.

Okay, stop. I know this is a popular claim, but it’s time to put this into the ground and bury it six feet deep. There is a very good Nature podcast series that takes this claim apart. Claiming “We’re not political” is a fiction that favours those who are politically privileged. McNutt would have had a much stronger case if she had said that science is not partisan. Or that reality is apolitical.

But science is an organized profession done by humans, so science is political.

It is strange to say that science is apolitical when I see the two entangled all the time.

McNutt continues:

Whether conservative or liberal, citizens ignore the nature of reality at their peril.

Okay, stop. Many elected politicians and citizens are just fine with ignoring the nature of reality – as long as they personally are not affected. And those who are personally affected, in genuine peril, may not be able to generate enough political clout to change policy to make themselves safe on their own. They need allies.

McNutt’s argument that “the arc of the scientific universe is long, but it bends towards truth” is callous. It seems her view is that scientists should passively sit on the sidelines, presenting data but never advocating, while watching people make the same mistakes, over and over, about discredited claims that actively harm people.

I can’t help but wonder where McNutt has been for the last decade or so when she can, apparently in all seriousness, write a sentence like this about climate science: 

It is up to society and its elected leadership to decide how to balance these options, including the use of renewable energy, climate adaptation, carbon capture, or even various interventions that reflect sunlight back into space.

Okay, stop.

Does McNutt understand that the incoming elected leadership repeatedly stated that their option is, “Climate science is all a hoax. We don’t need to do anything”?

Does McNutt understand that elected leaders can use their power to take actions that are not supported by the majority of society or scientific evidence?

Should scientists simply accept that their elected leadership is condemning millions to ever greater misery every day this denial of reality goes on?

McNutt is just trying to get her organization out of the line of fire of an incoming government that is more overtly hostile to science than maybe any other American government ever. 

The NAS stands ready, as it always has, to advise the incoming administration.

While it is the job of a civil servant to work with a new boss after every election, most scientists are not civil servants. They are not obligated to support a new government. They can, and should, do much more than just provide data and hope that elected leaders eventually come around to face scientific reality. If anyone is not coming around to face reality – the political reality, in this case – it’s McNutt.

External links

Science is neither red nor blue

The US election is monumental for science, say Nature readers — here’s why

“Stick to the science”: When science gets political (Three part podcast series)

06 November 2024

Untitled post 2024

You get hurt, hurt ‘em back. You get killed…

Walk it off.

 Captain America, Avengers: Age of Ultron

05 November 2024

Gen AI fatalism

Generative AI fatalism (or Gen AI fatalism): The assertion that generative AI must be accepted, because its widespread adoption is inevitable and cannot be stopped.

A new article by James Zou about ChatGPT in peer review is a particularly spectacular example of gen AI fatalism. Zou mentions that many peer reviews show signs of being created by large language models, like ChatGPT. He lists many problems and only trivial potential advantages, and calls for more human interactions in the peer review process.

Since Zou has nothing very positive to say about generative AI in peer review, the fatalism on display is stark.

The tidal wave of LLM use in academic writing and peer review cannot be stopped.

No! 

This is a mere assertion that has no basis behind it. Technology adoption and use is not some sort of physical law. It’s not a constant like the speed of light. It’s not the expansion of the universe driven by the Big Bang, dark matter, and dark energy.

The adoption and use of technology is a choice. Technologies fail in the marketplace all the time. We abandon technologies all the time.

If generative AI is causing problems in the peer review process, we should say that. We should work to educate our colleagues about the inherent problems with generative AI in the review process.

I suspect that people use ChatGPT for the simple reason that they don’t want to do peer review. They do not believe they are adequately rewarded for the task. So, as is so often the case, we need to look at academic incentives. We need to look at why we peer review and what it can accomplish.

Creating journal policies about using ChatGPT is little more than trying to paper up holes in a boat. I would welcome the equivalent of pulling the boat into dry dock and doing a complete refit.

Reference

https://doi.org/10.1038/d41586-024-03588-8

Stay out of my academic searches, gen AI

Something I had long dreaded came to pass yesterday.


Google Scholar landing page with "New! AI outlines in Scholar PDF Reader: skim the bullets, deep read what you need"
Google Scholar introduced generative AI.

“New! AI outlines in Scholar PDF Reader: skim the bullets, deep read what you need.”

Andrew Thaler had the perfect riposte to this new feature.

If only scholarly publications came with a short synopsis of the paper right up front, written by the authors to highlight the important and salient points of the study.

We could give it a nifty name, like “abstract.”

Exactly! Not only do researchers already outline papers, many journals require two such outlines: an abstract and some sort of plain English summary.

I don’t need this. I don’t want this. No more generative AI infiltrating into every nook and cranny of the Web, please.

Punching fascists: A part of our heritage

Apropos of nothing, here are two pictures I thought I’d posted back in November 2016, but I can’t seem to find any more.

They’re a couple of panels of Canadian World War II comic character Johnny Canuck punching Hitler.

Johnny Canuck punching Hitler.

Johnny Canuck punching Hitler.


04 November 2024

Research on crustacean nociceptors missing useful evidence

Crab, Carcinas maenas
A new paper by Kasiouras and colleagues attempts to provide evidence for nociceptive neurons in a decapod crustacean, a shore crab. 

The authors took crabs, poked them and dropped some acid on them, and recorded the neural responses. Neurons did indeed respond, in dose- and intensity-specific ways. From this, the authors conclude these are potential nociceptive responses.

I am unconvinced. There is a big difference between showing that neurons can respond to a possibly noxious stimulus and showing that those neurons are responding as nociceptors.

First:

The same category of stimuli can be detected both by nociceptors and by “regular” sensory neurons that are not nociceptors. For example, an increase in temperature can stimulate both nociceptors and thermoreceptive sensory neurons. Mechanical pressure can stimulate both mechanoreceptors and nociceptors. These two neural pathways are probably why we perceive a burn (painful) as different from heat, or a pinch as different from a strong touch.

The results would be more convincing if the authors showed that neurons responded in ways that are typical of other nociceptors. Many (though not all) nociceptors have a few common properties.

  1. They respond to several different types of stimuli. For example, they react to high temperature and acid and mechanical pressure and a chemical like capsaicin. (The technical term is that they are polymodal or  multimodal sensory neurons.)
  2. They respond to repeated stimulation with increased activity. Most sensory neurons “get used to” the same sensory stimuli over time, but many nociceptors do the opposite. (The technical term is that they show sensitization.)

The authors couldn’t do either of these, because they were recording from the whole nerve cord and did not pick out the activity of single neurons. Sometimes it is possible to recognize the activity of single neurons in this sort of record with spike sorting techniques, but that was not done in this paper.
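For readers unfamiliar with spike sorting: it typically begins by detecting individual spikes in an extracellular voltage trace, usually by amplitude thresholding, before clustering spike waveforms to assign them to putative single neurons. Here is a minimal, purely illustrative sketch of that first detection step (hypothetical data and function; nothing here is from the paper under discussion):

```python
# Illustrative first step of spike sorting: find spike events in an
# extracellular voltage trace by detecting upward threshold crossings.
def detect_spikes(trace, threshold):
    """Return sample indices where the trace first rises above threshold."""
    spikes = []
    above = False
    for i, v in enumerate(trace):
        if v >= threshold and not above:
            spikes.append(i)  # rising edge = one spike event
            above = True
        elif v < threshold:
            above = False
    return spikes

# Toy trace with two spike-like deflections
trace = [0.1, 0.2, 1.5, 1.8, 0.3, 0.1, 2.1, 0.2]
print(detect_spikes(trace, 1.0))  # → [2, 6]
```

Real spike sorting then clusters the detected waveform shapes so that spikes from different neurons can be told apart, which is what would be needed to test single-neuron properties like polymodality or sensitization.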

Second:

Species respond to potentially noxious stimuli in different ways. Some species respond to capsaicin with nociceptive behaviours, but others do not. No stimulus is guaranteed to trigger nociceptors in all cases.

This paper used mechanical touch and acetic acid, but because the paper did no behavioural experiments, it’s not clear if the crabs perform nociceptive behaviour in response to the level of stimuli presented.

Another paper used acetic acid as a potentially noxious stimulus with shore crabs, and crabs do respond to it (Elwood et al. 2017), but that paper was criticized (Diggles 2018; not cited in the current study) for not considering the possibility that acetic acid caused a feeding (gustatory) response, not a nociceptive response. Acetic acid is the technical name for vinegar, after all.

The results would be more convincing if the electrophysiological recordings of neurons were connected back to the crab’s behaviour. For example, the authors could have tried an experiment to show that a touch of low intensity caused one sort of behaviour, but a touch of higher intensity caused a different behaviour that looks nociceptive. Then they could have seen how the neural activity differed between those two kinds of tactile stimulation.

I am glad to see more labs trying to establish the presence or absence of nociceptors in crustaceans. There is still more work to demonstrate their existence and characterize their physiology if they exist.

References

Diggles BK. 2018. Review of some scientific issues related to crustacean welfare. ICES Journal of Marine Science: fsy058. http://dx.doi.org/10.1093/icesjms/fsy058
 

Elwood RW, Dalton N, Riddell G. 2017. Aversive responses by shore crabs to acetic acid but not to capsaicin. Behavioural Processes 140: 1-5. https://doi.org/10.1016/j.beproc.2017.03.022

Kasiouras E, Hubbard PC, Gräns A, Sneddon LU. 2024. Putative nociceptive responses in a decapod crustacean: The shore crab (Carcinus maenas). Biology 13(11): 851. https://doi.org/10.3390/biology13110851
 

Picture by Dunnock_D on Flickr; used under a Creative Commons license.

02 November 2024

Pursuing integrity over excellence in research assessment

I was reading yet another “Scientists behaving badly” article. This one was about Jonathan Pruitt, who used to work where I used to work (different departments). And, as is usual in these articles, there is a section about how institutions assess research:

Many attribute the rising rate of retractions to academia’s high-pressure publish-or-perish norms, where the only route to a coveted and increasingly scarce permanent job, along with the protections of tenure and better pay, is to produce as many journal articles as possible. That research output isn’t just about publishing papers but also how often they’re cited. Some researchers might try to bring up their h-index (which measures scientific impact, based on how many times one is published and cited), or a journal might hope that sharing particularly enticing results will enhance its impact factor (calculated using the number of citations a journal receives and the number of articles it publishes within a two-year period).
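The metrics the quoted passage describes have simple arithmetic behind them. As a purely illustrative sketch (the numbers are made up, not from the article): the h-index is the largest h such that a researcher has h papers with at least h citations each, and a two-year impact factor divides a journal's recent citations by its count of citable articles.

```python
# Toy illustration of the two metrics named in the quote above.
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_to_prior_two_years, articles_in_prior_two_years):
    """Citations in year Y to items from Y-1 and Y-2, over articles published then."""
    return citations_to_prior_two_years / articles_in_prior_two_years

print(h_index([10, 8, 5, 4, 3]))  # → 4
print(impact_factor(200, 80))     # → 2.5
```

Both numbers are easy to compute, which is part of why they are so readily adopted as proxies for quality, whatever their flaws.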

It finally occurred to me that focusing on the indicators like citations and Impact Factor are all symptoms of a larger mindset.

The major watchword for administrators and funders for decades has been “excellence.” Some prefer a near synonym, “impact.” Everyone wanted to reward excellent research, which is an easy sell. Nobody wants to admit that they want to support average research — even though that’s what most research is, by definition. But most of science progresses by millimetres on the basis of average research. Even poor research can have some nuggets of data or ideas that can be useful to others.

I suggest a new overarching principle to guide assessment: integrity. We should pay attention to, and reward, research and researchers that act in the most trustworthy ways and build the most solid and dependable work. That can be assessed by practices like data sharing and code sharing, and by evidence of community use and replication.

The pursuit of excellence has proved too fickle and too likely to make people act sketchy to become famous. Let’s change the target.

External links

A rock-star researcher spun a web of lies—and nearly got away with it