05 November 2024

Gen AI fatalism

Generative AI fatalism (or Gen AI fatalism): The assertion that generative AI must be accepted, because its widespread adoption is inevitable and cannot be stopped.

A new article by James Zou about ChatGPT in peer review is a particularly spectacular example of gen AI fatalism. Zou notes that many peer reviews show signs of having been created by large language models like ChatGPT. He lists many problems and only trivial potential advantages, and calls for more human interaction in the peer review process.

Since Zou has nothing very positive to say about generative AI in peer review, the fatalism on display is stark. He writes:

The tidal wave of LLM use in academic writing and peer review cannot be stopped.

No! 

This is a mere assertion, with no evidence behind it. Technology adoption and use is not some sort of physical law. It’s not a constant like the speed of light. It’s not the expansion of the universe driven by the Big Bang, dark matter, and dark energy.

The adoption and use of technology is a choice. Technologies fail in the marketplace all the time. We abandon technologies all the time.

If generative AI is causing problems in the peer review process, we should say that. We should work to educate our colleagues about the inherent problems with generative AI in the review process.

I suspect that people use ChatGPT for the simple reason that they don’t want to do peer review. They do not believe they are adequately rewarded for the task. So, as is so often the case, we need to look at academic incentives. We need to look at why we peer review and what it can accomplish.

Creating journal policies about using ChatGPT is little more than papering over holes in a boat. I would welcome the equivalent of pulling the boat into dry dock and doing a complete refit.

Reference

Zou J. 2024. ChatGPT is transforming peer review — how can we use it responsibly? Nature. https://doi.org/10.1038/d41586-024-03588-8

Stay out of my academic searches, gen AI

Something I had long dreaded came to pass yesterday.


Google Scholar landing page with "New! AI outlines in Scholar PDF Reader: skim the bullets, deep read what you need"
Google Scholar introduced generative AI.

“New! AI outlines in Scholar PDF Reader: skim the bullets, deep read what you need.”

Andrew Thaler had the perfect riposte to this new feature.

If only scholarly publications came with a short synopsis of the paper right up front, written by the authors to highlight the important and salient points of the study.

We could give it a nifty name, like “abstract.”

Exactly! Not only do researchers already outline papers, many journals require two such outlines: an abstract and some sort of plain English summary.

I don’t need this. I don’t want this. No more generative AI infiltrating every nook and cranny of the Web, please.

Punching fascists: A part of our heritage

Apropos of nothing, here are two pictures I thought I’d posted back in November 2016, but I can’t seem to find them anymore.

They’re a couple of panels of Canadian World War II comic character Johnny Canuck punching Hitler.

Johnny Canuck punching Hitler.

Johnny Canuck punching Hitler.


04 November 2024

Research on crustacean nociceptors missing useful evidence

Crab, Carcinus maenas
A new paper by Kasiouras and colleagues attempts to provide evidence for nociceptive neurons in a decapod crustacean, a shore crab. 

The authors took crabs, poked them and dropped some acid on them, and recorded the neural responses. Neurons did indeed respond, in dose- and intensity-specific ways. From this, the authors conclude these are potential nociceptive responses.

I am unconvinced. There is a big difference between showing that neurons can respond to a possibly noxious stimulus and showing that those neurons are responding as nociceptors.

First:

The same category of stimulus can be detected both by nociceptors and by “regular” sensory neurons that are not nociceptors. For example, an increase in temperature can stimulate both nociceptors and thermoreceptive sensory neurons. Mechanical pressure can stimulate both mechanoreceptors and nociceptors. That there are two neural pathways is probably why we distinguish a burn (painful) from mere heat, or a pinch from a strong touch.

The results would be more convincing if the authors showed that neurons responded in ways that are typical of other nociceptors. Many (though not all) nociceptors have a few common properties.

  1. They respond to several different types of stimuli. For example, they react to high temperature and acid and mechanical pressure and a chemical like capsaicin. (The technical term is that they are polymodal or multimodal sensory neurons.)
  2. They respond to repeated stimulation with increased activity. Most sensory neurons “get used to” the same sensory stimuli over time, but many nociceptors do the opposite. (The technical term is that they show sensitization.)

The authors couldn’t do either of these, because they were recording from the whole nerve cord and did not pick out the activity of single neurons. Sometimes it is possible to recognize the activity of single neurons in this sort of recording with spike sorting techniques, but that was not done in this paper.
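
For readers who haven’t encountered it, spike sorting tries to assign each recorded spike to a putative single neuron, usually based on its waveform. Here is a minimal sketch in Python of a simple amplitude-based version; the synthetic data, threshold rule, and cluster count are all illustrative assumptions on my part, not anything from the paper.

```python
# Minimal sketch of amplitude-based spike sorting on a synthetic
# single-channel recording. All numbers here are illustrative
# assumptions, not values from Kasiouras et al.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "nerve cord" trace: 1 s of noise plus 40 spikes from
# two hypothetical units with different amplitudes.
fs = 20_000                              # sampling rate (Hz), assumed
trace = rng.normal(0.0, 1.0, fs)
for t in rng.integers(100, fs - 100, 40):
    amp = rng.choice([6.0, 12.0])        # two units, two amplitudes
    trace[t:t + 20] += amp * np.hanning(20)

# 1. Detect upward threshold crossings (candidate spikes).
threshold = 5 * np.median(np.abs(trace)) / 0.6745   # robust noise estimate
onsets = np.flatnonzero((trace[1:] > threshold) & (trace[:-1] <= threshold))

# 2. Cut a window around each spike and extract simple features
#    (peak height and total area).
window = 20
waveforms = np.array([trace[i:i + window]
                      for i in onsets if i + window <= trace.size])
features = np.column_stack([waveforms.max(axis=1), waveforms.sum(axis=1)])

# 3. Cluster the waveforms into putative single units.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for unit in range(2):
    print(f"putative unit {unit}: {(labels == unit).sum()} spikes")
```

Real spike sorting on a whole nerve cord recording is much harder than this sketch, which is part of the point: separating single units from such records is not a given.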

Second:

Species respond to potentially noxious stimuli in different ways. Some species respond to capsaicin with nociceptive behaviours, but others do not. No stimulus is guaranteed to trigger nociceptors in all cases.

This paper used mechanical touch and acetic acid, but because the paper did no behavioural experiments, it’s not clear if the crabs perform nociceptive behaviour in response to the level of stimuli presented.

Another paper used acetic acid as a potentially noxious stimulus with shore crabs, and crabs do respond to it (Elwood et al. 2017), but that paper was criticized (Diggles 2018; not cited in the current study) for not considering the possibility that acetic acid caused a feeding (gustatory) response rather than a nociceptive one. Vinegar is just dilute acetic acid, after all.

The results would be more convincing if the electrophysiological recordings of neurons were connected back to the crab’s behaviour. For example, the authors could have run an experiment to show that a touch of low intensity caused one sort of behaviour, while a touch of higher intensity caused a different behaviour that looks nociceptive. Then they could have compared the neural activity evoked by those two kinds of tactile stimulation.

I am glad to see more labs trying to establish the presence or absence of nociceptors in crustaceans. There is still more work to do to demonstrate their existence and, if they exist, to characterize their physiology.

References

Diggles BK. 2018. Review of some scientific issues related to crustacean welfare. ICES Journal of Marine Science: fsy058. https://doi.org/10.1093/icesjms/fsy058

Elwood RW, Dalton N, Riddell G. 2017. Aversive responses by shore crabs to acetic acid but not to capsaicin. Behavioural Processes 140: 1-5. https://doi.org/10.1016/j.beproc.2017.03.022

Kasiouras E, Hubbard PC, Gräns A, Sneddon LU. 2024. Putative nociceptive responses in a decapod crustacean: The shore crab (Carcinus maenas). Biology 13(11): 851. https://doi.org/10.3390/biology13110851

Picture by Dunnock_D on Flickr; used under a Creative Commons license.

02 November 2024

Pursuing integrity over excellence in research assessment

I was reading yet another “scientists behaving badly” article. This one was about Jonathan Pruitt, who used to work where I used to work (different departments). And, as is usual in these articles, there is a section about how institutions assess research:

Many attribute the rising rate of retractions to academia’s high-pressure publish-or-perish norms, where the only route to a coveted and increasingly scarce permanent job, along with the protections of tenure and better pay, is to produce as many journal articles as possible. That research output isn’t just about publishing papers but also how often they’re cited. Some researchers might try to bring up their h-index (which measures scientific impact, based on how many times one is published and cited), or a journal might hope that sharing particularly enticing results will enhance its impact factor (calculated using the number of citations a journal receives and the number of articles it publishes within a two-year period).
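
As an aside, the h-index mentioned above is simple arithmetic: a researcher has an h-index of h if h of their papers have been cited at least h times each. A minimal sketch, with made-up citation counts:

```python
# h-index: the largest h such that h papers have at least h citations each.
def h_index(citations: list[int]) -> int:
    ranked = sorted(citations, reverse=True)
    # Counting works because once citations fall below rank, they stay below.
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

# Example: five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```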

It finally occurred to me that focusing on indicators like citations and Impact Factor is a symptom of a larger mindset.

The major watchword for administrators and funders for decades has been “excellence.” Some prefer a near synonym, “impact.” Everyone wanted to reward excellent research, which is an easy sell. Nobody wants to admit that they want to support average research — even though that’s what most research is, by definition. But most of science progresses by millimetres on the basis of average research. Even poor research can have some nuggets of data or ideas that can be useful to others.

I suggest a new overarching principle to guide assessment: integrity. We should be paying attention to, and rewarding, research and researchers that act in the most trustworthy ways and that build the most solid and dependable research. That can be assessed by practices like data sharing and code sharing, and by evidence of community use and replication.

The pursuit of excellence has proved too fickle and too likely to make people act sketchy to become famous. Let’s change the target.

External links

A rock-star researcher spun a web of lies—and nearly got away with it


17 October 2024

Write in Gallifreyan

Oh, this is fun. Here is “Doctor Zen” in Gallifreyan writing from Doctor Who.

"Doctor Zen" in Gallifreyan

I have to be careful, or I’ll spend hours translating things into this script.

External links

Gallifreyan translator

11 October 2024

What is misinformation for?

A new article on how many people in the US are increasingly hostile to reality has much to contemplate, but I wanted to briefly muse on this:

So much of the conversation around misinformation suggests that its primary job is to persuade. But as Michael Caulfield, an information researcher at the University of Washington, has argued, “The primary use of ‘misinformation’ is not to change the beliefs of other people at all. Instead, the vast majority of misinformation is offered as a service for people to maintain their beliefs in face of overwhelming evidence to the contrary.” This distinction is important, in part because it assigns agency to those who consume and share obviously fake information.

I see the point, and agree with it to some extent, but I think this underestimates the persuasive power of misinformation.

It neglects the “rabbit hole” effect that misinformation has had in fostering conspiracy theories and radicalization. It neglects the slow corrosion that has been happening in political discourse. It’s not just that political parties (particularly in the US) are polarized, but that some have grown ever more extreme.

I can see a connection between Caulfield’s “misinformation helps maintain beliefs” and persuasion. People’s beliefs are informed by different points of view. Without countervailing points of view, existing beliefs can become more certain and more readily drift toward ever more extreme versions.

Misinformation is often better described as straight-up propaganda, though. We seem to have lost that word through fear of calling lies, lies.

External links

I’m running out of ways to explain how bad this is

09 October 2024

“Equal contribution” statements don’t mean much: Nobel prize edition

This is not a post about the Nobel prizes. It is a post about authorship.

The Nobel Prize in chemistry was given to two people for protein folding. I told students in my introductory biology classes for years that whoever could solve that problem should book a ticket to Stockholm, because it would get a Nobel, and I’m pleased to see I was right on that count.

Screenshot of Nature article "Highly accurate protein structure prediction with AlphaFold" with expanded credit showing that 19 authors were credited as making equal contributions to the paper.
On Bluesky, Michael Hoffman pointed out that the key paper about AlphaFold has an equal contribution statement:

(T)he AlphaFold paper has 19 authors who “contributed equally” but only two of them (Demis Hassabis and John M. Jumper - ZF) get part of the Nobel Prize 🤔

So why those two people out of all the 19 who made, allegedly, equal contributions? The paper has a “Contributions” statement:

J.J. and D.H. led the research.

I don’t think there has ever been a clearer demonstration that “equal contribution” statements don’t mean much of anything except to maybe the people involved. And their relatives.

Also worth noting that among the 19 “equal” contributors were, I believe, two women. (A guess based on given names, which is not ideal, I know. Still.)

More generally, authorship is a terrible way of assigning credit. I have argued, and will continue to argue, that the CRediT system of identifying specific contributions should be adopted across the board.

References

Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, Tunyasuvunakool K, Bates R, Žídek A, Potapenko A, Bridgland A, Meyer C, Kohl SAA, Ballard AJ, Cowie A, Romera-Paredes B, Nikolov S, Jain R, Adler J, Back T, Petersen S, Reiman D, Clancy E, Zielinski M, Steinegger M, Pacholska M, Berghammer T, Bodenstein S, Silver D, Vinyals O, Senior AW, Kavukcuoglu K, Kohli P, Hassabis D. 2021. Highly accurate protein structure prediction with AlphaFold. Nature 596: 583–589. https://doi.org/10.1038/s41586-021-03819-2

03 October 2024

Losing your academic email cuts you off from the scientific community

I haven’t had a job in a university for a while, and I’m realizing how much I cannot do and how many opportunities I am missing because I don’t have a university email address.

One of the biggest issues is Google Scholar.

Google Scholar still has my last institutional email from last year. I could leave my email blank, but I don’t want to, because “Unverified profiles can’t appear in search results.” That is bad for me professionally – I want people to be able to find my papers in Google Scholar search. It’s also, it must be said, bad for research more generally. I wonder how many people realize that profile searches are filtered by institutional emails.

If journal editors find my Google Scholar profile, they will only see my old email. If they send a request to that email for me to review a manuscript, they won’t get a response. Given how many editors complain about how they “just can’t find people willing to peer review some articles,” I wonder how many potential reviewers are lost because they changed email addresses.

Other examples:

PubPeer won’t accept a Gmail address in its signup.

ResearchGate warns you about deleting an institutional email but allows you to do it.