16 February 2026

Limitations to scientific progress

As a comparative biologist, I appreciated Nanthia Suthana’s new opinion piece about how neuroscience research is divided along the lines of the model species researchers work on.

I am interested in an assumption underlying Suthana’s thesis:

As a result, neuroscience’s primary limitation today is not a lack of data or tools, but persistent fragmentation across model systems, recording modalities and analytic traditions.

This made me wonder how fields assess their progress. Judging from conference attendance and journals, neuroscience is a phenomenally healthy field of research. Yet it is a field that somehow seems to think that, darn it, we should be further along.

How do research disciplines measure their own progress? To put it another way, if we were able to successfully remove a suggested limitation, what would we know?

If we got the “cross species dialogue” that Suthana thinks neuroscience needs, what would we actually learn? All I can gather from the article is that we could better “refine or revise” our theories. But I’m not sure which theories those are, or what discoveries we might expect.

I realize that it might seem a big ask to get a preview of what discoveries we might make if we did more comparative biology in neuroscience. Unexpected lucky findings are the norm in every field of science. But in some fields, it is very clear what certain limitations are, and what could be learned if those obstacles were removed.

Astronomers knew for decades that they would be able to see deeper into space if they had a space-based telescope.

Particle physicists knew for decades that they could test for the presence of the Higgs boson if they had particle accelerators that operated at higher energy levels.

Paleontologists knew for decades that major evolutionary events, like vertebrates living on land full time, should be in rocks of a particular age.

But there are a lot of fields that are just not like that. Neuroscience might be one of them. Or maybe it isn’t like that yet.

Related posts 

Nominees for the Newton of neuroscience 

External links 

Neuroscience has a species problem 

 

06 February 2026

Politics and pendulums

A lot of people, and organizations, assume the status quo is immutable. So they talk like politics is all swings and roundabouts. Some days you’re up, some days you’re down. That the “pendulum will swing back.”

But societies aren’t pendulums governed by physical laws.

They ignore that many societies have undergone irreversible changes. Often sudden, sometimes calamitous.

Organizations, in particular, get so accustomed to “normal times” that they have no crisis mode. 

I am reminded of this because Science magazine – which I have often criticized for underplaying threats to American science – is at it again. This week’s editorial argues that the real wins for science are all quiet backroom deals, and that loud protests don’t get stuff done. I’d analyze it more, but luckily, Joshua Weitz already did that.

27 January 2026

No more H-1B visas in Texas for a year

Reuters is reporting that Texas governor Greg Abbott is telling Texas’s public universities and other agencies to stop petitioning for new H-1B visas. The halt will last until March 2027.

I held that visa once, at a public university, which ultimately allowed me to work in Texas for 19 years. Even though I haven't lived in Texas for a few years now, this feels sucky.

External links

Texas governor halts new H-1B visa petitions by state agencies, public universities 

08 January 2026

How retractions happen

We all have mental models of how we think things work. And it’s always a shock to learn how those models are wrong.

For a long time, my mental model of academic journal operations was that the editor-in-chief was ultimately responsible for what appeared in the journal. Recently, a former journal editor-in-chief commented on Bluesky that he did not have unilateral authority to issue retractions. (Can’t find the post now, will link it in if I find it.)

Rather, he had to request that articles be retracted, and those requests went to the publisher’s Ethics Committee. (Oh, it was an Elsevier journal, by the way.) The Ethics Committee decided whether to retract or not.

This seems to me to be a very big and important role. And I know nothing about how it operates. How do people get on this publisher’s committee? How large is the committee? How often does it meet? Who is the committee answerable to? Do other publishers operate this way? And so on.

There is a whole level of journal operation that I was completely oblivious to.

01 December 2025

New project documenting A.I. slop graphics in academic journals

The last month saw a couple of relatively high-profile examples of generative A.I. slop appearing in academic journals. One of the things I have learned from my collection of hoaxes is that it is valuable to keep track of these cases, because editors and publishers are motivated to remove them and pretend they never happened.

So I am going to start compiling examples of academic slop. 

I’m going to focus on graphics. They are more within my realm of interest and expertise. Plus, there are too many examples of ChatGPT writing in journals. 

If you stumble across an example of an A.I. slop graphic in an academic journal, please let me know by filling out this form:

Slop graphics in academic journals 

28 November 2025

More A.I. slop with the autism bicycle

It’s Scientific Reports’ turn to be embarrassed for publishing obvious generative A.I. slop.

The nonsensical bicycle, the bizarre almost-words, the woman’s legs going through whatever she is sitting on. Just a mess.

The good news is that this is apparently going to be retracted, and that word came pretty quickly. But it is a bit concerning that the news of this retraction came from a journalist’s newsletter on a platform that a lot more people should leave.

There is now a pop-up that reads:

28 November 2025 Editor’s Note: Readers are alerted that the contents of this paper are subject to criticisms that are being considered by editors. A further editorial response will follow the resolution of these issues. 

Less than 10 days from publication to alerting people of a problem is practically lightning speed in academic publishing.

My experience has been that when one finds one problem, there may be more lurking. So I looked for other papers by the author. I found none.

I then checked the listed institution: Anhui Vocational College of Press and Publishing. This does appear to be a real institution in China. But as the name suggests, it seems to be centred on publishing, design, and politics. It is not at all clear why a faculty member would write a paper on autism.

As I was looking around in search results for any more information about this institution, I stumbled upon two retracted papers from another faculty member. There are other papers from other faculty out there that seem to be more what you would expect, and are presumably not retracted.

It’s just strange.

Working scientists have to get organized and push back against journals that are not stopping – or are even willingly using – generative A.I. slop.

Reference

Jiang S. 2025. Bridging the gap: explainable AI for autism diagnosis and parental support with TabPFNMix and SHAP. Scientific Reports 15: 40850. https://doi.org/10.1038/s41598-025-24662-9

External links

Riding the autism bicycle to retraction town