21 February 2024

I told you transcript changes didn’t affect grade inflation

Way back when, I blogged about a Texas proposal to include average course grades next to a student’s earned grades on the student transcript. The argument was that this could be a way to curb grade inflation. I was skeptical. 

This never came to pass in Texas, but what I didn’t know at the time was that this was already the practice at Cornell University.

A practice they just stopped.

It turned out that – surprise! – showing average class grades didn’t stop grade inflation. In fact, it probably increased grade inflation: with easy access to average course grades, students preferentially took the classes seen as “easy A’s”.

I have to admit I didn’t see that possibility, but it tracks.

Related posts

The “Texas transcript” is a good idea, but won’t solve grade inflation

External links

Cornell Discontinues Median Grade Visibility on Transcripts 15 Years After Inception  

19 February 2024

Rats, responsibility, and reputations in research, or: That generative AI figure of rat “dck”

Say what you will about social media, it is a very revealing way to learn what your colleagues think.

Last week, science Twitter could not stop talking about this figure:

Figure generated by AI showing a rat with impossibly large genitalia. The figure has labels, but none of the letters form actual words.

There were two more multi-part figures that are less obviously comical but equally absurd.

The paper these figures were in has now been retracted, but I found the one above in this tweet by CJ Houldcroft. You can also find them in Elizabeth Bik’s blog.

This is clearly a “cascade of fail” situation with lots of blame to go around. But the discussion made me wonder where people put responsibility for this. I ran a poll on Twitter and a poll on Mastodon asking who came out looking the worst. The combined results from 117 respondents were:

Publisher: 31.6%
Editor: 30.8%
Peer reviewers: 25.6%
Authors: 12.0% 

I can understand these results to some degree, but they also blow my mind 🤯 a little.

People know the name of the publisher, and many folks have been criticizing Frontiers as a publisher for a while. Critics will see this as more confirmation that Frontiers is performing poorly. So Frontiers looks bad.

The editor and peer reviewers look bad because, as the saying goes, “You had one job.” They are supposed to be responsible for quality control and they didn’t do that. (Though one reviewer said he didn’t think the figures were his problem, which will get its own post over on the Better Posters blog later.)

But I am still surprised that the authors are getting off so lightly in this discussion. It almost feels like blaming the fire department instead of the arsonist.

At the surface level, the authors did nothing technically wrong. The journal allows AI figures if they are disclosed, and the authors disclosed them. But the figures are so horribly and obviously wrong that even submitting them feels to me more like misconduct than sloppiness.

And as is so often the case, when you pull at one end of a thread, it’s interesting to see what starts to unravel.

Last author Ding-Jun Hao (whose name also appears in papers as Dingjun Hao) has had multiple papers retracted before this one (read PubPeer comments on one retracted paper), which a pseudonymous commenter on Twitter claimed was the work of a papermill. Said commenter further claimed that another paper is from a different papermill.

Lead author Xinyu Guo appears to have been author on another retracted paper.

I’ve been reminded of this quote from a former journal editor:

“Don’t blame the journal for a bad paper. Don’t blame the editor for a bad paper. Don’t blame the reviewers for a bad paper. Blame the authors for having the temerity to put up bad research for publication.” - Fred Schram in 2011, then editor of Journal of Crustacean Biology

Why do people think the authors don’t look so bad in this fiasco?

I wonder if other working scientists relate all too well to the pressure to publish, and think, “Who among us has not been tempted to use shortcuts like generative AI to get more papers out?”

I wonder if people think, “They’re from China, and China has a problem with academic misconduct.” Here’s an article from nine years ago about China trying to control its academic misconduct issues.

I wonder if people just go, “Never heard of them.” Hard to damage your reputation if you don’t have one.

But this strategy may finally be too risky. China has announced new measures to improve academic integrity issues, which could include any retracted paper requiring an explanation. And the penalties listed could be severe. Previous investigations of retractions in China resulted in “salary cuts, withdrawal of bonuses, demotions and timed suspensions from applying for research grants and rewards.”

Related posts

The Crustacean Society 2011: Day 3


[Retracted] Guo X, Dong L, Hao D, 2024. Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway. Frontiers in Cell and Developmental Biology 11:1339390. https://doi.org/10.3389/fcell.2023.1339390

Retraction notice for Guo et al.

External links

Scientific journal publishes AI-generated rat with gigantic penis in worrying incident

Study featuring AI-generated giant rat penis retracted, journal apologizes 

The rat with the big balls and the enormous penis – how Frontiers published a paper with botched AI-generated images

China conducts first nationwide review of retractions and research misconduct (2024)

China pursues fraudsters in science publishing (2015)