31 December 2024

Elsevier turns generative AI loose on manuscripts for no discernible reason

From time to time, editorial boards quit journals. It happens often enough that I usually don’t pay much attention. But the outgoing editors of the Journal of Human Evolution described a new complaint, involving – surprise! – generative AI.

In fall of 2023, for example, without consulting or informing the editors, Elsevier initiated the use of AI during production, creating article proofs devoid of capitalization of all proper nouns (e.g., formally recognized epochs, site names, countries, cities, genera, etc.) as well italics for genera and species. These AI changes reversed the accepted versions of papers that had already been properly formatted by the handling editors. This was highly embarrassing for the journal and resolution took six months and was achieved only through the persistent efforts of the editors. AI processing continues to be used and regularly reformats submitted manuscripts to change meaning and formatting and require extensive author and editor oversight during proof stage. 

This is maddening, because it’s yet another example of gen AI creating problems, never solving them.

I am also baffled, because usually I can at least understand why a publisher has done certain things. Often the explanation tracks back to, “Cut costs.”

But I cannot for the life of me figure out why a publisher would let a generative AI system loose on a completed, edited manuscript. I cannot believe that is in any way cost-saving.

Generative AI is notoriously expensive. And they were doing this to nominally completed manuscripts, adding an additional layer of work. If the hope was to save costs eventually, you would think there would just be some internal testing, not unleashing it like a rabid raccoon on a working journal.

Nor do I think anyone with any publishing experience would believe it would improve the manuscript.

I am left absolutely confused by what Elsevier is thinking here. But moreover, I am worried that other publishers are going to try the same thing and we just haven’t heard about it yet.

Other major publishers should see an opening here. They could get out and publicly promise academics that the job of final edit stays with human editors.

Unfortunately, I am having a hard time seeing this happening, as I suspect increasingly the heads of these companies see themselves as data and analytics companies more than publishing companies.

Update, 6 January 2025: Retraction Watch has a response from Elsevier to the resignation. They claim they weren’t using generative AI, but admit they “trialled a production workflow” that caused the errors.

I’m not sure that’s much better.

Obviously, you also hope that workflow changes would be well thought out enough that they wouldn’t introduce mistakes.

But I’m more unimpressed by Elsevier’s lack of transparency. What were they doing to the workflow that caused the mistakes? That the editors thought this was generative AI suggests that Elsevier did not explain the new workflow to the editors well, if at all. 

Update, 10 January 2025: Science magazine is covering this story, and they have quotes from the editors that contradict Elsevier’s “We weren’t using AI” claim:

According to Taylor and paleoanthropologist Clément Zanolli at the University of Bordeaux, another former editor-in-chief who signed the statement, Elsevier told them in 2023 that AI software had introduced the formatting errors.

Elsevier told Science the same thing they told Retraction Watch: they were testing a new production system. But unlike their statement to Retraction Watch, their statement to Science did not deny that AI was involved.

External links

Evolution journal editors resign en masse to protest Elsevier changes

Evolution journal editors resign en masse 

Elsevier denies AI use in response to evolution journal board resignations 

Journal editors’ mass resignation marks ‘sad day for paleoanthropology’

20 December 2024

Le Monde est à vous: Academic hoaxes in French newspaper article

The second article that arose from my posting of preprints is now available. The title is, “Why scientific hoaxes aren’t always useless.”

Translation of first paragraph by Google Translate:

Canadian biologist Zen Faulkes is not a naturalist, even if he likes to collect crayfish or sand crabs for his research. However, he has a taste for unexpected collections. For several years, he has been collecting, listing and classifying... scientific hoaxes. That is to say, parody, ironic or insane research articles that should never have been published. “Of course, if these texts disappeared, it would not be a great loss. But it is important to keep track of them and try to learn some lessons from their existence,” says the researcher, whose collection spans 432 pages, for forty-two examples.

While I have been very pleased in the last few weeks to have been quoted in a couple of fairly high profile venues, I am struck by the disconnect this week between good professional news and terrible personal news. (There have been a couple of deaths close to our family in the last week.)

External links

Pourquoi les canulars scientifiques ne sont pas toujours inutiles (Paywalled and in French)

Related blog posts

Clearing out the vault by posting preprints

09 December 2024

Bar graphs, how do they work?

I make a brief cameo appearance in a new article about data visualization. Bar graphs are about as simple and basic as you get in data visualization, but a couple of new preprints (one of which is mine) show that people struggle to get even those right. The major preprint by Lin and Landry finds all sorts of issues in bar graphs. Mine is much smaller, and I just want people to label their error bars.

By the way, this was the unexpected call I got after posting a preprint last month.

Related posts

Clearing out the vault by posting preprints

Reference

Heidt A. 2024. Bad bar charts are pervasive across biology. Nature. https://doi.org/10.1038/d41586-024-03996-w

04 December 2024

“Pay me now or pay me later” in reproducibility

“Reproducibility debt” is an interesting and useful take on the matter of reproducibility and replication. I stumbled across a discussion on a recent podcast.

What I like about this phrasing is that a lot of discussion around reproducibility focuses on bad practices. Things like p-hacking, HARKing, and the like. Framing issues around reproducibility as debt makes it more obvious that what we are talking about are trade-offs.

You might have a more reproducible result if you had a bigger sample size and wrote a perfect paper. But that takes time (opportunity costs), and often takes money (financial costs). And there are benefits to getting papers out - both personal (another thing to add to your annual evaluation) and to the community (puts new ideas out and generates leads for others).

In the short term, it can make sense to take on debt. But you will have to pay it back later.

The paper develops a preliminary list of the kinds of trade-offs that cause reproducibility debt. 

  • Data-centric issues (e.g., data storage)
  • Code-centric issues (e.g., code development)
  • Documentation issues (i.e., incomplete or unclear documentation)
  • Tools-centric issues (e.g., software infrastructure)
  • Versioning issues (e.g., code unavailable)
  • Human-centric issues (e.g., lack of funding)
  • Legal issues (e.g., intellectual property conflicts)

It’s very software focused, so I don’t think the list is comprehensive. For example, in biology, reproducibility might become an issue because a species becomes rare or extinct.

If we have reproducibility debt, maybe we can also conceive of reproducibility bankruptcy: a point where the accumulated shortcuts add up to a complete inability to move forward on knowledge.

References

Hassan Z, Treude C, Norrish M, Williams G, Potanin A. 2024. Characterising reproducibility debt in scientific software: A systematic literature review. http://dx.doi.org/10.2139/ssrn.4801433 

Hassan Z, Treude C, Norrish M, Williams G, Potanin A. 2024. Reproducibility debt: Challenges and future pathways. Companion Proceedings of the 32nd ACM International Conference on the Foundations of Software Engineering: 462-466. https://doi.org/10.1145/3663529.3663778

External links

To Be Reproducible or Not To Be Reproducible? That is so Not the Question

29 November 2024

A journal paying for reviews

The BMJ will start paying £50 for reviews.

The catch? It’s not for peer review, it’s for reviews from patients and the public. They still expect peer reviews to be done as part of service.

Their announcement post does not describe how they will address several of what I consider to be potential problems. While my previous post was about paying for reviews from academics, I think many of the issues will also apply to public and patient review.

I’m going to sign up as a public reviewer. Because why not? I could use 50 quid. And I’d like to see what this looks like from the inside.

Related posts

Potential problems of paying peer reviewers

External links

The BMJ will remunerate patient and public reviewers

25 November 2024

“Neurosurgery on Saturn” paper shows academics’ blind spots

The planet Saturn.
In the last few days, a bunch of people on Bluesky discovered a paper that has been out for a few months, “Practice of neurosurgery on Saturn.”

Looking through the social media posts about this paper, a lot of people played that favourite academic game, “How did this get published?” Many people suggested it’s a hoax. Academic hoaxes are a particular interest of mine, and I am always looking for the next entry in the Stinging the Predators collection of academic hoaxes.

I didn’t think this was a hoax? Hoaxers usually reveal the prank almost immediately, and this paper had been out for months.

My hunch seems to have been correct. The lead author has an extensive Google Scholar page and said on PubPeer: “it is clear that the document focuses on fictitious and hypothetical situations.” I am not clear about what the point of the article is, but never mind. It’s filed under “Letters to the editor,” which I think is an arena where researchers and journals can be allowed a little leeway.

But this is a good example of something that I think is decidedly lacking in many discussions about academic publishing and academic integrity. In none of the posts I read did anyone do any actual investigation.

Nobody looked to see if the authors were real.

Nobody emailed the authors. It seems to just be happenstance that the lead author saw the PubPeer comments and replied.

Nobody emailed the journal (although editors are often notoriously slow to reply to these sorts of things).

In collecting academic hoaxes, I’ve noticed a similar pattern. People create hoaxes to show that there are bad journals out there that accept anything for money. But by and large, that is where it stops.

People know predatory journals are out there, but nobody is actively digging behind the scenes to see how they work. How do people decide to start running them? How much money do they make? Why would a scam artist only in it for the money do apparently counterintuitive things like waiving the article processing charges? (There are multiple instances of that in the Stinging the Predators collection.)

A recent paper came out that made a similar point about the lack of investigation about paper mills  (Byrne et al. 2024). 

Academics treat too many of these problems around dubious publishing as some sort of black box that cannot be opened. They only study the outputs. I think someone needs to approach these sorts of problems more like an investigative journalist or an undercover law enforcement officer might.

Go in and find the people who are doing all this dishonest stuff. Get them talking. I want to hear what some of the people organizing predatory journals or paper mills or citation rings have to say. Why do they do what they do?

I don’t expect academics themselves to do this. This kind of investigative journalism is expensive and time consuming and being done less and less. But without this kind of insight, we will probably never be able to understand and curb these problems.

References

Byrne JA, Abalkina A, Akinduro-Aje O, Christopher J, Eaton SE, Joshi N, Scheffler U, Wise NH, Wright J. 2024. A call for research to address the threat of paper mills. PLoS Biology 22(11): e3002931. https://doi.org/10.1371/journal.pbio.3002931

Mostofi K, Peyravi M. 2024. Practice of neurosurgery on Saturn. International Journal of Surgery Case Reports 122: 110139. https://doi.org/10.1016/j.ijscr.2024.110139

External links

PubPeer commentary on “Practice of neurosurgery on Saturn”

Altmetric page for “Practice of neurosurgery on Saturn”

Google Scholar page for Keyvan Mostofi

Photo by Steve Hill on Flickr, used under a Creative Commons licence. 

Generative AI: Two out of three is bad

There’s an old joke: “Fast, cheap, good: Pick two.”

Generative AI is fast and cheap. It will not be good.

I would like “good” to be a higher priority.

Edit, 1 December 2024: I think academics, under the intense pressure to be productive, prioritize those three characteristics in that order. The first two might switch (grad students are more likely to prioritize “cheap”), but I think “good” is regularly coming last.



22 November 2024

Clearing out the vault by posting preprints

Old books on a shelf
My last jobs didn’t have any expectations for research, so my publication rate slowed significantly. I wanted to do something about that.

A few months ago, I published a submitted but not completed article on paying peer reviewers here on the blog and on Figshare. This got me digging around in folders on my hard drive and I got thinking about other articles that I had written, submitted, but weren’t accepted. And at the time, I just ran out of steam to revise and resubmit them in ways that would satisfy an editor and a couple of reviewers. 

So in recent weeks, I have taken to submitting some of those manuscripts to a preprint server.

In the past, I was a little cool to the value of preprints. I always thought they had some uses, but I was skeptical that they would replace journals, which was the stated wish of some of the strongest preprint advocates. I was always worried about the Matthew effect: that famous labs would benefit from preprints and everyone else would just have more work to do.

But since I last wrote about preprints, the attention landscape has changed. More people in biology have made scanning through preprint servers part of their routine. I was surprised when a reporter emailed me about one of these preprints and wanted to chat for an interview. I don’t think that would have happened a few years ago.

Whether these will be the “final” public versions of these little projects, I can’t say. I have other projects, not yet written up at all, that I want to try to get into a journal. But I am glad that every one of these received at least a little attention on Bluesky.

Here’s the list of my recent preprints, all on bioRxiv.

Update, 26 November 2024: I’ve now had two journalists reach out to me because of these preprints. I’ve never had that high a level of interest from the dozens of peer-reviewed journal articles I published.

Related posts

A pre-print experiment: will anyone notice?

A pre-print experiment, continued  

A pre-print experiment, part 3: Someone did notice  

Photo by umjadedoan on Flickr. Used under a Creative Commons license.

18 November 2024

The cat that fooled Google Scholar, the newest hoax in my collection

I finally got a chance to update my collection of academic hoaxes!

Stinging the Predators 23.0: Now with a cat! (The Internet loves cats, right?) http://bit.ly/StingPred

I am now up to 42 academic hoaxes, which is triple the number that I started with in version 1.0 in 2017.

The latest hoax targets not a predatory journal, but an academic search engine. While this is unusual, it is not the first hoax that was pulled to show how easy it is to manipulate Google Scholar. (One of the things that has been interesting to me as this project has continued is that many hoaxers feel compelled to make the same point again.)

And one other thing that has been rewarding is that this collection, which I’ve only ever had on Figshare and promoted on my socials and personal websites, has been viewed tens of thousands of times and has been cited by scholars writing in proper journals a few times.

External links

Faulkes Z. 2024. Stinging the Predators: A Collection of Papers That Should Never Have Been Published (version 23). Figshare. https://doi.org/10.6084/m9.figshare.5248264



14 November 2024

Okay, stop. Saying “science isn’t political” will not keep science safe from political attacks

Advice from people with experience fighting powerful fascist opponents: “Do not comply in advance.”

In a new Science editorial, National Academies of Sciences, Engineering, and Medicine president Marcia McNutt starts complying well in advance. 

The editorial begins with blatant concern trolling:

I had become ever more concerned that science has fallen victim to the same political divisiveness tearing at the seams of American society.

Okay, stop. McNutt provides zero examples of supposed “divisiveness” of the scientific community. A recent Nature article showed exactly the opposite. The scientific community was strongly united in its preferred American election outcome: 86% favoured the losing candidate. I guess McNutt sees that as bad because it makes it harder for her to play nice with the administration of the winning candidate.

(The scientific community) must take a critical look at what responsibility it bears in science becoming politically contentious(.)

Okay, stop. Again, McNutt provides zero examples of scientists making science politically contentious. On the other hand, I can point to many examples of politicians who waded into scientific debates about the reality of whole branches of science (evolution, geology, climate science) and health care.

No, the problem (according to McNutt) is that we scientists don’t explain ourselves well. We don’t tell politicians and citizens about how we are just disinterested third parties.

(S)cience, at its most basic, is apolitical.

Okay, stop. I know this is a popular claim, but it’s time to put this into the ground and bury it six feet deep. There is a very good Nature podcast series that takes this claim apart. Claiming “We’re not political” is a fiction that favours those who are politically privileged. McNutt would have had a much stronger case if she had said that science is not partisan. Or that reality is apolitical.

But science is an organized profession done by humans, so science is political.

It is strange to say that science is apolitical when I see the two entangled all the time.

McNutt continues:

Whether conservative or liberal, citizens ignore the nature of reality at their peril.

Okay, stop. Many elected politicians and citizens are just fine with ignoring the nature of reality – as long as they personally are not affected. And those who are personally affected, in genuine peril, may not be able to generate enough political clout to change policy to make themselves safe on their own. They need allies.

McNutt’s argument that “the arc of the scientific universe is long, but it bends towards truth” is callous. It seems like her view is that scientists should passively sit on the sidelines, presenting data but never advocating, while watching people make the same mistakes about discredited claims that actively harm people over and over.

I can’t help but wonder where McNutt has been for the last decade or so when she can, apparently in all seriousness, write a sentence like this about climate science: 

It is up to society and its elected leadership to decide how to balance these options, including the use of renewable energy, climate adaptation, carbon capture, or even various interventions that reflect sunlight back into space.

Okay, stop.

Does McNutt understand that the incoming elected leadership repeatedly stated that their option is, “Climate science is all a hoax. We don’t need to do anything”?

Does McNutt understand that elected leaders can use their power to take actions that are not supported by the majority of society or scientific evidence?

Should scientists simply accept that their elected leadership is condemning millions to ever greater misery every day this denial of reality goes on?

McNutt is just trying to get her organization out of the line of fire of an incoming government that is more overtly hostile to science than maybe any other American government ever. 

The NAS stands ready, as it always has, to advise the incoming administration.

While it is the job of a civil servant to work with a new boss after every election, most scientists are not civil servants. They are not obligated to support a new government. They can, and should, do much more than just provide data and hope that elected leaders eventually come around to face scientific reality. If anyone is not coming around to face reality – the political reality, in this case – it’s McNutt.

External links

Science is neither red nor blue

The US election is monumental for science, say Nature readers — here’s why

“Stick to the science”: When science gets political (Three part podcast series)

06 November 2024

Untitled post 2024

You get hurt, hurt ‘em back. You get killed…

Walk it off.

 Captain America, Avengers: Age of Ultron

05 November 2024

Gen AI fatalism

Generative AI fatalism (or Gen AI fatalism): The assertion that generative AI must be accepted, because its widespread adoption is inevitable and cannot be stopped.

A new article by James Zuo about ChatGPT in peer review is a particularly spectacular example of gen AI fatalism. Zuo mentions that many peer reviews show signs of being created by large language models, like ChatGPT. He lists many problems and only trivial potential advantages, and calls for more human interactions in the peer review process.

Since Zuo has nothing very positive to say about generative AI in peer review, the fatalism on display is stark.

The tidal wave of LLM use in academic writing and peer review cannot be stopped.

No! 

This is a mere assertion that has no basis behind it. Technology adoption and use is not some sort of physical law. It’s not a constant like the speed of light. It’s not the expansion of the universe driven by the Big Bang, dark matter, and dark energy.

The adoption and use of technology is a choice. Technologies fail in the marketplace all the time. We abandon technologies all the time.

If generative AI is causing problems in the peer review process, we should say that. We should work to educate our colleagues about the inherent problems with generative AI in the review process.

I suspect that people use ChatGPT for the simple reason that they don’t want to do peer review. They do not believe they are adequately rewarded for the task. So, as is so often the case, we need to look at academic incentives. We need to look at why we peer review and what it can accomplish.

Creating journal policies about using ChatGPT is little more than trying to paper up holes in a boat. I would welcome the equivalent of pulling the boat into dry dock and doing a complete refit.

Reference

https://doi.org/10.1038/d41586-024-03588-8

Stay out of my academic searches, gen AI

Something I had long dreaded came to pass yesterday.


Google Scholar landing page with “New! AI outlines in Scholar PDF Reader: skim the bullets, deep read what you need”
Google Scholar introduced generative AI.

“New! AI outlines in Scholar PDF Reader: skim the bullets, deep read what you need.”

Andrew Thaler had the perfect riposte to this new feature.

If only scholarly publications came with a short synopsis of the paper right up front, written by the authors to highlight the important and salient points of the study.

We could give it a nifty name, like “abstract.”

Exactly! Not only do researchers already outline papers, many journals require two such outlines: an abstract and some sort of plain English summary.

I don’t need this. I don’t want this. No more generative AI infiltrating into every nook and cranny of the Web, please.

Punching fascists: A part of our heritage

Apropos of nothing, here are two pictures I thought I’d posted back in November 2016, but I can’t seem to find any more.

They’re a couple of panels of Canadian World War II comic character Johnny Canuck punching Hitler.

Johnny Canuck punching Hitler.

Johnny Canuck punching Hitler.


04 November 2024

Research on crustacean nociceptors missing useful evidence

Crab, Carcinas maenas
A new paper by Kasiouras and colleagues attempts to provide evidence for nociceptive neurons in a decapod crustacean, a shore crab. 

The authors took crabs, poked them and dropped some acid on them, and recorded the neural responses. Neurons did indeed respond, in dose- and intensity-specific ways. From this, the authors conclude these are potential nociceptive responses.

I am unconvinced. There is a big difference between showing that neurons can respond to a possibly noxious stimulus and showing that those neurons are responding as nociceptors.

First:

The same category of stimuli can be detected both by nociceptors and by “regular” sensory neurons that are not nociceptors. For example, an increase in temperature can stimulate both nociceptors and thermoreceptive sensory neurons. Mechanical pressure can stimulate both mechanoreceptors and nociceptors. That there are two neural pathways is probably why we distinguish a burn (painful) from heat, or a pinch from a strong touch.

The results would be more convincing if the authors showed that neurons responded in ways that are typical of other nociceptors. Many (though not all) nociceptors have a few common properties.

  1. They respond to several different types of stimuli. For example, they react to high temperature and acid and mechanical pressure and a chemical like capsaicin. (The technical term is that they are polymodal or  multimodal sensory neurons.)
  2. They respond to repeated stimulation with increased activity. Most sensory neurons “get used to” the same sensory stimuli over time, but many nociceptors do the opposite. (The technical term is that they show sensitization.)

The authors couldn’t do either of these, because they were recording from the whole nerve cord and did not pick out the activity of single neurons. Sometimes it is possible to recognize the activity of single neurons in this sort of recording with spike sorting techniques, but that was not done in this paper.
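As an aside, the simplest form of spike sorting just detects threshold crossings and then groups spikes by peak amplitude. This is a minimal sketch of that idea only; the trace, threshold, and amplitude bins are invented for illustration, and real pipelines use waveform features and clustering rather than raw amplitude.

```python
import numpy as np

def detect_spikes(trace, threshold):
    """Return indices where the signal first crosses the threshold upward."""
    above = trace > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def sort_by_amplitude(trace, spike_indices, edges):
    """Assign each spike to a putative unit by binning its peak amplitude."""
    amplitudes = trace[spike_indices]
    return np.digitize(amplitudes, edges)

# Toy trace: two "units" with different spike heights on a flat baseline.
trace = np.array([0.0, 0.1, 5.0, 0.1, 0.0, 9.0, 0.2, 5.1, 0.0])
spikes = detect_spikes(trace, threshold=1.0)
units = sort_by_amplitude(trace, spikes, edges=[7.0])  # 0 = small unit, 1 = large unit
```

Even this toy version shows why single-unit identification matters: without it, a whole-nerve recording lumps every responding neuron together.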

Second:

Species respond to potentially noxious stimuli in different ways. Some species respond to capsaicin with nociceptive behaviours, but others do not. No stimulus is guaranteed to trigger nociceptors in all cases.

This paper used mechanical touch and acetic acid, but because the paper did no behavioural experiments, it’s not clear if the crabs perform nociceptive behaviour in response to the level of stimuli presented.

Another paper used acetic acid as a potentially noxious stimulus with shore crabs, and crabs do respond to it (Elwood et al. 2017), but that paper was criticized (Diggles 2018; not cited in the current study) for not considering the possibility that acetic acid caused a feeding (gustatory) response, not a nociceptive response. Acetic acid is the technical name for vinegar, after all.

The results would be more convincing if the electrophysiological recordings of neurons were connected back to the crab’s behaviour. For example, the authors could have tried an experiment to show that a touch of low intensity caused one sort of behaviour, but a touch of higher intensity caused a different behaviour that looks like nociceptive behaviour. Then they could have seen how the neural activity differed between those two kinds of tactile stimulation.

I am glad to see more labs trying to establish the presence or absence of nociceptors in crustaceans. There is still more work to demonstrate their existence and characterize their physiology if they exist.

References

Diggles BK. 2018. Review of some scientific issues related to crustacean welfare. ICES Journal of Marine Science: fsy058. http://dx.doi.org/10.1093/icesjms/fsy058

Elwood RW, Dalton N, Riddell G. 2017. Aversive responses by shore crabs to acetic acid but not to capsaicin. Behavioural Processes 140: 1-5. https://doi.org/10.1016/j.beproc.2017.03.022

Kasiouras E, Hubbard PC, Gräns A, Sneddon LU. 2024. Putative nociceptive responses in a decapod crustacean: The shore crab (Carcinus maenas). Biology 13(11): 851. https://doi.org/10.3390/biology13110851

Picture by Dunnock_D on Flickr; used under a Creative Commons license.

02 November 2024

Pursuing integrity over excellence in research assessment

I was reading yet another “Scientists behaving badly” article. This one was about Jonathan Pruitt, who used to work where I used to work (different departments). And, as it usual in these articles, there is a section about how institutions assess research:

Many attribute the rising rate of retractions to academia’s high-pressure publish-or-perish norms, where the only route to a coveted and increasingly scarce permanent job, along with the protections of tenure and better pay, is to produce as many journal articles as possible. That research output isn’t just about publishing papers but also how often they’re cited. Some researchers might try to bring up their h-index (which measures scientific impact, based on how many times one is published and cited), or a journal might hope that sharing particularly enticing results will enhance its impact factor (calculated using the number of citations a journal receives and the number of articles it publishes within a two-year period).
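As a side note, the h-index definition quoted above is easy to state precisely: it is the largest h such that h of your papers each have at least h citations. A minimal sketch, with citation counts invented for illustration:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= `rank` citations
        else:
            break
    return h

h_index([25, 8, 5, 3, 3])  # three papers have at least 3 citations each, so h = 3
```

The simplicity of the calculation is part of why the metric is so easy to chase.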

It finally occurred to me that focusing on indicators like citations and Impact Factor is a symptom of a larger mindset.

The major watchword for administrators and funders for decades has been “excellence.” Some prefer a near synonym, “impact.” Everyone wanted to reward excellent research, which is an easy sell. Nobody wants to admit that they want to support average research — even though that’s what most research is, by definition. But most of science progresses by millimetres on the basis of average research. Even poor research can have some nuggets of data or ideas that can be useful to others.

I suggest a new overarching principle to guide assessment: integrity. We should be paying attention to, and rewarding, research and researchers that act in the most trustworthy ways and that build the most solid and dependable research. That can be assessed by practices like data sharing and code sharing, and by evidence of community use and replication.

The pursuit of excellence has proved too fickle and too likely to make people act sketchy to become famous. Let’s change the target.

Update, 28 November 2024: Blog post by Grace Gottleib on how to assess integrity.

External links

A rock-star researcher spun a web of lies—and nearly got away with it


17 October 2024

Write in Gallifreyan

Oh, this is fun. Here is “Doctor Zen” in Gallifreyan writing from Doctor Who.

"Doctor Zen" in Gallifreyan

I have to be careful, or I’ll spend hours translating things into this script.

External links

Gallifreyan translator

11 October 2024

What is misinformation for?

A new article on how many people in the US are increasingly hostile to reality has much to contemplate, but I wanted to briefly muse on this:

So much of the conversation around misinformation suggests that its primary job is to persuade. But as Michael Caulfield, an information researcher at the University of Washington, has argued, “The primary use of ‘misinformation’ is not to change the beliefs of other people at all. Instead, the vast majority of misinformation is offered as a service for people to maintain their beliefs in face of overwhelming evidence to the contrary.” This distinction is important, in part because it assigns agency to those who consume and share obviously fake information.

I see the point, and agree with it to some extent, but I think this underestimates the persuasive power of misinformation.

It neglects the “rabbit hole” effect that misinformation has had in fostering conspiracy theories and radicalization. It neglects the slow corrosion that has been happening in political discourse. It’s not just that political parties (particularly in the US) are polarized, but that some have gone ever more extreme.

I can see a connection between Caulfield’s “misinformation helps maintain beliefs” and persuasion. People’s beliefs are informed by different points of view. Without countervailing points of view, existing beliefs can become more certain and more readily drift toward ever more extreme versions of themselves.

Misinformation is often better described as straight-up propaganda, though. But we seem to have lost that word through fear of calling lies, lies.

External links

I’m running out of ways to explain how bad this is

09 October 2024

“Equal contribution” statements don’t mean much: Nobel prize edition

This is not a post about the Nobel prizes. It is a post about authorship.

The Nobel Prize for chemistry was given to two people for protein folding. I told students in my introductory biology classes for years that whoever could solve that problem should book a ticket to Stockholm, because it would get a Nobel, and I’m pleased to see I was right on that count.

Screenshot of Nature article "Highly accurate protein structure prediction with AlphaFold" with expanded credit showing that 19 authors were credited as making equal contributions to the paper.
On Bluesky, Michael Hoffman pointed out that the key paper about AlphaFold has an equal contribution statement:

(T)he AlphaFold paper has 19 authors who “contributed equally” but only two of them (Demis Hassabis and John M. Jumper - ZF) get part of the Nobel Prize 🤔 

So why those two people out of the 19 who made, allegedly, equal contributions? The paper has a “Contributions” statement:

J.J. and D.H. led the research.

I don’t think there has ever been a clearer demonstration that “equal contribution” statements don’t mean much of anything except to maybe the people involved. And their relatives.

Also worth noting that among the 19 equal contributors were, I believe, two women. (A guess based on given names, which is not ideal, I know. Still.)

More generally, authorship is a terrible way of assigning credit. I have argued, and will continue to argue, that the CRediT system of identifying specific contributions should be adopted across the board.

References

Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, Tunyasuvunakool K, Bates R, Žídek A, Potapenko A, Bridgland A, Meyer C, Kohl SAA, Ballard AJ, Cowie A, Romera-Paredes B, Nikolov S, Jain R, Adler J, Back T, Petersen S, Reiman D, Clancy E, Zielinski M, Steinegger M, Pacholska M, Berghammer T, Bodenstein S, Silver D, Vinyals O, Senior AW, Kavukcuoglu K, Kohli P, Hassabis D. 2021. Highly accurate protein structure prediction with AlphaFold. Nature 596: 583–589. https://doi.org/10.1038/s41586-021-03819-2

03 October 2024

Losing your academic email cuts you off from the scientific community

I haven’t had a job in a university for a while, and I’m realizing how much I cannot do and how many opportunities I am missing because I don’t have a university email address.

One of the biggest issues is Google Scholar.

Google Scholar still has my last institutional email from last year. I could leave my email blank, but I don’t want to, because “Unverified profiles can’t appear in search results.” That is bad for me professionally – I want people to be able to find my papers in Google Scholar search. It’s also, it must be said, bad for research more generally. I wonder how many people realize that profile searches are filtered by institutional emails.

If journal editors find my Google Scholar profile, they will only see my old email. If they send a request to that email for me to review a manuscript, they won’t get a response. Given how many editors complain that they “just can’t find people willing to peer review some articles,” I wonder how many potential reviewers are lost because they changed email addresses.

Other examples:

Pubpeer won’t accept a Gmail address in their signup.

ResearchGate warns you about deleting an institutional email but allows you to do it.