20 March 2024

The precarious nature of scientific careers (inspired by Sydney Sweeney)

I recently read this article on actress Sydney Sweeney:


(I liked you in Madame Web, Sydney, don’t let the haters hate.)

I did not expect that a story riffing off the career of a successful Hollywood actress would resonate so much.

Because the article was pointing out that “success” has been so eroded that it is almost meaningless now. Sweeney is doing well for herself, but she dare not stop grinding.

Academia looks like this to me. Even if you achieve “success” — landing one of those increasingly rare tenure-track faculty jobs — the grinding not only does not stop, it probably intensifies. Just as Sweeney doesn’t feel she can stop taking Instagram ad deals or take a few months off to have a kid, how many researchers are subjecting themselves to long hours writing grant proposals and publishing papers because they are told that if they stop swimming, they’ll drown?

Actors and scientists are far from alone in this predicament. It’s widespread. But I think it’s worth asking, “What does success look like?” And right now, it looks like “success” comes with razor-thin margins for error.

18 March 2024

Contamination of the scientific literature by ChatGPT

Mushroom cloud from atomic bomb
I’ve written a little bit about how often notions of “purity” come up in discussions of scientific publishing.

I see lots of hand-waving about the “purity and integrity of the scientific record,” which is never how it’s been. The scientific literature has always been messy.

But in the last couple of weeks, I’m starting to think that maybe this is a more useful metaphor than it has been in the past. And the reason is, maybe unsurprisingly, generative AI. 

Part of my thinking was inspired by this article about the “enshittification” of the internet. People are complaining about searching for anything online, because so many of the results are dominated by low-quality content designed to attract clicks, not to be accurate. And increasingly, that’s being generated by AI. Which was trained on online text. So we have a positive feedback loop of crap.

(G)enerative artificial intelligence is poison for an internet dependent on algorithms.

But it’s not just the big platforms like Amazon and Etsy and Google that are being overwhelmed by AI content. Academic journals are turning out to be susceptible to enshittification.

Right after that article appeared, science social media was widely sharing examples of published papers in academic journals with clear, obvious signs of being blindly pasted from generative AI large language models like ChatGPT. Guillaume Cabanac has provided many examples of ChatGPT giveaways like, “Certainly, here is a possible introduction to your topic:” or “regenerate response” or apologizing that “I am a large language model so cannot...”.

It’s not clear how widespread this problem is, but that even these most obvious examples are not getting screened out by routine quality control is concerning.

And another preprint making the rounds shows more subtle telltale signs that a lot of reviewers are using ChatGPT to write their reviews.

So we have machines writing articles that machines are reviewing, and humans seem to be hellbent on taking themselves out of this loop no matter what the consequences. I can’t remember where I first heard the saying, but “It is not enough that a machine knows the answer” feels like an appropriate reminder.

The word that springs to mind with all of this is “contaminated.” Back to the article that started this post:

After the world’s governments began their above-ground nuclear weapons tests in the mid-1940s, radioactive particles made their way into the atmosphere, permanently tainting all modern steel production, making it challenging (or impossible) to build certain machines (such as those that measure radioactivity). As a result, we’ve a limited supply of something called “low-background steel,” pre-war metal that oftentimes has to be harvested from ships sunk before the first detonation of a nuclear weapon, including those dating back to the Roman Empire.

Just as the use of atomic bombs in the atmosphere created a dividing line between “before” and “after” widespread contamination with low-level radiation, the release of ChatGPT is deepening another dividing line. Scientific literature has been contaminated with ChatGPT. Admittedly, this contamination might turn out to be at a low level that might not even be harmful, just like most of us don’t really think about the atmospheric radiation from years of above-ground testing of atomic bombs.

While I said before that it isn’t helpful to talk about “purity” of academic literature, I think this is truly a different situation than we have encountered before. We’re not talking about messiness because research is messy, or because humans are sloppy. We’re talking about an external technology that is impinging on how articles are written and reviewed. It is a different problem, one that might warrant describing it as “contamination.”

(I say generative AI is deepening the dividing line because the problems that language AI is creating were preceded by the widespread release and availability of Photoshop and other image editing software. Both have eroded our confidence that what we see in scientific papers represents the real work of human beings.)

Related posts

How much harm is done by predatory journals? 

External links

Are we watching the internet die?

12 March 2024

Scientists will not “just”: Individual scientists can’t solve systemic problems

Undark has an interview with Paul Sutter about the problems of science. Now before I get into my reaction, I want to say that this interview was conducted because Sutter has a book, Rescuing Science, that covers these topics. I haven’t read the book. Maybe it has more nuance than this short interview.

Sutter has a lot of complaints about the current state of science, but his big one?

(A)n inability for scientists to meaningfully engage with the public.

The interview tries to peel back the layers of why this is. Sutter, like many academics, blames the incentives.

We, as a community of scientists, are so obsessed with publishing papers (that) it is creating an environment where scientific fraud can flourish unchecked.

Sutter goes on to critique journal Impact Factor and h-index and peer review and a lot of the usual suspects. But Sutter’s solution to these big systemic problems might be summed up as, “Scientists need to get out more.” He wants scientists to do more public communication. Lots more.

Scientists should be the face of science. How do we increase diversity and representation within science? How about showing people what scientists actually look like. How do people actually understand the scientific method? What if scientists actually explained how they’re applying it in their everyday job.

This is a very familiar view to me. It’s why I started blogging here more than twenty years ago. Much of what I have achieved professionally, I can credit in part or in whole, to blogging and otherwise being Very Online. But I try to have a clear-eyed view of what I was able to achieve in building trust in science: probably not much.

I see three problems.

First, it feels like Sutter is saying, “If only scientists would just talk to non-scientists more.” I am reminded of this:

If your solution to a human problem involves the phrase “If only everyone would just...”, you don’t have a solution. Never in the History of Ever has everyone “just” anything... and we’re not likely to start now. 

(This tweet is from Laura Hunter. I remember seeing some version of this on Twitter, but can’t recall if I saw Laura Hunter’s tweet specifically.)

Second, lots of excellent scientists are not great at communicating to non-specialist audiences. They can’t stop using words like “canonical” in interviews. They aren’t trained in this.

Third, Sutter points out that the interests of science journalists are not always aligned with the interests of scientists. But that is a feature, not a bug. Journalists are not supposed to be stenographers, or cheerleaders, for science. And Sutter spends a lot of time criticizing journalism while the profession is practically collapsing in front of our eyes. Local news outlets are vanishing. Online publications are shuttering and laying off hundreds of staffers. Popular Science is gone.

Wait, I have more than three.

Fourth, this sounds like trying to fix traffic jams (systemic problems) by asking people to drive carefully (individual actions). That doesn’t have a good record of success. 

In the interview, Sutter doesn’t come to grips with the systemic issues of money and power that are being leveraged to make coordinated attacks on science.

I’ve said it before: Individual scientists – who are struggling to write grant after grant to keep their labs afloat – are not on a level playing field with international media corporations backed by billions of dollars. Or tech corporations that write recommendation algorithms. Or religions that have a several-thousand-year head start in winning hearts and shaping culture.

I am guessing that if Sutter were to point to the sort of people who exemplify his preferred method of restoring trust in science by getting out and talking publicly, he might point to Anthony Fauci or Peter Hotez – both of whom were excellent communicators throughout the COVID-19 pandemic. But we have seen the power asymmetry: Fauci and Hotez were physically threatened for publicly talking about science.

Anti-science is large, well-funded, and organized. Single scientists with a blog – or even a few invitations to speak on national platforms – are overmatched.

External links

Paul M. Sutter Thinks We’re Doing Science (and Journalism) Wrong

Rescuing Science

11 March 2024

How Godzilla movies reflect scientific research

Leonard Maltin
I have a very specific memory of film critic Leonard Maltin on Entertainment Tonight reviewing Godzilla 1985 (the first American release of 1984’s Gojira or The Return of Godzilla). Maltin said something like, “Many remakes fail because they stray too far from the original. Godzilla 1985 doesn’t have that problem.

“It’s still the same cheap Japanese monster movie.”

That stung so much that here I am remembering it almost 40 years later.

I can’t think of anything that better summed up the prevailing attitude towards Godzilla for so long.

So last night’s Oscar win for Godzilla Minus One feels like vindication for a lifelong fan like me. 

Godzilla -1 effects team with Oscars

For decades, Godzilla movies were the butt of jokes. And deservedly so, I have to say. As much as I count myself as a Godzilla fan, I have no desire to watch Son of Godzilla or All Monsters Attack (Godzilla’s Revenge) ever again. (Shudder.)

But fandom is a funny thing. You love the things you love, and it still kind of stings when you hear them derided.

But somewhere in the years after Maltin snubbed the 30th anniversary movie, something shifted in people’s attitude towards Godzilla.

Those of us who watched a few dubbed movies as kids remembered Godzilla as we grew up. I heard in the 1990s that there was a new “high tech” series of Godzilla movies being made in Japan. The Internet removed friction for finding out fannish stuff. You could find retrospectives about the making of the series in English.

And say what you will about the American Godzilla made in 1998, Hollywood wouldn’t have shelled out the cash to try to make that movie a big summer blockbuster if there wasn’t some sort of name recognition.

After all those years in the wilderness, what films could show was finally catching up with the visions of Godzilla that fans held in their heads.

And it occurred to me that this is sometimes how science works.

You have an idea. You get it out there. 

Maybe it’s derided as cheap and mostly dismissed. But you try again.

And other people pick up some aspect of it. And maybe sometimes the results are embarrassing, with offshoots that are not as good as the original.

And sometimes, if you wait and keep trying, that original idea somehow stands the test of time. Other people come around and start to recognize that it was a good idea. And you end up with something that gets better than you ever thought.

Related posts

“Why do you love monsters?”

10 March 2024

How can we fix Daylight Saving Time?

I’ve mentioned this on social media before, but I might as well get it into the blog for archival purposes.

We just switched to Daylight Saving Time overnight. Nobody likes this switch: you lose an hour, it messes with sleep, and you have to change the microwave clock, which we always forget how to do.

It occurred to me that the fundamental problem with the change is that an hour is too big a step in one go. But nobody notices when timekeepers throw in a leap second. (Yes, that is a real thing I learned about years ago.)

Instead of two hour-long time shifts a year, why not make more but smaller changes? Like 10 minutes, every month. It comes to the same amount of time over the course of a year.
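The arithmetic is easy enough to sketch. Here is a rough comparison in Python; the month-by-month schedule is my own assumption for illustration, not a worked-out proposal:

```python
# Month-by-month clock offsets in minutes, relative to standard time.

# Current scheme: spring forward one hour in March, fall back in November.
current = [0, 0, 60, 60, 60, 60, 60, 60, 60, 60, 0, 0]

# Gradual scheme: +10 minutes a month while days lengthen,
# -10 minutes a month while they shorten.
gradual = []
offset = 0
for month in range(12):
    offset += 10 if month < 6 else -10
    gradual.append(offset)

print(gradual)       # [10, 20, 30, 40, 50, 60, 50, 40, 30, 20, 10, 0]
print(max(gradual))  # 60 -- peaks at the same one-hour summer offset
```

Both schemes reach the same one-hour offset at midsummer, but the biggest single jump the clock (and your sleep cycle) has to absorb drops from 60 minutes to 10.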

“Zen, that may be easier on sleep cycles, but that means ten more annoying clock changes every year.”

Use new technology instead of being stuck with old technology. More and more clocks set themselves automatically using an Internet signal or a radio signal from an atomic clock. We could get even more if we mandated self-setting clocks in new equipment.

Build the future, not the past.