31 January 2025

With funding under assault, time to revisit science crowdfunding

All-out attack
In the last two weeks, the new White House has thrown more monkey wrenches into the American scientific research machine than ever before. Grant money has been frozen, and a lot of people who were getting salaries from grants are facing rent day with no way to pay.

For everyone whose grant or pay is in limbo, I’m so sorry. You don’t deserve this bullshit.

There’s a lot to unpack and I won’t try to do it all in this post. But the current crisis made me think back a few years to science crowdfunding. In particular, I think I made a good point then about the importance of diversifying your portfolio of research funding.

So I wrote a little thread on Bluesky suggesting that some scientists might consider what they could do with crowdfunding for research.

This blog post is also a good opportunity to point to this post on Southern Fried Science about Andrew Thaler’s experience with Patreon:

By just about any metric, the return on investment from Patreon exceeds, by an order of magnitude, just about any other funding I’ve received in the last decade. For most of us, we don’t need to move mountains, we just need the space to stop and breathe and create.

I’m under no illusions about what science crowdfunding can and cannot do. And it sucks that I have to bring it up because we are facing an all-out attack on research funding. But times are what they are, and we have to think in new ways to keep the gears turning.

External links

My Bluesky thread on science crowdfunding

My blog posts about the #SciFund experience

Small drops make mighty oceans: 10 years as a scientist on Patreon

09 January 2025

We were friends once

In 2021, I was in front of the US Consulate in Toronto. And there was a small plaque that was surprisingly moving to me.


As the United States marks the 10th anniversary of the terror attacks of September 11, 2001, in the name of all Americans, Consulate General Toronto thanks the people of Ontario for their support and generosity following the worst attack on America since WWII.

The nearly 3,000 innocent victims from ninety countries included twenty-four Canadians. Our friends and family in Ontario, and all of Canada, were with us in our darkest hour, as you always are. Neither terrorism nor any adversity can conquer free peoples, and we are grateful to stand with the best neighbours any nation ever had.

As together we look back, together we go forward.

We are eternally grateful.

September 11, 2011


Reading the text of this now is just crushing to me. I’m sad that “eternal” gratitude seems to have lasted less than fifteen years since the plaque was unveiled.

External links

Thanks and remembrance

08 January 2025

More publishers toying with AI in journal review

The Bookseller is reporting that Springer Nature is testing AI checks in their production pipeline for journal articles.

According to the publisher, the AI systems are designed to alert editors to “quality issues.” I find this frustratingly unclear, much like Elsevier’s testing of production pipeline changes. What, exactly, are they looking for with these systems?

I can see some automated checking being very valuable. For example, it blows my mind that many journals apparently run no basic plagiarism checks. I can also see value in an AI system that scans basic statistical information, such as whether reported p values are possible given the sample size, test statistic, and so on. Indeed, a program called statcheck has been proposed for exactly this purpose (Nuijten & Wicherts 2023, 2024), although there are ongoing debates about its utility (Schmidt 2017, Böschen 2024).
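To make the idea concrete, here is a minimal sketch of a statcheck-style consistency test. It is not statcheck itself (which handles t, F, chi-squared, and other tests extracted from manuscript text); this toy version, with hypothetical function names, just recomputes the two-tailed p value for a reported z statistic and flags reported p values that don’t match within rounding tolerance.

```python
import math

def two_tailed_p_from_z(z: float) -> float:
    """Recompute the two-tailed p value for a standard normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def check_reported_p(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Return True if the reported p value is consistent with its z statistic.

    The tolerance allows for rounding when p is reported to two decimal
    places; a mismatch larger than that gets flagged as an inconsistency.
    """
    return abs(two_tailed_p_from_z(z) - reported_p) <= tol

# A paper reporting "z = 1.96, p = .05" passes the check...
print(check_reported_p(1.96, 0.05))  # True
# ...but "z = 1.96, p = .01" does not.
print(check_reported_p(1.96, 0.01))  # False
```

The same recompute-and-compare logic extends to other test statistics, which is essentially what an automated statistical screen in a production pipeline would do at scale.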

These AI checks could target real problems like:

  • Image manipulation
  • Plagiarism
  • Tortured phrases
  • Citation cartels
  • Non-existent authors or reviewers
  • Statistical errors
  • Undisclosed use of generative AI

All of these genuinely need addressing. So if publishers were confident that these systems were making the peer review process better, why are they being so cagey about what processes they are implementing and what those processes are supposed to catch? Are they worried that transparency would give cheaters the information they need to bypass their “quality issues” detection systems? Something else?

Publishers always claim that they add value to academic publication. These new AI checks are a real opportunity for them to show how they add value to academics who are increasingly mad at them and asking, “What good are you?”

Related posts

Elsevier turns generative AI loose on manuscripts for no discernable reason

External links

Springer Nature reveals AI-driven tool to ‘automate some editorial quality checks’

References

Böschen I. 2024. statcheck is flawed by design and no valid spell checker for statistical results. arXiv: 2408.07948. https://doi.org/10.48550/arXiv.2408.07948 

Nuijten MB, Wicherts JM. 2023. The effectiveness of implementing statcheck in the peer review process to avoid statistical reporting errors. PsyArXiv. https://doi.org/10.31234/osf.io/bxau9

Nuijten MB, Wicherts JM. 2024. Implementing Statcheck during peer review is related to a steep decline in statistical-reporting inconsistencies. Advances in Methods and Practices in Psychological Science 7(2): 25152459241258945. https://doi.org/10.1177/25152459241258945
Schmidt T. 2017. Statcheck does not work: All the numbers. Reply to Nuijten et al. (2017). PsyArXiv. https://psyarxiv.com/hr6qy