13 September 2021

The rise of TikTok in misinformation

Ben Collins has a Twitter thread about how misinformation about ivermectin, a medication used to deworm horses, has become the cause du jour for many.

Can’t stress how wild the ivermectin Facebook groups have become. So many people insisting to each other to never go to an ER, in part because they might not get ivermectin, but sometimes because they fear nurses are killing them on purpose “for the insurance money.” ... It’s just a constant stream of DIY vitamin therapies and new, seemingly random antiviral drugs every day — but not the vaccine.

This is distressing, but I wanted to home in on this comment in the thread.

The ivermectin Facebook groups also offer a window into how pervasive antivaxx COVID “treatment” videos are on TikTok.

The groups serve as a de facto aggregator for antivaxx TikTok, a space that is enormous but inherently unquantifiable to researchers.

When I last wrote about the dangers of recommendation algorithms (in pre-pandemic 2019), I focused on YouTube. TikTok existed then (it started in 2016), but it wasn’t included in Pew Research’s list of social media platforms until this year.

Graph showing use of social media platforms in the US. 81% use YouTube, 61% use Facebook. No other platform is used by more than 50% of Americans. 21% of Americans use TikTok.

Even today, TikTok isn’t used by one in four Americans; it’s more like one in five. It’s impressive that it has pulled close to Twitter, which has been around far longer. And it’s also frightening that it is having this outsized effect, leading people to try... anything.

Everything is a miracle cure, or it isn't, but every drug is worth a shot. Except, of course, the thing that works: the vaccine. Anything pro-vaccine is instantly written off as “propaganda.”

There are lots of issues raised here that I can’t process all at once. But Collins’s comment that TikTok is unmeasurable for researchers strikes me as something important. Could requiring more transparency about how TikTok selects which videos to show someone help? I’m not sure.

But we may be at the start of an arms race between social media platforms using data to show things to viewers, and researchers trying to “break the code” to figure out just what the heck people are actually seeing.

Related posts

Recommendation algorithms are the biggest problem in science communication today

External links

Social Media Use in 2021 - Pew Research

Naming the animals in research papers

This is Bruce. 

Bruce, a kea with no upper beak, holding an object with his tongue and lower beak.

Bruce has been making the news rounds because of a new paper demonstrating that he uses pebbles to groom. Bruce is a kea, a parrot that normally has a large upper beak, which Bruce does not have. In the picture above, you can see him using his tongue and remaining lower beak to pick up an object.

What I want to talk about is not the tool use (although that is cool), but the fact that I know this bird was given a name. I noticed because I found this paper within days of finding another paper about an unusual bird: an Australian musk duck named Ripper.

Ripper’s claim to fame was that he was able to imitate sounds, like creaking metal and even human voices. Ripper seems to have picked up the phrase, “You bloody fool” from humans around him. 

This is interesting because vocal learning is found in only a few lineages and hasn’t been documented in ducks before.

But what interested me about both papers is that they repeatedly refer to these birds by the names that humans gave them. Not just once in the methods as an aside, but all the way through.

I can see the value of using a given name in news articles and blog posts like the one I’m writing. And maybe it makes scanning the paper a little easier. But the kea paper refers to “Bruce” 62 times; the duck paper refers to “Ripper” 40 times. That extensive repetition of the names in the journal articles gives me pause.

It’s been clear for a long time that the effort to keep animals at arm’s length to avoid humanizing them (a position taken furthest, perhaps, by B.F. Skinner and other behaviourists in American psychology) is a lost cause. The approach of people like Jane Goodall (who named her chimps rather than just giving them numbers) has won.

But these two approaches sit on opposite ends of a continuum. And quite often, there’s a pendulum swing in attitudes. I wonder if the pendulum has maybe swung a little too far towards our willingness to humanize animals in the scientific literature.

It’s easy to slip into teleology (assuming everything has a purpose) and anthropomorphism (thinking animals are like humans). And constantly referring to animals’ names throughout a paper seems to make that even easier. 

I’m not saying that the names we give animals should never be mentioned in papers. But maybe it could be once or twice instead of dozens of times. 

And hey, these animals didn’t get to pick their names. Maybe that duck was thinking, “I say ‘Bloody fool’, and they name me ‘Ripper’ on top of that? Could I be any more of a cliché Australian?”

A Twitter poll suggests I am not alone in being wary of this practice.

References

Bastos APM, Horváth K, Webb JL, Wood PM, Taylor AH. 2021. Self-care tooling innovation in a disabled kea (Nestor notabilis). Scientific Reports 11(1): 18035. https://doi.org/10.1038/s41598-021-97086-w

ten Cate C, Fullagar PJ. 2021. Vocal imitations and production learning by Australian musk ducks (Biziura lobata). Philosophical Transactions of the Royal Society B: Biological Sciences 376(1836): 20200243. https://doi.org/10.1098/rstb.2020.0243

02 September 2021

Gender’s role in authorship disputes: Women try to prevent fights but still lose out

Black king and white queen chess pieces.
Sometimes, I hate being right.

Ni and colleagues have an important new paper on authorship conflicts that confirms something I sort of predicted back in 2018: that people with less professional and social power get hosed by authorship disputes. In 2018, I wrote, “loss of credit due to authorship disputes may be a little recognized factor driving underrepresented individuals out of scientific careers”.

Ni and colleagues confirmed that women get into more authorship disputes than men. This is despite women more often trying to negotiate authorship in advance. (To me, that is the most surprising and depressing finding.)

To me, this supports the argument that we need more ways to help authors resolve disputes and support authors with less power. I suggested mediation and arbitration could be more common. I don’t care if people buy into that idea, but I hope that people can see that the status quo harms our efforts to create a more equitable scientific culture.

A caveat about the new paper: for some analyses, it guesses at gender by applying an algorithm to author names. This has problems. (Edit: Several of the authors tweeted clarifications of how this algorithm was used and noted that it was tangential to the main findings. Ni, Larivière, Sugimoto) This suggests journals should capture more data about authors than name (usually in English only), institution, and email. Or maybe that could become part of an ORCID profile. Actually, I like that idea more.
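
To make that caveat concrete, here is a minimal sketch (my own toy illustration in Python, not the actual method Ni and colleagues used) of what name-based gender inference looks like, and why it is lossy: many names are ambiguous, culture-specific, or simply missing from whatever lookup table an algorithm relies on.

# Hypothetical illustration only -- a toy name-based gender guesser.
# Real tools use large, country-specific name datasets; the limits are the same.

NAME_TABLE = {
    "maria": ("woman", 0.98),
    "james": ("man", 0.99),
    "wei": ("unknown", 0.55),  # many names are ambiguous or culture-dependent
}

def guess_gender(first_name, threshold=0.9):
    """Return a label only if the (toy) confidence clears the threshold."""
    label, confidence = NAME_TABLE.get(first_name.lower(), ("unknown", 0.0))
    return label if confidence >= threshold else "unknown"

for name in ["Maria", "Wei", "Zen"]:
    print(name, "->", guess_gender(name))
# Prints: Maria -> woman; Wei -> unknown; Zen -> unknown

Even with a confidence threshold, a tool like this silently drops or misclassifies authors whose names don’t fit its lookup data, which is exactly why self-reported information (via journals or ORCID) would be better.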

A logical follow-up to this new paper would be to look at how authorship disputes shake out in other historically excluded groups. I’ll bet a dollar that white men consistently experience fewer authorship disputes than anyone else.

The Physics Today website has more analysis. (I was interviewed for this, but didn’t end up with any quotes in the article.) So too does Science magazine news.

Additional, 3 September 2021: I just want to emphasize why I think authorship disputes are so important for us to talk about.

There is a lot of research on gender differences in science. We know, for example, that women tend to get cited less than men, women tend to win fewer awards than men, and women tend to be invited as speakers less often than men.

But we know all these things because they are public.

It’s a major win when researchers get access to journal records of submissions, because those records are typically not disclosed and are very informative.

When we analyze citations and the like, we kind of assume that they reflect the work done. Maybe not perfectly, but reasonably.

Authorship disputes are kind of insidious because they are generally private, and affect what gets into the public record in the first place. It is an unrecognized and unacknowledged filter affecting people’s careers.

References

Ni C, Smith E, Yuan H, Larivière V, Sugimoto CR. 2021. The gendered nature of authorship. Science Advances 7(36). https://doi.org/10.1126/sciadv.abe4639

Faulkes Z. 2018. Resolving authorship disputes by mediation and arbitration. Research Integrity and Peer Review 3: 12. https://doi.org/10.1186/s41073-018-0057-z

External links

Women in science face authorship disputes more often than men

Are women researchers shortchanged on authorship? New study highlights gender disparities

Pic from here.

29 August 2021

Kill the chalk talk?

A “chalk talk” is an unscripted presentation that allows a job candidate to outline something “on the spot,” live. In many departments, this is supposed to be a chance for a candidate to talk about their research and how they plan to create an ongoing, funded research program.

Since chalkboards were mostly abandoned years ago, the term probably only survives because it rhymes. In fact, in business and industry job interviews for engineering and computing roles, these are called whiteboard interviews. But there, some have pointed out that whiteboarding is a problematic interview practice.

It’s got nothing to do with the skills needed to succeed at the job.

Wayne Brady
Asking a scientist to create a grant proposal on the spot is like asking a film actor to do an improv comedy scene as part of an audition. If that were the norm, Wayne Brady would probably be the biggest star in Hollywood. Sure, some actors are great at improv, but that’s generally not how film works. Films have scripts and preparation time.

Chalk talks are likely to be one of those common practices that contribute to academia’s low diversity. It seems likely that there could be many people who are thoughtful but slow to process questions who might fare poorly in a chalk talk. On the other hand, there may be people who are exceedingly good at oral explanations but who are undisciplined writers. 

And if this is all allegedly about grants, grants live or die on the page, not the stage. Besides, job candidates always send a research statement as part of the application package. So making someone explain it all again feels more like hazing than a useful way of assessing a candidate.

Departments don’t need these. I witnessed a fair number of academic job interviews in at least three institutions in as many countries. None of them asked a job candidate to do a so-called “chalk talk.” 

Update, 30 August 2021: Unsurprisingly, some people disagree with this. Jason Rasgon offered this defense (I’ve stitched together a few tweets):

Chalk talks have some value for a very narrow slice of institutions (generally soft money Medical or professional schools). I agree, mostly useless for other places. But when I was soft money I saw many faculty applicants who had a written document who couldn't articulate a plan of attack at the chalk talk. Several of them were still hired, they didn't last. When you’re soft money (the institution pays little to no salary, you have to pay your own salary by your own grants - ZF), you don’t have luxury of taking your time to organically develop a research program. You have to hit ground sprinting from day 1. Chalk talk should be easy, as you should have all this developed (and maybe submitted already). If you don’t, you’ll have problems. The people I saw have issues knew in general what they wanted to do but hadn't sat down and worked out the specifics.

Michael Eisen writes (again, stitching tweets together):

I’ve attended over 100 faculty candidate chalk talks, and would say they've played a significant role in hiring decisions maybe a dozen times, all in the favor of the candidate, because they gave us insights into strengths that did not manifest in their application. In contrast to the dominant opinion, which seems to be that chalk talks are about gotchas or testing peoples' ability to think on their feet, in my estimate their major role in the process is to give people an opportunity to address weaknesses in the rest of their application. When the format and purpose is clearly articulated, and when individual faculty are not allowed to derail the process (which can happen), they are an immensely useful part of the process. ... (T)he call for eliminating chalk talks is a copout that doesn't actually address the problem, and is part of a bad trend to provide candidates with fewer ways to demonstrate their qualities

I’m thinking about these points. I appreciate what they are getting at, but I think the qualifiers in these are hugely important. “When the purpose is clearly articulated” is my big one, but “when not derailed” is also up there.

We want job candidates to interact with people and answer questions about their research, but it seems unclear what people are trying to assess and whether this is the best format for doing it.

Eisen is right that if you remove part of the interview process, you have less information about the candidate. So re-evaluating the chalk talk concept should also include “Can we come up with something that more directly evaluates the relevant skills?”

If the goal is to evaluate someone’s research, maybe the statements of research interest that are required of every job application should play a bigger role?

A written statement better reflects grant writing. It’s faster to read a few pages than to listen to an hour-long talk. And it’s less stressful for the candidate.

If the research statement that was part of the initial application is too short, maybe creating a beefed-up version can be part of the shortlist interview process.

More additional: Later, Eisen wrote:

i actually wish we had prospective candidates teach a class, but that's obviously a bit too much to ask of them

For many institutions, asking a candidate to prepare a research seminar and a guest lecture in a class is probably more helpful than asking a job candidate to effectively give two research talks (seminar and chalk talk).

Andy Fraser wrote:

I have seen candidates with stellar CVs that gave extremely polished research talks that apparently could not cogently discuss a single part of their research proposal or who reacted with hostility to questions. I don't know how else I would have found that out.

In light of the excellent commentary, I have added a question mark to the title of this post. 😉

External links

It’s time to end whiteboard interviews for software engineers

14 August 2021

Are we trying to solve the wrong problem with anonymity in peer review?

One of academia’s evergreen topics, anonymity in peer review, has cropped up again on Twitter. 

This is something I’ve changed my mind about a lot since I’ve been blogging. But today, the thought struck me that we put too much emphasis over whether review should be anonymous or not.

We do not put enough emphasis on people not being assholes about getting reviews.

The whole argument around anonymity is, as far as I can see, driven by fear of retribution: “If I write a bad review of someone else, they will be petty and have the ability to sink something of mine later.”

As much as I appreciate the argument that people are horrible (they are) and you have to account for things that people regularly do (people are often petty and vindictive), it seems to me there is a lot of possibility to move the dial on author behaviour. To say, “We won’t put up with your little revenge plots.”

Do I know how to do this? No, not yet. I’m just blogging here.

It’s also likely that a lot of bad behaviour here is driven by the sense of research being a zero-sum game. Which, in the current funding climate, it is dangerously close to being. If there were not so much competition and windows of opportunity were not so small, people would worry less about whether some jerk is going to give them a bad review on a grant because they were critical of that person’s manuscript.

10 August 2021

Thanks to you, ComSciCon!

Zen Faulkes on Zoom screen with Godzilla in background.

Just briefly, I wanted to follow up on my appearance last week at ComSciCon. The storytelling panel I was part of was excellent (despite my presence), and I was also able to work with a small group on short journalism pitches.

There was not a coherent thread or panel hashtag, so here are tweets from the storytelling session. (Darn it, I miss Storify.) These are in roughly chronological order. I would normally cut and paste the tweet text, but I have some other tasks to do and it would take too long to make it pretty right now.

Thanks to the organizers for posting a non-derpy Zoom screenshot of me!

26 July 2021

The currency of science is currency

Coins and bills from many countries.

I used to tell students, “Your publications are gold. Peer-reviewed journal articles are the currency of science.”

I don’t think I’m going to tell them that any more.

Kat Milligan-Myhre brought this blog post to my attention, which bemoans that the administration at the University of Wisconsin sees money as an end, not a means to an end. This aligns somewhat with my own experience in the US. I want to talk a little about how we got to this point.

In the early 2000s, two things happened.

Graph showing basic research expenditures by US government agencies from 1976 to 2020, with NIH having the largest budget.

First, in the US, the National Institutes of Health (NIH) doubled their budget throughout much of the 1990s. Universities responded to that incentive and built infrastructure and hired faculty in biomedicine. But that budget doubling stopped in the 2000s. When you account for inflation, the budget shrank over the next few years. Universities had invested so deeply that they couldn’t back out, and were determined to get that money. 

In my experience, many university administrators were not truly aware that the budget had stopped increasing and did not realize how competitive grants had become. They had spent the better part of a decade hearing how big the NIH budgets were and couldn’t face the new reality.

You may object that this is only biomedicine and only in the US. True. But the NIH had the biggest basic research budget, and trends in the US tend to get reflected elsewhere.

Plus, something else happened in the 2000s. Academic publishing embraced the Internet and stopped relying on paper copies.

I point, as I often do, to the debut of PLOS ONE in 2006 as a significant turning point in academic publishing. It only reviewed for “technical soundness,” not “quality.” Because of that, people complained that “they will publish anything” (even when this was clearly not so). Nevertheless, it certainly expanded the range of papers that were publishable.

PLOS ONE not only published more papers itself, but other publishers also copied the model. The number of peer-reviewed journals expanded significantly.

And we also got journals that only pretended to be peer-reviewed adding confusion to the mix.

People evaluating faculty had often counted the number of publications because it is simple and does not require deep knowledge of the content of the papers. But as a whole swathe of new journals arrived, I am willing to bet that more and more faculty were able to push out papers somewhere.

From the point of view of someone attempting to evaluate faculty (because “excellence” and everything), this meant that the number of publications became less informative because there was less variation. And because administrators often aren’t active researchers themselves, they worried about noise and about whether they could trust that the journals were “real” journals.

But the grant process was still exclusive, and – importantly – still run by scientists. Grants still had the imprimatur of peer review.

Even if you put aside the desire for money, I could see how an administrator might prefer to switch from a simple metric that has lost much of its signal and is potentially corrupted (number of publications) to a different simple metric that is more exclusive and is still perceived as having integrity (number of grant dollars).

At this point, maybe we should update our vocabulary. Instead of “professor” or “researcher,” we should call people “research fundraisers.”

From now on, I’ll probably be advising students to look for competitive opportunities like scholarships and society grants as much as I advise them to publish.

External links

wtf uw 2: the new wisconsin idea is money

23 July 2021

ComSciCon 2021

I’m excited to be presenting at ComSciCon 2021! I’ll be part of a panel and workshop on creative storytelling on 5 August 2021.

It’s a bit intimidating to be sharing space with:

I look forward to contributing!

External links

ComSciCon