13 September 2021

The rise of TikTok in misinformation

Ben Collins’s Twitter thread about how misinformation about medication to get worms out of horses has become the cause du jour for many.

Can’t stress how wild the ivermectin Facebook groups have become. So many people insisting to each other to never go to an ER, in part because they might not get ivermectin, but sometimes because they fear nurses are killing them on purpose “for the insurance money.” ... It’s just a constant stream of DIY vitamin therapies and new, seemingly random antiviral drugs every day — but not the vaccine.

This is distressing, but I wanted to home in on this comment in the thread.

The ivermectin Facebook groups also offer a window into how pervasive antivaxx COVID “treatment” videos are on TikTok.

The groups serve as a de facto aggregator for antivaxx TikTok, a space that is enormous but inherently unquantifiable to researchers.

When I last wrote about the dangers of recommendation algorithms (in pre-pandemic 2019), I focused on YouTube. TikTok existed then (it started in 2016), but it wasn’t included in Pew Research’s list of social media platforms until this year.

Graph showing use of social media platforms in the US. 81% use YouTube, 61% use Facebook. No other platform is used by more than 50% of Americans. 21% of Americans use TikTok.

Even today, TikTok isn’t used by one in four Americans. It’s more like one in five. It’s impressive that it has pulled close to Twitter, which has been around far longer. And it’s also frightening that it is having an outsized effect that is leading people to try... anything.

Everything is a miracle cure, or it isn't, but every drug is worth a shot. Except, of course, the thing that works: the vaccine. Anything pro-vaccine is instantly written off as “propaganda.”

There are lots of issues raised here that I can’t process all at once. But Collins’s comment that TikTok is unmeasurable for researchers strikes me as something important. Could a requirement for more data transparency in how TikTok selects what videos to show someone help? Not sure.

But we may be at the start of an arms race between social media platforms using data to show things to viewers, and researchers trying to “break the code” to figure out just what the heck people are actually seeing.

Related posts

Recommendation algorithms are the biggest problem in science communication today

External links

Social Media Use in 2021 - Pew Research

Naming the animals in research papers

This is Bruce. 

Bruce, a kea with no upper beak, holding an object with his tongue and lower beak.

Bruce has been making the news rounds because of a new paper demonstrating that he uses pebbles to groom. Bruce is a kea, a parrot that normally has a large upper beak, which Bruce does not have. In the picture above, you can see him using his tongue and remaining lower beak to pick up an object.

What I want to talk about is not the tool use (although that is cool), but that I know this bird was given a name. Because I found this paper within days of finding another paper about an unusual bird: an Australian musk duck named Ripper. 

Ripper’s claim to fame was that he was able to imitate sounds, like creaking metal and even human voices. Ripper seems to have picked up the phrase, “You bloody fool” from humans around him. 

This is interesting because vocal learning is found in only a few lineages and hasn’t been documented in ducks before.

But what interested me in both papers is that they repeatedly refer to these birds by the names that humans gave them. Not just once in the methods as an aside, but all the way through.

I can see the value of using a given name in news articles and blog posts like the one I’m writing. And maybe it makes scanning the paper a little easier. But the kea paper refers to “Bruce” 62 times; the duck paper refers to “Ripper” 40 times. The extensive use of these names in the journal articles gives me pause.

It’s been clear for a long time that the effort to keep animals at arm’s length to avoid humanizing them (a position taken furthest, perhaps, by B.F. Skinner and other behaviourists in American psychology) is a lost cause. The approach of people like Jane Goodall (who named her chimps rather than just giving them numbers) has won.

But these two approaches sit on opposite ends of a continuum. And quite often, there’s a pendulum swing in attitudes. I wonder if the pendulum has maybe swung a little too far towards our willingness to humanize animals in the scientific literature.

It’s easy to slip into teleology (assuming everything has a purpose) and anthropomorphism (thinking animals are like humans). And constantly referring to animals’ names throughout a paper seems to make that even easier. 

I’m not saying that the names we give animals should never be mentioned in papers. But maybe it could be once or twice instead of dozens of times. 

And hey, these animals didn’t get to pick their names. Maybe that duck was thinking, “I say ‘Bloody fool’, and they name me ‘Ripper’ on top of that? Could I be any more of a cliché Australian?”

A Twitter poll suggests I am not alone in being wary of this practice.

References

Bastos APM, Horváth K, Webb JL, Wood PM, Taylor AH. 2021. Self-care tooling innovation in a disabled kea (Nestor notabilis). Scientific Reports 11(1): 18035. https://doi.org/10.1038/s41598-021-97086-w

ten Cate C, Fullagar PJ. 2021. Vocal imitations and production learning by Australian musk ducks (Biziura lobata). Philosophical Transactions of the Royal Society B: Biological Sciences 376(1836): 20200243. https://doi.org/10.1098/rstb.2020.0243

02 September 2021

Gender’s role in authorship disputes: Women try to prevent fights but still lose out

Black king and white queen chess pieces.
Sometimes, I hate being right.

Ni and colleagues have an important new paper on authorship conflicts that confirms something I sort of predicted back in 2018: that people with less professional and social power get hosed by authorship disputes. In 2018, I wrote, “loss of credit due to authorship disputes may be a little recognized factor driving underrepresented individuals out of scientific careers”.

Ni and colleagues confirmed that women get into more authorship disputes than men. This is despite women more often trying to negotiate authorship in advance. (To me, the most surprising and depressing finding.)

To me, this supports the argument that we need more ways to help authors resolve disputes and support authors with less power. I suggested mediation and arbitration could be more common. I don’t care if people buy into that idea, but I hope that people can see that the status quo harms our efforts to create a more equitable scientific culture.

A caveat about the new paper. For some analyses, the paper guesses at gender by running an algorithm on names. This has problems. (Edit: Several authors tweeted a clarification for how this algorithm was used and that it was tangential to the main findings. Ni, Larivière, Sugimoto) This suggests journals should capture more data about authors than name (usually in English only), institution, and email. Or maybe that could become part of an ORCID profile. Actually, I like that idea more.

A logical follow up to this new paper would be to look at how authorship disputes shake out in other historically excluded groups. I’ll bet a dollar that white men consistently experience fewer authorship disputes than anyone else.

The Physics Today website has more analysis. (I was interviewed for this, but didn’t end up with any quotes in the article.) So too does Science magazine news.

Additional, 3 September 2021: I just want to emphasize why I think authorship disputes are so important for us to talk about.

There is a lot of research on gender differences in science. We know that women tend to get cited less than men, that women tend to win fewer awards than men, and that women tend to be invited as speakers less often than men.

But we know all these things because they are public.

It’s a major win when researchers get access to journals’ records of submissions, because that is something that is typically not disclosed and is very informative.

When we analyze citations and the like, we kind of assume that they reflect the work done. Maybe not perfectly, but reasonably.

Authorship disputes are kind of insidious because they are generally private, and affect what gets into the public record in the first place. It is an unrecognized and unacknowledged filter affecting people’s careers.

References

Ni C, Smith E, Yuan H, Larivière V, Sugimoto CR. 2021. The gendered nature of authorship. Science Advances 7(36): eabe4639. https://doi.org/10.1126/sciadv.abe4639

Faulkes Z. 2018. Resolving authorship disputes by mediation and arbitration. Research Integrity and Peer Review 3: 12. https://doi.org/10.1186/s41073-018-0057-z

External links

Women in science face authorship disputes more often than men

Are women researchers shortchanged on authorship? New study highlights gender disparities

Pic from here.

29 August 2021

Kill the chalk talk?

A “chalk talk” is an unscripted presentation that allows a job candidate to outline something “on the spot,” live. In many departments, this is supposed to be a chance for a candidate to talk about their research and how they plan to create an ongoing, funded research program.

Since chalkboards were mostly abandoned years ago, the term probably only survives because it rhymes. In engineering and computing job interviews in business and industry, these are called whiteboard interviews. But there, some have pointed out that whiteboarding is a problematic interview practice.

It’s got nothing to do with the skills needed to succeed at the job.

Wayne Brady
Asking a scientist to try to create a grant proposal on the spot is like asking a film actor to do an improv comedy scene as part of an audition. If that were the norm, Wayne Brady would probably be the biggest star in Hollywood. Sure, some actors are great at improv, but that’s generally not how film works. Films have scripts and preparation time.

Chalk talks are likely to be one of those common practices that contribute to academia’s low diversity. It seems likely that there could be many people who are thoughtful but slow to process questions who might fare poorly in a chalk talk. On the other hand, there may be people who are exceedingly good at oral explanations but who are undisciplined writers. 

And if this is all allegedly about grants, grants live or die on the page, not the stage. Besides, job candidates always send a research statement as part of the application package. So making someone explain it all again feels more like hazing than a useful way of assessing a candidate.

Departments don’t need these. I witnessed a fair number of academic job interviews in at least three institutions in as many countries. None of them asked a job candidate to do a so-called “chalk talk.” 

Update, 30 August 2021: Unsurprisingly, some people disagree with this. Jason Rasgon offered this defense (I’ve stitched together a few tweets):

Chalk talks have some value for a very narrow slice of institutions (generally soft money Medical or professional schools). I agree, mostly useless for other places. But when I was soft money I saw many faculty applicants who had a written document who couldn't articulate a plan of attack at the chalk talk. Several of them were still hired, they didn't last. When you’re soft money (the institution pays little to no salary, you have to pay your own salary by your own grants - ZF), you don’t have luxury of taking your time to organically develop a research program. You have to hit ground sprinting from day 1. Chalk talk should be easy, as you should have all this developed (and maybe submitted already). If you don’t, you’ll have problems. The people I saw have issues knew in general what they wanted to do but hadn't sat down and worked out the specifics.

Michael Eisen writes (again, stitching tweets together):

I’ve attended over 100 faculty candidate chalk talks, and would say they've played a significant role in hiring decisions maybe a dozen times, all in the favor of the candidate, because they gave us insights into strengths that did not manifest in their application. In contrast to the dominant opinion, which seems to be that chalk talks are about gotchas or testing peoples' ability to think on their feet, in my estimate their major role in the process is to give people an opportunity to address weaknesses in the rest of their application. When the format and purpose is clearly articulated, and when individual faculty are not allowed to derail the process (which can happen), they are an immensely useful part of the process. ... (T)he call for eliminating chalk talks is a copout that doesn't actually address the problem, and is part of a bad trend to provide candidates with fewer ways to demonstrate their qualities

I’m thinking about these points. I appreciate what they are getting at, but I think the qualifiers in these are hugely important. “When the purpose is clearly articulated” is my big one, but “when not derailed” is also up there.

We want job candidates to interact with people and answer questions about their research, but it seems unclear what people are trying to assess and whether this is the best format for doing it.

Eisen is right that if you remove part of the interview process, you have less information about the candidate. So re-evaluating the chalk talk concept should also include “Can we come up with something that more directly evaluates the relevant skills?”

If the goal is to evaluate someone’s research, maybe the statement of research interests that is required of every job application should play a bigger role?

A written statement better reflects grant writing. It’s faster and more efficient to read a few pages than to listen to an hour-long talk. And it’s less stressful for the candidate.

If the research statement that was part of the initial application is too short, maybe creating a beefed-up version can be part of the shortlist interview process.

More additional: Later, Eisen wrote:

i actually wish we had prospective candidates teach a class, but that's obviously a bit too much to ask of them

For many institutions, asking a candidate to prepare a research seminar and a guest lecture in a class is probably more helpful than asking a job candidate to effectively give two research talks (seminar and chalk talk).

Andy Fraser wrote:

I have seen candidates with stellar CVs that gave extremely polished research talks that apparently could not cogently discuss a single part of their research proposal or who reacted with hostility to questions. I don't know how else I would have found that out.

In light of the excellent commentary, I have added a question mark to the title of this post. 😉

External links

It’s time to end whiteboard interviews for software engineers

14 August 2021

Are we trying to solve the wrong problem with anonymity in peer review?

One of academia’s evergreen topics, anonymity in peer review, has cropped up again on Twitter. 

This is something I’ve changed my mind about a lot since I’ve been blogging. But today, the thought struck me that we put too much emphasis on whether review should be anonymous or not.

We do not put enough emphasis on people not being assholes about getting reviews.

The arguments around anonymity are, as far as I can see, driven by fear of retribution. “If I write a bad review of someone else, they will be petty and have the ability to sink something of mine later.”

As much as I appreciate the argument that people are horrible (they are) and you have to account for things that people regularly do (people are often petty and vindictive), it seems to me there is a lot of possibility to move the dial on author behaviour. To say, “We won’t put up with your little revenge plots.”

Do I know how to do this? No, not yet. I’m just blogging here.

It’s also likely that a lot of bad behaviour here is driven by the sense that research is a zero-sum game. Which, in the current funding climate, it is dangerously close to being. If there was not so much competition and windows of opportunity were not so small, people would worry less about whether some jerk is going to try to give you a bad review on a grant because you were critical of their manuscript.

10 August 2021

Thanks to you, ComSciCon!

Zen Faulkes on Zoom screen with Godzilla in background.

Just briefly wanted to follow up on my appearance last week at ComSciCon. The storytelling panel I was part of was excellent (despite my presence) and I was also able to work with a small group on short journalism pitches.

There was not a coherent thread or panel hashtag, so here are tweets from the storytelling session. (Darn it, I miss Storify.) These are in roughly chronological order, and I would normally cut and paste the tweet text, but I have some other tasks to do and it would take too long to make it pretty right now.

Thanks to the organizers for posting a non-derpy Zoom screenshot of me!

26 July 2021

The currency of science is currency

Coins and bills from many countries.

I used to tell students, “Your publications are gold. Peer-reviewed journal articles are the currency of science.”

I don’t think I’m going to tell them that any more.

Kat Milligan-Myhre brought this blog post to my attention, bemoaning that the administration at the University of Wisconsin sees money as an end, not a means to an end. This aligns somewhat with my own experience in the US. I want to talk a little about how we got to this point.

In the early 2000s, two things happened.

Graph showing basic research expenditures by US government agencies from 1976 to 2020, with NIH having the largest budgets.

First, in the US, the National Institutes of Health (NIH) budget doubled throughout much of the 1990s. Universities responded to that incentive and built infrastructure and hired faculty in biomedicine. But the doubling stopped in the 2000s. When you account for inflation, the budget shrank over the next few years. Universities had invested so deeply that they couldn’t back out, and were determined to get that money.

In my experience, many university administrators were not truly aware that the budget had stopped increasing and did not realize how competitive grants had become. They had spent the better part of a decade hearing how big the NIH budgets were and couldn’t face the new reality.

You may object that this is only biomedicine and only in the US. True. But the NIH had the biggest basic research budget, and trends in the US tend to get reflected elsewhere.

Plus, something else happened in the 2000s. Academic publishing embraced the Internet and stopped relying on paper copies.

I point, as I often do, to the debut of PLOS ONE in 2006 as a significant turning point in academic publishing. It only reviewed for “technical soundness,” not “quality.” Because of that, people complained that “they will publish anything” (even when this was clearly not so). Nevertheless, it certainly expanded the range of papers that were publishable.

It not only published more papers, but other publishers copied the model. The number of peer-reviewed journals expanded significantly. 

And we also got journals that only pretended to be peer-reviewed adding confusion to the mix.

People evaluating faculty had often counted the number of publications because it is simple and does not require deep knowledge of the content of the papers. But as a whole swathe of new journals arrived, I am willing to bet that more and more faculty were able to push out papers somewhere.

From the point of view of someone attempting to evaluate faculty (because “excellence” and everything), this meant that publication number was less informative because there was less variation between faculty. And because administrators often aren’t active researchers themselves, they worried about noise and whether they could trust that the journals were “real” journals.

But the grant process was still exclusive, and – importantly – still run by scientists. Grants still had the imprimatur of peer review.

Even if you put aside the desire for money, I could see how an administrator might prefer to switch from a simple metric that has lost much of its signal and is potentially corrupted (number of publications) to a different simple metric that is more exclusive and is still perceived as having integrity (number of grant dollars).

At this point, maybe we should update our vocabulary. Instead of “professor” or “researcher,” we should call people “research fundraisers.”

From now on, I’ll probably be advising students to look for competitive opportunities like scholarships and society grants as much as I advise them to publish.

External links

wtf uw 2: the new wisconsin idea is money

23 July 2021

ComSciCon 2021

I’m excited to be presenting at ComSciCon 2021! I’ll be part of a panel and workshop on creative storytelling on 5 August 2021.

It’s a bit intimidating to be sharing space with:

I look forward to contributing!

External links

ComSciCon


22 July 2021

Mission declined

Should you choose to accept it

Terry McGlynn tweeted (in reply to an article I couldn’t see): 

Want more people to accept and understand evolution? 
Your mission, should you choose to accept it, is to emphasize that religion and evolution are compatible.

My problem with this is that “religion” is not one thing. There are thousands of religions. There are even many branches of what is ostensibly a single religion.

For many people, the question is not whether some scientific claim like evolution is compatible with some religions. They are concerned, often deeply so, with whether a claim is compatible with their religion. I do not see how saying, “The Catholic church is okay with evolution” is supposed to convince a Protestant.

Trying to convince someone that “religion and evolution are compatible” means trying to convince people to change their religion. I am not prepared to wade into theological disputes between religions.

I do not want that mission.

19 July 2021

Damn it.

Dr. Kristine Lowe
One of the things I was most proud of doing in my time at The University of Texas-Pan American was chairing the search committee that recommended hiring Dr. Kristine Lowe.

Kristi died yesterday.

Damn it.

Kristi was great with students and had a lot of them go through her lab.

Given that we had, like, 60% women as our students, I think her presence was so important for our department. At the time, there was only one woman on tenure / tenure track in the entire department.

Eventually, she started to step more into administrative and leadership roles, and she had been chair of the department for several years. Unsurprisingly, given the department composition she came into years ago, she was the first woman to chair the department.

She was friendly, supportive, and always willing to work with you. She was a good colleague and I hate losing her.

Damn it.

04 July 2021

You don’t have to use bad data

A routine case: a bad paper attracted a lot of criticism and then got retracted.

The one thing that I wanted to comment on was one of the authors trying to defend their work.

We are happy to concede that the data we used… are far from perfect, and we said so in our paper. But we did not use them incorrectly. We used imperfect data correctly. We are not responsible for the validity and correctness of the data, but for the correctness of the analysis. We contend that our analysis was correct. We agree with LAREB that their data is not good enough. But this is not our fault(.)

My head is kind of spinning from this argument. If you know the data are bad, you could, you know, leave them alone and not write an entire academic paper that depends on them. Especially when it concerns an ongoing public health crisis.

The data may not be your fault, but that does not mean you are without fault.

External links

Journal retracts paper claiming two deaths from COVID-19 vaccination for every three prevented cases - Retraction Watch (https://retractionwatch.com/2021/07/02/journal-retracts-paper-claiming-two-deaths-from-covid-19-vaccination-for-every-three-prevented-cases/)


01 July 2021

Ant bites

In two days, two insightful pieces of writing have dropped that feel like bookends to each other. Both deal with the effects of social media – or, to be more specific, Twitter – on individuals who get on the wrong end of anger.

First is a retrospective and analysis by Emily VanDerWerff of how Twitter controversy about a single science fiction short story effectively crushed the writer’s desire to ever write again. And that was probably the smallest effect the controversy had on author Isabel Fall.

Second is a description of how social media dynamics are still not grasped by journalism as a field. Charlie Warzel brings some useful terms that I hadn’t seen before, like “context collapse,” to the discussion.

Both remind us that human beings are used to dealing with small social networks. We aren’t ready for the level of attention that you can get if you become the center of a viral online discussion. VanDerWerff writes:

But in any internet maelstrom that gets held up as a microcosm of the Way We Live Today, one simple factor often gets washed away: These things happened to someone. And the asymmetrical nature of the harm done to that person is hard to grasp until you’ve been that person. A single critical tweet about the matter was not experienced by Isabel Fall as just one tweet. She experienced it as part of a tsunami that nearly took her life.

Warzel says:

Many leaders at big news organizations don’t think in terms of “attack vectors” or amplifier accounts, they think in terms of editorial bias and newsworthiness. They don’t fully understand the networked nature of fandoms and communities or the way that context collapse causes legitimate reporting to be willfully misconstrued and used against staffers. They might grasp, but don’t fully understand, how seemingly mundane mentions on a cable news show like Tucker Carlson’s can then lead to intense, targeted harassment on completely separate platforms.

The “Ant bites” of the title of this post?

When you tweet something, it can feel like you have the power of an ant. And a single ant is usually inconsequential. “Squished like an ant.”

But in 1998, Joe Straczynski wrote a warning (in the usenet newsgroup rec.arts.sf.tv.babylon5.moderated).

(E)ven a whole human being can be eaten by ants.

It’s easy to make the mistake of tweeting at or about someone and think you’re just making conversation. Sure, if you were in a room with that person and knew them, it’d probably be fine. But you forget that you probably see only the tiniest sliver of that person’s experience. Your tiny little comment might be part of a much bigger pattern for the recipient. A single ant bite. But the person on the other end might be getting eaten alive by ants.

I am thinking back to a lot of online controversies in science around, say, a decade ago. I think we probably underestimated how rough those could be on researchers. Nobody had a “social media IQ” then. The good news was that the online communities were smaller then, so the anthill might not have delivered quite as many bites as it could now.

External links

How Twitter can ruin a life

What newsrooms still don’t understand about the internet

27 June 2021

The paradox of MDPI

One of the most puzzling trends in scientific publishing for the last couple of years has been the status of the open access publisher MDPI.

On the one hand, some people I know and respect have published their papers there. I’ve reviewed for some of its journals, and I have seen that authors do make requested changes and there is some real peer review going on.

On the other hand, few other publishers today seem so actively engaged in pissing off the people they work with. Scientists complain about constant requests to review, particularly in areas far outside their domain expertise – an easily avoided and amateurish mistake. 

And MDPI’s boss seems like a dick.

A few people have been trying to make sense of this paradox. Dan Brockington wrote a couple of analyses over the last two years (here, here) that were broadly supportive of what MDPI has done.

Today, I stumbled across this post by Paolo Crosetto that takes a long view of MDPI’s record. It prompted another analysis by Brockington here.

Both are longish reads, but are informed by lots of data, and both are nuanced, avoiding simple “good or bad” narratives. I think one of the most interesting graphs is this one in Crosetto’s post on processing turnarounds:

Graph of time from submission to acceptance at MDPI journals.  2016 shows wide variation from journal to journal; 2020 data shows little variation.

There used to be variation in how long it took to get a paper accepted in an MDPI journal. Now there is almost no variation in how long it takes. That sort of change seems highly unlikely to happen just by accident. It looks a lot like a top-down directive from the publisher, putting a thumb on the decision-making process, not a result of editors running their journals independently.
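The collapse of that smear can be put in rough numerical terms. A minimal sketch of the idea: compare the coefficient of variation (spread relative to the mean) of submission-to-acceptance times across journals for the two years. The days-to-acceptance figures below are made up for illustration, not MDPI’s actual data.

```python
# Hypothetical illustration: a publisher-wide standardization of turnaround
# times shows up as a collapse in the coefficient of variation (CV) across
# journals. These numbers are invented, not taken from Crosetto's data.
from statistics import mean, stdev

# Days from submission to acceptance at six hypothetical journals.
turnaround_days = {
    2016: [55, 90, 30, 120, 70, 45],  # wide journal-to-journal spread
    2020: [36, 38, 35, 37, 39, 36],   # nearly identical across journals
}

for year, days in turnaround_days.items():
    cv = stdev(days) / mean(days)  # CV: standard deviation scaled by the mean
    print(f"{year}: mean {mean(days):.0f} days, CV {cv:.2f}")
```

Independent editorial decisions would be expected to keep the CV roughly stable over time; a sudden drop toward zero is what makes a top-down directive the more plausible explanation.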

Both Crosetto and Brockington acknowledge that there is good research in some journals. 

The question seems to be whether that good reputation is getting thrown away by the publisher’s pursuit of more articles, particularly in “Special Issues.” Crosetto suspects that MDPI is scared and wants to extract as much money (or “rent,” as he calls it) from as many people as fast as possible. Brockington says that this may or may not be a problem. It all depends on something rather unpredictable: scientists’ reactions.

Scientists may be super annoyed by the spammy emails, but they might be happier about fast turnaround times (which people want to an unrealistic degree) with a high chance of acceptance.

If the last decade or so in academic publishing has taught us anything, it’s that there seems to be no upper limit for scientists’ desire for venues in which to publish their work.

PLOS ONE blew open the doors and quickly became the world’s biggest journal by a long way. But even though it published tens of thousands of papers in a single year, PLOS ONE clones cropped up and even managed to surpass it in the number of papers published per year.

MDPI is hardly alone in presenting bigger menus for researchers to choose where to publish. Practically every publisher is expanding its list of journals at a decent clip. I remember when Nature was one journal, not a brand slapped across the titles of over 50 journals.

MDPI is becoming a case study in graylisting. As much as we crave clear categories for journals as “real” (whitelists) or “predatory” (blacklists), the reality can be complicated.

Update, 1 July 2021: A poll I ran on Twitter indicates deep skepticism of MDPI, with lots of people saying they would not publish there.

Would you submit an article to an MDPI journal?

I have done: 9.4%
I would do: 3.9%
I would not: 50%
Show results: 36.7%

Update, 21 August 2021: A new paper by Oviedo-García analyzes MDPI’s publishing practices. It makes note of many of the features in the blog posts above: the burgeoning number of special issues, the consistently short review times across all journals. Oviedo-García basically calls MDPI a predatory publisher.

This earned a response from MDPI, which unsurprisingly disagrees.

External links

An open letter to MDPI publishing

MDPI journals: 2015 to 2019

Is MDPI a predatory publisher?

MDPI journals: 2015 to 2020 

Oviedo-García MÁ. 2021. Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI). Research Evaluation: in press. https://doi.org/10.1093/reseval/rvab020

Comment on: 'Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI)' from Oviedo-García

Related posts 

My resolve not to shoot the hostage is tested

Graylists for academic publishing

25 June 2021

What the American furore over critical race theory means for science education

People shouting at school board meeting
 

Well, that escalated quickly.

In the last few months, “critical race theory” has gone from an academic idea mostly (but not exclusively) taught in law school to a flashpoint item that American right wingers are willing to go to the mattresses over. 

There has been ample shouting in school board meetings.

Nikole Hannah-Jones was denied tenure after winning a Pulitzer Prize and a MacArthur award for the 1619 Project. Not because her colleagues voted against her, but because the normally quiet Board of Trustees voted against her.

Critical race theory is not my area of expertise (please forgive me, social science and humanities scholars, for musing here), but watching this feels very much like something I am familiar with and have watched closely for a long time: the fight over teaching evolution.

I noted on Twitter that a lot of the same forces that traditionally mobilize against the teaching of evolution are now mobilizing against the teaching of critical race theory. Laws are successfully getting passed in state legislatures. And while one side is willing to show up in large numbers and shout at school boards, the other... has not been so active.

There are a lot of reasons for the lopsided reaction, but I can’t help but wonder if part of the reason people who are not freaking out over critical race theory aren’t bringing the same fire to the fight is that they look at the laws being passed and think, “That’s going to court and the laws will get overturned.”

A few things make this sound plausible on the face of it. 

Critical race theory isn’t taught in K-12 schools, so prohibiting the teaching of something that isn’t taught is bizarre.

But more to my point here, this is what happened with the teaching of evolution. There was often widespread political and public support for laws that tried to regulate the teaching of evolution. But time and again in the US, the courts have said, “No, those anti-evolution laws are unconstitutional.”

I do not feel extremely confident that an American court case would strike down the laws prohibiting critical race theory. 

While regulating or banning the teaching of evolution and banning the teaching of critical race theory are both pet projects of the political right in the US, the arguments over evolution are inextricably intertwined with religious beliefs. A government promoting particular religions (mainly fundamentalist Protestant Christianity) runs afoul of the establishment clause of the first amendment of the US constitution. That has been a key legal aspect of all the cases.

The opposition to critical race theory is not as clearly driven by religion, which makes the legal issues very different. Even if the end result – states dictating what can and cannot be taught – looks very much the same.

This is why academics of any stripe, including my fellow scientists, need to be paying very close attention to how these fights over critical race theory play out. 

University instructors have mostly taken it for granted that they can teach subjects as they see fit. The fights over critical race theory are a test case. Climate science is another area where many on the right would probably love to dictate how it is taught. Ditto anything around sexual and gender diversity. 

If those laws are not smacked down fast and hard, whether in the courts or by political and public action, this could be the start of a sustained squeeze on how universities teach in the US. And the worldwide reputation of American universities will suffer.

Picture from here.

07 June 2021

The @IAmSciComm threads, 2021 edition

Twitter heading for IAmSciComm hosted by Zen Faulkes

I’ve started my time hosting the IAmSciComm Twitter account, and will be adding my threads here as I go so that they are easy to find.

Monday, 7 June 2021

Tuesday, 8 June 2021

Wednesday, 9 June 2021

Thursday, 10 June 2021

Friday, 11 June 2021

Saturday, 12 June 2021

Sunday, 13 June 2021

Related posts

The IAmSciComm threads 

06 June 2021

The week of IAmSciComm, 6 June 2021!

I have just taken over the reins of the @IAmSciComm rotating curator Twitter account! This is my second time hosting, and I am gratified to be asked back.

Here is a rough schedule for the week.

  • Monday, 7 June: Show me a poster, graphic, or dataviz! 
  • Tuesday, 8 June: Why streaks matter!
  • Wednesday, 9 June: From blog to book! 
  • Thursday, 10 June: Posters for everyone! 
  • Friday, 11 June: Posters reviewed!
  • Saturday, 12 June: The randomizer!

Join me, won’t you?

Related posts

The IAmSciComm threads

External links

IAmSciComm home page

04 June 2021

Experiments don’t always lead to papers

Drugmonkey tweeted:

Telling academic trainees in the biomedical sciences to put their heads down, do a lot of experiments and the papers will just emerge as a side-product is the worst sort of gaslighting.

He’s right, as he often is, but it bears examining why that’s the case.

First, not all experiments should be published. Experiments can have design flaws like uncontrolled variables, small sample sizes, and all the other points of weakness that we learn to find and attack in journal club in graduate school.

Second, even if an experiment is technically sound and therefore publishable in theory, it may not be publishable in practice. In many fields, it’s almost impossible to publish a single experiment, because the bar for publication is high. People usually want to see a problem tackled by multiple experiments. The amount of data that is expected in a publishable paper has increased and probably will continue to do so.

The bar for what is considered “publishable” is malleable. We have seen that in the last two years with the advent of COVID-19. There was an explosion of scientific papers, many of which probably would not have been publishable if there wasn’t a pandemic going on. People were starved for information and researchers responded in force. You have to understand what is interesting to your research community.

Third, experimental design is a very different skill from writing and scientific publication. 

Fourth, it’s not a given that everyone feels the same drive to publish. Different people have different work priorities. For instance, I saw a lot of colleagues who had big lab groups, with a lot of students who churned through regularly. Those labs generated a lot of posters and a lot of master’s theses. According to our department guidelines, theses were supposed to represent publishable work.

But all of that didn’t turn into papers consistently. I think people got positive feedback for having lots of students (and looking “busy”) and came to view “I have a lot of students” as their personal measure of success. Or they just got into the habit of thinking, “I’ll write up that work later,” or, “Just one more experiment so we can get it into a better journal.”

I had fewer students and master’s theses written than my colleagues, but I published substantially more papers. I say this not to diss my colleagues or brag on myself, but it’s a fact. I made publication a priority.

Publishing papers requires very intentional, deliberate planning. It requires understanding the state of the art in the research literature. It requires setting aside time to write the papers. It requires understanding what journals publish what kinds of results. Just doing experiments in the lab will not cause papers to fall down like autumn leaves being shaken loose from trees.

24 May 2021

It’s Book Day! Better Posters is here!

It’s been a long time coming.

Today is the official release date for the Better Posters book.

Better Posters book in box.

The Better Posters blog started in 2009, inspired in large part by Garr Reynolds’s Presentation Zen blog. Reynolds’s blog made the transition to book in late 2007. As my blog kept going, I quietly entertained the hope that maybe I might be able to write a book to do for conference posters something like what Reynolds (and many others) did for oral presentations.

Somewhere along the way, I even wrote out a partial outline for what a book might contain.

Flash forward to sometime in 2017. Nigel Massen from Pelagic Publishing contacts me about the potential for... a book on conference posters! I send him my crappy outline. Despite its crappiness, Nigel assures me that all books start with a crappy outline. I manage to convince him I can actually do this.

I start writing it in earnest in the first few months of 2018, and submit it on time on Halloween 2019. So yes, the writing and organizing and figure making and emailing people to ask if I can show their posters took well over a year. (In my defense, I was teaching a full course load while writing the book, too).

But proving that if it wasn’t for the last minute, nothing would get done, I was furiously making some fairly significant changes even on deadline day.

The original plan was to release the book in the first quarter of 2020, which would have been in time for the normal summer conference season for academics. 

I don’t know if we would have made target, but the COVID-19 pandemic derailed that plan. There was just no way to release the book during a pandemic. The book got pushed back a couple of times until today.

After more than three years, it’s a little hard to believe that other people can now read this thing. It doesn’t quite feel like the books that inspired it. When you have to write 80,000 words, you just get too tired to emulate anyone else, and what you get represents you and your voice, for good or bad.

I have more to say on the creation of this book, but for now, I will just say that if you get it, I hope you find it useful.

If you’re interested, the book is available in both paperback and ebook versions for Kobo, Kindle, Nook.

If you cannot buy a copy of the book yourself, you might recommend it to your university library or local community library.

Photo by Anita Davelos.

04 May 2021

Type of scientific papers, very specific niche edition

Last week, Randall Munroe started a whole thing with one of his xkcd cartoons, “Types of scientific papers.”

People were inspired to make different versions for their field, then sub-field, then sub-sub-field, so I took the meme to its logical conclusion.

Types of scientific paper that Zen Faulkes has co-authored

07 April 2021

Gratitude



I’m grateful that:

  1. I had a book in me that I thought was worth writing.
  2. Pelagic Publishing gave me a chance to write it.
  3. I made it through a pandemic to see it in print.

I hope people find it helpful.

03 April 2021

Why “descriptive” science is downplayed

On Twitter, Rejji Kuruvilla asked:

I’m sorry, but WHY are descriptive studies a problem in grant or ms review? If you don’t provide the 1st description or visualization of a biological process, how do you provide the basis for hypothesis or mechanism-driven science?

Oh, I feel this. I have complained about this since my grad school days. One of the biggest scientific endeavors that closed out the twentieth century, the Human Genome Project, was pure description. I love this from Niko Tinbergen, in his most famous paper (1963):

Contempt for simple observation is a lethal trait in any science(.)

Here’s what I think is going on.

First, I suspect “descriptive” as a critique might mean any one of three things.

  1. Not hypothesis driven.
  2. Not investigated by controlled experiment.
  3. Studies only a single level of organization.

The bias against description is a symptom of the fact that basic biological research has been supported by medical agencies. In the United States, the budget for the National Institutes of Health dwarfs that of the National Science Foundation. (Interestingly, this isn’t the case in Canada.)

Medical agencies aren’t funding science for science’s sake. They are not interested in making discoveries that broaden our understanding of the natural world. They have sick people they want to make better. They want treatments. They want results.

To their credit, most medical funding agencies recognize that investing in basic science pays dividends. That’s why they support it at all. But their priorities are not those of curiosity-driven science.

So it is no surprise that such agencies would strongly favour hypothesis-driven research. Because as much as I love basic description and serendipitous discoveries, I absolutely recognize that hypothesis-driven research – particularly strong inference, pitting competing hypotheses against each other – is ferociously efficient at generating new knowledge.

I don’t think hypothesis-driven research is enough. But even I have to say that I don’t think any other approach generates knowledge as quickly or as consistently.

If you get the “too descriptive” critique, you can’t fix it just by working the word “hypothesis” into your proposal at every opportunity. The “descriptive” critique is not necessarily about whether you have a hypothesis at all, but whether you address a hypothesis that is actively investigated by your research community. 

You can have a perfectly hypothesis-driven project, but if the hypothesis doesn’t address what the community cares about, it will still get called “descriptive.”

Another aspect of the critique is that “descriptive” studies are contrasted against “mechanistic” studies. Again, I think this is a symptom of the “medicalization” of research funding.

This semester, I was unexpectedly assigned to teach part of a course in human pathophysiology. This is way outside my expertise, and I’ve been forced to learn about medical topics more than I ever have before in my life. And after a few months of digging into bone, muscle, and hormonal disorders, it has surprised me how often developing treatments drills down to understanding molecular interactions. 

A description like “There’s too much hormone” is necessary. But treatments are often based in, “This drug blocks the receptor for this hormone” and “This drug blocks synthesis of this hormone.” In other words, the research spans multiple levels of analysis. When people talk about “mechanism,” they usually mean that they are looking for a level of analysis that is more finely grained, by at least one step. 

If you are studying an organism, they want an explanation at the level of organs. 

If you’re studying organs, they want an explanation at the level of tissue.

If you’re studying cells, they want an explanation at the level of molecules.

(At least in biology, we usually stop there and don’t require explanations at the quantum level. Thank goodness.)

So seeing the challenges of these problems, and the successes gained from molecular approaches, helps me see why funding agencies like them. They have a proven track record.

I’ve also found that many students struggle to articulate hypotheses. I wonder if early career researchers writing their grants might also be struggling with this.


References

Tinbergen, N. 1963. On aims and methods in ethology. Ethology 20(4): 410-433.

24 March 2021

NSF GRFP award skew in 2021

Matthew Cover pulled numbers I was going to look for on this year’s graduate research fellowships from the National Science Foundation (NSF):

Congrats to the 13 current Cal State University students who were awarded NSF GRFPs!! For context, that is fewer awards to the largest 4-year system in the country (0.5million) than Stanford- 81, MIT- 76, Princeton- 34, Northwestern- 30, Yale- 26, Chicago- 22, Rice- 22, Duke- 20

I’ve been talking about this for a few years now, so why stop?

You know, there are some problems in academia that you recognize are going to be slow to fix because they rely on cultural changes.

This is not one of them. 

The NSF could dictate how awards are going to be distributed. But the agency seems unwilling to have that conversation.

Update, 5 April 2021: Megan Barkdull ran some numbers for this year’s awards. She focused on institutional “prestige” and found, unsurprisingly, that the Ivy League universities and their peers gobble up most of the awards. Her full analysis is here.

Related posts

The NSF GRFP problem, 2020 edition
Fewer shots, more diversity?
The NSF GRFP problem continues

External links

NSF Graduate Fellowships are a part of the problem

23 March 2021

Tuesday Crustie: Cinnamon Toast Crustie

The scandal of the day on Twitter: Jensen Karp asking why there appear to be bits of shrimp in his cereal.

Cinnamon Toast Crunch with apparent shrimp tails

The company claims they couldn’t possibly be shrimp, but those sure look like a telson and uropods to me.

28 February 2021

Australia’s CSIRO expands welfare guidelines to include decapod crustaceans

The Crustacean Compassion website is reporting that Australia’s major government research agency, CSIRO, will now be requiring ethics approval for research on decapod crustaceans.

I’m searching the CSIRO website, but can’t find the specific policy.

External links

Professor Culum Brown on protecting decapods in scientific research

24 February 2021

Austin bats: A case study in successful science communication

Bike rack in the shape of a bat from Austin, Texas
Check out the bike rack on the right. It’s a picture I took in Austin, Texas. People in Austin love bats now.

They didn’t always.

The podcast 99 Percent Invisible just dropped a great episode that looks at how the bats under an Austin bridge went from being viewed as terrors that needed to be eradicated to a major tourist attraction.

Lessons for science communication?

The key advocate for bats, Merlin Tuttle, was no carpetbagger. He moved to Austin and became part of the community he wanted to change.

He built a team. It wasn’t just Merlin – he had started a whole conservation group.

He got the right visuals. He didn’t photograph bats while they were echolocating, because their mouths were open, showing their sharp teeth. He photographed them so they looked like they were smiling.

Perhaps most important, he was patient and never called people stupid.

This is an incredible success story for conservation. It should be a case study in classes on science communication.

External links

The batman and the bridge builder

23 February 2021

It’s my birthday but you get the gift

Yes, I have successfully completed another trip around the sun. Rather than a birthday present, you can do me a favour: pre-order my book!

Better Posters book cover

The Better Posters book is currently scheduled for release in mid-April, 2021. The exact date is hard to say, because the COVID-19 pandemic is still creating delays and uncertainty in the production and shipping process. It’s been a long journey, to say the least. 

30% off
Pre-orders help books tremendously, and I would like to sell enough copies to have to write a second edition. You can pre-order from the publisher here and get a big 30% discount by using the code “POSTERS30” at check-out.

You could also recommend your university librarian purchase this!

Thank you for your support!

External links

Pelagic Publishing site for Better Posters

19 February 2021

Author jitters

I finished reading the latest proofs of the Better Posters book this week. Having just done that a couple of days ago, I appreciate this quote from Charles Darwin.

Charles Darwin

When I think of the many cases of men who have studied one subject for years, and have persuaded themselves of the truth of the foolishest doctrines, I feel sometimes a little frightened, whether I may not be one of these monomaniacs.

This was in a letter to one Dr. W.B. Carpenter in 1859, about none other than Darwin’s most famous work, On the Origin of Species. Darwin wrote the letter the same month the book was released and sold out in a day. I found the quote mentioned in this article.

Re-reading my own book, more than a year after finishing a manuscript that nobody else has seen yet (besides the publisher’s staff), brings up “Did I just write something that nobody else will want to read?” thoughts.

Also: I love Darwin’s hat and think there should be a new version that evolutionary biologists can buy.


13 February 2021

Eagles and Falcon

I’ve mentioned before that the Eagle from Space: 1999 is my favourite spaceship.

What I didn’t know was its role in my favourite space movie, the first Star Wars.

I knew the Millennium Falcon went through several redesigns. The shots inside the Falcon don’t always make sense relative to the exterior, because the Falcon was originally the ship that became the rebel blockade runner.

What I didn’t know was that a good part of the reason the design changed was that it looked just a little too much like the Eagle transporter.

Early Millennium Falcon model next to Eagle transporter model.

Which, side by side, I can see.

External links

FAB Facts: Star Wars’ Millennium Falcon almost looked like a Space:1999 Eagle

The complete history of the Millennium Falcon

11 February 2021

The worm lizard that’s rather like a whale

May I introduce Bipes biporus, also known as the Mexican mole lizard or Belding’s mole lizard.

Mexican worm lizard (Bipes biporus).

It’s an odd and fascinating beast, because it has arms (forelimbs) but no legs (hindlimbs). You can see its front legs very well in the picture above. They even look pretty chunky relative to the head.

Head and forelimbs of Mexican worm lizard (Bipes biporus).

But there are no obvious rear legs.

Mexican worm lizard (Bipes biporus).

There are tiny remnants of leg bones in the back of the animal, but they are not visible just by looking at the animal.

Pelvic skeleton from Mexican worm lizard (Bipes biporus).

Above is Figure 8 from Zangerl (1945).

A more recent paper (Kearney and Stuart 2004) says Blanus (another worm lizard genus) has forelimb skeletal elements but only vestiges of rear limbs. But pictures of Blanus don’t show obvious limbs like Bipes does.

Why do I say this worm lizard is like a whale? Because like whales, only the forelimbs are visible. The hindlimbs are all but lost. In some ways, the worm lizard is a more impressive specimen of evolution, because its forelimbs are still obviously arms, unlike the flipper of a whale, which is so heavily modified that its relationship to our own arms is obscured.

References

Kearney M,  Stuart BL. 2004. Repeated evolution of limblessness and digging heads in worm lizards revealed by DNA from old bones. Proc. R. Soc. Lond. B. 271: 1677–1683. http://doi.org/10.1098/rspb.2004.2771

Zangerl R. 1945. Contributions to the Osteology of the Post-Cranial Skeleton of the Amphisbaenidae. American Midland Naturalist 33: 764–780.

22 January 2021

Classes taught by the dead and copyright

I feel like this should be a bigger story.

HI EXCUSE ME, I just found out the the prof for this online course I’m taking died in 2019 and he’s technically still giving classes since he’s literally my prof for this course and I’m learning from lectures recorded before his passing

..........it’s a great class but WHAT

IDK SOMETHING ABOUT IT IS WEIRD

I mean, I guess I technically read texts written by people who’ve passed all the time, but it’s the fact that I looked up his email to send him a question and PULLED UP HIS MEMORIAM INSTEAD that just THREW ME OFF A LITTLE

...that feeling when a tenured professor is still giving classes from beyond the grave

There’s job security, then there’s this lmfao.

Also like, all dystopian “you can retire when you’re dead” jabs @ the institution aside—this is actually really sad and somebody should have realized that.

This prof is this sweet old French guy who’s just absolutely thrilled to talk paintings of snow and horses, and somehow he always manages to make it interesting, making you care about something you truly thought could not possibly be that interesting.

It’s fucking sad man wtf

Why would you not tell someone that? Do you think students just don’t give a shit about the people they spend months learning from?


And like, it’s shitty that won’t get to thank him for making all of this information so engaging and accessible

I tend to you know...actually talk to my teachers a lot?

Idk man it’s just a weird thing to find out when you’re looking for an email address.

I’m getting a little tired of people comparing teachers to reusable objects so I’m going to go ahead and mute this lmao.

It’s weird to romanticize labor the way some of you do, and it’s weird to act like it’s normal to just not tell students that their teachers dead, goodnight!

Emphasis added.

The last time I was in the faculty senate at UTRGV, a recurring argument was about who owned courses that were created for online teaching. At the time, I thought there was far too much time spent discussing the matter.

But this example shows exactly why that question of who controls course materials matters. It is a sharp and sad reminder that as far as many institutions are concerned, teaching does not require personal interaction if pure Skinner boxing will do. Professors do not even rise to the level of interchangeable cogs. Professors are a mere convenience once they have created content.

External links

Dead man teaching (Added 26 January 2021; The Chronicle of Higher Education caught up)



02 January 2021

I’m in Abominable Science! (No, not like that)

Abominable Science book cover

A friend of mine sent me a screenshot of a page from Abominable Science: Origins of the Yeti, Nessie, and Other Famous Cryptids by Daniel Loxton and Donald Prothero.

It reads:

Invertebrate neuroethologist Zen Faulkes noted further that DeNovo lists no editor, no editorial board, no physical address—not even a telephone number: “The whole thing looks completely dodgy, with the lack of any identifiable names being the one screaming warning to stay away from this journal. Far, far away.”

The excerpt is from this blog post about the claim of sasquatch DNA being sequenced back in 2013. (Most scientists were deeply unconvinced by this.)

I’ve published enough stuff that getting cited is usually not worth a blog post. But having blog posts cited in real physical books still tickles me and is something a little unusual and wonderful.

And I think it speaks to something that makes the rounds now and then: the role of blogging in the 2020s. People occasionally pronounce blogs “dead.” While blogging isn’t a “scene” like it was in the late 2000s, a blog has a lifespan that social media just does not. Being cited in this book is one tiny little piece of evidence of that.

Related posts

Sasquatch DNA: new journal or vanity press?

External links

Abominable Science