07 June 2021

The @IAmSciComm threads, 2021 edition

Twitter heading for IAmSciComm hosted by Zen Faulkes

I’ve started my time hosting the IAmSciComm Twitter account, and will be adding my threads here as I go so that they are easy to find.

Monday, 7 June 2021

Tuesday, 8 June 2021

Wednesday, 9 June 2021

Thursday, 10 June 2021

Friday, 11 June 2021

Saturday, 12 June 2021

Sunday, 13 June 2021

Related posts

The IAmSciComm threads 

06 June 2021

The week of IAmSciComm, 6 June 2021!

I have just taken over the reins of the @IAmSciComm rotating curator Twitter account! This is my second time hosting, and I am gratified to be asked back.

Here is a rough schedule for the week.

  • Monday, 7 June: Show me a poster, graphic, or dataviz! 
  • Tuesday, 8 June: Why streaks matter!
  • Wednesday, 9 June: From blog to book! 
  • Thursday, 10 June: Posters for everyone! 
  • Friday, 11 June: Posters reviewed!
  • Saturday, 12 June: The randomizer!

Join me, won’t you?

Related posts

The IAmSciComm threads

External links

IAmSciComm home page

04 June 2021

Experiments don’t always lead to papers

Drugmonkey tweeted

Telling academic trainees in the biomedical sciences to put their heads down, do a lot of experiments and the papers will just emerge as a side-product is the worst sort of gaslighting.

He’s right, as he often is, but it bears examining why that’s the case.

First, not all experiments should be published. Experiments can have design flaws like uncontrolled variables, small sample sizes, and all the other points of weakness that we learn to find and attack in journal club in graduate school.

Second, even if an experiment is technically sound and therefore publishable in theory, it may not be publishable in practice. In many fields, it’s almost impossible to publish a single experiment, because the bar for publication is high. People usually want to see a problem tackled by multiple experiments. The amount of data that is expected in a publishable paper has increased and probably will continue to do so.

The bar for what is considered “publishable” is malleable. We have seen that in the last two years with the advent of COVID-19. There was an explosion of scientific papers, many of which probably would not have been publishable if there wasn’t a pandemic going on. People were starved for information and researchers responded in force. You have to understand what is interesting to your research community.

Third, experimental design is a very different skill from writing and scientific publishing.

Fourth, it’s not a given that everyone feels the same drive to publish. Different people have different work priorities. For instance, I saw a lot of my colleagues who had big lab groups with a lot of students who churned through regularly. Those labs generated a lot of posters and a lot of master’s theses. According to our department guidelines, theses were supposed to represent publishable work.

But all of that didn’t turn into papers consistently. I think people got positive feedback for having lots of students (and looking “busy”) and came to view “I have a lot of students” as their personal measure of success. Or they just got into the habit of thinking, “I’ll write up that work later” or “Just one more experiment so we can get it in a better journal.”

I had fewer students and master’s theses written than my colleagues, but I published substantially more papers. I say this not to diss my colleagues or brag on myself, but it’s a fact. I made publication a priority.

Publishing papers requires very intentional, deliberate planning. It requires understanding the state of the art in the research literature. It requires setting aside time to write the papers. It requires understanding what journals publish what kinds of results. Just doing experiments in the lab will not cause papers to fall down like autumn leaves being shaken loose from trees.

24 May 2021

It’s Book Day! Better Posters is here!

It’s been a long time coming.

Today is the official release date for the Better Posters book.

Better Posters book in box.

The Better Posters blog started in 2009, inspired in large part by Garr Reynolds’s Presentation Zen blog. Reynolds’s blog made the transition to book in late 2007. As my blog kept going, I quietly entertained the hope that maybe I might be able to write a book that would do for conference posters something like what Reynolds (and many others) did for oral presentations.

Somewhere along the way, I even wrote out a rough, incomplete outline of what a book might contain.

Flash forward to sometime in 2017. Nigel Massen from Pelagic Publishing contacts me about the potential for... a book on conference posters! I send him my crappy outline. Despite its crappiness, Nigel assures me that all books start with a crappy outline. I manage to convince him I can actually do this.

I start writing it in earnest in the first few months of 2018, and submit it on time on Halloween 2019. So yes, the writing and organizing and figure making and emailing people to ask if I can show their posters took well over a year. (In my defense, I was teaching a full course load while writing the book, too).

But proving that if it wasn’t for the last minute, nothing would get done, I was furiously making some fairly significant changes even on deadline day.

The original plan was to release the book in the first quarter of 2020, which would have been in time for the normal summer conference season for academics.

I don’t know if we would have made target, but the COVID-19 pandemic derailed that plan. There was just no way to release the book during a pandemic. The book got pushed back a couple of times until today.

After more than three years, it’s a little hard to believe that other people can now read this thing. It doesn’t quite feel like the books that inspired it. When you have to write 80 thousand words, you just get too tired to emulate anyone else, and what you get represents you and your voice, for good or bad.

I have more to say on the creation of this book, but for now, I will just say that if you get it, I hope you find it useful.

If you’re interested, the book is available in both paperback and ebook versions for Kobo, Kindle, and Nook.

If you cannot buy a copy of the book yourself, you might recommend it to your university library or local community library.

Photo by Anita Davelos.

04 May 2021

Type of scientific papers, very specific niche edition

Last week, Randall Munroe started a whole thing with one of his xkcd cartoons, “Types of scientific papers.”

People were inspired to make different versions for their field, then sub-field, then sub-sub-field, so I took the meme to its logical conclusion.

Types of scientific paper that Zen Faulkes has co-authored

07 April 2021


I’m grateful that:

  1. I had a book in me that I thought was worth writing.
  2. Pelagic Publishing gave me a chance to write it.
  3. I made it through a pandemic to see it in print.

I hope people find it helpful.

03 April 2021

Why “descriptive” science is downplayed

On Twitter, Rejji Kuruvilla asked:

I’m sorry, but WHY are descriptive studies a problem in grant or ms review? If you don’t provide the 1st description or visualization of a biological process, how do you provide the basis for hypothesis or mechanism-driven science?

Oh, I feel this. I have complained about this since my grad school days. One of the biggest scientific endeavors that closed out the twentieth century, the Human Genome Project, was pure description. I love this from Niko Tinbergen, in his most famous paper (1963):

Contempt for simple observation is a lethal trait in any science(.)

Here’s what I think is going on.

First, I suspect “descriptive” as a critique might mean any one of three things.

  1. Not hypothesis driven.
  2. Not investigated by controlled experiment.
  3. Studies only a single level of organization.

The bias against description is a symptom of the fact that basic biological research has long been supported by medical agencies. In the United States, the budget for the National Institutes of Health dwarfs that of the National Science Foundation. (Interestingly, this isn’t the case in Canada.)

Medical agencies aren’t funding science for science’s sake. They are not interested in making discoveries that broaden our understanding of the natural world. They have sick people they want to make better. They want treatments. They want results.

To their credit, most medical funding agencies recognize that investing in basic science pays dividends. That’s why they support it at all. But their priorities are not those of curiosity-driven science.

So it is no surprise that such agencies would strongly favour hypothesis-driven research. Because as much as I love basic description and serendipitous discoveries, I absolutely recognize that hypothesis-driven research, particularly strong inference that pits competing hypotheses against each other, is ferociously efficient at generating new knowledge.

I don’t think hypothesis-driven research is enough. But even I have to say that I don’t think any other approach generates knowledge as quickly or as consistently.

If you get the “too descriptive” critique, you can’t fix it just by working the word “hypothesis” into your proposal at every opportunity. The “descriptive” critique is not necessarily about whether you have a hypothesis at all, but whether you address a hypothesis that is actively investigated by your research community.

You can have a perfectly hypothesis-driven project, but if the hypothesis doesn’t address what the community cares about, it will still get called “descriptive.”

Another aspect of the critique is that “descriptive” studies are contrasted against “mechanistic” studies. Again, I think this is a symptom of the “medicalization” of research funding.

This semester, I was unexpectedly assigned to teach part of a course in human pathophysiology. This is way outside my expertise, and I’ve been forced to learn about medical topics more than I ever have before in my life. And after a few months of digging into bone, muscle, and hormonal disorders, it has surprised me how often developing treatments drills down to understanding molecular interactions.

A description like “There’s too much hormone” is necessary. But treatments are often based on, “This drug blocks the receptor for this hormone” or “This drug blocks synthesis of this hormone.” In other words, the research spans multiple levels of analysis. When people talk about “mechanism,” they usually mean that they are looking for a level of analysis that is more finely grained, by at least one step.

If you are studying an organism, they want an explanation at the level of organs. 

If you’re studying organs, they want an explanation at the level of tissue.

If you’re studying cells, they want an explanation at the level of molecules.

(At least in biology, we usually stop there and don’t require explanations at the quantum level. Thank goodness.)

So seeing the challenges of these problems, and the successes gained from these molecular approaches, helps me see why funding agencies like them. They have a proven track record.

I’ve also found that many students struggle to articulate hypotheses. I wonder if early career researchers writing their grants might also be struggling with this.


Tinbergen, N. 1963. On aims and methods in ethology. Ethology 20(4): 410-433.