19 July 2019

More multimedia: Crustacean pain and nociception talk

Last year, I gave a talk at Northern Vermont University about crustacean pain. It was recorded by the local public access television station, Green Mountain Access TV, and is now up on Vimeo.



Current Topics in Science Series, Zen Faulkes from Green Mountain Access TV on Vimeo.

Big thanks to Leslie Kanat for hosting me and to Green Mountain Access TV for recording it!

Audiopapers

Photograph of microphone
Corina Newsome! This is your fault! You had to go and say:

Can we get scientific journal articles on audiobook? Please?

There is a long thread that follows about possible solutions. But two things emerge:

  1. Software that reads papers aloud automatically doesn’t do a very good job.
  2. Quite a few people want these.

Following my long-standing tradition of, “What the heck, I’ll have a go,” I’d like to present my first audiopaper! It’s a reading of my paper from last year on authorship disputes.

I decided to do this because I wanted to get more mileage out of a mic I’d bought for a podcast interview (forthcoming), and because I still have this discussion in the back of my head.

I often tell students, “Always plot the data”, since different patterns can give the same summary stats. How could I help visually impaired students do something similar?

And the answer is that while there have been experiments in sonification of data, it seems to have stayed experimental and never moved into simple practical use. It got me thinking about how little we do for visually impaired researchers.

I picked my authorship disputes paper for a few reasons.

  1. There are no bothersome figures to worry about describing.
  2. The topic probably has wider appeal than my data driven papers.
  3. The paper is open access, so I wouldn’t run afoul of any copyright issues. 
  4. The paper is reasonably short.

I wrote a little intro and a little outro. I pulled out my mic, fired up Audacity, and got reading. My first problem was finding a position for the mic where I could still see the computer screen so I could read from my paper.

I broke it into sections (slightly more than the number of sections with headings in the paper). I think it took between one and two hours to read the whole thing. It’s not quite a single take, but it’s close.
I’ve since figured out that I can probably do longer sessions, because I worked out how to mark the spots where I stumble or mispronounce words so I can edit them out later. After I screw up a sentence, I snap my fingers three times. This creates three sharp spikes in the playback visualization that are easy to see. That makes it easy to find the mistake, then edit it and the fingersnaps out of the recording.

Screen shot of sound recording in Audacity comparing speech and finger snaps.

I learned that it can be surprisingly hard to say “screenplay” correctly. And I curse my past self who wrote tongue twisters like “collaborative creator credit.”

Editing the recording also took about an hour. Besides cutting out my stumbles and fingersnaps, I cut out some longer pauses and occasional little background sounds. The recording was a bit quiet, so I increased the gain by a few decibels.

Will I do more of these? It completely depends on the response to this experiment. I probably picked the single easiest paper to read and turn into an audio recording. It would only get harder from here. And I have other projects that I should be working on.

If people like this effort, I’ll see about doing more, maybe with better production. (I wanted to put in some music, but that was taking too long for a one-off.)

External links

Resolving authorship disputes by mediation and arbitration on Soundcloud

12 June 2019

The final chapter in the UTRGV mascot saga

When UTRGV was forming, I blogged a lot about the choice of a name for a new mascot. What we got was... something that not a lot of people were happy with at first. In the years since, I guess people have made peace with it, because I haven’t heard much about the name since then.

Well, we’ve waited four, almost five years for a mascot to go with the name, and today we got it.


As far as I know, he doesn’t have a name. Just “Vaquero.”

The website lists what each feature represents, although I think a lot of this stuff is so far beyond subtle that nobody would ever guess what it is supposed to mean.

Scarf: The scarf features the half-rider logo against an orange background. Traditionally, the scarves were worn to protect against the sun, wind and dirt. Today, the scarf is worn to represent working Vaqueros.

Vest: The vest features the UTRGV Athletics symbol of the “V” on the buttons, which match the symbol on the back of the vest representing school spirit and pride.

Gloves: The gray and orange gloves symbolize strength and power. They represent Vaqueros building the future of the region and Texas.

Shirt: The white shirt represents the beginning of UTRGV, which was built through hard work and determination. (Couldn’t it just represent, I don’t know, cleanliness? - ZF)

Boots: A modern style of the classic cowboy boot, these feature elements that are unique to the region and UTRGV. The blue stitching along the boot represents the flowing Rio Grande River which signifies the ever-changing growth in the region and connects the U.S. to Mexico.

The boot handles showcase three stars. The blue star represents legacy institution UT Brownsville, the green star represents legacy institution UT-Pan American and the orange star represents the union of both to create UTRGV.

I don’t know. I was never a fan of the “Vaquero” name and this does not win me over. I just feel like the guy could do with a shave. I will be interested to see if this re-ignites the debate about the name...

Update: Apparently the mascot’s slogan is “V’s up!” Which doesn’t make any sense! And demonstrates questionable apostrophe usage! 

External links

Welcome your Vaquero

11 June 2019

Tuesday Crustie: For your crustacean GIF needs

Been meaning to make GIFs of a sand crab digging, suitable for social media sharing, for a while.

Here’s a serious one.


And here’s a fun one.




10 June 2019

Journal shopping in reverse: unethical, impolite, or expected?

A recent article describes a practice unknown to me. Some authors submit a paper for review, and if the reviews are positive, withdraw it and try again at another “higher impact” or “better” journal.

It is entirely normal for authors to go “journal shopping” when reviews are bad: submit the article, and if the reviewers don’t like it, resubmit it to another journal. But this is the first time I’d heard of this process going the other way. It would never even occur to me to do this.

Nancy Gough tweeted her agreement with this article, and said that this behaviour was unethical. And she got an earful. Frankly, online reaction to this article seemed to be summed up as, “I know you are, but what am I?”

A lot of the reaction that I saw (though I didn’t see all of it) seemed to be, “Journals exploit us, so we should exploit journals!” or “Journals should pay us for our time.” This seemed to be directed at for-profit publishers, but people seemed to be lumping journals from for-profit publishers and nonprofit journals from scientific societies together.

The “People in glass houses should not throw stones” crowd have a point, but I’m not sure it addresses the actual issue. Publishers didn’t create the norms of refereeing and peer review. That was us, guys. Scientists. We created the idea that there are research communities. We created the idea that reviewing papers is a service to that community.

I don’t know that I would call “withdraw after positive reviews and resubmit to a journal perceived as better” unethical, but I think it’s a dick move.

Like asking someone to a dance and then never dancing with them. Sure, there are no rules against it, but it’s not too much to expect a little reciprocity. The “Me first, me only” attitude drags.

Since the whole behaviour is about “glam humping” and impact factor chasing, this seems a good time to link out to a couple of articles that point out the many ways that impact factor is deeply flawed: here and here.

I’ve written before about grumpiness about peer review being due in part to an eroded sense of research community. I guess people don’t want to see journals as part of the research community, but they are.

Related posts

A sense of community


08 June 2019

Shoot the hostage, preprint edition

It takes a certain kind of academic to refuse to review papers. Not because of a lack of expertise, a lack of time, or a conflict of interest, but because you don’t like how other authors have decided to disseminate their results.

I’ve been declining reviews for manuscripts that aren’t posted as preprints for the last couple of months (I get about 1-2 requests to review per week). I’ve been emailing the authors for every paper I decline to suggest posting.

This isn’t a new tactic, and I’ve made my thoughts on it known. But this takes review refusal to a new level. This individual isn’t just informing the editor that he won’t review; he chases down the authors to tell them how to do their jobs.

I’m sure the emails are meant to be helpful, and may be well crafted and polite. Still. Does advocating for preprints have to be done right then?

I see reviewing as service, as something you do to help make your research community function, and to build trust and reciprocity. I don’t think of reviewing as an opportunity to chastise your colleagues for their publication decisions. But I guess some people are unconcerned whether they are seen as “generous” in their community or... something else.

And I am still struggling to work out if there are any conditions where I think it would genuinely be worth it to, say, refuse to review.

Additional, 9 June 2019: I ran a poll on Twitter. 18% described this as “Collegial peer pressure.” The other 82% described it as “Asinine interference.”


Related posts

Shoot the hostage

07 June 2019

Graylists for academic publishing

Lots of academics are upset by bad journals, which are often labelled “predatory.” This is maybe not a great name for them, because it implies people publishing in them are unwilling victims, and we know that a lot are not.

Lots of scientists want guidance about which journals are credible and which are not. And for the last few years, there’s been a lot of interest in lists of journals. Blacklists spell out all the bad journals; whitelists give all the good ones.

The desire for lists might seem strange if you’re looking at the problem from the point of view of an author. You know what journals you read, what journals your colleagues publish in, and so on. But part of the desire for lists comes when you have to evaluate journals as part of looking at someone else’s work, like when you’re on a tenure and promotion committee.

But a new paper shows it ain’t that simple.

Strinzel and colleagues compared two blacklists and two whitelists, and found that some journals appeared on both kinds of lists.

Venn diagram showing overlap of two blacklists and two whitelists. Out of tens of thousands of journals, tens of journals are on at least one whitelist and at least one blacklist.

There are some obvious problems with this analysis. “Beall” is Jeffrey Beall’s blacklist, which he no longer maintains, so it is out of date. Beall’s list was also the opinion of just one person. (It’s indicative of the demand for simple lists that one put out by a single person, with little transparency, could gain so much credibility.)

One blacklist and one whitelist are from the same commercial source (Cabell), so they are not independent samples. It would be surprising if the same source listed a journal on both its whitelist and blacklist!

The paper includes a Venn diagram for publishers, too, which shows similar results (though there is a publisher on both of Cabell’s lists).

This is kind of what I expected. And really, this should be yesterday’s news. Let’s remember that the journal Homeopathy is put out by an established, recognized academic publisher (Elsevier), indexed in Web of Science, and indexed in PubMed. It’s a bad journal on a nonexistent topic that was somehow “whitelisted” by multiple services that claimed to be vetting what they index.

Academic publishing is a complex field. We should not expect all journals to fall cleanly into two easily recognizable categories of “Good guys” and “Bad guys” – no matter how much we would like it to be that easy.

It’s always surprising to me that academics, who will nuance themselves into oblivion on their own research, so badly want “If / then” binary solutions to publishing and career advancement.

If you’re going to have blacklists and whitelists, you should have graylists, too. There are going to be journals that have some problematic practices but that are put out by people with no ill intent (unlike “predatory” journals which deliberately misrepresent themselves). 

Reference

Strinzel M, Severin A, Milzow K, Egger M. 2019. Blacklists and whitelists to tackle predatory publishing: A cross-sectional comparison and thematic analysis. mBio 10(3): e00411-19. https://doi.org/10.1128/mBio.00411-19

Related posts

From predator to mutualist, or: What if predatory journals published reviews?

Dubious journals from major scientific publishers: Homeopathy

06 June 2019

How do you know if the science is good? Wait 50 years

A common question among non-scientists is how to tell what science you can trust. I think the best answer is, unfortunately, the least practical one.

Wait.

There’s one peculiarity that distinguishes parascience from science. In orthodox science, it’s very rare for a controversy to last more than a generation; 50 years at the outside. Yet this is exactly what’s happened with the paranormal, which is the best possible proof that most of it is rubbish. It never takes that long to establish the facts – when there are some facts.

— Arthur C. Clarke, 10 July 1985, “Strange Powers: The Verdict,” Arthur C. Clarke’s World of Strange Powers

Emphasis added. I must have heard Clarke say that on television decades ago, and it stuck with me all this time. I remembered it a little differently. I thought it was, “Scientists generally get to the bottom of things in about 50 years, if there’s any bottom to be gotten to.” I finally got around to digging up the exact quote today.

Related quote:

One of the things that should always be asked about scientific evidence is, how old is it? It’s like wine. If the science about climate change were only a few years old, I’d be a skeptic, too.

— Naomi Oreskes, quoted by Justin Gillis, “Naomi Oreskes, a Lightning Rod in a Changing Climate” 15 June 2015, The New York Times

30 May 2019

“Why do you love monsters?”

My wife asked me, “Why do you love monsters so much?”

Maybe because I can’t remember a time I didn’t know about Godzilla. The name and image – the glowing fins, the roar, the strange look that was almost a hybrid between a tyrannosaur and a stegosaur – were imprinted on me that early.

The same was true of King Kong, and maybe the Universal monsters (the Karloff Frankenstein, the Lugosi Dracula, and particularly the Creature from the Black Lagoon). You absorb a lot of pop culture that was made before you were born in your early years.

But I certainly didn’t learn about Godzilla from being exposed to the movies, because I remember the first time I actually got to see a Godzilla movie. It was in Killarney, Manitoba, when it still had a sit-down theatre. There’s no mention of that movie theatre online now; I think it was the Roxy? (There was a drive-in, too, and I think it’s still open.)

The theatre showed a lot of re-released low budget movies, ostensibly for a young audience, on weekend nights. I remember watching King Kong Escapes (long before I saw the original classic 1933 King Kong), an early anime called Magic Boy, and the oddly titled Super Stooges vs. the Wonder Women.

One week they showed Godzilla vs. The Thing. I think they had this poster to advertise it.


Needless to say, “The Thing” turned out to be rather different than the poster implied. I was not expecting... a moth.


In retrospect, this was a colossal stroke of luck. I later learned that this was generally considered to be the best Godzilla movie via the cover article in Fangoria #1. (I read and reread that article, thinking of all the Godzilla movies I had yet to see and felt I might never see. You had to work hard to be a nerd in those days.) Even though many more Godzilla films have been released since that Fangoria retrospective, lots of fans still rank Godzilla vs. The Thing among the best.

That good initial experience probably helped cement me as a lifelong Godzilla fan. Even now, I can watch that first movie and appreciate it, albeit on a different level than I could when I was young.

I don’t know if my love for Godzilla would have survived if the first movie I had seen had been something like Godzilla vs. Megalon. Or Son of Godzilla. I think that was the last of the original series I saw, and... that suit. Ugh.


But being a Godzilla fan teaches you to value hope over experience. Because, if we’re being honest, there aren’t that many good Godzilla movies. Even for a Godzilla fan, young or old, some are just utterly tedious.

Being a Godzilla fan has also taught me to bide my time. Because so many of the Godzilla movies were bad, they always had a reputation as being “cheap Japanese monster movies” that were easily dismissed. But guess what? The stuff that was derided for years finally earned some respect.

People started to realize just how hard it was to create those special effects. I look at the final monster battle in Destroy All Monsters, knowing what I know now, and am in awe. How they got all those suit actors, and wires controlling monster parts, on film at all amazes me.

People began to write about how great the music of Akira Ifukube was. Composer Bear McCreary noted that the only real competitor that Ifukube’s Godzilla theme has for longevity is the James Bond theme – and the Godzilla theme is older!

The original 1954 version of Godzilla got an art house run, with all the additional American footage with Raymond Burr removed. Key shots were restored, and the so-so dubbing was gone, replaced with subtitles. It was a revelation. No longer was the original Godzilla a cheap Japanese monster movie. It was a haunting classic that evoked the horror of an atomic bomb attack.

I hope the new Godzilla movie I am going to see tonight is good. But as a fan, whether it’s good or bad as a whole is almost beside the point. There will probably be some moments, images, that linger on and impress you even if the movie as a whole doesn’t.

And maybe there will be some more kids who will grow up never remembering how or when they first learned about Godzilla.

P.S.—I bought the T-shirt pictured above last year, and I never wore it. I saved it specifically to wear to the opening night of Godzilla: King of the Monsters. That night is tonight! IMAX presentation and I’m very excited!

That was close

There are many days where I am good at my job. The last weeks of the last semester were not among those days. And I say this with a couple of weeks having passed since the end of the semester.

I have a few semesters under my belt now, and I usually aim to have grading all done and posted several days before grades are due. It gives students a chance to review their grades and check for any last-minute corrections (which happen regularly when there are hundreds of students).

This semester... just did not work like that.

The biggest problem was that I ran into unanticipated problems with one of my online courses. While the regular course is online, the final exam is in person to maintain the integrity of the class. (One of my colleagues was more blunt. “Because they are cheating [profanity].”)

So I booked computer labs, multiple sessions, to administer proctored exams to the students.

Except that the rooms were not exactly as advertised. Some computers flat out wouldn’t work; students could not log into them at all. Some computers crashed repeatedly, and logged students out of the exam – which, because of exam security, would mean the student would have to start the entire exam again from scratch.

These problems would turn, say, a 25-person computer lab into something more like a 20-person lab. I did not anticipate that. I had to figure out some other arrangements for people who were taking the exam on the last scheduled day. I had people taking the exam for days after I thought I would have them all in and would just be grading.

I barely got the grades in on time. I wasn’t able to communicate with students as I wanted. It was very stressful.

And the moral of the story is: don’t book computer labs to capacity. Leave a few seats empty to act as backups.

26 May 2019

The future of education isn’t online

There is a certain class of people who are convinced that higher education as currently taught is a stupid waste of time, and that the future is to move instruction on to the Internet. I see a lot of questions on Quora asking when this will happen.

I think the notion of online learning is appealing for a certain kind of person: technologically savvy and probably rather introverted. I’m one of these people.

But most students are not like that. Most people learn best with face-to-face interactions with instructors.

I am reminded of this by seeing this Twitter thread about Virginia Tech’s Math Emporium, which is basically an online class. Students get a computer lab and no professors.

And students hate it. I think my favourite burn is from one student who wrote:

I would call this place hell on Earth but I don’t want to insult hell.

Even a MOOC company acknowledged that MOOCs have failed to disrupt education (they called them “dead”) in the way some people were predicting.

More data shows MOOCs consistently underperform.

The vast majority of massive open online course (MOOC) learners never return after their first year, the growth in MOOC participation has been concentrated almost entirely in the world's most affluent countries, and the bane of MOOCs — low completion rates — has not improved over 6 years.

I teach some classes online, and I think you can create a good learning environment for some students. But this vision that the future of education is a bunch of YouTube videos and adaptive algorithms is not a vision that I want to see.

Update, 30 May 2019: A new report on online education adds more evidence that it isn’t better, although this one is focused more on K-12 than higher education.

Full-time virtual and blended schools consistently fail to perform as well as district public schools.

11 May 2019

A pre-print experiment, part 3: Someone did notice

In 2016, I wrote a grumpy blog post about my worries that posting preprints is probably strongly subject to the Matthew effect. It was a reaction to Twitter anecdotes about researchers (usually famous) posting preprints and immediately getting lots of attention and feedback on their work. I wanted to see if someone less famous (i.e., me) could get attention for a preprint without personally spruiking it extensively on social media.

I felt my preprint was ignored (until I wrote aforementioned grumpy blog post). But here we are a few years later, and I’m re-evaluating that conclusion.

A new article about bioRxiv is out (Abdill and Blekhman 2019), and it describes Rxivist, a website that tracks data about manuscripts in bioRxiv. Because I posted a paper on bioRxiv, my paper is tracked in Rxivist.

It’s always interesting to be a data point in someone else’s paper.

The search function is a little wonky, but I did find my paper, and was surprised (click to enlarge).


Rxivist showed that there has been a small but consistent number of downloads (Downloaded 421 times). Not only that, but the paper is faring pretty well compared to others on the site.
  • Download rankings, all-time:
    • Site-wide: 17,413 out of 49,290
    • In ecology: 542 out of 2,046
  • Since beginning of last month:
    • Site-wide: 19,899 out of 49,290
My little sand crab natural history paper is in the top half of papers in bioRxiv?

I did not expect that. Not at all.

I know there is an initial spike because I wrote my grumpy blog post and did an interview about preprints that got some attention, but even so. I know there aren’t hundreds of people doing research on sand crabs around the world, so hundreds of downloads is a much wider reach than I expected.

And some of the biggest months (October 2018) are after the final, official paper was published in Journal of Coastal Research. The final paper is open access on the journal website, too, so it’s not as though people are downloading the preprint because they are circumventing paywalls. (Though in researching this blog post, I learned a secondary site, BioOne, is not treating the paper as open access. Sigh.) (Update, 14 May 2019: BioOne fixed the open access problem!)

I am feeling much better about those numbers now than in the first few months after I posted the paper. I never would have anticipated that long tail of downloads years after the final paper is out.

And Rxivist certainly does a better job of providing metrics than the journal article does:


There’s an Altmetric score but nothing else. It’s nice that the Altmetric scores for the preprint and published paper are directly comparable (and I’m happy to see the score of 24 for the paper is a little higher than the preprint at 13!), but I miss the data that Rxivist provides.

Other journals provide top 10 lists (and I’ve been happy to be on those a couple of times), but they tend to be very minimal. You often don’t know the underlying formula for how they generate those lists. The Journal of Coastal Research has a top 50 articles page that shows raw download numbers for those articles, and if you are not in that list, you have no idea how your article is doing.

While I still never got any feedback on my article before publication, I don’t feel like posting that preprint was a waste of time like I once did.

References

Abdill RJ, Blekhman R. 2019. Tracking the popularity and outcomes of all bioRxiv preprints. eLife 8: e45133. https://doi.org/10.7554/eLife.45133

Faulkes Z. 2016. The long-term sand crab study: phenology, geographic size variation, and a rare new colour morph in Lepidopa benedicti (Decapoda: Albuneidae). bioRxiv. https://doi.org/10.1101/041376

Faulkes Z. 2017. The phenology of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae). Journal of Coastal Research 33(5): 1095-1101. https://doi.org/10.2112/JCOASTRES-D-16-00125.1 (BioOne site is paywalled; open access at https://www.jcronline.org/doi/full/10.2112/JCOASTRES-D-16-00125.1)

Related posts

A pre-print experiment: will anyone notice?
A pre-print experiment, continued

Fiddly bits and increments

External links

Sand crab paper on Rxivist

10 May 2019

Reliable shortcuts

There’s an old saying that if you build a better mousetrap, the world will beat a path to your door.

I think that’s true of shortcuts, not mousetraps.


Everyone wants reliable shortcuts. They don’t want to have to assess every available option every single time.

Best seller lists, Consumer Reports, “Two thumbs up!” from Siskel and Ebert, “Certified fresh” on Rotten Tomatoes, “People also shopped for” on Amazon, “Five stars” on Yelp!, and awards shows are all efforts to create shortcuts.

An amazing number of arguments in academia are about shortcuts. Almost every debate about tenure and promotion and assessments of academics I have seen or read is about shortcuts. Arguments about the GRE are about shortcuts.

I got thinking about shortcuts because of this article on what journals go into PubMed.

For some members of the scientific community, the presence of predatory journals – publications that tend to churn out low-quality content and engage in unethical publishing practices – has been a pressing concern.

Rebecca Burdine is among the concerned, because she advised people to use PubMed as a shortcut.


I could tell parents “researching” their rare disease of interest that if it wasn’t on PubMed, then it shouldn’t be given lots of weight as a source.

Stephen Floor thinks the problem is even wider:

This has also contributed to the undermining of “peer reviewed” as a measure of validity.

But again, “peer reviewed” is a shortcut. Anyone who’s been in scientific publishing for a while knows that assessing scientific evidence is messy and complicated. Every working scientist has their own “That should never have gotten past peer review!” story.

We will never, ever get rid of shortcuts. People crave certainty and simple decision making rules. But we should talk about using shortcuts in science in realistic ways.

It is not reasonable to expect any shortcut to be perfectly reliable all the time. Don’t ask, “Which shortcut is better?” but “How can I use a few different shortcuts?”

Unfortunately, scientists who understand the nuances of a situation often do a shoddy job of conveying that nuance. Or maybe they just get tired of being pressed for shortcuts. So we have kind of brought this on ourselves.

External links

Academics Raise Concerns About Predatory Journals on PubMed


25 April 2019

Rejected for literary allusions

Ton van Raan had a paper rejected for referring to “sleeping beauty.” This is a term that van Raan and others have used to describe papers that aren’t cited for a long time and then start accumulating citations years after publication.

Rejection always makes academics grumpy, and van Raan is grumpy about this, unleashing the “political correctness gone mad” trope in his defense.

I went through something similar myself. I know first hand how easy it is to get defensive about deploying a common metaphor. But after you calm down and think about it, there is often a fair point to the criticism.

I think the editor had a good criticism but a bad implementation. Sleeping Beauty is problematic. Like many fairy tales, modern North Americans tend to know only very sanitized versions of the story. But even in the “clean” version, the part everyone remembers is pretty creepy.

Sleeping Beauty saying to Prince, "Whoa, what part of me sleeping here alone implied consent?"

I think the editor was right to ask for the author to not use Sleeping Beauty as a metaphor for scientific papers. Maybe a passing comment that this is how they have been referred to in the literature would be fine.

The editor may have failed in two ways.

The first was in rejecting the paper for a bad metaphor alone. If the scientific content was correct, it would seem that “Revise and resubmit” might have been a more appropriate response. It seems churlish to chuck the paper entirely for a poor metaphor, particularly one that is, for better or worse, already used by others in the literature.

The second was not offering a positive alternative to the Sleeping Beauty reference. Maybe these papers could be “buried treasures” or something that might be less problematic. There are many ways to express ideas. And neither an editor nor an author should dig in their heels over any particular way of expressing an idea.

Both of these problems assume the account offered by van Raan is accurate. Maybe there were other reasons for rejecting the paper besides the fairy tale reference. We don’t know.

Criticism can be valuable. Criticism plus suggestions for corrections are even better.

Hat tip to Marc Abrahams.

Related posts

Wake up calls for scientific papers

External links

Dutch professor astonished: comparison with Sleeping Beauty leads to refusal of publication

Scientists’ unguarded moments

Earlier on Twitter, I shared an Instagram picture of Dr. Katie Bouman at work, imaging a black hole.


This has been a widely shared picture, and I was a little surprised when I saw a friend on Facebook question it. She said that when there were scientific discoveries that showed images of men, the men looked more composed and professional. This made the imaging of a black hole look like an almost accidental “Did I do that?” moment.

Certainly Bouman’s picture was not the only one available. Here’s a picture of another member of the black hole team, Kazunori Akiyama.


Both Akiyama and Bouman are at computers, looking at the historic images they made. But Akiyama’s is almost certainly a staged, posed picture.

I asked what she thought of this picture of the New Horizons team, looking at images of Pluto close up for the first time. The man in the middle is project leader Alan Stern.


She replied that she thought it did a disservice to the science and to Stern if it had been widely reported. It had been seen quite widely, though probably not as much as the Bouman picture.

This interested me, because it spoke to the risks and rewards of scientists showing their unguarded moments.

On the one hand, these spontaneous moments capture something that is, I think, deeply human. Excitement. Joy. Achievement. Surprise.

But I also get that these are moments where people look undignified or vulnerable. It’s easy to mock people for looking goofy. Especially for big projects that have a lot of taxpayer money behind them, it might not be a good look. It can look like people just screwing around.

Personally, I think that sharing those spontaneous moments is worth it. My wife has been watching a lot of Brené Brown talks recently, and Brown talks a lot about the concept of vulnerability, and how vulnerability is one of the best predictors of courage.

I think scientists could use a lot more encouragement. And if that means looking in a way that surprises people, that’s all right by me.

Since I’m talking black holes, there was some discussion over who did what. Bouman got a lot of media attention, and Sara Issaoun pointed out that the image was not the product of Bouman’s algorithms alone. Weirdly, this snowballed into some people trying to undermine her contribution entirely. Andrew Chael ably tackled the trolls.

References

First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole

External links

Meet one of the first scientists to see the historic black hole image


12 April 2019

“Open access” is not synonymous with “author pays”

Wingfield and Millar have a well-meaning but misleading article, “The open access research model is hurting academics in poorer countries.” They say:

The open access model merely changes who pays. So rather than individuals or institutions paying to have access to publications, increasingly, academics are expected to pay for publishing their research in these “open access” journals. ... The bottom line is that payment has been transferred from institutions and individuals paying to have access to researchers having to pay to have their work published.

The first sentence is correct. Even the second is correct. It is true that there are now more journals that require article processing charges than there used to be. Importantly, though, the phenomenon of authors paying is not new. “Page charges” existed long before open access.

But they lose all nuance in the third sentence and commit a category error. They are confusing “freedom to read” with “business model.” These two things are not the same.

There are many counterexamples to their central premise. SciELO journals are open access, but have no article processing fees. I could go on.

I am not saying that there is not a concern about the effects of article processing charges. It isn’t even restricted to scientists in “poorer countries.” Michael Hendricks, a biologist at one of Canada’s major research universities (hardly a “poorer country” by any measure, and not even a “poorer institution” by any measure) is concerned about the cost of article processing charges. He wrote:

US$2500 is 1% of an R01 modular budget. It is 2.5% of the average CIHR Project grant. It’s 10% of the average NSERC grant.

Add to that the vastly differing support across universities for article processing charges (ours is $0). There is no way around the fact that shifting publication costs from libraries to PIs imposes a massively different burden according to PI, field of science, nation, and institution.

The solution is that universities should pay article processing charges by cancelling subscriptions (with huge $ savings). But they generally aren’t. The only way I see to force the issue is for funders to make article processing charges ineligible, which will be seen as an attack on open access.

It’s a real problem: library subscription costs are staying the same or going up. At the same time, more and more grant money is being spent on article processing charges. The public paying even more for science dissemination than they were before is not what we want. Funders and/or universities have to stop this.

But looking back up to the counterexample, SciELO, shows something important. It shows that you can create open access journals with alternative business models that are not “author pays.” It’s unusual, maybe even difficult, but it’s not impossible.

That’s a line we should be pursuing. Not dumping on open access because people can’t distinguish between “common” and “necessary.”

External links

The open access research model is hurting academics in poorer countries

03 April 2019

Recommendation algorithms are the biggest problem in science communication today


Having been interested in science communication (and being a low-level practitioner) for a while, I recognized that there are a lot of old problems that recycle themselves. Like, “Should we have scientists or journalists communicating science to the public?” (Why not both?) “Is there value in debunking bad science, or does it make people dig in and turn off?” “Why can’t people name a living scientist?” “You have to tell a story...”

Having followed social controversies about science (particularly the efforts of creationists to discredit evolution) for a long time, I was almost getting bored seeing the same issues go ‘round and ‘round. But now I think we are truly facing something new.

Recommendation algorithms on social media.

You know these. These are the lines of computer code that tell you what books you might like on Amazon based on what you’ve bought in the past. It’s the way Facebook and Instagram deliver ads that you kind of like seeing in your feed. And it’s how Netflix and YouTube show you what you might want to watch next.

I think recommendation algorithms may be the number one problem facing science communication today.

And of these, YouTube seems to be particularly bad. How bad is YouTube? Pretty bad, in that it is very effective at convincing people of false stuff.

Interviews with 30 attendees revealed a pattern in the stories people told about how they came to be convinced that the Earth was not a large round rock spinning through space but a large flat disc doing much the same thing.

Of the 30, all but one said they had not considered the Earth to be flat two years ago but changed their minds after watching videos promoting conspiracy theories on YouTube.

You watch one thing, and YouTube recommends something even a little crazier and more extreme. Because YouTube wants you to spend more time on YouTube. People are calling this the “rabbit hole effect.”

It’s not just happening over scientific facts, either. Many have noted that YouTube is having a similar effect in politics, with many branding it a “radicalization machine.”

Reporter Brandy Zadrozny summarizes YouTube’s defense thus:

YouTube’s CPO says the rabbit hole effect argument isn’t really fair because while sure, they do recommend more extreme content, users could choose some of the less extreme offerings in the recommendation bar.

So we need humans to fix the problem, right? Human content moderation is the answer! Well, maybe not, because repeated exposure to misinformation makes you question your world view.

Conspiracy theories were often well received on the production floor, six moderators told me. After the Parkland shooting last year, moderators were initially horrified by the attacks. But as more conspiracy content was posted to Facebook and Instagram, some of Chloe’s colleagues began expressing doubts.

“People really started to believe these posts they were supposed to be moderating,” she says. “They were saying, ‘Oh gosh, they weren’t really there. Look at this CNN video of David Hogg — he’s too old to be in school.’ People started Googling things instead of doing their jobs and looking into conspiracy theories about them. We were like, ‘Guys, no, this is the crazy stuff we’re supposed to be moderating. What are you doing?’”

Mike Caulfield noted:

I will say this until I am blue in the face – repeated exposure to disinformation doesn’t just confirm your priors. It warps your world and gets you to adopt beliefs that initially seemed ridiculous to you.

Propaganda works. Massive disinformation campaigns work. Of course, people with a point of view and resources have known this for a long time. Dana Nuccitelli noted:

(T)he American Petroleum Institute alone spent $663 million on PR and advertising over the past decade - almost 7 times more than all renewable energy trade groups combined.

Meanwhile, scientists are still hoping that just presenting facts will win the day. I mean, the comments coming from the first day of a National Academy of Sciences colloquium feel much like the same old stuff, even though the title seems to hint at the scope of the problem (“Misinformation About Science in the Public Sphere”). It feels like the old arguments about the best way that individual scientists can personally present facts, ignoring the massive machinery that most people are connected to very deeply: the social, video, and commercial websites that are using recommendation algorithms to maximize the time people spend on site.

The good news is that as I was writing this, the second day seems to be much more on target. And one speaker is saying that the biggest source of news is still television. Sure, but what about all the other information people get that is not news?

The algorithm problem requires a deep reorienting of thinking. I don’t quite know what that is yet. It is true that this is a technological problem, and technological fixes may be relevant. But I think Danah Boyd is right that we can’t just change the algorithms, although I think changing algorithms is necessary. We have to change people, too, beyond an “If they only knew the facts” kind of way. Because many do know the facts, and it doesn’t settle matters. But changing culture is hard.

I think combating the algorithm problem might require strong political action to regulate YouTube in the way television networks in the US were (and, in other countries, still are) regulated. But the big social technology companies are spending millions in lobbying efforts in the US.

Update, 17 May 2019: A hopeful tweet on this matter.

Promising data point for YouTube’s anti-conspiracy push (on 1 topic, at least): the people in my flat-earth FB group are very mad that the site has stopped shoveling lunatic videos into their feeds.

Update, 27 June 2019:

A good illustration of this phenomenon recently appeared in a piece for MEL magazine about an increasingly disturbing trend — women whose once-promising romantic relationships implode after their boyfriends become “redpilled.” For the benefit of the blissfully uninitiated: to be “redpilled” means to internalize a set of misogynistic far-right beliefs popular with certain corners of the internet; the product of a noxious blend of junk science, conspiracy theory, and a pathological fear of social progress.

The men interviewed in the piece, once sweet and caring, started changing after going down a rabbit hole of extremist political content on YouTube and involving themselves in radical right-wing online communities. Convinced of their absolute correctness, these men became at first frustrated, then verbally abusive once they realized their female partners did not always agree with their new views.

From here.

External links

The trauma floor
Study blames YouTube for rise in number of Flat Earthers
Google and Facebook can’t just make fake news disappear

YouTube’s algorithms can drag you down a rabbit hole of conspiracies, researcher finds
Google, Facebook set 2018 lobbying records as tech scrutiny intensifies
Americans are smart about science
The magical thinking of guys who love logic

19 March 2019

The legality of legacy admission

In the light of the “Operation Varsity Blues” college scandal last week, a lot of people were complaining about university admissions generally. I learned that a lot of people:

  1. Think university admissions are hopelessly corrupt across the board, and that these cases were not “a few bad apples.”
  2. Are super grumpy about “legacy admission.” 

I knew about court cases about affirmative action (including the current one at Harvard), but I got curious as to whether legacy admissions had ever faced a legal challenge, and if so, what the basis was for keeping them.

I found one case that concerned legacy admissions: Rosenstock v. The Board of Governors of the University of North Carolina. This is the relevant bit about legacy admissions:

Plaintiff also attacks the policy of the University whereby children of out-of-state alumni are exempted from the stiffer academic requirements necessary for out-of-state admission. Again, since no suspect criteria or fundamental interests are involved, the State need only show a rational basis for the distinction. In unrebutted affidavits, defendants showed that the alumni provide monetary support for the University and that out-of-state alumni contribute close to one-half of the total given. To grant children of this latter group a preference then is a reasonable basis and is not constitutionally defective. Plaintiff's attack on this policy is, therefore, rejected.

The questions raised here are, in large part, attacks on administrative decision-making, an area where the federal courts have not and should not heavily tread. Plaintiff has not shown a constitutional reason for abandoning this judicial policy.

The court is saying legacy admissions are okay because the university can make money. And it’s not up to courts to change administrative decisions.

Regardless, I kind of suspect that legacy admissions are going to come under increasing pressure because they are, as the pundits say, “a bad look” for universities.

External links


Six of the top 10 universities in the world no longer consider legacy when evaluating applicants—here’s why

What we know so far in the college admissions cheating scandal

18 March 2019

The Zen of Presentations, Part 72: Hasan Minhaj is one of the best presenters today

At any given moment in time, there are people who are well known for giving good presentations.

In the early part of the twenty-first century, many people pointed to Steve Jobs as an example of what a great presenter could do. In her book Resonate, Nancy Duarte says, “Jobs had the uncanny ability to make audience engagement appear simple and natural.” She points to the iPhone launch in 2007 as one of the best product launches of all time.

I often pointed to Hans Rosling, who leapt into people’s awareness with some of the first TED talks in 2006. Indeed, Rosling practically provided the template for what a TED talk was. Others followed in his footsteps for years to come.

But we lost Jobs in 2011, and Rosling in 2017.

But now I would like to nominate the person who is, I think, one of the best presenters of this time.


Hasan Minhaj.

You might object that Minhaj is a stand-up comedian, and stand-up isn’t really a presentation in the usual sense. That’s certainly what I might have thought when I had only seen him on The Daily Show. Funny, yes. But a great presenter?

But then I saw his special Homecoming King. It’s stand-up, but like many one-person shows, there’s a strong narrative running through it. It mostly revolves around a prom date gone wrong.


But it’s not just Minhaj on a stage. He has a screen that shows a lot of images that are relevant to what he is describing. In other words, his Peabody Award-winning special is a PowerPoint presentation. A high-end and heavily disguised PowerPoint presentation, but it’s not such a different beast than many.

His Netflix series Patriot Act is less personal but more topical, and Minhaj pushes his presentation skills even further. In each episode, Minhaj does a deep explanation of one or two subjects. In science communication terms, Minhaj is making “explainers.”



And these are data-driven episodes on somewhat esoteric subjects. You don’t see a lot of coverage of the Indian general elections in the news in North America.



Chinese censors, streetwear hype, drug pricing, and affirmative action all come under the microscope. (In light of the university admissions scandal that broke last week, the first episode, about university admissions, is worth a watch, too, as Minhaj lays out the background for the lawsuit against Harvard about admissions that is being backed by white guys trying to destroy affirmative action.)

Patriot Act is the only show I can think of that wouldn’t surprise me if it did an entire episode about Plan S and academic publishing.

Why do I think Minhaj’s presentations are the best around right now?

Obviously, Minhaj is legit funny. But he isn’t afraid to tell niche jokes. In one episode, he says something like, “I tell jokes for four people at a time.”

Minhaj’s show is committed to evidence and data. Minhaj says he has a team of researchers that help him look smart, but most shows wouldn’t bother. Most comedy shows would just be content to have their comedian mouth off whatever thoughts they have, maybe with some light fact checking. But Minhaj is not just expressing opinions. He’s building arguments.

Minhaj is concise, and has the ability to sum up complicated backstory in a few short, well-chosen sentences. Almost accidentally, this makes him fast. I sometimes think an episode of his show would be one of the best “Intro to political science” lectures on any campus, but then I realize that it would be too quick for students to take notes. But you’re not taking notes, so it doesn’t matter. You can just enjoy the delivery and flow.

And Patriot Act is filmed in front of an audience. While his monologues are obviously incredibly tightly scripted, Minhaj still pays attention to his audience. He goes off script for a few seconds to respond to them and interact with them.

While I said Minhaj’s lectures wouldn’t be too effective for students trying to take notes, I will be taking notes: not on the content, but to figure out what makes his presentations so good.

11 March 2019

“Crustacean Compassion” advocacy group gives one-sided view of evidence

This morning I learned of the UK advocacy group “Crustacean Compassion”, which wants to change laws around the handling of crustaceans in the United Kingdom. They are currently engaged in a campaign to recognize decapod crustaceans as having “sentience.”

They claim to be an “an evidence-based campaign group,” but when I went to their tab on whether crustaceans feel pain, I was presented with a one-sided view. Not lopsided. One-sided.

All the evidence comes from one lab: that of Professor Robert Elwood.

Weirdly, the page is so singularly built from Elwood’s work that it even omits research from other labs that could be viewed as supporting their premise that decapod crustaceans might feel pain.

They present experiments that have not been independently replicated as though they were unquestioned. They discuss none of the interpretive problems behind those experiments. They act as though there is a clear consensus within the scientific community when there is not (review in Diggles 2018).

Their full briefing for politicians is similarly one-sided.

In science, single studies are not definitive. Studies all arising from a single lab are not definitive.

If you claim to be all about the evidence, you have to present all the evidence, not just the evidence that supports your position. Some of the individuals behind the group have academic and scientific backgrounds, but judging from their bios, none have training working with invertebrates. None have training in neurobiology.

While I have reservations about the information provided by their group, the part of me that loves graphic design gives them full points for their clever logo (shown above).

References

Diggles BK. 2018. Review of some scientific issues related to crustacean welfare. ICES Journal of Marine Science: fsy058. https://doi.org/10.1093/icesjms/fsy058

Puri S, Faulkes Z. 2015. Can crayfish take the heat? Procambarus clarkii show nociceptive behaviour to high temperature stimuli, but not low temperature or chemical stimuli. Biology Open 4(4): 441-448. https://doi.org/10.1242/bio.20149654

Related posts

Crustacean pain is still a complicated issue, despite the headlines

What we know and don’t know about crustacean pain

Switzerland’s lobster laws are not paragons of science-based policy

28 February 2019

When the internet fails, you feel gaslighted by the world

One of the downsides of living in a world where you can verify so many things with a single search in Google is that when you can’t do that, you seriously start to wonder if you’re right in the head.

For years, I remembered a song I heard when I was young. Because I was young, I don’t think I ever knew the name of the artist, but I remembered the chorus. Every now and then, I would go to Google and search for lyrics I remembered from the chorus.

I’d like to ride a big white horse
‘Cause I can’t ride with the damned
Or maybe drive a racing car
And steer it with one hand
I’d live a life of danger
Most any way that I can
‘Cause that’s the kind of man
That I am
‘Cause that’s the kind of man
That I am

And every time: nothing. I was back at it again today after an NPR interview with Michael Martin Murphey reminded me of another song I remembered but could never track down (“Wildfire”). Every time I tried googling those lyrics, I’d get songs from the wrong decade, sometimes the wrong century. But somehow, today, I finally found the right combination of search terms to find a top 40 Canadian hit:


“That’s the Kind of Man That I Am” by The Good Brothers! Even knowing the artist and title of the song, and even though it was a top 40 hit on Canadian country radio stations, the lyrics do not appear to be entered in any lyric database anywhere.

Now, if I could just find a song from around the same time called “Shotgun Rider.” (And no, I don’t mean the BTO song. There are a lot of songs titled “Shotgun Rider.” Marty Robbins, Blue Jug, Tim McGraw...)

11 February 2019

The weekly science news cycle

In politics, there is constant reference to the “news cycle”, which is generally considered to be 24 hours. The next day is not quite a blank slate, but things older than that are not “news.”

In science, there is also a news cycle, but it’s not a daily cycle. It’s a weekly one.

The scientific news cycle starts on Wednesdays, with the release of that week’s issue of Nature. It continues Thursday, with the release of that week’s issue of Science. Love them or hate them, the papers dropped by these two journals in mid-week drive much of the media coverage for science – whether newspapers, television, radio, or something else – for the rest of the week.

These journals are well tied into the traditional news ecosphere. Journalists often have advance notice of the big stories via embargoed press releases, so the most connected media outlets are often dropping headline stories about Nature and Science papers in the middle of the week.

Social media discussions are also heavily influenced by these two glamour magazines. You often see early reaction on science Twitter the day of release, and longer reactions (blog posts, for instance) before the weekend is out.

Friday and Saturday are days for continuing, slightly longer and more in-depth coverage. Many science radio shows (also available as podcasts) air on Friday or Saturday, and they almost invariably feature interviews with authors who had a publication in Nature or Science that week. I’m thinking of NPR’s Science Friday, CBC’s Quirks and Quarks, and The Science Show on ABC Radio National (the Australian Broadcasting Corporation, not the US TV network).

These are also days where websites and media companies that don’t have their own science reporters learn about stories from other reporters. A large amount of media coverage of science says, “As reported in The New York Times...”, not “A new paper in Nature...”.

Sunday is the day for deep dives and long reads about science. Newspapers and magazines often put out their long form feature articles or investigative pieces. It’s the day for things that are “not news, but still important.”

Monday and Tuesday are reaction days for some in the scientific community, particularly low-key users of social media. These are the catch-up days for people who heard about a story that broke the previous week, maybe from a radio show or a New York Times article, but didn’t tweet or comment about it because they weren’t back at their desks until Monday.

05 February 2019

Second letter in Science!

I have yet another story of a publication that started because I was wasting time on the Internet. I say again: blogging is one of the best ways for an academic to work out ideas.

This new publication is my second brush with the realm of glamour magazines in my career. It’s a letter again and not a research article, but I’ll take it.

Blog readers and maybe some of my Twitter followers might recognize the arguments. They are the same ones I made in this blog post previously. Somewhere along the way, I found myself referencing it in tweets often enough that I thought, “Maybe I can bring this to a wider audience.” More people read the glamour magazines than my blog. I chose to try for Science because it seemed to me that GRE discussions were most relevant to the US.

While the letter is short, it actually expanded from what I originally submitted. Letters editor Jennifer Sills pushed me to expand the last paragraph to include a few sentences about possible solutions. This was a good push, and the letter is better because of it.  I’ll quote Clay Shirky again (emphasis added):

(W)hat are the parent professions needed around writing? Publishing isn’t one of them. Editing, we need, desperately.

While blogging is one of the best ways I have found to develop and work through academic ideas, an editor who genuinely edits is invaluable in fine-tuning and honing ideas.

References

Faulkes Z. 2019. #GRExit's unintended consequences. Science 363(6425): 356. https://doi.org/10.1126/science.aaw1012

Related posts

Letter in Science!
I come to bury the GRE, not to praise it
Publishing may be a button, but publishing isn’t all we need

04 February 2019

“We need to do a better job training PhDs in...”

I went looking for how many ways people completed some version of the sentence, “We need to do a better job training PhDs in...”:


And that is with a couple of very trivial searches. I daresay many more entries could easily be added to this list.

As an educator, I never want to be the person saying that we shouldn’t train people. Heck, one of the entries on the list above is from me! But there is a finite number of things we can expect to teach people in a finite amount of time. I see two problems.

First, faculty tend to think, “We can do this in house.” They underestimate the complexities of other fields, and they don’t reach out to experts in those fields. So the training risks being done by amateurs.

Second, long lists like this tend to encourage superficial “box checking.”

It may be that this “Train them in everything” attitude is a symptom of the loss of support jobs in universities. Faculty are increasingly expected to do everything. If a department doesn’t have a staff photographer, who will do it? Faculty. Professors have to be one-person bands, capable of playing every instrument, because universities don’t want to hire an orchestra (so to speak).

This is not a realistic expectation of academics. We should not expect to train grad students to be experts in everything, because nobody can be an expert at everything.

If I had the ability, I would rather see departments create many more staff positions for some of the tasks above. Expand the pool of staff experts so that faculty don’t have to try to do everything.

Additional, 3 June 2019: Kieran Healy has a great thread underlining why graduate programs tend to push towards “Train students in everything”: the brutal academic job market.

Many grad programs exist in a state of permanent revolution that is fueled by the real anxiety produced by uncertainty about one’s future work and prospects. This creates demands that something be done to make those anxieties go away. ... the core uncertainty—and thus the anxiety—is ineradicable through policy, especially in a brutal labor market the program has no control over.

Assessing the "treatment effect" of program structure is itself infected by the core uncertainty about who will “do well” and why. The market is tiny. Admission processes deliberately neutralize many elements that would predict success if literally everyone could be admitted.

A common response is to wish one could inoculate against this uncertainty by "requiring" people to learn everything or somehow be intellectually fully-formed right away. But this is impossible; faculty will disagree about what it means; students will likely rebel against it.

In practice you have to be humane about the reality that underpins the anxiety, while remaining clear-eyed about what a program can and can’t do about it at the level of training. The levers that can be pulled aren't attached to the things you really want to adjust.

Related posts

All scholarship is hard
18 January 2019

Low on “agreeableness”


Grumpy prof is grumpy (low agreeableness score) just because, not because he’s stressed (low negative emotion score).

Test results of the “Big five” personality traits. Take the test here.

Hat tip to Adam Calhoun.

External link

Most personality quizzes are junk science. Take one that isn’t.

14 January 2019

How to fix a lab fail

I did my fair share of physiological experiments with neurons when I was a trainee.

The experiment was an attempt to get a handle on whether a particular pathway between sensory neurons A and interneurons B involved few neurons (maybe even a single direct connection; monosynaptic) or many neurons (polysynaptic).

One way you can test whether you have few connections or many is by messing with the physiological saline the neurons are sitting in. Physiological saline is a solution that mimics the fluid inside the animal the neurons are normally found in. Different species have different mixes of salts and other chemicals that keep the neurons alive and firing. There is usually a lot of good ol’ sodium chloride (table salt), potassium chloride (salt substitute for some people), and so on.

Normally, that physiological saline contains some calcium, because calcium is what causes neurons to release neurotransmitters. If you lower the amount of calcium in your saline, you make each connection between neurons more and more likely to fail. Adding ions that compete with calcium (like magnesium) makes this plan even more effective.

Pathways with a single connection will often keep working in this altered saline: when you stimulate the A neurons, you still see a response in the B neurons.

Pathways with many connections usually stop working in this altered saline: when you stimulate the A neurons, you are unlikely to see activity in the B neurons. Every synapse in the chain is a possible point of failure, so the more synapses a signal has to cross, the less likely it is to make it all the way through (see the sketch below).
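To make that reasoning concrete, here is a toy Python sketch. This is mine, not anything from the original experiments; the per-synapse success probability p and the four-synapse chain length are made-up illustration numbers. If each synapse independently relays a signal with probability p, a chain of k synapses succeeds with probability p to the power of k, so lowering p hurts long chains much more than single connections.

# Toy model (illustration only): each synapse relays a spike independently
# with probability p; altered low-calcium saline lowers p at every synapse.

def chain_transmission(p, k):
    """Probability that a chain of k synapses relays a signal."""
    return p ** k

# p = 0.95 stands in for normal saline; lower values for altered saline.
for p in (0.95, 0.5, 0.2):
    mono = chain_transmission(p, 1)  # single direct connection
    poly = chain_transmission(p, 4)  # hypothetical four-synapse chain
    print(f"p = {p:.2f}: monosynaptic {mono:.2f}, polysynaptic {poly:.3f}")

At p = 0.5, for example, the single connection still works half the time, but the hypothetical four-synapse chain gets a signal through only about 6% of the time. That difference is the whole point of the manipulation.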

I was doing this experiment, and the results kept being... disappointing. I couldn’t understand the results. And the neurons seemed to keep dying faster than usual. I talked to my supervisor about these experiments. We went back and forth a bit, and at one point, my supervisor asked,

“What did you mix the solution in?”

I replied, “I mixed it in...”

Freeze frame. Record scratch.

It was at that precise, exact instant – after that exact word but before I said the next – that I simultaneously recognized and solved the problem that had been vexing me in the lab. If I were a cartoon, a lightbulb would have clicked on above my head. If I were in a modern movie, a high-speed montage would have run in front of my eyes, showing the key moments where I went wrong.

All of this happened in a pause that lasted about a second.

But I couldn’t stop myself from finishing the sentence, even though I knew that I was about to reveal myself as having made a dumb, amateur, “I should damn well have known better” mistake.

“...distilled water.”

My supervisor laughed. Not loudly. A chuckle, I think would be the appropriate description. I think the laugh was not only because he knew the solution as soon as I said it, but because he saw the look on my face that revealed I’d experienced “Aha!” and “D’oh!” moments simultaneously.

I’d put the calcium substitutes in pure water. Not saline with all the other salts that were needed. No wonder the neurons kept dying.

I fixed the saline and went back to trying the experiment. The neurons were much happier, although it turned out the results of the experiment were so muddy and hard to interpret that we never published that data in a paper. (It appeared on a couple of conference posters.)

And the moral of the story is: Whenever you have a problem in the lab, make sure to tell someone else. Because sometimes, you might just solve your own problem.

P.S.—I told this story on video as part of the SICB lab fail contest in 2018. I did not win. I wonder if the video is kicking around someplace...

P.P.S.—I didn’t know it until years later, but I was using a technique called “rubber duck problem solving.”

P.P.P.S.—In the original account, the duck was stuffed (as in, a hunting trophy, not plush fur), not rubber.

P.P.P.P.S.—When I was deep into CCGs like Legend of the Five Rings, I talked to a game company staffer who answered the phone. Her name was Mindy. Mindy would get players calling in with rules questions all the time. If you know CCGs at all, you know there are many complex rules questions, because there are a lot of possible interactions between cards. Mindy said she would often get people really wanting to ask detailed questions about the Ninja Shapeshifter or something. But because Mindy was in customer service and events, not game design, she didn’t have all the cards memorized. She’d ask the person on the phone to read the card out loud to her.

She said she lost track of the number of times the person would start reading the card to her, pause, and then say, “Oh.”

They answered their own question just by reading the card out loud.

Say stuff out loud, people. I’m telling you. It works.