28 December 2018
Crayfish online: The sixth in a trilogy
I’m a little surprised to realize that one of my most recent papers, about the crayfish pet trade, marks almost ten years of “following my nose.” This is a series of little projects where I kept thinking, “This might be nothing.” But they have not just turned into “something”; they have been some of my more highly cited papers.
While working on my previous papers on the crayfish pet trade (Faulkes 2013), I noticed that some states and provinces have laws that would make having pet crayfish illegal. But I could still find people placing ads for crayfish on aquarium sites.
I thought a lot more about whether legislation had any effect on whether people bought and sold crayfish when looking at sales of crayfish in Ireland (Faulkes 2015, 2017), since I believed at the time (wrongly) that crayfish were illegal there.
While working on those papers about Irish crayfish, I realized that whether laws work was actually something I could test using online ads. Because different jurisdictions had different laws, you had a sort of natural legislative experiment.
But while the expression “laboratories of democracy” is a phrase that is bandied about in US politics, any federal system will do. And, to my surprise, I ended up studying my home: the prairie provinces of Canada.
In looking back at this series of papers, one of the things that I am slightly surprised by, and proud of, is how I was able to improve the techniques. I know that looking at websites isn’t exactly the same as learning some complex lab technique, but still, the potential for how to do some of these things is only obvious in hindsight.
I started off with a survey on my own website, moved to general Google Alerts, then to online auction site ads. The description of the trade in crayfish is more detailed and precise than what I started with, and it’s more detailed and precise than what I find in similar papers.
Plus, I have finally reached a point where I am using these online monitoring methods to do more than just describe the pet trade in crayfish: I’m using that hand-scraped classified ad data to test hypotheses. It’s the kind of subtle methodological refinement that you might not see if you’re just looking at single papers.
I was also pleased the paper found a home in another journal that I had never published in before, Nauplius. I learned about the journal a couple of years ago. I may have read articles from the journal before, but never really registered what the journal was. A well-established, open access society journal with no article processing fees? I’d take twenty, thanks, if I could.
Is this the end of the trilogy of six about the pet trade? I’m not sure. I think I might have an idea for at least one more paper on the pet trade. I might have an idea for how to test a question with even more nuance.
References
Faulkes Z. 2010. The spread of the parthenogenetic marbled crayfish, Marmorkrebs (Procambarus sp.), in the North American pet trade. Aquatic Invasions 5(4): 447-450. https://doi.org/10.3391/ai.2010.5.4.16
Faulkes Z. 2013. How much is that crayfish in the window? Online monitoring of Marmorkrebs, Procambarus fallax f. virginalis (Hagen, 1870) in the North American pet trade. Freshwater Crayfish 19(1): 39-44. https://doi.org/10.5869/fc.2013.v19.039
Faulkes Z. 2015. Marmorkrebs (Procambarus fallax f. virginalis) are the most popular crayfish in the North American pet trade. Knowledge and Management of Aquatic Ecosystems 416: 20. https://doi.org/10.1051/kmae/2015016
Faulkes Z. 2015. A bomb set to drop: parthenogenetic Marmorkrebs for sale in Ireland, a European location without non-indigenous crayfish. Management of Biological Invasions 6(1): 111-114. https://doi.org/10.3391/mbi.2015.6.1.09
Faulkes Z. 2017. Slipping past the barricades: the illegal trade of pet crayfish in Ireland. Biology and Environment: Proceedings of the Royal Irish Academy 117(1): 15-23. https://doi.org/10.3318/BIOE.2017.02
Faulkes Z. 2018. Prohibiting pet crayfish does not consistently reduce their availability online. Nauplius 26: e2018023. https://doi.org/10.1590/2358-2936e2018023
27 December 2018
Challenges remain, no matter your career stage
You may know this gentleman pictured at right. It’s Sir Ian McKellen.
This is a person who is pretty good at what he does.
Understatement aside, the word “distinguished” hardly begins to cover his acting career, including the fact that he started to capture the public imagination with his performances as Magneto in the X-Men movies and Gandalf in The Lord of the Rings films at a time when many others might be thinking it’s about time to pack it up.
It’s his role as Gandalf that I want to talk about. I was watching the bonus features for The Hobbit: An Unexpected Journey. Because The Hobbit movies were shot in 3-D, the perspective tricks director Peter Jackson and company used to make Gandalf look larger than the hobbits in The Lord of the Rings wouldn’t work any more.
To create the illusion of different sizes, Jackson and company built two linked sets. There was a fully dressed physical set where the actors playing the smaller dwarves and hobbits would act, and a rescaled green screen set where McKellen would act, responding to the lines he heard the actors on the other set say through an earpiece.
McKellen, literally alone on his set, got frustrated by not having other people to act with. (He later explained that acting with people is the reason he became an actor in the first place.) And he had a moment where those lonely, difficult working conditions broke him. It made him stop, and cry for a little bit.
In retrospect, this shouldn’t be surprising, given how crazily complex and challenging a major movie like The Hobbit must be. But I was still kind of stunned by this moment.
Here is someone who is extremely experienced. Some would say someone at the top of his game, certainly near the top of his profession. And yet he was faced with a task where he felt like a failure, where he wondered whether someone would have to have the awkward conversation with him that it was time to stop, since he clearly couldn’t do his work any more.
To his credit as a professional, McKellen did not get angry. He did not throw a tantrum or a fit. He did not lash out at the crew.
The crew, fortunately, being a good crew, took some steps to make McKellen feel better. You can watch the appendices for the whole story. And obviously he carried on and completed filming of all three movies.
And the moral of the story is: No matter how experienced you are, you can run into challenges in your profession that make you feel defeated. That maybe make imposter syndrome flare up. You never stop needing direction, mentoring, and maybe some kindness to get you back on track.
26 December 2018
How wasting time on the internet led to my new authorship disputes paper
My newest paper came about as a direct result of me wasting time on the internet. It’s not the only paper that started out this way, but the pathway here is a little more direct than usual.
This paper started because I was answering questions on Quora like this one: “What should a PhD student do if he finds out that his ex-advisor (for a master's) published his work in a conference paper without adding his name?” Once you answer a particular kind of question on Quora, it shows you more like that one. I started seeing lots of variations on, “I’m being screwed out of credit for authorship. What do I do?”
In retrospect, it’s interesting that I never saw these questions on Twitter or other websites where I hang out with my fellow academics. What you see on one social media site is not what you see on all of them.
I saw this question enough that I thought it was worth writing a blog post here about it. More than any other paper I’ve written, that blog post was the first rough draft of what would become the published paper. Some of the examples were largely unchanged in the progression from blog post through to final published paper.
At a time when lots of blogging veterans are shutting down their blogs (farewell, Scicurious blog, you were fun), I want to hold this out as an example of why academics should keep a blog. Blogging is still the best intellectual sketch board there is. A blog lets you develop half-formed ideas into coherent arguments by writing them out in sentences and paragraphs. For me, a Twitter thread would not have acted as a springboard that could have developed into a proper manuscript.
I first chatted a bit with a couple of anonymous people behind the SmartyPants Science blog (now deleted) to see if they would like to collaborate on it. They apparently had some experience seeing authorship disputes in action, which I never had. That... did not pan out, so I went it alone. I workshopped it with a graduate student writing class, who had some good remarks.
I thought this was an article with enough general interest that if I posted it up as a preprint, I might get some useful feedback. That turned out to be... not a straightforward experience. I had my manuscript rejected by the bioRxiv preprint server for capricious reasons, which I wrote about here.
The good news about posting the article on the PeerJ preprint server was that I did get people tweeting it, and expressing interest in the topic of the paper. (Indeed, as of this writing, the preprint has a higher Altmetric score – 33 – than the published paper – 29.) The less good news was that nobody had any specific comments to make.
My reviewers, however, did have comments to make. After some desk rejects from a couple of journals (opinion pieces without data are not the easiest sell), I got some very thorough and encouraging reviews, with comments like, “I thoroughly enjoyed reading this paper” alongside the decision to... reject?! I’ve had accepted papers where the reviewers didn’t say as much positive as in these reviews rejecting the paper.
The reviews were very helpful, too, so I went back and revised and resubmitted it to the same journal. The final paper is so much stronger because of those reviews. The first half is shorter but has more concrete data. The second half has a much sharper focus on dispute resolution, because I realized that the literature already had plenty to say about dispute prevention.
The moral of this part of the story is: Look past the editorial decision itself and pay attention to the tone and substance of the reviews.
And the moral of the whole story is: This paper demonstrates something I often tell people: “Yes, the internet / social media is a waste of time... but it’s not a complete waste of time. The qualifier is important.”
P.S.—I like the picture at the top of this post because these two chess pieces suggest conflict. But if you know how knights move in chess, the reality is that neither can capture the other. In other words, from the point of view of those pieces, it’s a “no win” situation.
I think that represents most authorship disputes pretty well.
Additional, 18 January 2019: Several reviewers argued that journals would never want to get involved in authorship disputes. Turns out the model I proposed is not all that different from the one described in this article, which came out shortly after mine was published.
Scientific journals’ creation of dedicated positions for rooting out misconduct before publication comes amid growing awareness of such issues, and stems from a recognition that spot-checking and other ad hoc arrangements were insufficient. ...
Renee Hoch is one of three research integrity team members at PLOS ONE. “We’re not working in a job where people are generally happy to hear from us,” Hoch said. “You need to be a strong communicator, but also a very sensitive communicator.”
Hoch’s team, which was created in January, sees everything from concerns about data, to failure to disclose important conflicts of interest, to authorship disputes, and more. “If you wrote a list of potential ethical issues, we’ve probably seen everything on it,” she said, noting that the largest slices of the pie are image manipulation and data concerns.
Emphasis added.
And the moral of the update is: You will always find helpful articles you wish you could have cited after it’s too late.
References
Faulkes Z. 2018. Arbitration is needed to resolve scientific authorship disputes. PeerJ Preprints. https://peerj.com/preprints/26987/
Faulkes Z. 2018. Resolving authorship disputes by mediation and arbitration. Research Integrity and Peer Review 3: 12. https://doi.org/10.1186/s41073-018-0057-z
Related posts
You think you deserved authorship, but didn’t get it. Now what?
21 December 2018
Rubber, glass, chainsaw
Being an academic is a juggling act. You’re expected to teach, and do research, and do service. And that’s just the highest level breakdown of your responsibilities.
With so much to juggle, some balls get dropped. It happens.
But one of the things you have to learn is that not all the things you juggle are the same.
Some balls are rubber. You can drop these. They’ll bounce and be okay.
Some balls are glass. You can’t drop these or they crack, break, or shatter.
The trick is knowing which one is which.
A lot of the tasks that administrators give rank and file faculty are rubber balls. Answering every email is a rubber ball.
Teaching, on the other hand, is usually a glass ball. Writing that scheduled exam is a glass ball. Grading final exams so they can go on student transcripts is a glass ball.
And sometimes you’re juggling a flaming chainsaw in the mix, too.
Your research is a chainsaw. Drop it, and it will sputter around wildly and can really damage you.
External links
Tweet from 4 December
19 December 2018
Writing bad recommendation letters
This finding about recommendation letters shook me:
The commonly used phrase, “If I can provide any additional information, please call…,” was almost uniformly identified as a strong negative comment.
Oh crap oh crap oh crap.
How many recommendation letters over the years had I written that had some variation of, “Please contact me”? I was trying to be helpful by letting committees know I was available to them. I thought this was positive. And it looks like I inadvertently hurt my students’ chances instead.
I am not the only one who probably hurt peoples’ chances by writing letters that were perceived as weak.
This got me wondering: Why didn’t I know this?
And I realized that nobody ever gave me any guidance for how to write recommendations.
As a student, I was the person requesting recommendations. My training in writing was about how to write papers and grants.
As a post-doc, I was never asked for recommendations. That was when someone should have warned me.
Become a faculty member, and suddenly you are regularly asked by students to supply recommendation letters. Sometimes these are from students who are one of dozens or hundreds in an introductory class, who you couldn’t pick out of a line-up. How do you do justice to these students, who need recommendations and have few options?
In all my time on university campuses, I never heard any serious discussions about how to compose recommendation letters. Sure, I read recommendations from other faculty members, and saw obvious no-no’s. Some faculty wrote form letters, just swapping out names of students. (That works until someone sees the form letter twice. Then every student after that is harmed.)
Do other faculty ever get guidance from mentors about how to write recommendation letters? I think I’ll be putting that in a Twitter poll. Should we?
I had never seen any “how to” articles in journals about composing recommendations, either. I found this article with a quick search in Google Scholar, but it seems to be a rare specimen of the genre.
And the moral of the story is:
If you are someone who mentors postdocs, talk to them about what you know about recommendation letters. Don’t let them learn it on the fly by trial and error.
Additional, 20 December 2018: Twitter poll results! Small sample, but telling. Nobody was mentored in writing recommendations.
References
Greenburg AG, Doyle J, McClure DK. 1994. Letters of recommendation for surgical residencies: What they say and what they mean. Journal of Surgical Research 56(2): 192-198. https://doi.org/10.1006/jsre.1994.1031
Moore S, Smith JM. 1986. Writing recommendation letters for students. The Clearing House: A Journal of Educational Strategies, Issues and Ideas 59(8): 375-376. https://doi.org/10.1080/00098655.1986.9955695
External links
How to fix recommendation bias and evaluation inflation
Do professors ever write negative recommendation letters?
Tenure denial, seven years later
03 December 2018
How to create your academic web presence
You’re an early career academic, and you think, “I should have something more professional than my personal Facebook account.” I’m here to help!
I recommend creating your own academic website at a bare minimum. All you need is a clean website that you are obsessive about keeping up to date. It’s practically a cliché that professor websites are five years out of date, so just a regularly updated website puts you ahead of 90% of the academic pack! Colleagues and prospective students will appreciate it.
Your university’s IT department should be able to provide you with server space to host webpages. That’s currently how I host my home page. But Seth Godin has a good idea here: get a simple blog and set up one featured post with your basic contact information. Then you don’t have to worry about someone else hosting your stuff, or your URLs changing because your university’s IT reorganized their directories. Or, for that matter, having to transfer everything if you move from one institution to another.
It’s also helpful to have a website with an easy-to-remember name. My website’s domain name is not provided by the university; the name just redirects to the university location. There are lots of domain name providers. I used to use GoDaddy, but got driven crazy by their website, which has all the restraint of someone competing for a “Best Christmas decorations in the city” award. I have been a happy customer of I Want My Name for years. Their interface is clean and simple, and their prices are reasonable. Again, having your own website name makes it less likely to change if IT reorganizes your university website.
I code my own home page using very basic HTML. Basic HTML coding is actually pretty simple. I learned by viewing the code of other people’s websites. You can do this on most browsers by choosing “View source,” although the underlying code of webpages now is so much more complex than it used to be that this might not be a great way to learn. I’ve been able to do a lot without getting into style sheets (CSS) and more recent stuff.
(A side benefit of learning HTML was that it has helped significantly in teaching online. I know how to make my online class material consistent and clean.)
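To make “very basic HTML” concrete, here is a minimal sketch of the kind of page I mean. Everything in it (the name, department, and addresses) is a made-up placeholder, but a complete, functional home page really can be this short:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- One line that keeps the page readable on phones, no CSS needed -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Dr. Jane Smith, Biology</title>
</head>
<body>
  <h1>Dr. Jane Smith</h1>
  <p>Associate Professor, Department of Biology, Example University</p>
  <!-- Contact information: the part that most needs to stay current -->
  <p>Email: <a href="mailto:jane.smith@example.edu">jane.smith@example.edu</a></p>
  <h2>Selected publications</h2>
  <ul>
    <li>Smith J. 2018. An example paper title. An Example Journal 1: 1-10.</li>
  </ul>
  <p>Last updated: 3 December 2018</p>
</body>
</html>
That’s the whole thing: one plain text file, no style sheets, and it can live on university server space or anywhere else you can upload a file.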
I use HTML-kit (build 292, which is free) as an HTML editor, but there are many more available. I used to use what became SeaMonkey (also free). I recommend against using Microsoft Word. Word adds huge masses of mostly useless code, making pages far bigger than they need to be.
If you don’t want to learn HTML, there are online services that create websites. I used Wix to create a website recently. It looks great, but Wix is very fiddly, and it took a ton of effort to make it look as pretty as it did. You can have Wix websites for free, but then the website runs ads. You can pay to have ads removed. Of course, other services are available.
Moving from a basic website to social media is a big step up, and this is where you are most likely to run into option paralysis. There are so many things you can do, how do you pick which one to do? It’s okay for that answer to be “None.” Pick ones you actually use or enjoy. If you hate Twitter, start a blog. Or get on Instagram if you dig photography.
But there are tricks to streamline some of the work in creating a web presence. For instance, the Better Posters blog has a Twitter account that is completely automated. I set it up through IFTTT (short for "if this, then that").
06 November 2018
Pick one conference to go to every year
Because I have science attention deficit disorder (science ADD) and am not independently wealthy, there is no conference I go to each and every year. The only one I have made a point of going to regularly for the last few years has been the Society for Integrative and Comparative Biology meeting, because I’m a committee chair. (Which reminds me I have some bookings to make.)
Over the years, I’ve been to the meetings about evolution, ecology, crayfish, crustaceans, natural history, coastal research, neuroethology, animal behaviour, neuroscience, parasites, and more. Because I have been living in Texas, a reasonably popular conference venue, my strategy has been to wait for meetings to come to me so I don’t have to travel and increase my conference carbon footprint.
This is a bad thing for me.
In the last few years, I’m coming to the realization that my capricious conference selection might not be the best strategy for professional opportunities.
I got to thinking about this after reading Terry McGlynn’s post about the shock and awe of going to the Neuroscience meeting.
As soon as I walked into the poster hall, I was like ZOMG. HOLY MOLY. WHAT THE WHAT.
Ginormous cannot do justice to explain the scale of this endeavor. Here’s an attempt: Imagine that scene at the end of Raiders of the Lost Ark, but instead of wooden crates, it’s an endless morass of posters upon posters, in which one of those slots is where you have the honor to have your work visible for a few hours.
Terry worries about the size of the conference being alienating. I have felt alienated at conferences, but it’s never been conference size that made me feel that way.
Now, I started going to the Neuroscience meeting when it was about 60% of what it is now, so I’m used to the scale of the bigger-than-big conference. And I’ve never used conference size as a factor in deciding what meeting to go to. Both large and small meetings can be great.
In the last few years, I’ve gone to new meetings I’ve never been to before. What put me off and made me feel excluded, particularly as an established mid-career scientist, were constant reminders that you are not one of the people who come to this meeting every year. There are in-jokes during symposia, keynotes, and talks about things that happened in previous years. It’s very clear who the “conference buddies” are that see each other every year.
(Part of this is probably my own damn fault, since I can be quiet, grumpy, or both. I’m working on this.)
But the point is that those conference regulars have clearly built a lot of social capital and created important professional networks that I haven’t.
And the moral of the story is:
Pick one conference that you like. Large or small, doesn’t matter. Go to that conference every year as religiously as you possibly can. Something to present or not, doesn’t matter. You will start building that group of conference buddies that will improve your conference experience (no more lunches alone!) and will become better known in your field.
External links
Huge conferences and the potential for alienation and isolation of junior scientists
01 November 2018
“How can you trust it when it changes all the time?”
“How can you trust science when it changes all the time? One day you hear you should eat margarine and not butter, then the next week you hear the reverse!”
I see variations of this claim fairly regularly when I’m reading about the neverending discussions about evolution and religious creationism. I sympathize. People want certainty, and science is not always great at providing it in the short term. Give a lot of scientists 50 years or more, and we can often provide pretty good confidence, approaching certainty.
While “science changes, but the Bible doesn’t” is mainly appealing to emotion, I’ve actually always appreciated the “Things mean what they say” aspect of Biblical literalism as an intellectual position. It’s honest. Give me an outspoken young Earth creationist over a hand-waving, insincere “intelligent design” proponent any day. I obviously disagree with the whole “inerrant” part of Biblical literalism, not to mention the absolute refusal of Biblical literalists to update positions in light of new evidence. Still. Sticking to your guns on interpretation of a text? I respect that.
But the next time someone pulls out “You can’t trust that ever-changing science,” I have a new riposte.
“May I please direct your attention to American Evangelical Christianity of the early twenty-first century.”
Because this particular branch of Christianity is now providing an object lesson in the fact that while the text of the Bible may not change, its interpretation sure does. And this is exactly what’s going on in some Protestant churches in the US now.
Diana Butler Bass, an American church historian and scholar who focuses on the history of the American church... said, white evangelicals are motivated by a willingness to read the Bible non-literally when it comes to passages about, say, caring for the poor.
Over the past few years, she’s noticed what she called “a very slow theological turn within the evangelical community to redefine what seemed like very basic ... verses about the care of the poor and caring for the outcast. On one hand, they might say, ‘Oh, you know, Jesus was born of a literal virgin’ ... but when it comes to these verses about the poor and about refugees, in particular, all of a sudden, literalism disappears.”
Suddenly, she said, she noticed a “new sort of interpretation that’s floating around in evangelical circles about [verses in the Bible where Jesus exhorts care for the poor]. And the interpretation is Jesus does not mean everybody. That Jesus only means that you’re supposed to take care of the ‘least of these’ who are in the Christian community.”
The context of this article is about political policy, not science, but nevertheless... if your position is that the Bible literally means what it says, you have to apply that to the whole text. You can’t say the story of Genesis takes place in seven days of 24 hours each and then say the prohibition against tattoos (say) is just a big misunderstanding.
While I can respect the intellectual honesty of Biblical literalism, it’s hard to respect people who extol it without practicing it.
External links
The Bible says to welcome immigrants. So why don’t white evangelicals?
The Year of Living Biblically
11 October 2018
Why do people think humanities are easier than science?
I believe that all scholarship is hard. But people think humanities are not as hard. You rarely hear about students who “can’t cut it” in medieval literature and change their major to organic chemistry because they think it’s easier. But the opposite change of majors is practically cliché.
Maybe one of the reasons people view the humanities as easier is because they are more familiar with them, because they have grappled with them longer, because teachers aren’t scared of using the original texts.
We teach Shakespeare’s plays full on in grade school, for instance. We give students the complete written text of his plays and poems. We teach them despite the language being far from common today. It’s only because of Hamlet that I have the faintest inkling of what a bodkin is. Understanding Shakespeare is hard. But it’s taught in schools to young people without hesitation. We sometimes teach even older works written in even more archaic language (albeit often in translation).
But what about Shakespeare’s scientific contemporary, Galileo? I doubt many schools have students read direct translations of Sidereus Nuncius (Starry Messenger) in full. Instead, students get summaries of science in textbooks that are far removed from the original texts.
Teaching science using only textbooks is like teaching literature using only CliffsNotes or teaching film studies using IMDb synopses. Yeah, you’ll learn something, but you’re also missing a lot. Some might say you’re missing the point entirely.
I’m willing to bet most students don’t get tasked with reading original scientific literature until well into a university degree. So is it any surprise that they can’t understand scientific papers compared to reading literature? They’ve had maybe the better part of a decade receiving instruction on how to parse out a literary text, but almost no instruction on how to make sense of a scientific text.
Because of this, students often don’t understand the process of argumentation that goes into science. They don’t understand how many things are suggested before being demonstrated. They don’t understand science as a process instead of facts. These are not new complaints, but the solution most people veer towards is to have students do more “hands on” work to get a sense of how science is done, not to have students read original science.
The writings of scientists are trivialized next to their discoveries, and maybe they shouldn’t be.
22 September 2018
Giving octopuses ecstasy
Nobody told me it was “Drug an invertebrate week.” But not only has a story of lobsters getting pot rather than going into pots made the rounds, now we have octopuses getting another recreational human drug. The story, according to headlines, is that giving ecstasy (MDMA) to octopuses makes them act more socially. And everyone’s comparing octopuses to ecstasy-fueled partygoers at a rave.
It’s a nice narrative, but there isn’t enough evidence to conclude that.
There are some genetic analyses of MDMA receptors in this paper, but all of the interest in the press is about the behaviour experiments. The authors gave the octopuses the drug. The octopuses’ behaviour changed. The popular press is interpreting that behaviour in a cutesy way, using terms like “hug” and “cuddle” in headlines. (Even publications like Nature, which should know better.)
That’s a problem. Octopuses hunt prey by enveloping them with their web and arms — effectively “hugging” them, if you will. Being eaten is rather different from cuddling. The authors provide no videos in the paper, just two still images (below), so you can’t see the behaviour in detail.
The sample size for the behavioural experiments is 4 or 5, as far as I can see. That’s tiny.
It’s worth noting that the behavioural changes were not always the same.
In addition, pilot studies in 3 animals indicated that higher submersion doses of MDMA (ranging from 10-400 mg/Kg) induced severe behavioral changes (e.g., hyper or depressed ventilation, traveling color waves across the skin or blanching, as well as catatonia or hyper-arousal/vigilance) and these animals were excluded from further analysis.
Dose-dependent responses are not at all unusual, but again, it makes the simple story of “MDMA means social” more complicated.
I do appreciate that this paper has an Easter egg for people who read the methods:
Novel objects consisted of multiple configurations of 4 objects: 1) plastic orchid pot with red weight, 2) plastic bottle with green weight, 3) Galactic Heroes ‘Stormtrooper’ figurine, and 4) Galactic Heroes ‘Chewbacca’ figurine.
But which Stormtrooper, people?
Which Stormtrooper?!
The paper is interesting, but it’s not getting attention from popular press because it’s particularly informative about the evolution of social behaviour. It’s getting attention because of the novelty of giving drugs to animals, and the “Oh look, animals are like us!” narrative.
Additional, 24 September 2018: Another interpretive problem. In an interview on CBC’s Quirks & Quarks, Gül Dölen notes that octopuses overcome their asocial behaviours for mating. Dölen cites this as reason to think that there could be a way to “switch” the octopuses’ behaviour using a drug. So mating behaviour is the natural “social” mode for these animals.
But the octopus under the basket was always male, because the researchers found octopuses avoided males more than females.
Three of the four octopuses tested were male. (I had to dig into the supplemental information for that.) So most of the observations were male-male behaviour. I don’t know that homosexual behaviour has ever been documented in octopuses. A quick Google Scholar search found nothing.
A Washington Post story revealed that the authors wouldn’t even talk about some of the behaviours they had seen:
The authors observed even stranger behavior that they did not report in the study, Edsinger said. He was reluctant, even after extensive questioning, to further describe what the octopuses did, because the scientists could not be sure if the MDMA had induced these actions.
This is problematic. It suggests the behaviours in the paper are deeply underdocumented at best. And it seems to have been done on purpose, because it doesn’t fit the authors’ narrative. This, combined with the description of behaviours at different doses, further suggests that rather than the “prosocial” behaviour the authors and headlines are pushing, the exposure to MDMA is making octopuses behave erratically, not socially.
Reference
Edsinger E, Dölen G. 2018. A conserved role for serotonergic neurotransmission in mediating social behavior in Octopus. Current Biology 28(19): 3136-3142.e4. https://doi.org/10.1016/j.cub.2018.07.061
External links
Octopuses on ecstasy: The party drug leads to eight-armed hugs
This is what happens to a shy octopus on ecstasy
Octopuses on ecstasy just want a cuddle
Serotonin: octopus love potion?
Picture from here.
20 September 2018
Giving lobsters weed
I’ve been studying issues roiling around the question of “Does it hurt lobsters when they go into a pot?” for about a decade. After ten years or so, you get a little jaded. I’m used to seeing the same bad arguments. I’m used to it popping up and making the rounds in news about twice a year. The first time this year was when Switzerland put laws into place about lobster handling. This is the second.
And I’ve got to say:
That’s new.
A Maine newspaper is reporting that a restaurant owner, Charlotte Gill, is sedating lobsters with marijuana.
I am pretty sure cannabis as a sedative has not been the subject of any peer-reviewed scientific papers on crustacean anesthesia. But a quick Google Scholar search (thank you thank you thank you Google for this tool) shows that spiny lobsters and other invertebrates have cannabinoid receptors (McPartland et al. 2006). This makes the technique plausible on the face of it.
The behavioural effects reported were interesting.
Following the experiment, Roscoe’s (the experimental lobster - ZF) claw bands were removed and kept off for nearly three weeks.
His mood seemed to have an impact on the other lobsters in the tank. He never again wielded his claws as weapons.
I am surprised by the apparent duration of the effects. Weeks of behaviour change from a single treatment? That seems long: the soporific effects of marijuana smoke in humans don’t last multiple days.
Earlier this week, Roscoe was returned to the ocean as a thank you for being the experimental crustacean.
I’m not sure of the ethics of this. Is Roscoe the lobster, who has apparently forgotten how to use his claws, going to become a quick meal for a predator? A lobster without claws in the ocean is just bait (Barshaw et al. 2003). Releasing Roscoe may doom him!
I am a little concerned by what seems to be Gill’s quick dismissal of other techniques:
In Switzerland, the recommended method of cooking the crustacean is to electrocute it or stab it in the head before putting it in the boiling water.
“These are both horrible options,” said Gill. “If we’re going to take a life we have a responsibility to do it as humanely as possible.”
I don’t know if she has anything but intuition to support that opinion. There’s research on electrical stunning, and the results so far are mixed. Fregin and Bickmeyer (2016) found shocks “do not mitigate the response to external stimuli,” but Neil (2012), Roth and Grimsbø (2016), and Weineck et al. (2018) found electric shocks seemed to knock down neural activity effectively. But the impression I get is that using shock is tricky: you need different protocols for different animals.
It’s also worth noting that a new paper by Weineck et al. (2018) showed chilling was effective as an anesthetic, which the Swiss regulations forbade. Research I co-authored (Puri and Faulkes 2015) showed no evidence that crayfish responded to low temperature stimuli.
Of course, another complication around this technique is its legality. The legal landscape around marijuana in the U.S. is tricky. Marijuana is still regulated federally, but certain states permit different kinds of uses. The article notes:
Gill holds a medical marijuana caregiver license with the state and is using product she grows in order to guarantee its quality.
This is interesting, but it’s not clear to me that this is a more cost-effective or humane way to sedate a lobster than what many crustacean researchers have been doing for a long time: cooling on crushed ice.
Hat tip to Mo Costandi.
References
Barshaw DE, Lavalli KL, Spanier E. 2003. Offense versus defense: responses of three morphological types of lobsters to predation. Marine Ecology Progress Series 256: 171-182. https://doi.org/10.3354/meps256171
Fregin T, Bickmeyer U. 2016. Electrophysiological investigation of different methods of anesthesia in lobster and crayfish. PLOS ONE 11(9): e0162894. https://doi.org/10.1371/journal.pone.0162894
McPartland JM, Agraval J, Gleeson D, Heasman K, Glass M. 2006. Cannabinoid receptors in invertebrates. Journal of Evolutionary Biology 19(2): 366-373. https://doi.org/10.1111/j.1420-9101.2005.01028.x
Puri S, Faulkes Z. 2015. Can crayfish take the heat? Procambarus clarkii show nociceptive behaviour to high temperature stimuli, but not low temperature or chemical stimuli. Biology Open 4(4): 441-448. https://doi.org/10.1242/bio.20149654
Roth B, Grimsbø E. 2016. Electrical stunning of edible crabs (Cancer pagurus): from single experiments to commercial practice. Animal Welfare 25(4): 489-497. https://doi.org/10.7120/09627286.25.4.489
Weineck K, Ray A, Fleckenstein L, Medley M, Dzubuk N, Piana E, Cooper R. 2018. Physiological changes as a measure of crustacean welfare under different standardized stunning techniques: cooling and electroshock. Animals 8(9): 158. http://www.mdpi.com/2076-2615/8/9/158
Related posts
Switzerland’s lobster laws are not paragons of science-based policy
External links
“Hot box” lobsters touted
Maine restaurant sedates lobsters with marijuana
New England marijuana laws – where it’s legal, where it’s not and what you need to know
And I’ve got to say:
That’s new.
A Maine newspaper is reporting that a restaurant owner, Charlotte Gill, is sedating lobsters with marijuana.
I am pretty sure cannabis as a sedative has not been the subject of any peer-reviewed scientific papers on crustacean anesthesia. But a quick Google Scholar search (thank you thank you thank you, Google, for this tool) shows that spiny lobsters and other invertebrates have cannabinoid receptors (McPartland et al. 2006). This makes the technique plausible on the face of it.
The behavioural effects reported were interesting.
Following the experiment, Roscoe’s (the experimental lobster - ZF) claw bands were removed and kept off for nearly three weeks.
His mood seemed to have an impact on the other lobsters in the tank. He never again wielded his claws as weapons.
I am surprised by the apparent duration of the effects. Weeks of behaviour change from a single treatment? That seems long, given that the soporific effects of marijuana smoke in humans don’t last multiple days.
Earlier this week, Roscoe was returned to the ocean as a thank you for being the experimental crustacean.
I’m not sure of the ethics of this. Is Roscoe the lobster, who has apparently forgotten how to use his claws, going to become a quick meal for a predator? A lobster without claws in the ocean is just bait (Barshaw et al. 2003). Releasing Roscoe may doom him!
I am a little concerned by what seems to be Gill’s quick dismissal of other techniques:
In Switzerland, the recommended method of cooking the crustacean is to electrocute it or stab it in the head before putting it in the boiling water.
“These are both horrible options,” said Gill. “If we’re going to take a life we have a responsibility to do it as humanely as possible.”
I don’t know if she has anything but intuition to support that opinion. There’s research on electrical stunning, and the results so far are mixed. Fregin and Bickmeyer (2016) found shocks “do not mitigate the response to external stimuli,” but Neil (2012), Roth and Grimsbø (2016), and Weineck et al. (2018) found electric shocks seemed to knock down neural activity effectively. But the impression I get is that using shock is tricky: you need different protocols for different animals.
It’s also worth noting that a new paper by Weineck et al. (2018) showed chilling was effective as an anesthetic, which the Swiss regulations forbade. Research I co-authored (Puri and Faulkes 2015) showed no evidence that crayfish responded to low temperature stimuli.
Of course, another complication around this technique is its legality. The legal landscape around marijuana in the U.S. is tricky. Marijuana is still regulated federally, but certain states permit different kinds of uses. The article notes:
Gill holds a medical marijuana caregiver license with the state and is using product she grows in order to guarantee its quality.
This is interesting, but it’s not clear to me that this is a more cost-effective or humane way to sedate a lobster than what many crustacean researchers have been doing for a long time: cooling on crushed ice.
Hat tip to Mo Costandi.
References
Barshaw DE, Lavalli KL, Spanier E. 2003. Offense versus defense: responses of three morphological types of lobsters to predation. Marine Ecology Progress Series 256: 171-182. https://doi.org/10.3354/meps256171
Fregin T, Bickmeyer U. 2016. Electrophysiological investigation of different methods of anesthesia in lobster and crayfish. PLOS ONE 11(9): e0162894. https://doi.org/10.1371/journal.pone.0162894
McPartland JM, Agraval J, Gleeson D, Heasman K, Glass M. 2006. Cannabinoid receptors in invertebrates. Journal of Evolutionary Biology 19(2): 366-373. https://doi.org/10.1111/j.1420-9101.2005.01028.x
Puri S, Faulkes Z. 2015. Can crayfish take the heat? Procambarus clarkii show nociceptive behaviour to high temperature stimuli, but not low temperature or chemical stimuli. Biology Open 4(4): 441-448. https://doi.org/10.1242/bio.20149654
Roth B, Grimsbø E. 2016. Electrical stunning of edible crabs (Cancer pagurus): from single experiments to commercial practice. Animal Welfare 25(4): 489-497. https://doi.org/10.7120/09627286.25.4.489
Weineck K, Ray A, Fleckenstein L, Medley M, Dzubuk N, Piana E, Cooper R. 2018. Physiological changes as a measure of crustacean welfare under different standardized stunning techniques: cooling and electroshock. Animals 8(9): 158. http://www.mdpi.com/2076-2615/8/9/158
Related posts
Switzerland’s lobster laws are not paragons of science-based policy
External links
“Hot box” lobsters touted
Maine restaurant sedates lobsters with marijuana
New England marijuana laws – where it’s legal, where it’s not and what you need to know
19 September 2018
“Best” journals and other unhelpful publishing advice
The Society for Neuroscience recently posted a short guide to publishing papers for early career researchers. It makes me grumpy.
“Aim for the best journal in your field that you think you can get into, as a general rule.” This ranks right up there with Big Bobby Clobber’s hockey advice, “The key to beating the Russians is to score more points than they do.” “Publish in the best journal” (or its sibling, “Strive for excellence”) is incredibly unhelpful for new researchers, because they don’t know the lay of the publishing landscape. They will rightfully ask how to recognize “best” journals. In academia, notions of “best” are often highly subjective and have more to do with tradition than actual data. This tweet led me to this article:
When Elfin was first charged with creating a ranking system, he seems to have known that the only believable methodology would be one that confirmed the prejudices of the meritocracy: The schools that the most prestigious journalists and their friends had gone to would have to come out on top. The first time that the staff had drafted up a numerical ranking system to test internally–a formula that, most controversially, awarded points for diversity–a college that Elfin cannot even remember the name of came out on top. He told me: “When you’re picking the most valuable player in baseball and a utility player hitting .220 comes up as the MVP, it’s not right.”
Elfin subsequently removed the first statistician who had created the algorithm and brought in Morse, a statistician with very limited educational reporting experience. Morse rewrote the algorithm and ran it through the computers. Yale came out on top, and Elfin accepted this more persuasive formula. At the time, there was internal debate about whether the methodology was as good as it could be. According to Lucia Solorzano, who helped create the original U.S. News rankings in 1983, worked on the guide until 1988, and now edits Barron’s Best Buys in College Education, “It’s a college guide and the minute you start to have people in charge of it who have little understanding of education, you’re asking for trouble.”
To Elfin, however, who has a Harvard master’s diploma on his wall, there’s a kind of circular logic to it all: The schools that the conventional wisdom of the meritocracy regards as the best, are in fact the best–as confirmed by the methodology, itself conclusively ratified by the presence of the most prestigious schools at the top of the list. In 1997, he told The New York Times: “We’ve produced a list that puts Harvard, Yale and Princeton, in whatever order, at the top. This is a nutty list? Something we pulled out of the sky?”
When people talk about “best” journals, this almost always ends up being code for Impact Factor. The article mentions impact factors second.
“Consider impact factors, but don’t obsess over the number. There are many excellent medical and biomedical specialty journals considered top tier in their fields that have relatively low impact factors. Don’t let the impact factor be your only data point when deciding where to send your paper.” This gives me another chance to point to articles about the problems of this measure, like this and this. It’s so flawed that authors should think about it as little as possible.
“Look at the masthead. Are the people listed on the editorial team who you want reading your paper? Do they represent your target readership?” This is deeply unhelpful to new researchers. New researchers do not know the lay of the land and probably are not going to recognize most of the people on editorial boards. Recognizing that network takes time and experience.
“Read the aims and scope. Does the journal’s focus align well with your submission?” Finally, a good piece of advice. I would have put this first, not fourth.
“Do you and/or your university care whether you publish in open-access journals? Some institutions will put a high value on an open-access paper, so don’t underestimate the importance of this preference.” Again, probably unhelpful for early career researchers. Doctoral students and post-docs may very well change what institutions they are affiliated with, maybe multiple times.
“Is your research ready to be published? Do you have a compelling and complete story to tell? While there is a great deal of pressure to publish frequently, don’t slice and dice your research into many small pieces. Consider the least publishable unit, and make sure yours is not too small to be meaningful.” I’m kind of gobsmacked that “Check to see if it’s done” is presented as advice. The notion of a “complete story” is deeply problematic. The data don’t always cooperate and answer a question cleanly. There are many projects that I would never have published if I sat on them until they were as “complete” as I wanted. Here’s a project I sat on for eight years because it wasn’t “complete.”
External links
How to Publish for a Successful Academic Career
18 September 2018
Who decides “Peer reviewed” in library records?
I noticed something new while searching my library’s catalog.
Journals showed up with “Peer reviewed” and “Open access” icons. This got me wondering where that information came from. I didn’t think the library staff had the time to assess all the catalog entries, so I tried to track down the source of the designation. Particularly for “peer reviewed.”
A librarian confirmed that “peer reviewed” was not a status designated by the university, but could not tell me exactly where the determination came from.
The icon shows up because there is a note in the item’s MARC record. MARC is a format for bibliographic data. (It’s MARC field 500, in case you’re curious.) If I understand right, MARC records get created by many different entities and are shared to help standardize records across institutions. The entities populating those fields could include other universities, a network like OCLC (the nonprofit organization behind WorldCat), or the publishers themselves.
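To make that concrete, here’s a minimal sketch of the kind of logic that could sit between a MARC note and an icon. Everything in it is hypothetical: the field contents and the substring match are my illustration, not anything documented about Ex Libris or any other vendor.

```python
# Hypothetical MARC record, reduced to a dict of field tag -> text.
# Field 245 is the title statement; field 500 is a free-text general note.
mock_record = {
    "245": "Brazilian journal of biology.",
    "500": "Refereed / Peer-reviewed.",
}

# A discovery layer could decide to show the icon with something as
# crude as a substring match on the free-text note.
note = mock_record.get("500", "").lower()
show_peer_reviewed_icon = "peer-reviewed" in note or "refereed" in note

print(show_peer_reviewed_icon)  # True for this mock record
```

The point of the sketch is that a reassuring binary icon can rest on free text typed by whoever happened to create the record.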
I’m disturbed that this information might be added by the publishers themselves, who have a conflict of interest. Of course publishers will want to say all their journals are peer reviewed. It’s practically the bare minimum to be considered an academic journal. But many journals that are not peer reviewed claim to be.
But what worries me most is that what is presented to university library patrons (mostly students) as a simple, authoritative “yes / no” icon is added by a complex and unverifiable process.
I’m always trying to push students to think about how peer review relates to credibility and trustworthiness. I often ask them, “How do you know a journal is peer reviewed?”, which often flummoxes them. As it should. Determining whether a journal is “real” (i.e., credible) to a research community is complex.
If an institution’s librarian can’t say who decides that a journal is peer reviewed, what hope do students have of critically assessing that information in the library catalog?
The “Peer reviewed” icons in university library records give people a false sense of security.
Additional: I learned that the particular example I used here, Brazilian Journal of Biology, is not listed as peer-reviewed in Ulrich’s, which is used by the University of Toronto.
This was apparently part of a February 2018 update to Ex Libris.
More additional: The “Open access” icon ignores that hybrid journals exist.
11 September 2018
BugFest blues
Anticipation.
I had been anticipating the chance to speak at the North Carolina Natural Sciences Museum for a good long while. I’d been asked to speak at BugFest, one of their biggest events, which draws tens of thousands of people to the museum. I’d wanted to go ever since I heard so many positive things about the museum when Science Online was held in the area. But when I attended Science Online, I missed my chance to visit because my flight didn’t arrive on time.
Anticipation.
This Sunday, I started to get a sinking feeling as I watched weather forecasts and my Twitter timeline. It’s hurricane season. Models were starting to predict Hurricane Florence was heading straight for North Carolina. Now it looks like Florence is all but going to walk up to the doorstep of the Natural Sciences Museum and knock on the door right when BugFest was supposed to happen.
I emailed the organizers, got word that a decision would be made at the start of the week, and today I got word that the event was postponed.
“Whew!” from me. I did not want to get on a plane and fly towards a major hurricane.
I’ll come and talk science and crayfish after things have calmed down.
I hope everyone in North Carolina – those I know and those I don’t – can stay safe through Florence. It looks like it’s going to be very bad.
(But it was a little fun to come up with this cancellation tagline.)
28 August 2018
Headline hogwash: Rosehip neurons
I was listening to the radio this morning, and heard some very strange coverage about something called “rosehip neurons.” The coverage was pushing this notion that these newly described neurons were somehow extra special and extra unusual and might be one of the things that make us human. I almost felt like rosehip neurons were being described the way I imagined René Descartes described the pineal gland.
I looked at some headlines, because headlines disproportionately influence what people believe about a story.
NPR, The Independent, Iran Daily, and India Today link these neurons to human “uniqueness.” News Medical and Biocompare flat out state rosehip neurons are unique to humans.
Facebook juggernaut I Fucking Love Science, Science Alert, Interesting Engineering, and News.com.au are more cautious, saying “possibly,” “looks like” or “may be” that rosehip neurons are only in humans.
Science, Science Daily, Wired (*), Forbes, and LiveScience carefully specify these neurons are found in people or humans. Yet the media will almost never say when research is done in mice or some other animal.
The cumulative effect of looking through these news headlines is that you get the impression that rosehip neurons are the first thing we have ever found that is unique to humans. Walter (2008) has a whole book of uniquely human traits.
And people are falling for this narrative already. I say “falling” because the reasoning trying to link these neurons to the uniqueness of humans is spurious.
The paper by Boldog and colleagues does not show rosehip neurons are “human specific.” The paper shows rosehip neurons are “not seen in mouse cortex”. That’s a big difference. It’s like calling whiskers “mouse specific” because you look at a mouse and a human, and you see that the mouse has whiskers and humans don’t. It sounds good until you look at a cat.
For all we know, cats might have rosehip neurons. Bats might have them. Elephants might have them. Chimps, gorillas, and whales might have them. Lions, and tigers, and bears might have them.
Different species are different. This is not a surprise. The idea that human brains are just very large mouse brains might be a great strawman to get money from medical funding agencies, but it’s not a position that anyone who understands evolution and animal diversity should take.
References
Boldog et al. 2018. Transcriptomic and morphophysiological evidence for a specialized human cortical GABAergic cell type. Nature Neuroscience. https://doi.org/10.1038/s41593-018-0205-2 (Preprint available here.)
Walter C. 2008. Thumbs, Toes, and Tears (see also interview)
Related posts
New rule for medical research headlines
External links
What Makes A Human Brain Unique? A Newly Discovered Neuron May Be A Clue
Mysterious new brain cell found in people
Scientists identify a new kind of human brain cell
Meet the Rose Hip Cell, a New Kind of Neuron (* The Wired headline that appears in Google search results is, “Meet the Rosehip Cell, a New Kind of Human Neuron | WIRED”)
Scientists Discover A New Type Of Brain Cell In Humans
Neuroscientists identify new type of “rose hip” neurons unique to humans
Team Discovers New ‘Rosehip’ Neuronal Cells Found Only in Humans
Scientists Find a Strange New Cell in Human Brains: The 'Rosehip Neuron
International Team Discovers New Type Of Neuron That May Be Unique To Humans
New, and possibly unique, human brain cell discovered
Mysterious new type of cell could help reveal what makes human brain special
Scientists Discovered A New Type of Brain Cell That May Only Exist in Humans
16 August 2018
How to present statistics: a gap in journals’ instructions to authors
I recently reviewed a couple of papers, and was reminded of how bad the presentation of statistics in many papers is. This is true even for veterans with lengthy publication records who you might think would know better. Here are a few tips on presenting statistics that I’ve gleaned over the years.
Averages
One paper wrote “(7.8 ± 0.8).” I guess this was supposed to be a mean and standard deviation. But I had to guess, because the paper didn’t say. And people often report other measures of dispersion around an average (standard errors, coefficients of variation) the exact same way.
Curran-Everett and Benos (2004) write, “The ± symbol is superfluous: a standard deviation is a single positive number.” When I tweeted this yesterday, a few people wrote that this was silly, because Curran-Everett and Benos’s recommended format is a few characters longer, and they worried about it being repetitive and hard to read. This reminds me of fiction writers who try to avoid repeating “He said,” usually with unreadable results. My concern is not the ± symbol so much as the numbers having no description at all.
Regardless, my usual strategy is similar to Curran-Everett and Benos. I usually write something like, “(mean = 34.5, SD = 33.0, n = 49).” Yes, it’s longer, but it’s explicit.
That strategy isn’t the only way, of course. I have no problem with a line saying, “averages are reported as (mean ± S.D.) throughout.” That’s explicit, too.
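For what it’s worth, here’s a minimal sketch of generating that explicit format straight from the data (assuming Python with numpy; the numbers are invented for illustration):

```python
import numpy as np

# Invented measurements, purely for illustration.
values = np.array([34.0, 12.5, 88.1, 5.2, 41.9])

# ddof=1 gives the sample standard deviation, which is usually what
# "SD" should mean in a paper.
summary = (
    f"(mean = {values.mean():.1f}, "
    f"SD = {values.std(ddof=1):.1f}, "
    f"n = {values.size})"
)
print(summary)  # (mean = 36.3, SD = 32.6, n = 5)
```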
Common statistical tests
Another manuscript repeatedly just said, “p < 0.05.” It didn’t tell me what test was used, nor any other information that could be used to check that it is correct.
For reporting statistical tests, I write something like, “(Kruskal-Wallis = 70.76, df = 2, p < 0.01).” That makes it explicit (see the sketch after this list):
- What test I am using.
- The exact test statistic (i.e., the result of the statistical calculations).
- The degrees of freedom, or sample size, or any other values that are relevant to checking and interpreting the test’s calculated value.
- The exact p value, and never “greater than” or “less than” 0.05. Again, anyone who wants to confirm that the calculations have been done correctly needs an exact p value. People’s interpretations of the data change depending on the reported p value. People don’t interpret a p value of 0.06 and one of 0.87 the same way, even though both are “greater than 0.05.” Yes, I know that people probably should not put much stake in that exact value, and that p values are less reproducible than people expect, but there it is.
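Here’s a minimal sketch of pulling all four of those pieces together, using the Kruskal-Wallis test from scipy (the data are invented, and “H” is the conventional name for the test statistic):

```python
from scipy import stats

# Invented data for three groups, purely for illustration.
group_a = [2.1, 3.4, 2.8, 3.9, 2.5]
group_b = [5.6, 6.1, 5.9, 6.4, 5.2]
group_c = [8.8, 9.1, 8.5, 9.4, 8.9]

groups = [group_a, group_b, group_c]
h_statistic, p_value = stats.kruskal(*groups)

# Degrees of freedom for a Kruskal-Wallis test: number of groups - 1.
df = len(groups) - 1

# One parenthetical with the test name, statistic, df, and exact p value.
print(f"(Kruskal-Wallis H = {h_statistic:.2f}, df = {df}, p = {p_value:.4f})")
```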
My understanding is that these values particularly matter for people doing meta-analyses.
Journals aren’t helping (much)
I wondered why I keep seeing stats presented in ways that are either uninterpretable or unverifiable. I checked the authors’ instructions of PeerJ, PLOS ONE, The Journal of Neuroscience, and The Journal of Experimental Biology. As far as I could find, only The Journal of Neuroscience provided guidance on its standards for reporting statistics. The Journal of Experimental Biology’s checklist says, “For small sample sizes (n<5), descriptive statistics are not appropriate, and instead individual data points should be plotted.”
(Additional: PeerJ does have some guidelines. They are under “Policies and procedures” rather than “Instructions for authors,” so I missed them in my quick search.)
This stands in stark contrast to the notoriously detailed instructions practically every journal has for reference formatting. This is even true for PeerJ, which has a famously relaxed attitude towards reference formatting.
Over the long haul, the proper reporting of statistical tests is probably more important to the lasting value of a paper in the scientific record than the exact reference format.
Judging from how often I see minimal to inadequate presentation of statistics in manuscripts that I’m asked to review, authors need help. Sure, most authors should “know better,” but journals should provide reminders even for authors who should know this stuff.
How about it, editors?
Additional: One editor took me up on this challenge. Alejandro Montenegro gets it. Hooray!
More additional: And Joerg Heber gets it, too. Double hooray!
Update, 20 September 2019: PLOS ONE has now added guidelines for reporting statistics. This seems to have been prompted at least in part by this post.
References
Curran-Everett D, Benos DJ. 2004. Guidelines for reporting statistics in journals published by the American Physiological Society. American Journal of Physiology - Gastrointestinal and Liver Physiology 287(2): G307-G309. http://ajpgi.physiology.org/content/287/2/G307.short
06 August 2018
Maybe we can’t fix “fake news” with facts
There have been a few recent moves in the war on “fake news.” For instance, several platforms stopped hosting a certain conspiracy-laden podcast today. (You can still get all that conspiratorial stuff. The original website is untouched.) But the discussion about “fake news” seems to be focusing on one thing: its content.
Lately, I’ve been thinking about this diagram I made about communication, based on Daniel Kahneman’s work. Kahneman argues you need three things for successful communication:
- Evidence
- Likeability
- Trust
I feel like most of the talk about “fake news” is very focused on “evidence.” This article, for instance, describes some very interesting research about how people view news articles. It’s concerned with how people are very prone to valuing opposing sources, but very poor at evaluating the credibility of those sources.
All good as far as it goes. But, as I mentioned before, it feels a lot like science communicators who, for years and years, tried to beat creationists, flat Earthers, anti-vaccine folks, and climate change deniers by bringing forward evidence. They were using the deficit model: “People must think this because they don't know the facts. We must get the facts to them.”
That didn’t work.
I’m kind of seeing the same trend in fighting fake news. “Remove the falsehoods, and all will fix itself.”
But where I see the truly big gap between where we were and where we are isn’t about facts. It’s about trust.
When you bring evidence to a fight that isn’t about facts, you will lose. Every time. Facts mean nothing without trust. “Check the facts” means nothing when you are convinced everyone is lying to you. This is why conspiratorial thinking is so powerful and dangerous: it destroys trust.
You see the results in how someone who buys into one conspiracy theory often buys into several other conspiracy theories. If you believe Obama wasn’t born in the US because conspiracy (say), it’s not that big a leap to believing the moon landings were fake and the Earth is a flat square.
I have some hypotheses about how America in particular got to this point. I suspect the erosion of trust was slow, gradual, and – importantly – started long before social media. Maybe more like, I don’t know, let’s say 1996.
I don’t know how to reverse a long-term trend of distrust and paranoia. I’m not saying, “We need to understand and sympathize with fascists,” either. But you can’t cure a disease when you misdiagnose it. I just don’t see focusing on the factual content of social media getting very far.
Update, 29 August 2018: Jenny Rohn is discouraged, too.
(W)riters like me, who specialise in evidence-based communication, have been deluded as to the power of our pens in the face of this inexorable tide. ... I am now starting to think that none of this makes much difference. When does any of our evidence, no matter how carefully and widely presented, actually sway the opinion of someone whose viewpoint has been long since been seduced by the propagandists?
... I am starting to believe that the best way to affect the current state of affairs is by influencing those in power, using more private and targeted channels.
Related posts
The Zen of Presentations, Part 59: The Venn of Presentations
Post fact politics catches up to science communication
External links
Fake news exploits our obliviousness to proper sourcing
Looking for life on a flat Earth
How can I convince someone the Earth is round?
Why do people believe conspiracy theories?
25 July 2018
Crayfish clothing contest conqueror!
You are looking at the winner of the International Association of Astacology T-shirt contest! By me!
- First place: “Astacus fluviatilis” by Zen Faulkes
- Second place: “Euastacus,” front and back design by Premek Hamr
- Third place: “Astacolic” by Alexa Ballinger (which you can see here)
I haven’t yet seen the runner-up designs, which were shown at the last IAA meeting in Pittsburgh, but I look forward to seeing them! This started with the quote. I found it on page vi of Thomas Henry Huxley’s monograph on the crayfish (also Google Books edition). It took a little digging to find the author’s complete name and the year of the quote. (Yes, my academic training is showing: obsession with complete and correct citations.)
While looking up the person who wrote the quote, I discovered that Rösel von Rosenhof was an amazing illustrator of the natural world. And he painted crayfish! So I was able to combine this wonderful quote about crayfish with this brilliant plate by the same person.
I cleaned up an image of one of Rösel von Rosenhof’s paintings, removing page blemishes left over from the scan and making it a little brighter.
I kept some of the writing on the painting but repositioned it. The quote that started me off was not on the page, so I had to add it. I had just the thing: the Adorn font family evoked the style of the old plate well. And the wonderful thing about a well-made font family is that you can use a lot of different variations of text, and it still feels coherent.
Adorn has a lot of built-in letter variants, and it was fun to play around with different swashes in CorelDraw! I am pleased people like this, but I’m sure that its winning the contest is more a tribute to the artistry of Rösel von Rosenhof than to my own graphic design skills. But this was not the only T-shirt design I submitted. Oh no. I was having fun with the shirt designs. This was actually the first concept I worked on:
The outline is a signal crayfish claw, if I remember right. The words inside the claw outline are the names of every genus of freshwater crayfish (according to Crandall and De Grave 2017). Originally, I played with the idea of using the name of every species of crayfish, but with over 600 and rising, there were too many and the list was too likely to go out of date soon.
I like this design, but I was never able to get it to look exactly like it was in my head. I wanted the shape of the claw to be defined by the words alone. I like the big, bold shape of the claw and that it includes all the crayfish diversity within it, though.
I worried that the genus names might be too small and fussy for a T-shirt, but I liked that claw shape, so I made this variant:
It’s bold, though I worry that it’s a little simple.
To be honest, this was my favourite design:
I traced an image of a crayfish in CorelDraw. I love Japanese and Chinese calligraphy, so I modified the lines making up the trace to taper, giving it a sort of brush-like appearance. The font is Cherry Blossoms, which I wrapped around an oval. This font, like Adorn, has a lot of options, and I had way too much fun trying out different swashes. (Discovering alternate glyphs and swashes has been a revelation.)
Initially, I only had “International Association of Astacology.” But the words traced out the oval so clearly on the top that the bottom looked broken and incomplete. I needed something to complete the shape, so I added in “the science of crayfish.” I loved this, because I feel like one of the big problems with being a member of the International Association of Astacology is explaining what “astacology” is!
I made variations of the three non-winning designs, too, changing the fonts and colours in different ways. The first version of the brushwork crayfish above had the colours flipped, with the crayfish in red and the text above in black. But since ink was the inspiration, making the crayfish black made more sense.
Even though my favourite design didn’t win, I am completely thrilled to have won the T-shirt contest. I am mentioning this award in my annual review folder!
And maybe a few more people will discover and appreciate the fine artwork of Rösel von Rosenhof.
References
Crandall KA, De Grave S. 2017. An updated classification of the freshwater crayfishes (Decapoda: Astacidea) of the world, with a complete species list. Journal of Crustacean Biology 37(5): 615-653. https://doi.org/10.1093/jcbiol/rux070
External links
August Johann Rösel von Rosenhof
How to swash: using a font’s alternate glyphs, text styles, and numbers
Critique: The Capricorn Experiment, plus: Font families