22 May 2018

Tuesday Crustie: Mollie

Never heard of the US Digital Service? Now you have. Meet Mollie, their mascot:


Yes, those are light sabers. Slate has the story behind this adorable creation.

Hat tip to Miriam Goldstein.

18 May 2018

All scholarship is hard

Nicholas Evans wrote:

The solution levied by synthetic biologists is to get more biologists doing ethics. That this is always the suggestion tells me a) you think it’s easier to think about ethics than synbio; b) you want to keep the analysis in house. Neither are good.

Seconded, confirmed, and oh my God yes. I’ve been through several iterations of this in biology curriculum meetings, where I or others have suggested incorporating some non-biology class into a degree program, or even just an elective that students funded by a training grant have to take. And the reaction is just what Nicholas describes:

“Why don’t we just do it ourselves?”

The single exception seemed to be chemistry. Maybe there was less suspicion because of the blurry line between molecular biology and biochemistry. Or maybe it was because their department was right above ours and we knew the people better. But when it was ethics or writing or statistics: nope, we’ll develop our own class taught by our own faculty in our own department.

I get a lot of variations of “Is it easier to do this or that in academia?” questions on Quora, too.

In an institution, this attitude of “We know best” is made worse by administrative measurements. Departments are evaluated by how many credit hours they generate. So when I suggest students might take a course taught by the Philosophy or Math or Communications or Psychology department, the response is, “We’re just giving credit hours away.” Since credit hours are one of the things looked at to determine resources, it’s an understandable reaction. It’s Goodhart’s law in action: when the measure becomes a target, it ceases to be a good measure.

Nicholas notes:

The vast majority of people talking synbio ethics have almost no training in ethics. You wouldn’t accept that in the technical side of synbio, so don’t accept it in ethics.

Exactly. We often complain about how people don’t respect expertise on many controversial subjects, like evolution, climate change, or vaccination. But we see the same disrespect within universities for scholarship in different fields. Scholarship in every field is hard, and “My field is better than your field” is a shitty game.

Hat tip to Janet Stemwedel.

16 May 2018

The Zen of Presentations, Part 71: Slides per minute

In grad school, I was introduced to a nice, simple rule for giving a talk.

One slide per minute.

I used this rule for a long time. It seemed to work well. In particular, any slide with data seemed to take at least a minute to digest. You have to orient yourself to the axis labels and the units, and then there is interpretation for the reader to do, and that takes a little time.

I did know it was a rule of thumb, not an ironclad rule. I would estimate a slide would be up for a little less than a minute when it was a picture of an animal, or something else with no data and nothing to read.

But then I saw Lawrence Lessig’s presentation, “Free culture” (via Garr Reynolds’s blog). His talk had 243 slides, but it was not 243 minutes. Lessig used his slides in a way I’d never seen before. They weren’t illustrations to be described or explained. His slides were his rhythm section, laying out a beat and emphasizing what he said. Even though his slides were up for such a short time, I never felt confused or lost or thinking, “Wait, wait, go back!”

I was blown away. It showed me how limited my views were about what a “good presentation” could be.

Then I learned about formats like pecha kucha and Ignite talks. Like Lessig, they emphasized quick pacing, running through slides at three to four per minute. And those talks often rocked.
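For reference, the arithmetic behind that pacing is simple. Here is a quick sketch, assuming the commonly cited format parameters (20 slides at 20 seconds each for pecha kucha, 20 slides at 15 seconds each for Ignite):

```python
# Pacing arithmetic for auto-advancing talk formats.
# Assumed parameters: pecha kucha = 20 slides x 20 seconds each,
# Ignite = 20 slides x 15 seconds each.

def slides_per_minute(seconds_per_slide):
    """How many slides go by each minute at a fixed auto-advance rate."""
    return 60 / seconds_per_slide

def talk_length_minutes(n_slides, seconds_per_slide):
    """Total running time of the talk in minutes."""
    return n_slides * seconds_per_slide / 60

print(slides_per_minute(20))        # pecha kucha: 3.0 slides per minute
print(slides_per_minute(15))        # Ignite: 4.0 slides per minute
print(talk_length_minutes(20, 15))  # Ignite: 5.0 minutes total
```

Three to four slides a minute is three to four times the old one-slide-per-minute rule.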

The key to such rapid-fire delivery was planning and practice. The automatic slide advance rule for pecha kucha and Ignite talks forced you to plan and practice relentlessly. Practice never leads you wrong.

There are some images and slides that probably do warrant a full minute. But the audience can often pick up on points faster than you’d think.

There isn’t any magic number of slides in a talk. Your talk can have hundreds of slides. Your talk can have no slides. Or your talk can even have one slide per minute.

Related posts

The Zen of Presentations, Part 40: Lighting a fire under speakers
How Gilmore Girls changed my teaching

External links

Free culture presentation
The “Lessig method” of presentation

11 May 2018

The Zen of Presentations, Part 70: Giving away the plot

Mike Nitenbach wrote:

Huge mistake to design scientific presentation like fucken Sherlock Holmes story.

Becca replied:

If you set things up and present your logic at every step, the audience can tell where things are headed without being explicitly told in advance.

Over on Better Posters, I’ve talked a lot about the Columbo principle. Columbo taught us that even when the audience knows the answer, the fun can be in learning how you prove it. I think that advice works well for titles, but it still implies a sort of “mystery” aspect that Nitenbach is criticizing.

But you can structure a talk so that you tell the audience what’s going to happen without leaving them disappointed.

When making Star Trek II: The Wrath of Khan, writer Nicholas Meyer (who also directed) was faced with a problem: Spock, the show’s most popular character, was going to die.


Actor Leonard Nimoy was bored with the part, not interested in doing another movie, and was sort of lured back in by the prospect of killing off the character. Fans learned about this, and were upset. Meyer got death threats. So what did Meyer do?

He killed Spock in the opening scene.

Of course, Spock doesn’t actually die at that point. He pretends to die as part of the Kobayashi Maru training scenario. So when the film winds up for Spock’s actual, powerful death scene, the audience isn’t thinking, “This is the one where Spock dies!”

Meyer said he learned on this movie that you can show an audience anything in the first ten minutes of a movie, and they will forget about it by the end of the movie.

You can do the same thing in a talk. You can tell people right at the start what you found. If you involve them and tell the story of that process well, you can bring them through to the end, and they will think, “Oh yeah, I already knew that!”


Meyer said in the film’s Blu-ray commentary:

The question is not whether you kill him. It’s whether you kill him well. If it’s perceived as a working out of a clause in a star’s contract, then they’re gonna hate it. If it’s organic, if it’s really part of the story, then no one’s gonna object.

Or, to paraphrase Anton Chekhov: if you want to fire a gun in the third act, load it in the first act. The audience will forget the gun was even loaded until that final climactic shot.

External links

Detective stories: “Whodunnit?” versus “How’s he gonna prove it?”
38 Things We Learned from the ‘Star Trek II’ Commentary

04 May 2018

Rhodes Trust is academia’s equivalent to Confederate statues and flags

Bree Newsome taking down South Carolina Confederate flag

In the last few years in the United States, there’s been debate about the presence of Confederate flags and statues in public places. I credit Bree Newsome for getting this ball rolling. The Confederacy was built on the notion that slavery was right and just.

Continuing to display the symbols of that failed government on public grounds is tacit endorsement of the ideals of white supremacy. Put those statues and flags that are on government property in museums.

This morning, I was given a link to a fellowship and was asked to promote it. I had two problems with that, and the first was that the fellowship had a lot of ties to the Rhodes Trust.

As a student, I learned about Cecil Rhodes because of his association with Oxford’s Rhodes Scholarships (supported by the Rhodes Trust). That name had a positive association for me.

It was only later that I learned, “Man, this dude was racist as fuck.” In Born a Crime, Trevor Noah says if many Africans had a time machine, they wouldn’t go back in time to stop Adolf Hitler, they’d be packing heat for Cecil Rhodes. (Edit: Yes, this is admittedly a big gap in my education. I should have known.)

"Africa is still lying ready for us it is our duty to take it. It is our duty to seize every opportunity of acquiring more territory and we should keep this one idea steadily before our eyes that more territory simply means more of the Anglo-Saxon race more of the best the most human, most honorable race the world possesses." - Cecil Rhodes

I wish I had learned about Rhodes’s colonial racism first, not years after hearing about the scholarships. The misery Rhodes caused in life seems more important to me than the money he left behind after death.

The second problem I had with this fellowship was that it was for “leading academic institutions.” I’m pretty sure that means American Ivy League institutions and English Oxbridge universities, and not the sort of public, regional institutions where most students in the world get their university educations. (The sort of place I work.)

Racist and elitist was not a winning combination for me. I did not push out notification of the fellowship. Admittedly, this was made easier because the deadline was past, but I wouldn’t have done it regardless.

Is Rhodes the only example? When I mentioned this on Twitter, “Sackler” came up. Like Rhodes, the Sackler name is one I first heard in a positive light: the Sackler symposium on science communication, which I’ve blogged about several times (here in 2012, here in 2013). But the Sackler family is problematic: they made a lot of money from opioids, which are now at the center of a major public health crisis. And that name is on museums and medical schools.

As with Rhodes, I should have known about the Sacklers’ drug connection before I knew about the symposium. That’s not good.

Turning down money isn’t as easy as taking down a flag on a pole, or a statue in a park. But the principle is the same. Academia needs to look harder at how to stop giving these unspoken endorsements to people who caused a lot of suffering.

Update, 14 May 2018: Poll results from Twitter. 88% of people surveyed said they’d take money with the Rhodes name.


Picture from here.

Rethinking the graduate admissions process

Warning: The following post is a piece of devil’s advocacy. I’m not sure I believe myself.

The process for selecting graduate students is deeply flawed and should be revamped from the ground up. Almost everything in the admission process works against increasing diversity in academia.

Let’s take the elements apart piece by piece.

Application fee: Many programs charge an application fee. This works against students who are good, but economically disadvantaged. There is no way that those fees are paying the bills of the graduate office. Friction can be a useful thing in preventing spurious applications, but generally the cost is so high that multiple applications quickly add up and remove options from students who can’t pay them all.

GRE scores: The cost of taking the test and submitting scores is another economic barrier. Many have written about the low predictive power of the test (also here).

Undergraduate GPA: Grade inflation is making it difficult to distinguish student performance. Plus, GPAs are not exactly comparable from institution to institution, even in how they are calculated (is the top score 4 or 4.3?), a situation that gets even more complex when students cross national borders. And it’s highly likely that the same grade point average will be interpreted differently depending on the issuing institution.

Recommendation letters: So much room for bias here. People write different recommendations for men and women; men are about twice as likely as women to get glowing letters. People are influenced by the university and the seniority of the letter writer, and probably other factors that have nothing to do with the candidate. Recommendation letters are the primary tool for old boys’ networks to reinforce themselves.

CVs: Recently, we learned that a large number of graduate fellowship applicants were told they didn’t get the award because they didn’t have a publication yet. These are supposed to be people at the start of their academic careers, so it is not reasonable to expect them to have a lot on a CV. And given that so many places have not cracked down on unpaid internships, experience on paper will tend to favour people from well-off families. Again.

Personal statement: This one might be okay, as long as applicants give no indication of their gender. Because the name alone works against increasing diversity.

If grad review is so messed up, what can we do?

One idea is to stop the tedious review by committee and just let individual faculty pick students they want to supervise. It doesn’t eliminate all the biases, but at least it’s less work.

In research grant applications, serious suggestions occasionally crop up that the peer review process is kind of ineffective and that we’d be better off assigning funding by lottery. Maybe we should consider admitting grad students by lottery, too.
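A minimal sketch of what lottery admissions could look like (the eligibility bar, field names, and numbers here are all hypothetical, just to make the idea concrete): every applicant who clears a basic bar goes into one pool, and offers are drawn at random.

```python
import random

def lottery_admit(applicants, meets_bar, n_offers, seed=None):
    """Draw offers at random from the applicants who clear a basic bar.

    The bar is a predicate; everyone who clears it has an equal chance.
    """
    pool = [a for a in applicants if meets_bar(a)]
    rng = random.Random(seed)  # seeded so a draw can be audited and reproduced
    return rng.sample(pool, min(n_offers, len(pool)))

# Hypothetical applicant pool and bar, for illustration only.
applicants = [{"name": f"applicant {i}", "gpa": 2.0 + 0.2 * i} for i in range(12)]
offers = lottery_admit(applicants, lambda a: a["gpa"] >= 3.0, n_offers=3, seed=1)
print([a["name"] for a in offers])
```

Everyone above the bar has the same odds, which removes the committee’s room for the biases listed above, at the cost of ignoring everything else about the applicant.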

On Twitter, I asked students what they would like to see in the application process. Zachary Eldredge brings up the idea of a lottery, and Olivia mentions a face-to-face interview. Will Lykins says it would be good to normalize non-academic work on the forms, since many students increasingly have to work to make ends meet instead of doing unpaid enrichment activities.

Related posts

I come to bury the GRE, not to praise it
How do you test persistence?
Why grade inflation is good for the GRE
Does grad school have a mismatch problem?
The “Texas transcript” is a good idea, but won’t solve grade inflation

18 April 2018

Teaching online and inclusion

"Do you expect me to talk, Goldfinger?" "No Mr. Bond, I expect you to make this online course ADA compliant!"

I’ve been teaching a completely online class this semester. I’ve done partly online classes, and practically live online anyway, so I thought this would be a fairly simple thing for me to do.

It has not. It has been a real eye-opener for thinking about student needs.

One of the biggest challenges I’ve been working on is making the class compliant with the rules for students with disabilities. Whether there are students in the class with declared disabilities or not, you must make every item in the class as readily available and accessible as if there were.

This means video lectures need closed captioning. There is voice recognition software that does closed captioning automatically, which is great, but it never does it perfectly. Every time I say, “Doctor Zen,” the software puts in, “doctors in.” This means you have to go in, listen to the entire lecture, and proofread the captioning for the entire lecture.

Similarly, every image needs a description so that someone who is blind or otherwise visually impaired can understand the material. And many scientific diagrams are complex and challenging. Today, I was faced with trying to write a complete description of this:


Here’s what I came up with for the concept map above:

Human genome influences traits. Human genome has 2 copies in every cell. 1 copy is made of 3 billion base pairs. Cell makes up tissue. In cell, genome divided into nuclear genome and mitochondrial genome. Cells manifest traits. Tissues make up organs. Tissues manifest traits. Organs make up body. Body manifests traits. Traits leads back to Lesson 1. Mitochondrial genome has 1 circular chromosome. Mitochondrial genome is many per cell. Circular chromosome is many per cell. Circular chromosome made of nucleic acid and histone proteins. Nuclear genome is one per cell. Nuclear genome is 23 pairs of linear chromosomes. 23 pairs of linear chromosomes has 22 pairs called autosomes. 23 pairs of linear chromosomes has 1 pair called sex chromosomes. Sex chromosomes are XX for female. Sex chromosomes are XY for male. 23 pairs of linear chromosomes are made of nucleic acid and histone proteins. Nucleic acid wraps around histone proteins. Nucleic acid has two types, DNA and RNA. RNA leads to lesson 3. DNA is composed of deoxynucleotides. DNA is double stranded. Double stranded leads to helical shape. Double stranded by base pairs. Deoxynucleotides are 4 types of nitrogenous bases. Nitrogenous bases can form base pairs. Nitrogenous base connects to A, T, C, G. A base pairs with T and vice versa. G base pairs with C and vice versa.

Writing that description... took time.

Anyone who thinks that online teaching is going to be some sort of big time saver that will allow instructors to reach a lot more students has never prepared an online class. It’s long. It’s hard. It often borders on torturous (hence the “No Mr. Bond” gag at the top of the post).

These things take time, but I don’t begrudge the time spent. It’s the right thing to do. It’s forced me to think more deeply about how I can provide more resources that are more helpful to more students. It’s not just deaf students who can benefit from closed captions, for instance. Someone who can hear could benefit from seeing words spelled out, or maybe use them when they are listening in a noisy environment, or one where sound would be distracting.

And I keep thinking that if putting these materials together takes me a lot of work, it’s nothing compared to what students who need these materials face navigating through courses every day.

External links

Flowcharts and concept maps

16 April 2018

“It makes no sense!” versus history

There’s no channel 1 on televisions in North America.

It makes no sense.

That is, it makes no sense from the point of view of an engineer who has to design a channel system today, starting from scratch.

It makes sense from the point of view of a historian examining how broadcasting developed in North America.

Sometimes, discussions about academic systems of various sorts feel like people complaining mightily about how stupid it is that there is no channel 1, and proposing fix after fix after fix to correct it. And they do so in an environment where lots of people aren’t bothered by the lack of channel 1. And they do so even if the proposed fixes will mean some people’s televisions won’t work any more.

“Sure, but they’ll be better televisions!” Maybe, but it misses that a consistent channel numbering system is not what most people want out of a television.

03 April 2018

The NSF GRFP problem continues

This morning, a fine scientist congratulated two undergraduates in her lab about winning National Science Foundation (NSF) Graduate Research Fellowship Program (GRFP) awards. I thought, “Huh. They’re out? And two seems like a lot from one lab.”

A few years ago, Terry McGlynn wrote an important blog post about how tilted the playing field is for the NSF GRFP awards. He compared awards to Harvard students (with about 7,000 undergraduates) to the more than 20 campuses in the California State University system (over 400,000, according to a check of Wikipedia).

The NSF is good about making it easy to find a list of all 2,000 awards in this program. I went looking for the same comparison of one Ivy League university to an entire state’s system. Embarrassingly, I screwed up the calculation on the first pass, not realizing that several California State universities don’t say “California State” in their name, unlike the University of Texas institutions.

Harvard got 43, and all of the California State campuses together got 50 (thanks to Terry for counting here and here).

Cal Poly Pomona 4
Cal Poly SLO 5
CSUCI 1
CSUDH 1
CSU Fresno 1
CSU Fullerton 8
CSULB 2
CSULA 1
Sac State 1
CSUSB 1
CSUN 5
CSUSM 3
SDSU 6
SFSU 6
SJSU 3
Humboldt State 2
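A quick check of the arithmetic in that comparison, using the counts from the list above and the approximate enrollment figures mentioned earlier (about 7,000 Harvard undergraduates versus over 400,000 students across the California State campuses):

```python
# GRFP award counts copied from the list above.
csu_awards = {
    "Cal Poly Pomona": 4, "Cal Poly SLO": 5, "CSUCI": 1, "CSUDH": 1,
    "CSU Fresno": 1, "CSU Fullerton": 8, "CSULB": 2, "CSULA": 1,
    "Sac State": 1, "CSUSB": 1, "CSUN": 5, "CSUSM": 3,
    "SDSU": 6, "SFSU": 6, "SJSU": 3, "Humboldt State": 2,
}
csu_total = sum(csu_awards.values())
print(csu_total)  # 50 awards across 16 campuses

# Per-student award rates, using the approximate enrollments from the post.
harvard_rate = 43 / 7_000
csu_rate = csu_total / 400_000
print(round(harvard_rate / csu_rate))  # Harvard's rate is roughly 49x higher
```

Put per student, the gap is even starker than the raw counts suggest.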

So one lab at Harvard alone matched the entire output of eight different California State universities (each taken separately, not combined).

If this sort of pattern intrigues you, you must go to Natalie Telis’s post, where she digs down into the numbers. Not just this year’s, but over 28,000 awardees’ worth of data, from 2011 to 2017. It’s bloody brilliant. One of her first points is, “The most expensive undergraduate schools have an extreme excess of (NSF GRFP) recipients.” She also makes some comments on Twitter about this.

I can’t wait to see what she finds for 2018 data.

Matt Cover did some similar things the previous year, and found no relationship between institutional enrollment and number of grants.

Update, 2 August 2019: Here’s the second half of Natalie Telis’s analysis of GRFP awards.

External links

NSF Graduate Fellowships are a part of the problem
The price of a GRFP, part 1
Matt Cover thread from 2017

28 March 2018

Innovation must be accompanied by education


When Apple launched the iPod, the company had to put a lot of effort into educating people about digital music.

Mr. Jobs pulled the white, rectangular device out of the front pocket of his jeans and held it up for the audience. Polite applause. Many looked like they didn’t get it.

That was just fine with Mr. Jobs. They’d understand soon enough.

Apple had to inform the mass market that digital downloads could be legal (remember Napster?). They had to let people know how much music you could have with you. They had to let people know about the iTunes store. Without all those pieces of the puzzle, the iPod would have tanked.

I was reminded of this scene when Timothy Verstynan asked:

Why can’t we have a scientific journal where, instead of PDFs, papers are published as @ProjectJupyter notebooks (say using Binders), with full access to the data & code used to generate the figures/main results? What current barriers are preventing that?

I follow scientific publishing at a moderate level. I write about it. I’m generally interested in it. And I have no idea what Jupyter notebooks and binders are. If I don’t know about it, I can guarantee that nobody else in my department will have the foggiest idea.

This is a recurring problem with discussions around reforming or innovating in scientific publishing. The level of interest and innovation and passion around new publication ideas just doesn’t reach a wide community.

I think that this is because the people interested might undervalue the importance of educating other scientists about their ideas. Randy Olson talks a lot about how scientists are cheapskates with their communications budgets. They just don’t think it’s important, and assume the superiority of the ideas will carry the day.

I’ve talked with colleagues about open access many times, and discover over and over that people have huge misconceptions about what open access is and how it works. And open access is something that has been around for a decade and has been written about a lot.

Publishing reformers drop the iPod, but don’t do the legwork to tell people how the iPod works.

So to answer Timothy’s initial question: the current barrier is ignorance.

27 March 2018

What defines a brain?

A side effect of my bafflement yesterday over how lobsters became some sort of strange right-wing analogy for the rightness of there being winners and losers (or something) was getting into a discussion about whether lobsters have brains.

That decapod crustaceans are brainless is a claim I have seen repeated many times, often in the service of the claim that lobsters cannot feel pain. This article, refuting Jordan Peterson, said:

(L)obsters don’t even have a brain, just an aglomerate of nerve endings called ganglia.

This is a bad description of ganglia. It makes it sound like there are no cell bodies in ganglia, where there usually are. Here are some. This is from the abdominal ganglion of Louisiana red swamp crayfish (Procambarus clarkii):


These show cell bodies of leg motor neurons from several species (sand crabs and crayfish, I think; these pics go back to my doctoral work).


These are neurons in a ganglion from a slipper lobster (Ibacus peronii), where those big black cell bodies are very easy to see:


And these are leg motor neurons in slipper lobster:


And there is substantial structure within that alleged “not a brain” in the front:



And we’ve known this for well over a century, as this drawing from 1890 by master neuroanatomist Gustav Retzius shows:



So ganglia are more than “nerve endings.” Putting that aside, are there other features that make brains, brains?

Intuitively, when I think about brains, I think of a few main features. Two anatomical, and one functional:

  1. Brains are a big, single cluster of neurons. Even though there may be many neurons in, say, the digestive system (and there are not as many as some people claim), that system is so diffuse that nobody would call it a brain.
  2. It’s in the head, near lots of sensory organs. In humans, our brain is right next door to our eyes, ears, nose, and mouth, which covers a lot of the old-fashioned senses.
  3. It’s a major coordinating center for behaviour.

Decapod crustaceans (not to mention many other invertebrates) meet all those criteria. Sure, the proportion of neurons in the decapod crustacean brain may be smaller than in vertebrates, but I have never seen a generally agreed upon amount of neural tissue that something must have to be a brain instead of a “ganglion in the front of the animal.”

I have a sneaking suspicion that some people will argue that only vertebrates can have brains because we are vertebrates, and vertebrates must be special, because we are vertebrates. That is, people will define brains in a way to stroke human egos.

And, as I implied above, some people make the “no brains” claim out of self-interest. I don’t think it’s any accident that I see “lobsters don’t have brains” coming from institutes that have close ties to commercial lobster fisheries.

I suppose that some could argue that limiting the word “brain” to vertebrates is a way of recognizing that vertebrate and invertebrate nervous systems are structured very differently. They are, but why only do this for one part of the nervous system? This is a little like saying “invertebrates don’t have eyes” because they have compound eyes instead of our camera-style eyes. We routinely give things in invertebrates and vertebrates the same names if they have the same functions.

And in practice, I see people referring to octopus brains all the time. They do so even though, like other invertebrates, a large proportion of the nervous system sits outside the brain. From memory, roughly half the neurons in an octopus reside in its arms.

In practice, I am far from the only person who calls the clump of neurons at the front end of decapod crustaceans a “brain.” From this page:


So, fellow neuroscientists, if you don’t think invertebrates can have brains, why not? What is your dividing line?

Hat tip to Hilary Gerstein.

26 March 2018

I was unaware of how lobsters got sucked into an all-encompassing conspiracy theory

Miriam Goldstein and Bethany Brookshire burst my cosy bubble of ignorance. Today I learned that Jordan Peterson, a current darling of conservatives, drags lobsters into his mish-mash of writings to make white dudes feel good about themselves. Allow me an extended quote from this Vox article:

The book is a kind of bridge connecting his academic research on personality and his political punditry. In it, Peterson argues that the problem with society today is that too many people blame their lot in life on forces outside their control — the patriarchy, for example. By taking responsibility for yourself, and following his rules, he says, you can make your own life better.

The first chapter, about posture, begins with an extended discussion of lobsters. Lobster society, inasmuch as it exists, is characterized by territoriality and displays of dominance. Lobsters that dominate these hierarchies have more authoritative body language; weaker ones try to make themselves look smaller and less threatening to more dominant ones.

Peterson argues that humans are very much like lobsters: Our hierarchies are determined by our behaviors. If you want to be happy and powerful, he says, you need to stand up straight:

If your posture is poor, for example — if you slump, shoulders forward and rounded, chest tucked in, head down, looking small, defeated and ineffectual (protected, in theory, against attack from behind) — then you will feel small, defeated, and ineffectual. The reactions of others will amplify that. People, like lobsters, size each other up, partly in consequence of stance. If you present yourself as defeated, then people will react to you as if you are losing. If you start to straighten up, then people will look at and treat you differently.

“Look for your inspiration to the victorious lobster, with its 350 million years of practical wisdom. Stand up straight, with your shoulders back,” he concludes, in one of the book’s most popular passages.

The lobster has become a sort of symbol of his; the tens of thousands of Peterson fans on his dedicated subreddit even refer to themselves as “lobsters.”

This is classic Peterson: He loves to take stylized facts about the animal kingdom and draw a one-to-one analogy to human behavior. It also has political implications: He argues that because we evolved from lower creatures like lobsters, we inherited dominance structures from them. Inequalities of various kinds aren’t wrong; they’re natural.

“We were struggling for position before we had skin, or hands, or lungs, or bones,” he writes. “There is little more natural than culture. Dominance hierarchies are older than trees.”

Foul!


The logical fallacy is appeal to nature.

As analogies go, comparing humans to lobsters is... not a good one. This article provides a pretty good response, so I don’t have to. (Though I say lobsters have brains. But that doesn’t detract from the main points.)

Additional, 19 May 2018: Bailey Steinworth argues the diversity of marine invertebrate behaviour does not support Peterson’s ideas, either.

External links


Psychologist Jordan Peterson says lobsters help to explain why human hierarchies exist – do they?

20 March 2018

The impossibility of species definitions


Sad news about the death of Sudan, the last male northern white rhino, prompted some discussion about whether the northern white rhino is a species or a subspecies. The TetZoo blog has a nice look at this specific issue. I’d like to take a broader look at the whole problem of why defining species is so hard.

What defines a species is a long-running argument in biology. It’s practically its own cottage industry. There is much effort to define species precisely, for all sorts of good reasons. And that desire for clear, precise definitions often appears on websites like Quora. Questions come up like, “If Neanderthals bred with us, doesn’t that mean, by definition, they are the same species?”

But as much as we want clear definitions in science, there is a problem. You can’t always draw sharp dividing lines on anything that is gradual. (Philosophers know this as the continuum fallacy.)

To demand a precise definition of species is like demanding to know the precise moment that a man is considered to have a beard. For instance, I think we can agree that Will Smith, in this pic from After Earth (2013), does not have a beard:


And that in Suicide Squad (2016), Smith pretty clearly does have a beard:


But does Smith have a beard in this pic? Er... there’s definitely some facial hair there.


What is the exact average hair length that qualifies a man to be “bearded”? There isn’t one. But that doesn’t mean that you can’t meaningfully distinguish After Earth Smith from Suicide Squad Smith.

It’s a problem that Charles Darwin recognized. In Darwin’s view, speciation resulted from the slow, gradual accumulation of tiny, near imperceptible changes, and he frequently made the point that “varieties” could be considered “incipient species.” At any given point in time, some groups would be early in that process of divergence, and some would be further along.

That’s why we shouldn’t expect there to be clear, consistent species definitions that apply across the board and are helpful in every case.

External links

The last male northern white rhino has died
How Many White Rhino Species Are There? The Conversation Continues

16 March 2018

The last round of the year

Reason I love the AFLW competition, number 2,749:

Going into this last round of the home and away season, five out of eight teams had a shot at the grand final. And no team was guaranteed a slot in the grand final.

And the mighty Demons are well placed to be one of the two teams in the final. Go the Dees!

It's going to be a series of nail-biting games, and I love it.

13 March 2018

“Mind uploading” company will kill you for a US$10,000 deposit, and it’s as crazy as it sounds

Max Headroom was a 1980s television series that billed itself as taking place “20 minutes into the future.” In 1987, its second episode was titled “Deities”. It concerned a new religion, the Vu-Age church, that promised to scan your brain and store it for resurrection.

Vanna Smith: “Your church has been at the forefront of resurrection research. But resurrection is a very costly process and requires your donations. Without your generosity, we may have a long, long wait... until that glorious day... that rapturous day... when the Vu-Age laboratories perfect cloning, and reverse transfer.”

That episode suddenly feels relevant now, although it took a little longer than 20 minutes.

On Quora, which I frequent, I often see people asking about mind uploading. My usual response is:


So I am stunned to read this article about Nectome, which, for the low deposit price of US$10,000, will kill you and promise to upload your mind somewhere, sometime, by a process that hasn’t been invented yet.

If your initial reaction was, “I can’t have read that right, because that’s crazy,” you did read it right, and yes, it is crazy.

In fairness, it is not as crazy as it first sounds. They don’t want to kill you when you’re healthy. They are envisioning an “end of life” service when you are just at the brink of death. This makes it moderately more palatable, but introduces more problems. It’s entirely possible that people near the end of life may have tons of cognitive and neurological problems that you really wouldn’t want to preserve.


How do they propose to do this? Essentially, this company has bought into the idea that everything interesting about human personality is contained in the connectome:

(T)he idea is to retrieve information that’s present in the brain’s anatomical layout and molecular details.

As I’ve written about before, the “I am my connectome” idea is probably badly, badly wrong. It completely ignores neurophysiology. It’s a good selling point for brain mapping grants and for basic research. But as a business model, it’s an epic fail.

And what grinds my gears even more is that this horrible idea is getting more backing than many scientists have ever received in their entire careers:

Nectome has received substantial support for its technology, however. It has raised $1 million in funding so far, including the $120,000 that Y Combinator provides to all the companies it accepts. It has also won a $960,000 federal grant from the U.S. National Institute of Mental Health for “whole-brain nanoscale preservation and imaging,” the text of which foresees a “commercial opportunity in offering brain preservation” for purposes including drug research.

I think it is good to fund research on high-speed imaging and analysis of synaptic connections. But why does this have to be tied to a business? Especially one as batshit crazy as Nectome?

Co-founder Robert McIntyre says:

Right now, when a generation of people die, we lose all their collective wisdom.

If only there were some way that people could preserve what they thought about things... then we could know what Aristotle thought about stuff. Oh, wait, we do. It’s called “writing.”

I can’t remember the last time I saw a business so exploitative and vile. And in this day and age, that’s saying something.

Update, 3 April 2018: MIT is walking away from its relationship with the company. Good. That said, Antonio Regalado notes:

Although MIT Media Lab says it’s dropping out of the grant, its statement doesn’t strongly repudiate Nectome, the brain downloading idea, or cite the specific ethical issue (encouraging suicide). So it’s not an apology or anything.

Hat tip to Leonid Schneider and Janet Stemwedel.

Related posts

Overselling the connectome
Brainbrawl! The Connectome review
Brainbrawl round-up

External links

A startup is pitching a mind-uploading service that is “100 percent fatal”

How many learning objectives?




I am teaching an online course this semester, and I had to undergo training and review of the class before it ran. In preparing it, one of the key things that the instructions stressed was the importance of having learning objectives.

All that material gave me good insight into how to write a single learning objective, but there was almost nothing about how to put them all together. And right now I’m struggling with what a good number of learning objectives is. So far, the only direct answer I’ve seen is:

How many learning outcomes should I have?
This is tied to the length and weight of the course
How many learning objectives should I have?
This is tied to the number of learning outcomes.


You’re not helping.

Most courses track lessons in some standard unit of time. A day. A week. Surely there has to be some sort of thinking about what a reasonable number of learning objectives is for a given unit of time. It’s probably not out of line for me to guess that one hundred learning objectives in a single day would be too many. On the other hand, a single learning objective for a week might be too few.

Right now, I have some weeks with ten or more learning objectives. I’m wondering if that’s too many. And I’m just lost. I have no way of knowing.

It might sound like it’s just a matter of looking at student performance and adjusting as you go. But in a completely online course, it is so hard to adjust. You have to prepare almost everything in advance, and you can’t easily go faster or slower the way you can when you meet students in person.

I’m not sure how much student feedback will help, because everyone’s tendency is probably to say, “Yes, give me fewer objectives so I have more time to master each one.” And sometimes students aren’t good at assessing what they need to learn.

Maybe this is a gap in the education literature that needs filling.

Picture from here.

12 March 2018

How anonymous is “anonymous” in peer review?

Last time, I was musing about the consequences of signing or not signing reviews of journal articles. But it got me wondering just how often people sabotage their own anonymity.

As journals have moved to online submission and review management systems, it’s become standard for people to be able to download Word or PDF versions of the article they are reviewing.

The last article I reviewed was something like 50 manuscript pages. There was no way I was going to write out each comment as “Page 30, paragraph 2, line 3...” I made my comments using Word’s review feature instead. And every one of those comments was tagged with my initials.

As more software uses cloud storage for automatic saving, more packages are asking people to create accounts, and saving that identifying information along with documents. Word flags comments with your initials, and Acrobat Reader’s comment balloons are only a little more subtle.

Ross Mounce and Mike Fowler confirmed that this happens:

Yep. Metadata tags are great. 😀 Even simply the language setting can be a giveaway: Austrian English is a huge clue in a small field [real, recent example!]. "Blind" peer review is not always effective...
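Checking for this sort of leak is not hard, because a .docx file is just a zip archive whose author fields live in docProps/core.xml. Here’s a minimal sketch, using only Python’s standard library, of how an editor (or a cautious reviewer) might peek at what a manuscript file says about its author. The function name `docx_identity` is mine, not part of any journal’s system:

```python
# A .docx file is a zip archive. The creator and "last modified by"
# fields are stored in docProps/core.xml, in the Dublin Core and
# core-properties namespaces.
import zipfile
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"
CP = "{http://schemas.openxmlformats.org/package/2006/metadata/core-properties}"


def docx_identity(docx_file):
    """Return (creator, last_modified_by) recorded in a .docx file."""
    with zipfile.ZipFile(docx_file) as z:
        root = ET.fromstring(z.read("docProps/core.xml"))
    return root.findtext(DC + "creator"), root.findtext(CP + "lastModifiedBy")
```

Run it on a manuscript before you upload it, and you may be surprised what your word processor has quietly recorded.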

Having wondered how often reviewers do this, I wonder if editorial staff ever check to make sure reviewers don’t accidentally out themselves.

Picture from here.

03 March 2018

Signing reviews, 2018

Seen on Twitter, two days apart.

First, Kay Tye:

Dear everyone! You don’t need to wonder if I reviewed your paper anymore. I now sign ALL of my reviews.
Inspired by @pollyp1 who does this and I asked her why and she said “I decided to be ethical.” I do it to promote transparency, accountability and fairness. #openscience

I particularly noted this reply from Leslie Vosshall (the “@pollyp1” mentioned in the prior tweet):

Open peer review would instantly end the dangerous game of “guess the reviewer.” This happens all the time with senior people guessing that some junior person trashed their paper and then holding grudges. But usually they guess wrong and inadvertently damage innocents.

Second, one day later, Tim Mosca:

Since becoming an assistant prof, I’ve reviewed ~ 12 papers. Signed one. Received a phone call from the senior (tenured) author asking, “Who do you think you are to make anything less than glowing comments?” So there are still dangers for young, non-tenured profs when reviewing.

The threads arising from these tweets are well worth perusing.

I’ve signed many reviews for a long time, and nothing bad has happened. I used to be much more in favour of everyone signing reviews, but long discussions about the value of pseudonyms on blogs, plus the ample opportunity to see people behaving badly on social media, significantly altered my views. But the problem is that everyone will remember a single bad story, and not pay attention to the many times where someone signed a review and everything was fine. Or cases where something positive came out of signing peer reviews.

How can we weigh the pros of transparency against the cons of abuse? I don’t know where that balance is, but I think there has to be some kind of balance. But the underlying issue here is not signing reviews; it’s that people feel they can be vindictive assholes. Universities do not do enough to address that kind of poor professional behaviour.

19 February 2018

Once around the earth

Gordon Pennycook asked how far people have moved in pursuit of their academic careers. I’d never added it up before. I found an online distance calculator, and off I went.

From my high school town of Pincher Creek, Alberta to the University of Lethbridge for my bachelor’s degree: 100 km

From Lethbridge to the University of Victoria for my graduate work: 1,268 km (driving)

From Victoria to Montreal for my first post-doc: 4,732 km (driving)

From Montreal to Melbourne, Australia for my second post-doc: 16,755 km

From Melbourne to Pincher Creek, for a brief period of unemployment: 13,874 km

From Pincher Creek to Edinburg, Texas to start my tenure-track position: 3,476 km

And from Edinburg to an undisclosed location, where I am on leave: 3,090 km

Grand total: 43,295 km! For comparison, the circumference of the Earth is 40,075 km.
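If you want to check the tally (or run your own), the legs are easy to total up with a few lines of Python:

```python
# Sum the legs of one academic career's moves and compare the total
# to the circumference of the Earth (40,075 km).
legs_km = {
    "Pincher Creek -> Lethbridge": 100,
    "Lethbridge -> Victoria": 1_268,
    "Victoria -> Montreal": 4_732,
    "Montreal -> Melbourne": 16_755,
    "Melbourne -> Pincher Creek": 13_874,
    "Pincher Creek -> Edinburg": 3_476,
    "Edinburg -> undisclosed location": 3_090,
}

total = sum(legs_km.values())
print(f"Grand total: {total:,} km")                 # Grand total: 43,295 km
print(f"Trips around the Earth: {total / 40_075:.2f}")  # 1.08
```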

You may now judge me on my carbon footprint. I would hate to start adding in the miles for conferences on top of that.

Update, 5 October 2020: Refreshed link to distance calculator.

06 February 2018

Tuesday Crustie: Know your Lamingtons

Canada has butter tarts. Australia has lamingtons.


But because Australia is the lucky country, it not only has lamingtons as dessert, but Lamington as a badass crayfish:


I loved this description (my emphasis):

One of Australia's most unusual creatures, the Lamington spiny crayfish, lives there and has been known to startle bushwalkers by confronting them in battle stance, clicking claws and warning hiss.

It’s like this crayfish is trying to live up to this description of Australia from Douglas Adams:

Australia is like Jack Nicholson. It comes right up to you and laughs very hard in your face in a highly threatening and engaging manner.

External links

Feisty crayfish surprise in rainforest

Dessert pic from here; crayfish pic from here.