28 March 2018

Innovation must be accompanied by education


When Apple launched the iPod, the company had to put a lot of effort into educating people about digital music.

Mr. Jobs pulled the white, rectangular device out of the front pocket of his jeans and held it up for the audience. Polite applause. Many looked like they didn’t get it.

That was just fine with Mr. Jobs. They’d understand soon enough.

Apple had to inform the mass market that digital downloads could be legal (remember Napster?). They had to let people know how much music you could have with you. They had to let people know about the iTunes store. Without all those pieces of the puzzle, the iPod would have tanked.

I was reminded of this scene when Timothy Verstynen asked:

Why can’t we have a scientific journal where, instead of PDFs, papers are published as @ProjectJupyter notebooks (say using Binders), with full access to the data & code used to generate the figures/main results? What current barriers are preventing that?

I follow scientific publishing at a moderate level. I write about it. I’m generally interested in it. And I have no idea what Jupyter notebooks and binders are. If I don’t know about it, I can guarantee that nobody else in my department will have the foggiest idea.

This is a recurring problem with discussions around reforming or innovating in scientific publishing. All the interest and innovation and passion around new publication ideas just doesn’t reach a wide community.

I think this is because those interested might undervalue the importance of educating other scientists about their ideas. Randy Olson talks a lot about how scientists are cheapskates with their communications budgets. They just don’t think it’s important, and assume the superiority of their ideas will carry the day.

I’ve talked with colleagues about open access many times, and discovered over and over that people have huge misconceptions about what open access is and how it works. And open access has been around for a decade and has been written about a lot.

Publishing reformers drop the iPod, but don’t do the legwork to tell people how the iPod works.

So to answer Timothy’s initial question: the current barrier is ignorance.
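
To do my small part against that ignorance: a Jupyter notebook, I have since learned, is a document that interleaves explanatory text with runnable code, and Binder is a service that runs notebooks in the browser so readers don’t have to install anything. A minimal sketch of the kind of “paper as notebook” cell Verstynen is imagining might look like this (the data file and column names are hypothetical placeholders, not from any real paper):

```python
# One "cell" of a hypothetical paper-as-notebook. In a Binder-hosted
# repository, the data file ships alongside the notebook, so any reader
# can re-run this cell and regenerate the figure from scratch.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("data/measurements.csv")  # hypothetical data file

fig, ax = plt.subplots()
ax.scatter(data["treatment"], data["response"])  # hypothetical columns
ax.set_xlabel("Treatment")
ax.set_ylabel("Response")
fig.savefig("figure1.png")  # the paper's "Figure 1" is this output
```

The figure in such a paper is not a static image pasted into a PDF; it is the output of code that any reader can inspect, re-run, and tinker with.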

27 March 2018

What defines a brain?

A side effect of my bafflement yesterday over how lobsters became some sort of strange right-wing analogy for the rightness of there being winners and losers (or something) was getting into a discussion about whether lobsters have brains.

That decapod crustaceans are brainless is a claim I have seen repeated many times, often in the service of the claim that lobsters cannot feel pain. This article, refuting Jordan Peterson, said:

(L)obsters don’t even have a brain, just an aglomerate of nerve endings called ganglia.

This is a bad description of ganglia. It makes it sound like there are no cell bodies in ganglia, when there usually are. Here are some. This is from the abdominal ganglion of a Louisiana red swamp crayfish (Procambarus clarkii):


These show cell bodies of leg motor neurons from several species (sand crabs and crayfish, I think; these pics go back to my doctoral work).


These are neurons in a ganglion from a slipper lobster (Ibacus peronii), where those big black cell bodies are very easy to see:


And these are leg motor neurons in slipper lobster:


And there is substantial structure within that alleged “not a brain” in the front:



And we’ve known this for well over a century, as this drawing from 1890 by master neuroanatomist Gustav Retzius shows:



So ganglia are more than “nerve endings.” Putting that aside, are there other features that make brains, brains?

Intuitively, when I think about brains, I think of a few main features. Two anatomical, and one functional:

  1. A brain is a big, single cluster of neurons. Even though there may be many neurons in, say, the digestive system (and there are not as many as some people claim), they are so diffuse that nobody would call them a brain.
  2. It’s in the head, near lots of sensory organs. In humans, our brain is right next door to our eyes, ears, nose, and mouth, which covers a lot of the old-fashioned senses.
  3. It’s a major coordinating center for behaviour.

Decapod crustaceans (not to mention many other invertebrates) meet all those criteria. Sure, the proportion of neurons in the decapod crustacean brain may be smaller than in vertebrates, but I have never seen a generally agreed-upon amount of neural tissue that something must have to be a brain instead of a “ganglion in the front of the animal.”

I have a sneaking suspicion that some people will argue that only vertebrates can have brains because we are vertebrates, and vertebrates must be special, because we are vertebrates. That is, people will define brains in a way to stroke human egos.

And, as I implied above, some people make the “no brains” claim out of self-interest. I don’t think it’s any accident that I see “lobsters don’t have brains” coming from institutes that have close ties to commercial lobster fisheries.

I suppose that some could argue that limiting the word “brain” to vertebrates is a way of recognizing that vertebrate and invertebrate nervous systems are structured very differently. They are, but why do this for only one part of the nervous system? This is a little like saying “invertebrates don’t have eyes” because they have compound eyes instead of our camera-style eyes. We routinely give structures in invertebrates and vertebrates the same names if they have the same functions.

And in practice, I see people referring to octopus brains all the time. They do so even though, like other invertebrates, a large proportion of the nervous system sits outside the brain. From memory, roughly half the neurons in an octopus reside in its arms.

In practice, I am far from the only person who calls the clump of neurons at the front end of decapod crustaceans a “brain.” From this page:


So, fellow neuroscientists, if you don’t think invertebrates can have brains, why not? What is your dividing line?

Hat tip to Hilary Gerstein.

26 March 2018

I was unaware of how lobsters got sucked into an all-encompassing conspiracy theory

Miriam Goldstein and Bethany Brookshire burst my cosy bubble of ignorance. Today I learned that Jordan Peterson, a current darling of conservatives, drags lobsters into his mish-mash of writings to make white dudes feel good about themselves. Allow me an extended quote from this Vox article:

The book is a kind of bridge connecting his academic research on personality and his political punditry. In it, Peterson argues that the problem with society today is that too many people blame their lot in life on forces outside their control — the patriarchy, for example. By taking responsibility for yourself, and following his rules, he says, you can make your own life better.

The first chapter, about posture, begins with an extended discussion of lobsters. Lobster society, inasmuch as it exists, is characterized by territoriality and displays of dominance. Lobsters that dominate these hierarchies have more authoritative body language; weaker ones try to make themselves look smaller and less threatening to more dominant ones.

Peterson argues that humans are very much like lobsters: Our hierarchies are determined by our behaviors. If you want to be happy and powerful, he says, you need to stand up straight:

If your posture is poor, for example — if you slump, shoulders forward and rounded, chest tucked in, head down, looking small, defeated and ineffectual (protected, in theory, against attack from behind) — then you will feel small, defeated, and ineffectual. The reactions of others will amplify that. People, like lobsters, size each other up, partly in consequence of stance. If you present yourself as defeated, then people will react to you as if you are losing. If you start to straighten up, then people will look at and treat you differently.

“Look for your inspiration to the victorious lobster, with its 350 million years of practical wisdom. Stand up straight, with your shoulders back,” he concludes, in one of the book’s most popular passages.

The lobster has become a sort of symbol of his; the tens of thousands of Peterson fans on his dedicated subreddit even refer to themselves as “lobsters.”

This is classic Peterson: He loves to take stylized facts about the animal kingdom and draw a one-to-one analogy to human behavior. It also has political implications: He argues that because we evolved from lower creatures like lobsters, we inherited dominance structures from them. Inequalities of various kinds aren’t wrong; they’re natural.

“We were struggling for position before we had skin, or hands, or lungs, or bones,” he writes. “There is little more natural than culture. Dominance hierarchies are older than trees.”

Foul!


The logical fallacy is appeal to nature.

As analogies go, comparing humans to lobsters is... not a good one. This article provides a pretty good response, so I don’t have to. (Though I say lobsters have brains. But that doesn’t detract from the main points.)

Additional, 19 May 2018: Bailey Steinworth argues the diversity of marine invertebrate behaviour does not support Peterson’s ideas, either.

External links


Psychologist Jordan Peterson says lobsters help to explain why human hierarchies exist – do they?

20 March 2018

The impossibility of species definitions


Sad news about the death of Sudan, the last male northern white rhino, prompted some discussion about whether the northern white rhino is a species or a subspecies. The TetZoo blog has a nice look at this specific issue. I’d like to take a broader look at the whole problem of why defining species is so hard.

What defines a species is a long-running argument in biology. It’s practically its own cottage industry. There is much effort to define species precisely, for all sorts of good reasons. And that desire for clear, precise definitions often appears on websites like Quora, in questions like, “If Neanderthals bred with us, doesn’t that mean, by definition, they are the same species?”

But as much as we want clear definitions in science, there is a problem. You can’t always draw sharp dividing lines on anything that is gradual. (Philosophers know this as the continuum fallacy.)

To demand a precise definition of species is like demanding to know the precise moment that a man is considered to have a beard. For instance, I think we can agree that Will Smith, in this pic from After Earth (2013), does not have a beard:


And that in Suicide Squad (2016), Smith pretty clearly does have a beard:


But does Smith have a beard in this pic? Er... there’s definitely some facial hair there.


What is the exact average hair length that qualifies a man to be “bearded”? There isn’t one. But that doesn’t mean that you can’t meaningfully distinguish After Earth Smith from Suicide Squad Smith.

It’s a problem that Charles Darwin recognized. In Darwin’s view, speciation resulted from the slow, gradual accumulation of tiny, near-imperceptible changes, and he frequently made the point that “varieties” could be considered “incipient species.” At any given point in time, some groups would be early in that process of divergence, and some would be further along.

That’s why we shouldn’t expect there to be clear, consistent species definitions that apply across the board and are helpful in every case.

External links

The last male northern white rhino has died
How Many White Rhino Species Are There? The Conversation Continues

17 March 2018

The last round of the year

Reason I love the AFLW competition, number 2,749:

Going into this last round of the home and away season, five of the eight teams had a shot at the grand final. And no team was guaranteed a slot.

And the mighty Demons are well placed to be one of the two teams in the final. Go the Dees!

It’s going to be a series of nail-biting games, and I love it.

13 March 2018

“Mind uploading” company will kill you for a US$10,000 deposit, and it’s as crazy as it sounds

Max Headroom was a 1980s television series that billed itself as taking place “20 minutes into the future.” In 1987, its second episode was titled “Deities”. It concerned a new religion, the Vu-Age church, which promised to scan your brain and store it for resurrection.

Vanna Smith: “Your church has been at the forefront of resurrection research. But resurrection is a very costly process and requires your donations. Without your generosity, we may have a long, long wait... until that glorious day... that rapturous day... when the Vu-Age laboratories perfect cloning, and reverse transfer.”

That episode suddenly feels relevant now, although it took a little longer than 20 minutes.

On Quora, which I frequent, I often see people asking about mind uploading. My usual response is:


So I am stunned to read this article about Nectome, which, for the low deposit price of US$10,000, will kill you and promise to upload your mind somewhere, sometime, by a process that hasn’t been invented yet.

If your initial reaction was, “I can’t have read that right, because that’s crazy,” you did read it right, and yes, it is crazy.

In fairness, it is not as crazy as it first sounds. They don’t want to kill you when you’re healthy. They are envisioning an “end of life” service when you are just at the brink of death. This makes it moderately more palatable, but introduces more problems. It’s entirely possible that people near the end of life may have tons of cognitive and neurological problems that you really wouldn’t want to preserve.


How do they propose to do this? Essentially, this company has bought into the idea that everything interesting about human personality is contained in the connectome:

(T)he idea is to retrieve information that’s present in the brain’s anatomical layout and molecular details.

As I’ve written about before, the “I am my connectome” idea is probably badly, badly wrong. It completely ignores neurophysiology. It’s a good selling point for people seeking grants for brain mapping, and for basic research. But as a business model, it’s an epic fail.

And what grinds my gears even more is that this horrible idea is getting more backing than many scientists have ever received in their entire careers:

Nectome has received substantial support for its technology, however. It has raised $1 million in funding so far, including the $120,000 that Y Combinator provides to all the companies it accepts. It has also won a $960,000 federal grant from the U.S. National Institute of Mental Health for “whole-brain nanoscale preservation and imaging,” the text of which foresees a “commercial opportunity in offering brain preservation” for purposes including drug research.

I think it is good to fund research into high-speed imaging and analysis of synaptic connections. But why does this have to be tied to a business? Especially one as batshit crazy as Nectome?

Co-founder Robert McIntyre says:

Right now, when a generation of people die, we lose all their collective wisdom.

If only there was some way that people could preserve what they thought about things... then we could know what Aristotle thought about stuff. Oh, wait, we do, it’s called “writing.”

I can’t remember the last time I saw a business so exploitative and vile. And in this day and age, that’s saying something.

Update, 3 April 2018: MIT is walking away from its relationship with the company. Good. That said, Antonio Regalado notes:

Although MIT Media Lab says it’s dropping out of the grant, its statement doesn’t strongly repudiate Nectome, brain downloading idea, or cite the specific ethical issue (encouraging suicide). So it's not an apology or anything.

Hat tip to Leonid Schneider and Janet Stemwedel.

Related posts

Overselling the connectome
Brainbrawl! The Connectome review
Brainbrawl round-up

External links

A startup is pitching a mind-uploading service that is “100 percent fatal”

How many learning objectives?




I am teaching an online course this semester, and I had to undergo training and review of the class before it ran. In preparing it, one of the key things that the instructions stressed was the importance of having learning objectives.

All that material gave me good insight into how to write a single learning objective, but there was almost nothing about how to put them all together. And right now I’m struggling with what a good number of learning objectives is. So far, the only direct answer I’ve seen is:

How many learning outcomes should I have?
This is tied to the length and weight of the course
How many learning objectives should I have?
This is tied to the number of learning outcomes.


You’re not helping.

Most courses track lessons in some standard unit of time. A day. A week. Surely there has to be some sort of thinking about what a reasonable number of learning objectives is for a given unit of time. It’s probably not out of line for me to guess that one hundred learning objectives in a single day would be too many. On the other hand, a single learning objective for a week might be too few.

Right now, I have some weeks that have ten or more learning objectives. I’m wondering if that’s too much. And I’m just lost. I have no way of knowing.

It might sound like it’s just a matter of looking at student performance and adjusting as you go. But in a completely online course, it is so hard to adjust. You have to prepare almost everything in advance, and you can’t easily go faster or slower in the way that you can when you meet students in person.

I’m not sure how much student feedback will help, because everyone’s tendency is probably to say, “Yes, give me fewer objectives so I have more time to master each one.” And sometimes students aren’t good at assessing what they need to learn.

Maybe this is a gap in the education literature that needs filling.

Picture from here.

12 March 2018

How anonymous is “anonymous” in peer review?

Last time, I was musing about the consequences of signing or not signing reviews of journal articles. But I got to wondering just how often people sabotage their own anonymity.

As journals have moved to online submission and review management systems, it’s become standard for people to be able to download Word or PDF versions of the article they are reviewing.

The last article I reviewed was something like 50 manuscript pages. There was no way I was going to write out “Page 30, paragraph 2, line 3” for every comment. I made my comments using Word’s review feature. And all my comments had my initials.

As more software uses cloud storage for automatic saving, more packages are asking people to create accounts, and saving that identifying information along with documents. Word alerts you with your initials, but Acrobat Reader’s comment balloons are a little more subtle.
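
You can check your own files before uploading, without even opening Word: a .docx file is just a zip archive, and the name attached to each comment is stored in w:author attributes inside word/comments.xml. Here is a quick sketch using only the Python standard library (the filename is a placeholder):

```python
# Check which names a Word review file would reveal to the authors.
# A .docx is a zip archive; reviewer comments live in word/comments.xml,
# with the commenter's name recorded in w:author attributes.
import re
import zipfile

def comment_authors(docx_path):
    """Return the set of comment author names embedded in a .docx file."""
    with zipfile.ZipFile(docx_path) as zf:
        if "word/comments.xml" not in zf.namelist():
            return set()  # no comments, so no names to leak
        xml = zf.read("word/comments.xml").decode("utf-8")
    return set(re.findall(r'w:author="([^"]*)"', xml))

print(comment_authors("review.docx"))  # placeholder filename
```

If that prints anything other than an empty set, whoever opens the file can see those names.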

Ross Mounce and Mike Fowler confirmed that this happens:

Yep. Metadata tags are great. 😀 Even simply the language setting can be a giveaway: Austrian English is a huge clue in a small field [real, recent example!]. "Blind" peer review is not always effective...

Having wondered how often reviewers do this, I wonder whether editorial staff ever check to make sure they don’t accidentally out themselves.

Picture from here.

03 March 2018

Signing reviews, 2018

Seen on Twitter, two days apart.

First, Kay Tye:

Dear everyone! You don’t need to wonder if I reviewed your paper anymore. I now sign ALL of my reviews.
Inspired by @pollyp1 who does this and I asked her why and she said “I decided to be ethical.” I do it to promote transparency, accountability and fairness. #openscience

I particularly noted this reply from Leslie Vosshall (the “@pollyp1” mentioned in the prior tweet):

Open peer review would instantly end the dangerous game of “guess the reviewer.” This happens all the time with senior people guessing that some junior person trashed their paper and then holding grudges. But usually they guess wrong and inadvertently damage innocents.

Second, one day later, Tim Mosca:

Since becoming an assistant prof, I’ve reviewed ~ 12 papers. Signed one. Received a phone call from the senior (tenured) author asking, “Who do you think you are to make anything less than glowing comments?” So there are still dangers for young, non-tenured profs when reviewing.

The threads arising from these tweets are well worth perusing.

I’ve signed many reviews for a long time, and nothing bad has happened. I used to be much more in favour of everyone signing reviews, but long discussions about the value of pseudonyms on blogs, plus the ample opportunity to see people behaving badly on social media, significantly altered my views. But the problem is that everyone will remember a single bad story, and not pay attention to the many times where someone signed a review and everything was fine. Or cases where something positive came out of signing peer reviews.

How can we weigh the pros of transparency against the cons of abuse? I don’t know where that balance is, but I think there has to be some kind of balance. But the underlying issue here is not signing reviews; it’s that people feel they can be vindictive assholes. Universities do not do enough to address that kind of poor professional behaviour.