17 July 2018

Tuesday Crustie: Crab emojis rated

It is, according to my social media, World Emoji Day. Not sure why that needs a day, but I’m not here to judge. Well, not usually. Today I am, because I am here to judge crab emojis!

Apple


I think this crab is cold. That would explain why it seems to be wearing mittens over its claws. 6 out of 10.

EmojiOne


Is this crab doing a shadow puppet play about warring sibling birds in the nest? The claws look like little bird beaks. The eyes seem way too big for its stalks. This is some sort of angsty, emo teenage crab with an art project. It might be nice when it grows out of its awkward phase. 3 out of 10.

Facebook


Sure, this social media giant is contributing to the decline and probable fall of democracies in several nations, but damn! They got someone who was paying attention in invertebrate biology and respects the crustacean to draw their emoji. The claws look like they could do serious damage. The lines on the carapace show more attention to detail than anyone else. I feel like I could almost key it out to genus. 10 out of 10.

Google


The Google crab has seen your browser history and is shocked, shocked it says, by the websites you visit. It wants to push you away, which is why its claws are in some sort of weird backwards pose. 4 out of 10.

LG


An abyssal crab, judging from the teensy-weensy eyes. Kind of funky bulbous claws, but bonus points for its wickedly curved final pair of thoracopods. They look like kama. 7 out of 10.

Microsoft


I think this crab was produced by a game of telephone between artists. Someone once saw a crab, drew it, then showed their drawing to someone who copied the drawing, who showed it to another person who copied the drawing, and this went on maybe ten to twenty times. As Magritte might say, “Ceci n’est pas un crabe.” (This is not a crab.) 3 out of 10.

Samsung


Argh! Dude! What happened to your legs? And your claws can’t open! I’m so sorry. Is there a foundation we can donate to, to support research into your condition? 1 out of 10.

Emojidex

Flashback to 1984 and the release of Romancing the Stone:
Gloria: [observing men in a bar]

Wimp. Wimp. Loser. Loser. Major loser. Too angry. Too vague. Too desperate. God, too happy.


This is the “God, too happy” crab. Seriously freaking me out how happy this guy is. I think this crab has had chemical stimulation. -2 out of 10.

Twitter


Another poor unfortunate victim of some sort of leg disfigurement. Those claws cannot work, and the leg tips look unsuitable for grasping any benthic substratum. And yet I can feel something besides pity for this crab. Something about the dots on the carapace says to me, “This is a crab that has not stopped living. This is a crab happy to get a tramp stamp and put it out there.” 6 out of 10.

WhatsApp


Ooh, it’s a little baby crab! It’s so tiny! It looks like it lives in the water column, floating and hoping no fish eats it. It will be sad when it grows up and turns into the EmojiOne artsy emo teen crab. 8 out of 10.

What have we learned from all this? That all crabs on the Internet are red. Or orange-y red. Except for the Samsung crab, whose brown colour is obviously some sort of symptom of whatever disease it has.

Inspiration from ant emoji ratings. Hat tip to Alex Wild and Melanie Ehrenkranz.

Come back for World Emoji Day 2019, when we’ll rate shrimp emoji!

External links

an entomologist rates ant emojis

10 July 2018

Ego

Hypothesis:

Complaints about peer review are often made by people who believe that their work is so infallible and perfect that it cannot be made better by peer review.

It trickles through in complaints about how long peer review is taking (when the review time may be reasonable). It trickles through when asking what reviewers could possibly say about a manuscript. It trickles through when questioning the value of journals organizing peer review.

This is a dangerous habit of mind for a scientist. As David Brin likes to say, “Criticism is the only known antidote to error.”

Sure, most scientists are professionals who are trained to produce competent science. It’s not surprising that most papers pass peer review, and that the improvement is not always that large. But there shouldn’t be an expectation that everything a scientist does is going to be worth publishing as is. Everyone makes mistakes.

As Rose Eveleth said yesterday:

Sure, some editors are annoying, but you know what is worse? Literally any writer’s raw copy.

This is a message a lot of scientists need to hear.

External links

How long should peer review take?

09 July 2018

Giving an article a DOI does not make it citable

With preprint servers and data repositories like Figshare gaining in popularity, a related myth is also gaining in popularity: “Giving this article / dataset a DOI makes it a citable academic product!”

Exposition for bystanders: DOI is short for “digital object identifier.” I usually describe it as “a serial number for a journal article.” You often see them tucked at the top or bottom of academic papers: a lengthy number beginning with 10. But as the name indicates, it’s meant to be usable for any kind of “digital object,” not just papers.

The beauty of a DOI is that if you have it, you can type in “https://doi.org/” followed by that number beginning with 10, and it will take you straight to the paper. No having to go to Google to find the journal, then drill down to the volume, then the issue, and so on.
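If you ever want to resolve a DOI programmatically, the same trick works in a couple of lines. A minimal sketch in Python, assuming the requests package (the DOI shown is made up for illustration):

```python
# Resolve a DOI by prepending the doi.org resolver and following
# the redirect to wherever the publisher hosts the article.
import requests

doi = "10.1234/example.5678"  # hypothetical DOI, for illustration only
response = requests.get(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
print(response.url)  # the article's landing page
```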

A Twitter search shows people have been wrongly making the connection between “having a DOI” and “something that can be cited” for at least seven years. Here’s a tweet from 2011 (emphasis added):

#DataCite offers DOIs for making data sets citable and getting credits for its reuse.

A few examples: here, here, and here. And also here, here, and here. And people have been pointing out this is wrong for the same amount of time.

What is “citable” is an editorial policy set by journals individually.

Most journals have long traditions of allowing you to cite things that are not part of the formal scientific literature. After all, journals existed well before links ever existed, never mind the DOI standard.

But not every journal will let you cite whatever. If a journal says, “We will only allow citations to peer-reviewed journal articles,” saying, “But this has a DOI!” is not going to make any editor change her or his mind.

I don’t quite understand why people think this. I suspect this myth arises because it plays into scientists’ obsession with formalism and simple “if this, then that” rules. Maybe people are confusing the DOI number itself with, “Anyone who goes to the bother of giving DOIs to things is probably an organization that is fairly large, stable, and has its act together.” But exploitative journals regularly give their articles working DOIs, which shows that it can’t take that much to assign those numbers and get the links working.

It’s another example of the information vacuum around academic publishing: publishers make up new stuff and assume academics will figure out how it works.

While I’m here, Lyndon White pointed out another DOI myth: that the link “never breaks.” They can break, although in practice I’ve found them to be far less susceptible to link rot than publisher links, which seem to get rearranged every few years.

Related posts

Innovation must be accompanied by education
Why can’t I cite Mythbusters?

29 June 2018

Not hot: the Rate My Professor chili pepper is done

The website Rate My Professors is getting rid of its “hotness” rating. Which means you won’t see stuff like this any more:

I'll give you a chili on Rate My Professors if you give me an A

The idea of getting rid of the “chili pepper” has been floating around for a while, but fellow neuroscientist Beth Ann McLaughlin was able to hit a nerve on Twitter this week. Her tweet drew almost 3,000 retweets, and many professors chimed in to say, “Get rid of this appearance rating.”

And to their credit, the website owners did.

This is a good thing for people in higher education. The Rate My Professors site is well known to both faculty and students. I’ve encouraged students to use Rate My Professors, because I have a record of teaching, and people have a right to hear other students’ experiences. But it matters when a site tacitly suggests it’s okay to ogle professors.

It’s nice to have a little good news. And to be reminded that sometimes, faceless corporate websites – and the people behind them – do listen to reason, and can change.

External links

Why The Chili Pepper Needs To Go: Rape Culture And Rate My Professors (2016)
RateMyProfessors.com Is Dropping The "Hotness" Rating After Professors Called It Sexist
I Killed the Chili Pepper on Rate My Professor
RateMyProfessors.com Retires the Sexist and Uncomfortable “Chili Pepper” Rating After Academics Speak Out
RateMyProfessors Removes Hotness Rating

14 June 2018

Another preprint complication

While I knew some journals won’t publish papers that had previously been posted as preprints, I didn’t know that some journals are picky about where a preprint was posted.

Jens Joschinski wrote:

Some journals (well, @ASNAmNat) will not accept papers posted at @PeerJPreprints or other commercial services.

This makes no sense to me. What does the business model of the preprint server have to do with anything regarding later publication?

There’s a list of journal policies. Thanks to Jessica Polka.

But frankly, every little bit of legwork just makes me less inclined to post preprints. I’ll still do it if I think I have some compelling reasons to do so, but doing this routinely as part of my regular publication practices? Maybe not.

11 June 2018

Does bioRxiv have different rules for different scientists?

Last year, I submitted a preprint to bioRxiv. I was underwhelmed by the experience.

But I am a great believer in the saying, “Never try something once and say, ‘It did not work.’” (Sometimes attributed to Thomas Edison, I think.) I submitted another manuscript over the weekend which I thought might be a little more suited to preprinting, so after I submitted it to the journal, I went and uploaded it to bioRxiv. It was the weekend, so it sat until Monday. Today, I received a reply. My preprint was rejected.

bioRxiv is intended for the posting of research papers, not commentaries(.)

How interesting.

I like that this demonstrates that preprint servers are not a “dumping ground” where anyone can slap up any old thing.

My paper is not a research paper. I don’t deny that. Following that rule, bioRxiv made a perfectly understandable decision.

But the whole reason I thought this new paper might be appropriate to send to bioRxiv was that I had seen papers like “Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication” on the site before. I opened up that PDF and looked at it again. There’s no “Methods” section. There are no graphs of data. There’s no data that I can find at all.

How is that a research paper? And how is that not a commentary? Maybe I’m missing something.

But although the paper above doesn’t have data, what it does have is a lead author, Marcia McNutt, who is a former editor-in-chief of Science and the current president of the National Academy of Sciences of the US. The paper was submitted in May 2017, some time after McNutt became president of the National Academy in 2016.

And while she is the only one to have “National Academy of Sciences” listed in the authors’ affiliations, the rest of the author list is nothing to sneeze at. It boasts other people with “famous scientist” credentials, like Nobel laureate and eLife editor Randy Schekman. Most of the authors are involved in big science journals.

One of my criticisms of preprints is that they would make the Matthew Effect for publication worse. People who are in well-known labs at well-known institutions would receive the lion’s share of attention. People who are not would have just another expectation with minimal benefits.

But this feels even worse. This feels like there’s one set of rules for the rank and file scientists (“No commentaries!”) and another set of rules for scientists with name recognition (“Why yes, we’d love to have your commentary.”).

I like the idea of preprints, but this is leaving a sour taste in my mouth.

Update, 12 June 2018: The manuscript found a home at a different preprint server, PeerJ Preprints.

Related posts

A pre-print experiment: will anyone notice?
A pre-print experiment, continued

External links

Twitter thread
Transparency in authors' contributions and responsibilities to promote integrity in scientific publication

04 June 2018

Viral video verdict: Crayfish claw cutting complicated

Making the rounds in international news in the last couple of days is a viral video that is normally described in the headlines this way:

"Heroic crayfish cuts off own claws to escape the pot!"

Crayfish behavior, heat, pain, claws... This is right on target with some of my research. But so far, nobody has called me to break down what is going on in this video. 

What the crayfish is doing is probably autotomy, not desperate self-mutilation. A crayfish dropping a claw is not like a person ripping off an arm. But the narrative is so good that nobody cares about the science.

I'm away from my desktop, so it's too hard to write a detailed blog post like I normally would. Instead, I wrote a Twitter thread about it: https://mobile.twitter.com/doctorzen/status/1003645638213623808 


03 June 2018

Theory and practice

Years ago, while listening to CBC's Morningside, I heard this description:

"Canada is a country that works in practice, but not in theory. The United States is a country that works in in theory, but not in practice."

I was reminded of this over the weekend reading a thread about data sharing (https://twitter.com/danirabaiotti/status/1002824181145317376). Universal data sharing between scientists is one of those ideas that sounds great in theory. So great that people tend to undervalue how it will work in practice.

Another example I was thinking about recently was post publication peer review. In theory, it might be nice to have a single centralized site that included all post publication comments. In practice, blogs have a pretty good track record of bringing important critical comments to a broader audience.

I see this over and over again with people putting forward ideas about how we should do science: the meta-science, so to speak. Around publication, peer review, career incentives, statistical analysis. I've been guilty of this. There are old posts on this blog about open peer review that I still think were fine in theory, but not grounded in practice.

I think we scientists often get very enamoured of those solutions that work in theory, and undervalue things that work in practice.

22 May 2018

Tuesday Crustie: Mollie

Never heard of the US Digital Services agency? Now you have. Meet Mollie, their mascot:


Yes, those are light sabers. Slate has the story behind this adorable creation.

Hat tip to Miriam Goldstein.

18 May 2018

All scholarship is hard

Nicholas Evans wrote:

The solution levied by synthetic biologists is to get more biologists doing ethics. That this is always the suggestion tells me a) you think it’s easier to think about ethics than synbio; b) you want to keep the analysis in house. Neither are good.

Seconded, confirmed, and oh my God yes. I’ve been through several iterations of this in biology curriculum meetings, where I or others have suggested incorporating some non-biology class into a degree program, or even just an elective that students funded by a training grant have to take. And the reaction is just what Nicholas describes:

“Why don’t we just do it ourselves?”

The single exception seemed to be chemistry. Maybe there was less suspicion because of the blurry line between molecular biology and biochemistry. Or maybe it was because their department was right above ours and we knew the people better. But when it was ethics or writing or statistics: nope, we’ll develop our own class taught by our own faculty in our own department.

I get a lot of variations of “Is it easier to do this or that in academia?” questions on Quora, too.

In an institution, this attitude of “We know best” is made worse by administrative measurements. Departments are evaluated by how many credit hours they generate. So when I suggest students might take a course taught by the Philosophy or Math or Communications or Psychology department, the response is, “We’re just giving credit hours away.” Since credit hours are one thing that is looked at to determine resources, it’s an understandable reaction. It’s Goodhart’s law in action: the measure becomes a target and ceases to be a good measure.

Nicholas notes:

The vast majority of people talking synbio ethics have almost no training in ethics. You wouldn’t accept that in the technical side of synbio, so don’t accept it in ethics.

Exactly. We often complain about how people don’t respect expertise on many controversial subjects, like evolution, climate change, or vaccination. But we see the same disrespect within universities for scholarship in different fields. Scholarship in every field is hard, and “My field is better than your field” is a shitty game.

Hat tip to Janet Stemwedel.

16 May 2018

The Zen of Presentations, Part 71: Slides per minute

In grad school, I was introduced to a nice, simple rule for giving a talk.

One slide per minute.

I used this rule for a long time. It seemed to work well. In particular, any slide with data seemed to take at least a minute to digest. You had to orient yourself to the axis labels and the units, there is reading and interpretation to do, and that takes a little time.

I did know it was a rule of thumb, not an ironclad rule. I would estimate a slide would be up for a little less than a minute when it was a picture of an animal or something else that had no data and nothing to read.

But then I saw Lawrence Lessig’s presentation, “Free culture” (via Garr Reynolds’s blog). His talk had 243 slides, but it was not 243 minutes. Lessig used his slides in a way I’d never seen before. They weren’t illustrations to be described or explained. His slides were his rhythm section, laying out a beat and emphasizing what he said. Even though his slides were up for such a short time, I never felt confused or lost or thinking, “Wait, wait, go back!”

I was blown away. It showed me how limited my views were about what made a “good presentation.”

Then I learned about formats like pecha kucha and Ignite talks. Like Lessig, they emphasized quick pacing, running through slides at 3 to 4 per minute. And those talks often rocked.

The key to such rapid fire delivery was planning and practice. The automatic slide advance rule for pecha kucha and Ignite talks forces you to plan and practice relentlessly. Practice never leads you wrong.

There are some images and slides that probably do warrant a full minute. But the audience can often pick up on points faster than you’d think.

There isn’t any magic number of slides in a talk. Your talk can have hundreds of slides. Your talk can have no slides. Or your talk can even have one slide per minute.

Related posts

The Zen of Presentations, Part 40: Lighting a fire under speakers
How Gilmore Girls changed my teaching

External links

Free culture presentation
The “Lessig method” of presentation

11 May 2018

The Zen of Presentations, Part 70: Giving away the plot

Mike Nitabach wrote:

Huge mistake to design scientific presentation like fucken Sherlock Holmes story.

Becca replied:

If you set things up and present your logic at every step, the audience can tell where things are headed without being explicitly told in advance.

Over on Better Posters, I’ve talked a lot about the Columbo principle. Columbo taught us that even when the audience knows the answer, the fun can be in learning how you prove it. I think that advice works well for titles, but it still implies a sort of “mystery” aspect that Nitabach is criticizing.

But you can structure a talk so that you tell the audience what’s going to happen without leaving them disappointed.

When making Star Trek II: The Wrath of Khan, writer Nicholas Meyer (who also directed) was faced with a problem: Spock, the show’s most popular character, was going to die.


Actor Leonard Nimoy was bored with the part, not interested in doing another movie, and was sort of lured back in by the prospect of killing off the character. Fans learned about this, and were upset. Meyer got death threats. So what did Meyer do?

He killed Spock in the opening scene.

Of course, Spock doesn’t actually die at that point. He pretends to die as part of the Kobayashi Maru training scenario. So when the film is winding up for the actual, powerful death scene of Spock, people were not thinking, “This is the one where Spock dies!”

Meyer said he learned on this movie that you can show an audience anything in the first ten minutes of a movie, and they will forget about it by the end of the movie.

You can do the same thing in a talk. You can tell people right at the start what you found. If you involve them, and tell the narrative of that process well, you can bring people through to the end, and they will think, “Oh yeah, I already knew that!”


Meyer said in the film’s Blu-ray commentary:

The question is not whether you kill him. It’s whether you kill him well. If it’s perceived as a working out of a clause in a star’s contract, then they’re gonna hate it. If it’s organic, if it’s really part of the story, then no one’s gonna object.

Or, to paraphrase Anton Chekhov: if you want to fire a gun in the third act, load it in the first act. The audience will forget the gun was even loaded until that final climactic shot.

External links

Detective stories: “Whodunnit?” versus “How’s he gonna prove it?”
38 Things We Learned from the ‘Star Trek II’ Commentary

04 May 2018

Rhodes Trust is academia’s equivalent to Confederate statues and flags

Bree Newsome taking down South Carolina Confederate flag

In the last few years in the United States, there’s been debate about the presence of Confederate flags and statues in public places. I credit Bree Newsome for getting this ball rolling. The Confederacy was built on the notion that slavery was right and just.

Continuing to display the symbols of that failed government on public grounds is tacit endorsement of the ideals of white supremacy. Put those statues and flags that are on government property in museums.

This morning, I was given a link to a fellowship and was asked to promote it. I had two problems with that, and the first was that the fellowship had a lot of ties to the Rhodes Trust.

As a student, I learned about Cecil Rhodes because of his association with Oxford’s Rhodes Scholarships (supported by the Rhodes Trust). That name had a positive association for me.

It was only later that I learned, “Man, this dude was racist as fuck.” In Born a Crime, Trevor Noah says if many Africans had a time machine, they wouldn’t go back in time to stop Adolf Hitler, they’d be packing heat for Cecil Rhodes.


I wish I had learned about Rhodes’s colonial racism first, not years after hearing about the scholarships. The misery Rhodes caused in life seems more important to me than the money he left behind after death.

The second problem I had with this fellowship was that it was for “leading academic institutions.” I’m pretty sure that means American Ivy League institutions and English Oxbridge universities, and not the sort of public, regional institutions where most students in the world get their university educations. (The sort of place I work.)

Racist and elitist was not a winning combination for me. I did not push out notification of the fellowship. Admittedly, this was made easier because the deadline was past, but I wouldn’t have done it regardless.

Is Rhodes the only example? When I mentioned this on Twitter, “Sackler” came up. Like Rhodes, I first heard that name in a positive light: the Sackler symposium on science communication, which I’ve blogged about several times (here in 2012, here in 2013). But the Sackler family is problematic: they made a lot of money from opioids, which are now a major public health problem. And that name is on museums and medical schools.

As with Rhodes, I should have known about the drugs before I knew about the symposium. That’s not good.

Turning down money isn’t as easy as taking down a flag on a pole, or a statue in a park. But the principle is the same. Academia needs to look harder at how to stop giving these unspoken endorsements to people who caused a lot of suffering.

Update, 14 May 2018: Poll results from Twitter. 84% of people surveyed said they’d take money with the Rhodes name.


Picture from here.

Rethinking the graduate admissions process

Warning: The following post is a piece of devil’s advocacy. I’m not sure I believe myself.

The process for selecting graduate students is mostly deeply flawed and should be revamped from the ground up. Almost everything in the admission process works against increasing diversity in academia.

Let’s take the elements apart piece by piece.

Application fee: Many programs charge an application fee. This works against students who are good, but economically disadvantaged. There is no way that those fees are paying the bills of the graduate office. Friction can be a useful thing in preventing spurious applications, but generally the cost is so high that multiple applications quickly add up and remove options from students who can’t pay them all.

GRE scores: The cost of writing and submitting scores is another economic barrier. Many have written about the low predictive power of the test (also here).

Undergraduate GPA: Grade inflation is making it difficult to distinguish student performance. Plus, GPAs are not exactly comparable from institution to institution, starting with the calculation itself (is the top score 4 or 4.3?), a situation that gets even more complex when students cross national borders. And it’s highly likely that the same grade point average will be interpreted differently depending on the issuing institution.

Recommendation letters: So much room for bias here. People write different recommendations for men and women. Something like twice as many men as women get glowing letters. People are influenced by the university of the letter writer and the seniority of the recommender, and probably other factors that have nothing to do with the candidate. Recommendation letters are the primary tool for old boys’ networks to reinforce themselves.

CVs: Recently, we learned that a large number of graduate fellowship applicants were told they didn’t get the award because they didn’t have a publication yet. These are supposed to be people at the start of their academic careers, so it is not reasonable to expect them to have a lot on a CV. And given that so many places have not cracked down on unpaid internships, experience on paper will tend to favour people from well-off families. Again.

Personal statement: This one might be okay, as long as applicants give no indication of their gender. Because just the name alone works against increasing diversity.

If grad review is so messed up, what can we do?

One idea is to stop the tedious review by committee and just let individual faculty pick students they want to supervise. It doesn’t eliminate all the biases, but at least it’s less work.

In research grant applications, serious suggestions occasionally crop up that the peer review process is kind of ineffective and that we’d be better off assigning funding by lottery. Maybe we should consider admitting grad students by lottery, too.

On Twitter, I asked students what they would like to see in the application process. Zachary Eldredge brings up the idea of a lottery, and Olivia mentions a face-to-face interview. Will Lykins says it would be good to normalize non-academic work on the forms, since many students increasingly have to work to make ends meet instead of doing those unpaid enrichment activities.

Related posts

I come to bury the GRE, not to praise it
How do you test persistence?
Why grade inflation is good for the GRE
Does grad school have a mismatch problem?
The “Texas transcript” is a good idea, but won’t solve grade inflation

18 April 2018

Teaching online and inclusion

"Do you expect me to talk, Goldfinger?" "No Mr. Bond, I expect you to make this online course ADA compliant!"

I’ve been teaching a completely online class this semester. I’ve done partly online classes, and practically live online anyway, so I thought this would be a fairly simple thing for me to do.

It has not. It has been a real eye-opener for thinking about student needs.

One of the biggest challenges I’ve been working with is making the class compliant with the rules for students with disabilities. The rules are that, whether or not there are students in the class who have declared disabilities, you must make every item in the class as readily available and accessible as if there were students with disabilities.

This means video lectures need closed captioning. There is voice recognition software that does closed captioning automatically, which is great, but it never does it perfectly. Every time I say, “Doctor Zen,” the software puts in, “doctors in.” This means you have to go in, listen to the entire lecture, and proofread the captioning for the entire lecture.

Similarly, every image needs a description so that someone who is blind or otherwise visually impaired can understand the material. And many scientific diagrams are complex and challenging. Today, I was faced with trying to write a complete description of this:


Here’s what I came up with for the concept map above:

Human genome influences traits. Human genome has 2 copies in every cell. 1 copy is made of 3 billion base pairs. Cell makes up tissue. In cell, genome divided into nuclear genome and mitochondrial genome. Cells manifest traits. Tissues make up organs. Tissues manifest traits. Organs make up body. Body manifests traits. Traits leads back to Lesson 1. Mitochondrial genome has 1 circular chromosome. Mitochondrial genome is many per cell. Circular chromosome is many per cell. Circular chromosome made of nucleic acid and histone proteins. Nuclear genome is one per cell. Nuclear genome is 23 pairs of linear chromosomes. 23 pairs of linear chromosomes has 22 pairs called autosomes. 23 pairs of linear chromosomes has 1 pair called sex chromosomes. Sex chromosomes are XX for female. Sex chromosomes are XY for male. 23 pairs of linear chromosomes are made of nucleic acid and histone proteins. Nucleic acid wraps around histone proteins. Nucleic acid has two types, DNA and RNA. RNA leads to lesson 3. DNA is composed on deoxynucleotides. DNA is double stranded. DNA composed of deoxynucleotides. Double stranded leads to helical shape. Double stranded by base pairs. Deoxynucleotides are 4 types of nitrogenous bases. Nitrogenous bases can form base pairs. Nitrogenous base connects to A, T, C, G. A base pairs with T and vice versa. G base pairs with C and vice versa.

Writing that description... took time.

Anyone who thinks that online teaching is going to be some sort of big time saver that will allow instructors to reach a lot more students has never prepared an online class. It’s long. It’s hard. It’s often bordering on torturous (hence the “No Mr. Bond” gag at the top of the post).

These things take time, but I don’t begrudge the time spent. It’s the right thing to do. It’s forced me to think more deeply about how I can provide more resources that are more helpful to more students. It’s not just deaf students who can benefit from closed captions, for instance. Someone who can hear could benefit from seeing words spelled out, or might use them when listening in a noisy environment, or one where sound would be distracting.

And I keep thinking that if it takes me a lot of work to put these materials in, it’s nothing compared to what students who need them face navigating courses every day.

External links

Flowcharts and concept maps

16 April 2018

“It makes no sense!” versus history

There’s no channel 1 on televisions in North America.

It makes no sense.

That is, it makes no sense from the point of view of an engineer who had to design a channel system today, starting from scratch.

It makes sense from the point of view of a historian examining how broadcasting developed in North America.

Sometimes, discussions about academic systems of various sorts feel like people complaining mightily about how stupid it is that there is no channel 1, and proposing fix after fix after fix to correct it. And they do so in an environment where lots of people aren’t bothered by the lack of channel 1. And they do so even if the proposed fixes will mean some people’s televisions won’t work any more.

“Sure, but they’ll be better televisions!” Maybe, but it misses that a consistent channel numbering system is not what most people want out of a television.

03 April 2018

The NSF GRFP problem continues

This morning, a fine scientist congratulated two undergraduates in her lab about winning National Science Foundation (NSF) Graduate Research Fellowship Program (GRFP) awards. I thought, “Huh. They’re out? And two seems like a lot from one lab.”

A few years ago, Terry McGlynn wrote an important blog post about how tilted the playing field is for the NSF GRFP awards. He compared awards to Harvard students (with about 7,000 undergraduates) to the more than 20 campuses in the California State University system (over 400,000, according to a check of Wikipedia).

The NSF is good about making it easy to find a list of all 2,000 awards in this program. I went looking for the same comparison of one Ivy League university to an entire state’s system. Embarrassingly, I screwed up the calculation on the first pass, not realizing that several California State universities don’t say “California State” in their name, unlike the University of Texas institutions.

Harvard got 43, and all of the California State campuses got 50 (thanks to Terry for counting here and here).

Cal Poly Pomona 4
Cal Poly SLO 5
CSUCI 1
CSUDH 1
CSU Fresno 1
CSU Fullerton 8
CSULB 2
CSULA 1
Sac State 1
CSUSB 1
CSUN 5
CSUSM 3
SDSU 6
SFSU 6
SJSU 3
Humboldt State 2

So one lab at Harvard alone equaled or beat the entire output of eight different California State universities (each taken separately, not combined).
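If you want to double-check my counting, the tally is quick to verify. A minimal sketch in Python, using the campus counts listed above:

```python
# Tally of NSF GRFP awards to California State campuses, taken from
# the list above.
awards = {
    "Cal Poly Pomona": 4, "Cal Poly SLO": 5, "CSUCI": 1, "CSUDH": 1,
    "CSU Fresno": 1, "CSU Fullerton": 8, "CSULB": 2, "CSULA": 1,
    "Sac State": 1, "CSUSB": 1, "CSUN": 5, "CSUSM": 3,
    "SDSU": 6, "SFSU": 6, "SJSU": 3, "Humboldt State": 2,
}
print(sum(awards.values()))  # 50, versus Harvard's 43
```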

If this sort of pattern intrigues you, you must go read Natalie Telis’s post, where she digs down into the numbers. Not just this year’s, but over 28,000 awardees’ worth of data, from 2011 to 2017. It’s bloody brilliant. One of her first points is, “The most expensive undergraduate schools have an extreme excess of (NSF GRFP) recipients.” She also makes some comments on Twitter about this.

I can’t wait to see what she finds for 2018 data.

Matt Cover did something similar the previous year, and found no relationship between institutional enrollment and number of grants.

External links

NSF Graduate Fellowships are a part of the problem
The price of a GRFP, part 1
Matt Cover thread from 2017

28 March 2018

Innovation must be accompanied by education


When Apple launched the iPod, the company had to put a lot of effort into educating people about digital music.

Mr. Jobs pulled the white, rectangular device out of the front pocket of his jeans and held it up for the audience. Polite applause. Many looked like they didn’t get it.

That was just fine with Mr. Jobs. They’d understand soon enough.

Apple had to inform the mass market that digital downloads could be legal (remember Napster?). They had to let people know how much music you could have with you. They had to let people know about the iTunes store. Without all those pieces of the puzzle, the iPod would have tanked.

I was reminded of this scene when Timothy Verstynen asked:

Why can’t we have a scientific journal where, instead of PDFs, papers are published as @ProjectJupyter notebooks (say using Binders), with full access to the data & code used to generate the figures/main results? What current barriers are preventing that?

I follow scientific publishing at a moderate level. I write about it. I’m generally interested in it. And I have no idea what Jupyter notebooks and binders are. If I don’t know about it, I can guarantee that nobody else in my department will have the foggiest idea.

This is a recurring problem with discussions around reforming or innovating in scientific publishing. The level of interest and innovation and passion around new publication ideas just doesn’t reach a wide community.

I think this is because the people interested might undervalue the importance of educating other scientists about their ideas. Randy Olson talks a lot about how scientists are cheapskates with their communications budgets. They just don’t think it’s important, and assume the superiority of the ideas will carry the day.

I’ve talked with colleagues about open access many times, and discover over and over that people have huge misconceptions about what open access is and how it works. And open access is something that has been around for a decade and has been written about a lot.

Publishing reformers drop the iPod, but don’t do the legwork to tell people how the iPod works.

So to answer Timothy’s initial question: the current barrier is ignorance.

27 March 2018

What defines a brain?

A side effect of my bafflement yesterday over how lobsters became some sort of strange right-wing analogy for the rightness of there being winners and losers (or something) was getting into a discussion about whether lobsters have brains.

That decapod crustaceans are brainless is a claim I have seen repeated many times, often in the service of the claim that lobsters cannot feel pain. This article, refuting Jordan Peterson, said:

(L)obsters don’t even have a brain, just an aglomerate of nerve endings called ganglia.

This is a bad description of ganglia. It makes it sound like there are no cell bodies in ganglia, when there usually are. Here are some. This is from the abdominal ganglion of a Louisiana red swamp crayfish (Procambarus clarkii):


These show cell bodies of leg motor neurons from several species (sand crabs and crayfish, I think; these pics go back to my doctoral work).


These are neurons in a ganglion from a slipper lobster (Ibacus peronii), where those big black cell bodies are very easy to see:


And these are leg motor neurons in slipper lobster:


And there is substantial structure within that alleged “not a brain” in the front:



And we’ve known this for well over a century, as this drawing from 1890 by master neuroanatomist Gustav Retzius shows:



So ganglia are more than “nerve endings.” Putting that aside, are there other features that make brains, brains?

Intuitively, when I think about brains, I think of a few main features. Two anatomical, and one functional:

  1. Brains are a big, single cluster of neurons. Even though there may be many neurons in, say, the digestive system (and there are not as many as some people claim), they are so diffuse that nobody would call them a brain.
  2. It’s in the head, near lots of sensory organs. In humans, our brain is right next door to our eyes, ears, nose, and mouth, which covers a lot of the old-fashioned senses.
  3. It’s a major coordinating center for behaviour.

Decapod crustaceans (not to mention many other invertebrates) meet all those criteria. Sure, the proportion of neurons in the decapod crustacean brain may be smaller than vertebrates, but I have never seen a generally agreed upon amount of neural tissue that something must have to be a brain instead of a “ganglion in the front of the animal.”

I have a sneaking suspicion that some people will argue that only vertebrates can have brains because we are vertebrates, and vertebrates must be special, because we are vertebrates. That is, people will define brains in a way to stroke human egos.
And, as I implied above, some people make the “no brains” claim out of self-interest. I don’t think it’s any accident that I see “lobsters don’t have brains” coming from institutes that have close ties to commercial lobster fisheries.

I suppose that some could argue that limiting the word “brain” to vertebrates is a way of recognizing that vertebrate and invertebrate nervous systems are structured very differently. They are, but why only do this for one part of the nervous system? This is a little bit like saying “invertebrates don’t have eyes,” because they have compound eyes instead of our camera-style eyes. We routinely give things in invertebrates and vertebrates the same names if they have the same functions.

And in practice, I see people referring to octopus brains all the time. They do so even though, like other invertebrates, a large proportion of the nervous system sits outside the brain. From memory, roughly half the neurons in an octopus reside in its arms.

In practice, I am far from the only person who calls the clump of neurons at the front end of decapod crustaceans a “brain.” From this page:


So, fellow neuroscientists, if you don’t think invertebrates can have brains, why not? What is your dividing line?

Hat tip to Hilary Gerstein.

26 March 2018

I was unaware of how lobsters got sucked into an all-encompassing conspiracy theory

Miriam Goldstein and Bethany Brookshire burst my cosy bubble of ignorance. Today I learned that Jordan Peterson, a current darling of conservatives, drags lobsters into his mish-mash of writings to make white dudes feel good about themselves. Allow me an extended quote from this Vox article:

The book is a kind of bridge connecting his academic research on personality and his political punditry. In it, Peterson argues that the problem with society today is that too many people blame their lot in life on forces outside their control — the patriarchy, for example. By taking responsibility for yourself, and following his rules, he says, you can make your own life better.

The first chapter, about posture, begins with an extended discussion of lobsters. Lobster society, inasmuch as it exists, is characterized by territoriality and displays of dominance. Lobsters that dominate these hierarchies have more authoritative body language; weaker ones try to make themselves look smaller and less threatening to more dominant ones.

Peterson argues that humans are very much like lobsters: Our hierarchies are determined by our behaviors. If you want to be happy and powerful, he says, you need to stand up straight:

If your posture is poor, for example — if you slump, shoulders forward and rounded, chest tucked in, head down, looking small, defeated and ineffectual (protected, in theory, against attack from behind) — then you will feel small, defeated, and ineffectual. The reactions of others will amplify that. People, like lobsters, size each other up, partly in consequence of stance. If you present yourself as defeated, then people will react to you as if you are losing. If you start to straighten up, then people will look at and treat you differently.

“Look for your inspiration to the victorious lobster, with its 350 million years of practical wisdom. Stand up straight, with your shoulders back,” he concludes, in one of the book’s most popular passages.

The lobster has become a sort of symbol of his; the tens of thousands of Peterson fans on his dedicated subreddit even refer to themselves as “lobsters.”

This is classic Peterson: He loves to take stylized facts about the animal kingdom and draw a one-to-one analogy to human behavior. It also has political implications: He argues that because we evolved from lower creatures like lobsters, we inherited dominance structures from them. Inequalities of various kinds aren’t wrong; they’re natural.

“We were struggling for position before we had skin, or hands, or lungs, or bones,” he writes. “There is little more natural than culture. Dominance hierarchies are older than trees.”

Foul!


The logical fallacy is appeal to nature.

As analogies go, comparing humans to lobsters is... not a good one. This article provides a pretty good response, so I don’t have to. (Though I say lobsters have brains. But that doesn’t detract from the main points.)

Additional, 19 May 2018: Bailey Steinworth argues the diversity of marine invertebrate behaviour does not support Peterson’s ideas, either.

External links


Psychologist Jordan Peterson says lobsters help to explain why human hierarchies exist – do they?

20 March 2018

The impossibility of species definitions


Sad news about the death of Sudan, the last male northern white rhino, prompted some discussion about whether the northern white rhino is a species or a subspecies. The TetZoo blog has a nice look at this specific issue. I’d like to take a broader look at the whole problem of why defining species is so hard.

Arguing over what defines a species is a long-running argument in biology. It’s practically its own cottage industry. There is much effort to define species precisely, for all sorts of good reasons. And that desire for clear, precise definitions often appears on websites like Quora. Questions come up like, “If Neanderthals bred with us, doesn’t that mean, by definition, they are the same species?”

But as much as we want clear definitions in science, there is a problem. You can’t always draw sharp dividing lines on anything that is gradual. (Philosophers know this as the continuum fallacy.)
To demand a precise definition of species is like demanding to know the precise moment that a man is considered to have a beard. For instance, I think we can agree that Will Smith, in this pic from After Earth (2013), does not have a beard:


And that in Suicide Squad (2016), Smith pretty clearly does have a beard:


But does Smith have a beard in this pic? Er... there’s definitely some facial hair there.


What is the exact average hair length that qualifies a man to be “bearded”? There isn’t one. But that doesn’t mean that you can’t meaningfully distinguish After Earth Smith from Suicide Squad Smith.

It’s a problem that Charles Darwin recognized. In Darwin’s view, speciation was going to result from the slow, gradual accumulation of tiny, near imperceptible changes. Because speciation was a gradual process, he frequently made the point that “varieties” could be considered “incipient species.” At any given point in time, some groups would be early in that process of divergence, and some would be further along.

That’s why we shouldn’t expect there to be clear, consistent species definitions that apply across the board and are helpful in every case.

External links

The last male northern white rhino has died
How Many White Rhino Species Are There? The Conversation Continues

16 March 2018

The last round of the year

Reason I love the AFLW competition, number 2,749:

Going into this last round of the home and away season, five out of eight teams had a shot at the grand final. And no team was guaranteed a slot in the grand final.

And the mighty Demons are well placed to be one of the two teams in the final. Go the Dees!

It's going to be a series of nail-biting games, and I love it.

13 March 2018

“Mind uploading” company will kill you for a US$10,000 deposit, and it’s as crazy as it sounds

Max Headroom was a 1980s television series that billed itself as taking place “20 minutes into the future.” In 1987, its second episode was titled “Deities”. It concerned a new religion, the Vu-Age church, that promised to scan your brain and store it for resurrection.

Vanna Smith: “Your church has been at the forefront of resurrection research. But resurrection is a very costly process and requires your donations. Without your generosity, we may have a long, long wait... until that glorious day... that rapturous day... when the Vu-Age laboratories perfect cloning, and reverse transfer.”

That episode suddenly feels relevant now, although it took a little longer than 20 minutes.

On Quora, which I frequent, I often see people asking about mind uploading. My usual response is:


So I am stunned to read this article about Nectome, which, for the low deposit price of US$10,000, will kill you and promise to upload your mind somewhere, sometime, by a process that hasn’t been invented yet.

If your initial reaction was, “I can’t have read that right, because that’s crazy,” you did read it right, and yes, it is crazy.

In fairness, it is not as crazy as it first sounds. They don’t want to kill you when you’re healthy. They are envisioning an “end of life” service when you are just at the brink of death. This makes it moderately more palatable, but introduces more problems. It’s entirely possible that people near the end of life may have tons of cognitive and neurological problems that you really wouldn’t want to preserve.


How do they propose to do this? Essentially, this company has bought into the idea that everything interesting about human personality is contained in the connectome:

(T)he idea is to retrieve information that’s present in the brain’s anatomical layout and molecular details.

As I’ve written about before, the “I am my connectome” idea is probably badly, badly wrong. It completely ignores neurophysiology. It’s a selling point for people to get grants about brain mapping, and it’s a good selling point for basic research. But as a business model, it’s an epic fail.

And what grinds my gears even more is that this horrible idea is getting more backing than many scientists have ever received in their entire careers:

Nectome has received substantial support for its technology, however. It has raised $1 million in funding so far, including the $120,000 that Y Combinator provides to all the companies it accepts. It has also won a $960,000 federal grant from the U.S. National Institute of Mental Health for “whole-brain nanoscale preservation and imaging,” the text of which foresees a “commercial opportunity in offering brain preservation” for purposes including drug research.

I think it is good to fund research on high-speed imaging of synaptic connections. But why does this have to be tied to a business? Especially one as batshit crazy as Nectome?

Co-founder Robert McIntyre says:

Right now, when a generation of people die, we lose all their collective wisdom.

If only there was some way that people could preserve what they thought about things... then we could know what Aristotle thought about stuff. Oh, wait, we do. It’s called “writing.”

I can’t remember the last time I saw a business so exploitative and vile. And in this day and age, that’s saying something.

Update, 3 April 2018: MIT is walking away from its relationship with the company. Good. That said, Antonio Regalado notes:

Although MIT Media Lab says it’s dropping out of the grant, its statement doesn’t strongly repudiate Nectome, brain downloading idea, or cite the specific ethical issue (encouraging suicide). So it's not an apology or anything.

Hat tip to Leonid Schneider and Janet Stemwedel.

Related posts

Overselling the connectome
Brainbrawl! The Connectome review
Brainbrawl round-up

External links

A startup is pitching a mind-uploading service that is “100 percent fatal”

How many learning objectives?




I am teaching an online course this semester, and I had to undergo training and review of the class before it ran. In preparing it, one of the key things that the instructions stressed was the importance of having learning objectives.

All that material gave me good insight into how to write a single learning objective, but there was almost nothing about how to put them all together.
And right now I’m struggling with what a good number of learning objectives is. But so far, the only direct answer I’ve seen is:

How many learning outcomes should I have?
This is tied to the length and weight of the course
How many learning objectives should I have?
This is tied to the number of learning outcomes.


You’re not helping.

Most courses track lessons in some standard unit of time. A day. A week. Surely there has to be some sort of thinking about what a reasonable number of learning objectives is for a given unit of time. It’s probably not out of line for me to guess that one hundred learning objectives in a single day would be too much. On the other hand, a single learning objective for a week might be too low.

Right now, I have some weeks that have ten or more learning objectives. I’m wondering if that’s too much. And I’m just lost. I have no way of knowing.

It might sound like it’s just a matter of looking at student performance and adjusting as you go. But in a completely online course, it is so hard to adjust. You have to prepare almost everything in advance, and you can’t easily go faster or slower in the way that you can when you meet students in person.

I’m not sure how much student feedback will help, because everyone’s tendency is probably to say, “Yes, give me fewer objectives so I have more time to master each one.” And sometimes students aren’t good at assessing what they need to learn.

Maybe this is a gap in the education literature that needs filling.

Picture from here.

12 March 2018

How anonymous is “anonymous” in peer review?

Last time, I was musing about the consequences of signing or not signing reviews of journal articles. But I got to wondering just how often people sabotage their own anonymity.

As journals have moved to online submission and review management systems, it’s become standard for people to be able to download Word or PDF versions of the article they are reviewing.

The last article I reviewed was something like 50 manuscript pages. There was no way I was going to write out each comment as "Page 30, paragraph 2, line 3," and so on. I made comments using the Word review feature. And all my comments had my initials.

As more software uses cloud storage for automatic saving, more software packages are asking people to create accounts, and saving that identifying information along with documents. Word alerts you with your initials, but Acrobat Reader's comment balloons are a little more subtle.
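If you want to see what a manuscript file says about you before sending it back, it only takes a few lines. A minimal sketch in Python, assuming the python-docx and pypdf packages are installed (the file names here are hypothetical):

```python
# Peek at the identifying metadata a review file carries.
from docx import Document
from pypdf import PdfReader

# Word documents store author and "last modified by" fields.
props = Document("review_comments.docx").core_properties
print("Author:", props.author)
print("Last modified by:", props.last_modified_by)

# PDFs carry similar document information, often including /Author.
print("PDF metadata:", PdfReader("annotated_manuscript.pdf").metadata)
```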

Ross Mounce and Mike Fowler confirmed that this happens:

Yep. Metadata tags are great. 😀 Even simply the language setting can be a giveaway: Austrian English is a huge clue in a small field [real, recent example!]. "Blind" peer review is not always effective...

Having wondered how often reviewers do this, I wonder if editorial staff ever check to make sure reviewers don’t accidentally out themselves.

Picture from here.

03 March 2018

Signing reviews, 2018

Seen on Twitter, two days apart.

First, Kay Tye:

Dear everyone! You don’t need to wonder if I reviewed your paper anymore. I now sign ALL of my reviews.
Inspired by @pollyp1 who does this and I asked her why and she said “I decided to be ethical.” I do it to promote transparency, accountability and fairness. #openscience

I particularly noted this reply from Leslie Vosshall (the “@pollyp1” mentioned in the prior tweet):

Open peer review would instantly end the dangerous game of “guess the reviewer.” This happens all the time with senior people guessing that some junior person trashed their paper and then holding grudges. But usually they guess wrong and inadvertently damage innocents.

Second, one day later, Tim Mosca:

Since becoming an assistant prof, I’ve reviewed ~ 12 papers. Signed one. Received a phone call from the senior (tenured) author asking, “Who do you think you are to make anything less than glowing comments?” So there are still dangers for young, non-tenured profs when reviewing.

The threads arising from these tweets are well worth perusing.

I’ve signed many reviews for a long time, and nothing bad has happened. I used to be much more in favour of everyone signing reviews, but long discussions about the value of pseudonyms on blogs, plus the ample opportunity to see people behaving badly on social media, significantly altered my views. But the problem is that everyone will remember a single bad story, and not pay attention to the many times where someone signed a review and everything was fine. Or cases where something positive came out of signing peer reviews.

How can we weigh the pros of transparency against the cons of abuse? I don’t know where that balance is, but I think there has to be some kind of balance. But the underlying issue here is not signing reviews; it’s that people feel they can be vindictive assholes. Universities do not do enough to address that kind of poor professional behaviour.

19 February 2018

Once around the earth

Gordon Pennycook asked how far people have moved in pursuit of their academic careers. I’d never added it up before. I found an online distance calculator, and off I went.

From my high school town of Pincher Creek, Alberta to the University of Lethbridge for my bachelor’s degree: 100 km

From Lethbridge to the University of Victoria for my graduate work: 1,268 km (driving)

From Victoria to Montreal for my first post-doc: 4,732 km (driving)

From Montreal to Melbourne, Australia for my second post-doc: 16,755 km

From Melbourne to Pincher Creek, for a brief period of unemployment: 13,874 km

From Pincher Creek to Edinburg, Texas to start my tenure-track position: 3,476 km

And from Edinburg to an undisclosed location, where I am on leave: 3,090 km

Grand total: 43,295 km! For comparison, the circumference of the Earth is 40,075 km.
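For anyone who wants to check my arithmetic, here is the sum in Python, using the leg distances listed above:

```python
# Sum the legs of the academic journey above, in kilometres.
legs = [100, 1268, 4732, 16755, 13874, 3476, 3090]
total = sum(legs)
print(total)          # 43295
print(total / 40075)  # about 1.08 Earth circumferences
```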

You may now judge me on my carbon footprint. I would hate to start adding in the miles for conferences on top of that.