Correlation does not mean causation.
But after yesterday’s rant about our ice machine being broken for over a month and a half, look what I spotted in the bottom of our ice machine late this afternoon...
It’s white, cold, and wet when it melts... could it be?
And I managed to get The Manuscript That Will Not Die off my desk* and back into the hands of the editor!
* Figuratively speaking, of course. I’ve never printed the manuscript. It only exists as computer bits, so it has never physically sat on my desk.
30 June 2011
Why your trainee sucks (and how to fix it)
I was a teenage dilettante.
And an undergraduate dilettante. And in grad school.
When I took my first post-doc, I realized that I had something to prove to myself: that I could go into the lab and do research, day in, day out. I had to develop some discipline as a professional scientist. It was the right time to do it. Even in grad school, I was a teaching assistant, which meant I always had obligations besides research. The post-doc was the first time I was expected to do little else.
I recognize that dilettante part of me in a lot of aspiring research students.
Many aspiring research students have always been considered one of the “smart kids” through high school or earlier. School was pretty easy for them. They were able to skate by without working terribly hard, because a lot of the material was not very challenging intellectually for them. It’s easy to dabble in a lot of different academic topics when so many of them call on similar study skills and thinking habits.
It’s easy for such students to proclaim, “I’m interested in everything!” It’s easy to claim interest in something if you’ve been passably okay and gotten good academic grades in it.
Of course, you should try out a lot of things in your early education. You could miss something that would be very rewarding, either personally or professionally, and it becomes harder to get some of those experiences later.
But the professional sphere is not friendly to dilettantes.
By the time you get to the point of being a mentor to research students, whether they are undergraduates or post-docs, you have gotten there because you can take a problem, and drill down as long as it takes until you get the solution. That you have a doctorate proves you can take on a project that takes the better part of a decade (at least in the North American higher educational system).
In contrast, a beginning research student has probably never had a project that lasted more than a semester, and was always running concurrently with other projects in other classes.
Bright students have been able to dabble for so long, I suspect many have an unconscious expectation that they can keep doing it. They get into a lab, and the mentor, that hardened and focused and disciplined professional (that’s you), is not interested in dabblers. “I don’t care whether you think you know the answer, run those damn replicate experiments so that we have the numbers to satisfy Reviewer Number Two when we submit this for publication!”
And for students who have been used to projects lasting a few months, even a couple of years on a single project for a master’s degree can seem like an eternity with no clear finish line.
It shouldn’t be any surprise that to us mentors, our research students seem flaky. Unreliable. Undisciplined. Why not? They’ve never had to be as disciplined for as long as we’re asking them to be.
(To use a completely geeky analogy, it reminds me of a scene in World’s Finest #199. Superman is on a red sun planet, has lost his powers, and is getting his butt kicked in a fight against a normal person, because he’s not used to exerting himself. Normally, everything is easy for him.)
Armed with this point of view, a few ideas come to mind.
Make sure that a student’s research project can be broken down into smaller chunks, rather than everything coming together only at the end. Otherwise, they may feel like there is no end in sight.
Give your trainees variation in the tasks they have to do. Even something like having to make and present a conference poster can be a welcome relief from the daily grind.
Make sure you communicate your expectations clearly. You may expect people to work evenings and weekends if necessary for an experiment (because you’ve developed focus and discipline), but this may be a completely alien expectation to someone new.
Be prepared for focus and discipline to emerge from your trainees gradually. Professionalism isn’t built overnight. They have to adjust to a whole new set of expectations.
Even though it’s been a long while since my first post-doc, I still joke that I have scientific attention deficit disorder. Look at my last two years of papers: ethics, ecological modelling, parasites, behaviour, sensory electrophysiology...
I haven’t completely vanquished my dilettante tendencies. But I’ve gotten better. Your students can, too.
Note: This is a sequel of sorts to a guest post I had on the BioData blog, “Why your mentor sucks (and how to fix it).”
Cartoon from here.
Super villain team-up
Yesterday, I described the return of my old nemesis, the ice machine. Now it has been joined by another enemy from my past: the Manuscript That Will Not Die.
The Manuscript That Will Not Die is a writing project that started three years ago now. When I was writing it, it felt like the more I worked on it, the further away the end got, because as I researched it, I kept uncovering whole new lines of relevant information. And I’m still not sure when anyone else is going to be able to read it.
And people wonder why I blog? Because three years and counting is a long time to get something out.
29 June 2011
History repeating, not repairing
Warning: This post contains strong language. Four letter words. Curses! Swearing!
Set the wayback machine, Sherman. Back to the very early days of this blog.
Back in the early days of this blog, when I still had some of that new Assistant Professor smell, I wrote about our ice machine. Quite a bit, actually. The first post is here, dated late August 2003. It’s not a bad little rant, though I say it myself:
Over the last year, a distinct pattern has emerged. Ice machine breaks. Ice machine is taken away, leaving bare pipe sticking out of the wall where it should be sitting. My colleague Mike Persans (who is a self-admitted pushy New Yorker) keeps after the maintenance people, asking when it's going to be fixed.
Days turn into weeks. Weeks turn into months.
The ice machine finally returns. It works for about a week, and breaks again. Back to square one.
It's like some weird battle of nerves between the machine and maintenance versus the biologists. I think we biologists are going to be the first to break. Because as far as we've been able to determine, there seems to be absolutely no sense of urgency or comprehension on the part of almost anyone else that we scientists need this thing.
There are intermittent updates until about April 2004. Because that was about when we got this fancy new machine:
See? All stainless steel, looks like something you’d find in a real classy hotel restaurant or something.
But alas! Like Old Mother Hubbard...
We ain’t got no ice. Above the scoop is this sign:
It’s been like that for about six and a half weeks. That breaks the 40-day record set back in October 2003.
And it’s the same goddamn shit all over again. Nobody outside of the department seems to have any interest in getting this simple piece of common equipment fixed during peak research season.
For. Fuck’s. Sake.
Administration tells us to use the one in the Chemistry Department. And yes, it’s good that theirs is working (even though it’s not as good as ours when it works). I mean, it’s wildly improbable that we would have two ice machines in two departments breaking at the same time... Wait, I have a blog post that records exactly that happening last time. Perhaps proof that probability is a lesser force in the universe than the Law of Maximum Inconvenience.
But dammit, what pisses me off is not that I have to go up one storey in the building. It’s the principle of the thing. This is exactly the sort of thing that universities are supposed to keep running with that cut they take out of the research grants faculty get. The principle is having things that work. Having requests met. It’s about the competence of the people around you.
Which leaves me wondering, to paraphrase Dr. Evil, why I am surrounded by friggin’ incompetence?
Cue the music, Miss Bassey: “And it’s all just a little bit of history repeating...”
28 June 2011
Tuesday Crustie: Ala Moana
A black brachyuran photographed down at the seawall in the Ala Moana State Recreation Area on my first day in Honolulu at the Crustacean Society meeting earlier this month.
27 June 2011
Bay Nature cameo
You’ll find my name in the July issue of Bay Nature if you look very, very closely. I just received my copy, which I requested as “payment” for some help I gave with one of their articles.
Bay Nature is a magazine for young readers in the San Francisco Bay Area. California readers, make sure to get your copy!
If a baby crow can’t see it, is it still there?

For almost any animal, though, you can sum up an ecologically relevant problem with an old philosophical problem: “If a tree falls in the forest and nobody is there to hear it, does it still make a sound?”
More generally, if you can’t see it, is it still there?

Tests of object permanence are fairly well established in work with human infants, and although I had not been aware of it, they have been used in cross-species cognitive tests from time to time. Now, Hoffmann and colleagues have developed fifteen tasks for carrion crows (Corvus corone) to do, associated with six classic “stages” of cognitive abilities related to object permanence. For instance, this is a Stage 5 task:
Three covers were used. The worm was hidden randomly under the three covers. Criterion: the bird immediately had to search for the worm under the respective cover where it was hidden.
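An aside on why the “immediately search under the correct cover” criterion matters: with three covers and random hiding, a bird guessing blindly would pick the right cover on its first try only about one time in three. A minimal simulation (my own illustration, not part of the paper) makes the chance level concrete:

```python
import random

def chance_first_pick(n_covers=3, trials=100_000, seed=1):
    """Estimate how often a blind guesser finds the hidden worm
    on its very first pick, with the worm hidden at random."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        hidden = rng.randrange(n_covers)   # where the worm actually is
        guess = rng.randrange(n_covers)    # the blind guesser's first pick
        hits += (guess == hidden)
    return hits / trials

print(round(chance_first_pick(), 2))  # ≈ 0.33, i.e. chance = 1/3
```

A bird that reliably beats that one-in-three baseline is tracking the hidden object, not guessing.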
The question in this paper is not just, “Do these birds have object permanence?”, however, but how and when does it develop? The researchers took very young crows and presented them with these tasks in order.
And they looked cool doing it. See, they wore sunglasses during all the experiments so that the crows could not figure out where the food was located. Unfortunately, the sunglasses are not visible in this picture from the paper. That would have been awesome.
The crows tended to accomplish the tasks in roughly the same order as humans do. Stages 2, 3 and 4 were learned in that order. Things got a little messy with Stages 5 and 6: the crows tended to learn one Stage 6 task (of six tasks in that stage) before they learned any of the Stage 5 tasks.
The very last, most complicated of the 15 tasks?
The worm was visibly presented in the palm of the experimenter’s hand, which was then closed. The hand passed behind three screens and the worm was left behind the first one. Then the experimenter showed her empty hand to the bird. Criterion: the bird had to search for the worm systematically in reverse order: final screen, second screen and finally first screen.
To do this, the bird not only has to infer the location of a hidden item, it has to remember the path the hand took, and then rewind events to search in reverse order. The crows weren’t able to learn that task during the experiment.
Hoffmann and colleagues also administered a second battery of tests, in which they hid food and then rotated the whole affair on something like a Lazy Susan. The birds then had to look for the food in the new location. These tasks were hard for the crows, particularly when they could not see the food.
The authors end with some nice comparisons across species, suggesting that how long it takes different bird species to reach each of these stages is correlated with how long it takes each species to develop. This turns out to be the case, with one exception: magpies (Pica pica) seem to lag consistently behind crows, ravens, and jays.
It is puzzling how hard the rotation tasks are for the crows, a difficulty that has also been reported for other animals.
The online article has some videos of the crows performing these tasks.
Reference
Hoffmann A, Rüttler V, Nieder A. 2011. Ontogeny of object permanence and object tracking in the carrion crow, Corvus corone. Animal Behaviour: In press. DOI: 10.1016/j.anbehav.2011.05.012
Top photo by Le No on Flickr, used under a Creative Commons license.
26 June 2011
Guest post at BioData Blogs
For students and post-docs in particular, head over to BioData Blogs to read my guest post, “Why your mentor sucks (and how to fix it).”
Don’t worry, mentors! At some point, I will write, “Why your protégé sucks (and how to fix it).”
24 June 2011
Give ‘em the old razzle dazzle, and give it to ’em fast
Why do zebras look like this?
You know. All stripey.
One theory for this “Notice me!” pattern was picked up by naval officials: that the complicated, high contrast shapes somehow made zebras harder to see. Here, we see the HMAS Australia, painted with “dazzle” colouration.
Some said this particular pattern contained a “lady’s leg.” But then, some people see that sort of thing everywhere.
That the advantage of this kind of pattern hasn’t been well tested is maybe not surprising. “Hard to see” is a broad, ill-defined term that could be measured in a lot of ways.
Scott-Samuel and colleagues examined this by showing images with patterns on a computer screen. Their subjects were shown two moving blocks, and had to judge which of the two was moving faster. The “dazzle” patterns they used are shown here. They also used a plain white square as a standard to which everything else was compared.
When the patterns moved quickly against a high contrast background, people judged them to be moving slower than the plain white stimulus. Zigzags and checked patterns caused a larger effect (that is, were seen as moving slower) than those with stripes.
How fast do you have to be moving to get this effect? The authors estimate that the tested speeds on the computer screen are equivalent to 13 kilometers an hour for an object about 10 meters away. This is certainly within the range of biological capabilities. I found one site that estimated zebras can run at about 60 kilometers per hour. If a predator thinks an animal is running slower than it is, it could miss what it aims at.
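To get a feel for those numbers, here is my own back-of-envelope arithmetic (not a calculation from the paper): an object moving at 13 kilometers an hour, seen from 10 meters away and moving across the line of sight, sweeps across the visual field at roughly 21 degrees per second.

```python
import math

def angular_speed_deg(speed_kmh, distance_m):
    """Angular speed (degrees/second) of an object moving perpendicular
    to the line of sight at the given linear speed and distance."""
    speed_ms = speed_kmh * 1000 / 3600      # convert km/h to m/s
    return math.degrees(speed_ms / distance_m)  # small-angle: v/d radians/s

print(round(angular_speed_deg(13, 10), 1))   # ~20.7 deg/s
```

By the same arithmetic, a zebra at full gallop (roughly 60 km/h) at the same distance would sweep past at well over 90 degrees per second, comfortably beyond the tested speeds.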
That this effect works with humans doesn’t mean that it will work with other animals, however. For that matter, the effect might be enhanced for other kinds of animals. Different eyes, different possibilities.
It seems to me that this should be possible to test with animals. If you could train a visual predator to get a food reward for striking at a moving target, you would predict that the predator would miss targets with the dazzle patterns more often.
The authors take a more applied view. They do some calculations for a Land Rover under rocket attack. They reckon that the effect of dazzle colouration could be enough for someone to miss the vehicle if they were launching a rocket-propelled grenade at it. Sadly, the paper contains no experimental test of this prediction.
Jeeps? Rockets? Now there’s a scientific prediction that has MythBusters written all over it.
Reference
Scott-Samuel N, Baddeley R, Palmer C, Cuthill I. 2011. Dazzle camouflage affects speed perception. PLoS ONE 6(6): e20233. DOI: 10.1371/journal.pone.0020233
Zebra photo by alles-schlumpf on Flickr; ship photo by ; both used under a Creative Commons license.
23 June 2011
The noble, the artist, and the crane
A noble once asked an artist to paint him a crane. The artist named a high price, and asked the noble to come back later for the finished painting.
After a month, the noble returned to the house of the artist. The artist asked the noble to sit down, and said, “Just a moment, please.”
The artist unrolled a piece of paper, grabbed his brush and paints, and before the nobleman’s very eyes, completed his painting of the crane. It was undeniably beautiful. So lifelike was its pose, so delicate were the brushstrokes!
The noble, rather than being pleased, was furious. “How can you expect me to pay so much for something that took you only a few minutes to do?”
The artist calmly walked over to a nearby cupboard and opened it.
Contained within were dozens – no, hundreds – of badly drawn cranes.
When reading a research paper that you think isn’t very good or very interesting, keep in mind that it may be a crude, but necessary, first step towards making something great.
It may just be a badly drawn crane.
Note: I didn’t create this story; I believe it is a traditional story from Asia.
Picture by origamiwolf on Flickr; used under a Creative Commons license.
22 June 2011
Vote in the Labby Awards!

It’s a tough choice. There are several good sites in the running. As a neuro guy, I know The Brain Observatory does good work.
I want to ask you to vote for my blogging buddy, Carin Bondar. Carin and I have blogged together, and she’s shown up in some blog posts. And I am so pleased that she just joined me on the Craywatch citizen science project.
You should go here and vote for Carin right now!
And take a second to check out the finalists for the best science videos!
Open access and taxes
There are many reasons to argue for open access of scientific research. But this is not the best one:
It’s your taxes that fund the research, you should have access to the results without me or anyone else being a mediator.
That one is from Kevin at We, Beasties. When I protested that this argument omits indie science, Kevin replied that it’s such a small amount as to be not even worth considering.
Here’s how I see arguments going for people who try to link open access to tax dollars.
Open access advocate: American tax payers have paid for this research, so it should be freely available.
Unconvinced person: How will you do that? Put it on blogs? In library archives?
OAA: Oh, no! Real science has to be published in peer-reviewed journals for quality control purposes.
UP: Who runs those journals now?
OAA: Mostly private businesses.
UP: So you want to put those publishers out of business? You’re a socialist who wants to kill private sector jobs.
OAA: Wha...? No! Open access doesn’t mean a journal can’t be for profit.
UP: How are journals supposed to turn a profit if all that intellectual property is given out for free?
OAA: The authors will pay fees to the journal if their paper is accepted.
UP: How much?
OAA: One of the most successful open access journals charges $1,350 for each paper.
UP: And that comes out of the authors’ own pockets?
OAA: Oh, no! That money is budgeted by the researchers into the government grants we get.
UP: So you want even more of my tax dollars to fund your research? You’re a greedy pig at the trough who’ll leave a bankrupt country to my grandchildren.
OAA: Wha...? No! Open access means that scientific progress around the world can progress faster. Scientific research grows economies.
UP: So everyone could read this science funded by our tax dollars?
OAA: That’s right.
UP: You want to give away our best knowledge to countries competing with us in the global economy. You want to destroy our way of life and replace it with a global world government.
OAA: Wha...? No! Open access means that you personally will be able to delve into the world’s best new science!
UP: But I don’t care.
Photo by soukup on Flickr; used under a Creative Commons license.
21 June 2011
Tuesday Crustie: Commedia dell'arte
Two harlequin shrimp (Hymenocera picta) that I snapped before the banquet at the recent Crustacean Society meeting at the Waikiki Aquarium.
The Waikiki Aquarium has a fact sheet on these glorious little animals, which has only one flaw: the picture is in black and white, and doesn’t capture their wonderful colours.
20 June 2011
The Zen of Presentations, Part 42: Outlines must die
Outline slides are a waste of time. By definition, they contain no information that will not be found somewhere else in the talk. Worse, people usually narrate those slides, tediously plodding through each point.
For scientific talks, outlines are even more useless because almost every talk has the same structure. People structure their talks the same way they structure their scientific papers: Introduction. Methods. Results. Discussion. If your outline isn’t substantially different than that, leave it out. Putting an outline up with those headings advertises your lack of imagination.
Why be redundant? Why repeat yourself? Why say the same thing over and over again?
Yet not only do I regularly see these sinkholes when I attend scientific conferences, some of my colleagues insist their students include them.
An outline is a planning tool. Outlines are useful in preparing a talk. You don’t need to show it once the planning is over, however. You don’t need to see an outline for a talk any more than you need to see the blueprints for a building you’re walking around in. You don’t need to see the storyboards for a movie you’re watching. You don’t need to see the rough sketches of a painting.
The problem might be that instructors want students to have a plan for their talk, which is a useful thing to teach them. The easiest way for instructors to ensure that happens is to make students “show their work” by including an outline slide. This forces the student to plan, which is good. This is less work for the instructor, because he or she can just tick it off during the presentation. But the cost is the student has learned a horrible habit that makes nobody else in the audience happy.
My colleagues are great, but sometimes I’d like to give ‘em such a smack.
Related rants
The Zen of Presentations, Part 31: Redundant and repetitive
Picture by andersabrahamsson on Flickr; used under a Creative Commons license.
19 June 2011
American science without Americans?
Physicist Michio Kaku may miss the mark here (edited down; full quote here):
The United States has the worst educational system known to science. Our graduates compete regularly at the level of third world countries. So how come the scientific establishment of the United States doesn’t collapse? ...
America has a secret weapon.
That secret weapon is the H1B.
Without the H1B, the scientific establishment of this country would collapse. ... The United States is a magnet sucking up all the brains of the world, but now the brains are going back. ... And people are saying, “Oh, my God, there’s a Silicon Valley in India now!” “Oh, my God, there’s a Silicon Valley in China!” Duh! Where did it come from? It came from the United States. So don’t tell me that science isn’t the engine of prosperity.
I part company with Kaku when he asserts that American students can’t do science (or that there aren’t enough of them – I’m not sure which he’s arguing). I work with plenty of smart American students. They can perform science at the highest levels. Mike The Mad Biologist has often noted that American students perform well in science – if you account for the bad effects of poverty.
It might be that many Americans don’t go into science not because they are incapable (or lazy or damaged by their education), but because they’re smart. Americans might not pursue scientific careers for the same reason that they don’t pursue careers as migrant crop pickers or maids: there are better ways of making a living out there than being a researcher.
Shorten the path to a doctorate and a career, increase the number of positions requiring doctorates in education and industry, maybe consider a slight pay raise, and then we’ll talk.
17 June 2011
“Did nobody see this was a bad idea?”
“How did this paper get published?”
This is a common refrain in conferences, journal clubs, and science blogs. You look at a paper and think, “Why oh why could people not see the flaws? Why couldn’t the authors see it? How did the reviewers miss this? What was the editor thinking?”
We scientists might obsess about this a little too much. Why should science be different from other kinds of human endeavor?
I like movies. I try to see a new movie in the theater every week. (And yes, I have yet another blog with movie reviews.)
From a purely financial point of view, the stakes when you make a movie are generally much higher than for scientific papers. For basic biology, many grants cost hundreds of thousands of dollars. Movies routinely cost tens of millions of dollars.
Quite often, you’ll hear political pundits talk about how private industry gets “signals” to be efficient that public government does not. Businesses have to be profitable, after all. You would think that people would think very carefully when they set out to make a movie. They would read the script. They would scrutinize the costs. They would try to make something that people would want to see.
And yet, somehow, we have this:
And this:
And this:
And you ask, “How did this happen?”
How could writers, directors, producers, and everyone else involved not look at each other at some point and go, “This is going to be so bad, we should stop right now”?
Nobody has a good answer. Some of it is that people get too close to their projects, and lose objectivity. But there are so many people and so much money involved that you just have to wonder: “What were they thinking?”
There is a saying in the film making business: “Nobody sets out to make a bad movie.”
But that doesn’t mean the endeavor is necessarily a complete waste. I like this bit from Todd Brown:
No serious, professional filmmaker ever set out to make a bad movie. Not one. They all set out to make the best movies they could under the circumstances they were presented with. It’s the struggle to be good that gives them their vitality and their continuing drawing power. When they’re funny it’s not because they wanted to be bad - it’s because they tried to be good and missed so badly. Even Ed Wood believed he was making art.
Sometimes, you can find a small redeeming thing in a terrible film. An actor who doesn’t phone in the performance. A single genuinely funny joke. A good piece of music.
Nobody sets out to write a bad scientific paper.
Science is a creative process. And the creative process is messy, filled with people who are just trying to do the best they can. And sometimes there will be something of value that someone else might be able to use.
Additional: David Pogue calls the reason nobody says anything “the Broadway flop effect.”
16 June 2011
Presentation tips compiled
I’ve just taken some of my best blog posts on presentations, reorganized them into themes instead of “What I happened to feel like writing about that day,” and compiled them into a very quick and dirty PDF titled, “Presentation tips.”
Conferences are broken and awesome at the same time
Last week, I blogged from The Crustacean Society meeting in Honolulu, Hawaii. Given the long plane flights, and that I paid for the entire trip out of my own pocket, was it worth it?
I had a split view after the conference.
Part of me said, “Conferences are broken.”
I’m getting impatient with the low quality of talks at scientific conferences. I don’t mean the quality of the science, but the quality of the presentation of the science.
People are still reading those damn lists of bullet points they’ve made in those canned PowerPoint templates. I am 99.9% sure I was the only person to use Prezi. I was not the only person to deliver a presentation with no bullet points, but that number was small.
Oral presentations should be the highlight of the conference. They are ostensibly the reason we are holding the conference. Instead, talks are so weak that they are the least interesting thing about the whole affair.
Not every talk was poor - especially not yours, dear reader. But if you were there, did you feel energized before going to the coffee break, or was the coffee break sweet release?
And I ate more lunches and dinners alone than I would have liked. I suck at making conference food dates.
But part of me said, “Conferences are awesome.”
I had a lot of good, productive conversations. At the opening social. In the poster session. At the banquet. A few in the coffee breaks.
There were points in my talk that I thought people didn’t respond to, but I found out two days later that they had the desired effect.
All of those talks made me glad that I hadn’t just connected over Skype. The actual physical meeting still has advantages.
If I were organizing a smallish conference like this, I would try to:
- Schedule more featured talks from speakers at the top of their game who give kick-ass talks. Scientists who can make you laugh. Scientists who can bring a tear to your eye. Or just the ones with awesome photos. They’re out there.
- Shake up the contributed talks. I’ve said before I would love to do an Ignite! session at a conference. Another might be to put more effort into giving speakers input on effective presentation, or have a “What not to slide” workshop.
- Ask some people to try talking about their research in panels (no slides permitted) on similar themes instead of single speaker talks.
- Try to facilitate lunch and dinner outings. Maybe designate a few of the locals to act as “hubs,” to say, “Hey, I’m going to have lunch at a nearby sushi place.” Nobody should have to eat alone at a conference.
- Make sure people have a chance to view posters before the formal poster session. And hold more than one poster session. “All posters up for four hours, odd numbers present the first two hours, even numbers the last two” doesn’t work. People get trapped in front of posters and miss others. Splitting posters across sessions, instead of having them all up at once, gives people a chance to see more of them.
- Have fewer tracks of programming as the conference wears on. That way, as people leave, the rooms do not appear to be distressingly empty.
I don’t think I’d be quite ready to have one of those “unconferences” I hear about. But I do think the standard conference format could be improved.
15 June 2011
Comments for first half of June 2011
Dr. Micro O likes Comic Sans and Wikipedia. Hey, neither of those are crimes.
DrugMonkey asks what people think retraction means.
I make a cameo appearance at John Rennie’s blog. Regarding comics.
Coming off the invasive species symposium I spoke at, Christie Wilcox looks at whether every introduced species is an invasive species.
Turning light and going blind: A tale of caves and genes


But the cave and surface forms are nominally the same species. What has happened to them genetically? Are they still able to interbreed? Are all those cave-related features reversible?
Protas and colleagues tried breeding the two forms, and were able to get hybrids, all females. Why the first generation were all females isn’t clear.
When they started doing genetics, a perhaps unsurprising pattern emerged for the genes they examined: the cave dwellers were homozygous, while the hybrids and surface dwellers were heterozygous. This suggests to me that the cave traits are classic recessive Mendelian genes.
The eyes of these cave crusties have a wide range of appearances. Some are truly eyeless, while others have eyes that are... messed up. Those also appeared in the second generation. The eye loss seems to be a small number of genes with large effects.
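The Mendelian arithmetic behind that pattern is easy to sketch. Here is a minimal simulation, assuming a single hypothetical locus with a dominant surface allele “A” and a recessive cave allele “a” (a deliberate simplification of the multi-gene situation the paper actually describes): crossing a surface dweller with a cave dweller gives all-heterozygous hybrids that look like surface animals, and crossing those hybrids gives a second generation where the cave trait reappears in about a quarter of the offspring.

```python
import random

def cave_fraction(parent1, parent2, n=10000):
    """Simulate a one-locus Mendelian cross: each parent passes one
    randomly chosen allele to each of n offspring. Returns the fraction
    of offspring showing the recessive cave phenotype ('aa')."""
    offspring = [(random.choice(parent1), random.choice(parent2))
                 for _ in range(n)]
    # The cave trait only shows when an offspring is homozygous recessive.
    cave = sum(1 for a, b in offspring if a == 'a' and b == 'a')
    return cave / n

# Surface (AA) x cave (aa): every offspring is Aa, so none show the cave trait.
print(cave_fraction('AA', 'aa'))   # prints 0.0

# Hybrid (Aa) x hybrid (Aa): roughly a quarter of the F2 are aa.
print(cave_fraction('Aa', 'Aa'))   # prints roughly 0.25
```

Of course, the real animals involve several loci with separate effects on eyes and pigment, so this is only the single-gene cartoon of the result.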
The authors found multiple genetic mutations responsible for pigment loss - there are two ways, involving three genes, to become albino. There are also different genes for small eyes and the complete lack of eyes. This is very similar to the situation in Mexican blind cave fish, where “small eyes” and “no eyes” are the result of two different genes, not variation within a single gene.
The paper goes on to detail much more about the genes, most of which is “above my pay grade,” as they say. But it’s cool to find these genes, with such clear and large effects, that are so clearly correlated with the environment, and with such unusual parallels to an unrelated species.
Anyone know of a third blind cave species that a geneticist might tackle? Or maybe someone is willing to do the experiment of releasing a population of fruit flies into a cave and coming back in a decade or two to see what has happened.
Reference
Protas M, Trontelj P, Patel N. 2011. Genetic basis of eye and pigment loss in the cave crustacean, Asellus aquaticus. Proceedings of the National Academy of Sciences 108(14): 5702-5707. DOI: 10.1073/pnas.1013850108