30 October 2013

Save the Day essays #2: Recovery

I went to a Doctor Who convention in 1986. John Nathan-Turner, then producer of the show, was asked about efforts to recover the old missing stories. He said something like, “They may have dried up, and there may be no more to find.”

He was wrong.

We got back most of “The Ice Warriors” in 1988. We got back all of the epic “Tomb of the Cybermen” in 1991. We got back an episode of “Galaxy 4” in 2011, to name a few. But there is always the fear that the next recovered episode from the 1960s could be the last.

If, not too long ago, you had told me that during the fiftieth anniversary year of Doctor Who, there would be nine episodes recovered (completing one story and almost completing another)...

And that one set of those (“The Web of Fear”) would be a story featuring the Great Intelligence, a villain that had been brought back to the current show just months before (“The Snowmen” Christmas special from 2012, “The Bells of St. John,” “The Name of the Doctor”)...

And that I’d be able to download them online and watch them mere days after their discovery had been announced...

I would have brushed that off as the most wishful of wishful thinking. It sounds too improbable, too good to be true, to have so many coincidences align.

The moral of the story is that sometimes, the improbable happens. And it can be wonderful.

29 October 2013

Tuesday Crustie: Nibbles

Yes, there are crustaceans in this picture. Enlarge and look closely at the second and third toes from the left...

Photo by Michela Simoncini on Flickr; used under a Creative Commons license.

28 October 2013

Save The Day essays #1: Restoration

My “Save the Day” essays will be a series celebrating the fiftieth anniversary of Doctor Who, which is less than a month away.

I didn’t start watching Doctor Who at the beginning. I was part of the early wave of Doctor Who fandom in North America. Like a fair number of people, I’d started watching Doctor Who in the late 1970s and early 1980s, when the fourth Doctor series were syndicated to public television in the United States.

I learned many of the old Doctor Who stories were lost. Many original tapes had been wiped, because the BBC considered television ephemeral. Most of the old black and white stories from the 1960s existed because someone had pointed a 16 mm camera at a TV screen and filmed episodes for overseas sales.

I was excited when I learned that our nearby American public television station would be showing all the existing stories from the show’s beginning. But at first, I was disappointed. So many of these stories felt so bad to me.

It wasn’t until years later, when I started seeing some of the DVDs, that I realized my judgment had been compromised. I wasn’t just reacting to the show; I was reacting to the low quality of the images. What I saw on public television were unrestored 16 mm prints, with scratches on the frames and hiss on the audio.

Seeing the restored versions on DVD was a revelation. This comparison of an original and restored frame (from “The Seeds of Death,” a second Doctor story) shows how huge the difference could be. Here’s the sort of image I saw broadcast:

And here’s the DVD.

The difference is stunning. Episodes that looked cheap and tired suddenly looked professional and even ambitious.

I realized that I was responding to the scratches on the film far more than I’d ever thought. I hadn’t been judging the old stories on their own merits; I’d been heavily influenced by how they were presented.

I had a similar reaction when I started to watch DVDs of old Godzilla movies. Like Doctor Who, I’d seen the movies on low quality prints with scratches and crummy reproduction. I loved them, but they seemed cheap. Seeing them remastered on DVD from the original negatives, and with subtitles instead of dubbing, completely changed my thinking about the movies. They looked glorious. What I thought was bad production was actually bad reproduction.

It makes me wonder how many things I rate lower than they deserve because of incidental flaws in how they are presented.

External links

Doctor Who Restoration Team

25 October 2013

How common is “co-first” authorship?

Deciphering authorship of scientific papers is an esoteric task. Something that raises red flags for those in the tribe of science (like 300 publications with 40 as last author) means little to anyone on the outside. “Why do you mention last author? Does it matter?”

In many fields, it is viewed as good to be the first author. But only one person can be first author, which leads to the practice of some labs having “co-first” authorship. This is usually designated by a footnote to the effect that “authors Smith and Jones contributed equally to this work.”

In one of my classes, I was discussing the practice of co-first authorship, and I commented that it was rare. As it happened, later that day I had occasion to review a couple of sets of CVs from different fields of biology. One set of CVs fell on the “basic organismal biology” side, and the other set tilted much more heavily towards the “biomedical” side.

Only 5% of the organismal biologists listed papers with “equal contributions,” compared to 45% of the biomedical scientists. Note that this isn’t the percent of publications, but the percent of scientists with one or more “co-first” papers. This surprised me; I didn’t think it would be that high.

I wonder if this reflects a real difference in the two fields. I think it might be. Biomedical research, I suspect, is a field where there is more money, more competition, and more team members. I think this creates an incentive for people to try to scrape together any additional credit, no matter how tiny. I have yet to meet anyone who cares about “equal contribution” footnotes when reviewing publication records.

I dislike co-first authorship, but it’s just a symptom of another antiquated part of scientific publishing that dates back to the days of small numbers of scientists. Trying to figure out who did what under current authorship practices is like trying to slice bread with a club: messy.

Related posts

Letter in Science!

External links

Co-first Authorship is a lie and a sham and an embarrassment to our profession (DrugMonkey has written extensively on co-first authorship; search his blog for much more.)

23 October 2013

That’s the best evidence for a STEM shortage?

The National Math and Science Initiative blog claims there is a STEM crisis: we just aren’t producing enough skilled STEM graduates. The article advances three pieces of evidence for a “STEM shortage.” All are problematic.

Less than 40% of students who start out as STEM majors in college receive a STEM degree.

The number of students who leave does not address whether the number of students who remain is adequate to meet employment needs. A lot of students start off university wanting to be physicians and health professionals. Not all of them make it. That doesn’t automatically mean we’re failing in education or that there is a shortfall of qualified individuals. It might mean it is a stringent, selective, difficult profession.

Nearly half of the Army Corps of Engineers are eligible for retirement.

First, it is risky to extrapolate from a single subset of one STEM discipline to all disciplines and all degrees. Second, being “eligible” to retire is not the same as “planning” to retire. Shortfalls due to retirement were routinely predicted for academia in the late 1980s, but they never materialized.

More than half of the fastest growing jobs in the nation are in STEM fields.

That demand is growing fast does not, in and of itself, tell you whether there is adequate supply. Also, those “growing jobs” are likely a widely varied mix, with different skill sets and qualifications. As Biochem Belle noted, when talking about a STEM shortfall it doesn’t make much sense to lump jobs that need a bachelor’s in computing with those that need a doctorate in chemistry, and call them all “STEM jobs.” And probably most of those fast growing jobs are in computing, with a little engineering.

External links

Once more, with feeling: computer science job growth dwarfs all others

22 October 2013

Tuesday Crustie: Venom

Bonus Tuesday Crustie! Because breaking crustacean news!

I’ve featured remipedes before, because they are cool in many ways. They are cave-dwellers, and probably the closest relative to insects.

This remipede, Speleonectes tulumensis, is in the news today because it has become the first crustacean discovered to have venom. Check the links below for great summaries!

Related posts

Tuesday Crustie: Insect cousin

External links

First venomous crustacean discovered – Nature News
Of 70,000 Crustacean Species, Here’s The First Venomous One – Not Exactly Rocket Science
The first venomous crustacean is found – Why Evolution is True

Picture from here.

Tuesday Crustie: Better than Darjeeling or Earl Grey

From here. Hat tip to Craig McLain and Andrew Thaler

21 October 2013

Do you know what your professor does?

Last week, I gave a talk on research culture, about which I shall write more later. I wanted feedback from people who work with or hire starting research students, especially at the undergraduate stage. I took to Twitter, and was grateful for the feedback on this question (and several others). I asked, “What do students do right, or wrong, when they first contact you about research opportunities?”

The most common answer, by a long, long, looooooong ways, was, “I want students to have some idea of the research I do.” To give a few examples:

Wrong = have no idea what my lab does.

I want someone who knows why they want to work in my lab.

Read some lab pubs before talking to PI.

Wrong: they don't know that my lab is computational. Right: they have checked some of my publications.
Gabo MH

Do show up in my office having read a few of my publication abstracts (on my web site) with a specific possible interest.

Show that you have actually looked at my website and read some papers, and explain what skills you have that are aligned.

Prospective student claimed interest in my lab, then asked what I study & if I have pubs. Informed her about Pubmed.

This reflects a larger aspect of research culture: the expectation that people who want to join in research will read the literature. Of course, new people may not know what “the literature” is, thinking we mean Dickens, Chaucer, Homer, and the like. But we can explain that to them.

That said, at a mentoring workshop I participated in a while ago, when I told students that they should have an idea of what professors do, one student spoke up. “Isn’t it egotistical for you to say you’ll only work with students who will work on projects interesting to you?” I can’t remember my answer. I’m still struggling with this question. Research opportunities on any given campus are finite, and I do wonder how many people get turned away who might be able to make good contributions.

Photo by Thompson Rivers on Flickr; used under a Creative Commons license.

16 October 2013

Today in “What the hell is wrong with people?”: Me

Science blogger Danielle Lee was on the receiving end of bad behaviour. On the other hand, science blogger Bora Zivkovic engaged in bad behaviour. Priya Shetty asked the science blogosphere to explain the difference in reaction, saying of the latter:

But what about his fellow science writers who are normally a highly articulate and outspoken bunch? What explains either the deafening silence or the bizarre closing of ranks?

Insults are simple. Insults documented in pixels might even enter the realm of clear cut, or straightforward. In this case, I thought I could do something positive. I re-posted Dr. Lee’s (now restored) post about getting an insulting question.

Harassment is complicated. Harassment with only personal accounts, even when both participants agree in broad strokes about what happened, even more so. It’s hard to know what to make of the situation, what should be done, and what is appropriate restitution. In this case, I don’t know if there is something positive that I can do – yet.

Layer on top of that the tribal issues – who’s in, who’s out, who’s being attacked and who is doing the attacking – and you have part of the explanation Shetty is asking about.

Update: Knowing what to make of the situation is getting less complicated.

More additional: Actions have consequences (part 1, I expect). Bora Zivkovic has quit the Science Online board.

Update, 18 October 2013: Yet another hard-to-read but important account of Bora Zivkovic’s bad behaviour has appeared. As is so often the case, just one tug starts to unravel more. And I still don’t know what I can say or do that would genuinely be positive.

More updates, 18 October 2013: Actions have consequences (part 2). Bora Zivkovic has resigned from Scientific American. This was the only possible outcome, I think. Where do we go from here? I don’t know.

Related posts

Today in “What the hell is wrong with people?”: Danielle Lee’s story

External links

Another Sexual Harassment Case in Science: The Deafening Silence That Surrounds It Condones It
This happened
The insidious power of not-quite-harassment
Two stories: One man got away with it — will the other, too?
Confronting Sexism in Science Communications – Link round-up, added 21 October 2013

Using “journal sting” papers for teaching

Say what you will about John Bohannon’s Science article about dodgy peer review, he’s done a service by getting people talking about problems of journal credibility and peer review. My link list is over 50 pieces now, and still growing.

Bohannon may have done those of us teaching technical writing another service. When I teach students how to read through original research papers, I want to show them bad examples as well as good examples. Bad examples force you to confront and articulate the details about why the paper doesn’t make its case.

Jeffrey Beall found at least four websites that published one of Bohannon’s hoax papers, and has archived them on his blog.

The sites that put the hoax paper up have done a passable job of typesetting it. They have included all the usual paraphernalia you see in regular journals: page numbers, banners, DOI numbers, ISSN numbers, and so on. Thus, to an untrained eye, there are no obvious giveaways in the presentation that these are anything other than short papers published in the usual way. Bohannon has created papers that, to a student, look like the real deal.

The paper is short enough (3-5 typeset pages) that students can do a read through and start picking it apart in a single class period.

Even better for teaching purposes, Bohannon provides a “key” to the paper’s major problems in his Science article. The problems do not require a specialist’s knowledge in cell biology, or cancer, or probably even biology, to pick them out. For instance, one of the key graphs claims it shows a “dose dependent” effect. Most people can grasp the idea of a “dose dependent” curve, if you present it right. “If something is anti-cancer, what do you expect to happen to cancer cells if you give them more of it?”
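The “dose dependent” reasoning above can be sketched as a simple check. This is my own illustrative sketch with made-up numbers, not data from Bohannon’s paper: a genuine inhibitory dose-dependent effect means that as the dose goes up, the fraction of surviving cancer cells should go down.

```python
# Hypothetical dose-response data: drug dose vs. fraction of cancer cells surviving.
doses = [0.0, 0.1, 1.0, 10.0, 100.0]
survival = [1.00, 0.85, 0.60, 0.30, 0.10]

def is_dose_dependent(doses, responses):
    """Check for an inhibitory dose-dependent effect: more drug, fewer surviving cells."""
    pairs = sorted(zip(doses, responses))  # order the measurements by dose
    return all(later <= earlier for (_, earlier), (_, later) in zip(pairs, pairs[1:]))

print(is_dose_dependent(doses, survival))  # prints True: survival falls as dose rises
```

A curve where survival bounces back up at higher doses would fail this check, which is the kind of mismatch between a graph and its “dose dependent” caption that students can spot without specialist knowledge.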

When they first start to read the scientific literature, students can be both a bit too deferential and a bit too trusting. You need the bad examples to convince yourself that you need to read critically and not take things at face value.

Related posts

Open access or vanity press, the Science “sting” edition

Comments for first half of October 2013

Christina’s LIS rant looks at the journal sting conducted by Science magazine.

The journal sting is also the subject of Brian Wood’s Human Ecology blog.

The Lab and Field asks us to stop using the #ICanHazPDF hashtag on Twitter. Sure, as soon as Interlibrary Loan makes getting a paper somewhat close to as fast and as convenient as sending a tweet.

Neuroecology asks why there isn’t a coherent neuroscience blogosphere.

15 October 2013

Tuesday Crustie: Ice blue

This lovely little cladoceran Moina macrocopus has a frosty colour.

Photo by PROYECTO AGUA** /** WATER PROJECT on Flickr; used under a Creative Commons license.

14 October 2013

Initial reactions: can we hide the sex of authors? And should we?

Maliniak and colleagues (2013) have found an important fact: if a paper in international relations has only female authors, it will be cited about 70% as often as a paper with only male authors.

In a guest blog post, co-author on that study, Barbara Walter, says there is a “relatively easy” solution to fix this. I was reminded of the saying, “For every complex problem there is a simple solution, and it’s wrong.”

(There’s a) potentially easy solution: anonymity. What if we set up evaluations in academia so that we never knew the gender of the person being evaluated (or at the very least downplayed it as much as possible)?

Under this suggestion, I should be “Z. Faulkes,” always, in my academic writing.

I am sympathetic to the idea of removing identifiers. After all, if something is supposed to be judged on its content, then let’s focus on the content. But what bugs me is that the authors say this is “relatively easy.” Indeed, the example they use makes it sound easy:

Prior to adopting anonymous auditions in which candidates sat behind screens when they played, less than 5 percent of all musicians in these orchestras were women. Once blind auditions were instituted women were 50 percent more likely to make it out of the preliminary rounds and significantly more likely to ultimately win the job. Today, 25 percent of all musicians in these elite orchestras are women.

The more I thought about it, the more the analogy with a musical audition unraveled.

A musical audition asks different performers to perform a set, predetermined piece of music. In academic evaluation, researchers are submitting original works, and no two are alike.

A musical audition is a solo endeavour, where you can judge the performance of the individual. In academics, particularly in science, you are often working with many authors, and it’s very difficult to disentangle the contributions of the authors.

Evaluating academics isn’t like auditioning a single soloist in an orchestra. It’s more like digging through demo tapes of rock bands. And you may have heard the band before at a bar or showcase gig.

For academics, their manuscripts and their CVs are extensively reviewed.

Double-blind review of single manuscripts has been tested occasionally. For instance, Budden and colleagues (2007) claimed it increased the number of papers with women as first author. Those results were contested, however (Webb et al. 2008; Whittaker 2008). But even the authors of the original study admitted it was incredibly difficult to get data on the effects of different peer-review practices, since so few journals are willing to tinker with them, or share data about them.

As a practical matter, I don’t think there is any way to make a CV even close to anonymous. Manuscripts are perhaps a more promising target for anonymous review, but the nature of peer review, which tries to find the most qualified people with the most expertise, makes it highly likely that reviews will be done by people who have met the authors and know what their research is.

For instance, early in my career, I got wind that my name had come up in an Animal Behavior Society meeting. Someone (I think it was Zuleyma Tang-Martinez) was making the point that the society should know the sex of all of its members, “For instance, is this ‘Zen Faulkes’ male or female?”

As it happened, not one, but two of my professors from my university were in the room. They laughed, and said, “Male... You should have asked!”

For that matter, making a work anonymous will only change the “females cited 70% as often as males” figure if the problem is bias at the reviewing stage. It is unlikely to address the part of the citation gap that comes after review, like males being more active in self-promotion and networking.

A few years ago at Science Online, Ed Yong mentioned that he often had scientists emailing him, asking if Ed could mention their newest work. These were invariably men. The women in the room reacted with surprise. None of them had thought that this was something that they could or should do. Whether you think such requests are out of line or not, you could see how that kind of difference in promotion could lead to men being better known in their field, and getting some more citations as a result.

That the scientific community is a community means that anonymity will be difficult to come by except in very early careers. Double blind review might be much more practical in situations like applications to grad school.

Let’s fight biases whenever we come across them. But let’s not pretend it’s going to be easy. It’s going to be a tough, hard slog.

Hat tip to Miriam Goldstein for bringing this to my attention.

Update, 16 May 2015: Any discussion of making peer review blinded or not is probably incomplete without referencing this post by Hilda Bastian, which systematically reviews a big chunk of the literature on effects of anonymity in peer review. She concludes:

I think institutionalizing anonymity in publication peer review is probably going out on a limb. It’s only partially successful at hiding authors’ identities, and mostly only when people in their field don’t know what authors have been working on. If blinding authors was a powerful mechanism, that would be evident by now.


Maliniak D, Powers R, Walter BF. 2013. The gender citation gap in international relations. International Organization 67(4): 889-922. http://dx.doi.org/10.1017/S0020818313000209

Budden AE, Tregenza T, Aarssen LW, Koricheva J, Leimu R, Lortie CJ. 2007. Double-blind review favours increased representation of female authors. Trends in Ecology & Evolution 23(1): 4-6. http://dx.doi.org/10.1016/j.tree.2007.07.008

Webb TJ, O’Hara B, Freckleton RP. 2008. Does double-blind review benefit female authors? Trends in Ecology & Evolution 23(7): 351-353. http://dx.doi.org/10.1016/j.tree.2008.03.003

Whittaker RJ. 2008. Journal review and gender equality: a critical comment on Budden et al. Trends in Ecology & Evolution 23(9): 478-479. http://dx.doi.org/10.1016/j.tree.2008.06.003

Budden AE, Aarssen LW, Koricheva J, Leimu R, Lortie CJ, Tregenza T. 2008. Response to Whittaker: challenges in testing for gender bias. Trends in Ecology & Evolution 23(9): 480-481. http://dx.doi.org/10.1016/j.tree.2008.06.004

External links

How to reduce the gender gap in one (relatively) easy step
The problem is that scientists are human
Weighing up anonymity and openness in publication peer review
Photo by Penn Provenance Project on Flickr; used under a Creative Commons license.

12 October 2013

Today in “What the hell is wrong with people?”: Danielle Lee’s story

Danielle Lee is someone the blogosphere, and science, and science writing, needs a lot more of: smart, passionate, articulate. Can never have too much of that.

She doesn’t deserve the treatment she got in the story she described below.

She also doesn’t deserve to have the post below taken down by Scientific American, the blog network where it was originally posted. I am grateful to Dr. Isis for archiving it, and giving us a chance to spread this story far and wide.

Because the treatment that she got deserves to be called out and condemned. I almost hope I get asked to blog by this site so I can say in no uncertain terms, “No. I haven’t forgotten how you treated Danielle Lee.”

Update, 14 October 2013: I posted this before I realized how much of an Internet shitstorm was raging about this. Since then...

The person who asked Dr. Lee the insulting question was fired by Biology-Online. Both Biology-Online and Scientific American have issued apologies (varying in their explanations, and which are not satisfactory to many readers).

Despite Scientific American’s editor-in-chief apologizing, Dr. Lee’s post still cannot be found at her Scientific American blog, The Urban Scientist.

Have I mentioned I’m not a big fan of words without deeds?

More update, 14 October 2013: And within less than an hour of writing the text above, Dr. Lee’s post is back up, with an explanation of why it vanished. Do I consider this particular matter settled? I’m not sure. Regardless, the issues raised by it are a long way from being resolved.

Wachemshe hao hao kwangu mtapoa

I got this wrap cloth from Tanzania. It’s a khanga. It was the first khanga I purchased while I was in Africa for my nearly 3 month stay for field research last year. Everyone giggled when they saw me wear it and then gave a nod to suggest, “Well, okay”. I later learned that it translates to “Give trouble to others, but not me”. I laughed, thinking how appropriate it was. I was never a trouble-starter as a kid and I’m no fan of drama, but I always took this 21st century ghetto proverb most seriously:

Don’t start none. Won’t be none.

For those not familiar with inner city anthropology – it is simply a variation of the Golden Rule. Be nice and respectful to me and I will do the same. Everyone doesn’t live by the Golden Rule it seems. (Click to embiggen.)

The Blog editor of Biology-Online dot org asked me if I would like to blog for them. I asked the conditions. He explained. I said no. He then called me out of my name.

My initial reaction was not civil, I can assure you. I’m far from rah-rah, but the inner South Memphis in me was spoiling for a fight after this unprovoked insult. I felt like Hollywood Cole, pulling my A-line T-shirt off over my head, walking wide leg from corner to corner yelling, “Aww hell nawl!” In my gut I felt so passionately: “Ofek, don’t let me catch you on these streets, homie!”

This is my official response:

It wasn’t just that he called me a whore – he juxtaposed it against my professional being: Are you urban scientist or an urban whore? Completely dismissing me as a scientist, a science communicator (whom he sought for my particular expertise), and someone who could offer something meaningful to his brand. What? Now, I’m so immoral and wrong to inquire about compensation? Plus, it was obvious to me that I was supposed to be honored by the request.

After all, Dr. Important Person does it for free so what’s my problem? Listen, I ain’t him and he ain’t me. Folks have reasons – finances, time, energy, aligned missions, whatever – for doing or not doing things. Seriously, all anger aside…this rationalization of working for free and you’ll get exposure is wrong-headed. This is work. I am a professional. Professionals get paid. End of story. Even if I decide to do it pro bono (because I support your mission or I know you, whatevs) – it is still worth something. I’m simply choosing to waive that fee. But the fact is I told ol’ boy No; and he got all up in his feelings. So, go sit on a soft internet cushion, Ofek, ‘cause you are obviously all butt-hurt over my rejection. And take heed of the advice on my khanga.

You don’t want none of this

Thanks to everyone who helped me focus my righteous anger on these less-celebrated equines. I appreciate your support, words of encouragement, and offers to ride down on his *$$.

08 October 2013

Journal sting a black eye for Thomson Reuters

I’m grateful to Brian Wood on the Human Ecology blog for analyzing some of the data from Science magazine’s journal sting. He was curious if there was any relationship between a journal’s Impact Factor and the acceptance of Bohannon’s hoax paper.

57% of the journals without a listing in (Journal Citations Reports) Science accepted the paper, only 7% (n=3) of those in the database did so.

That any journal with an Impact Factor accepted the paper is a black eye for Thomson Reuters (the company that owns the Impact Factor), not just the journal. Journals are peer reviewed by Thomson Reuters before they are added to the Web of Knowledge. One of the things journals in the Web of Knowledge are supposed to have shown is that they are peer reviewed:

Application of the peer-review process is another indication of journal standards and signifies overall quality of the research presented and the completeness of cited references.

Brian goes on:

I compared the 2012 impact factors of listed journals who accepted the paper (n=3) with those who rejected it (n=34). The median impact factor of accepting journals was 0.994 and those who rejected was 1.3075 (p < 0.05 in a two-tailed t-test).

With a sample size of 3, I wouldn’t conclude anything about the predictive power of Impact Factor.
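How little an n of 3 constrains a median can be shown with a quick simulation. This is my own illustrative sketch, using made-up impact factors rather than Brian’s data: even when every journal is drawn from the same distribution, the median of three draws bounces around a lot.

```python
import random
import statistics

random.seed(1)

# Hypothetical pool of impact factors, all from one distribution --
# i.e., no real difference between "accepting" and "rejecting" journals.
pool = [random.lognormvariate(0.2, 0.5) for _ in range(1000)]

# Repeatedly sample 3 journals and record the group median.
medians = [statistics.median(random.sample(pool, 3)) for _ in range(10_000)]

print(f"pool median:          {statistics.median(pool):.2f}")
print(f"n=3 medians ranged:   {min(medians):.2f} to {max(medians):.2f}")
print(f"n=3 median std. dev.: {statistics.pstdev(medians):.2f}")
```

With a spread that wide from chance alone, a gap like 0.994 versus 1.3075 says very little on its own.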

What I think the analysis shows is not that Impact Factor is useful, but that making the cut to be included in Web of Knowledge is useful. You might expect this: it’s a peer review system, just for journals instead of articles.

That peer review of journals failed to screen out all journals with inadequate peer review again demonstrates that peer review is imperfect.

This might also be a good time to remember that no journal comes into being with an Impact Factor. New journals will never establish themselves if scientists are waiting to see if Thomson Reuters thinks the journal is important enough to include in Web of Knowledge.

Related posts

Open access or vanity press, the Science “sting” edition
Aiming low

External links

Sting operation demonstrates the value of Journal Impact Factors
The Thomson Reuters journal selection process

Tuesday Crustie: Shimmer

This is a little cladoceran named Ceriodaphnia reticulata, if I’m translating the caption correctly.

Photo by PROYECTO AGUA** /** WATER PROJECT on Flickr; used under a Creative Commons license.

07 October 2013

Should you review for a journal that would never publish your work?

Dr. 24 Hours tweeted:

Asked to review for the #1 theory journal in my field! A journal that will NEVER publish my work.

DrugMonkey said:

Now there’s an interesting issue for discussion.

So I’m taking the bait. Should you review for a journal that would never publish your work?

Reading cynically, these tweets suggest a “What’s in it for me?” way of thinking. The main reason to review a paper (the argument might be) is if you hope to publish there. Maybe by doing a thorough review, the editor will look more favourably on your manuscript than otherwise. This presumes that editors are easily manipulated, and actually remember who you are. Neither is a safe bet.

But a more optimistic reading is that these tweets are not quite about such a blatant “tit for tat” approach. Instead, they raise the question of who you feel a professional obligation to help. You might only feel obliged to people in your immediate field of research. This draws a tight circle around a small tribe.

One problem is that you never know what your “immediate field of research” will be. For example, if you had suggested to me not that long ago that I would have a paper in a parasite journal, I would have given you a very puzzled look, because I had no plans to work in that field. Now, I’m co-organizing a symposium on the subject. Refusing reviews because “They’ll never publish my work” suggests a very limited imagination about where your work might lead you.

Personally, I take a more “social” than “individual” approach. I see peer review as a professional expectation in general. It’s not quite an obligation, because it is still voluntary work. But when you get into this profession, you do so knowing that this is something you’re expected to do in general, not just for your field.

I would say yes, with a few qualifiers. First, and hopefully obviously: you honestly think you have sufficient expertise in the paper’s subject to detect problems that might arise in it.

Second, you are not so overburdened with reviews or other tasks that you can’t get the job done. I don’t get that many review requests. I reviewed six papers in the last academic year, which is high for me.

As an aside, I sometimes wonder if those who are quick to whinge about the “poor peer review” of journals and how there are all these “low quality papers cluttering up the scientific record” are also the first to say they won’t review a paper.

Related posts

Peer review pariah
Pressuring journals you dislike
Read, white and review

Artwork by Gideon Burton on Flickr; used under a Creative Commons license.

04 October 2013

CVs versus résumés

There’s a lot of variation in people’s understanding of how to convey information about their past work experience in academia.

First, there is a big difference between a CV and a résumé (in North America; the terms mean slightly different things in other countries, I’m led to believe). A CV is a comprehensive document of your academic achievements. This means there is no length limit. In contrast, a résumé does have a length limit: usually two pages.

Anyone who tells you to limit your CV to two pages is not talking about an academic CV. They’re talking about creating a résumé.

Different organizations have different expectations. As noted above, there is variation in the terminology in different locations, and not everyone draws a clean distinction between the two. That means it is important to read instructions for particular positions, particularly if you are looking at different cultures, whether they be institutional (going from academic to industrial) or national (North America versus Europe).

But I have yet to hear of any academic institution that wants two-page résumés. Particularly for tenure-track jobs, post-doctoral positions, and the like, I am willing to bet that a two-page résumé would doom an application. For many people seeking entry-level positions, educational background, publications and presentations alone would take up more than two pages. And that would give you no place to talk about your teaching and service.

Because CVs are long, this means that the onus is on you to make it well-organized and typeset correctly. Make sure you put important things (like papers) early, use clear headings for each section, leave wide margins, number the pages, use a proportional typeface, and so on.

While I’m here, let me address another suggestion I’ve seen: “Add a line to your CV every [unit of time].” The problem with this advice is that it treats all “lines” in your CV as equal. They are not. Publications matter most. Adding papers to your CV will help more than almost anything else. Treating a paper as “a line in a CV” on par with a presentation or serving on a committee is short-sighted. Figure out what lines on your CV matter the most, and put an appropriate amount of effort into them.

Because while CVs are long and comprehensive, they aren’t evaluated just by putting them on a scale and seeing whose is heaviest.

Related posts

Is this good advice?
Scanning CVs
Little CV secrets...

03 October 2013

Open access or vanity press, the Science “sting” edition

Earlier this year, I co-moderated a panel at Science Online about open access. To me, it was about the reputation economy in science, and how new journals and models could gain credibility in a changing publishing environment. Today, Science magazine published an article that is being described as a “sting” operation to test whether a fatally flawed paper could get published in open access journals.

The quick reaction from the online community: why wasn’t this investigation extended to subscription-based journals? This is a completely valid criticism. As it ran, this article seems to have had a predetermined end point: to show that open access publication is harmful to science.

I may have more to say about this later, but for now, I’m collecting reactions in the external links below.

Additional, 4 October 2013: Here’s a general problem with this sort of investigation. If you have studied statistics, you know there are two kinds of errors you can make: a “false alarm” (type I error) or a “miss” (type II error).

The Science article, and every case of hoax articles accepted by journals, is only getting at the “misses” of peer review: the crazy papers that should be rejected, but weren’t.

Nobody has a way of investigating the “false alarms” of peer review: decent papers that should have been accepted and published, but were rejected by reviewers or editors. I think everyone has stories of prominent, significant papers that were initially rejected but went on to have a big impact in the field. The issue here is that there is no agreement on what makes a paper acceptable for publication, particularly given that many journals screen for “importance,” which is completely subjective.

If you could do the experiment – “Here is a paper that is completely publishable, competent science. Now let’s see how many journals reject it.” – you might find that the rate of mistaken rejections is far higher than mistaken acceptances.
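To make that asymmetry concrete, here is a toy back-of-the-envelope calculation. Every number in it is an invented assumption for illustration only (the share of publishable submissions and both per-paper error rates), not data from any study:

```python
# Toy model of peer review as a binary classifier, using invented rates.
# Assumptions (not data): 90% of submissions are publishable ("sound"),
# reviewers wrongly reject a sound paper 30% of the time (false alarm),
# and wrongly accept a flawed paper 20% of the time (miss).

submissions = 1000
p_sound = 0.9          # fraction of submissions that are publishable
p_false_alarm = 0.3    # P(reject | sound)  -- type I error
p_miss = 0.2           # P(accept | flawed) -- type II error

sound = submissions * p_sound          # 900 sound papers
flawed = submissions - sound           # 100 flawed papers

mistaken_rejections = sound * p_false_alarm    # good papers turned away
mistaken_acceptances = flawed * p_miss         # bad papers let through

print(mistaken_rejections)   # 270.0
print(mistaken_acceptances)  # 20.0
```

Under these made-up numbers, mistaken rejections outnumber mistaken acceptances more than tenfold, simply because most submissions are assumed to be sound. The point is not the specific figures, but that hoax studies only measure the second, smaller quantity.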

Related posts

Science Online 2013: “Open access or vanity press?” appetizer

External links

Who’s afraid of peer review? (Science magazine’s “sting”)
Live Chat: Exploring the 'Wild West' of Open Access (Recorded 10 October 2013; video of chat still available for review)

Science reporter spoofs hundreds of open access journals with fake papers
I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals
New “sting” of weak open-access journals
2009 reflection on the 2013 Bohannon sting.
Flawed sting operation singles out open access journals
John Bohannon’s peer-review sting against Science
Which is it?
Open access publishing hoax: what Science magazine got wrong
Science Magazine rejects data, publishes anecdote
How embarrassing was the ‘journal sting’ for Science magazine?
Open access “sting” reveals deception, missed opportunities
Who’s afraid of open access?
Academic publishing: Science’s Sokal moment
"Open access spam" and how journals sell scientific reputation
What Science’s “Sting Operation” reveals: open access fiasco or peer review hellhole?
Science Mag sting of OA journals: is it about Open Access or about peer review?
What’s “open” got to do with it?
Heads up new Science is a special issue on scholarly communication
Glam mag fucks up, news at 11
The troubled peer review system, the open access wars, & the blurry line between human subjects research & investigative journalism
On John Bohannon article in Science
What Science — and the Gonzo Scientist — got wrong: open access will make research better
Stones, glass houses, etc.
Predatoromics of science communication
Science gone bad, or the day after the sting
Fake cancer study spotlights bogus science journals
Science Magazine Conducts Sting Operation on OA Publishers: This article notes that at least four websites – I hesitate to call them journals – published the faked paper, against the wishes of the author. Two have since been withdrawn.
A publishing sting, but what was stung?
Science's open access challenge
Unscientific spoof paper accepted by 157 "black sheep" open access journals - but the Bohannon study has severe flaws itself
Critics say sting on open-access journals misses larger point
Hoax reveals ‘Wild West’ of open-access science journals
Fake science journals create a scholarly Wild West for fast money
The Wild West world of open-access journals
What hurts science – rejection of good or acceptance of bad?
It may have been a flawed #OpenAccess "Sting" but WE ROCKED IT so submit to our journal ...
Sting operation demonstrates the value of journal Impact Factors
About Science's open access “sting”
A veritable sting
The open access sting: a missed opportunity?
Science’s straw man sting
“Truthiness” isn’t quite truth, and “sciencey” isn’t quite science, even if published in Science: Mike Taylor’s “Anti-tutorial: how to design and execute a really bad study”
Science magazine’s open-access sting lacks bite
The good, the bad, and the ugly: Open access, peer review, investigative reporting, and pit bulls
Who is afraid of Peer review: Sting Operation of The Science: Some analysis of the metadata
The real peer review: post-publication
The Bohannon “Sting”; Can we trust AAAS/Science or is this PRISM reemerging from the grave?
Who was stung – open access or peer-review?
Why a Harvard scientist wrote a bogus paper and submitted it for publication - CBC Radio interview with Bohannon
Bohannon and Science: bogus articles and PR spin instead of peer review
Peer review quality is independent of open access
Fallout from John Bohannon’s “Who’s afraid of peer review”
Lessons from the faux journal investigation - 15 October 2013
Fallout from Science’s publisher sting: Journal closes in Croatia - Retraction Watch, 17 October 2013
The Science magazine hoax (mBio wasn’t fooled, in case you’re wondering) - mBiosphere, added 18 October 2013
John Bohannon’s Open Access sting paper annoys many, scares the easily scared, accomplishes relatively little - Melville House, added 20 October
DOAJ's response to the recent article in Science entitled “Who’s Afraid of Peer Review?” - Directory of Open Access Journals, added 21 October 2013
Second response to the Bohannon article 2013-10-18 - Directory of Open Access Journals, added 21 October 2013

02 October 2013

Sasquatch paper available, but not through the journal

Just a quickie: Jon Tennant reports that the sasquatch DNA paper I blogged about previously is now available for all. It is not available from the journal DeNovo, but at http://sasquatchgenomeproject.org. If you go to the journal DeNovo, you will see this paper is the only one there, and the journal is still asking for money for it.

Related posts

Sasquatch DNA?

01 October 2013

Tuesday Crustie: Royal

I got asked yesterday if there are any purple crayfish. I didn’t know of any freshwater crayfish that are purple, but I discovered Debelius’ reef lobster (Enoplometopus debelius), which is a marine species that is very crayfish-like. I’d not heard of this genus before, but they all seem to be brightly coloured.

Why are coral reef species always so pretty?

Picture from here.

Comments for second half of September 2013

This blog makes a well-disguised cameo at Innovations in Scholarly Publishing. The topic? Typesetting!