Showing posts with label research. Show all posts

13 July 2020

Who am I citing?

There is research that indicates that women scholars are cited less than men. There is research that indicates that Black and brown scholars are cited less than white ones. So I see, and am sympathetic to, calls for people to check who they are citing. Citing only white guys perhaps means you are not capturing the full range of scholarship that exists.

This is harder than it sounds.

Star Trek title card showing "Written by D.C. Fontana" (The "D" was for "Dorothy".)
Some authors want to obscure their gender or background, sometimes to reduce bias. So they use only their initials.

Some journals show only author initials, particularly in the references.

Some names are used by both genders. Names like Terry, Kelly, Zen...

Once you get past your own language and culture, trying to work out gender from the name alone becomes much more difficult. I wonder how well most English speakers would do at guessing the gender associated with Chinese or Indian names.

So when someone asks, “What percent of your reference list in your papers are white men?”, my answer is, “I don’t know.”

I am not sure what the solution here is.

In theory, this kind of demographic data might be registered by ORCID. Eventually, I could imagine a system where you downloaded ORCID into a citation manager, which could then do an analysis on a reference list. Or you could have a plug-in or webpage that did that. But ORCID currently doesn’t capture anything like that. I don’t think any academic database does.
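To make the idea concrete, here is a purely hypothetical sketch (ORCID records nothing like this today, and every ID and label below is invented) of how a citation manager plug-in might tally self-reported demographic data for a reference list:

```python
from collections import Counter

def tally_reference_list(author_ids, registry):
    """Count self-reported demographic labels for the authors in a reference list.

    author_ids: iterable of author identifiers.
    registry: maps an identifier to a self-reported label, or None if undisclosed.
    Authors who are missing or declined to disclose are counted as "unknown".
    """
    return Counter(registry.get(author_id) or "unknown" for author_id in author_ids)

# Invented registry entries; one author declined to disclose (None).
registry = {"0000-0001": "woman", "0000-0002": None, "0000-0003": "man"}
counts = tally_reference_list(
    ["0000-0001", "0000-0002", "0000-0003", "0000-0004"], registry
)
# "0000-0004" is not in the registry at all, so it also lands in "unknown".
```

The point of the sketch is how little is needed on the tooling side; the hard part is that no registry actually captures this, and (as the update below argues) maybe none should.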

Otherwise, the only answer I can think of is doing a lot of googling, which will probably not lead to definitive answers in many cases.

Update, 14 July 2020: Thanks to Beth Lapour for alerting me to this work. This paper examines citation bias in neuroscience journals. Excerpt from the abstract:

Using data from five top neuroscience journals, we find that reference lists tend to include more papers with men as first and last author than would be expected if gender were unrelated to referencing. Importantly, we show that this imbalance is driven largely by the citation practices of men and is increasing over time as the field diversifies.

They used a couple of automated techniques to try to determine the gender of the authors. Using two databases, they assigned an author as male or female if their confidence was 70% or better. One was an R stats package. I seem to recall reading criticisms of this package on Twitter, but can’t find them now.

They failed to assign gender for 12% of authors: 7% because there wasn’t high enough confidence by their criteria, and 5% because no author name was available for the paper. I’m not sure what the latter group could be. Unsigned editorials, maybe?

They then tried to find an independent way to check the accuracy for the 88% of authors they assigned a gender. They did this by sampling 200 authors and Google stalking them for pronoun use. And according to that, their algorithmic assignment was about 96% accurate.
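As a rough sketch of the kind of threshold rule they describe (the probability table here is invented for illustration; this is not the paper's actual code or databases):

```python
def assign_gender(first_name, prob_table, threshold=0.70):
    """Return 'woman', 'man', or None when confidence falls below the threshold.

    prob_table maps a lowercased first name to an estimated P(woman)
    from some name database (values here are made up).
    """
    p_woman = prob_table.get(first_name.lower())
    if p_woman is None:
        return None                      # name not in the database
    if p_woman >= threshold:
        return "woman"
    if (1 - p_woman) >= threshold:
        return "man"
    return None                          # ambiguous names like "Terry" or "Kelly"

table = {"dorothy": 0.99, "terry": 0.55, "james": 0.02}
assert assign_gender("Dorothy", table) == "woman"
assert assign_gender("Terry", table) is None     # 55% is below the 70% cutoff
assert assign_gender("James", table) == "man"
```

The threshold is doing a lot of work here: raise it and more authors fall into the unassigned bin; lower it and the error rate climbs. And, as the Rivas update below notes, even a "high confidence" assignment is still only a guess.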

So according to this, the problem is currently small. But this is just a snapshot of one field. I wonder if the difficulty will get larger or smaller over time for reasons mentioned in the main post.

Update, 28 July 2020: Marie Rivas articulates some of the reasons I am uncomfortable with using software to assess gender for research purposes: 

1. You cannot “identify” or “verify” gender on behalf of someone else; you can only guess.
2. Guessing gender is often inaccurate, offensive, and exceptionally harmful.
3. You don’t actually need to know people’s genders for most use cases; and if you really must know, just ask.

Reference

Dworkin JD, Linn KA, Teich EG, Zurn P, Shinohara RT, Bassett DS. 2020. The extent and drivers of gender imbalance in neuroscience reference lists. Nature Neuroscience: in press. https://doi.org/10.1038/s41593-020-0658-y

07 July 2020

Notes from a pandemic: You go into lockdown with the data you have, not the data you want

The Better Posters book is still in press. I am getting periodic updates from the publisher, which is exciting.

I am waiting on reviews for one project.

I am writing another big project, and I have many thousands of words down for it already.

So these are good. But I have no idea when I will be able to collect data again, and it’s kind of getting to me sometimes.

I was preparing for a journal club presentation about snapping shrimp. Which are awesome beasts. They make sounds louder than a gunshot, louder than a rocket launch, with their claws. The journal club talk isn’t even about claws, but I wanted to mention how they work because it’s tangentially relevant to the main part of the talk.

I came across this paper about the evolution of the snap. It’s comparative, got behaviour, 3-D modelling of the claws, biomechanics, a lot of the kind of stuff I was doing more back in grad school. It’s amazing work.

And I feel sad. I want to do stuff like it. I want to do good, original science.

30 March 2020

Notes from a pandemic: Scholarship stoppages

Among academics, there is a particular anxiety about being stuck at home. It’s a touchy subject. Productivity and overwork is always a sore spot in academia.

A lot of people are saying, “Academics’ cult-like worship of productivity is insane. It’s unreasonable to expect to be productive in a global crisis.”

And some people are pushing back, saying, “I need routine and work helps take my mind off that we are in the middle of a global crisis. Leave me alone.”

Whatever people’s personal feelings about continuing to do academic work, a lot of people are asking, “What are my university’s expectations about research?”

A couple of administrators and some scientists have basically said, “Keep getting data so we can keep papers and grants coming.” There’s a real worry about how this compromises social distancing and the safety of researchers. And it’s probably stressful for a lot of people to be told, “Keep working like nothing has changed.”

Most institutions have said that the fact that this has been a “pandemic year” will be taken into consideration at annual review time.

Some institutions have said they are going to “stop the clock” for tenure, which relieves some people but scares others. A year delay in review means a year delay in promotion and the raise usually associated with tenure.

Graph of faculty salaries in Texas. Professors $119,080, associate professors $89,782, assistant professors $81,250.

Looking at data from Texas, the average salary increase between assistant and associate professor is over $8,500. Because that baseline is often used for various kinds of salary adjustments, the hit to someone’s lifetime earnings is much more than $8,500 for that one year.
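To illustrate the compounding, here is a back-of-envelope sketch using the Texas averages above. The 2% annual adjustment and 20-year horizon are my assumptions for illustration, not data:

```python
# Average Texas salaries from the graph above.
assistant, associate = 81_250, 89_782
raise_at_tenure = associate - assistant          # $8,532 for one year

annual_adjustment = 0.02   # assumed percentage raise applied to the current base
years_remaining = 20       # assumed years left in the career

lost = 0.0
for year in range(years_remaining):
    on_time = associate * (1 + annual_adjustment) ** year
    # A one-year tenure delay: assistant pay in year 0, then the whole
    # associate-professor trajectory shifted back by one year.
    delayed = assistant if year == 0 else associate * (1 + annual_adjustment) ** (year - 1)
    lost += on_time - delayed

# Because later percentage raises compound on the higher base, the total
# shortfall ends up several times the single $8,532 raise.
```

Under these assumptions the cumulative loss is on the order of tens of thousands of dollars, which is why "stop the clock" is not a neutral offer for everyone.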

I bring all this up because I encountered an unexpected obstacle to continuing with my academic scholarship. I have a project where I have all the data. I am starting to write up a manuscript about it, so I need to read the prior work. I find a book that looks highly relevant to the topic, and my library has it on the shelves. Excellent.

That’s when I discover the library isn’t lending out its books.

And even though the library is nominally open, the stacks are closed, so I can’t even go in and read the volume in the library itself.

That’s a little obstacle I should have expected to writing up articles, but didn’t.

There are going to be tons of obstacles, large and small, even for someone like me, who is not particularly affected by social distancing measures and working from home.

25 March 2020

Notes from a pandemic: “Research that actually matters”

On Monday, a now deleted tweet from Andrew Timming said something along the lines of, “This crisis is a wake-up call. COVID-19 shows how much academic research is just castles in the sky. ‘Moving forward, let’s do research that actually matters to the world’.”

Andrew has deleted the tweet, so I can’t confirm the exact wording. That last part – “research that actually matters to the world” – is an exact quote. I don’t think Andrew deserves hate, which he says he received, but I do think his comment deserves commentary. Maybe even critical commentary.

I get the sentiment. I do. In times of crisis, a lot of people feel useless.

Animated cave painting of mammoth hunt

Timming was making a variation of an old, long-running argument about “basic versus applied” research. Now, I’ve heard a lot of retorts to this. I like, “If we only ever did applied research, all we’d have would be better mammoth traps.”

According to (probably untrue) legend, a politician once asked Michael Faraday what good electricity was.

There are two versions of the story of Faraday’s reply.

  1. “One might as well ask what good is a newborn baby.”
  2. “One day, sir, you may tax it.”
(I like the second one.)

But the next day, I was listening to Maddie Sofia interviewing Ed Yong on Short Wave. The interview shows how the COVID-19 pandemic itself demonstrates the problem of focusing research on what “actually matters in the world.” (My emphasis.)

SOFIA: So one thing that I found really interesting in your article was the state of coronavirus research in general and how that plays into how prepared we are right now. Like, this is a big group of viruses that cause a decent bit of disease throughout the world. But one researcher you talked to said that until recently, not that many people were studying coronaviruses.

YONG: Right. So a very small group of people - maybe, you know, several dozens of researchers - have focused on coronaviruses for a few decades now. But it really has been a very, very niche field, even among virologists. When SARS classic first emerged, I think coronavirus researchers were really shocked that the things that they were studying were suddenly of public health importance.

SOFIA: Right.

YONG: And they are even more flabbergasted now.

SOFIA: And so because of that - because even after SARS, there wasn’t a huge uptake in how many people were studying this, we don’t necessarily have surveillance networks in place for coronavirus like we do for the flu.

YONG: Right. A lot of our preparedness measures in general have been focused on flu as the most likely next pandemic - and for good reason - because flu actually is the most likely next pandemic. It just so happened that this time, it was a coronavirus. And we don’t have surveillance for coronaviruses. We know, actually, surprisingly little about coronavirus biology. And all of those deficiencies have contributed to this dire situation that we’re facing when we don't know enough but we're forced to act as quickly as possible.

Arguably, the situation we now find ourselves in with the COVID-19 pandemic is not despite the view that researchers should do work “that actually matters to the world,” it’s because of it.

From a rational assessment of risk, need, whatever, I’m sure people argued in grant agencies that we should not invest much money and resources in coronavirus research. The best estimates were that coronaviruses didn’t pose much of a threat, so we should put that money into influenza or something else.

This isn’t even the first time we’ve seen this happen in the last decade.

Remember when people were freaking out about zika? (I know, it seems like something that we read about in history books instead of only four years ago, in 2016.) The CDC director tweeted this picture of every paper about the zika virus published in the world to that point.

Short stack of scientific papers

It was a pretty short freakin’ stack of paper. And the headline was that scientists were caught “flat-footed.”

I’m sure that on September 10, 2001, there would have been a lot of people in the US arguing that universities should think about shuttering programs in, say, contemporary Islamic thought or Arabic language studies.

Movie poster for "Meteor" (1979)

If we discovered a comet, asteroid, or meteor on a collision course with Earth tomorrow (and given how 2020 is going, I feel like we should be watching the skies more), the headline would probably again be that scientists were caught flat-footed. Even though people have known this is a possibility for decades.

Hell, Hollywood knew this well enough to make a movie about it in 1979. And Sean Connery disaster from space movies are the best disaster from space movies. (Don’t @ me, Armageddon and Deep Impact viewers.)

Things are only irrelevant until they’re not. And then people complain, “Why wasn’t anyone studying this?!” Society pretty much told us not to. Society told us that we weren’t doing research that “actually matters.”

External links

Why is the coronavirus so good at spreading?
One tweet that shows how the Zika virus caught scientists flat-footed

04 December 2017

End of a project


Eight years and three months ago, I started a project to accommodate Jessica Murph’s request to do fieldwork (she was a student in the NSF REU program I ran then). It was a simple project to try to figure out some basic biology of the local sand crab species, Lepidopa benedicti.

Jessica finished her year in the program, and I kept going. And going.

Along the way, the project yielded three papers (Murph and Faulkes 2013, Faulkes 2014, 2017). The last paper covered this project from 2011 to the end of 2015, and I have since gathered two more years of data, making it seven calendar years of continuous monthly samples.

It’s a project where I genuinely felt I learned a lot. There was, at the start of this project, very little known about any species of this family. This project was a good first step in understanding the natural history not just of L. benedicti, but the family. And I found a species that had never been documented in the area before.

There were times when things got crazy when I could just think to myself, “I have to go to the beach.” They were good opportunities to decompress.

That project came to a close for the foreseeable future yesterday.

Posting here has been slow this semester, because I have been busy with stuff that I didn’t want to blog about. It’s good stuff, not bad! I have some big plans that start early next year that I am very excited about.

But for every door that opens, one closes. These projects will be taking me away from South Texas, and I’m not going to be able to visit my field site for a while. I can’t go collect and measure “my” sand crabs.

I’ve had other projects that have ended before, but I can’t think of another that ran so long. It’s tough knowing that I still have questions that I will only be able to answer by collecting, and not knowing if or when I might be able to pick up the project again. Even if I do, I won’t have the bragging rights of a nice, continuous record.

On the plus side, I do still have two more years of field data in the can that I can analyze. I hope that I might be able to squeeze one more paper out of this project.

But I’m still a little sad.

References

Faulkes Z. 2014. A new southern record for a sand crab, Lepidopa websteri Benedict, 1903 (Decapoda, Albuneidae). Crustaceana 87(7): 881-885. https://doi.org/10.1163/15685403-00003326

Faulkes Z. 2017. The phenology of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae). Journal of Coastal Research 33(5): 1095-1101. https://doi.org/10.2112/JCOASTRES-D-16-00125.1

Murph JH, Faulkes Z. 2013. Abundance and size of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae), in South Texas. The Southwestern Naturalist 58(4): 431-434. https://doi.org/10.1894/0038-4909-58.4.431

Photo by Karren Faulkes. Thanks, mom.

13 July 2017

Five years for seven points of data

I was very excited yesterday. I got to add another data point to this graph:


It’s taken me five years to get those seven data points. Five. Years.

It’s not for lack of trying. Each data point depends on me catching a rare event. There’s a limited amount I can do to try to catch those rare events, so this graph is building up slowly. It’s not quite a pitch-drop experiment, but I am seriously wondering if I am ever going to have enough data that I will feel confident about publishing it.
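One way to see why the slog is so long (my own back-of-envelope framing, not from the post): if the rare events arrive roughly at random, the data accumulate like a Poisson process, and the expected wait scales linearly with the target sample size.

```python
# Observed accumulation rate so far.
points, years = 7, 5
rate = points / years                 # 1.4 observations per year

# Hypothetical sample size that might feel publishable (my assumption).
target = 20
expected_wait = target / rate         # roughly 14 years of sampling, in total
```

At that rate, even a modest target sample size puts publication a decade or more out, which is the pitch-drop feeling exactly.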

I share this because there are a lot of people fretting about the speed of science these days. People want fast review and fast publication. Some are turning to pre-prints for greater speed. But sometimes, try as we might, some questions force you to take a long, slow slog to get to the answer.

10 February 2017

Staying active in the lab and/or field when you’re the boss

For many scientists, there comes a point in their careers where they are not collecting their own data. They supervise students, and the students collect the data, leaving the senior scientist (or, to use grant-speak I hate, the principal investigator or “PI”) to write grant proposals and help draft papers.

I’m a beleiver that senior scientists should have at least one project of their own. One project where they collecting their own data and write it up themselves as first author. I know that this is overly optimistic, and not a lot of people can do this. But even if you don’t have your own project, it’s still valuable to be in the field or in the lab doing something.

I’ve been thinking about this a lot this week. I’ve been spending quite a bit of time in the lab collecting data. I’m quite excited by the small amount of data I have so far.

But the project I’m collecting data for started as an incidental observation last summer. I was helping one of my students on a project, and noticed something interesting. Just happenstance while we were looking at something mostly unrelated.

That incidental observation last summer is probably going to yield at least one paper.

No matter how good and dedicated students are, the likelihood that any of them would have noticed what I noticed, and recognized it as interesting, is low.

There are benefits to having experienced observers, and that’s almost always the PI. You transition from lab bench to office desk at the peril of missing some cool stuff.

08 August 2016

Research is more than bench work and field work

I sometimes get emails from undergraduate students here asking if there are research opportunities with me. Partly because I have some bottlenecks in my lab (microscopes are a limiting factor), I can’t have a lot of students in my lab.

I’ve started offering them research opportunities to do data extraction or analysis, rather than data collection. I ask them to extract data from websites or journal articles and get it into an analyzable form. For example, getting latitude and longitude coordinates for species locations in the literature. Or compiling weather data.
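As an example of the kind of extraction task involved (the record format here is assumed for illustration), a few lines of Python can turn degree and minute coordinates scraped from papers into decimal degrees ready for analysis:

```python
import re

# Match records like 26°04'N or 97°09'W: degrees, minutes, hemisphere.
PATTERN = re.compile(r"(\d+)°(\d+)'([NSEW])")

def parse_record(text):
    """Return all coordinates in a string as signed decimal degrees.

    Southern and western hemispheres are negative, by the usual convention.
    """
    values = []
    for degrees, minutes, hemisphere in PATTERN.findall(text):
        value = int(degrees) + int(minutes) / 60
        if hemisphere in "SW":
            value = -value
        values.append(value)
    return values

lat, lon = parse_record("26°04'N, 97°09'W")
# lat is about 26.07, lon about -97.15
```

It is not glamorous, but it is exactly the kind of reusable skill the rest of this post is about.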

I never hear from those students again.

I can only speculate as to why they never follow up. But, at a guess, I think they don’t consider working with spreadsheets “real” research. For them, “real” research means having a lab coat on and a pipette in hand, or getting a sunburn out in the field with a notebook in hand.

Students are shortchanging themselves.

First, I suspect that compiling and analyzing data from another source may be more likely to result in the student getting their name on a publication than bench or field work is.

Second, extracting existing data and putting it into a form that can be analyzed is far, far more likely to be a skill that these students will use throughout their professional careers. Lots of professional-level jobs require working with spreadsheets; very few require running gels.

22 January 2016

How I’ve parasitised research

Michael Hoffman did the research world a favour by highlighting a new editorial in the New England Journal of Medicine about data re-use.

The aerial view of the concept of data sharing is beautiful. ... However... There is concern among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites.”

Oh boy.

As a biologist, I know that parasitism is one of the most successful strategies for living in the world, and that parasites play an integral role in ecosystems. But I’m not so much a biologist that I fail to recognize that those who re-use other people’s data – like me – are being disparaged.

Following Longo and Drazen, I guess creating GenBank was totally the wrong thing to do.

I became convinced of the usefulness of data sharing when I started collaborating with Paty Feria, modelling the distribution of crayfish species. About the same time, I was starting to work on the ecology of sand crabs. Both projects required using other people’s published geographic data. I spent a long time pulling out distribution records from published papers.

Without that geographic data, we couldn’t have created the new predictive models for distribution (Feria and Faulkes 2011, Faulkes et al. 2012). Those models were considered in this risk assessment for marbled crayfish, which demonstrates that, at some level, people found those new analyses useful.

While not critical to the analyses, geographic data was critical in creating maps that allowed me to show the context of a range extension (Faulkes 2014). I couldn’t really prove it was an extension without that.

Because of my experiences in creating those papers, I’ve put effort into archiving my own data, usually on Figshare. My record isn’t perfect, but I hope it might be useful to someone else.

There are a few (very few) defenses of the Longo and Drazen piece. First, they are trying to show an example of good collaboration, where everyone was happy. That could be useful, if they had stripped out the potshots about “parasites.”

Second, they are talking about medical research, where patient consent and privacy are ongoing, real concerns that shouldn’t be swept under the table. Remember the issues around sequencing the DNA of HeLa cells, when people pointed out, “Hey, the woman those cells came from still has immediate family, and posting those cells’ DNA sequences could violate their medical privacy.”

But Longo and Drazen don’t frame it that way. Instead, they frame the problem as one in which researchers could suffer embarrassment or career impediments because someone else used their data.

The first concern is that someone not involved in the generation and collection of the data may not understand the choices made in defining the parameters.

Someone might misunderstand what I was doing and I could be embarrassed.

(S)tealing from the research productivity planned by the data gatherers...

Someone could publish before me.

(E)ven use the data to try to disprove what the original investigators had posited.

Someone might show I was wrong and I could be embarrassed.

I understand wanting to protect your reputation and advance your career. But if your reputation and career can’t stand up to someone else using your data, it’s not a very strong career to start with.

Update, 25 January 2016: Co-author Drazen wouldn’t comment on the use of the word “parasite” when asked about it by a journalist. But Drazen has penned a response.

References

Faulkes Z. 2014. A new southern record for a sand crab, Lepidopa websteri Benedict, 1903 (Decapoda, Albuneidae). Crustaceana 87(7): 881-885. https://doi.org/10.1163/15685403-00003326

Faulkes Z, Feria TP, Muñoz J. 2012. Do Marmorkrebs, Procambarus fallax f. virginalis, threaten freshwater Japanese ecosystems? Aquatic Biosystems 8: 13. https://doi.org/10.1186/2046-9063-8-13

Feria TP, Faulkes Z. 2011. Forecasting the distribution of Marmorkrebs, a parthenogenetic crayfish with high invasive potential, in Madagascar, Europe, and North America. Aquatic Invasions 6(1): 55-67. https://doi.org/10.3391/ai.2011.6.1.07

Gewin V. 2016. Data sharing: An open mind on open data. Nature 529: 117–119. https://doi.org/10.1038/nj7584-117a

Longo DL, Drazen JM. 2016. Data sharing. New England Journal of Medicine 374: 276-277. https://doi.org/10.1056/NEJMe1516564

External links

Attack of the research parasites
I am a research parasite. Got a problem with that?
Priggish NEJM Editorial on Data-sharing Misses the Point it Almost Made
A fundamental difference of opinion
NEJM Calls Data Scientists 'Parasites.' Can Joe Biden Change Their Minds?
Criticism of ‘research parasites’ moves NEJM in the wrong direction
On research parasites and internet mobs - let's try to solve the real problem.

Data Sharing and the Journal

02 December 2015

Ideas are cheap

In tooling around Quora, I see a lot of questions from non-scientists that sort of run like this:

“I have an idea! How do I proclaim it to science?”

I’ve heard that authors get similar things all the time. Someone will approach them and say, “I have this great idea for a book. Why don’t I tell you the idea, you write it, and we’ll split the profits?” To which the writer says, “So... you want me to do all the work, and you take half of the money? Thank you, but no.”

To top it off, when people tell the author their brilliant idea for a book, the idea is usually hackneyed and trite. “A man and a woman in a space ship crash land on a deserted alien planet. Their names are.... wait for it... Adam and Eve. Brilliant, huh?”

The cold reality is scientists will probably think your idea is not worthy of their time or talents. Scientists have ideas of their own that they want to test. They don’t lack for ideas.

This is not a knock against non-scientists having ideas. Scientists have much the same reaction to ideas from other scientists. Most of them are not going to displace the research questions that we already want to answer.

Ideas are cheap and plentiful. Testing them is hard.

That’s not to say that scientists don’t need to have ideas. Far from it. One of the reasons why first authorship of papers is so critical for early career scientists is that middle authorship is associated with being a data collector, not the intellectual driver of the project.

To be a scientist, you need ideas plus willingness to put in the grunt work.

External links

The efficient research hypothesis

24 August 2015

A plea for pluralism in science

When I was heavily involved in L5R, there were recurring arguments between players about the “better” way to play the game. Probably the most common was between players who would use whatever cards they thought would win, and players who were also concerned with the story, who often imposed limitations on what cards they would play: using only cards from a certain faction, using no “evil” cards, and so on.

Wizards of the Coast did research on what people enjoyed about role-playing games to feed into their relaunch of Dungeons & Dragons, and came to a similar conclusion: there are a lot of reasons people play a game.

It took me a while, but in L5R, I came to the conclusion that there is no right way to play the game. What gives one player enjoyment may not give another player enjoyment.

And that’s okay. You shouldn’t denigrate people who play a game differently than you do.

Arguments in science sometimes remind me of those gamer discussions. Some accusations of “You’re doing it wrong!” are more reflective of the critic’s priorities than a wide view of the multiple ways there are to do science.

We need to be very careful about proclaiming particular forms of scholarship “better” than others.

External links

Breakdown of RPG players (Image source)
Whose problem is the reproducibility crisis anyway?
Ponderable

31 July 2015

Connections in my scientific career

If you’ve never seen the television series Connections by James Burke, you are missing out. Whereas most histories of science emphasize a “march of progress,” Burke’s series emphasized contingencies: you couldn’t have this if there hadn’t been that, and how this and that were related was not obvious or predictable. In episode 9, “Countdown,” for instance, Burke connects the divorce of Henry VIII to the invention of television.

I got thinking about this with the publication of my most recent paper, which was a nexus point between a couple of different research projects. I’ve joked with people that I have “science ADD,” but there are relationships between my projects. They just might not be obvious to people who are not me.

As an undergraduate, I worked on a project about walking by octopuses. This got me interested in locomotion, and I looked for a related project for graduate school. This led me to do a doctoral project on sand crab digging.

Sand crabs dig with their legs, so this led me into looking at the leg motor neurons of crustaceans. I’d found a discrepancy between the description of leg motor neurons in spiny lobsters and everything else that had been looked at. I wrote a post-doctoral fellowship proposal to study that, and got it. I went to work with David Macmillan in Australia for a post-doc.

David’s students had some projects on crayfish escape responses going on while I was there. Meanwhile, because spiny lobsters were hard to get and hard to work with, I moved to working with slipper lobsters. I remember standing in David’s office, chatting about trying to get as much use out of the slipper lobsters as possible (they weren’t super cheap), and saying something like, “We’ll do some sections of the abdominal nerve cord, just to look at the giant interneurons and see that they’re there.”

Except they weren’t there.


Discovering that some species were missing a major set of very well-studied neurons was a completely unplanned observation.

That led me to working on the escape response in crustaceans. Because I was seeing substantial differences between species, I thought I needed to see how those neurons developed; take an “evo devo” approach to the problem.

I got very interested in marbled crayfish as a developmental model for the escape neurons after chatting with Steffen Harzsch at the Neuroethology congress. I got some marbled crayfish for my lab, fully intending to start working them up as an experimental model. I started the Marmorkrebs.org website.

While I was thinking about things to post on the Marmorkrebs blog, it became obvious that there were quite a few Marmorkrebs in the pet trade in the U.S. Those crayfish were a potential problem if they got loose. This led me to doing research on the pet trade, and about the same time I started doing species distribution models. All of this led me to be co-author on a forthcoming book on crayfish (out next week!).

I was also looking for a way to get the relatives of marbled crayfish in my lab. That led me to participate in the #SciFund Challenge, which became a scientific experiment in its own right.

Meanwhile, I was still plugging away on the escape response. I’d studied slipper lobsters, spiny lobsters, and had moved on to shrimp. While I was looking at the backfills of the shrimp, I saw things moving in the nervous system. And those moving things were parasites.

Finding parasites in the nervous system of shrimp was another completely unplanned observation. And before you know it, I’m helping Kelly Weinersmith co-organize a whole symposium on the subject at an international conference.

And the sand crabs? I still liked those guys, and recognized that we knew almost zero about most species. So with the incentive of finding a field project for an undergraduate student, I started collecting very basic natural history data for the ecology of the local sand crab species.

So you see, it all makes perfect sense. (Well, most of it does: there are a few papers that don’t fit neatly into that narrative.) But you are not likely to recognize the “this happened because of that” connections by skimming the titles of the papers.

The moral of the story? One is that it’s absolutely worth doing exploratory experiments and keeping your eyes open. I’ve had two findings (interneurons missing in slipper lobsters, parasites in shrimp) that came about not because there were hypothesis driven experiments, but that I got by happenstance, and those opened up whole new lines of research and resulted in multiple papers for me.

What a strange trip it’s been.

05 June 2015

Top ten again for crayfish nociception!

Just a brief moment of celebration as I point out that our latest paper on crayfish nociception is in the top ten most read Biology Open articles for a second straight month!

I’m very pleased.

11 March 2015

“What’s our Nature paper?”

A few years ago, a candidate interviewing for a gig in our department said that he routinely asked people in the lab, “What’s our Nature paper?”

I have severe concerns about chasing after glamour magazine publications. But that question stuck with me. I may hate the way the question is phrased, but I understand what it is asking at a deeper level. It’s asking, “What research will you do that would be important, not just to your narrow field, but to science generally? What research would you do that would change the world?”

And when I thought about my own work, I didn’t have an answer.

It’s a well-known thing in psychology that when things go wrong, we blame external influences. (But when things go right, we credit ourselves.) And it’s increasingly easy to say, “I can’t have a manuscript worth sending to Nature because grants are hard to get these days, my teaching load is too heavy, the best students won’t apply to my program, we don’t have a doctoral program...” and on and on and on.

And then I asked, “What if none of those were problems? What if I had no other commitments, and all the supplies and cash and colleagues I needed? What would be my Nature paper?”

And I still don’t have an answer.

Maybe as you go on in your career, you get complacent. You have a line of research, and you know what the next experiment is for the next few papers. And you don’t stop to ask yourself, “Forget about external limitations, do I have a question or an idea that – if I could answer it well and before anyone else – would shake things up?”

Ambition has a bad rap in science. There are a lot of people with massive egos who are very ambitious. But ambition is not all bad. You can achieve a lot of positive things if you are ambitious. So maybe it’s worth asking yourself, “Am I being ambitious enough?”

Myself, I have the ambition to live 300 years. I will not live 300 years. Maybe I will live one year more. But I have the ambition.
Why will you not have ambition? Why?
Have the greatest ambition possible.
You want to be immortal? Fight to be immortal. Do it.
You want to make the most fantastic art of movie? Try.
If you fail, it’s not important. We need to try.
Alejandro Jodorowsky, Jodorowksy’s Dune (2014)

03 December 2014

“We should celebrate scientists... They are brave.”

Many are sharing this superb New York Times article on the woman who discovered the element francium. There are only just over 100 elements, and some were known to the ancients, so being the discoverer of a chemical element is a rare achievement indeed.

The article has a lot to offer. It’s part biography about one remarkable woman. It’s part indictment of scientific sexism of the past, lax attitudes about safety, and more. But I want to pull out this section near the end, which talks about science generally, which I liked a lot:

We should celebrate scientists not solely for their accomplishments but also for their courage and the tenacity required to discover anything at all. There are brave people out there working right now. They are brave not because they are killing themselves slowly or leaping from airplanes or catching rare tropical diseases, although scientists have done all those things. They are brave because of the intense emotional risks of trying to do something no one has done before by following your own lead. Radiation is a potent allegory for human life. Everything is always, inevitably falling apart; we are all in arrested decay. Our greatest achievements may become at best footnotes; few people remember us; we can’t know what will eventually come of our work.

External links

My Great-Great-Aunt Discovered Francium. And It Killed Her.

26 September 2014

What you need for strong hypotheses

Given how important hypothesis testing is in science, I am continually frustrated by how much trouble students have in making good, strong hypotheses.

When I ask students, even our graduate students, “What’s your hypothesis?”, the answer often starts with something like, “I’m studying,” “I’m looking at,” or, “My question is...” Those are not hypotheses.

Some more advanced students (particularly in proposal seminars) will say, “My hypothesis is that my control group will be different than my experimental group.” Okay, you’ve learned the concept of the null and alternate hypotheses. That’s a useful thing to understand for statistical analyses. I suppose “the experimental group will be different from the control” counts as a hypothesis, but it’s such a weak one.

That’s not hypothesizing, that’s just hoping.

Strong hypotheses have two components:

First, strong hypotheses incorporate some kind of mechanism, whether implicitly or explicitly. A strong hypothesis is based on investigating causal mechanisms. Without that, you have a fishing expedition.

Second, because strong hypotheses are based on assumptions about mechanisms, they make explicit predictions. Do you think the mean of the experimental group will be higher or lower than the control group? Both are valid alternate hypotheses under the “null versus alternate” scheme, but they are not the same. It’s even better if you can predict the magnitude of the difference.
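The difference between a weak, non-directional hypothesis and a strong, directional one can be sketched in a few lines of Python. This is purely illustrative: the numbers are made up, and the `welch_t` helper is just a hand-rolled Welch's t statistic, not any standard package function.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical data. A mechanism-based hypothesis predicts IN ADVANCE
# that the experimental mean will be HIGHER than the control mean,
# not merely "different".
control = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2]
experimental = [5.0, 4.8, 5.3, 4.6, 5.1, 4.9]

def welch_t(a, b):
    """Welch's t statistic; its sign gives the direction of the difference."""
    return (mean(a) - mean(b)) / sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))

t = welch_t(experimental, control)

# A weak hypothesis only asks whether t is far from zero (two-tailed).
# A strong hypothesis predicts t > 0 before the data come in (one-tailed),
# and ideally predicts the approximate magnitude of the difference too.
print(f"t = {t:.2f}, direction matches the prediction: {t > 0}")
```

The point of the sketch is that the directional prediction (`t > 0`) is falsifiable in a way that “the groups will differ” is not: if the experimental mean had come out lower, the strong hypothesis would fail even though a two-tailed test might still be “significant.”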

17 September 2014

Selfish science


I was out on South Padre Island yesterday, making my regular collecting trip for my long term Lepidopa study. I had been worried, because we’ve had a lot of rain lately, and the forecast was for isolated thundershowers. But the day turned out to be flawless.

It wasn’t just the nice weather; I got some useful scientific data, too. I found the first “young of the year” for my sand crabs, arriving as if on cue:



As I was digging and taking in how pleasant it was, I wondered, “Why don’t I have grad students lining up to do this work?” I sometimes get a little down that I have had very few inquiries from students about working with me.

But then, I flipped it. If I had grad students, they would probably be out collecting data instead of me! And I thought that would be a loss for me. I would have missed the satisfaction of getting the data, and the enjoyment of a beautiful day on the beach.

This made me wonder how many other scientists have jobs that they could leave to students or technicians, but keep for themselves, because they enjoy the experience so much.

Maybe this seems selfish, but I think it’s very important that I continue collecting my own data. A lot of mid-career scientists seem to be chained to a desk, writing manuscripts and grants. It’s too easy to get disconnected from the stuff that drew you into science in the first place.

P.S.—Not every collecting trip is anywhere near as physically pleasant as yesterday. Extreme heat, sudden cloudbursts, beach littered with Portuguese men-o-war... some days, the rose loses its bloom.

05 August 2014

Better a deluge than a drought

Another prominent opinion piece is crying again that there is too much low-quality research.

This annoys me so much. It presumes “quality” can be judged immediately and accurately, and that researchers should all be of the same mind about what the “right” research questions to ask are.

I wonder if, in other fields of creative endeavor, people write editorials calling for less work. “People are releasing more songs than ever, but have you noticed that there are still only 52 number one songs on Billboard magazine each year? We haven’t had any increases in number one songs since the 1950s!”

Would anyone ask a musician, “Why don’t you just write hits?” without expecting to get punched in the face? No, because there is some understanding that not everything is going to be a hit.

Let’s look at a few bits of the article here.

Estimates vary wildly, but probably between a quarter and a third of all research papers in the natural sciences go uncited.

“Uncited research” is research that has not been cited yet. This is a great strength of scientific literature: you can go back and look at the old stuff. Some projects have “long tails,” and it’s not possible to know when someone will stumble across something you have created and find out something relevant to what they are doing. For example, Hill (1979) was not cited for almost two decades. Failure? Maybe. But because there is no statute of limitations on when we can cite papers, it eventually was cited. (Yes, I’ve cited it.)

Scientific papers are love letters to the future. We write them in hope that not only will they be useful within the first few years of publication, but that they may be useful to researchers living long after we are not.

Some works will only reach a small audience. That does not automatically make them less worthy, or less influential.

To use the music analogy again:


In 1968, The Velvet Underground were releasing records that very few people bought. But their work lasted, and regularly shows up on “Best of all time” lists.

In comparison, the Grammy winner for Record of the Year in 1968 was “Up, Up and Away.”


It’s a breezy, catchy, even memorable tune, but... I bet it doesn’t show up on many “Best of all time” lists now. I wager not many people would know the name of the band.

This in turn leads to the bane of every scientist's existence: far too many papers to read in far too little time.

Not my bane. It has never been easier for me to find papers that are relevant to my interests, thanks to Google Scholar and similar tools.

One reason is the rise of author-pays open-access publishing. This has the benefit of allowing anyone to read the research in question without paying the publisher, but also has the disadvantage of giving publishers a strong commercial incentive to issue as much content as possible. ...
(S)ubscription business models at least help to concentrate the minds of publishers on the poor souls trying to keep up with their journals.

Elsevier has almost 3,000 journals, Springer has 2,200, and Wiley has 1,500, most of which are subscription journals. That, to me, does not suggest that subscription-based publishers are trying to keep the literature down to a manageable size.

Subscription publishers have incentives to publish more scientific literature, just like open access publishers do. If each journal tends to be profitable, then publishers have an incentive to make more journals. The more journals they can put in their “big deal” packages, the more they might make.

The incentive to publish is not coming from publishers. The incentive to publish comes from administrations, funding agencies, hiring committees, tenure and promotion committees. They all count publications. This seems indisputable. Indeed, the article goes on to admit this:

On one hand funders and employers should encourage scientists to issue smaller numbers of more significant research papers. This could be achieved by placing even greater emphasis on the impact of a researcher's very best work and less on their aggregate activity.

How are we going to evaluate “best” work? Unfortunately, the typical way that “very best work” is evaluated now is the journal Impact Factor (van Dijk et al. 2014). The problems of using Impact Factor to assess individual work are many, to put it mildly (Brembs et al. 2013).

In the end, we get a bait and switch! Instead of what the piece initially calls for (publish less), it ends with a call to publish even more. Now we are supposed to publish data in addition to our papers:

On the other they should require scientists to share all of their results as far as practically possible. But most of these should not appear in the form of traditional scholarly papers, which are too laborious for both the author and the reader to fulfil such a role. Rather, less significant work should be issued in a form that is simple, standardised and easy for computers to index, retrieve, merge and analyse. Humans would interact with them only when looking for aggregated information on very specific topics.

So the issue is filter failure, not information overload.

Whatever the shortcomings of traditional journal articles are, they realize the awesome power of narrative. This is, I think, the reason why scientific journals have never just published the data, as I wrote before:

If science is purely and solely about “the facts,” why do we publish scientific papers at all? Why not just upload methods and datasets? If you have the data and the methods to generate them, isn’t that all you need to assess the “facts” in play?

(T)here is an inherent connection between stories and experimental science: they are both about causes. A satisfying story is built around causal connections. Without those causal connections, you have a series of disconnected events that makes about as much sense as a random inkblot.

If we struggle with too many papers now, we will struggle even more with too many datasets.

Ignorance is a much, much bigger problem than too much knowledge.

References

Brembs B, Button K, Munafò M. 2013. Deep Impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience 7: 291. http://www.frontiersin.org/Journal/Abstract.aspx?s=537&name=human_neuroscience&ART_DOI=10.3389/fnhum.2013.00291

Hill GW. 1979. Biogenic sedimentary structures produced by the mole crab Lepidopa websteri Benedict. Texas Journal of Science 31(1): 43-51.

van Dijk D, Manor O, Carey LB. 2014. Publication metrics and success on the academic job market. Current Biology 24(11): R516-R517. http://www.cell.com/current-biology/abstract/S0960-9822(14)00477-1

Related posts

I’m a pebble in the avalanche
Balkanizing small universities 
Storytelling is dead, long live narrative

External links

Stop the deluge of science research

Photo by Broo_am (Andy B) on Flickr; used under a Creative Commons license.

23 July 2014

Way down south: stumbling across a sand crab (Lepidopa websteri)

“Hey Meera, check this out.”

Not as powerful or concise as Archimedes’s “Eureka!”, but that was what I said at the moment of discovery – exactly two years ago today! – that led to my newest paper.

I have been making regular trips to collect sand crabs on South Padre Island for a few years now (e.g., Murph and Faulkes 2013). A couple of years back, I had a summer intern, Meera (pictured), who was working on the parasites of the local species, Lepidopa benedicti (just out; Joseph and Faulkes 2014). I thought it was important that she see the animals in their native habitat, so she came out to the beach with me to collect.

Somewhere along the way, I turned over a shovel of sand, as I do, and I saw antennae sticking out of the sand.

Even as I reached down, I recognized instantly that this was something unusual. The antennae were way longer than usual. I had it in my hands, and that was when I said, “Hey Meera, check this out.”

I jotted it down in my notebook with three little words: “Super long antennae.”

 
I’d forgotten until I did this post that this animal was the last one of the day before we packed it up and drove back to the main UTPA campus.

We took it back to the lab, and it was getting late in the afternoon. It wasn’t until the next morning that I sat down with a dissecting microscope and looked at the definitive guide to sand crabs, Chris Boyko’s magnificent monograph (Boyko 2002). (In retrospect, this was dumb, because the animal could have died overnight.)

Meera and another student, Karina, were in the lab, and I turned around and announced to them, “New species.”

It wasn’t a species new to science; it was Lepidopa websteri, described in 1903. But it was a new species for the region. I had made a lot of maps of the distribution of sand crab species in the Gulf of Mexico, and I knew that Lepidopa websteri had never been found this far south.

Then I started taking a lot of pictures. It is hard to get good pictures of the entire animal, because the antennae are so long. I’m still not happy with the pictures I got, but they are better than most depictions of this species (more on this in a moment).

Because I knew this species was rarely collected (noted in the Boyko monograph, which I quote in the paper), and I am in a department with several ecologists, I knew that range extensions could be published. The advice I got was that Crustaceana was the logical choice, because they have a history of publishing notes on range extensions.

Given that I found this animal two years ago, why did it take so long for this paper to appear? Unlike another recent sand crab paper of mine (don’t get me started), the time spent getting this one out had nothing to do with the publisher.

I spent a lot of time trying to “add value” to this paper. I was confident that it could be published as a note, but I do like to have something more substantial when possible. I spent months with students and colleagues trying to include two other things in this paper. One addition kept having technical failures, and the data were never clean enough to publish. The second addition was completed, but none of the reviewers liked it. I took that out, turned the longer paper back into a note, and will try to find a home for that in another journal.

Funny story.

Before the paper came out, I did have a chance to talk about this. Back in January 2013, I gave a public talk at the World Birding Center on South Padre Island. I talked about sand crabs in general, and how I found Lepidopa websteri, which had never been recorded on South Padre Island before.

After my talk, I walked around the inside of the Birding Center. I’d visited the Birding Center before, but had mostly walked around outside. Imagine my surprise when I found this on display:



Here I was saying, “Nobody’s found Lepidopa websteri here on South Padre Island before!” and there’s this picture, which must have been there for years. Good to be proven right for a change, given that the last time I’d seen sand crabs on a sign, I pointed out so many errors that it was taken down soon afterwards. I can overlook that these animals don’t build burrows, and there is no solid evidence their antennae act as straws.

I am betting that the artist was working from a preserved specimen, though (click to enlarge)...


When I mentioned on Twitter that this paper was out, someone noted that the paper was paywalled within minutes. True, but I’m pleased that Brill has a much more reasonable publishing and copyright agreement than many other publishers:

The Author may post the post-print version of the Contribution on his/her own personal website free of charge with the appropriate acknowledgment and link to the Brill website. This means the Contribution may be shown exactly as it appears in print.

This means that if you want a PDF of this paper, you can find it at DoctorZen.net!

This paper reminds me of the value of just looking. You never know what you’ll find.

Related posts

Building or beast?
Public talk at World Birding Center next week! 
Tuesday Crustie: Oh no it isn’t

References

Faulkes Z. 2014. A new southern record for a sand crab, Lepidopa websteri Benedict, 1903 (Decapoda, Albuneidae). Crustaceana 87(7): 881-885. http://dx.doi.org/10.1163/15685403-00003326

Joseph M, Faulkes Z. 2014. Nematodes infect, but do not manipulate digging by, sand crabs, Lepidopa benedicti. Integrative and Comparative Biology 54(2): 101-107. http://dx.doi.org/10.1093/icb/icu064

Murph JH, Faulkes Z. 2013. Abundance and size of sand crabs, Lepidopa benedicti (Decapoda: Albuneidae), in South Texas. The Southwestern Naturalist 58(4): 431-434. http://dx.doi.org/10.1894/0038-4909-58.4.431  

18 July 2014

Where are they now? (Veterinary edition)

Back in 2006, I wrote:

In another one of those still-not-quite-sure-how-this happened events of the summer, I spent this afternoon meeting with various people about a summer internship program with high school students. I’ll have a student, Amanda, working with me for six weeks starting next Monday. Not quite sure what I'll have her do yet. Will have to spend some time sorting out project ideas next week.

What happened after that? I came up with a project, Amanda rocked it for six weeks, got data way more interesting than I expected, and we published a paper from it (Flores and Faulkes 2008). It was very cool to have a paper with a co-author who was in high school at the time.

Amanda went off and did one degree at Texas A & M University. She told me that having a publication from her internship was helpful, because it always gave her something that set her apart from the crowd. She stayed at A&M for another degree, and today posted this:


From ascidian intern to practicing vet! Sniff. They grow up so fast... Anyone in the region who needs animal care, please visit Dr. Flores, and tell them Zen sent you.

Related posts

In turn...
Best of times, worst of times, best of times
Personal review of 2008

Reference

Flores AR, Faulkes Z. 2008. Texture preferences of ascidian tadpole larvae during settlement. Marine and Freshwater Behaviour and Physiology 41(3): 155-159. http://dx.doi.org/10.1080/10236240802360914