29 June 2018

The website Rate My Professors is getting rid of its “hotness” rating, which means you won’t see its chili pepper ratings any more.
The idea of getting rid of the “chili pepper” has been floating around for a while, but fellow neuroscientist Beth Ann McLaughlin hit a nerve on Twitter this week. Her tweet drew almost 3,000 retweets, and many professors chimed in to say, “Get rid of this appearance rating.”
And to their credit, the website owners did.
This is a good thing for higher education. Rate My Professors is well known to both faculty and students. I’ve encouraged students to use it, because I have a record of teaching, and people have a right to hear about other students’ experiences. That’s exactly why it matters when a site like that tacitly suggests it’s okay to ogle professors.
It’s nice to have a little good news. And to be reminded that sometimes, faceless corporate websites – and the people behind them – do listen to reason, and can change.
External links
Why The Chili Pepper Needs To Go: Rape Culture And Rate My Professors (2016)
RateMyProfessors.com Is Dropping The "Hotness" Rating After Professors Called It Sexist
I Killed the Chili Pepper on Rate My Professor
RateMyProfessors.com Retires the Sexist and Uncomfortable “Chili Pepper” Rating After Academics Speak Out
RateMyProfessors Removes Hotness Rating
14 June 2018
Another preprint complication
While I knew some journals won’t publish papers that have previously been posted as preprints, I didn’t know that some journals are also picky about which preprint servers they will accept.
Jens Joschinski wrote:
Some journals (well, @ASNAmNat) will not accept papers posted at @PeerJPreprints or other commercial services.
This makes no sense to me. What does the business model of the preprint server have to do with anything regarding later publication?
There’s a list of journal policies. Thanks to Jessica Polka.
But frankly, every little bit of legwork just makes me less inclined to post preprints. I’ll still do it if I think I have some compelling reasons to do so, but doing this routinely as part of my regular publication practices? Maybe not.
11 June 2018
Does bioRxiv have different rules for different scientists?
Last year, I submitted a preprint to bioRxiv. I was underwhelmed by the experience.
But I am a great believer in the saying, “Never try something once and say, ‘It did not work.’” (Sometimes attributed to Thomas Edison, I think.) Over the weekend, I submitted another manuscript that I thought might be a little better suited to preprinting, so after I sent it to the journal, I uploaded it to bioRxiv. Because it was the weekend, it sat until Monday. Today, I received a reply. My preprint was rejected:

bioRxiv is intended for the posting of research papers, not commentaries.
How interesting.
I like that this demonstrates that preprint servers are not a “dumping ground” where anyone can slap up any old thing.
My paper is not a research paper. I don’t deny that. Following that rule, bioRxiv made a perfectly understandable decision.
But the whole reason I thought this new paper might be appropriate to send to bioRxiv was that I had seen papers like “Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication” on the site before. I opened that PDF and looked at it again. There is no “Methods” section. There are no graphs of data. There is no data that I can find at all.
How is that a research paper? And how is that not a commentary? Maybe I’m missing something.
But although the paper above doesn’t have data, what it does have is a lead author who is the former editor-in-chief of Science and the current president of the National Academy of Sciences of the US, Marcia McNutt. The paper was submitted in May 2017, some time after McNutt became president of the National Academy in 2016.
And while she is the only one to have “National Academy of Sciences” listed in the authors’ affiliations, the rest of the author list is nothing to sneeze at. It boasts other people with “famous scientist” credentials, like Nobel laureate and eLife editor Randy Schekman. Most of the authors are involved in big science journals.
One of my criticisms of preprints is that they would make the Matthew Effect in publication worse. People in well-known labs at well-known institutions would receive the lion’s share of attention. Everyone else would just face another expectation with minimal benefits.
But this feels even worse. This feels like there’s one set of rules for the rank and file scientists (“No commentaries!”) and another set of rules for scientists with name recognition (“Why yes, we’d love to have your commentary.”).
I like the idea of preprints, but this is leaving a sour taste in my mouth.
Update, 12 June 2018: The manuscript found a home at a different preprint server, PeerJ Preprints.
Related posts
A pre-print experiment: will anyone notice?
A pre-print experiment, continued
External links
Twitter thread
Transparency in authors' contributions and responsibilities to promote integrity in scientific publication
04 June 2018
Viral video verdict: Crayfish claw cutting complicated
Making the rounds in international news over the last couple of days is a viral video that headlines typically describe this way:
"Heroic crayfish cuts off own claws to escape the pot!"
Crayfish behavior, heat, pain, claws... This is right on target with some of my research. But so far, nobody has called me to break down what is going on in this video.
What the crayfish is doing is probably autotomy, not desperate self-mutilation. Autotomy is reflexive self-amputation at a specialized breakage plane, and crayfish can regenerate a dropped claw over successive moults. A crayfish dropping a claw is not like a person ripping off an arm. But the narrative is so good that nobody cares about the science.
I'm away from my desktop, so it's too hard to write a detailed blog post like I normally would. Instead, I wrote a Twitter thread about it: https://mobile.twitter.com/doctorzen/status/1003645638213623808
03 June 2018
Theory and practice
Years ago, while listening to CBC's Morningside, I heard this description:
"Canada is a country that works in practice, but not in theory. The United States is a country that works in in theory, but not in practice."
I was reminded of this over the weekend reading a thread about data sharing (https://twitter.com/danirabaiotti/status/1002824181145317376). Universal data sharing between scientists is one of those ideas that sounds great in theory. So great that people tend to undervalue how hard it will be to make it work in practice.
Another example I was thinking about recently is post-publication peer review. In theory, it might be nice to have a single centralized site that collects all post-publication comments. In practice, blogs have a pretty good track record of bringing important critical comments to a broader audience.
I see this over and over again when people put forward ideas about how we should do science: the metascience, so to speak. Ideas around publication, peer review, career incentives, statistical analysis. I’ve been guilty of this myself. There are old posts on this blog about open peer review that I still think were fine in theory, but not grounded in practice.
I think we scientists often get very enamoured of solutions that work in theory, and undervalue things that work in practice.