“Aim for the best journal in your field that you think you can get into, as a general rule.” This ranks right up there with Big Bobby Clobber’s hockey advice, “The key to beating the Russians is to score more points than they do.” “Publish in the best journal” (or its sibling, “Strive for excellence”) is incredibly unhelpful for new researchers, because they don’t know the lay of the publishing landscape. They will rightfully ask how to recognize “best” journals. In academia, notions of “best” are often highly subjective and have more to do with tradition than actual data. This tweet led me to this article about how the U.S. News college rankings were created:
When Elfin was first charged with creating a ranking system, he seems to have known that the only believable methodology would be one that confirmed the prejudices of the meritocracy: The schools that the most prestigious journalists and their friends had gone to would have to come out on top. The first time that the staff had drafted up a numerical ranking system to test internally–a formula that, most controversially, awarded points for diversity–a college that Elfin cannot even remember the name of came out on top. He told me: “When you’re picking the most valuable player in baseball and a utility player hitting .220 comes up as the MVP, it’s not right.”
Elfin subsequently removed the first statistician who had created the algorithm and brought in Morse, a statistician with very limited educational reporting experience. Morse rewrote the algorithm and ran it through the computers. Yale came out on top, and Elfin accepted this more persuasive formula. At the time, there was internal debate about whether the methodology was as good as it could be. According to Lucia Solorzano, who helped create the original U.S. News rankings in 1983, worked on the guide until 1988, and now edits Barron’s Best Buys in College Education, “It’s a college guide and the minute you start to have people in charge of it who have little understanding of education, you’re asking for trouble.”
To Elfin, however, who has a Harvard master’s diploma on his wall, there’s a kind of circular logic to it all: The schools that the conventional wisdom of the meritocracy regards as the best, are in fact the best–as confirmed by the methodology, itself conclusively ratified by the presence of the most prestigious schools at the top of the list. In 1997, he told The New York Times: “We’ve produced a list that puts Harvard, Yale and Princeton, in whatever order, at the top. This is a nutty list? Something we pulled out of the sky?”
When people talk about “best” journals, “best” almost always ends up being code for Impact Factor. The article mentions impact factors second:
“Consider impact factors, but don’t obsess over the number. There are many excellent medical and biomedical specialty journals considered top tier in their fields that have relatively low impact factors. Don’t let the impact factor be your only data point when deciding where to send your paper.” This gives me another chance to point to articles about the problems with this measure, like this and this. It’s so flawed that authors should think about it as little as possible.
“Look at the masthead. Are the people listed on the editorial team who you want reading your paper? Do they represent your target readership?” This is deeply unhelpful to new researchers, who probably will not recognize most of the names on editorial boards. Recognizing that network takes time and experience.
“Read the aims and scope. Does the journal’s focus align well with your submission?” Finally, a good piece of advice. I would have put this first, not fourth.
“Do you and/or your university care whether you publish in open-access journals? Some institutions will put a high value on an open-access paper, so don’t underestimate the importance of this preference.” Again, probably unhelpful for early career researchers. Doctoral students and post-docs may well change institutional affiliations, maybe multiple times, so optimizing venue choice for one institution’s open-access preferences has limited long-term value.
“Is your research ready to be published? Do you have a compelling and complete story to tell? While there is a great deal of pressure to publish frequently, don’t slice and dice your research into many small pieces. Consider the least publishable unit, and make sure yours is not too small to be meaningful.” I’m kind of gobsmacked that “Check to see if it’s done” is presented as advice. The notion of a “complete story” is deeply problematic. The data don’t always cooperate and answer a question cleanly. There are many projects I would never have published if I had sat on them until they were as “complete” as I wanted. Here’s a project I sat on for eight years because it wasn’t “complete.”
External links
How to Publish for a Successful Academic Career