In general, people in science get doctoral degrees because they want to do science. But the opportunities to do so after grad school and post-docs have shrunk dramatically.
Karen James wrote about the prospects of being an unsupported scientist. Shortly after, Terry McGlynn talked about the prospect of self-funding research. Both are expressions of a common frustration: more people are getting trained to do science, but after that “training period” is over, there’s less money for research per scientist than there used to be. It’s getting tighter and tighter, with no end in sight.
While I was turning this over in my mind, two questions came to me.
Who is to blame?
The answer, of course, is nobody is to blame. Funding agencies, states, and universities each have their own, often contradictory, sets of goals and incentives. On science social media, most of the talk revolves around the policies of federal funding agencies, neglecting the role of the states, the incentives for institutions, and the fact that some trends occur with no help from funding agencies at all. (For instance, those agency policies don’t seem to account for the growth in master’s degrees.)
The last one – institutional incentives – is not looked at enough. If I were a university president, even knowing about the oversupply problem, I would create more doctoral programs given the chance. There are just too many advantages. You get higher university rankings and more money.
For instance, look at the Carnegie classifications of universities. Their first pass on classifying universities as research (R1, R2, R3) is based on the number of doctoral programs. In my state, the Texas Higher Education Coordinating Board groups universities by the number of graduate programs, number of graduate students, and research expenditures. (The last is another potentially corrosive influence.)
Why are the numbers of graduate programs and students used to measure research output? It assumes that faculty perform no research outside of supervising doctoral students. Why not use the number of publications or scholarly products? Universities collect that data. Heck, I’ve lost count of the number of times my university has asked me for my CV.
Obviously, there are potential pitfalls in systems intended to measure academic productivity. But consider what would change if universities were classified more by what research they put out instead of how many training programs and students they have.
Using the number of doctoral programs as a proxy measurement for the research capacity of a university is like using the Impact Factor as a proxy measurement for the quality of research articles: deeply, if not fatally, flawed for most purposes, but surviving because it’s convenient. The difference is there’s no shortage of researchers, editors, and others writing articles and editorials pointing out the limits of Impact Factors.
Why are there so few solutions suggested to address these problems?
Everyone likes to support “training.” Nobody’s going to get fired for putting money into training, since education is one of those rare areas that pretty much everyone wants to be seen supporting.
People with doctorates have some of the lowest unemployment rates in the U.S. There’s very little measurement of underemployment and no balance sheet for missed opportunities. That makes it a tough sell to convince politicians that people with a Ph.D. are a group in crisis.
Like the weather, everyone talks about Ph.D. oversupply, but nobody knows what to do about it besides dressing for today and hoping it’ll change to something nicer soon.
Karen James’s Twitter rant
Self-funding your research program
Refusing to be measured
In the future, all research will be funded by Taco Bell