You need variation to make decisions. I talked about this yesterday when I was discussing faculty evaluations. Today, I want to examine the consequences of that principle in graduate school applications.
In our department, we require students to take the general Graduate Record Exam (GRE). In many places, the GRE is apparently the most important factor in deciding whether an applicant is accepted.
Now, there are good and valid arguments against standardized tests in general and the GRE in particular.* One argument that people make against using the GRE is that undergraduate GPA should tell you everything you need to know about a prospective grad student’s academic chops.
The problem is that there is less and less variation in undergraduate grades.
I’m sure that most academics in the United States have seen this chart, which shows that a few years ago “A” became the most common grade at American universities.
As grades vary less, they become less informative, and people will stop making decisions based on them. This can only be good news for the standardized testing business, like the GRE: as long as GRE scores vary a lot, people are more likely to use them to make decisions, as flawed and imperfect as those scores may be.
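The link between variation and informativeness is easy to see in a toy simulation. Below is a minimal Python sketch of my own (the model, the numbers, and names like `ability` and `gpa` are all hypothetical, not anything from a real admissions dataset): students get a latent ability score, grades are a noisy function of ability, and grade inflation is modeled as shifting everyone’s grade up and clipping at 4.0.

```python
import random
import statistics

random.seed(0)

# Hypothetical model: each student has a latent "ability" score.
# Grades are a noisy function of ability on a 4.0 scale; grade
# inflation is modeled as shifting every grade up and clipping at 4.0,
# so a large fraction of students pile up at a straight A.

N = 10_000
ability = [random.gauss(0.0, 1.0) for _ in range(N)]

def gpa(a):
    """Noisy mapping from latent ability to a GPA on [0, 4.0]."""
    raw = 3.0 + 0.5 * a + random.gauss(0.0, 0.2)
    return min(max(raw, 0.0), 4.0)

honest = [gpa(a) for a in ability]            # full spread of grades
inflated = [min(g + 0.7, 4.0) for g in honest]  # shifted up, clipped at 4.0

def corr(x, y):
    """Pearson correlation, computed by hand to keep this self-contained."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (statistics.pstdev(x) * statistics.pstdev(y))

print(f"sd of honest GPAs:           {statistics.pstdev(honest):.3f}")
print(f"sd of inflated GPAs:         {statistics.pstdev(inflated):.3f}")
print(f"corr(ability, honest GPA):   {corr(ability, honest):.3f}")
print(f"corr(ability, inflated GPA): {corr(ability, inflated):.3f}")
```

Run this and the inflated grades show both a smaller standard deviation and a weaker correlation with the underlying ability, which is exactly the problem: once everyone has nearly the same GPA, the GPA can no longer separate applicants, and decision makers reach for whatever measure still varies.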
* In our department, GRE scores are just one of several pieces of information we look at, and they are not the most important. I might talk more about our reasons for requiring the GRE some other time.