17 March 2011

Undergrad teaching: When you don’t tell me your wheel is round, I have to reinvent it

Progress happens when people communicate. Otherwise, people waste time re-doing something that has already been done.

Many people are not happy with the state of the teaching of science in universities, particularly the first couple of years of classes. They are often large and can feel impersonal. They are often a mix of people who intend to major in the discipline and those who are taking breadth requirements. And it’s fair to say that they are not always viewed as plum tasks by many faculty members.

This paper not only assesses the effectiveness of many different strategies for teaching undergraduate science classes, but also provides great examples of what not to do when reporting your science.

First, the actual teaching stuff. They found positive effects for all manner of innovations: collaborative learning, conceptually oriented tasks, the use of various kinds of technology, and all manner of permutations and combinations thereof.

The problem is that there is so much smear in the data that it’s hard to zero in on which techniques are the most effective. And they note that the better designed the experiment, the smaller the measured effect. When subjects were randomly assigned (i.e., students got into one of two classes at random), a standard technique in experimental biology, psychology, and elsewhere, the effect size was 0.26. When students were not randomly assigned, but instead self-registered for classes, the reported effect size was bigger: 0.50. We should probably put more confidence in the true experiments, though. After all, random assignment is the level of rigour we consider appropriate for things like clinical drug testing.
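For anyone wondering what those numbers mean in practice, here is a minimal sketch, assuming the effect sizes in the paper are standardized mean differences (Cohen’s d); the exam scores, group sizes, and class labels below are made up purely for illustration.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups, using a pooled SD."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    # Pooled standard deviation from the unbiased (ddof=1) group variances.
    pooled_sd = np.sqrt(
        ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    )
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical scores: a difference of about half a standard deviation (d ~ 0.5)
# sounds impressive, but the two score distributions still overlap a great deal.
rng = np.random.default_rng(0)
innovative_class = rng.normal(75, 10, size=100)   # made-up "innovation" section
traditional_class = rng.normal(70, 10, size=100)  # made-up "traditional" section
print(round(cohens_d(innovative_class, traditional_class), 2))
```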

As for the “don’t do this” aspect of the paper, the authors say that almost half the studies they looked at couldn’t be included in their analysis because of some reporting flaw.

And it’s not hard stuff.

For instance, Ruiz-Primo and colleagues note that many papers report p values, but not basic statistical information like sample sizes, or the actual means and standard deviations. These are not complicated things to calculate or include.
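To underline how little extra work that reporting takes, here is a minimal sketch with made-up scores for two hypothetical class sections: the sample sizes, means, and standard deviations each take one line to compute and report alongside the p value.

```python
import numpy as np
from scipy import stats

# Made-up exam scores for two hypothetical class sections.
section_a = np.array([72, 81, 68, 90, 77, 85, 74, 69, 88, 79], dtype=float)
section_b = np.array([65, 70, 74, 62, 80, 71, 68, 75, 66, 73], dtype=float)

# The p value people usually report...
t_stat, p_value = stats.ttest_ind(section_a, section_b)

# ...and the descriptive statistics that often go missing.
for name, scores in [("A", section_a), ("B", section_b)]:
    print(f"Section {name}: n = {len(scores)}, "
          f"mean = {scores.mean():.1f}, SD = {scores.std(ddof=1):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```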

The authors also slap the journals publishing these papers a little, noting that people have complained about the poor level of science around teaching and education before. Yet, “journals continue publishing these types of papers.”

And the moral of the story? Scientists who are interested in teaching need to start bringing the same level of attention to detail to educational experiments that we bring to our other research subjects.

Reference

Ruiz-Primo M, Briggs D, Iverson H, Talbot R, Shepard L. 2011. Impact of undergraduate science course innovations on learning. Science 331(6022): 1269-1270. DOI: 10.1126/science.1198976

Photo by Thomas Guest on Flickr; used under a Creative Commons license.
