This is an excerpt of a data set from a manuscript I’ve been working on.
The box shows where half the data live, with a line inside the box marking the median. The small black square is the mean, and the small crosses show the highest and lowest data points.
When I submitted the manuscript, I didn’t do any statistical analysis of the data. One reviewer asked me to do a statistical analysis. It was a perfectly reasonable request that I should have anticipated. The reviewer didn’t see the plot above and didn’t know the data as well as I do.
But it got me thinking. John Vokey, one of my undergraduate professors at the University of Lethbridge, used to refer to some differences as “significant by the IOT test.” IOT was an acronym for “Inter Ocular Test.” In other words, the difference was so bloody obvious that it hit you right between the eyes.
“If the mean for one group is up here with only a little variation, and the mean for the other group is down here with this much variation, what do you need a test for? Why not just say they’re different?”
I didn’t do an analysis because I thought there was no point. In the data above, there is no overlap between the two sets at all. Do you need a statistical test to tell you that those two data sets are different?
It is easy enough to do a simple t-test on the data above.
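For anyone who wants to see what that looks like, here is a minimal sketch of a two-sample (Welch’s) t statistic computed with only the Python standard library. The numbers are made-up placeholders with the same character as the plot described above, two small, non-overlapping groups; they are not the manuscript’s actual data. (In practice you would just call something like `scipy.stats.ttest_ind`, which also returns the p value; the standard library has no t-distribution, so only the t statistic is computed here.)

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical stand-in data: two small groups with no overlap,
# NOT the real data set from the manuscript.
group_a = [10.1, 10.4, 9.8, 10.2, 10.0]  # "high" group
group_b = [4.9, 5.3, 5.1, 4.7, 5.0]      # "low" group

def welch_t(a, b):
    """Welch's t statistic (does not assume equal variances)."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(group_a, group_b)
print(f"t = {t:.1f}")
```

With groups this far apart, the t statistic is enormous and the p value would be vanishingly small, which is rather the point: the test only confirms what the plot already shouts.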
But does adding the test and p value tell you anything more, or different, than the plot alone? Or is including the p value a statistical “fig leaf”?
Do your thoughts about analysis change when I plot the raw data next to the box plot?
Now you can see more clearly that the sample size is small. But even then, when there is no overlap in the data sets, is there any test or condition that will say those two are not statistically different?