One of the very first lessons that stuck when I started out on my psychology degree in 2007 was never to suggest in an essay that the results of a psychological experiment either proved or disproved something. This is because psychological researchers always express their results with reference to the probability that their findings might have arisen by chance. It’s common for researchers to treat chance as an unlikely explanation if the calculated probability of that having happened is less than 1 in 20. More precisely, researchers often state that their results are statistically significant if the probability of obtaining results at least as extreme as theirs, assuming the null hypothesis is true, is less than 1 in 20 – the “magic” p < .05.
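A quick simulation makes that definition concrete. This is just an illustrative sketch in plain Python – the coin-flip experiment and the 61-heads figure are invented for the example – but it estimates a one-sided p-value exactly as the definition describes: how often would the null hypothesis (a fair coin) produce a result at least as extreme as the one observed?

```python
import random

def simulated_p_value(observed_heads, n_flips=100, n_trials=100_000):
    """Estimate a one-sided p-value: the probability that a fair coin
    (the null hypothesis) produces a result at least as extreme as the
    one we actually observed."""
    at_least_as_extreme = 0
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if heads >= observed_heads:
            at_least_as_extreme += 1
    return at_least_as_extreme / n_trials

# 61 heads in 100 flips gives p ≈ .018 – under .05, so by convention
# it would be reported as statistically significant.
print(simulated_p_value(observed_heads=61))
```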
This morning, as I sat on a train from Derby to London, studiously avoiding reading more of the personnel selection and assessment module for my master’s, I came across an analysis of the odds at which horses have won flat races over the last 8 years. In short, it concluded nothing surprising – the bookmaker always wins in the long run – but what did catch my eye was that 1,366 horses won races at odds of 20 to 1 or longer. In other words, these were cases where p < .05 (as calculated by the bookmakers) but the horse won anyway. Before anyone points it out in the comments, I do realise that I’m taking a statistical liberty or two by making this comparison …
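The arithmetic behind the comparison is straightforward: fractional odds of 20 to 1 imply a winning probability of 1/(20 + 1) ≈ .048, which sits just under the .05 threshold. A minimal sketch (the list of odds is purely for illustration):

```python
def implied_probability(odds_against):
    """Convert fractional odds (e.g. 20 for '20 to 1') into the
    implied probability of the horse winning."""
    return 1 / (odds_against + 1)

for odds in (5, 10, 20, 33, 50):
    p = implied_probability(odds)
    verdict = "p < .05" if p < 0.05 else "p >= .05"
    print(f"{odds} to 1 -> implied p = {p:.3f} ({verdict})")
```

(In reality a bookmaker’s odds include a profit margin, so the implied probability overstates the horse’s true chance – another of the liberties I’m taking.)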
However, one of the luxuries of horse racing data is that all of it, winners and losers, is available for analysis. This is rarely the case with published research: according to The Economist, negative results account for only 14% of all papers, down from around 30% in 1990. Unlike most media outlets, which seem interested only in reporting bad news, the research community often seems interested only in good news – or at least, in results that back up their research hypotheses.
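To see why the missing negatives matter, here’s a toy simulation – emphatically not a model of any real literature, with every parameter (a 10% base rate of true effects, 80% power, the .05 threshold) invented – of what happens when only significant results get written up:

```python
import random

def published_false_positive_rate(n_studies=100_000, true_effect_rate=0.1,
                                  alpha=0.05, power=0.8):
    """Toy model of the file-drawer effect: a study is only 'published'
    if it comes up significant, so the published record can contain far
    more false positives than the .05 threshold suggests."""
    published = false_positives = 0
    for _ in range(n_studies):
        effect_is_real = random.random() < true_effect_rate
        # A real effect is detected with probability `power`; a null
        # effect comes up significant with probability `alpha`.
        significant = random.random() < (power if effect_is_real else alpha)
        if significant:
            published += 1
            if not effect_is_real:
                false_positives += 1
    return false_positives / published

# With these invented numbers, roughly a third of the "published"
# positive findings are flukes.
print(published_false_positive_rate())
```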
There are plausible explanations for this bias, psychological and otherwise, as the article in The Economist points out. Some of it may even come from our experiences as students. For example, I was constantly surprised by the number of people on my undergraduate psychology degree who thought that a non-significant result in an experiment they were working on meant they had somehow failed.
So now, when I read a research paper that reports a positive result, I’m always interested to see if other researchers have attempted to replicate the study or experiment and what their results and conclusions are. And while the presence or absence of negative results doesn’t necessarily tell the whole story, the absence of similar positive results always makes me rather wary.
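The same toy setup shows why replication is so informative: under these invented assumptions, a genuine effect comes up significant again about four times in five, while a chance finding replicates only about one time in twenty.

```python
import random

def replication_rate(effect_is_real, n_replications=100_000,
                     alpha=0.05, power=0.8):
    """Fraction of replication attempts that come up significant,
    depending on whether the original finding was real or a fluke."""
    hits = sum(random.random() < (power if effect_is_real else alpha)
               for _ in range(n_replications))
    return hits / n_replications

print(f"real effect replicates:    {replication_rate(True):.2f}")   # ~0.80
print(f"chance finding replicates: {replication_rate(False):.2f}")  # ~0.05
```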
I’ve just been doing some revision for my Generalised Linear Models exam – if I had a penny for every time I had to find the ‘p value’ I’d have my student loans paid off by next week haha! Loved the comparison.
Hi Pamela,
Thanks for the comment and I hope the revision (and exam!) goes well. There are many things that I wish I’d been given a penny for every time I had to do them … finding and explaining p values is definitely one of them!