As readers of contemporary psychology journals may well know, you can’t seem to escape something called “Bayesian statistics”. Researchers “in the know” extol Bayesian statistics as a superior method of making inferences from data compared with traditional, “frequentist”, methods (yes, Mr *p*-value, I’m looking at you!).

But what is it? What can it do that standard methods can’t? It turns out the answer is just about everything you’ve ever dreamed of (well, as a researcher, anyway!).

**The Problem**

Your first question might well be “what’s wrong with standard methods of analysis and inference, like the *p*-value?”. As I mentioned in a previous post, there are many problems with the *p*-value. One of the least appreciated problems—by students and faculty alike—is that it doesn’t really answer what we, as researchers, are interested in. Remember the definition of the *p*-value?

The probability of observing results as extreme (or more so) as the ones you have obtained, if the null hypothesis is true.

What’s wrong with this? Well, the *p*-value provides information about the probability of your *data*; hidden in this definition lies the fact that it is also a *conditional probability*—the key part is “…if the null hypothesis is true.” Thus, the *p*-value provides information about your data **assuming the null is true**. Formally, this is written as

p(Data|Hypothesis)

(which reads “probability of data given the null hypothesis”).

At first blush, you might see no problem with this. But hang on a minute…as a researcher, aren’t you interested in the probability of the hypothesis, given the data—that is, p(Hypothesis|Data)?

Isn’t it the same thing? Can we not just use the p-value to infer the probability of our (null) hypothesis? NO!

For example—and I am borrowing these examples from Professor John Kruschke’s insanely good book, which you can find here—imagine you are interested in the weather. What is the probability it is raining if you see clouds [p(rain|clouds)]? Alternatively, if you know that it is raining (because you are soaked!), what is the probability there are clouds [p(clouds|rain)]? Clearly, these are not the same: if it is raining, there are almost certainly clouds overhead, but clouds often pass without a drop of rain!

**Introducing Bayes’ Rule**

Thomas Bayes was a mathematician and cleric who lived in the 18th Century. (I’m not Wikipedia, so if you want more information about him, go there!) He provided a remarkable solution to the problem of “reversing” conditional probabilities: a theorem which allows you to discover p(y|x) if you know p(x|y), p(y), and p(x). In relation to our research interests, this allows us to turn p(Data|Hypothesis) into p(Hypothesis|Data)—just what we want!

Here is his theorem:

p(y|x) = p(x|y) × p(y) / p(x)
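The theorem is simple enough to express as a one-line function. Here is a minimal sketch in Python (a hypothetical helper of my own, not from any particular library), using exact fractions so no rounding creeps in:

```python
from fractions import Fraction

def bayes_rule(p_x_given_y, p_y, p_x):
    """Reverse a conditional probability: p(y|x) = p(x|y) * p(y) / p(x)."""
    return p_x_given_y * p_y / p_x

# Reversing p(Data|Hypothesis) into p(Hypothesis|Data) works the same way,
# provided we also know the marginals p(Hypothesis) and p(Data).
print(bayes_rule(Fraction(1, 4), Fraction(4, 52), Fraction(13, 52)))  # 1/13
```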

**An Example of its Use***

Let’s consider a concrete—but trivial—example, to show that this works. I have a deck of cards in front of me. If I were to pick one at random, and tell you that it was a Queen, what is the probability that this card is also a heart? It’s not too difficult to work out that p(♥|Q) = 1/4, as there are only four suits that a card can take. After replacing the card and shuffling, I draw another card and tell you it is a heart; given this information, what is the probability that the card is a Queen? Again, it’s trivial to work out that p(Q|♥) = 1/13, as a card can have one of 13 values.
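Both of these probabilities can be checked by brute force. The sketch below (my own illustration, not from the original post) enumerates a standard 52-card deck and simply counts:

```python
from fractions import Fraction

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(rank, suit) for rank in ranks for suit in suits]  # 52 cards

queens = [card for card in deck if card[0] == "Q"]
hearts = [card for card in deck if card[1] == "hearts"]
queen_of_hearts = [card for card in deck if card == ("Q", "hearts")]

# p(heart|Q): restrict attention to the 4 queens, count hearts among them
p_heart_given_queen = Fraction(len(queen_of_hearts), len(queens))
# p(Q|heart): restrict attention to the 13 hearts, count queens among them
p_queen_given_heart = Fraction(len(queen_of_hearts), len(hearts))

print(p_heart_given_queen)  # 1/4
print(p_queen_given_heart)  # 1/13
```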

Here is the rub: you now clearly see that p(Q|♥) **DOES NOT** equal p(♥|Q)—and therefore p(D|H) does not equal p(H|D)!

Bayes’ rule allows us to find the relationship between these probabilities.

Let’s pretend we want to know p(Q|♥) (this is where it’s clear this example is too simple, as we have already worked it out, but humour me!). From Bayes’ rule, we know that

p(Q|♥) = p(♥|Q) × p(Q) / p(♥)

so, if we know p(♥|Q), p(Q), and p(♥), we can work out p(Q|♥). Easy! Let’s do it:

**The Solution**

We have already shown that p(♥|Q) = 1/4. We also know—because there are only 4 queens in a pack of 52—that p(Q) = 4/52. Also, as there are only 13 hearts in a pack of 52, we know that p(♥) = 13/52. Let’s plug these into Bayes’ theorem to work out p(Q|♥):

p(Q|♥) = (1/4 × 4/52) / (13/52) = (1/52) / (13/52) = 1/13

Plug away at this equation, and it will tell you that p(Q|♥) = 1/13, which is what we knew it was already! It works!
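To double-check the arithmetic, here is the same calculation as a minimal Python sketch, again using exact fractions:

```python
from fractions import Fraction

p_heart_given_queen = Fraction(1, 4)   # p(heart|Q)
p_queen = Fraction(4, 52)              # 4 queens in 52 cards
p_heart = Fraction(13, 52)             # 13 hearts in 52 cards

# Bayes' rule: p(Q|heart) = p(heart|Q) * p(Q) / p(heart)
p_queen_given_heart = p_heart_given_queen * p_queen / p_heart
print(p_queen_given_heart)  # 1/13
```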

**In Conclusion**

This is obviously a simplified example, but it shows the rationale and process of the mathematics behind Bayes’ theorem. Remember that real life is not so simple, but the principle still remains: we—as researchers—should not be interested in p(Data|Hypothesis). What we are really interested in is p(Hypothesis|Data). This is something the *p*-value just can’t do; Bayesian analysis, on the other hand, does it whilst laughing in its sleep.

As the techniques used to conduct Bayesian analysis become simplified for researchers, we will undoubtedly see more of this method in psychology publications. It is therefore worthwhile to familiarise yourself with the concepts.

Or, if you are like me, just dive head-first into the wonderful world of Bayesian Statistics. Enjoy!

***Note:** How to actually conduct Bayesian analysis is beyond the scope of this blog. For a nice introduction, check out John Kruschke’s book, and also his wonderful website!
