What two things do sex and statistics have in common – apart, that is, from both starting with the letter S? The answer, of course, is firstly that in both, bigger is usually butt (sic) not always better, and secondly that both can be massaged to make things, err, stand out more.

Big Pharma, and often researchers, want to make their findings stand out. Knowing that bigger is better, they naturally lean towards presenting their findings in such a way as to maximise the impact of those findings. Statistics – and human gullibility – allow them to do this with ease, by choosing the method of data presentation that has the greatest impact.

Take the HIV vaccine trial we heard about in the news this week. The headline figure was that taking the vaccine reduced your chances of contracting HIV by 31.2%. Sounds pretty good?

Not really. As vaccines go, a 31.2% reduction in infection rates is pretty feeble (though that’s OK – the study was more about demonstrating that a vaccine is possible than testing a production vaccine). It nonetheless sounds a whole lot better than alternative, more revealing but less flattering, ways of presenting the same data.

The percentage given, 31.2%, is what is known in the trade as a relative risk reduction: the percentage by which the risk of the unwanted outcome falls in the treatment group compared with the control group. Now what does it tell us?

As it happens, not a lot. It just tells us the relative difference between the groups. Being the *relative* difference, it tells us nothing about the *absolute* numbers involved.

Take, for example, two separate trials comparing two different preventative treatments to placebo (sugar pills). Both trials have 1000 subjects, 500 in the treatment group and 500 in the placebo group.

In the first trial, 400 subjects in the placebo group experience the unwanted outcome, while only 200 in the treatment group suffer the negative outcome. In the second trial, the same numbers are four in the placebo group, and two in the treatment group.

In each case, intuitively (and correctly), the risk has been halved. Doing the formal calculation would reveal a 50% relative risk reduction in the treated group compared to the untreated group in both trials – suggesting the two trials and treatments are very similar – which they clearly are not. In the first, 200 subjects benefited; in the second, only two.
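The sums are easy to check. Here is a minimal Python sketch (the function name is my own) that works out the relative risk reduction for both hypothetical trials:

```python
def relative_risk_reduction(events_control, n_control, events_treated, n_treated):
    """The proportional drop in event risk in the treated group
    compared with the control group."""
    risk_control = events_control / n_control
    risk_treated = events_treated / n_treated
    return (risk_control - risk_treated) / risk_control

# Trial 1: 400 of 500 placebo subjects affected vs 200 of 500 treated
print(relative_risk_reduction(400, 500, 200, 500))  # 0.5

# Trial 2: 4 of 500 placebo subjects affected vs 2 of 500 treated
print(relative_risk_reduction(4, 500, 2, 500))      # 0.5
```

Both trials come out at 0.5 – a 50% relative risk reduction – even though one helped 200 people and the other helped two.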

What has happened here is that by expressing the results as a ratio (percentage), we have blotted out the information about the absolute numbers. Four is to two as 400 is to 200 – but that does not mean two is the same as 200 or four is the same as 400. The absolute numbers are wildly different. The studies are not the same at all, despite having identical relative risk reductions.

When the numbers involved are very small, as they often are (as in the second trial), using a relative risk reduction allows researchers to sex up the results by “losing” the small (often embarrassingly small) numbers – and by so doing they mislead us into overvaluing their findings.

To understand the results better, we need some more figures – ideally, the actual study figures. As it happens, we do have (buried in the reports) the absolute numbers for the Thai HIV vaccine trial. 8,197 subjects received the vaccine, while 8,198 received a placebo. Of the vaccinated group, 51 contracted HIV infection, compared to 74 in the placebo group.

These numbers, 51 and 74 out of over 16,000 subjects, are indeed pretty small. So what can we do to present the data in a way which better represents the reality of the study findings?

One way is to look at the absolute, as opposed to relative, risk reduction. This is simply a matter of subtracting the infection risk in the treated group from the infection risk in the untreated group. If we do the sums ((74÷8198) – (51÷8197)), then we get an absolute risk reduction of about 0.0028 – roughly 0.3 percentage points, and hardly the headline-grabbing figure that 31.2% was.
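The subtraction above takes one line of Python, using the trial figures quoted earlier:

```python
# Absolute risk reduction (ARR) for the Thai HIV vaccine trial:
# infection risk in the placebo arm minus infection risk in the vaccine arm.
risk_placebo = 74 / 8198   # 74 infections among 8,198 placebo subjects
risk_vaccine = 51 / 8197   # 51 infections among 8,197 vaccinated subjects
arr = risk_placebo - risk_vaccine
print(f"ARR = {arr:.4f}")  # ARR = 0.0028
```

About 0.0028, or 0.28 percentage points – the vaccine moved the absolute risk of infection by less than three in a thousand.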

Absolute risk reduction can help alert us to the fact that the numbers affected by the outcome of interest – HIV infection in this case – may be small, as indeed they are in this vaccine study. But it is still not a very useful figure to use in practice.

What if, instead of talking about risks, we could determine how many people we would have to treat with “x” (the vaccine) in order to prevent one case of “y” (HIV infection)?

Happily, there is a simple piece of arithmetic that converts an absolute risk reduction into a figure that makes real-world sense. That figure is, unsurprisingly, known as the Number Needed to Treat (NNT), and it is calculated as 1 divided by the absolute risk reduction (1÷ARR).
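Taking the reciprocal can be checked in a couple of lines (the 74÷8198 and 51÷8197 figures are the trial numbers quoted above; rounding the NNT up to the next whole person is the usual convention):

```python
import math

# NNT = 1 / ARR, using the Thai trial's absolute numbers
arr = 74 / 8198 - 51 / 8197
nnt = math.ceil(1 / arr)   # round up: you can't vaccinate a fraction of a person
print(nnt)                 # 357
```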

In this vaccine trial, the NNT comes out as 357 – that is, 357 individuals must be vaccinated to prevent one case of HIV infection. Somehow that 31.2% doesn’t seem so sexy after all.

The bottom line? Never take a relative risk reduction figure (“Amazing new vaccine slashes HIV infection by more than 30%”) on its own at face value. You need to know the numbers behind that 30%. Then you can put things in perspective.