Introduction to Bayesian reasoning and hypothesis testing

2024 February 18
math

In the covid origins debate, Bayesian computations featured prominently as both sides argued their cases at least in part using the language of Bayes factors and Bayesian updating.

The most important question I asked at the debate is whether Bayesian reasoning is a valid approach for resolving questions; specifically, whether it is possible to get the wrong conclusion at the end of a Bayesian argument even if all the numbers that went into your computation are correct. (For more on this topic, see section 3 of my report (pdf).)

However, before we can understand the potential weaknesses of Bayesian reasoning or how it can go wrong, we need to understand what it is and how to compute Bayesian updates, which is the goal of this article; while we are here, we also discuss the closely related and famously confusing method of hypothesis testing and p-values. There is a common sentiment that Bayesian reasoning is the “right” way to make deductions, or that Bayesian reasoning is “better” than hypothesis testing; this is the same kind of category error as a statement like “cheese is tastier than food”. We will discuss how they relate to each other and why primary science research almost always uses hypothesis testing.

(If you are uninterested in hypothesis testing, you can skip to the third section.)

Logical deduction

We are all familiar with the most basic step in logical deduction, modus ponens: if A is true, and A implies B, then B is true.1Taking what is intuitively obvious and dressing it up in formal language may seem counterproductive when working with toy examples, but it has the advantage of making the statements amenable to algebraic manipulation. Algebraic manipulation of formal symbols is much faster and more reliable when the task at hand is too large to hold in one’s head. Logical deduction is useful because it allows for chaining: if A implies B, and B implies C, then A implies C.

The next most basic form of logical deduction is that A implies B is equivalent to its contrapositive, \neg B (i.e., “not B”) implies \neg A. I suspect that some people find this less intuitive because it does not work especially well under uncertainty in the real world. Suppose, for example, we want to test the claim that all ravens are black. We could build evidence for this claim by observing ravens, and verifying that they are black. However that sounds like a lot of work, so how about an easier plan: observe that “all ravens are black” is logically equivalent to “all non-black things are not ravens”. We could build evidence for this completely equivalent claim by observing non-black things, and verifying they are not ravens! And, we can do that without even having to go outside and risk encountering an actual raven (hence this is sometimes referred to as “indoor ornithology”).

This is only the beginning of our difficulties as we stray further from pure mathematics. In the real world, we can never be fully certain of any statement, so the preconditions of modus ponens will never apply. The most common resolution is to augment each statement with a probability, representing our confidence in its truth. What if we try logical deduction now?

Probabilistic deduction and hypothesis testing

Attempting to perform deduction of probably-true statements in the same way that we did above immediately falls on its face. The root issue is that it is impossible to perform inference along chains: if A implies probably-B, and B implies probably-C, we cannot conclude that A implies probably-C! (Similarly, probably-A and A implies probably-B does not imply probably-B.2In the terminology of functional programming, this is to say “probably-” is not a monad, because for a monad m you can combine m a and a \to m b to make m b (this is the “bind” operation), and you can combine a \to m b and b \to m c to make a \to m c (this is lifted bind). Equivalently, if “probably-” were a monad, then “join” would be m m a \to m a, which is to say probably-probably-A implies probably-A, which is false. The failure of the monadic laws is what makes probabilistic inference like this useless.) If we were to chain together probably-true inferences, our uncertainty compounds until we are left with nothing.

To make discussion easier, let us write {\sim} A to mean “A is probably true” (think of it as saying that A is true-ish), and A \Rightarrow B to mean “A implies B”. As usual, A \lor B means “A or B”, A \land B means “A and B”, and \neg A means “not A”. For example, one can verify each of the following:3The monadic laws for \sim would say {\sim} {\sim} A \Rightarrow {\sim} A and (A \Rightarrow {\sim} B) \Rightarrow ({\sim} A \Rightarrow {\sim} B), both of which are false.


\begin{gathered}
    (A \Rightarrow B) = (\neg A \lor B) \\
    (A \Rightarrow B) = (\neg B \Rightarrow \neg A) \\
    A \Rightarrow {\sim} A \\
    (A \Rightarrow C) \land (B \Rightarrow C) = ((A \lor B) \Rightarrow C) \\
    ({\sim} A \lor {\sim} B) \Rightarrow {\sim} (A \lor B)
\end{gathered}

Note that the first statement above can be thought of as the definition of \Rightarrow, the second is the contrapositive, and the third is one of the axioms of \sim. Before we move on, let us prove the last two statements. To show the fourth statement, we have


\begin{aligned}
    (A \Rightarrow C) \land (B \Rightarrow C)
    &= (\neg A \lor C) \land (\neg B \lor C) \\
    &= (\neg A \land \neg B) \lor C \\
    &= \neg (A \lor B) \lor C \\
    &= ((A \lor B) \Rightarrow C)
\end{aligned}

where we have used the definition of \Rightarrow three times and basic properties of \land and \lor.

For the fifth statement, we have {\sim} A \Rightarrow {\sim} (A \lor B), and likewise {\sim} B \Rightarrow {\sim} (A \lor B), so by the fourth statement the conclusion follows. Note that the converse (i.e., flip the direction of \Rightarrow) is false: it is possible for A \lor B to be probably-true, but neither A nor B is probably-true.

Let us return again to the contrapositive, which says that if A \Rightarrow B then \neg B \Rightarrow \neg A. We would like to use \sim to make a similar statement, that if A \Rightarrow {\sim} B then \neg B \Rightarrow {\sim} (\neg A). This is false (e.g., if A and {\sim} B are true, but B and {\sim} \neg A are false, then the antecedent is true but the conclusion is false4You may recall that in ordinary logic we can test the validity of statements using truth-tables: that is, tabulate every combination of possible values of true and false for each variable, and then verify the claim simplifies to “true” for each combination. We could do something similar, but now each variable has three possible values: true, probably-true but false, and definitely false. Indeed this technique successfully disproved the statement here. However I am not sure this could be used in general, because of the existence of expressions like {\sim} {\sim} A and {\sim} (A \lor B) and {\sim} (\neg A) which cannot be simplified into depending only on the values of A and B.). However, a weaker form of the contrapositive is true: if A \Rightarrow {\sim} B, then {\sim} (\neg B \Rightarrow \neg A). Let us prove that. First, since \neg A implies {\sim} (\neg A), it follows that \neg A \lor {\sim} B implies {\sim} \neg A \lor {\sim} B. Therefore


\begin{aligned}
    (A \Rightarrow {\sim} B)
    &= \neg A \lor {\sim} B \\
    &\Rightarrow {\sim} \neg A \lor {\sim} B \\
    &\Rightarrow {\sim} (\neg A \lor B) \\
    &= {\sim} (A \Rightarrow B) \\
    &= {\sim} (\neg B \Rightarrow \neg A)
\end{aligned}

where we have used the various properties from the list of examples above.

How is this useful? Suppose we have some item A whose truth is of interest, but we cannot directly observe A. Instead A has some consequence, A \Rightarrow B, which is observable. If we observe \neg B, then we can conclude \neg A.

Of course the real world is rarely so generous as to have certainty, so rather than A \Rightarrow B we only know the weaker consequence A \Rightarrow {\sim} B. Now from \neg B we cannot conclude \neg A, but we can probably conclude \neg A, as we have {\sim} (\neg B \Rightarrow \neg A). This is not to be confused with definitely concluding {\sim} \neg A, which would be \neg B \Rightarrow {\sim} \neg A.

This is exactly the formula used in hypothesis testing. We have some unobservable claim, A, called the null hypothesis. To test A, we build a statement of the form A \Rightarrow {\sim} B; this process will be explained in more detail shortly. We then perform an experiment to measure B. If we observe \neg B, then we can probably conclude \neg A: we reject the null hypothesis at some confidence level, representing a false negative rate.

We see from this the two main sources of confusion that arise in hypothesis testing. First, to make this deduction we must observe \neg B. If we observe B then we can make no deduction (we “fail to reject the null hypothesis”), just as if A \Rightarrow B and B then we gain no information about A. Second, even if we do observe \neg B, we do not conclude that A is probably false; instead we probably conclude that A is false. These sound very similar but this is the distinction between {\sim} (\neg B \Rightarrow \neg A), which is true, and \neg B \Rightarrow {\sim} \neg A, which is false. It is common for people to mistakenly interpret the p-value as the probability that A is true; but instead it is the probability of a false negative if A is true.

Hypothesis testing

Time for the messy bit: how do we construct a statement of the form A \Rightarrow {\sim} B? That is, we need to make a choice for B. As an example, suppose we pick a random number from 0 to 1, and say that B is the observation that the random number is bigger than 0.05. Then A \Rightarrow {\sim} B where \sim means “with 95% confidence”. If we observe \neg B, that the random number is smaller than 0.05, we can reject A at a 95% confidence level.5Obviously you should choose the desired confidence level for your application, and not blindly follow the traditional choice of 5% error rate, but for purpose of our discussion it is fine.

As an example, let us say that A is the hypothesis that a certain population has a mean height of 2m and standard deviation of 0.1m. Using the choice for B above, then A \Rightarrow {\sim} B is true where \sim means 95% confidence. If we perform the experiment we either observe B or \neg B. If the former, we deduce nothing; if the latter, we have {\sim} (\neg B \Rightarrow \neg A) at 95% confidence, which we express as “rejecting” the null hypothesis A.

Obviously this is not a great choice of B, as it has no relation to the underlying claim A that we want to test. Our approach has a 5% false negative rate: there is a 5% chance of us rejecting A even if it is true. But we also have only a 5% true negative rate: a 5% chance of correctly rejecting A if it is false. Our goal then is two-fold: we want to choose a B such that A \Rightarrow {\sim} B according to some desired false negative rate, and, subject to maintaining that false negative rate, to maximize the probability of observing \neg B if \neg A.

The task of choosing such a B is usually broken up into three steps: choosing a test statistic, calculating a p-value, and choosing a false negative rate (the threshold for the p-value). Frequently there is an obvious choice for B that is the mathematically best way to distinguish the possibilities A and \neg A, for a certain collection of observations; other times it is less obvious what the optimal choice is, or if there even is one, and we just have to pick something that seems good. Different choices for B correspond to different statistical tests; one is not more correct than another, but one may have greater statistical power for the same significance threshold, and therefore be more likely to reach a useful conclusion for a given dataset.

Using again that A is the claim that a population has a mean height of 2m and standard deviation of 0.1m, we will make a more sensible choice for B. To have any chance of progress, we need observations that relate to A in some way, so let us suppose we have access to a random sample of members of that population, and can measure their heights. Thus we have as data some collection of n numbers X_1, \ldots, X_n representing these heights; these are random variables,6Formally a “random variable” means “a value randomly sampled from a probability distribution”. in that each time we perform the experiment we may get different values for the X_i, and we suppose they are iid. Depending on our application, we might have much more complicated observations, including non-numerical data.

A test statistic is any real-valued function of the observations.7Well, technically that is the definition of a statistic. A test statistic is a statistic that is useful in hypothesis testing. Also it doesn’t really have to be real-valued; it just needs some ordering. We want to choose a test statistic that is informative about whether A is true or false. In this case, the obviously best test statistic is the sample mean:

\overline X = \frac {X_1 + \cdots + X_n}{n}.

Next, we calculate a p-value. A test statistic is a random variable (because it is a function of random variables) and therefore follows some distribution. We want to choose our test statistic in such a way that if A is true, we know the distribution the test statistic follows: in this case, \overline X is approximately normally distributed with mean 2m and standard deviation 0.1m / \sqrt{n}. The p-value is found by normalizing a test statistic to be uniformly distributed in the range 0 to 1, assuming A is true. We do this by computing the cdf of the test statistic and applying it to the observed value of the test statistic; that is, we compute the percentile of the test statistic in its distribution. (We will discuss one-tailed vs two-tailed tests later.)

Finally, we choose a threshold such as 0.05, and B is the observation that the p-value is greater than 0.05.

Thus, if A is true, the p-value is a random variable uniformly distributed in the interval 0 to 1, and therefore there is a 95% chance of observing B. That is, A \Rightarrow {\sim} B, from which we deduce {\sim} (\neg B \Rightarrow \neg A), as desired: our test has a 5% false negative rate. However now we have improved on our previous choice of B. If A is false, the distribution of \overline X is not what we calculated above, and therefore the p-value is not uniformly distributed from 0 to 1. Hopefully the p-value strongly favors values below 0.05 – if so, we will have a high true negative rate (and so low false positive rate).8You are welcome to, say, suppose the population has a height mean of 1.5m and calculate the distribution of \overline X and p under that assumption; if n is large enough, the p-value will be strongly weighted towards 0.
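To make the procedure concrete, here is a minimal sketch of this test in Python, assuming NumPy and SciPy are available; the sample size and the generated data are hypothetical stand-ins for real measurements.

    import numpy as np
    from scipy.stats import norm

    def p_value(heights, mu0=2.0, sigma0=0.1):
        # Percentile of the sample mean under the null hypothesis A:
        # if the heights are iid with mean mu0 and standard deviation sigma0,
        # the sample mean is approximately Normal(mu0, sigma0 / sqrt(n)).
        n = len(heights)
        xbar = np.mean(heights)
        return norm.cdf(xbar, loc=mu0, scale=sigma0 / np.sqrt(n))

    rng = np.random.default_rng(0)
    heights = rng.normal(1.95, 0.1, size=25)  # hypothetical sample: true mean 1.95m
    p = p_value(heights)
    print("reject A" if p < 0.05 else "fail to reject A")  # observing \neg B means p < 0.05

With these hypothetical numbers the sample mean lies about 2.5 standard errors below 2m on average, so the test usually (though not always) rejects the null hypothesis.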

While historically hypothesis testing was intended to give a reject/fail-to-reject binary outcome, as described above, in practice one usually reports p-values directly.

So far we have been staying close to propositional logic, with a bit of uncertainty mixed in. As we change course to look at Bayesian reasoning, we will fully accept the probabilistic nature of our computations. From now on we are using the letters A, B, …, to represent events, with the symbol P(A) to mean the probability of the event A.

Bayesian reasoning

Suppose you have two mutually exclusive events9In probability theory, an event is a statement which has a probability of being true. We assume events follow certain rules, such as if X and Y are events then there is an event called X \cap Y, representing that both X and Y are true, satisfying P(X \cap Y) \leq P(X). A and \neg A you wish to distinguish. Unfortunately we cannot observe them directly; instead we make some sequence of other observations O_1, O_2, \ldots, O_n.

If one of our observations O_1 were, say, incompatible with A, then we would be done; by simple deductive reasoning we could say “A implies \neg O_1; O_1; therefore \neg A”. Instead each of our observations is consistent with either A or \neg A – but, critically, not equally so. We measure the degree of consistency with the conditional probabilities10The symbol P(X | Y), read “(the conditional) probability of X given Y” is defined as the ratio P(X \cap Y) / P(Y). P(O_i | A) and P(O_i | \neg A). From this information it is a simple application of Bayes’ Theorem to compute P(A | O_1), and repeated application to get P(A | O_1 \cap O_2 \cap \cdots \cap O_n).

There is nothing deeper to performing Bayesian computation than repeated use of Bayes’ Theorem, but with a little organization we can gain better understanding of the process and be less likely to make mistakes. First, we can build a table of the information we know:

                 A                                \neg A
prior            P(A)                             P(\neg A)
observation 1    P(O_1 | A)                       P(O_1 | \neg A)
observation 2    P(O_2 | A \cap O_1)              P(O_2 | \neg A \cap O_1)
observation 3    P(O_3 | A \cap O_1 \cap O_2)     P(O_3 | \neg A \cap O_1 \cap O_2)

Because we will be interested in the event that all of the observations are jointly true O_1 \cap O_2 \cap O_3, rather than only one or another of them being true, we have to use the probability of each observation conditioned on the previous ones also happening. If our observations are independent of each other then this is unnecessary, as P(O_3 | A \cap O_1 \cap O_2) = P(O_3 | A) in this case.

Next, let us use the definition of conditional probability:


\begin{aligned}
    P(A) &= P(A) \\
    P(A \cap O_1) &= P(A) \cdot P(O_1 | A) \\
    P(A \cap O_1 \cap O_2) &= P(A \cap O_1) \cdot P(O_2 | A \cap O_1) \\
    &= P(A) \cdot P(O_1 | A) \cdot P(O_2 | A \cap O_1)
\end{aligned}

Thus, the cumulative products we get by just multiplying down the columns of the previous table are the joint probabilities:

                 A                                    \neg A
(start)          1                                    1
prior            P(A)                                 P(\neg A)
observation 1    P(A \cap O_1)                        P(\neg A \cap O_1)
observation 2    P(A \cap O_1 \cap O_2)               P(\neg A \cap O_1 \cap O_2)
observation 3    P(A \cap O_1 \cap O_2 \cap O_3)      P(\neg A \cap O_1 \cap O_2 \cap O_3)

(Note that mathematically there is nothing distinguishing the prior probability from any of our observations; the choice of what information to call “prior” versus “observation” is just a matter of convention. You can think of prior as the “null” or trivial observation: how much you should update your probabilities based on not making any observations at all.)

We should interpret each row as telling us the relative probability of either of the hypotheticals A, \neg A jointly with the cumulative observations. Initially, before making any observations, the prior probabilities P(A) and P(\neg A) sum to 1, but as we apply successive observations the sum of each row will decrease and become smaller than 1. This residue probability (i.e., the amount by which each row sums to less than 1) represents the chance of some hypothetical alternative in which at least one of the observations didn’t happen.

Ok, what good has this done us? Well, our goal is to find the conditional probability P(A | O_1 \cap \cdots \cap O_n). Writing O = O_1 \cap \cdots \cap O_n for brevity, this is just equal to the fraction of the nth row that is in the first column:


    P(A | O) = \frac {P(A \cap O)}{P(O)} = \frac {P(A \cap O)}{P(A \cap O) + P(\neg A \cap O)}.

Indeed, all that matters is the ratio of the two columns: 
    P(A | O) = \frac 1{1 + P(\neg A \cap O) / P(A \cap O)}
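As an illustrative sketch, the following Python reproduces this table arithmetic; all the input probabilities are hypothetical placeholders, and the observations are assumed independent so that each conditional probability P(O_i | A \cap \cdots) reduces to P(O_i | A).

    import numpy as np

    prior = np.array([0.5, 0.5])    # P(A), P(\neg A): hypothetical prior
    cond = np.array([[0.8, 0.3],    # P(O_1 | A), P(O_1 | \neg A)
                     [0.6, 0.4],    # P(O_2 | A), P(O_2 | \neg A)
                     [0.9, 0.2]])   # P(O_3 | A), P(O_3 | \neg A)

    # Multiplying down the columns gives the joint probabilities
    # P(A \cap O_1 \cap O_2 \cap O_3) and P(\neg A \cap O_1 \cap O_2 \cap O_3).
    joint = prior * cond.prod(axis=0)

    # P(A | O) is the fraction of the last row lying in the first column.
    print(joint[0] / joint.sum())   # 0.216 / (0.216 + 0.012) ≈ 0.947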

Since the columns of the second table were found by just multiplying the columns of the first table, the ratio of the columns of the second table is just the product of the ratios of the columns of the first table:


    \frac {P(A \cap O_1 \cap O_2 \cap O_3)}{P(\neg A \cap O_1 \cap O_2 \cap O_3)}
    = \frac {P(A)}{P(\neg A)} \cdot \frac {P(O_1 | A)}{P(O_1 | \neg A)} \cdot
    \frac {P(O_2 | A \cap O_1)}{P(O_2 | \neg A \cap O_1)} \cdot
    \frac {P(O_3 | A \cap O_1 \cap O_2)}{P(O_3 | \neg A \cap O_1 \cap O_2)}

Thus we began with eight11Actually seven, because we knew P(A) + P(\neg A) = 1, so the first row only had one data point. pieces of data in the first table but it turns out that all we needed was four pieces of data, supposing we can directly measure these ratios.

Indeed frequently the ratio P(O_1 | A) / P(O_1 | \neg A), called the Bayes factor, is easier to measure than either P(O_1 | A) or P(O_1 | \neg A) separately; in messy, real-world scenarios the probability of the event O_1 might be wrapped up with many uncertain factors that have nothing to do with either A or \neg A. Estimating P(O_1 | A) requires assessing these irrelevant factors, but estimating the Bayes factor does not.

In extremes where probabilities get very close to 0 or 1 we can simplify matters further by taking logarithms everywhere – why multiply when instead you can add? We then have logarithmic Bayes factors 
    I_1 = \log \frac {P(O_1 | A)}{P(O_1 | \neg A)}
where I_1 is the information contained in the first observation, with positive numbers informing us in favor of event A, and negative numbers informing us against it. If the logarithm is base 2, then I_1 has units of bits. Adding up each of our pieces of information gives 
    \log \frac {P(A \cap O_1 \cap \cdots \cap O_n)}{P(\neg A \cap O_1 \cap \cdots \cap O_n)} = I_1 + \cdots + I_n
(Given a new piece of information I_{n + 1}, we can simply add it to the sum we have so far; this is called Bayesian updating.)

This result is also called the (conditional) log-odds of A.12The log-odds of A is defined as \log (P(A) / P(\neg A)), but here we have conditioned on the observations O_1 \cap \cdots \cap O_n. Log-odds can in some situations be more intuitive than ordinary probability, especially for extreme probabilities. A log-odds of 0 means a probability of 50%, and positive log-odds means an event that is more likely to happen than not. From the (conditional) log-odds of A we can compute the ordinary (conditional) probability: 
    P(A | O_1 \cap \cdots \cap O_n) = \frac 1{1 + \exp(- (I_1 + \cdots + I_n))}.

How does this work in practice? Suppose some event of interest A has some probability of being true; start by computing the unconditional (ie, prior) log-odds of A. We make a series of observations, and assess for each observation how much information it provides in favor of A versus against it. We update our log-odds of A by adding to it the information (positive or negative) from each observation. The result is the updated (ie, conditional on the observations) log-odds for A. Alternatively, if we don’t want to work with logarithms, instead of adding up log-odds we can directly multiply probabilities.

Let us work an example. Suppose you are hiring an engineer, and you want to know their competency A at a particular skill. You make three independent observations: they did adequately on an interview assessing that skill, they have an obscure certificate for that skill, and they went to University of Example which has a good engineering program. Most candidates, whether competent or not, do not have that certificate nor went to that university, so it is hard to assess the probability of those observations; but the relative probabilities, or Bayes factors, are easier to guess at. We have

               P(O_i | A)    P(O_i | \neg A)    Bayes factor    log-odds
prior          0.1           0.9                0.111           -2.2
interview      0.4           0.1                4               1.39
certificate    -             -                  1.2             0.18
UoE grad       -             -                  2               0.69
total          -             -                  1.0667          0.065

Adding up the last column we get a log-odds of 0.065, or a probability of 51.6%, just barely above even odds that the candidate has this particular skill.
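As a sanity check, here is a short Python sketch redoing this arithmetic (with natural logarithms, matching the use of \exp above); the Bayes factors are the rough guesses from the table, not measured quantities.

    import math

    prior_odds = 0.1 / 0.9        # P(A) / P(\neg A) ≈ 0.111
    bayes_factors = [4, 1.2, 2]   # interview, certificate, UoE grad

    # Bayesian updating: add the information from each observation.
    log_odds = math.log(prior_odds) + sum(math.log(f) for f in bayes_factors)

    # Convert the posterior log-odds back into a probability.
    posterior = 1 / (1 + math.exp(-log_odds))
    print(round(log_odds, 3), round(posterior, 3))   # 0.065, 0.516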

This example also illustrates a second important principle to understand with Bayesian reasoning: it can be applied to any situation, and always gives an answer, regardless of how appropriate the technique is for the application. I certainly hope no one involved in hiring candidates is using a calculation of this nature to help make that decision, or at least not with the same lack of care as I did above. One must pay close attention to potential problems: did you account for all the available evidence? how accurate are your probabilities? how robust is your result to changes in the data? The messier and more “real-world” your situation, the easier it is to run afoul of these problems.

How do Bayesian reasoning and hypothesis testing relate?

The short answer is very simple: the conditional probability P(O | A) of making an observation conditional on the null hypothesis A is the p-value of that observation.13Following a conversation with Michael Weissman in which he disagreed with this statement, I should clarify that this is only for a binary observation. More generally one should say that the conditional probability P(O | A) is related to the p-value, with the nature of that relationship depending on the definitions chosen for a particular application.

In Bayesian reasoning we start with a prior probability P(A), a conditional probability P(O | A), and the complementary pieces of information P(\neg A)14Which of course is redundant with P(A), since P(A) + P(\neg A) = 1 and P(O | \neg A), and use these to compute an update:

P(A | O) = \frac {P(A) P(O | A)}{P(A) P(O | A) + P(\neg A) P(O | \neg A)}.

In hypothesis testing, the only piece of data we have is the p-value P(O | A). We can’t compute P(A | O) because we don’t have the other required pieces of data.

If we have all three pieces of information P(O | A), P(A), P(O | \neg A) then it makes sense to go ahead and compute the updated probability P(A | O); however if we do not have access to those last two numbers, or their accuracy is very low, it can be more useful to directly report a p-value P(O | A) and not attempt to compute P(A | O).
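To see concretely why the p-value alone is not enough, here is a tiny Python example; all three input numbers are hypothetical. Even a small p-value P(O | A) can correspond to a wide range of posteriors P(A | O), depending on the prior and on P(O | \neg A).

    prior_A = 0.5            # P(A): hypothetical prior
    p_O_given_A = 0.04       # P(O | A): the "p-value"
    p_O_given_not_A = 0.5    # P(O | \neg A): hypothetical

    # Bayes' theorem, as in the update formula above.
    posterior_A = (prior_A * p_O_given_A) / (
        prior_A * p_O_given_A + (1 - prior_A) * p_O_given_not_A)
    print(posterior_A)   # ≈ 0.074, not 0.04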

In primary scientific research, those two numbers are often inaccessible due to two features of the null hypothesis: the null hypothesis being non-probabilistic, and being very narrow or asymmetric.

Suppose A is a statement like “it will rain tomorrow”, and we want to estimate the probability of tomorrow’s weather based on observing today’s weather. It makes a lot of sense to start with a prior probability P(A) (based on historic rainfall frequency) and update it appropriately. But if A is a statement like “neutrinos are massless” or “fracking does not influence earthquakes” then it is not meaningful to speak of probabilities like P(A) or P(A | O).15Note that P(O | A) is still meaningful, so long as the observation is probabilistic in nature. We could interpret P(A) to mean our level of confidence in A, i.e., as information content, but this often has more to do with the mental state of the researcher than with A.

Second, Bayes’ theorem is fully symmetric between A and \neg A, but frequently in scientific research these hypotheses are not symmetric. The classic example is testing if a coin is fair: suppose we observe 30 heads in 100 coin flips (iid), and we want to test the null hypothesis A that the coin is fair. Here A is a very narrow and specific claim that lets us easily compute a p-value P(O | A). The negation \neg A is unspecific: the coin has some bias, but it could be any nonzero amount. The probability P(O | \neg A) depends on how biased the coin is, so we need additional information like a probability distribution for the amount of bias. This is a lot to ask for when we don’t even know if the coin is biased at all!

Finally, even if we had this extra data and could compute P(A | O), frequently that is less useful than reporting raw p-values. Suppose you are doing secondary research, and want to estimate P(A | O); you find that primary research into the effects of A has identified a series of unrelated observations O_1, O_2, O_3, O_4 that give information about A. Each of these observations has been made by different research teams with different specialities. If each team reported their own estimate for P(A | O_i), it would be a troublesome and error-prone process to combine these estimates into a single value for P(A | O_1 \cap O_2 \cap O_3 \cap O_4): each team will have a different estimate for P(A) with different assumptions; each team would incorporate different evidence (perhaps the researchers investigating the O_1 phenomenon were unaware of the existence of O_4 and omitted it entirely; or the O_2 researchers incorporated some other evidence O_5 that was later found to be unreliable). For you to compute P(A | O) would require first undoing all the computations that each team did to reconstruct the underlying p-values P(O_i | A); much simpler if they just reported these p-values directly.

The reason primary researchers report p-values is that this is usually the natural end point of their research; synthesizing the p-values of many different observations into a single posterior probability is the job of secondary research. Each probability P(A), P(O_i | A), P(O_i | \neg A) might involve a completely different physical process and speciality, so it is most suitable to have each term investigated separately by experts in the appropriate field.


Appendix: one-tailed vs two-tailed tests, and other messiness

Recall from the earlier section on hypothesis testing that the method involves three steps:

  1. Compute a test statistic, which is a random variable that is a function of some collection of observations.
  2. Normalize the test statistic so that it is uniformly distributed in the range [0, 1]. This normalized value is called the p-value.
  3. Choose a desired false negative rate, and “reject” the null hypothesis if the p-value is below this threshold.

We slightly glossed over the second step, giving one way to normalize the test statistic by applying its cdf.

Choosing the normalization method is every bit as important as choosing the test statistic (though usually obvious once the test statistic has been chosen); as with the choice of test statistic, any choice is valid so long as the result is uniformly distributed in the range [0, 1], but not every choice might have the same statistical power (i.e., true negative rate).

Recall that we want the p-value to be as low as possible when conditioned on \neg A; this maximizes the chance of a correct rejection of A, since we reject when the p-value is below the threshold. Therefore when normalizing, we first sort all possible values of the test statistic by how likely they are under the condition of \neg A. Thus, the most likely outcomes will have the lowest p-values.

Slightly more carefully, what we are sorting by is how informative each possible value is in favor of \neg A over A; that is, we are sorting by the Bayes factors P(O | \neg A) / P(O | A).

For example, suppose our null hypothesis A is that a coin is fair, and we observe O that the coin had 30 heads out of 100 flips. Our test statistic is the number of heads, which when conditioned on A is approximately normally distributed with a mean of 50 and standard deviation of 5. We can normalize a normal distribution into a uniform distribution by applying the cdf; then 0 heads gives a p-value of 0, 50 heads gives a p-value of 0.5, and 100 heads gives a p-value of 1. What is the p-value of 30 heads? This is 4 standard deviations below the mean, which a standard normal table tells us corresponds to a percentile of 0.00003; that is our p-value, and we can feel confident in rejecting the null hypothesis that the coin is fair.

While this worked okay, depending on the application this was not the best choice of normalization. If instead we observe 70 heads out of 100, we would have gotten a p-value of 0.99997, and we would fail to reject the null hypothesis even though we know intuitively that the observation contains enough information to do so. We would have done better if we had sorted the possible observations by how informative they are in rejecting the null hypothesis. Here finding 0 or 100 heads is the most informative, so they get the lowest p-values, followed by 1 or 99 heads, then 2 or 98 heads, and so on, ending with 50 heads getting assigned a p-value of 1. As before, this is done so that the result is uniformly distributed in the range 0 to 1. Now if we observe 70 heads, this has a higher p-value than any observation of 0 to 29 or 71 to 100 heads, but a lower p-value than 31 to 69 heads; therefore it gets a p-value of 0.00006,16The total probability of seeing 0 to 29 or 71 to 100 heads, conditioned on A, is 0.00006. again comfortably rejecting the null hypothesis that the coin is fair.
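The two normalizations in this example are easy to check numerically; here is a minimal sketch, assuming SciPy is available and using the same normal approximation (mean 50, standard deviation 5 under the null):

    from scipy.stats import norm

    # Plain cdf normalization: the percentile of 70 heads.
    cdf_p = norm.cdf(70, loc=50, scale=5)            # ≈ 0.99997
    # Two-tailed normalization: both tails count as extreme.
    two_tailed_p = 2 * norm.cdf(-abs(70 - 50) / 5)   # ≈ 0.00006

    print(cdf_p, two_tailed_p)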

As we are sorting the observations by their Bayes factors P(O | A) / P(O | \neg A), this sorting depends on the choice of alternate hypothesis \neg A. If \neg A is that the coin has any nonzero bias, then the sorting method we just used is appropriate, and is called a two-tailed test; but if \neg A is more specifically that the coin is biased in favor of tails, then the observation of 70 heads out of 100 does not significantly favor either the null or alternative hypotheses, and so gets the p-value of 0.99997 we had originally calculated. This is the one-tailed test. For example, when testing a cancer medication in rats, our null hypothesis is that it has no effect, and our alternate is that it reduces cancer rate, so a one-tailed test is appropriate in that we would not draw any conclusions from an observation of it increasing cancer rates.17And also in that rats, unlike certain biased coins, have one tail. This was an actual question I got from a cancer researcher who knew how to perform the calculations for a one-tailed and two-tailed t-test but not which one was appropriate, and only one of the results was “significant”.18Apparently the group’s statistician was on vacation at the time; why the researcher thought to ask a 19-year-old kid from a foreign country is unclear. Probably the better answer would have been that the magical 5% confidence threshold is arbitrary and no great import should be assigned to whether your results fall above or below that line.

(A little subtlety: how do we sort by the Bayes factors if we can’t compute them without choosing a specific bias for \neg A? For the one-tailed coin test, it does not matter; the sorting is the same for any possible bias even though the actual values are not. For two-tailed, we have to assume that bias in favor of heads is equally likely as bias in favor of tails, and maybe some further assumptions.)
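The sorting procedure itself is mechanical enough to sketch in a few lines of Python; here the alternative hypothesis is a hypothetical bias of 0.4 in favor of tails, though (as noted above) any bias below 0.5 produces the same ordering for the one-tailed test.

    import numpy as np
    from scipy.stats import binom

    n = 100
    ks = np.arange(n + 1)
    null = binom.pmf(ks, n, 0.5)   # P(k heads | A: fair coin)
    alt = binom.pmf(ks, n, 0.4)    # P(k heads | \neg A: hypothetical bias)

    # Sort outcomes from most to least informative in favor of \neg A,
    # i.e., by descending Bayes factor P(k | \neg A) / P(k | A).
    order = np.argsort(-(alt / null))

    # The p-value of an outcome is the total null probability of all
    # outcomes at least as informative as it.
    p = np.empty(n + 1)
    p[order] = np.cumsum(null[order])
    print(p[30])   # ≈ 0.00004: the exact one-tailed p-value for 30 heads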

While frequently choosing how to normalize the test statistic amounts to simply choosing between a one-tailed or two-tailed test, in principle it could be any possible normalization scheme: maybe the two tails could be weighted differently, or the sorting goes from inside out, or even numbers come before odd numbers, etc etc. The test statistic doesn’t even have to be a number – all that is required is that it can be sorted by the Bayes factors.

(Addendum. I couldn’t find a decent splash image for this post online, so I asked a bot to draw a picture of “hypothesis testing” and ended up with this mess.)
