Bayesian probability banned?

This post on Understanding Uncertainty bears the amusing, alarming and somewhat over-stated title “Court of Appeal bans Bayesian probability (and Sherlock Holmes)”. It’s not unusual for people to experience a little intellectual indigestion when first faced with the Bayesian probabilistic paradigm; this reaction is particularly prevalent among people whose point of view is, roughly speaking, “frequentist”, even though they may have had no formal education in probability in their lives. Personally, I think that the judge’s criticisms, and Understanding Uncertainty’s criticisms of those criticisms, are somewhat overblown.

However, I will advance one criticism of Bayesian probability as applied to practical situations. The basic axiom of the Bayesian paradigm is that one’s state of knowledge (or uncertainty) can be encapsulated in a unique, well-defined probability measure ℙ (the “prior”) on some sample space. Having done this, the only sensible way to update one’s probability measure (to produce a “posterior”) in light of new evidence is to condition it using Bayes’ rule, and I have no quarrel with that theorem. My issue is with specifying a unique prior. If I believe that a coin is perfectly balanced, then I might be willing to commit to the prior ℙ for which

ℙ[heads] = ℙ[tails] = 1/2.
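For concreteness, here is a minimal sketch (my own illustration, not anything from the original post) of the updating step itself: the standard conjugate Beta–Bernoulli form of Bayes’ rule for a coin’s heads-probability. The Beta parameters and the flip counts are purely illustrative assumptions.

```python
# Conjugate Beta-Bernoulli updating: an illustrative sketch, not a recipe from the post.
# Prior belief: theta = P(heads) ~ Beta(alpha, beta).
# Bayes' rule with k heads and m tails gives the posterior Beta(alpha + k, beta + m).

def update_beta_prior(alpha: float, beta: float, heads: int, tails: int) -> tuple[float, float]:
    """Return the posterior Beta parameters after observing the given flips."""
    return alpha + heads, beta + tails

# Hypothetical numbers: a symmetric prior concentrated near 1/2, updated on 7 heads in 10 flips.
alpha0, beta0 = 100.0, 100.0
alpha1, beta1 = update_beta_prior(alpha0, beta0, heads=7, tails=3)
print(f"Posterior mean P[heads] = {alpha1 / (alpha1 + beta1):.4f}")  # ~0.5095
```

The mechanical step is not where the difficulty lies; the difficulty is in committing to one particular prior (here, one particular pair (α₀, β₀)) before any data arrive.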

But can I really know that the coin is perfectly fair? Can I reasonably be expected to tell the difference between a perfectly fair coin and one for which

| ℙ[heads] − ℙ[tails] | ≤ 10⁻¹⁰⁰?

(By Hoeffding’s inequality, to satisfy oneself, with confidence 1 − ε, of the truth of this inequality would take of the order of 10²⁰⁰ log(1/ε) (i.e. lots!) independent flips of the coin.) If not, then any prior distribution ℙ that satisfies this inequality should be a reasonable prior, and all the resulting posteriors are similarly reasonable conclusions. This kind of extended Bayesian point of view goes by the name of the robust Bayesian paradigm. It may seem that the difference between 10⁻¹⁰⁰ and 0 is negligible… but it is not! The results of statistical tests can depend very sensitively on the assumptions made, especially when there is little data available to filter through those assumptions (and, scarily, sometimes even in the limit of infinite data!).
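To put rough numbers on both halves of the paragraph above, here is a small sketch, entirely my own illustration rather than anything from the post. The first part evaluates the Hoeffding-style sample-size bound for certifying the 10⁻¹⁰⁰ tolerance; the second does a crude robust-Bayesian computation that propagates a whole set of priors to a set of posterior conclusions. The choice of ε, the prior family, the 0.01 tolerance (a deliberately visible stand-in for the invisible 10⁻¹⁰⁰), and the data are all assumptions made for illustration.

```python
import math

# --- 1. Hoeffding-style sample size (illustrating the parenthetical above).
# To certify |P[heads] - P[tails]| <= delta with failure probability eps,
# estimate p = P(heads) to accuracy t = delta / 2; Hoeffding gives
#     P(|p_hat - p| >= t) <= 2 * exp(-2 * n * t**2),
# so it suffices to take n >= ln(2 / eps) / (2 * t**2).
def hoeffding_flips(delta: float = 1e-100, eps: float = 1e-6) -> float:
    t = delta / 2
    return math.log(2 / eps) / (2 * t * t)

print(f"flips needed ~ {hoeffding_flips():.3e}")  # ~3e+201, i.e. lots!

# --- 2. A crude robust-Bayesian computation: propagate a *set* of priors.
# Instead of a single Beta prior for p = P(heads), take every Beta(a, b) whose
# mean lies within 0.01 of 1/2 and whose strength a + b ranges over a few
# illustrative values, then report the spread of the resulting posterior means.
def posterior_mean(a: float, b: float, heads: int, tails: int) -> float:
    """Posterior mean of p under a Beta(a, b) prior after the given flips."""
    return (a + heads) / (a + b + heads + tails)

posteriors = [
    posterior_mean(strength * mean, strength * (1 - mean), heads=6, tails=4)
    for strength in (2, 20, 200)     # how strongly the prior insists on its mean
    for mean in (0.49, 0.50, 0.51)   # prior means close to perfect fairness
]
print(f"posterior means range over [{min(posteriors):.3f}, {max(posteriors):.3f}]")
# With only 10 flips, these "reasonable" posteriors already disagree noticeably;
# the robust-Bayesian conclusion is the whole interval, not a single number.
```

The robust-Bayesian answer is the whole interval of posterior values, and deciding whether that interval is narrow enough to act on is a question the single-prior formulation never even asks.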

So, yes, I agree that (classical) Bayesian statistics shouldn’t be let near life-or-death cases in a courtroom. But robust Bayesian statistics? I could support that…
