Proper analysis does not need Bayes
A recent XKCD comic makes fun of frequentist statisticians and hails Bayesian statisticians for their superior skill at predicting trivial outcomes. The comic makes it look like only Bayesian approaches are sane and reality-compatible when predicting outcomes. This annoys me. There is no need to invoke Bayes’ theorem and Bayesian statistics to evaluate a machine that reports whether the sun has exploded with a known random bias.
I much prefer the engineering approach.
The description of the machine goes as follows: “This neutrino detector measures whether the sun has gone nova. Then, it rolls two dice; if they both come up six, it lies to us; otherwise, it tells the truth.” The question is then how to interpret a “yes” answer.
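For reference, the machine’s stated bias is easy to compute: it lies only when two fair six-sided dice both come up six. A minimal sketch of that arithmetic:

```python
# Probability that two fair six-sided dice both come up six,
# i.e. the stated probability that the machine lies on a given query.
p_lie = (1 / 6) * (1 / 6)    # 1/36
p_truth = 1 - p_lie          # 35/36

print(f"P(lie) = {p_lie:.4f}, P(truth) = {p_truth:.4f}")
```

So, taken at face value, the machine tells the truth about 97% of the time.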
To start with, the description of the machine is superficial. The machine is really a neutrino detector combined with a dice-rolling mechanism and a yes/no output. There are many engineering aspects that can go wrong:
- the neutrino detector may fail to detect neutrinos properly (over-counting or under-counting, possibly non-deterministically);
- the link between the output of the neutrino detector and the dice-rolling mechanism may be faulty, and not report the neutrino detection properly;
- the dice may be biased without anyone knowing;
- the output from the dice-rolling mechanism may not be connected properly to the yes/no output.
The probability of the sun exploding is tiny compared to the probability of any of these engineering failures; therefore, if the machine outputs “yes”, the answer should be primarily attributed to human error in the making of the machine.
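This magnitude comparison can be sketched numerically. All the rates below are assumptions for illustration, not measured values; the point is only that a spontaneous nova is many orders of magnitude less likely than either a dice-induced lie or a hardware fault:

```python
# Back-of-the-envelope comparison of causes for a "yes" output.
# All numbers are illustrative assumptions, not measurements.
p_nova = 1e-12      # assumed chance the sun went nova during the experiment
p_lie = 1 / 36      # the machine lies when both dice come up six
p_fault = 1e-3      # assumed rate of an engineering failure forcing a spurious "yes"

# Rough contributions to observing a "yes":
p_yes_from_nova = p_nova * (1 - p_lie)   # sun exploded and the machine told the truth
p_yes_from_lie = (1 - p_nova) * p_lie    # sun fine, but the dice made the machine lie
p_yes_from_fault = p_fault               # hardware fault, regardless of the dice

print(f"nova:  {p_yes_from_nova:.2e}")
print(f"lie:   {p_yes_from_lie:.2e}")
print(f"fault: {p_yes_from_fault:.2e}")
```

Under these assumed numbers, the nova explanation loses to both alternatives by roughly ten orders of magnitude; no Bayesian machinery is needed to see which explanation dominates.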
Moreover, a high neutrino count at the detector may be caused by another astrophysical phenomenon. I don’t know the likelihood of that, but if it is possible, it would further reduce the probability that a “yes” answer was caused by the sun exploding, and make a bet that the sun did not explode worthwhile.
Bayesians don’t have a monopoly on common sense.