Lies, Damn Lies, And Statistics


A few days ago, I declared truth to be nothing but a model of reality that we can use to predict observable results. Today I wanted to point out some examples of situations where such models have gone wrong, or clashed with other people's models, but I realized that nearly every article I have written on philosophy so far is such an example. Still, now that I've started this post, I need to finish it somehow, so let's consider the nature of learning.

Phrased overly simply, learning is the acquisition of truths. But that glosses over a fair amount of detail. And if I say truth is a model that allows one to predict observable results, I'm glossing over even more.

Ask yourself why you would want to predict anything in life, and you’re likely to arrive at two very closely related reasons:

  1. To be able to act in ways that avoid harmful events happening to you.
  2. By extension, to be able to act in ways that increase the likelihood of beneficial events happening to you.

The wish to predict the future, in other words, is grounded in the fundamental human desire for survival. If I can predict that my skull is very likely to break when a mugger swings a baseball bat at it at high speed, and act accordingly to avoid the blow, then this form of prediction is essentially equivalent to survival. By extension, if I can opt not to go to a place where such a situation would arise in the first place, and instead invest the money the mugger would have stolen in some very promising stock, I may even become rich.

Quite naturally, therefore, we want to predict the future as accurately as possible in order to optimize our survival prospects. So how do we do that?