
Sunday, June 5, 2016

The Difference between a priori and a posteriori Probability

Imagine a pure thought experiment: an abstract, logical world where everything is true by definition.

In this world, we have a fair die. The pure mathematical a priori probability of rolling a 6 with this die is 1/6. That probability has necessary truth, but only within the abstract fair game of dice one is imagining: the analytic a priori propositions that describe this imaginary world and its probabilities are necessarily true, but only within that abstract analytic a priori system. This is the epistemological nature of a priori probabilities.
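To make the point concrete, here is a minimal sketch of the a priori calculation, assuming (as the fair-die model stipulates) a sample space of six equally likely outcomes:

P(roll = 6) = (number of favourable outcomes) / (number of possible outcomes) = 1/6

The equality holds by definition within the model; it says nothing yet about any physical die.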

But what happens when we step into the real world? As soon as we move to the real world and take any given real-world die, we cannot have apodictic certainty that the die is not loaded and that the probability 1/6 really applies with necessary truth.

Why? Because of (1) the epistemological problems with how we acquire a posteriori knowledge and (2) Hume’s problem of induction. No matter how much evidence you have that the die is fair, there is still a tiny possibility that you are mistaken. This still holds even if, for good measure, the empirical relative frequency approach is used to estimate the probabilities of the outcomes when throwing the die, and that evidence suggests the die is fair.
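As an illustration of the relative frequency approach just mentioned, here is a minimal simulation sketch in Python (the function name, trial count and random seed are my own illustrative choices, not anything from the post). It estimates the probability of a 6 by counting how often a 6 turns up over a long run of simulated throws; however close the estimate comes to 1/6, it remains evidence, not proof, that a real die is fair.

import random

def estimate_prob_of_six(n_rolls=100_000, sides=6, seed=42):
    """Estimate P(roll == 6) as a relative frequency over n_rolls simulated throws."""
    rng = random.Random(seed)
    sixes = sum(1 for _ in range(n_rolls) if rng.randint(1, sides) == 6)
    return sixes / n_rolls

if __name__ == "__main__":
    estimate = estimate_prob_of_six()
    print(f"Relative frequency of a 6 over 100,000 throws: {estimate:.4f}")
    print(f"A priori value for a fair die: {1/6:.4f}")
    # However close these two numbers are, the inference that any real die
    # is fair remains contingent: a finite run of throws cannot rule out
    # a loaded die with certainty.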

In short, the belief that the probability of any given die rolling a 6 is 1/6 has become an a posteriori probability: contingent, not necessarily true.

Most people just take the abstract analytic a priori model first sketched and impose it on the real world, forgetting that this is an epistemological mistake. The difference between

(1) abstract a priori truth and

(2) contingent, empirical a posteriori truth

is real. It applies even to probability.

More on this issue here.

1 comment:

  1. You write: “Because of (1) the epistemological problems with how we acquire a posteriori knowledge and (2) Hume’s problem of induction.”

    I read propositions (1) and (2) in the following way:

    As for method (1): owing to errors and limitations in the methods by which we try to capture the phenomenon in question, we may be prevented from recognising important aspects of it (say, the nature of physical laws, or the impossibility of establishing fair game conditions, etc.) that ensure the expected probabilistic regularities will be violated in reality.

    As for induction (2): these irregularities show up quite unexpectedly; unexpectedly, because our expectations were false, for the reasons insinuated in (1).

    Methodological errors could still occur even if, in principle, we were able to capture the conditions that govern the phenomenon; it may simply be that we happen to look in the wrong direction, as it were.

    Of course, we can never be conclusively sure of the correctness of our methodology, as a big surprise can always crop up, courtesy of induction.

    So we can never be entirely sure that we are getting it right, and that means that, even if we do get it right, there is no way to know it.

    In fact, I think that is a huge boon. It makes our species try again and again, whereby we gradually get to know more about the world than the crocodile, which has little incentive to doubt its omniscience.
