The major division is between

(1) *a priori* probabilities, and

(2) *a posteriori* probabilities.

*A priori* probabilities can really be understood as analytic *a priori*: they are the product of a formal, abstract system in which it is simply assumed by definition that the system has a finite, exhaustive and exclusive number of outcomes which are all equiprobable.

That is, an *a priori* probability is necessarily true and a mathematical certainty because it is the product of an analytic *a priori* model that does not really describe reality, but simply *assumes* as part of its model the following factors:

(1) the abstract system is random, in the sense that, when an outcome occurs, it is from a set of possible known outcomes, and one such outcome is sure to occur. E.g., in an abstract tossing of a fair coin, we know that the result will be heads or tails and can denote the set of outcomes by {heads, tails}. The latter is the sample space.

(2) the abstract system has a sample space (or set of outcomes) that is known, in the sense that there are finitely many outcomes of the random process and these can be stated. (An event is defined as a subset of the sample space.)

(3) all events are equally likely to occur (or equiprobable).

Of course, a real world system *might* have these properties, but whether it does or not is a matter for empirical investigation.

In the calculation of an analytic *a priori* probability, we do not look at the real world, but simply *assume* an abstract model where all these conditions hold true.

Simply stated, the probability P(*E*) of any event *E* in a finite sample space *S*, where all outcomes are equally likely, is the number of outcomes for *E* divided by the total number of outcomes in *S*.
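This classical rule can be sketched in a few lines of Python (the function name and the die example are illustrative additions, not from the original text):

```python
from fractions import Fraction

def classical_probability(event, sample_space):
    """P(E) = |E| / |S| for a finite sample space of equiprobable outcomes."""
    event = set(event)
    sample_space = set(sample_space)
    # An event is, by definition, a subset of the sample space.
    assert event <= sample_space, "event must be a subset of the sample space"
    return Fraction(len(event), len(sample_space))

# Abstract fair die: probability of rolling an even number.
print(classical_probability({2, 4, 6}, {1, 2, 3, 4, 5, 6}))  # → 1/2
```

Note that the calculation never consults the world: it simply counts outcomes inside the assumed model.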

By such calculations, probability and uncertainty can be quantified with objective numeric “points.” In this case, the uncertainty is normally called “risk” or “Knightian risk,” after the work of Frank Knight.

This is ultimately pure mathematics and analytic *a priori* knowledge: the probabilities we create in this way have necessary truth, because they are abstract models.

We can see this in the way that any *a priori* probabilities describing games of chance like dice throwing, roulette or card games are really describing *abstract games*, not real world ones. In *a priori* probabilities describing card games or roulette, for example, we are just *assuming by definition* that the games are fair and not rigged, and that the system is truly random.

As soon as you move to a real world game of chance, you cannot be absolutely sure that the game is fair, not rigged, and truly random, because we can only know this by *a posteriori* knowledge, which is fallible and does not yield certainty.

When we calculate the probability of winning at a real world game of roulette, we are taking an abstract model from pure mathematics and using it as an *applied mathematical system*: this means that the model is transformed from an analytic *a priori* system to a synthetic *a posteriori* system. We do not get epistemological certainty about the truth of the probabilities because any given real world system (such as the game of roulette) might not conform to the assumptions of the model (e.g., it might be rigged or the wheel might be biased).

Indeed, a better applied mathematical model for calculating probabilities of real world games of chance is the *relative frequency approach*: in roulette or dice, for example, you look at the long-run relative frequencies of outcomes in the game and see if they converge over time to stable relative frequencies as predicted by *a priori* probability.
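As a rough illustration of the relative frequency approach, one can simulate an abstract fair die and watch the relative frequency of one outcome settle toward the *a priori* value of 1/6 (the simulation, seed and function name are hypothetical conveniences, not part of the original argument):

```python
import random

def relative_frequency(outcome, trials, sides=6, seed=42):
    """Estimate P(outcome) for a simulated fair die from its relative frequency."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    hits = sum(1 for _ in range(trials) if rng.randint(1, sides) == outcome)
    return hits / trials

# As the number of trials grows, the relative frequency of a six
# should settle near the a priori value 1/6 ≈ 0.1667.
for trials in (100, 10_000, 1_000_000):
    print(trials, relative_frequency(6, trials))
```

Of course, the simulation itself *assumes* a fair die; for a real die, whether the frequencies actually stabilise like this is exactly the empirical question at issue.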

Again, by such calculations, probability and uncertainty may well be quantified with objective numeric “points” – assuming the process really does have and continues to have stable relative frequencies. If the latter, the uncertainty could once again be interpreted as “risk” or “Knightian risk.”

But even this relative frequency approach is *a posteriori* and does not guarantee certainty. For no matter how many outcomes you look at, it is possible that the next outcome might be biased in some way or rigged: so the probabilities are fallible and contingent. Even in natural systems where we have some reason to think that nature produces the objective probabilities as a matter of ontological necessity, Hume’s problem of induction throws up epistemological problems that still prevent us from obtaining epistemological certainty.

Matters are even worse once we get to social and economic systems: for here stable relative frequencies may not even exist, or when they do, they exist only in the short run, as noted by Paul Davidson:

“[s]ome economic processes may appear to be ergodic, at least for short subperiods of calendar time, while others are not. The epistemological problem facing every economic decision maker is to determine whether (a) the phenomena involved are currently governed by probabilities that can be presumed ergodic – at least for the relevant future, or (b) nonergodic circumstances are involved.” (Davidson 1996: 501).

At this point, the difference between ergodic and non-ergodic systems becomes important. In non-ergodic systems, the relative frequency approach will not work, since stable relative frequencies cannot be obtained.
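A toy contrast can make the ergodic/non-ergodic distinction vivid: an i.i.d. fair coin has a time average that settles near 0.5 in every run, while the fraction of time a simple symmetric random walk spends above zero follows the arcsine law and need not settle at any particular value from run to run. Both processes here are illustrative assumptions of mine, not examples drawn from Davidson:

```python
import random

def fraction_heads(steps, seed):
    """Long-run relative frequency of heads for an i.i.d. fair coin (ergodic)."""
    rng = random.Random(seed)
    return sum(rng.random() < 0.5 for _ in range(steps)) / steps

def fraction_positive(steps, seed):
    """Fraction of time a simple random walk spends above zero (a non-ergodic time average)."""
    rng = random.Random(seed)
    position, above = 0, 0
    for _ in range(steps):
        position += 1 if rng.random() < 0.5 else -1
        above += position > 0
    return above / steps

# The coin's frequency is close to 0.5 for every seed; the walk's
# occupation fraction typically scatters widely across seeds.
for seed in (1, 2, 3):
    print(fraction_heads(100_000, seed), fraction_positive(100_000, seed))
```

The point of the sketch: a single long run tells you a great deal about the coin, but very little about where the walk will spend its time in the next run.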

With non-ergodic systems and many other types of probabilities (like the probabilities of certain past events or future events), we must move to yet another type of probability: epistemic probability (sometimes called evidential probability).

An epistemic probability is a property of propositions and also of inferences in inductive arguments (depending on the validity and soundness of the inductive arguments and the evidence offered in support of them).

An epistemic probability has a degree of probability and uncertainty, but it is not an objectively numeric “point” probability. It is better described as a degree of belief, on the basis of empirical evidence and inductive argument.

When we face epistemic probabilities derived from experience, empirical evidence and inductive arguments, and no objective “point” probabilities (either in an *a priori* or relative frequency sense) can be given, then the probability of a proposition also comes with *some degree of uncertainty*, from low to very high.

When one has no relevant or convincing evidence on which to make an inductive argument, one would face *total or radical uncertainty*. Exactly when and in what circumstances one does face radical uncertainty, of course, could be a matter of some dispute.

It is important to remember that, in the Post Keynesian tradition, the word “uncertainty” is usually understood to be non-quantifiable and restricted to that sense, whereas “risk” is quantifiable.

**BIBLIOGRAPHY**

Davidson, Paul. 1996. “Reality and Economic Theory,” *Journal of Post Keynesian Economics* 18.4: 479–508.
