Wednesday, August 28, 2013

Non-Ergodicity and Trends and Cycles

Non-ergodicity is a tricky concept relevant to economics.

Any particular economy is not purely non-ergodic, but a complex mix of ergodic and non-ergodic elements. Non-ergodicity is a property of those processes or phenomena in which the time and/or space averages of certain outcomes or attributes of the system either do not coincide for an infinite series or do not converge as the finite number of observations increases. That is to say, there are no stable long-run relative frequencies, and in a non-ergodic process even a large sample of the past does not reveal the future, so objective probabilities cannot be calculated for the likelihood of any specific future outcome.

But, as already noted, any real-world economy is a complex mix of ergodic and non-ergodic processes, and the important point is that non-ergodicity does not mean that no trends, cycles or oscillations occur in non-ergodic systems or in the economy at large.

We have, for example, no difficulty identifying high unemployment in the present or immediate past, rising unemployment, or rising or falling real output growth, or trends like bull and bear markets in stocks, even though no objective probability score can be given for the future value of any individual share.

What cannot be done in a pure non-ergodic system is to give an objective probability score for some specific future state of the system.
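
Since the contrast between time averages and ensemble averages is the crux here, a minimal simulation may help to fix ideas. The following is an illustrative Python sketch (not from the original post): an i.i.d. coin-flip process behaves ergodically, while a simple random walk does not.

```python
# Illustrative sketch: an ergodic process (i.i.d. coin flips), whose time
# average converges to the ensemble mean, versus a non-ergodic one (a
# random walk), whose time average depends on the particular realization.
import numpy as np

rng = np.random.default_rng(0)
T = 100_000

flips = rng.integers(0, 2, size=T)          # i.i.d. 0/1 coin flips
print("coin time average:", flips.mean())   # ~0.5 on any run

steps = rng.choice([-1, 1], size=T)
walk = np.cumsum(steps)                     # simple random walk
print("walk time average:", walk.mean())    # varies wildly from run to run
```

Re-running the script with different seeds makes the point: the coin's time average is always near 0.5, while the walk's time average is itself a random quantity with no stable long-run value.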

Things can become more complex because some processes have short-term stable relative frequencies but may not have such stability in the long run:
“[s]ome economic processes may appear to be ergodic, at least for short subperiods of calendar time, while others are not. The epistemological problem facing every economic decision maker is to determine whether (a) the phenomena involved are currently governed by probabilities that can be presumed ergodic – at least for the relevant future, or (b) nonergodic circumstances are involved.” (Davidson 1996: 501).
The long-run instability of certain human ensemble averages is an example of this.
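
To make Davidson's point concrete, here is a hypothetical toy example in Python: a process with a structural break shows stable relative frequencies within each subperiod, but the pooled long-run frequency describes neither regime (all the numbers here are invented).

```python
# Hypothetical example: a binary process whose success probability shifts
# from 0.3 to 0.7 halfway through. Each subperiod looks ergodic; the
# whole series does not.
import numpy as np

rng = np.random.default_rng(1)
before = rng.binomial(1, 0.3, size=5000)   # regime 1: P(success) = 0.3
after = rng.binomial(1, 0.7, size=5000)    # regime 2: P(success) = 0.7

print("subperiod 1 frequency:", before.mean())   # ~0.3, stable
print("subperiod 2 frequency:", after.mean())    # ~0.7, stable
print("pooled frequency:", np.concatenate([before, after]).mean())  # ~0.5
# The pooled value (~0.5) is useless for predicting either regime,
# let alone the process after some future break.
```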

Furthermore, some processes – and perhaps long term climate is one – may be so complex that they have elements that are ergodic and other elements that are non-ergodic, so that how one characterises the overall system can be an epistemic problem.

Links
“Physical Probability versus Evidential Probability,” July 9, 2013.

“Keynes’s Interval Probabilities,” July 15, 2013.

“Davidson on ‘Reality and Economic Theory’,” July 10, 2013.

“Probability and Uncertainty,” July 11, 2013.

“A Classification of Types of Probability and Theories of Probability,” July 14, 2013.

“Is Long Term Climate Non-Ergodic?,” July 18, 2013.

BIBLIOGRAPHY
Davidson, Paul. 1996. “Reality and Economic Theory,” Journal of Post Keynesian Economics 18.4: 479–508.

Thursday, July 18, 2013

Is Long Term Climate Non-Ergodic?

I must confess this post is more a question and a series of musings, rather than a proper answer.

Edward Norton Lorenz (1917–2008), the American mathematician and meteorologist, apparently thought that earth’s climate system displayed both ergodic and non-ergodic elements:
“… let us recall that a dynamic system is termed ergodic if the equations describing its evolution at random initial conditions and fixed external parameters have a unique possible stationary solution. If the dynamic system is not ergodic, then its behaviour over an infinitely large time interval will depend on the initial conditions. As applied to the climatic system this is equivalent to the fact that external parameters uniquely determine climate in the first case and non-uniquely in the second case.

The idea of the non-uniqueness of Earth’s climate was first put forward by Lorenz (1979), who termed ergodic systems transitive, and those systems which do not have the property of transitivity intransitive. The real climatic system, according to Lorenz, is almost intransitive, that is, it shows signs of transitivity and intransitivity simultaneously. Alternation of glacial and interglacial epochs over the last 3.5 million years of Earth’s history testified to this.” (Kagan 1995: 15).
The climate system is highly complex, and the further into the future one goes, the greater the uncertainty about what the weather will be like on any particular day. In fact, one might argue that, as soon as one goes beyond the very short-term future (say, hours, days and weeks at most), it becomes extremely difficult if not impossible to predict the weather. Certainly, predictions cannot yield objective probability scores, and uncertainty increases the further one moves into the future.

Nevertheless, there are certain predictable cycles: the alternation of day and night, the changes of the seasons, and (generally speaking) Ice Ages.

So the system seems simultaneously ergodic and non-ergodic, and one must wonder whether in economic life we also face a number of such complex processes that are both ergodic and non-ergodic. For example, business cycles are a real and repeated empirical regularity in modern capitalist systems. Asset bubbles and their collapse appear in an admittedly highly irregular but cyclical way on unregulated or poorly regulated secondary asset markets, even though strict prediction of quantities and turning points with mathematical probability is not possible, and movements of specific prices on secondary asset markets are surely non-ergodic.


BIBLIOGRAPHY
Kagan, B. A. 1995. Ocean-Atmosphere Interaction and Climate Modelling (trans. Mikhail Hazin), Cambridge University Press, Cambridge.

Lorenz, Edward N. 1979. “Forced and Free Variations of Weather and Climate,” Journal of the Atmospheric Sciences 36.8: 1367–1376.

Thursday, July 11, 2013

Probability and Uncertainty

There are two fundamental types of probability with subcategories:
(1) Physical/Objective probabilities (class probabilities), divided into:
(i.) A priori probabilities (mathematical/Classical probabilities)
(ii.) Relative frequency probabilities (or a posteriori/empirical/experimental probabilities), and
(2) Subjective probability (or evidential/Bayesian probability).
These are discussed below, with the issue of uncertainty in section (3).

(1) Physical/Objective Probabilities
Again, these are divided into:
(i.) A priori probabilities (mathematical/Classical probabilities), and
(ii.) Relative frequency probabilities (or a posteriori/empirical/experimental probabilities).
Objective probabilities are either in practice or in theory quantifiable with a numerical value (or numerical coefficient of probability). The numerical value that describes the likelihood of an occurrence or event can range from 0 (impossibility) to 1 (certainty).

A priori probabilities can be calculated from antecedent information and before the experiment or the event in question, such as probabilities of coin tosses.

Relative frequency probabilities, on the other hand, are derived from the empirical data of a sufficiently representative, random sample. Usually a reference class and an attribute of interest are involved, and in theory the probability can be expressed as a numerical value, calculated as a fraction where the denominator is the number of members in the reference class and the numerator is the number of members of the reference class who have the attribute in question. Probabilities are assigned to events on the basis of the available evidence or sample, and may therefore differ when people have samples of different sizes.

But it is also possible to view a priori probabilities as relative frequency probabilities: the probability of heads in a fair coin toss (0.5) can be conceived as the relative frequency of that outcome in repeated coin-tossing experiments, and as the number of repetitions approaches infinity the relative frequency will supposedly approach 0.5.
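
A quick simulation illustrates this frequentist reading of the coin toss (a minimal sketch, assuming only NumPy):

```python
# Law of large numbers in action: the relative frequency of heads in
# simulated fair-coin tosses approaches the a priori value of 0.5.
import numpy as np

rng = np.random.default_rng(42)
tosses = rng.integers(0, 2, size=1_000_000)              # 1 = heads
running_freq = np.cumsum(tosses) / np.arange(1, tosses.size + 1)

for n in (10, 1_000, 100_000, 1_000_000):
    print(f"after {n:>9,} tosses: frequency of heads = {running_freq[n - 1]:.4f}")
```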

Advocates of the frequentist interpretation of probability might contend that a priori probabilities do not exist, but are ultimately explained by relative frequencies. It is interesting that Ludwig von Mises referred to objective probabilities as “class probabilities,” under the influence of his brother Richard von Mises (1883–1953), a proponent of the frequentist interpretation of physical probabilities.

I assume (I could be wrong) that if Bayesian probability uses frequency probabilities, it may also be able to yield objective probabilities.

A fundamental point is that risk (as opposed to uncertainty) is associated with objective probabilities, either a priori probabilities or relative frequency probabilities, when a numerical value can be assigned, as Frank Knight argued (though Knight’s terminology was potentially misleading as he also called risk “measurable uncertainty”).

Post Keynesians would argue that risk is not the relevant concept in many entrepreneurial investment decisions, but uncertainty.

(2) Subjective Probability (or evidential probability/Bayesian probability)
In instances where probabilities of events cannot be analysed in terms of relative frequencies or because the events are unique and cannot be included in a reference class, probability theory has been developed that measures “degrees of belief,” and that can be termed “subjective probability.” The usual procedure for this is some form of Bayesian probability theory.

In neoclassical economics, subjective probability theory was developed from the work of John von Neumann, Oskar Morgenstern, Frank Ramsey, Bruno de Finetti, and Leonard J. Savage, the latter of whom (drawing on Bayesian probability theory as well) formulated a formal model of decision-making where optimal decisions are made to maximise expected utility, and probability distributions are given by subjective evaluations.

But even here uncertainty is seen as a state of the mind, not as a state of the world, and ultimately Walrasian general equilibrium theory in its various forms requires real, objective probabilities to actually exist for events in economic decision making, and for the subjective probabilities of agents to converge towards these objective probabilities over time.

Curiously, despite their subjectivism, Austrian economists reject the expected-utility representation of decision making under uncertainty in neoclassical economics (Langlois 1994: 118). We should also note that Ludwig von Mises’s “case probability” is not really the same thing as Bayesian subjective probability. Case probability is a purely subjective form of probability, and Mises argued that “case probability is not open to any kind of numerical evaluation” (Mises 1998: 113). By contrast, Bayesianism does give numerical values to evidential probabilities, even if these are deemed subjective, and they are updated and revised in light of new evidence.
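
To make the contrast with Mises’s non-numerical case probability concrete, here is a minimal sketch of Bayesian updating using the standard Beta–Bernoulli conjugate model; the prior and the data below are purely hypothetical.

```python
# Minimal Bayesian updating sketch: a numerical degree of belief about a
# coin's bias, revised as evidence arrives (standard Beta-Bernoulli
# conjugate update; the observed tosses are invented).
alpha, beta = 1.0, 1.0                  # uniform prior over the bias

evidence = [1, 1, 0, 1, 1, 1, 0, 1]     # observed tosses (1 = heads)
for toss in evidence:
    alpha += toss                       # each head raises alpha
    beta += 1 - toss                    # each tail raises beta

posterior_mean = alpha / (alpha + beta)
print(f"degree of belief that the next toss is heads: {posterior_mean:.3f}")
```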

(3) Uncertainty
In understanding uncertainty, the distinction between ergodic and non-ergodic processes is important. For neoclassical theory, reliable knowledge of the future requires the assumption of the ergodic axiom. Ergodicity is a property of a process or phenomenon in which the time and/or space averages of certain outcomes or attributes of the system either coincide for an infinite series or converge as the finite number of observations increases (Dunn 2012: 434). Thus a sufficient sample of the past can be said to reveal the future in an ergodic process.

But, for Post Keynesians, the complications involved in assessing the ergodic or non-ergodic nature of an economic process might be considerable, especially as there exist:
(1) genuinely ergodic economic processes/phenomena;

(2) genuinely non-ergodic economic processes/phenomena;

(3) economic processes/phenomena that appear ergodic for short periods of calendar time, but may change. (Dunn 2012: 435).
For example, non-stationarity and Shackle’s “crucial decision” concept in decision making are sufficient conditions for non-ergodicity, but not necessary conditions (Dunn 2012: 435–436). Future events or processes that are created by human agency are, above all, candidates for non-ergodicity.

Events where objective probabilities exist imply an ergodic world or a justified use of the ergodic axiom. Information from past and present data series should allow a probability estimate that approaches the objective numerical value as the data increases, even for future events.

Keynesian uncertainty (in the sense of Keynes and Post Keynesianism) stresses the unknowable nature of the future and the inappropriateness or profound limitations of probability theory.

Although there is not an exact equivalence between all the various concepts below (and perhaps some important differences), these concepts of uncertainty are roughly similar to Keynesian uncertainty:
(1) Knightian (unmeasurable) uncertainty;

(2) Misesian case probability;

(3) G. L. S. Shackle’s radical uncertainty;

(4) Ludwig Lachmann’s radical uncertainty;

(5) Austrian “structural uncertainty” (Langlois 1994: 120);

(6) Loasby’s partial ignorance, and

(7) O’Driscoll and Rizzo’s genuine uncertainty.
When the idea of fundamental uncertainty is understood as a crucial one for economic science, the next question is: how do economic agents act and make decisions under uncertain conditions?

George L. S. Shackle developed a theory of decision making under uncertainty that dispensed with probability theories in describing such behaviour, a project derived from the work of Frank Knight and Keynes. In contrast, as we have seen, mainstream neoclassical economics via Arrow adopted the use of subjective probability in decision-making theory, and effectively denied the distinction between (1) risk and (2) Knightian/Keynesian uncertainty.

Neoclassical theory was influenced by the work of Frank Ramsey and Leonard J. Savage and essentially went down the path of subjective probability theory with a Bayesian flavour.

I conclude by posing some other questions that seem important to me:
(1) What is the contribution and value of Gilboa and Schmeidler’s non-additive probability approach to decision-making under uncertainty?

(2) To what extent did Ludwig von Mises follow the frequentist interpretation of probability of his brother Richard von Mises?

(3) Knight made a distinction between “statistical probability” and “estimated probability.” Is “estimated probability” more or less “subjective probability”?

(4) What is the significance of Daniel Kahneman and Amos Tversky’s critiques of standard economic decision making theory, and that of Daniel Ellsberg in Risk, Ambiguity and Decision (2001)?
BIBLIOGRAPHY
Copi, Irving, Cohen, Carl and Kenneth McMahon. 2011. Introduction to Logic (14th edn.). Prentice Hall, Boston, Mass. and London.

Dunn, S. P. 2012. “Non-Ergodicity,” in J. E. King (ed.), The Elgar Companion to Post Keynesian Economics (2nd edn.), Edward Elgar, Cheltenham, UK and Northampton, MA. 434–439.

Langlois, R. 1994. “Risk and Uncertainty,” in Peter J. Boettke (ed.), The Elgar Companion to Austrian Economics. E. Elgar, Aldershot. 118–122.

Mises, L. 1998. Human Action: A Treatise on Economics. The Scholar's Edition. Mises Institute, Auburn, Ala.

Runde, Jochen. 2000. “Shackle on Probability,” in Stephen F. Frowen and Peter Earl (eds.), Economics as an Art of Thought: Essays in Memory of G. L. S. Shackle. Routledge, New York.

Skyrms, B. 2010. “Probability, Theories of,” in Jonathan Dancy, Ernest Sosa, and Matthias Steup (eds.), A Companion to Epistemology (2nd edn.). Wiley-Blackwell, Oxford. 622–626.

Wednesday, March 2, 2011

Uncertainty and Non-Ergodic Stochastic Systems

The concept of uncertainty in economic life was used by Keynes in the General Theory (1936) and also in an article defending his new theory the next year (see Keynes, “The General Theory of Employment,” Quarterly Journal of Economics 51 [1937]: 209–223).

Paul Davidson notes the nature of uncertainty in the Keynesian/Knightian sense:
“Keynes’s description of uncertainty matches technically what mathematical statisticians call a nonergodic stochastic system. In a nonergodic system, one can never expect whatever data set exists today to provide a reliable guide to future outcomes. In such a world, markets cannot be efficient” (Davidson 2002: 187).

“Keynes … rejected this view that past information from economic time-series realizations provides reliable, useful data which permit stochastic predictions of the economic future. In a world where observations are drawn from a non-ergodic stochastic environment, past data cannot provide any reliable information about future probability distributions. Agents in a non-ergodic environment ‘know’ they cannot reliably know future outcomes. In an economy operating in a non-ergodic environment, therefore – our economic world – liquidity matters, money is never neutral, and neither Say’s Law nor Walras’s Law is relevant. In such a world, Keynes’s revolutionary logical analysis is relevant” (Davidson 2006: 150).
Certain types of phenomena in our universe are what mathematicians call non-ergodic stochastic systems, and the concept of radical uncertainty applies to them: for example, medium-term weather events, financial markets, economies, and various other natural systems studied in physics.

In these systems, past data is not a useful tool from which one can derive an objective probability score for some specific, future state of a quantitative variable in the system. Of course, such a system can still have trends, cycles and oscillations, both in the past and future. For example, stock markets certainly have cycles of bull and bear phases, but trying to predict the specific value of some stock x, say, two years from now with an objective probability score is not possible.
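
A standard way to dramatise this gap between ensemble behaviour and individual histories (a common textbook illustration, not taken from Davidson) is a multiplicative gamble, sketched below in Python: the ensemble average grows round after round, yet the typical individual path decays, so no single history reveals the ensemble.

```python
# Multiplicative gamble: each round, wealth is multiplied by 1.5 (heads)
# or 0.6 (tails). Ensemble mean growth per round is 1.05, but the
# time-average growth rate is sqrt(1.5 * 0.6) ~ 0.949 < 1.
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_rounds = 100_000, 50
factors = rng.choice([1.5, 0.6], size=(n_paths, n_rounds))
wealth = factors.prod(axis=1)

print("ensemble average wealth:", wealth.mean())      # ~ 1.05**50 ~ 11.5
print("median path wealth:     ", np.median(wealth))  # ~ 0.949**50 ~ 0.07
```

The lesson matches the text: sampling one path over time (which is all any investor ever experiences) tells you nothing reliable about the ensemble average, and that divergence is the hallmark of a non-ergodic process.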

But the fundamental point is that it is still possible for a powerful agency or entity to reduce uncertainty in these systems, or at least in theory in some of them. It is entirely possible that in the future – with a far more advanced human civilization – we could use technology to control local, regional or perhaps even global weather.

And even today a powerful entity like the government can intervene to reduce uncertainty in the non-ergodic stochastic system we call the economy.


Is Climate a Non-Ergodic Stochastic System?

Does the earth’s climate system have the property of non-ergodicity? This question has occurred to me more than once, but I am actually unsure of the answer.

Some quick research suggests that climate models appear to make an ergodicity assumption about climate systems:
“Thus, it is perfectly valid to consider our climate a realization of a continuous stochastic process even though the time-evolution of any particular path is governed by physical laws. In order to apply this fact to our diagnostics of the observed and simulated climate we have to assume that the climate is ergodic. That is, we have to assume that every trajectory will eventually visit all parts of phase space and that sampling in time is equivalent to sampling different paths through phase space. Without this assumption about the operation of our physical system the study of the climate would be all but impossible.

The assumption of ergodicity is well founded, at least on shorter time scales, in the atmosphere and the ocean. In both media, the laws of physics describe turbulent fluids with limited predictability (ie, small perturbations grow quickly, so two paths through phase space diverge quickly).” (von Storch and Zwiers 1999: 29–30).
But then what about longer time scales? If “the laws of physics describe turbulent fluids with limited predictability” on short time scales, what sort of predictability can they provide on medium or long term time scales?

Let’s assume, for the sake of argument, that long term climate is non-ergodic, in the way that a free market economy is. Does that mean that all intervention to affect the state of such a system would be useless and ineffective? Does it mean that we are all doomed to (in a manner of speaking) live in a “free market” climate forever?

In fact, that does not follow at all. It is very likely that our future technology, when it becomes sophisticated and powerful enough, will be used by humans to intervene in and control climate, e.g., by preventing ice ages.


BIBLIOGRAPHY

David, P. A. 2007. “Path Dependence, its Critics and the Quest for ‘Historical Economics,’” in G. M. Hodgson, The Evolution of Economic Institutions: A Critical Reader, Edward Elgar, Cheltenham. 120–144.

Davidson, P. 2002. Financial Markets, Money, and the Real World, Edward Elgar, Cheltenham.

Davidson, P. 2004. “Uncertainty and Monetary Policy,” in P. Mooslechner, H. Schuberth, and M. Schürz (eds), Economic Policy under Uncertainty: The Role of Truth and Accountability in Policy Advice, Edward Elgar, Cheltenham. 233–260.

Davidson, P. 2006. “Keynes and Money,” in P. Arestis and M. Sawyer (eds), A Handbook of Alternative Monetary Economics, Edward Elgar, Cheltenham, UK and Northampton, Mass. 139–153.

Keynes, J. M. 1937. “The General Theory of Employment,” Quarterly Journal of Economics 51 (February): 209–223.

Storch, H. von and F. W. Zwiers, 1999. Statistical Analysis in Climate Research, Cambridge University Press, Cambridge, UK and New York.

Wednesday, July 10, 2013

Davidson on “Reality and Economic Theory”

Davidson (1996) provides an important study of the nature of uncertainty, probability and decision making in economic life.

In standard neoclassical theory, rational actors must be able to make statistically reliable probability forecasts:
“To make statistically reliable forecasts of the future, agents need to obtain and analyze sample data from the future. Since that is impossible, the assumption of a predetermined-ergodic-reality permits the modeler to assert that sampling from past and present market data is the same thing as obtaining a sample from the future. Ergodicity implies that future outcomes are merely the statistical shadow of past and current market signals. Presuming ergodic conditions reduces the modeler's problem to explaining how and at what cost agents obtain and process existing data (in the form of ‘price signals’).

Unlike the old classical economists, rational expectations theorists do not claim that the agents in their models obtain complete knowledge of reality. Rational expectations models only require agents to use existing market price signals to calculate subjective probabilities that are statistically reliable estimates of the objective probability function describing the reality that governs future events. Subjective probabilities calculated from current and/or past market data can provide these statistically reliable estimates if, and only if, the economic system is ergodic. Hence, all rational expectations models are based on the ergodic axiom.” (Davidson 1996: 480).
Davidson proposes the following classification of the way in which mainstream and heterodox economic theories treat economic reality and human knowledge of the future:
“Concepts of External Economic Reality
A. Immutable reality
Type 1. In both the short run and the long run, the future is known or at least knowable. Examples are:
a. Classical perfect certainty models.
b. Actuarial certainty equivalents, such as rational expectations models.
c. New Classical models.
d. Some New Keynesian theories.

Type 2. In the short run, the future is not completely known due to some limitation in human information processing and computing power. Examples are:
a. Bounded rationality theory
b. Knight’s theory of uncertainty
c. Savage’s expected utility theory.
d. Some Austrian theories.
e. Some New Keynesian models (e.g., coordination failure).
f. Chaos, sunspot, and bubble theories.
B. Transmutable or creative reality: Some aspects of the economic future will be created by human action today and/or in the future. Examples of theories using this postulate are:
a. Keynes’ General Theory and Post Keynesian monetary theory.
b. Post-1974 writings of Sir John Hicks.
c. G.L.S. Shackle’s crucial experiment analysis.
d. Old Institutionalist theory.” (Davidson 1996: 485).
The Type 2 models assume that in the short run economic agents are ignorant about the immutable reality and have very incomplete knowledge, because there are serious limitations on the human ability to collect and analyse the time series data necessary to obtain reliable knowledge (Davidson 1996: 484). In Type 2 models, economic agents are therefore subject to a type of epistemological uncertainty.

Regarding decision making in the Type 2 theories, Davidson notes:
“Type 2 immutable reality models typically employ a subjectivist orientation. Agents form subjective expectations (usually, but not necessarily in the form of Bayesian subjective probabilities). In the short run, subjective probabilities need not coincide with the presumed immutable objective probabilities. Today’s decision makers, therefore, can make short-run errors regarding the uncertain (i.e., probabilistic risky) future. Agents ‘learn’ from these short-run mistakes so that subjective probabilities or decision weights tend to converge onto an accurate description of the programmed external reality.” (Davidson 1996: 486).
Davidson then draws attention to the distinction between risk and uncertainty in Frank Knight’s work:
“the practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from the statistics of past experience), while in the case of uncertainty, this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique” (Knight 1921: 233).
Finally the Post Keynesian view, as derived from Keynes himself:
“For Keynes and the Post Keynesians, long-run uncertainty is associated with a nonergodic and transmutable reality concept. A fundamental tenet of Keynes’ revolution ... is that probabilistic risks must be distinguished from uncertainty where existing probabilities are not reliable guides to future performance. Probabilistic risk may characterize routine, repeatable economic decisions where it is reasonable to presume an immutable (ergodic) reality. Keynes ..., however, rejected the ergodic axiom as applicable to all economic expectations when he insisted that the ‘state of long term expectations’ involving non routine matters that are ‘very uncertain’ form the basis for important economic decisions involving investment, the accumulation of wealth, and finance. In these areas, agents ‘know’ they are dealing with an uncertain, nonprobabilistic creative economic external reality.” (Davidson 1996: 492–493).
Shackle’s concept of the “crucial choice” is also relevant here. A “crucial choice” decision is one that has a fundamental influence on the economic environment, and the conditions under which the decision is made are not repeated (Davidson 1996: 495). The transmutable economic future is created by such decisions, but often contrary to what agents intended. As Davidson notes, for Shackle, the
“future is not discovered through the Bayes-LaPlace theorem regarding relative frequencies or any error learning model.” (Davidson 1996: 499).
Furthermore,
“[s]ome economic processes may appear to be ergodic, at least for short subperiods of calendar time, while others are not. The epistemological problem facing every economic decision maker is to determine whether (a) the phenomena involved are currently governed by probabilities that can be presumed ergodic – at least for the relevant future, or (b) nonergodic circumstances are involved.” (Davidson 1996: 501).
Where economic phenomena are non-ergodic, “discovered empirical regularities in past data cannot be used to predict the future” (Davidson 1996: 502).


BIBLIOGRAPHY
Davidson, Paul. 1996. “Reality and Economic Theory,” Journal of Post Keynesian Economics 18.4: 479–508.

Saturday, May 17, 2014

The Epistemic Types of Probability

There is a fundamental epistemic/epistemological division in types of probability, as we can see in the diagram below.

[Diagram: the epistemic types of probability, dividing a priori from a posteriori probabilities]
The major division is between
(1) a priori probabilities, and

(2) a posteriori probabilities.
All a priori probabilities can really be understood as analytic a priori: they are the product of a formal, abstract system in which it is simply assumed by definition that the system has a finite set of exhaustive and mutually exclusive outcomes, all of which are equiprobable.

That is, an a priori probability is necessarily true and a mathematical certainty because it is the product of an analytic a priori model that does not really describe reality, but simply assumes as part of its model the following factors:
(1) the abstract system is random, in the sense that, when an outcome occurs, it is from a set of possible known outcomes, and one such outcome is sure to occur. E.g., in an abstract tossing of a fair coin, we know that the result will be heads or tails and can denote the set of outcomes by {heads, tails}. The latter is the sample space.

(2) the abstract system has a sample space (or set of outcomes) that is known, in the sense that there are finitely many outcomes of the random process and these can be stated. (An event is defined as a subset of the sample space.)

(3) All outcomes are equally likely to occur (or equiprobable).
Of course, a real world system might have these properties, but whether it does or not is a matter for empirical investigation.

In the calculation of an analytic a priori probability, we do not look at the real world, but simply assume an abstract model where all these conditions hold as true.

Simply stated, the probability P(E) of any event E in a finite sample space S, where all outcomes are equally likely, is the number of outcomes for E divided by the total number of outcomes in S.
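
This rule translates directly into a few lines of code (a trivial sketch, using exact fractions and a single die roll as the example):

```python
# Classical probability: P(E) = |E| / |S| for a finite sample space S of
# equiprobable outcomes; here, the event "roll an even number" on a die.
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}
event = {n for n in sample_space if n % 2 == 0}

print(Fraction(len(event), len(sample_space)))   # 1/2
```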

By such calculations, probability and uncertainty can be quantified with objective numeric “points.” In this case, the uncertainty is normally called “risk” or “Knightian risk,” after the work of Frank Knight.

This is ultimately pure mathematics and analytic a priori knowledge: the probabilities we create in this way have necessary truth, because they are abstract models.

We can see this in the way that any a priori probabilities describing games of chance like dice throwing, roulette or card games are really describing abstract games, not real world ones. In a priori probabilities describing card games or roulette, for example, we are just assuming by definition that the games are fair and not rigged, and that the system is truly random.

As soon as you move to a real world game of chance, you cannot be absolutely sure that the game is fair, not rigged, and truly random, because we can only know this by a posteriori knowledge, which is fallible and does not yield certainty.

When we calculate the probability of winning at a real world game of roulette, we are taking an abstract model from pure mathematics and using it as an applied mathematical system: this means that the model is transformed from an analytic a priori system to a synthetic a posteriori system. We do not get epistemological certainty about the truth of the probabilities because any given real world system (such as the game of roulette) might not conform to the assumptions of the model (e.g., it might be rigged or the wheel might be biased).

Indeed, a better applied mathematical model for calculating probabilities of real world games of chance is the relative frequency approach: in roulette or dice, for example, you look at the long-run relative frequencies of outcomes in the game and see if they converge over time to stable relative frequencies as predicted by a priori probability.
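
As a sketch of that procedure (with a simulated die standing in for a real one, so the “data” here are artificial):

```python
# Relative-frequency check: simulate many die rolls and compare the
# empirical frequency of each face with the a priori value of 1/6. With
# a real (possibly rigged) die, this check is the best we can do, and
# it remains fallible.
import numpy as np

rng = np.random.default_rng(3)
rolls = rng.integers(1, 7, size=600_000)

for face in range(1, 7):
    freq = np.mean(rolls == face)
    print(f"face {face}: frequency = {freq:.4f} (a priori 1/6 = {1/6:.4f})")
```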

Again, by such calculations, probability and uncertainty may well be quantified with objective numeric “points” – assuming the process really does have, and continues to have, stable relative frequencies. If so, the uncertainty could once again be interpreted as “risk” or “Knightian risk.”

But even this relative frequency approach is a posteriori and does not guarantee certainty. For no matter how many outcomes you look at, it is possible that the next outcome might have been biased in some way or rigged: so that the probabilities are fallible and contingent. Even in natural systems where we have some reason to think that nature produces the objective probabilities as a matter of ontological necessity, Hume’s problem of induction throws up epistemological problems that still prevent us from obtaining epistemological certainty.

Matters are even worse once we get to social and economic systems: for here stable relative frequencies may not even exist, or when they do, they exist only in the short run, as noted by Paul Davidson:
“[s]ome economic processes may appear to be ergodic, at least for short subperiods of calendar time, while others are not. The epistemological problem facing every economic decision maker is to determine whether (a) the phenomena involved are currently governed by probabilities that can be presumed ergodic – at least for the relevant future, or (b) nonergodic circumstances are involved.” (Davidson 1996: 501).
At this point, the difference between ergodic and non-ergodic systems becomes important. In non-ergodic systems, the relative frequency approach will not work, since stable relative frequencies cannot be obtained.

With non-ergodic systems and many other types of probabilities (like the probabilities of certain past or future events), we must move to yet another type of probability: epistemic probability (sometimes called evidential probability).

An epistemic probability is a property of propositions and also of inferences in inductive arguments (depending on the validity and soundness of the inductive arguments and evidence offered in support of it).

An epistemic probability has a degree of probability and uncertainty, but it is not an objectively numeric “point” probability. It is better described as a degree of belief, on the basis of empirical evidence and inductive argument.

When we face epistemic probabilities derived from experience, empirical evidence and inductive arguments, and no objective “point” probabilities (either in an a priori or relative frequency sense) can be given, then the probability of a proposition also comes with some degree of uncertainty, from low to very high.

When one has no relevant or convincing evidence on which to make an inductive argument, one would face total or radical uncertainty. Exactly when and in what circumstances one does face radical uncertainty, of course, could be a matter of some dispute.

It is important to remember that, in the Post Keynesian tradition, the word “uncertainty” is usually understood to be non-quantifiable and restricted to that sense, whereas “risk” is quantifiable.

BIBLIOGRAPHY
Davidson, Paul. 1996. “Reality and Economic Theory,” Journal of Post Keynesian Economics 18.4: 479–508.

Sunday, July 14, 2013

A Classification of Types of Probability and Theories of Probability

There are three fundamental conceptual divisions in the way that probability theory has been interpreted:
(1) the Classical interpretation;

(2) the epistemological (or epistemic) probability theory, further divided into
(i.) the logical interpretation;
(ii.) the subjective interpretation (personalism, subjective Bayesianism);
(iii.) the intersubjective interpretation;
(3) objective probability theory, further divided into
(i.) the frequency interpretation;
(ii.) the propensity interpretation.
(Gillies 2000: 2).
These are basically overarching philosophical interpretations of probability. The Classical interpretation is probably of historical interest only.

Keynes and Harold Jeffreys held the logical interpretation (2.i), which nevertheless seems widely rejected by modern philosophers of probability.

Frequency theorists include John Venn, A. N. Kolmogorov and Richard von Mises.

A subjective personalist theory of probability was developed by Bruno de Finetti, Frank P. Ramsey, and Leonard J. Savage. A decision making theory was developed from this that is still fundamental in neoclassical economics.

Regarding actual types of probability as a property, and not as a philosophical theory, there are different classifications (see Appendix 1), but perhaps there are two fundamental types, as argued by Rudolf Carnap and Ian Hacking:
(1) Epistemic/epistemological probability
A property of inferred propositions in inductive arguments, depending on the validity and soundness of the inductive arguments and evidence offered in support of it. It is thus a partial logical entailment. This is basically inductive probabilism.

(2) Aleatory probability
Long-run, relative frequency probabilities that are numerical values and that pertain to properties of elements of sets, classes or kinds. (McCann 1994: 27).
Basic notation to express probability is
P(h|e),
where P is the probability,
h is some hypothesis or conclusion, and
e is the evidence or premises.
This is usually read as “the probability of h given evidence e.” Numerical values for probability lie between 0 and 1.

0 denotes impossibility and 1 certainty.

Epistemic/epistemological probability is obviously strongly connected with induction, generally the following types of argument:
(1) induction by simple enumeration;
(2) argument by analogy;
(3) statistical syllogism, and
(4) induction to a particular.
While aleatory probabilities are capable of having numerical values, it seems that many types of inference from inductive arguments are not.

But even many events that look like they might have aleatory probabilities cannot yield them:
“In games of chance, scientific inference is possible because … an aggregate regularity (in fair games) is readily apparent; chance affords an objective, homogeneous, stationary series. In empirically observable series, on the other hand, series chosen from a potentially unstable natural environment, such homogeneity and regularity may not be in evidence. One cannot a priori assume stability; rather one must be alert to the possibly chaotic nature of any empirical series which may, over the short and the long run, generate patterns for which a probability distribution does not exist or one which generates no discernible pattern whatsoever.” (McCann 1994: 32–33).
For example, what use is the time series data on the average daily selling price of a stock in providing an objective numerical value for the probability that this stock will have value y on 15 July 2017? The answer is: it is useless.

The assumption of an objective, homogeneous, stationary process producing events or variables over time, in the past, present and future, is the ergodic hypothesis or ergodic axiom, familiar from neoclassical economics. If the relative frequencies of outcomes of some process converge over a long-run time series, then the process is ergodic (Glickman 2003: 368). But many economic phenomena are non-ergodic, and non-stationarity, for example, is a sufficient condition for non-ergodicity. Therefore objective probabilities do not exist in such processes: past and present time series data are of limited value or simply useless for strict prediction or forecasting in terms of numerical probabilities.
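
Since non-stationarity is singled out here as a sufficient condition for non-ergodicity, it is worth sketching one common (and itself fallible) diagnostic: an augmented Dickey–Fuller test, applied below to artificial data via the statsmodels library.

```python
# Stationarity diagnostic: an augmented Dickey-Fuller test on two toy
# series. H0 is that the series has a unit root (is non-stationary).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
white_noise = rng.normal(size=2000)               # stationary
random_walk = np.cumsum(rng.normal(size=2000))    # non-stationary

for name, series in [("white noise", white_noise), ("random walk", random_walk)]:
    p_value = adfuller(series)[1]
    print(f"{name}: ADF p-value = {p_value:.3f}")
# A low p-value rejects the unit root (consistent with stationarity);
# the random walk typically yields a high p-value.
```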

But probabilities – whether (1) objectively numerical or (2) inductive and non-numerical – only form a basis and criterion for decision and action, and decision making theory must be concerned with how people actually make decisions in particular situations, and avoid highly abstract, logically incoherent, and empirically false theories.

APPENDIX 1
Other systems of classifying types of probability as a property include the following:
(I.) Wesley Salmon (1967):
(1) Classical;
(2) subjective;
(3) frequency;
(4) logical, and
(5) personal.
(II.) Roy Weatherford (1982):
(1) Classical;
(2) subjective/personal;
(3) frequency, and
(4) logical.
(III.) Leonard J. Savage (1972):
(1) necessarian;
(2) personalist, and
(3) frequentist.
Further Reading
Abrams, Marshall. 2012. “Mechanistic Social Probability: How individual Choices and Varying Circumstances produce Stable Social Patterns,” in Harold Kincaid (ed.), The Oxford Handbook of Philosophy of Social Science, Oxford University Press, Oxford. 184–226.

Galavotti, M. C. 2010. “Probability,” in Stathis Psillos and Martin Curd (eds.), The Routledge Companion to Philosophy of Science. Routledge, London and New York. 414–424.

Hartmann, S. and J. Sprenger, 2010. “Bayesian Epistemology,” in Sven Bernecker and Duncan Pritchard (eds.), The Routledge Companion to Epistemology. Routledge, London. 609–620.

Humphreys, Paul. 1998. “Probability, Interpretations of,” in Edward Craig (ed.), Routledge Encyclopedia of Philosophy. Volume 7, Nihilism - Quantum Mechanics. Routledge, London.

Interpretations of Probability, Stanford Encyclopedia of Philosophy, 2002 (rev. 2011)
http://plato.stanford.edu/entries/probability-interpret/

Loewer, Barry. 1998. “Probability Theory and Epistemology,” in Edward Craig (ed.), Routledge Encyclopedia of Philosophy. Volume 7, Nihilism - Quantum Mechanics. Routledge, London. 705–711.

Skyrms, B. 2010. “Probability, Theories of,” in Jonathan Dancy, Ernest Sosa, and Matthias Steup (eds.), A Companion to Epistemology (2nd edn.). Wiley-Blackwell, Oxford. 622–626.


BIBLIOGRAPHY
Gillies, Donald. 2000. Philosophical Theories of Probability. Routledge, London and New York.

Glickman, M. 2003. “Uncertainty,” in J. E. King (ed.), The Elgar Companion to Post Keynesian Economics. E. Elgar Pub., Cheltenham, UK and Northampton, MA. 366–370.

McCann, Charles R. 1994. Probability Foundations of Economic Theory. Routledge, London.

Salmon, Wesley Charles. 1967. Foundations of Scientific Inference. University of Pittsburgh Press, Pittsburgh.

Savage, Leonard Jimmie. 1972. The Foundations of Statistics (2nd rev. edn.). Dover, New York.

Weatherford, Roy. 1982. Philosophical Foundations of Probability Theory. Routledge & Kegan Paul, London.

Thursday, August 14, 2014

The Three Axioms at the Heart of Neoclassical Economics

As identified by Paul Davidson, they are as follows:
(1) the neutral money axiom;

(2) the ergodic axiom, and

(3) the gross substitution axiom (Davidson 2002: 40–45; Davidson 2009: 26–31).
While Fazzari (2009) argues that the neutral money axiom is more a consequence of unrealistic models than a real axiom (Fazzari 2009: 6), the concept of neutral money holds that changes in the money supply will only affect nominal values (e.g., prices, money wages, etc.), not real variables (such as production, employment, and investment).

While most neoclassical economists are of course willing to concede that money is non-neutral in the short run, nevertheless most do think money is neutral in the long run (Davidson 2002: 41).

Keynes and Post Keynesians, by contrast, reject the view that money can ever be neutral even in the long run (Davidson 2002: 41).

The ergodic axiom holds that the probability of future events can be predicted objectively by means of statistical analysis from past data (Davidson 2002: 43). But the world contains many non-ergodic processes and phenomena where statistical data simply does not yield probabilities of this sort: that is, fundamental uncertainty is a real, frequent and ineradicable aspect of economic life.

The gross substitution axiom is the idea that every good can in theory be a substitute for any other good (Davidson 2002: 43). In essence, this means that the law of demand can be applied to all goods, assets (even financial assets on secondary financial markets) and money.

This is unrealistic. As the blogger “Unlearning Economics” puts it rather pithily,
“economic theory assumes there is a price at which all commodities will be preferred to one another, which implies that at some price you’d substitute beer for your dying sister’s healthcare.”
“The Illusion of Mathematical Certainty,” Unlearning Economics, July 10, 2014.
http://unlearningeconomics.wordpress.com/2014/07/10/the-illusion-of-mathematical-certainty/
But the problems with the law of demand actually run far deeper than this, as pointed out by Steve Keen.

The gross substitution axiom is also unrealistic for a much more profound reason, as Keynes pointed out: it breaks down when applied to financial assets and newly produced goods (Davidson 2002: 44).

Fundamentally, money and financial assets have zero or near zero elasticity of substitution with producible commodities:
“The elasticity of substitution between all (nonproducible) liquid assets and the producible goods and services of industry is zero. Any increase in demand for liquidity (that is, a demand for nonproducible liquid financial assets to be held as a store of value), and the resulting changes in relative prices between nonproducible liquid assets and the products of industry will not divert this increase in demand for nonproducible liquid assets into a demand for producible goods and/or services” (Davidson 2002: 44).
And once we see that money and secondary financial assets (as demanded as a store of value) have a zero or very small elasticity of production, it follows that a rise in demand for money or such financial assets (and a rising “price” for such assets) will not lead to businesses “producing” money or financial assets by hiring unemployed workers (Davidson 2002: 44).
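
For reference, the standard textbook definition of the elasticity of substitution between two goods x₁ and x₂ (my gloss, not Davidson’s own formulation) can be written as:

```latex
\sigma = \frac{d\,\ln(x_2 / x_1)}{d\,\ln(\mathrm{MRS}_{12})}
```

where MRS₁₂ is the marginal rate of substitution. On this definition, Davidson’s claim that σ = 0 between liquid assets and producible output means that, however relative prices move, the ratio in which the two are held does not change, so a rising demand for liquidity is never diverted into demand for producible goods.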

All this is sufficient to damn the gross substitution axiom.

All in all, the three axioms that form the basis of neoclassical economics cannot be taken seriously.

Further Reading
“The Law of Demand in Neoclassical Economics,” June 1, 2013.

“What is the Epistemological Status of the Law of Demand?,” September 19, 2013.

“Steve Keen on the Law of Demand,” September 20, 2013.

“Keynes on the Special Properties of Money,” May 8, 2011.

“F. H. Hahn in a Candid Moment on Neo-Walrasian Equilibrium,” January 29, 2011.

“More on the Gross Substitution Axiom,” July 28, 2011.

“Gold as Commodity Money and its Elasticity of Production,” November 18, 2011.

BIBLIOGRAPHY
Davidson, P. 2002. Financial Markets, Money, and the Real World. Edward Elgar, Cheltenham.

Davidson, Paul. 2009. John Maynard Keynes (rev. edn.). Palgrave Macmillan, Basingstoke.

Fazzari, Steven M. 2009. “Keynesian Macroeconomics as the Rejection of Classical Axioms,” Journal of Post Keynesian Economics 32.1: 3–18.

“The Illusion of Mathematical Certainty,” Unlearning Economics, July 10, 2014.
http://unlearningeconomics.wordpress.com/2014/07/10/the-illusion-of-mathematical-certainty/

Friday, July 19, 2013

Brady on Speculation in Financial Markets

Food for thought from Michael Emmett Brady:
“There is a long 400–500 year history that demonstrates repeatedly, time and time again, that past and current speculation always leads to some kind of future economic problem.

Keynes recognized that financial markets, for the last 400–500 years since the introduction of modern, fractional reserve banking, exhibited the same speculative pattern over and over and over and over again. …. Obama, Bernanke, and Geithner … bailed out the Wall Street speculator crowd again, just as they were bailed out in the early to late 1980’s by Paul Volcker and late 1990’s–early 2000’s by Alan Greenspan. The result is that another bubble in the stock markets is being created. These financial bubbles are ergodic because the same pattern repeats again and again. New types of financial assets and financing are created by the banking industry. In the 1920’s, for example, these new financial assets were balloon payments for houses and margin account financing for stocks. The creation of these new types of assets is called securitization. The next step is debt leveraging. This allows speculators and speculating bankers to maximize their speculative debt financing. The growing bubble is fed by herding and copycat behavior that automatically leads to the creation of a larger and larger bubble. The next stage occurs as the bubble leads to a mania, which leads to a panic, which inevitably leads to a crash, which always leads to an economic downturn, recession, or depression of some sort. These kinds of events are stationary because they keep repeating over and over again. Their ultimate collapse can be predicted with a probability approaching 1. However, they are not normally distributed. One can’t use the normal distribution to describe the time series data in financial markets. The underlying processes are given by the Cauchy distribution.”
Michael Emmett Brady, September 18, 2009
http://www.amazon.com/review/R32PPK2MQ5SQUG
I find the idea that the repeated rise and fall of bubbles per se in capitalism is ergodic worthy of further investigation.

Of course, one needs a strict definition of ergodicity and stationarity.

But another issue is how one defines “bubble.” It is entirely conceivable that a small or moderate bubble might in fact stabilise, reach a plateau, and then be followed by further bull or bear markets, instead of simply deflating in a significant way.

Of course, if one wants to limit the definition of “bubble” used here to large, debt-fuelled bubbles, which destabilise asset prices wildly, then the idea that the collapse of such bubbles “can be predicted with a probability approaching 1” is not so unreasonable, even though I assume that such a probability value would be what Keynes called non-numerical (Keynes 1921: 160), and could not be placed in the same class as a priori probabilities.
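
Brady’s distributional point at the end of the quotation can be illustrated with a small simulation (a sketch on artificial data, not an analysis of real market series): sample means of normally distributed data settle down, but Cauchy-distributed data have no finite mean, so the running average never converges, however long the series.

```python
# Fat tails and failure of the law of large numbers: running means of
# normal samples converge; running means of Cauchy samples do not.
import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000
normal = rng.normal(size=n)
cauchy = rng.standard_cauchy(size=n)

for k in (1_000, 100_000, 1_000_000):
    print(f"n={k:>9,}: normal mean = {normal[:k].mean():+.4f}, "
          f"cauchy mean = {cauchy[:k].mean():+.4f}")
# The normal column hugs 0; the Cauchy column keeps jumping around.
```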

BIBLIOGRAPHY
Keynes, John Maynard. 1921. A Treatise on Probability. Macmillan, London.

Tuesday, October 4, 2011

How Can Government Overcome Uncertainty?

S. D. Parsons poses the following question:
“Post Keynesian economists can, with considerable justification, criticize the view in some Austrian circles that it is possible to emphasize both uncertainty and market coordination. However, it would also seem that the Post Keynesian emphasis on uncertainty raises problems for the argument that governments can resolve coordination problems. ... Keynes may well have correctly identified problems of market coordination when he wrote, and correctly identified policy instruments to resolve them. However, given uncertainty, the past is a fickle guide to the future and, given transmutation, the world is now a different place. In conclusion, Post Keynesians have a valid point when they argue that an emphasis on economic uncertainty raises problems for the assumption that market coordination can occur in the absence of governmental intervention. However, it can also be argued that the emphasis on uncertainty raises problems for the assumption that market coordination can occur through government intervention.” (Parsons 2003: 9).
It is not, however, difficult to answer these charges.

When you introduce an intervention to influence the state of a nonergodic stochastic system, that process and its outcome are not in the same ontological category as the future of that system without intervention. The past data from which one draws inferences about what the intervention will do consist of examples of past interventions, ideally of the same type. For example, there is no doubt that induction from past data will not be a reliable method for predicting the future value of certain shares on the stock market, or the future value of the whole market as measured by some index; but predicting what happens when an entity with the power to influence certain shares or the whole system intervenes is a different matter. If the Treasury bought up the stock of a certain promising company, making the shares scarce when demand is high, and announced that it would even support the value of the shares, we could make an empirical prediction about the outcome, and that prediction could be falsified. How? I have already addressed the question of the epistemological justification for such things, and even for Keynesian stimulus (and other government interventions), here:
“Risk and Uncertainty in Post Keynesian Economics,” December 8, 2010.
The problem revolves around whether induction can be rationally justified. If one thinks that induction can be defended rationally, then inductive arguments using past empirical evidence can be used to provide justification for policy interventions; induction can be reliable when used outside of nonergodic stochastic systems or events. If one thinks that induction has no rational justification, then Karl Popper’s falsificationism by hypothetico-deduction can be used to test predictive hypotheses about what will happen in the future under government intervention. In the absence of falsification, we have empirical support for such policies.

Fundamentally, if Austrians or neoclassicals think that they can evade their own such epistemological problems, they are deeply mistaken. How, for example, does the Austrian praxeologist justify his belief that the axiom of the disutility of labour will continue to be true in the future? Mises explicitly tells us that this axiom is “not of a categorial and aprioristic character”, but that “experience teaches that there is disutility of labor” (Mises 1998: 65). In other words, it is a synthetic proposition and its truth is only known a posteriori. Praxeologists require either induction or Popper’s falsificationism by hypothetico-deduction using empirical evidence to justify their belief in its truth now and for the future.

The concept of radical uncertainty in the Post Keynesian or Knightian sense applies to non-ergodic stochastic systems. But human life does not consist only of non-ergodic systems. The economic system we know as capitalism, where most commodities are produced through decentralised investment decision-making by millions of agents, and consumed by other agents with shifting subjective utilities, is not the only institution of modern life. We have government and quasi-government entities, private non-profit organisations, private voluntary organisations, and, at the basic level, families.

The free market itself has developed certain institutions in an attempt to overcome uncertainty. Government interventions in economies are merely a much more powerful and effective instrument for reducing uncertainty than what has emerged on the market.

The many institutions that exist alongside and influence modern capitalism (such as law courts that enforce contracts, buffer stocks, and even central banks) have developed precisely to deal with uncertainty, as “outside” entities capable of reducing uncertainty by interventions designed to influence the state of the system. Law and order is a basic human institution without which commerce would be impossible, and it has been enforced through the ages essentially by governments, not by private enterprise. When, for example, the trade of the Roman Republic was threatened by pirates in the eastern Mediterranean, it was the state that ended that threat and allowed commerce to resume with confidence. Indeed, some conventions or institutions that reduce uncertainty (for example, forward/futures markets for commodities, and even money itself) are so deeply ingrained that we now think of them as a fundamental part of capitalism. Futures markets were developed to reduce uncertainty for producers of commodities, often primary commodities. There is a great deal of evidence that standardised coinage in Western European civilisation was essentially the invention of the state. Indeed, the state had a great role in monetising economies.

Central banks developed in the 19th and 20th centuries precisely because business and financial interests wanted a system that would reduce the uncertainty caused by liquidity crises and financial panics, because they were frightened by the potentially disastrous consequences of unregulated financial markets and banking systems.

It is interesting that the Austrian Ludwig Lachmann’s view that institutions have an important part to play in free market systems is similar to the view I have described above. It is important to note the logical consequences these ideas had for Lachmann as well:
“Because of his focus on uncertainty, Lachmann came to doubt that, in a laissez-faire society, entrepreneurs would be able to achieve any consistent meshing of their plans. The economy, instead of possessing a tendency toward equilibrium, was instead likely to careen out of control at any time. Lachmann thought that the government had a role to play in stabilizing the economic system and increasing the coordination of entrepreneurial plans. We call his position ‘intervention for stability.’” (Callahan 2004: 293).
While I doubt whether Lachmann’s interventions would have been anything but minimal by Post Keynesian standards, his intellectual journey is nevertheless a lesson for his fellow Austrians: once they take fundamental uncertainty and subjective expectations seriously, they will find themselves forced to much the same conclusions that he eventually drew.


BIBLIOGRAPHY

Barkley Rosser, J. 2010. “How Complex are the Austrians?,” in R. Koppl, S. Horwitz, and P. Desrochers (eds), What is So Austrian About Austrian Economics?, Emerald Group Publishing Limited, Bingley, UK. 165–180.

Callahan, G. 2004. Economics for Real People: An Introduction to the Austrian School (2nd edn), Ludwig von Mises Institute, Auburn, Ala.

Mises, L. von. 1998. Human Action: A Treatise on Economics (Scholar’s edn), Ludwig von Mises Institute, Auburn, Ala.

Parsons, S. D. 2003. “Austrian School of Economics,” in J. E. King (ed.), The Elgar Companion to Post Keynesian Economics, E. Elgar Pub., Cheltenham, UK and Northampton, MA. 5–10.

Sunday, August 6, 2017

How to Refute the Core of Austrian/Neoclassical Economics in Four Easy Points

Both Austrian and Neoclassical economics stem from the Marginalist revolution of the 1870s. Although there are important differences between the two schools, they share enough flawed common ground to make both subject to this critique:
(1) both Austrian and Neoclassical theory ultimately hold that free markets have a tendency towards general equilibrium, and hence economic coordination by means of a flexible wage and price system, and a (supposed) coordinating loanable funds market that equates savings and investment. This is an empirically false view of market economies: it is essentially the product of Marginalists from the 1870s onwards who had physics envy and wanted to model a market economy like a self-equilibrating physical system.

(2) the core Neoclassical and Austrian model in (1) is false because:
(i) market systems are complex human systems subject to degrees of non-calculable probability and fundamental uncertainty about the future, so that market economies would not converge to general equilibrium states even if wages and prices were perfectly flexible. This makes real human decision-making fundamentally different from the model proposed by Neoclassical economics (even with modern ad hoc models that invoke asymmetric information and bounded rationality), and, even if Austrians supposedly accept subjective expectations in decision-making, they fail spectacularly to apply them properly in their economic theory. At the heart of this failure of both Neoclassical and Austrian theory is the mistaken ergodic axiom.

Investment is essentially driven by expectations which are highly subjective and even irrational, and come in waves of general optimism and pessimism;

(ii) the loanable funds model is a terrible model of aggregate investment, partly because the mythical natural rate of interest cannot be defined outside one-commodity worlds, but above all because of (i), which is the point that kills both Austrian economics and the Neoclassical loanable funds model;

(iii) the price and wage system is highly inflexible, and even if it were flexible all sorts of factors prevent convergence to equilibrium states anyway (e.g., the reality of a non-ergodic future, subjective expectations, shifting liquidity preferences, failure of Say’s law, spending of money on non-reproducible financial assets, wage–price spirals, debt deflation, failure of the Pigou effect);
(3) the quantity theory of money is virtually useless, because of the following reasons:
(i) the modern money supply is endogenous, because broad money creation is credit-driven: broad money is created by private banks, and its quantity is determined by the private demand for credit. Furthermore, a truly independent money supply function does not actually exist in an endogenous money world, since credit money comes into existence because it has been demanded; the broad money supply is therefore not independent of money demand, but demand-led (a toy balance-sheet sketch of this point follows after this list);

(ii) money is never neutral, in either the short run or the long run;

(iii) the direction of causation is generally from credit demand (via business loans to finance labour and other factor inputs) to money supply increases, contrary to the direction of causation as assumed in the quantity theory, and

(iv) changes in the general price level are a highly complex result of many factors, and not some simple function of money supply.
(4) the (non-Keynesian) Neoclassicals and Austrians have an obsessive-compulsive fixation on the supply side, and this cripples their economic theory. Historically, in our capital-rich Western economies (and again now, once we re-implement some kind of industrial policy), what mostly constrains our prosperity is the demand side, not the supply side.
Of course, you have to say an incredible amount in addition to this to refute all the other manifold errors of Austrian theory and Neoclassical economics (see also here, here, and here), but these points above are in essence devastating to their core ideas.
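
As flagged in point (3)(i) above, here is a minimal toy balance-sheet sketch of the endogenous money argument (the bank, the figures, and the function names are all illustrative assumptions, not a model of any actual institution): making a loan creates a matching deposit, so broad money expands and contracts with the demand for credit.

```python
# A toy balance sheet (illustrative only): extending a loan credits
# the borrower's deposit account, so new broad money comes into
# existence because credit was demanded -- "loans create deposits".
bank = {"loans": 0.0, "deposits": 0.0}

def make_loan(amount: float) -> None:
    bank["loans"] += amount      # new asset for the bank
    bank["deposits"] += amount   # new liability = new broad money

def repay_loan(amount: float) -> None:
    bank["loans"] -= amount      # repayment extinguishes the loan ...
    bank["deposits"] -= amount   # ... and destroys the matching money

make_loan(1_000.0)
print(bank)   # {'loans': 1000.0, 'deposits': 1000.0}
repay_loan(400.0)
print(bank)   # {'loans': 600.0, 'deposits': 600.0}
```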

Finally, George L. S. Shackle summed up the essence of Keynes’ theory as follows:
“[sc. Keynes’s] ... theory of involuntary unemployment is perfectly simple and can be expressed in a paragraph, or in a sentence. If you express it in a sentence, you simply say that enterprise is the launching of resources upon a project whose outcome you do not, and cannot, know. The business of enterprise involves investment, the investing of large amounts of resources--huge sums of money--in things whose outcome you cannot be certain of, which could perfectly well turn into a disaster or a brilliant success.

The people who do this kind of investing are essentially gamblers and they can lose their nerve. And if they decide to withdraw from trade, they sweep their chips up from the table. If they decide it’s too risky, if their nerve gives out and they can’t bring themselves to go on investing, they cease to give employment and that is the explanation.
When business is at all unsettled--when there’s any sign at all of depression--or when there’s been a lot of investment and people have run out of ideas, or when their goods are not selling quite as fast as they have been, they no longer know what the marginal value product of an extra man is—it’s non-existent. How can you say that a certain number of men have a certain marginal productivity when you can’t know what the per unit value of the goods they would produce if you employed them would sell for?”
“An Interview with G.L.S. Shackle,” The Austrian Economics Newsletter, Spring 1983.
This is actually a splendid summing up of what Keynes’s theory is about, and why both Austrian and Neoclassical economics are nonsense.

Links
“King on Post Keynesian Approaches to Microfoundations,” April 1, 2013.

“The Essence of Keynesianism is Investment,” December 8, 2012.

“Money Has Direct Utility,” October 25, 2012.

“World GDP versus Total Value of Financial Asset Market Exchanges,” February 21, 2013.

“Capitalism has Two Fundamental Sectors,” February 22, 2013.

“Steve Keen, Debunking Economics, Chapter 6: Wages,” February 12, 2014.

“Steve Keen, Debunking Economics, Chapter 5: Theory of the Firm,” February 13, 2014.

“Kaldor on Economics without Equilibrium,” March 9, 2013.

“Kaldor on the Irrelevance of Equilibrium Economics,” May 15, 2013.

“The Marginalist Pricing Controversy Revisited,” April 12, 2014.

“Where Gardiner Means went Wrong,” May 11, 2014.

“Robinson on Marshall on Diminishing Marginal Utility,” March 12, 2014.

“Steve Keen on Consumer Theory,” March 14, 2014.

“What is Wrong with Neoclassical Economics?,” March 30, 2014.

“Post Keynesian Policy on Interest Rates,” March 12, 2013.

“Keynes’s Mistakes in the General Theory,” May 7, 2013.

“The General Theory, Chapter 19: Changes in Money-Wages,” January 30, 2014.

“The Law of Demand in Neoclassical Economics,” June 1, 2013.

“What is the Epistemological Status of the Law of Demand?,” September 19, 2013.

“Steve Keen on the Law of Demand,” September 20, 2013.

“Price, Average Total Cost, Average Variable Cost and Marginal Cost,” November 28, 2013.

“Joan Robinson on the Quantity Theory of Money,” March 3, 2014.

“Steven Pressman on Public Choice Theory,” February 1, 2013.

“Say’s Law: An Overview and Bibliography,” April 13, 2013.

“The Origin of Coinage in Ancient Greece,” April 29, 2011.

“The Origins of Money,” January 8, 2012.

“Quiggin on the Origin of Money,” February 10, 2012.

“More on Prices in the Real World,” July 31, 2012.

“Price Rigidity in New Keynesianism and Post Keynesianism,” June 30, 2012.

“Gardiner Means on Administered Prices,” June 20, 2013.


Saturday, July 21, 2012

Paul Davidson Interview

A great interview with the American Post Keynesian Paul Davidson, by the INET (Institute for New Economic Thinking) Executive Director Robert Johnson. You can view the videos here.

Davidson discusses a whole range of topics, but, above all, Post Keynesian theory, uncertainty, and financial markets. Video 2 has a very good discussion of fundamental uncertainty and Davidson’s own contribution to this concept, in terms of ergodic and non-ergodic stochastic systems.

See also this recent excellent article by Davidson:
Paul Davidson, “Restoring Trust in the American Economy: The Real World v. The Confidence Fairy,” Alternet.org, July 11, 2012.

Wednesday, June 30, 2010

The Utility of Money in Post Keynesianism

In the previous post, I described money as a possible factor of production, and the discussion of value there also raises the question of whether money has utility.

In its role as a medium of exchange, money functions as an intermediary unit of account (or numéraire) that facilitates the exchange of goods and services. From this derives the idea that money only has utility through its exchange value, a view which is held by the Austrians and neoclassicals. As the American neoclassical F. W. Taussig argued,
“[t]he phrase ‘marginal utility of money’ must … be used with caution. Money has utility in a different way from other things. It is valued not because it serves in itself to satisfy wants, but as a medium of exchange, having purchasing power over other things. Gold jewelry is subject to the law of diminishing utility precisely as other things are. But gold coin—money—is subject to it only in the sense that an individual buys first the things he prizes most, and then other things in the order of their less utility” (Taussig 1911: 124).
Writing in 1911, Taussig here refers to commodity money (although it would appear that other neoclassicals admitted that commodity money like gold had utility in itself, but perhaps this is another issue).

But Post Keynesian economics shows us that money (even fiat money) does have utility:
“In an uncertain world, the possession of money and other nonproducible liquid assets provides utility by protecting the holder from fear of being unable to meet future liabilities” (Davidson 2003: 236).
The neoclassicals thought that only producible goods and services can provide utility, and that money has no utility in itself, only exchange value. The Austrian view, similarly, seems to be that money has no utility except for what can be obtained in exchange for it. But money can have utility on its own account, and so can liquid financial assets. The idea that money has no utility in itself is part of the three fundamental neoclassical axioms that Keynes rejected. These three axioms are the basis of neoclassical economics and of Say’s law:
(1) the neutral money axiom (i.e., holding money by itself provides no utility),
(2) the gross substitution axiom, and
(3) the ergodic axiom.
Post Keynesian economics requires the rejection of these axioms. In a fundamentally uncertain world, you have the problem of facing a possible lack of liquidity in the future (i.e., lack of money). This is why many people like to hold onto money, and precisely why money has utility – and in fact often has a great deal of utility.

In Keynes’s General Theory, an essential property of liquid assets (money being the most liquid asset) is that their “elasticity of production” is near or equal to zero. To say that financial assets and money have “a zero elasticity of production” means that commodity-producing businesses cannot engage in the production of money or financial assets by hiring labour: if the demand for liquidity in an economy increases, producers of commodities cannot “produce” liquid assets by hiring workers. When the demand for non-reproducible assets as a “store” of money rises, this can induce unemployment. If there are assets other than reproducible goods in which money can be saved, then full employment equilibrium will not necessarily occur in a market economy: investment will not be sufficient to achieve full employment. This is why, even if wages and prices were perfectly flexible, we could still have involuntary unemployment.

BIBLIOGRAPHY

Davidson, P. 2003. “Keynes’ General Theory,” in J. E. King (ed.), The Elgar Companion to Post Keynesian Economics, Edward Elgar Publishing, Cheltenham, UK and Northampton, MA. 229–237.

Patinkin, D. and Steiger, O. 1989. “In Search of the ‘Veil of Money’ and the ‘Neutrality of Money’: A Note on the Origin of Terms,” Scandinavian Journal of Economics 91.1: 131–146.

Taussig, F. W. 1911. Principles of Economics, Volume 1. Macmillan Company, New York.

Friday, March 2, 2018

Academic Agent on “Six Key Lessons from Classical Economics”: A Critique

“The Academic Agent” has a video here on what he calls “Six Key Lessons from Classical Economics” (but actually from both Classical and Neoclassical economics).

Of course, not all of his points are wrong. And, since I assume various followers of “Academic Agent” will read this, let me state: I support Post Keynesian economics, a non-neoclassical version of Keynesianism.

But let us break this down as follows, point by point:

(1) “Wealth is not Money”.

This is true. Money is clearly not wealth (if we understand by “wealth” the goods and services we consume). Money cannot be consumed in the way that commodities can. Libertarians are fond of accusing Keynesians of saying that “money is wealth” or that “money creation is wealth creation.” But I cannot recall ever seeing any left heterodox economist say this. The maxim that money is not everything – which many people on the Left are fond of repeating – is even a subtle admission of the point.

One can readily agree that money is not wealth. Money is (1) a unit of account, (2) a medium of exchange and (3) a store of value.

So this point, while true, is largely a straw man, if it is supposed to be directed against Keynesian economists.

Further reading here:
“Money: Is it Wealth?,” October 12, 2010.

(2) “The Economy is not a Zero Sum Competition”.

While there are numerous economic activities in modern capitalism that are indeed not zero sum games, there clearly do exist economic activities which are precisely that.

Many speculative activities are like this: for example, activity on secondary financial asset markets where two (or more) parties engage in a trade in which one loses and the other gains. If I “bet” on a futures option or on a currency trade, I win or lose. This is a type of zero sum game.
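
To put trivial arithmetic on this (the prices and contract size are invented for illustration): in a cash-settled futures bet, the long side’s gain is exactly the short side’s loss.

```python
# Cash-settled futures bet (fees ignored): the two counterparties'
# profit and loss always sum to zero, whatever the settlement price.
entry_price, settle_price, contracts = 100.0, 93.0, 10

long_pnl = (settle_price - entry_price) * contracts    # -70.0
short_pnl = (entry_price - settle_price) * contracts   # +70.0
print(long_pnl, short_pnl, long_pnl + short_pnl)       # -70.0 70.0 0.0
```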

It is notable that “The Academic Agent” doesn’t even bother to discuss financial markets, which are a fundamental part of modern capitalism.

Furthermore, if a person goes to a casino and gambles (which is clearly a form of capitalist exchange), he wins or loses. Either he comes out with more money than he went in with or less. This is a zero sum game. One could argue that, even if a person loses, he got in return the possible thrill of winning.

But this is a specious argument, because one can also point out that gambling addicts lose not only their money but also their social well-being, experiencing devastating personal and social consequences as a result of their gambling problems. Such people are losers, and their gambling is a zero sum game.

Of course, plenty of other economic activities and transactions are not zero sum games, but the point remains.

(3) “International Trade is not a Zero Sum Game”.

Once again, while a lot of international trade may well be mutually beneficial, not all trade is.

“The Academic Agent” relies on Ricardo’s Principle of Comparative Advantage, which claims that free trade is always mutually beneficial to nations engaging in it.

Ricardo’s argument takes the example of cloth and wine production in Portugal and England. Ricardo’s argument is simple: Portugal can produce more wine by concentrating on the production of wine (where it has a comparative advantage in needing less labour), and import cloth from England, even if (as in Ricardo’s example) it takes fewer labourers to produce cloth in Portugal than in England. The aggregate effect of England concentrating on producing cloth (where its comparative advantage lies in needing fewer workers or labour hours per unit) and Portugal producing wine is that a greater quantity of these commodities can be produced in total, and Portugal and England can exchange them to mutual benefit, instead of producing fewer goods in isolation and autarky.
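
For readers who want the numbers, here is a minimal sketch using the standard figures from Ricardo’s Principles (labourers needed per year per unit of output): Portugal has an absolute advantage in both goods, but cloth is relatively cheaper in England, so on Ricardo’s logic England specialises in cloth and Portugal in wine.

```python
# Ricardo's classic numbers: labourers required per unit of output.
labour = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

for country, cost in labour.items():
    # Opportunity cost of one unit of cloth, measured in wine.
    print(country, "cloth costs",
          round(cost["cloth"] / cost["wine"], 3), "units of wine")
# England: 0.833, Portugal: 1.125 -- cloth is relatively cheaper in
# England, so England specialises in cloth and Portugal in wine.
```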

But this argument contains all sorts of unrealistic assumptions, as well as a fatal flaw: Ricardo (like other Classical economists) assumed a pre-Marxist Labour Theory of Value.

First, the argument for unrestricted free trade by Ricardo’s principle of comparative advantage requires a number of stated or hidden fundamental assumptions to work properly, as follows:
(1) if a nation focusses on comparative advantage, domestic capital or factors of production like capital goods and skilled labour are not internationally mobile, and will be re-employed in the sector/sectors in which the country’s comparative advantage lies and within that nation;

(2) workers are fungible, and will be re-trained easily and moved to the new sectors where comparative advantage lies;

(3) it does not matter what you produce (e.g., you could produce pottery), as long as you do it in a way that gives you comparative advantage;

(4) technology is essentially unchanging and uniform; and

(5) there are no returns to scale in all sectors.
Assumption (1) does not hold today: what happens instead is movement of capital under the principle of absolute advantage. By practising free trade, a nation can experience capital flight and severe de-industrialisation, resulting in a race to the bottom for industrialised countries that do not protect their industries. Movement of capital to wherever it has absolute advantage tends to cause de-industrialisation in Western countries, as capital moves to nations with the lowest unit labour and factor costs, while higher-wage countries experience falling wages, high unemployment and rising trade deficits.

Assumption (2) is plainly untrue.

Assumptions (3), (4) and (5) are utter nonsense.

In essence, Ricardo’s argument ignores the long-run benefits of industrialisation: manufacturing is a sector with increasing returns to scale, and manufacturing and industrialisation are the only real way to escape the grinding rural poverty of underdevelopment (unless, of course, you are lucky enough to be one of the minority of nations with lucrative commodities like energy, or some tiny city-state that can get by on service industries).

In the long run, Portugal is better off producing cloth and other manufactured goods, not just wine. By adopting free trade, Portugal will reduce its future aggregate output and reduce its future per capita wealth.

Finally, there is another devastating flaw in Ricardo’s argument: Ricardo actually uses a naive Labour Theory of Value assumption in his argument (see also Reinert 2007: 301–304 for discussion). To be more precise, one of Ricardo’s crucial arguments in favour of free trade by comparative advantage rests on the idea that specialising in the production of some commodity is inherently better simply because of the comparatively lower labour time involved in its production. But this is false.

Even if it takes more labour hours and human labourers to produce manufactured goods, in the long run this is a key to becoming rich, whereas dead-end production of commodities with diminishing returns to scale, even if it requires fewer labour hours and labourers, is a path to Third World poverty.
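
The contrast can be sketched with a toy production function (the exponents are purely illustrative assumptions, not empirical estimates): output per worker rises with scale in an increasing-returns sector and falls with scale in a diminishing-returns one.

```python
# Toy production functions: output = workers ** alpha.
# alpha > 1 gives increasing returns (manufacturing-like);
# alpha < 1 gives diminishing returns (primary-commodity-like).
def output_per_worker(workers: int, alpha: float) -> float:
    return workers ** alpha / workers

for n in (10, 100, 1000):
    print(n,
          round(output_per_worker(n, 1.2), 2),   # rises with scale
          round(output_per_worker(n, 0.8), 2))   # falls with scale
```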

Erik S. Reinert explains this flaw in detail (Reinert 2007: 301–304).

So, quite clearly, international trade can be a zero sum game, in the sense that some nations engaging in free trade will lose: they will have much lower future aggregate output and lower future per capita wealth.

It also follows that protectionism may be the better policy when it creates industries with increasing returns to scale (generally manufacturing), rather than dead-end diminishing-returns-to-scale sectors, since this is what marks successful economic development. Once the new manufacturing sectors become internationally competitive, it is possible to reduce or eliminate the tariffs. Note also that this policy is perfectly compatible with the fact that other types of tariffs (those protecting inefficient rent seekers) or poorly targeted tariffs can be harmful to economic development.

Further reading here:
“Robert Murphy’s Debate on Free Trade,” August 7, 2016.

“The Cult of Free Trade in a Nutshell,” July 4, 2016.

“Ricardo’s Argument for Free Trade by Comparative Advantage,” July 5, 2016.

“Erik Reinert versus Ricardo on Free Trade,” July 5, 2016.

“Erik S. Reinert on Heterodox Development Economics,” July 9, 2016.

“Britain’s Protectionism against Indian Cotton Textiles,” July 12, 2016.

“Mises on the Ricardian Law of Association: The Flaws of Praxeology,” January 25, 2011.

(4) Say’s Law.

“The Academic Agent” seems to define Say’s Law in two senses, as follows:
(1) you must produce commodities before you consume them, and

(2) supply and demand are not independent of one another, but dependent in the sense that factor payments by producers or income to producers provide the source of demand for other goods.
No serious economist even disputes (1) or (2), and certainly not Keynesians, who would merely add that the creation of credit money within capitalism (for example, by banks) is a further source of demand for goods.

The trouble is that “The Academic Agent” then proceeds to garble Say’s law and what it actually says.

He also seems unaware that historians of economic thought like Thweatt (1979: 92–93) and Baumol (2003: 46) conclude that Jean-Baptiste Say’s role in formulating the law is grossly overrated: Adam Smith was in fact the real father of what is recognisably Say’s law in Classical economics, and the major work in developing the idea was done by James Mill (1808), not by Say himself.

Furthermore, Keynes did not misrepresent what the 19th century economists had said about “Say’s Law.”

If we look at how Say’s law was formulated by the Classical economists, as defined by Thomas Sowell (1994: 39–41), it was as follows:
(1) The total factor payments received for producing a given volume (or value) of output are necessarily sufficient to purchase that volume (or value) of output [an idea in James Mill].

(2) There is no loss of purchasing power anywhere in the economy. People save only to the extent of their desire to invest and do not hold money beyond their transactions need during the current period [James Mill and Adam Smith].

(3) Investment is only an internal transfer, not a net reduction, of aggregate demand. The same amount that could have been spent by the thrifty consumer will be spent by the capitalists and/or the workers in the investment goods sector [John Stuart Mill].

(4) In real terms, supply equals demand ex ante [= “before the event”], since each individual produces only because of, and to the extent of, his demand for other goods. (Sometimes this doctrine was supported by demonstrating that supply equals demand ex post.) [James Mill.]

(5) A higher rate of savings will cause a higher rate of subsequent growth in aggregate output [James Mill and Adam Smith].

(6) Disequilibrium in the economy can exist only because the internal proportions of output differ from consumers’ preferred mix – not because output is excessive in the aggregate [Say, Ricardo, Torrens, James Mill] (Sowell 1994: 39–41).
It is not clear that (1) is true, since most real-world prices include a profit mark-up and their aggregate value is much higher than the aggregate value of factor payments paid out in the production of the commodities.

Ideas (2), (3) and (6) are ridiculously false, since people can hoard money: they can hold money without purchasing goods and services at all. Furthermore, money can be spent on secondary financial or real asset markets, where it is not used to purchase newly produced commodities.

This will lead to a situation where aggregate output is excessive, since some people do not wish to purchase commodities at all but save their money, hoard it, or spend it on financial assets.

In any real world economy, money from income streams from production, either to capitalists or workers, can become diverted to asset markets and may not be spent on goods. For this reason alone, Say’s law is a grossly unrealistic picture of market economies. Moreover, capitalists themselves have subjective expectations about the future and the future profitability of investment, and when their expectations are shattered, they will not necessarily invest out of retained earnings.
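
A toy numerical sketch of this leakage argument (all figures invented for illustration): if part of factor income is hoarded or diverted to asset markets, demand for current output falls short of the value of that output unless investment spending happens to fill the gap – which is precisely what Say’s law assumes rather than demonstrates.

```python
# Toy income flows (illustrative): factor income paid out in producing
# output of equal value, with part of that income not spent on output.
income = 1_000.0                  # = value of current output
consumption = 0.8 * income        # spent on goods and services: 800.0
leakage = income - consumption    # hoarded or sent to asset markets: 200.0

print(income - consumption)  # 200.0 of output goes unsold unless new
# investment spending (or credit-financed demand) fills the gap.
```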

Lastly, as a matter of historical interest, Jean-Baptiste Say himself, in his letters to Malthus, eventually repudiated the strong form of Say’s law that we call “Say’s Identity.”

But “The Academic Agent” is blissfully unaware of this.

More reading here:
“Say’s Law: An Overview and Bibliography,” April 13, 2013.

(5) “Every part of the economy is connected to the whole of the economy… .”

While this is true, it does not vindicate Léon Walras’ Neoclassical economics, which “The Academic Agent” cites as his source for this insight, since Walrasian theory makes quite specific assertions about capitalist economies.

“The Academic Agent” argues against government intervention in the economy (by quoting a passage of Thomas Sowell) and tacitly invokes Walrasian Neoclassical theory and Austrian economic theory that envisage a capitalist economy as a self-correcting or self-equilibrating machine, which gravitates towards a long-run general equilibrium state.

But this is a profoundly mistaken view of market economies, and is wrong for the following reasons:
(1) both Austrian and Neoclassical theory ultimately hold that free markets have a tendency towards general equilibrium, and hence economic coordination by means of a flexible wage and price system, and a (supposed) coordinating loanable funds market that equates savings and investment. This is an empirically false view of market economies: it is essentially the product of Marginalists from the 1870s onwards who had physics envy and wanted to model a market economy like a self-equilibrating physical system.

(2) the core Neoclassical and Austrian model in (1) is false because:
(i) market systems are complex human systems subject to degrees of non-calculable probability and fundamental uncertainty about the future, so that market economies would not converge to general equilibrium states even if wages and prices were perfectly flexible. This makes real human decision-making fundamentally different from the model proposed by Neoclassical economics (even with modern ad hoc models that invoke asymmetric information and bounded rationality), and, even if Austrians supposedly accept subjective expectations in decision-making, they fail spectacularly to apply them properly in their economic theory. At the heart of this failure of both Neoclassical and Austrian theory is the mistaken ergodic axiom.

Investment is essentially driven by expectations which are highly subjective and even irrational, and come in waves of general optimism and pessimism;

(ii) the loanable funds model is a terrible model of aggregate investment, partly because the mythical natural rate of interest cannot be defined outside one-commodity worlds, but above all because of (i), which is the point that kills both Austrian economics and the Neoclassical loanable funds model;

(iii) the price and wage system is highly inflexible, and even if it were flexible all sorts of factors prevent convergence to equilibrium states anyway (e.g., the reality of a non-ergodic future, subjective expectations, shifting liquidity preferences, failure of Say’s law, spending of money on non-reproducible financial assets, wage–price spirals, debt deflation, failure of the Pigou effect);
(3) the quantity theory of money is virtually useless, because of the following reasons:
(i) the modern money supply is endogenous, because broad money creation is credit-driven: broad money is created by private banks, and its quantity is determined by the private demand for credit. Furthermore, a truly independent money supply function does not actually exist in an endogenous money world, since credit money comes into existence because it has been demanded; the broad money supply is therefore not independent of money demand, but demand-led;

(ii) money is never neutral, in either the short run or the long run;

(iii) the direction of causation is generally from credit demand (via business loans to finance labour and other factor inputs) to money supply increases, contrary to the direction of causation as assumed in the quantity theory, and

(iv) changes in the general price level are a highly complex result of many factors, and not some simple function of money supply.
(4) the (non-Keynesian) Neoclassicals and Austrians have an obsessive-compulsive fixation on the supply side, and this cripples their economic theory. Historically, in our capital-rich Western economies (and again now, once we re-implement some kind of industrial policy), what mostly constrains our prosperity is the demand side, not the supply side.
Once we realise there is no reliable or automatic tendency to general equilibrium in capitalist economies, then nearly all arguments against government intervention to promote economic activity collapse, and with them the whole basis of Neoclassical and Austrian economics.

For example, the assumption of “The Academic Agent” that a government program to build a bridge would automatically destroy private sector jobs, or harm the economy, does not follow at all – certainly not when there is a recession or depression, vast resources are idle, and there is no private sector impetus to use those idle resources for capital investment or production.

Finally, George L. S. Shackle summed up the essence of Keynes’ theory as follows:
“[sc. Keynes’s] ... theory of involuntary unemployment is perfectly simple and can be expressed in a paragraph, or in a sentence. If you express it in a sentence, you simply say that enterprise is the launching of resources upon a project whose outcome you do not, and cannot, know. The business of enterprise involves investment, the investing of large amounts of resources--huge sums of money--in things whose outcome you cannot be certain of, which could perfectly well turn into a disaster or a brilliant success.

The people who do this kind of investing are essentially gamblers and they can lose their nerve. And if they decide to withdraw from trade, they sweep their chips up from the table. If they decide it’s too risky, if their nerve gives out and they can’t bring themselves to go on investing, they cease to give employment and that is the explanation.
When business is at all unsettled--when there’s any sign at all of depression--or when there’s been a lot of investment and people have run out of ideas, or when their goods are not selling quite as fast as they have been, they no longer know what the marginal value product of an extra man is—it’s non-existent. How can you say that a certain number of men have a certain marginal productivity when you can’t know what the per unit value of the goods they would produce if you employed them would sell for?”
“An Interview with G.L.S. Shackle,” The Austrian Economics Newsletter, Spring 1983.
This is actually a splendid summing up of what Keynes’s theory is about, and why both Austrian and Neoclassical economics are nonsense.

More reading here:
“The Essence of Keynesianism is Investment,” December 8, 2012.

“Steve Keen, Debunking Economics, Chapter 6: Wages,” February 12, 2014.

“Steve Keen, Debunking Economics, Chapter 5: Theory of the Firm,” February 13, 2014.

“Kaldor on Economics without Equilibrium,” March 9, 2013.

“Kaldor on the Irrelevance of Equilibrium Economics,” May 15, 2013.

“Steve Keen on Consumer Theory,” March 14, 2014.

“What is Wrong with Neoclassical Economics?,” March 30, 2014.

“The Law of Demand in Neoclassical Economics,” June 1, 2013.

“What is the Epistemological Status of the Law of Demand?,” September 19, 2013.

“Steve Keen on the Law of Demand,” September 20, 2013.

“Price, Average Total Cost, Average Variable Cost and Marginal Cost,” November 28, 2013.

(6) “Marginal Utility”.

“The Academic Agent” ends by pointing out that “value,” in the sense of desiring or evaluating commodities, is subjective. This is true, but it does not take you very far.

The law of diminishing marginal utility states that, as a person consumes additional units of the same good (or of a homogeneous good), the satisfaction or utility derived from each additional unit diminishes, and continues to diminish with every further unit consumed.

As a general empirical principle, it is true, but there are important exceptions, as can be seen here. But this general principle does not refute the case for government intervention in the economy.
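
As a worked illustration of the law (the square-root utility function is an arbitrary concave example, chosen only for the sketch):

```python
import math

def u(units: float) -> float:
    # Any concave utility function will do; sqrt is just an example.
    return math.sqrt(units)

# Marginal utility of the 1st, 2nd, ... 5th unit consumed:
print([round(u(n) - u(n - 1), 3) for n in range(1, 6)])
# [1.0, 0.414, 0.318, 0.268, 0.236] -- each extra unit adds less.
```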

Moreover, most prices in modern capitalist economies are not determined by the dynamics of supply and demand, but are in reality cost-based mark-up prices, which tend to be relatively inflexible downwards. Indeed, this is now the overwhelming conclusion of the Neoclassical empirical research literature itself, as can be seen here (with full citation of the literature on price determination).
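
A minimal sketch of what cost-based mark-up pricing means (the cost figures and the 30% margin are invented for illustration): the firm sets its price as unit costs plus a conventional margin, so the price stays put until costs or the target margin are revised, rather than moving with every shift in demand.

```python
# Cost-based mark-up pricing: price = (1 + markup) * average unit cost.
def markup_price(unit_labour_cost: float,
                 unit_material_cost: float,
                 markup: float = 0.30) -> float:
    return (1 + markup) * (unit_labour_cost + unit_material_cost)

print(markup_price(12.0, 8.0))   # 26.0 -- fixed until costs or the
                                 # target margin are revised
```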

The relative downwards price rigidity in modern capitalism (largely an outgrowth of businesses and corporations themselves trying to avoid flexible price markets) also destroys the whole basis of the correction mechanism envisaged in Neoclassical and Austrian economics, since these theories hold that product markets tend to clear through highly flexible prices, when in reality such price flexibility is confined to a minority of markets.

More reading here:
“The ‘Law’ of Diminishing Marginal Utility,” March 7, 2014.

“Mark-up Pricing in 21 Nations and the Eurozone: The Empirical Evidence.”

BIBLIOGRAPHY

Baumol, William J. 2003. “Retrospectives: Say’s Law,” in S. Kates (ed.), Two Hundred Years of Say’s Law: Essays on Economic Theory’s Most Controversial Principle, Edward Elgar, Cheltenham, UK and Northampton, MA. 39–49.

Reinert, Erik S. 2007. How Rich Countries Got Rich, and Why Poor Countries Stay Poor. Carroll & Graf, New York.

Sowell, T. 1994. Classical Economics Reconsidered (2nd edn), Princeton University Press, Princeton, N.J.

Thweatt, W. O. 1979. “Early Formulators of Say’s Law,” Quarterly Review of Economics and Business 19: 79–96.