Saturday, August 31, 2013

Bob Murphy All At Sea on Geometry and Economic Epistemology

A beautiful illustration of the continuing errors of Austrians who support Misesian praxeology can be seen in Robert Murphy’s comments in this video on geometry, and in his debate with David Friedman.*

Robert Murphy, like Mises, cannot properly distinguish between (1) pure geometry and (2) applied geometry (on which, see Salmon 1967: 38). When Euclidean geometry is considered as a pure mathematical theory, it can be regarded as analytic a priori knowledge, and asserts nothing necessarily of the external, real world, since it is tautologous and non-informative. (An alternative view derived from the theory called “conditionalism” or “if-thenism” holds that pure geometry is merely a set of conditional statements from axioms to theorems, derivable by logic, and asserting nothing about the real world [Musgrave 1977: 109–110], but this is just as devastating to Misesians.)

When Euclidean geometry is applied to the world, it is judged as making synthetic a posteriori statements (Ward 2006: 25), which can only be verified or falsified by experience or empirical evidence. That means that applied Euclidean geometrical statements can be refuted empirically, and we know that Euclidean geometry – understood as a universally true theory of space – is a false theory (Putnam 1975: 46; Hausman 1994: 386; Musgrave 2006: 329).

Murphy’s confusion is also confirmed in these remarks below.

The fact that the refutation of Euclidean geometry understood as an empirical theory leaves pure geometry untouched does not help Murphy, because pure geometry per se says nothing necessary about the universe, and is an elegant but non-informative system.

Albert Einstein expressed this idea in the following remarks about mathematics in an address called “Geometry and Experience,” delivered on 27 January 1921 at the Prussian Academy of Sciences:
“One reason why mathematics enjoys special esteem ... is that its laws are absolutely certain and indisputable, while those of all other sciences are to some extent debatable and in constant danger of being overthrown by newly discovered facts. In spite of this, the investigator in another department of science would not need to envy the mathematician if the laws of mathematics referred to objects of our mere imagination, and not to objects of reality. For it cannot occasion surprise that different persons should arrive at the same logical conclusions when they have already agreed upon the fundamental laws (axioms), as well as the methods by which other laws are to be deduced therefrom. But there is another reason for the high repute of mathematics, in that it is mathematics which affords the exact natural sciences a certain measure of security, to which without mathematics they could not attain. At this point an enigma presents itself which in all ages has agitated inquiring minds. How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? Is human reason, then, without experience, merely by taking thought, able to fathom the properties of real things? In my opinion the answer to this question is, briefly, this:- As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.”
If we were to pursue this analysis further as applied to economic methodology, it would follow that praxeology – if it is conceived as deduced from analytic a priori axioms – is also an empty, tautologous, and vacuous theory that says nothing necessary of the real world. And the instant any Austrian asserts that praxeology is making real assertions about the world, it must be judged synthetic a posteriori, and so is to be verified or falsified by experience or empirical evidence.

What Murphy fails to mention is that the only way to sustain his whole praxeological program is to defend the truth of Kant’s synthetic a priori knowledge, which, as we have seen from the last post, is a category of knowledge that must be judged as non-existent.

* Murphy also conflates (1) the logical positivists’ verifiability criterion for meaningfulness with (2) Popper’s falsifiability criterion for scientific knowledge, but this is an issue I will not bother to pursue here.

Hausman, Daniel M. 1994. “If Economics Isn’t Science, What Is It?,” in Daniel M. Hausman (ed.), The Philosophy of Economics: An Anthology (2nd edn.). Cambridge University Press, Cambridge. 376–394.

Musgrave, Alan. 1977. “Logicism Revisited,” British Journal for the Philosophy of Science 28: 99–127.

Musgrave, Alan. 2006. “Responses,” in Colin Cheyne and John Worrall (eds.), Rationality and Reality: Conversations with Alan Musgrave. Springer, Dordrecht. 293–334.

Putnam, Hilary. 1975. “The Analytic and the Synthetic,” in Hilary Putnam, Mind, Language and Reality. Philosophical Papers. Volume 2. Cambridge University Press, Cambridge. 33–69.

Salmon, Wesley C. 1967. The Foundations of Scientific Inference. University of Pittsburgh Press, Pittsburgh.

Ward, Andrew. 2006. Kant: The Three Critiques. Polity, Cambridge.

Friday, August 30, 2013

Mises Fails Philosophy of Mathematics 101

My post below makes a broad point about the intellectual bankruptcy of aprioristic praxeology on the basis of Mises’s misunderstanding of modern epistemology and the philosophy of mathematics.

The evidence for Mises’s misunderstanding of philosophy of mathematics is here in Human Action:
“Aprioristic reasoning is purely conceptual and deductive. It cannot produce anything else but tautologies and analytic judgments. All its implications are logically derived from the premises and were already contained in them. Hence, according to a popular objection, it cannot add anything to our knowledge.

All geometrical theorems are already implied in the axioms. The concept of a rectangular triangle already implies the theorem of Pythagoras. This theorem is a tautology, its deduction results in an analytic judgment. Nonetheless nobody would contend that geometry in general and the theorem of Pythagoras in particular do not enlarge our knowledge. Cognition from purely deductive reasoning is also creative and opens for our mind access to previously barred spheres. The significant task of aprioristic reasoning is on the one hand to bring into relief all that is implied in the categories, concepts, and premises and, on the other hand, to show what they do not imply. It is its vocation to render manifest and obvious what was hidden and unknown before.” (Mises 2008: 38)

“Praxeology is a theoretical and systematic, not a historical, science. Its scope is human action as such, irrespective of all environmental, accidental, and individual circumstances of the concrete acts. Its cognition is purely formal and general without reference to the material content and the particular features of the actual case. It aims at knowledge valid for all instances in which the conditions exactly correspond to those implied in its assumptions and inferences. Its statements and propositions are not derived from experience. They are, like those of logic and mathematics, a priori. They are not subject to verification and falsification on the ground of experience and facts. They are both logically and temporally antecedent to any comprehension of historical facts. They are a necessary requirement of any intellectual grasp of historical events” (Mises 2008: 32).
First, Mises’s belief that aprioristic reasoning can deliver new, informative knowledge of the real world fails because it is all dependent on the untenable idea of Kantian synthetic a priori knowledge.

Kant’s belief in the synthetic a priori is false, and we know this now given the empirical evidence in support of non-Euclidean geometry: this damns Kant’s claim that Euclidean geometry – the geometry of his day – was synthetic a priori (Salmon 2010: 395). In addition, despite Gödel’s incompleteness theorems and the failure of Bertrand Russell’s strict logicist program, the consensus today is that most of classical mathematics can nevertheless be derived from pure logic and set theory (Schwartz 2012: 19), just as Russell thought,* and it is arguably just analytic a priori knowledge (and even arithmetic might be conceptually divided into (1) analytic a priori pure arithmetic and (2) synthetic a posteriori applied arithmetic [see Musgrave 1993: 240]).

Furthermore, as I have already shown, the human action axiom cannot be considered to be a synthetic a priori statement.

But the real issue raised by Mises here is the epistemological status of geometry, or, more precisely, Euclidean geometry.

Mises has failed to distinguish between geometry in its role as (1) a pure mathematical theory, and as (2) applied geometry (for the distinction, see Salmon 2010: 395). Mises’s statements are ignorant and wrong, because he conflates these two distinct forms of geometry. The inability to separate geometry into these senses – pure geometry versus applied geometry – leads to all sorts of philosophical disasters, amongst them Platonic mystical belief in the eternal realm of the forms and aprioristic Rationalism (the derivation of these things from Euclidean geometry is described in Salmon 2010: 393).

Rudolf Carnap explains the difference between pure geometry and applied (physical) geometry:
“It is necessary to distinguish between pure or mathematical geometry and physical geometry. The statements of pure geometry hold logically, but they deal only with abstract structures and say nothing about physical space. Physical geometry describes the structure of physical space; it is a part of physics. The validity of its statements is to be established empirically—as it has to be in any other part of physics—after rules for measuring the magnitudes involved, especially length, have been stated. (In Kantian terminology, mathematical geometry holds indeed a priori, as Kant asserted, but only because it is analytic. Physical geometry is indeed synthetic; but it is based on experience and hence does not hold a priori. In neither of the two branches of science which are called ‘geometry’ do synthetic judgements a priori occur. Thus Kant’s doctrine must be abandoned).” (Carnap 1958: vi).
When Euclidean geometry is considered as a pure mathematical theory, it is nothing but analytic a priori knowledge, and asserts nothing of the world, since it is tautologous and non-informative.

But, when Euclidean geometry is applied to the world, it is judged as making synthetic a posteriori statements (Musgrave 1993: 236; Ward 2006: 25), which can only be verified or falsified by experience or empirical evidence (or, in the jargon of philosophy, can be known as true only a posteriori).

That is to say, applied Euclidean geometrical statements can be refuted empirically, and, indeed, Euclidean geometry – when asserted as a universally true theory of space – is now known to be a false theory (Putnam 1975: 46; Hausman 1994: 386; Musgrave 2006: 329). Non-Euclidean geometry is now understood to be a better theory of reality. When confined to its role as a pure mathematical theory, Euclidean geometry is true but vacuous. That is to say, modern apriorist Rationalists can defend the necessary, a priori truth of Euclidean geometry (as in Katz 1998: 49–50), but only as a pure mathematical theory that is vacuous, non-informative and tautologous. It tells us no necessary truth about reality (Salmon 2010: 395).
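The empirical failure of Euclidean geometry as a universal theory of space can be made concrete with a small numerical sketch (my own illustration, not drawn from the sources cited above): on the surface of a sphere, a simple non-Euclidean space, the interior angles of a triangle sum to more than 180 degrees.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def angle_at(a, b, c):
    """Interior angle at vertex a of the spherical triangle abc:
    the angle between the great-circle arcs a->b and a->c."""
    # Tangent vectors at a: project b and c onto the plane
    # perpendicular to a, then normalize.
    t_ab = normalize([bi - dot(a, b) * ai for ai, bi in zip(a, b)])
    t_ac = normalize([ci - dot(a, c) * ai for ai, ci in zip(a, c)])
    return math.acos(max(-1.0, min(1.0, dot(t_ab, t_ac))))

# The "octant" triangle on the unit sphere, with its three vertices
# on the coordinate axes (think: north pole plus two equator points
# 90 degrees of longitude apart).
A, B, C = [1, 0, 0], [0, 1, 0], [0, 0, 1]
total = sum(math.degrees(angle_at(*tri))
            for tri in [(A, B, C), (B, C, A), (C, A, B)])
print(total)  # 270 degrees, not the Euclidean 180
```

Pure (analytic) geometry is untouched by this: the theorems still follow from Euclid’s axioms. What the calculation shows is that whether physical space satisfies those axioms is an empirical question.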

But isn’t Euclidean geometry still a useful empirical theory in certain ways? Yes, but this does not save Mises. Euclidean geometry is useful only because it is an approximation of reality and only at certain levels of space (Ward 2006: 25). But it is still false when judged as a universal theory of space.

Even on the most generous estimate, all you could argue is that Euclidean geometry is true only in a highly limited domain: the relatively small, macroscopic spaces and distances humans normally deal with in everyday life. But, once we move beyond this world, Euclidean geometry is false.

And even this qualification does not save the Misesian and Austrian apriorists, because we can only know that geometry is true in its limited domain a posteriori, that is, by empirical evidence.

As soon as Euclidean geometry as pure mathematics is used beyond its tautologous form, it becomes a system making synthetic a posteriori statements, not Kant’s imaginary synthetic a priori.

Since synthetic a priori propositions do not exist, it follows from this that, if Mises thinks that the axioms of praxeology are analytic a priori, then praxeology would indeed be a tautological system that is non-informative, and asserts nothing necessarily and apodictically true about the real world. The only viable route left for modern Misesian praxeologists is to accept the empirical nature of the human action axiom (and other axioms) and admit that derived praxeological theorems are empirical.

That is to say, as soon as praxeology is taken as a system that asserts something about the real world of human economic life (and is not simply asserted as a non-informative, tautologous and vacuous system), it must be judged, like applied geometry, as making synthetic a posteriori statements, which – contrary to Mises’s bizarre assertions cited above (Mises 2008: 32) – can certainly be refuted by experience and empirical evidence.

Like Kant, Mises’s project is damned, as is traditional Rationalist epistemology in general, as has been noted by the Popperian philosopher Alan Musgrave:
“The invention of non-Euclidean geometries deprived rationalism of its paradigm. It also suggested to empiricists a new way to deal with mathematics: distinguish pure mathematics from applied mathematics, locate the latter in the synthetic a posteriori compartment of Kant’s box, and the former in the analytic a priori compartment of Kant’s box. One attempt to do the last, logicism, is generally admitted to have failed. Another attempt, if-thenism, is still hotly debated among philosophers. On the other hand, the logical empiricist view of applied mathematics has met with pretty wide acceptance. The rationalist dream, ‘certain knowledge of the objects of experience by means of pure thinking’, is shattered even though the nature of pure mathematics remains problematic indeed.” (Musgrave 1993: 245–246).
* Successors of logicism include (1) the formalism of David Hilbert; (2) conditionalism or “if-thenism” (a term coined by Hilary Putnam), which is a deductivist version of formalism (see Musgrave 1977); and (3) various forms of Intuitionism.

Carnap, Rudolf. 1958. “Introduction,” in Hans Reichenbach, The Philosophy of Space and Time (trans. Maria Reichenbach and John Freund). Dover, New York.

Elugardo, R. 2010. “Analytic/Synthetic, Necessary/Contingent, and A Priori/A Posteriori: Distinction,” in Alex Barber and Robert J. Stainton (eds.), Concise Encyclopedia of Philosophy of Language and Linguistics. Elsevier, Oxford. 10–19.

Hausman, Daniel M. 1994. “If Economics Isn’t Science, What Is It?,” in Daniel M. Hausman (ed.), The Philosophy of Economics: An Anthology (2nd edn.). Cambridge University Press, Cambridge. 376–394.

Katz, Jerrold J. 1998. Realistic Rationalism. MIT Press, Cambridge, Mass.

Mises, L. von. 2008. Human Action: A Treatise on Economics. The Scholar’s Edition. Mises Institute, Auburn, Ala.

Musgrave, Alan. 1977. “Logicism Revisited,” British Journal for the Philosophy of Science 28: 99–127.

Musgrave, Alan. 1993. Common Sense, Science and Scepticism: Historical Introduction to the Theory of Knowledge. Cambridge University Press, Cambridge.

Musgrave, Alan. 2006. “Responses,” in Colin Cheyne and John Worrall (eds.), Rationality and Reality: Conversations with Alan Musgrave. Springer, Dordrecht. 293–334.

Putnam, Hilary. 1975. “The Analytic and the Synthetic,” in Hilary Putnam, Mind, Language and Reality. Philosophical Papers. Volume 2. Cambridge University Press, Cambridge. 33–69.

Reichenbach, Hans. 1958. The Philosophy of Space and Time (trans. Maria Reichenbach and John Freund). Dover, New York.

Salmon, W. C. 2010. “Geometry,” in Jonathan Dancy, Ernest Sosa, and Matthias Steup (eds.), A Companion to Epistemology (2nd edn.). Wiley-Blackwell, Chichester, UK and Malden, MA. 393–395.

Schwartz, Stephen P. 2012. A Brief History of Analytic Philosophy: From Russell to Rawls. Wiley-Blackwell, Chichester, UK.

Ward, Andrew. 2006. Kant: The Three Critiques. Polity, Cambridge.

Thursday, August 29, 2013

The Return of Metaphysics into Analytic Philosophy

This post is based on Chapters 6 and 7 of Stephen P. Schwartz’s A Brief History of Analytic Philosophy: From Russell to Rawls (2012).

Schwartz calls Saul Kripke’s book Naming and Necessity (1980 [1972]) the “apotheosis of analytic philosophy,” because of the manner in which it founded a new analytic metaphysics and provided new insights into epistemology (Schwartz 2012: 241), even if Kripke often drew on the work of others.

The fundamentals of this new metaphysics can be summarised as follows:
(1) the modal logic of possible worlds;

(2) the new “causal” or “direct” theory of reference, applying to proper names, definite descriptions and natural kind terms;

(3) the new epistemological categories of necessary a posteriori truth and contingent a priori truth.
Despite objections from Quine (who was opposed to the existence of intensional objects), this rebirth of metaphysics began in the 1960s with developments in quantified modal logic, the logic of necessity and possibility (Schwartz 2012: 204–210). Saul Kripke played a large role in clarifying modal logic (Schwartz 2012: 212).

In the new modal logic, a necessarily true proposition p (in symbolic form, □p) is true in all possible worlds.

A possibly true proposition p (in symbolic form, ◇p) is true in at least one possible world. A thing has a property or properties essentially if and only if it has that property or properties in every possible world where it exists, but has a property accidentally (or contingently) if and only if it has that property in one possible world but not another (Schwartz 2012: 214–215). It is important to note that these are metaphysical concepts of necessity, contingency and possibility, not merely semantic ones (Schwartz 2012: 215).
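These truth conditions can be sketched in a few lines of code. This is a toy illustration of my own (not anything in Schwartz), using an S5-like reading in which the modal operators quantify over all worlds; the worlds and atomic propositions are invented:

```python
# Toy possible-worlds semantics for the modal operators discussed above.
# Each "world" is modelled simply as the set of atomic propositions true
# at that world. Real modal logics add an accessibility relation between
# worlds; this sketch omits it.

worlds = [
    {"p", "q"},   # world 1
    {"p"},        # world 2
    {"p", "r"},   # world 3
]

def necessarily(prop):
    """Box(prop): true iff prop holds in every possible world."""
    return all(prop in w for w in worlds)

def possibly(prop):
    """Diamond(prop): true iff prop holds in at least one world."""
    return any(prop in w for w in worlds)

print(necessarily("p"))  # True: "p" holds in all three worlds
print(possibly("q"))     # True: "q" holds in world 1
print(necessarily("q"))  # False: "q" fails in worlds 2 and 3
```

On this model an essential property of a thing is one it has in every world in which it exists, while an accidental property is one it has in some worlds but not others.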

A “possible world” could be understood as
(1) a purely imaginary and non-real linguistic entity describing how things could have been;

(2) a real but abstract possible world (in the way numbers are often held to be real but not concrete), or

(3) a real and actual possible world but different from ours (as, for example, imagined in the multiverse hypothesis).
David Lewis adopted the view called modal realism: that infinite logically possible worlds are real and that individuals and things in those worlds exist just as concretely as actual things in our world, even though no universe is causally connected to others (Schwartz 2012: 218).

The alternative to modal realism is modal actualism: the view that only our universe actually exists, and that possible worlds are just abstract entities inside ours (Schwartz 2012: 219).

Modal realism raises issues about personal identity in other possible worlds. For example, if Nixon exists in other possible worlds with different life histories, what allows us to identify these other “Nixons” as the same man as the Nixon in our actual world? For David Lewis, there are no strict transworld personal identities, but merely counterparts in each possible world, which resemble each other to some degree (though this just raises the question of what counts as a proper counterpart!).

Others argue that individual human beings presumably have an individual essence, such as (1) being human (in a scientific sense), (2) having the same parents and birth facts, and (3) having the same DNA or genome (Schwartz 2012: 225). (Notably, Kripke, Plantinga and others strangely dismiss the problem of transworld identity as a pseudo-problem.)

Overall, most modern analytic philosophers have rejected David Lewis’s modal realism and his counterpart theory, but the metaphysical aspects of modal logic still remain (Schwartz 2012: 229).

But a further development of metaphysical ideas, via philosophy of language, was done by Saul Kripke, Hilary Putnam, and Keith Donnellan, although they drew on or developed ideas from others, such as Carnap, Ruth Barcan Marcus, Alvin Plantinga, David Lewis and Quine.

Part of this metaphysics was a new theory of reference, developed in works such as Keith Donnellan’s “Reference and Definite Descriptions” (1966), Kripke’s Naming and Necessity (1980 [1972]), and Putnam’s “The Meaning of ‘Meaning’” (1979).

Traditionally, the conjunction of properties used to describe a term was held to be its intension, and this intension provides the necessary and sufficient condition for deciding what the term refers to, that is, its extension (the alternative theory, associated with the later Wittgenstein, is that terms have a cluster of properties, and referents of terms satisfy a certain number of these properties, without any one being sufficient) (Schwartz 2012: 241).

A proposition defining a thing in terms of its intension has an analytic truth, so that necessary analytic truth has a merely verbal or de dicto necessity, not a metaphysical (or de re) necessity (Schwartz 2012: 241). Therefore, in the traditional theory of sense and reference, the essence of a thing is a mere verbal or linguistic definition of it (Schwartz 2012: 241–242).

Saul Kripke began by questioning this traditional theory of reference with respect to proper names. Russell argued that proper names are really “disguised definite descriptions.” That is, each proper name has a set of descriptions which the referent of the name satisfies.

In place of this, Kripke and Donnellan proposed a new theory.

First, Donnellan argued that definite descriptions are used in two senses: (1) an attributive sense, and (2) a referential sense (Schwartz 2012: 243). It is possible to use a definite description attributively, as a subject with a predicate, without the speaker knowing the actual referent of the definite description (Schwartz 2012: 244). By contrast, a referential use can pick out a thing independently of the descriptions.

Secondly, Kripke also argued that proper names refer independently of attached descriptions and are “rigid designators” which refer to the same individual in every possible world in which that individual exists (Schwartz 2012: 245). In all worlds, the individual to which a proper name refers need only have the properties essential to the individual and not a list of contingent properties given by definite descriptions (Schwartz 2012: 245–246).

The reference of a proper name is not determined by descriptions, according to Kripke, Putnam and Donnellan, but by the existence of many causal or historical chains of reference, passed on from speaker to speaker, going back to the time when an object was first designated with a name (Schwartz 2012: 253).

From these points, Kripke argues that identity statements using alternative names for the same thing have a necessary truth (Schwartz 2012: 246). Thus we can think of statements like the following:
(1) The morning star is the evening star.

(2) Hesperus is Phosphorus.

(3) Tully is Cicero.
Under Frege’s theory of meaning, these only have a contingent truth. But Kripke contends that any rigid designators used in a true identity statement make that statement necessarily true, and it is also necessarily true in all possible worlds where the entity exists.

Furthermore, Kripke insisted on three fundamental epistemological differences, as follows:
(1) the synthetic versus analytic distinction is a semantic difference;

(2) the notions of “necessity” and “contingency” can be understood in a metaphysical/ontological sense, and

(3) the “a priori” versus “a posteriori” distinction is an epistemological one (Schwartz 2012: 247).
That is, “necessarily true” has a sense distinct from purely verbal (or de dicto) necessity, and carries the additional metaphysical sense of “true in all possible worlds” that is itself distinct from the notion of aprioricity (Schwartz 2012: 247).

These epistemological distinctions were a landmark of recent analytic philosophy, according to Schwartz (2012: 247).

Since the epistemological concepts listed above do not coincide, Kripke presented arguments for two additional types of knowledge: the (1) necessary a posteriori truth, and (2) the contingent a priori truth (Schwartz 2012: 247).

For example, the statement “the morning star is the evening star” is necessarily true since both “rigid designators” refer to the planet Venus. Yet this was an empirical discovery, so that epistemologically it is known a posteriori. Therefore “the morning star is the evening star” is a necessary a posteriori truth (Schwartz 2012: 247).

Kripke also applied the rigid designator concept to the common nouns we call “natural kind” terms, such as “water,” “gold” and “tiger” (Schwartz 2012: 247). Gold, for instance, is the element that science has identified as having an atomic number of 79, and natural kind terms are specified by a conjunction of fundamental properties as discovered by science. And, if these properties are true properties of the natural kind, then the natural kind must have them as essential properties as a matter of nature or metaphysical necessity (Schwartz 2012: 248, 251).

Take, as an example, the difference between iron pyrite (fool’s gold) and real gold. The former has the superficial properties of gold (or many of the same concepts as that of gold in its intension), but nevertheless is not gold because of its essential chemical difference. Gold has as its natural essence the property of being the element with the atomic number of 79, and this is metaphysically necessary of gold in that gold must be like this in any possible world.

Kripke also uses the causal or historical theory of reference to explain the origin of natural kind names (Schwartz 2012: 253). The name “water” (or its equivalent in other languages) was used referentially of things familiar as water, but only modern science discovered the fundamental essence of water.

If water is truly H2O, then it is necessarily H2O in all possible worlds, and this is another synthetic necessary a posteriori truth (Schwartz 2012: 249, 251). (Problems arise when this sort of analysis is applied to natural kind types like “tiger” or “horse,” but I will skip this point.)

The upshot of this is that science can and does discover necessary truths (Schwartz 2012: 252), and scientific investigation of the fundamental atomic, chemical or biological properties or structures of some objects yields, or has already yielded, the necessary metaphysical essence of that object, in the sense that, if the object truly has that essence, it will do so in all possible worlds. And these natural essences are independent of linguistic convention, unlike mere analytic truth.

Such, then, is the new analytic metaphysics, though it is a type of metaphysics different from traditional forms.

For example, synthetic a priori knowledge does not appear in it. Nor does it seem to be fundamentally opposed to the natural sciences in the way other metaphysical systems were.

Whether it will continue to be part of future analytic philosophy is an open question.

“Saul Kripke,” Wikipedia

“Naming and Necessity,” Wikipedia

“Rigid Designators,” Stanford Encyclopedia of Philosophy, 2006

“Rigid designator,” Wikipedia

“Actualism,” Stanford Encyclopedia of Philosophy, 2000 (rev. 2008)

“Modal Realism,” Wikipedia

“David Lewis,” Wikipedia

“David Lewis,” Stanford Encyclopedia of Philosophy, 2009

“Modal Logic,” Stanford Encyclopedia of Philosophy, 2000 (rev. 2009)

“Reference,” Stanford Encyclopedia of Philosophy, 2003 (rev. 2009)

Ted Parent, “Modal Metaphysics,” Internet Encyclopedia of Philosophy, 2012

Jason S. Baehr, “A Priori and A Posteriori,” Internet Encyclopedia of Philosophy, 2006

“David Malet Armstrong,” Wikipedia

Donnellan, Keith. 1966. “Reference and Definite Descriptions,” Philosophical Review 75: 281–304.

Kripke, Saul A. 1980 [1972]. Naming and Necessity (rev. edn.). Blackwell, Oxford.

Putnam, Hilary. 1979. “The Meaning of ‘Meaning,’” in Hilary Putnam, Mind, Language and Reality. Philosophical Papers. Volume 2, Cambridge University Press, Cambridge.

Schwartz, Stephen P. 2012. A Brief History of Analytic Philosophy: From Russell to Rawls. Wiley-Blackwell, Chichester, UK.

Wednesday, August 28, 2013

Non-Ergodicity and Trends and Cycles

Non-ergodicity is a tricky concept relevant to economics.

Yet any particular economy is not purely non-ergodic, but a complex mix of ergodic and non-ergodic elements. Non-ergodicity is a property of those processes or phenomena in which the time and/or space (ensemble) averages of certain outcomes or attributes of the system either do not coincide for an infinite series or do not converge as the finite number of observations increases. That is to say, there are no stable long-run relative frequencies, and even a large sample of the past does not reveal the future in a non-ergodic process, so that objective probabilities cannot be calculated for the likelihood of any specific future outcome.

But, as already noted, any real-world economy is a complex mix of both ergodic and non-ergodic processes, and the important point is that non-ergodicity does not mean that no trends, cycles, or oscillations occur in non-ergodic systems or in the economy at large.

We have, for example, no difficulty identifying high unemployment in the present or immediate past, rising unemployment, rising or falling real output growth, or trends like bear and bull markets in stocks, even though no objective probability score can be given for the future value of any one share.

What cannot be done in a pure non-ergodic system is to give an objective probability score for some specific future state of the system.
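The contrast can be illustrated with a small simulation (a toy sketch of my own, not drawn from Davidson). In the ergodic case, the time average along one path converges to the ensemble average; in the non-ergodic case, a single path’s time average never reveals the ensemble statistics:

```python
import random

random.seed(0)

# Ergodic example: i.i.d. fair coin flips (+1/-1). The time average
# along a single long path converges to the ensemble mean (0).
path = [random.choice([-1, 1]) for _ in range(100_000)]
time_avg_ergodic = sum(path) / len(path)

# Non-ergodic example: a "regime" of +1 or -1 is drawn ONCE at the
# start and then persists forever. The ensemble average over many
# paths is near 0, but the time average of any single path is +1 or
# -1, so no sample of one path's past reveals the cross-sectional
# (ensemble) statistics.
def one_path_time_avg():
    regime = random.choice([-1, 1])
    return sum(regime for _ in range(1000)) / 1000  # equals the regime

ensemble = [one_path_time_avg() for _ in range(2000)]
ensemble_avg = sum(ensemble) / len(ensemble)

print(round(time_avg_ergodic, 3))  # near 0.0
print(round(ensemble_avg, 3))      # near 0.0 across paths...
print(ensemble[0])                 # ...but each single path averages +1 or -1
```

The second process has perfectly well-defined ensemble statistics, yet an observer confined to one path can never estimate them from past data, which is the epistemological predicament Davidson describes.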

Things can become more complex because some processes have short-term stable relative frequencies but may not have such stability in the long run:
“[s]ome economic processes may appear to be ergodic, at least for short subperiods of calendar time, while others are not. The epistemological problem facing every economic decision maker is to determine whether (a) the phenomena involved are currently governed by probabilities that can be presumed ergodic – at least for the relevant future, or (b) nonergodic circumstances are involved.” (Davidson 1996: 501).
The long-run instability of certain human ensemble averages is an example of this.

Furthermore, some processes – and perhaps long term climate is one – may be so complex that they have elements that are ergodic and other elements that are non-ergodic, so that how one characterises the overall system can be an epistemic problem.

“Physical Probability versus Evidential Probability,” July 9, 2013.

“Keynes’s Interval Probabilities,” July 15, 2013.

“Davidson on ‘Reality and Economic Theory,’” July 10, 2013.

“Probability and Uncertainty,” July 11, 2013.

“A Classification of Types of Probability and Theories of Probability,” July 14, 2013.

“Is Long Term Climate Non-Ergodic?,” July 18, 2013.

Davidson, Paul. 1996. “Reality and Economic Theory,” Journal of Post Keynesian Economics 18.4: 479–508.

Monday, August 26, 2013

Victoria Chick on Money

Victoria Chick (Emeritus Professor of Economics, University College London) gives a nice talk here about the nature of money, delivered at the Positive Money Conference (January 2013).

The “real” exchange model of mainstream economics – with its emphasis on neutral money, the strict quantity theory of money, and money as a “veil” – is deeply flawed, and simply cannot properly understand money and its effects on economic systems.

Most insightful is the comment that the alleged “money illusion” is not necessarily an illusion at all, but a real worry given the profoundly non-neutral nature of money (e.g., think of debt deflation).

Sunday, August 25, 2013

Schwartz’s A Brief History of Analytic Philosophy: from Russell to Rawls: Chapter 3

Chapter 3 of Stephen P. Schwartz’s A Brief History of Analytic Philosophy: from Russell to Rawls (2012) examines the philosophy of Quine and the critics of logical positivism.

After World War II, Anglo-American analytic philosophy and epistemology were strongly influenced by logical positivism. But critics had already emerged. First, at Oxford University, John Austin and Gilbert Ryle were developing their “ordinary language” philosophy, which followed in the tradition of George E. Moore (1873–1958) and the later work of Ludwig Wittgenstein (1889–1951) (Schwartz 2012: 77).

Very quickly, it came to be seen that the verifiability principle was too extreme and that its own epistemological status was unclear (i.e., was it analytic or empirical?) (Schwartz 2012: 80).

Karl Popper, a critic of logical positivism, proposed an alternative epistemological system called critical rationalism to defend scientific knowledge, which nevertheless has been widely criticised in modern analytic philosophy (Musgrave 2004: 16–17). Popper argued that science uses the hypothetico-deductive method, with falsification (not verification) of hypotheses by empirical evidence the key to knowledge. In hypothetico-deduction, hypotheses are formed, predictions or conclusions are derived from hypotheses, and are then empirically tested, so that hypotheses can be falsified. Only hypotheses falsifiable in principle have a claim to be scientific (Schwartz 2012: 81).

In contrast to the logical positivists, however, Popper did not make his falsifiability principle a criterion for meaningfulness: what the falsifiability principle does is to demarcate scientific claims from metaphysical ones (and the latter may still be meaningful, but not scientific) (Schwartz 2012: 82).

Willard Van Orman Quine (1908–2000), an empiricist broadly influenced by the American pragmatist tradition in philosophy, attacked the epistemological foundations of logical positivism. The paradox here is that Quine himself was an empiricist – perhaps even a radical empiricist (Schwartz 2012: 95) – yet he attacked logical positivism (which at the time was considered the most extreme “no nonsense” form of empiricist philosophy) as being contaminated by apriorist rationalism and metaphysics (Schwartz 2012: 77–78), most notably in its continuing adherence to a strict analytic versus synthetic distinction in epistemology.

The attack on analyticity was made in Quine’s famous article “Two Dogmas of Empiricism” (1951) and later work. “Two Dogmas of Empiricism” is often understood to have argued that the idea of analyticity (or the analytic nature of a proposition) cannot be made clear, and that definition of the term falls back on “synonymy” which in turn falls back on “analyticity,” and so is ultimately circular (Schwartz 2012: 84–85).

Nevertheless, I think Quine’s argument is unconvincing, not least because it is committed to an untenable verbal behaviourism. Schwartz (2012: 86) concludes that modern analytic philosophers continue to use the analytic versus synthetic distinction, but cannot do so with a “clear conscience” – a view which I think is unwarranted, since Quine never successfully overthrew either the distinction or the meaningful concept of analyticity.

Quine’s broader philosophy was a type of radical empiricism (Schwartz 2012: 95) or, as some have called it, a “hyperempiricism.” In brief, Quine’s view of philosophy can be summarised as follows:
(1) rejection of the strict analytic–synthetic distinction;

(2) an epistemological holism, or the view that the totality of knowledge is a web of belief;

(3) a naturalised epistemology, or the view that the theory of knowledge is part of science and the question of how human beings form their beliefs is an issue for psychology (Schwartz 2012: 99–100);

(4) the Duhem–Quine thesis and the view that science is underdetermined (Schwartz 2012: 88–89);

(5) a view of the natural sciences as a useful tool for prediction;

(6) the idea of indeterminacy of radical translation, and

(7) the idea that philosophy is continuous with natural science, in the sense that even speculative metaphysics is part of the human web of belief, but it is ultimately an empirical question for science (Schwartz 2012: 96–99).
Quine sees all knowledge as in principle capable of revision, even logic and mathematics (Schwartz 2012: 88).

Many also think that Quine’s epistemological holism and his use of the Duhem–Quine thesis have been confirmed by Thomas Kuhn’s book The Structure of Scientific Revolutions (1962) (Schwartz 2012: 91).

Quine’s version of pragmatism inspired a number of later American philosophers, including the neo-pragmatists Nelson Goodman, Richard Rorty and Hilary Putnam (Schwartz 2012: 101).

“Willard Van Orman Quine,” Stanford Encyclopedia of Philosophy, 2010

Robert Sinclair, “Quine’s Philosophy of Science,” Internet Encyclopedia of Philosophy, 2009

Chase B. Wrenn, “Naturalistic Epistemology,” Internet Encyclopedia of Philosophy, 2005

Stefanie Rocknak, “Quine on the Analytic/Synthetic Distinction,” Internet Encyclopedia of Philosophy, 2013

“Naturalized Epistemology,” Stanford Encyclopedia of Philosophy, 2001

“Underdetermination of Scientific Theory,” Stanford Encyclopedia of Philosophy, 2009

“Karl Popper,” Stanford Encyclopedia of Philosophy, 1997 (rev. 2013)

John R. Wettersten, “Karl Popper and Critical Rationalism,” Internet Encyclopedia of Philosophy, 2007

“Falsifiability,” Wikipedia

Musgrave, A. E. 2004. “How Popper (might have) Solved the Problem of Induction,” in P. Catton and G. Macdonald (eds), Karl Popper: Critical Appraisals. Routledge, Abingdon, Oxon, England. 16–27.

Schwartz, Stephen P. 2012. A Brief History of Analytic Philosophy: From Russell to Rawls. Wiley-Blackwell, Chichester, UK.

Saturday, August 24, 2013

Quine and the Analytic–Synthetic Distinction

I am taking a quick detour through analytic philosophy and epistemology at the moment, as a type of prolegomena to economic methodology.

In 1951, Willard Van Orman Quine (1908–2000) published the now famous paper “Two Dogmas of Empiricism” (1951; reprinted in Quine 1981), in which he argued that the conventional idea of analyticity (or the analytic nature of a proposition) cannot be defended, and that the distinction between analytic and synthetic truths is not clear cut.

“Two Dogmas of Empiricism” is often said to be one of the most important papers in analytic philosophy of the late 20th century, though Quine continued to develop his epistemological views later in life, so that he modified or shifted the arguments used in “Two Dogmas” (Creath 2004: 47).

In “Two Dogmas of Empiricism,” Quine examines the intensional definitions or meanings given to the concepts of “analyticity” and “synonymy.”

Quine complains that the process of definition necessary for understanding “analytic truth” rests on the concept of “synonymy,” and that affirming or establishing that these two linguistic forms are proper synonyms is “far from clear” (Quine 1951: 25). Moreover, Quine asserts that appeals to “synonymy” supposedly fall back on “analyticity,” so that we have a circular argument.

Quine therefore argues that all attempts to define “analyticity” fall back on intensional terms in a way that is viciously circular, and that intensional definitions do “not hold the key” to the concepts of synonymy and analyticity (Quine 1951: 27). Ultimately, intensional terms like “analyticity” must be defended in extensional terms, or, that is to say, in terms of their reference to verbal behaviour (Glock 1996: 204).

Quine therefore concluded that the standard ideas of analyticity and analytic truth were indefensible, and the analytic–synthetic distinction unclear.

Many responses to Quine were made (Grice and Strawson 1956; Putnam 1962; Quinton 1967; Glock 1996; Nimtz 2003; Gutting 2009: 11–30).

Glock argues that, although the process by which “analyticity” is defined is circular, it is nevertheless not a vicious form of circularity (Glock 1996: 204; Glock 2003: 75). Quine’s demand that an intensional concept like “analyticity” needs to be reduced to extensional ones is unreasonable and unnecessary (Glock 2003: 75).

We can consider the following proposition:
(1) All bachelors are unmarried.
It is not possible to deny the truth of this proposition without simply redefining one of the words, and the definition of “analyticity” in terms of synonymy is not unjustified if intensional meanings can be sustained without being reduced to extensional verbal behaviour.

Quine’s complaints, then, about the circularity involved in defining “analyticity” cannot be sound, nor are they sufficient to overthrow the definition of analyticity in terms of synonymy.

Quine himself later denied that his major criticism of the concept of “analyticity” in “Two Dogmas of Empiricism” was simply that attempts to define it are circular (indeed, Glock 2003: 77 contends that Quine dropped the “circularity” charge as early as 1953). Other philosophers seem to accept this, and claim instead that Quine’s real fundamental complaint was that the term “analyticity” must be analysed in a behavioural sense.

So with Quine’s behaviourism very much the relevant background theory, Quine’s objection appears to be that neither “analyticity” nor “synonymy” can be reduced to verbal behaviour or behavioural criteria (Gibson 1996: 99; Creath 2004: 49; Gutting 2009: 21; Hylton 2006: 183). This view was confirmed when critics charged that Quine’s standard for the definitions of these terms was impossibly high, and Quine responded by saying precisely that he wished “no more, after all, than a rough characterization in terms of dispositions to verbal behavior” (Quine 1960: 207).

So it is clear that Quine wants a behaviourist semantics (Gibson 1996: 99). Quine also redefined analytic sentences to mean those that all people in a community of speakers learn are true by learning the meanings of the words involved (Gibson 1996: 100). But, Quine argued, neither this behaviourist semantical definition nor a “popular” or ordinary language notion of analyticity (Elugardo 1997: 15) can provide a strict scientific clarification that can defend the intensional semantic one that justifies the necessary verbal truth of analytic propositions (Gibson 1996: 100).

In essence, Quine’s attack on analyticity is to be understood in the logical positivist tradition of the verification principle: how is the term “analyticity” to be related to the empirical verbal behaviour of human beings, as judged by a methodological behaviourism? (Creath 2004: 49). The paradox, as Creath points out, is that:
“Quine is pushing against Carnap the very demands that Carnap had pushed against the metaphysicians” (Creath 2004: 49).
But Quine’s whole attempt to reject strict analytic truth would seem to collapse once we recognise that (1) the verification principle cannot be accepted, and (2) the whole behaviourist project is unsound (Gutting 2009: 21; Burgess 2004: 51–52).

Ultimately, then, Quine’s attempt to reject the analytic versus synthetic distinction is a failure.

These conclusions are broadly in line with the arguments of Quine’s critics, who find that analytic truths do exist, but they are, as in conventional empiricist epistemology, trivial or non-informative (Putnam 1962; Nimtz 2003).

For example, Putnam argued that there is indeed an analytic versus synthetic distinction but that it is ultimately a trivial one (Putnam 1962: 361).

Burgess, John P. 2004. “Quine, Analyticity and Philosophy of Mathematics,” The Philosophical Quarterly 54.214: 38–55.

Creath, Richard. 1990. Dear Carnap, dear Van: The Quine-Carnap Correspondence and Related Work. University of California Press, Berkeley, CA and London.

Creath, Richard. 2004. “Quine on the Intelligibility and Relevance of Analyticity,” in Roger F. Gibson, (ed.). The Cambridge Companion to Quine. Cambridge University Press, Cambridge, UK and New York. 47–64.

Elugardo, R. 1997. “Analytic/Synthetic, Necessary/Contingent, and a priori/a posteriori: Distinction,” in Peter V. Lamarque (ed.), Concise Encyclopedia of Philosophy of Language. Pergamon, New York. 10–19.

Gibson, Roger F. 1996. “Quine’s Behaviorism,” in William O’Donohue and Richard F. Kitchener (eds.), The Philosophy of Psychology. Sage, London. 96–107.

Glock, Hans-Johann. 1996. “Necessity and Normativity,” in Hans Sluga and David G. Stern (eds.). The Cambridge Companion to Wittgenstein. Cambridge University Press, Cambridge and New York. 198–225.

Glock, Hans-Johann. 2003. Quine and Davidson on Language, Thought and Reality. Cambridge University Press, Cambridge.

Grice, H. P. and P. F. Strawson. 1956. “In Defence of a Dogma,” Philosophical Review 65: 141–158.

Gutting, Gary. 2009. What Philosophers Know: Case Studies in Recent Analytic Philosophy. Cambridge University Press, Cambridge.

Hylton, P. 2006. “W. V. Quine (1908–2000),” in A. P. Martinich and David Sosa (eds.), A Companion to Analytic Philosophy. Blackwell, Malden, Mass. and Oxford. 181–204.

Nimtz, C. 2003. “Analytic Truths – Still Harmless After All These Years?,” in H. J. Glock, K. Gluer, and G. Keil (eds.), Fifty Years of Quine’s ‘Two Dogmas’. Rodopi, Amsterdam and New York. 91–118.

Putnam, Hilary. 1962. “The Analytic and the Synthetic,” Minnesota Studies in the Philosophy of Science 3: 358–397.

Putnam, Hilary. 1975. “The Analytic and the Synthetic,” in Hilary Putnam, Mind, Language and Reality. Philosophical Papers. Volume 2. Cambridge University Press, Cambridge. 33–69.

Quine, Willard Van Orman. 1951. “Two Dogmas of Empiricism,” Philosophical Review 60: 20–43.

Quine, Willard Van Orman. 1960. Word and Object. M.I.T. Press, Massachusetts.

Quine, Willard Van Orman. 1981. “Two Dogmas of Empiricism,” in From a Logical Point of View. Harvard University Press, Cambridge, MA. 20–46.

Quine, Willard Van Orman. 1991. “Two Dogmas in Retrospect,” Canadian Journal of Philosophy 21.3: 265–274.

Quinton, Anthony. 1967. “The a priori and the analytic,” in P. F. Strawson (ed.), Philosophical Logic. Oxford University Press, Oxford. 107–128.

Friday, August 23, 2013

Schwartz’s A Brief History of Analytic Philosophy: from Russell to Rawls: Chapter 2

Chapter 2 of Stephen P. Schwartz’s A Brief History of Analytic Philosophy: from Russell to Rawls (2012) examines the logical positivists and early Wittgenstein.

The Vienna Circle (or Ernst Mach Society) was a group of German-speaking scientists, mathematicians and philosophers based around the University of Vienna from 1922 until the mid-1930s, and included the following:
Moritz Schlick (1882–1936)
Rudolf Carnap (1891–1970), from 1926
Otto Neurath (1882–1945)
Friedrich Waismann (1896–1959)
Gustav Bergmann (1906–1987)
Hans Hahn (1879–1934)
Victor Kraft (1880–1975)
Karl Menger (1902–1985)
Philipp Frank (1884–1966)
Marcel Natkin
Olga Hahn-Neurath (1882–1937)
Theodor Radakovic

Herbert Feigl
Kurt Gödel
Hans Hahn, Otto Neurath, and Rudolf Carnap wrote the manifesto of the Vienna Circle in 1929, but, as noted above, the group had existed since 1922.

The members of the Vienna Circle had drawn inspiration from Ludwig Wittgenstein’s (1889–1951) book the Tractatus Logico-Philosophicus (Logical-Philosophical Treatise), which was first published in German in 1921, and then in an English translation prepared in Cambridge in 1922.

From 1926, Wittgenstein himself attended meetings of the Vienna Circle, although relations were not exactly amicable, not only because Wittgenstein did not get along with Rudolf Carnap (Schwartz 2012: 51), but also because of philosophical disagreements.

The importance of the Tractatus (as it is usually called) lies in the philosophy of language – in how language represents the world (Schwartz 2012: 52) – and particularly in its picture theory of meaning. While the world is composed of all kinds of things and states of affairs, our language represents reality, and there is a fundamental structure embodied in formal logic that gives us an isomorphic relationship between our language (which we use in thought) and reality (Schwartz 2012: 52).

A simple and independent, or “atomic,” proposition represents a simple fact about the world, and larger complex or compound propositions are built up out of the truth functions of atomic propositions (Schwartz 2012: 53).

The fundamental logical concepts we call tautologies and self-contradictions are recognisable by the formal use of truth tables (Schwartz 2012: 53). Wittgenstein thought that tautologies are without meaning in the sense that they contain no new information (Schwartz 2012: 53). Thus the necessary truth they provide is “empty and formal,” so that Wittgenstein also saw mathematics as being tautological (Schwartz 2012: 53–54).
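Wittgenstein’s point that tautologies and self-contradictions can be recognised mechanically from truth tables is easy to sketch (a toy illustration of my own, not from Schwartz): enumerate every assignment of truth values and check the formula on each row.

```python
from itertools import product

def is_tautology(formula, num_vars):
    # Build the full truth table: a tautology is true on every
    # row, i.e. under every assignment of truth values.
    return all(formula(*row) for row in product([True, False], repeat=num_vars))

# "p or not p": true in every row, hence (on Wittgenstein's view)
# certain but empty of factual content.
print(is_tautology(lambda p: p or not p, 1))           # True

# "p or q": false when p and q are both false, so informative.
print(is_tautology(lambda p, q: p or q, 2))            # False

# A self-contradiction, "p and not p", is false in every row.
print(all(not (p and not p) for p in [True, False]))   # True
```

The mechanical character of the check is exactly why the necessary truth of a tautology is “empty and formal”: no fact about the world is consulted, only the rows of the table.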

Wittgenstein also held that the “totality of true propositions is the whole of natural science,” so that philosophy itself is not science, but merely a technique aiming at “logical clarification of thoughts” (Schwartz 2012: 58).

The upshot of all this was the epistemologically revolutionary view (at least at the time) that analytic propositions, while they are certain, are tautologies and provide no informative new knowledge (Schwartz 2012: 54). This view invigorated the radical empiricism of the Vienna Circle, and led to the emergence of logical positivism.

The logical positivists came to think that there are ultimately two sources of human knowledge: (1) logical reasoning (yielding analytic a priori knowledge) and (2) empirical experience (yielding synthetic a posteriori knowledge). Like Frege, they rejected the existence of Kantian synthetic a priori knowledge, and saw mathematics as analytic tautologies (Schwartz 2012: 61).

The unusual twist in logical positivist epistemology is the verification criterion of meaningfulness, which, in the form stated by Ayer, holds that any non-analytic proposition must be empirically verifiable, either in practice or at least in principle, to be meaningful (Schwartz 2012: 60–61). If a proposition is not verifiable, then it is meaningless or without cognitive content. The logical positivists used the verification principle to reject metaphysics, theology and ethics as meaningless (Schwartz 2012: 61), a rather extreme view to say the least.

At the heart of the logical positivist program was the belief that many of the traditional issues of metaphysics are just confusions caused by improper use or misunderstanding of language (Schwartz 2012: 63). One of the most important of these confusions was the idea that existence is a property, when it is not a property at all, but simply expressed by the logic of quantifiers (Schwartz 2012: 63).

Logical positivism was spread to the English-speaking world by A. J. Ayer in his now classic book Language, Truth, and Logic (1936). Ayer had visited the Vienna Circle in 1933, and learned the principles of the logical positivists, though he perhaps oversimplified them in the process (Schwartz 2012: 59).

Ayer’s logical positivism had these seven tenets:
(1) that metaphysics, theology, ethics and aesthetics are meaningless by the verification principle;

(2) metaphysical issues are pseudo-problems caused by unclear or informal and misleading use of language;

(3) logic and mathematics are formal truths but tautologies;

(4) all propositions are divided into two classes: (i) analytic, a priori and necessarily true, but tautologous, and (ii) synthetic, a posteriori and contingent;

(5) all science forms a single unified system, and the social sciences use the same methods as the natural sciences;

(6) reductionism (either phenomenalist or physicalist), and

(7) that ethical statements have no cognitive content, but express attitudes and emotions (Schwartz 2012: 61–67).
After the 1930s, however, many of the leading logical positivists came to modify or reject many of their core beliefs, and other philosophers such as the later Wittgenstein and the “ordinary language” philosophers at Oxford came to attack its principles (Schwartz 2012: 69).

Curiously, in 1932 – the year before Ayer’s own visit to Vienna – Willard Van Orman Quine had also visited the logical positivists, but, while Ayer was to become a leading exponent of logical positivism, Quine emerged after WWII as a severe critic.

“Ludwig Wittgenstein,” Stanford Encyclopedia of Philosophy, 2002 (rev. 2009)

Duncan J. Richter, “Ludwig Wittgenstein (1889–1951),” Internet Encyclopedia of Philosophy, 2004

“Vienna Circle,” Stanford Encyclopedia of Philosophy, 2006 (rev. 2011),

Mauro Murzi, “Vienna Circle,” Internet Encyclopedia of Philosophy, 2004

“Logical Empiricism,” Stanford Encyclopedia of Philosophy, 2011 (rev. 2011)

Mauro Murzi, “Rudolf Carnap (1891–1970),” Internet Encyclopedia of Philosophy, 2001

“Moritz Schlick,” Stanford Encyclopedia of Philosophy, 2013

“Russell’s Logical Atomism,” Stanford Encyclopedia of Philosophy, 2005 (rev. 2009)

“Wittgenstein’s Logical Atomism,” Stanford Encyclopedia of Philosophy, 2004 (rev. 2013)

“Logical Atomism,” Wikipedia

Ayer, A. J. 1936. Language, Truth, and Logic. Gollancz Ltd, London.

Schwartz, Stephen P. 2012. A Brief History of Analytic Philosophy: From Russell to Rawls. Wiley-Blackwell, Chichester, UK.

Thursday, August 22, 2013

Schwartz’s A Brief History of Analytic Philosophy: From Russell to Rawls: Chapter 1

Stephen P. Schwartz’s A Brief History of Analytic Philosophy: from Russell to Rawls (2012) is a very useful treatment of the origin and development of modern Anglo-American analytic philosophy, and is one of a number of recent general histories of the subject (see Beaney 2013; Glock 2008; Martinich and Sosa 2006; Stroll 2000; Soames 2003a; and Soames 2003b).

The background that Schwartz provides in Chapter 1 of his book actually illuminates Keynes’s own early philosophical ideas and the context of Keynes’s famous A Treatise on Probability (1921). I sketch the main points of Chapter 1 from Schwartz’s study in what follows.

Bertrand Russell (1872–1970) was the founder of analytic philosophy, but he drew on important work in mathematical logic by the German Gottlob Frege (1848–1925).

Russell and George E. Moore (1873–1958), another founder of analytic philosophy, attended Cambridge University in the 1890s, and came under the influence of British Hegelian philosophers.

Russell went through a number of philosophical phases as follows:
(1) a period of influence from British idealism;

(2) a period of Platonist realism (1901–1904);

(3) the period of logical realism (1905–1912), and

(4) the period of logical atomism (1913–1918).
When Moore and Russell broke with Idealism, they had a brief flirtation with Platonic realism (Schwartz 2012: 28), and then Russell moved towards “logical atomism,” which is recognisably an early form of analytic philosophy.

In 1903, two important books appeared. Both of these works profoundly influenced the young John Maynard Keynes. The first (and most important to Keynes) was Moore’s Principia Ethica, an influential treatise on ethics; the second was Russell’s Principles of Mathematics (1903) (written in the latter’s Platonic realist phase).

Russell’s book was concerned with the foundations of mathematics, and in it Russell argued that mathematics could be deduced from a very small number of principles, a view which is the hallmark of the philosophy of mathematics called logicism.

But the ground for Russell’s logicist interpretation of mathematics had already been laid by Gottlob Frege in his Begriffsschrift (1879), Die Grundlagen der Arithmetik (The Foundations of Arithmetic; 1884), and the Grundgesetze der Arithmetik (Basic Laws of Arithmetic; vol. 1: 1893; vol. 2: 1903), in which works Frege founded modern logic and argued against Kant’s view that arithmetic statements were synthetic a priori knowledge. Against this Kantian view, Frege held that arithmetic was analytic a priori, and tried to demonstrate how a new logic could be used to deduce mathematics from a set of given axioms.

Russell’s early work uncovered a flaw in Frege’s system, now known as Russell’s paradox, but Russell continued his work in the 1900s in an attempt to solve the paradox and complete Frege’s vision.
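The paradox concerns the set of all sets that are not members of themselves: if that set is a member of itself, it is not, and vice versa. It can be mimicked with predicates (an informal analogy of my own; Python functions are not sets):

```python
# Russell's paradox mimicked with predicates: define R(x) as
# "x does not apply to itself", then ask whether R applies to R.

def applies_to_itself(pred):
    return pred(pred)

def russell(pred):
    # R(x) is true exactly when x does NOT apply to itself.
    return not applies_to_itself(pred)

# If russell(russell) had a value v, we would need v == (not v),
# which is impossible; operationally the call never terminates,
# and Python's recursion guard raises an error instead.
try:
    russell(russell)
except RecursionError:
    print("Contradiction: R(R) would have to equal not R(R)")
```

The analogy shows why Frege’s unrestricted set formation was fatal: nothing stops the paradoxical definition from being written down, so the restriction has to come from the theory itself, as in Russell’s theory of types.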

Russell and Alfred North Whitehead (1861–1947) worked on the culmination of their logicist program in mathematics, the three-volume book Principia Mathematica (the volumes of which were published in 1910, 1912, and 1913 respectively). In the Principia Mathematica, Russell and Whitehead attempted to construct a set of axioms and rules by means of symbolic logic from which all mathematics could be proven.

Though it is generally thought that Russell’s strict logicist program failed (given the problems raised by Gödel’s incompleteness theorems), the consensus today is still that most of classical mathematics can be derived from pure logic and set theory (Schwartz 2012: 19), so in one important respect the essence of Russell’s logicist program was successful.

Thus the main legacy of Russell’s logicism was the rejection of Kantian synthetic a priori knowledge. For after it was shown that mathematics was not an example of synthetic a priori knowledge, one of the greatest arguments made by Rationalist apriorists was undercut and refuted.

Another influence that Russell’s logicism had was on John Maynard Keynes. Keynes’s own “logical theory of probability” was itself a logicist attempt to put probability and inductive inference on a sound footing by using a system of formal logic (Gillies 2000: 27). It is notable that Russell himself was deeply involved in helping Keynes with his work on probability (Gillies 2000: 27), although the initial inspiration for Keynes’s work on probability came from Moore’s Principia Ethica (Gillies 2000: 28).

The other major philosophical achievement of Russell covered by Schwartz is Russell’s article “On Denoting” (Mind 14 [1905]: 479–493), a landmark in the analytic philosophy of language. In this, Russell developed a theory of “definite descriptions,” or phrases that pick out one specific object, such as the “30th Prime Minister of the United Kingdom” or “my copy of Keynes’s General Theory.” These are distinguished from proper names, and philosophical problems arise when definite descriptions refer to non-existent objects, such as “the present king of France” or the “current president of Canada,” and propositions such as “the present king of France is bald.”

For Russell, these “definite description” propositions were merely informal ways of expressing existential statements: for example, the proposition “the present king of France is bald” is really to be understood as “there is one and only one present king of France and that one is bald” (Schwartz 2012: 24). Such existential statements are simply false in terms of their truth value, so that Russell was able to reject Meinong’s questionable ontology of non-existent objects (Schwartz 2012: 23).
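Russell’s analysis can be written as a single quantified formula (standard first-order notation; the predicate letters K and B are my own abbreviations for “is a present king of France” and “is bald”):

```latex
\exists x \, \big( K(x) \;\wedge\; \forall y \, ( K(y) \rightarrow y = x ) \;\wedge\; B(x) \big)
```

Since nothing satisfies K, the existential claim as a whole is simply false, and no shadowy non-existent king needs to be posited to explain what the sentence means.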

When Russell turned to more general philosophy in the 1910s, he continued the British empiricist tradition of Locke, Berkeley and Hume (Schwartz 2012: 34), and modern analytic philosophy, for better or worse, has largely continued to shun both Hegelianism and modern Continental philosophy.

“Bertrand Russell,” Stanford Encyclopedia of Philosophy, 1995 (rev. 2010)

Carey, Rosalind. “Russell’s Metaphysics,” Internet Encyclopedia of Philosophy, 2008

Klement, Kevin C. “Russell’s Paradox,” Internet Encyclopedia of Philosophy, 2005

“Gottlob Frege,” Stanford Encyclopedia of Philosophy, 1995 (rev. 2012)

Klement, Kevin C. “Gottlob Frege (1848–1925),” Internet Encyclopedia of Philosophy, 2005

“Frege’s Theorem and Foundations for Arithmetic,” Stanford Encyclopedia of Philosophy, 1998 (rev. 2013)

Lotter, Dorothea. “Frege and Language,” Internet Encyclopedia of Philosophy, 2005

“Philosophy of Mathematics,” Stanford Encyclopedia of Philosophy, 2007 (rev. 2012)

“Principia Mathematica,” Stanford Encyclopedia of Philosophy, 1996 (rev. 2010)

“Logicism,” Wikipedia

Beaney, Michael. 2013. The Oxford Handbook of the History of Analytic Philosophy. Oxford University Press, Oxford.

Gillies, Donald. 2000. Philosophical Theories of Probability. Routledge, London and New York.

Glock, Hans-Johann. 2008. What is Analytic Philosophy? Cambridge University Press, Cambridge, UK and New York.

Keynes, John Maynard. 1921. A Treatise on Probability. Macmillan, London.

Martinich, A. P. and David Sosa (eds.). 2006. A Companion to Analytic Philosophy. Blackwell, Malden, Mass. and Oxford.

Preston, Aaron. 2012. Review of Stephen P. Schwartz, A Brief History of Analytic Philosophy: From Russell to Rawls

Russell, Bertrand. 1905. “On Denoting,” Mind 14: 479–493.

Schwartz, Stephen P. 2012. A Brief History of Analytic Philosophy: From Russell to Rawls. Wiley-Blackwell, Chichester, UK.

Soames, Scott. 2003a. Philosophical Analysis in the Twentieth Century. The Age of Meaning. Volume 2. Princeton University Press, Princeton, N.J. and Oxford.

Soames, Scott. 2003b. Philosophical Analysis in the Twentieth Century. The Dawn of Analysis. Volume 1. Princeton University Press, Princeton, N.J. and Oxford.

Stroll, Avrum. 2000. Twentieth-Century Analytic Philosophy. Columbia University Press, New York.

Wednesday, August 21, 2013

Why Did WWII Lift America Out of Depression?

Although I would add some caveats and a few more significant reasons, none other than the libertarian Robert Higgs (more or less) hits the nail on the head:
“Notwithstanding the initial availability of much unemployed labor and capital, the mobilization became a classic case of guns displacing both butter and churns. So why, apart from historians and economists misled by inappropriate and inaccurate statistical constructs, did people—evidently almost everyone—think that prosperity had returned during the war?

The question has several answers. First, everybody with a desire to work was working. After more than 10 years of persistently high unemployment and the associated insecurities (even for those who were working), full employment relieved a lot of anxieties. Although economic well-being deteriorated after 1941, civilians were probably better off on the average during the war than they had been during the 1930s. Second, the national solidarity of the war effort, though decaying after the initial upsurge of December 7, 1941, helped to sustain the spirits of many who otherwise would have been angry about the shortages and other inconveniences. For some people the wartime experience was exhilarating even though, like many adventures, it entailed hardships. Third, some individuals (for instance, many of the black migrants from the rural South who found employment in northern and western industry) were better off, although the average person was not. Wartime reduction of the variance in personal income—and hence in personal consumption—along with rationing and price controls, meant that many people at the bottom of the consumption distribution could improve their absolute position despite a reduction of the mean. Fourth, even if people could not buy many of the things they wanted at the time, they were earning unprecedented amounts of money. Perhaps money illusion, fostered by price controls, made the earnings look bigger than they really were. In any event, people were building up bank accounts and bond holdings; while actually living worse than before, they were feeling wealthier. Which brings us to what may be the most important factor of all: the performance of the war economy, despite its command-and-control character, broke the back of the pessimistic expectations almost everybody had come to hold during the seemingly endless Depression.
In the long decade of the 1930s, especially its latter half, many people had come to believe that the economic machine was irreparably broken. The frenetic activity of war production—never mind that it was just a lot of guns and ammunition—dispelled the hopelessness. People began to think: if we can produce all these planes, ships, and bombs, we can also turn out prodigious quantities of cars and refrigerators.”
Robert Higgs. 1992. “Wartime Prosperity? A Reassessment of the U.S. Economy in the 1940s,” Independent Institute, March 1.
First, the command economy put an end to unemployment and, more than that, brought many discouraged workers (the “hidden unemployed”) back into the labour force. We can see the reduction in unemployment in the graph below, in which the estimates are those of Darby (1976: 8), whose better method for calculating the figures is explained here.
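Darby’s adjustment, broadly speaking, reclassifies people employed on federal emergency relief programmes (the WPA, CCC, and so on) from “unemployed” to “employed,” which substantially lowers the measured 1930s unemployment rates. A minimal sketch of the arithmetic, with purely hypothetical round numbers:

```python
def unemployment_rate(unemployed, labour_force):
    """Unemployment rate as a percentage of the labour force."""
    return 100.0 * unemployed / labour_force

# Purely hypothetical round numbers, for illustration only (millions of workers).
labour_force = 55.0
officially_unemployed = 8.0   # a count that treats relief workers as unemployed
relief_workers = 3.0          # workers on emergency relief programmes

official = unemployment_rate(officially_unemployed, labour_force)
darby_style = unemployment_rate(officially_unemployed - relief_workers, labour_force)

print(round(official, 1), round(darby_style, 1))  # → 14.5 9.1
```

The same labour force and the same people, but the Darby-style count shows a markedly lower rate, which is the whole substance of the dispute over the 1930s figures.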

The sheer absurdity of asserting that “no economic good” of any kind came out of the war is exposed by this point alone. Many people no longer had to experience the grinding poverty of unemployment, and they obtained work on the home front when the war started. And they obviously chose that work over being unemployed.

For anyone who subscribes to a subjective theory of value, it is obvious that many individuals must have obtained subjective value (or an “economic good”) from the newly created civilian employment they received during the war, even if nobody would seriously doubt that the hours in many of these jobs were long and the work difficult.

Secondly, the war allowed the accumulation of savings and money income. The point overlooked by Higgs is that this also allowed both businesses and consumers to finally complete the process of deleveraging, paying private debt down to a low level. That is a fundamental point: the debt deflationary drag on the US economy was eliminated during the war, as we can see in this graph (in the “private debt” line).

Thirdly, the war fundamentally shifted business expectations from deep pessimism to the strong optimism that emerged after the conflict ended.

With the end of the war, when expectations had become optimistic, there was a private investment and consumption boom, which was in part fuelled by the drawing down of savings earned in the war. That outcome is a Keynesian story.

Darby, M. R. 1976. “Three-and-a-Half Million U.S. Employees Have Been Mislaid: Or, an Explanation of Unemployment, 1934–1941,” Journal of Political Economy 84.1: 1–16.

Higgs, Robert. 1992. “Wartime Prosperity? A Reassessment of the U.S. Economy in the 1940s,” Independent Institute, March 1.

Higgs, Robert. 1992. “Wartime Prosperity? A Reassessment of the U.S. Economy in the 1940s,” The Journal of Economic History 52.1: 41–60.

The Success of America’s Command Economy in WWII

Here is an astute author on America’s command economy during the Second World War:
“In 1940 and 1941 the economy was recovering smartly from the Depression, but in the latter year the recovery was becoming ambiguous, as substantial resources were diverted to war production. From 1942 to 1944 war production increased rapidly. Although there is no defensible way to place a value on the outpouring of munitions, its physical dimensions are awesome. From mid-1940 to mid-1945 munitions makers produced 86,338 tanks; 297,000 airplanes; 17,400,000 rifles, carbines, and sidearms; 315,000 pieces of field artillery and mortars; 4,200,000 tons of artillery shells; 41,400,000,000 rounds of small arms ammunition; 64,500 landing vessels; 6,500 other navy ships; 5,400 cargo ships and transports; and vast amounts of other munitions. Despite countless administrative mistakes, frustrations, and turf battles, the command economy worked. But, as always, a command economy can be said to work only in the sense that it turns out what the authorities demand. The U.S. economy did so in quantities sufficient to overwhelm enemy forces.” (Higgs 1992).
That is correct.

But who is the author of this passage? Some “statist”?

It is none other than the libertarian Robert Higgs, who is cited ad nauseam by other libertarians, but I doubt whether many bother to cite this passage. (As an aside, I have a sneaking admiration for Higgs for reasons which I will perhaps explain in another post.)

Now once it is understood that command economies were mostly run on the basis of “planners’ sovereignty” and not “consumer sovereignty,” the debate about whether command economies “work,” in either a theoretical or an empirical sense, becomes much more interesting than the tired and grossly exaggerated themes of the socialist calculation debate launched by Mises.

Some command economies failed. Others succeeded. The former communist states like the Soviet Union did not operate their economies on the principle of “consumer sovereignty”: these were command economies with production decisions made by planners. If the output of a command economy is planned by administrators according to their own designs, then the success of their planning must be measured by whether the output produced actually matched those plans.

The Western command economies during WWII in America, Canada, the UK, Australia and New Zealand were very successful indeed: they more or less produced what was planned and won the war for Western democratic civilisation.

Higgs, Robert. 1992. “Wartime Prosperity? A Reassessment of the U.S. Economy in the 1940s,” Independent Institute, March 1.

The Evidence for Hayek the Stable MV Theorist is still Feeble

Lawrence H. White responds to me here and cites three sources from Klausinger (2012: 34, n. 123) as evidence for Hayek’s alleged support for a stable MV.

Let us review the sources:
(1) First, no precise reference is given to Geldtheorie und Konjunkturtheorie (1929) showing where Hayek “advocated a constant M instead of MV.” Furthermore, it does not necessarily even follow that Hayek supported central banks meeting the demand for high-powered money, just because he “advocated a constant M.”

(2) Secondly, yet again no specific reference is given for Investigations into Monetary Theory (1925–1929), which, in any case, Hayek never even completed or published (Klausinger 2012a: 45, n. 1).

(3) Thirdly, we have a reference to Prices and Production.

But I have already cited the relevant passage from this work and demonstrated that it does not say what the free bankers think it says:
“Hayek the Stable MV Theorist?,” August 13, 2013.
All in all, the idea that Hayek was a “stable MV” man before 1932/1933 looks fairly tenuous to me. Maybe there is some convincing evidence that he was, but I have not yet seen it.
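A terminological note may help here. In the standard equation of exchange (the notation is the textbook one, not anything specific to Hayek’s texts):

```latex
MV = PY
```

where M is the money stock, V its velocity of circulation, P the price level, and Y real output. A policy of stabilising MV (total nominal spending) requires the central bank to expand M whenever V falls, whereas a policy of simply holding M constant lets MV fall with V. That is precisely why advocacy of “a constant M” is not, by itself, evidence of support for a stable MV.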

Klausinger, Hansjoerg. 2012. “Introduction,” in Hansjoerg Klausinger (ed.). The Collected Works of F. A. Hayek. Volume 7. Business Cycles. Part I. University of Chicago Press, Chicago.

Klausinger, Hansjoerg (ed.). 2012a. The Collected Works of F. A. Hayek. Volume 8. Business Cycles. Part II. University of Chicago Press, Chicago.

Steve Keen on the Australian Economy

Steve Keen is interviewed here on the Australian economy:

Tuesday, August 20, 2013

Horwitz on the Post-WWII Boom

Horwitz gets it badly wrong on two of his major points.

First, there is the assertion that “most economists” who were influenced by Keynesian economics “were predicting a huge recession or depression” after WWII.

Here Horwitz has simply repeated a libertarian myth. If we want to gauge the opinion of Keynesian economists, the obvious starting point is Keynes himself.

Keynes was in fact optimistic about the post-war US economy:
“Keynes harshly rejected the risk of post-war stagnation, holding that because of Social security there would be a large reduction in private saving and so that would be no problem.” (Colander and Landreth 1996: 202).
Secondly, those who claim on the basis of Samuelson’s famous article of 1943 called “Full Employment after the War” that he and all Keynesians were predicting disaster have clearly never properly read that article or the book it was in.

Samuelson’s doom-and-gloom prediction was limited to a hypothetical scenario in 1943 in which “were the war to end suddenly within the next 6 months, were we again planning to wind up our war effort in the greatest haste, to demobilize our armed forces, to liquidate price controls, to shift from astronomical deficits to even the large deficits of the thirties – then there would be ushered in the greatest period of unemployment and industrial dislocation which any economy has ever faced.” Needless to say, that is not the prediction ascribed to Samuelson by his critics, and, moreover, it is clear that Samuelson (1943: 37) was opposed by many of his fellow Keynesian economists, who were optimistic about a post-war boom on the basis of “private demand alone.” Amongst these optimists were Alvin A. Hansen and Richard M. Bissell.

There is further discussion of this issue here:
“Keynesianism in America in the 1940s and 1950s,” January 22, 2011.

“The Post-1945 Boom in America,” July 15, 2011.

“Thomas E. Woods on Keynesian Predictions vs. American History: A Critique,” May 29, 2012.

“What Did Paul Samuelson really say about the Post-WWII US Economy?,” May 31, 2012.

“Paul Samuelson on the Post-1945 Boom,” January 4, 2013.

“Alvin Hansen Predicted the Post-1945 US Boom,” January 4, 2013.

“More on Alvin A. Hansen’s Prediction of a Post-1945 Boom,” January 6, 2013.
Secondly, it is actually the Austrian business cycle theory under which we should have expected and predicted a disastrous and protracted depression after WWII, in the sense of massive unemployment and the collapse of the capital structure.

Austrian capital theory sees the capital structure as incredibly fragile and holds that allegedly unsustainable distortions in that structure will lead to painful and protracted periods of recession and unemployment. On any Austrian view, there must have been massive malinvestment in the US during the years from 1941 to 1945. If the Austrian theory of capital were true, there should have been a devastating US depression after WWII as malinvestments were liquidated and unemployment soared. But instead the economy adjusted rapidly and boomed: real GDP did indeed fall as war output ended, but conversion occurred with remarkable speed and success.

In reality, the rapidity and comparative ease of the post-WWII US boom are devastating refutations of both Austrian capital theory and the Austrian business cycle theory.

The Broken Window Fallacy and WWII

The problems with the libertarian use of the broken window fallacy and WWII are illustrated well by these videos.

The problem with invoking the broken window fallacy in reference to WWII is this: it implies that all military action and wartime production was undertaken solely for the purpose of creating jobs and producing something. That is, it implies the war had no purpose in itself and was like the random destruction of a hooligan smashing windows in the belief that his vandalism is justified merely because people will be paid to fix the broken windows.

The libertarians invoking the broken window fallacy here are guilty of the truly bizarre inability to accept that
(1) US war spending was done in order to defeat Nazi, Italian, and Japanese fascism, and defend the democratic world from tyranny, and

(2) war spending did indeed have the concomitant consequence that US income, employment and output increased, even if that output was obviously not the type produced in peacetime.
These facts do not entail that any Keynesian economist advocates war or natural disaster as a way of stimulating the economy.

The absurd assumption of the libertarian critics is that the wartime economy must have been like a peacetime economy in all respects (something which I do not think any Keynesian has claimed), so that the discovery that it was not is then touted as some devastating argument against the real economic benefits that the US command economy did deliver. Amongst those benefits were (1) the increase in private sector income, which allowed a substantial reduction in the level of private sector debt (a major drag on the economy since 1929), and (2) the accumulation of both personal and corporate savings during the war that could be drawn down after it ended.

Moreover, the assertion that war employment contributed nothing to private sector growth is unconvincing. First, there is the scientific and technological advancement that occurred via employment, spending and R&D related to the war, such as jet propulsion, new aeronautic technologies, radar technology, nuclear technology, surgical innovations, and so on. These technologies proved lucrative for private sector businesses and still are.

Secondly, many industries were created or strongly developed in new ways in the war years, and proved just as important after the war ended, such as the aeronautics, motor vehicle, pharmaceutical and antibiotics industries. Many of the capital goods created in the war years were also useful after the war ended, either directly or with some adaptation.

The argument that wage and price controls distorted real GDP figures, while true, has little force, given that (1) much of the production in the war was not for the private sector but for the government sector, and (2) the private sector itself practises massive administration of prices even in peacetime, yet that does not stop people from buying what they want and shunning what they do not want.

Monday, August 19, 2013

Lee’s Post Keynesian Price Theory: Chapter 5

Chapter 5 of Frederic S. Lee’s Post Keynesian Price Theory (Cambridge, 1998) looks at the work of the economist Philip Andrews, who served as secretary of the Oxford Economists’ Research Group (OERG) and as chief statistician of the Nuffield College Social Reconstruction Survey, participated in the Courtauld Inquiry on business enterprises, and developed the theory of “competitive oligopoly.”

Philip Andrews’s research led him to conclude that many businesses’ average direct cost curves were horizontal, and that even the notion of a downward-sloping enterprise demand curve was problematic in manufacturing markets (Lee 1998: 101–102).

Moreover, many industrial markets were oligopolistic, used administered pricing, and engaged in competition not necessarily involving price adjustment (Lee 1998: 102).

Andrews held that both average direct costs and indirect costs will decline as a business increases its flow rate of output (Lee 1998: 105).

Andrews called the price set in this way, on the basis of cost of production plus a profit markup, the “normal cost price” (Lee 1998: 109). This involves the following concepts:
(1) a normal flow rate of output, determined by past experience and future expectations about sales;

(2) normal average direct costs and normal average indirect costs to calculate normal average total costs;

(3) the addition to normal average direct costs of a “costing margin” to cover normal average indirect costs and a profit margin (Lee 1998: 109).
But the profit margin is also constrained by the behaviour of competitors, and by “goodwill” relationships with suppliers and consumers (Lee 1998: 107–108). Some markets have a “price leader” that sets the market price because it is the business with the largest scale of production (Lee 1998: 112). Alternatively, trade associations allow businesses to share information about average normal costs and determine a common profit markup (Lee 1998: 112). The result of either of these is a stable administered price in many markets.
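Andrews’s procedure can be sketched numerically. The code below is only an illustrative reading of Lee’s summary, with invented figures; the essential point is that the price is computed from the normal flow rate of output, not from current sales, which is why it yields a stable administered price:

```python
def normal_cost_price(normal_output, direct_costs, indirect_costs, profit_margin):
    """One reading of Andrews's 'normal cost price': normal average direct
    costs plus a costing margin that covers normal average indirect costs
    and a profit margin (here taken as a fraction of normal average total costs)."""
    avg_direct = direct_costs / normal_output      # normal average direct costs
    avg_indirect = indirect_costs / normal_output  # normal average indirect costs
    costing_margin = avg_indirect + profit_margin * (avg_direct + avg_indirect)
    return avg_direct + costing_margin

# Invented figures: 10,000 units is the normal flow rate of output.
price = normal_cost_price(10_000, direct_costs=50_000,
                          indirect_costs=20_000, profit_margin=0.10)
print(round(price, 2))  # → 7.7
```

Because the denominator is the normal (expected) output, a temporary rise or fall in actual sales leaves the calculated price unchanged.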

A particularly interesting finding of Andrews relates to the prices of factor inputs: he found that reductions by suppliers in the prices of factor inputs do not necessarily induce more purchases of a factor by a producer if its sales are stagnant or falling (Lee 1998: 108).

As previous researchers had found, changes in normal cost pricing tend generally to be caused by changes in factor input costs.

Andrews, Philip Walter Sawford. 1949. Manufacturing Business. Macmillan, London.

Lee, Frederic S. 1998. Post Keynesian Price Theory. Cambridge University Press, Cambridge and New York.

Saturday, August 17, 2013

Lee’s Post Keynesian Price Theory: Chapter 4

The fourth chapter of Frederic S. Lee’s Post Keynesian Price Theory (Cambridge, 1998) turns to the work of the Oxford Economists’ Research Group (OERG), an empirical research group instituted at Oxford University in 1936, which involved economists such as Hubert Henderson, James Meade, and George L. S. Shackle.

The research conducted by this group involved direct interviews with UK businessmen, and one of the many topics of interest was price setting.

One of the findings of the research was that business people were often ignorant of standard marginalist neoclassical price theory (Lee 1998: 87):
“… the problem was that businessmen were seeing common phenomena in a different light than the members of the OERG. The most important example of this, according to Robert Hall …, was that businessmen saw prices as non-market-clearing and not even designed to clear the market, while the members of the OERG saw prices as market-clearing” (Lee 1998: 88; emphasis in original).
The members of the OERG quickly realised that they had uncovered novel and important facts about price setting in the private sector:
“In fact severe questioning by the Group failed to uncover any evidence that the businessmen paid any attention to marginal revenue or costs in the sense defined by economic theory, and that they had only the vaguest ideas about anything remotely resembling their price elasticities of demand. The Oxford economists were shocked, to say the least. But what caught their attention even more was the relative stability of prices over the trade cycle, and this became the phenomenon which really needed to be explained.” (Lee 1998: 89).
Yet another finding was that the interest rate had considerably less influence on investment than standard economic theory held, and that uncertainty was an overriding factor in the investment decision – an insight which was of particular interest to Shackle (Lee 1998: 88).

The economists R. L. Hall and Charles J. Hitch found themselves much concerned by the findings of the OERG on price setting, and after further research published their now classic paper “Price Theory and Business Behaviour” (Oxford Economic Papers 2 [1939]: 12–45) to explain the evidence they found. Hall and Hitch concluded that businessmen did not generally estimate the elasticity of the demand curves for their products or equate marginal revenue with marginal cost, but instead set prices by means of “full cost pricing” (Lee 1998: 90), which was their terminology for what are now called “administered prices.”

Hall and Hitch found that full cost pricing was determined by the following factors:
(1) direct material and labour costs per unit of output;

(2) indirect costs at an expected level of output, and

(3) a markup for profit. (Lee 1998: 90).
However, given either the competition in a particular industry or the presence of a “price leader,” the profit margin and hence the price of products would tend to be similar in many markets even with full cost pricing (Lee 1998: 90–91). Thus the profit markup and profit margin tended to be stable and conventional (Lee 1998: 92), so that full cost prices are not profit-maximising prices and are set before the many transactions in a given period.
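Hall and Hitch’s three factors translate into a simple rule, sketched below with invented numbers; this is only one possible reading of Lee’s summary. Note that nothing in the rule involves demand elasticities or marginal revenue:

```python
def full_cost_price(unit_direct_cost, indirect_costs, expected_output, profit_markup):
    """Full cost price: unit direct (material and labour) costs, plus
    indirect costs spread over the expected level of output, plus a
    conventional profit markup on the resulting full cost."""
    unit_indirect = indirect_costs / expected_output
    full_cost = unit_direct_cost + unit_indirect
    return full_cost * (1 + profit_markup)

# Invented figures; the markup is a stable convention, not a maximising choice.
p = full_cost_price(unit_direct_cost=4.0, indirect_costs=30_000,
                    expected_output=15_000, profit_markup=0.25)
print(p)  # → 7.5
```

Since the price is derived in advance of transactions from costs and a conventional markup, a shift in demand changes the quantity sold, not the price.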

Furthermore, businesses found that frequent price changes were unpopular with customers, that price reductions would often not induce significant additional sales, and that price wars were to be feared (Lee 1998: 91). The business expectation that (1) price reductions would be followed by competitors but that (2) price increases would not be followed therefore tended to produce price stability in full cost pricing markets.

Lee concludes by noting that Hall and Hitch’s full cost pricing research was developed by Philip Andrews in his own theory of competitive oligopoly.

Philip Pilkington has some related discussion of neoclassical price theory here:
Philip Pilkington, “Teleology and Market Equilibrium: Manifesto for a General Theory of Prices,” Fixing the Economists, August 16, 2013.

Philip Pilkington, “Quantity Rationing as Business Strategy: Furthering the Case for a General Theory of Pricing,” Fixing the Economists, August 17, 2013.
Hall, R. L. and C. J. Hitch. 1939. “Price Theory and Business Behaviour,” Oxford Economic Papers 2: 12–45.

Lee, Frederic S. 1998. Post Keynesian Price Theory. Cambridge University Press, Cambridge and New York.