Saturday, January 23, 2021

Klein and Melvin (1982) Spuriously Argue That Banking Is a Natural Monopoly Because of Economies of Scale in Building Public Confidence

A third type of spurious argument is that banking is a natural monopoly because of economies of scale in building public ‘confidence.’ A well-known version is the Klein-Melvin (1982) argument that public confidence in the value of a currency depends on confidence-building expenditures that involve certain fixed costs, the presence of which implies that one bank could always produce confidence more cheaply than two or more could. Klein and Melvin go on to suggest that the government would have an advantage in providing this confidence because its ability to tax implies that it does not need to hold reserves to maintain confidence as private banks would. There are a number of problems with this argument:

It depends on the assumption that competitive banks would issue inconvertible currencies, but competitive issues would actually be convertible, as discussed already, and convertibility undermines the whole thrust of the Klein-Melvin analysis. If competition leads to a credible convertibility guarantee, there should be no lack of public confidence regarding the value of the currency, and the problem that Klein and Melvin discuss does not arise. . . . 

To put it mildly, it is very difficult to make out a serious case that government intervention in the monetary system has helped promote confidence in the currency. The very problem that Klein and Melvin discuss — the question of confidence in an inconvertible currency — only arises in the first place because governments have suppressed the convertibility ‘guarantee’ of the value of the currency that private competition would have provided. Far from providing the public with currencies in which they could have confidence or promoting public confidence in private currencies, government policy has often been designed to compel the public to use currencies in which they had little or no confidence.

—Kevin Dowd, Competition and Finance: A Reinterpretation of Financial and Monetary Economics (Houndmills, UK: Macmillan Press, 1996), 202-203.


Friday, January 22, 2021

The Diamond-Dybvig Model Is Sometimes Described as a “Bubble” or “Sunspot” Theory of Bank Runs

If bank runs were always confined to banks that were already (pre-run) insolvent [that is, when a bank’s liabilities exceed its assets], then runs would not be a problem, but, instead, largely salutary. In other industries, an insolvent firm’s creditors whose debts are overdue, and who wish to cut their losses, can legally force the firm into liquidation through an involuntary bankruptcy proceeding. A run on an insolvent bank serves the same function as an involuntary bankruptcy proceeding: it is an action by the bank’s creditors, namely its depositors (or note-holders, but for simplicity we will speak only of depositors), that forces the bank into liquidation. Unlike the rule in a bankruptcy, the assets do not go pro rata to all creditors of equal standing, but instead go preferentially to those who are first in line to redeem their claims (a possibly problematic feature of deposit contracts that will concern us later). The run is salutary in that it closes the insolvent bank immediately, before the bank squanders even more depositor wealth and goes even further into the red. The run cuts the depositors’ potential losses in the aggregate. The threat of a run, like the threat of bankruptcy in other industries, provides useful discipline. It forces banks to invest smartly, and to work vigorously to avoid insolvency or even the appearance of insolvency.

A problem arises, however, if depositors with imperfect information sometimes run on banks that are not (pre-run) insolvent. In an influential article, Douglas Diamond and Philip Dybvig (1983) emphasized that a run itself can cause a bank to default that would not otherwise have defaulted. A bank forced to liquidate assets hastily may have to accept less for them than they would otherwise be worth, an event known as suffering “fire-sale” losses. A bank run can thereby be a self-reinforcing equilibrium: if enough other depositors are running, the bank will incur large fire-sale losses, default becomes likely, and it becomes each depositor’s own best strategy to run.³ There is a “me-first” scramble as each depositor tries to redeem his claim ahead of others, before the bank’s funds are exhausted. In Diamond and Dybvig’s model, discussed in more detail below, the bank attempting to meet redemption demands by more than a certain proportion of its depositors will incur fire-sale losses so large that its default is a certainty. Any event that makes people anticipate a run, therefore, makes them anticipate insolvency, and so does, in fact, trigger a run. As in a rational speculative “bubble,” the induced outcome validates the anticipation, even if the anticipation is triggered by an intrinsically irrelevant event like the appearance of sunspots. The Diamond-Dybvig model is accordingly sometimes described as a “bubble” or “sunspot” theory of bank runs.

__________

³ In game-theoretic terms, the bank run is a Nash equilibrium. 

—Lawrence H. White, “Should Government Play a Role in Banking?” in The Theory of Monetary Institutions (Malden, MA: Blackwell Publishers, 1999), 121-122.
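
To make the two-equilibria logic concrete, here is a minimal numeric sketch of the coordination game White describes. The parameter values (fire-sale value, maturity value, face value of the deposit claim) are illustrative assumptions, not Diamond and Dybvig’s actual parameterization.

```python
# Illustrative bank-run game in the spirit of Diamond-Dybvig (1983).
# Parameter values are hypothetical, chosen only to exhibit the two
# Nash equilibria (nobody runs / everybody runs) described above.

N = 100          # depositors, each holding a claim with face value 1
MATURE = 1.5     # value per unit of assets if held to maturity (t = 2)
FIRESALE = 0.8   # value per unit of assets if liquidated early (t = 1)

def payoff_run(k):
    """Expected payoff to one of k runners: the bank can raise at most
    FIRESALE * N at t = 1, so with random places in line a runner is
    paid the face value 1 with probability servable / k."""
    if k == 0:
        return 0.0
    servable = min(k, FIRESALE * N)
    return servable / k

def payoff_wait(k):
    """Payoff to a waiter when k depositors run: paying k face-value-1
    claims consumes k / FIRESALE units of assets; whatever remains
    matures and is split among the N - k waiters."""
    if k >= N:
        return 0.0
    assets_left = max(N - k / FIRESALE, 0.0)
    return MATURE * assets_left / (N - k)

# No-run equilibrium: if nobody else runs, waiting (1.5) beats running (1.0).
assert payoff_wait(0) > payoff_run(1)
# Run equilibrium: if everyone else runs, running (0.8) beats waiting (0.0).
assert payoff_run(N) > payoff_wait(N - 1)
```

Any signal that shifts expectations from the first equilibrium to the second, sunspots included, triggers the run and validates itself, which is the sense in which the model is a “sunspot” theory.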


Edgeworth (1888) Initiated the Idea that Economies from RESERVE HOLDINGS Could Lead to a NATURAL MONOPOLY in Banking

A perennial issue in the banking literature is whether banking is a natural monopoly—are there economies of scale in banking such that only one firm can survive in the competitive equilibrium? In one way or another natural monopoly issues underlie many discussions of banking, and it is important to clarify them, because many people still believe, not only that banking is a natural monopoly, but that the monopolization of the currency supply and other aspects of present-day central banking can be justified on natural monopoly grounds. An industry can be said to be a natural monopoly if the average production cost is lower for one firm than it would be for two or more firms, and this condition requires that the production technology exhibits increasing returns to scale, to the point where all market demand is satisfied. There is nothing to prevent a second firm entering an industry characterized by natural monopoly, but average costs would be higher while both firms continued to supply the market, and these higher costs would presumably indicate scope for one firm to ‘eliminate’ the other in a mutually profitable way—bribing it to leave the market, for instance, or taking it over and then closing down its production facilities. It follows that while we might observe more than one firm in the industry over some short period, we would not expect that state of affairs to persist in the long run.

There are several reasons why banks might face increasing returns to scale that could conceivably lead to natural monopoly. One factor is economies from reserve holdings. The underlying idea goes back to Edgeworth (1888) and it has been developed since in a number of places (e.g., Porter 1961; Niehans 1978:182-4; Baltensperger 1980:4-9; Sprenkle 1985, 1987; Selgin 1989b:6-12; Glasner 1989a). These economies are based on a well-known result that, subject to certain plausible conditions, a bank’s optimal reserves rise with the square root of its liabilities, implying that the bank’s optimal reserve ratio falls as the bank gets bigger. Given that reserves are costly to hold, a larger bank therefore faces lower average reserve costs.

—Kevin Dowd, “Is Banking a Natural Monopoly?” Laissez-faire Banking, Foundations of the Market Economy (London: Taylor & Francis e-Library, 2003), 76-77.
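
A back-of-the-envelope sketch of the square-root rule Dowd invokes. The scale constant below is an arbitrary stand-in for the redemption-volatility and safety-margin parameters that, in the cited models, pin down the level of reserve demand.

```python
# Edgeworth's square-root rule, illustratively: optimal reserves grow
# with the square root of liabilities, so the optimal reserve RATIO
# falls as a bank grows. K is a hypothetical scale factor.
from math import sqrt

K = 2.0

def optimal_reserves(liabilities):
    return K * sqrt(liabilities)

for L in (100, 10_000, 1_000_000):
    R = optimal_reserves(L)
    print(f"liabilities {L:>9,}: reserves {R:8,.0f}  ratio {R / L:.4f}")

# The scale economy behind the natural-monopoly claim: one bank holding
# all liabilities needs fewer reserves than two banks holding half each.
whole = optimal_reserves(1_000_000)          # 2,000
split = 2 * optimal_reserves(500_000)        # ~2,828, i.e., sqrt(2) times more
print(whole, split)
```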


Free Banking Is the Necessary Precondition for Discovering Whether or Not Banking Is a NATURAL MONOPOLY

A standard proposition of modern microeconomic theory is that if there exist significant economies of scale, that is, if long-run average costs decline over the range of output demanded in the market, then only one firm will survive. Such a firm would be a “natural monopoly.” It has often been argued that banking may be a natural monopoly, and if so, the most efficient approach is to restrict competition and allow only a single issuer of banknotes, the central bank. In short, if there are large cost economies in banking, then free banking may not be an optimal solution. 

The most basic rebuttal to the natural monopoly argument is to point out that costs cannot be known to the producer (much less to the economist) prior to the process of production, and a firm’s costs are likely to change when the market structure changes. From this it follows that

a governmental producer of money is not an efficient natural monopolist unless he can prevail in conditions of free entry. . . . The only operational proof that a common money is more efficient than currency competition and that the government is the most efficient provider of the common money would be to permit free currency competition. (Vaubel 1986, 933, 935)

That is, free banking is the necessary precondition for discovering whether or not banking is a natural monopoly. In the absence of such competition, those who claim that banking is a natural monopoly are guilty of making an unsupportable assertion. 

—Larry J. Sechrest, Free Banking: Theory, History, and a Laissez-Faire Model (Auburn, AL: Ludwig von Mises Institute, 2008), 162-163.
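
For concreteness, here is the textbook cost arithmetic behind the proposition Sechrest summarizes, using an assumed fixed-plus-constant-marginal-cost function (an illustration, not anything from his text):

```python
# Natural-monopoly (cost-subadditivity) sketch: with a fixed cost F and
# constant marginal cost c -- an assumed cost function -- one firm
# serving total market demand Q is always cheaper than two firms,
# because any split of Q duplicates the fixed cost.
F, c, Q = 100.0, 2.0, 50.0   # hypothetical fixed cost, marginal cost, demand

def total_cost(q):
    return F + c * q if q > 0 else 0.0

one_firm = total_cost(Q)                               # 200.0
two_firms = total_cost(0.6 * Q) + total_cost(0.4 * Q)  # 300.0, for any split
assert one_firm < two_firms
```

Sechrest’s point, via Vaubel, is precisely that nothing guarantees real banks face such a cost function; only free entry could reveal whether they do.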


Thursday, January 21, 2021

The Fed Was “Supposed” to Command Such Superior Information As Ought to Have Allowed It to See the Subprime Crisis Brewing

The most recent financial crisis has allowed the Fed to achieve one of its most impressive public relations feats, to wit: convincing the public that the crisis, instead of supplying more proof of its inadequacy, shows that it’s now working better than ever. To accomplish this, the Fed has had to argue that, had it not been for its interventions, the outcome would have been much worse. Typical of this spin is San Francisco Fed President John C. Williams’s (2012) observation that, at the end of 2008, the U.S. economy was

teetering on the edge of an abyss. If the panic had been left unchecked, we could well have seen an economic cataclysm as bad as the Great Depression, when 25 percent of the workforce was out of work. . . . Why then didn’t we fall into that abyss in 2008 and 2009? The answer is that a financial collapse was not—I repeat, not—left unchecked. The Federal Reserve did what it was supposed to do.

But did the Fed really do everything “it was supposed to do” to contain the crisis? Is it even certain that its interventions made the crisis no worse than it would have been otherwise? There are good reasons for believing that the correct answer to both questions is “no.”

The Fed was, first of all, “supposed” to command such superior information as ought to have allowed it to see the crisis, or at least some trouble, brewing. After all, according to the San Francisco Fed’s “Dr. Econ” (FRBSF 2001), “Federal Reserve operations and structure provide the System with some unique insights into the health of the financial system and the economy,” providing it “with firsthand knowledge of the conditions of financial institutions.” In fact, Fed officials never saw what hit them. As the Federal Open Market Committee’s (FOMC) 2006 transcripts make clear, that committee was convinced at that late date both that a housing market downturn was unlikely and that, if such a downturn occurred, it would not do much damage to the rest of the economy. New York Fed President Timothy Geithner, for example, observed that “we just don’t see troubling signs yet of collateral damage, and we are not expecting much,” while Janet Yellen did not hesitate to congratulate outgoing Fed Chairman Alan Greenspan for leaving “with the economy in such solid shape.”

—George Selgin, “Operation Twist-the-Truth: How the Federal Reserve Misrepresents Its History and Performance,” in Money: Free and Unfree (Washington, DC: Cato Institute, 2017), 278-279.


Wednesday, January 20, 2021

Why Did the 1930s Depression Last So Long? Purdue Economist James A. Estey Explained Why in Strictly Austrian Terms

Estey described in detail how the shape of the economy’s structure of production is distorted by monetary inflation, initially creating a rapid expansion in the capital-goods industry. To keep the boom going, the banks and the central banks would have to advance further credits. “This further increase, however, would have to be greater than the initial one, for the general price level and the general level of incomes have now risen. . . . It is only a question of time until the changed structure becomes impossible and the former one must be restored.” Estey believed that the expansion of bank credit to finance World War II was a classic example of Hayek’s thesis, if one considers war goods as capital.

Why did the 1930s depression last so long? Estey gave his explanation in strictly Austrian terms:

Theoretically, there should be a steady transfer of workers and nonspecific capital from the abandoned higher stages to these lower ones. In fact, this process is slow. Shorter processes still have to be started from the beginning. Goods still have to pass through the necessary steps. In addition, it is possible only gradually, as successive stages are reached in the passage of goods to the consumer, to absorb the labor and nonspecific capital released from longer and more roundabout processes. Moreover, this delay is increased by the uncertainty of producers in respect to appropriate methods in the shortened process where a relatively smaller amount of capital and a relatively larger amount of labor are needed.

    In brief, workers and mobile resources are released from the longer processes faster than they can be absorbed in the shorter, and the consequence is a growing volume of unemployment. . . . The attempt to restore the normal levels of consumption sets up a further disturbing factor—that is deflation and a fall in prices—which lengthens the depression and adds to the obstacles facing recovery.

Estey’s coverage of Hayek’s theory is extensive. He characterized the Hayekian model as “ingenious.”

—Mark Skousen, The Structure of Production, new rev. ed. (New York: New York University Press, 2015), 87.


Tuesday, January 19, 2021

The Very Essence of the Market Economy Is the SPECIFICITY of Capital Goods (Some Are More Specific in Use Than Others)

What kind of conclusions can we make about the general nature of capitalistic production? Certainly, all goods are transformed into final consumption through multi-stage development, but not all are multi-staged in terms of final use. Many capital goods are highly specific in their use, especially in the earlier stages of output (raw materials, producers’ goods, etc.). Their distance from final consumption can generally be identified. 

It is a different matter for nonspecific goods, such as paper products, electricity, telephones, trucks, and other goods used in a wide variety of ways up and down the industrial sectors. Actually, all goods vary in their degree of specificity. Some goods are extremely specific, others are very nonspecific and are used in virtually all sectors of the economy. But rather than abandon the idea of stages entirely, it is better to try to identify in a general way where along the time-structure hierarchy these nonspecific goods belong.

The very essence of the market economy is the specificity of capital goods. Suppose, for the sake of argument, that all capital goods were completely nonspecific and totally versatile. This would mean that they could be transferred from one project to another at no cost. If this were the case, there would be no structure to the economy, and therefore no lags, no structural unemployment of resources or labor—in short, no business cycle. In reality, capital goods are specific in nature, although some are more specific in use than others. This is the crux of macroeconomic analysis, and the reason that Lachmann and others stress the importance of the heterogeneity of capital goods (and, I might add, the labor market, although to a lesser extent). But the degree to which producers’ goods and machinery are nonspecific—that is, usable in more than one stage—is the degree to which the economy will be flexible in adjusting to monetary disequilibrium.

—Mark Skousen, The Structure of Production, new rev. ed. (New York: New York University Press, 2015), 148-149.


Monday, January 18, 2021

Complementarity Is a Condition of Plan Equilibrium (Stability); Substitutability Is a Condition of Plan Disequilibrium (Change)

Lachmann’s world is consciously similar to Schumpeter’s world of “creative destruction,” except that for Lachmann the innovating entrepreneur is not disrupting some preexisting general equilibrium. His world is one in which a continuous evolutionary process of changing patterns of capital complementarity is occurring. At any point in time, different entrepreneurs will have different and frequently incompatible production plans. Over time the market process will validate some and invalidate others. Lachmann sees the market process as tending to integrate the capital structure, in other words, rendering plans more consistent, although he is careful to add that the forces of equilibrium may be overwhelmed by the forces of change. 

The concept of the capital structure is built out of the notion of capital complementarity. A production plan is a construction of the human mind. As such it exhibits a necessary internal consistency. From the point of view of the individual planner, it might be said that the plan is always in equilibrium. The plan is always in equilibrium in the sense that every planner, being rational, may always be counted on to do the best that he can, given all the relevant constraints, where such constraints include the time available to adjust to any unexpected changes. That is to say, at any given point of time any individual planner is in equilibrium with respect to the world as he sees it at that point of time. All productive resources employed in that plan stand in complementary relationships to one another. Between any two points of time, during which unexpected changes will necessarily have occurred, resource substitutions will have been made in an attempt to adjust to the changes. Complementarity is a condition of plan equilibrium (stability); substitutability is a condition of plan disequilibrium (change). 

—Peter Lewin, Capital in Disequilibrium: The Role of Capital in a Changing World, 2nd ed. (Auburn, AL: Ludwig von Mises Institute, 2011), 134-135.


Sunday, January 17, 2021

Understanding Capital Combinations Entails an Understanding of the Concepts of Complementarity and Substitutability

According to Lachmann, though the capital-stock is heterogeneous, it is not amorphous. The various components of the capital stock stand in sensible relationship to one another because they perform specific functions together. That is to say, they are used in various capital combinations. If we understand the logic of capital combinations, we give meaning to the capital structure and, in this way, we are able to design appropriate economic policies or, even more importantly, avoid inappropriate ones.

Understanding capital combinations entails an understanding of the concepts of complementarity and substitutability. These concepts pertain to a world in which perceived prices are actual (disequilibrium) prices, in the sense that they reflect inconsistent expectations, and in which changes that occur cause protracted visible adjustments. Capital goods are complements if they contribute together to a given production plan. A production plan is defined by the pursuit of a given set of ends to which the production goods are the means. As long as the plan is being successfully fulfilled, all of the production goods stand in complementary relationship to one another. They are part of the same plan. The complementarity relationships within the plan may be quite intricate and no doubt involve different stages of production and distribution.

Substitution occurs when a production plan fails (in whole or in part). When some element of the plan fails, a contingency adjustment must be sought. Thus some resources must be substituted for others. This is the role, for example, of spare parts or excess inventory. Thus, complementarity and substitutability are properties of different states of the world. The same good can be a complement in one situation and a substitute in another. Substitutability can only be gauged to the extent that a certain set of contingency events can be visualized. There may be some events, such as those caused by significant technological changes, that, not having been predictable, render some production plans valueless. The resources associated with them will have to be incorporated into some other production plan or else scrapped; they will have been rendered unemployable. This is a natural result of economic progress which is driven primarily by the trial-and-error discovery of new and superior outputs and techniques of production. What determines the fate of any capital good in the face of change is the extent to which it can be fitted into any other capital combination without loss in value. Capital goods are regrouped. Those that lose their value completely are scrapped. That is, capital goods, though heterogeneous and diverse, are often capable of performing a number of different economic functions.

—Peter Lewin, “Hayek and Lachmann,” in Elgar Companion to Hayekian Economics, ed. Roger W. Garrison and Norman Barry (Cheltenham, UK: Edward Elgar Publishing, 2014), 169-170.


(POST 2 OF 2) The Structure of Production Under Central Planning: Skousen’s Contribution to the Socialist Calculation Debate

Let us use the example of shoe production to demonstrate the inherent problems with central planning [and specifically under the “random pricing” scenario]. Suppose a price is set too low for the production of cowhide, causing inventories to decline and a shortage to arise. As the incentive to produce cattle declines, cattlemen fail to build up their herds for future slaughter. The central board realizes its mistake and raises the price for cowhide. This is the right decision, but it takes time for cattle producers to rebuild their herds. Meanwhile, there is a current shortage of cowhides, even at the higher price. The next level of production, leather making, is severely restricted in its output because of the cowhide shortage. It must look for substitutes, or expensive foreign imports, but the search may not be entirely successful, especially in the short run. It also takes considerable time to find synthetic leather or other substitutes. 

Now we come to the final stage of shoe production. Suppose a shoe factory has been given a quota (demand) to produce ten thousand shoes in a given time period to satisfy consumer demand. The factory possesses all the tools, labor, and materials necessary to achieve its quota, except that, because of the leather shortage, the shoe manufacturer has only enough leather to produce five thousand shoes.

How many shoes will be produced? Only five thousand. Half the consumer demand will be met. Output is always limited by the availability of each complementary capital good. As Menger puts it, “With respect to given future time periods, our effective requirements for particular goods of higher order are dependent upon the availability of complementary quantities of the corresponding goods of higher order.”

In sum, the shortage of cowhide leads to a shortage of leather and eventually to a shortage of shoes. In addition, those capital goods and labor associated with the shoe industry will be underemployed because of the shortage. Thus, delays, shortages, and underemployment of labor and resources are inevitable under such a random pricing system. The shortage problem is intensified even more when the process of transformation involves a wide variety of complementary factors. Thus a shortage of a widely demanded complementary factor can create more havoc as the production process moves toward final consumption.

—Mark Skousen, The Structure of Production, new rev. ed. (New York: New York University Press, 2015), 173-174.
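
The arithmetic of the example reduces to a fixed-proportions (“min”) condition: output at a stage is capped by the scarcest complementary input. A minimal sketch, in which the non-leather quantities are hypothetical fill-ins:

```python
# Output is limited by the scarcest complementary input (cf. Menger's
# "complementary quantities"). Leather figures follow Skousen's example;
# the labor and machine numbers are hypothetical.
requirements = {"leather": 1, "labor_hours": 1, "machine_hours": 1}  # per pair
available = {
    "leather": 5_000,        # the upstream cowhide shortage binds here
    "labor_hours": 10_000,
    "machine_hours": 10_000,
}

output = min(available[i] // requirements[i] for i in requirements)
print(output)                # 5,000 -- half the 10,000-pair quota
idle = {i: available[i] - output * requirements[i] for i in requirements}
print(idle)                  # the complementary inputs left underemployed
```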


(POST 1 OF 2) The Structure of Production Under Central Planning: Skousen’s Contribution to the Socialist Calculation Debate

The concept of the structure of production is a valuable tool in the ongoing debate over economic calculation in the socialist economy. In the 1930s, a major dispute developed between the Austrian economists, led by Ludwig von Mises and Friedrich A. Hayek, and the socialist economists, led by Oskar Lange and Fred M. Taylor. In his critique of socialism, Mises argued that central planning would not work because, without competition between firms, prices could not logically be calculated, and without market prices, firms could not produce goods and services efficiently.

Oskar Lange rebutted Mises’ view by contending that central planning boards under socialism could determine prices through “trial and error.” A price could be set and the interplay of supply and demand observed. If shortages occurred, the price should be raised. If surpluses abounded, the price should be lowered. Lange even went so far as to state, “Let the Central Planning Board start with a given set of prices chosen at random. . . . If the quantity demanded of a commodity is not equal to the quantity supplied, the price of that commodity has to be changed.”

Surprisingly, most economists concluded that Lange and the other “market” socialists adequately answered the Austrian challenge, although the issue is still debated today. 

However, the literature on the socialist calculation debate tends to ignore in large measure the problems arising out of the structure-of-production concept. The debate seems to focus on a “micro” approach of supply-demand factors of individual consumer and factor markets rather than the critical interrelation of economic processes. Specifically, how could a central planning board successfully use a “trial and error” method at each stage of production wherein each successive level of output depends on earlier produced inputs and working capital? After all, the setting up of a socialist state, whereby government controls the means of production, does not eliminate the intermediate stages of output. . . . 

Setting prices at random would undoubtedly create massive shortages and surpluses. But the deficiencies in one market are never isolated—they lead to disequilibrium in other related markets before and after the specific market. Moreover, it takes time to eliminate shortages and surpluses—the industrial system under central planning cannot create equilibrium overnight. Random pricing would therefore result in delays and shortages in the long and complex chain of production.

—Mark Skousen, The Structure of Production, new rev. ed. (New York: New York University Press, 2015), 172-173.
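
Lange’s “trial and error” rule is simple to state as an algorithm, which makes Skousen’s objection easy to see: give production any lag at all and the board spends round after round chasing equilibrium through persistent shortages. Everything below (the linear curves, the three-period lag, the step size) is an illustrative assumption, not Lange’s specification.

```python
# Tatonnement with a production lag: raise the price on a shortage,
# lower it on a surplus, starting from a randomly chosen price.
import random

random.seed(1)
price = random.uniform(1.0, 20.0)   # "a given set of prices chosen at random"
pipeline = [0.0, 0.0, 0.0]          # output responds to 3-period-old prices

def demanded(p): return max(100.0 - 4.0 * p, 0.0)
def supplied(p): return max(6.0 * p - 10.0, 0.0)

for t in range(12):
    q_s = supplied(pipeline.pop(0))   # output planned three periods ago
    pipeline.append(price)
    shortage = demanded(price) - q_s
    price += 0.02 * shortage          # Lange's rule: adjust toward balance
    print(f"t={t:2d}  price={price:6.2f}  shortage={shortage:7.1f}")

# The adjustment has the right sign, yet shortages persist for many
# periods because supply answers stale prices -- and in a multi-stage
# chain each stage inherits the unresolved shortages of the stage before.
```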