
Gresham’s Law

George Selgin, University of Georgia

Introduction

The proposition known as “Gresham’s Law” is often stated baldly as “bad money drives good money out of circulation.” Ancient and medieval references to this tendency were informed by circumstances in which lightened, debased, or worn coins had assigned to them the same official value as coins containing greater quantities of precious metal. In this context the tendency, which had yet to be elevated to the status of an economic “law,” was one in which “bad” coins alone, that is, coins possessing a relatively low metallic content (“intrinsic value”), continued to be offered in routine payments, while “good” coins were withdrawn into hoards, exported, or reduced through clipping or “sweating” (that is, purposeful erosion by chemical or mechanical means) to an intrinsic value no greater than that possessed by their “bad” counterparts.

Later writers, however, tended to stretch the meaning of Gresham’s Law by invoking it as an argument against bimetallism and the competitive production of money and as the reason for the historical substitution of paper for metallic moneys; and a recent work even goes so far as to treat the law as being nothing more than an instance of the rule that rational agents prefer less expensive means of accomplishing their ends to dearer ones. As we shall see, although some modern interpretations of Gresham’s Law can be regarded as legitimate extensions of the law’s original (and perfectly valid) meaning, others are entirely unwarranted. Indeed, inappropriate applications of Gresham’s Law have caused some economists to err in the opposite direction, by jumping to the conclusion that, because in these applications the law appears to be contradicted by available empirical evidence, it must be altogether fallacious.

Attribution to Gresham and Ancient Status

The expression “Gresham’s Law” dates back only to 1858, when British economist Henry Dunning Macleod (1858, pp. 476-78) decided to name the tendency for bad money to drive good money out of circulation after Sir Thomas Gresham (1519-1579). However, references to such a tendency, sometimes accompanied by discussion of conditions promoting it, occur in various medieval writings, most notably Nicholas Oresme’s (c. 1357) Treatise on Money, and can even be found in much earlier works, including Aristophanes’ The Frogs, where the prevalence of bad politicians is attributed to forces similar to those favoring bad money over good.

As for Gresham himself, he observed “that good and bad coin cannot circulate together” in a letter written to Queen Elizabeth on the occasion of her accession in 1558. The statement was part of Gresham’s explanation for the “unexampled state of badness” England’s coinage had been left in following the “Great Debasements” of Henry VIII and Edward VI, which reduced the metallic value of English silver coins to a small fraction of what that value had been at the time of Henry VII. It was owing to these debasements, Gresham observed to the Queen, that “all your ffine goold was convayd ought of this your realm.”

Importantly, as Robert Giffen (1891, pp. 304-5) observed, Gresham made no direct reference to bimetallism or to “the analogous case of inconvertible paper when the paper drives the metal out of circulation.” However, Giffen (who appears here to have followed Jevons’s lead) errs in claiming that Gresham was “only responsible for the suggestion that bad coins … drive good ones of the same metal out of circulation,” for Gresham’s letter to Elizabeth clearly points to the debasement of silver coins as a factor leading to the disappearance of gold (Fetter 1932, pp. 490-1). It remains true nonetheless that treatments of Gresham’s Law as an argument against bimetallism, or against resort to any sort of paper money, are modern extensions of the law’s original meaning, and their merits (along with the implicit or explicit assumptions underlying them) must be assessed separately from those of early versions.

Correct and Incorrect Interpretations

That bad coins have in fact often tended to drive better coins of the same metal out of circulation is beyond dispute. Yet historical exceptions to this tendency have been observed. Thus even in its narrowest meaning Gresham’s Law must be said to hold only under particular conditions. What are these conditions, and why are they crucial?

These questions may best be answered by first considering those exceptional cases in which good coins appear to have driven out bad ones rather than vice versa. The most notable of such exceptions arose in the context of international trade, where, as Robert Mundell (1998) has observed, “strong” currencies, meaning ones that tended to retain their precious metal value over long periods of time, tended to dominate and drive out “weak” (that is, less reputable) ones: “The florins, ducats and sequins of the Italian city-states did not become ‘dollars of the Middle Ages’ because they were bad coins; they were among the best coins ever made.” Less well known but equally important exceptions to Gresham’s Law involved relatively rare instances of competitive coin production, one example of which was the competitive production of gold coins by private mints in California in the wake of the gold rush. Here as well it was the higher-quality coins that captured the market, allowing their makers to thrive while less reputable private mints failed (Summers 1976).

The main thing that distinguished these exceptions to Gresham’s Law from other instances in which the law appears to have applied was the lack of any rules or of any authority capable of enforcing rules compelling people to accept particular coins in payment for goods or in the settlement of debts at some officially designated nominal value. Thus while the California private gold coins were, like those produced at the Philadelphia Mint, denominated in dollars, none of them were legal tender, and people were free to value them as they pleased, or to refuse them altogether. In practice only the better coins gained wide employment because others were not considered to be reliable representatives of the pre-existing dollar unit, and because valuing these inferior coins according to their actual gold content was inconvenient. In the market for international exchange media a similar tendency for good coins to be favored over bad stemmed from the absence of government authorities capable of enforcing legal tender laws and other rules compelling the acceptance of official coins “by tale” (that is, at par or face value, rather than by weight) beyond national borders.

Gresham’s Law can hold, on the other hand, where both good and bad coins enjoy similar legal-tender status and where non-trivial sanctions can be applied to persons who insist upon discriminating against bad coin and in favor of good coin. In such cases all coins must be accepted by tale, and the employment of bad coin becomes a dominant strategy in what amounts to a “Prisoners’ Dilemma” game in which both sellers and buyers participate. Buyers, knowing that sellers must accept good and bad coins alike at their official face value, offer inferior coins, while hoarding, exporting, or reducing better ones; sellers, anticipating buyers’ dominant strategy, price their wares accordingly (Selgin 1996). As Frank Fetter has observed in a classic paper (Fetter 1932, pp. 494-5), the tendency for good coins (or bullion obtained by melting good coins) to actually leave the country is the result, not of debasement per se, but of the tendency for prices, including the price of bullion, to increase in consequence of an overall excess supply of coins. Debasement, including both official debasement and the unofficial reduction of good coins, contributes to such an excess supply by allowing an increased nominal stock of money to be derived from any given quantity of metal.
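
A minimal numerical sketch of that dominant-strategy logic (all payoffs are hypothetical; the only feature carried over from the argument above is that forced acceptance at face value lets a buyer keep the metal premium on any good coin he withholds):

```python
# Hypothetical payoffs (arbitrary units) to a buyer settling a debt where the law
# compels sellers to accept all legal-tender coins at face value.  A "good" coin
# contains metal worth its full face value; a "bad" coin contains 20 units less.
# Withholding a good coin preserves that 20-unit premium (it can be hoarded,
# exported, or melted), whatever pricing response sellers adopt.

payoffs = {
    # (coin the buyer offers, sellers' pricing response): buyer's payoff
    ("good", "price at par"):  0,    # surrenders the full-bodied coin, keeps no premium
    ("bad",  "price at par"):  20,   # keeps the 20-unit premium on the withheld good coin
    ("good", "raise prices"):  -10,  # pays higher prices AND surrenders the premium
    ("bad",  "raise prices"):  10,   # pays higher prices but still keeps the premium
}

for response in ("price at par", "raise prices"):
    best = max(("good", "bad"), key=lambda coin: payoffs[(coin, response)])
    print(f"If sellers {response}: the buyer's best reply is to offer {best} coin")
# Offering bad coin is the better reply in both cases, i.e. a dominant strategy,
# so only bad coin circulates -- the outcome Gresham's Law describes.
```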

Legal tender laws of varying degrees of severity have normally buttressed the “sovereign prerogative” of coinage. In Gresham’s England, for instance, the sovereign assumed the right to reinforce its monopoly of minting by “imposing penalties for the offense of refusing the king’s coins at the value set upon them by the king,” and at least one act of Parliament (passed during the course of the debasements) explicitly “forbade any person to receive or to utter any coin at a price above the current value or proclamation rate.” Penalties for disobeying such laws included fines and imprisonment as well as the forfeiting of unlawfully exchanged sums. It is easy to understand how such laws promoted the use of bad coin over good. On the other hand, occasional changes to the legal status of bad coin, such as when Elizabeth elected (in response to Gresham’s advice) to “decry” (that is, to devalue) the bad shillings then in circulation, can serve to bring good coin back into open circulation.

Failure to recognize the dependence of Gresham’s Law upon laws interfering with the normal course of voluntary exchange has been responsible for some of the cruder misapplications of the law, including the tendency to treat it as describing the inevitable outcome of any sort of currency competition. A particularly egregious example occurs in William Stanley Jevons’s highly influential Money and the Mechanism of Exchange (1882 [1875], pp. 64 and 82), where Jevons argues that Gresham’s Law supplies sufficient grounds for rejecting Herbert Spencer’s (1851, pp. 396-402) arguments for private coinage. Although Spencer was not an economist, his arguments appear to have been more consistent with a proper understanding of Gresham’s Law, as well as with evidence from actual private coinage episodes in California and elsewhere.

Extensions to Paper Money and Bimetallism

To observe that Gresham’s Law originally referred to circumstances where both good and bad coins of the same metal were awarded similar, substantial legal tender status is not to deny that the law may have other applications as well. Thus, legal tender laws may also attempt to compel people to treat coins of different metals, or coins and paper notes, as equivalents, unintentionally driving the more esteemed form of money into hiding. When, for example, the Continental Congress resolved in 1775 to treat any person refusing to accept irredeemable continental bills at their declared (specie) value as “an enemy of this country,” it merely succeeded in putting a stop to any open trading of specie. The French Revolutionary government’s decision to sentence to death persons caught discounting assignats relative to coins bearing the same face value had a similar effect. To describe such episodes as instances of Gresham’s Law at work is to propose a perfectly valid and sensible extension of the law’s original meaning. To insist, on the other hand (as Robert Mundell, among others, does), that Gresham’s Law also accounts for the historical tendency of redeemable banknotes, lacking legal tender status, to take the place of gold or silver coin is to misapply and misunderstand the law, in so far as the redeemable notes must have been regarded by their holders and by others who accepted them, not as “bad” money, but as money that was just as “good” as the coins into which they were readily converted. Properly understood, Gresham’s Law refers to an unintended consequence of legislation the intention of which is to force people to treat a money they view as inferior as if it were not so. The law is not, as Mundell (1998) asserts, simply an instance of the general (free-market) tendency for lower-cost substitutes to replace dearer ones capable of accomplishing the same ends!

Resort to Gresham’s Law in analyzing bimetallism must likewise be done with care. In so far as bimetallic legislation assigns similar legal tender status to both gold and silver coins, coins of one metal may be legally overrated relative to those made of the other metal, and these overrated coins may be treated as being analogous to worn, light, or debased coins, or “bad” money, in a monometallic system. “Good” (underrated) coins will then tend to be driven out of circulation, while only the “bad” (overrated) metal will be brought to the mint for coining.
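
A worked example may make the arbitrage concrete. The ratios below are hypothetical, chosen only to illustrate the mechanism, not taken from any particular statute:

```python
# Hypothetical bimetallic valuations, for illustration only.
# Mint (legal) valuation: 1 oz of gold coins into $16.00, 1 oz of silver into $1.00,
# i.e. a legal ratio of 16:1.  Bullion market: 1 oz of gold trades for 15.5 oz of silver.
MINT_GOLD_PER_OZ   = 16.0   # dollars of coin per ounce of gold
MINT_SILVER_PER_OZ = 1.0    # dollars of coin per ounce of silver
MARKET_RATIO       = 15.5   # ounces of silver per ounce of gold in the bullion market

# Bullion value of a full-bodied $1 silver coin: convert its silver into gold at the
# market ratio, then value that gold at the mint's own price.
silver_coin_bullion_value = (1.0 / MINT_SILVER_PER_OZ) / MARKET_RATIO * MINT_GOLD_PER_OZ
print(f"A $1.00 silver coin contains bullion worth about ${silver_coin_bullion_value:.3f}")
# -> roughly $1.03: silver, the legally underrated ("good") metal, is worth more melted
#    or exported than spent at face value, so it drops out of circulation, while only
#    gold, the legally overrated ("bad") metal, is brought to the mint for coining.
```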

The tendency just described is, however, limited by the fact that coins of different metals are unlikely to be equally useful in different transactions. In particular, gold coins will generally be of larger denominations and as such cannot supply the need for smaller change (cf. Sargent and Velde 2002). Consequently, even though gold may be legally overvalued relative to silver, and silver may cease to be voluntarily rendered to the mint, silver coins are unlikely to disappear from circulation altogether. Instead, they may circulate at a premium; alternatively, they may be clipped or sweated by private agents until their metallic value no longer exceeds their face value, as happened in Britain during the eighteenth century. Here and in some other bimetallic episodes Gresham’s Law held in its narrowest sense, in that reduced coins made of undervalued metal systematically took the place of full-bodied coins of that same metal. But the law did not hold strictly in the version of it proposed by some critics of bimetallism, in that gold and silver coins continued to circulate together.

A Fallacy?

While many writers have abused Gresham’s Law by treating it as being more generally valid than is in fact the case, Arthur Rolnick and Warren Weber (1986) err in the opposite direction in calling the law a “fallacy.” Their argument draws on examples involving either bimetallic legislation or the introduction of paper substitutes for gold or silver coin but not, significantly, on episodes involving debasement, to which all early statements of Gresham’s Law refer. With respect to bimetallism, Rolnick and Weber claim that conventional appeals to Gresham’s Law are based on the untenable assumption that government and private agents actually offer to exchange gold for silver and vice versa at some official non-market rate. Such a policy, they observe, “would imply potentially unbounded profits for currency traders at the expense of a very ephemeral mint or a very naive public” (ibid, p. 186). But this reading of conventional arguments is far-fetched: the operation of Gresham’s Law depends, not on persons actually offering to trade moneys having distinct “intrinsic” values at officially declared exchange rates, but on their being subject to sanctions if they attempt to value the moneys otherwise than as prescribed by law. The disappearance of “good” money occurs, if it occurs at all, as the unintended consequence of laws that seek, often quite futilely, to force people to treat differently valued moneys equally. Moreover, the idea that mints might offer to exchange gold for silver at official rates (as implied by the separate mint prices for those metals) is a perfect fiction that no proponent of Gresham’s Law has ever entertained.

A distinct component of Rolnick and Weber’s critique holds that, where both good and bad moneys are available, the good money, instead of disappearing from circulation, will tend to remain in circulation while trading at a premium; alternatively, the bad money may trade at a discount, with the good money serving as the medium of account and hence trading at par. Small change may be an exception to this rule, as it may be prohibitively costly to employ such change at other than its par value. Nevertheless, even with respect to small change Rolnick and Weber regard Gresham’s Law as fallacious, since, according to their view, the small change that disappears from circulation might be either “good” or “bad” money, depending upon which of these happens to be the medium of account.

In support of their argument Rolnick and Weber offer empirical evidence of bad and good moneys circulating together at market-determined exchange rates, including the case of California during the Greenback era, where the gold standard remained in effect, with greenbacks trading at a discount relative to gold. But while such evidence may demonstrate that Gresham’s Law isn’t universally applicable, it hardly succeeds in proving the law a fallacy. As has been noted above, Gresham’s Law, properly understood, applies only to circumstances where people are legally compelled to accept both good and bad moneys at their par or face values, either in spot transactions or in the settlement of debts. Where legal sanctions play no role (as was the case in California, where local authorities refused to enforce Federal legal tender legislation), market-based transaction costs alone may discourage the use of non-par money. However, because market-based transaction costs might systematically favor either good or bad money, depending upon which happens to function as a medium of account, such costs alone cannot account for the overwhelming number of historical instances in which bad money does in fact appear to have taken the place of good.

References and Further Reading

Fetter, Frank W. “Some Neglected Aspects of Gresham’s Law.” Quarterly Journal of Economics 46, no. 3 (1932): 480-95.

Giffen, Robert. “The Gresham Law.” Economic Journal 1, no. 2 (1891): 304-6.

Jevons, William Stanley. Money and the Mechanism of Exchange. New York: D. Appleton and Company, 1882.

Macleod, Henry Dunning. Elements of Political Economy. London: Longmans, Green & Co., 1858.

Mundell, Robert. “Uses and Abuses of Gresham’s Law in the History of Money.” Zagreb Journal of Economics 2, no. 2 (1998): 3-38. (http://www.columbia.edu/~ram15/grash.html).

Rolnick, Arthur J., and Warren E. Weber. “Gresham’s Law or Gresham’s Fallacy?” Journal of Political Economy 94, no. 1 (1986): 185-99.

Sargent, Thomas, and François Velde. The Big Problem of Small Change. Princeton, NJ: Princeton University Press, 2002.

Selgin, George. “Salvaging Gresham’s Law: The Good, the Bad, and the Illegal.” Journal of Money, Credit, and Banking 28, no. 4 (1996): 637-49.

Spencer, Herbert. Social Statics. London: John Chapman, 1851.

Summers, Brian. “Private Coinage in America.” The Freeman 26, no. 7 (1976): 436-40.

Citation: Selgin, George. “Gresham’s Law”. EH.Net Encyclopedia, edited by Robert Whaples. June 9, 2003. URL http://eh.net/encyclopedia/greshams-law/

Economic Recovery in the Great Depression

Frank G. Steindl, Oklahoma State University

Introduction

The Great Depression has two meanings. One is the horrendous debacle of 1929-33 during which unemployment rose from 3 to 25 percent as the nation’s output fell over 25 percent and prices over 30 percent, in what also has been called the Great Contraction. A second meaning has the Great Depression as the entire decade of the thirties, the anxieties and apprehensions for which John Steinbeck’s The Grapes of Wrath is a metaphor. Much has been written about the unprecedented drop in economic activity in the Great Contraction, with questions about its causes and the reasons for its protracted decline especially prominent. The amount of scholarship devoted to these issues dwarfs that dealing with the recovery. But there indeed was a recovery, though long, tortuous, and uneven. In fact, it was well over twice as long as the contraction.

The economy hit its trough in March 1933. Whether or not by coincidence, President Franklin D. Roosevelt took office that month, initiating the New Deal and its fabled first hundred days, among which was the creation in June 1933 of its principal recovery vehicle, the NIRA — National Industrial Recovery Act.

Facts of the Recovery

Figure 1 uses monthly data. This allows us to see more finely the movements of the economy, as contrasted with the use of quarterly or annual data. For present purposes, the decade of the Depression runs from August 1929, when the economy was at its business cycle peak, through March 1933, the contraction trough, to June 1942, when the economy clearly was back to its long-run high-employment trend.

Figure 1 depicts the behavior of industrial output and prices over the Great Depression decade, the former as measured by the Index of Industrial Production and the latter by the Wholesale Price Index.[1] Among the notable features are the large declines in output and prices in the Great Contraction, with the former falling 52 percent and the latter 37 percent. Another noteworthy feature is the sharp, severe 1937-38 depression, when in twelve months output fell 33 percent and prices 11 percent. A third feature is the over-two-year deflation in the face of a robust increase in output following the 1937-38 depression.

The behavior of the unemployment rate is shown in Figure 2.[2] The dashed line shows the reported official data, which do not count as employed those holding “temporary” relief jobs. The solid line adjusts the official series by including those holding such temporary jobs as employed, the effect of which is to reduce the unemployment rate (Darby 1976). Each series rises from around 3 to about 23 percent between 1929 and 1932. The official series then climbs to near 25 percent the following year whereas the adjusted series is over four percentage points lower. Each continues declining the rest of the recovery, though both rise sharply in 1938. By 1940, each is still in double digits.

Three other charts that are helpful for understanding the recovery are Figures 3, 4, and 5. The first of these shows that the monetary base of the economy — which is the reserves of commercial banks plus currency held by the public — grew principally through increases in the stock of gold. In contrast to the normal situation, the base did not increase because of credit provided by the Federal Reserve System. Such credit was essentially constant. That is, the Fed, the nation’s central bank, was basically passive for most of the recovery. The rise in the stock of gold occurred initially because of the revaluation of gold from $20.67 to $35 an ounce in 1933-34 (which, though not changing the physical holdings of gold, raised the value of such holdings by 69 percent). The physical stock of gold, now valued at the higher price, then increased because of an inflow of gold, principally from Europe, due to the deteriorating political and economic situation there.
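
The 69 percent figure follows directly from the two official gold prices quoted above; a quick arithmetic check:

```python
# Revaluation of 1933-34: the official gold price rose from $20.67 to $35.00 per ounce.
old_price = 20.67   # dollars per fine ounce
new_price = 35.00

pct_rise = (new_price / old_price - 1) * 100
print(f"Dollar value of an unchanged physical gold stock rose about {pct_rise:.0f} percent")
# -> about 69 percent: the same ounces of gold now supported a correspondingly
#    larger monetary base, even before any new gold flowed in from Europe.
```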

Figure 4 shows the behavior of the stock of money, both the narrow M1 and the broader M2 measures of it. The shaded area shows the decreases in those money stocks in the 1937-38 depression; those declines were one of the reasons for that depression, just as the large declines in the money stock in 1929-33 were major factors responsible for the Great Contraction. During the Contraction of 1929-33, the narrow measure of the money stock — currency held by the public and demand deposits, M1 — fell 28 percent and the broader measure (M1 plus time deposits at commercial banks) fell 35 percent.

Lastly, the budget position of the federal government is shown in Figure 5. One of the notable features is the sharp increase in expenditures in mid-1936 and the equally sharp decrease thereafter. The budget therefore went dramatically into deficit and then began to move back toward surplus, a movement reinforced in 1937 by the payroll tax revenues mandated by the Social Security Act of 1935.

Reasons for Recovery

In Golden Fetters (1992), Barry Eichengreen advanced the basis for the most widely accepted understanding of the slide and recovery of economies in the 1930s. The depression was a worldwide phenomenon, as indicated in Figure 6, which shows the behavior of industrial production for several major countries. His basic thesis related to the gold standard and the manner in which countries altered their behavior under it during the 1930s. Under the classical “rules of the game,” countries experiencing balance of payments deficits financed those deficits by exporting gold. The loss of gold forced them to contract their money stock, which then resulted in deflationary pressures. Countries running balance of payments surpluses received gold, which expanded their money stocks, thereby inducing expansionary pressures. According to Eichengreen’s framework, countries did not “play by the rules” of the international gold standard during the depression era. Rather, countries losing gold were forced to contract. Those receiving gold, however, did not expand. This generated a net deflationary bias, as a result of which the depression was worldwide for those countries on the gold standard. As countries cut their ties to gold, which the U.S. did in early 1933, they were free to pursue expansionary monetary and fiscal policies, and this is the principal reason underlying the recovery. The inflow of gold into the U.S., for instance, expanded the reserves of the banking system, which became the basis for the increases in the stock of money.
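
A stylized two-country sketch of that asymmetry (all figures hypothetical, with each country's money stock assumed to move with its gold reserves):

```python
# Hypothetical illustration of the asymmetric adjustment Eichengreen describes.
gold_flow = 10.0                       # gold moving from the deficit to the surplus country
deficit_money, surplus_money = 100.0, 100.0

# Classical "rules of the game": the loser of gold contracts, the gainer expands.
world_money_symmetric = (deficit_money - gold_flow) + (surplus_money + gold_flow)

# Interwar practice: the loser contracts, but the gainer does not expand.
world_money_asymmetric = (deficit_money - gold_flow) + surplus_money

print(world_money_symmetric, world_money_asymmetric)   # 200.0 vs 190.0
# The asymmetry shrinks the world money stock -- the net deflationary bias that made
# the depression worldwide for countries remaining on the gold standard.
```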

The quantity theory of money is a useful framework for understanding movements of prices and output. The theory holds that increases in the supply of money relative to the demand for it result in increased spending on goods, services, financial assets, and real capital. The theory can be expressed in the equation of exchange

MV = Py,

where M is the stock of money, V is velocity (the rate at which money is spent, the mirror image of the demand for money, that is, the desire to hold it), P is the price level, and y is real output. Given V, increases in M result in increases in P and y.
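
A small numerical sketch of the accounting (all figures hypothetical, chosen only to show how money growth feeds through to prices and output):

```python
# Equation of exchange: M * V = P * y.  All numbers below are hypothetical.
M0, V0, y0 = 100.0, 3.0, 250.0      # initial money stock, velocity, real output
P0 = M0 * V0 / y0                    # implied price level = 1.20

M1, V1 = 120.0, 3.0                  # money stock grows 20 percent, velocity unchanged
# Nominal spending M*V therefore rises 20 percent; how that splits between P and y
# depends on how much slack the economy has.  Suppose real output absorbs most of it:
y1 = 285.0                           # real output up 14 percent
P1 = M1 * V1 / y1                    # price level up about 5 percent

print(f"Nominal spending: {M0 * V0:.0f} -> {M1 * V1:.0f}")
print(f"Real output:      {y0:.0f} -> {y1:.0f}  (+{(y1 / y0 - 1) * 100:.0f}%)")
print(f"Price level:      {P0:.2f} -> {P1:.2f}  (+{(P1 / P0 - 1) * 100:.0f}%)")
```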

Research into the forces of recovery generally concludes that the growth of the money supply (M) was the principal cause of the rise in output (y) after March 1933, the trough of the Great Contraction. Furthermore, those increases in the money stock also pushed up the price level (P).

Four studies expressly dealing with the recovery are of note. Milton Friedman and Anna Schwartz show that “the broad movements in the stock of money correspond with those in income” (1963, 497) and argue that “the rapid rate of rise in the money stock certainly promoted and facilitated the concurrent economic expansion” (1963, 544). Christina Romer concludes that the growth of the money stock was “crucial to the recovery. If [it] had been held to its normal level, the U.S. economy in 1942 would have been 50 percent below its pre-Depression trend path” (1992, 768-69). She also finds that fiscal policy “contributed almost nothing to the recovery” (1992, 767), a finding that mirrors much of the postwar research on the influence of fiscal policy and stands in contrast to the views of much of the public, which came to believe that the fiscal budget deficits of President Roosevelt were fundamental in promoting recovery.[3]

Ben Bernanke (1995) similarly stresses the importance of the growth of the money stock as basic to the recovery. He focuses on the gold standard as a restraint on independent monetary actions, finding that “the evidence is that countries leaving the gold standard recovered substantially more rapidly and vigorously than those who did not” (1995, 12) because they “had greater freedom to initiate expansionary monetary policies” (1995, 15).

More recently Allan Meltzer (2003) finds the recovery driven by increases in the stock of money, based on an expanding monetary base due to gold. “The main policy stimulus to output came from the rise in money, an unplanned consequence of the 1934 devaluation of the dollar against gold. Later in the decade the rising threat of war, and war itself supplemented the $35 gold price as a cause of the rise in gold and money” (2003, 573).

That the recovery was due principally to the growth of the stock of money appears to be a robust conclusion of postwar research into causes of the 1930s recovery.

The manner in which the stock of money increased is important. The growing stock of gold increased the reserves of banks, and hence the monetary base. With their greater reserves, banks did two things. First, they held some as precautionary reserves, called excess reserves; these are measured on the left-hand axis of Figure 7. Second, they bought U.S. government securities, more than tripling their holdings, as seen on the right-hand axis of Figure 7. Also, as seen there, commercial bank loans increased only slightly in the recovery, rising only 25 percent in over nine years.[4] The principal impetus to the growth of the money stock, therefore, was banks’ increased purchases of U.S. government securities, both ones already outstanding and ones issued to finance the deficits of those years.

The 1937-38 Depression and Revival

After four years of recovery, the economy plunged into a deep depression in May 1937, as output fell 33 percent and prices 11 percent in twelve months (shown in Figure 1). Two developments have been identified as being principally responsible for the depression.[5] The one most prominently identified by contemporary scholars is the action of the Federal Reserve.

As the Fed saw the volume of excess reserves climbing month after month, it became concerned about the potential inflationary consequences if banks were to begin making more loans, thereby expanding the money supply and driving up prices. The Banking Act of 1935 gave the Fed authority to change reserve requirements. With its newly granted authority, it decided upon a “preemptive strike” against what it regarded as incipient inflation. Because it thought that those excess reserves were due to a “shortage of borrowers,” it raised reserve requirements, the effect of which was to impound the former excess reserves in required reserves. The requirements were in fact doubled, in three steps: August 1936, March 1937, and May 1937. As Figure 7 exhibits, excess reserves therefore fell. The principal effect of the doubling of reserve requirements was to reduce the stock of money, as shown in the shaded area of Figure 4.[6]
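
A stylized balance-sheet example of the mechanics (the dollar figures and ratios are hypothetical; only the doubling of the requirement is taken from the text):

```python
# Hypothetical bank balance sheet, to show how doubling reserve requirements
# converted excess reserves into required reserves.
deposits = 1000.0
reserves = 300.0           # total reserves actually held

requirement_before = 0.15  # illustrative required-reserve ratio before the increases
requirement_after  = 0.30  # the ratio after being doubled in three steps

for label, ratio in (("before", requirement_before), ("after doubling", requirement_after)):
    required = ratio * deposits
    excess = reserves - required
    print(f"{label:>15}: required reserves {required:5.0f}, excess reserves {excess:5.0f}")
# Before: 150 of excess reserves.  After: none -- the entire cushion is impounded.
# Banks that regarded those reserves as precautionary rather than surplus (footnote [6])
# then set about rebuilding them, which is why the money stock fell (Figure 4).
```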

A second factor causing the depression was the falling federal budget deficit, due to two considerations. First, there was a sharp one-time rise in expenditures in mid-1936, due to the payment of a World War I Veterans’ Bonus. Thereafter, expenditures fell — the “spike” in the figure. Secondly, the Social Security Act of 1935 mandated collection of payroll taxes beginning in 1937, with the first payments to be made several years later. The joint effect of these two was to move the budget to near surplus by late 1937.

During the depression, both output and prices fell, as was their usual behavior in depressions. The bottom of the depression was May 1938, one year after it began. Thereafter, output began growing quite robustly, rising 58 percent by August 1940. Prices, however, continued to fall, for over two years. Figure 8 shows the depression and revival experience from May 1937 through August 1940, the month in which prices last fell. The two shaded areas are the year-long depression and the price “spike” in September 1939. Of interest is that the shock of the war that spurred the price jump did not induce expectations of further price rises. Prices continued to fall for another year, through August 1940.

Difficulties with Current Understanding

According to the currently accepted interpretation, the recovery owes its existence to increases in the stock of money. One difficulty with this view is the marked contrast between the price experience of the recovery through mid-1937 and that of the later revival. How could rising prices be fundamental to the recovery after the 1933 turnaround but not in the vigorous later recovery, when prices actually fell? Another difficulty is that the continued rise in the stock of money was due to the political turmoil in Europe; there is little intrinsic to the U.S. economy that contributed. Presumably, had there been no continuing inflow of gold raising the monetary base and money stock, the economy would have languished until the demands of World War II made their impact. In other words, would there have been virtually no recovery had there been no Adolf Hitler?

Of more consequence is the conundrum presented by the experience of more than two years of deflation in the face of dramatically rising aggregate demand, of which the sharply rising money stock appears as a major force. If the rising stock of money were fundamental to the recovery, then prices and output would have been rising, as the aggregate demand for output, spurred also by increasing fiscal budget deficits, would have been increasing relative to aggregate supply. But in the present instance, prices were declining, not rising. Something else was driving the economy during the entire recovery, but the seemingly dominant aggregate demand pressures obscured it in the early part.

One prospective impetus to aggregate supply would be declining real wages that would spur the hiring of additional workers. But with prices declining, it is unlikely that real wages would have fallen in the revival from the late 1930s depression. The evidence as indicated in Figure 9 shows that they in fact increased. With few exceptions, real wages increased throughout the entire deflationary period, rising 18 percent overall and 6 percent in the revival. The real wage rate, by rising, was thus a detriment to increased supply. Real wages cannot therefore be a factor inducing greater aggregate supply.

The economic phenomenon that was driving the recovery was probably increasing productivity. An early indication of this comes from the pioneering work of Robert Solow (1957) who in the course of examining factors contributing to economic growth developed data on the behavior of productivity. In support of this, Alexander Field presents both macroeconomic and microeconomic evidence showing that “the years 1929-41 were, in the aggregate, the most technologically progressive of any comparable period in U.S. economic history” (2003, 1399).

The rapid productivity increases were an important factor explaining the seemingly anomalous problem of rapid recovery and the stubbornness of the unemployment rate. In today’s parlance, this has come to be known as a “jobless recovery,” one in which rising productivity generates increased output rather than greater labor input producing more.

To acknowledge that productivity increases were crucial to the economic recovery is not however the end of the story because we are still left trying to understand the mechanisms underlying their sharp increases. What induced such increases? Serendipity — the idea that productivity increased at just the right time and in the appropriate amounts — is not an appealing explanation.

More likely, there is something intrinsic to the economy that moves it back to its potential: mechanisms such as incentives spurring invention and capital and labor innovations that generate productivity increases, as well as other factors.

References

Bernanke, Ben S. “The Macroeconomics of the Great Depression: A Comparative Approach.” Journal of Money, Credit, and Banking 27 (1995): 1-28.

Darby, Michael R. “Three-and-a-Half Million U.S. Employees Have Been Mislaid: Or an Explanation of Unemployment, 1934-41.” Journal of Political Economy 84 (1976): 1-16.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression 1919-1939. New York: Oxford University Press, 1992.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” American Economic Review 93 (2003): 1399-1413.

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States: 1867-1960. Princeton, NJ: Princeton University Press, 1963.

Meltzer, Allan H. A History of the Federal Reserve, volume 1, 1913-1951. Chicago: University of Chicago Press, 2003.

Romer, Christina D. “What Ended the Great Depression?” Journal of Economic History 52 (1992): 757-84.

Solow, Robert M. “Technical Change and the Aggregate Production Function.” Review of Economics and Statistics 39 (1957): 312-20.

Smithies, Arthur. “The American Economy in the Thirties.” American Economic Review Papers and Proceedings 36 (1946): 11-27.

Steindl, Frank G. Understanding Economic Recovery in the 1930s: Endogenous Propagation in the Great Depression. Ann Arbor: University of Michigan Press, 2004.


[1] Industrial production and the nation’s real output, real GDP, are highly correlated. The correlation is 98 percent for both quarterly and annual data over the recovery period.

[2] Data on the unemployment rate are available only on an annual basis for the Depression decade.

[3] In fact, large numbers of academics held that view, of which Arthur Smithies’ address to the American Economic Association is an example. His assessment was that “My main conclusion … is that fiscal policy did prove to be … the only effective means to recovery” (1946, 25, emphasis added).

[4] Real loans — loans relative to the price level — in fact declined, falling 24 percent in the 111 months of recovery.

[5] A third factor was the action of the U.S. Treasury as it “sterilized” gold, at the instigation of the Federal Reserve. By sterilization of gold, the Treasury prevented the gold inflows from increasing bank reserves.

[6] The reason the stock of money fell is that banks responded to the increased reserve requirements by trying to rebuild their excess reserves. That is, the banks did not regard their excess reserves as surplus reserves, but rather as precautionary reserves. This contrasted with the Federal Reserve’s view that the excess reserves were surplus ones, due to a “shortage” of borrowers at banks.

Citation: Steindl, Frank. “Economic Recovery in the Great Depression”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-recovery-in-the-great-depression/

An Overview of the Great Depression

Randall Parker, East Carolina University

This article provides an overview of selected events and economic explanations of the interwar era. What follows is not intended to be a detailed and exhaustive review of the literature on the Great Depression, or of any one theory in particular. Rather, it will attempt to describe the “big picture” events and topics of interest. For the reader who wishes more extensive analysis and detail, references to additional materials are also included.

The 1920s

The Great Depression, and the economic catastrophe that it was, is perhaps properly scaled in reference to the decade that preceded it, the 1920s. By conventional macroeconomic measures, this was a decade of brisk economic growth in the United States. Perhaps the moniker “the roaring twenties” summarizes this period most succinctly. The disruptions and shocking nature of World War I had been survived and it was felt the United States was entering a “new era.” In January 1920, the Federal Reserve seasonally adjusted index of industrial production, a standard measure of aggregate economic activity, stood at 81 (1935–39 = 100). When the index peaked in July 1929 it was at 114, for a growth rate of 40.6 percent over this period. Similar rates of growth over the 1920–29 period equal to 47.3 percent and 42.4 percent are computed using annual real gross national product data from Balke and Gordon (1986) and Romer (1988), respectively. Further computations using the Balke and Gordon (1986) data indicate an average annual growth rate of real GNP over the 1920–29 period equal to 4.6 percent. In addition, the relative international economic strength of this country was clearly displayed by the fact that nearly one-half of world industrial output in 1925–29 was produced in the United States (Bernanke, 1983).

Consumer Durables Market

The decade of the 1920s also saw major innovations in the consumption behavior of households. The development of installment credit over this period led to substantial growth in the consumer durables market (Bernanke, 1983). Purchases of automobiles, refrigerators, radios and other such durable goods all experienced explosive growth during the 1920s as small borrowers, particularly households and unincorporated businesses, utilized their access to available credit (Persons, 1930; Bernanke, 1983; Soule, 1947).

Economic Growth in the 1920s

Economic growth during this period was mitigated only somewhat by three recessions. According to the National Bureau of Economic Research (NBER) business cycle chronology, two of these recessions were from May 1923 through July 1924 and October 1926 through November 1927. Both of these recessions were very mild and unremarkable. In contrast, the 1920s began with a recession lasting 18 months from the peak in January 1920 until the trough of July 1921. Original estimates of real GNP from the Commerce Department showed that real GNP fell 8 percent between 1919 and 1920 and another 7 percent between 1920 and 1921 (Romer, 1988). The behavior of prices contributed to the naming of this recession “the Depression of 1921,” as the implicit price deflator for GNP fell 16 percent and the Bureau of Labor Statistics wholesale price index fell 46 percent between 1920 and 1921. Although thought to be severe, Romer (1988) has argued that the so-called “postwar depression” was not as severe as once thought. While the deflation from war-time prices was substantial, revised estimates of real GNP show falls in output of only 1 percent between 1919 and 1920 and 2 percent between 1920 and 1921. Romer (1988) also argues that the behaviors of output and prices are inconsistent with the conventional explanation of the Depression of 1921 being primarily driven by a decline in aggregate demand. Rather, the deflation and the mild recession are better understood as resulting from a decline in aggregate demand together with a series of positive supply shocks, particularly in the production of agricultural goods, and significant decreases in the prices of imported primary commodities. Overall, the upshot is that the growth path of output was hardly impeded by the three minor downturns, so that the decade of the 1920s can properly be viewed economically as a very healthy period.

Fed Policies in the 1920s

Friedman and Schwartz (1963) label the 1920s “the high tide of the Reserve System.” As they explain, the Federal Reserve became increasingly confident in the tools of policy and in its knowledge of how to use them properly. The synchronous movements of economic activity and explicit policy actions by the Federal Reserve did not go unnoticed. Taking the next step and concluding there was cause and effect, the Federal Reserve in the 1920s began to use monetary policy as an implement to stabilize business cycle fluctuations. “In retrospect, we can see that this was a major step toward the assumption by government of explicit continuous responsibility for economic stability. As the decade wore on, the System took – and perhaps even more was given – credit for the generally stable conditions that prevailed, and high hopes were placed in the potency of monetary policy as then administered” (Friedman and Schwartz, 1963).

The giving/taking of credit to/by the Federal Reserve has particular value pertaining to the recession of 1920–21. Although suggesting the Federal Reserve probably tightened too much, too late, Friedman and Schwartz (1963) call this episode “the first real trial of the new system of monetary control introduced by the Federal Reserve Act.” It is clear from the history of the time that the Federal Reserve felt as though it had successfully passed this test. The data showed that the economy had quickly recovered and brisk growth followed the recession of 1920–21 for the remainder of the decade.

Questionable Lessons “Learned” by the Fed

Moreover, Eichengreen (1992) suggests that the episode of 1920–21 led the Federal Reserve System to believe that the economy could be successfully deflated or “liquidated” without paying a severe penalty in terms of reduced output. This conclusion, however, proved to be mistaken at the onset of the Depression. As argued by Eichengreen (1992), the Federal Reserve did not appreciate the extent to which the successful deflation could be attributed to the unique circumstances that prevailed during 1920–21. The European economies were still devastated after World War I, so the demand for United States’ exports remained strong many years after the War. Moreover, the gold standard was not in operation at the time. Therefore, European countries were not forced to match the deflation initiated in the United States by the Federal Reserve (explained below pertaining to the gold standard hypothesis).

The implication is that the Federal Reserve thought that deflation could be generated with little effect on real economic activity. Therefore, the Federal Reserve was not vigorous in fighting the Great Depression in its initial stages. It viewed the early years of the Depression as another opportunity to successfully liquidate the economy, especially after the perceived speculative excesses of the 1920s. However, the state of the economic world in 1929 was not a duplicate of 1920–21. By 1929, the European economies had recovered and the interwar gold standard was a vehicle for the international transmission of deflation. Deflation in 1929 would not operate as it did in 1920–21. The Federal Reserve failed to understand the economic implications of this change in the international standing of the United States’ economy. The result was that the Depression was permitted to spiral out of control and was made much worse than it otherwise would have been had the Federal Reserve not considered it to be a repeat of the 1920–21 recession.

The Beginnings of the Great Depression

In January 1928 the seeds of the Great Depression, whenever they were planted, began to germinate. For it is around this time that two of the most prominent explanations for the depth, length, and worldwide spread of the Depression first came to be manifest. Without any doubt, the economics profession would come to a firm consensus around the idea that the economic events of the Great Depression cannot be properly understood without a solid linkage to both the behavior of the supply of money together with Federal Reserve actions on the one hand and the flawed structure of the interwar gold standard on the other.

It is well documented that many public officials, such as President Herbert Hoover and members of the Federal Reserve System in the latter 1920s, were intent on ending what they perceived to be the speculative excesses that were driving the stock market boom. Moreover, as explained by Hamilton (1987), despite plentiful denials to the contrary, the Federal Reserve assumed the role of “arbiter of security prices.” Although there continues to be debate as to whether or not the stock market was overvalued at the time (White, 1990; DeLong and Shleifer, 1991), the main point is that the Federal Reserve believed there to be a speculative bubble in equity values. Hamilton (1987) describes how the Federal Reserve, intending to “pop” the bubble, embarked on a highly contractionary monetary policy in January 1928. Between December 1927 and July 1928 the Federal Reserve conducted $393 million of open market sales of securities so that only $80 million remained in the Open Market account. Buying rates on bankers’ acceptances[1] were raised from 3 percent in January 1928 to 4.5 percent by July, reducing Federal Reserve holdings of such bills by $193 million, leaving a total of only $185 million of these bills on balance. Further, the discount rate was increased from 3.5 percent to 5 percent, the highest level since the recession of 1920–21. “In short, in terms of the magnitudes consciously controlled by the Fed, it would be difficult to design a more contractionary policy than that initiated in January 1928” (Hamilton, 1987).

The pressure did not stop there, however. The death of Benjamin Strong, head of the Federal Reserve Bank of New York, and the subsequent control of policy ascribed to Adolph Miller of the Federal Reserve Board ensured that the fall in the stock market was going to be made a reality. Miller believed the speculative excesses of the stock market were hurting the economy, and the Federal Reserve continued attempting to put an end to this perceived harm (Cecchetti, 1998). The amount of Federal Reserve credit that was being extended to market participants in the form of broker loans became an issue in 1929. The Federal Reserve adamantly discouraged lending that was collateralized by equities. The intentions of the Board of Governors of the Federal Reserve were made clear in a letter dated February 2, 1929 sent to Federal Reserve banks. In part the letter read:

The board has no disposition to assume authority to interfere with the loan practices of member banks so long as they do not involve the Federal reserve banks. It has, however, a grave responsibility whenever there is evidence that member banks are maintaining speculative security loans with the aid of Federal reserve credit. When such is the case the Federal reserve bank becomes either a contributing or a sustaining factor in the current volume of speculative security credit. This is not in harmony with the intent of the Federal Reserve Act, nor is it conducive to the wholesome operation of the banking and credit system of the country. (Board of Governors of the Federal Reserve 1929: 93–94, quoted from Cecchetti, 1998)

The deflationary pressure on stock prices had been applied. It was now a question of when the market would break. Although the effects were not immediate, the wait was not long.

The Economy Stumbles

The NBER business cycle chronology dates the start of the Great Depression in August 1929. For this reason many have said that the Depression started on Main Street and not Wall Street. Be that as it may, the stock market plummeted in October of 1929. The bursting of the speculative bubble had been achieved and the economy was now headed in an ominous direction. The Federal Reserve’s seasonally adjusted index of industrial production stood at 114 (1935–39 = 100) in August 1929. By October it had fallen to 110 for a decline of 3.5 percent (annualized percentage decline = 14.7 percent). After the crash, the incipient recession intensified, with the industrial production index falling from 110 in October to 100 in December 1929, or 9 percent (annualized percentage decline = 41 percent). In 1930, the index fell further from 100 in January to 79 in December, or an additional 21 percent.

Links between the Crash and the Depression?

While popular history treats the crash and the Depression as one and the same event, economists know that they were not. But there is no doubt that the crash was one of the things that got the ball rolling. Several authors have offered explanations for the linkage between the crash and the recession of 1929–30. Mishkin (1978) argues that the crash and an increase in liabilities led to a deterioration in households’ balance sheets. The reduced liquidity[2] led consumers to defer consumption of durable goods and housing and thus contributed to a fall in consumption. Temin (1976) suggests that the fall in stock prices had a negative wealth effect on consumption, but attributes only a minor role to this given that stocks were not a large fraction of total wealth; the stock market in 1929, although falling dramatically, remained above the value it had achieved in early 1928, and the propensity to consume from wealth was small during this period. Romer (1990) provides evidence suggesting that if the stock market were thought to be a predictor of future economic activity, then the crash can rightly be viewed as a source of increased consumer uncertainty that depressed spending on consumer durables and accelerated the decline that had begun in August 1929. Flacco and Parker (1992) confirm Romer’s findings using different data and alternative estimation techniques.

Looking back on the behavior of the economy during the year of 1930, industrial production declined 21 percent, the consumer price index fell 2.6 percent, the supply of high-powered money (that is, the liabilities of the Federal Reserve that are usable as money, consisting of currency in circulation and bank reserves; also called the monetary base) fell 2.8 percent, the nominal supply of money as measured by M1 (the product of the monetary base[3] multiplied by the money multiplier[4]) dipped 3.5 percent and the ex post real interest rate turned out to be 11.3 percent, the highest it had been since the recession of 1920–21 (Hamilton, 1987). In spite of this, when put into historical context, there was no reason to view the downturn of 1929–30 as historically unprecedented. Its magnitude was comparable to that of many recessions that had previously occurred. Perhaps there was justifiable optimism in December 1930 that the economy might even shake off the negative movement and embark on the path to recovery, rather like what had occurred after the recession of 1920–21 (Bernanke, 1983). As we know, the bottom would not come for another 27 months.

The Economy Crumbles

Banking Failures

During 1931, there was a “change in the character of the contraction” (Friedman and Schwartz, 1963). Beginning in October 1930 and lasting until December 1930, the first of a series of banking panics now accompanied the downward spasms of the business cycle. Although bank failures had occurred throughout the 1920s, the magnitude of the failures that occurred in the early 1930s was of a different order altogether (Bernanke, 1983). The absence of any type of deposit insurance resulted in the contagion of the panics being spread to sound financial institutions and not just those on the margin.

Traditional Methods of Combating Bank Runs Not Used

Moreover, institutional arrangements that had existed in the private banking system designed to provide liquidity – to convert assets into cash – to fight bank runs before 1913 were not exercised after the creation of the Federal Reserve System. For example, during the panic of 1907, the effects of the financial upheaval had been contained through a combination of lending by clearinghouses (associations of private banks) and the suspension of deposit convertibility into currency. While not preventing bank runs and the financial panic, their economic impact was lessened to a significant extent by these countermeasures enacted by private banks, as the economy quickly recovered in 1908. The aftermath of the panic of 1907 and the desire to have a central authority to combat the contagion of financial disruptions was one of the factors that led to the establishment of the Federal Reserve System. After the creation of the Federal Reserve, clearinghouse lending and suspension of deposit convertibility by private banks were not undertaken. Believing the Federal Reserve to be the “lender of last resort,” it was apparently thought that the responsibility to fight bank runs was the domain of the central bank (Friedman and Schwartz, 1963; Bernanke, 1983). Unfortunately, when the banking panics came in waves and the financial system was collapsing, being the “lender of last resort” was a responsibility that the Federal Reserve either could not or would not assume.

Money Supply Contracts

The economic effects of the banking panics were devastating. Aside from the obvious impact of the closing of failed banks and the subsequent loss of deposits by bank customers, the money supply accelerated its downward spiral. Although the economy had flattened out after the first wave of bank failures in October–December 1930, with the industrial production index steadying from 79 in December 1930 to 80 in April 1931, the remainder of 1931 brought a series of shocks from which the economy was not to recover for some time.

Second Wave of Banking Failure

In May, the failure of Austria’s largest bank, the Kredit-anstalt, touched off financial panics in Europe. In September 1931, having had enough of the distress associated with the international transmission of economic depression, Britain abandoned its participation in the gold standard. Further, just as the United States’ economy appeared to be trying to begin recovery, the second wave of bank failures hit the financial system in June and did not abate until December. In addition, the Hoover administration in December 1931, adhering to its principles of limited government, embarked on a campaign to balance the federal budget. Tax increases resulted the following June, just as the economy was to hit the first low point of its so-called “double bottom” (Hoover, 1952).

The results of these events are now evident. Between January and December 1931 the industrial production index declined from 78 to 66, or 15.4 percent, the consumer price index fell 9.4 percent, the nominal supply of M1 dipped 5.7 percent, the ex post real interest rate[5] remained at 11.3 percent, and although the supply of high-powered money[6] actually increased 5.5 percent, the currency–deposit and reserve–deposit ratios began their upward ascent, and thus the money multiplier started its downward plunge (Hamilton, 1987). If the economy had flattened out in the spring of 1931, then by December output, the money supply, and the price level were all on negative growth paths that were dragging the economy deeper into depression.
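
The last point can be made explicit with the textbook money-multiplier formula: writing c for the currency–deposit ratio and r for the reserve–deposit ratio, M1 = [(1 + c) / (c + r)] × B, where B is high-powered money. The ratio values in the sketch below are illustrative, not the actual 1931 figures:

```python
# Textbook money-multiplier arithmetic: M1 = (1 + c) / (c + r) * B,
# where c = currency-deposit ratio, r = reserve-deposit ratio, B = high-powered money.
# The ratios below are illustrative, not the actual 1931 values.

def m1(base, c, r):
    """Narrow money stock implied by the monetary base and the two ratios."""
    return (1.0 + c) / (c + r) * base

B0, c0, r0 = 100.0, 0.15, 0.10    # initial base and ratios
B1, c1, r1 = 105.5, 0.22, 0.14    # base up 5.5 percent, but both ratios higher

print(f"multiplier: {(1 + c0) / (c0 + r0):.2f} -> {(1 + c1) / (c1 + r1):.2f}")
print(f"M1:         {m1(B0, c0, r0):.0f} -> {m1(B1, c1, r1):.0f}")
# The multiplier falls from 4.60 to about 3.39, so M1 drops from 460 to about 358
# even though the base rose -- the pattern described for 1931 in the text.
```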

Third Wave of Banking Failure

The economic difficulties were far from over. The economy displayed some evidence of recovery in the late summer and early fall of 1932. However, in December 1932 the third, and largest, wave of banking panics hit the financial markets, and the collapse of the economy followed, with the business cycle hitting bottom in March 1933. Industrial production between January 1932 and March 1933 fell an additional 15.6 percent. For the combined years of 1932 and 1933, the consumer price index fell a cumulative 16.2 percent, the nominal supply of M1 dropped 21.6 percent, and the nominal M2 money supply fell 34.7 percent. Although the supply of high-powered money increased 8.4 percent, the currency–deposit and reserve–deposit ratios climbed even faster, and the money multiplier continued a plunge that was not arrested until March 1933. Similar behavior of real GDP, prices, money supplies and other key macroeconomic variables occurred in many European economies as well (Snowdon and Vane, 1999; Temin, 1989).

An examination of the macroeconomic data in August 1929 compared to March 1933 provides a stark contrast. The unemployment rate, 3 percent in August 1929, stood at 25 percent in March 1933. The industrial production index, 114 in August 1929, stood at 54 in March 1933, a 52.6 percent decrease. The money supply had fallen 35 percent, prices had plummeted by about 33 percent, and more than one-third of banks in the United States had either closed or been taken over by other banks. The “new era” ushered in by “the roaring twenties” was over. Roosevelt took office in March 1933, a nationwide bank holiday was declared from March 6 until March 13, and the United States abandoned the international gold standard in April 1933. Recovery commenced immediately and the economy began its long path back to the pre-1929 secular growth trend.

Table 1 summarizes the drop in industrial production in the major economies of Western Europe and North America. Table 2 gives gross national product estimates for the United States from 1928 to 1941. The constant price series adjusts for inflation and deflation.

Table 1
Indices of Total Industrial Production, 1927 to 1935 (1929 = 100)

Country 1927 1928 1929 1930 1931 1932 1933 1934 1935
Britain 95 94 100 94 86 89 95 105 114
Canada 85 94 100 91 78 68 69 82 90
France 84 94 100 99 85 74 83 79 77
Germany 95 100 100 86 72 59 68 83 96
Italy 87 99 100 93 84 77 83 85 99
Netherlands 87 94 100 109 101 90 90 93 95
Sweden 85 88 100 102 97 89 93 111 125
U.S. 85 90 100 83 69 55 63 69 79

Source: Industrial Statistics, 1900-57 (Paris, OEEC, 1958), Table 2.

Table 2
U.S. GNP at Constant (1929) and Current Prices, 1928-1941

Year GNP at constant (1929) prices (billions of $) GNP at current prices (billions of $)
1928 98.5 98.7
1929 104.4 104.6
1930 95.1 91.2
1931 89.5 78.5
1932 76.4 58.6
1933 74.2 56.1
1934 80.8 65.5
1935 91.4 76.5
1936 100.9 83.1
1937 109.1 91.2
1938 103.2 85.4
1939 111.0 91.2
1940 121.0 100.5
1941 131.7 124.7

Contemporary Explanations

The economics profession during the 1930s was at a loss to explain the Depression. The most prominent conventional explanations were of two types. First, some observers at the time grounded their explanations firmly on the two pillars of classical macroeconomic thought, Say’s Law and the belief in the self-equilibrating powers of the market. Many argued that it was simply a matter of time before wages and prices adjusted fully enough for the economy to return to full employment and vindicate the putative axiom that “supply creates its own demand.” Second, the Austrian school of thought argued that the Depression was the inevitable result of overinvestment during the 1920s. The best remedy was to let the Depression run its course so that the economy could be purged of the negative effects of the false expansion. Government intervention was viewed by the Austrian school as a mechanism that would simply prolong the agony and make any subsequent depression worse than it would otherwise be (Hayek, 1966; Hayek, 1967).

Liquidationist Theory

The Hoover administration and the Federal Reserve Board also contained several so-called “liquidationists.” These individuals believed that economic agents should be forced to re-arrange their spending proclivities and alter their allegedly profligate use of resources. If it took mass bankruptcies to produce this result and wipe the slate clean so that everyone could have a fresh start, then so be it. The liquidationists viewed the events of the Depression as an economic penance for the speculative excesses of the 1920s; the Depression was the price being paid for the misdeeds of the previous decade. This view is perhaps best exemplified by the well-known advice of Treasury Secretary Andrew Mellon, who urged President Hoover to “Liquidate labor, liquidate stocks, liquidate the farmers, liquidate real estate.” Mellon continued, “It will purge the rottenness out of the system. High costs of living and high living will come down. People will work harder, live a more moral life. Values will be adjusted, and enterprising people will pick up the wrecks from less competent people” (Hoover, 1952). Hoover apparently followed this advice as the Depression wore on, continuing to reassure the public that if the principles of orthodox finance were faithfully followed, recovery would surely be the result.

The business press at the time was not immune to such liquidationist prescriptions either. The Commercial and Financial Chronicle, in an August 3, 1929 editorial entitled “Is Not Group Speculating Conspiracy, Fostering Sham Prosperity?”, complained that the economy was replete with profligate spending, including:

(a) The luxurious diversification of diet advantageous to dairy men … and fruit growers …; (b) luxurious dressing … more silk and rayon …; (c) free spending for automobiles and their accessories, gasoline, house furnishings and equipment, radios, travel, amusements and sports; (d) the displacement from the farms by tractors and autos of produce-consuming horses and mules to a number aggregating 3,700,000 for the period 1918–1928 … (e) the frills of education to thousands for whom places might better be reserved at bench or counter or on the farm. (Quoted from Nelson, 1991)

Persons, in a paper that appeared in the November 1930 issue of the Quarterly Journal of Economics, demonstrates that similar liquidationist views were also held by some academic economists.

Although such views were certainly not universal, the descriptions above suggest that no small part of the conventional wisdom at the time held the Depression to be a penance for past sins. In addition, it was thought that the economy would be restored to full employment equilibrium once wages and prices adjusted sufficiently: Say’s Law would ensure that the economy returned to health, and supply would create its own demand sufficient to restore prosperity, if the system were simply allowed to work its way through. In his memoirs published in 1952, 20 years after his election defeat, Herbert Hoover continued to maintain steadfastly that had Roosevelt and the New Dealers stuck to the policies his administration put in place, the economy would have made a full recovery within 18 months of the election of 1932. The prescription was to intensify our resolve to “stay the course”; all would be well in time if we just “took our medicine.” In hindsight, it challenges the imagination to think up worse policy prescriptions for the events of 1929–33.

Modern Explanations

There remains considerable debate regarding the economic explanations for the behavior of the business cycle between August 1929 and March 1933. This section describes the main hypotheses that have been presented in the literature attempting to explain the causes for the depth, protracted length, and worldwide propagation of the Great Depression.

The United States’ experience, considering the preponderance of empirical results and historical simulations contained in the economic literature, can largely be accounted for by the monetary hypothesis of Friedman and Schwartz (1963) together with the nonmonetary/financial hypotheses of Bernanke (1983) and Fisher (1933). That is, most, but not all, of the characteristic phases of the business cycle and depth to which output fell from 1929 to 1933 can be accounted for by the monetary and nonmonetary/financial hypotheses. The international experience, well documented in Choudri and Kochin (1980), Hamilton (1988), Temin (1989), Bernanke and James (1991), and Eichengreen (1992), can be properly understood as resulting from a flawed interwar gold standard. Each of these hypotheses is explained in greater detail below.

Nonmonetary/Nonfinancial Theories

It should be noted that I do not include a section covering the nonmonetary/nonfinancial theories of the Great Depression. These theories – including Temin’s (1976) focus on the autonomous decline in consumption, the collapse of housing construction examined by Anderson and Butkiewicz (1980), the effects of the stock market crash, the uncertainty hypothesis of Romer (1990), and the Smoot–Hawley Tariff Act of 1930 – are all worthy of mention and can rightly be apportioned some of the responsibility for initiating the Depression. However, any theory of the Depression must be able to account for the protracted problems associated with the punishing deflation imposed on the United States and the world during that era. While the nonmonetary/nonfinancial theories go a long way toward accounting for the impetus for, and the first year of, the Depression, my reading of the empirical literature indicates that they lack the explanatory power of the three other theories mentioned above to account for the depths to which the economy plunged.

Moreover, recent research by Olney (1999) argues convincingly that the decline in consumption was not autonomous at all. Rather, the decline occurred because high consumer indebtedness, combined with the high cost of default, threatened future consumption spending. Olney shows that households were shouldering an unprecedented burden of installment debt – especially for automobiles – and that down payments were large and contracts were short. Missed installment payments triggered repossession, which reduced consumer wealth in 1930 because households lost all of their acquired equity. Cutting consumption was therefore the only viable strategy in 1930 for avoiding default.

The Monetary Hypothesis

In reviewing the economic history of the Depression above, it was mentioned that the supply of money fell by 35 percent, prices dropped by about 33 percent, and one-third of all banks vanished. Milton Friedman and Anna Schwartz, in their 1963 book A Monetary History of the United States, 1867–1960, call this massive drop in the supply of money “The Great Contraction.”

Friedman and Schwartz (1963) discuss and painstakingly document the synchronous movements of the real economy with the disruptions that occurred in the financial sector. They point out that the series of bank failures that occurred beginning in October 1930 worsened economic conditions in two ways. First, bank shareholder wealth was reduced as banks failed. Second, and most importantly, the bank failures were exogenous shocks and led to the drastic decline in the money supply. The persistent deflation of the 1930s follows directly from this “great contraction.”

Criticisms of Fed Policy

However, this raises an important question: Where was the Federal Reserve while the money supply and the financial system were collapsing? If the Federal Reserve was created in 1913 primarily to be the “lender of last resort” for troubled financial institutions, it was failing miserably. Friedman and Schwartz pin the blame squarely on the Federal Reserve and the failure of monetary policy to offset the contractions in the money supply. As the money multiplier continued on its downward path, the monetary base, rather than being aggressively increased, merely drifted upward along a gently sloping time path. As banks were failing in waves, was the Federal Reserve attempting to contain the panics by aggressively lending to banks scrambling for liquidity? The unfortunate answer is “no.” When the panics were occurring, was there discussion of suspending deposit convertibility or the gold standard, both of which had been successfully employed in the past? Again the unfortunate answer is “no.” Did the Federal Reserve consider the fact that it had an abundant supply of free gold, and therefore that monetary expansion was feasible? Once again the unfortunate answer is “no.” The argument can be summarized by the following quotation:

At all times throughout the 1929–33 contraction, alternative policies were available to the System by which it could have kept the stock of money from falling, and indeed could have increased it at almost any desired rate. Those policies did not involve radical innovations. They involved measures of a kind the System had taken in earlier years, of a kind explicitly contemplated by the founders of the System to meet precisely the kind of banking crisis that developed in late 1930 and persisted thereafter. They involved measures that were actually proposed and very likely would have been adopted under a slightly different bureaucratic structure or distribution of power, or even if the men in power had had somewhat different personalities. Until late 1931 – and we believe not even then – the alternative policies involved no conflict with the maintenance of the gold standard. Until September 1931, the problem that recurrently troubled the System was how to keep the gold inflows under control, not the reverse. (Friedman and Schwartz, 1963)

The inescapable conclusion is that it was a failure of the policies of the Federal Reserve System in responding to the crises of the time that made the Depression as bad as it was. If monetary policy had responded differently, the economic events of 1929–33 need not have been as they occurred. This assertion is supported by the results of Fackler and Parker (1994). Using counterfactual historical simulations, they show that if the Federal Reserve had kept the M1 money supply growing along its pre-October 1929 trend of 3.3 percent annually, most of the Depression would have been averted. McCallum (1990) also reaches similar conclusions employing a monetary base feedback policy in his counterfactual simulations.
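
As an illustration of what such a counterfactual involves, the sketch below simply extends M1 along a 3.3 percent annual trend from a hypothetical starting level; it is only a back-of-the-envelope device, and the data, models and simulation methods actually used by Fackler and Parker (1994) and McCallum (1990) are far richer.

# Hypothetical illustration of a counterfactual M1 path growing at the
# pre-October 1929 trend of 3.3 percent per year.  The starting level is a
# placeholder, not the historical figure.
trend_growth = 0.033
m1_start = 26.0                      # hypothetical M1 level in 1929, billions of dollars

for t in range(5):                   # 1929 through 1933
    year = 1929 + t
    counterfactual_m1 = m1_start * (1 + trend_growth) ** t
    print(f"{year}: counterfactual M1 = {counterfactual_m1:.1f} billion")

Set against the actual contraction described above, a path like this makes clear how large a monetary shortfall the simulations attribute to Federal Reserve inaction.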

Lack of Leadership at the Fed

Friedman and Schwartz trace the seeds of these regrettable events to the death of Federal Reserve Bank of New York Governor Benjamin Strong in 1928. Strong’s death altered the locus of power in the Federal Reserve System and left it without effective leadership. Friedman and Schwartz maintain that Strong had the personality, confidence and reputation in the financial community to lead monetary policy and sway policy makers to his point of view, and they believe that he would not have permitted the financial panics and liquidity crises to persist and affect the real economy. After Strong died, the conduct of open market operations changed from a five-man committee dominated by the New York Federal Reserve to a 12-man committee of Federal Reserve Bank governors, and decisiveness in leadership was replaced by inaction and drift. Others (Temin, 1989; Wicker, 1965) reject this point, claiming that the policies of the Federal Reserve in the 1930s were not inconsistent with the policies pursued during the 1920s.

The Fed’s Failure to Distinguish between Nominal and Real Interest Rates

Meltzer (1976) also points out errors made by the Federal Reserve. His argument is that the Federal Reserve failed to distinguish between nominal and real interest rates. Because nominal rates were falling, the Federal Reserve did virtually nothing, construing the decline as a sign of an “easy” credit market. However, in the face of deflation, real rates were rising and the credit market was in fact “tight.” Failure to make this distinction meant that monetary conditions contributed to the initial decline of 1929.
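
The point can be stated with the Fisher relation between nominal and real rates. The numbers below are hypothetical and are chosen only to illustrate the sign of the error:

\[
r = i - \pi^{e}, \qquad r = 0.03 - (-0.08) = 0.11,
\]

where \(i\) is the nominal interest rate, \(\pi^{e}\) the expected rate of inflation (negative under expected deflation), and \(r\) the ex ante real rate. Judged by a nominal rate of 3 percent, credit looks “easy”; judged by an 11 percent real rate under 8 percent expected deflation, it is punishingly “tight.”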

Deflation

Cecchetti (1992) and Nelson (1991) bolster the monetary hypothesis by demonstrating that, once it was under way, the deflation during the Depression was anticipated at short horizons. The result, using the Fisher equation, is that high ex ante real interest rates were the transmission mechanism leading from falling prices to falling output. In addition, Cecchetti (1998) and Cecchetti and Karras (1994) argue that once the lower bound of the nominal interest rate is reached, continued deflation makes the opportunity cost of holding money negative. In this instance the nature of money changes: the rate of deflation now places a floor on the real return that nonmoney assets must provide to make them attractive to hold. If nonmoney assets cannot match the real return on money holdings, agents will move their assets into cash, and the result will be negative net investment and a decapitalization of the economy.
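
The floor on real returns follows from the same relation. Under the assumption that the nominal interest rate on money is at its lower bound of zero, the real return to simply holding cash equals the rate of deflation:

\[
r_{\text{cash}} = 0 - \pi = -\pi > 0 \quad \text{when } \pi < 0,
\]

so with deflation of, say, 10 percent a year (an illustrative figure), nonmoney assets must offer a real return of at least 10 percent to be worth holding; if they cannot, agents move into cash and net investment turns negative.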

Critics of the Monetary Hypothesis

The monetary hypothesis, however, is not without its detractors. Paul Samuelson observes that the monetary base did not fall during the Depression. Moreover, expecting the Federal Reserve to have aggressively increased the monetary base by whatever amount was necessary to stop the decline in the money supply reflects hindsight: such a course of action for monetary policy was beyond the scope of discussion prevailing at the time. In addition, others, like Moses Abramovitz, point out that the money supply had endogenous components beyond the Federal Reserve’s ability to control. The money supply may have been falling as a result of declining economic activity, so-called “reverse causation.” Moreover, the gold standard, to which the United States continued to adhere until March 1933, also tied the hands of the Federal Reserve insofar as gold outflows required it to contract the supply of money. These views are also contained in Temin (1989) and Eichengreen (1992), as discussed below.

Bernanke (1983) argues that the monetary hypothesis: (i) is not a complete explanation of the link between the financial sector and aggregate output in the 1930s; (ii) does not explain how decreases in the money supply caused output to keep falling over many years, especially since it is widely believed that changes in the money supply change only prices and other nominal economic values in the long run, not real economic values like output; and (iii) is quantitatively insufficient to explain the depth of the decline in output. Bernanke (1983) not only resurrected and sharpened Fisher’s (1933) debt deflation hypothesis, but also made further contributions to what has come to be known as the nonmonetary/financial hypothesis.

The Nonmonetary/Financial Hypothesis

Bernanke (1983), building on the monetary hypothesis of Friedman and Schwartz (1963), presents an alternative interpretation of the way in which the financial crises may have affected output. The argument involves both the effects of debt deflation and the impact that bank panics had on the ability of financial markets to efficiently allocate funds from lenders to borrowers. These nonmonetary/financial theories hold that events in financial markets other than shocks to the money supply can help to account for the paths of output and prices during the Great Depression.

Fisher (1933) asserted that the dominant forces that account for “great” depressions are (nominal) over-indebtedness and deflation. Specifically, he argued that real debt burdens were substantially increased when there were dramatic declines in the price level and nominal incomes. The combination of deflation, falling nominal income and increasing real debt burdens led to debtor insolvency, lowered aggregate demand, and thereby contributed to a continuing decline in the price level and thus further increases in the real burden of debt.
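
A back-of-the-envelope calculation shows how powerful this channel is. With debts fixed in nominal terms, the real burden of a debt D at price level P is D/P, so a fall in the price level of the rough magnitude reported above raises the burden substantially:

\[
\frac{D/P_{1}}{D/P_{0}} = \frac{P_{0}}{P_{1}} = \frac{1}{1-0.33} \approx 1.5,
\]

that is, a price level roughly one-third lower leaves debtors owing about 50 percent more in real terms, even before the accompanying fall in nominal incomes is taken into account.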

The “Credit View”

Bernanke (1983), in what is now called the “credit view,” provided additional details to help explain Fisher’s debt deflation hypothesis. He argued that in normal circumstances, an initial decline in prices merely reallocates wealth from debtors to creditors, such as banks. Usually, such wealth redistributions are minor in magnitude and have no first-order impact on the economy. However, in the face of large shocks, deflation in the prices of assets forfeited to banks by debtor bankruptcies leads to a decline in the nominal value of assets on bank balance sheets. For a given value of bank liabilities, also denominated in nominal terms, this deterioration in bank assets threatens insolvency. As banks reallocate away from loans to safer government securities, some borrowers, particularly small ones, are unable to obtain funds, often at any price. Further, if this reallocation is long-lived, the shortage of credit for these borrowers helps to explain the persistence of the downturn. As the disappearance of bank financing forces lower expenditure plans, aggregate demand declines, which again contributes to the downward deflationary spiral. For debt deflation to be operative, it is necessary to demonstrate that there was a substantial build-up of debt prior to the onset of the Depression and that the deflation of the 1930s was at least partially unanticipated at medium- and long-term horizons at the time that the debt was being incurred. Both of these conditions appear to have been in place (Fackler and Parker, 2001; Hamilton, 1992; Evans and Wachtel, 1993).

The Breakdown in Credit Markets

In addition, the financial panics that occurred hindered the credit allocation mechanism. Bernanke (1983) explains that the process of credit intermediation requires substantial information gathering and non-trivial market-making activities. The financial disruptions of 1930–33 substantially impeded the performance of these services and thus impaired the efficient allocation of credit between lenders and borrowers. That is, financial panics and debtor and business bankruptcies resulted in an increase in the real cost of credit intermediation. As the cost of credit intermediation increased, sources of credit for many borrowers (especially households, farmers and small firms) became expensive or even unobtainable at any price. This tightening of credit put downward pressure on aggregate demand and helped turn the recession of 1929–30 into the Great Depression. The empirical support for the validity of the nonmonetary/financial hypothesis during the Depression is substantial (Bernanke, 1983; Fackler and Parker, 1994, 2001; Hamilton, 1987, 1992), although support for the “credit view” as the transmission mechanism of monetary policy in post-World War II economic activity is substantially weaker. In combination, considering the preponderance of empirical results and historical simulations contained in the economic literature, the monetary hypothesis and the nonmonetary/financial hypothesis go a substantial distance toward accounting for the economic experiences of the United States during the Great Depression.

The Role of Pessimistic Expectations

To this combination, the behavior of expectations should also be added. As explained by James Tobin, there was another reason for a “change in the character of the contraction” in 1931. Although Friedman and Schwartz attribute this change to the bank panics that occurred, Tobin points out that it also reflected the emergence of pessimistic expectations. If the early stages of the Depression had been regarded as symptomatic of a recession no different in kind from earlier episodes in American economic history, with recovery a real possibility, the public need not have formed pessimistic expectations; they might instead have anticipated that things would get better. After the British left the gold standard, however, expectations turned deeply pessimistic: the public may well have come to believe that the business cycle downturn was not going to be reversed but was going to deepen. The depressing effects on consumption and investment that follow when households and business investors begin to plan for a worsening economy rather than for recovery are common knowledge in the modern macroeconomic literature. For the Great Depression, the empirical research on the expectations hypothesis focuses almost exclusively on uncertainty (which is not the same thing as pessimistic or optimistic expectations) and its contribution to the onset of the Depression (Romer, 1990; Flacco and Parker, 1992). Although Keynes (1936) writes extensively about the state of expectations and their economic influence, the literature is silent regarding the empirical validity of the expectations hypothesis for 1931–33. Yet the continued shocks that the United States’ economy received suggested that the downturn of 1931–33 was of a different kind than had previously been known, and once the public believed this and made their plans accordingly, the results had to have been economically devastating. Because there is no formal empirical confirmation, I have not treated the expectations hypothesis as a separate hypothesis in this overview. However, the logic of the argument compels me to the opinion that the expectations hypothesis is an impressive addition to the monetary hypothesis and the nonmonetary/financial hypothesis in accounting for the economic experiences of the United States during the Great Depression.

The Gold Standard Hypothesis

Recent research on the operation of the interwar gold standard has deepened our understanding of the Depression and its international character. The way in which the interwar gold standard was structured and operated provides a convincing explanation of the international transmission of deflation and depression in the 1930s.

The story has its beginning in the 1870–1914 period. During this time the gold standard functioned as a pegged exchange rate system where certain rules were observed. Namely, it was necessary for countries to permit their money supplies to be altered in response to gold flows in order for the price-specie flow mechanism to function properly. It operated successfully because countries that were gaining gold allowed their money supply to increase and raise the domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Countries that were losing gold were obligated to permit their money supply to decrease and generate a decline in their domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Eichengreen (1992) discusses and extensively documents that the gold standard of this period functioned as smoothly as it did because of the international commitment countries had to the gold standard and the level of international cooperation exhibited during this time. “What rendered the commitment to the gold standard credible, then, was that the commitment was international, not merely national. That commitment was activated through international cooperation” (Eichengreen, 1992).
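
A deliberately stylized sketch of this adjustment is given below. It assumes two identical countries, money supplies tied one-for-one to gold, quantity-theory pricing, and a trade balance that responds to relative prices; it is meant only to show the direction of the adjustment under the “rules of the game,” not any historical magnitudes.

# Stylized price-specie flow mechanism: two identical countries, money tied to gold.
velocity, output = 4.0, 100.0            # identical and fixed in both countries
gold = {"A": 60.0, "B": 40.0}            # country A starts with more gold

def price_level(g):
    # Quantity theory with the money supply equal to the gold stock: M * V = P * Y.
    return g * velocity / output

for step in range(25):
    p_a, p_b = price_level(gold["A"]), price_level(gold["B"])
    # The higher-price country loses competitiveness, runs a trade deficit and loses gold.
    flow = 0.2 * (p_a - p_b)             # gold flowing from A to B when A's prices are higher
    gold["A"] -= flow
    gold["B"] += flow

print(gold)                              # gold stocks, and hence price levels, converge

As long as both countries follow the rules – expanding money when gold flows in and contracting when it flows out – price levels converge and the gold flows die away; the interwar asymmetry described below broke exactly this feedback.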

The gold standard was suspended when the hostilities of World War I broke out. By the end of 1928, major countries such as the United States, the United Kingdom, France and Germany had re-established ties to a functioning fixed exchange rate gold standard. However, Eichengreen (1992) points out that the world in which the gold standard functioned before World War I was not the same world in which the gold standard was being re-established. A credible commitment to the gold standard, as Hamilton (1988) explains, required a country to maintain fiscal soundness and political objectives that ensured the monetary authority could pursue a monetary policy consistent with long-run price stability and continuous convertibility of the currency. Successful operation required these conditions to be in place before the gold standard was re-established. Many governments during the interwar period, however, went back on gold under the opposite set of circumstances: amid the political chaos that followed World War I, they were incapable of fiscal soundness and lacked political objectives conducive to a monetary policy that could ensure long-run price stability. “By this criterion, returning to the gold standard could not have come at a worse time or for poorer reasons” (Hamilton, 1988). Kindleberger (1973) stresses that the pre-World War I gold standard functioned as well as it did because of the unquestioned leadership exercised by Great Britain. After World War I and the relative decline of Britain, the United States did not exhibit the same strength of leadership Britain had shown before. The upshot is that the post-World War I environment was unsuitable for re-establishing the gold standard, and the interwar gold standard was destined to drift in a state of malperformance, as no one took responsibility for its proper functioning. However, the problems did not end there.

Flaws in the Interwar International Gold Standard

Lack of Symmetry in the Response of Gold-Gaining and Gold-Losing Countries

The interwar gold standard operated with four structural/technical flaws that almost certainly doomed it to failure (Eichengreen, 1986; Temin, 1989; Bernanke and James, 1991). The first, and most damaging, was the lack of symmetry in the response of gold-gaining countries and gold-losing countries that resulted in a deflationary bias that was to drag the world deeper into deflation and depression. If a country was losing gold reserves, it was required to decrease its money supply to maintain its commitment to the gold standard. Given that a minimum gold reserve had to be maintained and that countries became concerned when the gold reserve fell within 10 percent of this minimum, little gold could be lost before the necessity of monetary contraction, and thus deflation, became a reality. Moreover, with a fractional gold reserve ratio of 40 percent, the result was a decline in the domestic money supply equal to 2.5 times the gold outflow. On the other hand, there was no such constraint on countries that experienced gold inflows. Gold reserves were accumulated without the binding requirement that the domestic money supply be expanded. Thus the price–specie flow mechanism ceased to function and the equilibrating forces of the pre-World War I gold standard were absent during the interwar period. If a country attracting gold reserves were to embark on a contractionary path, the result would be the further extraction of gold reserves from other countries on the gold standard and the imposition of deflation on their economies as well, as they were forced to contract their money supplies. “As it happened, both of the two major gold surplus countries – France and the United States, who at the time together held close to 60 percent of the world’s monetary gold – took deflationary paths in 1928–1929” (Bernanke and James, 1991).
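
The 2.5-to-1 figure in the text is simply the reciprocal of the cover ratio. If a country at its minimum must back its money with gold at 40 percent, then

\[
M \le \frac{G}{0.40} \quad\Longrightarrow\quad \Delta M = \frac{\Delta G}{0.40} = 2.5\,\Delta G,
\]

so every unit of gold lost forces the domestic money supply down by two and a half units, while a unit of gold gained carries no corresponding obligation to expand.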

Foreign Exchange Reserves

Second, countries that did not have reserve currencies could hold their minimum reserves in the form of both gold and convertible foreign exchange reserves. If the threat of devaluation of a reserve currency appeared likely, a country holding foreign exchange reserves could divest itself of the foreign exchange, as holding it became a more risky proposition. Further, the convertible reserves were usually only fractionally backed by gold. Thus, if countries were to prefer gold holdings as opposed to foreign exchange reserves for whatever reason, the result would be a contraction in the world money supply as reserves were destroyed in the movement to gold. This effect can be thought of as equivalent to the effect on the domestic money supply in a fractional reserve banking system of a shift in the public’s money holdings toward currency and away from bank deposits.

The Bank of France and Open Market Operations

Third, the powers of many European central banks were restricted or excluded outright. In particular, as discussed by Eichengreen (1986), the Bank of France was prohibited from engaging in open market operations, i.e. the purchase or sale of government securities. Given that France was one of the countries amassing gold reserves, this restriction largely prevented it from adhering to the rules of the gold standard. The proper response would have been to expand the money supply and inflate so as not to continue attracting gold reserves and imposing deflation on the rest of the world. This was not done. France continued to accumulate gold until 1932 and did not leave the gold standard until 1936.

Inconsistent Currency Valuations

Lastly, the gold standard was re-established at parities that were unilaterally determined by each individual country. When France returned to the gold standard in 1926, it did so at a parity that is believed to have undervalued the franc. When Britain returned to the gold standard in 1925, it did so at a parity that is believed to have overvalued the pound. In this situation, the only sustainable equilibrium required the French to inflate their economy in response to the gold inflows. However, given its legacy of inflation during the 1921–26 period, France steadfastly resisted inflation (Eichengreen, 1986). Maintaining the gold standard and resisting inflation were now inconsistent policy objectives, and the Bank of France’s inability to conduct open market operations only made matters worse. The accumulation of gold and the exporting of deflation to the world were the result.

The Timing of Recoveries

Taken together, the flaws described above made the interwar gold standard dysfunctional and, in the end, unsustainable. Looking back, we observe that the timing of departure from the gold standard, and of the subsequent recovery, varied across countries. For some countries recovery came sooner; for others it came later. It is in this timing of departure from the gold standard that recent research has produced a remarkable empirical finding. From the work of Choudri and Kochin (1980), Eichengreen and Sachs (1985), Temin (1989), and Bernanke and James (1991), we now know that the sooner a country abandoned the gold standard, the quicker its recovery commenced. Spain, which never restored its participation in the gold standard, missed the ravages of the Depression altogether. Britain left the gold standard in September 1931 and started to recover; Sweden left at the same time and started to recover; the United States left in March 1933 and recovery commenced. The economies of France, Holland, and Poland continued to struggle after the United States’ recovery began, as those countries adhered to the gold standard until 1936. Only after they left did their recoveries start; departure from the gold standard freed a country from the ravages of deflation.

The Fed and the Gold Standard: The “Midas Touch”

Temin (1989) and Eichengreen (1992) argue that it was the unbending commitment to the gold standard that generated deflation and depression worldwide. They emphasize that the gold standard required fiscal and monetary authorities around the world to submit their economies to internal adjustment and economic instability in the face of international shocks. Given how the gold standard tied countries together, if the gold parity were to be defended and devaluation was not an option, unilateral monetary actions by any one country were pointless. The end result is that Temin (1989) and Eichengreen (1992) reject Friedman and Schwartz’s (1963) claim that the Depression was caused by a series of policy failures on the part of the Federal Reserve. Actions taken in the United States, according to Temin (1989) and Eichengreen (1992), cannot be properly understood in isolation from the rest of the world. If the commitment to the gold standard was to be maintained, monetary and fiscal authorities worldwide had little choice in responding to the crises of the Depression. Why did the Federal Reserve continue a policy of inaction during the banking panics? Because the commitment to the gold standard, what Temin (1989) has labeled “The Midas Touch,” left it no choice but to let the banks fail. Monetary expansion and the injection of liquidity would lower interest rates, lead to a gold outflow, and potentially run contrary to the rules of the gold standard. Continued deflation due to gold outflows would begin to call into question the monetary authority’s commitment to the gold standard. “Defending gold parity might require the authorities to sit idly by as the banking system crumbled, as the Federal Reserve did at the end of 1931 and again at the beginning of 1933” (Eichengreen, 1992). Thus, if adherence to the gold standard was to be maintained, the money supply was endogenous with respect to the balance of payments and beyond the influence of the Federal Reserve.

Eichengreen (1992) concludes further that what made the pre-World War I gold standard so successful was absent during the interwar period: credible commitment to the gold standard activated through international cooperation in its implementation and management. Had these important ingredients of the pre-World War I gold standard been present during the interwar period, twentieth-century economic history may have been very different.

Recovery and the New Deal

March 1933 was the rock bottom of the Depression, and the inauguration of Franklin D. Roosevelt represented a sharp break with the status quo. Upon taking office, Roosevelt declared a bank holiday; the United States left the interwar gold standard the following month; and the government commenced several measures designed to resurrect the financial system. These measures included: (i) the establishment of the Reconstruction Finance Corporation, which set about funneling large sums of liquidity to banks and other intermediaries; (ii) the Securities Exchange Act of 1934, which established margin requirements for bank loans used to purchase stocks and bonds and increased the information that had to be provided to potential investors; and (iii) the Glass–Steagall Act, which strictly separated commercial banking and investment banking. Although these measures delivered some immediate relief to financial markets, lenders remained reluctant to extend credit after the events of 1929–33, and the recovery of financial markets was slow and incomplete. Bernanke (1983) estimates that the United States’ financial system did not begin to shed the inefficiencies under which it was operating until the end of 1935.

The NIRA

Policies designed to promote different economic institutions were enacted as part of the New Deal. The National Industrial Recovery Act (NIRA) was passed on June 16, 1933 and was designed to raise prices and wages. In addition, the Act mandated the formation of planning boards in critical sectors of the economy. The boards were charged with setting output goals for their respective sectors, and the usual result was a restriction of production. In effect, the NIRA was a license for industries to form cartels, and it was struck down as unconstitutional in 1935. The Agricultural Adjustment Act of 1933 was similar legislation designed to reduce output and raise prices in the farming sector. It too was ruled unconstitutional, in 1936.

Relief and Jobs Programs

Other policies intended to provide relief directly to people who were destitute and out of work were rapidly enacted. The Civilian Conservation Corps (CCC), the Tennessee Valley Authority (TVA), the Public Works Administration (PWA) and the Federal Emergency Relief Administration (FERA) were set up shortly after Roosevelt took office and provided jobs for the unemployed and grants to states for direct relief. The Civil Works Administration (CWA), created in 1933–34, and the Works Progress Administration (WPA), created in 1935, were also designed to provide work relief to the jobless. The Social Security Act was also passed in 1935. There surely are other programs with similar acronyms that have been left out, but the intent was the same. In the words of Roosevelt himself, addressing Congress in 1938:

Government has a final responsibility for the well-being of its citizenship. If private co-operative endeavor fails to provide work for the willing hands and relief for the unfortunate, those suffering hardship from no fault of their own have a right to call upon the Government for aid; and a government worthy of its name must make fitting response. (Quoted from Polenberg, 2000)

The Depression had shown the inaccuracy of classifying the 1920s as a “new era.” Rather, the true “new era” – summarized by Roosevelt’s words above and inaugurated by the government’s new involvement in the economy – began in March 1933.

The NBER business cycle chronology shows continuous growth from March 1933 until May 1937, at which time a 13-month recession hit the economy. The business cycle rebounded in June 1938 and continued its upward march to and through the beginning of the United States’ involvement in World War II. The recovery that started in 1933 was impressive, with real GNP growing at annual rates in the 10 percent range between 1933 and December 1941, excluding the recession of 1937–38 (Romer, 1993). However, as reported by Romer (1993), real GNP did not return to its pre-Depression level until 1937 and did not catch up to its pre-Depression secular trend until 1942. Indeed, the unemployment rate, which peaked at 25 percent in March 1933, continued to hover near or above double digits until 1940. It is in this sense that most economists attribute the ending of the Depression to the onset of World War II. The War brought complete recovery, as the unemployment rate quickly plummeted after December 1941 to its wartime nadir of below 2 percent.

Explanations for the Pace of Recovery

The question remains, however, that if the War completed the recovery, what initiated it and sustained it through the end of 1941? Should we point to the relief programs of the New Deal and the leadership of Roosevelt? Certainly, they had psychological/expectational effects on consumers and investors and helped to heal the suffering experienced during that time. However, as shown by Brown (1956), Peppers (1973), and Raynold, McMillin and Beard (1991), fiscal policy contributed little to the recovery, and certainly could have done much more.

Once again we return to the financial system for answers. The abandonment of the gold standard, the impact this had on the money supply, and the deliverance from the economic effects of deflation would have to be singled out as the most important contributor to the recovery. Romer (1993) stresses that Eichengreen and Sachs (1985) have it right; recovery did not come before the decision to abandon the old gold parity was made operational. Once this became reality, devaluation of the currency permitted expansion in the money supply and inflation which, rather than promoting a policy of beggar-thy-neighbor, allowed countries to escape the deflationary vortex of economic decline. As discussed in connection with the gold standard hypothesis, the simultaneity of leaving the gold standard and recovery is a robust empirical result that reflects more than simple temporal coincidence.

Romer (1993) reports an increase in the monetary base in the United States of 52 percent between April 1933 and April 1937. The M1 money supply virtually matched this increase in the monetary base, growing 49 percent over the same period. The sources of this increase were two-fold. First, aside from the immediate monetary expansion permitted by devaluation, monetary expansion continued into 1934 and beyond as gold flowed to the United States from Europe, driven by the increasing political unrest and heightened probability of hostilities that began the progression to World War II. Second, the Treasury chose not to sterilize these gold inflows, and the increase in the money supply matched the increase in the monetary base. This is evidence that the monetary expansion resulted from policy decisions and not from endogenous changes in the money multiplier. The new regime was freed from the constraints of the gold standard, and the policy makers were intent on taking actions of a different nature from those taken between 1929 and 1933.
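
The claim that the expansion came from the base rather than from the multiplier can be checked directly from the two growth rates just cited:

\[
\frac{m_{1937}}{m_{1933}} = \frac{M_{1937}/H_{1937}}{M_{1933}/H_{1933}} = \frac{1.49}{1.52} \approx 0.98,
\]

a decline in the implied money multiplier of only about 2 percent between April 1933 and April 1937, consistent with the growth in M1 being driven by the unsterilized gold inflows into the monetary base rather than by a recovery of the multiplier.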

Incompleteness of the Recovery before WWII

The Depression had turned a corner and the economy was emerging from the abyss in 1933. However, it still had a long way to go to reach full recovery. Friedman and Schwartz (1963) comment that “the most notable feature of the revival after 1933 was not its rapidity but its incompleteness.” They claim that monetary policy and the Federal Reserve were passive after 1933. The monetary authorities did nothing to stop the fall from 1929 to 1933 and did little to promote the recovery: the Federal Reserve made no effort to increase the stock of high-powered money through the use of either open market operations or rediscounting, and Federal Reserve credit outstanding remained “almost perfectly constant from 1934 to mid-1940” (Friedman and Schwartz, 1963). As we have seen above, it was the Treasury that was generating increases in the monetary base at the time, by issuing gold certificates equal to the amount of gold reserve inflow and depositing them at the Federal Reserve. When the government spent the money, the Treasury swapped the gold certificates for Federal Reserve notes, and this expanded the monetary base (Romer, 1993). Monetary policy was thought to be powerless to promote recovery, and fiscal policy instead became the implement of choice. Ironically, while the research shows that fiscal policy could have done much more to aid in recovery, it was fiscal policy that now became the focus of attention. There is an easy explanation for why this was so.

The Emergence of Keynes

The economics profession as a whole was at a loss to provide cogent explanations for the events of 1929–33. In the words of Robert Gordon (1998), “economics had lost its intellectual moorings, and it was time for a new diagnosis.” There were no convincing answers regarding why the earlier theories of macroeconomic behavior failed to explain the events that were occurring, and worse, there was no set of principles that established a guide for proper actions in the future. That changed in 1936 with the publication of Keynes’s book The General Theory of Employment, Interest and Money. Perhaps there has been no other person and no other book in economics about which so much has been written. Many consider the arrival of Keynesian thought to have been a “revolution,” although this too is hotly contested (see, for example, Laidler, 1999). The debates that The General Theory generated have been many and long-lasting. There is little that can be said here to add or subtract from the massive literature devoted to the ideas promoted by Keynes, whether they be viewed right or wrong. But the influence over academic thought and economic policy that was generated by The General Theory is not in doubt.

The time was right for a set of ideas that not only explained the Depression’s course of events, but also provided a prescription for remedies that would create better economic performance in the future. Keynes and The General Theory, at the time the events were unfolding, provided just such a package. When all is said and done, we can look back in hindsight and argue endlessly about what Keynes “really meant” or what the “true” contribution of Keynesianism has been to the world of economics. At the time the Depression happened, Keynes represented a new paradigm for young scholars to latch on to. The stage was set for the nurturing of macroeconomics for the remainder of the twentieth century.

This article is a modified version of the introduction to Randall Parker, editor, Reflections on the Great Depression, Edward Elgar Publishing, 2002.

Bibliography

Olney, Martha. “Avoiding Default: The Role of Credit in the Consumption Collapse of 1930.” Quarterly Journal of Economics 114, no. 1 (1999): 319-35.

Anderson, Barry L. and James L. Butkiewicz. “Money, Spending and the Great Depression.” Southern Economic Journal 47 (1980): 388-403.

Balke, Nathan S. and Robert J. Gordon. “Historical Data.” In The American Business Cycle: Continuity and Change, edited by Robert J. Gordon. Chicago: University of Chicago Press, 1986.

Bernanke, Ben S. “Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression.” American Economic Review 73, no. 3 (1983): 257-76.

Bernanke, Ben S. and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Brown, E. Cary. “Fiscal Policy in the Thirties: A Reappraisal.” American Economic Review 46, no. 5 (1956): 857-79.

Cecchetti, Stephen G. “Prices during the Great Depression: Was the Deflation of 1930-1932 Really Anticipated?” American Economic Review 82, no. 1 (1992): 141-56.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, edited by Mark Wheeler. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research, 1998.

Cecchetti, Stephen G. and Georgios Karras. “Sources of Output Fluctuations during the Interwar Period: Further Evidence on the Causes of the Great Depression.” Review of Economics and Statistics 76, no. 1 (1994): 80-102.

Choudri, Ehsan U. and Levis A. Kochin. “The Exchange Rate and the International Transmission of Business Cycle Disturbances: Some Evidence from the Great Depression.” Journal of Money, Credit, and Banking 12, no. 4 (1980): 565-74.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Eichengreen, Barry. “The Bank of France and the Sterilization of Gold, 1926–1932.” Explorations in Economic History 23, no. 1 (1986): 56-84.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939. New York: Oxford University Press, 1992.

Eichengreen, Barry and Jeffrey Sachs. “Exchange Rates and Economic Recovery in the 1930s.” Journal of Economic History 45, no. 4 (1985): 925-46.

Evans, Martin and Paul Wachtel. “Were Price Changes during the Great Depression Anticipated? Evidence from Nominal Interest Rates.” Journal of Monetary Economics 32, no. 1 (1993): 3-34.

Fackler, James S. and Randall E. Parker. “Accounting for the Great Depression: A Historical Decomposition.” Journal of Macroeconomics 16 (1994): 193-220.

Fackler, James S. and Randall E. Parker. “Was Debt Deflation Operative during the Great Depression?” East Carolina University Working Paper, 2001.

Fisher, Irving. “The Debt–Deflation Theory of Great Depressions.” Econometrica 1, no. 4 (1933): 337-57.

Flacco, Paul R. and Randall E. Parker. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30, no. 1 (1992): 154-71.

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States, 1867–1960. Princeton, NJ: Princeton University Press, 1963.

Gordon, Robert J. Macroeconomics, seventh edition. New York: Addison Wesley, 1998.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 13 (1987): 1-25.

Hamilton, James D. “Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6, no. 2 (1988): 67-89.

Hamilton, James D. “Was the Deflation during the Great Depression Anticipated? Evidence from the Commodity Futures Market.” American Economic Review 82, no. 1 (1992): 157-78.

Hayek, Friedrich A. von. Monetary Theory and the Trade Cycle. New York: A. M. Kelley, 1967 (originally published in 1929).

Hayek, Friedrich A. von. Prices and Production. New York: A. M. Kelley, 1966 (originally published in 1931).

Hoover, Herbert. The Memoirs of Herbert Hoover: The Great Depression, 1929–1941. New York: Macmillan, 1952.

Keynes, John M. The General Theory of Employment, Interest, and Money. London: Macmillan, 1936.

Kindleberger, Charles P. The World in Depression, 1929–1939. Berkeley: University of California Press, 1973.

Laidler, David. Fabricating the Keynesian Revolution. Cambridge: Cambridge University Press, 1999.

McCallum, Bennett T. “Could a Monetary Base Rule Have Prevented the Great Depression?” Journal of Monetary Economics 26 (1990): 3-26.

Meltzer, Allan H. “Monetary and Other Explanations of the Start of the Great Depression.” Journal of Monetary Economics 2 (1976): 455-71.

Mishkin, Frederick S. “The Household Balance Sheet and the Great Depression.” Journal of Economic History 38, no. 4 (1978): 918-37.

Nelson, Daniel B. “Was the Deflation of 1929–1930 Anticipated? The Monetary Regime as Viewed by the Business Press.” Research in Economic History 13 (1991): 1-65.

Peppers, Larry. “Full Employment Surplus Analysis and Structural Change: The 1930s.” Explorations in Economic History 10 (1973): 197-210.

Persons, Charles E. “Credit Expansion, 1920 to 1929, and Its Lessons.” Quarterly Journal of Economics 45, no. 1 (1930): 94-130.

Polenberg, Richard. The Era of Franklin D. Roosevelt, 1933–1945: A Brief History with Documents. Boston: Bedford/St. Martin’s, 2000.

Raynold, Prosper, W. Douglas McMillin and Thomas R. Beard. “The Impact of Federal Government Expenditures in the 1930s.” Southern Economic Journal 58, no. 1 (1991): 15-28.

Romer, Christina D. “World War I and the Postwar Depression: A Reappraisal Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22, no. 1 (1988): 91-115.

Romer, Christina D. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105, no. 3 (1990): 597-624.

Romer, Christina D. “The Nation in Depression.” Journal of Economic Perspectives 7, no. 2 (1993): 19-39.

Snowdon, Brian and Howard R. Vane. Conversations with Leading Economists: Interpreting Modern Macroeconomics, Cheltenham, UK: Edward Elgar, 1999.

Soule, George H. Prosperity Decade, From War to Depression: 1917–1929. New York: Rinehart, 1947.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W.W. Norton, 1976.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1989.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” Journal of Economic Perspectives 4, no. 2 (1990): 67-83.

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922–33: A Reinterpretation.” Journal of Political Economy 73, no. 4 (1965): 325-43.

1 Bankers’ acceptances are explained at http://www.rich.frb.org/pubs/instruments/ch10.html.

2 Liquidity is the ease of converting an asset into money.

3 The monetary base is measured as the sum of currency in the hands of the public plus reserves in the banking system. It is also called high-powered money since the monetary base is the quantity that gets multiplied into greater amounts of money supply as banks make loans and people spend and thereby create new bank deposits.

4 The money multiplier equals [D/R*(1 + D/C)]/(D/R + D/C + D/E), where D = deposits, R = reserves, C = currency and E = excess reserves in the banking system.

5 The real interest rate adjusts the observed (nominal) interest rate for inflation or deflation. Ex post refers to the real interest rate after the actual change in prices has been observed; ex ante refers to the real interest rate that is expected at the time the lending occurs.

6 See note 3.

Citation: Parker, Randall. “An Overview of the Great Depression”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-overview-of-the-great-depression/

Gold Standard

Lawrence H. Officer, University of Illinois at Chicago

The gold standard is the most famous monetary system that ever existed. The periods in which the gold standard flourished, the groupings of countries under the gold standard, and the dates during which individual countries adhered to this standard are delineated in the first section. Then characteristics of the gold standard (what elements make for a gold standard), the various types of the standard (domestic versus international, coin versus other, legal versus effective), and implications for the money supply of a country on the standard are outlined. The longest section is devoted to the “classical” gold standard, the predominant monetary system that ended in 1914 (when World War I began), followed by a section on the “interwar” gold standard, which operated between the two World Wars (the 1920s and 1930s).

Countries and Dates on the Gold Standard

Countries on the gold standard and the periods (or beginning and ending dates) during which they were on gold are listed in Tables 1 and 2 for the classical and interwar gold standards. Types of gold standard, ambiguities of dates, and individual-country cases are considered in later sections. The country groupings reflect the importance of countries to establishment and maintenance of the standard. Center countries — Britain in the classical standard, the United Kingdom (Britain’s legal name since 1922) and the United States in the interwar period — were indispensable to the spread and functioning of the gold standard. Along with the other core countries — France and Germany, and the United States in the classical period — they attracted other countries to adopt the gold standard, in particular, British colonies and dominions, Western European countries, and Scandinavia. Other countries — and, for some purposes, also British colonies and dominions — were in the periphery: acted on, rather than actors, in the gold-standard eras, and generally not as committed to the gold standard.

Table 1. Countries on Classical Gold Standard
Country Type of Gold Standard Period
Center Country
Britaina Coin 1774-1797b, 1821-1914
Other Core Countries
United Statesc Coin 1879-1917d
Francee Coin 1878-1914
Germany Coin 1871-1914
British Colonies and Dominions
Australia Coin 1852-1915
Canadaf Coin 1854-1914
Ceylon Coin 1901-1914
Indiag Exchange (British pound) 1898-1914
Western Europe
Austria-Hungaryh Coin 1892-1914
Belgiumi Coin 1878-1914
Italy Coin 1884-1894
Liechtenstein Coin 1898-1914
Netherlandsj Coin 1875-1914
Portugalk Coin 1854-1891
Switzerland Coin 1878-1914
Scandinavia
Denmarkl Coin 1872-1914
Finland Coin 1877-1914
Norway Coin 1875-1914
Sweden Coin 1873-1914
Eastern Europe
Bulgaria Coin 1906-1914
Greece Coin 1885, 1910-1914
Montenegro Coin 1911-1914
Romania Coin 1890-1914
Russia Coin 1897-1914
Middle East
Egypt Coin 1885-1914
Turkey (Ottoman Empire) Coin 1881m-1914
Asia
Japann Coin 1897-1917
Philippines Exchange (U.S. dollar) 1903-1914
Siam Exchange (British pound) 1908-1914
Straits Settlementso Exchange (British pound) 1906-1914
Mexico and Central America
Costa Rica Coin 1896-1914
Mexico Coin 1905-1913
South America
Argentina Coin 1867-1876, 1883-1885, 1900-1914
Bolivia Coin 1908-1914
Brazil Coin 1888-1889, 1906-1914
Chile Coin 1895-1898
Ecuador Coin 1898-1914
Peru Coin 1901-1914
Uruguay Coin 1876-1914
Africa
Eritrea Exchange (Italian lira) 1890-1914
German East Africa Exchange (German mark) 1885p-1914
Italian Somaliland Exchange (Italian lira) 1889p-1914

a Including colonies (except British Honduras) and possessions without a national currency: New Zealand and certain other Oceanic colonies, South Africa, Guernsey, Jersey, Malta, Gibraltar, Cyprus, Bermuda, British West Indies, British Guiana, British Somaliland, Falkland Islands, other South and West African colonies.
b Or perhaps 1798.
c Including countries and territories with U.S. dollar as exclusive or predominant currency: British Honduras (from 1894), Cuba (from 1898), Dominican Republic (from 1901), Panama (from 1904), Puerto Rico (from 1900), Alaska, Aleutian Islands, Hawaii, Midway Islands (from 1898), Wake Island, Guam, and American Samoa.
d Except August – October 1914.
e Including Tunisia (from 1891) and all other colonies except Indochina.
f Including Newfoundland (from 1895).
g Including British East Africa, Uganda, Zanzibar, Mauritius, and Ceylon (to 1901).
h Including Montenegro (to 1911).
i Including Belgian Congo.
j Including Netherlands East Indies.
k Including colonies, except Portuguese India.
l Including Greenland and Iceland.
m Or perhaps 1883.
n Including Korea and Taiwan.
o Including Borneo.
p Approximate beginning date.

Sources: Bloomfield (1959, pp. 13, 15; 1963), Bordo and Kydland (1995), Bordo and Schwartz (1996), Brown (1940, pp. 15-16), Bureau of the Mint (1929), de Cecco (1984, p. 59), Ding (1967, pp. 6-7), Director of the Mint (1913, 1917), Ford (1985, p. 153), Gallarotti (1995, pp. 272-75), Gunasekera (1962), Hawtrey (1950, p. 361), Hershlag (1980, p. 62), Ingram (1971, p. 153), Kemmerer (1916; 1940, pp. 9-10; 1944, p. 39), Kindleberger (1984, pp. 59-60), Lampe (1986, p. 34), MacKay (1946, p. 64), MacLeod (1994, p. 13), Norman (1892, pp. 83-84), Officer (1996, chs. 3-4), Pamuk (2000, p. 217), Powell (1999, p. 14), Rifaat (1935, pp. 47, 54), Shinjo (1962, pp. 81-83), Spalding (1928), Wallich (1950, pp. 32-36), Yeager (1976, p. 298), Young (1925).

Table 2. Countries on Interwar Gold Standard
Country Type of Gold Standard Exchange-Rate Stabilization Currency Convertibilitya Ending Date
United Kingdomb 1925 1931
Coin 1922e Other Core Countries
Bullion 1928 Germany 1924 1931
Australiag 1925 1930
Exchange 1925 Canadai 1925 1929
Exchange 1925 Indiaj 1925 1931
Coin 1929k South Africa 1925 1933
Austria 1922 1931
Exchange 1926 Danzig 1925 1935
Coin 1925 Italym 1927 1934
Coin 1925 Portugalo 1929 1931
Coin 1925 Scandinavia
Bullion 1927 Finland 1925 1931
Bullion 1928 Sweden 1922 1931
Albania 1922 1939
Exchange 1927 Czechoslovakia 1923 1931
Exchange 1928 Greece 1927 1932
Exchange 1925 Latvia 1922 1931
Coin 1922 Poland 1926 1936
Exchange 1929 Yugoslavia 1925 1932
Egypt 1925 1931
Exchange 1925 Palestine 1927 1931
Exchange 1928 Asia
Coin 1930 Malayat 1925 1931
Coin 1925 Philippines 1922 1933
Exchange 1928 Mexico and Central America
Exchange 1922 Guatemala 1925 1933
Exchange 1922 Honduras 1923 1933
Coin 1925 Nicaragua 1915 1932
Coin 1920 South America
Coin 1927 Bolivia 1926 1931
Exchange 1928 Chile 1925 1931
Coin 1923 Ecuador 1927 1932
Exchange 1927 Peru 1928 1932
Exchange 1928 Venezuela 1923 1930

a And freedom of gold export and import.
b Including colonies (except British Honduras) and possessions without a national currency: Guernsey, Jersey, Malta, Gibraltar, Cyprus, Bermuda, British West Indies, British Guiana, British Somaliland, Falkland Islands, British West African and certain South African colonies, certain Oceanic colonies.
c Including countries and territories with U.S. dollar as exclusive or predominant currency: British Honduras, Cuba, Dominican Republic, Panama, Puerto Rico, Alaska, Aleutian Islands, Hawaii, Midway Islands, Wake Island, Guam, and American Samoa.
d Not applicable; “the United States dollar…constituted the central point of reference in the whole post-war stabilization effort and was throughout the period of stabilization at par with gold.” — Brown (1940, p. 394)
e 1919 for freedom of gold export.
f Including colonies and possessions, except Indochina and Syria.
g Including Papua (New Guinea) and adjoining islands.
h Kenya, Uganda, and Tanganyika.
i Including Newfoundland.
j Including Bhutan, Nepal, British Swaziland, Mauritius, Pemba Island, and Zanzibar.
k 1925 for freedom of gold export.
l Including Luxemburg and Belgian Congo.
m Including Italian Somaliland and Tripoli.
n Including Dutch Guiana and Curacao (Netherlands Antilles).
o Including territories, except Portuguese India.
p Including Liechtenstein.
q Including Greenland and Iceland.
r Including Greater Lebanon.
s Including Korea and Taiwan.
t Including Straits Settlements, Sarawak, Labuan, and Borneo.

Sources: Bett (1957, p. 36), Brown (1940), Bureau of the Mint (1929), Ding (1967, pp. 6-7), Director of the Mint (1917), dos Santos (1996, pp. 191-92), Eichengreen (1992, p. 299), Federal Reserve Bulletin (1928, pp. 562, 847; 1929, pp. 201, 265, 549; 1930, pp. 72, 440; 1931, p. 554; 1935, p. 290; 1936, pp. 322, 760), Gunasekera (1962), Jonung (1984, p. 361), Kemmerer (1954, pp. 301-302), League of Nations (1926, pp. 7, 15; 1927, pp. 165-69; 1929, pp. 208-13; 1931, pp. 265-69; 1937/38, p. 107; 1946, p. 2), Moggridge (1989, p. 305), Officer (1996, chs. 3-4), Powell (1999, pp. 23-24), Spalding (1928), Wallich (1950, pp. 32-37), Yeager (1976, pp. 330, 344, 359), Young (1925, p. 76).

Characteristics of Gold Standards

Types of Gold Standards

Pure Coin and Mixed Standards

In theory, “domestic” gold standards — those that do not depend on interaction with other countries — are of two types: “pure coin” standard and “mixed” (meaning coin and paper, but also called simply “coin”) standard. The two systems share several properties. (1) There is a well-defined and fixed gold content of the domestic monetary unit. For example, the dollar is defined as a specified weight of pure gold. (2) Gold coin circulates as money with unlimited legal-tender power (meaning it is a compulsorily acceptable means of payment of any amount in any transaction or obligation). (3) Privately owned bullion (gold in mass, foreign coin considered as mass, or gold in the form of bars) is convertible into gold coin in unlimited amounts at the government mint or at the central bank, and at the “mint price” of gold (the inverse of the gold content of the monetary unit). (4) Private parties have no restriction on their holding or use of gold (except possibly that privately created coined money may be prohibited); in particular, they may melt coin into bullion. The effect is as if coin were sold to the monetary authority (central bank or Treasury acting as a central bank) for bullion. It would make sense for the authority to sell gold bars directly for coin, even though not legally required, thus saving the cost of coining. Conditions (3) and (4) commit the monetary authority in effect to transact in coin and bullion in each direction such that the mint price, or gold content of the monetary unit, governs in the marketplace.
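
To make condition (3) concrete, here is a minimal numerical sketch (not from the article) of how the mint price follows from the gold content of the monetary unit, using the historical U.S. definition of the dollar as 23.22 grains of fine gold and 480 grains to the troy ounce:

```python
# Sketch: the mint price of gold is the inverse of the gold content of the
# monetary unit. Historical U.S. definition: $1 = 23.22 grains of fine gold;
# 480 grains = 1 troy ounce.

GRAINS_PER_TROY_OUNCE = 480.0
gold_content_grains_per_dollar = 23.22        # fine gold defining $1

dollars_per_grain = 1.0 / gold_content_grains_per_dollar   # inverse of gold content
mint_price_per_fine_ounce = GRAINS_PER_TROY_OUNCE * dollars_per_grain

print(f"U.S. mint price: ${mint_price_per_fine_ounce:.2f} per fine ounce")  # ~$20.67
```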

Under a pure coin standard, gold is the only money. Under a mixed standard, there are also paper currency (notes) — issued by the government, central bank, or commercial banks — and demand-deposit liabilities of banks. Government or central-bank notes (and central-bank deposit liabilities) are directly convertible into gold coin at the fixed established price on demand. Commercial-bank notes and demand deposits might be converted not directly into gold but rather into gold-convertible government or central-bank currency. This indirect convertibility of commercial-bank liabilities would apply certainly if the government or central-bank currency were legal tender but also generally even if it were not. As legal tender, gold coin is always exchangeable for paper currency or deposits at the mint price, and usually the monetary authority would provide gold bars for its coin. Again, two-way transactions in unlimited amounts fix the currency price of gold at the mint price. The credibility of the monetary-authority commitment to a fixed price of gold is the essence of a successful, ongoing gold-standard regime.

A pure coin standard did not exist in any country during the gold-standard periods. Indeed, over time, gold coin declined from about one-fifth of the world money supply in 1800 (2/3 for gold and silver coin together, as silver was then the predominant monetary standard) to 17 percent in 1885 (1/3 for gold and silver, for an eleven-major-country aggregate), 10 percent in 1913 (15 percent for gold and silver, for the major-country aggregate), and essentially zero in 1928 for the major-country aggregate (Triffin, 1964, pp. 15, 56). See Table 3. The zero figure means not that gold coin did not exist, rather that its main use was as reserves for Treasuries, central banks, and (generally to a lesser extent) commercial banks.

Table 3. Structure of Money: Major-Countries Aggregatea (end of year)
1885 1928
8 50
33 0d
18 21
33 99

a Core countries: Britain, United States, France, Germany. Western Europe: Belgium, Italy, Netherlands, Switzerland. Other countries: Canada, Japan, Sweden.
b Metallic money, minor coin, paper currency, and demand deposits.
c 1885: Gold and silver coin; overestimate, as includes commercial-bank holdings that could not be isolated from coin held outside banks by the public. 1913: Gold and silver coin. 1928: Gold coin.
d Less than 0.5 percent.
e 1885 and 1913: Gold, silver, and foreign exchange. 1928: Gold and foreign exchange.
f Official gold: Gold in official reserves. Money gold: Gold-coin component of money supply.

Sources: Triffin (1964, p. 62), Sayers (1976, pp. 348, 352) for 1928 Bank of England dollar reserves (dated January 2, 1929).

An “international” gold standard, which naturally requires that more than one country be on gold, requires in addition freedom both of international gold flows (private parties are permitted to import or export gold without restriction) and of foreign-exchange transactions (an absence of exchange control). Then the fixed mint prices of any two countries on the gold standard imply a fixed exchange rate (“mint parity”) between the countries’ currencies. For example, the dollar-sterling mint parity was $4.8665635 per pound sterling (the British pound).
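
As a back-of-the-envelope check (not part of the original text), the $4.8665635 parity can be reproduced from the two gold contents, taking the historical definitions of the dollar (23.22 grains of fine gold) and the pound sterling (113.0016 grains of fine gold in the sovereign):

```python
# Sketch: mint parity between two gold-standard currencies is the ratio of
# their gold contents (here in grains of fine gold).

GOLD_PER_DOLLAR = 23.22       # grains of fine gold per $1
GOLD_PER_POUND = 113.0016     # grains of fine gold per £1 (gold sovereign)

mint_parity = GOLD_PER_POUND / GOLD_PER_DOLLAR   # dollars per pound
print(f"Dollar-sterling mint parity: ${mint_parity:.6f} per pound")
# Prints about $4.866563 per pound, essentially the $4.8665635 quoted in the
# text (the small difference reflects rounding of the stated gold contents).
```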

Gold-Bullion and Gold-Exchange Standards

In principle, a country can choose among four kinds of international gold standards — the pure coin and mixed standards, already mentioned, a gold-bullion standard, and a gold-exchange standard. Under a gold-bullion standard, gold coin neither circulates as money nor is it used as commercial-bank reserves, and the government does not coin gold. The monetary authority (Treasury or central bank) stands ready to transact with private parties, buying or selling gold bars (usable only for import or export, not as domestic currency) for its notes, and generally a minimum size of transaction is specified. For example, in 1925-1931 the Bank of England was on the bullion standard and would sell gold bars only in the minimum amount of 400 fine (pure) ounces, approximately £1699 or $8269. Finally, the monetary authority of a country on a gold-exchange standard buys and sells not gold in any form but rather gold-convertible foreign exchange, that is, the currency of a country that itself is on the gold coin or bullion standard.
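
The £1699/$8269 figure for the minimum bar can be reconstructed with simple arithmetic, assuming the traditional British mint price of £3 17s 10.5d per standard ounce (11/12 fine) and the dollar-sterling parity quoted earlier; a brief sketch:

```python
# Sketch of the arithmetic behind the 400-fine-ounce minimum bar.
# Assumptions: British mint price of £3 17s 10.5d per standard (11/12 fine)
# ounce, and the $4.8665635 dollar-sterling mint parity.

mint_price_standard_oz = 3 + 17/20 + 10.5/240            # £3.89375 per standard ounce
mint_price_fine_oz = mint_price_standard_oz * 12 / 11    # per fine ounce, ~£4.2477

bar_fine_ounces = 400
bar_value_pounds = bar_fine_ounces * mint_price_fine_oz
bar_value_dollars = bar_value_pounds * 4.8665635

print(f"£{bar_value_pounds:,.0f} or about ${bar_value_dollars:,.0f}")
# -> £1,699 or about $8,269, matching the figures in the text.
```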

Gold Points and Gold Export/Import

The notion of a fixed exchange rate (the mint parity) for two countries on the gold standard is an oversimplification that is often made but is misleading. There are costs of importing or exporting gold. These costs include freight, insurance, handling (packing and cartage), interest on money committed to the transaction, risk premium (compensation for risk), normal profit, any deviation of purchase or sale price from the mint price, possibly mint charges, and possibly abrasion (wearing out or removal of gold content of coin — should the coin be sold abroad by weight or as bullion). Expressing the exporting costs as a percent of the amount invested (or, equivalently, as a percent of parity), the product of 1/100th of these costs and mint parity (the number of units of domestic currency per unit of foreign currency) is added to mint parity to obtain the gold-export point — the exchange rate at which gold is exported. To obtain the gold-import point, the product of 1/100th of the importing costs and mint parity is subtracted from mint parity.
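
A small sketch of these formulas, using the mint parity above and, for the cost percentages, the 1881-1890 dollar-sterling estimates reported in Table 4 (export costs 0.6585 percent of parity, import costs 0.7141 percent):

```python
# Sketch of the gold-point formulas: export point = parity plus export costs
# (as a fraction of parity) times parity; import point = parity minus import
# costs times parity. Cost figures are the 1881-1890 estimates from Table 4.

parity = 4.8665635             # dollars per pound
export_cost_pct = 0.6585       # percent of parity
import_cost_pct = 0.7141       # percent of parity

gold_export_point = parity + (export_cost_pct / 100) * parity
gold_import_point = parity - (import_cost_pct / 100) * parity
spread_pct = export_cost_pct + import_cost_pct

print(f"export point ${gold_export_point:.4f}, import point ${gold_import_point:.4f}, "
      f"spread {spread_pct:.4f} percent of parity")
# -> export point ~$4.8986, import point ~$4.8318, spread 1.3726 percent.
```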

If the exchange rate is greater than the gold-export point, private-sector “gold-point arbitrageurs” export gold, thereby obtaining foreign currency. Conversely, for the exchange rate less than the gold-import point, gold is imported and foreign currency relinquished. Usually the gold is, directly or indirectly, purchased from the monetary authority of the one country and sold to the monetary authority in the other. The domestic-currency cost of the transaction per unit of foreign currency obtained is the gold-export point. That per unit of foreign currency sold is the gold-import point. Also, foreign currency is sold, or purchased, at the exchange rate. Therefore arbitrageurs receive a profit proportional to the exchange-rate/gold-point divergence.
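
Continuing the sketch above with a purely hypothetical market quote, the arbitrage calculation looks like this:

```python
# Toy illustration of gold-point arbitrage profit. The exchange rate here is
# hypothetical; the export point comes from the previous sketch.

gold_export_point = 4.8986     # dollars per pound, from the sketch above
exchange_rate = 4.9050         # hypothetical market quote, dollars per pound

if exchange_rate > gold_export_point:
    # Obtain pounds by shipping gold at an all-in cost equal to the export
    # point, and sell those pounds at the market exchange rate.
    profit_per_pound = exchange_rate - gold_export_point
    print(f"Export gold; profit of ${profit_per_pound:.4f} per pound obtained")
else:
    print("Exchange rate inside the spread: no export arbitrage")
```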

Gold-Point Arbitrage

However, the arbitrageurs’ supply of foreign currency eliminates the profit by returning the exchange rate to below the gold-export point. Therefore perfect “gold-point arbitrage” would ensure that the exchange rate has the gold-export point as an upper limit. Similarly, the arbitrageurs’ demand for foreign currency returns the exchange rate to above the gold-import point, and perfect arbitrage ensures that the exchange rate has that point as a lower limit. It is important to note what induces the private sector to engage in gold-point arbitrage: (1) the profit motive; and (2) the credibility of the commitment to (a) the fixed gold price and (b) freedom of foreign exchange and gold transactions, on the part of the monetary authorities of both countries.

Gold-Point Spread

The difference between the gold points is called the (gold-point) spread. The gold points and the spread may be expressed as percentages of parity. Estimates of gold points and spreads involving center countries are provided for the classical and interwar gold standards in Tables 4 and 5. Noteworthy is that the spread for a given country pair generally declines over time both over the classical gold standard (evidenced by the dollar-sterling figures) and for the interwar compared to the classical period.

Table 4. Gold-Point Estimates: Classical Gold Standard
Countries Period Gold Pointsa (percent): Exportb Importc Spreadd (percent) Method of Computation
U.S./Britain 1881-1890 0.6585 0.7141 1.3726 PA
U.S./Britain 1891-1900 0.6550 0.6274 1.2824 PA
U.S./Britain 1901-1910 0.4993 0.5999 1.0992 PA
U.S./Britain 1911-1914 0.5025 0.5915 1.0940 PA
France/U.S. 1877-1913 0.6888 0.6290 1.3178 MED
Germany/U.S. 1894-1913 0.4907 0.7123 1.2030 MED
France/Britain 1877-1913 0.4063 0.3964 0.8027 MED
Germany/Britain 1877-1913 0.3671 0.4405 0.8076 MED
Germany/France 1877-1913 0.4321 0.5556 0.9877 MED
Austria/Britain 1912 0.6453 0.6037 1.2490 SE
Netherlands/Britain 1912 0.5534 0.3552 0.9086 SE
Scandinaviae /Britain 1912 0.3294 0.6067 0.9361 SE

a For numerator country.
b Gold-import point for denominator country.
c Gold-export point for denominator country.
d Gold-export point plus gold-import point.
e Denmark, Sweden, and Norway.

Method of Computation: PA = period average. MED = median of exchange-rate-form estimates of various authorities for various dates, converted to percent deviation from parity. SE = single exchange-rate-form estimate, converted to percent deviation from parity.

Sources: U.S./Britain — Officer (1996, p. 174). France/U.S., Germany/U.S., France/Britain, Germany/Britain, Germany/France — Morgenstern (1959, pp. 178-81). Austria/Britain, Netherlands/Britain, Scandinavia/Britain — Easton (1912, pp. 358-63).

Table 5. Gold-Point Estimates: Interwar Gold Standard
Countries Period Gold Pointsa (percent): Exportb Importc Spreadd (percent) Method of Computation
U.S./Britain 1925-1931 0.6287 0.4466 1.0753 PA
U.S./France 1926-1928e 0.4793 0.5067 0.9860 PA
U.S./France 1928-1933f 0.5743 0.3267 0.9010 PA
U.S./Germany 1926-1931 0.8295 0.3402 1.1697 PA
France/Britain 1926 0.2042 0.4302 0.6344 SE
France/Britain 1929-1933 0.2710 0.3216 0.5926 MED
Germany/Britain 1925-1933 0.3505 0.2676 0.6181 MED
Canada/Britain 1929 0.3521 0.3465 0.6986 SE
Netherlands/Britain 1929 0.2858 0.5146 0.8004 SE
Denmark/Britain 1926 0.4432 0.4930 0.9362 SE
Norway/Britain 1926 0.6084 0.3828 0.9912 SE
Sweden/Britain 1926 0.3881 0.3828 0.7709 SE

a For numerator country.
b Gold-import point for denominator country.
c Gold-export point for denominator country.
d Gold-export point plus gold-import point.
e To end of June 1928. French-franc exchange-rate stabilization, but absence of currency convertibility; see Table 2.
f Beginning July 1928. French-franc convertibility; see Table 2.

Method of Computation: PA = period average. MED = median of exchange-rate-form estimates of various authorities for various dates, converted to percent deviation from parity. SE = single exchange-rate-form estimate, converted to percent deviation from parity.

Sources: U.S./Britain — Officer (1996, p. 174). U.S./France, U.S./Germany, France/Britain 1929-1933, Germany/Britain — Morgenstern (1959, pp. 185-87). Canada/Britain, Netherlands/Britain — Einzig (1929, pp. 98-101) [Netherlands/Britain currencies’ mint parity from Spalding (1928, p. 135)]. France/Britain 1926, Denmark/Britain, Norway/Britain, Sweden/Britain — Spalding (1926, pp. 429-30, 436).

The effective monetary standard of a country is distinguishable from its legal standard. For example, a country legally on bimetallism usually is effectively on either a gold or silver monometallic standard, depending on whether its “mint-price ratio” (the ratio of its mint price of gold to mint price of silver) is greater or less than the world price ratio. In contrast, a country might be legally on a gold standard but its banks (and government) have “suspended specie (gold) payments” (refusing to convert their notes into gold), so that the country is in fact on a “paper standard.” The criterion adopted here is that a country is deemed on the gold standard if (1) gold is the predominant effective metallic money, or is the monetary bullion, (2) specie payments are in force, and (3) there is a limitation on the coinage and/or the legal-tender status of silver (the only practical and historical competitor to gold), thus providing institutional or legal support for the effective gold standard emanating from (1) and (2).

Implications for Money Supply

Consider first the domestic gold standard. Under a pure coin standard, the gold in circulation, monetary base, and money supply are all one. With a mixed standard, the money supply is the product of the money multiplier (dependent on the commercial-banks’ reserves/deposit and the nonbank-public’s currency/deposit ratios) and the monetary base (the actual and potential reserves of the commercial banking system, with potential reserves held by the nonbank public). The monetary authority alters the monetary base by changing its gold holdings and its loans, discounts, and securities portfolio (non-gold assets, called its “domestic assets”). However, the level of its domestic assets is dependent on its gold reserves, because the authority generates demand liabilities (notes and deposits) by increasing its assets, and convertibility of these liabilities must be supported by a gold reserve, if the gold standard is to be maintained. Therefore the gold standard provides a constraint on the level (or growth) of the money supply.
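
A minimal sketch of this money-supply arithmetic, with purely illustrative ratios (they are assumptions, not historical estimates):

```python
# Sketch of the mixed-standard relationship: money supply = money multiplier
# times monetary base. The ratios and the base below are illustrative.

reserve_deposit_ratio = 0.16     # commercial banks' reserves / deposits
currency_deposit_ratio = 0.25    # nonbank public's currency / deposits

# Multiplier: (currency + deposits) per unit of (currency + bank reserves).
multiplier = (1 + currency_deposit_ratio) / (reserve_deposit_ratio + currency_deposit_ratio)

monetary_base = 1_000.0          # currency held by the public plus bank reserves
money_supply = multiplier * monetary_base

print(f"multiplier {multiplier:.2f}, money supply {money_supply:,.0f}")
# -> multiplier 3.05, money supply 3,049
```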

The international gold standard involves balance-of-payments surpluses settled by gold imports at the gold-import point, and deficits financed by gold exports at the gold-export point. (Within the spread, there are no gold flows and the balance of payments is in equilibrium.) The change in the money supply is then the product of the money multiplier and the gold flow, providing the monetary authority does not change its domestic assets. For a country on a gold-exchange standard, holdings of “foreign exchange” (the reserve currency) take the place of gold. In general, the “international assets” of a monetary authority may consist of both gold and foreign exchange.
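
Extending the sketch above, a hypothetical gold inflow with domestic assets held constant raises the money supply by the multiplier times the inflow:

```python
# Continuation of the previous sketch: a balance-of-payments surplus settled
# by a gold inflow (hypothetical figure) expands the base one-for-one and the
# money supply by the multiplier times the inflow, provided the authority's
# domestic assets are unchanged.

multiplier = 3.05        # from the sketch above
gold_inflow = 50.0       # gold imported at the gold-import point (assumed)

change_in_money_supply = multiplier * gold_inflow
print(f"Money supply rises by about {change_in_money_supply:.0f}")   # ~152
```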

The Classical Gold Standard

Dates of Countries Joining the Gold Standard

Table 1 (above) lists all countries that were on the classical gold standard, the gold-standard type to which each adhered, and the period(s) on the standard. Discussion here concentrates on the four core countries. For centuries, Britain was on an effective silver standard under legal bimetallism. The country switched to an effective gold standard early in the eighteenth century, solidified by the (mistakenly) gold-overvalued mint-price ratio established by Isaac Newton, Master of the Mint, in 1717. In 1774 the legal-tender property of silver was restricted, and Britain entered the gold standard in the full sense on that date. In 1798 coining of silver was suspended, and in 1816 the gold standard was formally adopted, ironically during a paper-standard regime (the “Bank Restriction Period,” of 1797-1821), with the gold standard effectively resuming in 1821.

The United States was on an effective silver standard dating back to colonial times, legally bimetallic from 1786, and on an effective gold standard from 1834. The legal gold standard began in 1873-1874, when Acts ended silver-dollar coinage and limited legal tender of existing silver coins. Ironically, again the move from formal bimetallism to a legal gold standard occurred during a paper standard (the “greenback period,” of 1861-1878), with a dual legal and effective gold standard from 1879.

International Shift to the Gold Standard

The rush to the gold standard occurred in the 1870s, with the adherence of Germany, the Scandinavian countries, France, and other European countries. Countries under legal bimetallism shifted from effective silver to effective gold monometallism around 1850, as gold discoveries in the United States and Australia resulted in gold being overvalued at the mints. The gold/silver market situation subsequently reversed itself, and, to avoid a huge inflow of silver, many European countries suspended the coinage of silver and limited its legal-tender property. Some countries (France, Belgium, Switzerland) adopted a “limping” gold standard, in which existing former-standard silver coin retained full legal-tender status, permitting the monetary authority to redeem its notes in silver as well as gold.

As Table 1 shows, most countries were on a gold-coin (always meaning mixed) standard. The gold-bullion standard did not exist in the classical period (although in Britain that standard was embedded in legislation of 1819 that established a transition to restoration of the gold standard). A number of countries in the periphery were on a gold-exchange standard, usually because they were colonies or territories of a country on a gold-coin standard. In situations in which the periphery country lacked even its own coined currency, the gold-exchange standard existed almost by default. Some countries — China, Persia, parts of Latin America — never joined the classical gold standard, instead retaining their silver or bimetallic standards.

Sources of Instability of the Classical Gold Standard

There were three elements making for instability of the classical gold standard. First, the use of foreign exchange as reserves increased as the gold standard progressed. Available end-of-year data indicate that, worldwide, foreign exchange in official reserves (the international assets of the monetary authority) increased by 36 percent from 1880 to 1899 and by 356 percent from 1899 to 1913. In comparison, gold in official reserves increased by 160 percent from 1880 to 1903 but only by 88 percent from 1903 to 1913. (Lindert, 1969, pp. 22, 25) While in 1913 only Germany among the center countries held any measurable amount of foreign exchange — 15 percent of total reserves excluding silver (which was of limited use) — the percentage for the rest of the world was double that for Germany (Table 6). If there were a rush to cash in foreign exchange for gold, reduction or depletion of the gold of reserve-currency countries could place the gold standard in jeopardy.

Table 6. Share of Foreign Exchange in Official Reserves (end of year, percent)
Country 1928b
Excluding Silverb
0 10
0 0c
0d 51
13 16
27 32

a Official reserves: gold, silver, and foreign exchange.
b Official reserves: gold and foreign exchange.
c Less than 0.05 percent.
d Less than 0.5 percent.

Sources: 1913 — Lindert (1969, pp. 10-11). 1928 — Britain: Board of Governors of the Federal Reserve System [cited as BG] (1943, p. 551), Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929). United States: BG (1943, pp. 331, 544), foreign exchange consisting of Federal Reserve Banks holdings of foreign-currency bills. France and Germany: Nurkse (1944, p. 234). Rest of world [computed as residual]: gold, BG (1943, pp. 544-51); foreign exchange, from “total” (Triffin, 1964, p. 66), France, and Germany.

Second, Britain — the predominant reserve-currency country — was in a particularly sensitive situation. Again considering end-of-1913 data, almost half of world foreign-exchange reserves were in sterling, but the Bank of England had only three percent of world gold reserves (Tables 7-8). Defining the “reserve ratio” of the reserve-currency-country monetary authority as the ratio of (i) official reserves to (ii) liabilities to foreign monetary authorities held in financial institutions in the country, in 1913 this ratio was only 31 percent for the Bank of England, far lower than those of the monetary authorities of the other core countries (Table 9). An official run on sterling could easily force Britain off the gold standard. Because sterling was an international currency, private foreigners also held considerable liquid assets in London, and could themselves initiate a run on sterling.

Table 7. Composition of World Official Foreign-Exchange Reserves (end of year, percent)
1913a British pounds 77
2 French francs }2}

}

16
5b

a Excluding holdings for which currency unspecified.
b Primarily Dutch guilders and Scandinavian kroner.

Sources: 1913 — Lindert (1969, pp. 18-19). 1928 — Components of world total: Triffin (1964, pp. 22, 66), Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929), Board of Governors of the Federal Reserve System [cited as BG] (1943, p. 331) for Federal Reserve Banks holdings of foreign-currency bills.

Table 8. Official-Reserves Components: Percent of World Total (end of year)
Country 1928
Gold Foreign Exchange
0 7 United States 27 0a
0b 13 Germany 6 4
95 36
Table 9. Reserve Ratiosa of Reserve-Currency Countries

(end of year)

Country 1928c
Excluding Silverc
0.31 0.33
90.55 5.45
2.38 not available
2.11 not available

a Ratio of official reserves to official liquid liabilities (that is, liabilities to foreign governments and central banks).
b Official reserves: gold, silver, and foreign exchange.
c Official reserves: gold and foreign exchange.

Sources : 1913 — Lindert (1969, pp. 10-11, 19). Foreign-currency holdings for which currency unspecified allocated proportionately to the four currencies based on known distribution. 1928 — Gold reserves: Board of Governors of the Federal Reserve System [cited as BG] (1943, pp. 544, 551). Foreign- exchange reserves: Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929); BG (1943, p. 331) for Federal Reserve Banks holdings of foreign-currency bills. Official liquid liabilities: Triffin (1964, p. 22), Sayers (1976, pp. 348, 352).

Third, the United States, though a center country, was a great source of instability to the gold standard. Its Treasury held a high percentage of world gold reserves (more than that of the three other core countries combined in 1913), resulting in an absurdly high reserve ratio (Tables 7-9). With no central bank and a decentralized banking system, financial crises were frequent. Far from the United States assisting Britain, gold often flowed from the Bank of England to the United States to satisfy increases in U.S. demand for money. Though in economic size the United States was the largest of the core countries, in many years it was a net importer rather than exporter of capital to the rest of the world — the opposite of the other core countries. The political power of silver interests and recurrent financial panics led to imperfect credibility in the U.S. commitment to the gold standard. Runs on banks and runs on the Treasury gold reserve placed the U.S. gold standard near collapse in the early and mid-1890s. During that period, the credibility of the Treasury’s commitment to the gold standard was shaken. Indeed, the gold standard was saved in 1895 (and again in 1896) only by cooperative action of the Treasury and a bankers’ syndicate that stemmed gold exports.

Rules of the Game

According to the “rules of the [gold-standard] game,” central banks were supposed to reinforce, rather than “sterilize” (moderate or eliminate) or ignore, the effect of gold flows on the money supply. A gold outflow typically decreases the international assets of the central bank and thence the monetary base and money supply. The central-bank’s proper response is: (1) raise its “discount rate,” the central-bank interest rate for rediscounting securities (cashing, at a further deduction from face value, a short-term security from a financial institution that previously discounted the security), thereby inducing commercial banks to adopt a higher reserves/deposit ratio and therefore decreasing the money multiplier; and (2) decrease lending and sell securities, thereby decreasing domestic assets and thence the monetary base. On both counts the money supply is further decreased. Should the central bank rather increase its domestic assets when it loses gold, it engages in “sterilization” of the gold flow and is decidedly not following the “rules of the game.” The converse argument (involving gold inflow and increases in the money supply) also holds, with sterilization involving the central bank decreasing its domestic assets when it gains gold.
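
The difference between following rule (2) and sterilizing can be seen in a stylized sketch (all figures hypothetical):

```python
# Stylized contrast between the "rules of the game" and sterilization in the
# face of a gold outflow. All figures are hypothetical.

gold_reserves = 500.0       # central bank's international assets
domestic_assets = 300.0     # discounts, loans, securities
gold_outflow = 40.0

base_before = gold_reserves + domestic_assets

# Rules of the game: domestic assets left unchanged (or reduced), so the
# monetary base falls by the full amount of the gold outflow.
base_under_rules = (gold_reserves - gold_outflow) + domestic_assets

# Sterilization: domestic assets expanded to offset the gold loss, so the
# base (and hence the money supply) is left unchanged.
base_sterilized = (gold_reserves - gold_outflow) + (domestic_assets + gold_outflow)

print(base_before, base_under_rules, base_sterilized)   # 800.0 760.0 800.0
```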

Price Specie-Flow Mechanism

A country experiencing a balance-of-payments deficit loses gold and its money supply decreases, both automatically and by policy in accordance with the “rules of the game.” Money income contracts and the price level falls, thereby increasing exports and decreasing imports. Similarly, a surplus country gains gold, the money supply increases, money income expands, the price level rises, exports decrease and imports increase. In each case, balance-of-payments equilibrium is restored via the current account. This is called the “price specie-flow mechanism.” To the extent that wages and prices are inflexible, movements of real income in the same direction as money income occur; in particular, the deficit country suffers unemployment but the payments imbalance is nevertheless corrected.

The capital account also acts to restore balance, via interest-rate increases in the deficit country inducing a net inflow of capital. The interest-rate increases also reduce real investment and thence real income and imports. Similarly, interest-rate decreases in the surplus country elicit capital outflow and increase real investment, income, and imports. This process enhances the current-account correction of the imbalance.

One problem with the “rules of the game” is that, on “global-monetarist” theoretical grounds, they were inconsequential. Under fixed exchange rates, gold flows simply adjust money supply to money demand; the money supply is not determined by policy. Also, prices, interest rates, and incomes are determined worldwide. Even core countries can influence these variables domestically only to the extent that they help determine them in the global marketplace. Therefore the price-specie-flow and like mechanisms cannot occur. Historical data support this conclusion: gold flows were too small to be suggestive of these mechanisms; and prices, incomes, and interest rates moved closely in correspondence (rather than in the opposite directions predicted by the adjustment mechanisms induced by the “rules of the game”) — at least among non-periphery countries, especially the core group.

Discount Rate Rule and the Bank of England

However, the Bank of England did, in effect, manage its discount rate (“Bank Rate”) in accordance with rule (1). The Bank’s primary objective was to maintain convertibility of its notes into gold, that is, to preserve the gold standard, and its principal policy tool was Bank Rate. When its “liquidity ratio” of gold reserves to outstanding note liabilities decreased, it would usually increase Bank Rate. The increase in Bank Rate carried with it increases in market short-term interest rates, inducing a short-term capital inflow and thereby moving the exchange rate away from the gold-export point by increasing the exchange value of the pound. The converse also held, with a rise in the liquidity ratio involving a Bank Rate decrease, capital outflow, and movement of the exchange rate away from the gold-import point. The Bank was constantly monitoring its liquidity ratio, and in response altered Bank Rate almost 200 times over 1880-1913.

While the Reichsbank (the German central bank), like the Bank of England, generally moved its discount rate inversely to its liquidity ratio, most other central banks often violated the rule, with changes in their discount rates of inappropriate direction, or of insufficient amount or frequency. The Bank of France, in particular, kept its discount rate stable. Unlike the Bank of England, it chose to have large gold reserves (see Table 8), with payments imbalances accommodated by fluctuations in its gold rather than financed by short-term capital flows. The United States, lacking a central bank, had no discount rate to use as a policy instrument.

Sterilization Was Dominant

As for rule (2), that the central-bank’s domestic and international assets should move in the same direction: in fact the opposite behavior, sterilization, was dominant, as shown in Table 10. The Bank of England followed the rule more than any other central bank, but even so violated it more often than not! How then did the classical gold standard cope with payments imbalances? Why was it a stable system?

Table 10. Annual Changes in Internationala and Domesticb Assets of Central Bank: Percent of Changes in the Same Directionc
1880-1913d Britain 33
__ France 33
31 British Dominionse 13
32 Scandinaviag 25
33 South Americai 23

a 1880-1913: Gold, silver and foreign exchange. 1922-1936: Gold and foreign exchange.
b Domestic income-earning assets: discounts, loans, securities.
c Implying country is following “rules of the game.” Observations with zero or negligible changes in either class of assets excluded.
d Years when country is off gold standard excluded. See Tables 1 and 2.
e Australia and South Africa.
f 1880-1913: Austria-Hungary, Belgium, and Netherlands. 1922-1936: Austria, Italy, Netherlands, and Switzerland.
g Denmark, Finland, Norway, and Sweden.
h 1880-1913: Russia. 1922-1936: Bulgaria, Czechoslovakia, Greece, Hungary, Poland, Romania, and Yugoslavia.
i Chile, Colombia, Peru, and Uruguay.

Sources: Bloomfield (1959, p. 49), Nurkse (1944, p. 69).

The Stability of the Classical Gold Standard

The fundamental reason for the stability of the classical gold standard is that there was always absolute private-sector credibility in the commitment to the fixed domestic-currency price of gold on the part of the center country (Britain), two (France and Germany) of the three remaining core countries, and certain other European countries (Belgium, Netherlands, Switzerland, and Scandinavia). Certainly, that was true from the late-1870s onward. (For the United States, this absolute credibility applied from about 1900.) In earlier periods, that commitment had a contingency aspect: it was recognized that convertibility could be suspended in the event of dire emergency (such as war); but, after normal conditions were restored, convertibility would be re-established at the pre-existing mint price and gold contracts would again be honored. The Bank Restriction Period is an example of the proper application of the contingency, as is the greenback period (even though the United States, effectively on the gold standard, was legally on bimetallism).

Absolute Credibility Meant Zero Convertibility and Exchange Risk

The absolute credibility in countries’ commitment to convertibility at the existing mint price implied that there was extremely low, essentially zero, convertibility risk (the probability that Treasury or central-bank notes would not be redeemed in gold at the established mint price) and exchange risk (the probability that the mint parity between two currencies would be altered, or that exchange control or prohibition of gold export would be instituted).

Reasons Why Commitment to Convertibility Was So Credible

There were many reasons why the commitment to convertibility was so credible. (1) Contracts were expressed in gold; if convertibility were abandoned, contracts would inevitably be violated — an undesirable outcome for the monetary authority. (2) Shocks to the domestic and world economies were infrequent and generally mild. There was basically international peace and domestic calm.

(3) The London capital market was the largest, most open, most diversified in the world, and its gold market was also dominant. A high proportion of world trade was financed in sterling, London was the most important reserve-currency center, and balances of payments were often settled by transferring sterling assets rather than gold. Therefore sterling was an international currency — not merely supplemental to gold but perhaps better: a boon to non-center countries, because sterling involved positive, not zero, interest return and its transfer costs were much less than those of gold. Advantages to Britain were the charges for services as an international banker, differential interest returns on its financial intermediation, and the practice of countries on a sterling (gold-exchange) standard of financing payments surpluses with Britain by piling up short-term sterling assets rather than demanding Bank of England gold.

(4) There was widespread ideology — and practice — of “orthodox metallism,” involving authorities’ commitment to an anti-inflation, balanced-budget, stable-money policy. In particular, the ideology implied low government spending and taxes and limited monetization of government debt (financing of budget deficits by printing money). Therefore it was not expected that a country’s price level or inflation would get out of line with that of other countries, with resulting pressure on the country’s adherence to the gold standard. (5) This ideology was mirrored in, and supported by, domestic politics. Gold had won over silver and paper, and stable-money interests (bankers, industrialists, manufacturers, merchants, professionals, creditors, urban groups) over inflationary interests (farmers, landowners, miners, debtors, rural groups).

(6) There was freedom from government regulation and a competitive environment, domestically and internationally. Therefore prices and wages were more flexible than in other periods of human history (before and after). The core countries had virtually no capital controls; the center country (Britain) had adopted free trade, and the other core countries had moderate tariffs. Balance-of-payments financing and adjustment could proceed without serious impediments.

(7) Internal balance (domestic macroeconomic stability, at a high level of real income and employment) was an unimportant goal of policy. Preservation of convertibility of paper currency into gold would not be superseded as the primary policy objective. While sterilization of gold flows was frequent (see above), the purpose was more “meeting the needs of trade” (passive monetary policy) than fighting unemployment (active monetary policy).

(8) The gradual establishment of mint prices over time ensured that the implied mint parities (exchange rates) were in line with relative price levels; so countries joined the gold standard with exchange rates in equilibrium. (9) Current-account and capital-account imbalances tended to be offsetting for the core countries, especially for Britain. A trade deficit induced a gold loss and a higher interest rate, attracting a capital inflow and reducing capital outflow. Indeed, the capital- exporting core countries — Britain, France, and Germany — could eliminate a gold loss simply by reducing lending abroad.

Rareness of Violations of Gold Points

Many of the above reasons not only enhanced credibility in existing mint prices and parities but also kept international-payments imbalances, and hence necessary adjustment, of small magnitude. Responding to the essentially zero convertibility and exchange risks implied by the credible commitment, private agents further reduced the need for balance-of-payments adjustment via gold-point arbitrage (discussed above) and also via a specific kind of speculation. When the exchange rate moved beyond a gold point, arbitrage acted to return it to the spread. So it is not surprising that “violations of the gold points” were rare on a monthly average basis, as demonstrated in Table 11 for the dollar, franc, and mark exchange rate versus sterling. Certainly, gold-point violations did occur; but they rarely persisted sufficiently to be counted on monthly average data. Such measured violations were generally associated with financial crises. (The number of dollar-sterling violations for 1890-1906 exceeding that for 1889-1908 is due to the results emanating from different researchers using different data. Nevertheless, the important common finding is the low percent of months encompassed by violations.)

Table 11. Violations of Gold Points
Exchange Rate Time Period Number of Months Number dollar-sterling 240 0.4
1890-1906 3 dollar-sterling 76 0
1889-1908 12b mark-sterling 240 7.5

a May 1925 – August 1931: full months during which both United States and Britain on gold standard.
b Approximate number, deciphered from graph.

Sources: Dollar-sterling, 1890-1906 and 1925-1931 — Officer (1996, p. 235). All other — Giovannini (1993, pp. 130-31).

Stabilizing Speculation

The perceived extremely low convertibility and exchange risks gave private agents profitable opportunities not only outside the spread (gold-point arbitrage) but also within the spread (exchange-rate speculation). As the exchange value of a country’s currency weakened, the exchange rate approaching the gold-export point, speculators had an ever greater incentive to purchase domestic currency with foreign currency (a capital inflow); for they had good reason to believe that the exchange rate would move in the opposite direction, whereupon they would reverse their transaction at a profit. Similarly, a strengthened currency, with the exchange rate approaching the gold-import point, involved speculators selling the domestic currency for foreign currency (a capital outflow). Clearly, the exchange rate would either not go beyond the gold point (via the actions of other speculators of the same ilk) or would quickly return to the spread (via gold-point arbitrage). Also, the further the exchange rate moved toward the gold point, the greater the potential profit opportunity; for there was a decreased distance to that gold point and an increased distance from the other point.

This “stabilizing speculation” enhanced the exchange value of depreciating currencies that were about to lose gold; and thus the gold loss could be prevented. The speculation was all the more powerful, because the absence of controls on capital movements meant private capital flows were highly responsive to exchange-rate changes. Dollar-sterling data, in Table 12, show that this speculation was extremely efficient in keeping the exchange rate away from the gold points — and increasingly effective over time. Interestingly, these statements hold even for the 1890s, during which at times U.S. maintenance of currency convertibility was precarious. The average deviation of the exchange rate from the midpoint of the spread fell decade-by-decade from about 1/3 of one percent of parity in 1881-1890 (23 percent of the gold-point spread) to only 12/100th of one percent of parity in 1911-1914 (11 percent of the spread).
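
The relation between the two ways of expressing the deviation (percent of parity versus percent of the spread) is simple arithmetic; a sketch for the 1881-1890 figures, using the spread from Table 4:

```python
# Sketch: a deviation quoted as a percent of parity can be restated as a
# percent of the gold-point spread by dividing by the spread (also expressed
# as a percent of parity). Figures: dollar-sterling, 1881-1890.

avg_deviation_pct_of_parity = 0.32
spread_pct_of_parity = 1.3726     # from Table 4

deviation_pct_of_spread = 100 * avg_deviation_pct_of_parity / spread_pct_of_parity
print(f"{deviation_pct_of_spread:.0f} percent of the spread")   # ~23
```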

Table 12. Average Deviation of Dollar-Sterling Exchange Rate from Gold-Point-Spread Midpoint
Percent of Parity Quarterly observations
0.32 1891-1900 19
0.15 1911-1914a 11
0.28 Monthly observations
0.24 1925-1931c 26

a Ending with second quarter of 1914.
b Third quarter 1925 – second quarter 1931: full quarters during which both United States and Britain on gold standard.
c May 1925 – August 1931: full months during which both United States and Britain on gold standard.

Source: Officer (1996, pp. 182, 191, 272).

Government Policies That Enhanced Gold-Standard Stability

Government policies also enhanced gold-standard stability. First, by the turn of the century South Africa — the main world gold producer — sold all its gold in London, either to private parties or actively to the Bank of England, with the Bank serving also as residual purchaser of the gold. Thus the Bank had the means to replenish its gold reserves. Second, the orthodox-metallism ideology and the leadership of the Bank of England — other central banks would often gear their monetary policy to that of the Bank — kept monetary policies harmonized. Monetary discipline was maintained.

Third, countries used “gold devices,” primarily the manipulation of gold points, to affect gold flows. For example, the Bank of England would foster gold imports by lowering the foreign gold-export point (number of units of foreign currency per pound, the British gold-import point) through interest-free loans to gold importers or raising its purchase price for bars and foreign coin. The Bank would discourage gold exports by lowering the foreign gold-import point (the British gold-export point) via increasing its selling prices for gold bars and foreign coin, refusing to sell bars, or redeeming its notes in underweight domestic gold coin. These policies were alternative to increasing Bank Rate.

The Bank of France and Reichsbank employed gold devices relative to discount-rate changes more than Britain did. Some additional policies included converting notes into gold only in Paris or Berlin rather than at branches elsewhere in the country, the Bank of France converting its notes in silver rather than gold (permitted under its “limping” gold standard), and the Reichsbank using moral suasion to discourage the export of gold. The U.S. Treasury followed similar policies at times. In addition to providing interest-free loans to gold importers and changing the premium at which it would sell bars (or refusing to sell bars outright), the Treasury condoned banking syndicates that put pressure on gold arbitrageurs to desist from gold export in 1895 and 1896, a time when the U.S. adherence to the gold standard was under stress.

Fourth, the monetary system was adept at conserving gold, as evidenced in Table 3. This was important, because the increased gold required for a growing world economy could be obtained only from mining or from nonmonetary hoards. While the money supply for the eleven-major-country aggregate more than tripled from 1885 to 1913, the percent of the money supply in the form of metallic money (gold and silver) more than halved. This process did not make the gold standard unstable, because gold moved into commercial-bank and central-bank (or Treasury) reserves: the ratio of gold in official reserves to official plus money gold increased from 33 to 54 percent. The relative influence of the public versus private sector in reducing the proportion of metallic money in the money supply is an issue warranting exploration by monetary historians.

Fifth, while not regular, central-bank cooperation was not generally required in the stable environment in which the gold standard operated. Yet this cooperation was forthcoming when needed, that is, during financial crises. Although Britain was the center country, the precarious liquidity position of the Bank of England meant that it was more often the recipient than the provider of financial assistance. In crises, it would obtain loans from the Bank of France (also on occasion from other central banks), and the Bank of France would sometimes purchase sterling to push up that currency’s exchange value. Assistance also went from the Bank of England to other central banks, as needed. Further, the credible commitment was so strong that private bankers did not hesitate to make loans to central banks in difficulty.

In sum, “virtuous” two-way interactions were responsible for the stability of the gold standard. The credible commitment to convertibility of paper money at the established mint price, and therefore the fixed mint parities, were both a cause and a result of (1) the stable environment in which the gold standard operated, (2) the stabilizing behavior of arbitrageurs and speculators, and (3) the responsible policies of the authorities — and (1), (2), and (3), and their individual elements, also interacted positively among themselves.

Experience of Periphery

An important reason for periphery countries to join and maintain the gold standard was the access to the capital markets of the core countries thereby fostered. Adherence to the gold standard connoted that the peripheral country would follow responsible monetary, fiscal, and debt-management policies — and, in particular, faithfully repay the interest on and principal of debt. This “good housekeeping seal of approval” (the term coined by Bordo and Rockoff, 1996), by reducing the risk premium, involved a lower interest rate on the country’s bonds sold abroad, and very likely a higher volume of borrowing. The favorable terms and greater borrowing enhanced the country’s economic development.

However, periphery countries bore the brunt of the burden of adjustment of payments imbalances with the core (and other Western European) countries, for three reasons. First, some of the periphery countries were on a gold-exchange standard. When they ran a surplus, they typically increased — and with a deficit, decreased — their liquid balances in London (or other reserve-currency country) rather than withdraw gold from the reserve-currency country. The monetary base of the periphery country would increase, or decrease, but that of the reserve-currency country would remain unchanged. This meant that the changes in domestic variables (prices, incomes, interest rates, portfolios, etc.) that occurred to correct the surplus or deficit were primarily in the periphery country. The periphery, rather than the core, “bore the burden of adjustment.”

Second, when Bank Rate increased, London drew funds from France and Germany, which in turn attracted funds from other Western European and Scandinavian countries, which drew capital from the periphery. Also, it was easy for a core country to correct a deficit by reducing lending to, or bringing capital home from, the periphery. Third, the periphery countries were underdeveloped; their exports were largely primary products (agriculture and mining), which inherently were extremely sensitive to world market conditions. This feature meant that adjustment in the periphery, compared to the core, took the form more of real than of financial correction. This conclusion also follows from the fact that capital obtained from core countries for the purpose of economic development was subject to interruption and even reversal. While the periphery was probably better off with access to the capital than in isolation, its welfare gain was reduced by the instability of capital import.

The experience of adherence to the gold standard differed among periphery groups. The important British dominions and colonies — Australia, New Zealand, Canada, and India — successfully maintained the gold standard. They were politically stable and, of course, heavily influenced by Britain. They paid the price of serving as an economic cushion to the Bank of England’s financial situation; but, compared to the rest of the periphery, gained a relatively stable long-term capital inflow. In undeveloped Latin America and Asia, adherence to the gold standard was fragile, with lack of complete credibility in the commitment to convertibility. Many of the reasons for credible commitment that applied to the core countries were absent — for example, there were powerful inflationary interests, strong balance-of-payments shocks, and rudimentary banking sectors. For Latin America and Asia, the cost of adhering to the gold standard was very apparent: loss of the ability to depreciate the currency to counter reductions in exports. Yet the gain, in terms of a steady capital inflow from the core countries, was not as stable or reliable as for the British dominions and colonies.

The Breakdown of the Classical Gold Standard

The classical gold standard was at its height at the end of 1913, ironically just before it came to an end. The proximate cause of the breakdown of the classical gold standard was political: the advent of World War I in August 1914. However, it was the Bank of England’s precarious liquidity position and the gold-exchange standard that were the underlying causes. With the outbreak of war, a run on sterling led Britain to impose extreme exchange control — a postponement of both domestic and international payments — that made the international gold standard non-operational. Convertibility was not legally suspended; but moral suasion, legalistic action, and regulation had the same effect. Gold exports were restricted by extralegal means (and by Trading with the Enemy legislation), with the Bank of England commandeering all gold imports and applying moral suasion to bankers and bullion brokers.

Almost all other gold-standard countries undertook similar policies in 1914 and 1915. The United States entered the war and ended its gold standard late, adopting extralegal restrictions on convertibility in 1917 (although in 1914 New York banks had temporarily imposed an informal embargo on gold exports). An effect of the universal removal of currency convertibility was the ineffectiveness of mint parities and inapplicability of gold points: floating exchange rates resulted.

Interwar Gold Standard

Return to the Gold Standard

In spite of the tremendous disruption to domestic economies and the worldwide economy caused by World War I, a general return to gold took place. However, the resulting interwar gold standard differed institutionally from the classical gold standard in several respects. First, the new gold standard was led not by Britain but rather by the United States. The U.S. embargo on gold exports (imposed in 1917) was removed in 1919, and currency convertibility at the prewar mint price was restored in 1922. The gold value of the dollar rather than of the pound sterling would typically serve as the reference point around which other currencies would be aligned and stabilized. Second, it follows that the core would now have two center countries, the United Kingdom and the United States.

Third, for many countries there was a time lag between stabilizing a country’s currency in the foreign-exchange market (fixing the exchange rate or mint parity) and resuming currency convertibility. Given a lag, the former typically occurred first, currency stabilization operating via central-bank intervention in the foreign-exchange market (transacting in the domestic currency and a reserve currency, generally sterling or the dollar). Table 2 presents the dates of exchange-rate stabilization and currency convertibility resumption for the countries on the interwar gold standard. It is fair to say that the interwar gold standard was at its height at the end of 1928, after all core countries were fully on the standard and before the Great Depression began.

Fourth, the contingency aspect of convertibility, which required restoration of convertibility at the mint price that existed prior to the emergency (World War I), was broken by various countries — even core countries. Some countries (including the United States, United Kingdom, Denmark, Norway, Netherlands, Sweden, Switzerland, Australia, Canada, Japan, Argentina) stabilized their currencies at the prewar mint price. However, other countries (France, Belgium, Italy, Portugal, Finland, Bulgaria, Romania, Greece, Chile) established a gold content of their currency that was a fraction of the prewar level: the currency was devalued in terms of gold, and the mint price was higher than prewar. A third group of countries (Germany, Austria, Hungary) stabilized new currencies adopted after hyperinflation. A fourth group (Czechoslovakia, Danzig, Poland, Estonia, Latvia, Lithuania) consisted of countries that became independent or were created following the war and that joined the interwar gold standard. A fifth group (some Latin American countries) had been on silver or paper standards during the classical period but went on the interwar gold standard. A sixth country group (Russia) had been on the classical gold standard but did not join the interwar gold standard. A seventh group (Spain, China, Iran) joined neither gold standard.

The fifth way in which the interwar gold standard diverged from the classical experience was the mix of gold-standard types. As Table 2 shows, the gold coin standard, dominant in the classical period, was far less prevalent in the interwar period. In particular, all four core countries had been on coin in the classical gold standard; but, of them, only the United States was on coin interwar. The gold-bullion standard, nonexistent prewar, was adopted by two core countries (United Kingdom and France) as well as by two Scandinavian countries (Denmark and Norway). Most countries were on a gold-exchange standard. The central banks of countries on the gold-exchange standard would convert their currencies not into gold but rather into “gold-exchange” currencies (currencies themselves convertible into gold), in practice often sterling, sometimes the dollar (the reserve currencies).

Instability of the Interwar Gold Standard

The features that fostered stability of the classical gold standard did not apply to the interwar standard; instead, many forces made for instability. (1) The process of establishing fixed exchange rates was piecemeal and haphazard, resulting in disequilibrium exchange rates. The United Kingdom restored convertibility at the prewar mint price without sufficient deflation, resulting in a currency overvalued by about ten percent. (Expressed in a common currency at mint parity, the British price level was ten percent higher than that of its trading partners and competitors.) A depressed export sector and chronic balance-of-payments difficulties were the result. Other overvalued currencies (in terms of mint parity) were those of Denmark, Italy, and Norway. In contrast, France, Germany, and Belgium had undervalued currencies. (2) Wages and prices were less flexible than in the prewar period. In particular, powerful unions kept wages and unemployment high in British export industries, hindering balance-of-payments correction.

(3) Higher trade barriers than prewar also restrained adjustment.

(4) The gold-exchange standard economized on the world’s monetary gold: the gold of the reserve-currency countries backed their currencies both in their role as reserves for countries on the gold-exchange standard and for countries on a coin or bullion standard that elected to hold part of their reserves in London or New York. (Another economizing element was the continued movement of gold out of the money supply and into banking and official reserves that had begun in the classical period: for the eleven-major-country aggregate, gold declined to less than one-half of one percent of the money supply in 1928, and the ratio of official gold to official-plus-money gold reached 99 percent — Table 3). The gold-exchange standard was inherently unstable, because of the conflict between (a) the expansion of sterling and dollar liabilities to foreign central banks to expand world liquidity, and (b) the resulting deterioration in the reserve ratio of the Bank of England, and of the U.S. Treasury and Federal Reserve Banks.
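To make this conflict concrete, the following is a minimal numerical sketch; all figures are hypothetical, not historical data. The reserve center’s gold stock is held fixed, so every expansion of its liquid liabilities to foreign central banks lowers its reserve ratio.

# Hypothetical illustration of the gold-exchange standard's dilemma:
# the reserve center's gold stock is fixed, while its liquid liabilities
# to foreign central banks grow as world liquidity expands.
gold_stock = 150.0                       # center country's gold reserves (hypothetical units)
liabilities_over_time = [200.0, 300.0, 450.0]

for liabilities in liabilities_over_time:
    reserve_ratio = gold_stock / liabilities
    print(f"liabilities {liabilities:>5.0f}  reserve ratio {reserve_ratio:.2f}")
# The ratio falls from 0.75 to 0.33: world liquidity grows, but the center's
# ability to meet a run on its currency with gold steadily deteriorates.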

This instability was particularly severe in the interwar period, for several reasons. First, France was now a large official holder of sterling, with over half the official reserves of the Bank of France in foreign exchange in 1928, versus essentially none in 1913 (Table 6); and France was resentful that the United Kingdom had used its influence in the League of Nations to induce financially reconstructed countries in Europe to adopt the gold-exchange (sterling) standard. Second, many more countries were on the gold-exchange standard than prewar. Cooperation in restraining a run on sterling or the dollar would be difficult to achieve. Third, the gold-exchange standard, associated with colonies in the classical period, was viewed as a system inferior to a coin standard.

(5) In the classical period, London was the one dominant financial center; in the interwar period it was joined by New York and, in the late 1920s, Paris. Both private and official holdings of foreign currency could shift among the two or three centers, as interest-rate differentials and confidence levels changed.

(6) The problem with gold was not overall scarcity but rather maldistribution. In 1928, official reserve-currency liabilities were much more concentrated than in 1913: the United Kingdom accounted for 77 percent of world foreign-exchange reserves and France less than two percent (versus 47 and 30 percent in 1913 — Table 7). Yet the United Kingdom held only seven percent of world official gold and France 13 percent (Table 8). Reflecting its undervalued currency, France also possessed 39 percent of world official foreign exchange. Incredibly, the United States held 37 percent of world official gold — more than all the non-core countries together.

(7) Britain’s financial position was even more precarious than in the classical period. In 1928, the gold and dollar reserves of the Bank of England covered only one third of London’s liquid liabilities to official foreigners, a ratio hardly greater than in 1913 (and compared to a U.S. ratio of almost 5½ — Table 9). Various elements made the financial position more difficult than prewar. First, U.K. liquid liabilities were concentrated on stronger countries (France, United States), whereas its liquid assets were predominantly in weaker countries (such as Germany). Second, there was ongoing tension with France, which resented the sterling-dominated gold-exchange standard and desired to cash in its sterling holdings for gold to aid its objective of achieving first-class financial status for Paris.

(8) Internal balance was an important goal of policy, which hindered balance-of-payments adjustment, and monetary policy was affected greatly by domestic politics rather than geared to preservation of currency convertibility. (9) Especially because of (8), the credibility of the authorities’ commitment to the gold standard was not absolute. Convertibility risk and exchange risk could be well above zero, and currency speculation could be destabilizing rather than stabilizing; so that when a country’s currency approached or reached its gold-export point, speculators might anticipate that currency convertibility would not be maintained and that the currency would be devalued. Hence they would sell rather than buy the currency, which, of course, would help bring about the very outcome anticipated.
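For readers who want the arithmetic behind the gold points, here is a minimal sketch. The mint parity is the actual historical dollar-sterling parity; the one-percent gold-transfer cost is purely an assumed, illustrative figure.

# Gold points for the dollar price of sterling (U.S. perspective).
# Parity is the historical dollar-sterling mint parity; the transfer
# cost of one percent of value is an assumed, illustrative number.
mint_parity = 4.8665              # dollars per pound sterling
transfer_cost = 0.01              # cost of shipping gold, as a fraction of value (assumed)

us_gold_export_point = mint_parity * (1 + transfer_cost)   # sterling so dear that shipping gold is cheaper
us_gold_import_point = mint_parity * (1 - transfer_cost)   # sterling so cheap that gold flows to the U.S.
midpoint = (us_gold_export_point + us_gold_import_point) / 2
print(f"spread {us_gold_import_point:.4f} to {us_gold_export_point:.4f}, midpoint {midpoint:.4f}")

The U.S. gold-import point is, equivalently, the British gold-export point; when sterling neared it, stabilizing speculators would ordinarily buy sterling in expectation that parity would be defended, but, as noted above, fear of devaluation could make them sell instead.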

(10) The “rules of the game” were infrequently followed and, for most countries, violated even more often than in the classical gold standard — Table 10. Sterilization of gold inflows by the Bank of England can be viewed as an attempt to correct the overvalued pound by means of deflation. However, the U.S. and French sterilization of their persistent gold inflows reflected exclusive concern for the domestic economy and placed the burden of adjustment on other countries in the form of deflation.

(11) The Bank of England did not provide leadership in any important way, and central-bank cooperation was insufficient to establish credibility in the commitment to currency convertibility.

Breakdown of the Interwar Gold Standard

Although Canada effectively abandoned the gold standard early in 1929, this was a special case in two respects. First, the action was an early, drastic reaction to high U.S. interest rates, which had been established to fight the stock-market boom but which carried the threat of unsustainable capital outflow and gold loss for other countries. Second, gold devices were the technique used to restrict gold exports and informally terminate the Canadian gold standard.

The beginning of the end of the interwar gold standard occurred with the Great Depression. The depression began in the periphery, with low prices for exports and debt-service requirements leading to insurmountable balance-of-payments difficulties while on the gold standard. However, U.S. monetary policy was an important catalyst. In the second half of 1927 the Federal Reserve pursued an easy-money policy, which supported foreign currencies but also fed the boom in the New York stock market. When the Federal Reserve reversed policy to fight the Wall Street boom, higher interest rates attracted money to New York, which weakened sterling in particular. The stock market crash in October 1929, while helpful to sterling, was followed by a passive monetary policy that did not prevent the U.S. depression that started shortly thereafter and that spread to the rest of the world via declines in U.S. trade and lending. In 1929 and 1930 a number of periphery countries either formally suspended currency convertibility or restricted it so that their currencies fell beyond the gold-export point.

It was destabilizing speculation, emanating from lack of confidence in the authorities’ commitment to currency convertibility, that ended the interwar gold standard. In May 1931 there was a run on Austria’s largest commercial bank, and the bank failed. The run spread to Germany, where an important bank also collapsed. The countries’ central banks lost substantial reserves; international financial assistance was too late; and in July 1931 Germany adopted exchange control, followed by Austria in October. These countries were definitively off the gold standard.

The Austrian and German experiences, as well as British budgetary and political difficulties, were among the factors that destroyed confidence in sterling, a loss of confidence that occurred in mid-July 1931. Runs on sterling ensued, and the Bank of England lost much of its reserves. Loans from abroad were insufficient and, in any event, were taken as a sign of weakness. The gold standard was abandoned in September, and the pound quickly and sharply depreciated on the foreign-exchange market, as the overvaluation of the pound would imply.

Amazingly, there were no violations of the dollar-sterling gold points on a monthly average basis to the very end of August 1931 (Table 11). In contrast, the average deviation of the dollar-sterling exchange rate from the midpoint of the gold-point spread in 1925-1931 was more than double that in 1911-1914, by either of two measures (Table 12), suggesting less-dominant stabilizing speculation compared to the prewar period. Yet the 1925-1931 average deviation was not much more (in one case, even less) than in earlier decades of the classical gold standard. The trust in the Bank of England had a long tradition, and the shock to confidence in sterling that occurred in July 1931 was unexpected by the British authorities.

After the U.K. abandonment of the gold standard, many countries followed suit, some to maintain their competitiveness via currency devaluation, others in response to destabilizing capital flows. The United States held on until 1933, when both domestic and foreign demands for gold, manifested in runs on U.S. commercial banks, became intolerable. The “gold bloc” countries (France, Belgium, Netherlands, Switzerland, Italy, Poland) and Danzig lasted even longer; but, with their currencies now overvalued and susceptible to destabilizing speculation, these countries succumbed to the inevitable by the end of 1936. Albania stayed on gold until occupied by Italy in 1939. The Great Depression was as much a consequence of the gold standard as a cause of its breakdown, for gold-standard countries hesitated to inflate their economies for fear of weakening the balance of payments, suffering loss of gold and foreign-exchange reserves, and being forced to abandon convertibility or the gold parity. So the gold standard involved “golden fetters” (the title of the classic work of Eichengreen, 1992) that inhibited monetary and fiscal policy to fight the depression. Therefore, some have argued, these fetters seriously exacerbated the severity of the Great Depression within countries (because expansionary policy to fight unemployment was not adopted) and fostered the international transmission of the Depression (because as a country’s output decreased, its imports fell, thus reducing exports and income of other countries).

The “international gold standard,” defined as the period of time during which all four core countries were on the gold standard, existed from 1879 to 1914 (36 years) in the classical period and from 1926 or 1928 to 1931 (four or six years) in the interwar period. The interwar gold standard was a dismal failure in longevity, as well as in its association with the greatest depression the world has known.

References

Bayoumi, Tamim, Barry Eichengreen, and Mark P. Taylor, eds. Modern Perspectives on the Gold Standard. Cambridge: Cambridge University Press, 1996.

Bernanke, Ben, and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” In Financial Market and Financial Crises, edited by R. Glenn Hubbard, 33-68. Chicago: University of Chicago Press, 1991.

Bett, Virgil M. Central Banking in Mexico: Monetary Policies and Financial Crises, 1864-1940. Ann Arbor: University of Michigan, 1957.

Bloomfield, Arthur I. Monetary Policy under the International Gold Standard, 1880-1914. New York: Federal Reserve Bank of New York, 1959.

Bloomfield, Arthur I. Short-Term Capital Movements Under the Pre-1914 Gold Standard. Princeton: International Finance Section, Princeton University, 1963.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics, 1914-1941. Washington, DC, 1943.

Bordo, Michael D. “The Classical Gold Standard: Some Lessons for Today.” Federal Reserve Bank of St. Louis Review 63, no. 5 (1981): 2-17.

Bordo, Michael D. “The Classical Gold Standard: Lessons from the Past.” In The International Monetary System: Choices for the Future, edited by Michael B. Connolly, 229-65. New York: Praeger, 1982.

Bordo, Michael D. “Gold Standard: Theory.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 267-71. London: Macmillan, 1992.

Bordo, Michael D. “The Gold Standard, Bretton Woods and Other Monetary Regimes: A Historical Appraisal.” Federal Reserve Bank of St. Louis Review 75, no. 2 (1993): 123-91.

Bordo, Michael D. The Gold Standard and Related Regimes: Collected Essays. Cambridge: Cambridge University Press, 1999.

Bordo, Michael D., and Forrest Capie, eds. Monetary Regimes in Transition. Cambridge: Cambridge University Press, 1994.

Bordo, Michael D., and Barry Eichengreen, eds. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Bordo, Michael D., and Finn E. Kydland. “The Gold Standard as a Rule: An Essay in Exploration.” Explorations in Economic History 32, no. 4 (1995): 423-64.

Bordo, Michael D., and Hugh Rockoff. “The Gold Standard as a ‘Good Housekeeping Seal of Approval’.” Journal of Economic History 56, no. 2 (1996): 389-428.

Bordo, Michael D., and Anna J. Schwartz, eds. A Retrospective on the Classical Gold Standard, 1821-1931. Chicago: University of Chicago Press, 1984.

Bordo, Michael D., and Anna J. Schwartz. “The Operation of the Specie Standard: Evidence for Core and Peripheral Countries, 1880-1990.” In Currency Convertibility: The Gold Standard and Beyond, edited by Jorge Braga de Macedo, Barry Eichengreen, and Jaime Reis, 11-83. London: Routledge, 1996.

Bordo, Michael D., and Anna J. Schwartz. “Monetary Policy Regimes and Economic Performance: The Historical Record.” In Handbook of Macroeconomics, vol. 1A, edited by John B. Taylor and Michael Woodford, 149-234. Amsterdam: Elsevier, 1999.

Broadberry, S. N., and N. F. R. Crafts, eds. Britain in the International Economy. Cambridge: Cambridge University Press, 1992.

Brown, William Adams, Jr. The International Gold Standard Reinterpreted, 1914-1934. New York: National Bureau of Economic Research, 1940.

Bureau of the Mint. Monetary Units and Coinage Systems of the Principal Countries of the World, 1929. Washington, DC: Government Printing Office, 1929.

Cairncross, Alec, and Barry Eichengreen. Sterling in Decline: The Devaluations of 1931, 1949 and 1967. Oxford: Basil Blackwell, 1983.

Calleo, David P. “The Historiography of the Interwar Period: Reconsiderations.” In Balance of Power or Hegemony: The Interwar Monetary System, edited by Benjamin M. Rowland, 225-60. New York: New York University Press, 1976.

Clarke, Stephen V. O. Central Bank Cooperation: 1924-31. New York: Federal Reserve Bank of New York, 1967.

Cleveland, Harold van B. “The International Monetary System in the Interwar Period.” In Balance of Power or Hegemony: The Interwar Monetary System, edited by Benjamin M. Rowland, 1-59. New York: New York University Press, 1976.

Cooper, Richard N. “The Gold Standard: Historical Facts and Future Prospects.” Brookings Papers on Economic Activity 1 (1982): 1-45.

Dam, Kenneth W. The Rules of the Game: Reform and Evolution in the International Monetary System. Chicago: University of Chicago Press, 1982.

De Cecco, Marcello. The International Gold Standard. New York: St. Martin’s Press, 1984.

De Cecco, Marcello. “Gold Standard.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 260-66. London: Macmillan, 1992.

De Cecco, Marcello. “Central Bank Cooperation in the Inter-War Period: A View from the Periphery.” In International Monetary Systems in Historical Perspective, edited by Jaime Reis, 113-34. Houndmills, Basingstoke, Hampshire: Macmillan, 1995.

De Macedo, Jorge Braga, Barry Eichengreen, and Jaime Reis, eds. Currency Convertibility: The Gold Standard and Beyond. London: Routledge, 1996.

Ding, Chiang Hai. “A History of Currency in Malaysia and Singapore.” In The Monetary System of Singapore and Malaysia: Implications of the Split Currency, edited by J. Purcal, 1-9. Singapore: Stamford College Press, 1967.

Director of the Mint. The Monetary Systems of the Principal Countries of the World, 1913. Washington: Government Printing Office, 1913.

Director of the Mint. Monetary Systems of the Principal Countries of the World, 1916. Washington: Government Printing Office, 1917.

Dos Santos, Fernando Teixeira. “Last to Join the Gold Standard, 1931.” In Currency Convertibility: The Gold Standard and Beyond, edited by Jorge Braga de Macedo, Barry Eichengreen, and Jaime Reis, 182-203. London: Routledge, 1996.

Dowd, Kevin, and Richard H. Timberlake, Jr., eds. Money and the National State: The Financial Revolution, Government and the World Monetary System. New Brunswick (U.S.): Transaction, 1998.

Drummond, Ian M. The Gold Standard and the International Monetary System, 1900-1939. Houndmills, Basingstoke, Hampshire: Macmillan, 1987.

Easton, H. T. Tate’s Modern Cambist. London: Effingham Wilson, 1912.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Methuen, 1985.

Eichengreen, Barry. Elusive Stability: Essays in the History of International Finance, 1919-1939. New York: Cambridge University Press, 1990.

Eichengreen, Barry. “International Monetary Instability between the Wars: Structural Flaws or Misguided Policies?” In The Evolution of the International Monetary System: How can Efficiency and Stability Be Attained? edited by Yoshio Suzuki, Junichi Miyake, and Mitsuaki Okabe, 71-116. Tokyo: University of Tokyo Press, 1990.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eichengreen, Barry. “The Endogeneity of Exchange-Rate Regimes.” In Understanding Interdependence: The Macroeconomics of the Open Economy, edited by Peter B. Kenen, 3-33. Princeton: Princeton University Press, 1995.

Eichengreen, Barry. “History of the International Monetary System: Implications for Research in International Macroeconomics and Finance.” In The Handbook of International Macroeconomics, edited by Frederick van der Ploeg, 153-91. Cambridge, MA: Basil Blackwell, 1994.

Eichengreen, Barry, and Marc Flandreau. The Gold Standard in Theory and History, second edition. London: Routledge, 1997.

Einzig, Paul. International Gold Movements. London: Macmillan, 1929.

Federal Reserve Bulletin, various issues, 1928-1936.

Ford, A. G. The Gold Standard 1880-1914: Britain and Argentina. Oxford: Clarendon Press, 1962.

Ford, A. G. “Notes on the Working of the Gold Standard before 1914.” In The Gold Standard in Theory and History, edited by Barry Eichengreen, 141-65. New York: Methuen, 1985.

Ford, A. G. “International Financial Policy and the Gold Standard, 1870-1914.” In The Industrial Economies: The Development of Economic and Social Policies, The Cambridge Economic History of Europe, vol. 8, edited by Peter Mathias and Sidney Pollard, 197-249. Cambridge: Cambridge University Press, 1989.

Frieden, Jeffry A. “The Dynamics of International Monetary Systems: International and Domestic Factors in the Rise, Reign, and Demise of the Classical Gold Standard.” In Coping with Complexity in the International System, edited by Jack Snyder and Robert Jervis, 137-62. Boulder, CO: Westview, 1993.

Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Gallarotti, Giulio M. The Anatomy of an International Monetary Regime: The Classical Gold Standard, 1880-1914. New York: Oxford University Press, 1995.

Giovannini, Alberto. “Bretton Woods and its Precursors: Rules versus Discretion in the History of International Monetary Regimes.” In A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform, edited by Michael D. Bordo and Barry Eichengreen, 109-47. Chicago: University of Chicago Press, 1993.

Gunasekera, H. A. de S. From Dependent Currency to Central Banking in Ceylon: An Analysis of Monetary Experience, 1825-1957. London: G. Bell, 1962.

Hawtrey, R. G. The Gold Standard in Theory and Practice, fifth edition. London: Longmans, Green, 1947.

Hawtrey, R. G. Currency and Credit, fourth edition. London: Longmans, Green, 1950.

Hershlag, Z. Y. Introduction to the Modern Economic History of the Middle East. London: E. J. Brill, 1980.

Ingram, James C. Economic Changes in Thailand, 1850-1970. Stanford, CA: Stanford University, 1971.

Jonung, Lars. “Swedish Experience under the Classical Gold Standard, 1873-1914.” In A Retrospective on the Classical Gold Standard, 1821-1931, edited by Michael D. Bordo and Anna J. Schwartz, 361-99. Chicago: University of Chicago Press, 1984.

Kemmerer, Donald L. “Statement.” In Gold Reserve Act Amendments, Hearings, U.S. Senate, 83rd Cong., second session, pp. 299-302. Washington, DC: Government Printing Office, 1954.

Kemmerer, Edwin Walter. Modern Currency Reforms: A History and Discussion of Recent Currency Reforms in India, Puerto Rico, Philippine Islands, Straits Settlements and Mexico. New York: Macmillan, 1916.

Kemmerer, Edwin Walter. Inflation and Revolution: Mexico’s Experience of 1912-1917. Princeton: Princeton University Press, 1940.

Kemmerer, Edwin Walter. Gold and the Gold Standard: The Story of Gold Money – Past, Present and Future. New York: McGraw-Hill, 1944.

Kenwood, A. G., and A. L. Lougheed. The Growth of the International Economy, 1820-1960. London: George Allen & Unwin, 1971.

Kettell, Brian. Gold. Cambridge, MA: Ballinger, 1982.

Kindleberger, Charles P. A Financial History of Western Europe. London: George Allen & Unwin, 1984.

Kindleberger, Charles P. The World in Depression, 1929-1939, revised edition. Berkeley, University of California Press, 1986.

Lampe, John R. The Bulgarian Economy in the Twentieth Century. London: Croom Helm, 1986.

League of Nations. Memorandum on Currency and Central Banks, 1913-1925, second edition, vol. 1. Geneva, 1926.

League of Nations. International Statistical Yearbook, 1926. Geneva, 1927.

League of Nations. International Statistical Yearbook, 1928. Geneva, 1929.

League of Nations. Statistical Yearbook, 1930/31. Geneva, 1931.

League of Nations. Money and Banking, 1937/38, vol. 1: Monetary Review. Geneva.

League of Nations. The Course and Control of Inflation. Geneva, 1946.

Lindert, Peter H. Key Currencies and Gold, 1900-1913. Princeton: International Finance Section, Princeton University, 1969.

McCloskey, Donald N., and J. Richard Zecher. “How the Gold Standard Worked, 1880-1913.” In The Monetary Approach to the Balance of Payments, edited by Jacob A. Frenkel and Harry G. Johnson, 357-85. Toronto: University of Toronto Press, 1976.

MacKay, R. A., ed. Newfoundland: Economic, Diplomatic, and Strategic Studies. Toronto: Oxford University Press, 1946.

MacLeod, Malcolm. Kindred Countries: Canada and Newfoundland before Confederation. Ottawa: Canadian Historical Association, 1994.

Moggridge, D. E. British Monetary Policy, 1924-1931: The Norman Conquest of $4.86. Cambridge: Cambridge University Press, 1972.

Moggridge, D. E. “The Gold Standard and National Financial Policies, 1919-39.” In The Industrial Economies: The Development of Economic and Social Policies, The Cambridge Economic History of Europe, vol. 8, edited by Peter Mathias and Sidney Pollard, 250-314. Cambridge: Cambridge University Press, 1989.

Morgenstern, Oskar. International Financial Transactions and Business Cycles. Princeton: Princeton University Press, 1959.

Norman, John Henry. Complete Guide to the World’s Twenty-nine Metal Monetary Systems. New York: G. P. Putnam, 1892.

Nurkse, Ragnar. International Currency Experience: Lessons of the Inter-War Period. Geneva: League of Nations, 1944.

Officer, Lawrence H. Between the Dollar-Sterling Gold Points: Exchange Rates, Parity, and Market Behavior. Cambridge: Cambridge University Press, 1996.

Martín Aceña, Pablo, and Jaime Reis, eds. Monetary Standards in the Periphery: Paper, Silver and Gold, 1854-1933. Houndmills, Basingstoke, Hampshire: Macmillan, 2000.

Palyi, Melchior. The Twilight of Gold, 1914-1936: Myths and Realities. Chicago: Henry Regnery, 1972.

Pamuk, Sevket. A Monetary History of the Ottoman Empire. Cambridge: Cambridge University Press, 2000.

Panić, M. European Monetary Union: Lessons from the Classical Gold Standard. Houndmills, Basingstoke, Hampshire: St. Martin’s Press, 1992.

Powell, James. A History of the Canadian Dollar. Ottawa: Bank of Canada, 1999.

Redish, Angela. Bimetallism: An Economic and Historical Analysis. Cambridge: Cambridge University Press, 2000.

Rifaat, Mohammed Ali. The Monetary System of Egypt: An Inquiry into its History and Present Working. London: George Allen & Unwin, 1935.

Rockoff, Hugh. “Gold Supply.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 271-73. London: Macmillan, 1992.

Sayers, R. S. The Bank of England, 1891-1944, Appendixes. Cambridge: Cambridge University Press, 1976.

Sayers, R. S. The Bank of England, 1891-1944. Cambridge: Cambridge University Press, 1986.

Schwartz, Anna J. “Alternative Monetary Regimes: The Gold Standard.” In Alternative Monetary Regimes, edited by Colin D. Campbell and William R. Dougan, 44-72. Baltimore: Johns Hopkins University Press, 1986.

Shinjo, Hiroshi. History of the Yen: 100 Years of Japanese Money-Economy. Kobe: Kobe University, 1962.

Spalding, William F. Tate’s Modern Cambist. London: Effingham Wilson, 1926.

Spalding, William F. Dictionary of the World’s Currencies and Foreign Exchange. London: Isaac Pitman, 1928.

Triffin, Robert. The Evolution of the International Monetary System: Historical Reappraisal and Future Perspectives. Princeton: International Finance Section, Princeton University, 1964.

Triffin, Robert. Our International Monetary System: Yesterday, Today, and Tomorrow. New York: Random House, 1968.

Wallich, Henry Christopher. Monetary Problems of an Export Economy: The Cuban Experience, 1914-1947. Cambridge, MA: Harvard University Press, 1950.

Yeager, Leland B. International Monetary Relations: Theory, History, and Policy, second edition. New York: Harper & Row, 1976.

Young, John Parke. Central American Currency and Finance. Princeton: Princeton University Press, 1925.

Citation: Officer, Lawrence. “Gold Standard”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/gold-standard/

California Gold Rush

Robert Whaples, Wake Forest University

The gold rush beginning in 1849 brought a flood of workers to California and played an important role in integrating California’s economy into that of the eastern United States.

The California Gold Rush began with the discovery of significant gold deposits near Sacramento in 1848. As accounts of the discovery spread, residents of the thinly-populated West Coast poured into the gold fields and migrants swarmed in from the Far East, Mexico, South America, and the East Coast – which provided about 80 percent of the newcomers. The rush occurred because the gold mining industry was very labor intensive and was easy to enter due to modest capital requirements and laws which made acquiring a claim fairly simple. Many easterners went overland in wagon trains through undeveloped territory – a trip that took from April or May until September and cost around $200 at a time when laborers earned somewhat less than one dollar per day in the East. Others came by ship – either sailing around South America or sailing to Panama, crossing the isthmus, and then boarding another ship to San Francisco. The trip via Central America took six to eight weeks but was very dangerous due to disease, a risk that abated when Cornelius Vanderbilt forged a path through Nicaragua in 1851 and William Henry Aspinwall completed a railroad across the Panamanian isthmus in 1855. Sailing around Cape Horn could take from three to eight months depending on winds. After the arduous trip miners faced shockingly high prices for food, supplies, rent and other goods and services and endured mining camps that lacked basic sanitation.

Despite these costs, the population of California (excluding non-Christianized Indians) soared from about eight thousand on the eve of the rush to 93,000 in 1850 – 77 percent of whom were males aged fifteen to forty. By 1852, population reached about 250,000 – more than one percent of the nation’s population had moved to California in just four years. Although the rush slowed down after 1850, California’s population reached 380,000 by 1860.

In 1850 there were 624 miners for every 1000 people in the state, but many soon realized that they could do just as well or better by supplying services to miners. San Francisco developed most rapidly. As a port in close proximity to the gold fields it became a booming market where imported supplies were unloaded and sold to miners.

Because of the mining opportunities the demand for labor in California was the highest in the world, and the supply of common labor in other jobs fell as workers turned to mining. This combination, according to the estimates of Robert Margo (2000), pushed up real (i.e., inflation-adjusted) wages of common laborers in California by an astonishing 515 percent between late 1847 and 1849. As the influx of settlers continued, real wages fell, but even after the arrival of hundreds of thousands of migrants, laborers’ wages in 1860 were almost four times their original level. The transitory shock of the gold rush, Margo concludes, had integrated California into the U.S. economy. These real wages could not have remained so high – well above the national average – without the existence of a range of untapped natural resources in California (including fertile land) and the simultaneous influx of investment capital – including funds to build the first transcontinental railroad to California in the 1860s.
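For readers unfamiliar with the adjustment, the following is a minimal sketch of what an inflation-adjusted (real) wage comparison involves; the figures are invented for illustration and are not Margo’s data.

# Illustrative real-wage calculation (hypothetical numbers, not Margo's data).
nominal_wage_1847 = 1.00      # dollars per day (assumed)
nominal_wage_1849 = 10.00     # dollars per day (assumed)
price_index_1847 = 100.0      # cost-of-living index (assumed)
price_index_1849 = 160.0      # California prices also rose sharply (assumed)

real_1847 = nominal_wage_1847 / price_index_1847
real_1849 = nominal_wage_1849 / price_index_1849
print(f"real wage change: {(real_1849 / real_1847 - 1) * 100:.0f}%")
# With these made-up inputs the real wage rises 525 percent: a tenfold
# nominal increase deflated by a 60 percent rise in prices.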

The early mines were placer deposits of pure gold mixed with sand and gravel and could be recovered by agitating water and debris in a pan. As these deposits were exhausted attention turned to lode mining, which required capital-intensive industrial methods for crushing the ore and the adoption of new technologies like compressed-air drills and chemical processes – especially the cyanide process – which eventually allowed recovery of gold from low-grade ores.

From 1792 until 1847 cumulative U.S. production of gold was only about 37 tons. California’s production in 1849 alone exceeded this figure, and annual production from 1848 to 1857 averaged 76 tons. During this decade California’s gold production equaled $550 million – about 1.8% of American GDP.
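A back-of-the-envelope check, using only the two figures quoted in the paragraph above, shows what the 1.8 percent share implies:

# Back-of-the-envelope check using only the figures quoted above.
decade_output = 550_000_000      # dollars of California gold, 1848-1857
share_of_gdp = 0.018             # "about 1.8% of American GDP"

implied_gdp = decade_output / share_of_gdp
print(f"implied GDP: ${implied_gdp / 1e9:.1f} billion over the decade, "
      f"or roughly ${implied_gdp / 10 / 1e9:.1f} billion per year")
# About $30.6 billion over ten years, i.e. on the order of $3 billion a year,
# which is the right order of magnitude for U.S. GDP in the 1850s.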

The two other gold rushes of the nineteenth century that rivaled California’s in size and impact began in Australia in 1851 and South Africa in 1886. (Smaller gold rushes include those in British Columbia (1858), Nevada and Colorado (1859), South Dakota (1876) and Canada’s Yukon (1898).) The Australian gold rush triggered a tripling of Australia’s population to 1.1 million over the following ten years. This gold rush seems to have reduced Australia’s industrialization – at least in the short term – because high returns from mining pulled workers out of other sectors. Australians began importing goods which they had previously manufactured.

An additional impact of the gold rushes of the nineteenth century was on prices. Because precious metals were at the base of the monetary system, the rushes increased the money supply, which resulted in inflation. Soaring gold output from the California and Australian gold rushes is linked with a thirty percent increase in wholesale prices between 1850 and 1855. Likewise, right at the end of the nineteenth century a surge in gold production reversed a decades-long deflationary trend and is often credited with aiding indebted farmers and with undermining the Populist Party’s strength and its call for a bimetallic (gold and silver) money standard.
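The mechanism at work can be summarized with the familiar quantity-theory relation, a standard textbook identity added here only as a gloss on the paragraph above: M x V = P x Y, where M is the money supply, V the velocity of circulation, P the price level, and Y real output. With V and Y changing slowly, a gold-driven rise in M shows up mainly as a rise in P, the inflation described above.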

References

Eichengreen, Barry and Ian McLean. “The Supply of Gold under the Pre-1914 Gold Standard.” Economic History Review 47, no. 2 (1994): 288-309.

Maddock, Rodney and Ian McLean. “Supply-Side Shocks: The Case of Australian Gold.” Journal of Economic History 44, no. 4 (1984): 1047-67.

Margo, Robert A. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000.

Citation: Whaples, Robert. “California Gold Rush”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/california-gold-rush/

Manufactured and Natural Gas Industry

Christopher Castaneda, California State University – Sacramento

The historical gas industry includes two chemically distinct flammable gases. These are natural gas and several variations of manufactured coal gas. Natural gas is composed primarily of methane, a hydrocarbon composed of one carbon atom and four hydrogen atoms, or CH4. As a “fossil fuel,” natural gas flowing from the earth is rarely pure. It is commonly associated with petroleum and may contain other hydrocarbons including butane, ethane, and propane. In the United States, substantial commercial natural gas utilization did not begin until after the discovery of large quantities of both crude oil and natural gas in western Pennsylvania during 1859.

Manufactured Gas

Manufactured coal gas (sometimes referred to as “town gas”), and its several variants, was used for lighting throughout most of the nineteenth century. Consumers also used this gas as a fuel for heating and cooking from the late nineteenth through the mid-twentieth century in many locations where natural gas was unavailable. Generally, a rather simple process of heating coal, or another organic substance, produces a flammable gas. The resulting gas (a combination of carbon monoxide, hydrogen and other gases, depending upon the exact process) was stored in a “holder” or “gasometer” for later distribution. Coal-based “gas works” produced manufactured gas from the early nineteenth century through the mid-twentieth century. Commercial utilization of manufactured coal gas occurred prior to that of natural gas due to the comparative ease of producing coal gas. The first manufactured coal gas light demonstration in the United States apparently took place in 1802, when Benjamin Henfrey of Northumberland, Pennsylvania, used a “thermo-lamp,” reportedly based on a European design, with which he produced a “beautiful and brilliant light.” Despite Henfrey’s successful demonstration in this case and others, he was unable to attract financial support to develop his gas light endeavors further.

Other experimenters followed, but the most successful were several members of the Peale family. Charles Willson Peale, the family patriarch, Revolutionary War colonel, and George Washington’s portraitist, opened a museum in Independence Hall in Philadelphia and subsequently transferred control of it to his son Rubens. Seeking ways to attract paying visitors, Rubens decided to use gaslights in the museum. With technical assistance from chemist Benjamin Kugler in 1814, Rubens installed gaslights. He operated and maintained the museum’s gas works for the next several years until his fear that a fire, or explosion, might destroy the building caused him to disassemble the equipment.

Rembrandt Peale in Baltimore

In the meantime, Rembrandt Peale, another of Charles’ sons, opened a new Peale Museum in Baltimore. The Baltimore museum was similar to his father’s Philadelphia museum in that it contained both works of art and specimens of nature. Rembrandt understood that his museum’s success depended upon its ability to attract paying visitors, and he installed gaslights in the Baltimore museum.

The first advertisement for the museum’s new gas light attraction appeared in the “American and Commercial Daily Advertiser” on June 13, 1816. The ad stated:

Gas Lights – Without Oil, Tallow, Wicks or Smoke. It is not necessary to invite attention to the gas lights by which my salon of paintings is now illuminated; those who have seen the ring beset with gems of light are sufficiently disposed to spread their reputation; the purpose of this notice is merely to say that the Museum will be illuminated every evening until the public curiosity be gratified.

Using a valve attached to the wall in a side room on the second floor next to the lecture hall, Rembrandt Peale dazzled onlookers with his “magic ring” of one hundred burners. The valve allowed Rembrandt to vary the luminosity from dim to very bright. The successful demonstration of gas lighting at the museum underscored to Rembrandt the immense potential for the widespread application of gas lighting.

In his successful gas light demonstration, Rembrandt recognized an opportunity to develop a commercial gasworks for Baltimore. Rembrandt had purchased the patent for Dr. Kugler’s gas light method, and he organized a group of men to join him in a commercial gas lighting venture. These men established the Gas Light Company of Baltimore (GLCB) on June 17, 1816. On February 7, 1817, the GLCB lit its first street lamp at Market and Lemon Streets. The Belvidere Theater located directly across the street from the gas works became the first building illuminated by GLCB, and J. T. Cohen who lived on North Charles Street owned the first private home lit by gas. Rembrandt’s role at GLCB soon diminished, in large part because he lacked understanding of both business and relevant technological issues. Rembrandt was ultimately forced out of the company, and he continued his career as an artist.

The Gas Light Company of Baltimore was the first commercial gas light company in the United States. Other entrepreneurs soon thereafter formed gas light firms for their cities and towns. By 1850, about 50 urban areas in the United States had a manufactured gas works. Generally, gas lighting was available only in medium sized or larger cities, and it was used for lighting streets, commercial establishments, and some residences. Despite the rapid spread of gas lighting, it was expensive and beyond the means of most Americans. Other than gas, whale oil and tallow candles continued to be the most popular fuels for lighting.

1840s-50s: Use of Manufactured Gas Spreads Rapidly

Manufactured gas utilization for lighting and heating spread rapidly throughout the nation during the 1840s and 1850s. By the mid-nineteenth century, New York City ranked first in manufactured gas utilization by consuming approximately 600 million cubic feet (MMcf) per year, compared to Philadelphia’s consumption of approximately 300 MMcf per year.

Developments in portable gas lighting allowed for gas lamp installations in some passenger railroad cars. In the 1850s, the New Jersey Railroad’s service between New York City and Philadelphia offered gas lighting. Coal gas was stored in a wrought-iron cylinder attached to the undercarriage of the passenger cars. Each cylinder contained enough gas to light the two burners per car for fifteen hours. The New Haven Railroad also used gas lighting in the smoking cars of its night express. Each car had two burners that together consumed 7 cubic feet (cf) of gas per hour.

Challenge from Electric Lighting and Consolidation

Although kerosene and tallow candles competed with coal gas for the nineteenth-century lighting market, it was electricity that forced permanent restructuring on the manufactured gas industry. In the early 1880s, Thomas Edison promoted electricity as both a safer and cleaner energy source than coal gas, which had a strong odor and left soot around the burners. The superior quality of electric light and its rapid accessibility after 1882 forced gas light companies to begin promoting manufactured gas for cooking instead of lighting.

By the late nineteenth century, independent gas distribution firms began to merge. Competitive pressures from electric power, in particular, forced gas firms located in the same urban area to consider consolidating operations. By the early twentieth century many coal gas companies also began merging with electric power firms. These business combinations resulted in the formation of large public utility holding companies, many of which were referred to collectively as the “Power Trust.” These large utility firms controlled urban manufactured and natural gas production, transmission, and distribution as well as the same for electric power.

Manufactured gas continued to be used well into the twentieth century in many urban areas that did not have access to natural gas. Between 1930 and the mid-1950s, however, utility companies began converting their manufactured gas plants to natural gas, as the natural fuel became available through newly built long-distance gas pipelines.

Natural Gas

While the manufactured gas business expanded rapidly in the United States during the nineteenth century, natural gas was then neither widely available nor easy to utilize. During the Colonial era, it was the subject more of curiosity than utility. Both George Washington and Thomas Jefferson observed natural gas “springs” in present-day West Virginia. However, the first sustained commercial use of natural gas, albeit relatively minimal, occurred in Fredonia, New York in 1825.

After discovery of large quantities of both oil and natural gas at Titusville, Pennsylvania in 1859, natural gas found a growing market. The large iron and steel works in Pittsburgh contracted for natural gas supply as this fuel offered a stable temperature for industrial heat. Residents and commercial establishments in Pittsburgh also used natural gas for heating purposes. In 1884, the New York Times proclaimed that natural gas would help reduce Pittsburgh’s unpleasant coal smoke pollution.

1920s: Development of Southwestern Fields

The discovery of massive southwestern natural gas fields and technological advancements in long distance pipeline construction dramatically altered the twentieth century gas industry market structure. In 1918, drillers discovered huge natural gas fields in the Panhandle area of North Texas. In 1922, a crew located a large gas well in Kansas that became the first one in the Hugoton field, located in the common Kansas, Oklahoma, and Texas border area (generally referred to as the mid-continent area). The combined Panhandle/Hugoton Field became the nation’s largest gas producing area comprising more than 1.6 million acres. It contained as much as 117 trillion cubic feet (Tcf) of natural gas and accounted for approximately 16 percent of total U.S. reserves in the twentieth century.

As they had done earlier in Appalachia, oil drillers initially exploited the Panhandle Field for petroleum only, while allowing an estimated 1 billion cubic feet per day (Bcf/d) of natural gas to escape into the atmosphere. As new markets emerged for the burgeoning natural gas supply, the commercial value of southwestern natural gas attracted entrepreneurial interest and bolstered the fortunes of existing firms. These discoveries led to the establishment of many new companies including the Lone Star Gas Company, Arkansas Louisiana Gas Company, Kansas Natural Gas Company, United Gas Company, and others, some of which evolved into large firms.

Pipeline Advances

The sheer volume of the southwestern fields emphasized the need for advancements in pipeline technology to transport the natural gas to distant urban markets. In particular, new welding technologies allowed pipeline builders in the 1920s to construct longer lines. In the early years of the decade, oxy-acetylene torches were used for welding, and in 1923 electric arc welding was successfully used on thin-walled, high tensile strength, large-diameter pipelines necessary for long-distance compressed gas transmission. Improved welding techniques made pipe joints stronger than the pipe itself; seamless pipe became available for gas pipelines beginning in 1925. Along with enhancements in pipeline construction materials and techniques, gas compressor and ditching machine technology improved as well. Long-distance pipelines became a significant segment of the gas industry beginning in the 1920s.

These new technologies made possible the transportation of southwestern natural gas to distant markets. Until the late 1920s, most interstate natural gas transportation took place in the Northeast, and it was based upon Appalachian production. In 1921, natural gas produced in West Virginia accounted for approximately 65% of interstate gas transportation while only 2% of interstate gas originated in Texas. The discovery of southwestern gas fields occurred as Appalachian gas reserves and production began to diminish. The southwestern gas fields quickly overshadowed those of the historically important Appalachian area.

Between the mid-1920s and the mid-1930s, the combination of abundant and relatively inexpensive southwestern natural gas production, improved pipeline technology, and increasing nation-wide natural gas demand stimulated the creation of a new interstate gas pipeline industry. Metropolitan manufactured gas distribution companies, typically part of large holding companies, financed most of the pipelines built during this first era of rapid pipeline construction. Long distance lines built during this era included the Northern Natural Gas Company, Panhandle Eastern Pipe Line Company, and the Natural Gas Pipeline Company.

Midwestern urban utilities that began receiving natural gas typically mixed it with existing manufactured gas production. This mixed gas had a higher Btu content than straight manufactured gas. Eventually, with access to reliable supplies of natural gas, all U.S. gas utilities converted their distribution systems to straight natural gas.
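As a rough sketch of the mixing arithmetic, the heating values below are typical textbook figures, not data from this article: natural gas is commonly rated at roughly 1,000 Btu per cubic foot and manufactured coal gas at roughly 500-600.

# Rough mixing arithmetic with typical (assumed) heating values:
# natural gas ~1,000 Btu/cf, manufactured coal gas ~550 Btu/cf.
btu_natural = 1000.0
btu_manufactured = 550.0
share_natural = 0.5               # fraction of natural gas in the blend (assumed)

btu_blend = share_natural * btu_natural + (1 - share_natural) * btu_manufactured
print(f"heating value of the blend: {btu_blend:.0f} Btu per cubic foot")
# A 50/50 blend comes to about 775 Btu/cf, higher than straight manufactured
# gas, which is why utilities blended the two before full conversion.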

Samuel Insull

In the late 1920s and early 1930s, the most well-known public utility figure was Samuel Insull, a former personal secretary of Thomas Edison. Insull’s public utility empire headquartered in Chicago did not fare well in the economic climate that followed the 1929 Wall Street stock market crash. His gas and electric power empire crumbled, and he fled the country. The collapse of the Insull empire symbolized the end of a long period of unrestrained and rapid growth in the U.S. public utility industry.

Federal Regulation

In the meantime, the Federal Trade Commission (FTC) launched a massive investigation of the nation’s public utilities, and its work culminated in New Deal legislation that imposed federal regulation on the gas and electric industries. The Public Utility Holding Company Act (1935) broke apart the multi-tiered gas and electric power companies, while the Federal Power Act (1935) and the Natural Gas Act (1938), respectively, authorized the Federal Power Commission (FPC) to regulate the interstate transmission and sale of electric power and natural gas.

During the Depression the gas industry also suffered its worst tragedy of the twentieth century. In 1937 at New London, Texas, an undetected natural gas leak at the Consolidated High School resulted in a tremendous explosion that virtually destroyed the school, fifteen minutes before the end of the school day. Initial estimates of 500 dead were later revised to 294. Texas Governor Allred appointed a military court of inquiry that determined that an accumulation of odorless gas in the school’s basement, possibly ignited by the spark of an electric light switch, created the explosion. This terrible tragedy was marked by irony. On top of the wreckage, a broken blackboard contained these words, apparently written before the explosion:

Oil and natural gas are East Texas’ greatest mineral blessings. Without them this school would not be here, and none of us would be here learning our lessons.

Although many gas firms used odorants, the New London explosion resulted in the implementation of new natural gas odorization regulations in Texas.

The New Deal era regulatory regime did not appear to constrain gas industry growth during the post-World War II era, as entrepreneurs organized several long-distance gas pipeline firms to connect southwestern gas supply with northeastern markets. Both during and immediately after World War II, a second era of rapid gas industry growth occurred. Pipeline firms targeted northeastern markets such as Philadelphia, New York and Boston, very large urban areas previously without natural gas supply. These cities subsequently converted their distribution systems from manufactured coal gas to the more efficient natural gas.

By the 1950s, the beginnings of a national market for natural gas had emerged. During the last half of the twentieth century, natural gas consumption in the U.S. ranged from about 20 to 30% of total national energy utilization. However, the era of natural gas abundance ended in the late 1960s.

1960s to 1980s: Price Controls, Shortages, and Decontrol

The first overt sign of serious industry trouble emerged in the late 1960s, when natural gas shortages first appeared. Economists almost uniformly blamed the shortages on gas pricing regulations instituted after the so-called Phillips Decision of 1954. This Supreme Court decision extended the FPC’s price-setting authority over the natural gas producers that sold gas to interstate pipelines for resale. The FPC’s consumerist orientation meant that it held gas prices low, and producers lost their incentive to develop new gas supply for the interstate market.

The 1973 OPEC oil embargo exacerbated the growing shortage problem as factories switched boiler fuels from petroleum to natural gas. Cold winters further strained the nation’s gas industry. The resulting energy crisis compelled consumer groups and politicians to call for changes in the regulatory system that had constricted gas production. In 1978, a new comprehensive federal gas policy was dictated by the Natural Gas Policy Act (NGPA), with regulatory authority over the interstate gas industry assumed by the Federal Energy Regulatory Commission (FERC), the successor agency to the FPC.

The NGPA also included a complex system of natural gas price decontrols that sought to stimulate domestic natural gas production. These measures soon resulted in the creation of a nationwide gas supply “bubble” and lower prices. The lower prices wreaked additional havoc on the gas pipeline industry since most interstate lines were purchasing gas at high prices under long-term contracts. Large gas purchasers, particularly utilities, subsequently sought to circumvent their high-priced gas contracts with pipelines and purchase natural gas on the emerging spot market.

Once again, dysfunction of the regulated market forced the government to act in order to try to bring balance to the gas market. Beginning in the mid-1980s, a number of FERC Orders culminating in Order 636 (and amendments) transformed interstate pipelines into virtual common carriers. This industry structural change allowed gas utilities and end-users to contract directly with producers for gas purchases. FERC continued to regulate the gas pipelines’ transportation function.

The Future

Natural gas is a limited resource. While it is the cleanest burning of all fossil fuels, it exists in finite supply. Estimates of natural gas availability vary widely, from hundreds to thousands of years. Such estimates depend upon the technology that must be developed in order to drill for gas in more difficult geographical conditions, find gas where it is expected to be located, and transport it to the consumer. Methane can also be extracted from coal, peat, and oil shale, and if these sources can be successfully utilized for methane production, the world’s methane supply will be extended by another 500 or more years.

For the foreseeable future, natural gas will continue to be used primarily for residential and commercial heating, electric power generation, and industrial heat processes. The market for methane as a transportation fuel will undoubtedly grow, but improvements in electric vehicles may well dampen any dramatic increase in natural gas powered engines. The environmental characteristics of natural gas will certainly retain this fuel’s position at the forefront of all fossil fuels. In a broadly historical and environmental perspective, we should recognize that in a period of a few hundred years, human society will have burned as fuel for lighting, cooking and heating a very large percentage of the earth’s natural gas supply.


Citation: Castaneda, Christopher. “Manufactured and Natural Gas Industry”. EH.Net Encyclopedia, edited by Robert Whaples. September 3, 2001. URL http://eh.net/encyclopedia/manufactured-and-natural-gas-industry/

A History of Futures Trading in the United States

Joseph Santos, South Dakota State University

Many contemporary [nineteenth century] critics were suspicious of a form of business in which one man sold what he did not own to another who did not want it… Morton Rothstein (1966)

Anatomy of a Futures Market

The Futures Contract

A futures contract is a standardized agreement between a buyer and a seller to exchange an amount and grade of an item at a specific price and future date. The item or underlying asset may be an agricultural commodity, a metal, mineral or energy commodity, a financial instrument or a foreign currency. Because futures contracts are derived from these underlying assets, they belong to a family of financial instruments called derivatives.

Traders buy and sell futures contracts on an exchange – a marketplace that is operated by a voluntary association of members. The exchange provides buyers and sellers the infrastructure (trading pits or their electronic equivalent), legal framework (trading rules, arbitration mechanisms), contract specifications (grades, standards, time and method of delivery, terms of payment) and clearing mechanisms (see section titled The Clearinghouse) necessary to facilitate futures trading. Only exchange members are allowed to trade on the exchange. Nonmembers trade through commission merchants – exchange members who service nonmember trades and accounts for a fee.

The September 2004 light sweet crude oil contract is an example of a petroleum (mineral) future. It trades on the New York Mercantile Exchange (NYM). The contract is standardized – every one is an agreement to trade 1,000 barrels of grade light sweet crude in September, on a day of the seller’s choosing. As of May 25, 2004, the contract sold for $40,120 = $40.12 × 1,000 barrels.

The Clearinghouse

The clearinghouse is the counterparty to every trade – its members buy every contract that traders sell on the exchange and sell every contract that traders buy on the exchange. Absent a clearinghouse, traders would interact directly, and this would introduce two problems. First, traders’ concerns about their counterparty’s credibility would impede trading. For example, Trader A might refuse to sell to Trader B, who is supposedly untrustworthy.

Second, traders would lose track of their counterparties. This would occur because traders typically settle their contractual obligations by offset – traders buy/sell the contracts that they sold/bought earlier. For example, Trader A sells a contract to Trader B, who sells a contract to Trader C to offset her position, and so on.

The clearinghouse eliminates both of these problems. First, it is a guarantor of all trades. If a trader defaults on a futures contract, the clearinghouse absorbs the loss. Second, clearinghouse members, and not outside traders, reconcile offsets at the end of trading each day. Margin accounts and a process called marking-to-market all but assure the clearinghouse’s solvency.

A margin account is a balance that a trader maintains with a commission merchant in order to offset the trader’s daily unrealized losses in the futures markets. Commission merchants also maintain margins with clearinghouse members, who maintain them with the clearinghouse. The margin account begins as an initial lump sum deposit, or original margin.

To understand the mechanics and merits of marking-to-market, consider that the values of the long and short positions of an existing futures contract change daily, even though futures trading is a zero-sum game – a buyer’s gain/loss equals a seller’s loss/gain. So, the clearinghouse breaks even on every trade, while its individual members’ positions change in value daily.

With this in mind, suppose Trader B buys a 5,000 bushel soybean contract for $9.70 per bushel from Trader S. Technically, Trader B buys the contract from Clearinghouse Member S and Trader S sells the contract to Clearinghouse Member B. Now, suppose that at the end of the day the contract is priced at $9.71 per bushel. That evening the clearinghouse marks-to-market each member’s account. That is to say, the clearinghouse credits Member B’s margin account $50 and debits Member S’s margin account the same amount.

Member B is now in a position to draw $50 from the clearinghouse, while Member S must pay the clearinghouse a $50 variation margin – incremental margin equal to the difference between a contract’s price and its current market value. In turn, clearinghouse members debit and credit accordingly the margin accounts of their commission merchants, who do the same to the margin accounts of their clients (i.e., traders). This iterative process all but assures the clearinghouse a sound financial footing. In the unlikely event that a trader defaults, the clearinghouse closes out the position and loses, at most, the trader’s one day loss.
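To make the clearing arithmetic concrete, the short Python sketch below marks the soybean position from the example above to market over a few days of hypothetical settlement prices. The contract size and prices follow the example in the text, while the original margin figure is an assumption chosen only for illustration; none of these numbers reflect any exchange’s actual specifications.

```python
# Illustrative marking-to-market of one 5,000-bushel soybean futures position.
# Prices and the original margin are hypothetical and follow the example above.

CONTRACT_SIZE = 5_000          # bushels per contract
ORIGINAL_MARGIN = 1_000.00     # assumed initial (original) margin, in dollars

def mark_to_market(entry_price, settlement_prices, position):
    """Return the running margin balance for a long (+1) or short (-1) position."""
    balance = ORIGINAL_MARGIN
    prev = entry_price
    for settle in settlement_prices:
        variation = (settle - prev) * CONTRACT_SIZE * position  # daily gain or loss
        balance += variation                                    # credit or debit margin
        prev = settle
        print(f"settle={settle:.2f}  variation={variation:+.2f}  balance={balance:.2f}")
    return balance

# Trader B is long at $9.70 per bushel; Trader S is short at the same price.
settlements = [9.71, 9.68, 9.74]
print("Long (Trader B):")
mark_to_market(9.70, settlements, position=+1)
print("Short (Trader S):")
mark_to_market(9.70, settlements, position=-1)
```

On the first settlement day the long account is credited $50 and the short account is debited $50, exactly as in the example; because the two sides’ gains and losses are mirror images, the clearinghouse itself always nets to zero.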

Active Futures Markets

Futures exchanges create futures contracts. And, because futures exchanges compete for traders, they must create contracts that appeal to the financial community. For example, the New York Mercantile Exchange created its light sweet crude oil contract in order to fill an unexploited niche in the financial marketplace.

Not all contracts are successful and those that are may, at times, be inactive – the contract exists, but traders are not trading it. For example, of all contracts introduced by U.S. exchanges between 1960 and 1977, only 32% traded in 1980 (Stein 1986, 7). Similarly, entire exchanges can become active – e.g., the New York Futures Exchange opened in 1980 – or inactive – e.g., the New Orleans Exchange closed in 1983 (Leuthold 1989, 18). Government price supports or other such regulation can also render trading inactive (see Carlton 1984, 245).

Futures contracts succeed or fail for many reasons, but successful contracts do share certain basic characteristics (see for example, Baer and Saxon 1949, 110-25; Hieronymus 1977, 19-22). To wit, the underlying asset is homogeneous, reasonably durable, and standardized (easily describable); its supply and demand are ample; its price is unfettered; and all relevant information is available to all traders. For example, futures contracts have never derived from, say, artwork (heterogeneous and not standardized) or rent-controlled housing rights (supply, and hence price, is fettered by regulation).

Purposes and Functions

Futures markets have three fundamental purposes. The first is to enable hedgers to shift price risk – asset price volatility – to speculators in return for basis risk – changes in the difference between a futures price and the cash, or current spot price of the underlying asset. Because basis risk is typically less than asset price risk, the financial community views hedging as a form of risk management and speculating as a form of risk taking.

Generally speaking, to hedge is to take opposing positions in the futures and cash markets. Hedgers include (but are not restricted to) farmers, feedlot operators, grain elevator operators, merchants, millers, utilities, export and import firms, refiners, lenders, and hedge fund managers (see Peck 1985, 13-21). Meanwhile, to speculate is to take a position in the futures market with no counter-position in the cash market. Speculators may not be affiliated with the underlying cash markets.

To demonstrate how a hedge works, assume Hedger A buys, or longs, 5,000 bushels of corn, which is currently worth $2.40 per bushel, or $12,000=$2.40×5000; the date is May 1st and Hedger A wishes to preserve the value of his corn inventory until he sells it on June 1st. To do so, he takes a position in the futures market that is exactly opposite his position in the spot – current cash – market. For example, Hedger A sells, or shorts, a July futures contract for 5,000 bushels of corn at a price of $2.50 per bushel; put differently, Hedger A commits to sell in July 5,000 bushels of corn for $12,500=$2.50×5000. Recall that to sell (buy) a futures contract means to commit to sell (buy) an amount and grade of an item at a specific price and future date.

Absent basis risk, Hedger A’s spot and futures markets positions will preserve the value of the 5,000 bushels of corn that he owns, because a fall in the spot price of corn will be matched penny for penny by a fall in the futures price of corn. For example, suppose that by June 1st the spot price of corn has fallen five cents to $2.35 per bushel. Absent basis risk, the July futures price of corn has also fallen five cents to $2.45 per bushel.

So, on June 1st, Hedger A sells his 5,000 bushels of corn and loses $250 = ($2.40 - $2.35) × 5,000 in the spot market. At the same time, he buys a July futures contract for 5,000 bushels of corn and gains $250 = ($2.50 - $2.45) × 5,000 in the futures market. Notice, because Hedger A has both sold and bought a July futures contract for 5,000 bushels of corn, he has offset his commitment in the futures market.
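The arithmetic of this textbook hedge can be restated in a few lines of Python. The figures simply repeat the example above (5,000 bushels bought at $2.40, a short July futures position at $2.50, and matching five-cent declines in both markets), so the hedge nets to zero by construction.

```python
# Textbook hedge from the example: long 5,000 bushels of corn in the cash market,
# short one July futures contract. All prices follow the example in the text.

BUSHELS = 5_000

# Prices in cents per bushel to keep the arithmetic exact.
spot_buy, spot_sell = 240, 235          # May 1 purchase, June 1 sale
futures_sell, futures_buy = 250, 245    # May 1 short, June 1 offsetting buy

cash_pnl    = (spot_sell - spot_buy) * BUSHELS / 100        # -$250 in the spot market
futures_pnl = (futures_sell - futures_buy) * BUSHELS / 100  # +$250 in the futures market

print(f"cash market:    {cash_pnl:+,.2f}")
print(f"futures market: {futures_pnl:+,.2f}")
print(f"net:            {cash_pnl + futures_pnl:+,.2f}")    # 0.00 absent basis risk
```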

This example of a textbook hedge – one that eliminates price risk entirely – is instructive but it is also a bit misleading because: basis risk exists; hedgers may choose to hedge more or less than 100% of their cash positions; and hedgers may cross hedge – trade futures contracts whose underlying assets are not the same as the assets that the hedger owns. So, in reality hedgers cannot immunize entirely their cash positions from market fluctuations and in some cases they may not wish to do so. Again, the purpose of a hedge is not to avoid risk, but rather to manage or even profit from it.

The second fundamental purpose of a futures market is to facilitate firms’ acquisitions of operating capital – short term loans that finance firms’ purchases of intermediate goods such as inventories of grain or petroleum. For example, lenders are relatively more likely to finance, at or near prime lending rates, hedged (versus non-hedged) inventories. The futures contract is an efficient form of collateral because it costs only a fraction of the inventory’s value, or the margin on a short position in the futures market.

Speculators make the hedge possible because they absorb the inventory’s price risk; for example, the ultimate counterparty to the inventory dealer’s short position is a speculator. In the absence of futures markets, hedgers could only engage in forward contracts – unique agreements between private parties, who operate independently of an exchange or clearinghouse. Hence, the collateral value of a forward contract is less than that of a futures contract.3

The third fundamental purpose of a futures market is to provide information to decision makers regarding the market’s expectations of future economic events. So long as a futures market is efficient – the market forms expectations by taking into proper consideration all available information – its forecasts of future economic events are relatively more reliable than an individual’s. Forecast errors are expensive, and well informed, highly competitive, profit-seeking traders have a relatively greater incentive to minimize them.

The Evolution of Futures Trading in the U.S.

Early Nineteenth Century Grain Production and Marketing

Into the early nineteenth century, the vast majority of American grains – wheat, corn, barley, rye and oats – were produced throughout the hinterlands of the United States by producers who acted primarily as subsistence farmers – agricultural producers whose primary objective was to feed themselves and their families. Although many of these farmers sold their surplus production on the market, most lacked access to large markets, as well as the incentive, affordable labor supply, and myriad technologies necessary to practice commercial agriculture – the large scale production and marketing of surplus agricultural commodities.

At this time, the principal trade route to the Atlantic seaboard was by river through New Orleans4; though the South was also home to terminal markets – markets of final destination – for corn, provisions and flour. Smaller local grain markets existed along the tributaries of the Ohio and Mississippi Rivers and east-west overland routes. The latter were used primarily to transport manufactured (high valued and nonperishable) goods west.

Most farmers, and particularly those in the East North Central States – the region consisting today of Illinois, Indiana, Michigan, Ohio and Wisconsin – could not ship bulk grains to market profitably (Clark 1966, 4, 15).5 Instead, most converted grains into relatively high value flour, livestock, provisions and whiskies or malt liquors and shipped them south or, in the case of livestock, drove them east (14).6 Oats traded locally, if at all; their low value-to-weight ratios made their shipment, in bulk or otherwise, prohibitive (15n).

The Great Lakes provided a natural water route east to Buffalo but, in order to ship grain this way, producers in the interior East North Central region needed local ports to receive their production. Although the Erie Canal connected Lake Erie to the port of New York by 1825, water routes that connected local interior ports throughout northern Ohio to the Canal were not operational prior to the mid-1830s. Indeed, initially the Erie aided the development of the Old Northwest, not because it facilitated eastward grain shipments, but rather because it allowed immigrants and manufactured goods easy access to the West (Clark 1966, 53).

By 1835 the mouths of rivers and streams throughout the East North Central States had become the hubs, or port cities, from which farmers shipped grain east via the Erie. By this time, shippers could also opt to go south on the Ohio River and then upriver to Pittsburgh and ultimately to Philadelphia, or north on the Ohio Canal to Cleveland, Buffalo and ultimately, via the Welland Canal, to Lake Ontario and Montreal (19).

By 1836 shippers carried more grain north on the Great Lakes and through Buffalo than south on the Mississippi through New Orleans (Odle 1964, 441). However, as late as 1840, Ohio was the only state or region that participated significantly in the Great Lakes trade. Illinois, Indiana, Michigan, and the region of modern day Wisconsin either produced for their respective local markets or relied upon Southern demand. As of 1837 only 4,107 residents populated the “village” of Chicago, which became an official city in that year (Hieronymus 1977, 72).7

Antebellum Grain Trade Finance in the Old Northwest

Before the mid-1860s, a network of banks, grain dealers, merchants, millers and commission houses – buying and selling agents located in the central commodity markets – employed an acceptance system to finance the U.S. grain trade (see Clark 1966, 119; Odle 1964, 442). For example, a miller who required grain would instruct an agent in, say, New York to establish, on the miller’s behalf, a line of credit with a merchant there. The merchant extended this line of credit in the form of sight drafts, which the merchant made payable, in sixty or ninety days, up to the amount of the line of credit.

With this credit line established, commission agents in the hinterland would arrange with grain dealers to acquire the necessary grain. The commission agent would obtain warehouse receipts – dealer certified negotiable titles to specific lots and quantities of grain in store – from dealers, attach these to drafts that he drew on the merchant’s line of credit, and discount these drafts at his local bank in return for banknotes; the local bank would forward these drafts on to the New York merchant’s bank for redemption. The commission agents would use these banknotes to advance – lend – grain dealers roughly three quarters of the current market value of the grain. The commission agent would pay dealers the remainder (minus finance and commission fees) when the grain was finally sold in the East. That is, commission agents and grain dealers entered into consignment contracts.
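A short sketch may help fix the arithmetic of this consignment arrangement. The three-quarters advance follows the text; the lot size, prices, and the combined finance-and-commission rate are illustrative assumptions only.

```python
# Hypothetical consignment arithmetic under the antebellum acceptance system.
# The 75% advance follows the text; prices and the fee rate are assumptions.

bushels = 10_000
local_price = 0.60         # assumed current market value, dollars per bushel
eastern_sale_price = 0.72  # assumed price realized when the grain sells in the East
fee_rate = 0.05            # assumed combined finance and commission charge

market_value = bushels * local_price
advance = 0.75 * market_value                  # advanced to the dealer up front
sale_proceeds = bushels * eastern_sale_price
remainder = sale_proceeds - advance - fee_rate * sale_proceeds

print(f"advance to dealer:  ${advance:,.2f}")
print(f"final remittance:   ${remainder:,.2f}")  # paid after the eastern sale
```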

Unfortunately, this approach linked banks, grain dealers, merchants, millers and commission agents such that the “entire procedure was attended by considerable risk and speculation, which was assumed by both the consignee and consignor” (Clark 1966, 120). The system was reasonably adequate if grain prices went unchanged between the time the miller procured the credit and the time the grain (bulk or converted) was sold in the East, but this was rarely the case. The fundamental problem with this system of finance was that commission agents were effectively asking banks to lend them money to purchase as yet unsold grain. To be sure, this inadequacy was most apparent during financial panics, when many banks refused to discount these drafts (Odle 1964, 447).

Grain Trade Finance in Transition: Forward Contracts and Commodity Exchanges

In 1848 the Illinois-Michigan Canal connected the Illinois River to Lake Michigan. The canal enabled farmers in the hinterlands along the Illinois River to ship their produce to merchants located along the river. These merchants accumulated, stored and then shipped grain to Chicago, Milwaukee and Racine. At first, shippers tagged deliverables according to producer and region, while purchasers inspected and chose these tagged bundles upon delivery. Commercial activity at the three grain ports grew throughout the 1850s. Chicago emerged as a dominant grain (primarily corn) hub later that decade (Pierce 1957, 66).8

Amidst this growth of Lake Michigan commerce, a confluence of innovations transformed the grain trade and its method of finance. By the 1840s, grain elevators and railroads facilitated high volume grain storage and shipment, respectively. Consequently, country merchants and their Chicago counterparts required greater financing in order to store and ship this higher volume of grain.9 And, high volume grain storage and shipment required that inventoried grains be fungible – of such a nature that one part or quantity could be replaced by another equal part or quantity in the satisfaction of an obligation. For example, because a bushel of grade No. 2 Spring Wheat was fungible, its price did not depend on whether it came from Farmer A, Farmer B, Grain Elevator C, or Train Car D.

Merchants could secure these larger loans more easily and at relatively lower rates if they obtained firm price and quantity commitments from their buyers. So, merchants began to engage in forward (not futures) contracts. According to Hieronymus (1977), the first such “time contract” on record was made on March 13, 1851. It specified that 3,000 bushels of corn were to be delivered to Chicago in June at a price of one cent below the March 13th cash market price (74).10

Meanwhile, commodity exchanges serviced the trade’s need for fungible grain. In the 1840s and 1850s these exchanges emerged as associations for dealing with local issues such as harbor infrastructure and commercial arbitration (e.g., Detroit in 1847, Buffalo, Cleveland and Chicago in 1848 and Milwaukee in 1849) (see Odle 1964). By the 1850s they established a system of staple grades, standards and inspections, all of which rendered inventory grain fungible (Baer and Saxon 1949, 10; Chandler 1977, 211). As collection points for grain, cotton, and provisions, they weighed, inspected and classified commodity shipments that passed from west to east. They also facilitated organized trading in spot and forward markets (Chandler 1977, 211; Odle 1964, 439).11

The largest and most prominent of these exchanges was the Board of Trade of the City of Chicago, a grain and provisions exchange established in 1848 by a State of Illinois corporate charter (Boyle 1920, 38; Lurie 1979, 27); the exchange is known today as the Chicago Board of Trade (CBT). For at least its first decade, the CBT functioned as a meeting place for merchants to resolve contract disputes and discuss commercial matters of mutual concern. Participation was part-time at best. The Board’s first directorate of 25 members included “a druggist, a bookseller, a tanner, a grocer, a coal dealer, a hardware merchant, and a banker” and attendance was often encouraged by free lunches (Lurie 1979, 25).

However, in 1859 the CBT became a state- (of Illinois) chartered private association. As such, the exchange requested and received from the Illinois legislature sanction to establish rules “for the management of their business and the mode in which it shall be transacted, as they may think proper;” to arbitrate over and settle disputes with the authority as “if it were a judgment rendered in the Circuit Court;” and to inspect, weigh and certify grain and grain trades such that these certifications would be binding upon all CBT members (Lurie 1979, 27).

Nineteenth Century Futures Trading

By the 1850s traders sold and resold forward contracts prior to actual delivery (Hieronymus 1977, 75). A trader could not offset, in the futures market sense of the term, a forward contract. Nonetheless, the existence of a secondary market – market for extant, as opposed to newly issued securities – in forward contracts suggests, if nothing else, that speculators were active in these early time contracts.

On March 27, 1863, the Chicago Board of Trade adopted its first rules and procedures for trade in forwards on the exchange (Hieronymus 1977, 76). The rules addressed contract settlement, which was (and still is) the fundamental challenge associated with a forward contract – finding a trader who was willing to take a position in a forward contract was relatively easy to do; finding that trader at the time of contract settlement was not.

The CBT began to transform actively traded and reasonably homogeneous forward contracts into futures contracts in May, 1865. At this time, the CBT: restricted trade in time contracts to exchange members; standardized contract specifications; required traders to deposit margins; and specified formally contract settlement, including payments and deliveries, and grievance procedures (Hieronymus 1977, 76).

The inception of organized futures trading is difficult to date. This is due, in part, to semantic ambiguities – e.g., was a “to arrive” contract a forward contract or a futures contract or neither? However, most grain trade historians agree that storage (grain elevators), shipment (railroad), and communication (telegraph) technologies, a system of staple grades and standards, and the impetus to speculation provided by the Crimean and U.S. Civil Wars enabled futures trading to ripen by about 1874, at which time the CBT was the U.S.’s premier organized commodities (grain and provisions) futures exchange (Baer and Saxon 1949, 87; Chandler 1977, 212; CBT 1936, 18; Clark 1966, 120; Dies 1925, 15; Hoffman 1932, 29; Irwin 1954, 77, 82; Rothstein 1966, 67).

Nonetheless, futures exchanges in the mid-1870s lacked modern clearinghouses, with which most exchanges began to experiment only in the mid-1880s. For example, the CBT’s clearinghouse got its start in 1884, and a complete and mandatory clearing system was in place at the CBT by 1925 (Hoffman 1932, 199; Williams 1982, 306). The earliest formal clearing and offset procedures were established by the Minneapolis Grain Exchange in 1891 (Peck 1985, 6).

Even so, rudiments of a clearing system – one that freed traders from dealing directly with one another – were in place by the 1870s (Hoffman 1920, 189). That is to say, brokers assumed the counter-position to every trade, much as clearinghouse members would do decades later. Brokers settled offsets between one another, though in the absence of a formal clearing procedure these settlements were difficult to accomplish.

Direct settlements were simple enough. Here, two brokers would settle in cash their offsetting positions between one another only. Nonetheless, direct settlements were relatively uncommon because offsetting purchases and sales between brokers rarely balanced with respect to quantity. For example, B1 might buy a 5,000 bushel corn future from B2, who then might buy a 6,000 bushel corn future from B1; in this example, 1,000 bushels of corn remain unsettled between B1 and B2. Of course, the two brokers could offset the remaining 1,000 bushel contract if B2 sold a 1,000 bushel corn future to B1. But what if B2 had already sold a 1,000 bushel corn future to B3, who had sold a 1,000 bushel corn future to B1? In this case, each broker’s net futures market position is offset, but all three must meet in order to settle their respective positions. Brokers referred to such a meeting as a ring settlement. Finally, if, in this example, B1 and B3 did not have positions with each other, B2 could settle her position if she transferred her commitment (which she has with B1) to B3. Brokers referred to this method as a transfer settlement. In either ring or transfer settlements, brokers had to find other brokers who held and wished to settle open counter-positions. Often brokers used runners to search literally the offices and corridors for the requisite counter-parties (see Hoffman 1932, 185-200).
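The netting logic behind ring and transfer settlements can be illustrated with a brief Python sketch. The broker names and quantities follow the example in the text; the code is a simplified illustration of multilateral netting, not a reconstruction of any exchange’s actual procedure.

```python
# Simplified netting of the broker positions described in the text.
# Positive quantities are purchases (long), negative are sales (short), in bushels.

from collections import defaultdict

# (buyer, seller, bushels): B1 buys 5,000 from B2; B2 buys 6,000 from B1;
# B3 buys 1,000 from B2; and B1 buys 1,000 from B3.
trades = [
    ("B1", "B2", 5_000),
    ("B2", "B1", 6_000),
    ("B3", "B2", 1_000),
    ("B1", "B3", 1_000),
]

net = defaultdict(int)
for buyer, seller, qty in trades:
    net[buyer] += qty
    net[seller] -= qty

for broker in sorted(net):
    print(broker, net[broker])   # every broker's net position is zero...

# ...yet no pair of brokers balances bilaterally, so settling requires either a
# "ring" meeting of all three or a transfer of B2's open commitment to B3.
```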

Finally, the transformation from forward to futures trading that took place in Chicago grain markets occurred almost simultaneously in New York cotton markets. Forward contracts for cotton traded in New York (and Liverpool, England) by the 1850s. And, like Chicago, organized trading in cotton futures began on the New York Cotton Exchange in about 1870; rules and procedures formalized the practice in 1872. Futures trading on the New Orleans Cotton Exchange began around 1882 (Hieronymus 1977, 77).

Other successful nineteenth century futures exchanges include the New York Produce Exchange, the Milwaukee Chamber of Commerce, the Merchant’s Exchange of St. Louis, the Chicago Open Board of Trade, the Duluth Board of Trade, and the Kansas City Board of Trade (Hoffman 1920, 33; see Peck 1985, 9).

Early Futures Market Performance

Volume

Data on grain futures volume prior to the 1880s are not available (Hoffman 1932, 30). However, in the 1870s “[CBT] officials openly admitted that there was no actual delivery of grain in more than ninety percent of contracts” (Lurie 1979, 59). Indeed, Chart 1 demonstrates that trading was relatively voluminous in the nineteenth century.

An annual average of 23,600 million bushels of grain futures traded between 1884 and 1888, or eight times the annual average amount of crops produced during that period. By comparison, an annual average of 25,803 million bushels of grain futures traded between 1966 and 1970, or four times the annual average amount of crops produced during that period. In 2002, futures volume outnumbered crop production by a factor of eleven.

The comparable data for cotton futures are presented in Chart 2. Again here, trading in the nineteenth century was significant. To wit, by 1879 futures volume had outnumbered production by a factor of five, and by 1896 this factor had reached eight.

Price of Storage

Nineteenth century observers of early U.S. futures markets either credited them for stabilizing food prices, or discredited them for wagering on, and intensifying, the economic hardships of Americans (Baer and Saxon 1949, 12-20, 56; Chandler 1977, 212; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115). To be sure, the performance of early futures markets remains relatively unexplored. The extant research on the subject has generally examined this performance in the context of two perspectives on the theory of efficiency: the price of storage and futures price efficiency more generally.

Holbrook Working pioneered research into the price of storage – the relationship, at a point in time, between prices (of storable agricultural commodities) applicable to different future dates (Working 1949, 1254).12 For example, what is the relationship between the current spot price of wheat and the current September 2004 futures price of wheat? Or, what is the relationship between the current September 2004 futures price of wheat and the current May 2005 futures price of wheat?

Working reasoned that these prices could not differ because of events that were expected to occur between these dates. For example, if the May 2004 wheat futures price is less than the September 2004 price, this cannot be due to, say, the expectation of a small harvest between May 2004 and September 2004. On the contrary, traders should factor such an expectation into both May and September prices. And, assuming that they do, then this difference can only reflect the cost of carrying – storing – these commodities over time.13 This strict interpretation has since been modified somewhat, however (see Peck 1985, 44).

So, for example, the September 2004 price equals the May 2004 price plus the cost of storing wheat between May 2004 and September 2004. If the difference between these prices is greater or less than the cost of storage, and the market is efficient, arbitrage will bring the difference back to the cost of storage – e.g., if the difference in prices exceeds the cost of storage, then traders can profit if they buy the May 2004 contract, sell the September 2004 contract, take delivery in May and store the wheat until September. Working (1953) demonstrated empirically that the theory of the price of storage could explain quite satisfactorily these inter-temporal differences in wheat futures prices at the CBT as early as the late 1880s (Working 1953, 556).
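The arbitrage reasoning behind the price of storage can be expressed in a few lines of Python. The futures prices and the per-month storage cost below are hypothetical; the point is only that whenever the May-September spread exceeds the cost of carrying wheat over the interval, the buy-May, sell-September strategy described above locks in the difference.

```python
# Hypothetical price-of-storage (cost-of-carry) check for wheat futures.
# All prices are dollars per bushel and purely illustrative.

may_price = 3.40       # May 2004 futures price
sep_price = 3.60       # September 2004 futures price
storage_cost = 0.04    # assumed all-in cost of storage per bushel per month
months = 4             # May delivery carried to September

carry = storage_cost * months    # cost of carrying the wheat, per bushel
spread = sep_price - may_price   # inter-temporal price difference, per bushel

if spread > carry:
    # Buy May, take delivery, store, and deliver against a September sale.
    print(f"Cash-and-carry arbitrage earns {spread - carry:.2f} per bushel")
elif spread < carry:
    print("Spread below carrying cost: no incentive to store for delivery")
else:
    print("Spread equals the cost of storage: no arbitrage")
```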

Futures Price Efficiency

Many contemporary economists tend to focus on futures price efficiency more generally (for example, Beck 1994; Kahl and Tomek 1986; Kofi 1973; McKenzie, et al. 2002; Tomek and Gray, 1970). That is to say, do futures prices shadow consistently (but not necessarily equal) traders’ rational expectations of future spot prices? Here, the research focuses on the relationship between, say, the cash price of wheat in September 2004 and the September 2004 futures price of wheat quoted two months earlier in July 2004.

Figure 1 illustrates the behavior of corn futures prices and their corresponding spot prices between 1877 and 1890. The data consist of the average month t futures price in the last full week of month t-2 and the average cash price in the first full week of month t.

The futures price and its corresponding spot price need not be equal; futures price efficiency does not mean that the futures market is clairvoyant. But, a difference between the two series should exist only because of an unpredictable forecast error and a risk premium – futures prices may be, say, consistently below the expected future spot price if long speculators require an inducement, or premium, to enter the futures market. Recent work finds strong evidence that these early corn (and corresponding wheat) futures prices are, in the long run, efficient estimates of their underlying spot prices (Santos 2002, 35). Although these results and Working’s empirical studies on the price of storage support, to some extent, the notion that early U.S. futures markets were efficient, this question remains largely unexplored by economic historians.
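One common way to operationalize such a test, sketched below in Python, is to regress realized cash prices on the futures prices quoted two months earlier and ask whether the intercept is near zero and the slope near one. This is a generic illustration of the approach, not a reproduction of the estimations in Santos (2002) or the other studies cited above; the data file and column names are hypothetical placeholders.

```python
# Illustrative weak-form efficiency test: regress the month-t cash price on the
# month-t futures price observed two months earlier. The data file and column
# names are hypothetical placeholders.

import numpy as np
import pandas as pd

data = pd.read_csv("corn_prices_1877_1890.csv")   # hypothetical file
spot = data["cash_price"].to_numpy()              # cash price, first week of month t
futures = data["futures_price_lag2"].to_numpy()   # futures price, last week of month t-2

# Ordinary least squares of the spot price on a constant and the lagged futures price.
X = np.column_stack([np.ones_like(futures), futures])
beta, *_ = np.linalg.lstsq(X, spot, rcond=None)
intercept, slope = beta

residuals = spot - X @ beta
print(f"intercept = {intercept:.3f}, slope = {slope:.3f}")
print(f"mean forecast error = {residuals.mean():.3f}")
# Efficiency with no risk premium implies an intercept near zero and a slope near
# one; a persistently nonzero mean error is consistent with a risk premium.
```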

The Struggle for Legitimacy

Nineteenth century America was both fascinated and appalled by futures trading. This is apparent from the litigation and many public debates surrounding its legitimacy (Baer and Saxon 1949, 55; Buck 1913, 131, 271; Hoffman 1932, 29, 351; Irwin 1954, 80; Lurie 1979, 53, 106). Many agricultural producers, the lay community and, at times, legislatures and the courts, believed trading in futures was tantamount to gambling. The difference between the latter and speculating, which required the purchase or sale of a futures contract but not the shipment or delivery of the commodity, was ostensibly lost on most Americans (Baer and Saxon 1949, 56; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115).

Many Americans believed that futures traders frequently manipulated prices. From the end of the Civil War until 1879 alone, corners – control of enough of the available supply of a commodity to manipulate its price – allegedly occurred with varying degrees of success in wheat (1868, 1871, 1878/9), corn (1868), oats (1868, 1871, 1874), rye (1868) and pork (1868) (Boyle 1920, 64-65). This manipulation continued throughout the century and culminated in the Three Big Corners – the Hutchinson (1888), the Leiter (1898), and the Patten (1909). The Patten corner was later debunked (Boyle 1920, 67-74), while the Leiter corner was the inspiration for Frank Norris’s classic The Pit: A Story of Chicago (Norris 1903; Rothstein 1982, 60).14 In any case, reports of market corners on America’s early futures exchanges were likely exaggerated (Boyle 1920, 62-74; Hieronymus 1977, 84), as were their long term effects on prices and hence consumer welfare (Rothstein 1982, 60).

By 1892 thousands of petitions to Congress called for the prohibition of “speculative gambling in grain” (Lurie, 1979, 109). And, attacks from state legislatures were seemingly unrelenting: in 1812 a New York act made short sales illegal (the act was repealed in 1858); in 1841 a Pennsylvania law made short sales, where the position was not covered in five days, a misdemeanor (the law was repealed in 1862); in 1882 an Ohio law and a similar one in Illinois tried unsuccessfully to restrict cash settlement of futures contracts; in 1867 the Illinois constitution forbade dealing in futures contracts (this was repealed by 1869); in 1879 California’s constitution invalidated futures contracts (this was effectively repealed in 1908); and, in 1882, 1883 and 1885, Mississippi, Arkansas, and Texas, respectively, passed laws that equated futures trading with gambling, thus making the former a misdemeanor (Peterson 1933, 68-69).

Two nineteenth century challenges to futures trading are particularly noteworthy. The first was the so-called Anti-Option movement. According to Lurie (1979), the movement was fueled by agrarians and their sympathizers in Congress who wanted to end what they perceived as wanton speculative abuses in futures trading (109). Although options were (and are) not futures contracts, and were already outlawed on most exchanges by the 1890s, the legislation did not distinguish between the two instruments and effectively sought to outlaw both (Lurie 1979, 109).

In 1890 the Butterworth Anti-Option Bill was introduced in Congress but never came to a vote. However, in 1892 the Hatch (and Washburn) Anti-Option bills passed both houses of Congress, and failed only on technicalities during reconciliation between the two houses. Had either bill become law, it would have effectively ended options and futures trading in the United States (Lurie 1979, 110).

A second notable challenge was the bucket shop controversy, which challenged the legitimacy of the CBT in particular. A bucket shop was essentially an association of gamblers who met outside the CBT and wagered on the direction of futures prices. These associations had legitimate-sounding names such as the Christie Grain and Stock Company and the Public Grain Exchange. To most Americans, these “exchanges” were no less legitimate than the CBT. That some CBT members were guilty of “bucket shopping” only made matters worse!

The bucket shop controversy was protracted and colorful (see Lurie 1979, 138-167). Between 1884 and 1887 Illinois, Iowa, Missouri and Ohio passed anti-bucket shop laws (Lurie 1979, 95). The CBT believed these laws entitled it to restrict bucket shops’ access to CBT price quotes, without which the bucket shops could not exist. Bucket shops argued that they were competing exchanges, and hence immune to extant anti-bucket shop laws. As such, they sued the CBT for access to these price quotes.15

The two sides and the telegraph companies fought in the courts for decades over access to these price quotes; the CBT’s very survival hung in the balance. After roughly twenty years of litigation, the Supreme Court of the U.S. effectively ruled in favor of the Chicago Board of Trade and against bucket shops (Board of Trade of the City of Chicago v. Christie Grain & Stock Co., 198 U.S. 236, 25 Sup. Ct. (1905)). Bucket shops disappeared completely by 1915 (Hieronymus 1977, 90).

Regulation

The anti-option movement, the bucket shop controversy and the American public’s discontent with speculation mask an ironic reality of futures trading: it escaped government regulation until after the First World War, though early exchanges did practice self-regulation or administrative law.16 The absence of any formal governmental oversight was due in large part to two factors. First, prior to 1895, the opposition tried unsuccessfully to outlaw rather than regulate futures trading. Second, strong agricultural commodity prices between 1895 and 1920 weakened the opposition, which had blamed futures markets for low agricultural commodity prices (Hieronymus 1977, 313).

Grain prices fell significantly by the end of the First World War, and opposition to futures trading grew once again (Hieronymus 1977, 313). In 1922 the U.S. Congress enacted the Grain Futures Act, which required exchanges to be licensed, limited market manipulation and publicized trading information (Leuthold 1989, 369).17 However, regulators could rarely enforce the act because it enabled them to discipline exchanges, rather than individual traders. To discipline an exchange was essentially to suspend it, a punishment unfit (too harsh) for most exchange-related infractions.

The Commodity Exchange Act of 1936 enabled the government to deal directly with traders rather than exchanges. It established the Commodity Exchange Authority (CEA), a bureau of the U.S. Department of Agriculture, to monitor and investigate trading activities and prosecute price manipulation as a criminal offense. The act also: limited speculators’ trading activities and the sizes of their positions; regulated futures commission merchants; banned options trading on domestic agricultural commodities; and restricted futures trading – designated which commodities were to be traded on which licensed exchanges (see Hieronymus 1977; Leuthold, et al. 1989).

Although Congress amended the Commodity Exchange Act in 1968 in order to increase the regulatory powers of the Commodity Exchange Authority, the latter was ill-equipped to handle the explosive growth in futures trading in the 1960s and 1970s. So, in 1974 Congress passed the Commodity Futures Trading Act, which created far-reaching federal oversight of U.S. futures trading and established the Commodity Futures Trading Commission (CFTC).

Like the futures legislation before it, the Commodity Futures Trading Act seeks “to ensure proper execution of customer orders and to prevent unlawful manipulation, price distortion, fraud, cheating, fictitious trades, and misuse of customer funds” (Leuthold, et al. 1989, 34). Unlike the CEA, the CFTC was given broad regulatory powers over all futures trading and related exchange activities throughout the U.S. The CFTC oversees and approves modifications to extant contracts and the creation and introduction of new contracts. The CFTC consists of five presidential appointees who are confirmed by the U.S. Senate.

The Futures Trading Act of 1982 amended the Commodity Futures Trading Act of 1974. The 1982 act legalized options trading on agricultural commodities and identified more clearly the jurisdictions of the CFTC and Securities and Exchange Commission (SEC). The regulatory overlap between the two organizations arose because of the explosive popularity during the 1970s of financial futures contracts. Today, the CFTC regulates all futures contracts and options on futures contracts traded on U.S. futures exchanges; the SEC regulates all financial instrument cash markets as well as all other options markets.

Finally, in 2000 Congress passed the Commodity Futures Modernization Act, which reauthorized the Commodity Futures Trading Commission for five years and repealed an 18-year old ban on trading single stock futures. The bill also sought to increase competition and “reduce systematic risk in markets for futures and over-the-counter derivatives” (H.R. 5660, 106th Congress 2nd Session).

Modern Futures Markets

The growth in futures trading has been explosive in recent years (Chart 3).

Futures trading extended beyond physical commodities in the 1970s and 1980s – currency futures in 1972; interest rate futures in 1975; and stock index futures in 1982 (Silber 1985, 83). The enormous growth of financial futures at this time was likely because of the breakdown of the Bretton Woods exchange rate regime, which essentially fixed the relative values of industrial economies’ exchange rates to the American dollar (see Bordo and Eichengreen 1993), and relatively high inflation from the late 1960s to the early 1980s. Flexible exchange rates and inflation introduced, respectively, exchange and interest rate risks, which hedgers sought to mitigate through the use of financial futures. Finally, although futures contracts on agricultural commodities remain popular, financial futures and options dominate trading today. Trading volume in metals, minerals and energy remains relatively small.

Trading volume in agricultural futures contracts first dropped below 50% of total futures volume in 1982. By 1985 this volume had dropped to less than one-fourth of all trading. In the same year the volume of futures trading in the U.S. Treasury bond contract alone exceeded trading volume in all agricultural commodities combined (Leuthold et al. 1989, 2). Today exchanges in the U.S. actively trade contracts on several underlying assets (Table 1). These range from the traditional – e.g., agriculture and metals – to the truly innovative – e.g., the weather. The latter’s payoff varies with the number of degree-days by which the temperature in a particular region deviates from 65 degrees Fahrenheit.
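As a brief illustration of how a degree-day payoff works, the sketch below computes heating degree days over a hypothetical week and values them at an assumed dollar multiplier; both the temperatures and the multiplier are illustrative assumptions rather than actual contract terms.

```python
# Hypothetical heating-degree-day (HDD) payoff. An HDD accrues for each degree
# by which a day's average temperature falls below 65 degrees Fahrenheit.
# The $20 multiplier and the temperatures are illustrative assumptions only.

BASE_TEMP = 65.0
DOLLARS_PER_HDD = 20.0   # assumed contract multiplier

daily_avg_temps = [52.0, 48.5, 61.0, 67.0, 70.5, 55.0, 58.0]

hdd = sum(max(BASE_TEMP - t, 0.0) for t in daily_avg_temps)
payoff = hdd * DOLLARS_PER_HDD

print(f"heating degree days = {hdd:.1f}")
print(f"long HDD futures payoff = ${payoff:,.2f}")
```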

Table 1: Select Futures Contracts Traded as of 2002

Agriculture: Corn, Oats, Soybeans, Soybean meal, Soybean oil, Wheat, Barley, Flaxseed, Canola, Rye, Cattle, Hogs, Pork bellies, Cocoa, Coffee, Cotton, Milk, Orange juice, Sugar, Lumber, Rice

Currencies: British pound, Canadian dollar, Japanese yen, Euro, Swiss franc, Australian dollar, Mexican peso, Brazilian real

Equity Indexes: S&P 500 index, Dow Jones Industrials, S&P Midcap 400, Nasdaq 100, NYSE index, Russell 2000 index, Nikkei 225, FTSE index, CAC-40, DAX-30, All ordinary, Toronto 35, Dow Jones Euro STOXX 50

Interest Rates: Eurodollars, Euroyen, Euro-denominated bond, Euroswiss, Sterling, British gov. bond (gilt), German gov. bond, Italian gov. bond, Canadian gov. bond, Treasury bonds, Treasury notes, Treasury bills, LIBOR, EURIBOR, Municipal bond index, Federal funds rate, Bankers’ acceptance

Metals & Energy: Copper, Aluminum, Gold, Platinum, Palladium, Silver, Crude oil, Heating oil, Gas oil, Natural gas, Gasoline, Propane, CRB index, Electricity, Weather

Source: Bodie, Kane and Marcus (2005), p. 796.

Table 2 provides a list of today’s major futures exchanges.

Table 2: Select Futures Exchanges as of 2002

Chicago Board of Trade (CBT)
Chicago Mercantile Exchange (CME)
Coffee, Sugar & Cocoa Exchange, New York (CSCE)
COMEX, a division of the NYME (CMX)
European Exchange (EUREX)
Financial Exchange, a division of the NYCE (FINEX)
International Petroleum Exchange (IPE)
Kansas City Board of Trade (KC)
London International Financial Futures Exchange (LIFFE)
Marche a Terme International de France (MATIF)
Montreal Exchange (ME)
Minneapolis Grain Exchange (MPLS)
Unit of Euronext.liffe (NQLX)
New York Cotton Exchange (NYCE)
New York Futures Exchange (NYFE)
New York Mercantile Exchange (NYME)
OneChicago (ONE)
Sydney Futures Exchange (SFE)
Singapore Exchange Ltd. (SGX)

Source: Wall Street Journal, 5/12/2004, C16.

Modern trading differs from its nineteenth century counterpart in other respects as well. First, the popularity of open outcry trading is waning. For example, today the CBT executes roughly half of all trades electronically. And, electronic trading is the rule, rather than the exception throughout Europe. Second, today roughly 99% of all futures contracts are settled prior to maturity. Third, in 1982 the Commodity Futures Trading Commission approved cash settlement – delivery that takes the form of a cash balance – on financial index and Eurodollar futures, whose underlying assets are not deliverable, as well as on several non-financial contracts including lean hog, feeder cattle and weather (Carlton 1984, 253). And finally, on Dec. 6, 2002, the Chicago Mercantile Exchange became the first publicly traded financial exchange in the U.S.

References and Further Reading

Baer, Julius B. and Olin G. Saxon. Commodity Exchanges and Futures Trading. New York: Harper & Brothers, 1949.

Bodie, Zvi, Alex Kane and Alan J. Marcus. Investments. New York: McGraw-Hill/Irwin, 2005.

Bordo, Michael D. and Barry Eichengreen, editors. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Boyle, James E. Speculation and the Chicago Board of Trade. New York: MacMillan Company, 1920.

Buck, Solon J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic, and Social Manifestations, 1870-1880. Cambridge, MA: Harvard University Press, 1913.

Carlton, Dennis W. “Futures Markets: Their Purpose, Their History, Their Growth, Their Successes and Failures.” Journal of Futures Markets 4, no. 3 (1984): 237-271.

Chicago Board of Trade Bulletin. The Development of the Chicago Board of Trade. Chicago: Chicago Board of Trade, 1936.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge: Harvard University Press, 1977.

Clark, John G. The Grain Trade in the Old Northwest. Urbana: University of Illinois Press, 1966.

Commodity Futures Trading Commission. Annual Report. Washington, D.C. 2003.

Dies, Edward J. The Wheat Pit. Chicago: The Argyle Press, 1925.

Ferris, William G. The Grain Traders: The Story of the Chicago Board of Trade. East Lansing, MI: Michigan State University Press, 1988.

Hieronymus, Thomas A. Economics of Futures Trading for Commercial and Personal Profit. New York: Commodity Research Bureau, Inc., 1977.

Hoffman, George W. Futures Trading upon Organized Commodity Markets in the United States. Philadelphia: University of Pennsylvania Press, 1932.

Irwin, Harold S. Evolution of Futures Trading. Madison, WI: Mimir Publishers, Inc., 1954.

Leuthold, Raymond M., Joan C. Junkus and Jean E. Cordier. The Theory and Practice of Futures Markets. Champaign, IL: Stipes Publishing L.L.C., 1989.

Lurie, Jonathan. The Chicago Board of Trade 1859-1905. Urbana: University of Illinois Press, 1979.

National Agricultural Statistics Service. “Historical Track Records.” Agricultural Statistics Board, U.S. Department of Agriculture, Washington, D.C. April 2004.

Norris, Frank. The Pit: A Story of Chicago. New York, NY: Penguin Group, 1903.

Odle, Thomas. “Entrepreneurial Cooperation on the Great Lakes: The Origin of the Methods of American Grain Marketing.” Business History Review 38, (1964): 439-55.

Peck, Anne E., editor. Futures Markets: Their Economic Role. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Peterson, Arthur G. “Futures Trading with Particular Reference to Agricultural Commodities.” Agricultural History 8, (1933): 68-80.

Pierce, Bessie L. A History of Chicago: Volume III, the Rise of a Modern City. New York: Alfred A. Knopf, 1957.

Rothstein, Morton. “The International Market for Agricultural Commodities, 1850-1873.” In Economic Change in the Civil War Era, edited by David T. Gilchrist and W. David Lewis, 62-71. Greenville, DE: Eleutherian Mills-Hagley Foundation, 1966.

Rothstein, Morton. “Frank Norris and Popular Perceptions of the Market.” Agricultural History 56, (1982): 50-66.

Santos, Joseph. “Did Futures Markets Stabilize U.S. Grain Prices?” Journal of Agricultural Economics 53, no. 1 (2002): 25-36.

Silber, William L. “The Economic Role of Financial Futures.” In Futures Markets: Their Economic Role, edited by Anne E. Peck, 83-114. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Stein, Jerome L. The Economics of Futures Markets. Oxford: Basil Blackwell Ltd, 1986.

Taylor, Charles. H. History of the Board of Trade of the City of Chicago. Chicago: R. O. Law, 1917.

Werner, Walter and Steven T. Smith. Wall Street. New York: Columbia University Press, 1991.

Williams, Jeffrey C. “The Origin of Futures Markets.” Agricultural History 56, (1982): 306-16.

Working, Holbrook. “The Theory of the Price of Storage.” American Economic Review 39, (1949): 1254-62.

Working, Holbrook. “Hedging Reconsidered.” Journal of Farm Economics 35, (1953): 544-61.

1 The clearinghouse is typically a corporation owned by a subset of exchange members. For details regarding the clearing arrangements of a specific exchange, go to www.cftc.gov and click on “Clearing Organizations.”

2 The vast majority of contracts are offset. Outright delivery occurs when the buyer receives from, or the seller “delivers” to the exchange a title of ownership, and not the actual commodity or financial security – the urban legend of the trader who neglected to settle his long position and consequently “woke up one morning to find several car loads of a commodity dumped on his front yard” is indeed apocryphal (Hieronymus 1977, 37)!

3 Nevertheless, forward contracts remain popular today (see Peck 1985, 9-12).

4 The importance of New Orleans as a point of departure for U.S. grain and provisions prior to the Civil War is unquestionable. According to Clark (1966), “New Orleans was the leading export center in the nation in terms of dollar volume of domestic exports, except for 1847 and a few years during the 1850s, when New York’s domestic exports exceeded those of the Crescent City” (36).

5 This area was responsible for roughly half of U.S. wheat production and a third of U.S. corn production just prior to 1860. Southern planters dominated corn output during the early to mid-1800s.

6 Millers milled wheat into flour; pork producers fed corn to pigs, which producers slaughtered for provisions; distillers and brewers converted rye and barley into whiskey and malt liquors, respectively; and ranchers fed grains and grasses to cattle, which were then driven to eastern markets.

7 Significant advances in transportation made the grain trade’s eastward expansion possible, but the strong and growing demand for grain in the East made the trade profitable. The growth in domestic grain demand during the early to mid-nineteenth century reflected the strong growth in eastern urban populations. Between 1820 and 1860, the populations of Baltimore, Boston, New York and Philadelphia increased by over 500% (Clark 1966, 54). Moreover, as the 1840s approached, foreign demand for U.S. grain grew. Between 1845 and 1847, U.S. exports of wheat and flour rose from 6.3 million bushels to 26.3 million bushels and corn exports grew from 840,000 bushels to 16.3 million bushels (Clark 1966, 55).

8 Wheat production was shifting to the trans-Mississippi West, which produced 65% of the nation’s wheat by 1899 and 90% by 1909, and railroads based in the Lake Michigan port cities intercepted the Mississippi River trade that would otherwise have headed to St. Louis (Clark 1966, 95). Lake Michigan port cities also benefited from a growing concentration of corn production in the West North Central region – Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota and South Dakota – which by 1899 produced 40% of the country’s corn (Clark 1966, 4).

9 Corn had to be dried immediately after it was harvested and could only be shipped profitably by water to Chicago, but only after rivers and lakes had thawed; so, country merchants stored large quantities of corn. On the other hand, wheat was more valuable relative to its weight, and it could be shipped to Chicago by rail or road immediately after it was harvested; so, Chicago merchants stored large quantities of wheat.

10 This is consistent with Odle (1964), who adds that “the creators of the new system of marketing [forward contracts] were the grain merchants of the Great Lakes” (439). However, Williams (1982) presents evidence of such contracts between Buffalo and New York City as early as 1847 (309). To be sure, Williams proffers an intriguing case that forward and, in effect, future trading was active and quite sophisticated throughout New York by the late 1840s. Moreover, he argues that this trading grew not out of activity in Chicago, whose trading activities were quite primitive at this early date, but rather trading in London and ultimately Amsterdam. Indeed, “time bargains” were common in London and New York securities markets in the mid- and late 1700s, respectively. A time bargain was essentially a cash-settled financial forward contract that was unenforceable by law, and as such “each party was forced to rely on the integrity and credit of the other” (Werner and Smith 1991, 31). According to Werner and Smith, “time bargains prevailed on Wall Street until 1840, and were gradually replaced by margin trading by 1860” (68). They add that, “margin trading … had an advantage over time bargains, in which there was little protection against default beyond the word of another broker. Time bargains also technically violated the law as wagering contracts; margin trading did not” (135). Between 1818 and 1840 these contracts comprised anywhere from 0.7% (49-day average in 1830) to 34.6% (78-day average in 1819) of daily exchange volume on the New York Stock & Exchange Board (Werner and Smith 1991, 174).

11 Of course, forward markets could and indeed did exist in the absence of both grading standards and formal exchanges, though to what extent they existed is unclear (see Williams 1982).

12 In the parlance of modern financial futures, the term cost of carry is used instead of the term storage. For example, the cost of carrying a bond is comprised of the cost of acquiring and holding (or storing) it until delivery minus the return earned during the carry period.

13 More specifically, the price of storage is comprised of three components: (1) physical costs such as warehouse and insurance; (2) financial costs such as borrowing rates of interest; and (3) the convenience yield – the return that the merchant, who stores the commodity, derives from maintaining an inventory in the commodity. The marginal costs of (1) and (2) are increasing functions of the amount stored; the more the merchant stores, the greater the marginal costs of warehouse use, insurance and financing. Whereas the marginal benefit of (3) is a decreasing function of the amount stored; put differently, the smaller the merchant’s inventory, the more valuable each additional unit of inventory becomes. Working used this convenience yield to explain a negative price of storage – the nearby contract is priced higher than the faraway contract; an event that is likely to occur when supplies are exceptionally low. In this instance, there is little for inventory dealers to store. Hence, dealers face extremely low physical and financial storage costs, but extremely high convenience yields. The price of storage turns negative; essentially, inventory dealers are willing to pay to store the commodity.

14 Norris’s protagonist, Curtis Jadwin, is a wheat speculator who is emotionally consumed and ultimately destroyed when a nineteenth century CBT wheat futures corner backfires on him, while the welfare of producers and consumers hangs in the balance.

15 One particularly colorful incident in the controversy came when the Supreme Court of Illinois ruled that the CBT had to either make price quotes public or restrict access to everyone. When the Board opted for the latter, it found it needed to “prevent its members from running (often literally) between the [CBT and a bucket shop next door], but with minimal success. Board officials at first tried to lock the doors to the exchange…However, after one member literally battered down the door to the east side of the building, the directors abandoned this policy as impracticable if not destructive” (Lurie 1979, 140).

16 Administrative law is “a body of rules and doctrines which deals with the powers and actions of administrative agencies” that are organizations other than the judiciary or legislature. These organizations affect the rights of private parties “through either adjudication, rulemaking, investigating, prosecuting, negotiating, settling, or informally acting” (Lurie 1979, 9).

17 In 1921 Congress passed The Futures Trading Act, which was declared unconstitutional.

Citation: Santos, Joseph. “A History of Futures Trading in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-history-of-futures-trading-in-the-united-states/

The Economic History of the Fur Trade: 1670 to 1870

Ann M. Carlos, University of Colorado
Frank D. Lewis, Queen’s University

Introduction

A commercial fur trade in North America grew out of the early contact between Indians and European fishermen who were netting cod on the Grand Banks off Newfoundland and on the Bay of Gaspé near Quebec. Indians would trade the pelts of small animals, such as mink, for knives and other iron-based products, or for textiles. Exchange at first was haphazard, and it was only in the late sixteenth century, when the wearing of beaver hats became fashionable, that firms were established that dealt exclusively in furs. High quality pelts are available only where winters are severe, so the trade took place predominantly in the regions we now know as Canada, although some activity took place further south along the Mississippi River and in the Rocky Mountains. There was also a market in deer skins that predominated in the Appalachians.

The first firms to participate in the fur trade were French, and under French rule the trade spread along the St. Lawrence and Ottawa Rivers, and down the Mississippi. In the seventeenth century, following the Dutch, the English developed a trade through Albany. Then in 1670, a charter was granted by the British crown to the Hudson’s Bay Company, which began operating from posts along the coast of Hudson Bay (see Figure 1). For roughly the next hundred years, this northern region saw competition of varying intensity between the French and the English. With the conquest of New France in 1763, the French trade shifted to Scottish merchants operating out of Montreal. After the negotiation of Jay’s Treaty (1794), the northern border was defined and trade along the Mississippi passed to the American Fur Company under John Jacob Astor. In 1821, the northern participants merged under the name of the Hudson’s Bay Company, and for many decades this merged company continued to trade in furs. Finally, in the 1990s, under pressure from animal rights groups, the Hudson’s Bay Company, which in the twentieth century had become a large Canadian retailer, ended the fur component of its operation.

Figure 1
Hudson’s Bay Company Hinterlands

Source: Ray (1987, plate 60)

The fur trade was based on pelts destined either for the luxury clothing market or for the felting industries, of which hatting was the most important. This was a transatlantic trade. The animals were trapped and exchanged for goods in North America, and the pelts were transported to Europe for processing and final sale. As a result, forces operating on the demand side of the market in Europe and on the supply side in North America determined prices and volumes; while intermediaries, who linked the two geographically separated areas, determined how the trade was conducted.

The Demand for Fur: Hats, Pelts and Prices

However much hats may be considered an accessory today, they were for centuries a mandatory part of everyday dress, for both men and women. Of course styles changed, and, in response to the vagaries of fashion and politics, hats took on various forms and shapes, from the high-crowned, broad-brimmed hat of the first two Stuarts to the conically-shaped, plainer hat of the Puritans. The Restoration of Charles II of England in 1660 and the Glorious Revolution in 1689 brought their own changes in style (Clarke, 1982, chapter 1). What remained a constant was the material from which hats were made – wool felt. The wool came from various animals, but towards the end of the fifteenth century beaver wool began to predominate. Over time, beaver hats became increasingly popular, eventually dominating the market. Only in the nineteenth century did silk replace beaver in high-fashion men’s hats.

Wool Felt

Furs have long been classified as either fancy or staple. Fancy furs are those demanded for the beauty and luster of their pelt. These furs – mink, fox, otter – are fashioned by furriers into garments or robes. Staple furs are sought for their wool. All staple furs have a double coating of hair with long, stiff, smooth hairs called guard hairs which protect the shorter, softer hair, called wool, that grows next to the animal skin. Only the wool can be felted. Each of the shorter hairs is barbed and once the barbs at the ends of the hair are open, the wool can be compressed into a solid piece of material called felt. The prime staple fur has been beaver, although muskrat and rabbit have also been used.

Wool felt was used for over two centuries to make high-fashion hats. Felt is stronger than a woven material. It will not tear or unravel in a straight line; it is more resistant to water, and it will hold its shape even if it gets wet. These characteristics made felt the prime material for hatters especially when fashion called for hats with large brims. The highest quality hats would be made fully from beaver wool, whereas lower quality hats included inferior wool, such as rabbit.

Felt Making

The transformation of beaver skins into felt and then hats was a highly skilled activity. The process required first that the beaver wool be separated from the guard hairs and the skin, and that some of the wool have open barbs, since felt required some open-barbed wool in the mixture. Felt dates back to the nomads of Central Asia, who are said to have invented the process of felting and made their tents from this light but durable material. Although the art of felting disappeared from much of western Europe during the first millennium, felt-making survived in Russia, Sweden, and Asia Minor. As a result of the Medieval Crusades, felting was reintroduced through the Mediterranean into France (Crean, 1962).

In Russia, the felting industry was based on the European beaver (Castor fiber). Given their long tradition of working with beaver pelts, the Russians had perfected the art of combing out the short barbed hairs from among the longer guard hairs, a technology that they safeguarded. As a consequence, the early felting trades in England and France had to rely on beaver wool imported from Russia, although they also used domestic supplies of wool from other animals, such as rabbit, sheep and goat. But by the end of the seventeenth century, Russian supplies were drying up, reflecting the serious depletion of the European beaver population.

Coincident with the decline in European beaver stocks was the emergence of a North American trade. North American beaver (Castor canadensis) was imported through agents in the English, French and Dutch colonies. Although many of the pelts were shipped to Russia for initial processing, the growth of the beaver market in England and France led to the development of local technologies, and more knowledge of the art of combing. Separating the beaver wool from the pelt was only the first step in the felting process. It was also necessary that some of the barbs on the short hairs be raised or open. On the animal these hairs were naturally covered with keratin to prevent the barbs from opening; thus, to make felt, the keratin had to be stripped from at least some of the hairs. The process was difficult to refine and entailed considerable experimentation by felt-makers. For instance, one felt maker “bundled [the skins] in a sack of linen and boiled [them] for twelve hours in water containing several fatty substances and nitric acid” (Crean, 1962, p. 381). Although such processes removed the keratin, they did so at the price of a lower quality wool.

The opening of the North American trade not only increased the supply of skins for the felting industry, it also provided a subset of skins whose guard hairs had already been removed and the keratin broken down. Beaver pelts imported from North America were classified as either parchment beaver (castor sec – dry beaver), or coat beaver (castor gras – greasy beaver). Parchment beaver were from freshly caught animals, whose skins were simply dried before being presented for trade. Coat beaver were skins that had been worn by the Indians for a year or more. With wear, the guard hairs fell out and the pelt became oily and more pliable. In addition, the keratin covering the shorter hairs broke down. By the middle of the seventeenth century, hatters and felt-makers came to learn that parchment and coat beaver could be combined to produce a strong, smooth, pliable, top-quality waterproof material.

Until the 1720s, beaver felt was produced with relatively fixed proportions of coat and parchment skins, which led to periodic shortages of one or the other type of pelt. The constraint was relaxed when carotting was developed, a chemical process by which parchment skins were transformed into a type of coat beaver. The original carotting formula consisted of salts of mercury diluted in nitric acid, which was brushed on the pelts. The use of mercury was a big advance, but it also had serious health consequences for hatters and felters, who were forced to breathe the mercury vapor for extended periods. The expression “mad as a hatter” dates from this period, as the vapor attacked the nervous systems of these workers.

The Prices of Parchment and Coat Beaver

Drawn from the accounts of the Hudson’s Bay Company, Table 1 presents some eighteenth century prices of parchment and coat beaver pelts. From 1713 to 1726, before the carotting process had become established, coat beaver generally fetched a higher price than parchment beaver, averaging 6.6 shillings per pelt as compared to 5.5 shillings. Once carotting was widely used, however, the prices were reversed, and from 1730 to 1770 parchment exceeded coat in almost every year. The same general pattern is seen in the Paris data, although there the reversal was delayed, suggesting slower diffusion in France of the carotting technology. As Crean (1962, p. 382) notes, Nollet’s L’Art de faire des chapeaux included the exact formula, but it was not published until 1765.

A weighted average of parchment and coat prices in London reveals three episodes. From 1713 to 1722 prices were quite stable, fluctuating within a narrow band between 5.0 and 5.5 shillings per pelt. During the period 1723 to 1745, prices moved sharply higher and remained in the range of 7 to 9 shillings. The years 1746 to 1763 saw another big increase to over 12 shillings per pelt. There are far fewer prices available for Paris, but we do know that in the period 1739 to 1753 the trend was also sharply higher, with prices more than doubling.

Table 1
Price of Beaver Pelts in Britain: 1713-1763
(shillings per skin)

Year Parchment Coat Averagea Year Parchment Coat Averagea
1713 5.21 4.62 5.03 1739 8.51 7.11 8.05
1714 5.24 7.86 5.66 1740 8.44 6.66 7.88
1715 4.88 5.49 1741 8.30 6.83 7.84
1716 4.68 8.81 5.16 1742 7.72 6.41 7.36
1717 5.29 8.37 5.65 1743 8.98 6.74 8.27
1718 4.77 7.81 5.22 1744 9.18 6.61 8.52
1719 5.30 6.86 5.51 1745 9.76 6.08 8.76
1720 5.31 6.05 5.38 1746 12.73 7.18 10.88
1721 5.27 5.79 5.29 1747 10.68 6.99 9.50
1722 4.55 4.97 4.55 1748 9.27 6.22 8.44
1723 8.54 5.56 7.84 1749 11.27 6.49 9.77
1724 7.47 5.97 7.17 1750 17.11 8.42 14.00
1725 5.82 6.62 5.88 1751 14.31 10.42 12.90
1726 5.41 7.49 5.83 1752 12.94 10.18 11.84
1727 7.22 1753 10.71 11.97 10.87
1728 8.13 1754 12.19 12.68 12.08
1729 9.56 1755 12.05 12.04 11.99
1730 8.71 1756 13.46 12.02 12.84
1731 6.27 1757 12.59 11.60 12.17
1732 7.12 1758 13.07 11.32 12.49
1733 8.07 1759 15.99 14.68
1734 7.39 1760 13.37 13.06 13.22
1735 8.33 1761 10.94 13.03 11.36
1736 8.72 7.07 8.38 1762 13.17 16.33 13.83
1737 7.94 6.46 7.50 1763 16.33 17.56 16.34
1738 8.95 6.47 8.32

a A weighted average of the prices of parchment, coat and half parchment beaver pelts. Weights are based on the trade in these types of furs at Fort Albany. Prices of the individual types of pelts are not available for the years 1727 to 1735.

Source: Carlos and Lewis, 1999.

The Demand for Beaver Hats

The main cause of the rising beaver pelt prices in England and France was the increasing demand for beaver hats, which included hats made exclusively with beaver wool and referred to as “beaver hats,” and those hats containing a combination of beaver and a lower cost wool, such as rabbit. These were called “felt hats.” Unfortunately, aggregate consumption series for eighteenth-century Europe are not available. We do, however, have Gregory King’s contemporary work for England, which provides a good starting point. In a table entitled “Annual Consumption of Apparell, anno 1688,” King calculated that consumption of all types of hats was about 3.3 million, or nearly one hat per person. King also included a second category, caps of all sorts, for which he estimated consumption at 1.6 million (Harte, 1991, p. 293). This means that as early as 1700, the potential market for hats in England alone was nearly 5 million per year. Over the next century, the rising demand for beaver pelts was a result of a number of factors, including population growth, a greater export market, a shift toward beaver hats from hats made of other materials, and a shift from caps to hats.

The British export data indicate that demand for beaver hats was growing not just in England, but in Europe as well. In 1700 a modest 69,500 beaver hats were exported from England and almost the same number of felt hats; but by 1760, slightly over 500,000 beaver hats and 370,000 felt hats were shipped from English ports (Lawson, 1943, app. I). In total, over the seventy years to 1770, 21 million beaver and felt hats were exported from England. In addition to the final product, England exported the raw material, beaver pelts. In 1760, £15,000 in beaver pelts were exported along with a range of other furs. The hats and the pelts tended to go to different parts of Europe. Raw pelts were shipped mainly to northern Europe, including Germany, Flanders, Holland and Russia; whereas hats went to the southern European markets of Spain and Portugal. In 1750, Germany imported 16,500 beaver hats, while Spain imported 110,000 and Portugal 175,000 (Lawson, 1943, appendices F & G). Over the first six decades of the eighteenth century, these markets grew dramatically, such that the value of beaver hat sales to Portugal alone was £89,000 in 1756-1760, representing about 300,000 hats or two-thirds of the entire export trade.

European Intermediaries in the Fur Trade

By the eighteenth century, the demand for furs in Europe was being met mainly by exports from North America with intermediaries playing an essential role. The American trade, which moved along the main water systems, was organized largely through chartered companies. At the far north, operating out of Hudson Bay, was the Hudson’s Bay Company, chartered in 1670. The Compagnie d’Occident, founded in 1718, was the most successful of a series of monopoly French companies. It operated through the St. Lawrence River and in the region of the eastern Great Lakes. There was also an English trade through Albany and New York, and a French trade down the Mississippi.

The Hudson’s Bay Company and the Compagnie d’Occident, although similar in title, had very different internal structures. The English trade was organized along hierarchical lines with salaried managers, whereas the French monopoly issued licenses (congés) or leased out the use of its posts. The structure of the English company allowed for more control from the London head office, but required systems that could monitor the managers of the trading posts (Carlos and Nicholas, 1990). The leasing and licensing arrangements of the French made monitoring unnecessary, but led to a system where the center had little influence over the conduct of the trade.

The French and English were distinguished as well by how they interacted with the Natives. The Hudson’s Bay Company established posts around the Bay and waited for the Indians, often middlemen, to come to them. The French, by contrast, moved into the interior, directly trading with the Indians who harvested the furs. The French arrangement was more conducive to expansion, and by the end of the seventeenth century, they had moved beyond the St. Lawrence and Ottawa rivers into the western Great Lakes region (see Figure 1). Later they established posts in the heart of the Hudson Bay hinterland. In addition, the French explored the river systems to the south, setting up a post at the mouth of the Mississippi. As noted earlier, after Jay’s Treaty was signed, the French were replaced in the Mississippi region by U.S. interests which later formed the American Fur Company (Haeger, 1991).

The English takeover of New France at the end of the French and Indian Wars in 1763 did not, at first, fundamentally change the structure of the trade. Rather, French management was replaced by Scottish and English merchants operating in Montreal. But, within a decade, the Montreal trade was reorganized into partnerships between merchants in Montreal and traders who wintered in the interior. The most important of these arrangements led to the formation of the Northwest Company, which, for the first two decades of the nineteenth century, competed with the Hudson’s Bay Company (Carlos and Hoffman, 1986). By the early decades of the nineteenth century, the Hudson’s Bay Company, the Northwest Company, and the American Fur Company had, combined, a system of trading posts across North America, including posts in Oregon and British Columbia and on the Mackenzie River. In 1821, the Northwest Company and the Hudson’s Bay Company merged under the name of the Hudson’s Bay Company. The Hudson’s Bay Company then ran the trade as a monopsony until the late 1840s when it began facing serious competition from trappers to the south. The Company’s role in the northwest changed again with the Canadian Confederation in 1867. Over the next decades treaties were signed with many of the northern tribes, forever changing the old fur trade order in Canada.

The Supply of Furs: The Harvesting of Beaver and Depletion

During the eighteenth century, the changing technology of felt production and the growing demand for felt hats were met by attempts to increase the supply of furs, especially the supply of beaver pelts. Any permanent increase, however, was ultimately dependent on the animal resource base. How that base changed over time must be a matter of speculation since no animal counts exist from that period; nevertheless, the evidence we do have points to a scenario in which over-harvesting, at least in some years, gave rise to serious depletion of the beaver and possibly other animals such as marten that were also being traded. Why the beaver were over-harvested was closely related to the prices Natives were receiving, but important as well was the nature of Native property rights to the resource.

Harvests in the Fort Albany and York Factory Regions

That beaver populations along the Eastern seaboard regions of North America were depleted as the fur trade advanced is widely accepted. In fact the search for new sources of supply further west, including the region of Hudson Bay, has been attributed in part to dwindling beaver stocks in areas where the fur trade had been long established. Although there has been little discussion of the impact that the Hudson’s Bay Company and the French, who traded in the region of Hudson Bay, were having on the beaver stock, the remarkably complete records of the Hudson’s Bay Company provide the basis for reasonable inferences about depletion. From 1700 there is an uninterrupted annual series of fur returns at Fort Albany; the fur returns from York Factory begin in 1716 (see Figure 1).

The beaver returns at Fort Albany and York Factory for the period 1700 to 1770 are described in Figure 2. At Fort Albany the number of beaver skins over the period 1700 to 1720 averaged roughly 19,000, with wide year-to-year fluctuations; the range was about 15,000 to 30,000. After 1720 and until the late 1740s average returns declined by about 5,000 skins, and remained within the somewhat narrower range of roughly 10,000 to 20,000 skins. The period of relative stability was broken in the final years of the 1740s. In 1748 and 1749, returns increased to an average of nearly 23,000. Following these unusually strong years, the trade fell precipitously so that in 1756 fewer than 6,000 beaver pelts were received. There was a brief recovery in the early 1760s, but by the end of the decade trade had fallen below even the mid-1750s levels. In 1770, Fort Albany took in just 3,600 beaver pelts. This pattern – unusually large returns in the late 1740s and low returns thereafter – indicates that the beaver in the Fort Albany region were being seriously depleted.

Figure 2
Beaver Traded at Fort Albany and York Factory 1700 – 1770

Source: Carlos and Lewis, 1993.

The beaver returns at York Factory from 1716 to 1770, also described in Figure 2, have some of the key features of the Fort Albany data. After some low returns early on (from 1716 to 1720), the number of beaver pelts increased to an average of 35,000. There were extraordinary returns in 1730 and 1731, when the average was 55,600 skins, but beaver receipts then stabilized at about 31,000 over the remainder of the decade. The first break in the pattern came in the early 1740s shortly after the French established several trading posts in the area. Surprisingly perhaps, given the increased competition, trade in beaver pelts at the Hudson’s Bay Company post increased to an average of 34,300, this over the period 1740 to 1743. Indeed, the 1742 return of 38,791 skins was the largest since the French had established any posts in the region. The returns in 1745 were also strong, but after that year the trade in beaver pelts began a decline that continued through to 1770. Average returns over the rest of the decade were 25,000; the average during the 1750s was 18,000, and just 15,500 in the 1760s. The pattern of beaver returns at York Factory – high returns in the early 1740s followed by a large decline – strongly suggests that, as in the Fort Albany hinterland, the beaver population had been greatly reduced.

The overall carrying capacity of any region, or the size of the animal stock, depends on the nature of the terrain and the underlying biological determinants such as birth and death rates. A standard relationship between the annual harvest and the animal population is the Lotka-Volterra logistic, commonly used in natural resource models to relate the natural growth of a population to the size of that population:
F(X) = aX – bX², a, b > 0 (1)

where X is the population, F(X) is the natural growth in the population, a is the maximum proportional growth rate of the population, and b = a/Xmax, where Xmax (the carrying capacity) is the upper limit to population size. The population dynamics of the species exploited depend on the harvest each period:

ΔX = aX – bX² – H (2)

where ΔX is the annual change in the population and H is the harvest. The choices of the parameter a and the maximum population Xmax are central to the population estimates and have been based largely on estimates from the beaver ecology literature and Ontario provincial field reports of beaver densities (Carlos and Lewis, 1993).
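
A short Python simulation makes the harvesting mechanism in equations (1) and (2) concrete. The sketch below is illustrative only: the growth rate, carrying capacity, and harvest levels are hypothetical round numbers, not the calibrated values used by Carlos and Lewis (1993).

# Minimal sketch of the logistic harvest model in equations (1) and (2).
# All parameter values are hypothetical, chosen only to illustrate the mechanism.

def simulate_population(a, x_max, x0, harvests):
    """Iterate X(t+1) = X(t) + a*X(t) - (a/x_max)*X(t)^2 - H(t) over a harvest series."""
    b = a / x_max
    x = x0
    path = [x]
    for h in harvests:
        x = max(x + a * x - b * x * x - h, 0.0)  # the stock cannot fall below zero
        path.append(x)
    return path

a, x_max = 0.5, 160_000        # hypothetical growth rate and carrying capacity
msy = a * x_max / 4            # maximum sustained yield of the logistic: a*Xmax/4
print(f"Maximum sustained yield: {msy:,.0f} skins per year")

# Harvesting exactly the MSY from a stock at Xmax/2 keeps the stock constant ...
stable = simulate_population(a, x_max, x0=x_max / 2, harvests=[20_000] * 30)
print(f"Stock after 30 years harvesting 20,000 skins/year: {stable[-1]:,.0f}")

# ... while harvesting persistently above the MSY drives the stock toward exhaustion.
depleted = simulate_population(a, x_max, x0=x_max / 2, harvests=[26_000] * 30)
print(f"Stock after 30 years harvesting 26,000 skins/year: {depleted[-1]:,.0f}")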

Simulations based on equation 2 suggest that, until the 1730s, beaver populations remained at levels roughly consistent with maximum sustained yield management, sometimes referred to as the biological optimum. But after the 1730s there was a decline in beaver stocks to about half the maximum sustained yield levels. The cause of the depletion was closely related to what was happening in Europe. There, buoyant demand for felt hats and dwindling local fur supplies resulted in much higher prices for beaver pelts. These higher prices, in conjunction with the resulting competition from the French in the Hudson Bay region, led the Hudson’s Bay Company to offer much better terms to Natives who came to their trading posts (Carlos and Lewis, 1999).

Figure 3 reports a price index for furs at Fort Albany and at York Factory. The index represents a measure of what Natives received in European goods for their furs. At Fort Albany, fur prices were close to 70 from 1713 to 1731, but in 1732, in response to higher European fur prices and the entry of la Vérendrye, an important French trader, the price jumped to 81. After that year, prices continued to rise. The pattern at York Factory was similar. Although prices were high in the early years when the post was being established, beginning in 1724 the price settled down to about 70. At York Factory, the jump in price came in 1738, which was the year la Vérendrye set up a trading post in the York Factory hinterland. Prices then continued to increase. It was these higher fur prices that led to over-harvesting and, ultimately, a decline in beaver stocks.

Figure 3
Price Index for Furs: Fort Albany and York Factory, 1713 – 1770

Source: Carlos and Lewis, 2001.

Property Rights Regimes

An increase in the price paid to Native hunters did not have to lead to a decline in the animal stocks, because Indians could have chosen to limit their harvesting. Why they did not was closely related to their system of property rights. One can classify property rights along a spectrum with, at one end, open access, where anyone can hunt or fish, and at the other, complete private property, where a sole owner has full control over the resource. Between these extremes lies a range of property rights regimes with access controlled by a community or a government, and where individual members of the group do not necessarily have private property rights. Open access creates a situation where there is less incentive to conserve, because animals not harvested by a particular hunter will be available to other hunters in the future. Thus the closer a system is to open access, the more likely it is that the resource will be depleted.

Across aboriginal societies in North America, one finds a range of property rights regimes. Native Americans did have a concept of trespass and of property, but individual and family rights to resources were not absolute. Under what is sometimes referred to as the Good Samaritan principle (McManus, 1972), outsiders were not permitted to harvest furs on another’s territory for trade, but they were allowed to hunt game and even beaver for food. Combined with this limitation to private property was an Ethic of Generosity that included liberal gift-giving, whereby any visitor to one’s encampment was to be supplied with food and shelter.

Social norms such as gift-giving and the related Good Samaritan principle emerged because of the nature of the aboriginal environment. The primary objective of aboriginal societies was survival. Hunting was risky, and so rules were put in place that would reduce the risk of starvation. As Berkes et al. (1989, p. 153) note, for such societies: “all resources are subject to the overriding principle that no one can prevent a person from obtaining what he needs for his family’s survival.” Such actions were reciprocal and, especially in the sub-arctic world, served as an insurance mechanism. These norms, however, also reduced the incentive to conserve the beaver and other animals that were part of the fur trade. The combination of these norms and the increasing price paid to Native traders led to the large harvests in the 1740s and ultimately depletion of the animal stock.

The Trade in European Goods

Indians were the primary agents in the North American commercial fur trade. It was they who hunted the animals, and transported and traded the pelts or skins to European intermediaries. The exchange was voluntary. In return for their furs, Indians obtained both access to an iron technology to improve production and access to a wide range of new consumer goods. It is important to recognize, however, that although the European goods were new to aboriginals, the concept of exchange was not. The archaeological evidence indicates an extensive trade between Native tribes in the north and south of North America prior to European contact.

The extraordinary records of the Hudson’s Bay Company allow us to form a clear picture of what Indians were buying. Table 2 lists the goods received by Natives at York Factory, which was by far the largest of the Hudson’s Bay Company trading posts. As is evident from the table, the commercial trade was more than in beads and baubles or even guns and alcohol; rather Native traders were receiving a wide range of products that improved their ability to meet their subsistence requirements and allowed them to raise their living standards. The items have been grouped by use. The producer goods category was dominated by firearms, including guns, shot and powder, but also includes knives, awls and twine. The Natives traded for guns of different lengths. The 3-foot gun was used mainly for waterfowl and in heavily forested areas where game could be shot at close range. The 4-foot gun was more accurate and suitable for open spaces. In addition, the 4-foot gun could play a role in warfare. Maintaining guns in the harsh sub-arctic environment was a serious problem, and ultimately, the Hudson’s Bay Company was forced to send gunsmiths to its trading posts to assess quality and help with repairs. Kettles and blankets were the main items in the “household goods” category. These goods probably became necessities to the Natives who adopted them. Then there were the luxury goods, which have been divided into two broad categories: “tobacco and alcohol,” and “other luxuries,” dominated by cloth of various kinds (Carlos and Lewis, 2001; 2002).

Table 2
Value of Goods Received at York Factory in 1740 (made beaver)

We have much less information about the French trade. The French are reported to have exchanged similar items, although given their higher transport costs, both the furs received and the goods traded tended to be higher in value relative to weight. The Europeans, it might be noted, supplied no food to the trade in the eighteenth century. In fact, Indians helped provision the posts with fish and fowl. This role of food purveyor grew in the nineteenth century as groups known as the “home guard Cree” came to live around the posts; as well, pemmican, supplied by Natives, became an important source of nourishment for Europeans involved in the buffalo hunts.

The value of the goods listed in Table 2 is expressed in terms of the unit of account, the made beaver, which the Hudson’s Bay Company used to record its transactions and determine the rate of exchange between furs and European goods. The price of a prime beaver pelt was 1 made beaver, and every other type of fur and good was assigned a price based on that unit. For example, a marten (a type of mink) was a third of a made beaver, a blanket was 7 made beaver, a gallon of brandy, 4 made beaver, and a yard of cloth, 3½ made beaver. These were the official prices at York Factory. Thus Indians, who traded at these prices, received, for example, a gallon of brandy for four prime beaver pelts, two yards of cloth for seven beaver pelts, and a blanket for 21 marten pelts. This was barter trade in that no currency was used; and although the official prices implied certain rates of exchange between furs and goods, Hudson’s Bay Company factors were encouraged to trade at rates more favorable to the Company. The actual rates, however, depended on market conditions in Europe and, most importantly, the extent of French competition in Canada. Figure 3 illustrates the rise in the price of furs at York Factory and Fort Albany in response to higher beaver prices in London and Paris, as well as to a greater French presence in the region (Carlos and Lewis, 1999). The increase in price also reflects the bargaining ability of Native traders during periods of direct competition between the English and French and later the Hudson’s Bay Company and the Northwest Company. At such times, the Native traders would play both parties off against each other (Ray and Freeman, 1978).
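
The made beaver arithmetic above amounts to a simple unit-of-account conversion, which the short Python sketch below restates using the official York Factory prices quoted in this paragraph. The helper function is purely illustrative; it is not a reconstruction of the Company’s actual bookkeeping.

# Official York Factory prices quoted above, expressed in made beaver (MB).
PRICES_IN_MB = {
    "prime beaver pelt": 1.0,
    "marten pelt": 1.0 / 3.0,   # a marten was a third of a made beaver
    "blanket": 7.0,
    "gallon of brandy": 4.0,
    "yard of cloth": 3.5,
}

def pelts_needed(good, quantity, pelt="prime beaver pelt"):
    """Number of pelts of a given kind needed to buy a quantity of a good
    at the official made-beaver prices."""
    return quantity * PRICES_IN_MB[good] / PRICES_IN_MB[pelt]

print(pelts_needed("gallon of brandy", 1))              # 4 prime beaver pelts
print(pelts_needed("yard of cloth", 2))                 # 7 prime beaver pelts
print(pelts_needed("blanket", 1, pelt="marten pelt"))   # 21 marten pelts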

The records of the Hudson’s Bay Company provide us with a unique window to the trading process, including the bargaining ability of Native traders, which is evident in the range of commodities received. Natives only bought goods they wanted. Clear from the Company records is that it was the Natives who largely determined the nature and quality of those goods. As well the records tell us how income from the trade was being allocated. The breakdown differed by post and varied over time; but, for example, in 1740 at York Factory, the distribution was: producer goods – 44 percent; household goods – 9 percent; alcohol and tobacco – 24 percent; and other luxuries – 23 percent. An important implication of the trade data is that, like many Europeans and most American colonists, Native Americans were taking part in the consumer revolution of the eighteenth century (de Vries, 1993; Shammas, 1993). In addition to necessities, they were consuming a remarkable variety of luxury products. Cloth, including baize, duffel, flannel, and gartering, was by far the largest class, but they also purchased beads, combs, looking glasses, rings, shirts, and vermillion among a much longer list. Because these items were heterogeneous in nature, the Hudson’s Bay Company’s head office went to great lengths to satisfy the specific tastes of Native consumers. Attempts were also made, not always successfully, to introduce new products (Carlos and Lewis, 2002).

Perhaps surprising, given the emphasis that has been placed on it in the historical literature, was the comparatively small role of alcohol in the trade. At York Factory, Native traders received in 1740 a total of 494 gallons of brandy and “strong water,” which had a value of 1,976 made beaver. More than twice this amount was spent on tobacco in that year, nearly five times as much on firearms, and twice as much on cloth; more was spent on blankets and kettles than on alcohol. Thus, brandy, although a significant item of trade, was by no means a dominant one. In addition, alcohol could hardly have created serious social problems during this period. The amount received would have allowed for no more than ten two-ounce drinks per year for the adult Native population living in the region.

The Labor Supply of Natives

Another important question can be addressed using the trade data. Were Natives “lazy and improvident” as they have been described by some contemporaries, or were they “industrious” like the American colonists and many Europeans? Central to answering this question is how Native groups responded to the price of furs, which began rising in the 1730s. Much of the literature argues that Indian trappers reduced their effort in response to higher fur prices; that is, they had backward-bending supply curves of labor. The view is that Natives had a fixed demand for European goods that, at higher fur prices, could be met with fewer furs, and hence less effort. Although widely cited, this argument does not stand up. Not only were higher fur prices accompanied by larger total harvests of furs in the region, but the pattern of Native expenditure also points to a scenario of greater effort. From the late 1730s to the 1760s, as the price of furs rose, the share of expenditure on luxury goods increased dramatically (see Figure 4). Thus Natives were not content simply to accept their good fortune by working less; rather they seized the opportunity provided to them by the strong fur market by increasing their effort in the commercial sector, thereby dramatically augmenting the purchases of those goods, namely the luxuries, that could raise their living standards.

Figure 4
Native Expenditure Shares at York Factory 1716 – 1770

Source: Carlos and Lewis, 2001.

A Note on the Non-commercial Sector

As important as the fur trade was to Native Americans in the sub-arctic regions of Canada, commerce with the Europeans comprised just one, relatively small, part of their overall economy. Exact figures are not available, but the traditional sectors – hunting, gathering, food preparation and, to some extent, agriculture – must have accounted for at least 75 to 80 percent of Native labor during these decades. Nevertheless, despite the limited time spent in commercial activity, the fur trade had a profound effect on the nature of the Native economy and Native society. The introduction of European producer goods, such as guns, and household goods, mainly kettles and blankets, changed the way Native Americans achieved subsistence; and the European luxury goods expanded the range of products that allowed them to move beyond subsistence. Most importantly, the fur trade connected Natives to Europeans in ways that affected how and how much they chose to work, where they chose to live, and how they exploited the resources on which the trade and their survival were based.

References

Berkes, Fikret, David Feeny, Bonnie J. McCay, and James M. Acheson. “The Benefits of the Commons.” Nature 340 (July 13, 1989): 91-93.

Braund, Kathryn E. Holland. Deerskins and Duffels: The Creek Indian Trade with Anglo-America, 1685-1815. Lincoln: University of Nebraska Press, 1993.

Carlos, Ann M., and Elizabeth Hoffman. “The North American Fur Trade: Bargaining to a Joint Profit Maximum under Incomplete Information, 1804-1821.” Journal of Economic History 46, no. 4 (1986): 967-86.

Carlos, Ann M., and Frank D. Lewis. “Indians, the Beaver and the Bay: The Economics of Depletion in the Lands of the Hudson’s Bay Company, 1700-1763.” Journal of Economic History 53, no. 3 (1993): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Property Rights, Competition and Depletion in the Eighteenth-Century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann M., and Frank D. Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company.” In The Other Side of the Frontier: Economic Explorations in Native American History, edited by Linda Barrington, 131-149. Boulder, CO: Westview Press, 1999.

Carlos, Ann M., and Frank D. Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 2 (2002): 285-317.

Carlos, Ann M., and Stephen Nicholas. “Agency Problems in Early Chartered Companies: The Case of the Hudson’s Bay Company.” Journal of Economic History 50, no. 4 (1990): 853-75.

Clarke, Fiona. Hats. London: Batsford, 1982.

Crean, J. F. “Hats and the Fur Trade.” Canadian Journal of Economics and Political Science 28, no. 3 (1962): 373-386.

Corner, David. “The Tyranny of Fashion: The Case of the Felt-Hatting Trade in the Late Seventeenth and Eighteenth Centuries.” Textile History 22, no.2 (1991): 153-178.

de Vries, Jan. “Between Purchasing Power and the World of Goods: Understanding the Household Economy in Early Modern Europe.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 85-132. London: Routledge, 1993.

Ginsburg, Madeleine. The Hat: Trends and Traditions. London: Studio Editions, 1990.

Haeger, John D. John Jacob Astor: Business and Finance in the Early Republic. Detroit: Wayne State University Press, 1991.

Harte, N.B. “The Economics of Clothing in the Late Seventeenth Century.” Textile History 22, no. 2 (1991): 277-296.

Heidenreich, Conrad E., and Arthur J. Ray. The Early Fur Trade: A Study in Cultural Interaction. Toronto: McClelland and Stewart, 1976.

Helm, Jane, ed. Handbook of North American Indians 6, Subarctic. Washington: Smithsonian, 1981.

Innis, Harold. The Fur Trade in Canada (revised edition). Toronto: University of Toronto Press, 1956.

Krech III, Shepard. The Ecological Indian: Myth and History. New York: Norton, 1999.

Lawson, Murray G. Fur: A Study in English Mercantilism. Toronto: University of Toronto Press, 1943.

McManus, John. “An Economic Analysis of Indian Behavior in the North American Fur Trade.” Journal of Economic History 32, no.1 (1972): 36-53.

Ray, Arthur J. Indians in the Fur Trade: Their Role as Hunters, Trappers and Middlemen in the Lands Southwest of Hudson Bay, 1660-1870. Toronto: University of Toronto Press, 1974.

Ray, Arthur J. and Donald Freeman. “Give Us Good Measure”: An Economic Analysis of Relations between the Indians and the Hudson’s Bay Company before 1763. Toronto: University of Toronto Press, 1978.

Ray, Arthur J. “Bayside Trade, 1720-1780.” In Historical Atlas of Canada 1, edited by R. Cole Harris, plate 60. Toronto: University of Toronto Press, 1987.

Rich, E. E. Hudson’s Bay Company, 1670 – 1870. 2 vols. Toronto: McClelland and Stewart, 1960.

Rich, E.E. “Trade Habits and Economic Motivation among the Indians of North America.” Canadian Journal of Economics and Political Science 26, no. 1 (1960): 35-53.

Shammas, Carole. “Changes in English and Anglo-American Consumption from 1550-1800.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 177-205. London: Routledge, 1993.

Wien, Thomas. “Selling Beaver Skins in North America and Europe, 1720-1760: The Uses of Fur-Trade Imperialism.” Journal of the Canadian Historical Association, New Series 1 (1990): 293-317.

Citation: Carlos, Ann and Frank Lewis. “Fur Trade (1670-1870)”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-fur-trade-1670-to-1870/

The Freedmen’s Bureau

William Troost, University of British Columbia

The Bureau of Refugees, Freedmen, and Abandoned Lands, more commonly known as the Freedmen’s Bureau, was a federal agency established to help Southern blacks transition from their lives as slaves to free individuals. The challenges of this transformation were enormous, as the Civil War devastated the region – leaving farmland dilapidated and massive amounts of capital destroyed. Additionally, the entire social order of the region was disturbed as slave owners and former slaves were forced to interact with one another in completely new ways. The Freedmen’s Bureau was an unprecedented foray by the federal government into the sphere of social welfare during a critical period of American history. This article briefly describes this unique agency, its colorful history, and the many functions that the bureau performed during its brief existence.

The Beginning of the Bureau

In March 1863, the American Freedmen’s Inquiry Commission was set up to investigate “the measures which may best contribute to the protection and improvement of the recently emancipated freedmen of the United States, and to their self-defense and self-support.”1 The commission debated various methods and activities to alleviate the current condition of freedmen and aid their transition to free individuals. Basic aid activities to alleviate physical suffering and provide legal justice, education, and land redistribution were commonly mentioned in these meetings and hearings. The inquiry commission examined many issues and came up with ideas that would become the foundation for the eventual Freedmen’s Bureau Law. In 1864, the commission issued its final report, which laid out the basic philosophy that would guide the actions of the Freedmen’s Bureau.

“The sum of our recommendations is this: Offer the freedmen temporary aid and counsel until they become a little accustomed to their new sphere of life; secure to them, by law, their just rights of person and property; relieve them, by a fair and equal administration of justice, from the depressing influence of disgraceful prejudice; above all, guard them against the virtual restoration of slavery in any form, and let them take care of themselves. If we do this, the future of the African race in this country will be conducive to its prosperity and associated with its well-being. There will be nothing connected with it to excite regret to inspire apprehension.”2

When Congress finally got down to the business of writing a bill to aid the transition of the freedmen, it tried to integrate many of the American Freedmen’s Inquiry Commission’s recommendations. Originally the agency set up to aid in this transition was to be named the Bureau of Emancipation. However, when the bill came up for a vote on March 1, 1864, the name was changed to the Bureau of Refugees, Freedmen, and Abandoned Lands. This change was due in large part to objections that the bill was exclusionary and aimed solely towards the aid of blacks. The name change was aimed at enlarging support for the bill.

The House and the Senate argued about the bureau’s powers and about where within the government it should reside. Those in the House wanted the agency placed within the War Department, concluding that the power used to free the slaves would be best suited to aid them in their transition. In the Senate, by contrast, Charles Sumner’s Committee on Slavery and Freedom wanted the bureau placed within the Department of the Treasury – as it had the power to tax and had possession of confiscated lands. Sumner felt that the freedmen “should not be separated from their best source of livelihood.”3 After a year of debate, a compromise was finally agreed to that entrusted the Freedmen’s Bureau with the administration of confiscated lands while placing the bureau within the Department of War. Thus, on March 3, 1865, with the stroke of a pen, Abraham Lincoln signed into existence the Bureau of Refugees, Freedmen, and Abandoned Lands. Selected to head the new bureau was General Oliver Otis Howard – commonly known as the Christian General. Howard had strong ties with the philanthropic community and forged close relationships with freedmen’s aid organizations.

The Freedmen’s Bureau was active in a variety of aid functions. Eric Foner writes it was “an experiment in social policy that did not belong to the America of its day”.4 The bureau did important work in many key areas and had many functions that even today are not considered the responsibility of the national government.

Relief Services

A key function of the bureau, especially in the beginning, was to provide temporary relief for the suffering of destitute freedmen. The bureau provided rations for those most in need due to the abandonment of plantations, poor crop yields, and unemployment. This aid was used by a staggering number of both freedmen and refugees. A ration was defined as enough corn meal, flour, and sugar to feed a person for one week. In “the first 15 months following the war, the Bureau issued over 13 million rations, two thirds to blacks.”5 The scale of this aid was enormous, and while it was deemed a great necessity, it also fostered tremendous anxiety for both General Howard and the general population – mainly that it would cause idleness. Because of these worries, General Howard ordered that this form of relief be discontinued in the fall of 1866.

Health Care

In a similar vein, the bureau also provided medical care to the recently freed slaves. The health situation of freedmen at the conclusion of the Civil War was atrocious. Frequent epidemics of cholera, poor sanitation, and outbreaks of smallpox killed scores of freedmen. Because the freed population lacked the financial assets to purchase private health care and was denied care in many other cases, the bureau played a valuable role.

“Since hospitals and doctors could not be relied on to provide adequate health care for freedmen, individual bureau agents on occasion responded innovatively to black distress. During epidemics, Pine Bluff and Little Rock agents relocated freedpersons to less contagion-ridden places. When blacks could not be moved, agents imposed quarantines to prevent the spread of disease. General Order Number 8…prohibited new residents from congregating in towns. The order also mandated weekly inspections of freedmen’s homes to check for filth and overcrowding.”6

In addition to preventing and containing outbreaks, the bureau also engaged more directly in health care. Because it was placed in the War Department, the bureau was able to assume operation of hospitals established by the Army during the war. After the war it expanded the system to areas previously not under military control. Observing that freedmen were not receiving an adequate quality of health services, the bureau established dispensaries providing basic medical care and drugs free of charge, or at a nominal cost. The Bureau “managed in the early years of Reconstruction to treat an estimated half million suffering freedmen, as well as a smaller but significant number of whites.”7

Land Redistribution

Perhaps the most well-known function of the bureau was one that never came to fruition. During the course of the Civil War, the U.S. Army took control of a good deal of land that had been confiscated or abandoned by the Confederacy. From the time of emancipation there were rumors that confiscated lands would be provided to the recently freed slaves. This land would enable the blacks to be economically self-sufficient and provide protection from their former owners. In January 1865, General Sherman issued Special Field Orders, No. 15, which set aside the Sea Islands and lands from South Carolina to Florida for blacks to settle. According to his order, each family would receive forty acres of land and the loan of horses and mules from the Army. Similar to General Sherman’s order, the promise of land was incorporated into the bureau bill. Quickly the bureau helped blacks settle some of the abandoned lands and “by June 1865, roughly 10,000 families of freed people, with the assistance of the Freedmen’s Bureau, had taken up more than 400,000 acres.”8

While the promise of “forty acres and a mule” excited the freedmen, the widespread implementation of this policy was quickly thwarted. In the summer of 1865, President Andrew Johnson issued special pardons restoring the property of many Confederates – throwing into question the status of abandoned lands. In response, General Howard, the Commissioner of the Freedmen’s Bureau, issued Circular 13 which told agents to conserve forty-acre tracts of land for the freedmen – as he claimed presidential pardons conflicted with the laws establishing the bureau. However, Johnson quickly instructed Howard to rescind his circular and send out a new circular ordering the restoration to pardoned owners of all land except those tracts already sold. These actions by the President were devastating, as freedmen were evicted from lands that they had long occupied and improved. Johnson’s actions took away what many felt was the freedmen’s best chance at economic protection and self-sufficiency.

Judicial Functions

While the land distribution of the new agency was thwarted, the bureau was able to perform many other duties. Bureau agents had judicial authority in the South, attempting to secure equal justice from the state and local governments for both blacks and white Unionists. Local agents individually adjudicated a wide variety of disputes. In some circumstances the bureau established courts where freedmen could bring forth their complaints. After the local courts regained their jurisdiction, bureau agents kept an eye on them, retaining the authority to overturn decisions that were discriminatory towards blacks. In May 1865, the Commissioner of the bureau issued a circular “authorizing assistant commissioners to exercise jurisdiction in cases where blacks were not allowed to testify.”9

In addition to these judicial functions, the bureau also helped provide legal services in the domestic sphere. Agents helped legitimize slave marriages and presided over freedmen marriage ceremonies in areas where black marriages were obstructed. Beginning in 1866, the bureau became responsible for filing the claims of black soldiers for back pay, pensions, and bounties. The claims division remained in operation until the end of the bureau’s existence. During a time when many of the states tried to strip rights away from blacks, the bureau was essential in providing freedmen redress and access to more equitable judicial decisions and services.

Labor Relations

Another important function of the bureau was to help draw up work contracts that facilitated the hiring of freedmen. The abolition of slavery created economic confusion and stagnation, as many planters had a difficult time finding labor to work their fields. Additionally, many blacks were anxious and unsure about working for former slave owners. “Into this chaos stepped the Freedmen’s Bureau as an intermediary.”10 The bureau helped planters and freedmen draft contracts on mutually agreeable terms – negotiating several hundred thousand contracts. Once a contract was agreed upon, the agency tried to make sure both planter and worker lived up to their part of the agreement. In essence, the bureau “would undertake the role of umpire.”11

Of the bureau’s many activities this was one of its most controversial. Both planters and freedmen complained about the insistence on labor contracts. Planters complained that labor contracts forbade the use of corporal punishment used in the past. They resented the limits on their activities and felt the restrictions of the contracts limited the productivity of their workers. On the other hand, freedmen complained that the contract structures were too restrictive and did not allow them to move freely. In essence, the bureau had an impossible task – trying to get the freedmen to return to work for former slave owners while preserving their rights and limiting abuse. The Freedmen’s Bureau’s judicial functions were of great help in enforcing these contracts in a fair manner, making both parties live up to their end of the bargain. While historians have split over whether the bureau favored planters or the freedmen, Ralph Shlomowitz, in his detailed analysis of bureau-assisted labor contracts, found that contract terms were determined by the free interplay of market forces.12 First, he finds that contracts brokered by the bureau were extremely detailed, to an extent that would not make sense in the absence of compliance. Second, contrary to popular belief, he finds that the share of crops received by labor was highly variable. In areas of higher quality land the share awarded to labor was less than in areas with lower land quality.

Educational Efforts

Prior to the Civil War it had been policy in the sixteen slave states to fine, whip, or imprison those who gave instruction to blacks or mulattos. In many states the punishments for teaching a person of color were quite severe. These laws severely restricted the educational opportunity of blacks – especially access to formal schooling. As a result, when given their freedom, many former slaves lacked the literacy skills necessary to protect themselves from discrimination and exploitation, and pursue many personal activities. This lack of literacy created great problems for blacks in a free labor system. Freedmen were repeatedly taken advantage of as they were often unable to read or draft contracts. Additionally, individuals lacked the ability to read newspapers and trade manuals, or worship by reading the Bible. Thus, when emancipated there was a great demand for freedmen schools.

General Howard quickly realized that education was perhaps the most important endeavor the bureau could undertake. However, limited financial resources and the few functions the bureau was authorized to undertake restricted the extent to which it was able to assist. Much of the early work in schooling was done by a number of benevolent and religious Northern societies. While initially the direct aid of the bureau was limited, it played an essential role in organizing and coordinating these organizations in their efforts. The agency also allowed the use of many buildings in the Army’s possession, and the bureau helped transport a trove of teachers from the North – commonly referred to as Yankee schoolmarms.

While the limits of the original Freedmen’s Bureau bill hamstrung the efforts of agents, subsequent bills changed the situation as the purse strings and functions of the bureau in the area of education were rapidly expanded. This shift in attention followed the lead of General Howard whose “stated goal was to close one after another of the original bureau divisions while the educational work was increased with all possible energy.”13 Among the provisions of the second bureau bill were: the appropriation of salaries for State Superintendents of Education, the repair and rental of school buildings, the ability to use military taxes to pay teachers’ salaries, and the establishment of the education division as a separate entity in the bureau.

These new resources were used to great effect, as enrollments at bureau-financed schools grew quickly, new schools were constructed in a variety of areas, and the quality and curriculum of the schools were significantly improved. The Freedmen’s Bureau was very successful in establishing a vast network of schools to help educate the freedmen. In retrospect this was a Herculean task for the federal government to accomplish. In a region where it had been illegal to teach blacks how to read or write just a few years prior, the bureau helped establish nearly 1,600 day schools educating over 100,000 blacks at a time. The number of bureau-aided day and night schools in operation grew to a maximum of 1,737 in March 1870, employing 2,799 teachers and instructing 103,396 pupils. In addition, the bureau aided 1,034 Sabbath schools that employed 4,988 teachers and instructed 85,557 pupils.

Matching the Integrated Public Use Sample of the 1870 Census and a constructed data set on bureau school location, one can examine the reach and prevalence of bureau-aided schools.14 Table 1 presents the summary statistics of various school concentration measures and educational outcomes for individual blacks 10-15 years old.

The variable “Freedmen’s Bureau School” equals one if there was at least one bureau-aided school in the individual’s county. The data reveal that 63.6 percent of blacks lived in counties with at least one bureau school. This shows the bureau was quite effective in reaching a large segment of the black population – nearly two thirds of blacks living in the states of the ex-Confederacy had at least some minimal exposure to these schools. While the schools were widespread, their concentration was somewhat low. For individuals living in a county with at least one bureau-aided school, the concentration was 0.3165 bureau-aided schools per 30 square miles, or 0.4630 bureau-aided schools per 1,000 blacks.

Although the concentration of schools was somewhat low, it appears they had a large impact on the educational outcomes of southern blacks. Ten- to fifteen-year-olds living in a county with at least one bureau-aided school had literacy rates that were 6.1 percentage points higher. This appears to have been driven by the bureau increasing access to formal education for black children in these counties, as school attendance rates were 7.5 percentage points higher than in counties without such schools.
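
For readers who wish to replicate this kind of exercise, the sketch below (in Python, using the pandas library) illustrates one way the county-level matching and comparisons described above might be carried out. The file names, column labels, and any figures it would produce are hypothetical illustrations, not the author’s actual data or code.

import pandas as pd

# Hypothetical inputs: an IPUMS extract of blacks aged 10-15 in 1870, and a
# constructed county-level file of bureau-aided school counts.
census = pd.read_csv("ipums_1870_blacks_10_15.csv")    # county_id, literate, attends_school
schools = pd.read_csv("bureau_schools_by_county.csv")  # county_id, n_schools, black_pop, area_sq_mi

df = census.merge(schools, on="county_id", how="left")
df["n_schools"] = df["n_schools"].fillna(0)

# Exposure indicator: at least one bureau-aided school in the individual's county.
df["bureau_school"] = (df["n_schools"] > 0).astype(int)
print("share exposed:", df["bureau_school"].mean())

# Concentration measures for exposed individuals, as reported in the text.
exposed = df[df["bureau_school"] == 1]
print("schools per 30 sq. miles:", (exposed["n_schools"] / exposed["area_sq_mi"] * 30).mean())
print("schools per 1,000 blacks:", (exposed["n_schools"] / exposed["black_pop"] * 1000).mean())

# Simple difference in mean literacy and school attendance by exposure.
print(df.groupby("bureau_school")[["literate", "attends_school"]].mean())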

Andrew Johnson and the Freedmen’s Bureau

Only eleven days after signing the bureau into existence, Abraham Lincoln was struck down by John Wilkes Booth. Taking his place in office was Andrew Johnson, a former Democratic Senator from Tennessee. Despite Johnson’s Southern roots, hopes were high that Congress and the new President could work together more closely than they had with the previous administration. President Lincoln and Congress had championed vastly different policies for Reconstruction. Lincoln preferred the term “Restoration” instead of “Reconstruction,” as he felt it was constitutionally impossible for a state to secede.15 Lincoln championed the quick integration of the South into the Union and believed it could best be accomplished under the direction of the executive branch. In contrast, Republicans in Congress led by Charles Sumner and Thaddeus Stevens felt the Confederate states had actually seceded and relinquished their constitutional rights. The Republicans in Congress advocated strict conditions for re-entry into the Union and programs aimed at reshaping society.

The ascension of Johnson to the presidency gave Congress hope that it would have an ally in the White House in terms of Reconstruction philosophy. According to Howard Nash, the “Radicals were delighted … to have Vice President Andrew Johnson, who they had good reason to suppose was one of their number, elevated to the presidency.”16 In the months before and immediately after taking office, Johnson repeatedly talked about the need to punish rebels in the South. After Lincoln’s death Johnson became more impassioned in his speeches. In late April 1865 Johnson told an Indiana delegation “Treason must be made odious…traitors must be punished and impoverished…their social power must be destroyed.”17 If anything, many feared that Johnson might stray too far from the Presidential Reconstruction offered by Lincoln and be overly harsh in his treatment of the South.

Immediately after taking office Johnson honored Lincoln’s choice to head the bureau by appointing General Oliver Otis Howard as commissioner. While this action raised hopes in Congress that it would be able to work with the new administration, Johnson quickly switched course. After his selection of Howard, President Johnson and the “Radical” Republicans would scarcely agree on anything during the remainder of his term. On May 29, 1865, Johnson issued a proclamation that conferred amnesty, pardon, and the restoration of property rights on almost all Confederate soldiers who took an oath pledging loyalty to the Union. Johnson later came out in support of the black codes of the South, which tried to return blacks to a position of near slavery, and argued that the Confederate states should be accepted back into the Union without the condition of ratifying the Fourteenth Amendment and adopting it in their state constitutions.

The original bill signed by Lincoln established the bureau during, and for a period of one year after, the Civil War. The language of the bill was somewhat ambiguous, and with the surrender of Confederate forces military conflict had ceased. This led people to debate when the bureau would be discontinued. The consensus seemed to be that if another bill was not brought forth, the bureau would be discontinued in early 1866. In response Congress quickly got to work on a new Freedmen’s Bureau bill.

While Congress started work on a new bill, President Johnson tried to gain support for the view that the need for the bureau had come to an end. Ulysses S. Grant was called upon by the President to make a whirlwind tour of the South and report on the present situation. The route was exceptionally brief and skewed toward those areas best under control. Accordingly, Grant’s report said that the Freedmen’s Bureau had done good work and that it appeared as though the freedmen were now able to fend for themselves without the help of the federal government.

In contrast, Carl Schurz made a long tour of the South only a few months after Grant and found the freedmen in a much different situation. In many areas the bureau was viewed as the only restraint on the most insidious treatment of blacks. As Q.A. Gillmore stated in the report,

“For reasons already suggested I believe that the restoration of civil power that would take the control of this question out of the hands of the United States authorities (whether exercised through the military authorities or through the Freedmen’s Bureau) would, instead of removing existing evils, be almost certain to augment them.”18

While the first bill was adequate in many ways, it was rather weak in a few areas. In particular, the bill did not include any appropriations for officers of the bureau or direct funds earmarked for the establishment of schools. General Howard and many of his officers reported on the great need for the bureau and pushed for its continuation indefinitely, or at least until the freedmen were in a less vulnerable position. After listening to the reports and the recommendations of General Howard, a new bill was crafted by Senator Lyman Trumbull, a moderate Republican. The new bill proposed that the bureau remain in existence until abolished by law, provide more explicit aid to education and land to the freedmen, and protect the civil rights of blacks. The bill passed in both the Senate and House and was sent to Andrew Johnson, who promptly vetoed the measure. In his response to the Senate, Johnson wrote “there can be no necessity for the enlargement of the powers of the bureau for which provision is made in the bill.”19

While the President’s message was definitive, the veto came as a shock to many in Congress. President Johnson had been consulted prior to its passage and had assured General Howard and Senator Trumbull that he would support the bill. In response to the President’s opposition, the Senate and House passed a bill that addressed some of Johnson’s complaints, including limiting the life of the bureau to two more years. Even after this watering down, the bill was once again vetoed. This time, however, it garnered enough support to override President Johnson’s veto. The veto and the subsequent override officially established a policy of open hostility between the legislative and executive branches. Prior to the Johnson administration, overriding a veto was extremely rare – it had occurred only six times.20 After the passage of this bill, however, it became commonplace for the remainder of Johnson’s term, as Congress would override fifteen vetoes during the less than four years Johnson was in office.

End of the Bureau

While work in the educational division picked up after the passage of the second bill, many of the other activities of the bureau were winding down. On July 25, 1868 a bill was signed into law requiring the withdrawal of most bureau officers from the states and ending the functions of the bureau except those related to education and claims. Although the educational activities of the bureau were to continue for an indefinite period of time, most state superintendent of education offices had closed by the middle of 1870. On November 30, 1870 Rev. Alvord resigned his post as General Superintendent of Education.21 While some small activities of the bureau continued after his resignation, they were scaled back greatly and largely consisted of correspondence. Finally, due to a lack of appropriations, the activities of the bureau ceased in March 1871.

The expiration of the bureau was somewhat anticlimactic. A number of representatives wanted to establish a permanent bureau or organization for blacks, so that it could regulate their relations with the national and state governments.22 However, this concept was too radical to pass by a margin large enough to override a veto. There was also talk of moving many of the bureau’s functions into other parts of the government. Over time, however, the appropriations dwindled and the urgency to work out a proposal for transfer withered away, much as the bureau itself did.

References

Alston, Lee J. and Joseph P. Ferrie. “Paternalism in Agricultural Labor Contracts in the U.S. South: Implications for the Growth of the Welfare State.” American Economic Review 83, no. 4 (1993): 852-76.

American Freedmen’s Inquiry Commission. Records of the American Freedmen’s Inquiry Commission, Final Report, Senate Executive Document 53, 38th Congress, 1st Session, Serial 1176, 1864.

Cimbala, Paul and Randall Miller. The Freedmen’s Bureau and Reconstruction: Reconsiderations. New York: Fordham University Press, 1999.

Congressional Research Service, http://clerk.house.gov/art_history/house_history/vetoes.html

Finley, Randy. From Slavery to Uncertain Freedom: The Freedmen’s Bureau in Arkansas, 1865-1869. Fayetteville: University of Arkansas Press, 1996.

Johnson, Andrew. “Message of the President: Returning Bill (S.60),” Pg. 3, 39th Congress, 1st Session, Executive Document No. 25, February 19, 1866.

McFeely, William S. Yankee Stepfather: General O.O. Howard and the Freedmen. New York: W.W. Norton, 1994.

Milton, George Fort. The Age of Hate: Andrew Johnson and the Radicals. New York: Coward-McCann, 1930.

Nash, Howard P. Andrew Johnson: Congress and Reconstruction. Rutherford, NJ: Fairleigh Dickinson University Press, 1972.

Parker, Marjorie H. “Some Educational Activities of the Freedmen’s Bureau.” Journal of Negro Education 23, no. 1 (1954): 9-21.

Q.A. Gillmore to Carl Schurz, July 27, 1865, Documents Accompanying the Report of Major General Carl Schurz, Hilton Head, SC.

Ruggles, Steven, Matthew Sobek, Trent Alexander, Catherine A. Fitch, Ronald Goeken, Patricia Kelly Hall, Miriam King, and Chad Ronnander. Integrated Public Use Microdata Series: Version 3.0 [Machine-readable database]. Minneapolis, MN: Minnesota Population Center [producer and distributor], 2004.

Shlomowitz, Ralph. “The Transition from Slave to Freedman Labor Arrangements in Southern Agriculture, 1865-1870.” Journal of Economic History 39, no. 1 (1979): 333-36.

Shlomowitz, Ralph. “The Origins of Southern Sharecropping.” Agricultural History 53, no. 3 (1979): 557-75.

Simpson, Brooks D. “Ulysses S. Grant and the Freedmen’s Bureau.” In The Freedmen’s Bureau and Reconstruction: Reconsiderations, edited by Paul A. Cimbala and Randall M. Miller. New York: Fordham University Press, 1999.

Citation: Troost, William. “Freedmen’s Bureau”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/the-freedmens-bureau/

Fraternal Sickness Insurance

Herb Emery, University of Calgary

Introduction

During the nineteenth and early twentieth centuries, lost income due to illness was one of the greatest risks to a wage earner’s household’s standard of living (Horrell and Oxley 2000, Hoffman 2001). Prior to the introduction of state health insurance in England in 1911, “patchworks of protection” — which included fraternal organizations, trade unions and workplace-based mutual benefit associations, commercial insurance contracts, and discretionary charity — were available to workers in England and North America. Within the patchwork, the largest source of illness-related income protection was friendly societies: voluntary organizations, such as fraternal orders and trade unions, that provided stipulated amounts of “relief” for members who were sick and unable to work. Conditions have changed since the 1920s. Health care for family members, not loss of the family head’s income, has become the chief cost of sickness. Government social programs and commercial group plans have become the principal sources of disability insurance and health insurance. Friendly societies have largely discontinued their sick benefits. Most of them, moreover, have had declining memberships in growing populations.

Overview

This article

  • Explains the types of fraternal orders that existed in the late nineteenth and early twentieth centuries and the types of insurance they offered.
  • Provides estimates of the share of the adult male population that participated in fraternal self-help organizations – over 40 percent in the UK and almost as high in the US – and describes the characteristics of these societies’ members.
  • Explains how friendly societies worked to provide sickness insurance at a reasonable price by overcoming the adverse selection and moral hazard problems, while facing problems of risk diversification.
  • Discusses the decline of fraternal sickness insurance after the turn of the twentieth century.
    • Concludes that fraternal lodges were financially sound despite claims that they were weakened by unsoundly pricing sickness insurance.
    • Examines the impact of competition from other insurers – including group insurance, government programs, labor unions, and company-sponsored sick-benefits societies.
    • Examines the impact of broader social and economic changes.
    • Concludes that fraternal sickness insurance was in greatest demand among young men and that its decline is tied mainly to the ageing of fraternal membership.
  • Closes by examining historians’ assessments of the importance and adequacy of fraternal sickness insurance.
  • Includes a lengthy bibliography of sources on fraternal sickness insurance.

Some Details and Definitions Pertaining to Fraternal Sickness Insurance

Fraternal orders were affiliated societies, or societies with branches. The branches were known by various names such as lodges, courts, tents, and hives. Fraternal orders emphasized benefits to their members rather than service to the community. They used secret passwords, rituals, and benefits to attract, bond, and hold members and distinguish themselves from members of rival orders.

Fraternal orders fell into three groups from an insurance perspective. The Masonic order and the Elks comprised the no-benefit group. Lodges in these orders often aided their members on a discretionary basis; that is, where members were determined to be in “need” of assistance. They did not provide stipulated (stated) insurance benefits (or relief).

A second group, the friendly societies, provided stipulated sick and funeral benefits to their members. The Independent Order of Odd Fellows, the Knights of Pythias, the Improved Order of Red Men, the Loyal Order of Moose, the Fraternal Order of Eagles, the Ancient Order of Foresters and the Foresters of America were the largest orders in this group.

A third group, the life-insurance orders, provided stipulated life-insurance, endowment, and annuity benefits to their members. The Maccabees, the Royal Arcanum, the Independent Order of Foresters, the Woodmen of the World, the Modern Woodmen of America, the Ancient Order of United Workmen, and the Catholic Order of Foresters were major orders in this group. In historical usage, the term “fraternal insurance” meant life insurance, but not sickness and funeral (burial) insurance.

The boundaries between the categories blur on close examination. Certain friendly societies, such as the Knights of Pythias and the Improved Order of Red Men, offered optional life-insurance at extra cost through their centrally-administered endowment branches. Certain insurance orders, such as the Independent Order of Foresters, offered optional sick and funeral benefits at extra cost through centrally-administered separate sickness and funeral funds. In other cases, the members of a society had privileged access to third-party insurance. The Canadian Odd Fellows Relief Association, for example, was entirely separate from the IOOF, but sold life policies exclusively to Odd Fellows.

Friendly Societies and Sickness Insurance

In the late eighteenth and early nineteenth centuries, friendly societies were often local lodges with no affiliations to other lodges. Over time, larger national and sometimes international orders that consisted of local lodges affiliated under jurisdictional grand lodges and national or international supreme bodies displaced the purely local lodge.1 The Ancient Order of Foresters was one of England’s larger affiliated orders, and it had subordinate Courts and jurisdictions in North America. The first Independent Order of Odd Fellows (IOOF) subordinate lodge in North America opened in Baltimore in 1819 under the jurisdiction of the British IOOF Manchester Unity. In the 1840s, the North American Odd Fellows seceded from the IOOFMU and founded the IOOF Sovereign Grand Lodge (SGL), which had jurisdiction over state- and province-level Grand Lodge jurisdictions in North America.

Membership Estimates

For the United Kingdom near the peak of the self-help movement in the 1890s, estimates of participation in friendly societies and trade unions for insurance against the costs of sickness and/or burial range from as many as 20 percent of the population (Horrell and Oxley 2000), to 41.2 percent of adult males (Johnson 1985), to one-half or more of adult males and as many as two-thirds of workingmen (Riley 1997). Estimates for participation in self-help organizations in North America are somewhat lower, but they suggest a similar importance of friendly societies for insuring households against the costs of sickness and burial. Beito (1999) argues that a conservative estimate of participation in fraternal self-help organizations in the United States would have one of three adult males as a member in 1920, “including a large segment of the working class.” Millis (1937) reports that 30 per cent of Illinois wage-earners had market insurance for the disability risk in 1919, with fraternal organizations the principal source of that insurance.

Characteristics of Friendly Society Members

Studies of British friendly societies suggest that friendly society membership was the “badge of the skilled worker” and made no appeal whatever to the “grey, faceless, lower third” of the working class (Johnson 1985, Hopkins 1995, Riley 1997). The major friendly societies in North America found their market for insurance among white, protestant males who came from upper-working-class and lower-middle-class backgrounds. Not surprisingly, the composition of local lodge memberships bore a resemblance to that of the local working population. Most Odd Fellows in Canada and the United States, however, were higher-paid workers, shop keepers, clerks, and farmers (Emery and Emery 1999). As Theodore Ross, the SGL’s grand secretary, noted in 1890, American Odd Fellows came from “the great middle, industrial classes almost exclusively.” Similarly, studies for Lynn, Massachusetts and Missouri found a heavy working-class representation among IOOF lodge memberships (Cumbler, 1979, p.46; Thelen, 1986, p. 165). In Missouri the social-class composition of Odd Fellows was similar to that of the Knights of Pythias and three life-insurance orders (the Ancient Order of United Workmen, the Maccabees, and the Modern Woodmen of the World). Beito’s (2000) work suggests that while the poor, non-whites and immigrants were not usually members of the larger fraternal orders, they had their own mutual aid organizations.

Friendly Insurance: Modest Benefits at Low Cost

Friendly society sick benefits exemplified classic features of working-class insurance: low cost and a small, fixed benefit equal to part of the wages of a worker with average earnings. By contrast, commercial policies for middle-class clients offered insurance in variable amounts up to full-income replacement, at a cost beyond the reach of most workers. The affiliated orders established Constitutions which standardized rules and arrangements for sick benefit provision. For most of the friendly societies, local lodges or courts paid the sick claims of their members. Subject to requirements of higher bodies, the local lodge set the amounts of its weekly benefit, joining fees, and membership dues. The affiliation of lodges across locations also gave members portable sickness insurance: if a member moved from one location to another, he could transfer his membership from one lodge to another within the organization.

Claiming Benefits

To claim benefits in the IOOF, a member had to provide his lodge with notice of sickness or disability within a week of its commencement. On receiving notice of a brother’s illness, a member of the visiting committee was to visit the brother within twenty-four hours to render him aid and confirm his sickness. Subsequently, the lodge visitors reported weekly on the brother’s condition until he recovered.

Strengths of Friendly Societies’ Insurance: Low Overhead, Effective Monitoring

The local lodge or court system of the affiliated friendly societies like the IOOF and the Ancient Order of Foresters had important strengths for the sickness-insurance market. First, it had low overhead costs. Lodge members, not paid agents, recruited clients. Nominally-paid or unpaid lodge officers did the administrative work. Second, the intrusive methods of monitoring within the lodge system helped friendly societies to respond effectively to two classic problems in sickness insurance: adverse selection and moral hazard.

Overcoming the Adverse Selection Problem

Adverse selection refers to the fact that when sickness insurance is priced to reflect the average risk of a specified population, unhealthy persons (those with an above-average risk of sickness) have more incentive than healthy persons to purchase it. Adverse selection in fraternal memberships was potentially a large problem, as many orders had membership dues that were not scaled by age despite the reality that the risk of sickness increased with age. To keep claims and costs manageable, an insurer needs ways to screen out poor risks. To this end, many organizations scaled initiation fees by the age of an initiate to discourage applications from older males, who had above-average sickness risk. In other cases, fraternal lodges or courts scaled the membership dues by the age at which the member was initiated. In addition, lodge-approved physicians often examined the physical condition and health histories of applicants for membership, and lodge committees investigated the “moral character” of applicants.
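
The logic can be made concrete with a stylized calculation. The Python sketch below uses entirely hypothetical dollar amounts and sickness risks (none are drawn from the article) to show why a flat premium set at the membership-average expected claim attracts above-average risks, and how an initiation fee scaled by age can claw back part of the expected subsidy to an older applicant.

# Hypothetical figures only: the benefit, dues, and age-specific sickness risks are illustrative.
weekly_benefit = 5.0
expected_sick_weeks = {25: 0.6, 35: 0.9, 45: 1.5, 55: 2.5}  # expected weeks of certified sickness per year

avg_claim = weekly_benefit * sum(expected_sick_weeks.values()) / len(expected_sick_weeks)
flat_dues = avg_claim  # dues priced to the average member, regardless of age

for age, weeks in expected_sick_weeks.items():
    expected_claim = weekly_benefit * weeks
    net_gain = expected_claim - flat_dues      # positive => the applicant expects to be subsidized
    initiation_fee = max(0.0, net_gain) * 5    # e.g., charge roughly five years' worth of the subsidy up front
    print(f"age {age}: expected claim {expected_claim:.2f}, net gain at flat dues {net_gain:+.2f}, "
          f"age-scaled initiation fee {initiation_fee:.2f}")

Under these made-up numbers the oldest applicant expects to draw several dollars a year more in benefits than he pays in flat dues; an age-scaled joining fee, age-scaled dues, or a medical examination is what offsets or screens out that expected subsidy.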

Overcoming the Moral Hazard Problem

Sickness insurers also faced the problem of moral hazard (malingering): an insured person has an incentive to claim to be disabled when he is not, and an incentive not to take due care in avoiding injury or illness. The moral hazard problem was small for accident insurance, as disability from an accident is definite as to time and cause, and external symptoms are usually self-evident (Osborn, 1958). Disability from sickness, by contrast, is subjective and variable in definition. Friendly societies defined sickness, or disability, as the inability to work at one’s usual occupation. Relatively minor complaints disabled some individuals, while serious complaints failed to incapacitate others. The very possession of sickness insurance may have increased a worker’s willingness to consider himself disabled. The friendly society benefit contract dealt with this problem in several ways. First, by having one- to two-week waiting periods and much less than full earnings replacement, self-help benefits required the disabled member to co-insure the loss, which reduced the incentive to make a claim. In many fraternal orders, members receiving benefits could not drink or gamble and in some cases were not allowed to be away from their residence after dark. The activities of the lodge visiting committee helped to ward off false claims. In addition, fraternal ideology emphasized a member’s moral responsibility not to make a false claim and to report brothers who were falsely claiming benefits.
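
A back-of-the-envelope example, with hypothetical figures not taken from the article, shows how the waiting period and partial earnings replacement together forced the member to co-insure a sickness spell:

# Hypothetical figures: the wage, benefit, waiting period, and spell length are illustrative only.
weekly_wage = 12.0
weekly_benefit = 5.0
waiting_weeks = 2        # no benefit paid for the first two weeks of a spell
spell_weeks = 6

paid_weeks = max(0, spell_weeks - waiting_weeks)
benefit_received = weekly_benefit * paid_weeks
wages_lost = weekly_wage * spell_weeks
share_borne_by_member = 1 - benefit_received / wages_lost
print(f"benefit {benefit_received:.2f}, wages lost {wages_lost:.2f}, "
      f"member co-insures {share_borne_by_member:.0%} of the loss")

With these made-up numbers the member bears roughly seventy percent of the income loss himself, which blunts the incentive to report a minor complaint as disabling.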

Problem with Lack of Risk Diversification

On the negative side, the fraternal-lodge system made little provision for risk diversification. In the IOOF, the Knights of Pythias and the Ancient Order of Foresters, each subordinate lodge (or Court) was responsible for the sick claims of its members. Thus in principle, a high local rate of sick claims in a given year could shock a lodge’s financial condition. Certain commercial practices might have reduced the problem. For example, a grand lodge could have pooled the risks from all lodges in a central fund. Alternatively, it could have initiated a scheme of reinsurance, whereby each lodge assumed a portion of the claims in other lodges. Yet any centralization stood to weaken a friendly society’s management of adverse selection and moral hazard. The behaviour of lodge members was observed to be a function of the structure of the benefit system. In 1908, for example, when the IOOF, Manchester Unity, in New South Wales, Australia established central funds for sick and funeral benefits, the effect was to turn the lodges into “mere collection agencies.” Participation in lodge affairs fell off, and members developed a more selfish attitude to claims. “When the lodges administered sick pay,” Green and Cromwell observed, “the members knew who was paying — it was the members themselves. But once ‘head office’ took over, the illusion that someone else was paying made its entry” (Green and Cromwell, 1984, pp. 59-60).

Commercial Insurers Couldn’t Match Friendly Societies in the Working-Class Sickness Insurance Market

On balance friendly societies provided an efficient delivery of working-class sickness insurance that commercial insurers could not match. Without the intrusive screening methods and low overhead of the decentralized lodge system, commercial insurers could not as easily solve the problems of moral hazard and adverse selection. “The assurance of a stipulated sum during sickness,” the president of the Prudential Insurance Company conceded in 1909, “can only safely be transacted … by fraternal organizations having a perfect knowledge of and complete supervision over the individual members.”2

The Decline of Fraternal Sickness Insurance

By the 1890s, friendly societies in North America were withdrawing from the sickness insurance field. The IOOF imposed limits on the length of time that full sick benefits had to be paid, and one- or two-week waiting periods before the payment of claims began. In 1894, the Knights of Pythias eliminated the constitutional requirement that all subordinate lodges pay stated sick benefits. By the 1920s, the IOOF had followed the Knights of Pythias and eliminated its compulsory requirement for the payment of stipulated sick benefits. In England, where friendly societies had opposed government pension and insurance schemes in the 1890s, they did not stand in the way of the introduction of Old Age Pensions in 1908 and compulsory state health insurance in 1911. Thus, the decline of fraternal sickness insurance pre-dates the Depression of the 1930s and for many organizations dates from at least the 1890s.

Unsound Pricing Practices?

Why did sickness insurance provided by friendly societies decline? Perhaps friendly society sickness insurance was a casualty of unsound pricing practices in the presence of ageing memberships. To illustrate this argument, consider the IOOF benefit contract. On the one hand, the incidence and duration of sickness claims increased with a member’s age. On the other hand, most IOOF lodges set quarterly dues at a flat rate, rather than by the member’s age or the member’s age at joining. As the IOOF lodge benefit arrangement was essentially insurance provided on a pay-as-you-go basis (current revenues are used to meet current expenditures), this posed little problem during a lodge’s early years, when its members were young and had low sick-claim rates. Over time, however, the members aged and their claim rates showed a rising trend. When revenues from level dues became insufficient to cover claims, the argument goes, the lodge’s insurance provision collapsed. On this view, fraternal-insurance provision was essentially a failed, experimental phase in the development of sickness and health insurance.
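
The mechanics of this argument can be illustrated with a stylized simulation. The Python sketch below, using hypothetical dues, benefit, and sickness-risk figures (not drawn from IOOF records), follows a closed lodge whose members all join at age 25 and never leave: with flat dues and sickness risk rising with age, expected claims eventually outrun dues income.

# All parameters are hypothetical; the point is the shape of the trend, not the levels.
members = 100
annual_dues = 6.0
weekly_benefit = 5.0

def expected_sick_weeks(age):
    # illustrative schedule: expected weeks of sickness per year, rising with age
    return 0.5 + 0.05 * max(0, age - 25)

for years_since_founding in range(0, 41, 5):
    age = 25 + years_since_founding
    dues_income = members * annual_dues
    expected_claims = members * weekly_benefit * expected_sick_weeks(age)
    print(f"average age {age}: dues {dues_income:.0f}, expected claims {expected_claims:.0f}, "
          f"surplus {dues_income - expected_claims:+.0f}")

Under these made-up parameters the lodge runs surpluses for roughly its first decade and growing deficits thereafter, which is the collapse the argument predicts; the next sections explain why, in practice, that collapse did not occur.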

Lodges Were Financially Sound Despite Non-Actuarial Pricing

By contrast with the above scenario, evidence for British Columbia showed that the IOOF lodges were financially sound, despite their non-actuarial pricing practices (Emery 1996). Typically a lodge accumulated assets during its first years of operation, when its members were young and had below-average sickness risk. In later years, as its membership aged and the cost of claims exceeded income from members’ dues and fees, income from investments made up the difference. Consequently none of British Columbia’s twenty lodge closures before 1929 resulted from the bankruptcy of lodge assets. Similarly none of the British Columbia lodges had a significant probability of ruin from high claims in a particular year.

Non-payment of dues also helped lodge finances. A member became ineligible for benefits if he fell behind in his dues. If he fell far enough behind on his dues, his lodge could suspend him from membership or declare him “ceased” (dropped from membership). A member’s unpaid dues continued to accumulate after suspension. Thus a suspended member had to pay the full, accumulated amount (or a maximum sum, if his grand lodge set one), to get reinstated. Lodges did not pay sick claims to members who were in arrears.

Turnover of Membership Explains How Lodges Remained Financially Sound

When members did not pay the dues owing to be reinstated, their exit from membership relieved lodge financial pressures. Most men joined fraternal lodges when they were under age 35, and members who quit typically did so before age 40.3 Thus, a substantial proportion of initiates did not remain in the membership long enough for their rising risk of illness after age 40 to pose a problem for lodge finances. On average, they belonged when they were most likely net payers and quit before they became net recipients. This substantial turnover in fraternal memberships helps to explain how fraternal lodges were going concerns even though official actuarial valuations of lodge finances and reserves inevitably showed that the lodges had actuarial deficits at the prevailing levels of dues. These valuations assessed whether accumulated reserves, plus the dues revenues expected from the current membership over the members’ remaining lifetimes, would be adequate to meet the benefits that membership could be expected to claim over the same period. The assumption that all current members would remain in the membership until death always produced valuations showing that the sick benefits were inadequately, if not hazardously, priced. The fact that many members were not lifetime members meant that the pricing was not so hazardous.
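
The same point can be put in present-value terms. The sketch below extends the hypothetical risk schedule used above: it computes the discounted value of dues minus expected claims for a single initiate, first under the valuation assumption that he remains a member until old age, and then with an assumed annual probability of quitting. All parameters are illustrative assumptions, not figures from lodge records, and the discounting stands in loosely for the investment income mentioned above.

# Hypothetical parameters only.
annual_dues = 6.0
weekly_benefit = 5.0
interest = 0.04

def expected_sick_weeks(age):
    return 0.5 + 0.05 * max(0, age - 25)   # same illustrative risk schedule as above

def present_value_per_initiate(join_age, quit_rate, last_age=65):
    """Discounted dues minus expected claims for one initiate who joins at join_age
    and leaves the membership each year with probability quit_rate."""
    pv, still_member = 0.0, 1.0
    for t in range(last_age - join_age):
        age = join_age + t
        net = annual_dues - weekly_benefit * expected_sick_weeks(age)
        pv += still_member * net / (1 + interest) ** t
        still_member *= (1 - quit_rate)
    return pv

print(f"assuming lifetime membership: {present_value_per_initiate(25, 0.00):+.2f}")
print(f"assuming a 10% annual quit rate: {present_value_per_initiate(25, 0.10):+.2f}")

Under the lifetime-membership assumption each initiate is a net drain in present-value terms, which is the kind of “actuarial deficit” the official valuations reported; once a plausible quit rate is allowed for, the typical initiate leaves while still a net payer and the same dues schedule is adequate.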

Competition from Other Insurers

If poor finances cannot explain the decline of friendly society sickness benefits, then perhaps increasing competition from government and commercial insurance arrangements can. Trends in competition, however, do not provide strong support for this explanation. Competition for friendly societies came from commercial group plans, government workmen’s compensation programs, trade unions and industrial unions, company-sponsored mutual benefit societies, and other fraternal orders that provided life insurance or non-stipulated (discretionary) relief.

Group Insurance

Group insurance used the employer’s mass-purchasing power to provide low-cost insurance without a medical examination (Ilse, 1953, chapter 1). Often the employer paid the premium. Otherwise employees paid part of the cost through payroll deductions, a practice that kept the insurer’s overhead costs low. The insurance company made the group-plan contract with the employer, who then issued certificates to individuals in the plan. Group plans compared favourably with IOOF benefits in terms of cost and the amount of the benefit. They also gave a viable commercial solution to the problems of adverse selection and moral hazard.

During the 1920s, however, group plans were available to few workers. In the United States, they missed men who were self-employed or employed in firms with fewer than fifty workers. The employee’s coverage ceased if he left the company. It also stopped if either the insurer or the employer did not renew the contract at the end of its standard one-year term. When coverage ceased, the employee might find himself too old or unhealthy to obtain insurance elsewhere. More importantly, the challenge of commercial group insurance was just beginning during the 1920s. By 1929 the number of Americans and Canadians in group plans was smaller than the number of Odd Fellows alone.

Government Programs

Government programs such as compulsory sickness insurance dated from 1883 in Germany and 1911 in Britain. Between 1914 and 1920, eight state commissions, two national conferences, and several state legislatures attended to the issue in the United States (see Armstrong, 1932, Beito 2000, Hoffman 2001). Despite these initiatives, no American or Canadian government — national, state, or provincial — adopted compulsory sickness insurance until the 1940s (Osborn, 1958, chapter 4; Ilse, 1953, chapter 8).

Workmen’s compensation was another matter. During the years 1911-25, forty-two of the forty-eight American states and six of Canada’s nine provinces passed workmen’s compensation laws (Weis, 1935; Leacy, 1983). Nevertheless, half of all state laws in 1917, and a fifth of them in 1932, applied only to persons in hazardous occupations. None of the various state laws covered employees of interstate railways. In twenty-four states, the law exempted small businesses; in five it exempted public employees. In some states the law was so hedged with restrictions that the scale of benefits was uncertain. Although comprehensive by American standards, Ontario’s law omitted persons in farming, wholesale and retail establishments, and domestic service (Guest, 1980).

Overall, government programs provided negligible competition for friendly society sick benefits during the 1920s. No state or province provided for compulsory sickness insurance. Workmen’s compensation laws were commonplace, but missed important parts of the workforce. More importantly, industrial accidents accounted for just ten percent of all disability (Armstrong, 1932, pp. 284ff; Osborn, 1958, chapter 1).

Labor Unions

Labor unions traditionally used benefits to attract members and hold the loyalty of existing members. During the 1890s miners’ unions in the American west and British Columbia reportedly devoted more time to mutual aid than to collective bargaining (Derickson, 1988, chapter 3). By 1907 nineteen unions, accounting for 25 per cent of organized labor in the United States, offered sick benefits (Rubinow, 1913, chapter 18). During the 1920s, however, the competition that unions posed to friendly societies followed a declining trend. After years of steady growth, for example, the membership of American trade unions dropped by 32 per cent between 1920 and 1929.4 Similarly, the membership of Canadian trade unions fell by 23 per cent between 1919 and 1926. In an unprecedented development in 1926, the street railway workers’ union in Newburgh, New York, obtained commercial group-sickness coverage through a collective bargaining agreement with the employer (Ilse, 1953, ch. 13). Although rare during the 1920s, this marked the start of collective bargaining for sick benefits rather than direct union provision.

Company-sponsored Sick-Benefit Societies

Company-sponsored sick-benefit societies, often known as Mutual Benefit Associations, originated in a tradition of corporate paternalism during the 1870s (Brandes, 1976; Brody, 1980; Zahavi, 1988; McCallum, 1990). The United States had more than 500 such societies by 1908. Typically these societies obtained most or all of their funds from employee dues, not company funds, ostensibly to encourage the workers to be self-reliant.

Participation was voluntary in 85 per cent of 461 American societies surveyed on the eve of the First World War. Eligibility for membership commonly required a waiting period (a minimum period of permanent employment). A major disadvantage, compared to fraternal-order sickness benefits, was that coverage ceased when the employee left the firm. In the amount and cost of the benefit (benefits of $5 to $6 per week for up to thirteen weeks, for dues of $2.50 to $6 per year) the societies were similar to fraternal lodges.

The institutions were part of a larger program of corporate welfarism that had developed during the First World War in conditions of labor scarcity, labor unrest, rising union membership, and government management of capital-labor relations. At the war’s end, however, the economy slumped, the supply of labor became abundant, unions became cooperative and were losing members, and wartime government-economic management ended. In the new circumstances, the pressure on businessmen to promote welfare programs abated, and the membership of company-sponsored sick-benefit societies entered a flat trend.5 By 1929 the societies were still a minority phenomenon. They existed in 30 percent of large firms (250 or more employees), but in just 4.5 percent of small firms, which accounted for half the industrial work force (Jacoby, 1985, ch.6).

Competition from Insurance Orders

Friendly societies (orders with sick and funeral benefits) also competed with the insurance orders (orders with life and/or annuity benefits in small amounts) that offered an optional sick benefit. The Maccabees, the Woodmen of the World, the Independent Order of Foresters, and the Royal Arcanum were the friendly societies’ main rivals in the insurance-order group.

The insurance-order sick benefit had several features of commercial insurance and compared poorly with the friendly-society benefit. In many cases, these orders paid sick claims from a centrally-administered “sick and funeral fund,” not local lodge funds. They financed sick claims by requiring monthly premiums, paid in advance, not quarterly dues. Their central authority could cancel the member’s sickness insurance by giving him notice; in the IOOF, by contrast, the member retained his coverage as long as his dues were paid up. A member could draw benefits for a maximum of twenty-six weeks in the Maccabees and a maximum of twelve weeks in the IOF. During the 1920s, competition from fraternal life insurance orders showed a flat or declining trend. In terms of membership size, the largest friendly society, the IOOF, gained ground on all competitors in the insurance-order group.

Broader Economic and Social Trends in the 1920s

Another popular explanation for the decline of friendly society sick benefits is one of “changing times,” in which friendly societies provided an outdated social arrangement. On this view, fraternal orders were multiple-function organizations that offered their members a variety of social and indirect economic benefits, as well as insurance. Thus in principle, the declining trend for IOOF sickness insurance could have been a by-product of social changes during the 1920s that were undermining the popularity of fraternal lodges (Dumenil, 1984; Brody, 1980; Carnes, 1989; Charles, 1993; Clawson, 1989; Rotundo, 1989; Burley, 1994; Tucker, 1990). For example, the fraternal-lodge meeting faced competition from new forms of entertainment (radio, cinema, automobile travel). The development of installment buying and consumerism undermined fraternal culture and working-class institutional life. Trends in sex relations sapped the appeal of all-male social activities and of the fraternal ritual of lodge meetings. The rising popularity of luncheon-club organizations (Kiwanis, Lions, Kinsmen) expressed a popular shift to a community-service orientation, as opposed to the fraternal tradition of services to members. The luncheon clubs also exemplified a popular shift to class-specific organizations, at the expense of fraternal orders, which had a cross-class appeal. Finally, with the waning popularity of lodge meetings, lodge nights became less useful occasions for making business contacts.

Rising Health-Care Costs

The decade also gave rise to two important insurance-related developments. One, described above, was the diffusion of commercial group plans for income-replacement insurance. The other was the emergence of health-care services as the principal cost of sickness (Starr, 1982). In 1914 lost wages had been between two and four times the medical costs of a worker’s sickness, or about equal if one included the worker’s family. During the 1920s, however, medical costs soared: by 20 per cent for families with less than $1,200 in income and by 85 per cent for families with incomes between $1,200 and $2,500. The medical costs were highly variable as well as rising. Effectively, a serious hospitalized illness could consume a third to a half of a family’s annual income.

External Changes and Competition Don’t Explain the Decline of Fraternal Sickness Insurance Well

Changes during the 1920s, however, provide a poor explanation for the declining trend for the friendly-society sick benefit in North America. First, the timing was wrong. On the one hand, the declining trend dated from the 1890s, not the 1920s. On the other hand, key developments during the decade were at an early stage. By 1929 commercial group insurance was established, but not widespread. Similarly, health insurance scarcely existed, despite the rising trend for health-care costs. As Starr explains, health insurance presented an extreme problem of moral hazard that insurers did not solve until the 1930s.6 Second, we lack a theory to explain why the waning of interest in lodge meetings would have caused a declining trend for the sick benefit. Finally, the “changing times” explanation, on its own, incorrectly portrays the sick benefit as a static product that became less relevant in an exogenously changing society and economy.

Young Men Value Sickness Insurance

If external pressure did not cause the decline of the friendly society sick benefits, then why did friendly society sickness insurance decline? Emery and Emery (1999) argue that the sick benefit was primarily in demand amongst men who lacked alternatives to market insurance. For example, at the start of their working lives, male breadwinners had no older children to earn secondary incomes (family insurance). They also lacked savings to cover the disability risk (self-insurance). Thus men joined the Odd Fellows when they were “young”. They then quit after a few years as family and self-insurance alternatives to market insurance opened up to them. Further, as the friendly society sick benefit was a form of precautionary saving, demand for it would have declined as a household accumulated wealth.

Aging Membership and the Declining Demand for Sickness Insurance

Over time, fraternal memberships were ageing as rates of initiation slowed while suspensions from membership continued at steady rates. Initiates and suspended members were disproportionately drawn from the lower age groups in the memberships; thus slower membership growth in the friendly societies meant ageing memberships. Given this life-cycle pattern of demand for the sick benefit, ageing fraternal memberships became less attached to it. As the memberships aged, their collective preferences changed: older members had priorities and objectives other than sickness insurance.

Friendly Societies and Compulsory State Insurance

Despite the similarity of organizations and the high rates of participation in them in the late nineteenth and early twentieth centuries, the role of voluntary self-help organizations like the friendly societies diverged on either side of the Atlantic. In England, the “administrative machinery” of friendly societies was the vehicle for introducing and delivering compulsory government sickness/health insurance under the Approved Societies system that prevailed from 1911 to 1944, at which time the government centralized the provision of health insurance (Gosden 1973). In North America the friendly society sickness insurance arrangement declined from at least the 1890s, despite growing memberships in the organizations up to the 1920s. While friendly society sickness insurance declined, government showed little activity in the health/sickness insurance field. Only from the 1930s onward did commercial and non-profit group health and hospital insurance plans and government social programs rise to primacy in the sickness and health insurance field.7

Critics of Friendly Societies’ Voluntary Self-Help

Critics of voluntary self-help arrangements for insuring the costs of sickness argue that voluntary self-help was a failed system and that its obvious shortcomings and financial difficulties were the impetus for government involvement in social insurance arrangements (Smiles 1876, Moffrey 1910, Peebles 1936, Gosden 1961, Gilbert 1965, Hopkins 1995, Horrell and Oxley 2000, Hoffman 2001). Horrell and Oxley (2000) argue that friendly society benefits were too paltry to offer true relief. Hopkins (1995) argues that for those workers who could afford it, self-help through friendly society membership worked well, but too much of the working population remained outside the safety net due to low incomes. At best, the critics applaud the initiative of individuals in seeking to protect themselves and credit friendly societies with pioneering the preparation of actuarial data on morbidity and sickness duration, which aided commercial insurers in insuring the sickness risk in a financially sound way.

Positive Assessments of Friendly Societies’ Roles

In contrast, Beito (2000) presents a positive assessment of fraternal mutual aid in the United States, and hence of working-class self-help, for dealing with the economic consequences of poor health. Beito argues that fraternal societies in America extended social welfare services, such as insurance, to the poor (notably immigrants and blacks) and to working-class Americans who otherwise would not have had access to such coverage. Far from being an inadequate form of safety net, fraternal mutual aid sustained needy Americans from cradle to grave and, over time, extended the range of benefits provided to include hospitals and homes for the aged as the needs in society arose. Beito suggests that changing cultural attitudes and the expanding scale and scope of a paternalistic welfare state undermined an efficient and viable fraternal social insurance arrangement.

Government’s Role in “Crowding Out” Self-Help

Similarly, Green and Cromwell (1984) argue that state paternalism crowded out efficient fraternal methods of social insurance in Australia. Hopkins (1995) suggests that while friendly societies were effective for aiding a sizable portion of the working class, working class self-help “had been weighed in the balance and found wanting” since it failed to provide income protection for the working classes as a whole. Hopkins concludes that compulsory state aid inevitably had to replace voluntary self-help to “spread the net over the abyss” to protect the poorest of the working class. Similar to Beito’s view, Hopkins suggests that equity considerations were the reason for undermining otherwise efficient voluntary self-help arrangements. Beveridge (1948) expresses dismay over the crowding out of friendly societies as social insurers in England following the centralization of compulsory government health insurance arrangements in 1944.

References:

Applebaum, L. “The Development of Voluntary Health Insurance in the United States.” Journal of Insurance 28 (1961): 25-33.

Armstrong, Barbara N. Insuring the Essentials. New York: MacMillan, 1932.

Beito, David. From Mutual Aid to the Welfare State: Fraternal Societies and Social Services, 1890-1967. Chapel Hill: University of North Carolina Press, 2000.

Berkowitz, Edward. “How to Think About the Welfare State” Labor History 32 (1991): 489-502

Berkowitz, Edward and Monroe Berkowitz, “Challenges to Workers’ Compensation: An Historical Analysis.” In Workers’ Compensation Benefits: Adequacy, Equity, and Efficiency, edited by John D. Worrall and David Appel. Ithaca, NY: ILR Press, 1985.

Berkowitz, Edward and Kim McQuaid. “Businessman and Bureaucrat: the Evolution of the American Welfare System, 1900-1940.” Journal of Economic History 38 (1978): 120-41.

Berkowitz, Edward and Kim McQuaid. Creating the Welfare State: The Political Economy of Twentieth Century Reform. New York: Praeger, 1988.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin, 1960.

Bradbury, Bettina. Working Families, Age, Gender, and Daily Survival in Industrializing Montreal. Toronto: McClelland and Stewart, 1993.

Brandes, Stuart D. American Welfare Capitalism 1880-1940. Chicago: University of Chicago Press, 1976.

Brody, David. Workers in Industrial America: Essays on the Twentieth Century Struggle. New York: Oxford University Press, 1980.

Brumberg, Joan Jacobs, and Faye E. Dudden. “Masculinity and Mumbo Jumbo: Nineteenth-Century Fraternalism Revisited.” Reviews in American History 18 (1990): 363-70 [review of Carnes].

Burley, David G. A Particular Condition in Life, Self-Employment and Social Mobility in Mid-Victorian Brantford, Ontario. McGill-Queen’s University Press, 1994.

Burrows, V.A. “On Friendly Societies since the Advent of National Health Insurance.” Journal of the Institute of Actuaries 63 (1932): 307-401

Carnes, Mark C. Secret Ritual and Manhood in Victorian America. New Haven: Yale University Press, 1989.

Charles, Jeffrey A. Service Clubs in American Society, Rotary, Kiwanis, and Lions. Urbana: University of Illinois Press, 1993.

Clawson, Mary Ann. Constructing Brotherhood: Class, Gender, and Fraternalism. Princeton: Princeton University Press, 1989.

Cordery, Simon. “Fraternal Orders in the United States: A Quest for Protection and Identity.” In Social Security Mutualism: The Comparative history of Mutual Benefit Societies, edited by Marcel Van der Linden, 83-110. Bern: Peter Lang, 1996.

Cordery, Simon. “Friendly Societies and the Discourse of Respectability in Britain, 1825-1875.” Journal of British Studies 34, no. 1 (1995): 35-58

Costa, Dora. “The Political Economy of State Provided Health Insurance in the Progressive Era: Evidence from California.” National Bureau of Economic Research Working Paper, no. 5328, 1995

Cumbler, John T. Working-Class Community in Industrial America: Work, Leisure, and Struggle in Two Industrial Cities, 1880-1930. Westport: Greenwood Press, 1979.

Davis, K. “National Health Insurance: A Proposal.” American Economic Review 79, no. 2 (1989): 349-352

Derickson, Alan. Workers’ Health, Workers’ Democracy: The Western Miners’ Struggle, 1891-1925. Ithaca: Cornell University Press, 1988.

Dumenil, Lynn. Freemasonry and American Culture 1880-1930. Princeton: Princeton University Press, 1984.

Ehrlich, Isaac and Gary S. Becker. “Market Insurance, Self-Insurance, and Self-Protection.” Journal of Political Economy 80, no. 4 (1972): 623-648.

Emery, J.C. Herbert. The Rise and Fall of Fraternal Methods of Social Insurance: A Case Study of the Independent Order of Oddfellows of British Columbia Sickness Insurance, 1874-1951. Ph.D. Dissertation: University of British Columbia, 1993.

Emery, J.C. Herbert. “Risky Business? Nonactuarial Pricing Practices and the Financial Viability of Fraternal Sickness Insurers.” Explorations in Economic History 33 (1996): 195-226.

Emery, George and J.C. Herbert Emery. A Young Man’s Benefit: The Independent Order of Odd Fellows and Sickness Insurance in the United States and Canada, 1860-1929. Montreal: McGill-Queen’s University Press, 1999.

Fischer, Stanley. “A Life Cycle Model of Life Insurance Purchases.” International Economic Review 14, no. 1 (1973): 132-152.

Follmann, J.F. “The Growth of Group Health Insurance.” Journal of Risk and Insurance 32 (1965): 105-112.

Galanter, Marc. Cults, Faith, Healing and Coercion. New York: Oxford University Press, 1989.

Gilbert, B.B. “The Decay of Nineteenth-Century Provident Institutions and the Coming of Old Age Pensions in Great Britain.” Economic History Review, 2nd Series 17 (1965): 551-563.

Gilbert, B.B. The Evolution of National Health Insurance in Great Britain: The Origins of the Welfare State. London: Michael Joseph, 1966.

Gist, Noel P. “Secret Societies: A Cultural Study of Fraternalism in the United States.” University of Missouri Studies XV, no. 4 (1940): 1-176.

Gosden, P. The Friendly Societies in England 1815 to 1875. Manchester: Manchester University Press, 1961.

Gosden, P. Self-Help: Voluntary Associations in the 19th Century. London: B.T. Batsford, 1973.

Gourinchas, Pierre-Olivier and Jonathan A. Parker. “The Empirical Importance of Precautionary Savings.” National Bureau of Economic Research Working Paper no. 8107, 2001.

Gratton, Brian. “The Poverty of Impoverishment Theory: The Economic Well-Being of the Elderly, 1890-1950.” Journal of Economic History 56, no. 1 (1996): 39-61.

Green, D.G. and L.G. Cromwell. Mutual Aid or Welfare State: Australia’s Friendly Societies. Boston: Allen & Unwin, 1984.

Greenberg, Brian. “Worker and Community: Fraternal Orders in Albany, New York, 1845-1885.” Maryland Historian 8 (1977): 38-53.

Guest, D. The Emergence of Social Security in Canada. Vancouver: University of British Columbia Press, 1980.

Haines, Michael R. “Industrial Work and the Family Life Cycle, 1889-1890.” Research in Economic History 4 (1979): 289-356.

Hirschman, Albert O. Exit, Voice, and Loyalty, Responses to Decline in Firms, Organizations, and States. Cambridge: Harvard University Press, 1970.

History of Odd-Fellowship in Canada under the Old Regime. Brantford: Grand Lodge of Ontario, 1879.

History of the Maccabees, Ancient and Modern, 1881 to 1896. Port Huron, 1896.

Hopkins, Eric. Working-Class Self-Help in Nineteenth-Century England: Responses to Industrialization. New York: St. Martin’s Press, 1995.

Hoffman, Beatrix. The Wages of Sickness: The Politics of Health Insurance in Progressive America. Chapel Hill: University of North Carolina Press, 2001.

Horrell, Sara and Deborah Oxley. “Work and Prudence: Household Responses to Income Variation in Nineteenth Century Britain.” European Review of Economic History 4, no. 1 (2000): 27-58.

Ilse, Louise Wolters. Group Insurance and Employee Retirement Plans. New York: Prentice-Hall, 1953.

Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.

James, Marquis. The Metropolitan Life: A Study in Business Growth. New York: Viking Press, 1947.

Lubove, Roy. The Struggle for Social Security: 1900-1935. Cambridge: Harvard University Press, 1968.

Lynd, Robert S. and Helen Merrell Lynd. Middletown: A Study in Contemporary American Culture. New York: Harcourt, Brace & World, 1929.

MacDonald, Fergus. The Catholic Church and Secret Societies in the United States. New York: U.S. Catholic Historical Society, 1946.

Markey, Raymond. “The History of Mutual Benefit Societies in Australia, 1830-1991.” In Social Security Mutualism: The Comparative History of Mutual Benefit Societies, edited by Marcel Van der Linden, 147-76. Bern: Peter Lang, 1996.

McCallum, Margaret E. “Corporate Welfarism in Canada, 1919-39.” Canadian Historical Review LXXI, no. 1 (1990): 49-79.

Millis, Harry A. Sickness Insurance: A Study of the Sickness Problem and Health Insurance. Chicago: University of Chicago Press, 1937.

Moffrey, R.W. A Century of Odd Fellowship. Manchester: IOOFMU G.M. and Board of Directors, 1910.

National Inoulder: Westview Press, 1991.

Osborn, Grant M. Compulsory Temporary Disability Insurance in the United States. Homewood, IL: Richard D. Irwin, 1958.

Palmer, Bryan D. “Mutuality and the Masking/Making of Difference: The Making of Mutual Benefit Societies in Canada, 1850-1950.” In Social Security Mutualism: The Comparative History of Mutual Benefit Societies, edited by Marcel Van der Linden, 111-46. Bern: Peter Lang, 1996.

Peebles, A. “The State and Medicine.” Canadian Journal of Economics and Political Science 2 (1936): 464-480.

Preuss, Arthur. Dictionary of Secret and Other Societies. St. Louis: B. Herder Co., 1924.

Quadagno, Jill. “Theories of the Welfare State.” Annual Review of Sociology 13 (1987): 109-28.

Quadagno, Jill. The Transformation of Old Age Security: Class and Politics in the American Welfare State. Chicago: University of Chicago Press, 1988.

Riley, James C. “Ill Health during the English Mortality Decline: The Friendly Societies’ Experience.” Bulletin of the History of Medicine 61 (1987).

Riley, James C. Sick, Not Dead: The Health of British Workingmen during the Mortality Decline. Baltimore: Johns Hopkins University Press, 1997.

Rosenzweig, Roy. “Boston Masons, 1900-1935: The Lower Middle Class in a Divided Society.” Journal of Voluntary Action Research 6 (1977): 119-26.

Ross, Theo. A. Odd Fellowship, Its History and Manual. New York: M.W. Hazen, 1890.

Rotundo, E. Anthony. “Romantic Friendship: Male Intimacy and Middle-Class Youth in the Northern United States, 1800-1900.” Journal of Social History 23, no. 1 (1989): 1-25.

Rubinow, Isaac Max. Social Insurance: With Special Reference to American Conditions. New York: Henry Holt & Co., 1913.

Schmidt, A.J. Fraternal Organizations. Westport: Greenwood Press, 1980.

Senior, Hereward. Orangeism: The Canadian Phase. Toronto: McGraw-Hill Ryerson, 1972.

Smiles, Samuel. Thrift. Toronto: Belford Brothers, 1876.

Stalson, J. Owen. Marketing Life Insurance: Its History in America. Cambridge: Harvard University Press, 1942; Homewood: R.D. Irwin, 1969.

Starr, Paul. The Social Transformation of American Medicine: The Rise of a Sovereign Profession and the Making of a Vast Industry. New York: Basic Books, 1982.

Sutch, Richard. “All Things Reconsidered: The Life-Cycle Perspective and the Third Task of Economic History.” Journal of Economic History 51, no. 2 (1991): 271-288.

Thelen, David. Paths of Resistance: Tradition and Dignity in Industrializing Missouri. New York: Oxford University Press, 1986.

Tishler, Hace Sorel. Self-Reliance and Social Security, 1870-1917. Port Washington, N.Y.: Kennikat, 1971.

Tucker, Eric. Administering Danger in the Workplace: The Law and Politics of Occupational Health and Safety Regulation in Ontario, 1850-1914. Toronto: University of Toronto Press, 1990.

Van der Linden, Marcel. “Introduction.” In Social Security Mutualism: The Comparative History of Mutual Benefit Societies, edited by Marcel Van der Linden, 11-38. Bern: Peter Lang, 1996.

Vondracek, Felix John. “The Rise of Fraternal Organizations in the United States, 1868-1900.” Social Science 47 (1972): 26-33.

Weiss, Harry. “Employers’ Liability and Workmen’s Compensation.” In History of Labor in the United States, 1896-1932, Vol. III, edited by Don D. Lescohier and Elizabeth Brandeis. New York: Macmillan, 1935.

Footnotes

1 See Gosden (1961), Hopkins (1995) and Riley (1997) for excellent discussions of the evolution of friendly societies in England.

2 Cited in Starr (1982, p. 242). British industrial-life companies did not offer sickness insurance until 1911, when the government allowed them to qualify as approved societies under the National Insurance Act. In acting as approved societies, their motive was not to write sickness insurance, but rather to protect their interest in burial insurance. See Beveridge, 1948, p. 81; Gilbert, 1966, p. 323.

3 Emery and Emery (1999). Riley (1997) shows that British men in their twenties made up the majority of initiates, and that members who exited did so within “a few years of joining.”

4 Data for unions are from Wolman, 1936, pp. 16, 239 and Leacy, 1983, series E175. By 1931 just 10 per cent of non-agricultural workers in the United States were unionized, down from 19 per cent in 1919 (Bernstein, 1960, chapter 2). Unions affiliated with the American Federation of Labor accounted for approximately 80 per cent of the total membership of American labor unions (Wolman, 1936, p. 7). The reported AFL membership statistics overstate actual membership: unions paid per capita tax on more than their paid-up memberships for prestige and to maintain their voting strength at AFL meetings. In 1929, the United Mine Workers, an extreme case, reported 400,000 members, but probably had just 262,000 members, including 169,000 paid-up members and 93,000 “exonerated” members (kept on the books because they were unemployed or on strike).

5 Brandes (1976, chapter 10) places their membership at 749,000 in 1916 and 825,000 in 1931.

6 The probable costs of health-care claims were hard to predict (Starr, 1982, pp. 290-1). As with income-replacement insurance, sickness was not a well-defined condition. In addition, treatment costs were within the insured’s control, and also within the control of the physician and hospital, both of which could profit from additional services and raise prices as the patient’s ability to pay increased.

7 Employer-purchased or employer-provided group plans came to be the most common source of health insurance coverage in the United States (Applebaum, 1961; Follmann, 1965; Davis, 1989). In Canada, provincial government health insurance plans, with universal coverage, replaced workplace-based arrangements in the 1960s.

Citation: Emery, Herb. “Fraternal Sickness Insurance”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/fraternal-sickness-insurance/