
An Overview of the Economic History of Uruguay
since the 1870s

Luis Bértola, Universidad de la República — Uruguay

Uruguay’s Early History

Without silver or gold, without valuable spices, scarcely peopled by gatherers and fishers, the Eastern Strand of the Uruguay River (Banda Oriental was the colonial name; República Oriental del Uruguay is the official name today) was, in the sixteenth and seventeenth centuries, distant and unattractive to the European nations that conquered the region. The major export product was the leather of wild descendants of cattle introduced in the early 1600s by the Spaniards. As cattle preceded humans, the state preceded society: Uruguay’s first settlement was Colonia del Sacramento, a Portuguese military fortress founded in 1680, placed precisely across from Buenos Aires, Argentina. Montevideo, also a fortress, was founded by the Spaniards in 1724. Uruguay was on the border between the Spanish and Portuguese empires, a feature which would be decisive for the creation, with strong British involvement, in 1828-1830, of an independent state.

Montevideo had the best natural harbor in the region, and rapidly became the end-point of the trans-Atlantic routes into the region, the base for a strong commercial elite, and for the Spanish navy in the region. During the first decades after independence, however, Uruguay was plagued by political instability, precarious institution building and economic retardation. Recurrent civil wars with intensive involvement by Britain, France, Portugal-Brazil and Argentina, made Uruguay a center for international conflicts, the most important being the Great War (Guerra Grande), which lasted from 1839 to 1851. At its end Uruguay had only about 130,000 inhabitants.

“Around the middle of the nineteenth century, Uruguay was dominated by the latifundium, with its ill-defined boundaries and enormous herds of native cattle, from which only the hides were exported to Great Britain and part of the meat, as jerky, to Brazil and Cuba. There was a shifting rural population that worked on the large estates and lived largely on the parts of beef carcasses that could not be marketed abroad. Often the landowners were also the caudillos of the Blanco or Colorado political parties, the protagonists of civil wars that a weak government was unable to prevent” (Barrán and Nahum, 1984, 655). This picture still holds, even if it has been excessively stylized, neglecting the importance of subsistence or domestic-market oriented peasant production.

Economic Performance in the Long Run

Despite its precarious beginnings, Uruguay’s per capita gross domestic product (GDP) growth from 1870 to 2002 shows an amazing persistence, with the long-run rate averaging around one percent per year. However, this apparent stability hides some important shifts. As shown in Figure 1, both GDP and population grew much faster before the 1930s; from 1930 to 1960 immigration vanished and population grew much more slowly, while decades of GDP stagnation and fast growth alternated; after the 1960s Uruguay became a net-emigration country, with low natural growth rates and still-spasmodic GDP growth.

GDP growth shows a pattern featured by Kuznets-like swings (Bértola and Lorenzo 2004), with extremely destructive downward phases, as shown in Table 1. This cyclical pattern is correlated with movements of the terms of trade (the relative price of exports versus imports), world demand and international capital flows. In the expansive phases exports performed well, due to increased demand and/or positive terms of trade shocks (1880s, 1900s, 1920s, 1940s and even during the Mercosur years from 1991 to 1998). Capital flows would sometimes follow these booms and prolong the cycle, or even be a decisive force to set the cycle up, as were financial flows in the 1970s and 1990s. The usual outcome, however, has been an overvalued currency, which blurred the debt problem and threatened the balance of trade by overpricing exports. Crises have been the result of a combination of changing trade conditions, devaluation and over-indebtedness, as in the 1880s, early 1910s, late 1920s, 1950s, early 1980s and late 1990s.

Figure 1: Population and per capita GDP of Uruguay, 1870-2002 (1913=100)

Table 1: Swings in the Uruguayan Economy, 1870-2003

Period        Per capita GDP fall (%)   Length of recession (years)   Time to pre-crisis levels (years)   Time to next crisis (years)
1872-1875     26                        3                             15                                  16
1888-1890     21                        2                             19                                  25
1912-1915     30                        3                             15                                  19
1930-1933     36                        3                             17                                  24-27
1954/57-59    9                         2-5                           18-21                               27-24
1981-1984     17                        3                             11                                  17
1998-2003     21                        5

Sources: See Figure 1.

Besides its cyclical movement, the terms of trade showed a sharp positive trend in 1870-1913, a strongly fluctuating pattern around similar levels in 1913-1960 and a deteriorating trend since then. While the volume of exports grew quickly up to the 1920s, it stagnated in 1930-1960 and started to grow again after 1970. As a result, the purchasing power of exports grew fourfold in 1870-1913, fluctuated along with the terms of trade in 1930-1960, and exhibited moderate growth in 1970-2002.
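The relationship described above can be made explicit: the purchasing power of exports (the income terms of trade) is the product of the export volume index and the terms-of-trade index. A minimal sketch, with invented index numbers that are not taken from the article's sources:

```python
# Purchasing power of exports = export volume index x terms-of-trade index.
# All index numbers are hypothetical, for illustration only (1870 = 100).
export_volume = {1870: 100, 1913: 250}    # assumed: volume grows 2.5x
terms_of_trade = {1870: 100, 1913: 160}   # assumed: terms of trade up 60%

purchasing_power = {
    year: export_volume[year] * terms_of_trade[year] / 100
    for year in export_volume
}
print(purchasing_power)  # {1870: 100.0, 1913: 400.0}
```

With these assumed inputs, a 2.5-fold volume increase combined with a 60 percent terms-of-trade gain yields the fourfold growth in purchasing power of the kind the text describes for 1870-1913.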

The Uruguayan economy was very open to trade in the period up to 1913, featuring high export shares, which naturally declined as the rapidly growing population filled in rather empty areas. In 1930-1960 the economy was increasingly and markedly closed to international trade, but since the 1970s the economy opened up to trade again. Nevertheless, exports, which earlier were mainly directed to Europe (beef, wool, leather, linseed, etc.), were increasingly oriented to Argentina and Brazil, in the context of bilateral trade agreements in the 1970s and 1980s and of Mercosur (the trading zone encompassing Argentina, Brazil, Paraguay and Uruguay) in the 1990s.

While industrial output kept pace with agrarian export-led growth during the first globalization boom before World War I, the industrial share in GDP increased in 1930-54, when manufacturing was mainly oriented to the domestic market. Deindustrialization has been profound since the mid-1980s. The service sector was always large: focused on commerce, transport and traditional state bureaucracy during the first globalization boom; on health care, education and social services during the import-substituting industrialization (ISI) period in the middle of the twentieth century; and on military expenditure, tourism and finance since the 1970s.

The income distribution changed markedly over time. During the first globalization boom before World War I, an already uneven distribution of income and wealth seems to have worsened, due to massive immigration and increasing demand for land, both rural and urban. However, by the 1920s the relative prices of land and labor changed their previous trend, reducing income inequality. The trend later favored industrialization policies, democratization, introduction of wage councils, and the expansion of the welfare state based on an egalitarian ideology. Inequality diminished in many respects: between sectors, within sectors, between genders and between workers and pensioners. While the military dictatorship and the liberal economic policy implemented since the 1970s initiated a drastic reversal of the trend toward economic equality, the globalizing movements of the 1980s and 1990s under democratic rule didn’t increase equality. Thus, inequality remains at the higher levels reached during the period of dictatorship (1973-85).

Comparative Long-run Performance

If the stable long-run rate of Uruguayan per capita GDP growth hides important internal transformations, Uruguay’s changing position in the international scene is even more remarkable. During the first globalization boom the world became more unequal: the United States forged ahead as the world leader (nearly followed by other settler economies); Asia and Africa lagged far behind. Latin America showed a confusing map, in which countries such as Argentina and Uruguay performed rather well, while others, such as the Andean region, lagged far behind (Bértola and Williamson 2003). Uruguay’s strong initial position tended to deteriorate relative to the successful core countries during the late 1800s, as shown in Figure 2. This trend of negative relative growth was somewhat weak during the first half of the twentieth century, deepened significantly during the 1960s, as the import-substituting industrialization model became exhausted, and has continued since the 1970s, despite policies favoring increased integration into the global economy.

Figure 2: Per capita GDP of Uruguay relative to four core countries, 1870-2002

If school enrollment and literacy rates are reasonable proxies for human capital, then in the late 1800s both Argentina and Uruguay had a great handicap relative to the United States, as shown in Table 2. The gap in literacy rates tended to disappear, as did this proxy’s ability to measure comparative levels of human capital. Nevertheless, school enrollment, which includes college-level and technical education, showed a catching-up trend until the 1960s, but reversed afterwards.

The gap in life expectancy at birth has always been much smaller than the gaps in the other development indicators. Nevertheless, some trends are noticeable: the gap increased in 1900-1930; decreased in 1930-1950; and increased again after the 1970s.

Table 2: Uruguayan Performance in Comparative Perspective, 1870-2000 (US = 100)

GDP per capita
               1870  1880  1890  1900  1910  1920  1930  1940  1950  1960  1970  1980  1990  2000
Uruguay         101    65    63    27    32    27    33    27    26    24    19    18    15    16
Argentina                   63    34    38    31    32    29    25    25    24    21    15    16
Brazil                      23     8     8     8     8     8     7     9     9    13    11    10
Latin America                           13    12    13    10     9     9     9     6     6
USA             100   100   100   100   100   100   100   100   100   100   100   100   100   100

Literacy rates
               1900  1910  1920  1930  1940  1950  1960  1970  1980  1990  2000
Uruguay          57    65    72    79    85    91    92    94    95    97    99
Argentina        57    65    72    79    85    91    93    94    94    96    98
Brazil           39    38    37    42    46    51    61    69    76    81    86
Latin America    28    30    34    37    42    47    56    65    71    77    83
USA             100   100   100   100   100   100   100   100   100   100   100

School enrollment
               1920  1930  1940  1950  1960  1970  1980  1990  2000
Uruguay          23    31    31    30    34    42    52    46    43
Argentina        28    41    42    36    39    43    55    44    45
Brazil                 12    11    12    14    18    22    30    42
Latin America
USA             100   100   100   100   100   100   100   100   100

Life expectancy at birth
               1900  1910  1920  1930  1940  1950  1960  1970  1980  1990  2000
Uruguay         102   100    91    85    91    97    97    97    95    96    96
Argentina        81    85    86    90    88    90    93    94    95    96    95
Brazil           60    60    56    58    58    63    79    83    85    88    88
Latin America    65    63    58    58    59    63    71    77    81    88    87
USA             100   100   100   100   100   100   100   100   100   100   100

Sources: Per capita GDP: Maddison (2001) and Astorga, Bergés and FitzGerald (2004). Literacy rates and life expectancy: Astorga, Bergés and FitzGerald (2004). School enrollment: Bértola and Bertoni (1998).

Uruguay during the First Globalization Boom: Challenge and Response

During the post-Great-War reconstruction after 1851, the Uruguayan population grew rapidly (fueled by high natural rates and immigration) and so did per capita output. Productivity grew due to several causes including: the steam ship revolution, which critically reduced the price spread between Europe and America and eased access to the European market; railways, which contributed to the unification of domestic markets and reduced domestic transport costs; the diffusion and adaptation to domestic conditions of innovations in cattle-breeding and services; and a significant reduction in transaction costs, related to a fluctuating but noticeable process of institution building and the strengthening of the coercive power of the state.

Wool and woolen products, hides and leather were exported mainly to Europe; salted beef (tasajo) went to Brazil and Cuba. Livestock-breeding (both cattle and sheep) was intensive in natural resources and dominated by large estates. By the 1880s, the agrarian frontier was exhausted, land was fenced and property rights were strengthened. Labor became abundant and concentrated in urban areas, especially around Montevideo’s harbor, which played an important role as a regional (supranational) commercial center. By 1908, Montevideo contained 40 percent of the nation’s population, which had risen to more than a million inhabitants, and provided most of Uruguay’s services, civil servants and its weak, handicraft-dominated manufacturing sector.

By the 1910s, Uruguayan competitiveness started to weaken. As the benefits of the old technological paradigm were eroding, the new one was not particularly beneficial for resource-intensive countries such as Uruguay. International demand shifted away from primary consumption, the population of Europe grew slowly and European countries struggled for self-sufficiency in primary production in a context of soaring world supply. Beginning in the 1920s, the cattle-breeding sector performed very poorly, due to a lack of innovation away from natural pastures. In the 1930s, its performance deteriorated further, mainly due to unfavorable international conditions. Export volumes stagnated until the 1970s, while purchasing power fluctuated strongly following the terms of trade.

Inward-looking Growth and Structural Change

The Uruguayan economy grew inwards until the 1950s. The multiple exchange rate system was the main economic policy tool. Agrarian production was re-oriented towards wool, crops, dairy products and other industrial inputs, away from beef. The manufacturing industry grew rapidly and diversified significantly, with the help of protectionist tariffs. It was light, and lacked capital goods or technology-intensive sectors. Productivity growth hinged upon technology transfers embodied in imported capital goods and an intensive domestic adaptation process of mature technologies. Domestic demand also grew through an expanding public sector and the expansion of a corporate welfare state. The terms of trade substantially impacted protectionism, productivity growth and domestic demand, because the government raised money by manipulating exchange rates. When export prices rose, the state had a greater capacity to protect the manufacturing sector through low exchange rates for imports of capital goods, raw materials and fuel, and to spur productivity increases through imports of capital, while protection allowed industry to pay higher wages and thus expand domestic demand.
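The fiscal mechanics of a multiple exchange rate system can be sketched numerically. In the toy example below, the state buys exporters' foreign currency at a low rate and resells it to non-favored importers at a high rate, while favored capital-goods imports pay a preferential rate; every rate and quantity is invented for illustration, not drawn from Uruguayan data:

```python
# Toy sketch of a multiple exchange rate system (all numbers invented).
# The state buys exporters' dollars cheaply and resells them to
# non-favored importers at a higher peso price; the spread is an
# implicit tax that can subsidize capital-goods imports.

export_rate = 2.0          # pesos paid per dollar surrendered by exporters
capital_goods_rate = 2.5   # pesos charged per dollar for favored imports
general_import_rate = 4.0  # pesos charged per dollar for other imports

export_dollars = 100.0     # dollars earned by exporters
capital_imports = 40.0     # dollars allocated to capital goods
other_imports = 60.0       # dollars allocated to other imports

pesos_paid_out = export_dollars * export_rate
pesos_taken_in = (capital_imports * capital_goods_rate
                  + other_imports * general_import_rate)
implicit_revenue = pesos_taken_in - pesos_paid_out

print(implicit_revenue)  # 140.0 pesos: the state's gain from the rate spread
```

When export prices rise, the same spread yields more revenue per dollar of trade, which is the channel through which favorable terms of trade financed both protection and subsidized capital-goods imports.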

However, rent-seeking industries searching for protection and a weak clientelist state, crowded with civil servants recruited in exchange for political favors to the political parties, directed structural change towards a closed economy and inefficient management. The obvious limits to inward-looking growth in a country of only about two million inhabitants were exacerbated in the late 1950s as the terms of trade deteriorated. The clientelist political system, which both traditional parties had created while the state was expanding at the national and local level, was now unable to absorb the increasing social conflicts, colored by stringent ideological confrontation, in a context of stagnation and huge fiscal deficits.

Re-globalization and Regional Integration

The dictatorship (1973-1985) started a period of increasing openness to trade and deregulation which has persisted until the present. Dynamic integration into the world market is still incomplete, however. An attempt to return to cattle-breeding exports as the engine of growth was hindered by the oil crises and the ensuing European response, which restricted meat exports to that destination. The export sector was re-orientated towards “non-traditional exports,” i.e., exports of industrial goods made of traditional raw materials, to which low-quality and low-wage labor was added. Exports were also stimulated by means of strong fiscal exemptions and negative real interest rates and were re-orientated to the regional market (Argentina and Brazil) and to other developing regions. At the end of the 1970s, this policy was replaced by the monetarist approach to the balance of payments. The main goal was to defeat inflation (which had remained above 50 percent since the 1960s) through deregulation of foreign trade and a pre-announced exchange rate, the “tablita.” A strong wave of capital inflows led to a transitory success, but the Uruguayan peso became more and more overvalued, limiting exports, encouraging imports and deepening the chronic balance of trade deficit. The “tablita” remained dependent on increasing capital inflows and collapsed when the risk of a huge devaluation became real. Recession and the debt crisis dominated the scene of the early 1980s.

Democratic regimes since 1985 have combined natural resource intensive exports to the region and other emergent markets, with a modest intra-industrial trade mainly with Argentina. In the 1990s, once again, Uruguay was overexposed to financial capital inflows which fueled a rather volatile growth period. However, by the year 2000, Uruguay had a much worse position in relation to the leaders of the world economy as measured by per capita GDP, real wages, equity and education coverage, than it had fifty years earlier.

Medium-run Prospects

In the 1990s Mercosur as a whole and each of its member countries exhibited a strong trade deficit with non-Mercosur countries. This was the result of a growth pattern fueled by and highly dependent on foreign capital inflows, combined with the traditional specialization in commodities. The whole Mercosur project is still mainly oriented toward price competitiveness. Nevertheless, the strongly divergent macroeconomic policies within Mercosur during the deep Argentine and Uruguayan crises at the beginning of the twenty-first century seem to have given way to increased coordination between Argentina and Brazil, making the region a more stable environment.

The big question is whether the ongoing political revival of Mercosur will be able to achieve convergent macroeconomic policies, success in international trade negotiations, and, above all, achievements in developing productive networks which may allow Mercosur to compete outside its home market with knowledge-intensive goods and services. Over that hangs Uruguay’s chance to break away from its long-run divergent siesta.

References

Astorga, Pablo, Ame R. Bergés and Valpy FitzGerald. “The Standard of Living in Latin America during the Twentieth Century.” University of Oxford Discussion Papers in Economic and Social History 54 (2004).

Barrán, José P. and Benjamín Nahum. “Uruguayan Rural History.” Hispanic American Historical Review 64, no. 4 (1984).

Bértola, Luis. The Manufacturing Industry of Uruguay, 1913-1961: A Sectoral Approach to Growth, Fluctuations and Crisis. Publications of the Department of Economic History, University of Göteborg, 61; Institute of Latin American Studies of Stockholm University, Monograph No. 20, 1990.

Bértola, Luis and Reto Bertoni. “Educación y aprendizaje en escenarios de convergencia y divergencia.” Documento de Trabajo, no. 46, Unidad Multidisciplinaria, Facultad de Ciencias Sociales, Universidad de la República, 1998.

Bértola, Luis and Fernando Lorenzo. “Witches in the South: Kuznets-like Swings in Argentina, Brazil and Uruguay since the 1870s.” In The Experience of Economic Growth, edited by J.L. van Zanden and S. Heikenen. Amsterdam: Aksant, 2004.

Bértola, Luis and Gabriel Porcile. “Argentina, Brasil, Uruguay y la Economía Mundial: una aproximación a diferentes regímenes de convergencia y divergencia.” In Ensayos de Historia Económica: Uruguay en la región y el mundo, by Luis Bértola. Montevideo, 2000.

Bértola, Luis and Jeffrey Williamson. “Globalization in Latin America before 1940.” National Bureau of Economic Research Working Paper, no. 9687 (2003).

Bértola, Luis and others. El PBI uruguayo 1870-1936 y otras estimaciones. Montevideo, 1998.

Maddison, A. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, A. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

U.S. Economy in World War I

Hugh Rockoff, Rutgers University

Although the United States was actively involved in World War I for only nineteen months, from April 1917 to November 1918, the mobilization of the economy was extraordinary. (See the chronology at the end for key dates). Over four million Americans served in the armed forces, and the U.S. economy turned out a vast supply of raw materials and munitions. The war in Europe, of course, began long before the United States entered. On June 28, 1914 in Sarajevo Gavrilo Princip, a young Serbian revolutionary, shot and killed Austrian Archduke Franz Ferdinand and his wife Sophie. A few months later the great powers of Europe were at war.

Many Europeans entered the war thinking that victory would come easily. Few had the understanding shown by a 26 year-old conservative Member of Parliament, Winston Churchill, in 1901. “I have frequently been astonished to hear with what composure and how glibly Members, and even Ministers, talk of a European War.” He went on to point out that in the past European wars had been fought by small professional armies, but in the future huge populations would be involved, and he predicted that a European war would end “in the ruin of the vanquished and the scarcely less fatal commercial dislocation and exhaustion of the conquerors.”[1]

Reasons for U.S. Entry into the War

Once the war began, however, it became clear that Churchill was right. By the time the United States entered the war Americans knew that the price of victory would be high. What, then, impelled the United States to enter? What role did economic forces play? One factor was simply that Americans generally – some ethnic minorities were exceptions – felt stronger ties to Britain and France than to Germany and Austria. By 1917 it was clear that Britain and France were nearing exhaustion, and there was considerable sentiment in the United States for saving our traditional allies.

The insistence of the United States on her trading rights was also important. Soon after the war began Britain, France, and their allies set up a naval blockade of Germany and Austria. Even food was contraband. The Wilson Administration complained bitterly that the blockade violated international law. U.S. firms took to using European neutrals, such as Sweden, as intermediaries. Surely, the Americans argued, international law protected the right of one neutral to trade with another. Britain and France responded by extending the blockade to include the Baltic neutrals. The situation was similar to the difficulties the United States experienced during the Napoleonic wars, which drove the United States into a quasi-war against France (1798-1800) and into war against Britain in 1812.

Ultimately, however, it was not the conventional surface vessels used by Britain and France to enforce their blockade that enraged American opinion, but rather the submarines used by Germany. When the British (who provided most of the blockading ships) intercepted an American ship, the ship was escorted into a British port, the crew was well treated, and there was a chance of damage payments if it turned out that the interception was a mistake. The situation was very different when the Germans turned to submarine warfare. German submarines attacked without warning, and passengers had little chance to save themselves. To many Americans this was a brutal violation of the laws of war. The Germans felt they had to use submarines because their surface fleet was too small to defeat the British navy, let alone establish an effective counter-blockade.

The first submarine attack to inflame American opinion was the sinking of the Lusitania in May 1915. The Lusitania left New York with a cargo of passengers and freight, including war goods. When the ship was sunk, nearly 1,200 lives were lost, including 128 Americans. In the months that followed further sinkings brought more angry warnings from President Wilson. For a time the Germans gave way and agreed to warn American ships before sinking them and to save their passengers. In February 1917, however, the Germans renewed unrestricted submarine warfare in an attempt to starve Britain into submission. The loss of several U.S. ships was a key factor in President Wilson’s decision to break diplomatic relations with Germany and to seek a declaration of war.

U.S. Entry into the War and the Costs of Lost Trade

From a crude dollar-and-cents point of view it is hard to justify the war based on the trade lost to the United States. U.S. exports to Europe rose from $1.479 billion in 1913 to $4.062 billion in 1917. Suppose that the United States had stayed out of the war, and that as a result all trade with Europe was cut off. Suppose further, that the resources that would have been used to produce exports for Europe were able to produce only half as much value when reallocated to other purposes such as producing goods for the domestic market or exports for non-European countries. Then the loss of output in 1917 would have been $2.031 billion per year. This was about 3.7 percent of GNP in 1917, and only about 6.3 percent of the total U.S. cost of the war.[2]
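The back-of-the-envelope calculation above can be replicated directly from the figures in the paragraph (the 1917 GNP of $55.1 billion is taken from Table 1; the total-war-cost figure is only implied by the 6.3 percent share):

```python
# Reproducing the paragraph's counterfactual trade-loss arithmetic.
exports_1917 = 4.062          # U.S. exports to Europe, billions of dollars
reallocation_share = 0.5      # assumed: resources recover half their value
loss = exports_1917 * reallocation_share
print(loss)                   # 2.031 billion dollars per year

gnp_1917 = 55.1               # billions of dollars, from Table 1 (row 7)
print(round(loss / gnp_1917 * 100, 1))   # 3.7 percent of GNP

# If $2.031 billion is 6.3% of the total U.S. cost of the war, the
# implied total cost is roughly $32 billion.
print(round(loss / 0.063, 1))
```

The implied total cost of about $32 billion is consistent with the magnitude of the war spending shown in Table 1.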

On March 21, 1918 the Germans launched a massive offensive on the Somme battlefield and successfully broke through the Allied lines. In May and early June, more than a year after U.S. entry into the war, the Germans followed up with fresh attacks that brought them within fifty miles of Paris. Although a small number of Americans participated, it was mainly the old war: the Germans against the British and the French. The arrival of large numbers of Americans, however, rapidly changed the course of the war. The turning point was the Second Battle of the Marne, fought between July 18 and August 6. The Allies, bolstered by significant numbers of Americans, halted the German offensive.

The initiative now passed to the Allies. They drove the Germans back in a series of attacks in which American troops played an increasingly important role. The first distinctively American offensive was the battle of the St. Mihiel Salient, fought from September 12 to September 16, 1918; over half a million U.S. troops participated. The last major offensive of the war, the Meuse-Argonne offensive, was launched on September 26, with British, French, and American forces attacking the Germans on a broad front. The Germans now realized that their military situation was deteriorating rapidly, and that they would have to agree to an end to the fighting. The Armistice came on November 11, 1918, at the eleventh hour of the eleventh day of the eleventh month.

Mobilizing the Economy

The first and most important mobilization decision was the size of the army. When the United States entered the war, the army stood at 200,000, hardly enough to have a decisive impact in Europe. However, on May 18, 1917 a draft was imposed and the numbers were increased rapidly. Initially, the expectation was that the United States would mobilize an army of one million. The number, however, would go much higher. Overall some 4,791,172 Americans would serve in World War I. Some 2,084,000 would reach France, and 1,390,000 would see active combat.

Once the size of the Army had been determined, the demands on the economy became obvious, although the means to satisfy them did not: food and clothing, guns and ammunition, places to train, and the means of transport. The Navy also had to be expanded to protect American shipping and the troop transports. Contracts immediately began flowing from the Army and Navy to the private sector. The result, of course, was a rapid increase in military spending, from $477 million in 1916 to a peak of $8,580 million in 1918. (See Table 1 below for this and other data on the war effort.) The latter figure amounted to over 12 percent of GNP, and it excludes spending by other wartime agencies and spending by allies, much of which was financed by U.S. loans.

Table 1
Selected Economic Variables, 1916-1920
1916 1917 1918 1919 1920
1. Industrial production (1916 =100) 100 132 139 137 108
2. Revenues of the federal government (millions of dollars) $930 2,373 4,388 5,889 6,110
3. Expenditures of the federal government (millions of dollars) $1,333 7,316 15,585 12,425 5,710
4. Army and Navy spending (millions of dollars) $477 3,383 8,580 6,685 2,063
5. Stock of money, M2 (billions of dollars) $20.7 24.3 26.2 30.7 35.1
6. GNP deflator (1916 =100) 100 120 141 160 185
7. Gross National Product (GNP) (billions of dollars) $46.0 55.1 69.7 77.2 87.2
8. Real GNP (billions of 1916 dollars) $46.0 46.0 49.6 48.1 47.1
9. Average annual earnings per full-time manufacturing employee (1916 dollars) $751 748 802 813 828
10. Total labor force (millions) 40.1 41.5 44.0 42.3 41.5
11. Military personnel (millions) .174 .835 2.968 1.266 .353
Sources by row:

1. Miron and Romer (1990, table 2).

2-3. U.S. Bureau of the Census (1975), series Y352 and Y457.

4. U.S. Bureau of the Census (1975), series Y458 and Y459. The estimates are the average for fiscal year t and fiscal year t+1.

5. Friedman and Schwartz (1970, table 1, June dates).

6-8. Balke and Gordon (1989, table 10, pp. 84-85). The original series were in 1982 dollars.

9. U.S. Bureau of the Census (1975), series D740.

10-11. Kendrick (1961, table A-VI, p. 306; table A-X, p. 312).
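The internal consistency of Table 1 can be checked: real GNP (row 8) should equal nominal GNP (row 7) divided by the deflator (row 6), times 100. A quick check against the published, rounded figures:

```python
# Cross-checking Table 1: real GNP = nominal GNP / deflator * 100.
gnp = {1916: 46.0, 1917: 55.1, 1918: 69.7, 1919: 77.2, 1920: 87.2}             # row 7
deflator = {1916: 100, 1917: 120, 1918: 141, 1919: 160, 1920: 185}             # row 6
real_gnp_table = {1916: 46.0, 1917: 46.0, 1918: 49.6, 1919: 48.1, 1920: 47.1}  # row 8

real_gnp_calc = {y: gnp[y] / deflator[y] * 100 for y in gnp}
for y in gnp:
    # Agreement is within the rounding error of the published series.
    assert abs(real_gnp_calc[y] - real_gnp_table[y]) < 0.25, y
print({y: round(v, 1) for y, v in real_gnp_calc.items()})
```

The recomputed series matches row 8 to within about 0.2 billion 1916 dollars, which is what one expects given that rows 6 and 7 are rounded.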

Although the Army would number in the millions, raising these numbers did not prove to be an unmanageable burden for the U.S. economy. The total labor force rose from about 40 million in 1916 to 44 million in 1918. This increase allowed the United States to field a large military while still increasing the labor force in the nonfarm private sector from 27.8 million in 1916 to 28.6 million in 1918. Real wages rose in the industrial sector during the war, perhaps by six or seven percent, and this increase combined with the ease of finding work was sufficient to draw many additional workers into the labor force.[3] Many of the men drafted into the armed forces were leaving school and would have been entering the labor force for the first time in any case. The farm labor force did drop slightly from 10.5 million in 1916 to 10.3 million workers in 1918, but farming included many low-productivity workers and farm output on the whole was sustained. Indeed, the all-important category of food grains showed strong increases in 1918 and 1919.

Figure 1 shows production of steel ingots and “total industrial production” – an index of steel, copper, rubber, petroleum, and so on – monthly from January 1914 through 1920.[4] It is evident that the United States built up its capacity to turn out these basic raw materials during the years of U.S. neutrality, when Britain and France were buying supplies and the United States was beginning its own tentative buildup. The United States then simply maintained the output of these materials during the years of active U.S. involvement and concentrated on turning them into munitions.[5]

Figure 1

Steel Ingots and Total Industrial Production, 1914-1920

Prices on the New York Stock Exchange, shown in Figure 2, provide some insight into what investors thought about the strength of the economy during the war era. The upper line shows the Standard and Poor’s/Cowles Commission Index. The lower line shows the “real” price of stocks – the nominal index divided by the consumer price index. When the war broke out, the New York Stock Exchange was closed to prevent panic selling, and no official prices were recorded while it remained shut, although a lively “curb market” did develop. After the market reopened, it rose as investors realized that the United States would profit as a neutral. The market then began a long slide that started when tensions between the United States and Germany rose at the end of 1916 and continued after the United States entered the war. A second, more modest rise began in the spring of 1918, when an Allied victory began to seem possible. The increase continued and gathered momentum after the Armistice. In real terms, however, as shown by the lower line in the figure, the rise in the stock market was not sufficient to offset the rise in consumer prices. At times one hears that war is good for the stock market, but the figures for World War I, like the figures for other wars, tell a more complex story.[6]

Figure 2

The Stock Market, 1913-1920

Table 2 shows the amounts of some of the key munitions produced during the war. During and after the war critics complained that the mobilization was too slow. American troops, for example, often went into battle with French artillery – clear evidence, the critics implied, of incompetence somewhere in the supply chain. It does take time, however, to convert existing factories or build new ones and to work out the details of the production and distribution process. The last column of Table 2 shows peak monthly production, usually October 1918, at an annual rate. It is obvious that by the end of the war the United States was beginning to achieve the “production miracle” that occurred in World War II. When Franklin Roosevelt called for 50,000 planes in World War II, his demand was seen as an astounding exercise in bravado; yet the last column of the table shows that the United States was hitting this level of production for Liberty engines by the end of World War I. There were some efforts during the war to coordinate Allied production – the United States, for example, produced much of the smokeless powder used by the Allies – but it was always clear that the United States wanted its own army equipped with its own munitions.

Table 2
Production of Selected Munitions in World War I
Munition Total Production Peak monthly production at an annual rate
Rifles 3,550,000 3,252,000
Machine guns 226,557 420,000
Artillery units 3,077 4,920
Smokeless powder (pounds) 632,504,000 n.a.
Toxic Gas (tons) 10,817 32,712
De Haviland-4 bombers 3,227 13,200
Liberty airplane engines 13,574 46,200
Source: Ayres (1919, passim)

Financing the War

Where did the money come from to buy all these munitions? Then as now there were, the experts agreed, three basic ways to raise the money: (1) raising taxes, (2) borrowing from the public, and (3) printing money. In the Civil War the government had simply printed the famous greenbacks. In World War I it was possible to “print money” in a more roundabout way. The government could sell a bond to the newly created Federal Reserve. The Federal Reserve would pay for it by creating a deposit account for the government, which the government could then draw upon to pay its expenses. If the government first sold the bond to the general public, the process of money creation would be even more roundabout. In the end the result would be much the same as if the government had simply printed greenbacks: the government would be paying for the war with newly created money. The experts gave little consideration to printing money. The reason may be that the gold standard was sacrosanct: a financial policy that would cause inflation and drive the United States off the gold standard was not to be taken seriously. Some economists may also have known the history of the greenbacks of the Civil War and the inflation they had caused.

The real choice appeared to be between raising taxes and borrowing from the public. Most economists of the World War I era believed that raising taxes was best. Here they were following a tradition that stretched back to Adam Smith who argued that it was necessary to raise taxes in order to communicate the true cost of war to the public. During the war Oliver Morton Sprague, one of the leading economists of the day, offered another reason for avoiding borrowing. It was unfair, Sprague argued, to draft men into the armed forces and then expect them to come home and pay higher taxes to fund the interest and principal on war bonds. Most men of affairs, however, thought that some balance would have to be struck between taxes and borrowing. Treasury Secretary William Gibbs McAdoo thought that financing about 50 percent from taxes and 50 percent from bonds would be about right. Financing more from taxes, especially progressive taxes, would frighten the wealthier classes and undermine their support for the war.

In October 1917 Congress responded to the call for higher taxes with the War Revenue Act. This act increased the personal and corporate income tax rates and established new excise, excess-profit, and luxury taxes. The tax rate for an income of $10,000 with four exemptions (about $140,000 in 2003 dollars) went from 1.2 percent in 1916 to 7.8 percent. For incomes of $1,000,000 the rate went from 10.3 percent in 1916 to 70.3 percent in 1918. These increases in taxes and the increase in nominal income raised revenues from $930 million in 1916 to $4,388 million in 1918. Federal expenditures, however, increased from $1,333 million in 1916 to $15,585 million in 1918. A huge gap had opened up that would have to be closed by borrowing.
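The size of that gap follows directly from the figures just quoted. A quick check of the arithmetic (amounts in millions of dollars, as given above):

```python
# Federal revenues and expenditures, 1916 and 1918, in millions of dollars
# (figures as quoted in the text).
revenue_1916, revenue_1918 = 930, 4_388
spending_1916, spending_1918 = 1_333, 15_585

gap_1918 = spending_1918 - revenue_1918
print(gap_1918)                                 # -> 11197 million to be borrowed
print(round(revenue_1918 / spending_1918, 2))   # -> 0.28: taxes covered ~28% of 1918 spending
print(round(spending_1918 / spending_1916, 1))  # -> 11.7: spending grew nearly 12-fold
```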

Short-term borrowing was undertaken as a stopgap. To reduce the pressure on the Treasury and the danger of a surge in short-term rates, however, it was necessary to issue long-term bonds, so the Treasury created the famous Liberty Bonds. The first issue was a thirty-year bond bearing a 3.5% coupon callable after fifteen years. There were three subsequent issues of Liberty Bonds, and one of shorter-term Victory Bonds after the Armistice. In all, the sale of these bonds raised over $20 billion dollars for the war effort.

In order to strengthen the market for Liberty Bonds, Secretary McAdoo launched a series of nationwide campaigns. Huge rallies were held in which famous actors, such as Charlie Chaplin, urged the crowds to buy Liberty Bonds. The government also enlisted famous artists to draw posters urging people to purchase the bonds. One of these posters – now widely sought by collectors – is shown below.

But Mother Had Done Nothing Wrong, Had She, Daddy?

Louis Raemaekers. After a Zeppelin Raid in London: “But Mother Had Done Nothing Wrong, Had She, Daddy?” Prevent this in New York: Invest in Liberty Bonds. 19″ x 12.” From the Rutgers University Library Collection of Liberty Bond Posters.

Although the campaigns may have improved the morale of both the armed forces and the people at home, how much the campaigns contributed to expanding the market for the bonds is an open question. The bonds were tax-exempt – the exact degree of exemption varied from issue to issue – and this undoubtedly made them attractive to investors in high tax brackets. Indeed, the Treasury was criticized for imposing high marginal taxes with one hand, and then creating a loophole with the other. The Federal Reserve also bought many of the bonds, creating new money. Some of this new “high-powered money” augmented the reserves of the commercial banks, which allowed them to buy bonds or to finance their purchase by private citizens. Thus, directly or indirectly, a good deal of the support for the bond market was the result of money creation rather than savings by the general public.

Table 3 provides a rough breakdown of the means used to finance the war. Of the total cost of the war, about 22 percent was financed by taxes and from 20 to 25 percent by printing money, which meant that from 53 to 58 percent was financed through the bond issues.

Table 3
Financing World War I, March 1917-May 1919
Source of finance Billions of Dollars Percent (M2) Percent (M4)
Taxation and nontax receipts 7.3 22 22
Borrowing from the public 24 58 53
Direct money creation 1.6 5 5
Indirect money creation (M2) 4.8 15
Indirect money creation (M4) 6.6 20
Total cost of the war 32.9 100 100
Note: Direct money creation is the increase in the stock of high-powered money net of the increase in monetary gold. Indirect money creation is the increase in monetary liabilities not matched by the increase in high-powered money.

Source: Friedman and Schwartz (1963, 221)
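The percent columns of Table 3 can be reproduced from the dollar column on one reading of the table (an interpretive assumption, not spelled out in the source): the $24 billion bond total is gross, and the percent columns net out the portion of the borrowing that indirect money creation effectively financed.

```python
# Reproduce the percent columns of Table 3 from its dollar column (billions).
# Assumes the borrowing percentages are net of indirect money creation.
taxes, bonds, direct_money, total = 7.3, 24.0, 1.6, 32.9

for label, indirect in (("M2", 4.8), ("M4", 6.6)):
    shares = {
        "taxation": taxes / total,
        "borrowing, net of money creation": (bonds - indirect) / total,
        "direct money creation": direct_money / total,
        "indirect money creation": indirect / total,
    }
    print(label, {k: f"{v:.0%}" for k, v in shares.items()})
```

Under M2 this yields 22, 58, 5, and 15 percent; under M4, 22, 53, 5, and 20 percent – matching the table.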

Heavy reliance on the Federal Reserve meant, of course, that the stock of money increased rapidly. As shown in Table 1, the stock of money rose from $20.7 billion in 1916 to $35.1 billion in 1920, an increase of about 70 percent. The price level (GDP deflator) increased 85 percent over the same period.

The Government’s Role in Mobilization

Once the contracts for munitions were issued and the money began flowing, the government might have relied on the price system to allocate resources. This was the policy followed during the Civil War. For a number of reasons, however, the government attempted to manage the allocation of resources from Washington. For one thing, the Wilson administration, reflecting the Progressive wing of the Democratic Party, was suspicious of the market, and doubted its ability to work quickly and efficiently, and to protect the average person against profiteering. Another factor was simply that the European belligerents had adopted wide-ranging economic controls and it made sense for the United States, a latecomer, to follow suit.

A wide variety of agencies were created to control the economy during the mobilization. A look at four of the most important – (1) the Food Administration, (2) the Fuel Administration, (3) the Railroad Administration, and (4) the War Industries Board – will suggest the extent to which the United States turned away from its traditional reliance on the market. Unfortunately, space precludes a review of many of the other agencies such as the War Shipping Board, which built noncombatant ships, the War Labor Board, which attempted to settle labor disputes, and the New Issues Committee, which vetted private issues of stocks and bonds.

Food Administration

The Food Administration was created by the Lever Food and Fuel Act in August 1917. Herbert Hoover, who had already won international fame as a relief administrator in China and Europe, was appointed to head it. The mission of the Food Administration was to stimulate the production of food and to assure its fair distribution, at a fair price, among American civilians, the armed forces, and the Allies. The Food Administration did not attempt to set maximum prices at retail or (with the exception of sugar) to ration food. The Act itself set what then was a high minimum price for wheat – the key grain in international markets – at the farm gate, although the price would eventually go higher. The markups of processors and distributors were controlled by licensing them and threatening to take their licenses away if they did not cooperate. The Food Administration then attempted to control prices and quantities at retail through calls for voluntary cooperation. Millers were encouraged to tie the sale of wheat flour to the sale of less desirable flours – corn meal, potato flour, and so on – thus making a virtue out of a practice that would have been regarded as a disreputable evasion of formal price ceilings. Bakers were encouraged to bake “Victory bread,” which included a wheat-flour substitute. Finally, Hoover urged Americans to curtail their consumption of the most valuable foodstuffs: there were, for example, Meatless Mondays and Wheatless Wednesdays.

Fuel Administration

The Fuel Administration was created under the same Act as the Food Administration. Harry Garfield, the son of President James Garfield, and the President of Williams College, was appointed to head it. Its main problem was controlling the price and distribution of bituminous coal. In the winter of 1918 a variety of factors combined to cause a severe coal shortage that forced school and factory closures. The Fuel Administration set the price of coal at the mines and the margins of dealers, mediated disputes in the coalfields, and worked with the Railroad Administration (described below) to reduce long hauls of coal.

Railroad Administration

The Wilson Administration nationalized the railroads and put them under the control of the Railroad Administration in December of 1917, in response to severe congestion in the railway network that was holding up the movement of war goods and coal. Wilson’s energetic Secretary of the Treasury (and son-in-law), William Gibbs McAdoo, was appointed to head it. The railroads would remain under government control for another 26 months. There has been considerable controversy over how well the system worked under federal control. Defenders of the takeover point out that the congestion was relieved and that policies that increased standardization and eliminated unnecessary competition were put in place. Critics of the takeover point to the large deficit that was incurred, nearly $1.7 billion, and to the deterioration of the capital stock of the industry. William J. Cunningham’s (1921) two papers in the Quarterly Journal of Economics, although written shortly after the event, still provide one of the most detailed and fair-minded treatments of the Railroad Administration.

War Industries Board

The most important federal agency, at least in terms of the scope of its mission, was the War Industries Board. The Board was established in July of 1917. Its purpose was no less than to assure the full mobilization of the nation’s resources for the purpose of winning the war. Initially the Board relied on persuasion to make its orders effective, but rising criticism of the pace of mobilization, and the problems with coal and transport in the winter of 1918, led to a strengthening of its role. In March 1918 the Board was reorganized, and Wilson placed Bernard Baruch, a Wall Street investor, in charge. Baruch installed a “priorities system” to determine the order in which contracts could be filled by manufacturers. Contracts rated AA by the War Industries Board had to be filled before contracts rated A, and so on. Although much hailed at the time, this system proved inadequate when tried in World War II. The War Industries Board also set prices of industrial products such as iron and steel, coke, rubber, and so on. This was handled by the Board’s independent Price Fixing Committee.

It is tempting to look at these experiments for clues on how the economy would perform under various forms of economic control. It is important, however, to keep in mind that these were very brief experiments. When the war ended in November 1918 most of the agencies immediately wound up their activities. Only the Railroad Administration and the War Shipping Board continued to operate. The War Industries Board, for example, was in operation only for a total of sixteen months; Bernard Baruch’s tenure was only eight months. Obviously only limited conclusions can be drawn from these experiments.

Costs of the War

The human and economic costs of the war were substantial. The death rate was high: 48,909 members of the armed forces died in battle, and 63,523 died from disease. Many of those who died from disease, perhaps 40,000, died from pneumonia during the influenza-pneumonia epidemic that hit at the end of the war. Some 230,074 members of the armed forces suffered nonmortal wounds.

John Maurice Clark provided what is still the most detailed and thoughtful estimate of the cost of the war: a total of about $32 billion. Clark tried to estimate what an economist would call the resource cost of the war. For that reason he included actual federal government spending on the Army and Navy, the amount of foreign obligations, and the difference between what government employees could earn in the private sector and what they actually earned. He excluded interest on the national debt and part of the subsidies paid to the Railroad Administration because he thought they were transfers. His estimate of $32 billion amounted to about 46 percent of GNP in 1918.

Long-run Economic Consequences

The war left a number of economic legacies. Here we will briefly describe three of the most important.

The finances of the federal government were permanently altered by the war. It is true that the tax increases put in place during the war were scaled back during the 1920s by successive Republican administrations. Tax rates, however, had to remain higher than before the war to pay for higher expenditures due mainly to interest on the national debt and veterans benefits.

The international economic position of the United States was permanently altered by the war. The United States had long been a debtor country. The United States emerged from the war, however, as a net creditor. The turnaround was dramatic. In 1914 U.S. investments abroad amounted to $5.0 billion, while total foreign investments in the United States amounted to $7.2 billion. Americans were net debtors to the tune of $2.2 billion. By 1919 U.S. investments abroad had risen to $9.7 billion, while total foreign investments in the United States had fallen to $3.3 billion: Americans were net creditors to the tune of $6.4 billion.[7] Before the war the center of the world capital market was London, and the Bank of England was the world’s most important financial institution; after the war leadership shifted to New York, and the role of the Federal Reserve was enhanced.
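The size of the turnaround can be verified from the investment figures just quoted (billions of dollars):

```python
# U.S. international investment position, from the figures in the text (billions).
us_abroad_1914, foreign_in_us_1914 = 5.0, 7.2
us_abroad_1919, foreign_in_us_1919 = 9.7, 3.3

net_1914 = round(us_abroad_1914 - foreign_in_us_1914, 1)  # -2.2: net debtor
net_1919 = round(us_abroad_1919 - foreign_in_us_1919, 1)  #  6.4: net creditor
print(net_1914, net_1919, round(net_1919 - net_1914, 1))  # a swing of $8.6 billion
```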

The management of the war economy by a phalanx of federal agencies persuaded many Americans that the government could play an important positive role in the economy. This lesson remained dormant during the 1920s, but came to life when the United States faced the Great Depression. Both the general idea of fighting the Depression by creating federal agencies and many of the specific agencies and programs reflected precedents set in World War I. The Civilian Conservation Corps, a Depression era agency that hired young men to work on conservation projects, for example, attempted to achieve the benefits of military training in a civilian setting. The National Industrial Recovery Act reflected ideas Bernard Baruch developed at the War Industries Board, and the Agricultural Adjustment Administration hearkened back to the Food Administration. Ideas about the appropriate role of the federal government in the economy, in other words, may have been the most important economic legacy of American involvement in World War I.

Chronology of World War I

1914
June Archduke Franz Ferdinand is shot.
August Beginning of the war.

1915
May Sinking of the Lusitania. War talk begins in the United States.

1916
June National Defense Act expands the Army.

1917
February Germany renews unrestricted submarine warfare. U.S.S. Housatonic sunk. U.S. breaks diplomatic relations with Germany.
April U.S. declares war.
May Selective Service Act.
June First Liberty Loan.
July War Industries Board established.
August Lever Food and Fuel Control Act.
October War Revenue Act.
November Second Liberty Loan.
December Railroads are nationalized.

1918
January Maximum prices for steel.
March Bernard Baruch heads the War Industries Board. Germans begin massive offensive on the western front.
May Third Liberty Loan. First independent action by the American Expeditionary Force.
June Battle of Belleau Wood – the first sizable U.S. action.
July Second Battle of the Marne – German offensive stopped.
September 900,000 Americans in the Battle of Meuse-Argonne.
October Fourth Liberty Loan.
November Armistice.

References and Suggestions for Further Reading

Ayres, Leonard P. The War with Germany: A Statistical Summary. Washington DC: Government Printing Office. 1919.

Balke, Nathan S. and Robert J. Gordon. “The Estimation of Prewar Gross National Product: Methodology and New Evidence.” Journal of Political Economy 97, no. 1 (1989): 38-92.

Clark, John Maurice. “The Basis of War-Time Collectivism.” American Economic Review 7 (1917): 772-790.

Clark, John Maurice. The Cost of the World War to the American People. New Haven: Yale University Press for the Carnegie Endowment for International Peace, 1931.

Cuff, Robert D. The War Industries Board: Business-Government Relations during World War I. Baltimore: Johns Hopkins University Press, 1973.

Cunningham, William J. “The Railroads under Government Operation. I: The Period to the Close of 1918.” Quarterly Journal of Economics 35, no. 2 (1921): 288-340. “II: From January 1, 1919, to March 1, 1920.” Quarterly Journal of Economics 36, no. 1. (1921): 30-71.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Friedman, Milton, and Anna J. Schwartz. Monetary Statistics of the United States: Estimates, Sources, and Methods. New York: Columbia University Press, 1970.

Gilbert, Martin. The First World War: A Complete History. New York: Henry Holt, 1994.

Kendrick, John W. Productivity Trends in the United States. Princeton: Princeton University Press, 1961.

Koistinen, Paul A. C. Mobilizing for Modern War: The Political Economy of American Warfare, 1865-1919. Lawrence, KS: University Press of Kansas, 1997.

Miron, Jeffrey A. and Christina D. Romer. “A New Monthly Index of Industrial Production, 1884-1940.” Journal of Economic History 50, no. 2 (1990): 321-37.

Rockoff, Hugh. Drastic Measures: A History of Wage and Price Controls in the United States. New York: Cambridge University Press, 1984.

Rockoff, Hugh. “Until It’s Over, Over There: The U.S. Economy in World War I.” National Bureau of Economic Research, Working Paper w10580, 2004.

U.S. Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970, Bicentennial Edition. Washington, DC: Government Printing Office, 1975.


[1] Quoted in Gilbert (1994, 3).

[2] U.S. exports to Europe are from U.S. Bureau of the Census (1975), series U324.

[3] Real wages in manufacturing were computed by dividing “Hourly Earnings in Manufacturing Industries” by the Consumer Price Index (U.S. Bureau of the Census 1975, series D766 and E135).

[4] Steel ingots are from the National Bureau of Economic Research, macrohistory database, series m01135a, Total Industrial Production is from Miron and Romer (1990), Table 2.

[5] The sharp and temporary drop in the winter of 1918 was due to a shortage of coal.

[6] The chart shows end-of-month values of the S&P/Cowles Composite Stock Index, from Global Financial Data. To get real prices I divided this index by monthly values of the United States Consumer Price Index for all items. This is available as series 04128 in the National Bureau of Economic Research Macro-Data Base available at

[7] U.S. investments abroad (U.S. Bureau of the Census 1975, series U26); Foreign investments in the U.S. (U.S.

Citation: Rockoff, Hugh. “US Economy in World War I”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL

Urban Decline (and Success) in the United States

Fred Smith and Sarah Allen, Davidson College


Any discussion of urban decline must begin with a difficult task – defining what is meant by urban decline. Urban decline (or “urban decay”) is a term that evokes images of abandoned homes, vacant storefronts, and crumbling infrastructure, and if asked to name a city that has suffered urban decline, people often think of a city from the upper Midwest like Cleveland, Detroit, or Buffalo. Yet, while nearly every American has seen or experienced urban decline, the term is one that is descriptive and not easily quantifiable. Further complicating the story is this simple fact – metropolitan areas, like greater Detroit, may experience the symptoms of severe urban decline in one neighborhood while remaining economically robust in others. Indeed, the city of Detroit is a textbook case of urban decline, but many of the surrounding communities in metropolitan Detroit are thriving. An additional complication comes from the fact that modern American cities – cities like Dallas, Charlotte, and Phoenix – don’t look much like their early twentieth century counterparts. Phoenix of the early twenty-first century is an economically vibrant city, yet the urban core of Phoenix looks very, very different from the urban core found in “smaller” cities like Boston or San Francisco.[1] It is unlikely that a weekend visitor to downtown Phoenix would come away with the impression that Phoenix is a rapidly growing city, for downtown Phoenix does not contain the housing, shopping, or recreational venues that are found in downtown San Francisco or Boston.

There isn’t a single variable that will serve as a perfect choice for measuring urban decline, but this article will take an in-depth look at urban decline by focusing on the best measure of a city’s well-being – population. In order to provide a thorough understanding of urban decline, this article contains three additional sections. The next section employs data from a handful of sources to familiarize the reader with the location and severity of urban decline in the United States. Section three is dedicated to explaining the causes of urban decline in the U.S. Finally, the fourth section looks at the future of cities in the United States and provides some concluding remarks.

Urban Decline in the United States – Quantifying the Population Decline

Between 1950 and 2000 the population of the United States increased by approximately 120 million people, from 152 million to 272 million. Despite the dramatic increase in population experienced by the country as a whole, different cities and states experienced radically different rates of growth. Table 1 shows the population figures for a handful of U.S. cities for the years 1950 to 2000. (It should be noted that these figures are population totals for the cities in the list, not for the associated metropolitan areas.)

Table 1: Population for Selected U.S. Cities, 1950-2000

[Table values not preserved in this copy. The table reports each city’s population in 1950 and 2000 and the percent change over 1950-2000; the cities whose names survive include New York, Kansas City, Los Angeles, San Francisco, and New Orleans.]

Source: U.S. Census Bureau.

Several trends emerge from the data in Table 1. The cities in the table are clustered together by region, and the cities at the top of the table – cities from the Northeast and Midwest – either experienced no significant population growth (New York City) or suffered dramatic population losses (Detroit and Cleveland). These cities’ experiences stand in stark contrast to those of the cities located in the South and West – cities found farther down the list. Phoenix, Houston, Dallas, Charlotte, and Nashville all experienced triple-digit population increases during the five decades from 1950 to 2000. Figure 1 displays this information even more dramatically:

Figure 1: Percent Change in Population, 1950 – 2000

Source: U.S. Census Bureau.

While Table 1 and Figure 1 clearly display the population trends within these cities, they do not provide any information about what was happening to the metropolitan areas in which these cities are located. Table 2 fills this gap. (Please note – these metropolitan areas do not correspond directly to the metropolitan areas identified by the U.S. Census Bureau. Rather, Jordan Rappaport – an economist at the Kansas City Federal Reserve Bank – created these metropolitan areas for his 2005 article “The Shared Fortunes of Cities and Suburbs.”)

Table 2: Population of Selected Metropolitan Areas, 1950 to 2000

[Population values not preserved in this copy. The table reports 1950 and 2000 populations and the percent change for these metropolitan areas: New York-Newark-Jersey City, NY; Philadelphia, PA; Boston, MA; Chicago-Gary, IL-IN; Detroit, MI; Cleveland, OH; Kansas City, MO-KS; Denver, CO; Omaha, NE; Los Angeles-Long Beach, CA; San Francisco-Oakland, CA; Seattle, WA; Houston, TX; Dallas, TX; Phoenix, AZ; New Orleans, LA; Atlanta, GA; Nashville, TN; Washington, DC; Miami, FL; Charlotte, NC.]

* The percentage change is for the period from 1960 to 2000.

Source: Rappaport.

Table 2 highlights several of the difficulties in conducting a meaningful discussion about urban decline. First, by glancing at the metro population figures for Cleveland and Detroit, it becomes clear that while these cities were experiencing severe urban decay, the suburbs surrounding them were not. The Detroit metropolitan area grew more rapidly than the Boston, Philadelphia, or New York metro areas, and even the Cleveland metro area experienced growth between 1950 and 2000. Next, we can see from Tables 1 and 2 that some of the cities experiencing dramatic growth between 1950 and 2000 did not enjoy similar increases in population at the metro level. The Phoenix, Charlotte, and Nashville metro areas experienced tremendous growth, but their metro growth rates were not nearly as large as their city growth rates. This raises an important question – did these cities experience tremendous growth rates because the population was growing rapidly or because the cities were annexing large amounts of land from the surrounding suburbs? Table 3 helps to answer this question. In Table 3, land area, measured in square miles, is provided for each of the cities initially listed in Table 1. The data in Table 3 clearly indicate that Nashville and Charlotte, as well as Dallas, Phoenix, and Houston, owe some of their growth to the expansion of their physical boundaries. Charlotte, Phoenix, and Nashville are particularly obvious examples of this phenomenon, for each city increased its physical footprint by over seven hundred percent between 1950 and 2000.
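The percent-change figures in Table 3 follow the usual formula. As a purely hypothetical illustration (the numbers here are made up, not taken from the table), a city whose land area grows from 30 to 240 square miles has expanded its footprint by 700 percent – that is, to eight times its original size:

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

# Hypothetical land areas in square miles (not Table 3's actual values).
print(pct_change(30, 240))  # 700.0 -> footprint is 8x its original size
```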

Table 3: Land Area for Selected U.S. Cities, 1950-2000

[Land-area values not preserved in this copy. The table reports land area in square miles in 1950 and 2000 and the percent change for: New York, NY; Philadelphia, PA; Boston, MA; Chicago, IL; Detroit, MI; Cleveland, OH; Kansas City, MO; Denver, CO; Omaha, NE; Los Angeles, CA; San Francisco, CA; Seattle, WA; Houston, TX; Dallas, TX; Phoenix, AZ; New Orleans, LA; Atlanta, GA; Nashville, TN; Washington, DC; Miami, FL; Charlotte, NC.]

Sources: Rappaport; Gibson, Population of the 100 Largest Cities.

Taken together, Tables 1 through 3 paint a clear picture of what has happened in urban areas in the United States between 1950 and 2000: Cities in the Southern and Western U.S. have experienced relatively high rates of growth when they are compared to their neighbors in the Midwest and Northeast. And, as a consequence of this, central cities in the Midwest and Northeast have remained the same size or they have experienced moderate to severe urban decay. But, to complete this picture, it is worth considering some additional data. Table 4 presents regional population and housing data for the United States during the period from 1950 to 2000.

Table 4: Regional Population and Housing Data for the U.S., 1950-2000

[Values not preserved in this copy. The table reports, by decade from 1950 to 2000: population density (persons per square mile); population by region, in totals and as a percent of the U.S. total; population living in non-metropolitan and metropolitan areas (millions); the percent of metropolitan population living in suburbs and in central cities; the percent living in the ten largest cities; the percentage minority by region; and housing units by region.]

Source: Hobbs and Stoops (2002).

There are several items of particular interest in Table 4. Every region in the United States becomes more diverse between 1980 and 2000: no region has a minority population share greater than 26.5 percent in 1980, but by 2000 only the Midwest remains below that level. The U.S. population becomes increasingly urbanized over time, yet the percentage of Americans who live in central cities remains nearly constant. Thus, it is the number of Americans living in suburban communities that has fueled the dramatic increase in “urban” residents. This finding is reinforced by looking at the figures for average population density for the United States as a whole, the figures listing the numbers of Americans living in metropolitan versus non-metropolitan areas, and the figures listing the percentage of Americans living in the ten largest cities in the United States.

Other Measures of Urban Decline

While the population decline documented in the first part of this section suggests that cities in the Northeast and Midwest experienced severe urban decline, anyone who has visited the cities of Detroit and Boston would be able to tell you that the urban decline in these cities has affected their downtowns in very different ways. The central city in Boston is, for the most part, economically vibrant. A visitor to Boston would find manicured public spaces as well as thriving retail, housing, and commercial sectors. Detroit’s downtown is still scarred by vacant office towers, abandoned retail space, and relatively little housing. Furthermore, the city’s public spaces would not compare favorably to those of Boston. While the leaders of Detroit have made some needed improvements to the city’s downtown in the past several years, the central city remains a mere shadow of its former self. Thus, the loss of population experienced by Detroit and Boston does not tell the full story of how urban decline has affected these cities. Both have lost population, yet Detroit has lost a great deal more – it no longer possesses a well-functioning urban economy.

To date, there have been relatively few attempts to quantify the loss of economic vitality in cities afflicted by urban decay. This is due, in part, to the complexity of the problem. There are few reliable historical measures of economic activity available at the city level. However, economists and other social scientists are beginning to better understand the process and the consequences of severe urban decline.

Economists Edward Glaeser and Joseph Gyourko (2005) developed a model that thoroughly explains the process of urban decline. One of their principal insights is that the durable nature of housing means that the process of urban decline will not mirror the process of urban expansion. In a growing city, the demand for housing is met through the construction of new dwellings. When a city faces a reduction in economic productivity and the resulting reduction in the demand for labor, workers will begin to leave the city. Yet, when population in a city begins to decline, housing units do not magically disappear from the urban landscape. Thus, in Glaeser and Gyourko’s model a declining city is characterized by a stock of housing that interacts with a reduction in housing demand, producing a rapid reduction in the real price of housing. Empirical evidence supports the assertions made by the model, for in cities like Cleveland, Detroit, and Buffalo the real price of housing declined in the second half of the twentieth century. An important implication of the Glaeser and Gyourko model is that declining housing prices are likely to attract individuals who are poor and who have acquired relatively little human capital. The presence of these workers makes it difficult for a declining city – like Detroit – to reverse its economic decline, for it becomes relatively difficult to attract businesses that need workers with high levels of human capital.
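The asymmetry at the heart of the Glaeser-Gyourko argument can be illustrated with a stylized supply-and-demand sketch. The linear demand curve and the specific numbers below are illustrative assumptions, not figures from their paper; the point is only that when demand grows, construction expands quantity at roughly the cost of building, while when demand falls, the durable stock stays put and the price absorbs the whole shock.

```python
def market_price(demand_intercept, slope, stock, construction_cost):
    """Price in a city with a durable housing stock.

    Demand: P = demand_intercept - slope * Q (illustrative linear form).
    New units are built only if price covers construction cost, so the
    quantity of housing can rise above the inherited stock but never fall
    below it -- houses do not disappear when residents leave.
    """
    price_at_stock = demand_intercept - slope * stock
    if price_at_stock > construction_cost:
        # Growing city: builders add units until price equals building cost.
        return construction_cost
    # Declining city: the stock is fixed, so price falls to clear the market.
    return price_at_stock

COST = 100.0   # assumed construction cost per unit
STOCK = 50.0   # inherited housing stock

boom = market_price(demand_intercept=200.0, slope=1.0, stock=STOCK, construction_cost=COST)
bust = market_price(demand_intercept=120.0, slope=1.0, stock=STOCK, construction_cost=COST)

print(boom)  # 100.0 -- price pinned at construction cost
print(bust)  # 70.0  -- demand shifts down, price falls below building cost
```

In the sketch, the declining city ends up with housing priced well below replacement cost, which is exactly the condition Glaeser and Gyourko argue attracts low-income, low-human-capital residents and makes recovery difficult.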

Complementing the theoretical work of Glaeser and Gyourko, Fred H. Smith (2003) used property values as a proxy for economic activity in order to quantify the urban decline experienced by Cleveland, Ohio. Smith found that the aggregate assessed value of the property in the downtown core of Cleveland fell from its peak of nearly $600 million in 1930 to a mere $45 million by 1980. (Both figures are expressed in 1980 dollars.) Economists William Collins and Robert Margo have also examined the impact of urban decline on property values. Their work focuses on how the value of owner-occupied housing declined in cities that experienced race riots in the 1960s and, in particular, on the gap in property values that developed between white- and black-owned homes. Nonetheless, a great deal of work remains to be done before the magnitude of urban decay in the United States is fully understood.

What Caused Urban Decline in the United States?

Having examined the timing and the magnitude of the urban decline experienced by U.S. cities, it is now necessary to consider why these cities decayed. In the subsections that follow, each of the principal causes of urban decline is considered in turn.

Decentralizing Technologies

In “Sprawl and Urban Growth,” Edward Glaeser and Matthew Kahn (2001) assert that “while many factors may have helped the growth of sprawl, it ultimately has only one root cause: the automobile” (p. 2). Urban sprawl is simply a popular term for the decentralization of economic activity, one of the principal symptoms of urban decline. So it should come as no surprise that many of the forces that have caused urban sprawl are in fact the same forces that have driven the decline of central cities. As Glaeser and Kahn suggest, the list of causal forces must begin with the emergence of the automobile.

In order to maximize profit, firm owners must choose their locations carefully. Input prices and transportation costs (for inputs and outputs) vary across locations. Firm owners ultimately face two important location decisions, and economic forces dictate the choices made in each instance. First, owners must decide in which city they will do business. Then, they must decide where the business should be located within the chosen city. In each case, transportation costs and input costs dominate the owners’ decision making. For example, a business owner whose firm will produce steel must consider the costs of transporting inputs (e.g. iron ore), the costs of transporting the output (steel), and the cost of other inputs in the production process (e.g. labor). For steel firms operating in the late nineteenth century, these concerns were balanced by choosing locations in the Midwest, either on the Great Lakes (e.g. Cleveland) or on major rivers (e.g. Pittsburgh). Cleveland and Pittsburgh were cities with plentiful labor and relatively low transport costs for both inputs and output. However, steel firm owners choosing Cleveland or Pittsburgh also had to choose a location within these cities. Not surprisingly, the owners chose locations that minimized transportation costs. In Cleveland, for example, the steel mills were built near the shore of Lake Erie and relatively close to the main rail terminal. This minimized the cost of getting iron ore from ships that had come to the city via Lake Erie, and it also provided easy access to water or rail transportation for shipping the finished product. The cost of choosing a site near the rail terminal and the city’s docks was not insignificant: land close to the city’s transportation hub was in high demand and therefore relatively expensive. It would have been cheaper for firm owners to buy land on the periphery of these cities, but they chose not to, because the costs of transporting inputs and outputs to and from the transportation hub would have outweighed the savings from buying cheaper land on the periphery. Ultimately, it was the absence of cheap intra-city transport that compressed economic activity into the center of an urban area.

Yet, transportation costs and input prices have not simply varied across space; they’ve also changed over time. The introduction of the car and truck had a profound impact on transportation costs. In 1890, moving a ton of goods one mile cost 18.5 cents (measured in 2001 dollars). By 2003 the cost had fallen to 2.3 cents (measured in 2001 dollars) per ton-mile (Glaeser and Kahn 2001, p. 4). While the car and truck dramatically lowered transportation costs, they did not immediately affect firm owners’ choices about which city to choose as their base of operations. Rather, the immediate impact was felt in the choice of where within a city a firm should choose to locate. The intra-city truck made it easy for a firm to locate on the periphery of the city, where land was plentiful and relatively cheap. Returning to the example from the previous paragraph, the introduction of the intra-city truck allowed the owners of steel mills in Cleveland to build new plants on the periphery of the urban area where land was much cheaper (Encyclopedia of Cleveland History). Similarly, the car made it possible for residents to move away from the city center and out to the periphery of the city – or even to newly formed suburbs. (The suburbanization of the urban population had begun in the late nineteenth century when streetcar lines extended from the central city out to the periphery of the city or to communities surrounding the city; the automobile simply accelerated the process of decentralization.) The retail cost of a Ford Model T dropped considerably between 1910 and 1925 – from approximately $1850 to $470, measuring the prices in constant 1925 dollars (these values would be roughly $21,260 and $5400 in 2006 dollars), and the market responded accordingly. As Table 5 illustrates, the number of passenger car registrations increased dramatically during the twentieth century.

Table 5: Passenger Car Registrations in the United States, 1910-1980 (millions of registered vehicles)

[Table values not preserved in this copy.]

Source: Muller, p. 36.
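The two price declines cited above are easy to verify from the figures in the text; the percentages below are a simple consistency check, not additional data.

```python
# Freight cost per ton-mile, in 2001 dollars (figures from Glaeser and Kahn 2001).
cost_1890, cost_2003 = 18.5, 2.3
freight_decline = (cost_1890 - cost_2003) / cost_1890

# Ford Model T retail price, in constant 1925 dollars (figures from the text).
price_1910, price_1925 = 1850.0, 470.0
model_t_decline = (price_1910 - price_1925) / price_1910

print(f"freight cost fell {freight_decline:.0%}")   # roughly 88%
print(f"Model T price fell {model_t_decline:.0%}")  # roughly 75%
```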

While changes in transportation technology had a profound effect on firms’ and residents’ choices about where to locate within a given city, they also affected the choice of which city would be best for the firm or resident. Americans began demanding more and better roads to capitalize on the mobility made possible by the car. The automotive, construction, and tourism-related industries also lobbied state and federal governments to become heavily involved in funding road construction, a responsibility previously left to local governments. The landmark National Interstate and Defense Highway Act of 1956 signified a long-term commitment by the national government to unite the country through an extensive network of interstates, while also improving access between cities’ central business districts and outlying suburbs. As cars became affordable for the average American, and paved roads became increasingly ubiquitous, not only did the suburban frontier open up to a rising proportion of the population; it was now possible to live almost anywhere in the United States. (However, it is important to note that the widespread availability of air conditioning was a critical factor in Americans’ willingness to move to the South and West.)

Another factor that opened up the rest of the United States for urban development was a change in the cost of obtaining energy. Obtaining abundant, cheap energy is a concern for firm owners and for households. Historical constraints on production and residential locations continued to fall away in the late nineteenth and early twentieth century as innovations in energy production began to take hold. One of the most important of these advances was the spread of the alternating-current electric grid, which further expanded firms’ choices regarding plant location and layout. Energy could be generated at any site and could travel long distances through thin copper wires. Over a fifty-year period from 1890 to 1940, the proportion of goods manufactured using electrical power soared from 0.1 percent to 85.6 percent (Nye 1990). With the complementary advancements in transportation, factories now had the option of locating outside of the city where they could capture savings from cheaper land. The flexibility of electrical power also offered factories new freedom in the spatial organization of production. Whereas steam engines had required a vertical system of organization in multi-level buildings, the AC grid made possible a form of production that permanently transformed the face of manufacturing – the assembly line (Nye 1990).

The Great Migration

Technological advances were not bound by urban limits; they also extended into rural America where they had sweeping social and economic repercussions. Historically, the vast majority of African Americans had worked on Southern farms, first as slaves and then as sharecroppers. But progress in the mechanization of farming – particularly the development of the tractor and the mechanical cotton-picker – reduced the need for unskilled labor on farms. The dwindling need for farm laborers coupled with continuing racial repression in the South led hundreds of thousands of southern African Americans to migrate North in search of new opportunities. The overall result was a dramatic shift in the spatial distribution of African Americans. In 1900, more than three-fourths of black Americans lived in rural areas, and all but a handful of rural blacks lived in the South. By 1960, 73% of blacks lived in urban areas, and the majority of the urban blacks lived outside of the South (Cahill 1974).

Blacks had begun moving to Northern cities in large numbers at the onset of World War I, drawn by the lure of booming wartime industries. In the 1940s, Southern blacks began pouring into the industrial centers at more than triple the rate of the previous decade, bringing with them a legacy of poverty, poor education, and repression. The swell of impoverished and uneducated African Americans rarely received a friendly reception in Northern communities. Instead they frequently faced more of the treatment they had sought to escape (Groh 1972). Furthermore, the abundance of unskilled manufacturing jobs that had greeted the first waves of migrants had begun to dwindle. Manufacturing firms in the upper Midwest (the Rustbelt) faced increased competition from foreign firms, and many of the American firms that remained in business relocated to the suburbs or the Sunbelt to take advantage of cheap land. African Americans had difficulty accessing jobs at suburban locations, and the result for many was a “spatial mismatch”: they lived in the inner city, where employment opportunities were scarce, yet lacked the transportation that would allow them to commute to suburban jobs (Kain 1968). Institutionalized racism, which hindered blacks’ attempts to purchase real estate in the suburbs, as well as the proliferation of inner-city public housing projects, reinforced the spatial mismatch problem. For inner-city African Americans coping with high unemployment, high crime rates and urban disturbances such as the race riots of the 1960s were obvious symptoms of economic distress. High crime rates and the race riots simply accelerated the demographic transformation of Northern cities.
White city residents had once been “pulled” to the suburbs by the availability of cheap land and cheap transportation when the automobile became affordable; now white residents were being “pushed” by racism and the desire to escape the poverty and crime that had become common in the inner city. Indeed, by 2000 more than 80 percent of Detroit’s residents were African American – a stark contrast from 1950 when only 16 percent of the population was black.

The American City in the Twenty-First Century

Some believe that technology – specifically advances in information technology – will render the city obsolete in the twenty-first century. Urban economists find their arguments unpersuasive (Glaeser 1998). Recent history shows that the way we interact with one another has changed dramatically in a very short period of time. E-mail, cell phones, and text messages belonged to the world of science fiction as recently as 1980. Clearly, information technology no longer requires us to locate in close proximity to the people we want to interact with. Thus, one can understand the temptation to think that we will no longer need to live so close to one another in New York, San Francisco or Chicago. Ultimately, a person or a firm will only locate in a city if the benefits from being in the city outweigh the costs. What is missing from this analysis, though, is that people and firms locate in cities for reasons that are not immediately obvious.

Economists point to economies of agglomeration as one of the main reasons that firms will continue to choose urban locations over rural locations. Economies of agglomeration exist when a firm’s productivity is enhanced (or its cost of doing business is lowered) because it is located in a cluster of complementary firms or in a densely populated area. A classic example of an urban area that displays substantial economies of agglomeration is “Silicon Valley” (near San Jose, California). Firms choosing to locate in Silicon Valley benefit from several sources of economies of agglomeration, but two of the most easily understood are knowledge spillovers and labor pooling. Knowledge spillovers in Silicon Valley occur because individuals who work at “computer firms” (firms producing software, hardware, etc.) are likely to interact with one another on a regular basis. These interactions can be informal – playing together on a softball team, running into one another at a child’s soccer game, etc. – but they are still very meaningful because they promote the exchange of ideas. Exchanging ideas and information makes it possible for workers to (potentially) increase their productivity at their own jobs. Another example of economies of agglomeration in Silicon Valley is the labor pooling that occurs there. Because workers who are trained in computer-related fields know that computer firms are located in Silicon Valley, they are more likely to choose to live in or around Silicon Valley. Thus, firms operating in Silicon Valley have an abundant supply of labor in close proximity, and, similarly, workers enjoy the opportunities associated with having several firms that can make use of their skills in a small geographic area. The clustering of computer industry workers and firms allows firms to save money when they need to hire another worker, and it makes it easier for workers who need a job to find one.

In addition to economies of agglomeration, there are other economic forces that make the disappearance of the city unlikely. Another benefit that some individuals associate with urban living is the diversity of products and experiences available in a city. For example, in a large city like Chicago it is possible to find deep dish pizza, thin crust pizza, Italian food, Persian food, Greek food, Swedish food, Indian food, Chinese food… almost any type of food that you might imagine. Why is all of this food available in Chicago but not in a small town in southern Illinois? Economists answer this question using the concept of demand density. Lots of people like Chinese food, so it is not uncommon to find a Chinese restaurant in a small town. Fewer people, though, have been exposed to Persian cuisine. While it is quite likely that the average American would like Persian food if it were available, most Americans haven’t had the opportunity to try it. Hence, the average American is unlikely to demand much Persian food in a given time period. So, individuals who are interested in operating a Persian restaurant logically choose to operate in Chicago instead of a small town in southern Illinois. While each individual living in Chicago may not demand Persian food any more frequently than the individuals living in the small town, the presence of so many people in a relatively small area makes it possible for the Persian restaurant to operate and thrive. Moreover, exposure to Persian food may change people’s tastes and preferences. Over time, the amount of Persian food demanded (on average) by each inhabitant of the city may increase.

Individuals who value Persian food – or any of the other experiences that can only be found in a large city – will value the opportunity to live in a large city more than they will value the opportunity to live in a rural area. But the incredible diversity that a large city has to offer is a huge benefit to some individuals, not to everyone. Rural areas will continue to be populated as long as there are people who prefer the pleasures of low-density living. For these individuals, the pleasure of being able to walk in the woods or hike in the mountains may be more than enough compensation for living in a part of the country that doesn’t have a Persian restaurant.

As long as there are people (and firm owners) who believe that the benefits from locating in a city outweigh the costs, cities will continue to exist. The data shown above make it clear that Americans continue to value urban living. Indeed, the population figures for Chicago and New York suggest that in the 1990s more people were finding that there are net benefits to living in very large cities. The rapid expansion of cities in the South and Southwest simply reinforces this idea. To be sure, the urban living experience in Charlotte is not the same as the urban living experience in Chicago or New York. So, while the urban cores of cities like Detroit and Cleveland are not likely to return to their former size anytime soon, and urban decline will continue to be a problem for these cities in the foreseeable future, it remains clear that Americans enjoy the benefits of urban living and that the American city will continue to thrive in the future.


Cahill, Edward E. “Migration and the Decline of the Black Population in Rural and Non-Metropolitan Areas.” Phylon 35, no. 3 (1974): 284-92.

Casadesus-Masanell, Ramon. “Ford’s Model-T: Pricing over the Product Life Cycle.” ABANTE – Studies in Business Management 1, no. 2 (1998): 143-65.

Chudacoff, Howard and Judith Smith. The Evolution of American Urban Society, fifth edition. Upper Saddle River, NJ: Prentice Hall, 2000.

Collins, William and Robert Margo. “The Economic Aftermath of the 1960s Riots in American Cities: Evidence from Property Values.” Journal of Economic History 67, no. 4 (2007): 849 -83.

Collins, William and Robert Margo. “Race and the Value of Owner-Occupied Housing, 1940-1990.” Regional Science and Urban Economics 33, no. 3 (2003): 255-86.

Cutler, David et al. “The Rise and Decline of the American Ghetto.” Journal of Political Economy 107, no. 3 (1999): 455-506.

Frey, William and Alden Speare, Jr. Regional and Metropolitan Growth and Decline in the United States. New York: Russell Sage Foundation, 1988.

Gibson, Campbell. “Population of the 100 Largest Cities and Other Urban Places in the United States: 1790 to 1990.” Population Division Working Paper, no. 27, U.S. Bureau of the Census, June 1998. Accessed at:

Glaeser, Edward. “Are Cities Dying?” Journal of Economic Perspectives 12, no. 2 (1998): 139-60.

Glaeser, Edward and Joseph Gyourko. “Urban Decline and Durable Housing.” Journal of Political Economy 113, no. 2 (2005): 345-75.

Glaeser, Edward and Matthew Kahn. “Decentralized Employment and the Transformation of the American City.” Brookings-Wharton Papers on Urban Affairs, 2001.

Glaeser, Edward and Janet Kohlhase. “Cities, Regions, and the Decline of Transport Costs.” NBER Working Paper Series, National Bureau of Economic Research, 2003.

Glaeser, Edward and Albert Saiz. “The Rise of the Skilled City.” Brookings-Wharton Papers on Urban Affairs, 2004.

Glaeser, Edward and Jesse Shapiro. “Urban Growth in the 1990s: Is City Living Back?” Journal of Regional Science 43, no. 1 (2003): 139-65.

Groh, George. The Black Migration: The Journey to Urban America. New York: Weybright and Talley, 1972.

Gutfreund, Owen D. Twentieth Century Sprawl: Highways and the Reshaping of the American Landscape. Oxford: Oxford University Press, 2004.

Hanson, Susan, ed. The Geography of Urban Transportation. New York: Guilford Press, 1986.

Hobbs, Frank and Nicole Stoops. Demographic Trends in the Twentieth Century: Census 2000 Special Reports. Washington, DC: U.S. Census Bureau, 2002.

Kim, Sukkoo. “Urban Development in the United States, 1690-1990.” NBER Working Paper Series, National Bureau of Economic Research, 1999.

Mieszkowski, Peter and Edwin Mills. “The Causes of Metropolitan Suburbanization.” Journal of Economic Perspectives 7, no. 3 (1993): 135-47.

Muller, Peter. “Transportation and Urban Form: Stages in the Spatial Evolution of the American Metropolis.” In The Geography of Urban Transportation, edited by Susan Hanson. New York: Guilford Press, 1986.

Nye, David. Electrifying America: Social Meanings of a New Technology, 1880-1940. Cambridge, MA: MIT Press, 1990.

Nye, David. Consuming Power: A Social History of American Energies. Cambridge, MA: MIT Press, 1998.

Rae, Douglas. City: Urbanism and Its End. New Haven: Yale University Press, 2003.

Rappaport, Jordan. “U.S. Urban Decline and Growth, 1950 to 2000.” Economic Review: Federal Reserve Bank of Kansas City, no. 3, 2003: 15-44.

Rodwin, Lloyd and Hidehiko Sazanami, eds. Deindustrialization and Regional Economic Transformation: The Experience of the United States. Boston: Unwin Hyman, 1989.

Smith, Fred H. “Decaying at the Core: Urban Decline in Cleveland, Ohio.” Research in Economic History 21 (2003): 135-84.

Stanback, Thomas M. Jr. and Thierry J. Noyelle. Cities in Transition: Changing Job Structures in Atlanta, Denver, Buffalo, Phoenix, Columbus (Ohio), Nashville, Charlotte. Totowa, NJ: Allanheld, Osmun, 1982.

Van Tassel, David D. and John J. Grabowski, eds. The Encyclopedia of Cleveland History. Bloomington: Indiana University Press, 1996. Available at

[1] Reporting the size of a “city” should be done with care. In day-to-day usage, many Americans might talk about the size (population) of Boston and assert that Boston is a larger city than Phoenix. Strictly speaking, this is not true. The 2000 Census reports that the population of Boston was 589,000 while Phoenix had a population of 1.3 million. However, the Boston metropolitan area contained 4.4 million inhabitants in 2000 – substantially more than the 3.3 million residents of the Phoenix metropolitan area.

Citation: Smith, Fred and Sarah Allen. “Urban Decline (and Success), US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL

The 1929 Stock Market Crash

Harold Bierman, Jr., Cornell University


The 1929 stock market crash is conventionally said to have occurred on Thursday the 24th and Tuesday the 29th of October. These two dates have been dubbed “Black Thursday” and “Black Tuesday,” respectively. On September 3, 1929, the Dow Jones Industrial Average reached a record high of 381.2. At the end of the market day on Thursday, October 24, the market was at 299.5 — a 21 percent decline from the high. On this day the market fell 33 points — a drop of 9 percent — on trading that was approximately three times the normal daily volume for the first nine months of the year. By all accounts, there was a selling panic. By November 13, 1929, the market had fallen to 199. By the time the crash was completed in 1932, following an unprecedentedly large economic depression, stocks had lost nearly 90 percent of their value.
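The percentage declines in this paragraph follow directly from the index levels cited; the short check below uses only the figures in the text.

```python
peak = 381.2           # Dow Jones record high, September 3, 1929
black_thursday = 299.5 # close on Thursday, October 24, 1929
november_13 = 199.0    # level by November 13, 1929

def decline(high, low):
    """Fractional fall from a high to a low."""
    return (high - low) / high

print(f"{decline(peak, black_thursday):.1%}")  # 21.4%, the "21 percent" in the text
print(f"{decline(peak, november_13):.1%}")     # 47.8% by November 13, 1929
```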

The events of Black Thursday are normally defined to be the start of the stock market crash of 1929-1932, but the series of events leading to the crash started before that date. This article examines the causes of the 1929 stock market crash. While no consensus exists about its precise causes, the article will critique some arguments and support a preferred set of conclusions. It argues that one of the primary causes was the attempt by important people and the media to stop market speculators. A second probable cause was the great expansion of investment trusts, public utility holding companies, and the amount of margin buying, all of which fueled the purchase of public utility stocks, and drove up their prices. Public utilities, utility holding companies, and investment trusts were all highly levered using large amounts of debt and preferred stock. These factors seem to have set the stage for the triggering event. This sector was vulnerable to the arrival of bad news regarding utility regulation. In October 1929, the bad news arrived and utility stocks fell dramatically. After the utilities decreased in price, margin buyers had to sell and there was then panic selling of all stocks.

The Conventional View

The crash helped bring on the depression of the thirties and the depression helped to extend the period of low stock prices, thus “proving” to many that the prices had been too high.

Laying the blame for the “boom” on speculators was common in 1929. Thus, immediately upon learning of the crash of October 24 John Maynard Keynes (Moggridge, 1981, p. 2 of Vol. XX) wrote in the New York Evening Post (25 October 1929) that “The extraordinary speculation on Wall Street in past months has driven up the rate of interest to an unprecedented level.” And the Economist when stock prices reached their low for the year repeated the theme that the U.S. stock market had been too high (November 2, 1929, p. 806): “there is warrant for hoping that the deflation of the exaggerated balloon of American stock values will be for the good of the world.” The key phrases in these quotations are “exaggerated balloon of American stock values” and “extraordinary speculation on Wall Street.” Likewise, President Herbert Hoover saw increasing stock market prices leading up to the crash as a speculative bubble manufactured by the mistakes of the Federal Reserve Board. “One of these clouds was an American wave of optimism, born of continued progress over the decade, which the Federal Reserve Board transformed into the stock-exchange Mississippi Bubble” (Hoover, 1952). Thus, the common viewpoint was that stock prices were too high.

There is much to criticize in conventional interpretations of the 1929 stock market crash, however. (Even the name is inexact. The largest losses to the market did not come in October 1929 but rather in the following two years.) In December 1929, many expert economists, including Keynes and Irving Fisher, felt that the financial crisis had ended and by April 1930 the Standard and Poor 500 composite index was at 25.92, compared to a 1929 close of 21.45. There are good reasons for thinking that the stock market was not obviously overvalued in 1929 and that it was sensible to hold most stocks in the fall of 1929 and to buy stocks in December 1929 (admittedly this investment strategy would have been terribly unsuccessful).

Were Stocks Obviously Overpriced in October 1929?
Debatable — Economic Indicators Were Strong

From 1925 to the third quarter of 1929, common stocks increased in value by 120 percent, a compound annual growth rate of 21.8 percent over the four years. While this is a large rate of appreciation, it is not obvious proof of an “orgy of speculation.” The decade of the 1920s was extremely prosperous, and the stock market, with its rising prices, reflected this prosperity as well as the expectation that the prosperity would continue.
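The compound annual rate quoted here can be recovered directly from the cumulative gain (figures from the text):

```python
total_gain = 1.20   # 120 percent increase, 1925 to third quarter of 1929
years = 4

# A 120% gain means prices multiplied by 2.2; the annual rate is the
# fourth root of that multiple, minus one.
annual_rate = (1 + total_gain) ** (1 / years) - 1
print(f"{annual_rate:.1%}")  # 21.8%, matching the rate in the text
```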

The fact that the stock market lost 90 percent of its value from 1929 to 1932 indicates that the market, at least using one criterion (actual performance of the market), was overvalued in 1929. John Kenneth Galbraith (1961) implies that there was a speculative orgy and that the crash was predictable: “Early in 1928, the nature of the boom changed. The mass escape into make-believe, so much a part of the true speculative orgy, started in earnest.” Galbraith had no difficulty in 1961 identifying the end of the boom in 1929: “On the first of January of 1929, as a matter of probability, it was most likely that the boom would end before the year was out.”

Compare this position with the fact that Irving Fisher, one of the leading economists in the U.S. at the time, was heavily invested in stocks and was bullish before and after the October sell offs; he lost his entire wealth (including his house) before stocks started to recover. In England, John Maynard Keynes, possibly the world’s leading economist during the first half of the twentieth century, and an acknowledged master of practical finance, also lost heavily. Paul Samuelson (1979) quotes P. Sergeant Florence (another leading economist): “Keynes may have made his own fortune and that of King’s College, but the investment trust of Keynes and Dennis Robertson managed to lose my fortune in 1929.”

Galbraith’s ability to ‘forecast’ the market turn is not shared by all. Samuelson (1979) admits that: “playing as I often do the experiment of studying price profiles with their dates concealed, I discovered that I would have been caught by the 1929 debacle.” For many, the collapse from 1929 to 1933 was neither foreseeable nor inevitable.

The stock price increases leading up to October 1929 were not driven solely by fools or speculators. There were also intelligent, knowledgeable investors who were buying or holding stocks in September and October 1929. Also, leading economists, both then and now, could neither anticipate nor explain the October 1929 decline of the market. Thus, the conviction that stocks were obviously overpriced is somewhat of a myth.

The nation’s total real income rose from 1921 to 1923 by 10.5% per year, and from 1923 to 1929, it rose 3.4% per year. The 1920s were, in fact, a period of real growth and prosperity. For the period of 1923-1929, wholesale prices went down 0.9% per year, reflecting moderate stable growth in the money supply during a period of healthy real growth.

Examining the manufacturing situation in the United States prior to the crash is also informative. Irving Fisher’s Stock Market Crash and After (1930) offers much data indicating that there was real growth in the manufacturing sector. The evidence presented goes a long way to explain Fisher’s optimism regarding the level of stock prices. What Fisher saw was manufacturing efficiency (output per worker) increasing rapidly, as were manufacturing output and the use of electricity.

The financial fundamentals of the markets were also strong. During 1928, the price-earnings ratio for 45 industrial stocks increased from approximately 12 to approximately 14. It was over 15 in 1929 for industrials and then decreased to approximately 10 by the end of 1929. While not low, these price-earnings (P/E) ratios were by no means out of line historically. Values in this range would be considered reasonable by most market analysts today. For example, the P/E ratio of the S&P 500 in July 2003 reached a high of 33 and in May 2004 the high was 23.

The rise in stock prices was not uniform across all industries. The stocks that went up the most were in industries where the economic fundamentals indicated there was cause for large amounts of optimism. They included airplanes, agricultural implements, chemicals, department stores, steel, utilities, telephone and telegraph, electrical equipment, oil, paper, and radio. These were reasonable choices for expectations of growth.

To put the P/E ratios of 10 to 15 in perspective, note that government bonds in 1929 yielded 3.4%. Industrial bonds of investment grade were yielding 5.1%. Consider that an interest rate of 5.1% represents a 1/(0.051) = 19.6 price-earnings ratio for debt.
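As a quick check of this comparison, the sketch below (not part of the original article) converts a bond yield into the equivalent price-earnings ratio for debt, i.e. the reciprocal of the yield:

```python
# Illustrative sketch: a bond yielding y is priced at 1/y times its annual
# interest "earnings," which is the debt analogue of a P/E ratio.

def equivalent_pe(yield_rate: float) -> float:
    """Price-earnings ratio implied by an interest yield."""
    return 1.0 / yield_rate

print(round(equivalent_pe(0.051), 1))  # investment-grade industrial bonds: 19.6
print(round(equivalent_pe(0.034), 1))  # government bonds: 29.4
```

Against a debt "P/E" of 19.6, the equity P/E ratios of 10 to 15 cited above look conservative rather than speculative.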

In 1930, the Federal Reserve Bulletin reported production in 1920 at an index of 87. The index went down to 67 in 1921, then climbed steadily (except for 1924) until it reached 125 in 1929. This is an annual growth rate in production of 3.1%. During the period, commodity prices actually decreased. The production record for the ten-year period was exceptionally good.

Factory payrolls in September were at an index of 111 (an all-time high). In October the index dropped to 110, which beat all previous months and years except for September 1929. The factory employment measures were consistent with the payroll index.

The September unadjusted measure of freight car loadings was at 121 — also an all-time record. In October the loadings dropped to 118, which was a performance second only to September’s record measure.

J.W. Kendrick (1961) shows that the period 1919-1929 had an unusually high rate of change in total factor productivity. The annual rate of change of 5.3% for 1919-1929 for the manufacturing sector was more than twice the 2.5% rate of the second best period (1948-1953). Farming productivity change for 1919-1929 was second only to the period 1929-1937. Overall, the period 1919-1929 easily took first place for productivity increases, handily beating the six other time periods studied by Kendrick (all the periods studied were prior to 1961) with an annual productivity change measure of 3.7%. This was outstanding economic performance — performance which normally would justify stock market optimism.

In the first nine months of 1929, 1,436 firms announced increased dividends. In 1928, the number was only 955 and in 1927, it was 755. In September 1929 dividend increases were announced by 193 firms compared with 135 the year before. The financial news from corporations was very positive in September and October 1929.

The May issue of the National City Bank of New York Newsletter indicated that the earnings statements of surveyed firms for the first quarter showed a 31% increase compared to the first quarter of 1928. The August issue showed that for 650 firms the increase for the first six months of 1929, compared to 1928, was 24.4%. In September, the results were expanded to 916 firms, with a 27.4% increase. The earnings for the third quarter for 638 firms were calculated to be 14.1% larger than for 1928. This is evidence that the general level of business activity and reported profits were excellent at the end of September 1929 and in the middle of October 1929.

Barrie Wigmore (1985) researched 1929 financial data for 135 firms. The market price as a percentage of year-end book value was 420% using the high prices and 181% using the low prices. However, the return on equity for the firms (using the year-end book value) was a high 16.5%. The dividend yield was 2.96% using the high stock prices and 5.9% using the low stock prices.

Article after article from January to October in business magazines carried news of outstanding economic performance. E.K. Berger and A.M. Leinbach, two staff writers of the Magazine of Wall Street, wrote in June 1929: “Business so far this year has astonished even the perennial optimists.”

To summarize: There was little hint of a severe weakness in the real economy in the months prior to October 1929. There is a great deal of evidence that in 1929 stock prices were not out of line with the real economics of the firms that had issued the stock. Leading economists were betting that common stocks in the fall of 1929 were a good buy. Conventional financial reports of corporations gave cause for optimism relative to the 1929 earnings of corporations. Price-earnings ratios, dividend amounts and changes in dividends, and earnings and changes in earnings all gave cause for stock price optimism.

Table 1 shows the average of the highs and lows of the Dow Jones Industrial Index for 1922 to 1932.

Table 1
Dow-Jones Industrials Index Average
of Lows and Highs for the Year
1922 91.0
1923 95.6
1924 104.4
1925 137.2
1926 150.9
1927 177.6
1928 245.6
1929 290.0
1930 225.8
1931 134.1
1932 79.4

Sources: 1922-1929 measures are from the Stock Market Study, U.S. Senate, 1955, pp. 40, 49, 110, and 111; 1930-1932 Wigmore, 1985, pp. 637-639.

Using the information in Table 1, from 1922 to 1929 stocks rose in value by 218.7%. This is equivalent to an 18% annual growth rate in value for the seven years. From 1929 to 1932 stocks lost 73% of their value (different indices measured at different times would give different measures of the increase and decrease). The price increases were large, but not beyond comprehension. The price decreases taken to 1932 were consistent with the fact that by 1932 there was a worldwide depression.
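The growth and decline figures cited above can be verified directly from the Table 1 index values; a small sketch (the helper function is mine, not from the source):

```python
# Verify the annualized rise (1922-1929) and total decline (1929-1932)
# implied by the Dow-Jones averages in Table 1.

def annual_growth(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two index levels."""
    return (end / start) ** (1.0 / years) - 1.0

rise = annual_growth(91.0, 290.0, 7)   # 1922 -> 1929, seven years
print(f"{rise:.1%}")                   # 18.0% per year

loss = 1.0 - 79.4 / 290.0              # 1929 -> 1932 total decline
print(f"{loss:.0%}")                   # 73%
```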

If we take the 386 high of September 1929 and the 1929 year-end value of 248.5, the market lost 36% of its value during that four-month period. Most of us, had we held stock in September 1929, would not have sold early in October. In fact, if I had money to invest, I would have purchased after the major break on Black Thursday, October 24. (I would have been sorry.)

Events Precipitating the Crash

Although it can be argued that the stock market was not overvalued, there is evidence that many feared that it was overvalued — including the Federal Reserve Board and the United States Senate. By 1929, there were many who felt the market price of equity securities had increased too much, and this feeling was reinforced daily by the media and statements by influential government officials.

What precipitated the October 1929 crash?

My research minimizes several candidates that are frequently cited by others (see Bierman 1991, 1998, 1999, and 2001).

  • The market did not fall just because it was too high — as argued above it is not obvious that it was too high.
  • The actions of the Federal Reserve, while not always wise, cannot be directly identified with the October stock market crashes in an important way.
  • The Smoot-Hawley tariff, while looming on the horizon, was not cited by the news sources in 1929 as a factor, and was probably not important to the October 1929 market.
  • The Hatry Affair in England was not material for the New York Stock Exchange and the timing did not coincide with the October crashes.
  • Business activity news in October was generally good and there were very few hints of a coming depression.
  • Short selling and bear raids were not large enough to move the entire market.
  • Fraud and other illegal or immoral acts were not material, despite the attention they have received.

Barsky and DeLong (1990, p. 280) stress the importance of fundamentals rather than fads or fashions. “Our conclusion is that major decade-to-decade stock market movements arise predominantly from careful re-evaluation of fundamentals and less so from fads or fashions.” The argument below is consistent with their conclusion, but with one major exception: in September 1929 the market value of one segment of the market, the public utility sector, rested on existing fundamentals, and those fundamentals seem to have changed considerably in October 1929.

A Look at the Financial Press

On Thursday, October 3, 1929, the Washington Post exclaimed in a page 1 headline, “Stock Prices Crash in Frantic Selling.” The New York Times of October 4 headed a page 1 article with “Year’s Worst Break Hits Stock Market.” The article on the first page of the Times cited three contributing factors:

  • A large broker loan increase was expected (the article stated that the loans increased, but the increase was not as large as expected).
  • The statement by Philip Snowden, England’s Chancellor of the Exchequer, that described America’s stock market as a “speculative orgy.”
  • Weakening of margin accounts making it necessary to sell, which further depressed prices.

While the 1928 and 1929 financial press focused extensively and excessively on broker loans and margin account activity, the statement by Snowden is the only unique relevant news event on October 3. The October 4 (p. 20) issue of the Wall Street Journal also reported the remark by Snowden that there was “a perfect orgy of speculation.” Also, on October 4, the New York Times made another editorial reference to Snowden’s American speculation orgy. It added that “Wall Street had come to recognize its truth.” The editorial also quoted Secretary of the Treasury Mellon that investors “acted as if the price of securities would infinitely advance.” The Times editor obviously thought there was excessive speculation, and agreed with Snowden.

The stock market went down on October 3 and October 4, but almost all reported business news was very optimistic. The primary negative news item was the statement by Snowden regarding the amount of speculation in the American stock market. The market had been subjected to a barrage of statements throughout the year that there was excessive speculation and that the level of stock prices was too high. There is a possibility that the Snowden comment reported on October 3 was the push that started the boulder down the hill, but there were other events that also jeopardized the level of the market.

On August 8, the Federal Reserve Bank of New York had increased the rediscount rate from 5 to 6%. On September 26 the Bank of England raised its discount rate from 5.5 to 6.5%. England was losing gold as a result of investment in the New York Stock Exchange and wanted to decrease this investment. The Hatry Case also happened in September. It was first reported on September 29, 1929. Both the collapse of the Hatry industrial empire and the increase in the investment returns available in England resulted in shrinkage of English investment (especially the financing of broker loans) in the United States, adding to the market instability in the beginning of October.

Wednesday, October 16, 1929

On Wednesday, October 16, stock prices again declined. The Washington Post (October 17, p. 1) reported “Crushing Blow Again Dealt Stock Market.” Remember, the start of the stock market crash is conventionally identified with Black Thursday, October 24, but there were price declines on October 3, 4, and 16.

The news reports of the Post on October 17 and subsequent days are important since they were Associated Press (AP) releases, thus broadly read throughout the country. The Associated Press reported (p. 1) “The index of 20 leading public utilities computed for the Associated Press by the Standard Statistics Co. dropped 19.7 points to 302.4 which contrasts with the year’s high established less than a month ago.” This index had also dropped 18.7 points on October 3 and 4.3 points on October 4. The Times (October 17, p. 38) reported, “The utility stocks suffered most as a group in the day’s break.”

The economic news after the price drops of October 3 and October 4 had been good. But the deluge of bad news regarding public utility regulation seems to have truly upset the market. On Saturday, October 19, the Washington Post headlined (p. 13) “20 Utility Stocks Hit New Low Mark” and (Associated Press) “The utility shares again broke wide open and the general list came tumbling down almost half as far.” The October 20 issue of the Post had another relevant AP article (p. 12) “The selling again concentrated today on the utilities, which were in general depressed to the lowest levels since early July.”

An evaluation of the October 16 break in the New York Times on Sunday, October 20 (pp. 1 and 29) gave the following favorable factors:

  • stable business conditions
  • low money rates (5%)
  • good retail trade
  • revival of the bond market
  • buying power of investment trusts
  • largest short interest in history (this is the total dollar value of stock sold where the investors do not own the stock they sold)

The following negative factors were described:

  • undigested investment trusts and new common stock shares
  • increase in broker loans
  • some high stock prices
  • agricultural prices lower
  • nervous market

The negative factors were not very upsetting to an investor if one was optimistic that the real economic boom (business prosperity) would continue. The Times failed to consider the impact on the market of the news concerning the regulation of public utilities.

Monday, October 21, 1929

On Monday, October 21, the market went down again. The Times (October 22) identified the causes to be

  • margin sellers (buyers on margin being forced to sell)
  • foreign money liquidating
  • skillful short selling

The same newspaper carried an article about a talk by Irving Fisher (p. 24), “Fisher says prices of stocks are low.” Fisher also defended investment trusts as offering investors diversification and thus reduced risk. He was reminded by a person attending the talk that in May he had “pointed out that predicting the human behavior of the market was quite different from analyzing its economic soundness.” Fisher was better with fundamentals than with market psychology.

Wednesday, October 23, 1929

On Wednesday, October 23 the market tumbled. The Times headlines (October 24, p.1) said “Prices of Stocks Crash in Heavy Liquidation.” The Washington Post (p. 1) had “Huge Selling Wave Creates Near-Panic as Stocks Collapse.” In a total market value of $87 billion the market declined $4 billion — a 4.6% drop. If the events of the next day (Black Thursday) had not occurred, October 23 would have gone down in history as a major stock market event. But October 24 was to make the “Crash” of October 23 become merely a “Dip.”

The Times lamented (October 24, p. 38), “There was hardly a single item of news which might be construed as bearish.”

Thursday, October 24, 1929

Thursday, October 24 (Black Thursday) was a 12,894,650 share day (the previous record was 8,246,742 shares on March 26, 1929) on the NYSE. The headline on page one of the Times (October 25) was “Treasury Officials Blame Speculation.”

The Times (p. 41) moaned that the cost of call money had been 20% in March and the price break in March was understandable. (A call loan is a loan payable on demand of the lender.) Call money on October 24 cost only 5%. There should not have been a crash. The Friday Wall Street Journal (October 25) gave New York bankers credit for stopping the price decline with $1 billion of support.

The Washington Post (October 26, p. 1) reported “Market Drop Fails to Alarm Officials.” The “officials” were all in Washington. The rest of the country seemed alarmed. On October 25, the market gained. President Hoover made a statement on Friday regarding the excellent state of business, but then added how building and construction had been adversely “affected by the high interest rates induced by stock speculation” (New York Times, October 26, p. 1). A Times editorial (p. 16) quoted Snowden’s “orgy of speculation” again.

Tuesday, October 29, 1929

The Sunday, October 27 edition of the Times had a two-column article, “Bay State Utilities Face Investigation.” It implied that regulation in Massachusetts was going to be less friendly towards utilities. Stocks again went down on Monday, October 28. There were 9,212,800 shares traded (3,000,000 in the final hour). The Times on Tuesday, October 29 again carried an article on the New York public utility investigating committee being critical of the rate-making process. October 29 was “Black Tuesday.” The headline the next day was “Stocks Collapse in 16,410,030 Share Day” (October 30, p. 1). Stocks lost nearly $16 billion in the month of October, or 18% of the beginning-of-the-month value. Twenty-nine public utilities (tabulated by the New York Times) lost $5.1 billion in the month, by far the largest loss of any of the industries listed by the Times. The value of the stocks of all public utilities went down by more than $5.1 billion.

An Interpretive Overview of Events and Issues

My interpretation of these events is that the statement by Snowden, Chancellor of the Exchequer, indicating the presence of a speculative orgy in America is likely to have triggered the October 3 break. Public utility stocks had been driven up by an explosion of investment trust formation and investing. The trusts, to a large extent, bought stock on margin with funds loaned not by banks but by “others.” These funds were very sensitive to any market weakness. Public utility regulation was being reviewed by the Federal Trade Commission, New York City, New York State, and Massachusetts, and these reviews were watched by the other regulatory commissions and by investors. The sell-off of utility stocks from October 16 to October 23 weakened prices and created “margin selling” and withdrawal of capital by the nervous “other” money. Then on October 24, the selling panic happened.

There are three topics that require expansion. First, there is the setting of the climate concerning speculation that may have led to the possibility of relatively specific issues being able to trigger a general market decline. Second, there are investment trusts, utility holding companies, and margin buying that seem to have resulted in one sector being very over-levered and overvalued. Third, there are the public utility stocks that appear to be the best candidate as the actual trigger of the crash.

Contemporary Worries of Excessive Speculation

During 1929, the public was bombarded with statements of outrage by public officials regarding the speculative orgy taking place on the New York Stock Exchange. If the media say something often enough, a large percentage of the public may come to believe it. By October 29 the overall opinion was that there had been excessive speculation and the market had been too high. Galbraith (1961), Kindleberger (1978), and Malkiel (1996) all clearly accept this assumption. The Federal Reserve Bulletin of February 1929 states that the Federal Reserve would restrain the use of “credit facilities in aid of the growth of speculative credit.”

In the spring of 1929, the U.S. Senate adopted a resolution stating that the Senate would support legislation “necessary to correct the evil complained of and prevent illegitimate and harmful speculation” (Bierman, 1991).

The President of the Investment Bankers Association of America, Trowbridge Callaway, gave a talk in which he spoke of “the orgy of speculation which clouded the country’s vision.”

Adolph Casper Miller, an outspoken member of the Federal Reserve Board from its beginning, described 1929 as “this period of optimism gone wild and cupidity gone drunk.”

Myron C. Taylor, head of U.S. Steel, described “the folly of the speculative frenzy that lifted securities to levels far beyond any warrant of supporting profits.”

Herbert Hoover’s becoming president in March 1929 was a very significant event. He was a good friend and neighbor of Adolph Miller (see above), and Miller reinforced Hoover’s fears. Hoover was an aggressive foe of speculation. For example, he wrote, “I sent individually for the editors and publishers of major newspapers and magazines and requested them systematically to warn the country against speculation and the unduly high price of stocks.” Hoover then pressured Secretary of the Treasury Andrew Mellon and Governor of the Federal Reserve Board Roy Young “to strangle the speculative movement.” In his memoirs (1952) he titled his Chapter 2 “We Attempt to Stop the Orgy of Speculation,” reflecting Snowden’s influence.

Buying on Margin

Margin buying during the 1920s was not controlled by the government. It was controlled by brokers interested in their own well-being. The average margin requirement was 50% of the stock price prior to October 1929. On selected stocks, it was as high as 75%. When the crash came, no major brokerage firm was bankrupted, because the brokers managed their finances in a conservative manner. At the end of October, margins were lowered to 25%.

Brokers’ loans received a lot of attention in England, as they did in the United States. The Financial Times reported the level and the changes in the amount regularly. For example, the October 4 issue indicated that on October 3 broker loans reached a record high as money rates dropped from 7.5% to 6%. By October 9, money rates had dropped further, to below 6%. Thus, investors prior to October 24 had relatively easy access to funds at the lowest rate since July 1928.

The Financial Times (October 7, 1929, p. 3) reported that the President of the American Bankers Association was concerned about the level of credit for securities and had given a talk in which he stated, “Bankers are gravely alarmed over the mounting volume of credit being employed in carrying security loans, both by brokers and by individuals.” The Financial Times was also concerned with the buying of investment trusts on margin and the lack of credit to support the bull market.

My conclusion is that the margin buying was a likely factor in causing stock prices to go up, but there is no reason to conclude that margin buying triggered the October crash. Once the selling rush began, however, the calling of margin loans probably exacerbated the price declines. (A calling of margin loans requires the stock buyer to contribute more cash to the broker or the broker sells the stock to get the cash.)
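The margin-call arithmetic described in the parenthetical can be sketched with hypothetical numbers (the 50% figures echo the typical requirements mentioned above, but the scenario itself is illustrative, not from the article):

```python
# Illustrative sketch of a margin call: when prices fall, the buyer's equity
# (stock value minus the broker loan) shrinks, and the broker demands enough
# cash to restore equity to the maintenance fraction of the stock's value.

def margin_call_cash(stock_value: float, loan: float, maintenance: float) -> float:
    """Extra cash needed to restore equity to the maintenance fraction."""
    equity = stock_value - loan
    required = maintenance * stock_value
    return max(0.0, required - equity)

# Hypothetical: buy $10,000 of stock at 50% margin ($5,000 borrowed).
loan = 5_000.0
# The stock falls 30% to $7,000; with a 50% maintenance requirement:
print(margin_call_cash(7_000.0, loan, 0.50))  # 1500.0
```

If the buyer cannot produce the cash, the broker sells the stock, which is exactly the forced selling that exacerbated the October declines.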

Investment Trusts

By 1929, investment trusts were very popular with investors. These trusts were the 1929 version of closed-end mutual funds. In recent years, seasoned closed-end mutual funds have sold at a discount to their fundamental value. The fundamental value is the sum of the market values of the fund’s components (securities in the portfolio). In 1929, the investment trusts sold at a premium — i.e., higher than the value of the underlying stocks. Malkiel concludes (p. 51) that this “provides clinching evidence of wide-scale stock-market irrationality during the 1920s.” However, Malkiel also notes (p. 442) that “as of the mid-1990’s, Berkshire Hathaway shares were selling at a hefty premium over the value of assets it owned.” Warren Buffett is the guiding force behind Berkshire Hathaway’s great success as an investor. If we were to conclude that rational investors would currently pay a premium for Warren Buffett’s expertise, then we should reject a conclusion that the 1929 market was obviously irrational. We have current evidence that rational investors will pay a premium for what they consider to be superior money management skills.
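The premium or discount at issue here has a simple arithmetic definition: the fund's market price relative to its net asset value. A minimal sketch with illustrative numbers (not from the article):

```python
# Premium (positive) or discount (negative) of a closed-end fund's price
# relative to the market value of its portfolio (net asset value).

def premium(fund_price: float, net_asset_value: float) -> float:
    """Fractional premium over (or discount below) net asset value."""
    return fund_price / net_asset_value - 1.0

print(f"{premium(150.0, 100.0):+.0%}")  # +50%, a 1929-style trust premium
print(f"{premium(90.0, 100.0):+.0%}")   # -10%, a typical modern discount
```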

There were $1 billion of investment trusts sold to investors in the first eight months of 1929, compared to $400 million in all of 1928. The Economist reported that this was important (October 12, 1929, p. 665): “Much of the recent increase is to be accounted for by the extraordinary burst of investment trust financing.” In September alone, $643 million was invested in investment trusts (Financial Times, October 21, p. 3). While the two sets of numbers (from the Economist and the Financial Times) are not exactly comparable, both indicate that investment trusts had become very popular by October 1929.

The common stocks of trusts that had used debt or preferred stock leverage were particularly vulnerable to the stock price declines. For example, the Goldman Sachs Trading Corporation was highly levered with preferred stock and the value of its common stock fell from $104 a share to less than $3 in 1933. Many of the trusts were levered, but the leverage of choice was not debt but rather preferred stock.

In concept, investment trusts were sensible. They offered expert management and diversification. Unfortunately, in 1929 a diversification of stocks was not going to be a big help given the universal price declines. Irving Fisher on September 6, 1929 was quoted in the New York Herald Tribune as stating: “The present high levels of stock prices and corresponding low levels of dividend returns are due largely to two factors. One, the anticipation of large dividend returns in the immediate future; and two, reduction of risk to investors largely brought about through investment diversification made possible for the investor by investment trusts.”

If a researcher could find out the composition of the portfolio of a couple of dozen of the largest investment trusts as of September-October 1929 this would be extremely helpful. Seven important types of information that are not readily available but would be of interest are:

  • The percentage of the portfolio that was public utilities.
  • The extent of diversification.
  • The percentage of the portfolios that was NYSE firms.
  • The investment turnover.
  • The ratio of market price to net asset value at various points in time.
  • The amount of debt and preferred stock leverage used.
  • Who bought the trusts and how long they held.

The ideal way to establish whether market prices are excessively high compared to intrinsic values is to have both the prices and well-defined intrinsic values at the same moment in time. For the normal financial security, this is impossible since the intrinsic values are not objectively well defined. There are two exceptions. DeLong and Shleifer (1991) followed one path, very cleverly choosing to study closed-end mutual funds. Some of these funds were traded on the stock market, and the market values of the securities in the funds’ portfolios are a very reasonable estimate of the intrinsic value. DeLong and Shleifer state (1991, p. 675):

“We use the difference between prices and net asset values of closed-end mutual funds at the end of the 1920s to estimate the degree to which the stock market was overvalued on the eve of the 1929 crash. We conclude that the stocks making up the S&P composite were priced at least 30 percent above fundamentals in late summer, 1929.”

Unfortunately (p. 682), “portfolios were rarely published and net asset values rarely calculated.” It was only after the crash that investment trusts started to reveal their net asset values routinely. In the third quarter of 1929 (p. 682), “three types of event seemed to trigger a closed-end fund’s publication of its portfolio.” The three events were (1) listing on the New York Stock Exchange (most of the trusts were not listed), (2) start-up of a new closed-end fund (this stock price reflects selling pressure), and (3) shares selling at a discount from net asset value (in September 1929 most trusts were not selling at a discount, so the inclusion of any that were introduces a bias). After 1929, some trusts revealed 1929 net asset values. Thus, DeLong and Shleifer lacked the amount and quality of information that would have allowed definite conclusions. In fact, if investors also lacked the information regarding portfolio composition, we would have to place investment trusts in a unique investment category where investment decisions were made without reliable financial statements. If investors in the third quarter of 1929 did not know the current net asset value of investment trusts, this fact is significant.

The closed-end funds were an attractive vehicle to study since the market for investment trusts in 1929 was large and growing rapidly. In August and September alone over $1 billion of new funds were launched. DeLong and Shleifer found the premiums of price over value to be large — the median was about 50% in the third quarter of 1929 (p. 678). But they worried about the validity of their study because funds were not selected randomly.

DeLong and Shleifer had limited data (pp. 698-699). For example, for September 1929 there were two observations, for August 1929 there were five, and for July there were nine. The nine funds observed in July 1929 had the following premia: 277%, 152%, 48%, 22%, 18% (2 times), and 8% (3 times). Given that closed-end funds tend to sell at a discount, the positive premiums are interesting. Given the conventional perspective in 1929 that financial experts could manage money better than the person not plugged into the Street, it is not surprising that some investors were willing to pay for expertise and to buy shares in investment trusts. Thus, a premium for investment trusts does not imply the same premium for other stocks.

The Public Utility Sector

In addition to investment trusts, intrinsic values are usually well defined for regulated public utilities. The general rule applied by regulatory authorities is to allow utilities to earn a “fair return” on an allowed rate base. The fair return is defined to be equal to a utility’s weighted average cost of capital. There are several reasons why a public utility can earn more or less than a fair return, but the target set by the regulatory authority is the weighted average cost of capital.

Thus, if a utility has an allowed rate equity base of $X and is allowed to earn a return of r (rX in terms of dollars), then after one year the firm’s equity will be worth X + rX, or (1 + r)X, with a present value of X. (This assumes that r is the return required by the market as well as the return allowed by regulators.) Thus, the present value of the equity is equal to the present rate base, and the stock price should be equal to the rate base per share. Given the nature of public utility accounting, the book value of a utility’s stock is approximately equal to the rate base.
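A one-period numerical check of this argument, with illustrative values for X and r (the numbers are mine, not from the article):

```python
# If a utility earns exactly the allowed return r on rate base X, and r is
# also the market's required return, then the present value of next year's
# equity, (1 + r)X discounted at r, is just X: price should equal rate base.

X = 100.0   # rate base per share (hypothetical)
r = 0.06    # allowed return, assumed equal to the market-required return

future_equity = X * (1 + r)            # X + rX after one year
present_value = future_equity / (1 + r)
print(round(present_value, 6))         # 100.0
```

This is why, absent regulatory surprises, a utility's stock price should stay near book value per share, and why prices at three times book (as in 1929) required either regulatory generosity or ever-greater fools.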

There can be time periods where the utility can earn more (or less) than the allowed return. The reasons for this include regulatory lag, changes in efficiency, changes in the weather, and changes in the mix and number of customers. Also, the cost of equity may differ from the allowed return because of inaccurate (or incorrect) estimates or changing capital market conditions. Thus, the stock price may differ from the book value, but one would not expect the stock price to be very much different than the book value per share for very long. There should be a tendency for the stock price to revert to the book value for a public utility supplying an essential service where there is no effective competition, and the rate commission is effectively allowing a fair return to be earned.

In 1929, public utility stock prices were in excess of three times their book values. Consider, for example, the following measures (Wigmore, 1985, p. 39) for five operating utilities.

Firm 1929 Price-earnings Ratio High Price for Year Market Price/Book Value
Commonwealth Edison
Consolidated Gas of New York
Detroit Edison
Pacific Gas & Electric
Public Service of New Jersey
Sooner or later this price bubble had to break unless the regulatory authorities decided to allow the utilities to earn more than a fair return, or an infinite stream of greater fools existed. The decision made by the Massachusetts Public Utility Commission in October 1929 applicable to the Edison Electric Illuminating Company of Boston made clear that neither of these improbable events was going to happen (see below).

The utilities bubble did burst. Between the end of September and the end of November 1929, industrial stocks fell by 48%, railroads by 32% and utilities by 55% — thus utilities dropped the furthest from the highs. A comparison of the beginning of the year prices and the highest prices is also of interest: industrials rose by 20%, railroads by 19%, and utilities by 48%. The growth in value for utilities during the first nine months of 1929 was more than twice that of the other two groups.

The following high and low prices for 1929 for a typical set of public utilities and holding companies illustrate how severely public utility prices were hit by the crash (New York Times, 1 January 1930 quotations.)

Firm                               High Price   Low Price   Low Price / High Price
American Power & Light             175 3/8      64 1/4      .37
American Superpower                71 1/8       15          .21
Brooklyn Gas                       248 1/2      99          .44
Buffalo, Niagara & Eastern Power   128          61 1/8      .48
Cities Service                     68 1/8       20          .29
Consolidated Gas Co. of N.Y.       183 1/4      80 1/8      .44
Electric Bond and Share            189          50          .26
Long Island Lighting               91           40          .44
Niagara Hudson Power               30 3/4       11 1/4      .37
Transamerica                       67 3/8       20 1/4      .30

Picking on one segment of the market as the cause of a general break in the market is not obviously correct. But the combination of an overpriced utility segment and investment trusts with a portion of the market that had purchased on margin appears to be a viable explanation. In addition, as of September 1, 1929, the utilities industry represented $14.8 billion of value, or 18% of the value of the outstanding shares on the NYSE. Thus, they were a large sector, capable of exerting a powerful influence on the overall market. Moreover, many contemporaries pointed to the utility sector as an important force in triggering the market decline.

The October 19, 1929 issue of the Commercial and Financial Chronicle identified the main depressing influences on the market to be the indications of a recession in steel and the refusal of the Massachusetts Department of Public Utilities to allow Edison Electric Illuminating Company of Boston to split its stock. The explanations offered by the Department — that the stock was not worth its price and the company’s dividend would have to be reduced — made the situation worse.

The Washington Post (October 17, p. 1), in explaining the October 16 market declines (an Associated Press release), reported, “Professional traders also were obviously distressed at the printed remarks regarding inflation of power and light securities by the Massachusetts Public Utility Commission in its recent decision.”

Straws That Broke the Camel’s Back?

Edison Electric of Boston

On August 2, 1929, the New York Times reported that the Directors of the Edison Electric Illuminating Company of Boston had called a meeting of stockholders to obtain authorization for a stock split. The stock went up to a high of $440. Its book value was $164 (the ratio of price to book value was 2.6, which was less than many other utilities).

On Saturday (October 12, p. 27) the Times reported that on Friday the Massachusetts Department of Public Utilities had rejected the stock split. The heading read: “Bars Stock Split by Boston Edison. Criticizes Dividend Policy. Holds Rates Should Not Be Raised Until Company Can Reduce Charge for Electricity.” Boston Edison lost 15 points for the day even though the decision was released after Friday’s close. The high for the year was $440, and the stock closed at $360 on Friday.

The Massachusetts Department of Public Utilities (New York Times, October 12, p. 27) did not want to imply to investors that this was the “forerunner of substantial increases in dividends.” They stated that the expectation of increased dividends was not justified, offered “scathing criticisms of the company” (October 16, p. 42) and concluded “the public will take over such utilities as try to gobble up all profits available.”

On October 15, the Boston City Council advised the mayor to initiate legislation for public ownership of Edison; on October 16, the Department announced it would investigate the level of rates being charged by Edison; and on October 19, it set the dates for the inquiry. On Tuesday, October 15 (p. 41), the Massachusetts decision was discussed in the Times column “Topics in Wall Street.” It “excited intense interest in public utility circles yesterday and undoubtedly had effect in depressing the issues of this group. The decision is a far-reaching one and Wall Street expressed the greatest interest in what effect it will have, if any, upon commissions in other States.”

Boston Edison had closed at 360 on Friday, October 11, before the announcement was released. It dropped 61 points at its low on Monday (October 14), but closed at 328, a loss of 32 points.

On October 16 (p. 42), the Times reported that Governor Allen of Massachusetts was launching a full investigation of Boston Edison including “dividends, depreciation, and surplus.”

One major factor that can be identified as leading to the price break for public utilities was the ruling by the Massachusetts Public Utility Commission. The only specific action was its refusal to permit the Edison Electric Illuminating Company of Boston to split its stock. Standard financial theory predicts that a stock split merely divides the same total value across more shares, leaving shareholder wealth unchanged; thus the denial of the split was not in itself economically significant, and the split should have been easy to grant. But the Commission made it clear it had additional messages to communicate. For example, the Financial Times (October 16, 1929, p. 7) reported that the Commission advised the company to “reduce the selling price to the consumer.” Boston was paying $.085 per kilowatt-hour and Cambridge only $.055. There were also rumors of public ownership and a shifting of control. The next day (October 17), the Times reported (p. 3) “The worst pressure was against Public Utility shares” and the headline read “Electric Issue Hard Hit.”

Public Utility Regulation in New York

Massachusetts was not alone in challenging the profit levels of utilities. The Federal Trade Commission, New York City, and New York State were all challenging the status of public utility regulation. New York’s governor, Franklin D. Roosevelt, appointed a committee on October 8 to investigate the regulation of public utilities in the state. The Committee stated, “this inquiry is likely to have far-reaching effects and may lead to similar action in other States.” Both the October 17 and October 19 issues of the Times carried articles regarding the New York investigative committee. Professor Bonbright, a Roosevelt appointee, described the regulatory process as a “vicious system” (October 19, p. 21), which ignored consumers. The Chairman of the Public Service Commission, testifying before the Committee, wanted more control over utility holding companies, especially management fees and other transfers.

The New York State Committee also noted the increasing importance of investment trusts: “mention of the influence of the investment trust on utility securities is too important for this committee to ignore” (New York Times, October 17, p. 18). They conjectured that the trusts had $3.5 billion to invest, and “their influence has become very important” (p. 18).

In New York City, Mayor Jimmy Walker was fighting graft charges with statements that his administration would fight aggressively against rate increases, thus proving that he had not accepted bribes (New York Times, October 23). It is reasonable to conclude that the October 16 break was related to the news from Massachusetts and New York.

On October 17, the New York Times (p. 18) reported that the Committee on Public Service Securities of the Investment Banking Association warned against “speculative and uninformed buying.” The Committee published a report in which it asked for care in buying shares in utilities.

On Black Thursday, October 24, the market panic began. The market dropped from 305.87 to 272.32 (a 34-point drop, or 11%) and closed at 299.47. The declines were led by the motor stocks and public utilities.

The Public Utility Multipliers and Leverage

Public utilities were a very important segment of the stock market, and even more importantly, any change in public utility stock values resulted in larger changes in equity wealth. In 1929, there were three potentially important multipliers that meant that any change in a public utility’s underlying value would result in a larger value change in the market and in the investor’s value.

Consider the following hypothetical values for a public utility:

Book value per share for a utility: $50
Market price per share: $162.50 [2]
Market price of investment trust holding the stock (assuming a 100% premium over market value): $325.00

Eliminating the utility’s $112.50 market-price premium over book value would leave the utility at $50; with no premium, the investment trust’s stock would also sell for $50. The combined loss in market value of the utility stock and the investment trust stock would be $387.50: the $112.50 loss in the underlying stock’s value plus the $275 reduction in the investment trust’s stock value. The public utility holding companies, in fact, were even more vulnerable to a stock price change, since their ratio of price to book value averaged 4.44 (Wigmore, p. 43). The $387.50 loss in market value assumes investments in both the utility’s stock and the investment trust.

For simplicity, this discussion has assumed the trust held all the holding company stock. The effects shown would be reduced if the trust held only a fraction of the stock. However, this discussion has also assumed that no debt or margin was used to finance the investment. Assume the individual investors put up only $162.50 of their own money and borrowed $162.50 to buy the investment trust stock costing $325. If the utility stock fell from $162.50 to $50 and the trust still sold at a 100% premium, the trust would sell at $100, and the investors would have lost their entire investment, since they still owe $162.50 against stock worth only $100. The vulnerability of the margin investor buying a trust stock that has invested in a utility is obvious.
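The two cases just described can be worked through in a few lines. A sketch using the text’s own numbers (book value $50, market price $162.50, a trust trading at a 100% premium, and a 50% margin purchase):

```python
book = 50.0
utility_price = 162.50
trust_price = 2 * utility_price          # 100% premium over the utility's price: $325.00

# Case 1: no margin; both premiums vanish when the utility falls to book value.
loss_utility = utility_price - book      # the $112.50 premium is eliminated
loss_trust = trust_price - book          # the trust falls from $325.00 to $50.00
total_loss = loss_utility + loss_trust   # $387.50, as in the text

# Case 2: margin purchase of the trust; the trust keeps its 100% premium.
own_money = trust_price / 2              # $162.50 of the investor's own funds
loan = trust_price / 2                   # $162.50 borrowed on margin
new_trust_price = 2 * book               # $100.00
investor_equity = new_trust_price - loan # negative: the whole investment is gone

assert total_loss == 387.50
assert investor_equity == -62.50         # underwater by $62.50 beyond the lost $162.50
```

The calculation makes the multiplier concrete: a $112.50 move in the underlying stock wipes out a margin position several layers removed from it.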

These highly levered non-operating utilities offered an opportunity for speculation. The holding company typically owned 100% of the operating companies’ stock and both entities were levered (there could be more than two levels of leverage). There were also holding companies that owned holding companies (e.g., Ebasco). Wigmore (p. 43) lists nine of the largest public utility holding companies. The ratio of the low 1929 price to the high price (average) was 33%. These stocks were even more volatile than the publicly owned utilities.

The amount of leverage (both debt and preferred stock) used in the utility sector may have been enormous, but we cannot tell for certain. Assume that a utility purchases an asset that costs $1,000,000 and that asset is financed with 40% stock ($400,000). A utility holding company owns the utility stock and is also financed with 40% stock ($160,000). A second utility holding company owns the first and it is financed with 40% stock ($64,000). An investment trust owns the second holding company’s stock and is financed with 40% stock ($25,600). An investor buys the investment trust’s common stock using 50% margin and investing $12,800 in the stock. Thus, the $1,000,000 utility asset is financed with $12,800 of equity capital.

When the large amount of leverage is combined with the inflated prices of the public utility stock, both holding company stocks, and the investment trust the problem is even more dramatic. Continuing the above example, assume the $1,000,000 asset again financed with $600,000 of debt and $400,000 common stock, but the common stock has a $1,200,000 market value. The first utility holding company has $720,000 of debt and $480,000 of common. The second holding company has $288,000 of debt and $192,000 of stock. The investment trust has $115,200 of debt and $76,800 of stock. The investor uses $38,400 of margin debt. The $1,000,000 asset is supporting $1,761,600 of debt. The investor’s $38,400 of equity is very much in jeopardy.
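Both leverage chains in the example can be verified with a short calculation (integer arithmetic keeps the dollar figures exact):

```python
# Layer-by-layer equity: each level is financed with 40% stock, and the
# final investor buys the trust's stock on 50% margin.
equity = 1_000_000                 # cost of the underlying utility asset
layers = []
for _ in range(4):                 # utility, two holding companies, investment trust
    equity = equity * 40 // 100
    layers.append(equity)
investor_equity = layers[-1] // 2  # 50% margin on the trust's stock

assert layers == [400_000, 160_000, 64_000, 25_600]
assert investor_equity == 12_800   # $12,800 of equity supports the $1,000,000 asset

# Inflated version: the utility's stock trades at 3x book ($1,200,000), and
# each layer above it is again financed 60% debt / 40% stock.
stock_value = 1_200_000
total_debt = 600_000               # the utility's own debt
for _ in range(3):                 # two holding companies and the investment trust
    total_debt += stock_value * 60 // 100
    stock_value = stock_value * 40 // 100
total_debt += stock_value // 2     # the investor's margin loan

assert total_debt == 1_761_600     # debt resting on the single $1,000,000 asset
```

The loop reproduces the text’s figures exactly: $12,800 of ultimate equity in the first case, and $1,761,600 of debt supported by one $1,000,000 asset in the second.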

Conclusions and Lessons

Although no consensus has been reached on the causes of the 1929 stock market crash, the evidence cited above suggests that fear of speculation helped push the stock market to the brink of collapse. It is possible that Hoover’s aggressive campaign against speculation, helped by the overpriced public utilities hit by the Massachusetts Public Utility Commission decision and statements and by the vulnerable margin investors, triggered the October selling panic and the consequences that followed.

An important first event may have been Lord Snowden’s reference to the speculative orgy in America. The resulting decline in stock prices weakened margin positions. When several governmental bodies indicated that public utilities in the future were not going to be able to justify their market prices, the decreases in utility stock prices resulted in margin positions being further weakened resulting in general selling. At some stage, the selling panic started and the crash resulted.

What can we learn from the 1929 crash? There are many lessons, but a handful seem to be most applicable to today’s stock market.

  • There is a delicate balance between optimism and pessimism regarding the stock market. Statements and actions by government officials can affect the sensitivity of stock prices to events. Call a market overpriced often enough, and investors may begin to believe it.
  • The fact that stocks can lose 40% of their value in a month and 90% over three years suggests the desirability of diversification (including assets other than stocks). Remember, some investors lose all of their investment when the market falls 40%.
  • A levered investment portfolio amplifies the swings of the stock market. Some investment securities have leverage built into them (e.g., stocks of highly levered firms, options, and stock index futures).
  • A series of presumably undramatic events may establish a setting for a wide price decline.
  • A segment of the market can experience bad news and a price decline that infects the broader market. In 1929, it seems to have been public utilities. In 2000, high technology firms were candidates.
  • Interpreting events and assigning blame is unreliable if there has not been an adequate passage of time and opportunity for reflection and analysis — and is difficult even with decades of hindsight.
  • It is difficult to predict a major market turn with any degree of reliability. It is impressive that in September 1929, Roger Babson predicted the collapse of the stock market, but he had been predicting a collapse for many years. Also, even Babson recommended diversification and was against complete liquidation of stock investments (Financial Chronicle, September 7, 1929, p. 1505).
  • Even a market that is not excessively high can collapse. Both market psychology and the underlying economics are relevant.


Barsky, Robert B. and J. Bradford DeLong. “Bull and Bear Markets in the Twentieth Century,” Journal of Economic History 50, no. 2 (1990): 265-281.

Bierman, Harold, Jr. The Great Myths of 1929 and the Lessons to be Learned. Westport, CT: Greenwood Press, 1991.

Bierman, Harold, Jr. The Causes of the 1929 Stock Market Crash. Westport, CT: Greenwood Press, 1998.

Bierman, Harold, Jr. “The Reasons Stock Crashed in 1929.” Journal of Investing (1999): 11-18.

Bierman, Harold, Jr. “Bad Market Days,” World Economics (2001) 177-191.

Commercial and Financial Chronicle, 1929 issues.

Committee on Banking and Currency. Hearings on Performance of the National and Federal Reserve Banking System. Washington, 1931.

DeLong, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Federal Reserve Bulletin, February, 1929.

Fisher, Irving. The Stock Market Crash and After. New York: Macmillan, 1930.

Galbraith, John K. The Great Crash, 1929. Boston: Houghton Mifflin, 1961.

Hoover, Herbert. The Memoirs of Herbert Hoover. New York: Macmillan, 1952.

Kendrick, John W. Productivity Trends in the United States. Princeton: Princeton University Press, 1961.

Kindleberger, Charles P. Manias, Panics, and Crashes. New York: Basic Books, 1978.

Malkiel, Burton G. A Random Walk Down Wall Street. New York: Norton, 1975 and 1996.

Moggridge, Donald. The Collected Writings of John Maynard Keynes, Volume XX. New York: Macmillan, 1981.

New York Times, 1929 and 1930.

Rappoport, Peter and Eugene N. White, “Was There a Bubble in the 1929 Stock Market?” Journal of Economic History 53, no. 3 (1993): 549-574.

Samuelson, Paul A. “Myths and Realities about the Crash and Depression.” Journal of Portfolio Management (1979): 9.

Senate Committee on Banking and Currency. Stock Exchange Practices. Washington, 1928.

Siegel, Jeremy J. “The Equity Premium: Stock and Bond Returns since 1802.” Financial Analysts Journal 48, no. 1 (1992): 28-46.

Wall Street Journal, October 1929.

Washington Post, October 1929.

Wigmore, Barry A. The Crash and Its Aftermath: A History of Securities Markets in the United States, 1929-1933. Westport, CT: Greenwood Press, 1985.

1 1923-25 average = 100.

2 Based on a price-to-book value ratio of 3.25 (Wigmore, p. 39).

Citation: Bierman, Harold. “The 1929 Stock Market Crash.” EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008.

The International Natural Rubber Market, 1870-1930

Zephyr Frank, Stanford University and Aldo Musacchio, Ibmec São Paulo

Overview of the Rubber Market, 1870-1930

Natural rubber was first used by the indigenous peoples of the Amazon basin for a variety of purposes. By the middle of the eighteenth century, Europeans had begun to experiment with rubber as a waterproofing agent. In the early nineteenth century, rubber was used to make waterproof shoes (Dean, 1987). The best source of latex, the milky fluid from which natural rubber products were made, was Hevea brasiliensis, which grew predominantly in the Brazilian Amazon (but also in the Amazonian regions of Bolivia and Peru). Thus, by geographical accident, the first period of rubber’s commercial history, from the late 1700s through 1900, was centered in Brazil; the second period, from roughly 1910 on, was increasingly centered in East Asia as the result of plantation development. The first century of rubber was typified by relatively low levels of production, high wages, and very high prices; the period following 1910 was one of rapidly increasing production, low wages, and falling prices.

Uses of Rubber

The early uses of the material were quite limited. The initial problem with natural rubber was its sensitivity to temperature changes, which altered its shape and consistency. In 1839 Charles Goodyear developed the process called vulcanization, which modified rubber so that it would withstand extreme temperatures. It was then that natural rubber became suitable for producing hoses, tires, industrial bands, sheets, shoes, shoe soles, and other products. What initially set off the “Rubber Boom,” however, was the popularization of the bicycle. The boom was then accentuated after 1900 by the development of the automobile industry and the expansion of the tire industry to produce car tires (Weinstein, 1983; Dean, 1987).

Brazil’s Initial Advantage and High-Wage Cost Structure

Until the turn of the twentieth century, Brazil and the countries that share the Amazon basin (i.e. Bolivia, Venezuela and Peru) were the only exporters of natural rubber. Brazil sold almost ninety percent of the total rubber commercialized in the world. The fundamental fact that explains Brazil’s entry into and domination of natural rubber production during the period 1870 through roughly 1913 is that most of the world’s rubber trees grew naturally in the Amazon region of Brazil. The Brazilian rubber industry developed a high-wage cost structure as the result of labor scarcity and lack of competition in the early years of rubber production. Since there were no credit markets to finance the journeys of workers from other parts of Brazil to the Amazon, workers paid for their trips with loans from their future employers. Much like indentured servitude during colonial times in the United States, these loans were paid back to the employers with work once the laborers were established in the Amazon basin. Another factor that increased the costs of producing rubber was that most provisions for tappers in the field had to be shipped in from outside the region at great expense (Barham and Coomes, 1994). This made Brazilian production very expensive compared to the future plantations in Asia. Nevertheless, Brazil’s system of production worked well as long as two conditions were met: first, that the demand for rubber did not grow too quickly, for wild rubber production could not expand rapidly owing to labor and environmental constraints; second, that competition based on some other more efficient arrangement of factors of production did not exist. As can be seen in Figure 1, Brazil dominated the natural rubber market until the first decade of the twentieth century.

Between 1900 and 1913, these conditions ceased to hold. First, the demand for rubber skyrocketed [see Figure 2], providing a huge incentive for other producers to enter the market. Prices had been high before, but Brazilian supply had been quite capable of meeting demand; now, prices were high and demand appeared insatiable. Plantations, which had been possible since the 1880s, now became a reality mainly in the colonies of Southeast Asia. Because Brazil was committed to a high-wage, labor-scarce production regime, it was unable to counter the entry of Asian plantations into the market it had dominated for half a century.

Southeast Asian Plantations Develop a Low-Cost, Labor-Intensive Alternative

In Asia, the British and Dutch drew upon their superior stocks of capital and vast pools of cheap colonial labor to transform rubber collection into a low-cost, labor-intensive industry. Investment per tapper in Brazil was reportedly 337 pounds sterling circa 1910; in the low-cost Asian plantations, investment was estimated at just 210 pounds per worker (Dean, 1987). Not only were Southeast Asian tappers cheaper, they were potentially eighty percent more productive (Dean, 1987).

Ironically, the new plantation system proved equally susceptible to uncertainty and competition. Unexpected sources of uncertainty arose in the technological development of automobile tires. In spite of colonialism, the British and Dutch were unable to collude to control production and prices plummeted after 1910. When the British did attempt to restrict production in the 1920s, the United States attempted to set up plantations in Brazil and the Dutch were happy to take market share. Yet it was too late for Brazil: the cost structure of Southeast Asian plantations could not be matched. In a sense, then, the game was no longer worth the candle: in order to compete in rubber production, Brazil would have to have had significantly lower wages — which would only have been possible with a vastly expanded transport network and domestic agriculture sector in the hinterland of the Amazon basin. Such an expensive solution made no economic sense in the 1910s and 20s when coffee and nascent industrialization in São Paulo offered much more promising prospects.

Natural Rubber Extraction and Commercialization: Brazil

Rubber Tapping in the Amazon Rainforest

One disadvantage Brazilian rubber producers suffered was that the organization of production depended on the distribution of Hevea brasiliensis trees in the forest. The owner (or, often, lease concessionaire) of a large land plot would hire tappers to gather rubber by gouging the tree trunk with an axe. In Brazil, the usual practice was to make a big dent in the tree and place a small bowl to collect the latex that would come out of the trunk. Typically, tappers had two “rows” of trees they worked on, alternating one row per day. The “rows” contained several circular roads that went through the forest, each with more than 100 trees. Rubber could only be collected during the tapping season (August to January), and the living conditions of tappers were hard. As the need for rubber expanded, tappers had to be sent deep into the Amazon rainforest to look for unexplored land with more productive trees. Tappers established their shacks close to the river because rubber, once smoked, was sent by boat to Manaus (capital of the state of Amazonas) or to Belém (capital of the state of Pará), both entrepôts for rubber exporting to Europe and the US.[1]

Competition or Exploitation? Tappers and Seringalistas

After collecting the rubber, tappers would go back to their shacks and smoke the resin in order to make balls of partially filtered and purified rough rubber that could be sold at the ports. There is much discussion about the commercialization of the product. Weinstein (1983) argues that the seringalista — the employer of the rubber tapper — controlled the transportation of rubber to the ports, where he sold the rubber, often in exchange for goods that could be sold (with a large gain) back to the tapper. In this economy money was scarce, and the “wages” of tappers or seringueiros were determined by the price of rubber. Wages depended on the current price of rubber; the usual agreement for tappers was to split the gross profits with their patrons. These salaries were most commonly paid in goods, such as cigarettes, food, and tools. According to Weinstein (1983), the goods were overpriced by the seringalistas to extract larger profits from the seringueiros’ work. Barham and Coomes (1994), on the other hand, argue that the structure of the market in the Amazon was less closed and that independent traders would travel around the basin in small boats, willing to exchange goods for rubber. Poor monitoring by employers and an absent state facilitated these under-the-counter transactions, which allowed tappers to get better pay for their work.

Exporting Rubber

From the ports, rubber was in the hands of mainly Brazilian, British and American exporters. Contrary to what Weinstein (1983) argued, Brazilian producers or local merchants from the interior could choose to send the rubber on consignment to a New York commission house rather than selling it to an exporter in the Amazon (Shelley, 1918). Rubber was taken, like other commodities, to ports in Europe and the US to be distributed to the industries that bought large amounts of the product in the London or New York commodities exchanges. A large part of the rubber produced was traded at these exchanges, but tire manufacturers and other large consumers also made direct purchases from the distributors in the country of origin.[2]

Rubber Production in Southeast Asia

Seeds Smuggled from Brazil to Britain

The Hevea brasiliensis, the most important type of rubber tree, was an Amazonian species. This is why the countries of the Amazon basin were the main producers of rubber at the beginning of the international rubber trade. How, then, did British and Dutch colonies in Southeast Asia end up dominating the market? Brazil tried to prevent Hevea brasiliensis seeds from being exported, as the Brazilian government knew that remaining the main producer of rubber ensured its profits from rubber trading. Protecting property rights in seeds proved a futile exercise. In 1876, the Englishman and aspiring author and rubber expert Henry Wickham smuggled 70,000 seeds to London, a feat for which he earned Brazil’s eternal opprobrium and an English knighthood. After experimenting with the seeds, 2,800 plants were raised at the Royal Botanical Gardens in London (Kew Gardens) and then shipped to the Peradeniya Gardens in Ceylon. In 1877 a case of 22 plants reached Singapore and was planted at the Singapore Botanical Garden. In the same year the first plant arrived in the Malay States. Since rubber trees needed between six and eight years to be mature enough to yield good rubber, tapping began in the 1880s.

Scientific Research to Maximize Yields

In order to develop rubber extraction in the Malay States, more scientific intervention was needed. In 1888, H. N. Ridley was appointed director of the Singapore Botanical Garden and began experimenting with tapping methods. The final result of all the experimentation with different methods of tapping in Southeast Asia was the discovery of how to extract rubber in such a way that the tree would maintain a high yield for a long period of time. Rather than making a deep gouge with an axe on the rubber tree, as in Brazil, Southeast Asian tappers scraped the trunk of the tree by making a series of overlapping Y-shaped cuts with an axe, such that at the bottom there would be a canal ending in a collecting receptacle. According to Akers (1912), the tapping techniques in Asia ensured the exploitation of the trees for longer periods, because the Brazilian technique scarred the tree’s bark and lowered yields over time.

Rapid Commercial Development and the Automobile Boom

Commercial planting in the Malay States began in 1895. The development of large-scale plantations was slow because of the lack of capital. Investors did not get interested in plantations until the prospects for rubber improved radically with the spectacular development of the automobile industry. By 1905, European capitalists were sufficiently interested in investing in large-scale plantations in Southeast Asia to plant some 38,000 acres of trees. Between 1905 and 1911 the annual increase was over 70,000 acres per year, and, by the end of 1911, the acreage in the Malay States reached 542,877 (Baxendale, 1913). The expansion of plantations was possible because of the sophistication in the organization of such enterprises. Joint stock companies were created to exploit the land grants and capital was raised through stock issues on the London Stock Exchange. The high returns during the first years (1906-1910) made investors ever more optimistic and capital flowed in large amounts. Plantations depended on a very disciplined system of labor and an intensive use of land.

Malaysia’s Advantages over Brazil

In addition to the intensive use of land, the production system in Malaysia had several economic advantages over that of Brazil. First, in the Malay States there was no specific tapping season, unlike Brazil where the rain did not allow tappers to collect rubber during six months of the year. Second, health conditions were better on the plantations, where rubber companies typically provided basic medical care and built infirmaries. In Brazil, by contrast, yellow fever and malaria made survival harder for rubber tappers who were dispersed in the forest and without even rudimentary medical attention. Finally, better living conditions and the support of the British and Dutch colonial authorities helped to attract Indian labor to the rubber plantations. Japanese and Chinese labor also immigrated to the plantations in Southeast Asia in response to relatively high wages (Baxendale, 1913).

Initially, demand for rubber was associated with specialized industrial components (belts and gaskets, etc.), consumer goods (golf balls, shoe soles, galoshes, etc.), and bicycle tires. Before the automobile became a mass-market phenomenon, the Brazilian wild rubber industry was capable of meeting world demand, and rubber producers could not have predicted the scope and growth of the automobile industry before the 1900s. Thus, as Figure 3 indicates, growth in demand, as measured by U.K. imports, was not particularly rapid in the period 1880-1899. There was no reason to believe, in the early 1880s, that demand for rubber would explode as it did in the 1890s. Even as demand rose in the 1890s with the bicycle craze, the rate of increase was not beyond the capacity of wild rubber producers in Brazil and elsewhere (see Figure 3). High rubber prices did not induce rapid increases in production or plantation development in the nineteenth century. In this context, Brazil developed a reasonably efficient industry based on its natural resource endowment and limited labor and capital sources.

In the first three decades of the twentieth century, major changes in both supply and demand created unprecedented uncertainty in rubber markets. On the supply side, Southeast Asian rubber plantations transformed the cost structure and capacity of the industry. On the demand side, and directly inducing plantation development, automobile production and associated demand for rubber exploded. Then, in the 1920s, competition and technological advance in tire production led to another shift in the market with profound consequences for rubber producers and tire manufacturers alike.

Rapid Price Fluctuations and Output Lags

Figure 1 shows the fluctuations of the Rubber Smoked Sheet type 1 (RSS1) price in London on an annual basis. The movements from 1906 to 1910 were very volatile on a monthly basis as well, complicating forecasts and making it hard for producers to decide how to react to market signals. Even though prices and quantities in the markets were published every month in the major rubber journals, producers did not have a good idea of what was going to happen in the long run. If prices were high today, they wanted to expand the area planted, but since it took six to eight years for trees to yield good rubber, they would have to wait to see the result of the expansion many years and price swings later. Since many producers reacted in the same way, periods of overproduction six to eight years after a price rise were common.[3] Overproduction meant low prices, but since investments were mostly sunk (the costs of preparing the land, planting the trees and bringing in the workers could not be recovered, and these resources could not easily be shifted to other uses), the market tended to stay oversupplied for long periods of time.
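The planting-lag dynamic described above can be sketched as a toy cobweb model. All numbers here are illustrative assumptions, not historical estimates: acreage responds to today's price, but output arrives only after the gestation lag, producing recurring overshoot.

```python
# Toy cobweb model of the planting lag described above.  All numbers are
# illustrative assumptions, not historical estimates.  New acreage responds
# to the current price, but trees yield only after a multi-year lag, so
# supply decisions chase prices that are long out of date.

LAG = 6      # years from planting to good yield (the text gives 6-8)
YEARS = 40

prices = [100.0] * LAG                   # flat price history to start
plantings = [10.0] * (LAG - 1) + [14.0]  # a one-time planting boom

for t in range(LAG, YEARS):
    supply = plantings[t - LAG]               # output set LAG years ago
    price = max(1.0, 200.0 - 10.0 * supply)   # simple downward-sloping demand
    prices.append(price)
    plantings.append(0.1 * price)             # planting rises with today's price

# The boom's output arrives six years later, crashing the price; low prices
# cut planting, which raises prices six years after that, and so on.
print(f"peak price {max(prices[LAG:]):.1f}, trough {min(prices[LAG:]):.1f}")
```

With these invented parameters a single planting boom echoes indefinitely between high- and low-price years, which is the qualitative pattern the text attributes to sunk investments and long gestation.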

In figure 1 we see the annual price of Malaysian rubber plotted over time.

The years 1905 and 1906 marked historic highs for rubber prices, surpassed only briefly in 1909 and 1910. The area planted in rubber throughout Asia grew from 15,000 acres in 1901 to 433,000 acres in 1907; these plantings matured circa 1913, and cultivated rubber surpassed Brazilian wild rubber in volume exported.[4] The growth of the Asian rubber industry soon swamped Brazil’s market share and drove prices well below pre-boom levels. After the major price peak of 1910, prices plummeted and followed a downward trend throughout the 1920s. By 1921, the bottom had dropped out of the market, and Malaysian rubber producers were induced by the British colonial authorities to enter into a scheme to restrict production. Plantations received export coupons setting quotas that limited the supply of rubber. The resulting shortage did not affect prices until 1924, when consumption outpaced production and prices started to rise rapidly. The scheme’s success was short-lived: competition from the Dutch plantations in Southeast Asia and elsewhere drove prices down again by 1926, and the plan was officially ended in 1928.[5]

Automobiles’ Impact on Rubber Demand

In order to understand the boom in rubber production, it is fundamental to look at the automobile industry. Cars had originally been adapted from horse-drawn carriages; some ran on wooden wheels, some on metal, some shod, as it were, in solid rubber. In any case, the ride at the speeds cars were soon capable of was impossible to bear. The pneumatic tire was quickly adopted from the bicycle, and the automobile tire industry was born, soon to account for well over half of rubber company sales in the United States, where the vast majority of automobiles were manufactured in the early years of the industry.[6] The amount of rubber required to satisfy demand for automobile tires led first to a spike in rubber prices; second, it led to the development of rubber plantations in Asia.[7]

The connection between automobiles, plantations, and the rubber tire industry was explicit and obvious to observers at the time. Harvey Firestone, son of the founder of the company, put it this way:

It was not until 1898 that any serious attention was paid to plantation development. Then came the automobile, and with it the awakening on the part of everybody that without rubber there could be no tires, and without tires there could be no automobiles. (Firestone, 1932, p. 41)

Thus the emergence of a strong consuming sector linked to the automobile was necessary to draw plantations into the market; high prices alone were not enough. For instance, the average price of rubber from 1880 to 1884 was 401 pounds sterling per ton; from 1900 to 1904, when the first plantations were beginning to be set up, the average price was 459 pounds sterling per ton. Asian plantations were developed both in response to high rubber prices and to what everyone could see was an exponentially growing source of demand in automobiles. Previous consumers of rubber had not shown the kind of dynamism needed to spur entry by plantations into the natural rubber market, even though prices were very high throughout most of the second half of the nineteenth century.

Producers Need to Forecast Future Supply and Demand Conditions

Rubber producers made decisions about production and planting during the period 1900-1912 aiming to reap windfall profits rather than with a view to the long-run sustainability of their business. High prices were an incentive for all to increase production, but increasing production through more planted acreage could mean losses for everyone in the future, because too much supply would drive prices down. Yet current prices could not guarantee profits when investment decisions had to be made six or more years in advance, as in plantation production: to invest in plantations, capitalists had to predict the future interaction of supply and demand. Demand, although high and apparently relatively price inelastic, was not entirely predictable. It was predictable enough, however, for planters to expand acreage in rubber in Asia at a dramatic rate. Planters were often uncertain as to the aggregate level of supply: new plantations were constantly coming into production, while others were entering into decline or bankruptcy. Thus their investments could yield a lot in the short run, but if all producers reacted in the same way, prices were driven down and profits fell with them. This is what happened in the 1920s, after the acreage expansion of the first two decades of the century.

Demand Growth Unexpectedly Slows in the 1920s

Plantings between 1912 and 1916 were destined to come into production during a period in which growth in the automobile industry leveled off significantly owing to the recession of 1920-21. Making matters worse for rubber producers, major advances in tire technology further curbed demand: for example, the change from corded to balloon tires increased average tire tread mileage from 8,000 to 15,000 miles.[8] The shift from corded to balloon tires decreased demand for natural rubber even as the automobile industry recovered from recession in the early 1920s. In addition, better design of tire casings circa 1920 led to the growth of the retreading industry, which further reduced rubber use. Finally, better techniques in cotton weaving lowered friction and heat and further extended tire life.[9] As rubber supplies increased and demand decreased and became more price inelastic, prices plummeted: neither demand nor price proved predictable over the long run, and suppliers paid a stiff price for overextending themselves during the boom years. Rubber tire manufacturers suffered the same fate: competition and technology (which they themselves introduced) pushed prices downward and, at the same time, flattened demand (Allen, 1936).[10]
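The tread-mileage figures above imply a rough upper bound on the demand effect. Assuming, hypothetically, that rubber content per tire stayed about constant, replacement-tire demand per mile driven scales inversely with tread life:

```python
# Back-of-envelope effect of the cord-to-balloon transition on replacement
# demand.  Hypothetical simplification: rubber content per tire is held
# constant, so rubber demand per mile driven varies inversely with tread life.

cord_life_miles = 8_000      # average tread mileage, corded tires (from text)
balloon_life_miles = 15_000  # average tread mileage, balloon tires (from text)

# Tires consumed per mile driven, balloon relative to cord
relative_demand = cord_life_miles / balloon_life_miles
print(f"replacement demand per mile falls to {relative_demand:.0%} of its former level")
```

Even before counting retreading and longer-lived casings, this is roughly a 47 percent drop in rubber needed per vehicle-mile, consistent with the flattening of demand the text describes.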

Now, if one looks at the price of rubber and the rate of growth in demand as measured by imports in the 1920s, it is clear that the industry was over-invested in capacity. The consequences of technological change were dramatic for tire manufacturer profits as well as for rubber producers.


The natural rubber trade underwent several radical transformations over the period 1870 to 1930. First, prior to 1910, it was associated with high costs of production and high prices for final goods; during this period most rubber was produced by tapping rubber trees in the Amazon region of Brazil. After 1900, and especially after 1910, rubber was increasingly produced on low-cost plantations in Southeast Asia. The price of rubber fell with plantation development and, at the same time, the volume of rubber demanded by car tire manufacturers expanded dramatically. Uncertainty in both supply and demand (the latter often driven by changing tire technology) meant that natural rubber producers and tire manufacturers alike experienced great volatility in returns. The overall evolution of the natural rubber trade and the related tire manufacturing industry was toward large-volume, low-cost production in an internationally competitive environment marked by commodity price volatility and declining profits as the industry matured.


Akers, C. E. Report on the Amazon Valley: Its Rubber Industry and Other Resources. London: Waterlow & Sons, 1912.

Allen, Hugh. The House of Goodyear. Akron: Superior Printing, 1936.

Alves Pinto, Nelson Prado. Política Da Borracha No Brasil. A Falência Da Borracha Vegetal. São Paulo: HUCITEC, 1984.

Babcock, Glenn D. History of the United States Rubber Company. Indiana: Bureau of Business Research, 1966.

Barham, Bradford, and Oliver Coomes. “The Amazon Rubber Boom: Labor Control, Resistance, and Failed Plantation Development Revisited.” Hispanic American Historical Review 74, no. 2 (1994): 231-57.

Barham, Bradford, and Oliver Coomes. Prosperity’s Promise. The Amazon Rubber Boom and Distorted Economic Development. Boulder: Westview Press, 1996.

Barham, Bradford, and Oliver Coomes. “Wild Rubber: Industrial Organisation and the Microeconomics of Extraction during the Amazon Rubber Boom (1860-1920).” Hispanic American Historical Review 26, no. 1 (1994): 37-72.

Baxendale, Cyril. “The Plantation Rubber Industry.” India Rubber World, 1 January 1913.

Blackford, Mansel, and K. Austin Kerr. BFGoodrich. Columbus: Ohio State University Press, 1996.

Brazil. Instituto Brasileiro de Geografia e Estatística. Anuário Estatístico Do Brasil. Rio de Janeiro: Instituto Brasileiro de Geografia e Estatística, 1940.

Dean, Warren. Brazil and the Struggle for Rubber: A Study in Environmental History. Cambridge: Cambridge University Press, 1987.

Drabble, J. H. Rubber in Malaya, 1876-1922. Oxford: Oxford University Press, 1973.

Firestone, Harvey Jr. The Romance and Drama of the Rubber Industry. Akron: Firestone Tire and Rubber Co., 1932.

Santos, Roberto. História Econômica Da Amazônia (1800-1920). São Paulo: T.A. Queiroz, 1980.

Schurz, William Lytle, O. D. Hargis, Curtis Fletcher Marbut, and C. B. Manifold. Rubber Production in the Amazon Valley. U.S. Bureau of Foreign and Domestic Commerce (Dept. of Commerce), Trade Promotion Series no. 4; Crude Rubber Survey no. 28. Washington: Govt. Print. Office, 1925.

Shelley, Miguel. “Financing Rubber in Brazil.” India Rubber World, 1 July 1918.

Weinstein, Barbara. The Amazon Rubber Boom, 1850-1920. Stanford: Stanford University Press, 1983.


[1] Rubber tapping in the Amazon basin is described in Weinstein (1983), Barham and Coomes (1994), Stanfield (1998), and in several articles published in India Rubber World, the main journal on rubber trading. See, for example, the explanation of tapping in the October 1, 1910 issue, or “The Present and Future of the Native Havea Rubber Industry” in the January 1, 1913 issue. For a detailed analysis of the rubber industry by region in Brazil by contemporary observers, see Schurz et al. (1925).

[2] Newspapers such as The Economist or the London Times included sections on rubber trading, such as weekly or monthly reports of the market conditions, prices and other information. For the dealings between tire manufacturers and distributors in Brazil and Malaysia see Firestone (1932).

[3] Using cross-correlations of production and prices, we found that changes in production at time t were correlated with price changes in t-6 and t-8 (years). This is only weak evidence because these correlations are not statistically significant.
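The lagged cross-correlation check this note describes can be sketched as follows, on synthetic data (the historical series are not reproduced; the lag and coefficient below are invented for illustration):

```python
# Sketch of the lagged cross-correlation described in this note, run on
# synthetic data (the historical series are not reproduced; the lag and
# coefficient below are invented for illustration).
import numpy as np

rng = np.random.default_rng(0)
years = 60
price_changes = rng.normal(size=years)

# Construct production changes that respond, by design, to prices six
# years earlier, plus noise -- mimicking the planting gestation lag.
lag = 6
production_changes = np.empty(years)
production_changes[:lag] = rng.normal(size=lag)
production_changes[lag:] = 0.8 * price_changes[:-lag] \
    + 0.5 * rng.normal(size=years - lag)

def lagged_corr(x, y, k):
    """Correlation of y[t] with x[t - k]."""
    return np.corrcoef(x[:-k], y[k:])[0, 1]

for k in (4, 6, 8):
    print(f"lag {k}: r = {lagged_corr(price_changes, production_changes, k):+.2f}")
```

On the real, short historical samples such correlations carry wide confidence intervals, which is why the note characterizes the evidence as weak.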

[4] Drabble (1973), 213, 220. The expansion in acreage was accompanied by a boom in company formation.

[5] Drabble (1973), 192-199. This was the so-called Stevenson Committee restriction, which lasted from 1922 to 1926. The plan basically limited the amount of rubber each planter could export, assigning quotas through coupons.

[6] Pneumatic tires were first adapted to automobiles in 1896; Dunlop’s pneumatic bicycle tire was introduced in 1888. The great advantage of these tires over solid rubber was that they generated far less friction, extending tread life, and, of course, cushioned the ride and allowed for higher speeds.

[7] Early histories of the rubber industry tended to blame Brazilian “monopolists” for holding up supply and reaping windfall profits, see, e.g., Allen (1936), 116-117. In fact, rubber production in Brazil was far from monopolistic; other reasons account for supply inelasticity.

[8] Blackford and Kerr (1996), p. 88.

[9] The so-called “supertwist” weave allowed for the manufacture of larger, more durable tires, especially for trucks. Allen (1936), pp. 215-216.

[10] Allen (1936), p. 320.

Citation: Frank, Zephyr and Aldo Musacchio. “The International Natural Rubber Market, 1870-1930.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL

Reconstruction Finance Corporation

James Butkiewicz, University of Delaware


The Reconstruction Finance Corporation (RFC) was established during the Hoover administration with the primary objective of providing liquidity to, and restoring confidence in, the banking system. The banking system experienced extensive pressure during the economic contraction of 1929-1933. During the contraction period, many banks had to suspend business operations and most of these ultimately failed. A number of these suspensions occurred during banking panics, when large numbers of depositors rushed to convert their deposits to cash for fear their bank might fail. Since this period predated the establishment of federal deposit insurance, bank depositors lost part or all of their deposits when their bank failed.

During its first thirteen months of operation, the RFC’s primary activity was to make loans to banks and financial institutions. During President Roosevelt’s New Deal, the RFC’s powers were expanded significantly. At various times, the RFC purchased bank preferred stock, made loans to assist agriculture, housing, exports, business, governments, and for disaster relief, and even purchased gold at the President’s direction in order to change the market price of gold. The scope of RFC activities was expanded further immediately before and during World War II. The RFC established or purchased, and funded, eight corporations that made important contributions to the war effort. After the war, the RFC’s activities were limited primarily to making loans to business. RFC lending ended in 1953, and the corporation ceased operations in 1957, when all remaining assets were transferred to other government agencies.

The Genesis of the Reconstruction Finance Corporation

The difficulties experienced by the American banking system were one of the defining characteristics of the Great Contraction of 1929-1933. During this period, the American banking system comprised a very large number of banks. At the end of December 1929, there were 24,633 banks in the United States. The vast majority of these banks were small, serving small towns and rural communities. These small banks were particularly susceptible to local economic difficulties, which could result in failure of the bank.

The Federal Reserve and Small Banks

The Federal Reserve System was created in 1913 to address the problem of periodic banking crises. The Fed had the ability to act as a lender of last resort, providing funds to banks during crises. While nationally chartered banks were required to join the Fed, state-chartered banks could join the Fed at their discretion. Most state-chartered banks chose not to join the Federal Reserve System. The majority of the small banks in rural communities were not Fed members. Thus, during crises, these banks were unable to seek assistance from the Fed, and the Fed felt no obligation to engage in a general expansion of credit to assist nonmember banks.

How Banking Panics Develop

At this time there was no federal deposit insurance system, so bank customers generally lost part or all of their deposits when their bank failed. Fear of failure sometimes caused people to panic. In a panic, bank customers attempt to immediately withdraw their funds. While banks hold enough cash for normal operations, they use most of their deposited funds to make loans and purchase interest-earning assets. In a panic, banks are forced to attempt to rapidly convert these assets to cash. Frequently, they are forced to sell assets at a loss to obtain cash quickly, or may be unable to sell assets at all. As losses accumulate, or cash reserves dwindle, a bank becomes unable to pay all depositors, and must suspend operations. During this period, most banks that suspended operations declared bankruptcy. Bank suspensions and failures may incite panic in adjacent communities or regions. This spread of panic, or contagion, can result in a large number of bank failures. Not only do customers lose some or all of their deposits, but people also become wary of banks in general. A widespread withdrawal of bank deposits reduces the amount of money and credit in society. This monetary contraction can contribute to a recession or depression.
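The mechanics of a run can be illustrated with a stylized balance sheet (all figures hypothetical): a bank holding a thin cash reserve must fire-sell illiquid loans at a discount to meet a withdrawal wave, and the resulting losses can leave it unable to pay the remaining depositors.

```python
# Stylized bank run (all figures hypothetical).  The bank keeps only a
# small cash reserve; a large withdrawal wave forces fire sales of loans
# at a discount, and the losses can render the bank unable to pay out.

deposits = 1_000_000       # customer deposits
cash = 100_000             # 10% held as cash reserves
loans = 900_000            # the rest lent out in illiquid assets
fire_sale_discount = 0.30  # loans fetch 70 cents on the dollar in a panic

withdrawals = 400_000      # panicked depositors demand cash at once

shortfall = withdrawals - cash                     # cash not on hand
loans_sold = shortfall / (1 - fire_sale_discount)  # face value sold to raise it
loss = loans_sold * fire_sale_discount             # value destroyed by the sale

remaining_assets = loans - loans_sold
remaining_deposits = deposits - withdrawals
# Remaining assets no longer cover remaining deposits: suspension follows.
print(f"fire-sale loss ${loss:,.0f}; assets ${remaining_assets:,.0f} "
      f"vs deposits owed ${remaining_deposits:,.0f}")
```

In this toy example a 40 percent withdrawal wave destroys enough value to leave the bank insolvent even though its assets originally covered deposits in full, which is the contagion-triggering mechanism the paragraph describes.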

Bank failures were a common event throughout the 1920s. In any year, it was normal for several hundred banks to fail. In 1930, the number of failures increased substantially. Failures and contagious panics occurred repeatedly during the contraction years. President Hoover recognized that the banking system required assistance. However, the President also believed that this assistance, like charity, should come from the private sector rather than the government, if at all possible.

The National Credit Corporation

To this end, Hoover encouraged a number of major banks to form the National Credit Corporation (NCC), to lend money to other banks experiencing difficulties. The NCC was announced on October 13, 1931, and began operations on November 11, 1931. However, the banks in the NCC were not enthusiastic about this endeavor, and made loans very reluctantly, requiring that borrowing banks pledge their best assets as collateral, or security for the loan. Hoover quickly recognized that the NCC would not provide the necessary relief to the troubled banking system.

RFC Approved, January 1932

Eugene Meyer, Governor of the Federal Reserve Board, convinced the President that a public agency was needed to make loans to troubled banks. On December 7, 1931, a bill was introduced to establish the Reconstruction Finance Corporation. The legislation was approved on January 22, 1932, and the RFC opened for business on February 2, 1932.

The original legislation authorized the RFC’s existence for a ten-year period. However, Presidential approval was required to operate beyond January 1, 1933, and Congressional approval was required for lending authority to continue beyond January 1, 1934. Subsequent legislation extended the life of the RFC and added many additional responsibilities and authorities.

The RFC was funded through the United States Treasury. The Treasury provided $500 million of capital to the RFC, and the RFC was authorized to borrow an additional $1.5 billion from the Treasury. The Treasury, in turn, sold bonds to the public to fund the RFC. Over time, this borrowing authority was increased manyfold. Subsequently, the RFC was authorized to sell securities directly to the public to obtain funds. However, most RFC funding was obtained by borrowing from the Treasury. During its years of existence, the RFC borrowed $51.3 billion from the Treasury, and $3.1 billion from the public.

The RFC During the Hoover Administration

RFC Authorized to Lend to Banks and Others

The original legislation authorized the RFC to make loans to banks and other financial institutions, to railroads, and for crop loans. While the original objective of the RFC was to help banks, railroads were assisted because many banks owned railroad bonds, which had declined in value because the railroads themselves had suffered a decline in business. If railroads recovered, their bonds would increase in value. This increase, or appreciation, of bond prices would improve the financial condition of banks holding these bonds.

Through legislation approved on July 21, 1932, the RFC was authorized to make loans for self-liquidating public works projects, and to states to provide relief and work relief to needy and unemployed people. This legislation also required that the RFC report to Congress, on a monthly basis, the identity of all new borrowers of RFC funds.

RFC Undercut by Requirement That It Publish Names of Banks Receiving Loans

From its inception through Franklin Roosevelt’s inauguration on March 4, 1933, the RFC primarily made loans to financial institutions. During the first months following the establishment of the RFC, bank failures and currency holdings outside of banks both declined. However, several loans aroused political and public controversy, which was why the July 21, 1932 legislation included the provision that the identity of banks receiving RFC loans from that date forward be reported to Congress. The Speaker of the House of Representatives, John Nance Garner, ordered that the identity of the borrowing banks be made public. The publication of the identity of banks receiving RFC loans, which began in August 1932, reduced the effectiveness of RFC lending. Bankers became reluctant to borrow from the RFC, fearing that public revelation of an RFC loan would cause depositors to fear the bank was in danger of failing, and possibly start a panic. Legislation passed in January 1933 required that the RFC publish a list of all loans made from its inception through July 21, 1932, the effective date for the publication of new loan recipients.

RFC, Politics and Bank Failure in February and March 1933

In mid-February 1933, banking difficulties developed in Detroit, Michigan. The RFC was willing to make a loan to the troubled bank, the Union Guardian Trust, to avoid a crisis. The bank was one of Henry Ford’s banks, and Ford had deposits of $7 million in this particular bank. Michigan Senator James Couzens demanded that Henry Ford subordinate his deposits in the troubled bank as a condition of the loan. If Ford agreed, he would risk losing all of his deposits before any other depositor lost a penny. Ford and Couzens had once been partners in the automotive business, but had become bitter rivals. Ford refused to agree to Couzens’ demand, even though failure to save the bank might start a panic in Detroit. When the negotiations failed, the governor of Michigan declared a statewide bank holiday. In spite of the RFC’s willingness to assist the Union Guardian Trust, the crisis could not be averted.

The crisis in Michigan resulted in a spread of panic, first to adjacent states, but ultimately throughout the nation. By the day of Roosevelt’s inauguration, March 4, all states had declared bank holidays or had restricted the withdrawal of bank deposits for cash. As one of his first acts as president, on March 5 President Roosevelt announced to the nation that he was declaring a nationwide bank holiday. Almost all financial institutions in the nation were closed for business during the following week. The RFC lending program failed to prevent the worst financial crisis in American history.

Criticisms of the RFC

The effectiveness of RFC lending to March 1933 was limited in several respects. The RFC required banks to pledge assets as collateral for RFC loans. A criticism of the RFC was that it often took a bank’s best loan assets as collateral. Thus, the liquidity provided came at a steep price to banks. Also, the publicity given to new loan recipients beginning in August 1932, and the general controversy surrounding RFC lending, probably discouraged banks from borrowing. In September and November 1932, the amount of outstanding RFC loans to banks and trust companies decreased, as repayments exceeded new lending.

The RFC in the New Deal

FDR Sees Advantages in Using the RFC

President Roosevelt inherited the RFC. He and his colleagues, as well as Congress, found the independence and flexibility of the RFC to be particularly useful. The RFC was an executive agency with the ability to obtain funding through the Treasury outside of the normal legislative process. Thus, the RFC could be used to finance a variety of favored projects and programs without obtaining legislative approval. RFC lending did not count toward budgetary expenditures, so the expansion of the role and influence of the government through the RFC was not reflected in the federal budget.

RFC Given the Authority to Buy Bank Stock

The first task was to stabilize the banking system. On March 9, 1933, the Emergency Banking Act was approved as law. This legislation and a subsequent amendment improved the RFC’s ability to assist banks by giving it the authority to purchase bank preferred stock, capital notes and debentures (bonds), and to make loans using bank preferred stock as collateral. While banks were initially reluctant, the RFC encouraged banks to issue preferred stock for it to purchase. This provision of capital funds to banks strengthened the financial position of many banks. Banks could use the new capital funds to expand their lending, and did not have to pledge their best assets as collateral. The RFC purchased $782 million of bank preferred stock from 4,202 individual banks, and $343 million of capital notes and debentures from 2,910 individual banks and trust companies. In sum, the RFC assisted almost 6,800 banks. Most of these purchases occurred in the years 1933 through 1935.

The preferred stock purchase program did have controversial aspects. The RFC officials at times exercised their authority as shareholders to reduce salaries of senior bank officers, and on occasion, insisted upon a change of bank management. However, the infusion of new capital into the banking system, and the establishment of the Federal Deposit Insurance Corporation to insure bank depositors against loss, stabilized the financial system. In the years following 1933, bank failures declined to very low levels.

RFC’s Assistance to Farmers

Throughout the New Deal years, the RFC’s assistance to farmers was second only to its assistance to bankers. Total RFC lending to agricultural financing institutions totaled $2.5 billion. Over half, $1.6 billion, went to its subsidiary, the Commodity Credit Corporation. The Commodity Credit Corporation was incorporated in Delaware in 1933, and operated by the RFC for six years. In 1939, control of the Commodity Credit Corporation was transferred to the Department of Agriculture, where it remains today.

Commodity Credit Corporation

The agricultural sector was hit particularly hard by depression, drought, and the introduction of the tractor, displacing many small and tenant farmers. The primary New Deal program for farmers was the Agricultural Adjustment Act. Its objective was to reverse the decline of product prices and farm incomes experienced since 1920. The Commodity Credit Corporation contributed to this objective by purchasing selected agricultural products at guaranteed prices, typically above the prevailing market price. Thus, the CCC purchases established a guaranteed minimum price for these farm products.

The RFC also funded the Electric Home and Farm Authority, a program designed to enable low- and moderate-income households to purchase gas and electric appliances. This program would create demand for electricity in rural areas, such as the area served by the new Tennessee Valley Authority. Providing electricity to rural areas was the objective of the Rural Electrification Program.

Decline in Bank Lending Concerns RFC and New Deal Officials

After 1933, bank assets and bank deposits both increased. However, banks changed their asset allocation dramatically during the recovery years. Prior to the depression, banks primarily made loans, and purchased some securities, such as U.S. Treasury securities. During the recovery years, banks primarily purchased securities, which involved less risk. Whether due to concerns over safety, or because potential borrowers had weakened financial positions due to the depression, bank lending did not recover, as indicated by the data in Table 1.

The relative decline in bank lending was a major concern for RFC officials and the New Dealers, who felt that lack of lending by banks was hindering economic recovery. The sentiment within the Roosevelt administration was that the problem was banks’ unwillingness to lend. They viewed the lending by the Commodity Credit Corporation and the Electric Home and Farm Authority, as well as reports from members of Congress, as evidence that there was unsatisfied business loan demand.

Table 1: Bank Loans, Investments, and Net Deposits, 1921-1940

Year | Loans and Investments ($ millions) | Loans ($ millions) | Net Deposits ($ millions) | Loans as % of Loans and Investments | Loans as % of Net Deposits
1921 | 39,895 | 28,927 | 30,129 | 73% | 96%
1922 | 39,837 | 27,627 | 31,803 | 69% | 87%
1923 | 43,613 | 30,272 | 34,359 | 69% | 88%
1924 | 45,067 | 31,409 | 36,660 | 70% | 86%
1925 | 48,709 | 33,729 | 40,349 | 69% | 84%
1926 | 51,474 | 36,035 | 42,114 | 70% | 86%
1927 | 53,645 | 37,208 | 43,489 | 69% | 86%
1928 | 57,683 | 39,507 | 44,911 | 68% | 88%
1929 | 58,899 | 41,581 | 45,058 | 71% | 92%
1930 | 58,556 | 40,497 | 45,586 | 69% | 89%
1931 | 55,267 | 35,285 | 41,841 | 64% | 84%
1932 | 46,310 | 27,888 | 32,166 | 60% | 87%
1933 | 40,305 | 22,243 | 28,468 | 55% | 78%
1934 | 42,552 | 21,306 | 32,184 | 50% | 66%
1935 | 44,347 | 20,213 | 35,662 | 46% | 57%
1936 | 48,412 | 20,636 | 41,027 | 43% | 50%
1937 | 49,565 | 22,410 | 42,765 | 45% | 52%
1938 | 47,212 | 20,982 | 41,752 | 44% | 50%
1939 | 49,616 | 21,320 | 45,557 | 43% | 47%
1940 | 51,336 | 22,340 | 49,951 | 44% | 45%

Source: Banking and Monetary Statistics, 1914 –1941.
Net Deposits are total deposits less interbank deposits.
All data are for the last business day of June in each year.
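As a sanity check, the percentage columns of Table 1 can be recomputed from its dollar columns; for instance, the 1929 and 1935 rows:

```python
# Recomputing two of Table 1's percentage columns from its dollar columns
# (all figures in millions of dollars, taken from the table above).
rows = {
    1929: (58899, 41581, 45058),  # loans+investments, loans, net deposits
    1935: (44347, 20213, 35662),
}
for year, (loans_and_inv, loans, net_deposits) in rows.items():
    print(year,
          f"loans as % of loans and investments = {loans / loans_and_inv:.0%},",
          f"loans as % of net deposits = {loans / net_deposits:.0%}")
```

Both rows reproduce the table's figures (71% and 92% for 1929; 46% and 57% for 1935), confirming the sharp fall in the share of bank assets devoted to lending that concerned RFC officials.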

RFC Provides Credit to Business

Due to the failure of bank lending to return to pre-Depression levels, the role of the RFC expanded to include the provision of credit to business. RFC support was deemed essential for the success of the National Recovery Administration, the New Deal program designed to promote industrial recovery. To support the NRA, legislation passed in 1934 authorized the RFC and the Federal Reserve System to make working capital loans to businesses. However, direct lending to businesses did not become an important RFC activity until 1938, when President Roosevelt encouraged expanding business lending in response to the recession of 1937-38.

RFC Mortgage Company

During the depression, many families and individuals were unable to make their mortgage payments, and had their homes repossessed. Another New Deal goal was to provide more funding for mortgages, to avoid the displacement of homeowners. In June 1934, the National Housing Act provided for the establishment of the Federal Housing Administration (FHA). The FHA would insure mortgage lenders against loss, and FHA mortgages required a smaller percentage down payment than was customary at that time, thus making it easier to purchase a house. In 1935, the RFC Mortgage Company was established to buy and sell FHA-insured mortgages.

RFC and Fannie Mae

Financial institutions were reluctant to purchase FHA mortgages, so in 1938 the President requested that the RFC establish a national mortgage association, the Federal National Mortgage Association, or Fannie Mae. Fannie Mae was originally funded by the RFC to create a market for FHA and later Veterans Administration (VA) mortgages. The RFC Mortgage Company was absorbed by the RFC in 1947. When the RFC was closed, its remaining mortgage assets were transferred to Fannie Mae. Fannie Mae evolved into a private corporation. During its existence, the RFC provided $1.8 billion of loans and capital to its mortgage subsidiaries.

RFC and Export-Import Bank

President Roosevelt sought to encourage trade with the Soviet Union. To promote this trade, the Export-Import Bank was established in 1934. The RFC provided capital and, later, loans to the Ex-Im Bank. Interest in loans to support trade was so strong that a second Ex-Im Bank was created a month after the first to fund trade with other foreign nations. The two banks were merged in 1936, with the authority to make loans to encourage exports in general. The RFC provided $201 million of capital and loans to the Ex-Im Banks.

Other RFC activities during this period included lending to federal agencies providing relief from the depression, including the Public Works Administration and the Works Progress Administration; disaster loans; and loans to state and local governments.

RFC Pushes Up the Price of Gold, Devalues the Dollar

Evidence of the flexibility afforded by the RFC was President Roosevelt’s use of the agency to affect the market price of gold. The President wanted to reduce the gold value of the dollar, then fixed at $20.67 per ounce of gold. As the dollar price of gold increased, the dollar exchange rate would fall relative to currencies that had a fixed gold price. A fall in the value of the dollar makes exports cheaper and imports more expensive. In an economy with high unemployment, a decline in imports and an increase in exports would raise domestic employment.

The goal of the RFC purchases was to increase the market price of gold. During October 1933 the RFC began purchasing gold at a price of $31.36 per ounce. The price was gradually increased to over $34 per ounce, and the RFC price set a floor for the market price of gold. In January 1934, the new official dollar price of gold was fixed at $35.00 per ounce, leaving the dollar with only 59% of its former gold content (a 69% increase in the dollar price of gold).
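The size of the devaluation follows directly from the two official gold prices; an illustrative calculation:

```python
# Devaluation arithmetic from the two official gold prices.
old_price = 20.67  # dollars per ounce of gold before 1933
new_price = 35.00  # official price fixed in January 1934

# Percentage increase in the dollar price of gold
price_rise = (new_price / old_price - 1) * 100          # about 69%

# Gold content of the dollar as a share of its former level
remaining_gold_value = (old_price / new_price) * 100    # about 59%

print(f"gold price rose {price_rise:.0f}%")
print(f"dollar retained {remaining_gold_value:.0f}% of its gold value")
```

In other words, the dollar price of gold rose about 69%, equivalently leaving the dollar with about 59% of its former gold content.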

Twice President Roosevelt instructed Jesse Jones, the chairman of the RFC, to stop lending, as he intended to close the RFC. The first occasion was in October 1937, and the second was in early 1940. The recession of 1937-38 caused Roosevelt to authorize the resumption of RFC lending in early 1938, and the German invasion of France and the Low Countries gave the RFC new life on the second occasion.

The RFC in World War II

In 1940 the scope of RFC activities increased significantly, as the United States began preparing to assist its allies, and for possible direct involvement in the war. The RFC’s wartime activities were conducted in cooperation with other government agencies involved in the war effort. For its part, the RFC established seven new corporations, and purchased an existing corporation. The eight RFC wartime subsidiaries are listed in Table 2, below.

Table 2
RFC Wartime Subsidiaries
Metals Reserve Company
Rubber Reserve Company
Defense Plant Corporation
Defense Supplies Corporation
War Damage Corporation
U.S. Commercial Company
Rubber Development Corporation
Petroleum Reserve Corporation (later War Assets Corporation)

Source: Final Report of the Reconstruction Finance Corporation

Development of Materials Cut Off By the War

The RFC subsidiary corporations assisted the war effort as needed. These corporations funded the development of synthetic rubber, the construction and operation of a tin smelter, and the establishment of abaca (Manila hemp) plantations in Central America. Both natural rubber and abaca (used to produce rope products) were produced primarily in Southeast Asia, which came under Japanese control. Thus, these programs encouraged the development of alternative sources of supply of these essential materials. Synthetic rubber, which was not produced in the United States prior to the war, quickly became the primary source of rubber in the postwar years.

Other War-Related Activities

Other war-related activities included financing plant conversion and construction for the production of military and essential goods, dealing in and stockpiling strategic materials, purchasing materials to reduce the supply available to enemy nations, administering war damage insurance programs, and financing the construction of oil pipelines from Texas to New Jersey to free tankers for other uses.

During its existence, RFC management made discretionary loans and investments of $38.5 billion, of which $33.3 billion was actually disbursed. Of this total, $20.9 billion was disbursed to the RFC’s wartime subsidiaries. From 1941 through 1945, the RFC authorized over $2 billion of loans and investments each year, with a peak of over $6 billion authorized in 1943. The magnitude of RFC lending had increased substantially during the war. Most lending to wartime subsidiaries ended in 1945, and all such lending ended in 1948.

The Final Years of the RFC, 1946-1953

After the war, RFC lending decreased dramatically. In the postwar years, only in 1949 was over $1 billion authorized. Through 1950, most of this lending was directed toward businesses and mortgages. On September 7, 1950, Fannie Mae was transferred to the Housing and Home Finance Agency. During its last three years, almost all RFC loans were to businesses, including loans authorized under the Defense Production Act.

Eisenhower Terminates the RFC

President Eisenhower was inaugurated in 1953, and shortly thereafter legislation was passed terminating the RFC. The original RFC legislation authorized operations for one year of a possible ten-year existence, giving the President the option of extending its operation for a second year without Congressional approval. The RFC survived much longer, continuing to provide credit for both the New Deal and World War II. Now, the RFC would finally be closed.

Small Business Administration

However, there was concern that the end of RFC business loans would hurt small businesses. Thus, the Small Business Administration (SBA) was created in 1953 to continue the program of lending to small businesses, as well as providing training programs for entrepreneurs. The disaster loan program was also transferred to the SBA.

Through legislation passed on July 30, 1953, RFC lending authority ended on September 28, 1953. The RFC continued to collect on its loans and investments through June 30, 1957, at which time all remaining assets were transferred to other government agencies. When the liquidation act was passed, the RFC’s synthetic rubber, tin, and abaca programs were still in operation. The synthetic rubber operations were sold or leased to private industry, and the tin and abaca programs were ultimately transferred to the General Services Administration.

Successors of the RFC

Three government agencies and one private corporation that were related to the RFC continue today. The Small Business Administration was established to continue lending to small businesses. The Commodity Credit Corporation continues to provide assistance to farmers. The Export-Import Bank continues to provide loans to promote exports. Fannie Mae became a private corporation in 1968. Today it is the most important source of mortgage funds in the nation, and has become one of the largest corporations in the country. Its stock is traded on the New York Stock Exchange under the symbol FNM.

Economic Analysis of the RFC

Role of a Lender of Last Resort

The American central bank, the Federal Reserve System, was created to be a lender of last resort. A lender of last resort exists to provide liquidity to banks during crises. The famous British central banker, Walter Bagehot, advised, “…in a panic the holders of the ultimate Bank reserve (whether one bank or many) should lend to all that bring good securities quickly, freely, and readily. By that policy they allay a panic…”

However, the Fed was not an effective lender of last resort during the depression years. Many of the banks experiencing problems during the depression years were not members of the Federal Reserve System, and thus could not borrow from the Fed. The Fed was reluctant to assist troubled banks, and banks also feared that borrowing from the Fed might weaken depositors’ confidence.

President Hoover hoped to restore stability and confidence in the banking system by creating the Reconstruction Finance Corporation. The RFC made collateralized loans to banks. Many scholars argue that initially RFC lending did provide relief. These observations are based on the decline in bank suspensions and public currency holdings in the months immediately following the creation of the RFC in February 1932. These data are presented in Table 3.

Table 3
Month (1932) | Currency Held by the Public ($ millions) | Bank Suspensions
January | 4,896 | 342
February | 4,824 | 119
March | 4,743 | 45
April | 4,751 | 74
May | 4,746 | 82
June | 4,959 | 151
July | 5,048 | 132
August | 4,988 | 85
September | 4,941 | 67
October | 4,863 | 102
November | 4,842 | 93
December | 4,830 | 161

Data sources: Currency – Friedman and Schwartz (1963)
Bank suspensions – Board of Governors (1937)

Bank suspensions occur when banks cannot open for normal business operations due to financial problems. Most bank suspensions ended in failure of the bank. Currency held by the public can be an indicator of public confidence in banks. As confidence declines, members of the public convert deposits to currency, and vice versa.
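Read this way, the month-to-month change in currency holdings from Table 3 offers a rough gauge of public confidence: currency flowed back into banks in the months after the RFC began lending in February, then out again during the June crisis. An illustrative computation:

```python
# Currency held by the public in 1932 ($ millions), from Table 3.
currency = {
    "Jan": 4896, "Feb": 4824, "Mar": 4743, "Apr": 4751,
    "May": 4746, "Jun": 4959, "Jul": 5048, "Aug": 4988,
    "Sep": 4941, "Oct": 4863, "Nov": 4842, "Dec": 4830,
}

months = list(currency)
for prev, cur in zip(months, months[1:]):
    change = currency[cur] - currency[prev]
    # Falling currency holdings mean the public is redepositing cash.
    direction = "into banks" if change < 0 else "out of banks"
    print(f"{cur}: {change:+d} ({direction})")
```

The February and March declines (-72 and -81) coincide with the drop in suspensions after the RFC's creation, while the +213 jump in June coincides with the Chicago banking crisis.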

The banking situation deteriorated in June 1932 when a crisis developed in and around Chicago. Both Friedman and Schwartz (1963) and Jones (1951) assert that an RFC loan to a key bank helped to end the crisis, even though the bank subsequently failed.

The Debate over the Impact of the RFC

Two studies of RFC lending have come to differing conclusions. Butkiewicz (1995) examines the effect of RFC lending on bank suspensions and finds that lending reduced suspensions in the months prior to publication of the identities of loan recipients. He further argues that publication of the identities of banks receiving loans discouraged banks from borrowing. As noted above, RFC loans to banks declined in the two months after publication began. Mason (2001) examines the impact of lending on a sample of Illinois banks and finds that those receiving RFC loans were increasingly likely to fail. Thus, the limited evidence provided by scholarly studies yields conflicting results about the impact of RFC lending.

Critics of RFC lending to banks argue that the RFC took the banks’ best assets as collateral, thereby reducing bank liquidity. Also, RFC lending requirements were initially very stringent. After the financial collapse in March 1933, the RFC was authorized to provide banks with capital through preferred stock and bond purchases. This change, along with the creation of the Federal Deposit Insurance System, stabilized the banking system.

Economic and Noneconomic Rationales for an Agency Like the RFC

Beginning in 1933, the RFC became more directly involved in the allocation of credit throughout the economy. There are several reasons why a government agency might actively participate in the allocation of credit: market failures, externalities, and noneconomic motivations.

A market failure occurs if private markets fail to allocate resources efficiently. For example, small business owners complain that markets do not provide enough loans at reasonable interest rates, a so-called “credit gap”. However, small business loans are riskier than loans to large corporations. Higher interest rates compensate for the greater risk involved in lending to small businesses. Thus, the case for a market failure is not compelling. However, small business loans remain politically popular.

An externality exists when the benefits to society are greater than the benefits to the individuals involved. For example, loans to troubled banks may prevent a financial crisis, and purchases of bank capital may help stabilize the financial system. Preventing financial crises, and the recessions or depressions that can follow them, provides benefits to society beyond the benefits to bank depositors and shareholders. Similarly, encouraging home ownership may create a more stable society. This argument is often used to justify government provision of funds to the mortgage market.

While wars are often fought over economic issues, and wars have economic consequences, a nation may become involved in a war for noneconomic reasons. Thus, the RFC wartime programs were motivated by political reasons, as much or more than economic reasons.

The RFC was a federal credit agency. The first federal credit agency was established in 1917. However, federal credit programs were relatively limited until the advent of the RFC. Many RFC lending programs were targeted to help specific sectors of the economy. A number of these activities were controversial, as are some federal credit programs today. Three important government agencies and one private corporation that descended from the RFC still operate today. All have important effects on the allocation of credit in our economy.

Criticisms of Governmental Credit Programs

Critics of federal credit programs cite several problems. One is that these programs subsidize certain activities, which may result in overproduction and misallocation of resources. For example, small businesses can obtain funds through the SBA at lower interest rates than are available through banks. This interest rate differential is a subsidy to small business borrowers. Crop loans and price supports result in overproduction of agricultural products. In general, federal credit programs reallocate capital resources to favored activities.

Finally, federal credit programs, including the RFC, are not funded as part of the normal budget process. They obtain funds through the Treasury, or their own borrowings are assumed to have the guarantee of the federal government. Thus, their borrowing is based on the creditworthiness of the federal government, not their own activities. These “off-budget” activities increase the scope of federal involvement in the economy while avoiding the normal budgetary decisions of the President and Congress. Also, these lending programs involve risk. Default on a significant number of these loans might require the federal government to bail out the affected agency. Taxpayers would bear the cost of a bailout.

Any analysis of market failures, externalities, or federal programs should involve a comparison of costs and benefits. However, precise measurement of costs and benefits in these cases is often difficult. Supporters value the benefits very highly, while opponents argue that the costs are excessive.


The RFC was created to assist banks during the Great Depression. It experienced some, albeit limited, success in this activity. However, the RFC’s authority to borrow directly from the Treasury outside the normal budget process proved very attractive to President Roosevelt and his advisors. Throughout the New Deal, the RFC was used to finance a vast array of favored activities. During World War II, RFC lending to its subsidiary corporations was an essential component of the war effort, and the RFC was the largest and most important federal credit program of its time. Even after the RFC was closed, some of its lending activities continued through agencies and corporations that it first established or funded. These descendant organizations, especially Fannie Mae, play a very important role in the allocation of credit in the American economy. The legacy of the RFC continues, long after it ceased to exist.


Data Sources

Banking data are from Banking and Monetary Statistics, 1914-1941, Board of Governors of the Federal Reserve System, 1943.

RFC data are from Final Report on the Reconstruction Finance Corporation, Secretary of the Treasury, 1959.

Currency data are from The Monetary History of the United States, 1867-1960, Friedman and Schwartz, 1963.

Bank suspension data are from Federal Reserve Bulletin, Board of Governors, September 1937.


Bagehot, Walter. Lombard Street: A Description of the Money Market. New York: Scribner, Armstrong & Co., 1873.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics, 1914-1941. Washington, DC, 1943.

Board of Governors of the Federal Reserve System. Federal Reserve Bulletin. September 1937.

Bremer, Cornelius D. American Bank Failures. New York: AMS Press, 1968.

Butkiewicz, James L. “The Impact of a Lender of Last Resort during the Great Depression: The Case of the Reconstruction Finance Corporation.” Explorations in Economic History 32, no. 2 (1995): 197-216.

Butkiewicz, James L. “The Reconstruction Finance Corporation, the Gold Standard, and the Banking Panic of 1933.” Southern Economic Journal 66, no. 2 (1999): 271-93.

Chandler, Lester V. America’s Greatest Depression, 1929-1941. New York: Harper and Row, 1970.

Friedman, Milton, and Anna J. Schwartz. The Monetary History of the United States, 1867-1960. Princeton, NJ: Princeton University Press, 1963.

Jones, Jesse H. Fifty Billion Dollars: My Thirteen Years with the RFC, 1932-1945. New York: Macmillan Co., 1951.

Keehn, Richard H., and Gene Smiley. “U.S. Bank Failures, 1932-1933: A Provisional Analysis.” Essays in Economic and Business History 6 (1988): 136-56.

Keehn, Richard H., and Gene Smiley. “U.S. Bank Failures, 1932-33: Additional Evidence on Regional Patterns, Timing, and the Role of the Reconstruction Finance Corporation.” Essays in Economic and Business History 11 (1993): 131-45.

Kennedy, Susan E. The Banking Crisis of 1933. Lexington, KY: University of Kentucky Press, 1973.

Mason, Joseph R. “Do Lender of Last Resort Policies Matter? The Effects of Reconstruction Finance Corporation Assistance to Banks During the Great Depression.” Journal of Financial Services Research 20, no. 1 (2001): 77-95.

Nadler, Marcus, and Jules L. Bogen. The Banking Crisis: The End of an Epoch. New York, NY: Arno Press, 1980.

Olson, James S. Herbert Hoover and the Reconstruction Finance Corporation. Ames, IA: Iowa State University Press, 1977.

Olson, James S. Saving Capitalism: The Reconstruction Finance Corporation in the New Deal, 1933-1940. Princeton, NJ: Princeton University Press, 1988.

Saulnier, R. J., Harold G. Halcrow, and Neil H. Jacoby. Federal Lending and Loan Insurance. Princeton, NJ: Princeton University Press, 1958.

Schlesinger, Jr., Arthur M. The Age of Roosevelt: The Coming of the New Deal. Cambridge, MA: Riverside Press, 1957.

Secretary of the Treasury. Final Report on the Reconstruction Finance Corporation. Washington, DC: United States Government Printing Office, 1959.

Sprinkel, Beryl Wayne. “Economic Consequences of the Operations of the Reconstruction Finance Corporation.” Journal of Business of the University of Chicago 25, no. 4 (1952): 211-24.

Sullivan, L. Prelude to Panic: The Story of the Bank Holiday. Washington, DC: Statesman Press, 1936.

Trescott, Paul B. “Bank Failures, Interest Rates, and the Great Currency Outflow in the United States, 1929-1933.” Research in Economic History 11 (1988): 49-80.

Upham, Cyril B., and Edwin Lamke. Closed and Distressed Banks: A Study in Public Administration. Washington, DC: Brookings Institution, 1934.

Wicker, Elmus. The Banking Panics of the Great Depression. Cambridge: Cambridge University Press, 1996.

Web Links

Commodity Credit Corporation

Ex-Im Bank

Fannie Mae

Small Business Administration

Citation: Butkiewicz, James. “Reconstruction Finance Corporation”. EH.Net Encyclopedia, edited by Robert Whaples. July 19, 2002.

The Panic of 1907

Jon Moen, University of Mississippi

The Panic of 1907 was the last and most severe of the bank panics that plagued the National Banking Era of the United States. Severe panics also occurred in 1873, 1884, 1890, and 1893, and numerous smaller financial crises cropped up from time to time. Bank panics were characterized by widespread bank runs, attempts by depositors to withdraw their deposits from the banking system simultaneously. Because banks did not (and still do not) keep 100% reserves against deposits, it paid to be near the front of the line of depositors demanding their money when a panic broke out. What set 1907 apart from earlier panics was that the crisis focused on the trust companies of New York City. The National Banking Era lasted from 1863 to 1914, when Congress, in part to eliminate these recurring panics, created the Federal Reserve System.

What Caused the Panic?

Why would a panic happen? One answer, though not of much help, is that all depositors suddenly became so concerned about the solvency or liquidity of their bank that they decided they would rather hold cash than deposits (Diamond and Dybvig 1983; Jacklin and Bhattacharya 1988). (Solvency refers to the relationship between assets and liabilities; an insolvent bank has liabilities greater than its assets. Liquidity refers to the ease with which assets can be converted to cash without loss of value; liquid assets are close to cash or have a market in which they can be easily and quickly sold.) Whatever the deeper psychological reasons might be, it is not hard to identify some immediate shocks to depositor confidence that sparked the Panic of 1907. Such a shock occurred on October 16, 1907, when F. Augustus Heinze’s scheme to corner the stock of United Copper Company failed. Although United Copper was only a moderately important firm, the collapse of Heinze’s scheme exposed an intricate network of interlocking directorates across banks, brokerage houses, and trust companies in New York City. Contemporary observers like O.M.W. Sprague (1910) believed that the discovery of the close associations between bankers and stockbrokers seriously raised the anxiety of already nervous depositors.

During the National Banking Era the New York money market faced seasonal variations in interest rates and liquidity resulting from the transportation of crops from the interior of the United States to New York and then to Europe. The outflow of capital necessary to finance crop shipments from the Midwest to the East Coast in September or October usually left the New York City money market squeezed for cash. As a result, short-term interest rates in New York City were prone to spike upward in autumn. Seasonal increases in economic activity were not matched by an increase in the money supply because existing domestic monetary structures tended to make the money supply “inelastic.” Usually gold would flow into the United States from Europe in response to the high seasonal interest rates, increasing the monetary base of the United States and easing the liquidity squeeze somewhat.

Under more normal financial conditions, the discovery of a scheme like Heinze’s might not have sparked a panic, but conditions were not normal in the fall of 1907. The economy had been slowing, the stock market had been in decline since early 1907, and the supply of credit had been contracting, causing interest rates to rise. Tight credit markets in Europe, particularly in England, where the Bank of England had been raising its bank rate since December 1906, have been implicated in setting an especially precarious financial stage in 1907. As European interest rates rose, the normal seasonal inflows of foreign gold failed to materialize in 1907. Because there was no central bank or reliable lender of last resort during the National Banking Era, there was no reliable way to expand the money supply in the United States.

Heinze’s extensive involvement in New York banking was subsequently linked to one of his close associates, Charles W. Morse. Morse controlled three national banks directly and was a director of four others. After the failure of his attempt to corner United Copper stock, Heinze was forced to resign from the presidency of Mercantile National Bank, and worried depositors began a run on the bank. Depositors began runs on several of the banks controlled by Morse as well. The New York Clearinghouse, a private organization formed by banks to centralize check clearing (a check clears when it is finally presented to the bank on which it was originally written for payment in cash or reserves), had its examiner analyze the banks’ assets. On the basis of the examination, the Clearinghouse authorities stated that they would support Mercantile and the other banks on the condition that Heinze and Morse retire from banking in New York. On Monday, October 21, Mercantile National resumed business with new management, and the runs on these national banks ceased.

The Panic at the Trust Companies

By October 21, nothing resembling a systemic panic, however, had yet stricken the New York banking system. Depositors at Mercantile Bank withdrew funds but redeposited them in other New York City banks. Many accounts of the Panic of 1907 cite Monday, October 21, as the beginning of the crisis among the trust companies and the true onset of the panic. Late that Monday afternoon the National Bank of Commerce announced that it would stop clearing checks for the Knickerbocker Trust Company, the third largest trust in New York City. Vincent Carosso (1987), however, suggests that the run on Knickerbocker began Friday, October 18, when Charles Barney, the Knickerbocker president, was reported to have been involved in Heinze’s copper corner. Drawing from the private papers of J.P. Morgan, Carosso notes that the National Bank of Commerce had been extending loans to the Knickerbocker Trust to hold off depositor runs. National Bank of Commerce’s refusal to continue acting as a clearing agent for Knickerbocker was interpreted as a vote of no confidence that seriously alarmed Knickerbocker depositors.

On Monday evening, October 21, J.P. Morgan organized a meeting of trust company executives to discuss ways to halt the panic. Morgan, along with James Stillman of National City Bank and George Baker of First National Bank, had earlier organized an informal team to oversee relief efforts during the panic at the national banks (Carosso 1987). Assisting them were several young financial experts responsible for evaluating the assets of troubled institutions and indicating which ones were worthy of aid. Chief among these investigators was Benjamin Strong of Bankers Trust Company, who would later become president of the Federal Reserve Bank of New York. Strong reported to Morgan that he was unable to evaluate Knickerbocker’s financial condition in the short time before funds would have to be committed. Unwilling to act on limited information, Morgan decided not to aid the trust; this decision kept other institutions from offering substantial aid as well. It appears that at first Morgan was uninterested in aiding the trust companies in general, as he felt they should pay for their risky behavior. It is not clear that they were riskier; perhaps Morgan just did not want to aid intermediaries competing with the banks. On October 22 Knickerbocker underwent a run for three hours before suspending operations just after noon, having paid out $8 million in cash.

Ominously, next to the front-page article describing the run on the Knickerbocker Trust in the Wednesday, October 23, edition of the New York Times was a headline describing the Trust Company of America, the second largest trust company in New York City, as the current “sore point” in the panic. By attracting attention to the Trust Company of America, the newspaper article greatly exacerbated the serious run on it. Barney, who was president of Knickerbocker, was also a member of the board of directors of Trust Company of America.

On Tuesday, October 22, withdrawals from Trust Company of America were approximately $1.5 million; on the Wednesday when the ill-timed article was published depositors claimed another $13 million of nearly $60 million in total deposits. Withdrawals from Trust Company of America on Thursday, October 24, were a further $8 million to $9 million. During the span of the run, which lasted two weeks, Trust Company of America reportedly paid out $47.5 million in deposits.

Saving the Trusts

Realizing that the failure of Trust Company of America and Lincoln Trust, another trust company whose distress had been publicized, would endanger the New York money market, five leading trust company presidents formed a committee to assist trusts needing cash. Not all trusts were willing to cooperate, however, so the committee was not able to collect enough cash to provide reliable relief for a trust company facing a sudden run. They petitioned Morgan for more help.

Morgan, Baker, and Stillman knew that aid for Trust Company of America was not certain and saw that the collapse of several large trusts would be disastrous. Strong had arrived at Trust Company of America sometime after 2:00 A.M. Wednesday and had begun to appraise its assets. That afternoon he reported to Morgan that Trust Company was basically sound and deserved assistance. Morgan channeled about $3 million to Trust Company just before closing time, which allowed it to resume business the next day.

Aid began to come from several other sources. J.D. Rockefeller deposited $10 million with the Union Trust to help the trusts and announced his support for Morgan. Secretary of the Treasury George Cortelyou and the major New York financiers met on the evening of Wednesday, October 23, and discussed plans to combat the crisis. Cortelyou deposited $25 million of the Treasury’s funds in national banks the following morning. Between October 21 and October 31, the Treasury deposited a total of $37.6 million in New York national banks and provided $36 million in small bills to meet runs. By the middle of November, however, the U.S. Treasury’s working capital had dwindled to $5 million. Thus Treasury could not and did not contribute much more aid during the rest of the panic (Timberlake 1978, 1993).

The Connection to the Stock Market

Meanwhile, by Thursday, October 24, call money on the New York Stock Exchange was nearly unobtainable. Call money was money lent for the purchase of stock equity, with the stock itself serving as collateral for the loans. Call loans could be called in at any time. The opening rate for call money was 6 percent, but exchange president Ransom H. Thomas noticed a serious scarcity of money. At one point that morning a bid of 60 percent went out for call money. Yet, even at that exorbitant rate, no money was offered. The last recorded transaction of the day was at the opening rate of 6 percent. Fearing a total collapse of the stock market, Thomas called Stillman for aid. Stillman referred Thomas to Morgan, who was in control of most of the available funds. While Thomas traveled to Morgan’s office, the call money rate on the exchange reached 100 percent.

On October 25 another money pool was required. About $10 million came from the Morgan group, $2 million from First National, and $500,000 from Kuhn, Loeb, and Company. This time, however, Morgan allowed the market to determine the call money rate, which remained at nearly 50 percent most of the day. The Morgan funds had restrictions designed to stifle speculation: no margin sales were allowed, only cash sales for investment, and the full amount of Morgan money was not released until afternoon. Throughout the stock exchange crisis, both Trust Company of America and Lincoln Trust were supported by Morgan’s efforts. The two trusts required further aid, and Morgan convinced other trust presidents to support a $25 million loan for the troubled institutions. The funds were provided on November 4 after several nights of negotiation. The panic began to ease when the trust company presidents organized by Morgan agreed to form a consortium to support trust companies facing runs.

The most severe runs on deposits in New York City were limited to the trust companies, not the state or national banks. Deposits contracted at all the trusts in New York, not just the prominent ones like Knickerbocker (Moen and Tallman 1992). This raises a question. If only the trust companies were being run by depositors, why would the banks want to help their competitors? The stock market provides a key link. Runs on deposits forced trusts to liquidate their most liquid assets, call loans on the stock market. Large-scale liquidation of call loans depressed the value of stocks because the stock serving as collateral for the call loan had to be sold quickly to pay off the loan. The sudden increase in the supply of stock would depress stock prices. Given the predominance of national banks in the call loan market, extensive liquidation of call loans by trusts threatened the assets of national banks. National banks and the clearinghouse were aware that they were economically linked to the trust companies through the call loan market. They realized that runs on the trusts could spread to the national banks through the call loan market, giving the banks a strong financial incentive to help the trusts stop the panic, even if they had no legal interest.

The New York Clearinghouse Association Steps In

While financiers were working out the crises with the trusts and the call loan market, money and reserves had become increasingly tight at banks. On October 26 the Clearinghouse issued clearinghouse loan certificates as an artificial mechanism to increase the supply of currency available to the public, a tactic it had used in earlier financial crises in 1873 and 1893 (Timberlake 1984; Gorton 1985; Tallman 1988).

Although the national banking system offered no legal mechanism to increase the supply of currency quickly, loan certificates provided an informal (if unlawful) way to free up a sizable amount of cash. In normal business banks used currency as reserve assets and as the medium to clear accounts with each other. Clearinghouse loan certificates enabled banks to convert their noncash assets into cash during a crisis: banks would substitute loan certificates for currency in their clearings, thus releasing the currency to pay depositors who demanded cash. In effect, loan certificates were IOUs between banks that were backed by eligible assets of the bank. Loan certificates were not recognized as currency by the public or by depositors, and they could legally circulate only among banks, not the public. A. Piatt Andrew (1908) noted, however, that during the 1907 Panic, a number of substitutes for cash were employed in transactions.
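The substitution at the heart of this mechanism can be sketched as a toy example (hypothetical numbers and bank name; in practice certificate issuance was administered by a clearinghouse committee against pledged collateral):

```python
# Illustrative sketch of clearinghouse loan certificates: settling an
# interbank clearing debt with certificates (backed by eligible assets)
# instead of currency leaves vault cash free to pay depositors.
# Hypothetical numbers; not a model of any actual 1907 bank.

class Bank:
    def __init__(self, name, vault_cash, eligible_assets):
        self.name = name
        self.vault_cash = vault_cash            # currency available to depositors
        self.eligible_assets = eligible_assets  # noncash assets that can back certificates
        self.certificates_issued = 0.0

    def settle_clearing_debt(self, amount, use_certificates=False):
        """Settle an adverse clearing balance either in cash or in
        clearinghouse loan certificates backed by eligible assets."""
        if use_certificates:
            assert amount <= self.eligible_assets, "insufficient backing assets"
            self.eligible_assets -= amount
            self.certificates_issued += amount
        else:
            assert amount <= self.vault_cash, "insufficient vault cash"
            self.vault_cash -= amount

bank = Bank("Example National", vault_cash=100.0, eligible_assets=300.0)

# Paying a 60-unit clearing debt in cash would leave only 40 for runs;
# paying it in certificates leaves the full 100 available to depositors.
bank.settle_clearing_debt(60.0, use_certificates=True)
print(bank.vault_cash)           # 100.0 -- cash preserved for depositors
print(bank.certificates_issued)  # 60.0
```

The point of the substitution is visible in the final state: the clearing debt is settled, yet all of the vault cash remains available to meet depositor withdrawals.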

Following the first issue of clearinghouse loan certificates on October 26 during the 1907 Panic, loans initially increased by about $11 million. During the next three weeks more than $110 million in certificates were issued in New York City. Over the entire course of the Panic, nearly $500 million in currency substitutes circulated throughout the country as a “principal means of payment,” according to Andrew (1910, 515). Sprague has criticized the clearinghouse for delaying the use of loan certificates until after the panic was well under way. He believed that issuing certificates as soon as the crisis struck the trusts would have calmed the market by allowing banks to accommodate their depositors more quickly. Aid would have gone directly to troubled banks and trusts, and the cumbersome device of money pools could have been avoided. Fewer loans would have been called in, thus reducing the tension at the stock exchange (Sprague 1910, 257-58).

The clearinghouse also restricted the convertibility of demand deposits into cash, an action that, like issuing loan certificates to the public, was illegal. The restriction, referred to as “suspension of payments,” increased the costs of doing business by making payments more difficult. Nevertheless, banks continued other business activities such as accepting deposits and clearing checks. The suspension of payments spread across the country through the system of correspondent banks. Although convertibility was widely restored by the beginning of January, in a few instances loan certificates and other substitutes for cash circulated as late as March 1908.

Why Were There Runs on Trust Companies?

There were three main types of financial intermediaries during the National Banking Era: national banks, state banks, and, later in the period, trust companies. It is not surprising that trust companies were the focal point of the panic. In New York, assets at the trust companies had grown phenomenally, increasing 244 percent during the ten years ending in 1907, from $396.7 million to $1,394.0 million. In contrast, national bank assets had grown 97 percent, from $915.2 million to $1,800.0 million, while state-chartered bank assets had grown 82 percent, from $297.0 million to $541.0 million (Barnett 1911, 234-35). Thus the manner in which trust companies used their assets greatly affected the New York money market (Moen and Tallman 1992).

Trust companies were much less regulated than national or state banks in New York. In 1906 New York State instituted a requirement that trusts maintain reserves at 15 percent of deposits, but only 5 percent of deposits needed to be kept as currency in the vault. Before that time trusts simply kept whatever reserves they felt necessary to conduct business. National bank notes were adequate as cash reserves for trusts while national banks in central reserve cities like New York were required to keep a 25 percent reserve in the form of specie or legal tender (greenbacks or treasury notes but not national bank notes).

Trusts were originally rather conservative institutions, managing estates, holding securities, and taking deposits, but by 1907 trusts were performing most of the functions of banks except issuing bank notes. Many of the larger trusts specialized in underwriting security issues. Others wrote mortgages or invested directly in real estate, activities barred or limited for national banks. New York City trusts had a higher proportion of collateralized loans than did New York City national banks. Conventional banking wisdom associated collateralized loans with riskier investments and riskier borrowers. The trusts, therefore, had an asset portfolio that may have been riskier than those of other intermediaries.

National and private banks found the investment banking functions of trusts so useful that many of them gained direct or indirect control of a trust through holding companies or by placing their associates on a trust’s board of directors. In many instances a bank and its affiliated trust operated in the same building.

Trusts appear to have provided intermediary functions different from those of banks. Although the volume of deposits subject to check at trusts was similar to that at banks, trusts had many fewer checks (in number and value) written against their demand deposits than did banks. The check clearings of trusts were only about 7 percent of the volume of those at banks. Trusts were not then like commercial banks, whose assets are used as transactions balances by individual depositors or firms. National banks were part of a network of regional banks that had correspondent relationships to expedite interregional transactions (James 1978, 40). Trusts were not part of the correspondent banking system, so their deposits were more local and less directly subject to the recurring seasonal strains on funds.


The New York Clearinghouse had detailed knowledge of the quality of bank assets in New York. A similar, formal organization of trust companies would have had current knowledge of the assets and liabilities of its member trusts. Such an organization could have more readily assessed the situation at trust companies facing runs than the ad hoc consortiums and money pools organized by Morgan. The ability of a clearinghouse to shield its members from runs on deposits was clearly demonstrated by the Chicago Clearinghouse in 1907, where there were virtually no runs on deposits. In Chicago the trust companies, similar in structure to those in New York, were members of the clearinghouse and were not singled out by depositors. A lender of last resort covering all intermediaries in the payments system certainly adds stability to the system.

J.P. Morgan and others, however, may have profited from earlier panics by lending money to otherwise desperate bankers. This is the popular view of their actions in 1907. The 1907 Panic, however, may have turned out to be far more severe than anticipated. Even if Morgan made money after the fact in 1907, the expectation of higher default risk made the possibility of lending in future panics unattractive. Perhaps this is what the New York bankers realized, causing them to abandon their role as de facto lenders of last resort and setting the groundwork for the establishment of the Federal Reserve System.


Much of this article is based on a review article from the Federal Reserve Bank of Atlanta, although I have updated some of our economic interpretations of the panic, particularly on matters related to liquidity and solvency. The complete reference to the review article is:

Moen, Jon and Ellis Tallman. “Lessons from the Panic of 1907.” Federal Reserve Bank of Atlanta Economic Review 75 (May/June 1990): 2-13.

Other important references cited or used are:

Andrew, A. Piatt. “Substitutes for Cash in the Panic of 1907.” Quarterly Journal of Economics 23 (August 1908): 497-516.

Barnett, George E. State Banks and Trust Companies since the Passage of the National Bank Act. Washington, D.C.: U.S. Government Printing Office, 1911.

Calomiris, Charles and Gary Gorton. “The Origins of Bank Panics: Models, Facts, and Bank Regulations.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Carosso, Vincent P. The Morgans: Private International Bankers, 1854-1913. Cambridge, MA: Harvard University Press, 1987.

The Commercial and Financial Chronicle. Various issues from November 7, 1907 through January 8, 1908.

Diamond, Douglas W., and Philip H. Dybvig. “Bank Runs, Deposit Insurance, and Liquidity.” Journal of Political Economy 91 (June 1983): 401-19.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States: 1867-1960. Princeton, NJ: Princeton University Press, 1963.

Gorton, Gary. “Clearinghouses and the Origins of Central Banking in the United States.” Journal of Economic History 45 (June 1985): 277-84.

Jacklin, Charles J., and Sudipto Bhattacharya. “Distinguishing Panics and Information-Based Bank Runs: Welfare and Policy Implications.” Journal of Political Economy 96 (June 1988): 568-92.

James, John. Money and Capital Markets in Postbellum America. Princeton, NJ: Princeton University Press, 1978.

Moen, Jon, and Ellis W. Tallman. “The Bank Panic of 1907: The Role of the Trust Companies.” Journal of Economic History 52 (September 1992): 611-630.

Moen, Jon, and Ellis W. Tallman. “Clearinghouse Membership and Deposit Contraction during the Panic of 1907.” Journal of Economic History 60 (March 2000): 145-163.

Sprague, Oliver M.W. “The American Crisis of 1907.” The Economic Journal 18 (September 1908): 353-72.

Sprague, Oliver M.W. History of Crises under the National Banking System. National Monetary Commission. Washington, D.C.: U.S. Government Printing Office, 1910.

Tallman, Ellis W. “Some Unanswered Questions about Banking Panics.” Federal Reserve Bank of Atlanta Economic Review 73 (November/December 1988): 2-21.

Timberlake, Richard Henry. The Origins of Central Banking in the United States. Cambridge, MA: Harvard University Press, 1978.

Timberlake, Richard Henry. Monetary Policy in the United States. Chicago: University of Chicago Press, 1993.

Timberlake, Richard Henry. “The Central Banking Role of Clearinghouse Associations.” Journal of Money, Credit and Banking 16 (February 1984): 1-15.

Citation: Moen, Jon. “Panic of 1907.” EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001.

Money in the American Colonies

Ron Michener, University of Virginia

“There certainly can’t be a greater Grievance to a Traveller, from one Colony to another, than the different values their Paper Money bears.” An English visitor, circa 1742 (Kimber, 1998, p. 52).

The monetary arrangements in use in America before the Revolution were extremely varied. Each colony had its own conventions, tender laws, and coin ratings, and each issued its own paper money. The monetary system within each colony evolved over time, sometimes dramatically, as when Massachusetts abolished the use of paper money within her borders in 1750 and returned to a specie standard. Any encyclopedia-length overview of the subject will, unavoidably, need to generalize, and few generalizations about the colonial monetary system are immune to criticism because counterexamples can usually be found somewhere in the historical record. Those readers who find their interest piqued by this article would be well advised to continue their study of the subject by consulting the more detailed discussions available in Brock (1956, 1975, 1992), Ernst (1973), and McCusker (1978).

Units of Account

In the colonial era the unit of account and the medium of exchange were distinct in ways that now seem strange. An example from modern times suggests how the ancient system worked. Nowadays race horses are auctioned in England using guineas as the unit of account, although the guinea coin has long since disappeared. It is understood by all who participate in these auctions that payment is made according to the rule that one guinea equals 21s. Guineas are the unit of account, but the medium of exchange accepted in payment is something else entirely. The unit of account and medium of exchange were similarly disconnected in colonial times (Adler, 1900).

The units of account in colonial times were pounds, shillings, and pence (£1 = 20s., 1s. = 12d.).1 These pounds, shillings, and pence, however, were local units, such as New York money, Pennsylvania money, Massachusetts money, or South Carolina money and should not be confused with sterling. To do so is comparable to treating modern Canadian dollars and American dollars as interchangeable simply because they are both called “dollars.” All the local currencies were less valuable than sterling.2 A Spanish piece of eight, for instance, was worth 4 s. 6 d. sterling at the British mint. The same piece of eight, on the eve of the Revolution, would have been treated as 6 s. in New England, as 8 s. in New York, as 7 s. 6 d. in Philadelphia, and as 32 s. 6 d. in Charleston (McCusker, 1978).
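Given the coin ratings just quoted, the implied conversion between sterling and each local money of account can be sketched as a short calculation (illustrative values only; actual market exchange rates fluctuated around these par ratings):

```python
# Illustrative conversion from sterling into colonial local currencies,
# using the ratings of the Spanish piece of eight quoted above.
# The coin was rated 4s. 6d. (54d.) sterling, so each colony's ratio to
# sterling is its local rating in pence divided by 54d.

STERLING_RATING_PENCE = 4 * 12 + 6  # 4s. 6d. = 54d. sterling

local_rating_pence = {
    "New England":   6 * 12,        # 6s.     =  72d.
    "New York":      8 * 12,        # 8s.     =  96d.
    "Philadelphia":  7 * 12 + 6,    # 7s. 6d. =  90d.
    "Charleston":   32 * 12 + 6,    # 32s. 6d. = 390d.
}

def sterling_to_local(pounds_sterling, place):
    """Value of a sterling sum expressed in a colony's money of account."""
    return pounds_sterling * local_rating_pence[place] / STERLING_RATING_PENCE

for place in local_rating_pence:
    print(f"£100 sterling = £{sterling_to_local(100, place):.2f} {place} money")
```

So £100 sterling came to roughly £133 in New England money but over £722 in South Carolina money, which is why colonial prices are meaningless unless the unit of account is identified.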

Colonists assigned local currency values, in these pounds, shillings, and pence, to the foreign specie coins circulating there. The same foreign specie coins (most notably the Spanish dollar) remained legal tender in the United States in the first half of the nineteenth century and constituted a considerable portion of the circulating specie (Andrews, 1904, pp. 327-28; Michener and Wright, 2005, p. 695). Because the decimal divisions of the dollar so familiar to us today were a newfangled innovation in the early Republic, and because the same coins continued to circulate, the traditional units of account were only gradually abandoned. Lucius Elmer, in his account of the early settlement of Cumberland County, New Jersey, describes how “Accounts were generally kept in this State in pounds, shillings, and pence, of the 7 s. 6 d. standard, until after 1799, in which year a law was passed requiring all accounts to be kept in dollars or units, dimes or tenths, cents or hundredths, and mills or thousandths. For several years, however, aged persons inquiring the price of an article in West Jersey or Philadelphia, required to [be] told the value in shillings and pence, they not being able to keep in mind the newly-created cents or their relative value . . . So lately as 1820 some traders and tavern keepers in East Jersey kept their accounts in [New] York currency.”3 About 1820, John Quincy Adams (1822) surveyed the progress that had been made in familiarizing the public with the new units:

“It is now nearly thirty years since our new monies of account, our coins, and our mint, have been established. The dollar, under its new stamp, has preserved its name and circulation. The cent has become tolerably familiarized to the tongue, wherever it has been made by circulation familiar to the hand. But the dime having been seldom, and the mille never presented in their material images to the people, have remained . . . utterly unknown. . . . Even now, at the end of thirty years, ask a tradesman, or shopkeeper, in any of our cities, what is a dime or mille, and the chances are four in five that he will not understand your question. But go to New York and offer in payment the Spanish coin, the unit of the Spanish piece of eight [one reale], and the shop or market-man will take it for a shilling. Carry it to Boston or Richmond, and you shall be told it is not a shilling, but nine pence. Bring it to Philadelphia, Baltimore, or the City of Washington, and you shall find it recognized for an eleven-penny bit; and if you ask how that can be, you shall learn that, the dollar being of ninety-pence, the eighth part of it is nearer to eleven than to any other number . . .4 And thus we have English denominations most absurdly and diversely applied to Spanish coins; while our own lawfully established dime and mille remain, to the great mass of the people, among the hidden mysteries of political economy – state secrets.”5
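The arithmetic behind Adams’s observation is simple once the local ratings of the dollar are known. A sketch, using the ratings quoted earlier in this article:

```python
# Illustrative arithmetic behind Adams's observation: the Spanish real was
# one-eighth of the dollar, and the dollar was rated at a different number
# of local pence in each city, so the same coin acquired different nicknames.

dollar_rating_pence = {
    "New York": 8 * 12,         # dollar = 8s.  =  96d., so 1/8 = 12d., a "shilling"
    "Boston": 6 * 12,           # dollar = 6s.  =  72d., so 1/8 =  9d., "nine pence"
    "Philadelphia": 7 * 12 + 6, # dollar = 90d., so 1/8 = 11.25d., an "elevenpenny bit"
}

for city, pence in dollar_rating_pence.items():
    print(f"{city}: one real = {pence / 8} local pence")
```

The same physical coin was thus a shilling in New York, ninepence in Boston, and an elevenpenny bit in Philadelphia, exactly as Adams complained.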

It took many more decades for the colonial unit of account to disappear completely. Elmer’s account (Elmer, 1869, p. 137) reported that “Even now, in New York, and in East Jersey, where the eighth of a dollar, so long the common coin in use, corresponded with the shilling of account, it is common to state the price of articles, not above two or three dollars, in shillings, as for instance, ten shillings rather than a dollar and a quarter.”

Not only were the unit of account and medium of exchange disconnected in an unfamiliar manner, but terms such as money and currency did not mean precisely the same thing in colonial times that they do today. In colonial times, “money” and “currency” were practically synonymous and signified whatever was conventionally used as a medium of exchange. The word “currency” today refers narrowly to paper money, but that wasn’t so in colonial times. “The Word, Currency,” Hugh Vance wrote in 1740, “is in common Use in the Plantations . . . and signifies Silver passing current either by Weight or Tale. The same Name is also applicable as well to Tobacco in Virginia, Sugars in the West Indies &c. Every thing at the Market-Rate may be called a Currency; more especially that most general Commodity, for which Contracts are usually made. And according to that Rule, Paper-Currency must signify certain Pieces of Paper, passing current in the Market as Money” (Vance, 1740, CCR III, pp. 396, 431).

Failure to appreciate that the unit of account and medium of exchange were quite distinct in colonial times, and that a familiar term like “currency” had a subtly different meaning, can lead unsuspecting historians astray. They often assume that a phrase such as “£100 New York money” or “£100 New York currency” necessarily refers to £100 of the bills of credit issued by New York. In fact, it simply means £100 of whatever was accepted as money in New York, according to the valuations prevailing in New York.6 Such subtle misunderstandings have led some historians to overestimate the ubiquity of paper money in colonial America.

Means of Payment – Book Credit

While simple “cash-and-carry” transactions sometimes occurred, most purchases involved at least short-term book credit; Henry Laurens wrote that before the Revolution it had been “the practice to give credit for one and more years for 7/8th of the whole traffic” (Burnet, 1923, vol. 2, pp. 490-1). The buyer would receive goods and be debited on the seller’s books for an agreed amount in the local money of account. The debt would be extinguished when the buyer paid the seller either in the local medium of exchange or in equally valued goods or services acceptable to the seller. When it was mutually agreeable the debt could be and often was paid in ways that nowadays seem very unorthodox – with the delivery of chickens, or a week’s work fixing fences on land owned by the seller. The debt might be paid at one remove, by the buyer fixing fences on land owned by someone to whom the seller was himself indebted. Accounts would then be settled among the individuals involved. Account books testify to the pervasiveness of this system, termed “bookkeeping barter” by Baxter. Baxter examined the accounts of John Hancock and his father Thomas Hancock, both prominent Boston merchants, whose business dealings naturally involved an atypically large amount of cash. Even these gentlemen managed most of their transactions in such a way that no cash ever changed hands (Baxter, 1965; Plummer, 1942; Soltow, 1965, pp. 124-55; Forman, 1969).

An astonishing array of goods and services therefore served, by mutual consent at some time or other, to extinguish debts. Whether these goods ought all to be classified as “money” is doubtful; they certainly lacked the liquidity and universal acceptability in exchange that ordinarily define money. At certain times and in certain colonies, however, specific commodities came to be so widely used in transactions that they might appropriately be termed money. Specie, of course, was such a commodity, but its worldwide acceptance as money made it special, so it is convenient to set it aside for a moment and focus on the others.

Means of Payment – Commodity Money

At various times and places in the colonies such items as tobacco, rice, sugar, beaver skins, wampum, and country pay all served as money. These items were generally accorded a special monetary status by various acts of colonial legislatures. Whether the legislative fiat was essential in monetizing these commodities or whether it simply acknowledged the existing state of affairs is open to question. Sugar was used in the British Caribbean, tobacco was used in the Chesapeake, and rice in South Carolina, each being the central product of their respective plantation economies. Wampum signifies the stringed shells used by the Indians as money before the arrival of European settlers. Wampum and beaver skins were commonly used as money in the northern colonies in the early stages of settlement when the fur trade and Indian trade were still mainstays of the local economy (Nettels, 1928, 1934; Fernow, 1893; Massey, 1976; Brock, 1975, pp. 9-18).

Country pay is more complicated. Where it was used, country pay consisted of a hodgepodge of locally produced agricultural commodities that had been monetized by the colonial legislature. Commodities such as Indian corn, beef, and pork were assigned specific monetary values (so many s. per bushel or barrel), and debtors were permitted by statute to pay certain debts with their choice of these commodities at nominal values set by the colonial legislature.7 In some instances country pay was declared a legal tender for all private debts, although contracts explicitly requiring another form of payment might be exempted (Gottfried, 1936; Judd, 1905, pp. 94-96). Sometimes country pay was a legal tender only in payment of obligations to the colonial or town governments. Even where country pay was a legal tender only in payment of taxes, it was often used in private transactions and even served as a unit of account. Probate inventories from colonial Connecticut, where country pay was widely used, are generally denominated in country pay (Main and Main, 1988).8

There were predictable difficulties where commodity money was used. A pound in “country pay” was simply not worth a pound in cash even as that cash was valued locally. The legislature sometimes overvalued agricultural commodities in setting their nominal prices. Even when the legislature’s prices were not biased in favor of debtors, the debtor still had the power to select the particular commodity tendered and had some discretion over the quality of that commodity. In late seventeenth-century Massachusetts the rule of thumb used to convert country pay to cash was that three pounds in country pay were worth two pounds cash (Republicæ, 1731, pp. 376, 390).9 Even this formula seems to have overvalued country pay. When a group of men seeking to rent a farm in Connecticut offered Boston merchant Thomas Bannister £22 of country pay in 1700, Bannister hesitated. It appears Bannister wanted to be paid £15 per annum in cash. Country pay was “a very uncertain thing,” he wrote. Some years £22 in country pay might be worth £10, some years £12, but he did not expect to see a day when it would fetch fifteen.10 Savvy merchants such as Bannister paid careful attention to the terms of payment. An unwary trader could easily be cheated. Just such an incident occurs in the comic satirical poem “The Sot-Weed Factor.” Sot-weed is slang for tobacco, and a factor was a person in America representing a British merchant. Set in late seventeenth-century Maryland, the poem is a first-person account of the tribulations and humiliations a newly-arrived Briton suffers while seeking to enter the tobacco trade. The Briton agrees with a Quaker merchant to exchange his trade goods for ten thousand weight of oronoco tobacco in cask and ready to ship.
When the Quaker fails to deliver any tobacco, the aggrieved factor sues him at the Annapolis court, only to discover that his attorney is a quack who divides his time between pretending to be a lawyer and pretending to be a doctor and that the judges have to be called away from their Punch and Rum at the tavern to hear his case. The verdict?

The Byast Court without delay,
Adjudg’d my Debt in Country Pay:
In Pipe staves, Corn, or Flesh of Boar,
Rare Cargo for the English Shoar.

Thus ruined the poor factor sails away never to return. A footnote to the reader explains “There is a Law in this Country, the Plaintiff may pay his Debt in Country pay, which consists in the produce of the Plantation” (Cooke, 1708).
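The Massachusetts rule of thumb quoted earlier, three pounds in country pay to two pounds cash, amounts to a one-third discount; a quick illustrative calculation shows why Bannister balked at £22 in country pay when he wanted £15 in cash:

```python
# Illustrative check of the country-pay discount discussed above: the
# Massachusetts rule of thumb rated three pounds in country pay at two
# pounds cash, i.e. a cash value of 2/3 of the nominal country-pay sum.

def country_pay_to_cash(country_pay_pounds, ratio=2 / 3):
    """Cash value of a country-pay sum under the 3-to-2 rule of thumb."""
    return country_pay_pounds * ratio

offer = country_pay_to_cash(22)  # the £22 of country pay offered to Bannister
print(round(offer, 2))           # about £14.67, short of the £15 cash he wanted
```

Even by this generous rule the offer fell short of £15, and Bannister expected the real cash value to be lower still (£10 to £12), which is consistent with his complaint that the rule of thumb overvalued country pay.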

By the middle of the eighteenth century, commodity money had essentially disappeared from the northern port cities, but it still lingered in the hinterlands and plantation colonies. A pamphlet written in Boston in 1740 observed, “Look into our British Plantations, and you’ll see [commodity] Money still in Use, As, Tobacco in Virginia, Rice in South Carolina, and Sugars in the Islands; they are the chief Commodities, used as the general Money, Contracts are made for them, Salaries and Fees of Office are paid in them, and sometimes they are made a lawful Tender at a yearly assigned Rate by publick Authority, even when Silver was promised” (Vance, 1740, CCR III, p. 396). North Carolina was an extreme case. Country pay there continued as a legal tender even in private debts. The system was amended in 1754 and 1764 to require rated commodities to be delivered to government warehouses and judged of acceptable quality, at which point warehouse certificates were issued to the value of the goods (at mandated, not market, prices); these certificates were a legal tender (Bullock, 1969, pp. 126-7, 157).

Means of Payment – Bills of Credit

Cash came in two forms: full-bodied specie coins (usually Spanish or Portuguese) and paper money known as “bills of credit.” Bills of credit were notes issued by provincial governments that were similar in many ways to modern paper money: they were issued in convenient denominations, were often a legal tender in the payment of debts, and routinely passed from man to man in transactions.11 Bills of credit were ordinarily put into circulation in one of two ways. The most common method was for the colony to issue bills to pay its debts. Bills of credit were originally designed as a kind of tax-anticipation scrip, similar to that used by many localities in the United States during the Great Depression (Harper, 1948). Therefore, when bills of credit were issued to pay for current expenditures, a colony would ordinarily levy taxes over the next several years sufficient to call the bills in so they might be destroyed.12 A second method was for the colony to lend newly printed bills on land security at attractive interest rates. The agency established to make these loans was known as a “land bank” (Thayer, 1953).13 Bills of credit were denominated in the £, s., and d. of the colony of issue, and therefore were usually the only form of money in circulation that was actually denominated in the local unit of account.14

Sometimes even the bills of credit issued in a colony were not denominated in the local unit of account. In 1764 Maryland redeemed its Maryland-pound-denominated bills of credit and in 1767 issued new dollar-denominated bills of credit. Nonetheless Maryland pounds, not dollars, remained the predominant unit of account in Maryland up to the Revolution (Michener and Wright, 2006a, p. 34; Grubb, 2006a, pp. 66-67; Michener and Wright, 2006c, p. 264). The most striking example occurred in New England. Massachusetts, Connecticut, New Hampshire, and Rhode Island had all, long before the 1730s, emitted paper money known as “old tenor” bills of credit, and “old tenor” had become the most commonly used unit of account in New England. The old tenor bills of all four colonies passed interchangeably and at par with one another throughout New England.

Beginning in 1737, Massachusetts introduced a new kind of paper money known as “new tenor.” New tenor can be thought of as a monetary reform that ultimately failed to address the underlying problems. It also served as a way of evading a restriction the Board of Trade had placed on the Governor of Massachusetts that limited him to emissions of not more than £30,000. The Massachusetts assembly declared each pound of the new tenor bills to be worth £3 in old tenor bills. What actually happened was that old tenor (abbreviated in records of the time as “O.T.”) continued to be the unit of account in New England and, so long as the old bills continued to circulate, a decreasing portion of the medium of exchange. Each new tenor bill was reckoned at three times its face value in old tenor terms. This was just the beginning of the confusion, for yet newer Massachusetts “new tenor” emissions were created, and the original “new tenor” emission became known as the “middle tenor.”15 The new “new tenor” bills emitted by Massachusetts were accounted in old tenor terms at four times their face value. These bills, like the old ones, circulated across colony borders throughout New England. As if this were not complicated enough, New Hampshire, Rhode Island, and Connecticut all created new tenor emissions of their own, and the factors used to convert these new tenor bills into old tenor terms varied across colonies (Davis, 1970; Brock, 1975; McCusker, 1978, pp. 131-137). Connecticut, for instance, had a new tenor emission such that each new tenor bill was worth 3½ times its face value in old tenor (Connecticut, vol. 8, pp. 359-60; Brock, 1975, pp. 45-6).
“They have a variety of paper currencies in the [New England] provinces; viz., that of New Hampshire, the Massachusetts, Rhode Island, and Connecticut,” bemoaned an English visitor, “all of different value, divided and subdivided into old and new tenors, so that it is a science to know the nature and value of their moneys, and what will cost a stranger some study and application” (Hamilton, 1907, p. 179). Throughout New England, however, Old Tenor remained the unit of account. “The Price of [provisions sold at Market],” a contemporary pamphlet noted, “has been constantly computed in Bills of the old Tenor, ever since the Emission of the middle and new Tenor Bills, just as it was before their Emission, and with no more Regard to or Consideration of either the middle or new Tenor Bills, than if they had never been emitted” (Enquiry, 1744, CCR IV, p. 174). This occurred despite the fact that by 1750 only an inconsiderable portion of the bills of credit in circulation were denominated in old tenor.16
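The tenor conversions described above amount to a small table of multipliers into old tenor, the common unit of account; an illustrative sketch, using the factors given in the text (which varied by colony and emission):

```python
# Illustrative conversion of New England bills into old tenor terms, using
# the multipliers given above: Massachusetts middle tenor counted at 3x
# face value, the later Massachusetts new tenor at 4x, and Connecticut
# new tenor at 3.5x. Factors varied across colonies and emissions.

OLD_TENOR_FACTOR = {
    "old tenor": 1.0,
    "Massachusetts middle tenor": 3.0,
    "Massachusetts new tenor": 4.0,
    "Connecticut new tenor": 3.5,
}

def to_old_tenor(face_value, emission):
    """Value of a bill in old tenor, the unit of account used in New England."""
    return face_value * OLD_TENOR_FACTOR[emission]

print(to_old_tenor(10, "Massachusetts new tenor"))  # a £10 face-value bill = £40 old tenor
```

A merchant pricing goods in old tenor thus had to apply a different multiplier to each kind of bill tendered, which is exactly the “science” the English visitor complained of.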

For the most part, bills of credit were fiat money. Although a colony’s treasurer would often consent to exchange these bills for other forms of cash in the treasury, there was rarely a provision in the law stating that holders of bills of credit had a legally binding claim on the government for a fixed sum in specie, and treasurers were sometimes unable to accommodate people who wished to exchange money (Nicholas, 1912, p. 257; The New York Mercury, January 27, 1759, November 24, 1760).17 The form of the bills themselves was sometimes misleading in this respect. It was not uncommon for the bills to be inscribed with an explicit statement that the bill was worth a certain sum in silver. This was often no more than an expression of the assembly’s hope, at the time of issuance, of how the bills would circulate.18 Colonial courts sometimes allowed inhabitants to pay less to royal officials and proprietors by valuing bills of credit used to pay fees, dues, and quit rents according to their “official” rather than actual specie values. (Michener and Wright, 2006c, p. 258, fn. 5; Hart, 2005, pp. 269-71).

Maryland’s paper money was unique: unlike that of the other colonies, it gave the possessor an explicit legal claim on a valuable asset. Maryland had levied a tax and invested the proceeds of the tax in London. It issued bills of credit promising a fixed sum in sterling bills of exchange at predetermined dates, to be drawn on the colony’s balance in London. The colony’s accrued balances in London were adequate to fund the redemption, and when the redemption dates arrived in 1748 and 1764 the sums due were paid in full, so the colony’s pledge was considered credible.

Maryland’s paper money was unique in other ways as well. Its first emission was put into circulation in a novel fashion. Of the £90,000 emitted in 1733, £42,000 was lent to inhabitants, while the other £48,000 was simply given away, at the rate of £1.5 per taxable (McCusker, 1978, pp. 190-196; Brock, 1975, chapter 8; Lester, 1970, chapter 5). Maryland’s paper money was so peculiar that it is unrepresentative of the colonial experience. This was recognized even by contemporaries. Hugh Vance, in the Postscript to his Inquiry into the Nature and Uses of Money, dismissed Maryland as “intirely out of the Question; their Bills being on the Foot of promissory Notes” (Vance, 1740, CCR III, p. 462).

In 1690, Massachusetts was the first colony to issue bills of credit (Felt, 1839, pp. 49-52; Davis, 1970, vol. 1, chapter 1; Goldberg, 2009).19 The bills were issued to pay soldiers returning from a failed military expedition against Quebec. Over time, the rest of the colonies followed suit. The last holdout was Virginia, which issued its first bills of credit in 1755 to defray expenses associated with its entry into the French and Indian War (Brock, 1975, chapter 9). The common denominator here is wartime finance, and it is worthwhile to recognize that the vast majority of the bills of credit issued in the colonies were issued during wartime to pay for pressing military expenditures. Peacetime issues did occur and are in some respects quite interesting as they seem to have been motivated in part by a desire to stimulate the economy (Lester, 1970). However, peacetime emissions are dwarfed by those that occurred in war.20 Some historians enamored of the land bank system, whereby newly emitted bills were lent to landowners in order to promote economic development, have stressed the economic development aspect of colonial emissions – particularly those of Pennsylvania – while minimizing the military finance aspect (Schweitzer, 1989, pp. 313-4). The following graph, however, illustrates the fundamental importance of war finance; the dramatic spike marks the French and Indian War (Brock, 1992, Tables 4, 6).



That bills in circulation peaked in 1760 reflects the fact that Quebec fell in 1759 and Montreal in 1760, so that the land war in North America was effectively over by 1760.

Because bills were disproportionately emitted for wartime finance, it is not surprising that the colonies whose currencies depreciated due to over-issue were those that shared a border with a hostile neighbor – the New England colonies bordering French Canada and the Carolinas bordering Spanish Florida.21 The colonies from New York to Virginia were buffered by their neighbors and therefore issued no more than modest amounts of paper money until they were drawn into the French and Indian War, by which time their economies were large enough to temporarily absorb the issues.

It is important not to confuse the bills of credit issued by a colony with the bills of credit circulating in that colony. “Under the circumstances of America before the war,” a Maryland resident wrote in 1787, “there was a mutual tacit consent that the paper of each colony should be received by its neighbours” (Hanson, 1787, p. 24).22 Between 1710 and 1750, the currencies of Massachusetts, Connecticut, New Hampshire, and Rhode Island passed indiscriminately and at par with one another in everyday transactions throughout New England (Brock, 1975, pp. 35-6). Although not quite so integrated a currency area as New England, the colonies of New York, Pennsylvania, New Jersey, and Delaware each had bills of credit circulating within their neighbors’ borders (McCusker, 1978, pp. 169-70, 181-182). In the early 1760s, Pennsylvania money was the primary medium of exchange in Maryland (Maryland Gazette, September 15, 1763; Hazard, 1852, Eighth Series, vol. VII, p. 5826; McCusker, 1978, p. 193). In 1764 one quarter of South Carolina’s bills of credit circulated in North Carolina and Georgia (Ernst, 1973, p. 106). Where the currencies of neighboring colonies were of equal value, as was the case in New England between 1710 and 1750, bills of credit of neighboring colonies could be credited and debited in book accounts at face value. When this was not the case, as when Pennsylvania, Connecticut, or New Jersey bills of credit were used to pay a debt in New York, an adjustment had to be made to convert these sums to New York money. The conversion was usually based on the par values assigned to Spanish dollars by each colony. Indeed, this was also how merchants generally handled intercolonial exchange transactions (McCusker, 1978, p. 123). For example, on the eve of the Revolution a Spanish dollar was rated at 7 s. 6 d. in Pennsylvania money and at 8 s. in New York money.
The ratio of eight to seven and a half being equal to 1.06666, Pennsylvania bills of credit were accepted in New York at a 6 and 2/3% advance (Stevens, 1867, pp. 10-11, 18). Connecticut rated the Spanish dollar at 6 s., and because the ratio of eight to six is 1.333, Connecticut bills of credit were accepted at a one third advance in New York (New York Journal, July 13, 1775). New Jersey’s paper money was a peculiar exception to this rule. By the custom of New York’s merchants, New Jersey bills of credit were accepted for thirty years or more at an advance of one pence in the shilling, or 8 and 1/3%, even though New Jersey rated the Spanish dollar at 7 s. 6 d., just as Pennsylvania did. The practice was controversial in New York, and the advance was finally reduced to the “logical” 6 and 2/3% advance by an act of the New York assembly in 1774.23
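The conversion rule just described is mechanical enough to state as code. A sketch, assuming only the Spanish dollar ratings quoted in the text (the function name is illustrative):

```python
# Each colony's rating of the Spanish dollar, in pence (12 d. = 1 s.).
DOLLAR_RATING_PENCE = {
    "New York": 8 * 12,          # 8 s.
    "Pennsylvania": 7 * 12 + 6,  # 7 s. 6 d.
    "Connecticut": 6 * 12,       # 6 s.
}

def advance_percent(paying, receiving):
    """Percent advance when `paying` colony's bills settle a debt in `receiving`."""
    ratio = DOLLAR_RATING_PENCE[receiving] / DOLLAR_RATING_PENCE[paying]
    return (ratio - 1) * 100

# Pennsylvania money in New York: 96/90, a 6 2/3 percent advance.
# Connecticut money in New York: 96/72, a one-third advance.
```

New Jersey’s customary 8 and 1/3% advance in New York was precisely the departure from this rule that the 1774 act corrected.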

Means of Payment – Foreign Specie Coins

Specie coins were the other kind of cash that commonly circulated in the colonies. Few specie coins were minted in the colonies. Massachusetts coined silver “pine tree shillings” between 1652 and the closing of the mint in the early 1680s. This was the only mint of any size or duration in the colonies, although minting of small copper coins and tokens did occur at a number of locations (Jordan, 2002; Mossman, 1993). Colonial coinage is interesting numismatically, but economically it was too slight to be of consequence. Most circulating specie was minted abroad. The gold and silver coins circulating in the colonies were generally of Spanish or Portuguese origin. Among the most important of these coins were the Portuguese Johannes and moidore (more formally, the moeda d’ouro) and the Spanish dollar and pistole. The Johanneses were gold coins, 8 escudos (12,800 reis) in denomination; their name derived from the obverse of the coin, which bore the bust of Johannes V. Minted in Portugal and Brazil they were commonly known in the colonies as “joes.” The fractional denominations were 4 escudo and 2 escudo coins of the same origin. The 4 escudo (6,400 reis) coin, or “half joe,” was one of the most commonly used coins in the late colonial period. The moidore was another Portuguese gold coin, 4,000 reis in denomination. That these coins were being used as a medium of exchange in the colonies is not so peculiar as it might appear. Raphael Solomon (1976, p. 37) noted that these coins “played a very active part in international commerce, flowing in and out of the major seaports in both the Eastern and Western Hemispheres.” In the late colonial period the mid-Atlantic colonies began selling wheat and flour to Spain and Portugal “for which in return, they get hard cash” (Lydon, 1965; Virginia Gazette, January 12, 1769; Brodhead, 1853, vol. 8, p. 448).

The Spanish dollar and its fractional parts were, in McCusker’s (1978, p. 7) words, “the premier coin of the Atlantic world in the seventeenth and eighteenth centuries.” Well known and widely circulated throughout the world, its preeminence in colonial North America accounts for the fact that the United States uses dollars, rather than pounds, as its unit of account. The Spanish pistole was the Spanish gold coin most often encountered in America. While these coins were the most common, many others also circulated there (Solomon, 1976; McCusker, 1978, pp. 3-12).

Alongside the well-known gold and silver coins were various copper coins, most notably the English half-pence, that served as small change in the colonies. Fractional parts of the Spanish dollar and the pistareen, a small silver coin of base alloy, were also commonly used as change.24

None of these foreign specie coins were denominated in local currency units, however. One needed a rule to determine what a particular coin, such as a Spanish dollar, was worth in the £, s., and d. of local currency. Because foreign specie coins were in circulation long before any of the colonies issued paper money, setting a rating on these coins amounted to picking a numeraire for the economy; that is, it defined what one meant by a pound of local currency. The ratings attached to individual coins were not haphazard: They were designed to reflect the relative weight and purity of the bullion in each coin as well as the ratio of gold to silver prices prevailing in the wider world.

In the early years of colonization these coin values were set by the colonial assemblies (Nettels, 1934, chap. 9; Solomon, 1976, pp. 28-29; John Hemphill, 1964, chapter 3). In 1700 Pennsylvania passed an act raising the rated value of its coins, causing the Governor of Maryland to complain to the Board of Trade of the difficulties this created in Maryland. He sought the Board’s permission for Maryland to follow suit. When the Board investigated the matter it concluded that the “liberty taken in many of your Majesty’s Plantations, to alter the rates of their coins as often as they think fit, does encourage an indirect practice of drawing the money from one Plantation to another, to the undermining of each other’s trade.” In response they arranged for the disallowance of the Pennsylvania act and a royal proclamation to put an end to the practice.25

Queen Anne’s proclamation, issued in 1704, prohibited a Spanish dollar of 17½ dwt. from passing for more than 6 s. in the colonies. Other current foreign silver coins were rated proportionately and similarly prohibited from circulating at a higher value. This particular rating of coins became known as “proclamation money.”26 It might seem peculiar that the proclamation did not dictate that the colonies adopt the same ratings as prevailed in England. The Privy Council, however, had incautiously approved a Massachusetts act passed in 1697 rating Spanish dollars at 6 s., and attorney general Edward Northey felt the act could not be nullified by proclamation. This induced the Board of Trade to adopt the rating of the Massachusetts act.27

Had the proclamation been put into operation its effects would have been extremely deflationary because in most colonies coins were already passing at higher rates. When the proclamation reached America only Barbados attempted to enforce it. In New York Governor Lord Cornbury suspended its operation and wrote the Board of Trade that he could not enforce it in New York while it was being ignored in neighboring colonies as New York would be “ruined beyond recovery” if he did so (Brodhead, 1853, vol. 4, pp. 1131-1133; Brock, 1975, chapter 4). A chorus of such responses led the Board of Trade to take the matter to Parliament in hopes of enforcing a uniform compliance throughout America (House of Lords, 1921, pp. 302-3). On April 1, 1708, Parliament passed “An Act for ascertaining the Rates of foreign Coins in her Majesty’s Plantations in America” (Ruffhead, vol. 4, pp. 324-5). The act reiterated the restrictions embodied in Queen Anne’s Proclamation, and declared that anyone “accounting, receiving, taking, or paying the same contrary to the Directions therein contained, shall suffer six Months Imprisonment . . . and shall likewise forfeit the Sum of ten Pounds for every such Offence . . .”

The “Act for ascertaining the Rates of foreign Coins” never achieved its desired aim. In the colonies it was largely ignored, and business continued to be conducted just as if the act had never been passed. Pennsylvania, it was true, went through a show of complying, but even that lapsed after a while (Brock, 1975, chapter 4). What the act did do, however, was push the process of coin rating into the shadows, because it was no longer possible to address it in an open way by legislative enactment. Laws that passed through colonial legislatures (certain charter and proprietary colonies excepted) were routinely reviewed by the Privy Council, and if found to be inconsistent with British law, were declared null and void.

Two avenues remained open to alter coin ratings – private agreements among merchants that would not be subject to review in London, and a legislative enactment so stealthy as to slip through review unnoticed. New York was the first to succeed using stealth. In November 1709 it emitted bills of credit “for Tenn thousand Ounces of Plate or fourteen Thousand Five hundred & fourty five Lyon Dollars” (Lincoln, 1894, vol. 1, chap. 207, pp. 695-7). The Lyon dollar was an obscure silver coin that had escaped being explicitly mentioned in the enumeration of allowable values that had accompanied Queen Anne’s proclamation. New York had rated the Lyon dollar at 5 s. 6 d. fifteen years earlier, and it was generally supposed that that rating was still in force (Solomon, 1976, p. 30). The value of silver implied in the law’s title is 8 s. an ounce – a value higher than allowed by Parliament. Until 1723, New York’s emission acts contained clauses designed to rate an ounce of silver at 8 s. The act in 1714, for instance, tediously enumerated the denominations of the bills to be printed, in language such as “Five Hundred Sixty-eight Bills, of Twenty-five Ounces of Plate, or Ten Pounds value each” (Lincoln, 1894, vol. 1, chap. 280, p. 819). When the Board of Trade finally realized what New York was up to, it was too late: the earlier laws had already been confirmed. When the Board wrote Governor Hunter to complain, he replied, in part, “Tis not in the power of men or angels to beat the people of this Continent out of a silly notion of their being gainers by the Augmentation of the value of Plate” (Brodhead, vol. 5, p. 476). These colony laws were still thought to be in force in the late colonial period. Gaine’s New York Pocket Almanack for 1760 states that “Spanish Silver . . . here ‘tis fixed by Law at 8 s. per Ounce, but is often sold and bought from 9 s. to 9 s. and 3 d.”
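The arithmetic buried in the 1709 act’s title can be checked directly. A sketch using only the figures quoted above (variable names are illustrative):

```python
# The 1709 act equated 10,000 ounces of plate with 14,545 Lyon dollars,
# each Lyon dollar passing at its supposed old rating of 5 s. 6 d. (66 d.).
ounces = 10_000
lyon_dollars = 14_545
lyon_rating_pence = 5 * 12 + 6  # 66 d.

pence_per_ounce = lyon_dollars * lyon_rating_pence / ounces
shillings_per_ounce = pence_per_ounce / 12
# Just a hair under 8 s. an ounce -- above the ceiling Parliament allowed.
```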

In 1753 Maryland also succeeded using stealth, including revised coin ratings inconsistent with Queen Anne’s proclamation in “An Act for Amending the Staple of Tobacco, for Preventing Fraud in His Majesty’s Customs, and for the Limitation of Officer’s Fees” (McCusker, 1978, p. 192).

The most common subterfuge was for a colony’s merchants to meet and agree on coin ratings. Once the merchants agreed on such ratings, the colonial courts appear to have deferred to them, which is not surprising in light of the fact that many judges and legislators were drawn from the merchants’ ranks (e.g. Horle, 1991). These private agreements effectively nullified not only the act of Parliament but also local statutes, such as those rating silver in New York at 8 s. an ounce. Records of many such agreements have survived.28 There is also testimony that these agreements were commonplace. Lewis Morris remarked that “It is a common practice … [for] the merchants to put what value they think fit upon Gold and Silver coynes current in the Plantations.” When the Philadelphia merchants published a notice in the Pennsylvania Gazette of September 16, 1742 enumerating the values they had agreed to put on foreign gold and silver coins, only the brazenness of the act came as a surprise to Morris. “Tho’ I believe by the merchants private Agreements amongst themselves they have allwaies done the same thing since the Existence of A paper currency, yet I do not remember so publick an instance of defying an act of parliament” (Morris, 1993, vol. 3, pp. 260-262, 273). These agreements, when backed by a strong consensus among merchants, seem to have been effective. Decades later, Benjamin Franklin (1959, vol. 14, p. 232) recollected how the agreement that had offended Morris “had a great Effect in fixing the Value and Rates of our Gold and Silver.”

After the New York Chamber of Commerce was founded in 1768, merchant deliberations on these agreements were recorded. During this period, the coin ratings in effect in New York were routinely published in almanacs, particularly Gaine’s New-York pocket almanac. When the New York Chamber of Commerce resolved to change the rating of coins and the minimum allowable weight for guineas, the almanac values changed immediately to reflect those adopted by the Chamber (Stevens, 1867, pp. 56-7, 69).29


The coin rating table above, reproduced from The New-York Pocket Almanack for the Year 1771, shows how coin-rating worked in practice in the late colonial period. (Note the reference to the deliberations of the Chamber of Commerce.) It shows, for instance, that if you tendered a half joe in payment of a debt in Pennsylvania, you would be credited with having paid £3 Pennsylvania money. If the same half joe were tendered in payment of a debt in New York you would be credited with having paid £3 4 s. New York money. In Connecticut it would have been £2 8 s. Connecticut money.30
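The three half-joe figures are mutually consistent: each is exactly eight times the colony’s rating of the Spanish dollar. The factor of eight is inferred from the quoted values, not stated in the almanac excerpt; a sketch of the check:

```python
# Dollar ratings in pence: Pennsylvania 7 s. 6 d., New York 8 s., Connecticut 6 s.
DOLLAR_RATING_PENCE = {"Pennsylvania": 90, "New York": 96, "Connecticut": 72}

def half_joe_value(colony, dollars_per_half_joe=8):
    """Half joe value as (pounds, shillings) in the colony's local money."""
    pence = dollars_per_half_joe * DOLLAR_RATING_PENCE[colony]
    pounds, rem = divmod(pence, 240)  # 240 d. = 1 pound
    return pounds, rem // 12          # 12 d. = 1 shilling

assert half_joe_value("Pennsylvania") == (3, 0)  # £3
assert half_joe_value("New York") == (3, 4)      # £3 4 s.
assert half_joe_value("Connecticut") == (2, 8)   # £2 8 s.
```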

The colonists possessed no central bank, and colonial treasurers, however willing they might have been to exchange paper for specie, sometimes found themselves without the means to do so. That these coin ratings were successfully maintained for decades on end was a testament to the public’s faith in the bills of credit, which made people willing to exchange them voluntarily for specie at the established rates. Writing in 1786 and attempting to explain why New Jersey’s colonial bills of credit had retained their value, “Eugenio” attributed their success to the fact that they possessed what he called “the means of instant realization at value.” This awkward phrase signified that the bills were instantly convertible at par. “Eugenio” went on to explain why:

“It is true that government did not raise a sum of coin and deposit the same in the treasury to exchange the bills on demand; but the faith of the government, the opinion of the people, and the security of the fund formerly by a well-timed and steady policy, went so hand in hand and so concurred to support each other, that the people voluntarily and without the least compulsion threw all their gold and silver, not locking up a shilling, into circulation concurrently with the bills; whereby the whole coin of the government became forthwith upon an emission of paper, a bank of deposit at every man’s door for the instant realization or immediate exchange of his bill into gold or silver. This had a benign and equitable, a persuasive, a satisfactory, and an extensive influence. If any one doubted the validity or price of his bill, his neighbor immediately removed his doubts by exchanging it without loss into gold or silver. If any one for a particular purpose needed the precious metals, his bill procured them at the next door, without a moment’s delay or a penny’s diminution. So high was the opinion of the people raised, that often an advance was given for paper on account of the convenience of carriage. In the market as well as in the payment of debts, the paper and the coin possessed a voluntary, equal, and concurrent circulation, and no special contract was made which should be paid or whether they should be received at a difference. By this instant realization and immediate exchange, the government had all the gold and silver in the community as effectually in their hands as if those precious metals had all been locked up in their treasury. By this realization and exchange they could extend credit to any degree it was required. The people could not be induced to entertain a doubt of their paper, because the government had never failed them in a single instance, either in war or in peace (New Jersey Gazette, January 30, 1786).”

Insofar as colonial bills of credit were convertible on demand into specie at the rated specie value of coins, there is no mystery as to why those bills of credit maintained their value. How merchants maintained and enforced such accords, however, is harder to discern. Some economists are incredulous that private associations of merchants could accomplish the feat. The best evidence on this question can be found in a pamphlet by a disgruntled inhabitant complaining of the actions of a merchants’ association in Antigua (Anon., 1740), which provides a tantalizing glimpse of the methods merchants used.

Means of Payment – Private debt instruments

This leaves private debt instruments, such as bank notes, bills of exchange, notes of hand, and shop notes. It is sometimes asserted that there were no banks in colonial America, but this is something of an overstatement. Several experiments were made, and several embryonic private banks actually got notes into circulation. Andrew McFarland Davis devoted an entire volume to banking in colonial New England (Davis, 1970, vol. 2; Perkins, 1991). Perhaps the most successful bank of the era was established in South Carolina in 1731. It apparently issued notes totaling £50,000 South Carolina money and operated successfully for a decade.31 However, the banks that did exist did not last long enough, or put enough notes into circulation, for us to be especially concerned about them.

Bills of exchange were similar to checks. A hypothetical example will illustrate how they functioned. The process of creating a bill of exchange began when someone obtained a balance on account overseas (in the case of the colonies, that place was often London). Suppose a Virginia tobacco producer consigned his tobacco to be sold in England, with the sterling proceeds to remain temporarily in the hands of a London merchant. The Virginia planter could then draw on those funds, by writing a bill of exchange payable in London. Suppose further that the planter drew a bill of exchange on his London correspondent, and sold it to a Virginia merchant, who then transmitted it to London to pay a balance due on imported dry goods. When the bill of exchange reached London, the dry goods wholesaler who received it would call on the London merchant holding the funds in order to receive the payment specified in the bill of exchange.

Bills of exchange were widely used in foreign trade, and were the preferred and most common method for paying debts due overseas. Because of the nature of the trade they financed, bills of exchange were usually in large denominations. Also, because bills of exchange were drawn on particular people or institutions overseas, there was an element of risk involved. Perhaps the person drawing the bill was writing a bad check, or perhaps the person on whom the bill was drawn was himself a deadbeat. One needed to be confident of the reputations of the parties involved when purchasing a bill of exchange. Perhaps because of their large denominations and the asymmetric information problems involved, bills of exchange played a limited role as a medium of exchange in the inland economy (McCusker, 1978, especially pp. 20-21).

Small denomination IOUs, called “notes of hand,” were widespread, and these were typically denominated in local currency units. For the most part, these were not designed to circulate as a medium of exchange. When someone purchased goods from a shopkeeper on credit, the shopkeeper would generally get a “note of hand” as a receipt. In the court records in the Connecticut archives, one can find the case files for countless colonial-era cases where an individual was sued for nonpayment of a small debt.32 The court records generally include a note of hand entered as evidence to prove the debt. Notes of hand sometimes were proffered to third parties in payment of debt, however, particularly if the issuer was a person of acknowledged creditworthiness (Mather, 1691, p. 191). Some individuals of modest means created notes of hand in small denominations and attempted to circulate them as a medium of exchange; in Pennsylvania in 1768, a newspaper account stated that 10% of the cash offered in the retail trade consisted of such notes (Pennsylvania Chronicle, October 12, 1768; Kimber, 1998, p. 53). Indeed, many private banking schemes, such as the Massachusetts merchants’ bank, the New Hampshire merchants’ bank, the New London Society, and the Land Bank of 1740, were modeled on private notes of hand, and each consisted of an association designed to circulate such notes on a large scale. For the most part, however, notes of hand lacked the universal acceptability that would have unambiguously qualified them as money.

Shop notes were “notes of hand” of a particular type and seem to have been especially widespread in colonial New England. The twentieth-century analogue to shop notes would be scrip issued by an employer that could be used for purchases at the company store.33 Shop notes were I.O.U.s of local shopkeepers, redeemable through the shopkeeper. Such an I.O.U. might promise, for example, £6 in local currency value, half in money and half in goods (Weeden, 1891, vol. 2, p. 589; Ernst, 1990). Hugh Vance described the origins of shop notes in a 1740 pamphlet:

“… by the best Information I can have from Men of Credit then living, the Fact is truly this, viz. about the Year 1700, Silver-Money became exceedingly scarce, and the Trade so embarassed, that we begun to go into the Use of Shop-Goods, as the Money. The Shopkeepers told the Tradesmen, who had Draughts upon them from the Merchants for all Money, that they could not pay all in Money (and very truly) and so by Degrees brought the Tradesmen into the Use of taking Part in Shop-Goods; and likewise the Merchants, who must always follow the natural Course of Trade, were forced into the Way of agreeing with Tradesmen, Fishermen, and others; and also with the Shopkeepers, to draw Bills for Part and sometimes for all Shop-Goods (Vance, 1740, CCR III, pp. 390-91).”

Vance’s account seems accurate in all respects save one. Merchants played an active role in introducing shop notes into circulation. By the 1740s shop notes had been much abused, and it was disingenuous of Vance (himself a merchant) to suggest that merchants had had the system thrust upon them by shopkeepers. Merchants used shop notes to expedite sales and returns. The merchant might contact a shopkeeper and a shipbuilder. The shipbuilder would build a ship for the merchant, the ship to be sent to England and sold as a way of making returns. In exchange the merchant would provide the builder with shop notes and the shopkeeper with imported goods. The builder used the shop notes to pay his workers. The shop notes, in turn, were redeemed at the shop of the shopkeeper when presented to him by workers (Boston Weekly Postboy, December 8, 1740). Thomas Fitch tried to interest an English partner in just such a scheme in 1710:

“Realy it’s extream difficult to raise money here, for goods are generally Sold to take 1/2 money & 1/2 goods again out of the buyers Shops to pay builders of Ships [etc?] which is a great advantage in the readier if not higher sale of goods, as well as that it procures the Return; Wherefore if we sell goods to be paid in money we must give long time or they will not medle (Fitch, 1711, to Edward Warner, November 22, 1710).”

Like other substitutes for cash, shop notes were seldom worth their stated values. A 1736 pamphlet, for instance, reported wages to be 6s in bills of credit, or 7s if paid in shop notes (Anonymous, 1736, p. 143). One reason shop notes failed to remain at par with cash is that shopkeepers often refused to redeem them except with merchandise of their own choosing. Another abuse was to interpret money to mean British goods; half money, half goods often meant no money at all.34
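The wage figures just cited imply a measurable discount on shop notes relative to cash. A minimal check (variable names are illustrative):

```python
# 6 s. paid in bills of credit bought the same labor as 7 s. in shop notes,
# so a shilling in shop notes was worth 6/7 of a shilling in cash.
wage_in_bills = 6.0       # shillings
wage_in_shop_notes = 7.0  # shillings

discount = 1 - wage_in_bills / wage_in_shop_notes  # about 0.14, i.e. ~14%
```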


Colonial bills of credit were controversial when they were first issued, and have remained controversial to this day. Those who have wanted to highlight the evils of inflation have focused narrowly on the colonies where the bills of credit depreciated most dramatically – those colonies being New England and the Carolinas, with New England being a special focus because of the wealth of material that exists concerning New England history. When Hillsborough drafted a report for the Board of Trade intended to support the abolition of legal tender paper money in the colonies, he rested his argument on the inflationary experiences of these colonies (printed in Whitehead, 1885, vol. IX, pp. 405-414). Those who have wanted to defend the use of bills of credit in the colonies have focused on the Middle colonies, where inflation was practically nonexistent. This tradition dates back at least to Benjamin Franklin (1959, vol. 14, pp. 77-87), who drafted a reply to the Board of Trade’s report in an effort to persuade Parliament to repeal the Currency Act of 1764. Nineteenth-century authors, such as Bullock (1969) and Davis (1970), tended to follow Hillsborough’s lead, whereas twentieth-century authors, such as Ferguson (1953) and Schweitzer (1987), followed Franklin’s.

Changing popular attitudes towards inflation have helped to rehabilitate the colonists. Whereas inflation in earlier centuries was rare, and even the mild inflation suffered in England between 1797 and 1815 was sufficient to stir a political uproar, the twentieth century has become inured to inflation. Even in colonial New England between 1711 and 1749, which was thought to have done a disgraceful job in managing its bills of credit, peacetime inflation was only about 5% per annum. Inflation during King George’s War was about 35% per annum.35

Nineteenth-century economists were guilty of overgeneralizing based on the unrepresentative inflationary experiences and associated debtor-creditor conflicts that occurred in a few colonies. Some twentieth-century economists, however, have swung too far in the other direction by generalizing on the basis of the success of the system in the Middle colonies and by attributing the benign outcomes there to the fundamental soundness of the system and its sagacious management. It would be closer to the truth, I believe, to note that the virtuous restraint exhibited by the Middle colonies was imposed upon them. Emissions in these colonies were sometimes vetoed by royal authorities and frequently stymied by instructions issued to royal or proprietary governors. The success of the Middle colonies owes much to the simple fact that they did not exert themselves in war to the extent that their New England neighbors did and that they were not permitted to freely issue bills of credit in peacetime.

A recent controversy has developed over the correct answer to the question – Why did some bills of credit depreciate, while others did not? Many early writers took it for granted that the price level in a colony would vary proportionally with the number of bills of credit the colony issued. This assumption was mocked by Ernst (1973, chapter 1) and devastated by West (1978). West performed simple regressions relating the quantity of bills of credit outstanding to price indices where such data exist. For most colonies he found no correlation between these variables. This was particularly striking because in the Middle colonies there was a dramatic increase in the quantity of bills of credit outstanding during the French and Indian War, and a dramatic decrease afterwards. Yet this large fluctuation seemed to have little effect on the purchasing power of those bills of credit as measured by prices of bills of exchange and the imperfect commodity price indices we possess. Only in New England in the first half of the eighteenth century did there seem to be a strong correlation between bills of credit outstanding and prices and exchange rates. Officer (2005) examined the New England episode and concluded that the quantity theory provides an adequate explanation in this instance, making the contrast with many other colonies (most notably, the Middle colonies) even more remarkable.

Seizing on West’s results, Bruce Smith suggested that they disproved the quantity theory of money and provided evidence in favor of an alternative theory based on theoretical models of Wallace and Sargent, which Smith characterized as the “backing theory.”36 According to Smith (1985a, p. 534), the redemption provisions enacted when bills of credit were introduced into circulation on tax and loan funds were what prevented them from depreciating. “Just as the value of privately issued liabilities depends on the issuers’ balance sheet,” he wrote, “the same is true for government liabilities. Thus issues of money which are accompanied by increases in the (expected) discounted present value of the government’s revenues need not be inflationary.” One obvious problem with this theory is that the New England bills of credit which did depreciate were issued in exactly the same way. Smith’s answer was that the New England colonies administered their tax and loan funds poorly, and that this poor administration accounted for the inflation experienced there.

Others who did not wholly agree with Smith – especially his sweeping refutation of the quantity theory – nonetheless pointed to the redemption provisions in explaining why bills of credit often retained their value (Wicker, 1985; Bernholz, 1988; Calomiris, 1988; Sumner, 1993; Rousseau, 2007). Of those who assigned credit to the redemption provisions, however, only Smith grappled with the key question; namely, why essentially identical redemption provisions failed to prevent inflation elsewhere.

Crediting careful administration of tax and loan funds for the steady value of some colonial currencies, and haphazard administration for the depreciation of others, looks superficially appealing. The experiences of Pennsylvania and Rhode Island, generally thought to be the most and least successful issuers of colonial bills of credit, fit the hypothesis nicely. However, when one examines other cases, the hypothesis breaks down. Connecticut was generally credited with administering her bills of credit very carefully, yet they depreciated in lockstep with those of her New England neighbors for forty years (Brock, 1975, pp. 43-47). Virginia’s bills of credit retained their value even though Virginia’s colonial treasurer was discovered to have embezzled bills of credit equal to nearly half of Virginia’s total outstanding issue and returned them to circulation (Michener, 1987, p. 247). North Carolina’s bills of credit held their value well in the late colonial period despite tax administration so notoriously corrupt it led to an armed revolt (Michener, 1987, pp. 248-9; Ernst, 1973, p. 221).

A competing explanation has been offered by Michener (1987, 1988), Brock (1992), McCallum (1992), and Michener and Wright (2006b). According to this explanation, the coin rating system operating in the colonies meant they were effectively on a specie standard with a de facto fixed par of exchange. Provided emissions of paper money did not exceed the amount needed for domestic purposes (“normal real balances,” in McCallum’s terminology) some specie would remain in circulation, prices would remain stable, and the fixed par could be maintained. Where emissions exceeded this bound specie would disappear from circulation and exchange rates would float freely, no longer tethered to the fixed par. Further emissions would cause inflation.37 This was said to account for inflation in New England after 1712, where specie did, in fact, completely disappear from circulation (Hutchinson, 1936, vol. 2, p. 154; Michener, 1987, pp. 288-94). If this explanation is correct, it would suggest that emissions of bills of credit ought to be offset by specie outflows, ceteris paribus.

Critics of the “specie circulated at rated values” explanation have frequently disregarded the ceteris paribus qualification and maintained that the theory implies specie flows always ought to be highly negatively correlated with changes in the quantity of bills of credit. This amounts to assuming the quantity of money demanded per capita in colonial America was nearly constant. If this were a valid test of the theory, one would be forced to reject it, because the specie stock fell little, if at all, in the Middle colonies in 1755-1760 as bills of credit increased, and when bills of credit began to decrease after 1760, specie became scarcer.

The flaw in the critics’ reasoning, in my opinion, is that it rests on three unwarranted assumptions: first, that the demand for money, narrowly defined to mean bills of credit plus specie, was very stable despite the widespread use of bookkeeping barter; second, that the absence of evidence of large interest rate fluctuations is evidence of the absence of large interest rate fluctuations (Smith, 1985b, pp. 1193, 1198; Letwin, 1982, p. 466); and third, that the opportunity cost of holding money is adequately measured by the nominal interest rate.38

With respect to the first point, colonial wars significantly influenced the demand for money. During peacetime, most transactions were handled by means of book credit. During major wars, however, many men served in the militia. Men in military service were paid in cash and taken far from the community in which their creditworthiness was commonly known, reducing both their need for book credit and their ability to obtain it. Moreover, the real possibility that even civilian customers might soon find themselves in the militia and gone from the local community, possibly forever, would have given a shopkeeper pause before advancing book credit. In each of the major colonial wars there is evidence suggesting an increase in cash real balances that could be attributed to the war’s impact on the book credit system. The increase in real money balances during the French and Indian War and the subsequent decrease can be largely accounted for in this way. With respect to the second point, fluctuations in the money supply are compatible even with a stable demand for money if periods when money is scarce are also periods when interest rates are high, as is also suggested by the historical record.39 It is true that the maximum interest rates specified in colonial usury laws are stable, generally in the range of 6%-8% per annum, often a bit lower late in the colonial era than at its beginning. This has been taken as evidence that colonial interest rates were stable. However, we know that these usury laws were commonly evaded and that market rates were often much higher (Wright, 2002, pp. 19-26).
Some indication of how much higher market rates could rise became evident in the summer of 1768, when the Privy Council unexpectedly struck down New Hampshire’s usury law.40 News of the disallowance did not reach New Hampshire until the end of the year, at which time New Hampshire, having sunk the bills of credit issued to finance the French and Indian War during the five-year interval permitted by the Currency Act of 1751, was in the throes of a liquidity crisis.41 Governor Wentworth reported to the Lords of Trade that “Interest arose to 30 p. Ct. within six days of the repeal of the late Act.”42 By contrast, when cash was plentiful in Pennsylvania at the height of the French and Indian War, Pennsylvania’s “wealthy people were catching at every opportunity of letting out their money on good security, on common interest [that is, seven per cent].”43 With respect to the third point, the received theory that the nominal interest rate measures the opportunity cost of holding real money balances is derived from models in which individuals are free to borrow and lend at the nominal interest rate. Insofar as lenders respected the usury ceilings, borrowers were unable to borrow freely at the nominal interest rate. Recent work on moral hazard and adverse selection suggests that even private, unregulated lenders making loans in an environment characterized by seriously asymmetric information would be wise to ration loans by charging less than market clearing rates and limiting allowed borrowing. The creditworthiness of individuals was more difficult to determine in colonial times than today, and asymmetric information problems were rife. Under such circumstances, even an unregulated market rate of interest (if we had such data, which we don’t) would understate the opportunity cost of holding money for constrained borrowers.

The debate over why some colonial bills of credit depreciated while others did not has spilled over into a related question: how much cash [i.e., paper money plus specie] circulated in the American colonies, and how much of it was in bills of credit as opposed to specie? Clearly, if there was hardly any specie anywhere in colonial America, the concomitant circulation of specie at fixed rates could scarcely account for the stable purchasing power of bills of credit.

Determining how much cash circulated in the colonies is no easy matter, because the amount of specie in circulation is so hard to determine. The issue is further complicated by the fact that the total amount of cash in circulation fluctuated considerably from year to year, depending on such things as the demand for colonial staples and the magnitude of British military expenditure in the colonies (Sachs, 1957; Hemphill, 1964). The mix of bills of credit and specie in circulation was also highly variable. In the Middle colonies – and much of the most contentious debate involves the Middle colonies – the quantity of bills of credit in circulation was very modest (both absolutely and in per-capita terms) before the French and Indian War. The quantity exploded to cover military expenditures during the French and Indian War, and then fell again following 1760, until by the late colonial period, the quantity outstanding was once again very modest. Pennsylvania’s experience is not atypical of the Middle colonies. In 1754, on the eve of the French and Indian War, only £81,500 in Pennsylvania bills of credit were in circulation. At the height of the conflict, in 1760, this had increased to £446,158, but by 1773 the sum had been reduced to only £135,006 (Brock, 1992, Table 6). Any conclusion about the importance of bills of credit in the colonial money supply has to be carefully qualified because it will depend on the year in question.

Traditionally, economic historians have focused their attention on the eve of the Revolution, with a special focus on 1774, because of Alice Hanson Jones’s extensive study of 1774 probate records. Even with the inquiry dramatically narrowed, estimates have varied widely. McCusker and Menard (1985, p. 338), citing Alexander Hamilton for authority, estimated that just before the Revolution the “current cash” totaled 30 million dollars. Of the 30 million dollars, Hamilton said 8 million consisted of specie (27%). On the basis of this authority, Smith (1985a, p. 538; 1988, p. 22) has maintained that specie was a comparatively minor component in the colonial money supply.

Hamilton was arguing in favor of banks when he made this oft-cited estimate, and his purpose in presenting it was to show that the circulation was capable of absorbing a great deal of paper money, which ought to make us wonder whether his estimate might have been biased by his political agenda. Whether biased, or simply misinformed, Hamilton clearly got his facts wrong.

All estimates of the quantity of colonial bills of credit in circulation – including those of Brock (1975, 1992) that have been relied on by recent authors on all sides of the debate – lead inescapably to the conclusion that in 1774 there were very few bills of credit left outstanding, nowhere near the 22 million dollars implied by Hamilton. Calculations along these lines were first performed by Ratchford (1941, pp. 24-25), who estimated the total quantity of bills of credit outstanding in each colony on the eve of the Revolution, added the local £, s., and d. of all the colonies (a true case of adding apples and oranges), converted to dollars by valuing dollars at 6 s. each, and concluded that the total was about $5.16 million.

Ratchford’s method of summing local pounds and then converting to dollars is incorrect because local pounds did not have a uniform value across colonies. Since dollars were commonly rated at more than 6 s., his procedure resulted in an inflated estimate. We can correct this error by using McCusker’s (1978) data on 1774 exchange rates to convert local currency to sterling for each colony, obtain a sum in pounds sterling, and then convert to dollars using the rated value of the dollar in pounds sterling, 4½ s. Four and a half s. was very near the dollar’s value in London bullion markets in 1774, so no appreciable error arises from using the rated value. Doing so reduces Ratchford’s estimate to $3.42 million. Replacing Ratchford’s estimates of currency outstanding in New York, New Jersey, Pennsylvania, Virginia, and South Carolina with apparently superior data published by Brock (1975, 1992) reduces the total to $2.93 million. Even allowing for some imprecision in the data, this simply can’t be reconciled with Hamilton’s apparently mythical $22 million in paper money!
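The two aggregation methods just described can be sketched numerically. The inputs below are illustrative placeholders (a hypothetical colony with £100,000 of local currency outstanding and a local pound worth £0.6 sterling), not Ratchford's or Brock's actual data; only the conversion logic follows the text.

```python
# Two ways to convert a colony's outstanding bills of credit to dollars.
# Inputs are hypothetical: GBP 100,000 local currency, with GBP 1 local
# worth GBP 0.6 sterling (an illustrative ratio, not a documented one).

def ratchford_dollars(local_pounds):
    """Ratchford's shortcut: value every dollar at 6s. in local currency."""
    return local_pounds / (6 / 20)   # 6s. = 0.3 local pounds per dollar

def corrected_dollars(local_pounds, sterling_per_local_pound):
    """Convert to sterling first, then apply the dollar's rated value, 4.5s. sterling."""
    sterling = local_pounds * sterling_per_local_pound
    return sterling / (4.5 / 20)     # 4.5s. = 0.225 pounds sterling per dollar

print(round(ratchford_dollars(100_000)))       # 333333
print(round(corrected_dollars(100_000, 0.6)))  # 266667
```

With these inputs the shortcut overstates the total by the mechanism described above: the dollar here is effectively rated at 7.5s. local, well above the 6s. Ratchford assumed.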

How much current cash was there in the colonies in 1774? Alice Hanson Jones’s extensive research into probate records gives an independent estimate of the money supply. Jones (1980, table 5.2) estimated that per capita cash-holding in the Middle colonies in 1774 was £1.8 sterling, and that the entire money supply of the thirteen colonies was slightly more than 12 million dollars.44 McCallum (1992) proposed another way to estimate total money balances in the colonies. McCallum started with the few episodes where historians generally agree paper money entirely displaced specie, making the total money supply measurable. He used money balances in these episodes as a basis for estimating money balances in other colonies by deriving approximate measures of the variability of money holdings over colonies and over time. Given the starkly different methodologies, it is remarkable that McCallum’s approach yields an answer practically indistinguishable from Jones’s.45

Various contemporary estimates, including estimates by Pelatiah Webster, Noah Webster, and Lord Sheffield, also suggest the total colonial money supply in 1774 was ten to twelve million dollars, mostly in specie (Michener 1988, p. 687; Elliot, 1845, p. 938). If we tentatively accept that the total money supply in the American colonies in 1774 was about twelve million dollars, and that only three million dollars worth of bills of credit remained outstanding, then fully 75% of the prewar money supply must have been in specie.
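The specie share implied by these round figures is straightforward arithmetic:

```python
# Figures from the text: roughly 12 million dollars of total money supply
# in 1774, of which roughly 3 million dollars were bills of credit.
total_money = 12_000_000
bills_of_credit = 3_000_000
specie_share = (total_money - bills_of_credit) / total_money
print(f"{specie_share:.0%}")  # 75%
```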

Even this may be an underestimate. Colonial probate inventories are notoriously incomplete, and the usual presumption is that Jones’s estimates are likely to be downwardly biased. Two examples not involving money illustrate the general problem. In Jones’s collection of inventories, over 20% of the estates did not include any clothes (Lindert, 1981, p. 657). In an independent survey of Surry County, Virginia probate records, Anna Hawley (1987, pp. 27-8) noted that only 34% of the estates listed hoes despite the fact that the region’s staple crops, corn and tobacco, had to be hoed several times a year.

In Jones’s 1774 database, an amazing 70% of all estates were devoid of money. While the widespread use of credit made it possible to do without money in most transactions, it is likely that some estates contained cash that does not appear in probate inventories. Peter Lindert (1981, p. 658) surmised “cash was simply allocated informally among survivors even before probate took place.” McCusker and Menard (1985, p. 338, fn. 14) concurred, noting “cash would have been one of the things most likely to have been distributed outside the usual probate proceedings.” If Jones actually underestimated cash holdings in 1774, the implication would be that more than 75% of the prewar money supply must have been specie.

That most of the cash circulating in the colonies in 1774 must have been specie seems like an inescapable conclusion. The issue has been clouded, however, by the existence of many contradictory and internally inconsistent estimates in the literature. Smith (1988, p. 22) drew attention to these estimates by using them to defend his contention that specie was relatively unimportant.

The first such estimate was made by Roger Weiss (1970, p. 779), who computed the ratio of paper money to total money in the Middle colonies, using Jones’s probate data to estimate total money balances as has been done here; he arrived at a considerably smaller fraction of specie in the money supply. There is a simple explanation for this puzzling result: Weiss, whose article was published in 1970, based his analysis on Jones’s 1968 dissertation rather than her 1980 book. In her dissertation, Jones (1968, Tables 3 and 4, pp. 50-51) estimated the money supply in the three Middle colonies at £2.0 local currency per free white capita. Since £1 local currency was worth about £0.6 sterling, Weiss began with an estimated total money supply of £1.2 sterling per free white capita (equal to £1.13 per capita), rather than Jones’s more recent estimate of £1.8 sterling per capita.
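The unit conversions behind Weiss's lower starting point can be retraced. The population ratio in the last step is inferred here from the two figures in the text; it is not a number Weiss or Jones reports.

```python
# Weiss's starting point, from Jones's 1968 dissertation:
per_free_white_local = 2.0    # GBP local currency per free white capita
sterling_per_local = 0.6      # GBP 1 local currency ~ GBP 0.6 sterling
per_free_white_sterling = round(per_free_white_local * sterling_per_local, 2)
print(per_free_white_sterling)        # 1.2

# The text equates GBP 1.2 per free white capita with GBP 1.13 per capita,
# which implies free whites were about 94% of the population counted:
implied_population_ratio = round(1.13 / 1.2, 3)
print(implied_population_ratio)       # 0.942
```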

Another authority is Letwin (1982, p. 467), who estimated that more than 60% of the money supply of Pennsylvania in 1775 was paper. Letwin used the Historical Statistics of the United States for his money supply data, and a casual back-of-the-envelope estimate that nominal balances in Pennsylvania were £700,000 in 1775 to conclude that 63% of Pennsylvania’s money supply was paper money. However, the data in Historical Statistics of the United States are known to be incorrect: Using Letwin’s back-of-the-envelope estimate, but redoing the calculation using Brock’s estimates of paper money in circulation, gives the result that in 1775 only 45.5% of Pennsylvania’s money supply was paper money; for 1774 the figure is 31%.46
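The percentages above can be turned back into implied paper-money totals. Purely for illustration, Letwin's £700,000 back-of-the-envelope total is assumed for both years; the actual 1774 base may differ.

```python
# Implied paper money outstanding under each share quoted in the text,
# assuming (illustratively) total cash balances of GBP 700,000.
total_cash = 700_000
shares = {
    "Letwin, Historical Statistics (1775)": 0.63,
    "recomputed with Brock's data (1775)": 0.455,
    "recomputed with Brock's data (1774)": 0.31,
}
implied_paper = {label: round(share * total_cash) for label, share in shares.items()}
print(implied_paper)  # 441000, 318500, and 217000 pounds respectively
```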

That good-faith attempts to estimate the stock of specie in the colonies in 1774 have given rise to such wildly varying and inconsistent estimates gives some indication of the task that remains to be accomplished.47 Many hints about how the specie stock varied over time in colonial America can be found in newspapers, legislative records, pamphlets and correspondence. Organizing those fragments of evidence and interpreting them is going to require great skill and will probably have to be done colony by colony. In addition, if the key to the purchasing power of colonial currency lies in the ratings attached to coins, as I personally believe it does, then more attention is going to have to be paid in the future to tracking how those ratings evolved over time. Our knowledge at the moment is very fragmentary, probably because the politics of paper money has so engrossed the attention of historians that few people have attached much significance to coin ratings.

Economic historian Farley Grubb has proposed (2003, 2004, 2007) that the composition of the medium of exchange in colonial America and the early Republic can be determined from the unit of account used in arm’s length transactions, such as rewards offered in runaway ads and prices recorded in indentured servant contract registrations. If, for instance, a runaway reward is offered in pounds, shillings and pence, it means (Grubb argues) that colonial or state bills of credit were the medium of exchange used, while dollar rewards in such ads would imply silver. Grubb then uses contract registrations in the early Republic (2003, 2007) and runaway ads in colonial Pennsylvania (2004) to develop time series for hitherto unmeasurable components of the money supply and draws many striking conclusions from them. I believe Grubb is proceeding on a mistaken premise. Reversing Grubb’s procedure and using runaway ads in the early Republic and contract registrations in colonial Pennsylvania yields dramatically different results, which suggests the method is not useful. I have participated in this contentious published debate (see Michener and Wright 2005, 2006a, 2006c and Grubb 2003, 2004, 2006a, 2006b, 2007) and will leave it to the reader to draw his or her own conclusions.


1. Beginning in 1767, Maryland issued bills of credit denominated in dollars (McCusker, 1978, p. 194).

2. For a number of years, Georgia money was an exception to this rule (McCusker, 1978, pp. 227-8).

3. Elmer (1869, p. 137). Similarly, historian Robert Shalhope (Shalhope, 2003, pp. 140, 142, 147, 290) documents a Vermont farmer who continued to reckon, at least some of the time, in New York currency (i.e. 8 shillings = $1) well into the 1820s.

4. To clarify: In New York, a dollar was rated at eight shillings, hence one reale, an eighth of a dollar, was one shilling. In Richmond and Boston, the dollar was rated at six shillings, or 72 pence, one eighth of which is 9 pence. In Philadelphia and Baltimore, the dollar was rated at seven shillings six pence, or ninety pence, and an eighth of a dollar would be 11.25 pence.
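The note's arithmetic reduces to a one-line rule: pence per reale = (shillings per dollar × 12) / 8. A quick check of each rating:

```python
# Pence per reale (one eighth of a dollar) under each city's rating of
# the dollar, in the local unit of account (12 pence per shilling).
ratings_in_shillings = {
    "New York": 8,
    "Boston and Richmond": 6,
    "Philadelphia and Baltimore": 7.5,
}
reale_in_pence = {
    place: shillings * 12 / 8 for place, shillings in ratings_in_shillings.items()
}
print(reale_in_pence)
# {'New York': 12.0, 'Boston and Richmond': 9.0, 'Philadelphia and Baltimore': 11.25}
```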

5. In 1822, for example, P. T. Barnum, then a young man from Connecticut making his first visit to New York, paid too much for a brace of oranges because of confusion over the unit of account. “I was told,” he later related, “[the oranges] were four pence apiece [as Barnum failed to realise, in New York there were 96 pence to the dollar], and as four pence in Connecticut was six cents, I offered ten cents for two oranges, which was of course readily taken; and thus, instead of saving two cents, as I thought, I actually paid two cents more than the price demanded” (Barnum, 1886, p. 18).
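A sketch of the arithmetic behind the anecdote, using New York's 96-pence dollar and Connecticut's 72-pence dollar (the 6s. rating, as in Boston):

```python
# Convert a price in local pence to cents, given the local rating of the
# dollar in pence (100 cents per dollar).
def pence_to_cents(pence, pence_per_dollar):
    return pence / pence_per_dollar * 100

asked = pence_to_cents(2 * 4, 96)    # two oranges at 4 New York pence each
assumed = pence_to_cents(2 * 4, 72)  # the same 8 pence read at Connecticut's rating
paid = 10
print(round(asked, 1))               # 8.3 cents actually asked
print(round(assumed, 1))             # 11.1 cents; Barnum rounded 4d. up to "six cents"
print(round(paid - asked, 1))        # 1.7, roughly the "two cents more" he paid
```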

6. One way to see the truth of this statement is to examine colonial records predating the emission of colonial bills of credit. Virginia pounds are referred to long before Virginia issued its first bills of credit in 1755. See, for example, Pennsylvania Gazette, September 20, 1736, quoting Votes of the House of Burgesses in Virginia, August 30, 1736 or the Pennsylvania Gazette, May 29, 1746, quoting a runaway ad that mentions “a bond from a certain Fielding Turner to William Williams, for 42 pounds Virginia currency.” Advertisements in the Philadelphia newspapers in 1720 promise rewards for the return of runaway servants and slaves in Pennsylvania pounds, even though Pennsylvania did not issue its first bills of credit until 1723. The contemporary meaning of “currency” sheds light on otherwise confusing statements, such as an ad in the Pennsylvania Gazette, May 12, 1763, where the advertiser offered a reward for the recovery of £460 “New York currency” that was stolen from him and then parenthetically noted “the greatest part of said Money was in Jersey Bills.”

7. For an example of a complete list, see Felt (1839, pp. 82-83).

8. Further discussion of country pay in Connecticut can be found in Bronson (1865, pp. 23-4).

9. Weiss (1974, pp. 580-85) cites a passage from an 1684 court case that appears to contradict this discount. However, inspecting the court records shows that the initial debt consisted of 34s. 5d. in money to which the court added 17s. 3d. to cover the difference between money and country pay, a ratio of pay to money of exactly 3 to 2 (Massachusetts, 1961, pp. 303-4). Other good illustrations of the divergence of cash and country pay prices can be found in Knight (1935, pp. 40-1) and Judd (1905, pp. 95-6). The multiple price system was not limited to Massachusetts and Connecticut (Coulter, 1944, p. 107).
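The 3-to-2 ratio in the court case can be verified directly:

```python
# 1684 court case from the note: 34s. 5d. owed in money, plus 17s. 3d.
# added by the court to convert the award to country pay.
def to_pence(shillings, pence):
    return 12 * shillings + pence

money_debt = to_pence(34, 5)                # 413 pence in money
country_pay = money_debt + to_pence(17, 3)  # 620 pence in country pay
print(round(country_pay / money_debt, 3))   # 1.501, almost exactly 3 to 2
```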

10. Thomas Bannister to Mr. Joseph Thomson, March 8, 1699/1700 in (Bannister, 1708).

11. In New York, for instance, early issues were legal tender, but the Currency Act of 1764 put a halt to new issues of legal tender paper money; the legal tender status of practically all existing issues expired in 1768. After prolonged and contentious negotiation with imperial authorities, the Currency Act of 1770 permitted New York to issue paper money that was a legal tender in payments to the colonial government, but not in private transactions. New York made its first issue under the terms of the Currency Act of 1770 in early 1771 (Ernst, 1973).

12. Ordinarily, but not always. For instance, in 1731 South Carolina reissued £106,500 in bills of credit without creating any tax fund with which to redeem them (Nettels, 1934, pp. 261-2; Brock, 1975, p. 123). The Board of Trade repeatedly pressured the colony to create a tax fund for this purpose, but without success. That no tax funds had been earmarked to redeem these bills was common knowledge, but it did not make the bills less acceptable as a medium of exchange, or adversely affect their value. The episode contradicts the common supposition that the promise of future redemption played a key role in determining the value of colonial currencies.

13. Once the bills of credit were placed in circulation, no distinction was made between them based on how they were originally issued. It is not as if one could only pay taxes with bills of the first sort, or repay mortgages with bills of the second sort. Many colonies, to save the cost of printing, would reuse worn but serviceable notes. A bill originally issued on loan, upon returning to the colonial treasury, might be reissued on tax funds; often it would have been impossible, even in principle, for an individual to examine the bills in his possession and deduce the funds ostensibly backing them.

14. Late in the seventeenth century Massachusetts briefly operated a mint that issued silver coins denominated in the local unit of account (Jordan, 2002). On the eve of the Revolution, Virginia obtained official permission to have copper coins minted for use in Virginia (Davis, 1970, vol. 1, chapter 2; Newman, 1956).

15. The Massachusetts government, unable to honor redemption promises made when the first new tenor emission was first created, decided in 1742 to revalue these bills from three to one to four to one with old tenor as compensation. When Massachusetts returned to a specie standard, the remaining middle tenor bills were redeemed at four to one (Davis, 1970; McCusker, 1978, p. 133).

16. New and old tenors have led to much confusion. In the Boston Weekly News Letter, July 1, 1742, there is an ad pertaining to someone who mistakenly passed Rhode Island New Tenor in Boston at three to one, when it was supposed to be valued at four to one. Modern-day historians have also occasionally been misled. An excellent example can be found in Patterson (1961, p. 27). Patterson believed he had unearthed evidence of outrageous fraud during the Massachusetts currency reform, whereas he had, in fact, simply failed to convert a sum in an official document stated in new tenor terms into appropriate old tenor terms. Sufro (1976, p. 247), following Patterson, made similar accusations based on a similar misunderstanding of New England’s monetary units.

17. That colonial treasurers did not unfailingly provide this service is implicit in statements found in merchant letters complaining of how difficult it sometimes became to convert paper money to specie (Beekman to Evan and Francis Malbone, March 10, 1769, White, 1956, p. 522).

18. Nathaniel Appleton (1748) preached a sermon excoriating the province of Massachusetts Bay for flagrantly failing to keep the promises inscribed on the face of its bills of credit.

19. Goldberg (2009) uses circumstantial evidence to suggest that Massachusetts was engaged in a “monetary ploy to fool the king” when it made its first emissions. In Goldberg’s telling of the tale, the king had been furious about the Massachusetts mint, and officially issuing paper money that was a full legal tender would have been a “colossal mistake” because it would have endangered the colony’s effort to obtain a new charter, which was essential to confirm the land grants the colony had already made. The alleged ploy Goldberg discovered was a provision passed shortly afterwards: “Ordered that all country pay with one third abated shall pass as current money to pay all country’s debts at the same prices set by this court.” Since those with a claim on the Treasury were going to be tendered either paper money or country pay, and since Goldberg interprets this as requiring those creditors to accept either 3 pounds in paper money or 2 pounds in country pay, the provision was, in Goldberg’s estimation, a way of forcing the paper money on the populace at a one third discount. The shortchanging of the public creditors, through some mechanism not adequately explained to my understanding, was sufficient to make the new paper money a de facto legal tender.

There are several problems with Goldberg’s analysis. Jordan (2002, pp. 36-45) has recently written the definitive history of the Massachusetts mint, and he minutely reviews the evidence pertaining to the Massachusetts mint and British reaction to it. He concludes that “there was no concerted effort by the king and his ministers to crush the Massachusetts mint.” In 1692 Massachusetts obtained a new charter and passed a law making the bills of credit a legal tender. The new charter required Massachusetts to submit all its laws to London for review, yet the imperial authorities quietly ratified the legal tender law, even though they were fully empowered to veto it, which seems very peculiar if the legal tender status of the bills was as unpopular with the King and his ministers as Goldberg maintains. The smoking gun Goldberg cites appears to me to be no more than a statement of the “three pounds of country pay equals two pounds cash” rule that prevailed in Massachusetts in the late seventeenth century. In his argument, Goldberg tacitly assumes that a pound of country pay was equal in value to a pound of hard money; he observes that the new bills of credit initially circulated at a one third discount (with respect to specie) and that this might have arisen because recipients (according to his interpretation) were offered only two pounds of country pay in lieu of three pounds of bills of credit (Goldberg, p. 1102). However, because country pay itself was worth, at most, two thirds of its nominal value in specie, by Goldberg’s reasoning paper money should have been at a discount of at least five ninths with respect to specie.

The paper money era in Massachusetts brought forth approximately fifty pamphlets and hundreds of newspaper articles and public debates in the Assembly, none of which confirm Goldberg’s inference.

20. The role bills of credit played as a means of financing government expenditures is discussed in Ferguson (1953).

21. Georgia was not founded until 1733, and one reason for its founding was to create a military buffer to protect the Carolinas from the Spanish in Florida.

22. Grubb (2004, 2006a, 2006b) argues that bills of credit did not commonly circulate across colony borders. Michener and Wright (2006a, 2006c) dispute Grubb’s analysis and provide (Michener and Wright 2006a, pp. 12-13, 24-30) additional evidence of the phenomenon.

23. Poor Thomas Improved: Being More’s Country Almanack for … 1768 gives as a rule that “To reduce New-Jersey Bills into York Currency, only add one penny to every shilling, and the Sum is determined.” (McCusker, 1978, pp. 170-71; Stevens, 1867, pp. 151-3, 160-1, 168, 185-6, 296; Lincoln, 1894, vol. 5, Chapter 1654, pp. 638-9.)

24. In two articles, John R. Hanson (1979, 1980) argued that bills of credit were important to the colonial economy because they provided much-needed small denomination money. His analysis, however, completely ignores the presence of half-pence, pistareens, and fractional denominations of the Spanish dollar. The Spanish minted halves, quarters, eighths, and sixteenths of the dollar, which circulated in the colonies (Solomon, 1976, pp. 31-32). For a good introduction to small change in the colonies, see Andrews (1886), Newman (1976), Mossman (1993, pp. 105-142), and Kays (2001).

25. Council of Trade and Plantations to the Queen, November 23, 1703, in Calendar of State Papers, 1702-1703, entry #1299. Brock, 1975, chap. 4.

26. This, it should be noted, is what British authorities meant by “proclamation money.” Since salaries of royal officials, fees, quit rents, etc. were often denominated in proclamation money, colonial courts often found a rationale to attach their own interpretation to “proclamation money” so as to reduce the real value of such salaries and fees. In New York, for example, eight shillings in New York’s bills of credit were ostensibly worth one ounce of silver although by the late colonial period they were actually worth less. This valuation of bills of credit made each seven pounds of New York bills of credit in principle worth six pounds in proclamation money. The New York courts used that fact to establish the rule that seven pounds in New York currency could pay a debt of six pounds proclamation money. This rule allowed New Yorkers to pay less in real terms than was contemplated by the British (Hart, 2005, pp. 269-71).

27. Brock (1975). The text of the proclamation can be found in the Boston News-Letter, December 11, 1704. To be precise, the Proclamation rate was actually in slight contradiction to that in the Massachusetts law, which had rated a piece of eight weighing 17 dwt. at 6 s. See Brock (1975, p. 133, fn. 7).

28. This contention has engendered considerable controversy, but the evidence for it seems to me both considerable and compelling. Apart from evidence cited in the text, see for Massachusetts, Michener (1987, p. 291, fn. 54), Wait Winthrop to Samuel Reade, March 5, 1708 and Wait Winthrop to Samuel Reade, October 22, 1709 in Winthrop (1892, pp. 165, 201); For South Carolina see South Carolina Gazette, May 14, 1753; August 13, 1744; and Manigault (1969, p. 188); For Pennsylvania see Pennsylvania Gazette, April 2, 1730, December 3, 1767, February 15, 1775, March 8, 1775; For St. Kitts see Roberdeau to Hyndman & Thomas, October 16, 1766, in Roberdeau (1771); For Antigua, see Anonymous (1740).

29. The Chamber of Commerce adopted its measure in October 1769, apparently too late in the year to appear in the “1770” almanacs, which were printed and sold in late 1769. The 1771 almanacs, printed in 1770, include the revised coin ratings.

30. Note that the relative ratings of the half joe are aligned with the ratings of the dollar. For example, the ratio of the New York value of the half joe to the Pennsylvania value is 64 s./60 s. = 1.066666, and the ratio of the New York value of the half joe to the Connecticut value is 64 s./48 s. = 1.3333.

31. This bank has been largely overlooked, but is well documented. Letter of a Merchant in South Carolina to Alexander Cumings, Charlestown, May 23, 1730, South Carolina Public Records, Vol XIV, pp. 117-20; Anonymous (1734); Easterby (1951, [March 5, 1736/37] vol. 1, pp. 309-10); Governor Johnson to the Board of Trade in Calendar of State Papers, 1731, entry 488, p. 342; Whitaker (1741, p. 25); and Vance (1740, p. 463).

32. I base this on my own experience reviewing the contents of RG3 Litchfield County Court Files, Box 1 at the Connecticut State Library.

33. Though best documented in New England, Benjamin Franklin (1729, CCR II, p. 340) mentions their use in Pennsylvania.

34. See Douglass (1740, CCR III, pp. 328-329) and Vance (1740, CCR III, pp. 328-329). Douglass and Vance disagreed on all the substantive issues, so that their agreement on this point is especially noteworthy. See also Boston Weekly Newsletter, Feb. 12-19, 1741.

35. Data on New England prices during this period are very limited, but annual data exist for wheat prices and silver prices. Regressing the log of these prices on time yields an annual growth rate of prices approximately that mentioned in the text. The price data leave much to be desired, and the inflation estimates should be understood as simply a crude characterization. Even so, the exercise does show that New England’s peacetime inflation during this era was not so extreme as to shock modern sensibilities.
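The trend-fitting procedure described here is an ordinary log-linear regression. A minimal sketch in Python (using NumPy, and a synthetic price series in place of the actual colonial wheat and silver data, which are not reproduced here) illustrates the method:

```python
import numpy as np

# Synthetic annual price series (NOT the actual colonial wheat/silver data),
# constructed with a steady 3% annual rise purely to illustrate the method.
years = np.arange(1715, 1740)
prices = 10.0 * 1.03 ** (years - years[0])

# Regress log(price) on time: the fitted slope is the continuously
# compounded annual rate of price increase.
slope, intercept = np.polyfit(years, np.log(prices), 1)
annual_growth = np.exp(slope) - 1  # convert to a simple annual rate

print(round(annual_growth, 4))  # recovers 0.03 for this synthetic series
```

With the surviving wheat and silver observations substituted for the synthetic series, the fitted slope gives the crude inflation estimate described in the note.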

36. Smith (1985a, 1985b). The quantity theory holds that the price level is determined by the supply and demand for money – loosely, how much money is chasing how many goods. Smith’s version of the backing theory is summarized by the passage quoted from his article.

37. John Adams explained this very clearly in a letter written June 22, 1780 to Vergennes (Wharton, vol. 3, p. 811). Adams’s “certain sum” and McCallum’s “normal real balances” are essentially the same, although Adams is speaking in nominal and McCallum in real terms.

A certain sum of money is necessary to circulate among the society in order to carry on their business. This precise sum is discoverable by calculation and reducible to certainty. You may emit paper or any other currency for this purpose until you reach this rule, and it will not depreciate. After you exceed this rule it will depreciate, and no power or act of legislation hitherto invented will prevent it. In the case of paper, if you go on emitting forever, the whole mass will be worth no more than that was which was emitted within the rule.

38. One of the principal observations Smith (1985b, p. 1198) makes in dismissing the possible importance of interest rate fluctuations is “it is known that sterling bills of exchange did not circulate at a discount.” Sterling bills were payable at a future date, and Smith presumably means that sterling bills should have been discounted if interest made an appreciable difference in their market value. Sterling bills, however, were discounted. These bills were not payable at a particular fixed date, but rather a certain number of days after they were first presented for payment. For example, a bill might be payable “on sixty days sight,” meaning that once the bill was presented (in London, for example, to the person upon whom it was drawn) the person would have sixty days in which to make payment. Not all bills were drawn at the same sight, and sight periods of 30, 60, and 90 days were all common. Bills payable sooner sold at higher prices, and bills could be and sometimes were discounted in London to obtain quicker payment (McCusker, 1978, p. 21, especially fn. 25; David Vanhorne to Nicholas Browne and Co., October 3, 1766. Brown Papers, P-V2, John Carter Brown Library). In the early Federal period many newspapers published extensive prices current that included prices of bills drawn on 30, 60, and 90 days’ sight.

39. Franklin (1729) wrote a tract on colonial currency, in which he maintained as one of his propositions that “A great Want of Money in any Trading Country, occasions Interest to be at a very high Rate.” An anonymous referee warned that when colonists complained of a “want of money” they were not complaining of a lack of a circulating medium per se, but were expressing a desire for more credit at lower interest rates. I do not entirely agree with the referee. I believe many colonists, like Franklin, reasoned like modern-day Keynesians, and believed high interest rates and scarce credit were caused by an inadequate money supply. For more on this subject, see Wright (2002, chapter 1).

40. Public Record Office, CO 5/ 947, August 13, 1768, pp. 18-23.

41. New Hampshire Gazette and Historical Chronicle, January 13, 1769.

42. Public Record Office, Wentworth to Hillsborough, CO 5/ 936, July 3, 1769.

43. Pennsylvania Chronicle, and Universal Advertiser, 28 December 1767.

44. This should be understood to be paper money and specie equal in value to 12 million dollars, not 12 million Spanish dollars. The fraction of specie in the money supply can’t be directly estimated from probate records. Jones (1980, p. 132) found that “whether the cash was in coin or paper was rarely stated.”

45. McCallum deflated money balances by the free white population rather than the total population. Using population estimates to put the numbers on a comparable basis reveals how close McCallum’s estimates are to those of Jones. For example, McCallum’s estimate for the Middle colonies, converted to a per-capita basis, is approximately £1.88 sterling.

46. This incident illustrates how mistakes about colonial currency are propagated and seem never to die out. Henry Phillips’s 1865 book presented data on Pennsylvania bills of credit outstanding. One of his major “findings” was that Pennsylvania retired only £25,000 between 1760 and 1769. This was a mistake: Brock (1992, table 6) found £225,247 had been retired over the same period. Because of the retirements Phillips missed, he overestimated the quantity of Pennsylvania bills of credit in circulation in the late colonial period by 50 to 100%. Lester (1939, pp. 88, 108) used Phillips’s series; Ratchford (1941) obtained his data from Lester. Through Ratchford, Phillips’s series found its way into Historical Statistics of the United States.

47. Benjamin Allen Hicklin (2007) maintains that generations of historians have exaggerated the scarcity of specie in seventeenth and early eighteenth century Massachusetts. Hicklin’s analysis illustrates the unsettled state of our knowledge about colonial specie stocks.


Adams, John Q. “Report upon Weights and Measures.” Reprinted in The North American Review, Boston: Oliver Everett, vol. 14 (New Series, Vol. 5) (1822), pp. 190-230.

Adler, Simon L. Money and Money Units in the American Colonies, Rochester NY: Rochester Historical Society, 1900.

Andrew, A. Piatt. “The End of the Mexican Dollar.” Quarterly Journal of Economics, vol. 18, no. 3 (1904), pp. 321-56.

Andrews, Israel W. “McMaster on our Early Money,” Magazine of Western History, vol. 4 (1886), pp. 141-52.

Anonymous. An Essay on Currency, Charlestown, South Carolina: Printed and sold by Lewis Timothy, 1734.

Anonymous. Two Letters to Mr. Wood on the Coin and Currency in the Leeward Islands, &c. London: Printed for J. Millan, 1740.

Anonymous. “The Melancholy State of this Province Considered,” Boston, 1736, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol III, pp. 135-147.

Appleton, Nathaniel. The Cry of Oppression, Boston: J. Draper, 1748.

Bannister, Thomas. Thomas Bannister letter book, 1695-1708, MSS, Newport Historical Society, Newport, RI.

Barnum, Phineas T. The Life of P.T. Barnum, Buffalo: The Courier Company Printers, 1886.

Baxter, William. The House of Hancock, New York: Russell and Russell, Inc., 1965.

Bernholz, Peter. “Inflation, Monetary Regime and the Financial Asset Theory of Money,” Kyklos, vol. 41, fasc. 1 (1988), pp. 5-34.

Brodhead, John R. Documents Relative to the Colonial History of the State of New York, Albany, NY: Weed Parsons, Printers, 1853.

Brock, Leslie V. Manuscript for a book on Currency, Brock Collection, Accession number 10715, microfilm reel #M1523, Alderman Library special collections, University of Virginia, circa 1956. This book was to be the sequel to Currency of the American Colonies, carrying the story to 1775.

Brock, Leslie V. The Currency of the American Colonies, 1700-1764, New York: Arno Press, 1975.

Brock, Leslie V. “The Colonial Currency, Prices, and Exchange Rates,” Essays in History, vol. 34 (1992), pp. 70-132. This article contains the best available data on colonial bills of credit in circulation.

Bronson, Henry. “A Historical Account of Connecticut Currency, Colonial Money, and Finances of the Revolution,” Printed in New Haven Colony Historical Papers, New Haven, vol. 1, 1865.

Bullock, Charles J. Essays on the Monetary History of the United States, New York: Greenwood Press, 1969.

Burnett, Edmund C. Letters of Members of the Continental Congress, Carnegie Institution of Washington Publication no. 299, Papers of the Dept. of Historical Research, Gloucester, MA: P. Smith, 1963.

Calomiris, Charles W. “Institutional Failure, Monetary Scarcity, and the Depreciation of the Continental,” Journal of Economic History, 48 (1988), pp. 47-68.

Cooke, Ebenezer. The Sot-weed Factor Or, A Voyage To Maryland. A Satyr. In which Is describ’d, the laws, government, courts And constitutions of the country, and also the buildings, feasts, frolicks, entertainments And drunken humours of the inhabitants of that part of America. In burlesque verse, London: B. Bragg, 1708.

Connecticut. Public Records of the Colony of Connecticut [1636-1776], Hartford CT: Brown and Parsons, 1850-1890.

Coulter, Calvin Jr. The Virginia Merchant, Ph. D. dissertation, Princeton University, 1944.

Davis, Andrew McFarland. Currency and Banking in the Province of the Massachusetts Bay, New York: Augustus M. Kelley, 1970.

Douglass, William. “A Discourse concerning the Currencies of the British Plantations in America &c.” Boston, 1739, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. III, pp. 307-356.

Easterby, James H. et al. The Journal of the Commons House of Assembly, Columbia: Historical Commission of South Carolina, 1951-.

Elliot, Jonathan. The Funding System of the United States and of Great Britain, Washington, D.C.: Blair and River, 1845.

Elmer, Lucius Q. C. History of the Early Settlement and Progress of Cumberland County, New Jersey; and of the Currency of this and the Adjoining Colonies, Bridgeport, N.J.: George F. Nixon, Publisher, 1869.

Enquiry into the State of the Bills of Credit of the Province of the Massachusetts-Bay in New-England: In a Letter from a Gentleman in Boston to a Merchant in London. Boston, 1743/4, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. IV, pp. 149-209.

Ernst, Joseph A. Money and Politics in America, 1755-1775, Chapel Hill, NC: University of North Carolina Press, 1973.

Ernst, Joseph A. “The Labourers Have been the Greatest Sufferers; the Truck System in Early Eighteenth-Century Massachusetts,” in Merchant Credit and Labour Strategies in Historical Perspective, Rosemary E. Ommer, ed., Frederickton, New Brunswick: Acadiensis Press, 1990.

Felt, Joseph B. Historical Account of Massachusetts Currency. New York: Burt Franklin, 1968, reprint of 1839 edition.

Ferguson, James E. “Currency Finance, An Interpretation of Colonial Monetary Practices,” William and Mary Quarterly, 10, no. 2 (April 1953): 153-180.

Fernow, Berthold. “Coins and Currency in New-York,” The Memorial History of New York, New York, 1893, vol. 4, pp. 297-343.

Fitch, Thomas. Thomas Fitch letter book, 1703-1711, MSS, American Antiquarian Society, Worcester, MA.

Forman, Benno M. “The Account Book of John Gould, Weaver, of Topsfield, Massachusetts,” Essex Institute Historical Collections, vol. 105, no. 1 (1969), pp. 36-49.

Franklin, Benjamin. “A Modest Enquiry into the Nature and Necessity of a Paper Currency,” Philadelphia, 1729, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. II, p. 340.

Franklin, Benjamin, The Papers of Benjamin Franklin, Leonard W. Labaree (ed.), New Haven, CT: Yale University Press, 1959.

Goldberg, Dror. “The Massachusetts Paper Money of 1690,” Journal of Economic History, vol. 69, no. 4 (2009), pp. 1092-1106.

Gottfried, Marion H. “The First Depression in Massachusetts,” New England Quarterly, vol. 9, no. 4 (1936), pp. 655-678.

Great Britain. Public Record Office. Calendar of State Papers, Colonial Series, London: Her Majesty’s Stationery Office, 44 vols., 1860-1969.

Grubb, Farley W. “Creating the U.S. Dollar Currency Union, 1748-1811: A Quest for Monetary Stability or a Usurpation of State Sovereignty for Personal Gain?” American Economic Review, vol. 93, no. 5 (2003), pp. 1778-98.

Grubb, Farley W. “The Circulating Medium of Exchange in Colonial Pennsylvania, 1729-1775: New Estimates of Monetary Composition, Performance, and Economic Growth,” Explorations in Economic History, vol. 41, no. 4 (2004), pp. 329-360.

Grubb, Farley W. “Theory, Evidence, and Belief—The Colonial Money Puzzle Revisited: Reply to Michener and Wright.” Econ Journal Watch, vol. 3, no. 1, (2006a), pp. 45-72.

Grubb, Farley W. “Benjamin Franklin and Colonial Money: A Reply to Michener and Wright—Yet Again” Econ Journal Watch, vol. 3, no. 3, (2006b), pp. 484-510.

Grubb, Farley W. “The Constitutional Creation of a Common Currency in the U.S.: Monetary Stabilization versus Merchant Rent Seeking.” In Lars Jonung and Jurgen Nautz, eds., Conflict Potentials in Monetary Unions, Stuttgart, Franz Steiner Verlag, 2007, pp. 19-50.

Hamilton, Alexander. Hamilton’s Itinerarium, Albert Bushnell (ed.), St. Louis, MO: William Bixby, 1907.

Hanson, Alexander C. Remarks on the proposed plan of an emission of paper, and on the means of effecting it, addressed to the citizens of Maryland, by Aristides, Annapolis: Frederick Green, 1787.

Hanson, John R. II. “Money in the Colonial American Economy: An Extension,” Economic Inquiry, vol. 17 (April 1979), pp. 281-86.

Hanson, John R. II. “Small Notes in the American Economy,” Explorations in Economic History, vol. 17 (1980), pp. 411-20.

Harper, Joel W. C. Scrip and other forms of local money, Ph. D. dissertation, University of Chicago, 1948.

Hart, Edward H. Almost a Hero: Andrew Elliot, the King’s Moneyman in New York, 1764-1776. Unionville, N.Y.: Royal Fireworks Press, 2005.

Hawley, Anna. “The Meaning of Absence: Household Inventories in Surry County, Virginia, 1690-1715,” in Peter Benes (ed.) Early American Probate Inventories, Dublin Seminar for New England Folklore: Annual Proceedings, 1987.

Hazard, Samuel et al. (eds.). Pennsylvania Archives, Philadelphia: Joseph Severns, 1852.

Hemphill, John II. Virginia and the English Commercial System, 1689-1733, Ph. D. diss., Princeton University, 1964.

Horle, Craig et al. (eds.). Lawmaking and Legislators in Pennsylvania: A Biographical Dictionary. Philadelphia: University of Pennsylvania Press, 1991-.

House of Lords. The Manuscripts of the House of Lords, 1706-1708, Vol. VII (New Series), London: His Majesty’s Stationery Office, 1921.

Hutchinson, Thomas. The History of the Province of Massachusetts Bay, Cambridge, MA: Harvard University Press, 1936.

Jones, Alice Hanson. Wealth Estimates for the American Middle Colonies, 1774, Ph.D. diss., University of Chicago, 1968.

Jones, Alice Hanson, Wealth of a Nation to Be, New York: Columbia University Press, 1980.

Jordan, Louis. John Hull, the Mint and the Economics of Massachusetts Coinage, Lebanon, NH: University Press of New England, 2002.

Judd, Sylvester. History of Hadley, Springfield, MA: H.R. Huntting & Co., 1905.

Kays, Thomas A. “When Cross Pistareens Cut their Way through the Tobacco Colonies,” The Colonial Newsletter, April 2001, pp. 2169-2199.

Kimber, Edward, Itinerant Observations in America, (Kevin J. Hayes, ed.), Newark, NJ: University of Delaware Press, 1998.

Knight, Sarah K. The Journal of Madam Knight, New York: Peter Smith, 1935.

Lester, Richard A. Monetary Experiments: Early American and Recent Scandinavian, New York: Augustus Kelley, 1970, reprint of 1939 edition.

Letwin, William. “Monetary Practice and Theory of the North American Colonies during the 17th and 18th Centuries,” in Barbagli Bagnoli (ed.), La Moneta Nell’economia Europea, Secoli XIII-XVIII, Florence, Italy: Le Monnier, 1981, pp. 439-69.

Lincoln, Charles Z. The Colonial Laws of New York, Vol V., Albany: James B. Lyon, State Printer, 1894.

Lindert, Peter H. “An Algorithm for Probate Sampling,” Journal of Interdisciplinary History, vol. 11, (1981).

Lydon, James G. “Fish and Flour for Gold: Southern Europe and the Colonial American Balance of Payments,” Business History Review, 39 (Summer 1965), pp. 171-183.

Main, Gloria T. and Main, Jackson T. “Economic Growth and the Standard of Living in Southern New England, 1640-1774,” Journal of Economic History, vol. 48 (March 1988), pp. 27-46.

Manigault, Peter. “The Letterbook of Peter Manigault, 1763-1773,” Maurice A. Crouse (ed.), South Carolina Historical Magazine, vol. 70, no. 3 (July 1969), pp. 177-95.

Massachusetts. Courts (Hampshire Co.). Colonial justice in western Massachusetts, 1639-1702; the Pynchon court record, an original judges’ diary of the administration of justice in the Springfield courts in the Massachusetts Bay Colony. Edited by Joseph H. Smith. Cambridge: Harvard University Press, 1961.

Massey, J. Earl. “Early Money Substitutes,” in Eric P. Newman and Richard G. Doty (eds.), Studies on Money in Early America, New York: American Numismatic Society, 1976, pp. 15-24.

Mather, Cotton. “Some Considerations on the Bills of Credit now passing in New-England,” Boston, 1691, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. I, pp. 189-95.

McCallum, Bennett. “Money and Prices in Colonial America: A New Test of Competing Theories,” Journal of Political Economy, vol. 100 (1992), pp. 143-61.

McCusker, John J. Money and Exchange in Europe and America, 1600-1775: A Handbook, Williamsburg, VA: University of North Carolina Press, 1978.

McCusker, John J. and Menard, Russell R. The Economy of British America, 1607-1789, Chapel Hill, N.C.: University of North Carolina Press, 1985.

Michener, Ronald. “Fixed Exchange Rates and the Quantity Theory in Colonial America,” Carnegie-Rochester Conference Series on Public Policy, vol. 27 (1987), pp. 245-53.

Michener, Ron. “Backing Theories and the Currencies of the Eighteenth-Century America: A Comment,” Journal of Economic History, 48 (1988), pp. 682-92.

Michener, Ronald W. and Robert E. Wright. “State ‘Currencies’ and the Transition to the U.S. Dollar: Clarifying Some Confusions,” American Economic Review, vol. 95, no. 3 (2005), pp. 682-703.

Michener, Ronald W. and Robert E. Wright. “Miscounting Money of Colonial America,” Econ Journal Watch, vol. 3, no. 1 (2006a), pp. 4-44.

Michener, Ronald W. and Robert E. Wright. “Development of the U.S. Monetary Union,” Financial History Review, vol. 13, no. 1 (2006b), pp. 19-41.

Michener, Ronald W. and Robert E. Wright. “Farley Grubb’s Noisy Evasions on Colonial Money: A Rejoinder,” Econ Journal Watch, vol. 3, no. 2 (2006c), pp. 1-24.

Morris, Lewis. The Papers of Lewis Morris, Eugene R. Sheridan (ed.), Newark, NJ: New Jersey Historical Society, 1993.

Mossman, Philip L. Money of the American Colonies and Confederation, New York: American Numismatic Society, 1993.

Nettels, Curtis P. “The Beginnings of Money in Connecticut,” Transactions of the Wisconsin Academy of Sciences, Arts, and Letters, vol. 23, (January 1928), pp. 1-28.

Nettels, Curtis P. The Money Supply of the American Colonies before 1720, Madison: University of Wisconsin Press, 1934.

Newman, Eric P. “Coinage for Colonial Virginia,” Numismatic Notes and Monographs, No. 135, New York: The American Numismatic Society, 1956.

Newman, Eric P. “American Circulation of English and Bungtown Halfpence,” in Eric P. Newman and Richard G. Doty (eds.) Studies on Money in Early America, New York: The American Numismatic Society, 1976, pp. 134-72.

Nicholas, Robert C. “Paper Money in Colonial Virginia,” The William and Mary Quarterly, vol. 20 (1912), pp. 227-262.

Officer, Lawrence H. “The Quantity Theory in New England, 1703-1749: New Data to Analyze an Old Question,” Explorations in Economic History, vol. 42, no. 1 (2005), pp. 101-121.

Patterson, Stephen Everett. Boston Merchants and the American Revolution to 1776, Master’s thesis, University of Wisconsin, 1961.

Phillips, Henry. Historical Sketches of the Paper Currency of the American Colonies, original 1865, reprinted New York: Burt Franklin, 1969.

Plummer, Wilbur C. “Consumer Credit in Colonial Pennsylvania,” The Pennsylvania Magazine of History and Biography, LXVI (1942), pp. 385-409.

Ratchford, Benjamin U. American State Debts, Durham, N.C.: Duke University Press, 1941.

Reipublicæ, Amicus. “Trade and Commerce Inculcated; in a Discourse,” (1731). Reprinted in Andrew McFarland Davis, Colonial Currency Reprints, vol. 2, pp. 360-428.

Roberdeau, Daniel. Daniel Roberdeau letter book, 1764-1771, MSS, Pennsylvania Historical Society, Philadelphia, PA.

Rousseau, Peter L. “Backing, the Quantity Theory, and the Transition to the U.S. Dollar, 1723-1850,” American Economic Review, vol. 97, no. 2 (2007), pp. 266-270.

Ruffhead, Owen. (ed.) The Statutes at Large, from the Magna Charta to the End of the last Parliament, 1761, 18 vols., London: Mark Basket, 1763-1800.

Sachs, William S. The Business Outlook in the Northern Colonies, 1750-1775, Ph. D. Dissertation, Columbia University, 1957.

Schweitzer, Mary M. Custom and Contract: Household, Government, and the Economy in Colonial Pennsylvania, New York: Columbia University Press, 1987.

Schweitzer, Mary M. “State-Issued Currency and the Ratification of the U.S. Constitution,” Journal of Economic History, 49 (1989), pp. 311-22.

Shalhope, Robert E. A Tale of New England: the Diaries of Hiram Harwood, Vermont Farmer, 1810–1837, Baltimore: Johns Hopkins University Press, 2003.

Smith, Bruce. “American Colonial Monetary Regimes: The Failure of the Quantity Theory and Some Evidence in Favor of an Alternate View,” The Canadian Journal of Economics, 18 (1985a), pp. 531-64.

Smith, Bruce. “Some Colonial Evidence on Two Theories of Money: Maryland and the Carolinas,” Journal of Political Economy, 93 (1985b), pp. 1178-1211.

Smith, Bruce. “The Relationship between Money and Prices: Some Historical Evidence Reconsidered,” Federal Reserve Bank of Minneapolis Quarterly Review, vol 12, no. 3 (1988), pp. 19-32.

Solomon, Raphael E. “Foreign Specie Coins in the American Colonies,” in Eric P. Newman (ed.), Studies on Money in Early America, New York: The American Numismatic Society, 1976, pp. 25-42.

Soltow, James H. The Economic Role of Williamsburg, Charlottesville, VA: University of Virginia Press, 1965.

South Carolina. Public Records of South Carolina, manuscript transcripts of the South Carolina material in the British Public Record office, at Historical Commission of South Carolina.

Stevens John A. Jr. Colonial Records of the New York Chamber of Commerce, 1768-1784, New York: John F. Trow & Co., 1867.

Sufro, Joel A. Boston in Massachusetts Politics 1730-1760, Ph.D. dissertation, University of Wisconsin, 1976.

Sumner, Scott. “Colonial Currency and the Quantity Theory of Money: A Critique of Smith’s Interpretation,” Journal of Economic History, 53 (1993), pp. 139-45.

Thayer, Theodore. “The Land Bank System in the American Colonies,” Journal of Economic History, vol. 13 (Spring 1953), pp. 145-59.

Vance, Hugh. An Inquiry into the Nature and Uses of Money, Boston, 1740, reprinted in Andrew McFarland Davis, Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. III, pp. 365-474.

Weeden, William B. Economic and Social History of New England, Boston, MA: Houghton, Mifflin, 1891.

Weiss, Roger. “Issues of Paper Money in the American Colonies, 1720-1774,” Journal of Economic History, 30 (1970), pp. 770-784.

West, Roger C. “Money in the Colonial American Economy,” Economic Inquiry, vol. 16 (1978), pp. 1-15.

Wharton, Francis. (ed.) The revolutionary diplomatic correspondence of the United States, Washington, D.C.: Government Printing Office, 1889.

Whitaker, Benjamin. The Chief Justice’s Charge to the Grand Jury for the Body of this Province, Charlestown, South Carolina: Printed by Peter Timothy, 1741.

White, Phillip L. Beekman Mercantile Papers, 1746-1799, New York: New York Historical Society, 1956.

Whitehead, William A. et al. (eds.). Documents relating to the colonial, revolutionary and post-revolutionary history of the State of New Jersey, Newark: Daily Advertising Printing House, 1880-1949.

Wicker, Elmus. “Colonial Monetary Standards Contrasted: Evidence from the Seven Years War,” Journal of Economic History, 45 (1985), pp. 869-84.

Winthrop, Wait. “Winthrop Papers,” Collections of the Massachusetts Historical Society, Series 6, Vol 5, Boston: Massachusetts Historical Society, 1892.

Wright, Robert E. Hamilton Unbound: Finance and the Creation of the American Republic, Westport, Connecticut: Greenwood Press, 2002.

Citation: Michener, Ron. “Money in the American Colonies.” EH.Net Encyclopedia, edited by Robert Whaples. June 8, 2003, revised January 13, 2011.

The Marshall Plan, 1948-1951

Albrecht Ritschl, Humboldt Universitaet – Berlin

Between 1948 and 1951, the United States poured financial aid totaling $13 billion (about $100 billion at 2003 prices) into the economies of Western Europe. Officially termed the European Recovery Program (ERP), the Marshall Plan was approved by Congress in the Economic Cooperation Act of April 1948. After a transitory 90-day recovery program, the Marshall Plan spanned three ERP years from July 1948 to June 1951. Congress appropriated payments to European countries in annual installments. Most U.S. assistance under the ERP took the form of grants; the loan component had deliberately been kept low to avoid transfer problems. Distribution of the ERP funds among the recipient countries and their allocation to key sectors were placed in the hands of a U.S. board operating in Europe, the Economic Cooperation Administration (ECA). Countries would present requests for deliveries of goods to the ECA, which evaluated them and decided according to a set scheme of priorities. Dollar payments by the ECA for any deliveries were complemented by a system of national matching funds in the recipient countries, called counterpart funds. Countries would pay for ERP deliveries, not in U.S. dollars but in their own national currencies. These payments were credited to their respective counterpart funds. With a view to the German transfer problem of the inter-war period, no attempt was made to convert these payments into U.S. dollars. Instead, the ECA employed these counterpart funds to channel investment into bottleneck sectors of the respective national economies. Repayment to the U.S. of the ERP’s loan component was effected in the mid-1950s.

The Marshall Plan was by no means the first U.S. aid program for post-war Europe. Already during 1945-1947, the U.S. paid out substantial financial assistance to Europe under various schemes. In total annual amount, these payments were actually larger than the Marshall Plan itself. One key element of the Marshall Plan was to bundle existing, rival programs into a package and to identify and iron out inconsistencies. The origin of the Marshall Plan lay precisely in a crisis of the previous aid schemes. Extreme weather conditions in Europe in 1946/47 had disrupted an already shaky system of food rationing, exacerbated a coal and power shortage, and threatened to slow down the pace of recovery in Western Europe. Faced with increasing doubts in Congress about the efficiency of existing programs, the Truman administration felt the need to come up with a unifying concept. The Marshall Plan differed from previous programs mainly in the centralized administration of aid allotments and the strengthened link with America’s political agenda. Researchers currently agree that any effects of the Marshall Plan must have operated through its political conditionality rather than through its sheer size.

The Marshall Plan also did not bring about the immediate integration of Europe into international markets. Large external debts presented a serious obstacle to liberalization of Europe’s foreign exchange markets. A British attempt in 1947 to lift capital controls triggered a run on Britain’s foreign exchange reserves, and was abandoned after six weeks. As a result, markets would not easily provide the large capital imports needed for European reconstruction. The prospect of having to finance Europe’s so-called dollar gap out of U.S. aid indefinitely was instrumental in shaping the Marshall Plan. During the three years of the Plan’s operation, U.S. policy temporarily turned away from the goal of implementing the Bretton Woods system. Instead, it focused on the more modest goal of liberalizing trade and payments within Europe. To this end, the European Payments Union (EPU) was established in 1950. It lifted most capital controls within Europe, and combined a European fixed exchange rate system with a first round of trade liberalization among its members (Kaplan and Schleiminger (1989)). Although itself independent of the Marshall Plan, the EPU’s system of overdrafts and drawing rights was backed by ECA funds. The EPU was designed to smooth Europe’s transition to full convertibility with the Bretton Woods system, and had largely achieved this goal when it was dissolved formally in 1958 (Eichengreen (1993)).

Competing Interpretations of the Effects of the Marshall Plan

The Marshall Plan is still renowned as a showcase of successful U.S. intervention abroad. It was hailed by contemporaries as the decisive kick that pushed Western Europe beyond the threshold of sustained recovery (e.g., Ellis (1950), Wallich (1982 [1955])). Later observers sympathetic to the Marshall Plan pointed to its high political payoff and its allegedly strong multiplier effects (e.g., Arkes (1972), van der Wee (1986)). Still today, economic folklore credits the Marshall Plan with everything that improved in Europe after the war: the restoration of decent food supplies, the opening of supply bottlenecks in industry, and most importantly, the reconstruction of capital equipment and housing stocks in the devastated economies of Western Europe.

Later analyses of the Marshall Plan have disagreed fundamentally with this favorable interpretation, and have offered more skeptical views. An older literature interpreted the Marshall Plan largely as an American export program, inspired by Keynesian fears about stagnation in the U.S. post-war economy. At times enriched with a good dose of political Anti-Americanism, this interpretation was quick to assume that Marshall Aid primarily served the interests of U.S. big business.

A revision to this doctrine highlighted the small relative magnitude of the Marshall Plan. U.S. assistance hardly exceeded 2.5% of GNP of the recipient countries, and accounted for less than 20% of capital formation in that period. The allocation of aid often seemed to follow political, not economic needs: nearly half the resources never arrived in the disaster areas on the former European battlefields but served to buy political support in England and France, and to fend off communist threats in various countries. Also, the overall political outcome hardly seemed to fit with U.S. plans. Post-war Europe emerged from the Marshall Plan as a largely protectionist bloc of countries under French leadership. Rather than integrating smoothly into the Bretton Woods system as envisaged by the U.S., Europe seemed to work towards its own economic and financial integration. Epitomized by the work of Milward (1984), this line of research sees France as the main winner over the U.S. in a contest over political dominance in post-war Europe. In this perspective, Marshall Aid appears as a frustrated, economically less-than-significant attempt to influence the course of events in Europe.

This interpretation has seen its own revision. In spite of its small contribution to aggregate output growth, the Marshall Plan may have played a critical role in opening strategic bottlenecks in key industries. Borchardt and Buchheim (1991) argued that raw material imports under the Marshall Plan accelerated the recovery of West German manufacturing. De Long and Eichengreen (1993) argued for Marshall Plan conditionality as a key element in breaking up structural rigidities and bringing about readjustment in the recipient economies. This perspective is a classical story about backward and forward linkages: according to it, the Marshall Plan relaxed binding constraints in a complex input-output framework. Consequently, a purely macroeconomic perspective would be misleading. However, as Eichengreen and Uzan (1992) pointed out, most of these effects were probably temporary, and even their magnitude is questionable. Conditionality and the investment of counterpart funds into strategic sectors may have accelerated the speed of Europe’s convergence back to its steady state. However, to affect the conditional steady state itself, the Marshall Plan would have had to accomplish more than that, and solve a cooperation problem that free markets could not easily handle.

One such cooperation problem was a hold-up problem in labor markets, a theme recurrent also in Eichengreen (1996). Agents in Europe’s highly cartelized labor markets had the choice between reverting to an uncooperative equilibrium with high wage demands and low investment, or a new equilibrium with temporary wage restraint and high investment rates. To the extent that the ECA successfully linked Marshall Plan deliveries to wage restraint in collective bargaining, it implemented a low-wage, high-investment equilibrium. Again, however, from a neoclassical perspective this may have affected the speed of convergence more than the steady state itself.

There was also a bigger, international cooperation problem in whose solution the Marshall Plan was instrumental. Germany’s financial war machinery had left behind large amounts of debts owed to the formerly occupied countries. To this were added reparation demands that potentially dwarfed those of World War I. Any scheme for economic recovery and cooperation in Western Europe would have to deal with these unsettled financial consequences of World War II. At the same time, it had to address the security concerns of America’s allies, which perceived any reconstruction of Germany beyond the necessary minimum as a future threat. All of this implied defining a role for postwar Germany, a delicate task that had initially been left open.

The Monnet Plan for French postwar reconstruction envisioned shifting the center of European heavy industry from Germany’s Ruhr valley to France. U.S. postwar policies were initially built on similar principles: under the Morgenthau Plan, Germany’s heavy industry would be cut back and the German economy would be restructured to be based on light industry and agriculture. The price of these policies was continued U.S. assistance to Europe. Coal and steel as well as machinery were shipped to Europe across the Atlantic, while German heavy industry, a traditional exporter of such items, was operating far below capacity. Among other things, the Marshall Plan was also a reaction to this problem of deficient German deliveries to Europe.

Diplomatic historians have long argued that German reconstruction under U.S. political aegis was the core of the Marshall Plan (see particularly Gimbel (1976) and Hogan (1987)). Given continued U.S. military presence in Europe, self-sustained recovery and economic cooperation could be implemented, such that U.S. deliveries to Western Europe could be replaced by German exports. Berger and Ritschl (1995) document the diplomatic arm-twisting especially of France by the U.S., and interpret the Marshall Plan as a set of institutions designed to serve as a commitment device for economic cooperation within Europe. To implement a cooperative equilibrium, U.S. policies linked Marshall Aid to free trade within Europe, to an agreement over the economic reconstruction of West Germany, and to a standstill regarding reparations and war debts as long as Germany was divided. Viewed from this perspective, Marshall Aid and its conditionality were merely the outer shell of a program whose core was a far wider political agenda for economic cooperation in Western Europe.


Arkes, Hadley. Bureaucracy, the Marshall Plan, and the National Interest. Princeton: Princeton University Press, 1972.

Berger, Helge and Albrecht Ritschl. “Germany and the Political Economy of the Marshall Plan, 1947-1952: A Re-Revisionist View.” In Europe’s Postwar Recovery, edited by Barry Eichengreen, 199-245. Cambridge: Cambridge University Press, 1995.

Borchardt, Knut and Christoph Buchheim. “The Marshall Plan and Key Economic Sectors: A Microeconomic Perspective.” In The Marshall Plan and Germany, edited by Charles S. Maier and Gunter Bischof, 410-451. Oxford: Berg, 1991.

De Long, J. Bradford and Barry Eichengreen. “The Marshall Plan: History’s Most Successful Structural Adjustment Program.” In Postwar Economic Reconstruction and Lessons for the East Today, edited by Rudiger Dornbusch et al., 189-230. Cambridge: MIT Press, 1993.

Eichengreen, Barry. Reconstructing Europe’s Trade and Payments: The European Payments System. Manchester: Manchester University Press, 1993.

Eichengreen, Barry. “Institutions and Economic Growth: Europe after World War II.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo, 38-70. Cambridge: Cambridge University Press, 1996.

Eichengreen, Barry and Marc Uzan. “The Marshall Plan: Economic Effects and Implications for Eastern Europe and the USSR.” Economic Policy 14 (1992): 14-75.

Ellis, Howard. The Economics of Freedom: The Progress and Future of Aid to Europe. New York: Harper & Row, 1950.

Gimbel, John. The Origins of the Marshall Plan. Stanford: Stanford University Press, 1976.

Hogan, Michael J. The Marshall Plan, Britain, and the Reconstruction of Western Europe, 1947-1952. Cambridge: Cambridge University Press, 1987.

Kaplan, Jacob and Gunter Schleiminger. The European Payments Union: Financial Diplomacy in the 1950s. Oxford: Oxford University Press, 1989.

Milward, Alan S. The Reconstruction of Western Europe, 1945-1951. London: Methuen, 1984.

van der Wee, Herman. Prosperity and Upheaval: The World Economy, 1945-1980. Berkeley: University of California Press, 1986.

Wallich, Henry. Mainsprings of the German Revival. New Haven: Yale University Press, 1982 [1955].

Citation: Ritschl, Albrecht. “The Marshall Plan, 1948-1951.” EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL

The Law of One Price

Karl Gunnar Persson, University of Copenhagen

Definitions and Explanation of the Law of One Price

The concept “Law of One Price” relates to the impact of market arbitrage and trade on the prices of identical commodities that are exchanged in two or more markets. In an efficient market there must be, in effect, only one price of such commodities regardless of where they are traded. The “law” can also be applied to factor markets, as is briefly noted in the concluding section.

The intellectual history of the concept can be traced back to economists active in France in the 1760s and 1770s, who applied the “law” to markets involved in international trade. Most of the modern literature also tends to discuss the “law” in that context.

However, since transport and transaction costs are positive, the law of one price must be reformulated when applied to spatial trade. Let us first look at a case with two markets trading, say, wheat, with the wheat going in one direction only, from Chicago to Liverpool, as has been the case since the 1850s.

In this case the price difference between the Liverpool and Chicago markets for wheat of a particular quality, say, Red Winter no. 2, should be equal to the transport and transaction cost of shipping grain from Chicago to Liverpool. That is to say, the ratio of the Liverpool price to the Chicago price plus transport and transaction costs should be equal to one. Tariffs are not explicitly discussed in the next paragraphs but can easily be introduced as a specific transaction cost on a par with commissions and other trading costs.

If the price differential exceeds the transport and transaction costs, so that the price ratio is greater than one, then self-interested and well-informed traders take the opportunity to make a profit by shipping wheat from Chicago to Liverpool. Such arbitrage closes the price gap because it increases supply, and hence decreases the price, in Liverpool, while it increases demand, and hence the price, in Chicago. To be sure, the operation of the law of one price rests not only on trade flows but on inventory adjustments as well. In the example above, traders in Liverpool might choose to release wheat from warehouses in Liverpool immediately, since they anticipate shipments to Liverpool. This inventory release works to depress prices immediately. So the expectation of future shipments has an immediate impact on price because of inventory adjustments.

If the price differential falls short of the transport and transaction costs, so that the price ratio is less than one, then self-interested and well-informed traders restrict the release of wheat from the warehouses in Liverpool and reduce the demand for shipments of wheat from Chicago. These reactions trigger an immediate price increase in Liverpool, since supply there falls, and a price decrease in Chicago, since demand falls.
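
The arbitrage logic of the two preceding paragraphs can be sketched in a few lines of code. This is a minimal sketch: the prices and the 7-unit shipping cost below are hypothetical figures chosen only to illustrate the two cases, not data from the text.

```python
def flopi(p_liverpool: float, p_chicago: float, cost: float) -> float:
    """Ratio of the Liverpool price to the Chicago price plus
    transport and transaction costs (the FLOPI ratio)."""
    return p_liverpool / (p_chicago + cost)

def trader_response(p_liverpool: float, p_chicago: float, cost: float) -> str:
    ratio = flopi(p_liverpool, p_chicago, cost)
    if ratio > 1:
        # Price gap exceeds shipping costs: ship wheat to Liverpool and
        # release inventories there, pushing the ratio back toward one.
        return "ship and release inventories"
    elif ratio < 1:
        # Gap smaller than costs: hold back inventories in Liverpool
        # and cut the demand for shipments from Chicago.
        return "restrict shipments and inventory releases"
    return "equilibrium"

print(trader_response(110.0, 100.0, 7.0))  # ratio > 1: arbitrage profitable
print(trader_response(105.0, 100.0, 7.0))  # ratio < 1: hold back supply
```

Either response moves the price ratio back toward one, which is why the law of one price acts as an attractor rather than a permanent state.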

Formal Presentation of the Law of One Price

Let P_L and P_C denote the prices in Liverpool and Chicago respectively, and let T denote the transport and transaction costs of shipping the commodity from Chicago to Liverpool. All prices are measured in the same currency and units, say, shillings per imperial quarter. What has been explained above verbally can be expressed formally. The law of one price adjusted for transport and transaction costs implies the following equilibrium, which henceforward will be referred to as the Fundamental Law of One Price Identity, or FLOPI:

FLOPI = P_L / (P_C + T) = 1

In case the two markets both produce and can trade a commodity in either direction, the law of one price states that the price difference should be smaller than or equal to transport and transaction costs; FLOPI is then smaller than or equal to one. If the price difference is larger than transport and transaction costs, trade will close the gap, as suggested above. Occasionally, domestic demand and supply conditions in two producing economies can be such that price differences are smaller than transport and transaction costs and there is no need for trade. In this particular case the two economies are both self-sufficient in wheat.

A case with many markets necessitates a third elaboration of the concept of the law of one price. Let us look at a world of three markets, say Chicago, Liverpool and Copenhagen. Assume furthermore that both Chicago and Copenhagen supply Liverpool with the same commodity, say wheat. If so, the Liverpool-Copenhagen price differential must be equal to the transport and transaction costs between Copenhagen and Liverpool, and the Liverpool-Chicago price differential will be equal to the transport and transaction costs between Chicago and Liverpool. But what about the price difference between Chicago and Copenhagen? It turns out that it will be determined by the difference between the transport and transaction costs from Chicago to Liverpool and those from Copenhagen to Liverpool. If it costs 7 cents to ship a bushel of grain from Chicago to Liverpool and 5 cents from Copenhagen to Liverpool, the law of one price difference between Copenhagen and Chicago will be 2 cents, that is, 7 - 5 = 2. If the price is 100 cents per bushel in Chicago, it will be 107 in Liverpool and 102 in Copenhagen. So although the distance and transport cost between Chicago and Copenhagen are larger than between Chicago and Liverpool, the equilibrium price differential is smaller! This argument extends to many markets in the following sense: the price difference between two markets which do not trade with each other will be determined by the minimum difference in transport and transaction costs between these two markets and a market with which they both trade.
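
The three-market arithmetic above can be checked directly. The shipping costs (7 and 5 cents) and the 100-cent Chicago price are the figures used in the text:

```python
# Chicago and Copenhagen both export wheat to Liverpool; costs per bushel.
cost_chi_liv = 7  # cents, Chicago -> Liverpool
cost_cph_liv = 5  # cents, Copenhagen -> Liverpool

p_chicago = 100                              # cents per bushel
p_liverpool = p_chicago + cost_chi_liv       # gap equals the shipping cost
p_copenhagen = p_liverpool - cost_cph_liv    # pinned down via the common market

# The Chicago-Copenhagen gap is the *difference* of the two shipping costs,
# even though the two cities never trade with each other directly.
gap = p_copenhagen - p_chicago
print(p_liverpool, p_copenhagen, gap)  # 107 102 2
```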

The argument in the preceding paragraph has important implications for the relationship between distance and price differences. It is often argued that the difference between the prices of a commodity in two markets increases monotonically with distance. But this is true only if the two markets actually trade directly with each other. The likelihood that markets trade directly with each other, however, falls as the distance between them increases, and long-distance markets will therefore typically be linked only indirectly, through a third common market. Hence the paradox illustrated above: the law of one price difference between Chicago and Copenhagen is smaller, despite the larger geographical distance, than that between Copenhagen and Liverpool or between Chicago and Liverpool. In fact, it is quite easy to imagine two markets at a distance of two units from each other, both exporting to a third market in between them at a distance of one unit from each, and enjoying the same price despite the large distance.

Efficient Markets and the Law of One Price

In what follows we typically discuss the “law” in a context with trade of a particular commodity going in one direction only, that is FLOPI = 1.

In a market with arbitrage and trade, violations of the law of one price must be transitory. However, price differentials often differ from the law of one price equilibrium, that is FLOPI is larger or smaller than 1, so it is convenient to understand the law of one price as an “attractor equilibrium” rather than a permanent state in which prices and the ratio of prices rest. The concept “attractor equilibrium” can be understood with reference to the forces described in the preceding section. That is, there are forces which act to restore FLOPI when it has been subject to a shock.

A perfectly efficient set of markets will allow only very short violations of the law of one price. But this is too strong a condition to be of practical significance. There are always local shocks which take time to diffuse to other markets, and distortions of information make global shocks affect local markets differently. How long violations can persist depends on the state of information technology, on whether markets operate with inventories, and on how competitive markets are. Commodity markets with telegraphic or electronic information transmission, inventories, and no barriers to entry for traders can be expected to tolerate only short and transitory violations of the law of one price. News about a price change in one major market will have immediate effects on prices elsewhere due to inventory adjustments.

A convenient econometric way of analyzing the nature of the law of one price as an “attractor equilibrium” is a so-called error correction model. In such a model an equilibrium law of one price is estimated; if markets are not well integrated, FLOPI cannot be established or estimated. Given the existence of a long-run or equilibrium price relationship between markets, a violation is a so-called “innovation” or shock, which will be corrected so that the equilibrium price difference is restored. Here is the intuition of the model described below. Assume first that Liverpool and Chicago prices are in a law of one price equilibrium. Then the price in Chicago is subject to a local shock or “innovation,” so that the price in Chicago plus transport and transaction costs now exceeds the price in Liverpool. That happens in period t-1; in the next period, t, the price in Liverpool will increase while the price in Chicago will fall. The price falls in Chicago because demand for shipments falls, and it increases in Liverpool because of a fall in supply, as traders in Liverpool stop releasing grain from the warehouses in expectation of higher prices in the future. Eventually the FLOPI = 1 condition will be restored, but at higher prices in both Liverpool and Chicago.

To summarize, the logic behind the error correction model is that prices in Liverpool and Chicago react when there is a disequilibrium, that is, when the price differential is larger or smaller than transport and transaction costs. In this case prices adjust so that the deviation from equilibrium shrinks. The error correction model is usually expressed in differences of log prices. Let p_{L,t} and p_{C,t} denote the natural logs of the Liverpool and Chicago prices, and let τ denote the log transport and transaction cost wedge, so that the FLOPI equilibrium corresponds to p_{L,t} - p_{C,t} - τ = 0. The error correction model in this version is given by:

Δp_{L,t} = α_L (p_{L,t-1} - p_{C,t-1} - τ) + ε_{L,t}

Δp_{C,t} = α_C (p_{L,t-1} - p_{C,t-1} - τ) + ε_{C,t}

where ε_{L,t} and ε_{C,t} are statistical error terms, assumed to be normally distributed with mean zero and constant variances. Please note that these errors are not the “error” that figures in the term “error correction model.” A better name for the latter would be “shock correction model” or “innovation correction model,” to avoid misunderstanding.

α_L and α_C are so-called adjustment parameters, which indicate the power of FLOPI as an “attractor equilibrium.” The expected sign is negative for α_L and positive for α_C. To see this, imagine a case where the expression in parentheses above is positive, that is, the Liverpool price exceeds the Chicago price plus transport and transaction costs. Then the price in Liverpool should fall and the price in Chicago should increase.

The parameters α_L and α_C indicate the speed at which “innovations” are corrected: the larger the parameters are for a given magnitude of the “innovation,” the more transitory are the violations of the law of one price, that is, the faster the equilibrium law of one price (FLOPI) is restored and the more efficient the markets are. (The absolute value of the sum of the parameters should not exceed one.) The magnitude of “innovations” also tends to fall as markets become more efficient in this sense.
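
The mechanism can be sketched numerically. Below is a minimal deterministic simulation (error terms set to zero) with assumed adjustment parameters, here -0.2 and 0.1, that respect the sign and size restrictions just discussed; these values are illustrative, not estimates from the text.

```python
import math

tau = math.log(1.07)          # log transport/transaction cost wedge
alpha_L, alpha_C = -0.2, 0.1  # assumed adjustment parameters

p_C = math.log(100.0)         # log Chicago price
p_L = p_C + tau               # start in FLOPI equilibrium
p_C += math.log(1.10)         # 10% local shock to the Chicago price

for t in range(40):
    dev = p_L - p_C - tau     # deviation from the equilibrium
    # Liverpool rises (alpha_L * dev > 0 here) and Chicago falls,
    # exactly as in the verbal account of the shock.
    p_L, p_C = p_L + alpha_L * dev, p_C + alpha_C * dev

print(p_L - p_C - tau)        # deviation has shrunk essentially to zero
```

Each period the deviation is multiplied by 1 + α_L - α_C (0.7 here), so the shock decays geometrically and equilibrium is restored at higher prices in both markets.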

It is convenient to express the parameters in terms of the half-life of shocks. The half-life of a shock measures the time it takes for an original deviation from the equilibrium law of one price (FLOPI) to be reduced by half. The half-life of shocks has fallen dramatically in the long-distance trade of bulky commodities like grain, that is, over distances above 1,500 km. From the seventeenth to the late nineteenth centuries, the half-life in international wheat markets fell from as much as two years to only two weeks, as revealed by the increase in the adjustment parameters. The major reason for this dramatic change is the improvement in information transmission.
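
In the deterministic version of the model each period the deviation from FLOPI is multiplied by 1 + α_L - α_C, so the half-life follows directly; the parameter values below are illustrative assumptions, not historical estimates.

```python
import math

def half_life(alpha_L: float, alpha_C: float) -> float:
    """Periods needed for a deviation from FLOPI to fall to half
    its initial size, given the per-period decay factor."""
    rho = 1 + alpha_L - alpha_C   # deviation decays by this factor each period
    return math.log(0.5) / math.log(rho)

print(half_life(-0.05, 0.02))  # weak adjustment: long half-life
print(half_life(-0.4, 0.3))    # strong adjustment: under one period
```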

The adjustment parameters can also be illustrated graphically and Figure 1 displays the stylized characteristics of adjustment speed in long-distance wheat trade and indicates a spectacular increase in grain market efficiency, specifically in the nineteenth century.

Read Figure 1 in the following way. At time 0 the two markets are in a law of one price equilibrium (FLOPI); that is, prices in the two markets are exactly equal (set here arbitrarily at 100), and the ratio of prices is one. In this particular graphical example we abstract from transport and transaction costs. Now imagine a shock that raises the price in one market by 10 percent, to 110. That will be followed by a process of mutual adjustment to the law of one price equilibrium (FLOPI), but at higher prices in both markets compared to the situation before the shock. The new price level will not necessarily lie halfway between the initial level and the level attained in the economy that was subject to the shock: adjustments can be strong in some markets and weak in others. As can be seen in Figure 1, the adjustment is very slow in the case of the Pisa (Italy) to Ruremonde (Netherlands) trade. In fact, a new law of one price equilibrium is not attained within the 24 months covered by the figure. This indicates very low, but still significant, adjustment parameters. It is also worth noting the difference in adjustment speed between pre-telegraph Chicago-Liverpool trade in the 1850s and post-telegraph trade in the 1880s.

Figure 1

Adjustment Speed in Markets after a Local Shock in Long-distance Wheat Markets
Cases from 1700-1900.

[Figure 1 - Adjustment Speed in Markets after a Local Shock in Long-distance Wheat Markets]

Note: The data underlying the construction are from Persson (1998) and Ejrnæs and Persson (2006).

It is worth noting that the fast speed of adjustment back to the law of one price recorded for single goods in the nineteenth century contrasts strongly with the sluggish adjustment in price indices (prices for bundles of goods) across economies (Giovannini 1988). However, some of these surprising results may depend on misspecifications of the tests (Taylor 2001).

Law of One Price and Convergence

The relationship between the convergence of prices of identical goods and the law of one price is not as straightforward as often believed. As was highlighted above, the law of one price can operate as an “attractor equilibrium” despite large price differentials between markets, as long as the differential reflects transport and transaction costs and these are not prohibitively high. So in principle the adjustment parameters can be high despite large price differentials. For example, the Chicago to Liverpool trade in the nineteenth century was based on highly efficient markets, but transport and transaction costs remained at about 20-25 percent of the Chicago price of wheat. Historically, however, the convergence in price levels in the nineteenth century was associated with an improvement in market efficiency, as revealed by higher adjustment parameters. Convergence seems to be a nineteenth-century phenomenon: Figure 2 below indicates that there is no long-run convergence in wheat markets. Convergence is here expressed as the UK price relative to the U.S. price. Falling transport costs, falling tariffs and increased market efficiency, which reduced risk premiums for traders, compressed price levels in the nineteenth century. Falling transport costs were particularly important for landlocked producers when they penetrated foreign long-distance markets, as displayed by the dramatic convergence of Chicago to UK price levels. When the U.S. Midwest started to export grain to the UK, the UK price level was 2.5 times the Chicago price. However, the figure exaggerates the true convergence significantly, because the prices used do not refer to goods of identical quality. As much as a third of the convergence shown in the graph has to do with the improved quality of Chicago wheat relative to UK wheat, a factor often neglected in the convergence literature.

However, after the convergence forces had been exploited, trade policy was reversed. European farmers had little land relative to farmers in the New World economies, such as Argentina, Canada and the U.S., and the former faced strong competition from imported grain. A protectionist backlash emerged in continental Europe in the 1880s, continued during the Great Depression, and resumed after 1960, contributing to price divergence. The trends discussed above apply to agricultural commodities but not necessarily to other commodities, because protectionism is commodity specific. It is important to note, however, that long-distance ocean shipping costs have not been subject to a long-run declining trend, despite the widespread belief that they have; the convergence/divergence outcome is therefore mostly a matter of trade policy.

Figure 2
Price Convergence, United States to United Kingdom, 1800-2000

(UK price relative to Chicago or New York price of wheat)

[Figure 2 - Price Convergence, United States to United Kingdom, 1800-2000]

Source: Federico and Persson (2006).

Note: Kernel regression is a convenient way of smoothing a time series.

The Law of One Price, Trade Restrictions and Barriers to Factor Mobility

Tariffs affect the equilibrium price differential very much like transport and transaction costs, but will tariffs also affect adjustment speed and market efficiency as defined above? The answer to that question depends on the level of tariffs. If tariffs are prohibitively high, then the domestic market will be cut off from the world market and the law of one price as an “equilibrium attractor” will cease to operate.

The law of one price can also, of course, be applied to factor markets, that is, markets for capital and labor. For capital markets the law of one price implies that interest rate or return differentials on identical assets traded in different locations or nations converge to zero or close to zero; that is, the ratio of interest rates should converge to 1. If there are significant differences in interest rates between economies, capital will flow into the economy with high yields and contribute to leveling the differentials. It is clear that international capital market restrictions affect interest rate spreads. Periods of open capital markets, such as the Gold Standard period from 1870 to 1914, were periods of small and falling interest rate differentials. But the disintegration of international capital markets and the introduction of capital controls in the aftermath of the Great Depression of the 1930s witnessed an increase in interest rate spreads, which remained substantial under the Bretton Woods System (c. 1945 to 1971-73), in which capital mobility was restricted. It was not until the capital market liberalization of the 1980s and 1990s that interest rate differences again reached levels as low as a century earlier. Periods of war, when capital markets cease to function, are also periods when interest rate spreads increase.

The labor market is, however, the market that displays the most persistent violations of the law of one price. We need to be careful in spotting violations, in that we must compare wages of identically skilled laborers and take differences in costs of living into consideration. Even so, huge real wage differences persist. A major reason is that labor markets in high-income nations are shielded from international migration by a multitude of barriers.

The law of one price does not thrive under restrictions to trade or factor mobility.


Ejrnæs, Mette, and Karl Gunnar Persson. “The Gains from Improved Market Efficiency: Trade before and after the Transatlantic Telegraph,” Working paper, Department of Economics, University of Copenhagen, 2006.

Federico, Giovanni, and Karl Gunnar Persson. “Market Integration and Convergence in the World Wheat Market, 1800-2000.” In New Comparative Economic History: Essays in Honor of Jeffrey G. Williamson, edited by Timothy Hatton, Kevin O’Rourke and Alan Taylor. Cambridge, MA: MIT Press, 2006.

Giovannini, Alberto. “Exchange Rates and Traded Goods Prices.” Journal of International Economics 24 (1988): 45-68.

Persson, Karl Gunnar. Grain Markets in Europe, 1500-1900: Integration and Deregulation. Cambridge: Cambridge University Press, 1998.

Taylor, Alan M. “Potential Pitfalls for the Purchasing Power Parity Puzzle? Sampling and Specification Biases in Mean-Reversion Tests of the Law of One Price,” Econometrica 69, no. 2 (2001): 473-98.

Citation: Persson, Karl Gunnar. “The Law of One Price.” EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL