
Economic History of Hawai’i

Sumner La Croix, University of Hawai’i and East-West Center

The Hawaiian Islands are a chain of 132 islands, shoals, and reefs extending over 1,523 miles in the Northeast Pacific Ocean. Eight islands — Hawai’i, Maui, O’ahu, Kaua’i, Moloka’i, Lana’i, Ni’ihau, and Kaho’olawe — possess 99 percent of the land area (6,435 square miles) and are noted for their volcanic landforms, unique flora and fauna, and diverse climates.

From Polynesian Settlement to Western Contact

The Islands were uninhabited until sometime around 400 AD when Polynesian voyagers sailing double-hulled canoes arrived from the Marquesas Islands (Kirch, 1985, p. 68). Since the settlers had no written language and virtually no contact with the Western world until 1778, our knowledge of Hawai’i’s pre-history comes primarily from archaeological investigations and oral legends. A relatively egalitarian society and subsistence economy were coupled with high population growth rates until about 1100 when continued population growth led to a major expansion of the areas of settlement and cultivation. Perhaps under pressures of increasing resource scarcity, a new, more hierarchical social structure emerged, characterized by chiefs (ali’i) and subservient commoners (maka’ainana). In the two centuries prior to Western contact, there is considerable evidence that ruling chiefs (ali’i nui) competed to extend their lands by conquest and that this led to cycles of expansion and retrenchment.

Captain James Cook’s ships reached Hawai’i in 1778, thereby ending a long period of isolation for the Islands. Captain James King observed in 1779 that Hawaiians were generally “above the middle size” of Europeans, a rough indicator that Hawaiians generally had a diet superior to eighteenth-century Europeans. At contact, Hawaiian social and political institutions were similar to those found in other Polynesian societies. Hawaiians were sharply divided into three main social classes: ali’i (chiefs), maka’ainana (commoners), and kahuna (priests). Oral legends tell us that the Islands were usually divided into six to eight small kingdoms consisting of an island or part of an island, each governed by an ali’i nui (ruling chief). The ali’i nui had extensive rights to all lands and material goods and the ability to confiscate or redistribute material wealth at any time. Redistribution usually occurred only when a new ruling chief took office or when lands were conquered or lost. The ali’i nui gave temporary land grants to ali’i who, in turn, gave temporary land grants to konohiki (managers), who then “contracted” with maka’ainana, the great majority of the populace, to work the lands.

Hawaiian society and its economy were rooted in extended families (‘ohana) working cooperatively on an ahupua’a, a land unit running from the mountains to the sea. Numerous tropical root, tuber, and tree crops were cultivated. Taro, a wetland crop, was cultivated primarily in windward areas, while sweet potatoes and yams, both dryland crops, were cultivated in drier leeward areas. The maka’ainana apparently lived well above subsistence levels, with extensive time available for cultural activities, sports, and games. There were unquestionably periods of hardship, but these times tended to be associated with drought or other causes of poor harvests.

Unification of Hawai’i and Population Decline

The long-prevailing political equilibrium began to disintegrate shortly after the introduction of guns and the spread of new diseases to the Islands. In 1784, the most powerful ali’i nui, Kamehameha, began a war of conquest, and with his superior use of modern weapons and western advisors, he subdued all other chiefdoms, with the exception of Kaua’i, by 1795. Each chief in his ruling coalition received the right to administer large areas of land, consisting of smaller strips on various islands. Sumner La Croix and James Roumasset (1984) have argued that the strip system conveyed durability to the newly unified kingdom (by making it more costly for an ali’i to accumulate a power base on one island) and facilitated monitoring of ali’i production by the new king. In 1810, Kamehameha reached a negotiated settlement with Kaumuali’i, the ruling chief of Kaua’i, which brought the island under his control, thereby bringing the entire island chain under a single monarchy.

Exposure to Western diseases produced a massive decline in the native population of Hawai’i from 1778 through 1900 (Table 1). Estimates of Hawai’i’s population at the time of contact vary wildly, from approximately 110,000 to one million people (Bushnell, 1993; Dye, 1994). The first missionary census in 1831-1832 counted 130,313 people. A substantial portion of the decline can be attributed to a series of epidemics beginning after contact, including measles, influenza, diarrhea, and whooping cough. The introduction of venereal diseases was a factor behind declining crude birth rates. The first accurate census conducted in the Islands revealed a population of 80,641 in 1849. The native Hawaiian population reached its lowest point in 1900 when the U.S. census revealed only 39,656 full or part Hawaiians.

Table 1: Population of Hawai’i

Year       Total Population      Native Hawaiian Population
1778       110,000-1,000,000     110,000-1,000,000
1831-32    130,313               NA
1853       73,137                71,019
1872       56,897                51,531
1890       89,990                40,622
1900       154,001               39,656
1920       255,881               41,750
1940       422,770               64,310
1960       632,772               102,403
1980       964,691               115,500
2000       1,211,537             239,655

Sources: Total population from http://www.hawaii.gov/dbedt/db99/index.html, Table 1.01, Dye (1994), and Bushnell (1993). Native Hawaiian population for 1853-1960 from Schmitt (1977), p. 25. Data from the 2000 census includes people declaring “Native Hawaiian” as their only race or one of two races. See http://factfinder.census.gov/servlet/DTTable?_ts=18242084330 for the 2000 census population.

The Rise and Fall of Sandalwood and Whaling

With the unification of the Islands came the opening of foreign trade. Trade in sandalwood, a wood in demand in China for ornamental uses and burning as incense, began in 1805. The trade was interrupted by the War of 1812 and then flourished from 1816 to the late 1820s before fading away in the 1830s and 1840s (Kuykendall, 1957, I, pp. 86-87). La Croix and Roumasset (1984) have argued that the centralized organization of the sandalwood trade under King Kamehameha provided the king with incentives to harvest sandalwood efficiently. The adoption of a decentralized production system by his successor (Liholiho) led to the sandalwood being treated by ali’i as a common property resource. The reallocation of resources from agricultural production to sandalwood production not only led to rapid exhaustion of the sandalwood resource but also to famine.

As the sandalwood industry declined, Hawai’i became the base for the north-central Pacific whaling trade. The impetus for the new trade was the 1818 discovery of the “Offshore Ground” west of Peru and the 1820 discovery of rich sperm whale grounds off the coast of Japan. The first whaling ship visited the Islands in 1820, and by the late 1820s over 150 whaling ships were stopping in Hawai’i annually. While ship visits declined somewhat during the 1830s, by 1843 over 350 whaling ships annually visited the two major ports of Honolulu and Lahaina. Through the 1850s over 500 whaling ships visited Hawai’i annually. The demise of the Pacific whaling fleet during the U.S. Civil War and the rapid rise of the petroleum industry led to steep declines in the number of ships visiting Hawai’i, and after 1870 only a trickle of ships continued to visit.

Missionaries and Land Tenure

In 1819, King Kamehameha’s successor, Liholiho, abandoned the system of religious practices known as the kapu system and ordered temples (heiau) and images of the gods desecrated and burnt. In April 1820, missionaries from New England arrived and began filling the religious void with conversions to Protestant Christianity. Over the next two decades, as church attendance became widespread, the missionaries suppressed many traditional Hawaiian cultural practices, operated over 1,000 common schools, and instructed the ali’i in western political economy. The king promulgated a constitution with provisions for a Hawai’i legislature in 1840. It was followed, later in the decade, by laws establishing a cabinet, civil service, and judiciary. Under the 1852 constitution, male citizens received the right to vote in elections for a legislative lower house. Missionaries and other foreigners regularly served in cabinets through the end of the monarchy.

In 1844, the government began a 12-year program, known as the Great Mahele (Division), to dismantle the traditional system of land tenure. King Kauikeaouli gave up his interest in all island lands, retaining ownership only in selected estates. Ali’i had the right to take out fee simple title to lands held at the behest of the king. Maka’ainana had the right to claim fee simple title to small farms (kuleana). At the end of the claiming period, maka’ainana had received fewer than 40,000 acres of land, while the government (approximately 1.5 million acres), the king (approximately 900,000 acres), and the ali’i (approximately 1.5 million acres) all received substantial shares. Foreigners were initially not allowed to own land in fee simple, but an 1850 law overturned this restriction. By the end of the nineteenth century, commoners and chiefs had sold, lost, or given up their lands, with foreigners and large estates owning most non-government lands.

Lilikala Kame’eleihiwa (1992) found the origins of the Mahele in the traditional duty of a king to undertake a redistribution of land and the difficulty of such an undertaking during the initial years of missionary influence. By contrast, La Croix and Roumasset (1990) found the origins of the Mahele in the rising value of Hawaii land in sugar cultivation, with fee simple title facilitating investment in the land, irrigation facilities, and processing factories.

Sugar, Immigration, and Population Increase

The first commercially viable sugar plantation, Ladd and Co., was started on Kaua’i in 1835, and the sugar industry achieved moderate growth through the 1850s. Hawai’i’s sugar exports to California soared during the U.S. Civil War, but the end of hostilities in 1865 also meant the end of the sugar boom. The U.S. tariff on sugar posed a major obstacle to expanding sugar production in Hawai’i during peacetime, as the high tariff, ranging from 20 to 42 percent between 1850 and 1870, limited the extent of profitable sugar cultivation in the islands. Sugar interests helped elect King Kalakaua to the Hawaiian throne over the British-leaning Queen Emma in February 1874, and Kalakaua immediately sought a trade agreement with the United States. The 1876 reciprocity treaty between Hawai’i and the United States allowed duty-free sales of Hawai’i sugar and other selected agricultural products in the United States as well as duty-free sales of most U.S. manufactured goods in Hawai’i. Sugar exports from Hawai’i to the United States soared after the treaty’s promulgation, rising from 21 million pounds in 1876 to 114 million pounds in 1883 to 224.5 million pounds in 1890 (Table 2).

Table 2: Hawai’i Sugar Production (1000 short tons)

Year    Exports    Year    Production    Year    Production
1850    0.4        1900    289.5         1950    961.0
1860    0.7        1910    529.9         1960    935.7
1870    9.4        1920    560.4         1970    1,162.1
1880    31.8       1930    939.3         1990    819.6
1890    129.9      1940    976.7         1999    367.5

Sources: Data for 1850-1970 are from Schmitt (1977), pp. 418-420. Data for 1990 and 1999 are from http://www.hawaii.gov/dbedt/db99/index.html, Table 22.09. Data for 1850-1880 are exports. Data for 1910-1990 are converted to 96° raw value.

The reciprocity treaty set the tone for Hawai’i’s economy and society over the next 80 years by establishing sugar as Hawai’i’s leading industry and altering the demographic composition of the Islands via the industry’s labor demands. Rapid expansion of the sugar industry after reciprocity sharply increased its demand for labor: plantation employment rose from 3,921 in 1872 to 10,243 in 1882 to 20,536 in 1892. The increase in labor demand occurred while the native Hawaiian population continued its precipitous decline, and the Hawai’i government responded to labor shortages by allowing sugar planters to bring in overseas contract laborers bound to serve at fixed wages for three- to five-year periods. The enormous increase in the plantation workforce consisted of first Chinese, then Japanese, then Portuguese contract laborers.

The extensive investment in sugar industry lands and irrigation systems, coupled with the rapid influx of overseas contract laborers, changed the bargaining positions of Hawai’i and the United States when the reciprocity treaty was due for renegotiation in 1883. La Croix and Christopher Grandy (1997) argued that the profitability of the planters’ new investment was dependent on access to the U.S. market, and this improved the bargaining position of the United States. As a condition for renewal of the treaty, the United States demanded access to Pearl Bay [now Pearl Harbor]. King Kalakaua opposed this demand, and in July 1887, opponents of the government forced the king to accept a new constitution and cabinet. With the election of a new pro-American government in September 1887, the king signed an extension of the reciprocity treaty in October 1887 that granted access rights to Pearl Bay to the United States for the life of the treaty.

Annexation and the Sugar Economy

In 1890, the U.S. Congress enacted the McKinley Tariff, which allowed raw sugar to enter the United States free of duty and established a two-cent per pound bounty for domestic producers. The overall effect of the McKinley Tariff was to completely erase the advantages that the reciprocity treaty had provided to Hawaiian sugar producers over other foreign sugar producers selling in the U.S. market. The value of Hawaiian merchandise exports plunged from $13 million in 1890 to $10 million in 1891 to a low point of $8 million in 1892.

La Croix and Grandy (1997) argued that the McKinley Tariff threatened the wealth of the planters and induced important changes in Hawai’i’s domestic politics. King Kalakaua died in January 1891, and his sister succeeded him. After Queen Lili’uokalani proposed to declare a new constitution in January 1893, a group of U.S. residents, with the incautious assistance of the U.S. Minister and troops from a U.S. warship, overthrew the monarchy. The new government, dominated by the white minority, sought annexation by the United States beginning in 1893. Annexation was first opposed by U.S. President Cleveland, and then, during U.S. President McKinley’s term, failed to obtain Congressional approval. The advent of the Spanish-American War and the ensuing hostilities in the Philippines raised Hawai’i’s strategic value to the United States, and Hawai’i was annexed by a joint resolution of Congress in July 1898. Hawai’i became a U.S. territory with the passage of the Organic Act on June 14, 1900.

Economic Integration with the United States

With the Organic Act’s extension of U.S. law to the new territory in 1900, bound labor contracts were eliminated and the existing labor force was freed from its contracts. After annexation, the sugar planters and the Hawaii government recruited workers from Japan, Korea, the Philippines, Spain, Portugal, Puerto Rico, England, Germany, and Russia. The ensuing flood of immigrants swelled the population of the Hawaiian Islands from 109,020 people in 1896 to 232,856 people in 1915. The growth in the plantation labor force was one factor behind the expansion of sugar production from 289,500 short tons in 1900 to 939,300 short tons in 1930. Pineapple production also expanded, from just 2,000 cases of canned fruit in 1903 to 12,808,000 cases in 1931.

La Croix and Price Fishback (2000) established that European and American workers on sugar plantations were paid job-specific wage premiums relative to Asian workers and that the premium paid for unskilled American workers fell by one third between 1901 and 1915 and for European workers by 50 percent or more over the same period. While similar wage gaps disappeared during this period on the U.S. West Coast, Hawai’i plantations were able to maintain a portion of the wage gaps because they constantly found new low-wage immigrants to work in the Hawai’i market. Immigrant workers from Asia failed, however, to climb many rungs up the job ladder on Hawai’i sugar plantations, and this was a major factor behind labor unrest in the sugar industry. Edward Beechert (1985) concluded that large-scale strikes on sugar plantations during 1909 and 1920 improved the welfare of sugar plantation workers but did not lead to recognition of labor unions. Between 1900 and 1941, many sugar workers responded to limited advancement and wage prospects on the sugar plantation by leaving the plantations for jobs in Hawai’i’s growing urban areas.

The rise of the sugar industry and the massive inflow of immigrant workers into Hawaii was accompanied by a decline in the Native Hawaiian population and its overall welfare (La Croix and Rose, 1999). Native Hawaiians and their political representatives argued that government lands should be made available for homesteading to enable Hawaiians to resettle in rural areas and to return to farming occupations. The U.S. Congress enacted legislation in 1921 to reserve specified rural and urban lands for a new Hawaiian Homes Program. La Croix and Louis Rose have argued that the Hawaiian Homes Program has functioned poorly, providing benefits for only a small portion of the Hawaiian population over the course of the twentieth century.

Five firms (Castle & Cooke, Alexander & Baldwin, C. Brewer & Co., Theo. Davies & Co., and American Factors) came to dominate the sugar industry. Originally established to provide financial, labor recruiting, transportation, and marketing services to plantations, they gradually acquired the plantations and also gained control over other vital industries such as banking, insurance, retailing, and shipping. By 1933, their plantations produced 96 percent of the sugar crop. The “Big Five’s” dominance would continue until the rise of the tourism industry and statehood induced U.S. and foreign firms to enter Hawai’i’s markets.

The Great Depression hit Hawai’i hard, as employment in the sugar and pineapple industries declined during the early 1930s. In December 1936, about one-quarter of Hawai’i’s labor force was unemployed. Full recovery would not occur until the military began a buildup in the mid-1930s in reaction to Japan’s occupation of Manchuria. With the Japanese invasion of China in 1937, the number of U.S. military personnel in Hawai’i increased to 48,000 by September 1940.

World War II and its Aftermath

The Japanese attack on the American Pacific Fleet at Pearl Harbor on December 7, 1941 led to a declaration of martial law, a state that continued until October 24, 1944. The war was accompanied by a massive increase in American armed service personnel in Hawai’i, with numbers increasing from 28,000 in 1940 to 378,000 in 1944. The total population increased from 429,000 in 1940 to 858,000 in 1944, thereby substantially increasing the demand for retail, restaurant, and other consumer services. An enormous construction program to house the new personnel was undertaken in 1941 and 1942. The wartime interruption of commercial shipping reduced the tonnage of civilian cargo arriving in Hawai’i by more than 50 percent. Employees working in designated high-priority organizations, including sugar plantations, had their jobs and wages frozen in place by General Order 18, which also suspended union activity.

In March 1943, the National Labor Relations Board was allowed to resume operations, and by November 1945 the International Longshoremen’s and Warehousemen’s Union (ILWU) had organized 34 of Hawai’i’s 35 sugar plantations, the pineapple plantations, and the longshoremen. The passage of the Hawai’i Employment Relations Act in 1945 facilitated union organizing by providing agricultural workers with the same organizing rights as industrial workers.

After the War, Hawai’i’s economy stagnated, as demobilized armed services personnel left Hawai’i for the U.S. mainland. With the decline in population, real per capita personal income declined at an annual rate of 5.7 percent between 1945 and 1949 (Schmitt, 1977, pp. 148, 167). During this period, Hawai’i’s newly formed unions embarked on a series of disruptive strikes covering West Coast and Hawai’i longshoremen (1946-1949); the sugar industry (1946); and the pineapple industry (1947, 1951). The economy began a nine-year period of moderate expansion in 1949, with the annual growth rate of real personal income averaging 2.3 percent. The expansion of propeller-driven commercial air service sent visitor numbers soaring, from 15,000 in 1946 to 171,367 in 1958, and induced construction of new hotels and other tourism facilities and infrastructure. The onset of the Korean War increased the number of armed service personnel stationed in Hawai’i from 21,000 in 1950 to 50,000 in 1958. Pineapple production and canning also displayed substantial increases over the decade, increasing from 13,697,000 cases in 1949 to 18,613,000 cases in 1956.

Integration and Growth after Statehood

In 1959, Hawai’i became the fiftieth state. The transition from territorial to statehood status was one factor behind the 1958-1973 boom, in which real per capita personal income increased at an annual rate of 4 percent. The most important factor behind the long expansion was the introduction of commercial jet service in 1959, as the jet plane dramatically reduced the money and time costs of traveling to Hawai’i. Also fueled by rapidly rising real incomes in the United States and Japan, the tourism industry would continue its rapid growth through 1990. Visitor arrivals (see Table 3) increased from 171,367 in 1958 to 6,723,531 in 1990. Growth in visitor arrivals was once again accompanied by growth in the construction industry, particularly from 1965 to 1975. The military build-up during the Vietnam War also contributed to the boom by increasing defense expenditures in Hawai’i by 3.9 percent annually from 1958 to 1973 (Schmitt, 1977, pp. 148, 668).

Table 3: Visitor Arrivals to Hawai’i

Year    Visitor Arrivals    Year    Visitor Arrivals
1930    18,651              1970    1,745,904
1940    25,373              1980    3,928,789
1950    46,593              1990    6,723,531
1960    296,249             2000    6,975,866

Source: Hawai’i Tourism Authority, http://www.hawaii.gov/dbedt/monthly/historical-r.xls at Table 5 and http://www.state.hi.us/dbedt/monthly/index2k.html.

From 1973 to 1990, growth in real per capita personal income slowed to 1.1 percent annually. The defense and agriculture sectors stagnated, with most growth generated by the relentless increase in visitor arrivals. Japan’s persistently high rates of economic growth during the 1970s and 1980s spilled over to Hawai’i in the form of huge increases in the numbers of Japanese tourists and in the value of Japanese foreign investment in Hawai’i. At the end of the 1980s, the Hawai’i unemployment rate was just 2-3 percent, employment had been steadily growing since 1983, and prospects looked good for continued expansion of both tourism and the overall economy.

The Malaise of the 1990s

From 1991 to 1998, Hawai’i’s economy was hit by several negative shocks. The 1990-1991 recession in the United States, the closure of California military bases and defense plants, and uncertainty over the safety of air travel during the 1991 Gulf War combined to reduce visitor arrivals from the United States in the early and mid-1990s. Volatile and slow growth in Japan throughout the 1990s led to declines in Japanese visitor arrivals in the late 1990s. The ongoing decline in sugar and pineapple production gathered steam in the 1990s, with only a handful of plantations still in business by 2001. The cumulative impact of these adverse shocks was severe, as real per capita personal income did not change between 1991 and 1998.

The recovery that finally took hold at the end of the 1990s continued through summer 2001 despite a slowing U.S. economy. It came to an abrupt halt with the terrorist attacks of September 11, 2001, as domestic and foreign tourism declined sharply.

References

Beechert, Edward D. Working in Hawaii: A Labor History. Honolulu: University of Hawaii Press, 1985.

Bushnell, Andrew F. “The ‘Horror’ Reconsidered: An Evaluation of the Historical Evidence for Population Decline in Hawai’i, 1778-1803.” Pacific Studies 16 (1993): 115-161.

Daws, Gavan. Shoal of Time: A History of the Hawaiian Islands. Honolulu: University of Hawaii Press, 1968.

Dye, Tom. “Population Trends in Hawai’i before 1778.” The Hawaiian Journal of History 28 (1994): 1-20.

Hitch, Thomas Kemper. Islands in Transition: The Past, Present, and Future of Hawaii’s Economy. Honolulu: First Hawaiian Bank, 1992.

Kame’eleihiwa, Lilikala. Native Land and Foreign Desires: Pehea La E Pono Ai? Honolulu: Bishop Museum Press, 1992.

Kirch, Patrick V. Feathered Gods and Fishhooks: An Introduction to Hawaiian Archaeology and Prehistory. Honolulu: University of Hawaii Press, 1985.

Kuykendall, Ralph S. A History of the Hawaiian Kingdom. 3 vols. Honolulu: University of Hawaii Press, 1938-1967.

La Croix, Sumner J., and Price Fishback. “Firm-Specific Evidence on Racial Wage Differentials and Workforce Segregation in Hawaii’s Sugar Industry.” Explorations in Economic History 26 (1989): 403-423.

La Croix, Sumner J., and Price Fishback. “Migration, Labor Market Dynamics, and Wage Differentials in Hawaii’s Sugar Industry.” Advances in Agricultural Economic History 1 (2000): 31-72.

La Croix, Sumner J., and Christopher Grandy. “The Political Instability of Reciprocal Trade and the Overthrow of the Hawaiian Kingdom.” Journal of Economic History 57 (1997): 161-189.

La Croix, Sumner J., and Louis A. Rose. “The Political Economy of the Hawaiian Homelands Program.” In The Other Side of the Frontier: Economic Explorations into Native American History, edited by Linda Barrington. Boulder, Colorado: Westview Press, 1999.

La Croix, Sumner J., and James Roumasset. “An Economic Theory of Political Change in Pre-Missionary Hawaii.” Explorations in Economic History 21 (1984): 151-168.

La Croix, Sumner J., and James Roumasset. “The Evolution of Property Rights in Nineteenth-Century Hawaii.” Journal of Economic History 50 (1990): 829-852.

Morgan, Theodore. Hawaii, A Century of Economic Change: 1778-1876. Cambridge, MA: Harvard University Press, 1948.

Schmitt, Robert C. Historical Statistics of Hawaii. Honolulu: University Press of Hawaii, 1977.

Citation: La Croix, Sumner. “Economic History of Hawai’i”. EH.Net Encyclopedia, edited by Robert Whaples. September 27, 2001. URL http://eh.net/encyclopedia/economic-history-of-hawaii/

Economic Recovery in the Great Depression

Frank G. Steindl, Oklahoma State University

Introduction

The Great Depression has two meanings. One is the horrendous debacle of 1929-33 during which unemployment rose from 3 to 25 percent as the nation’s output fell over 25 percent and prices over 30 percent, in what also has been called the Great Contraction. A second meaning has the Great Depression as the entire decade of the thirties, the anxieties and apprehensions for which John Steinbeck’s The Grapes of Wrath is a metaphor. Much has been written about the unprecedented drop in economic activity in the Great Contraction, with questions about its causes and the reasons for its protracted decline especially prominent. The amount of scholarship devoted to these issues dwarfs that dealing with the recovery. But there indeed was a recovery, though long, tortuous, and uneven. In fact, it was well over twice as long as the contraction.

The economy hit its trough in March 1933. Whether or not by coincidence, President Franklin D. Roosevelt took office that month, initiating the New Deal and its fabled first hundred days, among which was the creation in June 1933 of its principal recovery vehicle, the NIRA — National Industrial Recovery Act.

Facts of the Recovery

Figure 1 uses monthly data, which allows us to see the movements of the economy more finely than quarterly or annual data would. For present purposes, the decade of the Depression runs from August 1929, when the economy was at its business cycle peak, through March 1933, the contraction trough, to June 1942, when the economy was clearly back to its long-run high-employment trend.

Figure 1 depicts the behavior of industrial output and prices over the Great Depression decade, the former as measured by the Index of Industrial Production and the latter by the Wholesale Price Index.[1] Among the notable features are the large declines in output and prices in the Great Contraction, with the former falling 52 percent and the latter 37 percent. Another noteworthy feature is the sharp, severe 1937-38 depression, when in twelve months output fell 33 percent and prices 11 percent. A third feature is the over-two-year deflation in the face of a robust increase in output following the 1937-38 depression.

The behavior of the unemployment rate is shown in Figure 2.[2] The dashed line shows the reported official data, which do not count as employed those holding “temporary” relief jobs. The solid line adjusts the official series by including those holding such temporary jobs as employed, the effect of which is to reduce the unemployment rate (Darby 1976). Each series rises from around 3 to about 23 percent between 1929 and 1932. The official series then climbs to near 25 percent the following year whereas the adjusted series is over four percentage points lower. Each continues declining the rest of the recovery, though both rise sharply in 1938. By 1940, each is still in double digits.

Three other charts that are helpful for understanding the recovery are Figures 3, 4, and 5. The first of these shows that the monetary base of the economy — which is the reserves of commercial banks plus currency held by the public — grew principally through increases in the stock of gold. In contrast to the normal situation, the base did not increase because of credit provided by the Federal Reserve System; such credit was essentially constant. That is, the Fed, the nation’s central bank, was basically passive for most of the recovery. The rise in the stock of gold occurred initially because of the revaluation of gold from $20.67 to $35 an ounce in 1933-34, which, though not changing the physical holdings of gold, raised the value of those holdings by 69 percent. The physical stock of gold, now valued at the higher price, then increased because of an inflow of gold, principally from Europe, due to the deteriorating political and economic situation there.
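The revaluation arithmetic is easy to verify. The short sketch below is illustrative only, using nothing beyond the two official gold prices quoted in the text:

```python
# Official U.S. gold price before and after the 1933-34 revaluation ($/ounce).
old_price = 20.67
new_price = 35.00

# An unchanged physical stock of gold is worth new_price/old_price times as much,
# so the percentage increase in the dollar value of the stock is:
pct_increase = (new_price / old_price - 1) * 100
print(f"{pct_increase:.1f} percent")  # about 69 percent, as stated above
```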

Figure 4 shows the behavior of the stock of money, both the narrow M1 and broader M2 measures of it. The shaded area shows the decreases in those money stocks in the 1937-38 depression; those declines were one of the reasons for that depression. Similarly, the large declines in the money stock during the Contraction of 1929-33 were major factors responsible for that debacle: the narrow measure — currency held by the public and demand deposits, M1 — fell 28 percent, and the broader measure (M1 plus time deposits at commercial banks) fell 35 percent.

Lastly, the budget position of the federal government is shown in Figure 5. One of the notable features is the sharp increase in expenditures in mid-1936 and the equally sharp decrease thereafter. The budget therefore went dramatically into deficit, and then began to move toward a surplus by the end of 1936, largely due to the tax revenues arising from the Social Security Act of 1935.

Reasons for Recovery

In Golden Fetters (1992), Barry Eichengreen advanced the basis for the most widely accepted understanding of the slide and recovery of economies in the 1930s. The depression was a worldwide phenomenon, as indicated in Figure 6, which shows the behavior of industrial production for several major countries. His basic thesis related to the gold standard and the manner in which countries altered their behavior under it during the 1930s. Under the classical “rules of the game,” countries experiencing balance of payments deficits financed those deficits by exporting gold. The loss of gold forced them to contract their money stock, which then resulted in deflationary pressures. Countries running balance of payments surpluses received gold, which expanded their money stocks, thereby inducing expansionary pressures. According to Eichengreen’s framework, countries did not “play by the rules” of the international gold standard during the depression era. Rather, countries losing gold were forced to contract. Those receiving gold, however, did not expand. This generated a net deflationary bias, as a result of which the depression was worldwide for those countries on the gold standard. As countries cut their ties to gold, which the U.S. did in early 1933, they were free to pursue expansionary monetary and fiscal policies, and this is the principal reason underlying the recovery. The inflow of gold into the U.S., for instance, expanded the reserves of the banking system, which became the basis for the increases in the stock of money.

The quantity theory of money is a useful framework for understanding movements of prices and output. The theory holds that increases in the supply of money relative to the demand for it result in increased spending on goods, services, financial assets, and real capital. The theory can be expressed in the following equation, where M is the stock of money; V is velocity, the rate at which money is spent, which is the mirror image of the demand for money (the desire to hold it); P is the price level; and y is real output:

MV = Py

Increases in M relative to V result in increases in P and y.
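The equation of exchange, MV = Py, can be illustrated with a short numerical sketch; all values below are hypothetical, chosen only to make the arithmetic transparent:

```python
# Equation of exchange: M * V = P * y.
M = 30.0   # money stock, $ billions (hypothetical)
V = 3.0    # velocity: times per year each dollar is spent (hypothetical)
y = 90.0   # real output, billions of base-year dollars (hypothetical)

P = M * V / y  # price level implied by the identity
print(P)       # 1.0

# A 10 percent rise in M, with V unchanged, raises nominal spending
# P*y by 10 percent, split between prices and real output.
nominal_before = M * V
nominal_after = (M * 1.10) * V
print(round(nominal_after / nominal_before - 1, 4))  # 0.1
```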

Research into the forces of recovery generally concludes that the growth of the money supply (M) was the principal cause of the rise in output (y) after March 1933, the trough of the Great Contraction. Furthermore, those increases in the money stock also pushed up the price level (P).

Four studies expressly dealing with the recovery are of note. Milton Friedman and Anna Schwartz show that “the broad movements in the stock of money correspond with those in income” (1963, 497) and argue that “the rapid rate of rise in the money stock certainly promoted and facilitated the concurrent economic expansion” (1963, 544). Christina Romer concludes that the growth of the money stock was “crucial to the recovery. If [it] had been held to its normal level, the U.S. economy in 1942 would have been 50 percent below its pre-Depression trend path” (1992, 768-69). She also finds that fiscal policy “contributed almost nothing to the recovery” (1992, 767), a finding that mirrors much of the postwar research on the influence of fiscal policy and stands in contrast to the widespread public belief that President Roosevelt’s budget deficits were fundamental in promoting recovery.[3]

Ben Bernanke (1995) similarly stresses the importance of the growth of the money stock as basic to the recovery. He focuses on the gold standard as a restraint on independent monetary actions, finding that “the evidence is that countries leaving the gold standard recovered substantially more rapidly and vigorously than those who did not” (1995, 12) because they “had greater freedom to initiate expansionary monetary policies” (1995, 15).

More recently Allan Meltzer (2003) finds the recovery driven by increases in the stock of money, based on an expanding monetary base due to gold. “The main policy stimulus to output came from the rise in money, an unplanned consequence of the 1934 devaluation of the dollar against gold. Later in the decade the rising threat of war, and war itself supplemented the $35 gold price as a cause of the rise in gold and money” (2003, 573).

That the recovery was due principally to the growth of the stock of money appears to be a robust conclusion of postwar research into causes of the 1930s recovery.

The manner in which the stock of money increased is important. The growing stock of gold increased the reserves of banks, hence the monetary base. With their greater reserves, banks did two things. First, they held some as precautionary reserves, called excess reserves; these are measured on the left-hand axis of Figure 7. Second, they bought U.S. government securities, more than tripling their holdings, as seen on the right-hand axis of Figure 7. Also, as seen there, commercial bank loans increased only slightly in the recovery, rising only 25 percent in over nine years.[4] The principal impetus to the growth of the money stock, therefore, was banks’ increased purchases of U.S. government securities, both ones already outstanding and ones issued to finance the deficits of those years.
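The securities channel works by deposit creation: when a bank buys a government security from the public, it pays by crediting the seller’s deposit account, which raises the money stock. A stylized balance-sheet sketch, with all dollar figures hypothetical:

```python
# Stylized commercial-bank balance sheet, $ billions (hypothetical).
reserves = 20.0
securities = 10.0
loans = 50.0
deposits = 80.0  # liabilities; assets and liabilities balance

# The bank buys $5 billion of government securities from the public,
# paying by crediting the sellers' deposit accounts.
purchase = 5.0
securities += purchase
deposits += purchase

# The balance sheet still balances, and deposits (part of M1) are higher.
assert reserves + securities + loans == deposits
print(deposits)  # 85.0
```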

The 1937-38 Depression and Revival

After four years of recovery, the economy plunged into a deep depression in May 1937, as output fell 33 percent and prices 11 percent in twelve months (shown in Figure 1). Two developments have been identified as principally responsible for the depression.[5] The one most prominently identified by contemporary scholars is the action of the Federal Reserve.

As the Fed saw the volume of excess reserves climbing month after month, it became concerned about the potential inflationary consequences if banks were to begin making more loans, thereby expanding the money supply and driving up prices. The Banking Act of 1935 gave the Fed authority to change reserve requirements. With its newly granted authority, it decided upon a “preemptive strike” against what it regarded as incipient inflation. Because it thought that the excess reserves were due to a “shortage of borrowers,” it raised reserve requirements, the effect of which was to impound in required reserves the former excess reserves. Reserve requirements were in fact doubled, in three steps: August 1936, March 1937, and May 1937. As Figure 7 shows, excess reserves therefore fell. The principal effect of the doubling of reserve requirements was to reduce the stock of money, as shown in the shaded area of Figure 4.[6]
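A simple deposit-multiplier sketch shows why the doubling squeezed the money stock once banks set out to rebuild their excess reserves; the reserve stock and ratios below are illustrative, not the actual 1936-37 figures:

```python
# With reserves R, required reserve ratio r, and a desired
# excess-reserve ratio e, banks support deposits D = R / (r + e).
R = 6.0    # bank reserves, $ billions (hypothetical)
e = 0.05   # excess reserves banks wish to hold, per dollar of deposits
r_before = 0.15  # required ratio before the increases (hypothetical)
r_after = 0.30   # required ratio after the doubling

deposits_before = R / (r_before + e)
deposits_after = R / (r_after + e)
print(round(deposits_before, 1))  # 30.0
print(round(deposits_after, 1))   # 17.1
# If banks treat excess reserves as precautionary rather than surplus,
# the same reserve base supports far fewer deposits: the money stock falls.
```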

A second factor causing the depression was the falling federal budget deficit, due to two considerations. First, there was a sharp one-time rise in expenditures in mid-1936, due to the payment of a World War I Veterans’ Bonus. Thereafter, expenditures fell — the “spike” in the figure. Secondly, the Social Security Act of 1935 mandated collection of payroll taxes beginning in 1937, with the first payments to be made several years later. The joint effect of these two was to move the budget to near surplus by late 1937.

During the depression, both output and prices fell, as was their usual behavior in depressions. The bottom of the depression was May 1938, one year after it began. Thereafter, output began growing quite robustly, rising 58 percent by August 1940. Prices, however, continued to fall, for over two years. Figure 8 shows the depression and revival experience from May 1937 through August 1940, the month in which prices last fell. The two shaded areas are the year-long depression and the price “spike” in September 1939. Of interest is that the shock of the war that spurred the price jump did not induce expectations of further price rises. Prices continued to fall for another year, through August 1940.

Difficulties with Current Understanding

According to the currently accepted interpretation, the recovery owes its existence to increases in the stock of money. One difficulty with this view is the marked contrast between the price experience of the recovery through mid-1937 and that of the later revival. How could rising prices be fundamental to the 1933 turnaround but not to the vigorous later recovery, when prices actually fell? Another difficulty is that the continued rise in the stock of money was due to the political turmoil in Europe; little intrinsic to the U.S. economy contributed to it. Presumably, had there been no continuing inflow of gold raising the monetary base and money stock, the economy would have languished until the demands of World War II made their impact. In other words, would there have been virtually no recovery had there been no Adolf Hitler?

Of more consequence is the conundrum presented by the experience of more than two years of deflation in the face of dramatically rising aggregate demand, of which the sharply rising money stock appears as a major force. If the rising stock of money were fundamental to the recovery, then prices and output would have been rising, as the aggregate demand for output, spurred also by increasing fiscal budget deficits, would have been increasing relative to aggregate supply. But in the present instance, prices were declining, not rising. Something else was driving the economy during the entire recovery, but the seemingly dominant aggregate demand pressures obscured it in the early part.

One prospective impetus to aggregate supply would be declining real wages, which would spur the hiring of additional workers. But with prices declining, it is unlikely that real wages would have fallen in the revival from the late-1930s depression. The evidence in Figure 9 shows that they in fact increased. With few exceptions, real wages rose throughout the entire deflationary period, increasing 18 percent overall and 6 percent in the revival. The rising real wage thus worked against increased supply; real wages cannot, therefore, be a factor inducing greater aggregate supply.
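The calculation behind such real-wage series is simply the nominal wage deflated by the price level; the index values in this sketch are hypothetical, not the Figure 9 data:

```python
# Real wage index = nominal wage index / price index * 100.
# With prices falling faster than nominal wages, real wages rise
# even during deflation. Index values below are hypothetical.
nominal_wage = {1937: 100.0, 1940: 104.0}
price_level = {1937: 100.0, 1940: 90.0}

real_wage = {yr: nominal_wage[yr] / price_level[yr] * 100 for yr in nominal_wage}
change = real_wage[1940] / real_wage[1937] - 1
print(f"Real wage change, 1937-40: {change:.1%}")  # 15.6%
```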

The economic phenomenon that was driving the recovery was probably increasing productivity. An early indication of this comes from the pioneering work of Robert Solow (1957), who, in the course of examining factors contributing to economic growth, developed data on the behavior of productivity. In support of this, Alexander Field presents both macroeconomic and microeconomic evidence showing that “the years 1929-41 were, in the aggregate, the most technologically progressive of any comparable period in U.S. economic history” (2003, 1399).
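Solow’s growth-accounting procedure attributes to productivity whatever output growth capital and labor cannot explain. A sketch under a Cobb-Douglas assumption, with hypothetical growth rates:

```python
# Growth accounting with Cobb-Douglas output Y = A * K**alpha * L**(1-alpha):
# productivity (TFP) growth is the residual
#   gA = gY - alpha*gK - (1-alpha)*gL.
alpha = 0.3   # capital's share of income (a conventional assumption)
gY = 0.040    # annual output growth (hypothetical)
gK = 0.010    # capital stock growth (hypothetical)
gL = 0.005    # labor input growth (hypothetical)

gA = gY - alpha * gK - (1 - alpha) * gL
print(f"Solow residual (TFP growth): {gA:.4f}")  # 0.0335
```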

The rapid productivity increases were an important factor explaining the seemingly anomalous problem of rapid recovery and the stubbornness of the unemployment rate. In today’s parlance, this has come to be known as a “jobless recovery,” one in which rising productivity generates increased output rather than greater labor input producing more.

To acknowledge that productivity increases were crucial to the economic recovery is not, however, the end of the story, because we are still left trying to understand the mechanisms underlying their sharp increases. What induced such increases? Serendipity — the idea that productivity increased at just the right time and in the appropriate amounts — is not an appealing explanation.

More likely, there is something intrinsic to the economy that encapsulates mechanisms — that is, incentives spurring inventive capital and labor innovations generating productivity increases, as well as other factors — that move the economy back to its potential.

References

Bernanke, Ben S. “The Macroeconomics of the Great Depression: A Comparative Approach.” Journal of Money, Credit, and Banking 27 (1995): 1-28.

Darby, Michael R. “Three-and-a-Half Million U.S. Employees Have Been Mislaid: Or an Explanation of Unemployment, 1934-41.” Journal of Political Economy 84 (1976): 1-16.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression 1919-1939. New York: Oxford University Press, 1992.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” American Economic Review 93 (2003): 1399-1413.

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States: 1867-1960. Princeton, NJ: Princeton University Press, 1963.

Meltzer, Allan H. A History of the Federal Reserve, volume 1, 1913-1951. Chicago: University of Chicago Press, 2003.

Romer, Christina D. “What Ended the Great Depression?” Journal of Economic History 52 (1992): 757-84.

Smithies, Arthur. “The American Economy in the Thirties.” American Economic Review Papers and Proceedings 36 (1946): 11-27.

Solow, Robert M. “Technical Change and the Aggregate Production Function.” Review of Economics and Statistics 39 (1957): 312-20.

Steindl, Frank G. Understanding Economic Recovery in the 1930s: Endogenous Propagation in the Great Depression. Ann Arbor: University of Michigan Press, 2004.


[1] Industrial production and the nation’s real output, real GDP, are highly correlated. The correlation is 0.98 for both quarterly and annual data over the recovery period.

[2] Data on the unemployment rate are available only on an annual basis for the Depression decade.

[3] In fact, large numbers of academics held that view, of which Arthur Smithies’ address to the American Economic Association is an example. His assessment was that “My main conclusion … is that fiscal policy did prove to be … the only effective means to recovery” (1946, 25, emphasis added).

[4] Real loans — loans relative to the price level — in fact declined, falling 24 percent in the 111 months of recovery.

[5] A third factor was the action of the U.S. Treasury as it “sterilized” gold, at the instigation of the Federal Reserve. By sterilization of gold, the Treasury prevented the gold inflows from increasing bank reserves.

[6] The reason the stock of money fell is that banks responded to the increased reserve requirements by trying to rebuild their excess reserves. That is, the banks did not regard their excess reserves as surplus reserves, but rather as precautionary reserves. This contrasted with the Federal Reserve’s view that the excess reserves were surplus ones, due to a “shortage” of borrowers at banks.

Citation: Steindl, Frank. “Economic Recovery in the Great Depression”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-recovery-in-the-great-depression/

An Overview of the Great Depression

Randall Parker, East Carolina University

This article provides an overview of selected events and economic explanations of the interwar era. What follows is not intended to be a detailed and exhaustive review of the literature on the Great Depression, or of any one theory in particular. Rather, it will attempt to describe the “big picture” events and topics of interest. For the reader who wishes more extensive analysis and detail, references to additional materials are also included.

The 1920s

The Great Depression, and the economic catastrophe that it was, is perhaps properly scaled in reference to the decade that preceded it, the 1920s. By conventional macroeconomic measures, this was a decade of brisk economic growth in the United States. Perhaps the moniker “the roaring twenties” summarizes this period most succinctly. The disruptions and shocking nature of World War I had been survived and it was felt the United States was entering a “new era.” In January 1920, the Federal Reserve seasonally adjusted index of industrial production, a standard measure of aggregate economic activity, stood at 81 (1935–39 = 100). When the index peaked in July 1929 it was at 114, for a growth rate of 40.6 percent over this period. Similar rates of growth over the 1920–29 period equal to 47.3 percent and 42.4 percent are computed using annual real gross national product data from Balke and Gordon (1986) and Romer (1988), respectively. Further computations using the Balke and Gordon (1986) data indicate an average annual growth rate of real GNP over the 1920–29 period equal to 4.6 percent. In addition, the relative international economic strength of this country was clearly displayed by the fact that nearly one-half of world industrial output in 1925–29 was produced in the United States (Bernanke, 1983).
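The cumulative growth figure can be checked from the cited index endpoints; a quick computation (note that the 4.6 percent annual figure in the text refers to real GNP, not to this index):

```python
# Industrial production index: 81 in January 1920, 114 at the July 1929 peak.
start, peak = 81.0, 114.0
years = 9.5  # January 1920 to July 1929

total_growth = peak / start - 1
annual_rate = (peak / start) ** (1 / years) - 1
print(f"Cumulative growth: {total_growth:.1%}")   # 40.7%
print(f"Average annual rate: {annual_rate:.1%}")  # 3.7%
```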

Consumer Durables Market

The decade of the 1920s also saw major innovations in the consumption behavior of households. The development of installment credit over this period led to substantial growth in the consumer durables market (Bernanke, 1983). Purchases of automobiles, refrigerators, radios and other such durable goods all experienced explosive growth during the 1920s as small borrowers, particularly households and unincorporated businesses, utilized their access to available credit (Persons, 1930; Bernanke, 1983; Soule, 1947).

Economic Growth in the 1920s

Economic growth during this period was interrupted only briefly by three recessions. According to the National Bureau of Economic Research (NBER) business cycle chronology, two of these recessions were from May 1923 through July 1924 and October 1926 through November 1927. Both of these recessions were very mild and unremarkable. In contrast, the 1920s began with a recession lasting 18 months from the peak in January 1920 until the trough of July 1921. Original estimates of real GNP from the Commerce Department showed that real GNP fell 8 percent between 1919 and 1920 and another 7 percent between 1920 and 1921 (Romer, 1988). The behavior of prices contributed to the naming of this recession “the Depression of 1921,” as the implicit price deflator for GNP fell 16 percent and the Bureau of Labor Statistics wholesale price index fell 46 percent between 1920 and 1921. Romer (1988) has argued, however, that the so-called “postwar depression” was not as severe as once thought. While the deflation from war-time prices was substantial, revised estimates of real GNP show falls in output of only 1 percent between 1919 and 1920 and 2 percent between 1920 and 1921. Romer (1988) also argues that the behaviors of output and prices are inconsistent with the conventional explanation of the Depression of 1921 being primarily driven by a decline in aggregate demand. Rather, the deflation and the mild recession are better understood as resulting from a decline in aggregate demand together with a series of positive supply shocks, particularly in the production of agricultural goods, and significant decreases in the prices of imported primary commodities. Overall, the upshot is that the growth path of output was hardly impeded by the three minor downturns, so that the decade of the 1920s can properly be viewed economically as a very healthy period.

Fed Policies in the 1920s

Friedman and Schwartz (1963) label the 1920s “the high tide of the Reserve System.” As they explain, the Federal Reserve became increasingly confident in the tools of policy and in its knowledge of how to use them properly. The synchronous movements of economic activity and explicit policy actions by the Federal Reserve did not go unnoticed. Taking the next step and concluding there was cause and effect, the Federal Reserve in the 1920s began to use monetary policy as an implement to stabilize business cycle fluctuations. “In retrospect, we can see that this was a major step toward the assumption by government of explicit continuous responsibility for economic stability. As the decade wore on, the System took – and perhaps even more was given – credit for the generally stable conditions that prevailed, and high hopes were placed in the potency of monetary policy as then administered” (Friedman and Schwartz, 1963).

The giving/taking of credit to/by the Federal Reserve has particular value pertaining to the recession of 1920–21. Although suggesting the Federal Reserve probably tightened too much, too late, Friedman and Schwartz (1963) call this episode “the first real trial of the new system of monetary control introduced by the Federal Reserve Act.” It is clear from the history of the time that the Federal Reserve felt as though it had successfully passed this test. The data showed that the economy had quickly recovered and brisk growth followed the recession of 1920–21 for the remainder of the decade.

Questionable Lessons “Learned” by the Fed

Moreover, Eichengreen (1992) suggests that the episode of 1920–21 led the Federal Reserve System to believe that the economy could be successfully deflated or “liquidated” without paying a severe penalty in terms of reduced output. This conclusion, however, proved to be mistaken at the onset of the Depression. As argued by Eichengreen (1992), the Federal Reserve did not appreciate the extent to which the successful deflation could be attributed to the unique circumstances that prevailed during 1920–21. The European economies were still devastated after World War I, so the demand for United States’ exports remained strong many years after the War. Moreover, the gold standard was not in operation at the time. Therefore, European countries were not forced to match the deflation initiated in the United States by the Federal Reserve (explained below pertaining to the gold standard hypothesis).

The implication is that the Federal Reserve thought that deflation could be generated with little effect on real economic activity. Therefore, the Federal Reserve was not vigorous in fighting the Great Depression in its initial stages. It viewed the early years of the Depression as another opportunity to successfully liquidate the economy, especially after the perceived speculative excesses of the 1920s. However, the state of the economic world in 1929 was not a duplicate of 1920–21. By 1929, the European economies had recovered and the interwar gold standard was a vehicle for the international transmission of deflation. Deflation in 1929 would not operate as it did in 1920–21. The Federal Reserve failed to understand the economic implications of this change in the international standing of the United States’ economy. The result was that the Depression was permitted to spiral out of control and was made much worse than it otherwise would have been had the Federal Reserve not considered it to be a repeat of the 1920–21 recession.

The Beginnings of the Great Depression

In January 1928 the seeds of the Great Depression, whenever they were planted, began to germinate. For it is around this time that two of the most prominent explanations for the depth, length, and worldwide spread of the Depression first came to be manifest. Without any doubt, the economics profession would come to a firm consensus around the idea that the economic events of the Great Depression cannot be properly understood without a solid linkage to both the behavior of the supply of money together with Federal Reserve actions on the one hand and the flawed structure of the interwar gold standard on the other.

It is well documented that many public officials, such as President Herbert Hoover and members of the Federal Reserve System in the latter 1920s, were intent on ending what they perceived to be the speculative excesses that were driving the stock market boom. Moreover, as explained by Hamilton (1987), despite plentiful denials to the contrary, the Federal Reserve assumed the role of “arbiter of security prices.” Although there continues to be debate as to whether or not the stock market was overvalued at the time (White, 1990; DeLong and Schleifer, 1991), the main point is that the Federal Reserve believed there to be a speculative bubble in equity values. Hamilton (1987) describes how the Federal Reserve, intending to “pop” the bubble, embarked on a highly contractionary monetary policy in January 1928. Between December 1927 and July 1928 the Federal Reserve conducted $393 million of open market sales of securities so that only $80 million remained in the Open Market account. Buying rates on bankers’ acceptances were raised from 3 percent in January 1928 to 4.5 percent by July, reducing Federal Reserve holdings of such bills by $193 million, leaving a total of only $185 million of these bills on balance. Further, the discount rate was increased from 3.5 percent to 5 percent, the highest level since the recession of 1920–21. “In short, in terms of the magnitudes consciously controlled by the Fed, it would be difficult to design a more contractionary policy than that initiated in January 1928” (Hamilton, 1987).

The pressure did not stop there, however. The death of Benjamin Strong, governor of the Federal Reserve Bank of New York, and the subsequent control of policy ascribed to Adolph Miller of the Federal Reserve Board ensured that the fall in the stock market was going to be made a reality. Miller believed the speculative excesses of the stock market were hurting the economy, and the Federal Reserve continued attempting to put an end to this perceived harm (Cecchetti, 1998). The amount of Federal Reserve credit that was being extended to market participants in the form of broker loans became an issue in 1929. The Federal Reserve adamantly discouraged lending that was collateralized by equities. The intentions of the Board of Governors of the Federal Reserve were made clear in a letter dated February 2, 1929 sent to Federal Reserve banks. In part the letter read:

The board has no disposition to assume authority to interfere with the loan practices of member banks so long as they do not involve the Federal reserve banks. It has, however, a grave responsibility whenever there is evidence that member banks are maintaining speculative security loans with the aid of Federal reserve credit. When such is the case the Federal reserve bank becomes either a contributing or a sustaining factor in the current volume of speculative security credit. This is not in harmony with the intent of the Federal Reserve Act, nor is it conducive to the wholesome operation of the banking and credit system of the country. (Board of Governors of the Federal Reserve 1929: 93–94, quoted from Cecchetti, 1998)

The deflationary pressure to stock prices had been applied. It was now a question of when the market would break. Although the effects were not immediate, the wait was not long.

The Economy Stumbles

The NBER business cycle chronology dates the start of the Great Depression in August 1929. For this reason many have said that the Depression started on Main Street and not Wall Street. Be that as it may, the stock market plummeted in October of 1929. The bursting of the speculative bubble had been achieved and the economy was now headed in an ominous direction. The Federal Reserve’s seasonally adjusted index of industrial production stood at 114 (1935–39 = 100) in August 1929. By October it had fallen to 110 for a decline of 3.5 percent (annualized percentage decline = 14.7 percent). After the crash, the incipient recession intensified, with the industrial production index falling from 110 in October to 100 in December 1929, or 9 percent (annualized percentage decline = 41 percent). In 1930, the index fell further from 100 in January to 79 in December, or an additional 21 percent.

Links between the Crash and the Depression?

While popular history treats the crash and the Depression as one and the same event, economists know that they were not. But there is no doubt that the crash was one of the things that got the ball rolling. Several authors have offered explanations for the linkage between the crash and the recession of 1929–30. Mishkin (1978) argues that the crash and an increase in liabilities led to a deterioration in households’ balance sheets. The reduced liquidity led consumers to defer consumption of durable goods and housing and thus contributed to a fall in consumption. Temin (1976) suggests that the fall in stock prices had a negative wealth effect on consumption, but attributes only a minor role to this given that stocks were not a large fraction of total wealth; the stock market in 1929, although falling dramatically, remained above the value it had achieved in early 1928, and the propensity to consume from wealth was small during this period. Romer (1990) provides evidence suggesting that if the stock market were thought to be a predictor of future economic activity, then the crash can rightly be viewed as a source of increased consumer uncertainty that depressed spending on consumer durables and accelerated the decline that had begun in August 1929. Flacco and Parker (1992) confirm Romer’s findings using different data and alternative estimation techniques.

Looking back on the behavior of the economy during the year of 1930, industrial production declined 21 percent, the consumer price index fell 2.6 percent, the supply of high-powered money (that is, the liabilities of the Federal Reserve that are usable as money, consisting of currency in circulation and bank reserves; also called the monetary base) fell 2.8 percent, the nominal supply of money as measured by M1 (the product of the monetary base multiplied by the money multiplier) dipped 3.5 percent and the ex post real interest rate turned out to be 11.3 percent, the highest it had been since the recession of 1920–21 (Hamilton, 1987). In spite of this, when put into historical context, there was no reason to view the downturn of 1929–30 as historically unprecedented. Its magnitude was comparable to that of many recessions that had previously occurred. Perhaps there was justifiable optimism in December 1930 that the economy might even shake off the negative movement and embark on the path to recovery, rather like what had occurred after the recession of 1920–21 (Bernanke, 1983). As we know, the bottom would not come for another 27 months.
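The relation invoked here, M1 as the monetary base times the money multiplier, can be made concrete; the ratios in this sketch are hypothetical:

```python
# M1 = monetary base * money multiplier, with multiplier (1 + c) / (c + r),
# where c is the currency-deposit ratio and r the reserve-deposit ratio.
base = 7.0  # monetary base, $ billions (hypothetical)
c = 0.10    # currency-deposit ratio (hypothetical)
r = 0.15    # reserve-deposit ratio (hypothetical)

multiplier = (1 + c) / (c + r)
M1 = base * multiplier
print(round(multiplier, 2))  # 4.4
print(round(M1, 1))          # 30.8
# Rises in c (the public hoarding currency) or in r (banks holding more
# reserves) shrink the multiplier, pulling M1 down even if the base grows.
```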

The Economy Crumbles

Banking Failures

During 1931, there was a “change in the character of the contraction” (Friedman and Schwartz, 1963). Beginning in October 1930 and lasting until December 1930, the first of a series of banking panics now accompanied the downward spasms of the business cycle. Although bank failures had occurred throughout the 1920s, the magnitude of the failures that occurred in the early 1930s was of a different order altogether (Bernanke, 1983). The absence of any type of deposit insurance resulted in the contagion of the panics being spread to sound financial institutions and not just those on the margin.

Traditional Methods of Combating Bank Runs Not Used

Moreover, institutional arrangements that had existed in the private banking system designed to provide liquidity – to convert assets into cash – to fight bank runs before 1913 were not exercised after the creation of the Federal Reserve System. For example, during the panic of 1907, the effects of the financial upheaval had been contained through a combination of lending activities by private banks, called clearinghouses, and the suspension of deposit convertibility into currency. While not preventing bank runs and the financial panic, their economic impact was lessened to a significant extent by these countermeasures enacted by private banks, as the economy quickly recovered in 1908. The aftermath of the panic of 1907 and the desire to have a central authority to combat the contagion of financial disruptions was one of the factors that led to the establishment of the Federal Reserve System. After the creation of the Federal Reserve, clearinghouse lending and suspension of deposit convertibility by private banks were not undertaken. Believing the Federal Reserve to be the “lender of last resort,” it was apparently thought that the responsibility to fight bank runs was the domain of the central bank (Friedman and Schwartz, 1963; Bernanke, 1983). Unfortunately, when the banking panics came in waves and the financial system was collapsing, being the “lender of last resort” was a responsibility that the Federal Reserve either could not or would not assume.

Money Supply Contracts

The economic effects of the banking panics were devastating. Aside from the obvious impact of the closing of failed banks and the subsequent loss of deposits by bank customers, the money supply accelerated its downward spiral. Although the economy had flattened out after the first wave of bank failures in October–December 1930, with the industrial production index steadying from 79 in December 1930 to 80 in April 1931, the remainder of 1931 brought a series of shocks from which the economy was not to recover for some time.

Second Wave of Banking Failure

In May, the failure of Austria’s largest bank, the Kredit-anstalt, touched off financial panics in Europe. In September 1931, having had enough of the distress associated with the international transmission of economic depression, Britain abandoned its participation in the gold standard. Further, just as the United States’ economy appeared to be trying to begin recovery, the second wave of bank failures hit the financial system in June and did not abate until December. In addition, the Hoover administration in December 1931, adhering to its principles of limited government, embarked on a campaign to balance the federal budget. Tax increases resulted the following June, just as the economy was to hit the first low point of its so-called “double bottom” (Hoover, 1952).

The results of these events are now evident. Between January and December 1931 the industrial production index declined from 78 to 66, or 15.4 percent; the consumer price index fell 9.4 percent; the nominal supply of M1 dipped 5.7 percent; and the ex post real interest rate remained at 11.3 percent. Although the supply of high-powered money actually increased 5.5 percent, the currency–deposit and reserve–deposit ratios began to climb, and thus the money multiplier began its plunge (Hamilton, 1987). If the economy had flattened out in the spring of 1931, then by December output, the money supply, and the price level were all on negative growth paths that were dragging the economy deeper into depression.
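The pattern described here – a rising monetary base coexisting with a falling money supply – follows from the standard money-multiplier identity, M = m × B with m = (1 + c)/(c + r), where c is the currency–deposit ratio and r the reserve–deposit ratio. A minimal sketch, using illustrative rather than historical ratios:

```python
def money_multiplier(c, r):
    """Standard multiplier: M = m * monetary base, with m = (1 + c)/(c + r),
    where c = currency/deposits and r = reserves/deposits."""
    return (1 + c) / (c + r)

# Hypothetical values: both ratios rise during a panic as the public hoards
# currency and banks hoard reserves, so the multiplier falls.
m_before = money_multiplier(c=0.15, r=0.10)   # ~4.6
m_after  = money_multiplier(c=0.30, r=0.15)   # ~2.9

# Even with the base up 5.5 percent, as in 1931, the money supply shrinks.
base_before, base_after = 100.0, 105.5
print(m_before * base_before)   # money supply before the panic
print(m_after * base_after)     # smaller, despite the larger base
```

The point of the sketch is that no plausible drift in the base could offset a multiplier collapse of this size; only aggressive base expansion could have.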

Third Wave of Banking Failure

The economic difficulties were far from over. The economy displayed some evidence of recovery in the late summer and early fall of 1932. However, in December 1932 the third, and largest, wave of banking panics hit the financial markets, and the collapse of the economy followed, with the business cycle hitting bottom in March 1933. Industrial production between January 1932 and March 1933 fell an additional 15.6 percent. For the combined years of 1932 and 1933, the consumer price index fell a cumulative 16.2 percent, the nominal supply of M1 dropped 21.6 percent, the nominal M2 money supply fell 34.7 percent, and although the supply of high-powered money increased 8.4 percent, the currency–deposit and reserve–deposit ratios climbed still faster. Thus the money multiplier continued a plunge that was not arrested until March 1933. Similar behavior of real GDP, prices, money supplies and other key macroeconomic variables occurred in many European economies as well (Snowdon and Vane, 1999; Temin, 1989).

A comparison of the macroeconomic data for August 1929 and March 1933 provides a stark contrast. The unemployment rate, 3 percent in August 1929, stood at 25 percent in March 1933. The industrial production index fell from 114 in August 1929 to 54 in March 1933, a 52.6 percent decrease. The money supply had fallen 35 percent, prices had plummeted by about 33 percent, and more than one-third of banks in the United States had either closed or been taken over by other banks. The “new era” ushered in by “the roaring twenties” was over. Roosevelt took office in March 1933, a nationwide bank holiday was declared from March 6 until March 13, and the United States abandoned the international gold standard in April 1933. Recovery commenced immediately and the economy began its long path back to the pre-1929 secular growth trend.

Table 1 summarizes the drop in industrial production in the major economies of Western Europe and North America. Table 2 gives gross national product estimates for the United States from 1928 to 1941. The constant price series adjusts for inflation and deflation.

Table 1
Indices of Total Industrial Production, 1927 to 1935 (1929 = 100)

1927 1928 1929 1930 1931 1932 1933 1934 1935
Britain 95 94 100 94 86 89 95 105 114
Canada 85 94 100 91 78 68 69 82 90
France 84 94 100 99 85 74 83 79 77
Germany 95 100 100 86 72 59 68 83 96
Italy 87 99 100 93 84 77 83 85 99
Netherlands 87 94 100 109 101 90 90 93 95
Sweden 85 88 100 102 97 89 93 111 125
U.S. 85 90 100 83 69 55 63 69 79

Source: Industrial Statistics, 1900-57 (Paris, OEEC, 1958), Table 2.

Table 2
U.S. GNP at Constant (1929) and Current Prices, 1928-1941

Year GNP at constant (1929) prices (billions of $) GNP at current prices (billions of $)
1928 98.5 98.7
1929 104.4 104.6
1930 95.1 91.2
1931 89.5 78.5
1932 76.4 58.6
1933 74.2 56.1
1934 80.8 65.5
1935 91.4 76.5
1936 100.9 83.1
1937 109.1 91.2
1938 103.2 85.4
1939 111.0 91.2
1940 121.0 100.5
1941 131.7 124.7

Contemporary Explanations

The economics profession during the 1930s was at a loss to explain the Depression. The most prominent conventional explanations were of two types. First, some observers at the time grounded their explanations firmly on the two pillars of classical macroeconomic thought: Say’s Law and the belief in the self-equilibrating powers of the market. Many argued that it was simply a question of time before wages and prices adjusted fully enough for the economy to return to full employment and vindicate the putative axiom that “supply creates its own demand.” Second, the Austrian school of thought argued that the Depression was the inevitable result of overinvestment during the 1920s. The best remedy was to let the Depression run its course so that the economy could be purged of the negative effects of the false expansion. The Austrian school viewed government intervention as a mechanism that would simply prolong the agony and make any subsequent depression worse than it would otherwise be (Hayek, 1966; Hayek, 1967).

Liquidationist Theory

The Hoover administration and the Federal Reserve Board also contained several so-called “liquidationists.” These individuals basically believed that economic agents should be forced to re-arrange their spending proclivities and alter their alleged profligate use of resources. If it took mass bankruptcies to produce this result and wipe the slate clean so that everyone could have a fresh start, then so be it. The liquidationists viewed the events of the Depression as an economic penance for the speculative excesses of the 1920s. Thus, the Depression was the price that was being paid for the misdeeds of the previous decade. This is perhaps best exemplified in the well-known quotation of Treasury Secretary Andrew Mellon, who advised President Hoover to “Liquidate labor, liquidate stocks, liquidate the farmers, liquidate real estate.” Mellon continued, “It will purge the rottenness out of the system. High costs of living and high living will come down. People will work harder, live a more moral life. Values will be adjusted, and enterprising people will pick up the wrecks from less competent people” (Hoover, 1952). Hoover apparently followed this advice as the Depression wore on. He continued to reassure the public that if the principles of orthodox finance were faithfully followed, recovery would surely be the result.

The business press at the time was not immune from such liquidationist prescriptions either. The Commercial and Financial Chronicle, in an August 3, 1929 editorial entitled “Is Not Group Speculating Conspiracy, Fostering Sham Prosperity?” complained of the economy being replete with profligate spending including:

(a) The luxurious diversification of diet advantageous to dairy men … and fruit growers …; (b) luxurious dressing … more silk and rayon …; (c) free spending for automobiles and their accessories, gasoline, house furnishings and equipment, radios, travel, amusements and sports; (d) the displacement from the farms by tractors and autos of produce-consuming horses and mules to a number aggregating 3,700,000 for the period 1918–1928 … (e) the frills of education to thousands for whom places might better be reserved at bench or counter or on the farm. (Quoted from Nelson, 1991)

Persons, in a paper which appeared in the November 1930 Quarterly Journal of Economics, demonstrates that some academic economists also held similar liquidationist views.

Although certainly not universal, the descriptions above suggest that no small part of the conventional wisdom of the time viewed the Depression as penance for past sins. In addition, it was thought that the economy would be restored to full employment equilibrium once wages and prices had adjusted sufficiently: Say’s Law would ensure a return to health, and supply would create demand sufficient to restore prosperity, if the system were simply allowed to work itself out. In his memoirs, published in 1952, 20 years after his election defeat, Herbert Hoover continued to maintain steadfastly that if Roosevelt and the New Dealers had stuck to the policies his administration put in place, the economy would have made a full recovery within 18 months of the election of 1932. The prescription was to “stay the course”; all would be well in time if the country simply took its medicine. In hindsight, it challenges the imagination to think up worse policy prescriptions for the events of 1929–33.

Modern Explanations

There remains considerable debate regarding the economic explanations for the behavior of the business cycle between August 1929 and March 1933. This section describes the main hypotheses that have been presented in the literature attempting to explain the causes for the depth, protracted length, and worldwide propagation of the Great Depression.

The United States’ experience, considering the preponderance of empirical results and historical simulations contained in the economic literature, can largely be accounted for by the monetary hypothesis of Friedman and Schwartz (1963) together with the nonmonetary/financial hypotheses of Bernanke (1983) and Fisher (1933). That is, most, but not all, of the characteristic phases of the business cycle and the depth to which output fell from 1929 to 1933 can be accounted for by the monetary and nonmonetary/financial hypotheses. The international experience, well documented in Choudhri and Kochin (1980), Hamilton (1988), Temin (1989), Bernanke and James (1991), and Eichengreen (1992), can be properly understood as resulting from a flawed interwar gold standard. Each of these hypotheses is explained in greater detail below.

Nonmonetary/Nonfinancial Theories

It should be noted that I do not include a section covering the nonmonetary/nonfinancial theories of the Great Depression. These theories – including Temin’s (1976) focus on autonomous declines in consumption, the collapse of housing construction examined in Anderson and Butkiewicz (1980), the effects of the stock market crash, the uncertainty hypothesis of Romer (1990), and the Smoot–Hawley Tariff Act of 1930 – are all worthy of mention and can rightly be apportioned some of the responsibility for initiating the Depression. However, any theory of the Depression must be able to account for the protracted problems associated with the punishing deflation imposed on the United States and the world during that era. While the nonmonetary/nonfinancial theories go a long way toward accounting for the impetus for, and the first year of, the Depression, my reading of the empirical results in the economic literature indicates that they lack the explanatory power of the three other theories mentioned above to account for the depths to which the economy plunged.

Moreover, recent research by Olney (1999) argues convincingly that the decline in consumption was not autonomous at all. Rather, consumption declined because high consumer indebtedness threatened future spending: default was expensive. Olney shows that households were shouldering an unprecedented burden of installment debt – especially for automobiles. In addition, down payments were large and contracts were short. Missed installment payments triggered repossession, reducing consumer wealth in 1930 because households lost all the equity they had acquired. Cutting consumption was the only viable strategy in 1930 for avoiding default.

The Monetary Hypothesis

In reviewing the economic history of the Depression above, it was mentioned that the supply of money fell by 35 percent, prices dropped by about 33 percent, and one-third of all banks vanished. Milton Friedman and Anna Schwartz, in their 1963 book A Monetary History of the United States, 1867–1960, call this massive drop in the supply of money “The Great Contraction.”

Friedman and Schwartz (1963) discuss and painstakingly document the synchronous movements of the real economy with the disruptions that occurred in the financial sector. They point out that the series of bank failures that occurred beginning in October 1930 worsened economic conditions in two ways. First, bank shareholder wealth was reduced as banks failed. Second, and most importantly, the bank failures were exogenous shocks and led to the drastic decline in the money supply. The persistent deflation of the 1930s follows directly from this “great contraction.”

Criticisms of Fed Policy

However, this raises an important question: Where was the Federal Reserve while the money supply and the financial system were collapsing? If the Federal Reserve was created in 1913 primarily to be the “lender of last resort” for troubled financial institutions, it was failing miserably. Friedman and Schwartz pin the blame squarely on the Federal Reserve and the failure of monetary policy to offset the contractions in the money supply. As the money multiplier continued on its downward path, the monetary base, rather than being aggressively increased, rose only slightly along a gently sloping time path. As banks were failing in waves, was the Federal Reserve attempting to contain the panics by aggressively lending to banks scrambling for liquidity? The unfortunate answer is “no.” When the panics were occurring, was there discussion of suspending deposit convertibility or the gold standard, both of which had been successfully employed in the past? Again the unfortunate answer is “no.” Did the Federal Reserve consider the fact that it had an abundant supply of free gold, and therefore that monetary expansion was feasible? Once again the unfortunate answer is “no.” The argument can be summarized by the following quotation:

At all times throughout the 1929–33 contraction, alternative policies were available to the System by which it could have kept the stock of money from falling, and indeed could have increased it at almost any desired rate. Those policies did not involve radical innovations. They involved measures of a kind the System had taken in earlier years, of a kind explicitly contemplated by the founders of the System to meet precisely the kind of banking crisis that developed in late 1930 and persisted thereafter. They involved measures that were actually proposed and very likely would have been adopted under a slightly different bureaucratic structure or distribution of power, or even if the men in power had had somewhat different personalities. Until late 1931 – and we believe not even then – the alternative policies involved no conflict with the maintenance of the gold standard. Until September 1931, the problem that recurrently troubled the System was how to keep the gold inflows under control, not the reverse. (Friedman and Schwartz, 1963)

The inescapable conclusion is that it was a failure of the policies of the Federal Reserve System in responding to the crises of the time that made the Depression as bad as it was. If monetary policy had responded differently, the economic events of 1929–33 need not have unfolded as they did. This assertion is supported by the results of Fackler and Parker (1994). Using counterfactual historical simulations, they show that if the Federal Reserve had kept the M1 money supply growing along its pre-October 1929 trend of 3.3 percent annually, most of the Depression would have been averted. McCallum (1990) reaches similar conclusions employing a monetary base feedback policy in his counterfactual simulations.
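The Fackler and Parker counterfactual amounts to compounding M1 forward along its pre-crash trend. The sketch below illustrates only the arithmetic of a 3.3 percent growth path against the roughly 35 percent actual decline; the normalization to 100 and the comparison are illustrative, not their simulation:

```python
def trend_path(start, annual_rate, years):
    """Compound a starting money stock forward along a constant growth trend."""
    return [start * (1 + annual_rate) ** t for t in range(years + 1)]

# Hypothetical normalization: M1 = 100 at the October 1929 peak.
counterfactual = trend_path(100, 0.033, 4)   # 3.3% trend, 1929-1933
actual_1933 = 100 * (1 - 0.35)               # text: M1 fell ~35% by March 1933

print(counterfactual[-1])   # ~113.9 on the trend path
print(actual_1933)          # 65.0 on the actual path
```

The gap between the two end points – roughly 114 versus 65 – conveys the scale of the monetary shortfall the counterfactual simulations attribute to Federal Reserve policy.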

Lack of Leadership at the Fed

Friedman and Schwartz trace the seeds of these regrettable events to the death of Federal Reserve Bank of New York President Benjamin Strong in 1928. Strong’s death altered the locus of power in the Federal Reserve System and left it without effective leadership. Friedman and Schwartz maintain that Strong had the personality, confidence and reputation in the financial community to lead monetary policy and sway policy makers to his point of view, and they believe that Strong would not have permitted the financial panics and liquidity crises to persist and affect the real economy. After Strong died, however, the conduct of open market operations passed from a five-man committee dominated by the Federal Reserve Bank of New York to a 12-man committee of Federal Reserve Bank governors. Decisiveness in leadership was replaced by inaction and drift. Others (Temin, 1989; Wicker, 1965) reject this point, claiming that the policies the Federal Reserve pursued in the 1930s were not inconsistent with the policies it had pursued in the 1920s.

The Fed’s Failure to Distinguish between Nominal and Real Interest Rates

Meltzer (1976) also points out errors made by the Federal Reserve. His argument is that the Federal Reserve failed to distinguish between nominal and real interest rates. While nominal rates were falling, the Federal Reserve did virtually nothing, since it construed this to be a sign of an “easy” credit market. However, in the face of deflation, real rates were rising and the credit market was in fact “tight.” Failure to make this distinction made monetary policy a contributing factor in the initial decline of 1929.

Deflation

Cecchetti (1992) and Nelson (1991) bolster the monetary hypothesis by demonstrating that the deflation during the Depression was anticipated at short horizons once it was under way. The result, using the Fisher equation, is that high ex ante real interest rates were the transmission mechanism leading from falling prices to falling output. In addition, Cecchetti (1998) and Cecchetti and Karras (1994) argue that once the lower bound of the nominal interest rate is reached, continued deflation makes the opportunity cost of holding money negative. In this instance the nature of money changes: the rate of deflation now places a floor on the real return that nonmoney assets must provide to make them attractive to hold. If they cannot exceed the return on money holdings, agents will move their assets into cash, and the result will be negative net investment and a decapitalization of the economy.
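The transmission mechanism rests on the Fisher equation, i = r + π^e, so the ex ante real rate is r = i − π^e. With nominal rates near their floor and deflation anticipated, the real cost of borrowing turns sharply positive. A toy illustration with hypothetical numbers, not estimates from the cited papers:

```python
def ex_ante_real_rate(nominal, expected_inflation):
    """Fisher equation in its approximate form: real = nominal - expected inflation."""
    return nominal - expected_inflation

# With a nominal rate of 2% and ~10% anticipated deflation (pi_e = -0.10),
# the real cost of borrowing is about 12%, even though credit looks 'easy'.
r = ex_ante_real_rate(nominal=0.02, expected_inflation=-0.10)
print(f"{r:.2f}")   # prints 0.12

# At the zero bound, cash itself earns the deflation rate in real terms,
# so any nonmoney asset must beat that floor to be worth holding.
floor_on_real_returns = 0.10
```

This is why deflation, once anticipated, is contractionary even when nominal rates are low: the relevant price of credit is the real rate, not the nominal one.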

Critics of the Monetary Hypothesis

The monetary hypothesis, however, is not without its detractors. Paul Samuelson observes that the monetary base did not fall during the Depression. Moreover, to expect the Federal Reserve to have aggressively increased the monetary base by whatever amount was necessary to stop the decline in the money supply is to judge with the benefit of hindsight; a course of action such as this was beyond the scope of the policy discussion prevailing at the time. In addition, others, like Moses Abramovitz, point out that the money supply had endogenous components that were beyond the Federal Reserve’s ability to control. Namely, the money supply may have been falling as a result of declining economic activity – so-called “reverse causation.” Moreover, the gold standard, to which the United States continued to adhere until March 1933, also tied the hands of the Federal Reserve insofar as gold outflows required the Federal Reserve to contract the supply of money. These views are also contained in Temin (1989) and Eichengreen (1992), as discussed below.

Bernanke (1983) argues that the monetary hypothesis: (i) is not a complete explanation of the link between the financial sector and aggregate output in the 1930s; (ii) does not explain how decreases in the money supply caused output to keep falling over many years, especially since it is widely believed that changes in the money supply change only prices and other nominal economic values in the long run, not real economic values like output; and (iii) is quantitatively insufficient to explain the depth of the decline in output. Bernanke (1983) not only resurrected and sharpened Fisher’s (1933) debt deflation hypothesis, but also made further contributions to what has come to be known as the nonmonetary/financial hypothesis.

The Nonmonetary/Financial Hypothesis

Bernanke (1983), building on the monetary hypothesis of Friedman and Schwartz (1963), presents an alternative interpretation of the way in which the financial crises may have affected output. The argument involves both the effects of debt deflation and the impact that bank panics had on the ability of financial markets to efficiently allocate funds from lenders to borrowers. These nonmonetary/financial theories hold that events in financial markets other than shocks to the money supply can help to account for the paths of output and prices during the Great Depression.

Fisher (1933) asserted that the dominant forces that account for “great” depressions are (nominal) over-indebtedness and deflation. Specifically, he argued that real debt burdens were substantially increased when there were dramatic declines in the price level and nominal incomes. The combination of deflation, falling nominal income and increasing real debt burdens led to debtor insolvency, lowered aggregate demand, and thereby contributed to a continuing decline in the price level and thus further increases in the real burden of debt.
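Fisher’s mechanism is ultimately the arithmetic of a fixed nominal debt divided by a falling price level. A sketch using illustrative figures roughly matching the one-third deflation cited above:

```python
def real_debt_burden(nominal_debt, price_level):
    """Real burden of a fixed nominal debt at a given price level
    (base-year price level normalized to 1.0)."""
    return nominal_debt / price_level

# A debtor owes $1,000 fixed in nominal terms; prices fall by roughly a third
# (as consumer prices did, 1929-33) and nominal incomes fall with them.
before = real_debt_burden(1000, 1.00)   # 1000 in base-year goods
after  = real_debt_burden(1000, 0.67)   # ~1493: the real burden is up ~49%

print(after / before)   # ratio of real burdens
```

The debtor’s nominal obligation never changed, yet in terms of goods (and of shrunken nominal income) the debt grew by nearly half, which is the insolvency channel Fisher emphasized.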

The “Credit View”

Bernanke (1983), in what is now called the “credit view,” provided additional details to help explain Fisher’s debt deflation hypothesis. He argued that in normal circumstances, an initial decline in prices merely reallocates wealth from debtors to creditors, such as banks. Usually, such wealth redistributions are minor in magnitude and have no first-order impact on the economy. However, in the face of large shocks, deflation in the prices of assets forfeited to banks by debtor bankruptcies leads to a decline in the nominal value of assets on bank balance sheets. For a given value of bank liabilities, also denominated in nominal terms, this deterioration in bank assets threatens insolvency. As banks reallocate away from loans to safer government securities, some borrowers, particularly small ones, are unable to obtain funds, often at any price. Further, if this reallocation is long-lived, the shortage of credit for these borrowers helps to explain the persistence of the downturn. As the disappearance of bank financing forces lower expenditure plans, aggregate demand declines, which again contributes to the downward deflationary spiral. For debt deflation to be operative, it is necessary to demonstrate that there was a substantial build-up of debt prior to the onset of the Depression and that the deflation of the 1930s was at least partially unanticipated at medium- and long-term horizons at the time that the debt was being incurred. Both of these conditions appear to have been in place (Fackler and Parker, 2001; Hamilton, 1992; Evans and Wachtel, 1993).

The Breakdown in Credit Markets

In addition, the financial panics that occurred hindered the credit allocation mechanism. Bernanke (1983) explains that the process of credit intermediation requires substantial information gathering and non-trivial market-making activities. The financial disruptions of 1930–33 are correctly viewed as substantial impediments to the performance of these services and thus impaired the efficient allocation of credit between lenders and borrowers. That is, financial panics and debtor and business bankruptcies resulted in an increase in the real cost of credit intermediation. As the cost of credit intermediation increased, sources of credit for many borrowers (especially households, farmers and small firms) became expensive or even unobtainable at any price. This tightening of credit put downward pressure on aggregate demand and helped turn the recession of 1929–30 into the Great Depression. The empirical support for the nonmonetary/financial hypothesis during the Depression is substantial (Bernanke, 1983; Fackler and Parker, 1994, 2001; Hamilton, 1987, 1992), although support for the “credit view” as the transmission mechanism of monetary policy in post-World War II economic activity is substantially weaker. In combination, considering the preponderance of empirical results and historical simulations contained in the economic literature, the monetary hypothesis and the nonmonetary/financial hypothesis go a substantial distance toward accounting for the economic experiences of the United States during the Great Depression.

The Role of Pessimistic Expectations

To this combination, the behavior of expectations should also be added. As explained by James Tobin, there was another reason for a “change in the character of the contraction” in 1931. Although Friedman and Schwartz attribute this “change” to the bank panics that occurred, Tobin points out that change also took place because of the emergence of pessimistic expectations. If it was thought that the early stages of the Depression were symptomatic of a recession that was not different in kind from similar episodes in our economic history, and that recovery was a real possibility, the public need not have had pessimistic expectations. Instead the public may have anticipated things would get better. However, after the British left the gold standard, expectations changed in a very pessimistic way. The public may very well have believed that the business cycle downturn was not going to be reversed, but rather was going to get worse than it was. When households and business investors begin to make plans based on the economy getting worse instead of making plans based on anticipations of recovery, the depressing economic effects on consumption and investment of this switch in expectations are common knowledge in the modern macroeconomic literature. For the literature on the Great Depression, the empirical research conducted on the expectations hypothesis focuses almost exclusively on uncertainty (which is not the same thing as pessimistic/optimistic expectations) and its contribution to the onset of the Depression (Romer, 1990; Flacco and Parker, 1992). Although Keynes (1936) writes extensively about the state of expectations and their economic influence, the literature is silent regarding the empirical validity of the expectations hypothesis in 1931–33. Yet, in spite of this, the continued shocks that the United States’ economy received demonstrated that the business cycle downturn of 1931–33 was of a different kind than had previously been known. 
Once the public believed this to be so and made their plans accordingly, the results must have been economically devastating. There is no formal empirical confirmation, and I have not segregated the expectations hypothesis as a separate hypothesis in the overview. However, the logic of the argument above compels me to the opinion that the expectations hypothesis is an impressive addition to the monetary hypothesis and the nonmonetary/financial hypothesis in accounting for the economic experiences of the United States during the Great Depression.

The Gold Standard Hypothesis

Recent research on the operation of the interwar gold standard has deepened our understanding of the Depression and its international character. The way in which the interwar gold standard was structured and operated provides a convincing explanation of the international transmission of deflation and depression that occurred in the 1930s.

The story has its beginning in the 1870–1914 period. During this time the gold standard functioned as a pegged exchange rate system where certain rules were observed. Namely, it was necessary for countries to permit their money supplies to be altered in response to gold flows in order for the price-specie flow mechanism to function properly. It operated successfully because countries that were gaining gold allowed their money supply to increase and raise the domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Countries that were losing gold were obligated to permit their money supply to decrease and generate a decline in their domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Eichengreen (1992) discusses and extensively documents that the gold standard of this period functioned as smoothly as it did because of the international commitment countries had to the gold standard and the level of international cooperation exhibited during this time. “What rendered the commitment to the gold standard credible, then, was that the commitment was international, not merely national. That commitment was activated through international cooperation” (Eichengreen, 1992).
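The price-specie flow mechanism described above can be caricatured in a few lines: gold flows from the high-price (deficit) country to the low-price country, each money stock – and with it each price level – adjusts, and the gap closes. This is a stylized sketch with made-up numbers, not a calibrated model:

```python
def adjust(money_a, money_b, gold_flow):
    """One round of price-specie flow: gold moves from country A to country B,
    and each money stock (and, with it, each price level) moves in step."""
    return money_a - gold_flow, money_b + gold_flow

# Country A has the larger money stock and higher prices, so it runs a
# trade deficit and loses gold to B; the flow shrinks as prices converge.
m_a, m_b = 120.0, 80.0
for _ in range(4):
    flow = 0.25 * (m_a - m_b)   # flow proportional to the price-level gap
    m_a, m_b = adjust(m_a, m_b, flow)

print(m_a, m_b)   # both converging toward 100 as equilibrium is restored
```

The equilibrating property depends on both sides playing by the rules: the gold gainer must let its money stock rise, the loser must let its fall. The interwar flaws discussed below are precisely departures from this symmetry.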

The gold standard was suspended when the hostilities of World War I broke out. By the end of 1928, major countries such as the United States, the United Kingdom, France and Germany had re-established ties to a functioning fixed exchange rate gold standard. However, Eichengreen (1992) points out that the world in which the gold standard was being re-established was not the world in which it had functioned before World War I. A credible commitment to the gold standard, as Hamilton (1988) explains, required that a country maintain fiscal soundness and political objectives that ensured the monetary authority could pursue a monetary policy consistent with long-run price stability and continuous convertibility of the currency. Successful operation required these conditions to be in place before the gold standard was re-established. However, many governments during the interwar period returned to the gold standard under precisely the opposite circumstances. They re-established ties to the gold standard because, amid the political chaos that followed World War I, they were incapable of fiscal soundness and lacked political objectives conducive to reforming monetary policy so that it could ensure long-run price stability. “By this criterion, returning to the gold standard could not have come at a worse time or for poorer reasons” (Hamilton, 1988). Kindleberger (1973) stresses that the pre-World War I gold standard functioned as well as it did because of the unquestioned leadership exercised by Great Britain. After World War I and the relative decline of Britain, the United States did not exhibit the same strength of leadership. The upshot is that the post-World War I environment was unsuitable for re-establishing the gold standard, and the interwar gold standard was destined to drift in a state of malperformance, as no one took responsibility for its proper functioning.
However, the problems did not end there.

Flaws in the Interwar International Gold Standard

Lack of Symmetry in the Response of Gold-Gaining and Gold-Losing Countries

The interwar gold standard operated with four structural/technical flaws that almost certainly doomed it to failure (Eichengreen, 1986; Temin, 1989; Bernanke and James, 1991). The first, and most damaging, was the lack of symmetry in the response of gold-gaining countries and gold-losing countries that resulted in a deflationary bias that was to drag the world deeper into deflation and depression. If a country was losing gold reserves, it was required to decrease its money supply to maintain its commitment to the gold standard. Given that a minimum gold reserve had to be maintained and that countries became concerned when the gold reserve fell within 10 percent of this minimum, little gold could be lost before the necessity of monetary contraction, and thus deflation, became a reality. Moreover, with a fractional gold reserve ratio of 40 percent, the result was a decline in the domestic money supply equal to 2.5 times the gold outflow. On the other hand, there was no such constraint on countries that experienced gold inflows. Gold reserves were accumulated without the binding requirement that the domestic money supply be expanded. Thus the price–specie flow mechanism ceased to function and the equilibrating forces of the pre-World War I gold standard were absent during the interwar period. If a country attracting gold reserves were to embark on a contractionary path, the result would be the further extraction of gold reserves from other countries on the gold standard and the imposition of deflation on their economies as well, as they were forced to contract their money supplies. “As it happened, both of the two major gold surplus countries – France and the United States, who at the time together held close to 60 percent of the world’s monetary gold – took deflationary paths in 1928–1929” (Bernanke and James, 1991).
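The 2.5 figure is simply the reciprocal of the 40 percent cover ratio: with money fractionally backed by gold at ratio g, a gold outflow forces a monetary contraction of 1/g times the loss. A quick check:

```python
def forced_contraction(gold_outflow, cover_ratio):
    """Required fall in the money supply when money is backed by gold
    at the given fractional cover ratio (contraction = outflow / ratio)."""
    return gold_outflow / cover_ratio

# With 40% cover, a 10-unit gold loss forces a contraction of 1/0.40 = 2.5
# times the outflow; the gold gainer faced no symmetric obligation to expand.
print(forced_contraction(gold_outflow=10, cover_ratio=0.40))
```

The asymmetry follows immediately: every unit of gold a deficit country lost destroyed 2.5 units of its money, while the same unit arriving in France or the United States was free to sit sterilized, so the world money stock could only ratchet downward.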

Foreign Exchange Reserves

Second, countries that did not have reserve currencies could hold their minimum reserves in the form of both gold and convertible foreign exchange reserves. If the threat of devaluation of a reserve currency appeared likely, a country holding foreign exchange reserves could divest itself of the foreign exchange, as holding it became a more risky proposition. Further, the convertible reserves were usually only fractionally backed by gold. Thus, if countries were to prefer gold holdings as opposed to foreign exchange reserves for whatever reason, the result would be a contraction in the world money supply as reserves were destroyed in the movement to gold. This effect can be thought of as equivalent to the effect on the domestic money supply in a fractional reserve banking system of a shift in the public’s money holdings toward currency and away from bank deposits.
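The fractional-reserve analogy can be made concrete with a toy balance sheet. The 40 percent backing ratio and all figures below are hypothetical assumptions for illustration: world reserves are gold held directly plus foreign exchange reserves, and because each unit of foreign exchange is itself only fractionally backed by gold, a shift out of foreign exchange into gold shrinks total world reserves.

```python
# Toy model of the world reserve "pyramid" (hypothetical numbers).
# World monetary gold G is fixed. Of it, 0.40 per unit of FX sits in
# reserve-currency countries backing foreign-exchange (FX) reserves; the
# rest is held directly. World reserves therefore exceed world gold
# whenever FX reserves are held:
#   reserves = (G - 0.40 * FX) + FX = G + 0.60 * FX

GOLD_BACKING = 0.40  # assumed fractional gold backing of FX reserves

def world_reserves(world_gold, fx_reserves):
    gold_held_directly = world_gold - GOLD_BACKING * fx_reserves
    return gold_held_directly + fx_reserves

before = world_reserves(world_gold=1000, fx_reserves=500)
after = world_reserves(world_gold=1000, fx_reserves=200)
# Shifting 300 of reserves from FX into gold destroys 0.60 * 300 = 180
# of world reserves, even though world gold is unchanged.
print(before, after, before - after)
```

This is the same mechanism as a shift from bank deposits into currency under fractional-reserve banking: the total contracts even though no underlying asset disappears.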

The Bank of France and Open Market Operations

Third, the powers of many European central banks were restricted or withheld outright. In particular, as discussed by Eichengreen (1986), the Bank of France was prohibited from engaging in open market operations, i.e., the purchase or sale of government securities. Given that France was one of the countries amassing gold reserves, this restriction largely prevented it from adhering to the rules of the gold standard. The proper response would have been to expand the money supply and inflate so as not to continue attracting gold reserves and imposing deflation on the rest of the world. This was not done. France continued to accumulate gold until 1932 and did not leave the gold standard until 1936.

Inconsistent Currency Valuations

Lastly, the gold standard was re-established at parities that were unilaterally determined by each individual country. When France returned to the gold standard in 1926, it returned at a parity rate that is believed to have undervalued the franc. When Britain returned to the gold standard in 1925, it returned at a parity rate that is believed to have overvalued the pound. In this situation, the only sustainable equilibrium required the French to inflate their economy in response to the gold inflows. However, given its legacy of inflation during the 1921–26 period, France steadfastly resisted inflation (Eichengreen, 1986). The maintenance of the gold standard and the resistance to inflation were now inconsistent policy objectives. The Bank of France’s inability to conduct open market operations only made matters worse. The result was the accumulation of gold and the export of deflation to the rest of the world.

The Timing of Recoveries

Taken together, the flaws described above made the interwar gold standard dysfunctional and, in the end, unsustainable. Looking back, we observe that the timing of departure from the gold standard and of subsequent recovery differed across countries: for some, recovery came sooner; for others, later. It is in this timing of departure from the gold standard that recent research has produced a remarkable empirical finding. From the work of Choudri and Kochin (1980), Eichengreen and Sachs (1985), Temin (1989), and Bernanke and James (1991), we now know that the sooner a country abandoned the gold standard, the quicker its recovery commenced. Spain, which never restored its participation in the gold standard, missed the ravages of the Depression altogether. Britain left the gold standard in September 1931, and started to recover. Sweden left at the same time as Britain, and started to recover. The United States left in March 1933, and recovery commenced. France, Holland, and Poland, which continued to adhere to the gold standard until 1936, saw their economies struggle on after the United States’ recovery began. Only after they left did recovery start; departure from the gold standard freed a country from the ravages of deflation.

The Fed and the Gold Standard: The “Midas Touch”

Temin (1989) and Eichengreen (1992) argue that it was the unbending commitment to the gold standard that generated deflation and depression worldwide. They emphasize that the gold standard required fiscal and monetary authorities around the world to submit their economies to internal adjustment and economic instability in the face of international shocks. Given how the gold standard tied countries together, if the gold parity were to be defended and devaluation was not an option, unilateral monetary actions by any one country were pointless. The end result is that Temin (1989) and Eichengreen (1992) reject Friedman and Schwartz’s (1963) claim that the Depression was caused by a series of policy failures on the part of the Federal Reserve. Actions taken in the United States, according to Temin (1989) and Eichengreen (1992), cannot be properly understood in isolation from the rest of the world. If the commitment to the gold standard was to be maintained, monetary and fiscal authorities worldwide had little choice in responding to the crises of the Depression. Why did the Federal Reserve continue a policy of inaction during the banking panics? Because the commitment to the gold standard, what Temin (1989) has labeled “The Midas Touch,” gave it no choice but to let the banks fail. Monetary expansion and the injection of liquidity would lower interest rates, lead to a gold outflow, and potentially be contrary to the rules of the gold standard. Continued deflation due to gold outflows would begin to call into question the monetary authority’s commitment to the gold standard. “Defending gold parity might require the authorities to sit idly by as the banking system crumbled, as the Federal Reserve did at the end of 1931 and again at the beginning of 1933” (Eichengreen, 1992). Thus, if adherence to the gold standard were to be maintained, the money supply was endogenous with respect to the balance of payments and beyond the influence of the Federal Reserve.

Eichengreen (1992) concludes further that what made the pre-World War I gold standard so successful was absent during the interwar period: credible commitment to the gold standard activated through international cooperation in its implementation and management. Had these important ingredients of the pre-World War I gold standard been present during the interwar period, twentieth-century economic history may have been very different.

Recovery and the New Deal

March 1933 was the rock bottom of the Depression, and the inauguration of Franklin D. Roosevelt represented a sharp break with the status quo. Upon taking office, Roosevelt declared a bank holiday; the United States left the interwar gold standard the following month; and the government commenced several measures designed to resurrect the financial system. These measures included: (i) the expansion of the Reconstruction Finance Corporation (established under Hoover in 1932), which set about funneling large sums of liquidity to banks and other intermediaries; (ii) the Securities Exchange Act of 1934, which established margin requirements for bank loans used to purchase stocks and bonds and increased information requirements for potential investors; and (iii) the Glass–Steagall Act, which strictly separated commercial banking and investment banking. Although these measures delivered some immediate relief to financial markets, lenders remained reluctant to extend credit after the events of 1929–33, and the recovery of financial markets was slow and incomplete. Bernanke (1983) estimates that the United States’ financial system did not begin to shed the inefficiencies under which it was operating until the end of 1935.

The NIRA

Policies designed to promote different economic institutions were enacted as part of the New Deal. The National Industrial Recovery Act (NIRA), signed into law on June 16, 1933, was designed to raise prices and wages. In addition, the Act mandated the formation of planning boards in critical sectors of the economy. The boards were charged with setting output goals for their respective sectors, and the usual result was a restriction of production. In effect, the NIRA was a license for industries to form cartels, and it was struck down as unconstitutional in 1935. The Agricultural Adjustment Act of 1933 was similar legislation designed to reduce output and raise prices in the farming sector. It too was ruled unconstitutional, in 1936.

Relief and Jobs Programs

Other policies intended to provide relief directly to people who were destitute and out of work were rapidly enacted. The Civilian Conservation Corps (CCC), the Tennessee Valley Authority (TVA), the Public Works Administration (PWA) and the Federal Emergency Relief Administration (FERA) were set up shortly after Roosevelt took office and provided jobs for the unemployed and grants to states for direct relief. The Civil Works Administration (CWA), created in 1933–34, and the Works Progress Administration (WPA), created in 1935, were also designed to provide work relief to the jobless. The Social Security Act was also passed in 1935. There surely are other programs with similar acronyms that have been left out, but the intent was the same. In the words of Roosevelt himself, addressing Congress in 1938:

Government has a final responsibility for the well-being of its citizenship. If private co-operative endeavor fails to provide work for the willing hands and relief for the unfortunate, those suffering hardship from no fault of their own have a right to call upon the Government for aid; and a government worthy of its name must make fitting response. (Quoted from Polenberg, 2000)

The Depression had shown the inaccuracy of classifying the 1920s as a “new era.” Rather, the true “new era,” summarized by Roosevelt’s words above and marked by the government’s new involvement in the economy, began in March 1933.

The NBER business cycle chronology shows continuous growth from March 1933 until May 1937, at which time a 13-month recession hit the economy. The business cycle rebounded in June 1938 and continued its upward march to and through the beginning of the United States’ involvement in World War II. The recovery that started in 1933 was impressive, with real GNP growing at annual rates in the 10 percent range between 1933 and December 1941, excluding the recession of 1937–38 (Romer, 1993). However, as reported by Romer (1993), real GNP did not return to its pre-Depression level until 1937 and did not catch up to its pre-Depression secular trend until 1942. Indeed, the unemployment rate, after peaking at 25 percent in March 1933, remained near or above double digits until 1940. It is in this sense that most economists attribute the ending of the Depression to the onset of World War II. The War brought complete recovery, as the unemployment rate quickly plummeted after December 1941 to its wartime low of below 2 percent.

Explanations for the Pace of Recovery

The question remains, however: if the War completed the recovery, what initiated it and sustained it through the end of 1941? Should we point to the relief programs of the New Deal and the leadership of Roosevelt? Certainly, they had psychological and expectational effects on consumers and investors and helped to heal the suffering experienced during that time. However, as shown by Brown (1956), Peppers (1973), and Raynold, McMillin and Beard (1991), fiscal policy contributed little to the recovery, and certainly could have done much more.

Once again we return to the financial system for answers. The abandonment of the gold standard, the impact this had on the money supply, and the deliverance from the economic effects of deflation would have to be singled out as the most important contributor to the recovery. Romer (1993) stresses that Eichengreen and Sachs (1985) have it right; recovery did not come before the decision to abandon the old gold parity was made operational. Once this became reality, devaluation of the currency permitted expansion in the money supply and inflation which, rather than promoting a policy of beggar-thy-neighbor, allowed countries to escape the deflationary vortex of economic decline. As discussed in connection with the gold standard hypothesis, the simultaneity of leaving the gold standard and recovery is a robust empirical result that reflects more than simple temporal coincidence.

Romer (1993) reports an increase in the monetary base in the United States of 52 percent between April 1933 and April 1937. The M1 money supply virtually matched this increase in the monetary base, growing 49 percent over the same period. The sources of this increase were twofold. First, aside from the immediate monetary expansion permitted by devaluation, as Romer (1993) explains, monetary expansion continued into 1934 and beyond as gold flowed to the United States from Europe due to the increasing political unrest and heightened probability of hostilities that began the progression to World War II. Second, the Treasury chose not to sterilize the gold inflows. That the increase in the money supply closely matched the increase in the monetary base is evidence that the monetary expansion resulted from policy decisions and not from endogenous changes in the money multiplier. The new regime was freed from the constraints of the gold standard, and policy makers were intent on taking actions of a different nature than those taken between 1929 and 1933.
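For perspective, the cumulative figures Romer reports can be converted into average annual rates. This back-of-the-envelope calculation is mine, not Romer's; it only restates the 52 percent and 49 percent four-year totals as compound annual growth rates.

```python
# Converting the cumulative April 1933 - April 1937 growth rates reported
# by Romer (1993) into implied average annual (compound) growth rates.

def annualized(cumulative_growth, years):
    """Average annual growth rate implied by a cumulative growth rate."""
    return (1 + cumulative_growth) ** (1 / years) - 1

base_annual = annualized(0.52, 4)  # monetary base: +52% over four years
m1_annual = annualized(0.49, 4)    # M1 money supply: +49% over four years

print(f"monetary base: {base_annual:.1%} per year")  # roughly 11% per year
print(f"M1:            {m1_annual:.1%} per year")    # roughly 10.5% per year
```

Both series imply double-digit annual monetary growth, which lines up with the 10-percent-range real GNP growth rates quoted above for the same recovery years.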

Incompleteness of the Recovery before WWII

The Depression had turned a corner and the economy was emerging from the abyss in 1933. However, it still had a long way to go to reach full recovery. Friedman and Schwartz (1963) comment that “the most notable feature of the revival after 1933 was not its rapidity but its incompleteness.” They claim that monetary policy and the Federal Reserve were passive after 1933. The monetary authorities did nothing to stop the fall from 1929 to 1933 and did little to promote the recovery. The Federal Reserve made no effort to increase the stock of high-powered money through the use of either open market operations or rediscounting; Federal Reserve credit outstanding remained “almost perfectly constant from 1934 to mid-1940” (Friedman and Schwartz, 1963). As we have seen above, it was the Treasury that was generating increases in the monetary base at the time by issuing gold certificates equal to the amount of gold reserve inflow and depositing them at the Federal Reserve. When the government spent the money, the Treasury swapped the gold certificates for Federal Reserve notes, which expanded the monetary base (Romer, 1993). Monetary policy was thought to be powerless to promote recovery, and instead it was fiscal policy that became the implement of choice. Ironically, although the research shows that fiscal policy could have done much more to aid the recovery, it was fiscal policy that now became the focus of attention. There is an easy explanation for why this was so.

The Emergence of Keynes

The economics profession as a whole was at a loss to provide cogent explanations for the events of 1929–33. In the words of Robert Gordon (1998), “economics had lost its intellectual moorings, and it was time for a new diagnosis.” There were no convincing answers regarding why the earlier theories of macroeconomic behavior failed to explain the events that were occurring, and, worse, there was no set of principles that established a guide for proper actions in the future. That changed in 1936 with the publication of Keynes’s book The General Theory of Employment, Interest and Money. Perhaps no other person and no other book in economics have had so much written about them. Many consider the arrival of Keynesian thought to have been a “revolution,” although this too is hotly contested (see, for example, Laidler, 1999). The debates that The General Theory generated have been many and long-lasting. There is little that can be said here to add to or subtract from the massive literature devoted to the ideas promoted by Keynes, whether they are viewed as right or wrong. But the influence over academic thought and economic policy that was generated by The General Theory is not in doubt.

The time was right for a set of ideas that not only explained the Depression’s course of events, but also provided a prescription for remedies that would create better economic performance in the future. Keynes and The General Theory, at the time the events were unfolding, provided just such a package. When all is said and done, we can look back in hindsight and argue endlessly about what Keynes “really meant” or what the “true” contribution of Keynesianism has been to the world of economics. At the time the Depression happened, Keynes represented a new paradigm for young scholars to latch on to. The stage was set for the nurturing of macroeconomics for the remainder of the twentieth century.

This article is a modified version of the introduction to Randall Parker, editor, Reflections on the Great Depression, Edward Elgar Publishing, 2002.

Bibliography

Olney, Martha. “Avoiding Default: The Role of Credit in the Consumption Collapse of 1930.” Quarterly Journal of Economics 114, no. 1 (1999): 319-35.

Anderson, Barry L. and James L. Butkiewicz. “Money, Spending and the Great Depression.” Southern Economic Journal 47 (1980): 388-403.

Balke, Nathan S. and Robert J. Gordon. “Historical Data.” In The American Business Cycle: Continuity and Change, edited by Robert J. Gordon. Chicago: University of Chicago Press, 1986.

Bernanke, Ben S. “Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression.” American Economic Review 73, no. 3 (1983): 257-76.

Bernanke, Ben S. and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Brown, E. Cary. “Fiscal Policy in the Thirties: A Reappraisal.” American Economic Review 46, no. 5 (1956): 857-79.

Cecchetti, Stephen G. “Prices during the Great Depression: Was the Deflation of 1930-1932 Really Anticipated?” American Economic Review 82, no. 1 (1992): 141-56.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, edited by Mark Wheeler. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research, 1998.

Cecchetti, Stephen G. and Georgios Karras. “Sources of Output Fluctuations during the Interwar Period: Further Evidence on the Causes of the Great Depression.” Review of Economics and Statistics 76, no. 1 (1994): 80-102.

Choudri, Ehsan U. and Levis A. Kochin. “The Exchange Rate and the International Transmission of Business Cycle Disturbances: Some Evidence from the Great Depression.” Journal of Money, Credit, and Banking 12, no. 4 (1980): 565-74.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Eichengreen, Barry. “The Bank of France and the Sterilization of Gold, 1926–1932.” Explorations in Economic History 23, no. 1 (1986): 56-84.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939. New York: Oxford University Press, 1992.

Eichengreen, Barry and Jeffrey Sachs. “Exchange Rates and Economic Recovery in the 1930s.” Journal of Economic History 45, no. 4 (1985): 925-46.

Evans, Martin and Paul Wachtel. “Were Price Changes during the Great Depression Anticipated? Evidence from Nominal Interest Rates.” Journal of Monetary Economics 32, no. 1 (1993): 3-34.

Fackler, James S. and Randall E. Parker. “Accounting for the Great Depression: A Historical Decomposition.” Journal of Macroeconomics 16 (1994): 193-220.

Fackler, James S. and Randall E. Parker. “Was Debt Deflation Operative during the Great Depression?” East Carolina University Working Paper, 2001.

Fisher, Irving. “The Debt–Deflation Theory of Great Depressions.” Econometrica 1, no. 4 (1933): 337-57.

Flacco, Paul R. and Randall E. Parker. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30, no. 1 (1992): 154-71.

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States, 1867–1960. Princeton, NJ: Princeton University Press, 1963.

Gordon, Robert J. Macroeconomics, seventh edition. New York: Addison Wesley, 1998.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 13 (1987): 1-25.

Hamilton, James D. “Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6, no. 2 (1988): 67-89.

Hamilton, James D. “Was the Deflation during the Great Depression Anticipated? Evidence from the Commodity Futures Market.” American Economic Review 82, no. 1 (1992): 157-78.

Hayek, Friedrich A. von. Monetary Theory and the Trade Cycle. New York: A. M. Kelley, 1967 (originally published in 1929).

Hayek, Friedrich A. von. Prices and Production. New York: A. M. Kelley, 1966 (originally published in 1931).

Hoover, Herbert. The Memoirs of Herbert Hoover: The Great Depression, 1929–1941. New York: Macmillan, 1952.

Keynes, John M. The General Theory of Employment, Interest, and Money. London: Macmillan, 1936.

Kindleberger, Charles P. The World in Depression, 1929–1939. Berkeley: University of California Press, 1973.

Laidler, David. Fabricating the Keynesian Revolution. Cambridge: Cambridge University Press, 1999.

McCallum, Bennett T. “Could a Monetary Base Rule Have Prevented the Great Depression?” Journal of Monetary Economics 26 (1990): 3-26.

Meltzer, Allan H. “Monetary and Other Explanations of the Start of the Great Depression.” Journal of Monetary Economics 2 (1976): 455-71.

Mishkin, Frederick S. “The Household Balance Sheet and the Great Depression.” Journal of Economic History 38, no. 4 (1978): 918-37.

Nelson, Daniel B. “Was the Deflation of 1929–1930 Anticipated? The Monetary Regime as Viewed by the Business Press.” Research in Economic History 13 (1991): 1-65.

Peppers, Larry. “Full Employment Surplus Analysis and Structural Change: The 1930s.” Explorations in Economic History 10 (1973): 197-210.

Persons, Charles E. “Credit Expansion, 1920 to 1929, and Its Lessons.” Quarterly Journal of Economics 45, no. 1 (1930): 94-130.

Polenberg, Richard. The Era of Franklin D. Roosevelt, 1933–1945: A Brief History with Documents. Boston: Bedford/St. Martin’s, 2000.

Raynold, Prosper, W. Douglas McMillin and Thomas R. Beard. “The Impact of Federal Government Expenditures in the 1930s.” Southern Economic Journal 58, no. 1 (1991): 15-28.

Romer, Christina D. “World War I and the Postwar Depression: A Reappraisal Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22, no. 1 (1988): 91-115.

Romer, Christina D. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105, no. 3 (1990): 597-624.

Romer, Christina D. “The Nation in Depression.” Journal of Economic Perspectives 7, no. 2 (1993): 19-39.

Snowdon, Brian and Howard R. Vane. Conversations with Leading Economists: Interpreting Modern Macroeconomics. Cheltenham, UK: Edward Elgar, 1999.

Soule, George H. Prosperity Decade, From War to Depression: 1917–1929. New York: Rinehart, 1947.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W.W. Norton, 1976.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1989.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” Journal of Economic Perspectives 4, no. 2 (1990): 67-83.

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922–33: A Reinterpretation.” Journal of Political Economy 73, no. 4 (1965): 325-43.

1 Bankers’ acceptances are explained at http://www.rich.frb.org/pubs/instruments/ch10.html.

2 Liquidity is the ease of converting an asset into money.

3 The monetary base is measured as the sum of currency in the hands of the public plus reserves in the banking system. It is also called high-powered money since the monetary base is the quantity that gets multiplied into greater amounts of money supply as banks make loans and people spend and thereby create new bank deposits.

4 The money multiplier equals (1 + C/D)/(C/D + R/D + E/D), where D = deposits, R = required reserves, C = currency held by the public, and E = excess reserves in the banking system.

5 The real interest rate adjusts the observed (nominal) interest rate for inflation or deflation. Ex post refers to the real interest rate after the actual change in prices has been observed; ex ante refers to the real interest rate that is expected at the time the lending occurs.

6 See note 3.

Citation: Parker, Randall. “An Overview of the Great Depression”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-overview-of-the-great-depression/

Gold Standard

Lawrence H. Officer, University of Illinois at Chicago

The gold standard is the most famous monetary system that ever existed. The periods in which the gold standard flourished, the groupings of countries under the gold standard, and the dates during which individual countries adhered to this standard are delineated in the first section. Then characteristics of the gold standard (what elements make for a gold standard), the various types of the standard (domestic versus international, coin versus other, legal versus effective), and implications for the money supply of a country on the standard are outlined. The longest section is devoted to the “classical” gold standard, the predominant monetary system that ended in 1914 (when World War I began), followed by a section on the “interwar” gold standard, which operated between the two World Wars (the 1920s and 1930s).

Countries and Dates on the Gold Standard

Countries on the gold standard and the periods (or beginning and ending dates) during which they were on gold are listed in Tables 1 and 2 for the classical and interwar gold standards. Types of gold standard, ambiguities of dates, and individual-country cases are considered in later sections. The country groupings reflect the importance of countries to establishment and maintenance of the standard. Center countries — Britain in the classical standard, the United Kingdom (Britain’s legal name since 1922) and the United States in the interwar period — were indispensable to the spread and functioning of the gold standard. Along with the other core countries — France and Germany, and the United States in the classical period — they attracted other countries to adopt the gold standard, in particular, British colonies and dominions, Western European countries, and Scandinavia. Other countries — and, for some purposes, also British colonies and dominions — were in the periphery: they were acted upon, rather than actors, in the gold-standard eras, and they were generally less committed to the gold standard.

Table 1: Countries on Classical Gold Standard

Country | Type of Gold Standard | Period
Center Country
Britain^a | Coin | 1774-1797^b, 1821-1914
Other Core Countries
United States^c | Coin | 1879-1917^d
France^e | Coin | 1878-1914
Germany | Coin | 1871-1914
British Colonies and Dominions
Australia | Coin | 1852-1915
Canada^f | Coin | 1854-1914
Ceylon | Coin | 1901-1914
India^g | Exchange (British pound) | 1898-1914
Western Europe
Austria-Hungary^h | Coin | 1892-1914
Belgium^i | Coin | 1878-1914
Italy | Coin | 1884-1894
Liechtenstein | Coin | 1898-1914
Netherlands^j | Coin | 1875-1914
Portugal^k | Coin | 1854-1891
Switzerland | Coin | 1878-1914
Scandinavia
Denmark^l | Coin | 1872-1914
Finland | Coin | 1877-1914
Norway | Coin | 1875-1914
Sweden | Coin | 1873-1914
Eastern Europe
Bulgaria | Coin | 1906-1914
Greece | Coin | 1885, 1910-1914
Montenegro | Coin | 1911-1914
Romania | Coin | 1890-1914
Russia | Coin | 1897-1914
Middle East
Egypt | Coin | 1885-1914
Turkey (Ottoman Empire) | Coin | 1881^m-1914
Asia
Japan^n | Coin | 1897-1917
Philippines | Exchange (U.S. dollar) | 1903-1914
Siam | Exchange (British pound) | 1908-1914
Straits Settlements^o | Exchange (British pound) | 1906-1914
Mexico and Central America
Costa Rica | Coin | 1896-1914
Mexico | Coin | 1905-1913
South America
Argentina | Coin | 1867-1876, 1883-1885, 1900-1914
Bolivia | Coin | 1908-1914
Brazil | Coin | 1888-1889, 1906-1914
Chile | Coin | 1895-1898
Ecuador | Coin | 1898-1914
Peru | Coin | 1901-1914
Uruguay | Coin | 1876-1914
Africa
Eritrea | Exchange (Italian lira) | 1890-1914
German East Africa | Exchange (German mark) | 1885^p-1914
Italian Somaliland | Exchange (Italian lira) | 1889^p-1914

^a Including colonies (except British Honduras) and possessions without a national currency: New Zealand and certain other Oceanic colonies, South Africa, Guernsey, Jersey, Malta, Gibraltar, Cyprus, Bermuda, British West Indies, British Guiana, British Somaliland, Falkland Islands, other South and West African colonies.
^b Or perhaps 1798.
^c Including countries and territories with U.S. dollar as exclusive or predominant currency: British Honduras (from 1894), Cuba (from 1898), Dominican Republic (from 1901), Panama (from 1904), Puerto Rico (from 1900), Alaska, Aleutian Islands, Hawaii, Midway Islands (from 1898), Wake Island, Guam, and American Samoa.
^d Except August-October 1914.
^e Including Tunisia (from 1891) and all other colonies except Indochina.
^f Including Newfoundland (from 1895).
^g Including British East Africa, Uganda, Zanzibar, Mauritius, and Ceylon (to 1901).
^h Including Montenegro (to 1911).
^i Including Belgian Congo.
^j Including Netherlands East Indies.
^k Including colonies, except Portuguese India.
^l Including Greenland and Iceland.
^m Or perhaps 1883.
^n Including Korea and Taiwan.
^o Including Borneo.
^p Approximate beginning date.

Sources: Bloomfield (1959, pp. 13, 15; 1963), Bordo and Kydland (1995), Bordo and Schwartz (1996), Brown (1940, pp. 15-16), Bureau of the Mint (1929), de Cecco (1984, p. 59), Ding (1967, pp. 6-7), Director of the Mint (1913, 1917), Ford (1985, p. 153), Gallarotti (1995, pp. 272-75), Gunasekera (1962), Hawtrey (1950, p. 361), Hershlag (1980, p. 62), Ingram (1971, p. 153), Kemmerer (1916; 1940, pp. 9-10; 1944, p. 39), Kindleberger (1984, pp. 59-60), Lampe (1986, p. 34), MacKay (1946, p. 64), MacLeod (1994, p. 13), Norman (1892, pp. 83-84), Officer (1996, chs. 3-4), Pamuk (2000, p. 217), Powell (1999, p. 14), Rifaat (1935, pp. 47, 54), Shinjo (1962, pp. 81-83), Spalding (1928), Wallich (1950, pp. 32-36), Yeager (1976, p. 298), Young (1925).

Table 2: Countries on Interwar Gold Standard

Country | Type of Gold Standard | Ending Date (Exchange-Rate Stabilization | Currency Convertibility^a)
United Kingdomb 1925 1931
Coin 1922e Other Core Countries
Bullion 1928 Germany 1924 1931
Australiag 1925 1930
Exchange 1925 Canadai 1925 1929
Exchange 1925 Indiaj 1925 1931
Coin 1929k South Africa 1925 1933
Austria 1922 1931
Exchange 1926 Danzig 1925 1935
Coin 1925 Italym 1927 1934
Coin 1925 Portugalo 1929 1931
Coin 1925 Scandinavia
Bullion 1927 Finland 1925 1931
Bullion 1928 Sweden 1922 1931
Albania 1922 1939
Exchange 1927 Czechoslovakia 1923 1931
Exchange 1928 Greece 1927 1932
Exchange 1925 Latvia 1922 1931
Coin 1922 Poland 1926 1936
Exchange 1929 Yugoslavia 1925 1932
Egypt 1925 1931
Exchange 1925 Palestine 1927 1931
Exchange 1928 Asia
Coin 1930 Malayat 1925 1931
Coin 1925 Philippines 1922 1933
Exchange 1928 Mexico and Central America
Exchange 1922 Guatemala 1925 1933
Exchange 1922 Honduras 1923 1933
Coin 1925 Nicaragua 1915 1932
Coin 1920 South America
Coin 1927 Bolivia 1926 1931
Exchange 1928 Chile 1925 1931
Coin 1923 Ecuador 1927 1932
Exchange 1927 Peru 1928 1932
Exchange 1928 Venezuela 1923 1930

a And freedom of gold export and import.
b Including colonies (except British Honduras) and possessions without a national currency: Guernsey, Jersey, Malta, Gibraltar, Cyprus, Bermuda, British West Indies, British Guiana, British Somaliland, Falkland Islands, British West African and certain South African colonies, certain Oceanic colonies.
c Including countries and territories with U.S. dollar as exclusive or predominant currency: British Honduras, Cuba, Dominican Republic, Panama, Puerto Rico, Alaska, Aleutian Islands, Hawaii, Midway Islands, Wake Island, Guam, and American Samoa.
d Not applicable; “the United States dollar…constituted the central point of reference in the whole post-war stabilization effort and was throughout the period of stabilization at par with gold.” — Brown (1940, p. 394)
e 1919 for freedom of gold export.
f Including colonies and possessions, except Indochina and Syria.
g Including Papua (New Guinea) and adjoining islands.
h Kenya, Uganda, and Tanganyika.
i Including Newfoundland.
j Including Bhutan, Nepal, British Swaziland, Mauritius, Pemba Island, and Zanzibar.
k 1925 for freedom of gold export.
l Including Luxemburg and Belgian Congo.
m Including Italian Somaliland and Tripoli.
n Including Dutch Guiana and Curacao (Netherlands Antilles).
o Including territories, except Portuguese India.
p Including Liechtenstein.
q Including Greenland and Iceland.
r Including Greater Lebanon.
s Including Korea and Taiwan.
t Including Straits Settlements, Sarawak, Labuan, and Borneo.

Sources: Bett (1957, p. 36), Brown (1940), Bureau of the Mint (1929), Ding (1967, pp. 6-7), Director of the Mint (1917), dos Santos (1996, pp. 191-92), Eichengreen (1992, p. 299), Federal Reserve Bulletin (1928, pp. 562, 847; 1929, pp. 201, 265, 549; 1930, pp. 72, 440; 1931, p. 554; 1935, p. 290; 1936, pp. 322, 760), Gunasekera (1962), Jonung (1984, p. 361), Kemmerer (1954, pp. 301-302), League of Nations (1926, pp. 7, 15; 1927, pp. 165-69; 1929, pp. 208-13; 1931, pp. 265-69; 1937/38, p. 107; 1946, p. 2), Moggridge (1989, p. 305), Officer (1996, chs. 3-4), Powell (1999, pp. 23-24), Spalding (1928), Wallich (1950, pp. 32-37), Yeager (1976, pp. 330, 344, 359), Young (1925, p. 76).

Characteristics of Gold Standards

Types of Gold Standards

Pure Coin and Mixed Standards

In theory, “domestic” gold standards — those that do not depend on interaction with other countries — are of two types: “pure coin” standard and “mixed” (meaning coin and paper, but also called simply “coin”) standard. The two systems share several properties. (1) There is a well-defined and fixed gold content of the domestic monetary unit. For example, the dollar is defined as a specified weight of pure gold. (2) Gold coin circulates as money with unlimited legal-tender power (meaning it is a compulsorily acceptable means of payment of any amount in any transaction or obligation). (3) Privately owned bullion (gold in mass, foreign coin considered as mass, or gold in the form of bars) is convertible into gold coin in unlimited amounts at the government mint or at the central bank, and at the “mint price” (of gold, the inverse of the gold content of the monetary unit). (4) Private parties have no restriction on their holding or use of gold (except possibly that privately created coined money may be prohibited); in particular, they may melt coin into bullion. The effect is as if coin were sold to the monetary authority (central bank or Treasury acting as a central bank) for bullion. It would make sense for the authority to sell gold bars directly for coin, even though not legally required, thus saving the cost of coining. Conditions (3) and (4) commit the monetary authority in effect to transact in coin and bullion in each direction such that the mint price, or gold content of the monetary unit, governs in the marketplace.

Under a pure coin standard, gold is the only money. Under a mixed standard, there are also paper currency (notes) — issued by the government, central bank, or commercial banks — and demand-deposit liabilities of banks. Government or central-bank notes (and central-bank deposit liabilities) are directly convertible into gold coin at the fixed established price on demand. Commercial-bank notes and demand deposits might be converted not directly into gold but rather into gold-convertible government or central-bank currency. This indirect convertibility of commercial-bank liabilities would apply certainly if the government or central-bank currency were legal tender but also generally even if it were not. As legal tender, gold coin is always exchangeable for paper currency or deposits at the mint price, and usually the monetary authority would provide gold bars for its coin. Again, two-way transactions in unlimited amounts fix the currency price of gold at the mint price. The credibility of the monetary-authority commitment to a fixed price of gold is the essence of a successful, ongoing gold-standard regime.

A pure coin standard did not exist in any country during the gold-standard periods. Indeed, over time, gold coin declined from about one-fifth of the world money supply in 1800 (2/3 for gold and silver coin together, as silver was then the predominant monetary standard) to 17 percent in 1885 (1/3 for gold and silver, for an eleven-major-country aggregate), 10 percent in 1913 (15 percent for gold and silver, for the major-country aggregate), and essentially zero in 1928 for the major-country aggregate (Triffin, 1964, pp. 15, 56). See Table 3. The zero figure means not that gold coin did not exist, rather that its main use was as reserves for Treasuries, central banks, and (generally to a lesser extent) commercial banks.

Table 3
Structure of Money: Major-Countries Aggregatea (end of year)
1885 1928
8 50
33 0d
18 21
33 99

a Core countries: Britain, United States, France, Germany. Western Europe: Belgium, Italy, Netherlands, Switzerland. Other countries: Canada, Japan, Sweden.
b Metallic money, minor coin, paper currency, and demand deposits.
c 1885: Gold and silver coin; overestimate, as includes commercial-bank holdings that could not be isolated from coin held outside banks by the public. 1913: Gold and silver coin. 1928: Gold coin.
d Less than 0.5 percent.
e 1885 and 1913: Gold, silver, and foreign exchange. 1928: Gold and foreign exchange.
f Official gold: Gold in official reserves. Money gold: Gold-coin component of money supply.

Sources: Triffin (1964, p. 62), Sayers (1976, pp. 348, 352) for 1928 Bank of England dollar reserves (dated January 2, 1929).

An “international” gold standard, which naturally requires that more than one country be on gold, requires in addition freedom both of international gold flows (private parties are permitted to import or export gold without restriction) and of foreign-exchange transactions (an absence of exchange control). Then the fixed mint prices of any two countries on the gold standard imply a fixed exchange rate (“mint parity”) between the countries’ currencies. For example, the dollar-sterling mint parity was $4.8665635 per pound sterling (the British pound).
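The parity arithmetic can be verified with a short sketch. The two mint prices used below are standard historical figures, not numbers quoted in the text: the United States valued gold at $20.671835 per fine troy ounce, and Britain at £3 17s. 10½d. per standard (11/12 fine) ounce.

```python
# A minimal check of the dollar-sterling mint parity, assuming the
# standard historical mint prices (they are not quoted in the text above).

US_MINT_PRICE = 20.671835                    # dollars per fine troy ounce
UK_PRICE_STANDARD = 3 + 17/20 + 10.5/240     # pounds per standard ounce (3 pounds 17s 10.5d)
UK_MINT_PRICE = UK_PRICE_STANDARD * 12 / 11  # pounds per fine ounce (standard gold = 11/12 fine)

mint_parity = US_MINT_PRICE / UK_MINT_PRICE  # dollars per pound sterling
print(round(mint_parity, 4))                 # 4.8666, i.e. the $4.8665635 parity
```

The parity is simply the ratio of the two countries' fixed prices for the same fine ounce of gold, which is why credible mint prices pin the exchange rate.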

Gold-Bullion and Gold-Exchange Standards

In principle, a country can choose among four kinds of international gold standards — the pure coin and mixed standards, already mentioned, a gold-bullion standard, and a gold-exchange standard. Under a gold-bullion standard, gold coin neither circulates as money nor is it used as commercial-bank reserves, and the government does not coin gold. The monetary authority (Treasury or central bank) stands ready to transact with private parties, buying or selling gold bars (usable only for import or export, not as domestic currency) for its notes, and generally a minimum size of transaction is specified. For example, in 1925-1931 the Bank of England was on the bullion standard and would sell gold bars only in the minimum amount of 400 fine (pure) ounces, approximately £1699 or $8269. Finally, the monetary authority of a country on a gold-exchange standard buys and sells not gold in any form but rather gold-convertible foreign exchange, that is, the currency of a country that itself is on the gold coin or bullion standard.
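The quoted value of the 400-ounce minimum bar can be reproduced from the two mint prices; the prices used here are the standard historical figures (an assumption, as they are not stated in the text):

```python
# Reproducing the ~1699 pound / ~8269 dollar value of a 400-fine-ounce bar,
# assuming the standard historical mint prices (not given in the text).

FINE_OUNCES = 400
UK_MINT_PRICE = (3 + 17/20 + 10.5/240) * 12 / 11  # pounds per fine ounce
US_MINT_PRICE = 20.671835                         # dollars per fine ounce

print(round(FINE_OUNCES * UK_MINT_PRICE))  # 1699 (pounds)
print(round(FINE_OUNCES * US_MINT_PRICE))  # 8269 (dollars)
```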

Gold Points and Gold Export/Import

A fixed exchange rate (the mint parity) for two countries on the gold standard is an oversimplification that is often made but is misleading. There are costs of importing or exporting gold. These costs include freight, insurance, handling (packing and cartage), interest on money committed to the transaction, risk premium (compensation for risk), normal profit, any deviation of purchase or sale price from the mint price, possibly mint charges, and possibly abrasion (wearing out or removal of gold content of coin — should the coin be sold abroad by weight or as bullion). Expressing the exporting costs as the percent of the amount invested (or, equivalently, as percent of parity), the product of 1/100th of these costs and mint parity (the number of units of domestic currency per unit of foreign currency) is added to mint parity to obtain the gold-export point — the exchange rate at which gold is exported. To obtain the gold-import point, the product of 1/100th of the importing costs and mint parity is subtracted from mint parity.
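As a sketch, the computation just described can be written out directly. The cost percentages fed in below are the dollar-sterling estimates for 1881-1890 from Table 4, and the parity is the figure quoted earlier:

```python
# Gold points from mint parity and gold-transfer costs (in percent of
# parity), following the computation described above.

def gold_points(parity, export_cost_pct, import_cost_pct):
    """Return (gold-export point, gold-import point) in domestic
    currency per unit of foreign currency."""
    export_point = parity * (1 + export_cost_pct / 100)
    import_point = parity * (1 - import_cost_pct / 100)
    return export_point, import_point

parity = 4.8665635                                          # dollars per pound sterling
export_pt, import_pt = gold_points(parity, 0.6585, 0.7141)  # Table 4, 1881-1890
spread = 0.6585 + 0.7141                                    # spread, percent of parity
print(round(export_pt, 4), round(import_pt, 4), round(spread, 4))
# 4.8986 4.8318 1.3726 -- the last figure matching the spread in Table 4
```

Outside the band [import point, export point] it becomes profitable to ship gold rather than buy or sell foreign exchange, which is what keeps the market exchange rate inside the band.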

If the exchange rate is greater than the gold-export point, private-sector “gold-point arbitrageurs” export gold, thereby obtaining foreign currency. Conversely, for the exchange rate less than the gold-import point, gold is imported and foreign currency relinquished. Usually the gold is, directly or indirectly, purchased from the monetary authority of the one country and sold to the monetary authority in the other. The domestic-currency cost of the transaction per unit of foreign currency obtained is the gold-export point. That per unit of foreign currency sold is the gold-import point. Also, foreign currency is sold, or purchased, at the exchange rate. Therefore arbitrageurs receive a profit proportional to the exchange-rate/gold-point divergence.

Gold-Point Arbitrage

However, the arbitrageurs’ supply of foreign currency eliminates profit by returning the exchange rate to below the gold-export point. Therefore perfect “gold-point arbitrage” would ensure that the exchange rate has the gold-export point as an upper limit. Similarly, the arbitrageurs’ demand for foreign currency returns the exchange rate to above the gold-import point, and perfect arbitrage ensures that the exchange rate has that point as a lower limit. It is important to note what induces the private sector to engage in gold-point arbitrage: (1) the profit motive; and (2) the credibility of the commitment to (a) the fixed gold price and (b) freedom of foreign exchange and gold transactions, on the part of the monetary authorities of both countries.

Gold-Point Spread

The difference between the gold points is called the (gold-point) spread. The gold points and the spread may be expressed as percentages of parity. Estimates of gold points and spreads involving center countries are provided for the classical and interwar gold standards in Tables 4 and 5. Noteworthy is that the spread for a given country pair generally declines over time both over the classical gold standard (evidenced by the dollar-sterling figures) and for the interwar compared to the classical period.

Table 4
Gold-Point Estimates: Classical Gold Standard

Countries Period Gold Pointsa (percent) Spreadd (percent) Method of Computation
Exportb Importc
U.S./Britain 1881-1890 0.6585 0.7141 1.3726 PA
U.S./Britain 1891-1900 0.6550 0.6274 1.2824 PA
U.S./Britain 1901-1910 0.4993 0.5999 1.0992 PA
U.S./Britain 1911-1914 0.5025 0.5915 1.0940 PA
France/U.S. 1877-1913 0.6888 0.6290 1.3178 MED
Germany/U.S. 1894-1913 0.4907 0.7123 1.2030 MED
France/Britain 1877-1913 0.4063 0.3964 0.8027 MED
Germany/Britain 1877-1913 0.3671 0.4405 0.8076 MED
Germany/France 1877-1913 0.4321 0.5556 0.9877 MED
Austria/Britain 1912 0.6453 0.6037 1.2490 SE
Netherlands/Britain 1912 0.5534 0.3552 0.9086 SE
Scandinaviae /Britain 1912 0.3294 0.6067 0.9361 SE

a For numerator country.
b Gold-import point for denominator country.
c Gold-export point for denominator country.
d Gold-export point plus gold-import point.
e Denmark, Sweden, and Norway.

Method of Computation: PA = period average. MED = median of exchange-rate-form estimates of various authorities for various dates, converted to percent deviation from parity. SE = single exchange-rate-form estimate, converted to percent deviation from parity.

Sources: U.S./Britain — Officer (1996, p. 174). France/U.S., Germany/U.S., France/Britain, Germany/Britain, Germany/France — Morgenstern (1959, pp. 178-81). Austria/Britain, Netherlands/Britain, Scandinavia/Britain — Easton (1912, pp. 358-63).

Table 5
Gold-Point Estimates: Interwar Gold Standard

Countries Period Gold Pointsa (percent) Spreadd (percent) Method of Computation
Exportb Importc
U.S./Britain 1925-1931 0.6287 0.4466 1.0753 PA
U.S./France 1926-1928e 0.4793 0.5067 0.9860 PA
U.S./France 1928-1933f 0.5743 0.3267 0.9010 PA
U.S./Germany 1926-1931 0.8295 0.3402 1.1697 PA
France/Britain 1926 0.2042 0.4302 0.6344 SE
France/Britain 1929-1933 0.2710 0.3216 0.5926 MED
Germany/Britain 1925-1933 0.3505 0.2676 0.6181 MED
Canada/Britain 1929 0.3521 0.3465 0.6986 SE
Netherlands/Britain 1929 0.2858 0.5146 0.8004 SE
Denmark/Britain 1926 0.4432 0.4930 0.9362 SE
Norway/Britain 1926 0.6084 0.3828 0.9912 SE
Sweden/Britain 1926 0.3881 0.3828 0.7709 SE

a For numerator country.
b Gold-import point for denominator country.
c Gold-export point for denominator country.
d Gold-export point plus gold-import point.
e To end of June 1928. French-franc exchange-rate stabilization, but absence of currency convertibility; see Table 2.
f Beginning July 1928. French-franc convertibility; see Table 2.

Method of Computation: PA = period average. MED = median of exchange-rate-form estimates of various authorities for various dates, converted to percent deviation from parity. SE = single exchange-rate-form estimate, converted to percent deviation from parity.

Sources: U.S./Britain — Officer (1996, p. 174). U.S./France, U.S./Germany, France/Britain 1929-1933, Germany/Britain — Morgenstern (1959, pp. 185-87). Canada/Britain, Netherlands/Britain — Einzig (1929, pp. 98-101) [Netherlands/Britain currencies’ mint parity from Spalding (1928, p. 135)]. France/Britain 1926, Denmark/Britain, Norway/Britain, Sweden/Britain — Spalding (1926, pp. 429-30, 436).

The effective monetary standard of a country is distinguishable from its legal standard. For example, a country legally on bimetallism usually is effectively on either a gold or silver monometallic standard, depending on whether its “mint-price ratio” (the ratio of its mint price of gold to mint price of silver) is greater or less than the world price ratio. In contrast, a country might be legally on a gold standard but its banks (and government) have “suspended specie (gold) payments” (refusing to convert their notes into gold), so that the country is in fact on a “paper standard.” The criterion adopted here is that a country is deemed on the gold standard if (1) gold is the predominant effective metallic money, or is the monetary bullion, (2) specie payments are in force, and (3) there is a limitation on the coinage and/or the legal-tender status of silver (the only practical and historical competitor to gold), thus providing institutional or legal support for the effective gold standard emanating from (1) and (2).

Implications for Money Supply

Consider first the domestic gold standard. Under a pure coin standard, the gold in circulation, monetary base, and money supply are all one. With a mixed standard, the money supply is the product of the money multiplier (dependent on the commercial-banks’ reserves/deposit and the nonbank-public’s currency/deposit ratios) and the monetary base (the actual and potential reserves of the commercial banking system, with potential reserves held by the nonbank public). The monetary authority alters the monetary base by changing its gold holdings and its loans, discounts, and securities portfolio (non-gold assets, called its “domestic assets”). However, the level of its domestic assets is dependent on its gold reserves, because the authority generates demand liabilities (notes and deposits) by increasing its assets, and convertibility of these liabilities must be supported by a gold reserve, if the gold standard is to be maintained. Therefore the gold standard provides a constraint on the level (or growth) of the money supply.
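A minimal sketch of the money-supply identity just described, using the textbook multiplier formula; the reserve and currency ratios are illustrative assumptions, not historical values:

```python
# Money supply under a mixed standard: multiplier times monetary base.
# The ratios below are illustrative assumptions.

r = 0.25      # commercial-bank reserves/deposit ratio
c = 0.25      # nonbank-public currency/deposit ratio
base = 100.0  # monetary base: gold plus the authority's domestic assets

multiplier = (1 + c) / (r + c)   # textbook money-multiplier formula
money_supply = multiplier * base
print(multiplier, money_supply)  # 2.5 250.0

# A gold outflow shrinks the base, and (with domestic assets unchanged)
# the money supply falls by multiplier times the outflow:
print(multiplier * (base - 10.0))  # 225.0
```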

The international gold standard involves balance-of-payments surpluses settled by gold imports at the gold-import point, and deficits financed by gold exports at the gold-export point. (Within the spread, there are no gold flows and the balance of payments is in equilibrium.) The change in the money supply is then the product of the money multiplier and the gold flow, providing the monetary authority does not change its domestic assets. For a country on a gold-exchange standard, holdings of “foreign exchange” (the reserve currency) take the place of gold. In general, the “international assets” of a monetary authority may consist of both gold and foreign exchange.

The Classical Gold Standard

Dates of Countries Joining the Gold Standard

Table 1 (above) lists all countries that were on the classical gold standard, the gold-standard type to which each adhered, and the period(s) on the standard. Discussion here concentrates on the four core countries. For centuries, Britain was on an effective silver standard under legal bimetallism. The country switched to an effective gold standard early in the eighteenth century, solidified by the (mistakenly) gold-overvalued mint-price ratio established by Isaac Newton, Master of the Mint, in 1717. In 1774 the legal-tender property of silver was restricted, and Britain entered the gold standard in the full sense on that date. In 1798 coining of silver was suspended, and in 1816 the gold standard was formally adopted, ironically during a paper-standard regime (the “Bank Restriction Period,” of 1797-1821), with the gold standard effectively resuming in 1821.

The United States was on an effective silver standard dating back to colonial times, legally bimetallic from 1786, and on an effective gold standard from 1834. The legal gold standard began in 1873-1874, when Acts ended silver-dollar coinage and limited legal tender of existing silver coins. Ironically, again the move from formal bimetallism to a legal gold standard occurred during a paper standard (the “greenback period,” of 1861-1878), with a dual legal and effective gold standard from 1879.

International Shift to the Gold Standard

The rush to the gold standard occurred in the 1870s, with the adherence of Germany, the Scandinavian countries, France, and other European countries. Countries on legal bimetallism had shifted from effective silver to effective gold monometallism around 1850, as gold discoveries in the United States and Australia resulted in overvalued gold at the mints. The gold/silver market situation subsequently reversed itself, and, to avoid a huge inflow of silver, many European countries suspended the coinage of silver and limited its legal-tender property. Some countries (France, Belgium, Switzerland) adopted a “limping” gold standard, in which existing former-standard silver coin retained full legal tender, permitting the monetary authority to redeem its notes in silver as well as gold.

As Table 1 shows, most countries were on a gold-coin (always meaning mixed) standard. The gold-bullion standard did not exist in the classical period (although in Britain that standard was embedded in legislation of 1819 that established a transition to restoration of the gold standard). A number of countries in the periphery were on a gold-exchange standard, usually because they were colonies or territories of a country on a gold-coin standard. In situations in which the periphery country lacked its own currency, even coined currency, the gold-exchange standard existed almost by default. Some countries — China, Persia, parts of Latin America — never joined the classical gold standard, instead retaining their silver or bimetallic standards.

Sources of Instability of the Classical Gold Standard

There were three elements making for instability of the classical gold standard. First, the use of foreign exchange as reserves increased as the gold standard progressed. Available end-of-year data indicate that, worldwide, foreign exchange in official reserves (the international assets of the monetary authority) increased by 36 percent from 1880 to 1899 and by 356 percent from 1899 to 1913. In comparison, gold in official reserves increased by 160 percent from 1880 to 1903 but only by 88 percent from 1903 to 1913. (Lindert, 1969, pp. 22, 25) While in 1913 only Germany among the center countries held any measurable amount of foreign exchange — 15 percent of total reserves excluding silver (which was of limited use) — the percentage for the rest of the world was double that for Germany (Table 6). If there were a rush to cash in foreign exchange for gold, reduction or depletion of the gold of reserve-currency countries could place the gold standard in jeopardy.

Table 6
Share of Foreign Exchange in Official Reserves (end of year, percent)

Country 1913a 1928b
Britain 0 10
United States 0 0c
France 0d 51
Germany 13 16
Rest of World 27 32

a Official reserves: gold, silver, and foreign exchange.
b Official reserves: gold and foreign exchange.
c Less than 0.05 percent.
d Less than 0.5 percent.

Sources: 1913 — Lindert (1969, pp. 10-11). 1928 — Britain: Board of Governors of the Federal Reserve System [cited as BG] (1943, p. 551), Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929). United States: BG (1943, pp. 331, 544), foreign exchange consisting of Federal Reserve Banks holdings of foreign-currency bills. France and Germany: Nurkse (1944, p. 234). Rest of world [computed as residual]: gold, BG (1943, pp. 544-51); foreign exchange, from “total” (Triffin, 1964, p. 66), France, and Germany.

Second, Britain — the predominant reserve-currency country — was in a particularly sensitive situation. Again considering end-of-1913 data, almost half of world foreign-exchange reserves were in sterling, but the Bank of England had only three percent of world gold reserves (Tables 7-8). Defining the “reserve ratio” of the reserve-currency-country monetary authority as the ratio of (i) official reserves to (ii) liabilities to foreign monetary authorities held in financial institutions in the country, in 1913 this ratio was only 31 percent for the Bank of England, far lower than those of the monetary authorities of the other core countries (Table 9). An official run on sterling could easily force Britain off the gold standard. Because sterling was an international currency, private foreigners also held considerable liquid assets in London, and could themselves initiate a run on sterling.

Table 7
Composition of World Official Foreign-Exchange Reserves (end of year, percent)

Currency 1928
British pounds 77
U.S. dollars 16
French francs 2
Otherb 5

a Excluding holdings for which currency unspecified.
b Primarily Dutch guilders and Scandinavian kroner.

Sources: 1913 — Lindert (1969, pp. 18-19). 1928 — Components of world total: Triffin (1964, pp. 22, 66), Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929), Board of Governors of the Federal Reserve System [cited as BG] (1943, p. 331) for Federal Reserve Banks holdings of foreign-currency bills.

Table 8
Official-Reserves Components: Percent of World Total (end of year)

Country Gold Foreign Exchange
United States 27 0a
Germany 6 4

Table 9
Reserve Ratiosa of Reserve-Currency Countries (end of year)

Country 1913b 1928c
Britain 0.31 0.33
United States 90.55 5.45
France 2.38 not available
Germany 2.11 not available

a Ratio of official reserves to official liquid liabilities (that is, liabilities to foreign governments and central banks).
b Official reserves: gold, silver, and foreign exchange.
c Official reserves: gold and foreign exchange.

Sources : 1913 — Lindert (1969, pp. 10-11, 19). Foreign-currency holdings for which currency unspecified allocated proportionately to the four currencies based on known distribution. 1928 — Gold reserves: Board of Governors of the Federal Reserve System [cited as BG] (1943, pp. 544, 551). Foreign- exchange reserves: Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929); BG (1943, p. 331) for Federal Reserve Banks holdings of foreign-currency bills. Official liquid liabilities: Triffin (1964, p. 22), Sayers (1976, pp. 348, 352).

Third, the United States, though a center country, was a great source of instability to the gold standard. Its Treasury held a high percentage of world gold reserves (more than that of the three other core countries combined in 1913), resulting in an absurdly high reserve ratio (Tables 7-9). With no central bank and a decentralized banking system, financial crises were frequent. Far from the United States assisting Britain, gold often flowed from the Bank of England to the United States to satisfy increases in U.S. demand for money. Though in economic size the United States was the largest of the core countries, in many years it was a net importer rather than exporter of capital to the rest of the world — the opposite of the other core countries. The political power of silver interests and recurrent financial panics led to imperfect credibility in the U.S. commitment to the gold standard. Runs on banks and runs on the Treasury gold reserve placed the U.S. gold standard near collapse in the early and mid-1890s. During that period, the credibility of the Treasury’s commitment to the gold standard was shaken. Indeed, the gold standard was saved in 1895 (and again in 1896) only by cooperative action of the Treasury and a bankers’ syndicate that stemmed gold exports.

Rules of the Game

According to the “rules of the [gold-standard] game,” central banks were supposed to reinforce, rather than “sterilize” (moderate or eliminate) or ignore, the effect of gold flows on the money supply. A gold outflow typically decreases the international assets of the central bank and thence the monetary base and money supply. The central-bank’s proper response is: (1) raise its “discount rate,” the central-bank interest rate for rediscounting securities (cashing, at a further deduction from face value, a short-term security from a financial institution that previously discounted the security), thereby inducing commercial banks to adopt a higher reserves/deposit ratio and therefore decreasing the money multiplier; and (2) decrease lending and sell securities, thereby decreasing domestic assets and thence the monetary base. On both counts the money supply is further decreased. Should the central bank rather increase its domestic assets when it loses gold, it engages in “sterilization” of the gold flow and is decidedly not following the “rules of the game.” The converse argument (involving gold inflow and increases in the money supply) also holds, with sterilization involving the central bank decreasing its domestic assets when it gains gold.
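The criterion behind Table 10, classifying each year by whether a central bank's international and domestic assets moved in the same direction, can be sketched as follows; the yearly asset changes below are invented for illustration:

```python
# "Rules of the game" vs. sterilization: compare the direction of annual
# changes in a central bank's international and domestic assets.
# The sample changes are invented illustrative data.

def same_direction(d_international, d_domestic):
    """True if both asset classes changed in the same direction
    (the behavior prescribed by the rules of the game)."""
    return d_international * d_domestic > 0

# (change in international assets, change in domestic assets), one pair per year
annual_changes = [(-5, -2), (-3, 4), (6, 1), (2, -3)]
rule_years = sum(same_direction(i, d) for i, d in annual_changes)
print(f"{100 * rule_years // len(annual_changes)}% of changes in same direction")
# prints "50% of changes in same direction"
```

A percentage well below 50, as in most rows of Table 10, indicates that sterilization (opposite-direction changes) was the rule rather than the exception.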

Price Specie-Flow Mechanism

A country experiencing a balance-of-payments deficit loses gold and its money supply decreases, both automatically and by policy in accordance with the “rules of the game.” Money income contracts and the price level falls, thereby increasing exports and decreasing imports. Similarly, a surplus country gains gold, the money supply increases, money income expands, the price level rises, exports decrease and imports increase. In each case, balance-of-payments equilibrium is restored via the current account. This is called the “price specie-flow mechanism.” To the extent that wages and prices are inflexible, movements of real income in the same direction as money income occur; in particular, the deficit country suffers unemployment but the payments imbalance is nevertheless corrected.
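The mechanism can be caricatured in a toy simulation; everything here, from the quantity-theory price rule to the trade-response coefficient, is an illustrative assumption rather than anything from the text:

```python
# Toy price specie-flow simulation: two countries, money fully gold-backed,
# price levels proportional to money, a trade balance responding to the
# price gap, and deficits settled in gold.  All numbers are illustrative.

money_a, money_b = 120.0, 80.0   # country A starts with "too much" money

for year in range(50):
    price_a, price_b = money_a / 100, money_b / 100  # crude quantity theory
    deficit_a = 20.0 * (price_a - price_b)  # high prices -> trade deficit for A
    money_a -= deficit_a                    # deficit settled by gold export...
    money_b += deficit_a                    # ...which raises B's money supply

print(round(money_a, 2), round(money_b, 2))  # 100.0 100.0 -- imbalance corrected
```

The money-supply gap shrinks geometrically each period, so gold flows taper off as relative prices converge and the payments imbalance disappears.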

The capital account also acts to restore balance, via interest-rate increases in the deficit country inducing a net inflow of capital. The interest-rate increases also reduce real investment and thence real income and imports. Similarly, interest-rate decreases in the surplus country elicit capital outflow and increase real investment, income, and imports. This process enhances the current-account correction of the imbalance.

One problem with the “rules of the game” is that, on “global-monetarist” theoretical grounds, they were inconsequential. Under fixed exchange rates, gold flows simply adjust money supply to money demand; the money supply is not determined by policy. Also, prices, interest rates, and incomes are determined worldwide. Even core countries can influence these variables domestically only to the extent that they help determine them in the global marketplace. Therefore the price-specie-flow and like mechanisms cannot occur. Historical data support this conclusion: gold flows were too small to be suggestive of these mechanisms; and prices, incomes, and interest rates moved closely in correspondence (rather than in the opposite directions predicted by the adjustment mechanisms induced by the “rules of the game”) — at least among non-periphery countries, especially the core group.

Discount Rate Rule and the Bank of England

However, the Bank of England did, in effect, manage its discount rate (“Bank Rate”) in accordance with rule (1). The Bank’s primary objective was to maintain convertibility of its notes into gold, that is, to preserve the gold standard, and its principal policy tool was Bank Rate. When its “liquidity ratio” of gold reserves to outstanding note liabilities decreased, it would usually increase Bank Rate. The increase in Bank Rate carried with it increases in market short-term interest rates, inducing a short-term capital inflow and thereby moving the exchange rate away from the gold-export point by increasing the exchange value of the pound. The converse also held, with a rise in the liquidity ratio involving a Bank Rate decrease, capital outflow, and movement of the exchange rate away from the gold-import point. The Bank was constantly monitoring its liquidity ratio, and in response altered Bank Rate almost 200 times over 1880-1913.

While the Reichsbank (the German central bank), like the Bank of England, generally moved its discount rate inversely to its liquidity ratio, most other central banks often violated the rule, with changes in their discount rates of inappropriate direction, or of insufficient amount or frequency. The Bank of France, in particular, kept its discount rate stable. Unlike the Bank of England, it chose to have large gold reserves (see Table 8), with payments imbalances accommodated by fluctuations in its gold rather than financed by short-term capital flows. The United States, lacking a central bank, had no discount rate to use as a policy instrument.

Sterilization Was Dominant

As for rule (2), that the central bank’s domestic and international assets should move in the same direction, the opposite behavior, sterilization, was in fact dominant, as shown in Table 10. The Bank of England followed the rule more than any other central bank, but even so violated it more often than not! How then did the classical gold standard cope with payments imbalances? Why was it a stable system?

Table 10
Annual Changes in Internationala and Domesticb Assets of Central Bank: Percent of Changes in the Same Directionc

Country 1880-1913d
Britain 33
France 33
British Dominionse 13
Scandinaviag 25
South Americai 23

a 1880-1913: Gold, silver and foreign exchange. 1922-1936: Gold and foreign exchange.
b Domestic income-earning assets: discounts, loans, securities.
c Implying country is following “rules of the game.” Observations with zero or negligible changes in either class of assets excluded.
d Years when country is off gold standard excluded. See Tables 1 and 2.
e Australia and South Africa.
f1880-1913: Austria-Hungary, Belgium, and Netherlands. 1922-1936: Austria, Italy, Netherlands, and Switzerland.
g Denmark, Finland, Norway, and Sweden.
h1880-1913: Russia. 1922-1936: Bulgaria, Czechoslovakia, Greece, Hungary, Poland, Romania, and Yugoslavia.
I Chile, Colombia, Peru, and Uruguay.

Sources: Bloomfield (1959, p. 49), Nurkse (1944, p. 69).

The Stability of the Classical Gold Standard

The fundamental reason for the stability of the classical gold standard is that there was always absolute private-sector credibility in the commitment to the fixed domestic-currency price of gold on the part of the center country (Britain), two (France and Germany) of the three remaining core countries, and certain other European countries (Belgium, Netherlands, Switzerland, and Scandinavia). Certainly, that was true from the late-1870s onward. (For the United States, this absolute credibility applied from about 1900.) In earlier periods, that commitment had a contingency aspect: it was recognized that convertibility could be suspended in the event of dire emergency (such as war); but, after normal conditions were restored, convertibility would be re-established at the pre-existing mint price and gold contracts would again be honored. The Bank Restriction Period is an example of the proper application of the contingency, as is the greenback period (even though the United States, effectively on the gold standard, was legally on bimetallism).

Absolute Credibility Meant Zero Convertibility and Exchange Risk

The absolute credibility in countries’ commitment to convertibility at the existing mint price implied that there was extremely low, essentially zero, convertibility risk (the probability that Treasury or central-bank notes would not be redeemed in gold at the established mint price) and exchange risk (the probability that the mint parity between two currencies would be altered, or that exchange control or prohibition of gold export would be instituted).

Reasons Why Commitment to Convertibility Was So Credible

There were many reasons why the commitment to convertibility was so credible. (1) Contracts were expressed in gold; if convertibility were abandoned, contracts would inevitably be violated — an undesirable outcome for the monetary authority. (2) Shocks to the domestic and world economies were infrequent and generally mild. There was basically international peace and domestic calm.

(3) The London capital market was the largest, most open, most diversified in the world, and its gold market was also dominant. A high proportion of world trade was financed in sterling, London was the most important reserve-currency center, and balances of payments were often settled by transferring sterling assets rather than gold. Therefore sterling was an international currency — not merely supplemental to gold but perhaps better: a boon to non-center countries, because sterling involved positive, not zero, interest return and its transfer costs were much less than those of gold. Advantages to Britain were the charges for services as an international banker, differential interest returns on its financial intermediation, and the practice of countries on a sterling (gold-exchange) standard of financing payments surpluses with Britain by piling up short-term sterling assets rather than demanding Bank of England gold.

(4) There was widespread ideology — and practice — of “orthodox metallism,” involving authorities’ commitment to an anti-inflation, balanced-budget, stable-money policy. In particular, the ideology implied low government spending and taxes and limited monetization of government debt (financing of budget deficits by printing money). Therefore it was not expected that a country’s price level or inflation would get out of line with that of other countries, with resulting pressure on the country’s adherence to the gold standard. (5) This ideology was mirrored in, and supported by, domestic politics. Gold had won over silver and paper, and stable-money interests (bankers, industrialists, manufacturers, merchants, professionals, creditors, urban groups) over inflationary interests (farmers, landowners, miners, debtors, rural groups).

(6) There was freedom from government regulation and a competitive environment, domestically and internationally. Therefore prices and wages were more flexible than in other periods of human history (before and after). The core countries had virtually no capital controls; the center country (Britain) had adopted free trade, and the other core countries had moderate tariffs. Balance-of-payments financing and adjustment could proceed without serious impediments.

(7) Internal balance (domestic macroeconomic stability, at a high level of real income and employment) was an unimportant goal of policy. Preservation of convertibility of paper currency into gold would not be superseded as the primary policy objective. While sterilization of gold flows was frequent (see above), the purpose was more “meeting the needs of trade” (passive monetary policy) than fighting unemployment (active monetary policy).

(8) The gradual establishment of mint prices over time ensured that the implied mint parities (exchange rates) were in line with relative price levels; so countries joined the gold standard with exchange rates in equilibrium. (9) Current-account and capital-account imbalances tended to be offsetting for the core countries, especially for Britain. A trade deficit induced a gold loss and a higher interest rate, attracting a capital inflow and reducing capital outflow. Indeed, the capital-exporting core countries — Britain, France, and Germany — could eliminate a gold loss simply by reducing lending abroad.

Rareness of Violations of Gold Points

Many of the above reasons not only enhanced credibility in existing mint prices and parities but also kept international-payments imbalances, and hence necessary adjustment, of small magnitude. Responding to the essentially zero convertibility and exchange risks implied by the credible commitment, private agents further reduced the need for balance-of-payments adjustment via gold-point arbitrage (discussed above) and also via a specific kind of speculation. When the exchange rate moved beyond a gold point, arbitrage acted to return it to the spread. So it is not surprising that “violations of the gold points” were rare on a monthly average basis, as demonstrated in Table 11 for the dollar, franc, and mark exchange rate versus sterling. Certainly, gold-point violations did occur; but they rarely persisted sufficiently to be counted on monthly average data. Such measured violations were generally associated with financial crises. (The number of dollar-sterling violations for 1890-1906 exceeding that for 1889-1908 is due to the results emanating from different researchers using different data. Nevertheless, the important common finding is the low percent of months encompassed by violations.)

Table 11
Violations of Gold Points

Exchange Rate     Time Period   Number of Months   Number of Violations   Percent of Months
dollar-sterling   1890-1906          240                   3                    0.4
dollar-sterling   1925-1931a          76                   0                    0
mark-sterling     1889-1908          240                  12b                   7.5

a May 1925 – August 1931: full months during which both United States and Britain on gold standard.
b Approximate number, deciphered from graph.

Sources: Dollar-sterling, 1890-1906 and 1925-1931 — Officer (1996, p. 235). All other — Giovannini (1993, pp. 130-31).

Stabilizing Speculation

The perceived extremely low convertibility and exchange risks gave private agents profitable opportunities not only outside the spread (gold-point arbitrage) but also within the spread (exchange-rate speculation). As the exchange value of a country’s currency weakened, the exchange rate approaching the gold-export point, speculators had an ever greater incentive to purchase domestic currency with foreign currency (a capital inflow); for they had good reason to believe that the exchange rate would move in the opposite direction, whereupon they would reverse their transaction at a profit. Similarly, a strengthened currency, with the exchange rate approaching the gold-import point, involved speculators selling the domestic currency for foreign currency (a capital outflow). Clearly, the exchange rate would either not go beyond the gold point (via the actions of other speculators of the same ilk) or would quickly return to the spread (via gold-point arbitrage). Also, the further the exchange rate moved toward the gold point, the greater the potential profit opportunity: the scope for further adverse movement was bounded by the nearby gold point, while the distance the rate could profitably retrace toward the other gold point had increased.
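The payoff asymmetry described above can be sketched numerically. In the sketch below the mint parity is the historical dollar-sterling figure, but the transfer cost (and hence the gold points) is an assumed round number for illustration, not an estimate from the text:

```python
# Illustrative sketch of the gold-point spread and the speculator's payoff
# asymmetry. The transfer cost is an assumed round number, not a historical
# estimate; the gold points derived from it are likewise illustrative.

MINT_PARITY = 4.8666     # dollars per pound sterling (historical mint parity)
TRANSFER_COST = 0.0065   # assumed cost of shipping gold, as a fraction of parity

# Above the export point, shipping gold out of the U.S. beats buying pounds;
# below the import point, shipping gold in beats selling pounds.
gold_export_point = MINT_PARITY * (1 + TRANSFER_COST)
gold_import_point = MINT_PARITY * (1 - TRANSFER_COST)

def room_to_move(rate):
    """Distance the rate can move toward each gold point from inside the spread."""
    return gold_export_point - rate, rate - gold_import_point

# At 4.84 sterling is weak, near its own gold-export point (the U.S. import
# point): little room left to fall, much room to rise, so buying sterling is
# close to a one-way bet -- the stabilizing speculation described above.
up, down = room_to_move(4.84)
print(f"spread: [{gold_import_point:.4f}, {gold_export_point:.4f}]")
print(f"at 4.84: room to rise {up:.4f}, room to fall {down:.4f}")
```

Gold-point arbitrage caps the speculator's loss (the rate cannot move far beyond the nearby gold point), while the possible reversion toward the other point is several times larger.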

This “stabilizing speculation” enhanced the exchange value of depreciating currencies that were about to lose gold; and thus the gold loss could be prevented. The speculation was all the more powerful, because the absence of controls on capital movements meant private capital flows were highly responsive to exchange-rate changes. Dollar-sterling data, in Table 12, show that this speculation was extremely efficient in keeping the exchange rate away from the gold points — and increasingly effective over time. Interestingly, these statements hold even for the 1890s, during which at times U.S. maintenance of currency convertibility was precarious. The average deviation of the exchange rate from the midpoint of the spread fell decade-by-decade from about 1/3 of one percent of parity in 1881-1890 (23 percent of the gold-point spread) to only 12/100th of one percent of parity in 1911-1914 (11 percent of the spread).

Table 12
Average Deviation of Dollar-Sterling Exchange Rate from Gold-Point-Spread Midpoint

Time Period    Percent of Parity   Percent of Gold-Point Spread
Quarterly observations
1891-1900            0.32                     19
1911-1914a           0.15                     11
1925-1931b           0.28                     __
Monthly observations
1925-1931c           0.24                     26

a Ending with second quarter of 1914.
b Third quarter 1925 – second quarter 1931: full quarters during which both United States and Britain on gold standard.
c May 1925 – August 1931: full months during which both United States and Britain on gold standard.

Source: Officer (1996, pp. 182, 191, 272).
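The two measures of the average deviation (percent of parity and percent of the spread) are linked by simple arithmetic: dividing one by the other backs out the implied width of the gold-point spread. The sketch below does this with the figures quoted in the text; the inference about a narrowing spread is a back-of-the-envelope implication of those figures, not an estimate from Officer (1996).

```python
# Back out the gold-point-spread width implied by the two deviation measures.

def implied_spread_width(dev_pct_parity, dev_pct_spread):
    """Spread width, as percent of parity, implied by the two measures."""
    return dev_pct_parity / (dev_pct_spread / 100.0)

# 1881-1890: average deviation ~0.32% of parity = 23% of the spread.
# 1911-1914: average deviation ~0.12% of parity = 11% of the spread.
early = implied_spread_width(0.32, 23)  # about 1.39 percent of parity
late = implied_spread_width(0.12, 11)   # about 1.09 percent of parity

print(f"implied spread width: {early:.2f}% (1881-1890), {late:.2f}% (1911-1914)")
```

The narrowing of the implied spread is consistent with falling costs of shipping gold over the period.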

Government Policies That Enhanced Gold-Standard Stability

Government policies also enhanced gold-standard stability. First, by the turn of the century South Africa — the main world gold producer — sold all its gold in London, either to private parties or actively to the Bank of England, with the Bank serving also as residual purchaser of the gold. Thus the Bank had the means to replenish its gold reserves. Second, the orthodox-metallism ideology and the leadership of the Bank of England — other central banks would often gear their monetary policy to that of the Bank — kept monetary policies harmonized. Monetary discipline was maintained.

Third, countries used “gold devices,” primarily the manipulation of gold points, to affect gold flows. For example, the Bank of England would foster gold imports by lowering the foreign gold-export point (number of units of foreign currency per pound, the British gold-import point) through interest-free loans to gold importers or raising its purchase price for bars and foreign coin. The Bank would discourage gold exports by lowering the foreign gold-import point (the British gold-export point) via increasing its selling prices for gold bars and foreign coin, refusing to sell bars, or redeeming its notes in underweight domestic gold coin. These policies were alternatives to increasing Bank Rate.

The Bank of France and the Reichsbank made greater use of gold devices, relative to discount-rate changes, than Britain did. Some additional policies included converting notes into gold only in Paris or Berlin rather than at branches elsewhere in the country, the Bank of France converting its notes into silver rather than gold (permitted under its “limping” gold standard), and the Reichsbank using moral suasion to discourage the export of gold. The U.S. Treasury followed similar policies at times. In addition to providing interest-free loans to gold importers and changing the premium at which it would sell bars (or refusing to sell bars outright), the Treasury condoned banking syndicates that put pressure on gold arbitrageurs to desist from gold export in 1895 and 1896, a time when U.S. adherence to the gold standard was under stress.

Fourth, the monetary system was adept at conserving gold, as evidenced in Table 3. This was important, because the increased gold required for a growing world economy could be obtained only from mining or from nonmonetary hoards. While the money supply for the eleven-major-country aggregate more than tripled from 1885 to 1913, the percent of the money supply in the form of metallic money (gold and silver) more than halved. This process did not make the gold standard unstable, because gold moved into commercial-bank and central-bank (or Treasury) reserves: the ratio of gold in official reserves to official plus money gold increased from 33 to 54 percent. The relative influence of the public versus private sector in reducing the proportion of metallic money in the money supply is an issue warranting exploration by monetary historians.
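The gold-conserving arithmetic can be made concrete. The shares below are illustrative round numbers chosen to match the qualitative claims in the paragraph above (“more than tripled,” “more than halved”), not the actual figures of Table 3:

```python
# If the money supply triples while the metallic share of money halves,
# metallic money needs to grow only 1.5x -- far slower than money itself.
# All shares are illustrative assumptions, not the Table 3 figures.

money_1885 = 1.0
money_1913 = 3.0 * money_1885    # money supply "more than tripled"

metallic_share_1885 = 0.40       # assumed gold+silver share of money, 1885
metallic_share_1913 = 0.20       # "more than halved": exactly half, for illustration

metallic_1885 = money_1885 * metallic_share_1885
metallic_1913 = money_1913 * metallic_share_1913

growth = metallic_1913 / metallic_1885
print(f"metallic money grew {growth:.1f}x while total money grew 3.0x")
```

So a growing world economy could be accommodated with much less new monetary gold than the growth of money alone would suggest.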

Fifth, central-bank cooperation, while not regular, was generally not required in the stable environment in which the gold standard operated. Yet this cooperation was forthcoming when needed, that is, during financial crises. Although Britain was the center country, the precarious liquidity position of the Bank of England meant that it was more often the recipient than the provider of financial assistance. In crises, it would obtain loans from the Bank of France (also on occasion from other central banks), and the Bank of France would sometimes purchase sterling to push up that currency’s exchange value. Assistance also went from the Bank of England to other central banks, as needed. Further, the credible commitment was so strong that private bankers did not hesitate to make loans to central banks in difficulty.

In sum, “virtuous” two-way interactions were responsible for the stability of the gold standard. The credible commitment to convertibility of paper money at the established mint price, and therefore the fixed mint parities, were both a cause and a result of (1) the stable environment in which the gold standard operated, (2) the stabilizing behavior of arbitrageurs and speculators, and (3) the responsible policies of the authorities — and (1), (2), and (3), and their individual elements, also interacted positively among themselves.

Experience of Periphery

An important reason for periphery countries to join and maintain the gold standard was the access to the capital markets of the core countries thereby fostered. Adherence to the gold standard connoted that the peripheral country would follow responsible monetary, fiscal, and debt-management policies — and, in particular, faithfully repay the interest on and principal of debt. This “good housekeeping seal of approval” (the term coined by Bordo and Rockoff, 1996), by reducing the risk premium, involved a lower interest rate on the country’s bonds sold abroad, and very likely a higher volume of borrowing. The favorable terms and greater borrowing enhanced the country’s economic development.

However, periphery countries bore the brunt of the burden of adjustment of payments imbalances with the core (and other Western European) countries, for three reasons. First, some of the periphery countries were on a gold-exchange standard. When they ran a surplus, they typically increased — and with a deficit, decreased — their liquid balances in London (or other reserve-currency country) rather than withdraw gold from the reserve-currency country. The monetary base of the periphery country would increase, or decrease, but that of the reserve-currency country would remain unchanged. This meant that such changes in domestic variables (prices, incomes, interest rates, portfolios, etc.) as occurred to correct the surplus or deficit were primarily in the periphery country. The periphery, rather than the core, “bore the burden of adjustment.”

Second, when Bank Rate increased, London drew funds from France and Germany, which attracted funds from other Western European and Scandinavian countries, which in turn drew capital from the periphery. Also, it was easy for a core country to correct a deficit by reducing lending to, or bringing capital home from, the periphery. Third, the periphery countries were underdeveloped; their exports were largely primary products (agriculture and mining), which inherently were extremely sensitive to world market conditions. This feature meant that adjustment in the periphery, compared to the core, took the form more of real than of financial correction. This conclusion also follows from the fact that capital obtained from core countries for the purpose of economic development was subject to interruption and even reversal. While the periphery was probably better off with access to the capital than in isolation, its welfare gain was reduced by the instability of capital import.

The experience on adherence to the gold standard differed among periphery groups. The important British dominions and colonies — Australia, New Zealand, Canada, and India — successfully maintained the gold standard. They were politically stable and, of course, heavily influenced by Britain. They paid the price of serving as an economic cushion to the Bank of England’s financial situation; but, compared to the rest of the periphery, gained a relatively stable long-term capital inflow. In undeveloped Latin America and Asia, adherence to the gold standard was fragile, with lack of complete credibility in the commitment to convertibility. Many of the reasons for credible commitment that applied to the core countries were absent — for example, there were powerful inflationary interests, strong balance-of-payments shocks, and rudimentary banking sectors. For Latin America and Asia, the cost of adhering to the gold standard was very apparent: loss of the ability to depreciate the currency to counter reductions in exports. Yet the gain, in terms of a steady capital inflow from the core countries, was not as stable or reliable as for the British dominions and colonies.

The Breakdown of the Classical Gold Standard

The classical gold standard was at its height at the end of 1913, ironically just before it came to an end. The proximate cause of the breakdown of the classical gold standard was political: the advent of World War I in August 1914. However, it was the Bank of England’s precarious liquidity position and the gold-exchange standard that were the underlying causes. With the outbreak of war, a run on sterling led Britain to impose extreme exchange control — a postponement of both domestic and international payments — that made the international gold standard non-operational. Convertibility was not legally suspended; but moral suasion, legalistic action, and regulation had the same effect. Gold exports were restricted by extralegal means (and by Trading with the Enemy legislation), with the Bank of England commandeering all gold imports and applying moral suasion to bankers and bullion brokers.

Almost all other gold-standard countries undertook similar policies in 1914 and 1915. The United States entered the war and ended its gold standard late, adopting extralegal restrictions on convertibility in 1917 (although in 1914 New York banks had temporarily imposed an informal embargo on gold exports). An effect of the universal removal of currency convertibility was the ineffectiveness of mint parities and inapplicability of gold points: floating exchange rates resulted.

Interwar Gold Standard

Return to the Gold Standard

In spite of the tremendous disruption to domestic economies and the worldwide economy caused by World War I, a general return to gold took place. However, the resulting interwar gold standard differed institutionally from the classical gold standard in several respects. First, the new gold standard was led not by Britain but rather by the United States. The U.S. embargo on gold exports (imposed in 1917) was removed in 1919, and currency convertibility at the prewar mint price was restored in 1922. The gold value of the dollar rather than of the pound sterling would typically serve as the reference point around which other currencies would be aligned and stabilized. Second, it follows that the core would now have two center countries, the United Kingdom and the United States.

Third, for many countries there was a time lag between stabilizing a country’s currency in the foreign-exchange market (fixing the exchange rate or mint parity) and resuming currency convertibility. Given a lag, the former typically occurred first, currency stabilization operating via central-bank intervention in the foreign-exchange market (transacting in the domestic currency and a reserve currency, generally sterling or the dollar). Table 2 presents the dates of exchange-rate stabilization and currency convertibility resumption for the countries on the interwar gold standard. It is fair to say that the interwar gold standard was at its height at the end of 1928, after all core countries were fully on the standard and before the Great Depression began.

Fourth, the contingency aspect of the convertibility commitment, which required restoration of convertibility at the mint price that existed prior to the emergency (World War I), was broken by various countries — even core countries. Some countries (including the United States, United Kingdom, Denmark, Norway, Netherlands, Sweden, Switzerland, Australia, Canada, Japan, Argentina) stabilized their currencies at the prewar mint price. However, other countries (France, Belgium, Italy, Portugal, Finland, Bulgaria, Romania, Greece, Chile) established a gold content of their currency that was a fraction of the prewar level: the currency was devalued in terms of gold, the mint price was higher than prewar. A third group of countries (Germany, Austria, Hungary) stabilized new currencies adopted after hyperinflation. A fourth group (Czechoslovakia, Danzig, Poland, Estonia, Latvia, Lithuania) consisted of countries that became independent or were created following the war and that joined the interwar gold standard. A fifth group (some Latin American countries) had been on silver or paper standards during the classical period but went on the interwar gold standard. A sixth country group (Russia) had been on the classical gold standard, but did not join the interwar gold standard. A seventh group (Spain, China, Iran) joined neither gold standard.

The fifth way in which the interwar gold standard diverged from the classical experience was the mix of gold-standard types. As Table 2 shows, the gold coin standard, dominant in the classical period, was far less prevalent in the interwar period. In particular, all four core countries had been on coin in the classical gold standard; but, of them, only the United States was on coin interwar. The gold-bullion standard, nonexistent prewar, was adopted by two core countries (United Kingdom and France) as well as by two Scandinavian countries (Denmark and Norway). Most countries were on a gold-exchange standard. The central banks of countries on the gold-exchange standard would convert their currencies not into gold but rather into “gold-exchange” currencies (currencies themselves convertible into gold), in practice often sterling, sometimes the dollar (the reserve currencies).

Instability of the Interwar Gold Standard

The features that fostered stability of the classical gold standard did not apply to the interwar standard; instead, many forces made for instability. (1) The process of establishing fixed exchange rates was piecemeal and haphazard, resulting in disequilibrium exchange rates. The United Kingdom restored convertibility at the prewar mint price without sufficient deflation, resulting in a currency overvalued by about ten percent. (Expressed in a common currency at mint parity, the British price level was ten percent higher than that of its trading partners and competitors.) A depressed export sector and chronic balance-of-payments difficulties were to result. Other overvalued currencies (in terms of mint parity) were those of Denmark, Italy, and Norway. In contrast, France, Germany, and Belgium had undervalued currencies. (2) Wages and prices were less flexible than in the prewar period. In particular, powerful unions kept wages and unemployment high in British export industries, hindering balance-of-payments correction.

(3) Higher trade barriers than prewar also restrained adjustment.

(4) The gold-exchange standard economized on total world gold: the gold of the reserve-currency countries backed their currencies both at home and in their reserves role for countries on the gold-exchange standard, and also for countries on a coin or bullion standard that elected to hold part of their reserves in London or New York. (Another economizing element was continuation of the move of gold out of the money supply and into banking and official reserves that began in the classical period: for the eleven-major-country aggregate, gold declined to less than ½ of one percent of the money supply in 1928, and the ratio of official gold to official-plus-money gold reached 99 percent — Table 3). The gold-exchange standard was inherently unstable, because of the conflict between (a) the expansion of sterling and dollar liabilities to foreign central banks to expand world liquidity, and (b) the resulting deterioration in the reserve ratio of the Bank of England, and U.S. Treasury and Federal Reserve Banks.

This instability was particularly severe in the interwar period, for several reasons. First, France was now a large official holder of sterling, with over half the official reserves of the Bank of France in foreign exchange in 1928, versus essentially none in 1913 (Table 6); and France was resentful that the United Kingdom had used its influence in the League of Nations to induce financially reconstructed countries in Europe to adopt the gold-exchange (sterling) standard. Second, many more countries were on the gold-exchange standard than prewar. Cooperation in restraining a run on sterling or the dollar would be difficult to achieve. Third, the gold-exchange standard, associated with colonies in the classical period, was viewed as a system inferior to a coin standard.

(5) In the classical period, London was the one dominant financial center; in the interwar period it was joined by New York and, in the late 1920s, Paris. Both private and official holdings of foreign currency could shift among the two or three centers, as interest-rate differentials and confidence levels changed.

(6) The problem with gold was not overall scarcity but rather maldistribution. In 1928, official reserve-currency liabilities were much more concentrated than in 1913: the United Kingdom accounted for 77 percent of world foreign-exchange reserves and France less than two percent (versus 47 and 30 percent in 1913 — Table 7). Yet the United Kingdom held only seven percent of world official gold and France 13 percent (Table 8). Reflecting its undervalued currency, France also possessed 39 percent of world official foreign exchange. Incredibly, the United States held 37 percent of world official gold — more than all the non-core countries together.

(7) Britain’s financial position was even more precarious than in the classical period. In 1928, the gold and dollar reserves of the Bank of England covered only one third of London’s liquid liabilities to official foreigners, a ratio hardly greater than in 1913 (and compared to a U.S. ratio of almost 5½ — Table 9). Various elements made the financial position difficult compared to prewar. First, U.K. liquid liabilities were concentrated on stronger countries (France, United States), whereas its liquid assets were predominantly in weaker countries (such as Germany). Second, there was ongoing tension with France, which resented the sterling-dominated gold-exchange standard and desired to cash in its sterling holdings for gold to aid its objective of achieving first-class financial status for Paris.

(8) Internal balance was an important goal of policy, which hindered balance-of-payments adjustment, and monetary policy was affected greatly by domestic politics rather than geared to preservation of currency convertibility. (9) Especially because of (8), the credibility in authorities’ commitment to the gold standard was not absolute. Convertibility risk and exchange risk could be well above zero, and currency speculation could be destabilizing rather than stabilizing; so that when a country’s currency approached or reached its gold-export point, speculators might anticipate that currency convertibility would not be maintained and that the currency would be devalued. Hence they would sell rather than buy the currency, which, of course, would help bring about the very outcome anticipated.

(10) The “rules of the game” were infrequently followed and, for most countries, violated even more often than in the classical gold standard — Table 10. Sterilization of gold inflows by the Bank of England can be viewed as an attempt to correct the overvalued pound by means of deflation. However, the U.S. and French sterilization of their persistent gold inflows reflected exclusive concern for the domestic economy and placed the burden of adjustment on other countries in the form of deflation.

(11) The Bank of England did not provide a leadership role in any important way, and central-bank cooperation was insufficient to establish credibility in the commitment to currency convertibility.

Breakdown of the Interwar Gold Standard

Although Canada effectively abandoned the gold standard early in 1929, this was a special case in two respects. First, the action was an early, drastic reaction to high U.S. interest rates, which were established to fight the stock-market boom but carried the threat of unsustainable capital outflow and gold loss for other countries. Second, gold devices were the technique used to restrict gold exports and informally terminate the Canadian gold standard.

The beginning of the end of the interwar gold standard occurred with the Great Depression. The depression began in the periphery, with low prices for exports and debt-service requirements leading to insurmountable balance-of-payments difficulties while on the gold standard. However, U.S. monetary policy was an important catalyst. In the second half of 1927 the Federal Reserve pursued an easy-money policy, which supported foreign currencies but also fed the boom in the New York stock market. When the Federal Reserve reversed policy to fight the Wall Street boom, higher interest rates attracted monies to New York, which weakened sterling in particular. The stock market crash in October 1929, while helpful to sterling, was followed by a passive monetary policy that did not prevent the U.S. depression that started shortly thereafter and that spread to the rest of the world via declines in U.S. trade and lending. In 1929 and 1930 a number of periphery countries either formally suspended currency convertibility or restricted it so that their currencies went beyond the gold-export point.

It was destabilizing speculation, emanating from lack of confidence in authorities’ commitment to currency convertibility, that ended the interwar gold standard. In May 1931 there was a run on Austria’s largest commercial bank, and the bank failed. The run spread to Germany, where an important bank also collapsed. The countries’ central banks lost substantial reserves; international financial assistance was too late; and in July 1931 Germany adopted exchange control, followed by Austria in October. These countries were definitively off the gold standard.

The Austrian and German experiences, as well as British budgetary and political difficulties, were among the factors that destroyed confidence in sterling; the loss of confidence occurred in mid-July 1931. Runs on sterling ensued, and the Bank of England lost much of its reserves. Loans from abroad were insufficient, and in any event were taken as a sign of weakness. The gold standard was abandoned in September, and the pound quickly and sharply depreciated on the foreign-exchange market, as overvaluation of the pound would imply.

Amazingly, there were no violations of the dollar-sterling gold points on a monthly average basis to the very end of August 1931 (Table 11). In contrast, the average deviation of the dollar-sterling exchange rate from the midpoint of the gold-point spread in 1925-1931 was more than double that in 1911-1914, by either of two measures (Table 12), suggesting less-dominant stabilizing speculation compared to the prewar period. Yet the 1925-1931 average deviation was not much more (in one case, even less) than in earlier decades of the classical gold standard. The trust in the Bank of England had a long tradition, and the shock to confidence in sterling that occurred in July 1931 was unexpected by the British authorities.

Following the U.K. abandonment of the gold standard, many countries followed, some to maintain their competitiveness via currency devaluation, others in response to destabilizing capital flows. The United States held on until 1933, when both domestic and foreign demands for gold, manifested in runs on U.S. commercial banks, became intolerable. The “gold bloc” countries (France, Belgium, Netherlands, Switzerland, Italy, Poland) and Danzig lasted even longer; but, with their currencies now overvalued and susceptible to destabilizing speculation, these countries succumbed to the inevitable by the end of 1936. Albania stayed on gold until occupied by Italy in 1939. As much as a cause, the Great Depression was a consequence of the gold standard; for gold-standard countries hesitated to inflate their economies for fear of weakening the balance of payments, suffering loss of gold and foreign-exchange reserves, and being forced to abandon convertibility or the gold parity. So the gold standard involved “golden fetters” (the title of the classic work of Eichengreen, 1992) that inhibited monetary and fiscal policy to fight the depression. Therefore, some have argued, these fetters seriously exacerbated the severity of the Great Depression within countries (because expansionary policy to fight unemployment was not adopted) and fostered the international transmission of the Depression (because as a country’s output decreased, its imports fell, thus reducing exports and income of other countries).

The “international gold standard,” defined as the period of time during which all four core countries were on the gold standard, existed from 1879 to 1914 (36 years) in the classical period and from 1926 or 1928 to 1931 (four or six years) in the interwar period. The interwar gold standard was a dismal failure in longevity, as well as in its association with the greatest depression the world has known.

References

Bayoumi, Tamim, Barry Eichengreen, and Mark P. Taylor, eds. Modern Perspectives on the Gold Standard. Cambridge: Cambridge University Press, 1996.

Bernanke, Ben, and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” In Financial Market and Financial Crises, edited by R. Glenn Hubbard, 33-68. Chicago: University of Chicago Press, 1991.

Bett, Virgil M. Central Banking in Mexico: Monetary Policies and Financial Crises, 1864-1940. Ann Arbor: University of Michigan, 1957.

Bloomfield, Arthur I. Monetary Policy under the International Gold Standard, 1880-1914. New York: Federal Reserve Bank of New York, 1959.

Bloomfield, Arthur I. Short-Term Capital Movements Under the Pre-1914 Gold Standard. Princeton: International Finance Section, Princeton University, 1963.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics, 1914-1941. Washington, DC, 1943.

Bordo, Michael D. “The Classical Gold Standard: Some Lessons for Today.” Federal Reserve Bank of St. Louis Review 63, no. 5 (1981): 2-17.

Bordo, Michael D. “The Classical Gold Standard: Lessons from the Past.” In The International Monetary System: Choices for the Future, edited by Michael B. Connolly, 229-65. New York: Praeger, 1982.

Bordo, Michael D. “Gold Standard: Theory.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 267-71. London: Macmillan, 1992.

Bordo, Michael D. “The Gold Standard, Bretton Woods and Other Monetary Regimes: A Historical Appraisal.” Federal Reserve Bank of St. Louis Review 75, no. 2 (1993): 123-91.

Bordo, Michael D. The Gold Standard and Related Regimes: Collected Essays. Cambridge: Cambridge University Press, 1999.

Bordo, Michael D., and Forrest Capie, eds. Monetary Regimes in Transition. Cambridge: Cambridge University Press, 1994.

Bordo, Michael D., and Barry Eichengreen, eds. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Bordo, Michael D., and Finn E. Kydland. “The Gold Standard as a Rule: An Essay in Exploration.” Explorations in Economic History 32, no. 4 (1995): 423-64.

Bordo, Michael D., and Hugh Rockoff. “The Gold Standard as a ‘Good Housekeeping Seal of Approval’.” Journal of Economic History 56, no. 2 (1996): 389-428.

Bordo, Michael D., and Anna J. Schwartz, eds. A Retrospective on the Classical Gold Standard, 1821-1931. Chicago: University of Chicago Press, 1984.

Bordo, Michael D., and Anna J. Schwartz. “The Operation of the Specie Standard: Evidence for Core and Peripheral Countries, 1880-1990.” In Currency Convertibility: The Gold Standard and Beyond, edited by Jorge Braga de Macedo, Barry Eichengreen, and Jaime Reis, 11-83. London: Routledge, 1996.

Bordo, Michael D., and Anna J. Schwartz. “Monetary Policy Regimes and Economic Performance: The Historical Record.” In Handbook of Macroeconomics, vol. 1A, edited by John B. Taylor and Michael Woodford, 149-234. Amsterdam: Elsevier, 1999.

Broadberry, S. N., and N. F. R. Crafts, eds. Britain in the International Economy. Cambridge: Cambridge University Press, 1992.

Brown, William Adams, Jr. The International Gold Standard Reinterpreted, 1914-1934. New York: National Bureau of Economic Research, 1940.

Bureau of the Mint. Monetary Units and Coinage Systems of the Principal Countries of the World, 1929. Washington, DC: Government Printing Office, 1929.

Cairncross, Alec, and Barry Eichengreen. Sterling in Decline: The Devaluations of 1931, 1949 and 1967. Oxford: Basil Blackwell, 1983.

Calleo, David P. “The Historiography of the Interwar Period: Reconsiderations.” In Balance of Power or Hegemony: The Interwar Monetary System, edited by Benjamin M. Rowland, 225-60. New York: New York University Press, 1976.

Clarke, Stephen V. O. Central Bank Cooperation: 1924-31. New York: Federal Reserve Bank of New York, 1967.

Cleveland, Harold van B. “The International Monetary System in the Interwar Period.” In Balance of Power or Hegemony: The Interwar Monetary System, edited by Benjamin M. Rowland, 1-59. New York: New York University Press, 1976.

Cooper, Richard N. “The Gold Standard: Historical Facts and Future Prospects.” Brookings Papers on Economic Activity 1 (1982): 1-45.

Dam, Kenneth W. The Rules of the Game: Reform and Evolution in the International Monetary System. Chicago: University of Chicago Press, 1982.

De Cecco, Marcello. The International Gold Standard. New York: St. Martin’s Press, 1984.

De Cecco, Marcello. “Gold Standard.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 260-66. London: Macmillan, 1992.

De Cecco, Marcello. “Central Bank Cooperation in the Inter-War Period: A View from the Periphery.” In International Monetary Systems in Historical Perspective, edited by Jaime Reis, 113-34. Houndmills, Basingstoke, Hampshire: Macmillan, 1995.

De Macedo, Jorge Braga, Barry Eichengreen, and Jaime Reis, eds. Currency Convertibility: The Gold Standard and Beyond. London: Routledge, 1996.

Ding, Chiang Hai. “A History of Currency in Malaysia and Singapore.” In The Monetary System of Singapore and Malaysia: Implications of the Split Currency, edited by J. Purcal, 1-9. Singapore: Stamford College Press, 1967.

Director of the Mint. The Monetary Systems of the Principal Countries of the World, 1913. Washington: Government Printing Office, 1913.

Director of the Mint. Monetary Systems of the Principal Countries of the World, 1916. Washington: Government Printing Office, 1917.

Dos Santos, Fernando Teixeira. “Last to Join the Gold Standard, 1931.” In Currency Convertibility: The Gold Standard and Beyond, edited by Jorge Braga de Macedo, Barry Eichengreen, and Jaime Reis, 182-203. London: Routledge, 1996.

Dowd, Kevin, and Richard H. Timberlake, Jr., eds. Money and the National State: The Financial Revolution, Government and the World Monetary System. New Brunswick (U.S.): Transaction, 1998.

Drummond, Ian M. The Gold Standard and the International Monetary System, 1900-1939. Houndmills, Basingstoke, Hampshire: Macmillan, 1987.

Easton, H. T. Tate’s Modern Cambist. London: Effingham Wilson, 1912.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Methuen, 1985.

Eichengreen, Barry. Elusive Stability: Essays in the History of International Finance, 1919-1939. New York: Cambridge University Press, 1990.

Eichengreen, Barry. “International Monetary Instability between the Wars: Structural Flaws or Misguided Policies?” In The Evolution of the International Monetary System: How can Efficiency and Stability Be Attained? edited by Yoshio Suzuki, Junichi Miyake, and Mitsuaki Okabe, 71-116. Tokyo: University of Tokyo Press, 1990.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eichengreen, Barry. “The Endogeneity of Exchange-Rate Regimes.” In Understanding Interdependence: The Macroeconomics of the Open Economy, edited by Peter B. Kenen, 3-33. Princeton: Princeton University Press, 1995.

Eichengreen, Barry. “History of the International Monetary System: Implications for Research in International Macroeconomics and Finance.” In The Handbook of International Macroeconomics, edited by Frederick van der Ploeg, 153-91. Cambridge, MA: Basil Blackwell, 1994.

Eichengreen, Barry, and Marc Flandreau. The Gold Standard in Theory and History, second edition. London: Routledge, 1997.

Einzig, Paul. International Gold Movements. London: Macmillan, 1929.

Federal Reserve Bulletin, various issues, 1928-1936.

Ford, A. G. The Gold Standard 1880-1914: Britain and Argentina. Oxford: Clarendon Press, 1962.

Ford, A. G. “Notes on the Working of the Gold Standard before 1914.” In The Gold Standard in Theory and History, edited by Barry Eichengreen, 141-65. New York: Methuen, 1985.

Ford, A. G. “International Financial Policy and the Gold Standard, 1870-1914.” In The Industrial Economies: The Development of Economic and Social Policies, The Cambridge Economic History of Europe, vol. 8, edited by Peter Mathias and Sidney Pollard, 197-249. Cambridge: Cambridge University Press, 1989.

Frieden, Jeffry A. “The Dynamics of International Monetary Systems: International and Domestic Factors in the Rise, Reign, and Demise of the Classical Gold Standard.” In Coping with Complexity in the International System, edited by Jack Snyder and Robert Jervis, 137-62. Boulder, CO: Westview, 1993.

Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Gallarotti, Giulio M. The Anatomy of an International Monetary Regime: The Classical Gold Standard, 1880-1914. New York: Oxford University Press, 1995.

Giovannini, Alberto. “Bretton Woods and its Precursors: Rules versus Discretion in the History of International Monetary Regimes.” In A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform, edited by Michael D. Bordo and Barry Eichengreen, 109-47. Chicago: University of Chicago Press, 1993.

Gunasekera, H. A. de S. From Dependent Currency to Central Banking in Ceylon: An Analysis of Monetary Experience, 1825-1957. London: G. Bell, 1962.

Hawtrey, R. G. The Gold Standard in Theory and Practice, fifth edition. London: Longmans, Green, 1947.

Hawtrey, R. G. Currency and Credit, fourth edition. London: Longmans, Green, 1950.

Hershlag, Z. Y. Introduction to the Modern Economic History of the Middle East. London: E. J. Brill, 1980.

Ingram, James C. Economic Changes in Thailand, 1850-1970. Stanford, CA: Stanford University, 1971.

Jonung, Lars. “Swedish Experience under the Classical Gold Standard, 1873-1914.” In A Retrospective on the Classical Gold Standard, 1821-1931, edited by Michael D. Bordo and Anna J. Schwartz, 361-99. Chicago: University of Chicago Press, 1984.

Kemmerer, Donald L. “Statement.” In Gold Reserve Act Amendments, Hearings, U.S. Senate, 83rd Cong., second session, pp. 299-302. Washington, DC: Government Printing Office, 1954.

Kemmerer, Edwin Walter. Modern Currency Reforms: A History and Discussion of Recent Currency Reforms in India, Puerto Rico, Philippine Islands, Straits Settlements and Mexico. New York: Macmillan, 1916.

Kemmerer, Edwin Walter. Inflation and Revolution: Mexico’s Experience of 1912-1917. Princeton: Princeton University Press, 1940.

Kemmerer, Edwin Walter. Gold and the Gold Standard: The Story of Gold Money, Past, Present and Future. New York: McGraw-Hill, 1944.

Kenwood, A. G., and A. L. Lougheed. The Growth of the International Economy, 1820-1960. London: George Allen & Unwin, 1971.

Kettell, Brian. Gold. Cambridge, MA: Ballinger, 1982.

Kindleberger, Charles P. A Financial History of Western Europe. London: George Allen & Unwin, 1984.

Kindleberger, Charles P. The World in Depression, 1929-1939, revised edition. Berkeley: University of California Press, 1986.

Lampe, John R. The Bulgarian Economy in the Twentieth Century. London: Croom Helm, 1986.

League of Nations. Memorandum on Currency and Central Banks, 1913-1925, second edition, vol. 1. Geneva, 1926.

League of Nations. International Statistical Yearbook, 1926. Geneva, 1927.

League of Nations. International Statistical Yearbook, 1928. Geneva, 1929.

League of Nations. Statistical Yearbook, 1930/31. Geneva, 1931.

League of Nations. Money and Banking, 1937/38, vol. 1: Monetary Review. Geneva.

League of Nations. The Course and Control of Inflation. Geneva, 1946.

Lindert, Peter H. Key Currencies and Gold, 1900-1913. Princeton: International Finance Section, Princeton University, 1969.

McCloskey, Donald N., and J. Richard Zecher. “How the Gold Standard Worked, 1880-1913.” In The Monetary Approach to the Balance of Payments, edited by Jacob A. Frenkel and Harry G. Johnson, 357-85. Toronto: University of Toronto Press, 1976.

MacKay, R. A., ed. Newfoundland: Economic, Diplomatic, and Strategic Studies. Toronto: Oxford University Press, 1946.

MacLeod, Malcolm. Kindred Countries: Canada and Newfoundland before Confederation. Ottawa: Canadian Historical Association, 1994.

Moggridge, D. E. British Monetary Policy, 1924-1931: The Norman Conquest of $4.86. Cambridge: Cambridge University Press, 1972.

Moggridge, D. E. “The Gold Standard and National Financial Policies, 1919-39.” In The Industrial Economies: The Development of Economic and Social Policies, The Cambridge Economic History of Europe, vol. 8, edited by Peter Mathias and Sidney Pollard, 250-314. Cambridge: Cambridge University Press, 1989.

Morgenstern, Oskar. International Financial Transactions and Business Cycles. Princeton: Princeton University Press, 1959.

Norman, John Henry. Complete Guide to the World’s Twenty-nine Metal Monetary Systems. New York: G. P. Putnam, 1892.

Nurkse, Ragnar. International Currency Experience: Lessons of the Inter-War Period. Geneva: League of Nations, 1944.

Officer, Lawrence H. Between the Dollar-Sterling Gold Points: Exchange Rates, Parity, and Market Behavior. Cambridge: Cambridge University Press, 1996.

Pablo, Martín Aceña, and Jaime Reis, eds. Monetary Standards in the Periphery: Paper, Silver and Gold, 1854-1933. Houndmills, Basingstoke, Hampshire: Macmillan, 2000.

Palyi, Melchior. The Twilight of Gold, 1914-1936: Myths and Realities. Chicago: Henry Regnery, 1972.

Pamuk, Sevket. A Monetary History of the Ottoman Empire. Cambridge: Cambridge University Press, 2000.

Panić, M. European Monetary Union: Lessons from the Classical Gold Standard. Houndmills, Basingstoke, Hampshire: St. Martin’s Press, 1992.

Powell, James. A History of the Canadian Dollar. Ottawa: Bank of Canada, 1999.

Redish, Angela. Bimetallism: An Economic and Historical Analysis. Cambridge: Cambridge University Press, 2000.

Rifaat, Mohammed Ali. The Monetary System of Egypt: An Inquiry into its History and Present Working. London: George Allen & Unwin, 1935.

Rockoff, Hugh. “Gold Supply.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 271-73. London: Macmillan, 1992.

Sayers, R. S. The Bank of England, 1891-1944, Appendixes. Cambridge: Cambridge University Press, 1976.

Sayers, R. S. The Bank of England, 1891-1944. Cambridge: Cambridge University Press, 1986.

Schwartz, Anna J. “Alternative Monetary Regimes: The Gold Standard.” In Alternative Monetary Regimes, edited by Colin D. Campbell and William R. Dougan, 44-72. Baltimore: Johns Hopkins University Press, 1986.

Shinjo, Hiroshi. History of the Yen: 100 Years of Japanese Money-Economy. Kobe: Kobe University, 1962.

Spalding, William F. Tate’s Modern Cambist. London: Effingham Wilson, 1926.

Spalding, William F. Dictionary of the World’s Currencies and Foreign Exchange. London: Isaac Pitman, 1928.

Triffin, Robert. The Evolution of the International Monetary System: Historical Reappraisal and Future Perspectives. Princeton: International Finance Section, Princeton University, 1964.

Triffin, Robert. Our International Monetary System: Yesterday, Today, and Tomorrow. New York: Random House, 1968.

Wallich, Henry Christopher. Monetary Problems of an Export Economy: The Cuban Experience, 1914-1947. Cambridge, MA: Harvard University Press, 1950.

Yeager, Leland B. International Monetary Relations: Theory, History, and Policy, second edition. New York: Harper & Row, 1976.

Young, John Parke. Central American Currency and Finance. Princeton: Princeton University Press, 1925.

Citation: Officer, Lawrence. “Gold Standard”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/gold-standard/

The Freedmen’s Bureau

William Troost, University of British Columbia

The Bureau of Refugees, Freedmen, and Abandoned Lands, more commonly known as the Freedmen’s Bureau, was a federal agency established to help Southern blacks transition from their lives as slaves to free individuals. The challenges of this transformation were enormous: the Civil War had devastated the region, leaving farmland dilapidated and massive amounts of capital destroyed. The entire social order of the region was also disturbed, as former slave owners and former slaves were forced to interact with one another in completely new ways. The Freedmen’s Bureau was an unprecedented foray by the federal government into the sphere of social welfare during a critical period of American history. This article briefly describes this unique agency, its colorful history, and the many functions the bureau performed during its brief existence.

The Beginning of the Bureau

In March 1863, the American Freedmen’s Inquiry Commission was set up to investigate “the measures which may best contribute to the protection and improvement of the recently emancipated freedmen of the United States, and to their self-defense and self-support.”1 The commission debated various methods and activities to alleviate the current condition of the freedmen and aid their transition to free individuals. Basic aid to alleviate physical suffering, along with legal justice, education, and land redistribution, came up repeatedly in its meetings and hearings. The commission examined many issues and produced ideas that would become the foundation of the Freedmen’s Bureau law. In 1864, the commission issued its final report, which laid out the basic philosophy that would guide the actions of the Freedmen’s Bureau.

“The sum of our recommendations is this: Offer the freedmen temporary aid and counsel until they become a little accustomed to their new sphere of life; secure to them, by law, their just rights of person and property; relieve them, by a fair and equal administration of justice, from the depressing influence of disgraceful prejudice; above all, guard them against the virtual restoration of slavery in any form, and let them take care of themselves. If we do this, the future of the African race in this country will be conducive to its prosperity and associated with its well-being. There will be nothing connected with it to excite regret to inspire apprehension.”2

When Congress finally got down to the business of writing a bill to aid the transition of the freedmen, it tried to integrate many of the American Freedmen’s Inquiry Commission’s recommendations. Originally the agency was to be named the Bureau of Emancipation. However, when the bill came up for a vote on March 1, 1864, the name was changed to the Bureau of Refugees, Freedmen, and Abandoned Lands, in large part because of objections that the bill was exclusionary and aimed solely at the aid of blacks. The name change was intended to enlarge support for the bill.

The House and the Senate argued over the bureau’s powers and its place within the government. The House wanted the agency placed within the War Department, concluding that the power that had freed the slaves would be best suited to aid them in their transition. By contrast, in the Senate, Charles Sumner’s Committee on Slavery and Freedom wanted the bureau placed within the Department of the Treasury, which had the power to tax and held possession of confiscated lands. Sumner felt that the freedmen “should not be separated from their best source of livelihood.”3 After a year of debate, a compromise was finally reached that entrusted the Freedmen’s Bureau with the administration of confiscated lands while placing the bureau within the War Department. Thus, on March 3, 1865, with the stroke of a pen, Abraham Lincoln signed into existence the Bureau of Refugees, Freedmen, and Abandoned Lands. Selected to head the new bureau was General Oliver Otis Howard – commonly known as the Christian General. Howard had strong ties to the philanthropic community and forged close relationships with freedmen’s aid organizations.

The Freedmen’s Bureau was active in a variety of aid functions. Eric Foner writes that it was “an experiment in social policy that did not belong to the America of its day.”4 The bureau did important work in many key areas and performed many functions that even today are not considered the responsibility of the national government.

Relief Services

A key function of the bureau, especially at the outset, was to provide temporary relief for the suffering of destitute freedmen. The bureau provided rations for those most in need because of the abandonment of plantations, poor crop yields, and unemployment. A staggering number of both freedmen and refugees took advantage of this aid. A ration was defined as corn meal, flour, and sugar sufficient to feed a person for one week. In “the first 15 months following the war, the Bureau issued over 13 million rations, two thirds to blacks.”5 The scale of this aid was enormous, and while it was deemed a great necessity, it also fostered tremendous anxiety for both General Howard and the general population – mainly that it would cause idleness. Because of these worries, General Howard ordered this form of relief discontinued in the fall of 1866.

Health Care

In a similar vein, the bureau also provided medical care to the recently freed slaves. The health situation of the freedmen at the conclusion of the Civil War was atrocious. Frequent epidemics of cholera, poor sanitation, and outbreaks of smallpox killed scores of freedmen. Because the freed population lacked the financial assets to purchase private health care and was denied care in many other cases, the bureau played a valuable role.

“Since hospitals and doctors could not be relied on to provide adequate health care for freedmen, individual bureau agents on occasion responded innovatively to black distress. During epidemics, Pine Bluff and Little Rock agents relocated freedpersons to less contagion-ridden places. When blacks could not be moved, agents imposed quarantines to prevent the spread of disease. General Order Number 8…prohibited new residents from congregating in towns. The order also mandated weekly inspections of freedmen’s homes to check for filth and overcrowding.”6

In addition to preventing and containing outbreaks, the bureau engaged more directly in health care. Because it was housed in the War Department, the bureau was able to assume operation of hospitals established by the Army during the war, and afterward it expanded the system to areas previously not under military control. Observing that freedmen were not receiving an adequate quality of health services, the bureau established dispensaries providing basic medical care and drugs free of charge, or at a nominal cost. The bureau “managed in the early years of Reconstruction to treat an estimated half million suffering freedmen, as well as a smaller but significant number of whites.”7

Land Redistribution

Perhaps the best-known function of the bureau was one that never came to fruition. During the course of the Civil War, the U.S. Army took control of a good deal of land that had been confiscated or abandoned by the Confederacy. From the time of emancipation there were rumors that confiscated lands would be provided to the recently freed slaves. This land would enable blacks to be economically self-sufficient and provide protection from their former owners. In January 1865, General Sherman issued Special Field Orders, No. 15, which set aside the Sea Islands and lands from South Carolina to Florida for blacks to settle. According to his order, each family would receive forty acres of land and the loan of horses and mules from the Army. Similar to General Sherman’s order, the promise of land was incorporated into the bureau bill. The bureau quickly helped blacks settle some of the abandoned lands, and “by June 1865, roughly 10,000 families of freed people, with the assistance of the Freedmen’s Bureau, had taken up more than 400,000 acres.”8

While the promise of “forty acres and a mule” excited the freedmen, the widespread implementation of this policy was quickly thwarted. In the summer of 1865, President Andrew Johnson issued special pardons restoring the property of many Confederates – throwing into question the status of abandoned lands. In response, General Howard, the Commissioner of the Freedmen’s Bureau, issued Circular 13, which instructed agents to set aside forty-acre tracts of land for the freedmen, as he claimed presidential pardons conflicted with the laws establishing the bureau. However, Johnson quickly instructed Howard to rescind his circular and issue a new one ordering the restoration to pardoned owners of all land except tracts already sold. These actions by the President were devastating, as freedmen were evicted from lands they had long occupied and improved. Johnson’s actions took away what many felt was the freedmen’s best chance at economic protection and self-sufficiency.

Judicial Functions

While the land redistribution efforts of the new agency were thwarted, the bureau was able to perform many other duties. Bureau agents had judicial authority in the South, attempting to secure equal justice from the state and local governments for both blacks and white Unionists. Local agents individually adjudicated a wide variety of disputes. In some circumstances the bureau established courts where freedmen could bring their complaints. After the local courts regained their jurisdiction, bureau agents kept an eye on them, retaining the authority to overturn decisions that discriminated against blacks. In May 1865, the Commissioner of the bureau issued a circular “authorizing assistant commissioners to exercise jurisdiction in cases where blacks were not allowed to testify.”9

In addition to these judicial functions, the bureau also helped provide legal services in the domestic sphere. Agents helped legitimize slave marriages and presided over freedmen marriage ceremonies in areas where black marriages were obstructed. Beginning in 1866, the bureau became responsible for filing the claims of black soldiers for back pay, pensions, and bounties. The claims division remained in operation until the end of the bureau’s existence. During a time when many of the states tried to strip rights away from blacks, the bureau was essential in providing freedmen redress and access to more equitable judicial decisions and services.

Labor Relations

Another important function of the bureau was drawing up work contracts to facilitate the hiring of freedmen. The abolition of slavery created economic confusion and stagnation, as many planters had a difficult time finding labor to work their fields. Additionally, many blacks were anxious and unsure about working for former slave owners. “Into this chaos stepped the Freedmen’s Bureau as an intermediary.”10 The bureau helped planters and freedmen draft contracts on mutually agreeable terms – negotiating several hundred thousand contracts. Once a contract was agreed upon, the agency tried to make sure both planter and worker lived up to their parts of the agreement. In essence, the bureau “would undertake the role of umpire.”11

Of the bureau’s many activities this was one of its most controversial, and both planters and freedmen complained about the insistence on labor contracts. Planters complained that the contracts forbade the corporal punishment used in the past; they resented the limits on their activities and felt the restrictions of the contracts limited the productivity of their workers. Freedmen, on the other hand, complained that the contract structures were too restrictive and did not allow them to move freely. In essence, the bureau had an impossible task – trying to get the freedmen to return to work for former slave owners while preserving their rights and limiting abuse. The bureau’s judicial functions were of great help in enforcing these contracts fairly, making both parties live up to their ends of the bargain. While historians are split over whether the bureau favored the planters or the freedmen, Ralph Shlomowitz, in his detailed analysis of bureau-assisted labor contracts, found that contract terms were determined by the free interplay of market forces.12 First, he finds that contracts brokered by the bureau were extremely detailed, to an extent that would not make sense in the absence of compliance. Second, contrary to popular belief, he finds that the share of crops received by labor was highly variable: in areas of higher-quality land the share awarded to labor was less than in areas with lower land quality.

Educational Efforts

Prior to the Civil War it had been policy in the sixteen slave states to fine, whip, or imprison those who gave instruction to blacks or mulattos. In many states the punishments for teaching a person of color were quite severe. These laws severely restricted the educational opportunities of blacks – especially access to formal schooling. As a result, when given their freedom, many former slaves lacked the literacy skills necessary to protect themselves from discrimination and exploitation and to pursue many personal activities. This lack of literacy created great problems for blacks in a free labor system. Freedmen were repeatedly taken advantage of, as they were often unable to read or draft contracts. Individuals also lacked the ability to read newspapers and trade manuals, or to worship by reading the Bible. Thus, upon emancipation there was great demand for freedmen’s schools.

General Howard quickly realized that education was perhaps the most important endeavor the bureau could undertake. However, limited financial resources and the few functions the bureau was authorized to perform restricted the extent of its assistance. Much of the early work in schooling was done by a number of benevolent and religious Northern societies. While the bureau’s direct aid was initially limited, it played an essential role in organizing and coordinating these organizations’ efforts. The agency also allowed the use of many buildings in the Army’s possession, and the bureau helped transport a wave of teachers from the North – commonly referred to as Yankee schoolmarms.

While the limits of the original Freedmen’s Bureau bill hamstrung the efforts of agents, subsequent bills rapidly expanded the bureau’s funding and functions in the area of education. This shift in attention followed the lead of General Howard, whose “stated goal was to close one after another of the original bureau divisions while the educational work was increased with all possible energy.”13 Among the provisions of the second bureau bill were the appropriation of salaries for State Superintendents of Education, the repair and rental of school buildings, the ability to use military taxes to pay teachers’ salaries, and the establishment of the education division as a separate entity within the bureau.

These new resources were used to great effect: enrollments at bureau-financed schools grew quickly, new schools were constructed in a variety of areas, and the quality and curriculum of the schools improved significantly. The bureau succeeded in establishing a vast network of schools to help educate the freedmen – in retrospect, a Herculean task for the federal government. In a region where teaching blacks to read or write had been illegal just a few years earlier, the bureau helped establish nearly 1,600 day schools educating over 100,000 blacks at a time. The number of bureau-aided day and night schools in operation grew to a maximum of 1,737 in March 1870, employing 2,799 teachers and instructing 103,396 pupils. In addition, the bureau aided 1,034 Sabbath schools, which employed 4,988 teachers and instructed 85,557 pupils.

Matching the Integrated Public Use Sample of the 1870 Census with a constructed data set on bureau school locations, one can examine the reach and prevalence of bureau-aided schools.14 Table 1 presents summary statistics of various school concentration measures and educational outcomes for individual blacks aged 10-15.

The variable “Freedmen’s Bureau School” equals one if there was at least one bureau-aided school in the individual’s county. The data reveal that 63.6 percent of blacks lived in counties with at least one bureau school. This shows the bureau was quite effective in reaching a large segment of the black population, as nearly two-thirds of blacks living in the states of the ex-Confederacy had at least some minimal exposure to these schools. While the schools were widespread, their concentration was somewhat low. For individuals living in a county with at least one bureau-aided school, the concentration was 0.3165 bureau-aided schools per 30 square miles, or 0.4630 bureau-aided schools per 1,000 blacks.
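The two concentration measures quoted above can be reproduced from county-level counts. A minimal sketch, where the school count, county area, and black population below are illustrative values (not figures from the underlying data set):

```python
def schools_per_30_sq_miles(n_schools: int, area_sq_miles: float) -> float:
    """Bureau-aided schools per 30 square miles of county area."""
    return n_schools / (area_sq_miles / 30)

def schools_per_1000_blacks(n_schools: int, black_population: int) -> float:
    """Bureau-aided schools per 1,000 black residents of the county."""
    return n_schools / (black_population / 1000)

# Hypothetical county: 6 bureau-aided schools, 600 sq miles, 12,000 black residents
print(schools_per_30_sq_miles(6, 600))    # 0.3 schools per 30 sq miles
print(schools_per_1000_blacks(6, 12000))  # 0.5 schools per 1,000 blacks
```

Averaging these county-level measures over individuals (not counties) is what yields the person-weighted figures reported in the text.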

Although the concentration of schools was somewhat low, they appear to have had a large impact on the educational outcomes of Southern blacks. Ten- to fifteen-year-olds living in a county with at least one bureau-aided school had literacy rates 6.1 percentage points higher. This appears to have been driven by the bureau increasing black children’s access to formal education in these counties, as school attendance rates were 7.5 percentage points higher than in counties without such schools.

Andrew Johnson and the Freedmen’s Bureau

Only eleven days after signing the bureau into existence, Abraham Lincoln was struck down by John Wilkes Booth. Taking his place in office was Andrew Johnson, a former Democratic Senator from Tennessee. Despite Johnson’s Southern roots, hopes were high that Congress and the new President could work together more closely than they had with the previous administration. President Lincoln and Congress had championed vastly different policies for Reconstruction. Lincoln preferred the term “Restoration” to “Reconstruction,” as he felt it was constitutionally impossible for a state to secede.15 Lincoln championed the quick integration of the South into the Union and believed it could best be accomplished under the direction of the executive branch. In contrast, Republicans in Congress led by Charles Sumner and Thaddeus Stevens felt the Confederate states had actually seceded and relinquished their constitutional rights. The Republicans in Congress advocated strict conditions for re-entry into the Union and programs aimed at reshaping Southern society.

The ascension of Johnson to the presidency gave Congress hope that it would have an ally in the White House in terms of Reconstruction philosophy. According to Howard Nash, the “Radicals were delighted … to have Vice President Andrew Johnson, who they had good reason to suppose was one of their number, elevated to the presidency.”16 In the months before and immediately after taking office, Johnson repeatedly talked about the need to punish rebels in the South. After Lincoln’s death Johnson became more impassioned in his speeches. In late April 1865 Johnson told an Indiana delegation, “Treason must be made odious … traitors must be punished and impoverished … their social power must be destroyed.”17 If anything, many feared that Johnson might stray too far from the Presidential Reconstruction offered by Lincoln and be overly harsh in his treatment of the South.

Immediately after taking office, Johnson honored Lincoln’s choice to head the bureau by appointing General Oliver Otis Howard as commissioner. While this action raised hopes in Congress that it would be able to work with the new administration, Johnson quickly switched course. After his selection of Howard, President Johnson and the “Radical” Republicans would scarcely agree on anything during the remainder of his term. On May 29, 1865, Johnson issued a proclamation that conferred amnesty, pardon, and the restoration of property rights on almost all Confederate soldiers who took an oath pledging loyalty to the Union. Johnson later came out in support of the Southern black codes, which tried to return blacks to a position of near slavery, and argued that the Confederate states should be accepted back into the Union without the condition of ratifying the Fourteenth Amendment and adopting it in their state constitutions.

The original bill signed by Lincoln established the bureau for the duration of the Civil War and one year thereafter. The language of the bill was somewhat ambiguous, and with the surrender of Confederate forces military conflict had ceased, leading to debate over when the bureau would be discontinued. The consensus seemed to be that, unless another bill was brought forth, the bureau would be discontinued in early 1866. In response, Congress quickly got to work on a new Freedmen’s Bureau bill.

While Congress started work on a new bill, President Johnson tried to gain support for the view that the need for the bureau had come to an end. The President called upon Ulysses S. Grant to make a whirlwind tour of the South and report on the present situation. The route was exceptionally brief and skewed toward those areas best under control. Accordingly, Grant’s report said that the Freedmen’s Bureau had done good work and that the freedmen now appeared able to fend for themselves without the help of the federal government.

In contrast, Carl Schurz, who made a long tour of the South in 1865, found the freedmen in a much different situation. In many areas the bureau was viewed as the only restraint on the most insidious treatment of blacks. As Q.A. Gillmore stated in the report,

“For reasons already suggested I believe that the restoration of civil power that would take the control of this question out of the hands of the United States authorities (whether exercised through the military authorities or through the Freedmen’s Bureau) would, instead of removing existing evils, be almost certain to augment them.”18

While the first bill was adequate in many ways, it was rather weak in a few areas. In particular, it contained no appropriations for officers of the bureau or funds earmarked directly for the establishment of schools. General Howard and many of his officers reported on the great need for the bureau and pushed for its continuation indefinitely, or at least until the freedmen were in a less vulnerable position. After listening to the reports and the recommendations of General Howard, Senator Lyman Trumbull, a moderate Republican, crafted a new bill. It proposed that the bureau remain in existence until abolished by law, provide more explicit aid to education and land to the freedmen, and protect the civil rights of blacks. The bill passed both the Senate and House and was sent to Andrew Johnson, who promptly vetoed the measure. In his response to the Senate, Johnson wrote that “there can be no necessity for the enlargement of the powers of the bureau for which provision is made in the bill.”19

While the President’s message was definitive, the veto came as a shock to many in Congress. President Johnson had been consulted prior to its passage and had assured General Howard and Senator Trumbull that he would support the bill. In response to the President’s opposition, the Senate and House passed a bill that addressed some of Johnson’s complaints, including limiting the bureau’s life to two more years. Even after this watering down, the bill was once again vetoed. This time, however, it garnered enough support to override President Johnson’s veto. The veto and the subsequent override officially established a policy of open hostility between the legislative and executive branches. Prior to the Johnson administration, overriding a veto was extremely rare – it had occurred only six times.20 After the passage of this bill, however, it became commonplace for the remainder of Johnson’s term, as Congress would override fifteen vetoes during the less than four years Johnson was in office.

End of the Bureau

While work in the educational division picked up after the passage of the second bill, many of the other activities of the bureau were winding down. On July 25, 1868, a bill was signed into law requiring the withdrawal of most bureau officers from the states and halting all functions of the bureau except those related to education and claims. Although the educational activities of the bureau were to continue for an indefinite period, most state superintendent of education offices had closed by the middle of 1870. On November 30, 1870, Rev. Alvord resigned his post as General Superintendent of Education.21 While some small activities of the bureau continued after his resignation, they were scaled back greatly and consisted largely of correspondence. Finally, due to a lack of appropriations, the activities of the bureau ceased in March 1871.

The expiration of the bureau was somewhat anticlimactic. A number of representatives wanted to establish a permanent bureau or organization for blacks to regulate their relations with the national and state governments.22 However, this concept was too radical to pass by a margin large enough to override a veto. There was also talk of moving many of the bureau’s functions into other parts of the government, but over time the appropriations dwindled and the urgency to work out a transfer proposal withered away along with the bureau itself.

References

Alston, Lee J. and Joseph P. Ferrie. “Paternalism in Agricultural Labor Contracts in the U.S. South: Implications for the Growth of the Welfare State.” American Economic Review 83, no. 4 (1993): 852-76.

American Freedmen’s Inquiry Commission. Records of the American Freedmen’s Inquiry Commission, Final Report, Senate Executive Document 53, 38th Congress, 1st Session, Serial 1176, 1864.

Cimbala, Paul and Randall Miller. The Freedmen’s Bureau and Reconstruction: Reconsiderations. New York: Fordham University Press, 1999.

Congressional Research Service, http://clerk.house.gov/art_history/house_history/vetoes.html

Finley, Randy. From Slavery to Uncertain Freedom: The Freedmen’s Bureau in Arkansas, 1865-1869. Fayetteville: University of Arkansas Press, 1996.

Johnson, Andrew. “Message of the President: Returning Bill (S.60),” Pg. 3, 39th Congress, 1st Session, Executive Document No. 25, February 19, 1866.

McFeely, William S. Yankee Stepfather: General O.O. Howard and the Freedmen. New York: W.W. Norton, 1994.

Milton, George Fort. The Age of Hate: Andrew Johnson and the Radicals. New York: Coward-McCann, 1930.

Nash, Howard P. Andrew Johnson: Congress and Reconstruction. Rutherford, NJ: Fairleigh Dickinson University Press, 1972.

Parker, Marjorie H. “Some Educational Activities of the Freedmen’s Bureau.” Journal of Negro Education 23, no. 1 (1954): 9-21.

Q.A. Gillmore to Carl Schurz, July 27, 1865, Documents Accompanying the Report of Major General Carl Schurz, Hilton Head, SC.

Ruggles, Steven, Matthew Sobek, Trent Alexander, Catherine A. Fitch, Ronald Goeken, Patricia Kelly Hall, Miriam King, and Chad Ronnander. Integrated Public Use Microdata Series: Version 3.0 [Machine-readable database]. Minneapolis, MN: Minnesota Population Center [producer and distributor], 2004.

Shlomowitz, Ralph. “The Transition from Slave to Freedman Labor Arrangements in Southern Agriculture, 1865-1870.” Journal of Economic History 39, no. 1 (1979): 333-36.

Shlomowitz, Ralph, “The Origins of Southern Sharecropping,” Agricultural History 53, no. 3 (1979): 557-75.

Simpson, Brooks D. “Ulysses S. Grant and the Freedmen’s Bureau.” In The Freedmen’s Bureau and Reconstruction: Reconsiderations, edited by Paul A. Cimbala and Randall M. Miller. New York: Fordham University Press, 1999.

Citation: Troost, William. “Freedmen’s Bureau”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/the-freedmens-bureau/

An Economic History of Finland

Riitta Hjerppe, University of Helsinki

Finland in the early 2000s is a small industrialized country with a standard of living ranked among the top twenty in the world. At the beginning of the twentieth century it was a poor agrarian country with a gross domestic product per capita less than half of that of the United Kingdom and the United States, world leaders at the time in this respect. Finland was part of Sweden until 1809, and a Grand Duchy of Russia from 1809 to 1917, with relatively broad autonomy in its economic and many internal affairs. It became an independent republic in 1917. While not directly involved in the fighting in World War I, the country went through a civil war during the years of early independence in 1918, and fought against the Soviet Union during World War II. Participation in Western trade liberalization and bilateral trade with the Soviet Union required careful balancing of foreign policy, but also enhanced the welfare of the population. Finland has been a member of the European Union since 1995, and has belonged to the European Economic and Monetary Union since 1999, when it adopted the euro as its currency.

Gross Domestic Product per capita in Finland and in EU 15, 1860-2004, index 2004 = 100

Sources: Eurostat (2001–2005)

Finland has large forest areas of coniferous trees, and forests have been and still are an important natural resource in its economic development. Other natural resources are scarce: there is no coal or oil, and relatively few minerals. Outokumpu, the biggest copper mine in Europe in its time, was depleted in the 1980s. Even water power is scarce, despite the large number of lakes, because of the small height differences. The country is among the larger ones in Europe in area, but it is sparsely populated with 44 people per square mile, 5.3 million people altogether. The population is very homogeneous. There are a small number of people of foreign origin, about two percent, and for historical reasons there are two official language groups, the Finnish-speaking majority and a Swedish-speaking minority. In recent years population has grown at about 0.3 percent per year.

The Beginnings of Industrialization and Accelerating Growth

Finland was an agrarian country in the 1800s, despite poor climatic conditions for efficient grain growing. Seventy percent of the population was engaged in agriculture and forestry, and half of the value of production came from these primary industries in 1900. Slash and burn cultivation finally gave way to field cultivation during the nineteenth century, even in the eastern parts of the country.

Some iron works were founded in the southwestern part of the country in order to process Swedish iron ore as early as in the seventeenth century. Significant tar burning, sawmilling and fur trading brought cash with which to buy a few imported items such as salt, and some luxuries – coffee, sugar, wines and fine cloths. The small towns in the coastal areas flourished through the shipping of these items, even if restrictive legislation in the eighteenth century required transport via Stockholm. The income from tar and timber shipping accumulated capital for the first industrial plants.

The nineteenth century saw the modest beginnings of industrialization, clearly later than in Western Europe. The first modern cotton factories started up in the 1830s and 1840s, as did the first machine shops. The first steam machines were introduced in the cotton factories and the first rag paper machine in the 1840s. The first steam sawmills were allowed to start only in 1860. The first railroad shortened the traveling time from the inland towns to the coast in 1862, and the first telegraphs came at around the same time. Some new inventions, such as electrical power and the telephone, came into use early in the 1880s, but generally the diffusion of new technology to everyday use took a long time.

The export of various industrial and artisan products to Russia from the 1840s on, as well as the opening up of British markets to Finnish sawmill products in the 1860s were important triggers of industrial development. From the 1870s on pulp and paper based on wood fiber became major export items to the Russian market, and before World War I one-third of the demand of the vast Russian empire was satisfied with Finnish paper. Finland became a very open economy after the 1860s and 1870s, with an export share equaling one-fifth of GDP and an import share of one-fourth. A happy coincidence was the considerable improvement in the terms of trade (export prices/import prices) from the late 1860s to 1900, when timber and other export prices improved in relation to the international prices of grain and industrial products.

Openness of the economies (exports+imports of goods/GDP, percent) in Finland and EU 15, 1960-2005

Sources: Heikkinen and van Zanden 2004; Hjerppe 1989.

Finland participated fully in the global economy of the first gold-standard era, importing much of its grain tariff-free and a lot of other foodstuffs. Half of the imports consisted of food, beverages and tobacco. Agriculture turned to dairy farming, as in Denmark, but with poorer results. The Finnish currency, the markka from 1865, was tied to gold in 1878 and the Finnish Senate borrowed money from Western banking houses in order to build railways and schools.

GDP grew at a slightly accelerating average rate of 2.6 percent per annum, and GDP per capita rose 1.5 percent per year on average between 1860 and 1913. The population was also growing rapidly, and from two million in the 1860s it reached three million on the eve of World War I. Only about ten percent of the population lived in towns. The investment rate was a little over 10 percent of GDP between the 1860s and 1913 and labor productivity was low compared to the leading nations. Accordingly, economic growth depended mostly on added labor inputs, as well as a growing cultivated area.
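The cumulative effect of these rates can be checked with compound-growth arithmetic. As a rough sketch (the function names are mine, not from the source), a per-capita growth rate of 1.5 percent sustained over the 53 years from 1860 to 1913 multiplies income roughly 2.2-fold:

```python
def growth_factor(annual_rate: float, years: int) -> float:
    """Cumulative multiple implied by a constant annual growth rate."""
    return (1 + annual_rate) ** years

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between a starting and ending level."""
    return (end / start) ** (1 / years) - 1

# GDP per capita growing at 1.5 percent per year, 1860-1913 (53 years)
print(round(growth_factor(0.015, 53), 2))  # roughly 2.2x
```

The same arithmetic shows why the 2.6 percent aggregate rate outpaced per-capita growth: the difference is absorbed by the population growing from two to three million over the period.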

Catching up in the Interwar Years

The revolution of 1917 in Russia and Finland’s independence cut off Russian trade, which was devastating for Finland’s economy. The food situation was particularly difficult, as 60 percent of the grain required had been imported.

Postwar reconstruction in Europe and the consequent demand for timber soon put the economy on a swift growth path. The gap between the Finnish economy and Western economies narrowed dramatically in the interwar period, although the gap with the Scandinavian countries, which also experienced fast growth, remained: GDP grew by 4.7 percent per annum and GDP per capita by 3.8 percent in 1920–1938. The investment rate rose to new heights, which also improved labor productivity. The 1930s depression was milder than in many other European countries because of the continued demand for pulp and paper. Moreover, Finnish industries went into depression at different times, which made the downturn milder than it would have been if all the industries had experienced their troughs simultaneously. The Depression nonetheless had serious and long-drawn-out consequences for poor people.

The land reform of 1918 secured land for tenant farmers and farm workers. A large number of new, small farms were established, which could only support families if they had extra income from forest work. The country remained largely agrarian. On the eve of World War II, almost half of the labor force and one-third of the production were still in the primary industries. Small-scale agriculture used horses and horse-drawn machines, lumberjacks went into the forest with axes and saws, and logs were transported from the forest by horses or by floating. Tariff protection and other policy measures helped to raise the domestic grain production to 80–90 percent of consumption by 1939.

Soon after the end of World War I, Finnish sawmill products, pulp and paper found old and new markets in the Western world. The structure of exports became more one-sided, however. Textiles and metal products found no markets in the West and had to compete hard with imports on the domestic market. More than four-fifths of exports were based on wood, and one-third of industrial production was in sawmilling, other wood products, pulp and paper. Other growing industries included mining, basic metal industries and machine production, but they operated on the domestic market, protected by the customs barriers that were typical of Europe at that time.

The Postwar Boom until the 1970s

Finland came out of World War II crippled by the loss of a full tenth of its territory, and with 400,000 evacuees from Karelia. Productive units were dilapidated and the raw-material situation was poor. The huge war reparations to the Soviet Union were the priority problem for decision makers. The favorable development of the domestic machinery and shipbuilding industries, based on domestic demand during the interwar period and on arms deliveries to the army during the war, made the war-reparations deliveries possible. They were paid on time and according to the agreements. At the same time, timber exports to the West started again. Gradually the productive capacity was modernized and the whole industry was reformed. Evacuees and soldiers were given land on which to settle, and this contributed to the decrease in farm size.

Finland became part of the Western European trade-liberalization movement by joining the World Bank, the International Monetary Fund (IMF) and the Bretton Woods agreement in 1948, becoming a member of the General Agreement on Tariffs and Trade (GATT) two years later, and joining Finnefta (an agreement between the European Free Trade Area (EFTA) and Finland) in 1961. The government chose not to receive Marshall Aid because of the world political situation. Bilateral trade agreements with the Soviet Union started in 1947 and continued until 1991. Tariffs were eased and imports from market economies liberated from 1957. Exports and imports, which had stayed at internationally high levels during the interwar years, only slowly returned to the earlier relative levels.

The investment rate climbed to new levels soon after World War II under a government policy favoring investment, and it remained at this very high level until the end of the 1980s. Labor-force growth stopped in the early 1960s, and economic growth has since depended on increases in productivity rather than increased labor inputs. GDP growth was 4.9 percent per annum and GDP per capita growth 4.3 percent in 1950–1973, matching the rapid pace of many other European countries.

Exports, and accordingly the structure of the manufacturing industry, were diversified by Soviet and, later, Western orders for machinery products, including paper machines, cranes, elevators, and special ships such as icebreakers. The vast Soviet Union provided good markets for clothing and footwear, while Finnish wool and cotton factories slowly disappeared because of competition from low-wage countries. The modern chemical industry started to develop in the early twentieth century, often led by foreign entrepreneurs, and the first small oil refinery was built by the government in the 1950s. The government became actively involved in industrial activities in the early twentieth century, with investments in mining, basic industries, energy production and transmission, and the construction of infrastructure, and this continued in the postwar period.

The new agricultural policy, the aim of which was to secure reasonable incomes and favorable loans to the farmers and the availability of domestic agricultural products for the population, soon led to overproduction in several product groups, and further to government-subsidized dumping on the international markets. The first limitations on agricultural production were introduced at the end of the 1960s.

The population reached four million in 1950, and the postwar baby boom put extra pressure on the educational system. The educational level of the Finnish population was low in Western European terms in the 1950s, even if everybody could read and write. The underdeveloped educational system was expanded and renewed as new universities and vocational schools were founded, and the number of years of basic, compulsory education increased. Education has been government run since the 1960s and 1970s, and is free at all levels. Finland started to follow the so-called Nordic welfare model, and similar improvements in health and social care have been introduced, normally somewhat later than in the other Nordic countries. Public child-health centers, cash allowances for children, and maternity leave were established in the 1940s, and pension plans have covered the whole population since the 1950s. National unemployment programs had their beginnings in the 1930s and were gradually expanded. A public health-care system was introduced in 1970, and national health insurance also covers some of the cost of private health care. During the 1980s the income distribution became one of the most even in the world.

Slower Growth from the 1970s

The oil crises of the 1970s put the Finnish economy under pressure. Although the oil reserves of the main supplier, the Soviet Union, showed no signs of running out, the price increased in line with world market prices. This was a source of devastating inflation in Finland. On the other hand, it was possible to increase exports under the terms of the bilateral trade agreement with the Soviet Union. This boosted export demand and helped Finland to avoid the high and sustained unemployment that plagued Western Europe.

Economic growth in the 1980s was somewhat better than in most Western economies, and at the end of the 1980s Finland caught up with the sluggishly growing Swedish GDP per capita for the first time. In the early 1990s the collapse of Soviet trade, the Western European recession, and problems in adjusting to the new liberal order of international capital movement led the Finnish economy into a depression worse than that of the 1930s. GDP fell by over 10 percent in three years, and unemployment rose to 18 percent. The banking crisis triggered a profound structural change in the Finnish financial sector. The economy then revived to a brisk growth rate of 3.6 percent per annum in 1994-2005; over the whole period 1973-2005, GDP growth was 2.5 percent and GDP per capita growth 2.1 percent.

Electronics started its spectacular rise in the 1980s and is now the largest single manufacturing industry, with a 25 percent share of all manufacturing. Nokia is the world’s largest producer of mobile phones and a major transmission-station constructor. Connected to this development was the increase in research-and-development outlays to three percent of GDP, one of the highest shares in the world. The Finnish paper companies UPM-Kymmene and M-real and the Finnish-Swedish Stora-Enso are among the largest paper producers in the world, although paper production now accounts for only 10 percent of manufacturing output. Recent discussion of the industry’s future is alarming, however. The position of the Nordic paper industry, which is based on expensive, slowly-growing timber, is threatened by new paper factories founded near the expanding consumption areas in Asia and South America, which use local, fast-growing tropical timber. The formerly significant sawmilling operations now constitute a very small percentage of activities, although production volumes have been growing. The textile and clothing industries have shrunk into insignificance.

What has typified the last couple of decades is the globalization that has spread to all areas. Exports and imports have increased as a result of export-favoring policies. Some 80 percent of the stocks of Finnish public companies are now in foreign hands: foreign ownership was limited and controlled until the early 1990s. A quarter of the companies operating in Finland are foreign-owned, and Finnish companies have even bigger investments abroad. Most big companies are truly international nowadays. Migration to Finland has increased, and since the collapse of the eastern bloc Russian immigrants have become the largest single foreign group. The number of foreigners is still lower than in many other countries – there are about 120,000 people with a foreign background out of a population of 5.3 million.

The directions of foreign trade have been changing because trade with the rising Asian economies has been gaining in importance and Russian trade has fluctuated. Otherwise, almost the same country distribution prevails as has been common for over a century. Western Europe has a share of three-fifths, which has been typical. The United Kingdom was for long Finland’s biggest trading partner, with a share of one-third, but this started to diminish in the 1960s. Russia accounted for one-third of Finnish foreign trade in the early 1900s, but the Soviet Union had minimal trade with the West at first, and its share of the Finnish foreign trade was just a few percentage points. After World War II Soviet-Finnish trade increased gradually until it reached 25 percent of Finnish foreign trade in the 1970s and early 1980s. Trade with Russia is now gradually gaining ground again from the low point of the early 1990s, and had risen to about ten percent in 2006. This makes Russia one of Finland’s three biggest trading partners, Sweden and Germany being the other two with a ten percent share each.

The balance of payments was a continuing problem in the Finnish economy until the 1990s. Particularly in the post-World War II period inflation repeatedly eroded the competitive capacity of the economy and led to numerous devaluations of the currency. An economic policy favoring exports helped the country out of the depression of the 1990s and improved the balance of payments.

Agriculture continued its problematic development of overproduction and high subsidies, which finally became very unpopular. The number of farms has shrunk since the 1960s and the average size has recently risen to average European levels. The share of agricultural production and labor are also on the Western European levels nowadays. Finnish agriculture is incorporated into the Common Agricultural Policy of the European Union and shares its problems, even if Finnish overproduction has been virtually eliminated.

The share of forestry is equally low, even if it supplies four-fifths of the wood used in Finnish sawmills and paper factories: the remaining fifth is imported mainly from the northwestern parts of Russia. The share of manufacturing is somewhat above Western European levels and, accordingly, that of services is high but slightly lower than in the old industrialized countries.

Recent discussion on the state of the economy mainly focuses on two issues. The very open economy of Finland is very much influenced by the rather sluggish economic development of the European Union. Accordingly, not very high growth rates are to be expected in Finland either. Since the 1990s depression, the investment rate has remained at a lower level than was common in the postwar period, and this is cause for concern.

The other issue concerns the prominent role of the public sector in the economy. The Nordic welfare model is basically approved of, but the costs create tensions. High taxation is one consequence of this and political parties discuss whether or not the high public-sector share slows down economic growth.

The aging population, high unemployment and the decreasing numbers of taxpayers in the rural areas of eastern and central Finland place a burden on the local governments. There is also continuing discussion about tax competition inside the European Union: how does the high taxation in some member countries affect the location decisions of companies?

Development of Finland’s exports by commodity group 1900-2005, percent

Source: Finnish National Board of Customs, Statistics Unit

Note on classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.

Development of Finland’s imports by commodity group 1900-2005, percent

Source: Finnish National Board of Customs, Statistics Unit

Note on classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.

References:

Heikkinen, S., and J.L. van Zanden, eds. Explorations in Economic Growth. Amsterdam: Aksant, 2004.

Heikkinen, S. Labour and the Market: Workers, Wages and Living Standards in Finland, 1850–1913. Commentationes Scientiarum Socialium 51 (1997).

Hjerppe, R. The Finnish Economy 1860–1985: Growth and Structural Change. Studies on Finland’s Economic Growth XIII. Helsinki: Bank of Finland Publications, 1989.

Jalava, J., S. Heikkinen and R. Hjerppe. “Technology and Structural Change: Productivity in the Finnish Manufacturing Industries, 1925-2000.” Transformation, Integration and Globalization Economic Research (TIGER), Working Paper No. 34, December 2002.

Kaukiainen, Yrjö. A History of Finnish Shipping. London: Routledge, 1993.

Myllyntaus, Timo. Electrification of Finland: The Transfer of a New Technology into a Late Industrializing Economy. Worcester, MA: Macmillan, 1991.

Ojala, J., J. Eloranta and J. Jalava, editors. The Road to Prosperity: An Economic History of Finland. Helsinki: Suomalaisen Kirjallisuuden Seura, 2006.

Pekkarinen, J., and J. Vartiainen. Finlands ekonomiska politik: den långa linjen 1918–2000. Stockholm: Stiftelsen Fackföreningsrörelsens institut för ekonomisk forskning FIEF, 2001.

Citation: Hjerppe, Riitta. “An Economic History of Finland”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-finland/

The Economic History of the International Film Industry

Gerben Bakker, University of Essex

Introduction

Like other major innovations such as the automobile, electricity, chemicals and the airplane, cinema emerged in most Western countries at the same time. As the first form of industrialized mass-entertainment, it was all-pervasive. From the 1910s onwards, each year billions of cinema-tickets were sold and consumers who did not regularly visit the cinema became a minority. In Italy, today hardly significant in international entertainment, the film industry was the fourth-largest export industry before the First World War. In the depression-struck U.S., film was the tenth most profitable industry, and in 1930s France it was the fastest-growing industry, followed by paper and electricity, while in Britain the number of cinema-tickets sold rose to almost one billion a year (Bakker 2001b). Despite this economic significance, despite its rapid emergence and growth, despite its pronounced effect on the everyday life of consumers, and despite its importance as an early case of the industrialization of services, the economic history of the film industry has hardly been examined.

This article limits itself to the economic development of the industry. It discusses just a few countries, mainly the U.S., Britain and France, and these only to investigate the economic issues at hand, not to give complete histories of their film industries. Given the nature of an encyclopedia article, this entry cannot do justice to developments in each and every country. It also limits itself to the evolution of the Western film industry, which has been and still is the largest film industry in the world in revenue terms, although this may well change in the future.

Before Cinema

In the late eighteenth century most consumers enjoyed their entertainment in an informal, haphazard and often non-commercial way. When making a trip they could suddenly meet a roadside entertainer, and their villages were often visited by traveling showmen, clowns and troubadours. Seasonal fairs attracted a large variety of musicians, magicians, dancers, fortune-tellers and sword-swallowers. Only a few large cities harbored legitimate theaters, strictly regulated by the local and national rulers. This world was torn apart in two stages.

First, most Western countries started to deregulate their entertainment industries, enabling many more entrepreneurs to enter the business and make far larger investments, for example in circuits of fixed stone theaters. The U.S. was the first to liberalize, in the late eighteenth century; most European countries followed during the nineteenth century. Britain, for example, deregulated in the mid-1840s, and France in the late 1860s. As a result, commercial, formalized and standardized live entertainment emerged and destroyed a fair part of traditional entertainment. The combined effect of liberalization, innovation and changes in business organization made the industry grow rapidly throughout the nineteenth century and integrated local and regional entertainment markets into national ones. By the end of the century, these integrated national entertainment industries had realized most of the productivity gains attainable through process innovations. Creative inputs, for example, circulated swiftly among the venues, often in dedicated trains, coordinated by centralized booking offices that maximized capital and labor utilization.

At the end of the nineteenth century, in the era of the second industrial revolution, falling working hours, rising disposable income, increasing urbanization, rapidly expanding transport networks and strong population growth resulted in a sharp rise in the demand for entertainment. The effect of this boom was further rapid growth of live entertainment through process innovations. At the turn of the century, the production possibilities of the existing industry configuration were fully realized and further innovation within the existing live-entertainment industry could only increase productivity incrementally.

At this moment, in a second stage, cinema emerged and in its turn destroyed this world, by industrializing it into the modern world of automated, standardized, tradable mass-entertainment, integrating the national entertainment markets into an international one.

Technological Origins

In the early 1890s, Thomas Edison introduced the Kinetograph camera and the Kinetoscope, which enabled the shooting of films and their playback in coin-operated machines for individual viewing. In the mid-1890s, the Lumière brothers added projection to the invention and started to show films in theater-like settings. Cinema reconfigured different technologies that were all available by the late 1880s: photography (1830s), taking negative pictures and printing positives (1880s), roll films (1850s), celluloid (1868), high-sensitivity photographic emulsion (late 1880s), projection (1645) and movement dissection/persistence of vision (1872).

After the preconditions for motion pictures had been established, cinema technology itself was invented. Patents for viewing and projecting motion pictures, though not for taking them, had been filed as early as 1860/1861. The scientist Étienne-Jules Marey completed the first working model of a film camera in 1888 in Paris. Edison visited Georges Demenÿ in 1888 and saw his films. In 1891, Edison filed an American patent for a film camera with a different moving mechanism than the Marey camera. In 1890, the Englishman William Friese-Greene presented a working camera to a group of enthusiasts. In 1893 the Frenchman Demenÿ filed a patent for a camera of his own. Finally, the Lumière brothers filed a patent for their type of camera and for projection in February 1895, and in December of that year they gave the first projection for a paying audience. They were followed in February 1896 by the Englishman Robert W. Paul. Paul also invented the 'Maltese cross,' a device still used in film projectors today, which produces the intermittent movement of the film, holding each frame steady behind the lens between exposures (Michaelis 1958; Musser 1990: 65-67; Low and Manvell 1948).

Three characteristics stand out in this innovation process. First, it was an international process of invention, taking place in several countries at the same time, and the inventors building upon and improving upon each other’s inventions. This connects to Joel Mokyr’s notion that in the nineteenth century communication became increasingly important to innovations, and many innovations depended on international communication between inventors (Mokyr 1990: 123-124). Second, it was what Mokyr calls a typical nineteenth century invention, in that it was a smart combination of many existing technologies. Many different innovations in the technologies which it combined had been necessary to make possible the innovation of cinema. Third, cinema was a major innovation in the sense that it was quickly and universally adopted throughout the western world, quicker than the steam engine, the railroad or the steamship.

The Emergence of Cinema

For about the first ten years of its existence, cinema in the United States and elsewhere was mainly a trick and a gadget. Before 1896 the coin-operated Kinetoscope of Edison was present at fairs and in entertainment venues: spectators dropped a coin in the machine and peered through a viewer to see the film. The first projections, from 1896 onwards, attracted large audiences. Lumière had a group of operators who traveled around the world with the cinematograph and showed the pictures in theaters. After a few years films became a part of the program in vaudeville and sometimes in theater as well. At the same time traveling cinema emerged: showmen who traveled around with a tent or mobile theater and set up shop for a short time in towns and villages. These differed from the Lumière operators and others in that they catered to general, popular audiences, while the former offered more upscale parts of theater programs, or special programs for the bourgeoisie (Musser 1990: 140, 299, 417-20).

This whole era, which in the U.S. lasted up to about 1905, was a time in which cinema seemed just one of many new fashions, and it was not at all certain that it would persist rather than be forgotten or marginalized quickly, as happened to the contemporary boom in skating rinks and bowling alleys. This changed when Nickelodeons, fixed cinemas with a few hundred seats, emerged and quickly spread all over the country between 1905 and 1907. From this time onwards cinema changed into an industry in its own right, distinct from other entertainments, with its own buildings and its own advertising. The emergence of fixed cinemas coincided with a huge growth phase in the business in general: film production increased greatly, and film distribution developed into a specialized activity, often managed by large film producers. However, until about 1914, besides the cinemas, films also continued to be combined with live entertainment in vaudeville and other theaters (Musser 1990; Allen 1980).

Figure 1 shows the total length of negatives released on the U.S., British and French film markets. In the U.S., the total released negative length increased from 38,000 feet in 1897, to two million feet in 1910, to twenty million feet in 1920. Clearly, the initial U.S. growth between 1893 and 1898 was very strong: the market increased by over three orders of magnitude, but from an infinitesimal initial base. Between 1898 and 1906, far less growth took place, and in this period it may well have looked like the cinematograph would remain a niche product, a gimmick shown at fairs and interspersed with live entertainment. From 1907, however, a new, sharp and sustained growth phase started: the market increased by a further two orders of magnitude, and from a far higher base this time. At the same time, the average film length increased considerably, from eighty feet in 1897 to seven hundred feet in 1910 to three thousand feet in 1920. One reel of film held about 1,500 feet and had a playing time of about fifteen minutes.

Between the mid-1900s and 1914 the British and French markets were growing at roughly the same rates as the U.S. one. World War I constituted a discontinuity: from 1914 onwards European growth rates were far lower than those in the U.S.

The prices the Nickelodeons charged were between five and ten cents, for which spectators could stay as long as they liked. Around 1910, when larger cinemas emerged in prime city center locations, more closely resembling theaters than the small and shabby Nickelodeons, prices increased. They ranged from one dollar to a dollar and a half in 'first-run' cinemas down to five cents in sixth-run neighborhood cinemas (see also Sedgwick 1998).

Figure 1

Total Released Length on the U.S., British and French Film Markets (in Meters), 1893-1922

Note: The length refers to the total length of original negatives that were released commercially.

See Bakker 2005, appendix I for the method of estimation and for a discussion of the sources.

Source: Bakker 2001b; American Film Institute Catalogue, 1893-1910; Motion Picture World, 1907-1920.

The Quality Race

Once Nickelodeons and other types of cinemas were established, the industry entered a new stage with the emergence of the feature film. Before 1915, cinemagoers saw a succession of many different films, each between one and fifteen minutes, of varying genres such as cartoons, newsreels, comedies, travelogues, sports films, ‘gymnastics’ pictures and dramas. After the mid-1910s, going to the cinema meant watching a feature film, a heavily promoted dramatic film with a length that came closer to that of a theater play, based on a famous story and featuring famous stars. Shorts remained only as side dishes.

The feature film emerged when cinema owners discovered that films of far higher quality and length enabled them to charge far higher ticket prices and draw far more people into their cinemas, resulting in far higher profits, even if they had to pay far more for film rentals. The discovery that consumers would turn their backs on packages of shorts (newsreels, sports, cartoons and the like) as the quality of features increased set in motion a quality race between film producers (Bakker 2005). They all started investing heavily in portfolios of feature films, spending large sums on well-known stars, rights to famous novels and theater plays, extravagant sets, star directors, and so on. A contributing factor in the U.S. was the demise of the Motion Picture Patents Company (MPPC), a cartel that tried to monopolize film production and distribution. Between about 1908 and 1912 the Edison-backed MPPC had restricted quality artificially by setting limits on film length and film rental prices. When William Fox and the Department of Justice started legal action in 1912, the power of the MPPC quickly waned and the 'independents' came to dominate the industry.

In the U.S., the motion picture industry became the internet of the 1910s: when companies put the words 'motion pictures' in their IPO prospectuses, investors flocked to them. Many of these companies went bankrupt, were dissolved or were taken over. A few survived and became the Hollywood studios, most of which we still know today: Paramount, Metro-Goldwyn-Mayer (MGM), Warner Brothers, Universal, Radio-Keith-Orpheum (RKO), Twentieth Century-Fox, Columbia and United Artists.

A necessary condition for the quality race was some form of vertical integration. In the early film industry, films were sold outright. This meant that the cinema owner who bought a film received all the marginal revenues the film generated. In the film industry these revenues were largely marginal profits, as most costs were fixed, so an additional ticket sold was pure (gross) profit. Because the producer did not receive any of these revenues, there was little incentive at the margin to increase quality. When outright sales made way for the rental of films to cinemas for a fixed fee, producers gained a stronger incentive to increase a film's quality, because a better film would generate more rentals (Bakker 2005). The incentive strengthened further when percentage contracts were introduced for large city center cinemas, and when producer-distributors actually started to buy large cinemas. The changing contractual relationship between cinemas and producers was paralleled between producers and distributors.
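The incentive argument can be made concrete with a minimal numerical sketch. All demand figures, fees and the percentage split below are hypothetical, chosen only to illustrate the three contract forms named above, not taken from the historical record:

```python
# Sketch: how the contract form links producer revenue to film quality.
# Hypothetical demand: a film of given 'quality' sells quality*1000 tickets
# per cinema and is rented by quality*10 cinemas.

TICKET_PRICE = 0.05  # dollars, matching the interwar examples later in the text

def producer_revenue(quality, contract):
    tickets_per_cinema = quality * 1000
    cinemas = quality * 10
    if contract == "sale":
        # Outright sale: one-off price, no link to tickets sold at the margin.
        return 150.0
    if contract == "rental":
        # Fixed fee per renting cinema: a better film is rented by more cinemas.
        return 40.0 * cinemas
    if contract == "percentage":
        # Percentage contract: producer takes a direct share of the box office.
        return 0.25 * TICKET_PRICE * tickets_per_cinema * cinemas
    raise ValueError(contract)

for contract in ("sale", "rental", "percentage"):
    low = producer_revenue(1.0, contract)
    high = producer_revenue(2.0, contract)
    print(f"{contract:10s} quality 1.0 -> ${low:8.2f}, quality 2.0 -> ${high:8.2f}")
```

Doubling quality leaves revenue flat under outright sale, doubles it under fixed rentals, and quadruples it under the percentage contract (more cinemas times more tickets per cinema), which is the escalating incentive structure the paragraph describes.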

The Decline and Fall of the European Film Industry

Because the quality race happened while Europe was at war, European companies could not participate in the escalation of quality (and production costs) discussed above. This does not mean all of them were in crisis. Many made high profits during the war from newsreels, other short films, propaganda films and distribution. They were also able to participate in the shift towards the feature film, substantially increasing output in the new genre during the war (Figure 2). However, it was difficult for them to secure the massive amount of venture capital needed to participate in the quality race while their countries were at war. Even if they had managed to raise it, it might have been difficult to justify such lavish expenditures while people were dying in the trenches.

Yet a few European companies did participate in the escalation phase. The Danish Nordisk company invested heavily in long feature-type films, and bought cinema chains and distributors in Germany, Austria and Switzerland. Its strategy ended when the German government forced it to sell its German assets to the newly founded UFA company, in return for a 33 percent minority stake. The French Pathé company was one of the largest U.S. film producers. It set up its own U.S. distribution network and invested in heavily advertised serials (films in weekly installments) expecting that this would become the industry standard. As it turned out, Pathé bet on the wrong horse and was overtaken by competitors riding high on the feature film. Yet it eventually switched to features and remained a significant company. In the early 1920s, its U.S. assets were sold to Merrill Lynch and eventually became part of RKO.

Figure 2

Number of Feature Films Produced in Britain, France and the U.S., 1911-1925

(semi-logarithmic scale)

Source: Bakker 2005 [American Film Institute Catalogue; British Film Institute; Screen Digest; Globe, World Film Index, Chirat, Longue métrage.]

Because it could not participate in the quality race, the European film industry started to decline in relative terms. Its market share at home and abroad diminished substantially (Figure 3). In the 1900s European companies supplied at least half of the films shown in the U.S. In the early 1910s this dropped to about twenty percent. In the mid-1910s, when the feature film emerged, the European market share declined to nearly undetectable levels.

By the 1920s, most large European companies gave up film production altogether. Pathé and Gaumont sold their U.S. and international businesses, left film making and focused on distribution in France. Éclair, their major competitor, went bankrupt. Nordisk continued as an insignificant Danish film company, and eventually collapsed into receivership. The eleven largest Italian film producers formed a trust, which failed terribly, and one by one they fell into financial disaster. The famous British producer Cecil Hepworth went bankrupt. By late 1924, hardly any films were being made in Britain. American films were shown everywhere.

Figure 3

Market Shares by National Film Industries, U.S., Britain, France, 1893-1930

Note: EU/US is the share of European companies on the U.S. market, EU/UK is the share of European companies on the British market, and so on. For further details see Bakker 2005.

The Rise of Hollywood

Once they had lost out, it was difficult for European companies to catch up. First of all, since the sharply rising film production costs were fixed and sunk, market size was becoming of essential importance as it affected the amount of money that could be spent on a film. Exactly at this crucial moment, the European film market disintegrated, first because of war, later because of protectionism. The market size was further diminished by heavy taxes on cinema tickets that sharply increased the price of cinema compared to live entertainment.

Second, the emerging Hollywood studios benefited from first mover advantages in feature film production: they owned international distribution networks, they could offer cinemas large portfolios of films at a discount (block-booking), sometimes before they were even made (blind-bidding), the quality gap with European features was so large it would be difficult to close in one go, and, finally, the American origin of the feature films in the 1910s had established U.S. films as a kind of brand, leaving consumers with high switching costs to try out films from other national origins. It would be extremely costly for European companies to re-enter international distribution, produce large portfolios, jump-start film quality, and establish a new brand of films – all at the same time (Bakker 2005).

A third factor was the rise of Hollywood as a production location. The large existing American Northeast coast film industry and the newly emerging film industry in Florida declined as U.S. film companies started to locate in Southern California. First of all, the 'sharing' of inputs facilitated knowledge spillovers and allowed higher returns. The studios lowered costs because creative inputs had less down-time, needed to travel less, could participate in many try-outs to achieve optimal casting, and could be rented out easily to competitors when not immediately wanted. Hollywood also attracted new creative inputs through non-monetary means: even more than money, creative inputs wanted to maximize fame and professional recognition. For an actress, an offer to work with the world's best directors, costume designers, lighting specialists and make-up artists was difficult to decline.

Second, a thick market for specialized supply and demand existed. Companies could easily rent out excess studio capacity (for example, during the nighttime B-films were made), and a producer was quite likely to find the highly specific products or services needed somewhere in Hollywood (Christopherson and Storper 1987, 1989). While a European industrial ‘film’ district may have been competitive and even have a lower over-all cost/quality ratio than Hollywood, a first European major would have a substantially higher cost/quality ratio (lacking external economies) and would therefore not easily enter (see, for example, Krugman and Obstfeld 2003, chapter 6). If entry did happen, the Hollywood studios could and would buy successful creative inputs away, since they could realize higher returns on these inputs, which resulted in American films with even a higher perceived quality, thus perpetuating the situation.

Sunlight, climate and the variety of landscape in California were of course favorable to film production, but were not unique. Locations such as Florida, Italy, Spain and Southern France offered similar conditions.

The Coming of Sound

In 1927, sound films were introduced. The main innovator was Warner Brothers, backed by the bank Goldman, Sachs, which went so far as to parachute a vice-president into Warner. Although many other sound systems had been tried and marketed from the 1900s onwards, the electrical microphone, invented at Bell Labs in the mid-1920s, sharply increased the quality of sound films and made the transformation of the industry possible. Sound increased the interest in the film industry of large industrial companies such as General Electric, Western Electric and RCA, as well as that of banks eager to finance the new innovation, such as the Bank of America and Goldman, Sachs.

In economic terms, sound represented an exogenous jump in sunk costs (and product quality) which did not affect the basic industry structure very much: The industry structure was already highly concentrated before sound and the European, New York/Jersey and Florida film industries were already shattered. What it did do was industrialize away most of the musicians and entertainers that had complemented the silent films with sound and entertainment, especially those working in the smaller cinemas. This led to massive unemployment among musicians (see, for example, Gomery 1975; Kraft 1996).

The effect of sound film in Europe was to increase the domestic revenues of European films, which became more culture-specific once they were in the local language, but at the same time it decreased the foreign revenues European films received (Bakker 2004b). It is difficult to assess the impact of sound film completely, as it coincided with increased protection: shortly before the coming of sound, many European countries set quotas on the number of foreign films that could be shown. In France, for example, where sound became widely adopted from 1930 onwards, the U.S. share of films dropped from eighty to fifty percent between 1926 and 1929, mainly as a result of protectionist legislation. During the 1930s, the share temporarily declined to about forty percent, and then hovered between fifty and sixty percent. In short, protectionism decreased the U.S. market share and increased the French market shares of French and other European films, while sound film increased the French market share, mostly at the expense of other European films and less so at the expense of U.S. films.

In Britain, the share of releases of American films declined from eighty percent in 1927 to seventy percent in 1930, while British films increased from five percent to twenty percent, exactly in line with the requirements of the 1927 quota act. After 1930, the American share remained roughly stable. This suggests that sound film did not have a large influence, and that the share of U.S. films was mainly brought down by the introduction of the Cinematograph Films Act in 1927, which set quotas for British films. Nevertheless, revenue data, which are unfortunately lacking, would be needed to give a definitive answer, as little is known about effects on the revenue per film.

The Economics of the Interwar Film Trade

Because film production costs were mainly fixed and sunk, international sales and distribution were important: they represented additional sales with little additional cost to the producer, since the film itself had already been made. Films had special characteristics that made international sales essential. Because they essentially were copyrights rather than physical products, the cost of an additional sale was theoretically zero. Film production involved high endogenous sunk costs, recouped by renting out the copyright to the film. Marginal foreign revenue equaled marginal net revenue (and marginal profit once the film's production costs had been fully amortized). All companies, large or small, had to take foreign sales into account when setting film budgets (Bakker 2004b).

Films were intermediate products sold to foreign distributors and cinemas. While the rent paid varied depending on perceived quality and general conditions of supply and demand, the ticket price paid by consumers generally did not vary. It only varied by cinema: highest in first-run city center cinemas and lowest in sixth-run ramshackle neighborhood cinemas. Cinemas used films to produce ‘spectator-hours’: a five-hundred-seat cinema providing one hour of film, produced five hundred spectator-hours of entertainment. If it sold three hundred tickets, the other two hundred spectator-hours produced would have perished.

Because film was an intermediate product and a capital good at that, international competition could not be on price alone, just as sales of machines depend on the price/performance ratio. If we consider a film’s ‘capacity to sell spectator-hours’ (hereafter called selling capacity) as proportional to production costs, a low-budget producer could not simply push down a film’s rental price in line with its quality in order to make a sale; even at a price of zero, some low-budget films could not be sold. The reasons were twofold.

First, because cinemas had mostly fixed costs and few variable costs, a film’s selling capacity needed to be at least as large as fixed cinema costs plus its rental price. A seven-hundred-seat cinema, with a production capacity of 39,200 spectator-hours a week, weekly fixed costs of five hundred dollars, and an average admission price of five cents per spectator-hour, needed a film selling at least ten thousand spectator-hours, and would not be prepared to pay for that (marginal) film, because it only recouped fixed costs. Films needed a minimum selling capacity to cover cinema fixed costs. Producers could only price down low-budget films to just above the threshold level. With a lower expected selling capacity, these films could not be sold at any price.

This reasoning assumes that we know a film’s selling capacity ex ante. A main feature distinguishing foreign markets from domestic ones was that uncertainty was markedly lower: from a film’s domestic launch the audience appeal was known, and each subsequent country added additional information. While a film’s audience appeal across countries was not perfectly correlated, uncertainty was reduced. For various companies, correlations between foreign and domestic revenues for entire film portfolios fluctuated between 0.60 and 0.95 (Bakker 2004b). Given the riskiness of film production, this reduction in uncertainty undoubtedly was important.

The second reason for limited price competition was the opportunity cost, given cinemas' production capacities. If the hypothetical cinema obtained a high-capacity film for a weekly rental of twelve hundred dollars, which sold all 39,200 spectator-hours, the cinema made a profit of $260 (($0.05 × 39,200) − $1,200 − $500 = $260). If a film with half the budget and, we assume, half the selling capacity rented for half the price, the cinema owner would lose $120 (($0.05 × 19,600) − $600 − $500 = −$120). Thus, the cinema owner would pay no more than $220 for the lower-budget film, given that the high-budget film was available (($0.05 × 19,600) − $220 − $500 = $260). So a film with half the selling capacity of the high-capacity film would need to rent for under a fifth of the latter's price to make a transaction even possible.
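The arithmetic of the two reasons above can be checked directly. All figures here come from the text itself: a cinema with 39,200 spectator-hours of weekly capacity, $500 in weekly fixed costs, and an average price of $0.05 per spectator-hour:

```python
# Check the cinema-profit arithmetic from the interwar film-trade examples.

PRICE = 0.05          # dollars per spectator-hour
CAPACITY = 39_200     # spectator-hours per week (700 seats, 56 hours)
FIXED_COSTS = 500.0   # dollars of weekly fixed costs

def cinema_profit(spectator_hours_sold, rental_fee):
    """Weekly cinema profit: ticket revenue minus film rental and fixed costs."""
    return PRICE * spectator_hours_sold - rental_fee - FIXED_COSTS

# Reason 1: a film must sell at least fixed costs / price = 10,000
# spectator-hours before the cinema would pay anything for it at all.
break_even_hours = FIXED_COSTS / PRICE

# Reason 2 (opportunity cost): the high-capacity film sells out at a
# $1,200 rental, yielding a $260 profit...
high = cinema_profit(CAPACITY, 1200)

# ...so the half-capacity film at half the rental produces a $120 loss...
low_at_half_price = cinema_profit(CAPACITY // 2, 600)

# ...and the most the cinema would pay for the half-capacity film, given the
# $260 alternative, is only $220, i.e. under a fifth of $1,200.
max_low_rental = PRICE * (CAPACITY // 2) - FIXED_COSTS - high

print(break_even_hours, high, low_at_half_price, max_low_rental)
```

The sharply increasing returns to selling capacity fall straight out of the numbers: halving a film's selling capacity cuts the rental the cinema will bear by more than four-fifths, not by half.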

These sharply increasing returns to selling capacity made the setting of production outlays important, as a right price/capacity ratio was crucial to win foreign markets.

How Films Became Branded Products

To make sure film revenues reached above cinema fixed costs, film companies transformed films into branded products. With the emergence of the feature film, they started to pay large sums to actors, actresses and directors and for rights to famous plays and novels. This is still a major characteristic of the film industry today that fascinates many people. Yet the huge sums paid for stars and stories are not as irrational and haphazard as they sometimes may seem. Actually, they might be just as ‘rational’ and have just as quantifiable a return as direct spending on marketing and promotion (Bakker 2001a).

To secure an audience, film producers borrowed branding techniques from other consumer goods’ industries, but the short product-life-cycle forced them to extend the brand beyond one product – using trademarks or stars – to buy existing ‘brands,’ such as famous plays or novels, and to deepen the product-life-cycle by licensing their brands.

Thus, the main value of stars and stories lay not in their ability to predict successes, but in their services as giant ‘publicity machines’ which optimized advertising effectiveness by rapidly amassing high levels of brand-awareness. After a film’s release, information such as word-of-mouth and reviews would affect its success. The young age at which stars reached their peak, and the disproportionate income distribution even among the superstars, confirm that stars were paid for their ability to generate publicity. Likewise, because ‘stories’ were paid several times as much as original screenplays, they were at least partially bought for their popular appeal (Bakker 2001a).

Stars and stories also signaled a film’s qualities to some extent, if only by guaranteeing that the film actually contained them. Consumer preferences confirm that stars and stories were the main reasons to see a film. Further, the fame of stars is distributed disproportionately, possibly even twice as unequally as income. Film companies, aided by long-term contracts, probably captured part of the rent from their popularity. Gradually these companies specialized in developing and leasing their ‘instant brands’ to other consumer-goods industries in the form of merchandising.

Already from the late 1930s onwards, the Hollywood studios used the new scientific market research techniques of George Gallup to continuously track the brand-awareness among the public of their major stars (Bakker 2003). Figure 4 is based on one such graph used by Hollywood. It shows that Lana Turner was a rising star, Gable was consistently a top star, while Stewart’s popularity was high but volatile. James Stewart was eleven percentage points more popular among the richest consumers than among the poorest, while Lana Turner differed by only a few percentage points. Additional segmentation by city size seemed to matter, since substantial differences were found: Clark Gable was ten percentage points more popular in small cities than in large ones. Of the richest consumers, 51 percent wanted to see a movie starring Gable, but altogether they constituted just 14 percent of Gable’s market, while the poorest consumers, 57 percent of whom were Gable fans, constituted 34 percent. The increases in Gable’s popularity roughly coincided with his releases, suggesting that while producers used Gable partially for the brand-awareness of his name, each use (film) subsequently increased or maintained that awareness in what seems to have been a self-reinforcing process.

Figure 4

Popularity of Clark Gable, James Stewart and Lana Turner among U.S. respondents

April 1940 – October 1942, in percentage

Source: Audience Research Inc.; Bakker 2003.

The Film Industry’s Contribution to Economic Growth and Welfare

By the late 1930s, cinema had become an important mass entertainment industry. Nearly everyone in the Western world went to the cinema, and many went at least once a week. Cinema had made possible a massive growth in productivity in the entertainment industry, thereby disproving the notion of some economists that productivity growth in certain service industries is inherently impossible. Between 1900 and 1938, output of the entertainment industry, measured in spectator-hours, grew substantially in the U.S., Britain and France, varying from three to eleven percent per year over a period of nearly forty years (Table 1). Output per worker increased from 2,453 spectator-hours in the U.S. in 1900 to 34,879 in 1938. In Britain it increased from 16,404 to 37,537 spectator-hours and in France from 1,575 to 8,175 spectator-hours. This phenomenal growth can be explained partially by the addition of more capital (such as film technology and film production outlays) and partially by simply producing more efficiently with the existing amounts of capital and labor. The increase in efficiency (‘total factor productivity’) varied from about one percent per year in Britain to over five percent in the U.S., with France somewhere in between. In all countries, this increase in efficiency was at least one and a half times the increase in efficiency at the level of the entire nation. For the U.S. it was as much as five times and for France more than three times the national increase in efficiency (Bakker 2004a).
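The labor-productivity figures quoted above imply compound annual growth rates that are easy to recompute. A short sketch, assuming the 38-year span 1900-1938 between the two observations:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two observations."""
    return (end / start) ** (1 / years) - 1

# Output per worker in spectator-hours, 1900 vs. 1938, from the text above.
growth = {
    "U.S.":    cagr(2_453, 34_879, 38),
    "Britain": cagr(16_404, 37_537, 38),
    "France":  cagr(1_575, 8_175, 38),
}
for country, g in growth.items():
    print(f"{country}: {g:.1%} per year")  # roughly 7%, 2% and 4% respectively
```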

Another noteworthy feature is that labor productivity in entertainment varied less across countries in the late 1930s than it did in 1900. Part of the reason is that cinema technology made entertainment partially tradable and therefore forced productivity in similar directions in all countries; the tradable part of the entertainment industry would now exert competitive pressure on the non-tradable part (Bakker 2004a). It is therefore not surprising that cinema caused the lowest efficiency increase in Britain, which already had a well-developed and competitive entertainment industry (with the highest labor and capital productivity in both 1900 and 1938), and higher efficiency increases in the U.S. and, to a lesser extent, in France, which had less well-developed entertainment industries in 1900.

Another way to measure the contribution of film technology to the economy in the late 1930s is by using a social savings methodology. If we assume that cinema did not exist and all demand for entertainment (measured in spectator-hours) would have to be met by live entertainment, we can calculate the extra costs to society and thus the amount saved by film technology. In the U.S., these social savings amounted to as much as 2.2 percent ($2.5 billion) of GDP, in France to just 1.4 percent ($0.16 billion) and in Britain to only 0.3 percent ($0.07 billion) of GDP.
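The social-savings logic can be written out explicitly. The function below is a sketch; the spectator-hours, unit costs, and GDP figure are purely hypothetical placeholders chosen for illustration, since the article reports only the resulting shares:

```python
def social_savings_share(spectator_hours, live_cost, cinema_cost, gdp):
    """Social savings of cinema as a share of GDP: the extra expense of
    meeting all cinema demand with (more expensive) live entertainment."""
    return spectator_hours * (live_cost - cinema_cost) / gdp

# Hypothetical illustration: 40 billion spectator-hours, live entertainment
# at 11 cents and cinema at 5 cents per spectator-hour, $110 billion GDP.
share = social_savings_share(40e9, 0.11, 0.05, 110e9)
print(f"social savings: {share:.1%} of GDP")  # about 2.2%, of the same order
                                              # as the U.S. figure in the text
```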

A third and different way to look at the contribution of film technology to the economy is to look at the consumer surplus generated by cinema. Contrary to the TFP and social savings techniques used above, which assume that cinema is a substitute for live entertainment, this approach assumes that cinema is a wholly new good and that therefore the entire consumer surplus generated by it is ‘new’ and would not have existed without cinema. For an individual consumer, the surplus is the difference between the price she was willing to pay and the ticket price she actually paid. This difference varies from consumer to consumer, but with econometric techniques one can estimate the sum of individual surpluses for an entire country. The resulting national consumer surpluses for entertainment varied from about a fifth of total entertainment expenditure in the U.S., to about half in Britain, and as much as three quarters in France.
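A stylized version of this calculation, assuming a linear demand curve: the surplus is the triangle between the demand curve and the ticket price. All numbers below (the ‘choke’ price at which demand vanishes, the ticket price, and the number of admissions) are hypothetical:

```python
def consumer_surplus(choke_price, price, quantity):
    """Area of the triangle between a linear demand curve (zero demand at
    choke_price) and the actual ticket price, over the observed quantity."""
    return 0.5 * (choke_price - price) * quantity

# Hypothetical: 25-cent tickets, demand vanishing at 35 cents, 4 billion
# admissions. Surplus then comes to about a fifth of total expenditure,
# the order of magnitude reported for the U.S. above.
surplus = consumer_surplus(0.35, 0.25, 4e9)
expenditure = 0.25 * 4e9
print(surplus / expenditure)  # about 0.2
```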

All these measures show that by the late 1930s cinema was making an essential contribution to total welfare as well as to the entertainment industry’s productivity.

Vertical Disintegration

After the Second World War, the Hollywood film industry disintegrated: production, distribution and exhibition became separate activities that were not always owned by the same organization. Three main causes brought about the vertical disintegration. First, the U.S. Supreme Court forced the studios to divest their cinema chains in 1948. Second, changes in the social-demographic structure in the U.S. brought about a shift towards entertainment within the home: many young couples started to live in the new suburbs and wanted to stay home for entertainment. Initially, they mainly used radio for this purpose and later they switched to television (Gomery 1985). Third, television broadcasting in itself (without the social-demographic changes that increased demand for it) constituted a new distribution channel for audiovisual entertainment and thus decreased the scarcity of distribution capacity. This meant that television took over the focus on the lowest common denominator from radio and cinema, while the latter two differentiated their output and started to focus more on specific market segments.

Figure 5

Real Cinema Box Office Revenue, Real Ticket Price and Number of Screens in the U.S., 1945-2002

Note: The values are in dollars of 2002, using the EH.Net consumer price deflator.

Source: Adapted from Vogel 2004 and Robertson 2001.

The consequence was a sharp fall in real box office revenue in the decade after the war (Figure 5). After the mid-1950s, real revenue stabilized and, with some fluctuations, remained at roughly the same level until the mid-1990s. The decline in screens was more limited. After 1963 the number of screens increased again steadily, to reach nearly twice the 1945 level in the 1990s; since the 1990s there have been more movie screens in the U.S. than ever before. The proliferation of screens, coinciding with declining capacity per screen, facilitated market segmentation. Revenue per screen nearly halved in the decade after the war, rebounded during the 1960s, and then began a long, steady decline from 1970 onwards. The real price of a cinema ticket was quite stable until the 1960s, when it more than doubled. Since the early 1970s, the price has been declining again, and nowadays the real admission price is about what it was in 1965.

It was in this adverse post-war climate that the vertical disintegration unfolded. It took place at three levels. First, and most visibly, the Hollywood studios divested their cinema chains. Second, they outsourced part of their film production and most of their production factors to independent companies. This meant that the Hollywood studios produced only part of the films they distributed, that they exchanged the long-term, seven-year contracts with star actors for per-film contracts, and that they sold off part of their studio facilities, renting them back for individual films. Third, the Hollywood studios’ main business became film distribution and financing. They specialized in planning and assembling a portfolio of films, contracting and financing most of them, and marketing and distributing them world-wide.

These developments had three important effects. First, production by a few large companies was replaced by production by many small, flexibly specialized companies. Southern California became an industrial district for the film industry and harbored an intricate network of these businesses, from set design companies and costume makers to special effects firms and equipment rental outfits (Storper and Christopherson 1989). Only at the level of distribution and financing did concentration remain high. Second, films became more differentiated and tailored to specific market segments; they were now aimed at a younger and more affluent audience. Third, the European film market gained in importance: because the social-demographic changes (suburbanization) and the advent of television happened somewhat later in Europe, the drop in cinema attendance also happened later there. The result was that the Hollywood studios off-shored a large chunk, at times over half, of their production to Europe in the 1960s. This was stimulated by lower European production costs, by difficulties in repatriating foreign film revenues, and by the vertical disintegration in California, which severed the studios’ ties with their production units and facilitated outside contracting.

European production companies could adapt better to changes in post-war demand because they were already flexibly specialized. The British film production industry, for example, had been fragmented almost from its emergence in the 1890s. In the late 1930s, distribution became concentrated, mainly through the efforts of J. Arthur Rank, while the production sector, a network of flexibly specialized companies in and around London, boomed. After the war, the drop in admissions followed the U.S. with about a ten-year delay (Figure 6). The drop in the number of screens experienced the same lag, but was more severe: about two-thirds of British cinema screens disappeared, versus only one-third in the U.S. In France, after the First World War, film production had disintegrated rapidly and chaotically into a network of numerous small companies, while a few large firms dominated distribution and production finance. The result was a burgeoning industry, actually one of the fastest-growing French industries in the 1930s.

Figure 6

Admissions and Number of Screens in Britain, 1945-2005

Source: Screen Digest/Screen Finance/British Film Institute and Robertson 2001.

Several European companies attempted to (re-)enter international film distribution, such as Rank in the 1930s and 1950s, the International Film Finance Corporation in the 1960s, Gaumont in the 1970s, PolyGram in the 1970s and again in the 1990s, and Cannon in the 1980s. All of them failed in terms of long-run survival, even if some made profits in certain years. The only postwar entry strategy that was successful in terms of survival was the direct acquisition of a Hollywood studio (Bakker 2000).

The Come-Back of Hollywood

From the mid-1970s onwards, the Hollywood studios revived. The slide in box office revenue was brought to a standstill. Revenues were stabilized by the joint effect of seven different factors. First, the blockbuster movie increased cinema attendance. These movies were heavily marketed and supported by intensive television advertising; Jaws was one of the first of this kind and an enormous success. Second, the U.S. film industry received several kinds of tax breaks from the early 1970s onwards, which were kept in force until the mid-1980s, when Hollywood was in good shape again. Third, coinciding with the blockbuster movie and the tax breaks, film budgets increased substantially, resulting in a higher perceived quality and a larger quality difference from television, drawing more consumers into the cinema. Fourth, the rise of multiplex cinemas, cinemas with several screens, increased consumer choice and the appeal of cinema by offering more variety within a single venue, thus decreasing the difference with television in this respect. Fifth, one could argue that the process of flexible specialization of the California film industry was completed in the early 1970s, making the film industry ready to adapt more flexibly to changes in the market; MGM’s sale of its studio complex in 1970 marked the final ending of an era. Sixth, new income streams from video sales and rentals and cable television increased the revenues a high-quality film could generate. Seventh, European broadcasting deregulation substantially increased the demand for films by television stations.

From the 1990s onwards, further growth was driven by newer markets in Eastern Europe and Asia. Film industries from outside the West also grew substantially, such as those of Japan, Hong Kong, India and China. At the same time, the European Union started a large-scale subsidy program for its audiovisual film industry, with mixed economic effects. By 1997, ten years after the start of the program, a film made in the European Union cost 500,000 euros on average, was seventy to eighty percent state-financed, and grossed 800,000 euros world-wide, reaching an audience of 150,000 persons. In contrast, the average American film cost fifteen million euros, was nearly one hundred percent privately financed, grossed 58 million euros, and reached 10.5 million persons (Dale 1997). This seventy-fold difference in performance is remarkable. Even when measured in gross return on investment or gross margin, the U.S. still had a fivefold and twofold lead over Europe, respectively.[1] In few other industries does such a pronounced difference exist.
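The performance gap can be verified directly from the average-film figures quoted above and in note [1]:

```python
# Average film economics, c. 1997 (cost and gross in millions of euros).
eu = {"cost": 0.5, "gross": 0.8, "audience": 150_000}
us = {"cost": 15.0, "gross": 58.0, "audience": 10_500_000}

def roi(film):
    """Gross return on investment (ignoring interest and distribution fees)."""
    return (film["gross"] - film["cost"]) / film["cost"]

def margin(film):
    """Gross margin: gross profit as a share of gross revenue."""
    return (film["gross"] - film["cost"]) / film["gross"]

print(us["audience"] / eu["audience"])       # 70.0: the seventy-fold audience gap
print(round(roi(us), 2), round(roi(eu), 2))  # about 2.87 vs 0.6: ~fivefold lead
print(round(margin(us), 2), round(margin(eu), 2))  # 0.74 vs 0.38: ~twofold lead
```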

During the 1990s, the film industry moved into television broadcasting. In Europe, broadcasters often co-funded small-scale boutique film production. In the U.S., the Hollywood studios started to merge with broadcasters. In the 1950s they had experienced difficulties in obtaining broadcasting licenses, because their reputation had been compromised by the antitrust actions; they had to wait forty years before they could finally complete what they had intended.[2] Disney, for example, bought the ABC network, Paramount’s owner Viacom bought CBS, and General Electric, owner of NBC, bought Universal. At the same time, the feature film industry was becoming more connected to other entertainment industries, such as videogames, theme parks and musicals. With video game revenues now exceeding films’ box office revenues, it seems likely that feature films will simply be the flagship part of a large entertainment supply system that exploits the intellectual property in feature films in many different formats and markets.

Conclusion

The take-off of the film industry in the early twentieth century was driven mainly by changes in demand. Cinema industrialized entertainment by standardizing it, automating it and making it tradable. After its early years, the industry experienced a quality race that led to increasing industrial concentration. Only later did geographical concentration take place, in Southern California. Cinema made a substantial contribution to productivity and total welfare, especially before television. After television, the industry experienced vertical disintegration, the flexible specialization of production, and a self-reinforcing process of increasing distribution channels and capacity as well as market growth. Cinema, then, was not only the first in a line of media industries that industrialized entertainment, but also the first in a series of international industries that industrialized services. The evolution of the film industry may thus give insight into technological change and its attendant welfare gains in many service industries to come.

Selected Bibliography

Allen, Robert C. Vaudeville and Film, 1895-1915. New York: Arno Press, 1980.

Bächlin, Peter. Der Film als Ware. Basel: Burg-Verlag, 1945.

Bakker, Gerben, “American Dreams: The European Film Industry from Dominance to Decline.” EUI Review (2000): 28-36.

Bakker, Gerben. “Stars and Stories: How Films Became Branded Products.” Enterprise and Society 2, no. 3 (2001a): 461-502.

Bakker, Gerben. Entertainment Industrialised: The Emergence of the International Film Industry, 1890-1940. Ph.D. dissertation, European University Institute, 2001b.

Bakker, Gerben. “Building Knowledge about the Consumer: The Emergence of Market Research in the Motion Picture Industry.” Business History 45, no. 1 (2003): 101-27.

Bakker, Gerben. “At the Origins of Increased Productivity Growth in Services: Productivity, Social Savings and the Consumer Surplus of the Film Industry, 1900-1938.” Working Papers in Economic History, No. 81, Department of Economic History, London School of Economics, 2004a.

Bakker, Gerben. “Selling French Films on Foreign Markets: The International Strategy of a Medium-Sized Film Company.” Enterprise and Society 5 (2004b): 45-76.

Bakker, Gerben. “The Decline and Fall of the European Film Industry: Sunk Costs, Market Size and Market Structure, 1895-1926.” Economic History Review 58, no. 2 (2005): 311-52.

Caves, Richard E. Creative Industries: Contracts between Art and Commerce. Cambridge, MA: Harvard University Press, 2000.

Christopherson, Susan, and Michael Storper. “Flexible Specialization and Regional Agglomerations: The Case of the U.S. Motion Picture Industry.” Annals of the Association of American Geographers 77, no. 1 (1987).

Christopherson, Susan, and Michael Storper. “The Effects of Flexible Specialization on Industrial Politics and the Labor Market: The Motion Picture Industry.” Industrial and Labor Relations Review 42, no. 3 (1989): 331-47.

Gomery, Douglas. The Coming of Sound to the American Cinema: A History of the Transformation of an Industry. Ph.D. dissertation, University of Wisconsin, 1975.

Gomery, Douglas. “The Coming of Television and the ‘Lost’ Motion Picture Audience.” Journal of Film and Video 37, no. 3 (1985): 5-11.

Gomery, Douglas. The Hollywood Studio System. London: MacMillan/British Film Institute, 1986; reprinted 2005.

Kraft, James P. Stage to Studio: Musicians and the Sound Revolution, 1890-1950. Baltimore: Johns Hopkins University Press, 1996.

Krugman, Paul R., and Maurice Obstfeld. International Economics: Theory and Policy, sixth edition. Reading, MA: Addison-Wesley, 2003.

Low, Rachael, and Roger Manvell. The History of the British Film, 1896-1906. London: George Allen & Unwin, 1948.

Michaelis, Anthony R. “The Photographic Arts: Cinematography.” In A History of Technology, Vol. V: The Late Nineteenth Century, c. 1850 to c. 1900, edited by Charles Singer, 734-51. Oxford: Clarendon Press, 1958; reprint 1980.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press, 1990.

Musser, Charles. The Emergence of Cinema: The American Screen to 1907. The History of American Cinema, Vol. I. New York: Scribner, 1990.

Sedgwick, John. “Product Differentiation at the Movies: Hollywood, 1946-65.” Journal of Economic History 63 (2002): 676-705.

Sedgwick, John, and Michael Pokorny. “The Film Business in Britain and the United States during the 1930s.” Economic History Review 57, no. 1 (2005): 79-112.

Sedgwick, John, and Mike Pokorny, editors. An Economic History of Film. London: Routledge, 2004.

Thompson, Kristin. Exporting Entertainment: America in the World Film Market, 1907-1934. London: British Film Institute, 1985.

Vogel, Harold L. Entertainment Industry Economics: A Guide for Financial Analysis. Cambridge: Cambridge University Press, Sixth Edition, 2004.

Gerben Bakker may be contacted at gbakker at essex.ac.uk


[1] Gross return on investment, disregarding interest costs and distribution charges was 60 percent for European vs. 287 percent for U.S. films. Gross margin was 37 percent for European vs. 74 percent for U.S. films. Costs per viewer are 3.33 vs. 1.43 euros, revenues per viewer are 5.30 vs. 5.52 euros.

[2] The author is indebted to Douglas Gomery for this point.

Citation: Bakker, Gerben. “The Economic History of the International Film Industry”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-international-film-industry/

The Euro and Its Antecedents

Jerry Mushin, Victoria University of Wellington

The establishment, in 1999, of the euro was not an isolated event. It was the latest installment in the continuing story of attempts to move towards economic and monetary integration in western Europe. Its relationship with developments since 1972, when the Bretton Woods system of fixed (but adjustable) exchange rates in terms of the United States dollar was collapsing, is of particular interest.

Political moves towards monetary cooperation in western Europe began at the end of the Second World War, but events before 1972 are beyond the scope of this article. Coffey and Presley (1971) have described and analyzed relevant events between 1945 and 1971.

The Snake

In May 1972, at the end of the Bretton Woods (adjustable-peg) system, many countries in western Europe attempted to stabilize their currencies in relation to each other’s currencies. The arrangements known as the Snake in the Tunnel (or, more frequently, as the Snake), which were set up by members of the European Economic Community (EEC), one of the forerunners of the European Union, lasted until 1979. Each member agreed to limit, by market intervention, the fluctuations of its currency’s exchange rate in terms of other members’ currencies. The maximum divergence between the strongest and the weakest currencies was 2.25%. The agreement meant that the French government, for example, would ensure that the value of the French franc would show very limited fluctuation in terms of the Italian lira or the Netherlands guilder, but that there would be no commitment to stabilize its fluctuations against the United States dollar, the Japanese yen, or other currencies outside the agreement.

This was a narrower objective than the aim of the adjustable-peg system, which was intended to stabilize the value of each currency in terms of the values of all other major currencies, but for which the amount of reserves held by governments had proved to be insufficient. It was felt that this limited objective could be achieved with the amount of reserves available to member governments.

The agreement also had a political dimension. Stable exchange rates are likely to encourage international trade, and it was hoped that the new exchange-rate regime would stimulate members’ trade within western Europe at the expense of their trade with the rest of the world. This was one of the objectives of the EEC from its inception.

Exchange rates within the group of currencies were to be managed by market intervention; member governments undertook to buy and sell their currencies in sufficiently large quantities to influence their exchange rates. There was an agreed maximum divergence between the strongest and weakest currencies. Exchange rates of the whole group of currencies fluctuated together against external denominators such as the United States dollar.

The Snake is generally regarded as a failure. Membership was very unstable; the United Kingdom and the Irish Republic withdrew after less than one month, and only the German Federal Republic remained a member for the whole of its existence. Other members withdrew and rejoined, and some did this several times. In addition, the political context of the Snake was not clearly defined. Sweden and Norway participated in the Snake although, at that time, neither of these countries was a member of the EEC and Sweden was not a candidate for admission.

The curious name of the Snake in the Tunnel comes from the appearance of exchange-rate graphs. In terms of a non-member currency, the value of each currency in the system could fluctuate but only within a narrow band that was also fluctuating. The trend of each exchange rate showed some resemblance to a snake inside the narrow confines of a tunnel.

European Monetary System

The Snake came to an end in 1979 and was replaced with the European Monetary System (EMS). The exchange-rate mechanism of the EMS had the same objectives as the Snake, but the procedure for allocating intervention responsibilities among member governments was more precisely specified.

The details of the EMS arrangements have been explained by Adams (1990). Membership of the EMS involved an obligation on each EMS-member government to undertake to stabilize its currency value with respect to the value of a basket of EMS-member currencies called the European Currency Unit (ECU). Each country’s currency had a weight in the ECU that was related to the importance of that country’s trade within the EEC. An autonomous shift in the external value of any EMS-member currency changed the value of the ECU and therefore imposed exchange-rate adjustment obligations on all members of the system. The magnitude of each of these obligations was related to the weight allocated to the currency experiencing the initial disturbance.

The effects of the EMS requirements on each individual member depended upon that country’s weight in the ECU. The system ensured that major members delegated to their smaller partners a greater proportion of their exchange-rate adjustment responsibilities than the less important members imposed on the dominant countries. The explanation for this asymmetry is that a given percentage shift in the external value of the currency of a major member of the EMS (with a high weight in the ECU) had a greater effect on the external value of the ECU than the same percentage disturbance to the currency of a less important member, and therefore imposed greater exchange-rate adjustment responsibilities on the remaining members. While each of the major members of the EMS could delegate to the remaining members a high proportion of its adjustment obligations, the same was not true for the smaller countries in the system. This burden was, however, seen by the smaller nations (including Denmark, Belgium, and the Netherlands) as an acceptable price for exchange-rate stability with their main trading partners (including France and the German Federal Republic).
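This asymmetry can be illustrated with a toy ECU basket. The weights for the German Federal Republic (0.33), France (0.20), and the UK (0.13) are those quoted below for 1979; the residual ‘others’ weight simply lumps the remaining member currencies together for illustration:

```python
# Illustrative ECU basket weights (1979 figures from the text; "others"
# is the remainder of the basket, lumped together).
weights = {"DEM": 0.33, "FRF": 0.20, "GBP": 0.13, "others": 0.34}

def ecu_shift(currency, pct_change, w=weights):
    """Percentage change in the ECU's external value when one member
    currency's external value moves by pct_change percent."""
    return w[currency] * pct_change

# A 1% fall in the Deutsche Mark moves the ECU by 0.33%, while the same
# fall in the pound moves it by only 0.13%: a large member's disturbance
# imposes more adjustment on the rest of the system.
print(ecu_shift("DEM", 1.0), ecu_shift("GBP", 1.0))
```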

The position of the Irish Republic, which joined the EMS in 1979 despite both the very low weight of its currency in the ECU and the absence of the UK, its dominant trading partner, appears to be anomalous. The explanation of this decision is that the Irish Republic was principally concerned about the significant problem of imported inflation derived from the rising prices of its British imports. The decision was based on the assumption that, once the rigid link between the two currencies was broken, inflation in the UK would lead to a fall in the value of the British pound relative to the value of the Irish Republic pound. However, purchasing power is not the only determinant of exchange rates, and the value of the British pound increased sharply in 1979, causing increased imported inflation in the Irish Republic. The appreciation of the British pound was probably caused principally by developments in the UK oil industry and by the monetarist style of UK macroeconomic policy.

Partly because it had different rules for different countries, the EMS had a more stable membership than had the Snake. The standard maximum exchange-rate fluctuation from its reference value that was permitted for each EMS currency was ±2.25%. However, there were wider bands (±6%) for weaker members (Italy from 1979, Spain from 1989, and the UK from 1990) and the Netherlands observed a band of ±1%. The system was also subject to frequent realignments of the parity grid. The Irish Republic joined the EMS in 1979 but the UK did not, thus ending the link between the British pound and the Irish Republic pound. The UK joined in 1990 but, as a result of substantial international capital flows, left in 1992. The bands were increased in width to ±15% in 1992.

Incentives to join the EMS were comparable to those that applied to the Snake and included the desire for stable exchange rates with a country’s principal trading partners and the desire to encourage trade within the group of EMS members rather than with countries in the rest of the world. Cohen (2003), in his analysis of monetary unions, has explained the advantages and disadvantages of trans-national monetary integration.

The UK decided not to participate in the exchange-rate mechanism of the EMS at its inception. It was influenced by the fact that the weight allocated to the British pound (0.13) in the definition of the ECU was insufficient to allow the UK government to delegate to other EMS members a large proportion of the exchange-rate stabilization responsibilities that it would acquire under EMS rules. The outcome of EMS membership for the UK in 1979 would therefore have been in marked contrast to the outcome for France (with an ECU weight of 0.20) and, especially, for the German Federal Republic (with an ECU weight of 0.33). The proportion of the UK’s exports that was, at that time, sold in EMS countries was low relative to the proportion for any other EMS member, and this was reflected in its ECU weight. As explained above, a given percentage shift in the external value of a heavily-weighted currency had a greater effect on the value of the ECU, and hence imposed greater adjustment responsibilities on the remaining members, than the same percentage shift in a lightly-weighted currency.

A second reason for the refusal of the UK to join the EMS in 1979 was that membership would not have led to greater stability of its exchange rates with respect to the currencies of its major trading partners, which were, at that time, outside the EMS group of countries.

An important reason for the British government’s continued refusal, for more than eleven years, to participate in the EMS was its concern about the loss of sovereignty that membership would imply. A floating exchange rate (even a managed floating exchange rate such as the UK government operated from 1972 to 1990) permits an independent monetary policy, but EMS obligations make this impossible. Monetarist views on the efficacy of restraining the rate of inflation by controlling the rate of growth of the money supply were dominant during the early years of the EMS, so the loss of an independent monetary policy was seen as particularly significant.

By 1990, when the UK government decided to join the EMS, a number of economic conditions had changed. It is significant that the proportion of UK exports sold in EMS countries had risen markedly. Following substantial speculative selling of British currency in September 1992, however, the UK withdrew from the EMS. One of the causes of this was the substantial flow of short-term capital from the UK, where interest rates were relatively low, to Germany, which was implementing a very tight monetary policy and hence had very high interest rates. This illustrates that a common monetary policy is one of the necessary conditions for the operation of agreements, such as the EMS, that are intended to limit exchange-rate fluctuations.

The Euro

Despite the partial collapse of the EMS in 1992, a common currency, the euro, was introduced in 1999 by eleven of the fifteen members of the European Union, and a twelfth country joined the euro zone in 2001. From 1999, each national currency in this group had a rigidly fixed exchange rate with the euro (and, hence, with each other). Fixed exchange rates, in national currency units per euro, are listed in Table 1. In 2002, euro notes and coins replaced national currencies in these countries. The intention of the new currency arrangement is to reduce transactions costs and encourage economic integration. The Snake and the EMS can perhaps be regarded as transitional structures leading to the introduction of the euro, which is the single currency of a single integrated economy.

Table 1
Value of the Euro (in terms of national currencies)

Austria 13.7603
Belgium 40.3399
Finland 5.94573
France 6.55957
Germany 1.95583
Greece 340.750
Irish Republic 0.787564
Italy 1936.27
Luxembourg 40.3399
Netherlands 2.20371
Portugal 200.482
Spain 166.386

Source: European Central Bank
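The rates in Table 1 can be used to convert legacy amounts. The sketch below is a hedged illustration: it assumes the usual description of official practice, under which legacy currencies were converted via the euro ("triangulation") rather than through direct bilateral rates:

```python
# Illustrative conversion using the fixed rates in Table 1 (national
# currency units per euro). Legacy amounts convert to euro by division;
# between two legacy currencies, conversion goes through the euro.

RATES_PER_EURO = {
    "FRF": 6.55957,   # French franc
    "DEM": 1.95583,   # German mark
    "ITL": 1936.27,   # Italian lira
}

def to_euro(amount: float, currency: str) -> float:
    return amount / RATES_PER_EURO[currency]

def convert(amount: float, from_ccy: str, to_ccy: str) -> float:
    # Triangulation: legacy -> euro -> legacy.
    return to_euro(amount, from_ccy) * RATES_PER_EURO[to_ccy]

print(round(to_euro(100.0, "DEM"), 2))         # 100 marks in euro
print(round(convert(100.0, "FRF", "DEM"), 2))  # 100 francs in marks
```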

Participation in this innovation was restricted to members of the European Union. Of these, Denmark, Sweden, and the UK chose not to introduce the euro in place of their existing currencies. The countries that adopted the euro in 1999 were Austria, Belgium, Finland, France, Germany, Irish Republic, Italy, Luxembourg, Netherlands, Portugal, and Spain.

Greece, which adopted the euro in 2001, was initially excluded from the new currency arrangement because it had failed to satisfy the conditions described in the Treaty of Maastricht, 1991. The limit specified in the Treaty for each of five variables is listed in Table 2.

Table 2
Conditions for Euro Introduction (Treaty of Maastricht, 1991)

Inflation rate 1.5 percentage points above the average of the three euro countries with the lowest rates
Long-term interest rates 2.0 percentage points above the average of the three euro countries with the lowest rates
Exchange-rate stability fluctuations within the EMS band for at least two years
Budget deficit/GDP ratio 3%
Government debt/GDP ratio 60%

Source: The Economist, May 31, 1997.
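The four numeric criteria in Table 2 amount to a simple joint test, which can be sketched as follows. The reference values for inflation and long-term interest rates depend on the three best-performing members, and the figures in the example are hypothetical:

```python
# Illustrative sketch of the numeric Maastricht convergence criteria in
# Table 2. The fifth criterion (exchange-rate stability within the EMS
# band for at least two years) is qualitative and is omitted here.

def maastricht_ok(inflation, long_rate, deficit_gdp, debt_gdp,
                  ref_inflation, ref_long_rate):
    """True if the four numeric criteria are all satisfied. ref_inflation
    and ref_long_rate are the averages of the three euro countries with
    the lowest rates (hypothetical values in the example below)."""
    return (inflation <= ref_inflation + 1.5
            and long_rate <= ref_long_rate + 2.0
            and deficit_gdp <= 3.0
            and debt_gdp <= 60.0)

# Hypothetical country: 2% inflation, 5% long rates, 2.5% deficit/GDP,
# 55% debt/GDP, against reference averages of 1.5% and 4%.
print(maastricht_ok(2.0, 5.0, 2.5, 55.0, ref_inflation=1.5, ref_long_rate=4.0))
```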

The euro is also used in countries that, before 1999, used currencies that it has since replaced: Andorra (French franc and Spanish peseta), Kosovo (German mark), Monaco (French franc), Montenegro (German mark), San Marino (Italian lira), and Vatican (Italian lira). The euro is also the currency of French Guiana, Guadeloupe, Martinique, Mayotte, Réunion, and St Pierre-Miquelon, which, as départements d’outre-mer, are constitutionally part of France.

The euro was adopted by Slovenia in 2007, by Cyprus (South) and Malta in 2008, by Slovakia in 2009, by Estonia in 2011, by Latvia in 2014, and by Lithuania in 2015. Table 3 shows the exchange rates between the euro and the currencies of these countries.

Table 3
Value of the Euro (in terms of national currencies)

Cyprus (South) 0.585274
Estonia 15.6466
Latvia 0.702804
Lithuania 3.4528
Malta 0.4293
Slovakia 30.126
Slovenia 239.64

Source: European Central Bank

Currencies whose exchange rates were, in 1998, pegged to currencies that have been replaced by the euro have had exchange rates defined in terms of the euro since its inception. The Communauté Financière Africaine (CFA) franc, which is used by Benin, Burkina Faso, Cameroon, Central African Republic, Chad, Congo Republic, Côte d’Ivoire, Equatorial Guinea, Gabon, Guinea-Bissau, Mali, Niger, Sénégal, and Togo was defined in terms of the French franc until 1998, and is now pegged to the euro. The Comptoirs Français du Pacifique (CFP) franc, which is used in the three French territories in the south Pacific (Wallis and Futuna Islands, French Polynesia, and New Caledonia), was also defined in terms of the French franc and is now pegged to the euro. The Comoros franc has similarly moved from a French-franc peg to a euro peg. The Cape Verde escudo, which was pegged to the Portuguese escudo, is also now pegged to the euro. Bosnia-Herzegovina and Bulgaria, which previously operated currency-board arrangements with respect to the German mark, now fix the exchange rates of their currencies in terms of the euro. Albania, Botswana, Croatia, Czech Republic, Denmark, Hungary, Iran, North Macedonia, Poland, Romania, São Tomé-Príncipe, and Serbia also peg their currencies to the euro. Additional countries that peg their currencies to a basket that includes the euro are Algeria, Belarus, China, Fiji, Kuwait, Libya, Morocco, Samoa (Western), Singapore, Syria, Tunisia, Turkey, and Vanuatu. (European Central Bank, 2020).

The group of countries that use the euro or that have linked the values of their currencies to the euro might be called the “greater euro zone.” It is interesting that membership of this group of countries has been determined largely by historical accident. Its members exhibit a marked absence of macroeconomic commonality. Within this bloc, macroeconomic indicators, including the values of GDP and of GDP per person, have a wide range of values. The degree of financial integration with international markets also varies substantially in these countries. Countries that stabilize their exchange rates with respect to a basket of currencies that includes the euro have adjustment systems that are less closely related to its value. This weaker connection means that these countries should not be regarded as part of the greater euro zone.

The establishment of the euro is a remarkable development whose economic effects, especially in the long term, are uncertain. This type of exercise, involving the rigid fixing of certain exchange rates and then the replacement of a group of existing currencies, has rarely been undertaken in the recent past. Other than the introduction of the euro, and the much less significant case of the merger in 1990 of the former People’s Democratic Republic of Yemen (Aden) and the former Arab Republic of Yemen (Sana’a), the monetary union that accompanied the expansion of the German Federal Republic to incorporate the former German Democratic Republic in 1990 is the sole recent example. However, the very distinctive political situation of post-1945 Germany (and its economic consequences) make it difficult to draw relevant conclusions from this experience. The creation of the euro is especially noteworthy at a time when the majority, and an increasing proportion, of countries have chosen floating (or managed floating) exchange rates for their currencies. With the important exception of China, this includes most major economies. This statement should be treated with caution, however, because countries that claim to operate a managed floating exchange rate frequently aim, as described by Calvo and Reinhart (2002), to stabilize their currencies with respect to the United States dollar.

When the euro was established, it replaced national currencies. However, this is not the same as the process known as dollarization, in which a country adopts another country’s currency. For example, the United States dollar is the sole legal tender in Ecuador, El Salvador, Marshall Islands, Micronesia, Palau, Panama, Timor-Leste, and Zimbabwe. It is also the sole legal tender in the overseas possessions of the United States (American Samoa, Guam, Northern Mariana Islands, Puerto Rico, and U.S. Virgin Islands), in two British territories (Turks and Caicos Islands and British Virgin Islands) and in the Caribbean Netherlands. Like the countries that use the euro, a dollarized country cannot operate an independent monetary policy. A euro-using country will, however, have some input into the formation of monetary policy, whereas dollarized countries have none. In addition, unlike euro-using countries, dollarized countries probably receive none of the seigniorage that is derived from the issue of currency.

Prospects for the Euro

The expansion of the greater euro zone, which is likely to continue with the economic integration of the new members of the European Union, and with the probable admission of additional new members, has enhanced the importance of the euro. However, this expansion is unlikely to make the greater euro zone into a major currency bloc comparable to, for example, the Sterling Area even at the time of its collapse in 1972. Mushin (2012) has described the nature and role of the Sterling Area.

Mundell (2003) has predicted that the establishment of the euro will be the model for a new currency bloc in Asia. However, there is no evidence yet of any significant movement in this direction. Eichengreen, Tobin, and Wyplosz (1995) have argued that monetary unification in the emerging industrial economies of Asia is unlikely to occur. A feature of Mundell’s paper is that he assumes that the benefits of joining a currency area almost necessarily exceed the costs, but this remains unproven.

The creation of the euro will have, and might already have had, macroeconomic consequences for the countries that comprise the greater euro zone. Since 1999, the influences on the import prices and export prices of these countries have included the effects of monetary policy run by the European Central Bank (www.ecb.int), a non-elected supra-national institution that is directly accountable neither to individual national governments nor to individual national parliaments, and developments, including capital flows, in world financial markets. Neither of these can be relied upon to ensure stable prices at an acceptable level in price-taking economies. The consequences of the introduction of the euro might be severe in some parts of the greater euro zone, especially in the low-GDP economies. For example, unemployment might increase if exports cease to have competitive prices. Further, domestic macroeconomic policy is not independent of exchange-rate policy. One of the costs of joining a monetary union is the loss of monetary-policy independence.

Data on Exchange-rate Policies

The best source of data on exchange-rate policies is probably the International Monetary Fund (IMF) (see www.imf.org). Almost all countries of significant size are members of the IMF; notable exceptions are Cuba (since 1964), the Republic of China (Taiwan) (since 1981), and the Democratic People’s Republic of Korea (North Korea). The most significant IMF publications that contain exchange-rate data are International Financial Statistics and the Annual Report on Exchange Arrangements and Exchange Restrictions.

Since 2009, the IMF has allocated each country’s exchange-rate policy to one of ten categories. Unfortunately, the definitions of these mean that the members of the greater euro zone are not easy to identify. In this taxonomy, the exchange-rate systems of countries that are part of a monetary union are classified according to the arrangements that govern the joint currency. The exchange-rate policies of the eleven countries that introduced the euro in 1999, and of Cyprus (South), Estonia, Greece, Latvia, Lithuania, Malta, Slovakia, and Slovenia, are classified as “Free floating.” Kosovo, Montenegro, and San Marino have “No separate legal tender.” Bosnia-Herzegovina and Bulgaria have “Currency boards.” Cape Verde, Comoros, Denmark, Fiji, Kuwait, Libya, São Tomé and Príncipe, and the fourteen African countries that use the CFA franc have “Conventional pegs.” Botswana has a “Crawling peg.” Croatia, North Macedonia, and Morocco have a “Stabilized arrangement.” Romania and Singapore have a “Crawl-like arrangement.” Andorra, Monaco, Vatican, and the three territories in the south Pacific that use the CFP franc are not IMF members. Anderson, Habermeier, Kokenyne, and Veyrune (2009) explain and discuss the definitions of these categories and compare them to the definitions that were used by the International Monetary Fund until 2010. Information on the exchange-rate policy of each of its members is published by the International Monetary Fund (2020).

Other Monetary Unions in Europe

The establishment of the Snake, the EMS, and the euro have affected some of the other monetary unions in Europe. The monetary unions of Belgium-Luxembourg, of France-Monaco, and of Italy-Vatican-San Marino predate the Snake, survived within the EMS, and have now been absorbed into the euro zone. Unchanged by the introduction of the euro are the UK-Gibraltar-Guernsey-Isle of Man-Jersey monetary union (which is the remnant of the Sterling Area that also includes Falkland Islands and St. Helena), the Switzerland-Liechtenstein monetary union, and the use of the Turkish lira in Northern Cyprus.

The relationship between the currencies of the Irish Republic (previously the Irish Free State) and the UK is an interesting case study of the interaction of political and economic forces on the development of macroeconomic (including exchange-rate) policy. Despite the non-participation of the UK, the Irish Republic was a foundation member of the EMS. This ended the link between the British pound and the Irish Republic pound (also called the punt) that had existed since the establishment of the Irish currency following the partition of Ireland (1922), so that a step towards one monetary union destroyed another. Until 1979, the Irish Republic pound had a rigidly fixed exchange rate with the British pound, and each of the two banking systems cleared the other’s checks as if denominated in its own currency. These very close financial links meant that every policy decision of monetary importance in the UK coincided with an identical change in the Irish Republic, including the currency reforms of 1939 (US-dollar peg), 1949 (devaluation), 1967 (devaluation), 1971 (decimalization), 1972 (floating exchange rate), and 1972 (brief membership of the Snake). From 1979 until 1999, when the Irish Republic adopted the euro, there was a floating exchange rate between the British pound and the Irish Republic pound. South of the Irish border, the dominant political mood in the 1920s was the need to develop a distinct non-British national identity, but there were perceived to be good economic grounds for retaining a very close link with the British pound. By 1979, although political rhetoric still referred to the desire for a united Ireland, the economic situation had changed, and the decision to join the EMS without the membership of the UK meant that, for the first time, different currencies were used on each side of the Irish border. In both of these cases, political objectives were tempered by economic pressures.

Effects of the Global Financial Crisis

One of the ways of analyzing the significance of a new system is to observe the effects of circumstances that have not been predicted. The global financial crisis (GFC) that began in 2007 provides such an opportunity. In the UK and in the Irish Republic, whose business cycles are usually comparable, the problems that followed the GFC were similar in nature and in severity. In both countries, major banks (and therefore their depositors) were rescued from collapse by their governments. However, the macroeconomic outcomes have been different. The increase in the unemployment rate has been much greater in the Irish Republic than in the UK. The explanation for this is that an independent monetary policy is not possible in the Irish Republic, which is part of the euro zone. The UK, which does not use the euro, responded to the GFC by operating a very loose monetary policy (with a very low discount rate and large-scale “quantitative easing”). The effects of this have been compounded by depreciation of the British pound. Although labor is mobile between the UK and the Irish Republic, partly because of the common language, the unemployment rate in the Irish Republic remains high because both its real exchange rate and its real interest rates are high. The effect of the GFC is that the Irish Republic now has an overvalued currency, which has made an inefficient economy more inefficient. Simultaneously, the more efficient economies in the euro zone (and some countries outside the euro zone, including the UK, whose currencies have depreciated) now have undervalued currencies, which have encouraged their economies to expand. This illustrates one of the consequences of membership of the euro zone. Had the GFC been predicted, the estimation of the economic benefits for the Irish Republic (and for Greece, Italy, Portugal, Spain, and other countries) would probably have been different.

The political consequences for the more efficient countries in the euro zone, including Germany, might also be significant. At great cost, these countries have provided financial assistance to the weaker members of the euro zone, especially Greece.

Conclusion

The future role of the euro is uncertain. Especially in view of the British decision to withdraw from the European Union, even its survival is not guaranteed. It is clear, however, that the outcome will depend on both political and economic forces.

References:

Adams, J. J. “The Exchange-Rate Mechanism in the European Monetary System.” Bank of England Quarterly Bulletin 30, no. 4 (1990): 479-81.

Anderson, Harald, Karl Habermeier, Annamaria Kokenyne, and Romain Veyrune. Revised System for the Classification of Exchange Rate Arrangements. Washington, DC: International Monetary Fund, 2009.

Calvo, Guillermo and Carmen Reinhart. “Fear of Floating.” Quarterly Journal of Economics 117, no. 2 (2002): 379-408.

Coffey, Peter and John Presley. European Monetary Integration. London: Macmillan Press, 1971.

Cohen, Benjamin. “Monetary Unions.” In Encyclopedia of Economic and Business History, edited by Robert Whaples, 2003. http://eh.net/encyclopedia/monetary-unions/

Eichengreen, Barry, James Tobin, and Charles Wyplosz. “Two Cases for Sand in the Wheels of International Finance.” Economic Journal 105, no. 1 (1995): 162-72.

European Central Bank. The International Role of the Euro. 2020.

International Monetary Fund. Annual Report of the Executive Board, 2020.

Mundell, Robert. “Prospects for an Asian Currency Area.” Journal of Asian Economics 14, no. 1 (2003): 1-10.

Mushin, Jerry. “The Sterling Area.” In Encyclopedia of Economic and Business History, edited by Robert Whaples, 2012. http://eh.net/encyclopedia/the-sterling-area/

Endnote:

Jerry Mushin can be reached at jerry.mushin1@outlook.com. This article includes material from some of the author’s publications:

Mushin, Jerry. “A Simulation of the European Monetary System.” Computer Education 35 (1980): 8-19.

Mushin, Jerry. “The Irish Pound: Recent Developments.” Atlantic Economic Journal 8, no. 4 (1980): 100-10.

Mushin, Jerry. “Exchange-Rate Adjustment in a Multi-Currency Monetary System.” Simulation 36, no. 5 (1981): 157-63.

Mushin, Jerry. “Non-Symmetry in the European Monetary System.” British Review of Economic Issues 8, no. 2 (1986): 85-89.

Mushin, Jerry. “Exchange-Rate Stability and the Euro.” New Zealand Banker 11, no. 4 (1999): 27-32.

Mushin, Jerry. “A Taxonomy of Fixed Exchange Rates.” Australian Stock Exchange Perspective 7, no. 2 (2001): 28-32.

Mushin, Jerry. “Exchange-Rate Policy and the Efficacy of Aggregate Demand Management.” The Business Economist 33, no. 2 (2002): 16-24.

Mushin, Jerry. Output and the Role of Money. New York, London and Singapore: World Scientific Publishing Company, 2002.

Mushin, Jerry. “The Deceptive Resilience of Fixed Exchange Rates.” Journal of Economics, Business and Law 6, no. 1 (2004): 1-27.

Mushin, Jerry. “The Uncertain Prospect of Asian Monetary Integration.” International Economics and Finance Journal 1, no. 1 (2006): 89-94.

Mushin, Jerry. “Increasing Stability in the Mix of Exchange Rate Policies.” Studies in Business and Economics 14, no. 1 (2008): 17-30.

Mushin, Jerry. “Predicting Monetary Unions.” International Journal of Economic Research 5, no. 1 (2008): 27-33.

Mushin, Jerry. Interest Rates, Prices, and the Economy. Jodhpur: Scientific Publishers (India), 2009.

Mushin, Jerry. “Infrequently Asked Questions on the Monetary Union of the Countries of the Gulf Cooperation Council.” Economics and Business Journal: Inquiries and Perspectives 3, no. 1 (2010): 1-12.

Mushin, Jerry. “Common Currencies: Economic and Political Causes and Consequences.” The Business Economist 42, no. 2 (2011): 19-26.

Mushin, Jerry. “Exchange Rates, Monetary Aggregates, and Inflation.” Bulletin of Political Economy 7, no. 1 (2013): 69-88.

Mushin, Jerry. “Monetary-Policy Targets and Exchange Rates.” Economics and Business Journal: Inquiries and Perspectives 5, no. 1 (2015): 1-12.

Mushin, Jerry and Uduakobong Edy-Ewoh. Output, Prices and Interest Rates. Ilishan-Remo: Babcock University Press, 2019.

Citation: Mushin, Jerry. “The Euro and Its Antecedents”. EH.Net Encyclopedia, edited by Robert Whaples. December 4, 2020. URL http://eh.net/encyclopedia/the-euro-and-its-antecedents/

The U.S. Economy in the 1920s

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, is a most interesting era. The decade of the 1930s marks the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing and developing role of government in economic activity in the 1930s, the decade of the 1920s often tends to be overlooked. This is unfortunate, because the 1920s were a period of vigorous, vital economic growth. They mark the first truly modern decade, and dramatic economic developments occurred in those years. The automobile was rapidly adopted, to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century, their growth had been tied to rail or trolley access and was limited to the largest cities. The flexibility of car access changed this, and the growth of suburbs began to accelerate. The demands of trucks and cars led to rapid growth in the construction of all-weather surfaced roads to facilitate their movement. The rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers, and the United States moved to a dominant position in international trade and global business. These developments make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of the overall production in the economy, GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929 according to the most widely used estimates. (Historical Statistics of the United States, or HSUS, 1976) Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth and twentieth century standards these were relatively rapid rates of real economic growth and they would be considered rapid even today.
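To see what these annual rates imply over the decade, they can be compounded over the nine years separating the 1920 and 1929 levels. This is a back-of-the-envelope calculation based only on the rates quoted above, not a figure from the source:

```python
# Compounding the growth rates quoted above (4.2% per year for real GNP,
# 2.7% per year for real GNP per capita) over the nine years 1920-1929.

def cumulative_growth(annual_rate_pct: float, years: int) -> float:
    """Total percentage increase implied by a constant annual growth rate."""
    return ((1 + annual_rate_pct / 100) ** years - 1) * 100

print(f"Real GNP: about {cumulative_growth(4.2, 9):.0f}% higher in 1929 than in 1920")
print(f"Real GNP per capita: about {cumulative_growth(2.7, 9):.0f}% higher")
```

At these rates, real GNP was roughly 45 percent higher at the end of the decade than at its start, which is what makes the growth "rapid even today."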

There were several interruptions to this growth. In mid-1920 the American economy began to contract, and the 1920-1921 depression lasted about a year, but a rapid recovery reestablished full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There were very mild recessions in 1924 and 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shutdown of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share had been declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild, but the contraction accelerated after the crash of the stock market at the end of October. Real total GNP fell 10.2 percent from 1929 to 1930, while real GNP per capita fell 11.5 percent.


Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that, consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up by the increasing foreign demand during the First World War, and as European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as of many other inputs that firms employ. In the scramble to beat price increases during 1919, firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression, firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce: the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s work (1988) suggests that prices changed much more easily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable, though they were more likely to fall than to rise.

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now argue this. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing spread widely through the population. New products, and new processes for producing them, drove this growth. The widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined (Whaples 2001). New products and services created new markets, such as the markets for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other laborsaving household appliances. This electricity was distributed by the growing electric utilities, and the stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends, but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and to a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s (Figure 4). There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration, and urban families have tended to have fewer children than rural families because urban children do not augment family incomes through unpaid work as rural children do. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares: New England and the Plains. For New England this was a continuation of a long-term trend. The population share of the Plains region had been rising through the nineteenth century, but in the interwar period its agricultural base, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population were the Southwest and, particularly, the Far West; California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rate, or the fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent, as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.
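
As an aside, these participation figures are internally consistent. Treating the overall participation rate as a population-weighted average of the male and female rates (a simplifying assumption for illustration; the function name below is ours, not from the source), the male share of the population aged 14 and over implied by the quoted numbers can be backed out:

```python
def implied_male_share(overall, male, female):
    # Solve overall = s * male + (1 - s) * female for the male share s,
    # with all participation rates given in percent.
    return (overall - female) / (male - female)

# Participation rates quoted in the text (percent)
share_1920 = implied_male_share(57.7, 89.6, 24.3)
share_1930 = implied_male_share(56.3, 86.8, 25.1)
print(round(share_1920, 3), round(share_1930, 3))  # prints 0.511 0.506
```

Both years imply a male share of roughly 51 percent, so the overall decline in participation is indeed driven by the fall in the male rate rather than by a shift in population composition.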

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression in the summer of 1929 brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. For these industries male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers in the twenties. Unskilled males received on average 35 percent more than females during the twenties. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent between 1923 and 1929. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties. The real daily wage rate for farmworkers, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-21 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to the workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government sponsored unemployment insurance, minimum wage proposals, maximum hours proposals and social security programs. As Lloyd Ulman (1961) points out, the AFL, under Gompers’s direction, differentiated between statutes on the basis of whether they would or would not aid collective bargaining. After Gompers’s death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. There were only a few unions that were closer to today’s industrial unions, where far fewer (or no) specialized skills were required, making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers (UMW).

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)
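
The relative severity of the farm-price collapse can be made explicit with a quick back-of-the-envelope calculation (illustrative only; the function name is ours, not the source’s). Deflating the 53.3 percent fall in farm product prices by the 11.3 percent fall in the consumer price index implies that farm prices fell roughly 47 percent relative to consumer prices:

```python
def relative_fall(series_pct_fall, deflator_pct_fall):
    # Change in a price series relative to a deflator,
    # with both declines given as percent falls.
    return (1 - series_pct_fall / 100) / (1 - deflator_pct_fall / 100) - 1

# Farm product price index deflated by the consumer price index, 1920-21
rel = relative_fall(53.3, 11.3)
print(f"{rel:.1%}")  # prints -47.4%
```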

Real average net income per farm fell over 72.6 percent between 1920 and 1921 and, though rising in the twenties, never recovered to the levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than being indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages, rather than on first mortgages as they were in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-21 and 1929-33 depressions from those that arose because agriculture was declining relative to the other sectors. Very slow growth in the demand for basic agricultural products, together with significant increases in the productivity of labor, land, and machinery in agricultural production, combined with much more rapid growth in the nonagricultural sectors of the economy to require a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to move voluntarily from one sector to another through income differentials, suggesting that even in the absence of the effects of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.

Table 3 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per acre yields). Per acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements In Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers’ complaints. In later years farmers dubbed the 1910-14 period as agriculture’s “golden years” and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal government intervention to limit production and raise farm prices, such intervention did not come until Roosevelt took office. Rather, there was a reliance upon the traditional method to aid injured groups—tariffs, and upon the “sanctioning and promotion of cooperative marketing associations.” In 1921 Congress attempted to control the grain exchanges and compel merchants and stockyards to charge “reasonable rates,” with the Packers and Stockyards Act and the Grain Futures Act. In 1922 Congress passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed “McNary-Haugen” bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish “fair” exchange values (based on the 1910-14 period) for each product and to maintain them through tariffs and a private corporation that would be chartered by the government and could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration passed an Agricultural Marketing Act in 1929 and the Hawley-Smoot Tariff in 1930. The 1929 act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, in manufacturing there was a rapid rate of growth of productivity during the twenties. The rise of real wages due to immigration restrictions and the slower growth of the resident population spurred this. Transportation improvements and communications advances were also responsible. These developments brought about differential growth in the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the northeast was the first area to really develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base and the other regions began to create manufacturing bases in the last half of the nineteenth century resulting in a relative westward and southern shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions’ shares of manufacturing employment fell while all of the other regions—excluding the West North Central region—gained. There was considerable variation in the growth of the industries and shifts in their ranking during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in nonreproducible natural resources. The large American market was knit together as one large market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods were associated with the “high throughput” of fuel and raw materials relative to labor and capital inputs. As a result the United States became the dominant industrial force in the world in the 1920s and 1930s. According to Wright, after World War II “the process by which the United States became a unified ‘economy’ in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the ‘factor endowment’ of individual countries.” (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, first introduced by Frederick W. Taylor, were introduced on a wider scale.

One of the important forces contributing to mass production and increased productivity was the transition to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities. In 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site; by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas and from raw unprocessed energy in the forms of coal and waterpower to processed energy in the form of internal combustion fuel and electricity increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and the output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade prior to the 1920s, but it increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as, particularly, developments in energy and transportation accelerated.

Table: Average Annual Rates of Labor Productivity and Capital Productivity Growth

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was that it served as an indirect “lever to increase production.” There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an “inestimable boon” to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was the technological developments in new machines and processes, of which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose when mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction. These replaced the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, the continuing advances were the “cumulative process resulting from a vast number of successive small changes.” Because of these continuing advances in the quality of the tires and in the manufacturing of tires, between 1910 and 1930 “tire costs per thousand miles of driving fell from $9.39 to $0.65.”
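
Nelson’s tire-cost figure implies a remarkably fast rate of improvement. Assuming a constant annual rate of decline over the twenty years from 1910 to 1930 (a back-of-the-envelope assumption for illustration, not a claim from the source), the cost per thousand miles fell about 12.5 percent per year:

```python
# Tire cost per thousand miles of driving, as quoted from Nelson (1987)
cost_1910, cost_1930, years = 9.39, 0.65, 20

# Constant annual rate r such that cost_1910 * (1 + r)**years == cost_1930
annual_rate = (cost_1930 / cost_1910) ** (1 / years) - 1
print(f"{annual_rate:.1%}")  # prints -12.5%
```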

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizman fermentation process. In a similar fashion, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power and paper manufacturing processes,” especially the fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms even when becoming vertically integrated. When this began to change, the management of the large industrial firms had to change accordingly.

Because of these changes in the size and structure of the firm during the First World War, E. I. du Pont de Nemours and Company was led to adopt a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized, divisional structure that had served it so well was not suited to this strategy, and its poor business performance led its executives to develop between 1919 and 1921 a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that those divisions essentially remained separate companies and there was little coordination between the operating divisions. A financial crisis at the end of 1920 ousted W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing the management of the firms. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a public policy, the concern with monopolies diminished in the 1920s even though firms were growing larger. But the growing size of businesses was one of the convenient scapegoats upon which to blame the Great Depression.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials where that supply was not plentiful and was dispersed and firms “feared that raw materials might become controlled by competitors or independent suppliers.” (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network proved inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases they had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases “the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors.” Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)

The steel industry was already highly concentrated by 1920 as U.S. Steel had around 50 percent of the market. But U.S. Steel’s market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U.S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with “the assumption that competition was a dominant strategy for steel manufacturers” until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series included many of the smaller mergers. The series constructed by Carl Eis (1969) only includes the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called “merger for oligopoly” rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the merger activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department, but after the Great Depression began, the New Dealers initially exempted business from the antitrust laws and attempted to cartelize industries under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. The two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors getting together to agree on stable and higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms who are fixing prices. Vertical price-fixing involves firms setting the prices of intermediate products purchased at different stages of production. It also tends to eliminate substitutes and makes the demand less elastic.
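The claim that less elastic demand lets price-fixing firms raise revenue by raising price follows from a standard textbook relation (an illustrative derivation, not part of the original article): writing revenue as R = pQ(p) and the price elasticity of demand as ε = (dQ/dp)(p/Q),

```latex
% Standard elasticity-revenue relation (illustrative, not from the source)
\frac{dR}{dp} = Q + p\frac{dQ}{dp} = Q\,(1 + \varepsilon)
```

Since ε is negative for ordinary demand, dR/dp > 0 exactly when |ε| < 1. The less elastic the demand that a successful pricing conspiracy creates, the more a price increase raises the conspirators’ revenue.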

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of bathroom bowls. The evidence in the case suggested that the firms were not very successful at doing so, but the court found that they were guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent setting in that it prohibited explicit pricing conspiracies per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U. S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than “raw” energy, such as that available from coal and water. The evolution of energy sources for lighting continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.
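As a rough check on how large the difference between these two growth rates was over their respective spans (an illustrative calculation, not from the source), the rates can simply be compounded:

```python
# Compound the cited average annual growth rates of output per labor-hour
# in manufacturing over each period's length (illustrative arithmetic only).
growth_1899_1919 = 1.012 ** 20   # 1.2% per year for the 20 years 1899-1919
growth_1919_1937 = 1.035 ** 18   # 3.5% per year for the 18 years 1919-1937

print(round(growth_1899_1919, 2))  # 1.27 -> about +27% over 1899-1919
print(round(growth_1919_1937, 2))  # 1.86 -> about +86% over 1919-1937
```

Despite the shorter span, the later period's cumulative productivity gain was roughly three times as large.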

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and it declined while natural gas and LP (or liquefied petroleum) gas were relatively unimportant. These changes, especially the declining coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply due to productivity advances in coal production and in the decreasing demand for coal. The demand fell as industries began turning from coal to electricity and because of productivity advances in the use of coal to create energy in steel, railroads, and electric utilities. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by the steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally just reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but the sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many to their home region. The local alternatives were few, and ignorance of alternatives outside the Appalachian rural areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California field in 1921. New discoveries in Powell, Texas, and Smackover, Arkansas, further increased the supply of petroleum in 1921. New supply increases occurred in 1926 to 1928 with petroleum strikes in Seminole, Oklahoma, and Hendricks, Texas. The supply of oil increased sharply in 1930 to 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down real oil prices and the prices of petroleum derivatives, and the growing production capacity led to a generally declining trend in petroleum prices. McMillin and Parker (1994) argue that supply shocks generated by these new discoveries were a factor in the business cycles during the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.

The petroleum companies also developed new ways to distribute gasoline to motorists that made it more convenient to purchase gasoline. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans and the purchaser used a funnel to pour the gasoline from the can into the car. Then “filling stations” appeared, which specialized in filling cars’ tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or contract with independent stations to exclusively distribute their gasoline. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to “service stations” to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations by the firms and producers, governments began stepping in. Led by Texas, which created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws were usually termed prorationing laws and were quotas designed to limit each well’s output to some fraction of its potential. The purpose was as much to stabilize and reduce production and raise prices as anything else, although generally such laws were passed under the guise of conservation. Although the federal government supported such attempts, not until the New Deal were federal laws passed to assist this.

Electricity

By the mid 1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and greater distance over which electricity could be transmitted more than offset the necessity for transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and the development of an efficient, lower cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing the electrical appliances for homes and an increasing number of machines based on electricity began to appear in industry. The problem of lower cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions that were charged with setting rates so that the utilities could receive a “fair return” on their investments. Disagreements over what constituted a “fair return” and the calculation of the rate base led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction cost basis. Because of the difficulty and cost in making these calculations, rates tended to be in the hands of the electric utilities that, it has been suggested, did not lower rates adequately to reflect the rising productivity and lowered costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not this increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.
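This pricing pattern matches the standard markup rule for a price-setting seller (a textbook relation offered for illustration, not stated in the source): for a customer class i with demand elasticity ε_i and marginal cost MC, profit maximization implies

```latex
% Inverse-elasticity (Lerner) markup rule, illustrative
\frac{p_i - MC}{p_i} = \frac{1}{\lvert \varepsilon_i \rvert}
```

so customer classes with less elastic demand (smaller |ε_i|) are charged proportionally higher markups per kilowatt-hour, which is the segmentation visible in Figure 16.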

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford’s introduction of the moving assembly production line in 1913, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government’s policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but doing so fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 presented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was allowed to prescribe exact rates that were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to have the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for the railroad when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.
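The recapture clause’s arithmetic can be sketched with hypothetical numbers (the dollar figures below are assumptions chosen for illustration, not from the source):

```python
# Hypothetical illustration of the Transportation Act of 1920 recapture
# clause; the fair value and earnings figures are assumed, not historical.
fair_value = 100_000_000   # assessed "fair value" of a railroad's property
earnings = 7_000_000       # the road's actual earnings for the year

threshold = fair_value * 6 / 100       # returns above 6% were "excess"
excess = max(0.0, earnings - threshold)
own_fund = excess / 2    # held in contingency for this railroad's hard times
loan_fund = excess / 2   # ICC fund for loans to other railroads in need

print(excess, own_fund, loan_fund)  # 1000000.0 500000.0 500000.0
```

A road earning 7 percent on a $100 million valuation would thus surrender $1 million, split evenly between its own contingency fund and the ICC’s loan fund.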

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act was directed to encourage railroad consolidation, but little came of this in the 1920s. In order to facilitate its control of the railroads, the ICC was given two additional powers. The first was the control over the issuance or purchase of securities by railroads, and the second was the power to control changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic exited from the railroads much more quickly. As the network of all weather surfaced roads increased, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways that had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of the regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were incorporated in the Greyhound Buslines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating the road system nor pay taxes upon it. Beginning with the Federal Road-Aid Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties were commonly 60 percent of the total funds allocated, and these primarily came from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) However, the use of gasoline taxes moved closer to the goal of users paying the costs of the highways. Neither did the trucks have to pay for all of the highway construction because automobiles jointly used the highways. Highways had to be constructed in more costly ways in order to accommodate the larger and heavier trucks. Ideally the gasoline taxes collected from trucks should have covered the extra (or marginal) costs of highway construction incurred because of the truck traffic. Gasoline taxes tended to do this.

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. But, by 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroads’ market, and, though the regulators failed to understand this at the time, the railroads’ monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ right-of-ways, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and information transfers sped, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications had given way to the telephone, as long distance telephone calls between the east and west coasts became possible in 1915 with the new electronic amplifiers. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating systems, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring—wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor in this was the dramatic decline in farm incomes in the early twenties. The second reason was a change in the farmers’ environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920 automobiles, surfaced roads, movies, and the radio loosened the isolation and the telephone was no longer as crucial.

Ottmar Mergenthaler’s development of the linotype machine in the late nineteenth century had irrevocably altered printing and publishing. This machine, which quickly created a line of soft, lead-based metal type that could be printed, melted down, and then recast as a new line of type, dramatically lowered the costs of printing. Previously, all type had to be painstakingly set by hand, with individual cast letter matrices picked out from compartments in drawers to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each individual letter matrix placed back into its compartment in its drawer for use in the next printing job. Because the process was so laborious, newspapers often were not published every day and did not contain many pages, and most cities supported many newspapers. In contrast, the linotype used a keyboard upon which the operator typed the words in one of the lines of a news column. Matrices for each letter dropped down from a magazine of matrices as the operator typed each letter and were assembled into a line of type with automatic spacers to justify the line (fill out the column width). When the line was completed, the machine mechanically cast the line of matrices into a line of lead type. The line of lead type was ejected into a tray, and the letter matrices were mechanically returned to the magazine while the operator continued typing the next line of the news story. The first Mergenthaler linotype machine was installed at the New York Tribune in 1886. The linotype dramatically lowered the costs of printing newspapers (as well as books and magazines). Prior to the linotype, a typical newspaper averaged no more than 11 pages, and many were published only a few times a week. The linotype machine allowed newspapers to grow in size, and they began to be published more regularly. A process of consolidation of daily and Sunday newspapers began that continues to this day.
Many have termed the Mergenthaler linotype machine the most significant printing invention since the introduction of movable type 400 years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast of Westinghouse’s KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations on the radio dial and deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that under the Radio Law of 1912, Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared, and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927 creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up being affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act, except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and stores and companies operating radio stations wrote this off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened up a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system in which individuals could purchase time to broadcast a message transmitted to other stations in the toll network using AT&T’s long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers, Authors and Publishers (ASCAP) began demanding a performance fee any time ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking for annual fees from such stations based on the station’s power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T’s creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience, and in return the stations received a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the rate of adoption in the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that also would accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became “department stores of finance.” Banks opened up installment (or personal) loan departments, expanded their mortgage lending, opened up trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers’ control and reduced lending during the 1920-21 depression, began relying more on retained earnings and stock and bond issues to finance investment and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks, because national regulators and most state regulators prohibited branching. However, in the twenties a few states began to permit limited branching; California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members. The high bank failure rate of the 1920s has usually been explained by “overbanking,” or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there had been overbanking, on average each bank would have been underutilized, resulting in intense competition for deposits, higher costs, and lower earnings. Free entry of any bank that met the minimum requirements then in force could have produced such overbanking. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable had those changes not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the twenties. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis of the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning asset portfolios and gained expertise in the securities markets, larger ones established investment departments and by the late twenties were an important force in the underwriting of new securities issued by nonfinancial corporations.

The securities market exhibited perhaps the most dramatic growth of the noncommercial bank financial intermediaries during the twenties, but others also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929; by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets. Mutual savings banks were concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts’ interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties, especially common and preferred stock, and in the trading of existing shares of those securities. (Figure 24) The late twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities, the automobile manufacturers produced over four and a half million new cars in 1929, and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. “Playing the market” seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3 and then slid to 320 on October 21. In the following week the stock market “crashed,” with a record number of shares being traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year at 198, 183 points less than the September 3 peak.
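Expressed in percentage terms, the index levels just cited imply a decline of roughly 40 percent from the peak to Black Tuesday and 48 percent from the peak to the November 13 low. A quick sketch of the arithmetic (the function name is ours, used only for exposition):

```python
# Dow-Jones index levels cited in the text
peak_sep3 = 381      # September 3, 1929 peak
black_tuesday = 230  # close of Tuesday, October 29
low_nov13 = 198      # November 13, 1929 low

def pct_drop(start, end):
    """Percentage decline from start to end."""
    return 100 * (start - end) / start

print(f"Peak to Black Tuesday: {pct_drop(peak_sep3, black_tuesday):.1f}%")  # 39.6%
print(f"Peak to Nov. 13 low:   {pct_drop(peak_sep3, low_nov13):.1f}%")      # 48.0%
```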

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of the rising stock prices. Stock pools, which were not illegal until the 1934 Securities and Exchange Act, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool, a group of speculators would pool large amounts of their funds and then begin purchasing large amounts of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would “churn” the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rising, would decide to purchase the stock. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock’s price usually fell quickly, bringing large losses for the unsuspecting outside investors while reaping large gains for the pool insiders.

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades: as had been the case for decades prior, the usual margin requirement was 10 to 15 percent of the purchase price, and apparently more often around 10 percent. Brokers, recognizing the problems with margin lending in the rapidly changing market, began raising margin requirements in late 1928, well before the crash and at the urging of a special New York Clearinghouse committee, and by the fall of 1929 margin requirements were among the highest in the history of the New York Stock Exchange. One brokerage house required the following of its clients: securities with a selling price below $10 could be purchased only for cash; securities with a selling price of $10 to $20 required a 50 percent margin; securities of $20 to $30, a margin of 40 percent; and securities priced above $30, a margin of 30 percent of the purchase price. In the first half of 1929, margin requirements on customers’ accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, investors who could not provide additional margin saw their brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
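The tiered schedule quoted for that brokerage house can be written as a small lookup. A minimal sketch, assuming (since the source does not say) that each boundary price of exactly $10, $20, or $30 falls into the higher-priced, lower-margin tier; the function name is ours:

```python
def margin_requirement(price):
    """Margin fraction required under the brokerage-house schedule
    described in the text (cash only below $10)."""
    if price < 10:
        return 1.00   # cash only: the full purchase price
    elif price < 20:
        return 0.50   # 50 percent margin
    elif price < 30:
        return 0.40   # 40 percent margin
    else:
        return 0.30   # 30 percent margin

# Cash a customer would need to buy 100 shares of a $45 stock:
shares, price = 100, 45
cash_needed = shares * price * margin_requirement(price)
print(cash_needed)  # 1350.0
```

The remaining 70 percent of the purchase price would be borrowed from the broker, which is why falling prices triggered the margin calls described above.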

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day, bringing on the third call for more margin in that week. On Black Thursday, October 24, prices initially fell sharply but rallied somewhat in the afternoon, so that the net loss was only 7 points; the volume of thirteen million shares, however, set a NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow Jones index fell 38 points on a volume of nine million shares, three million of them in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans, and this accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow Jones index fell 30 points on a record volume of nearly sixteen and a half million shares exchanged. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to rise slowly, and by April of 1930 they had increased 96 points from the low of November 13, to a level “only” 87 points less than the peak of September 3, 1929. From that point, stock prices resumed their depressing decline until the low point was reached in the summer of 1932.

There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level, creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, “The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before.” But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929 stock prices were where they should have been and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash: the market broke each time news arrived of advances in congressional consideration of the Hawley-Smoot tariff. However, the virtually perfect foresight that Wanniski’s explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found that their relative constancy did not suggest that stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative and lower than post-Second World War dividend growth rates.
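Sirkin’s exercise rests on the standard constant-growth (Gordon) valuation formula: if a stock’s price satisfies P = D/(r − g), the dividend growth rate the market is implicitly assuming can be backed out as g = r − D/P. The sketch below uses purely illustrative numbers, not Sirkin’s data; the function name and parameter values are our assumptions for exposition:

```python
def implied_growth(price, dividend, discount_rate):
    """Dividend growth rate implied by the Gordon model:
    P = D / (r - g)  =>  g = r - D / P."""
    return discount_rate - dividend / price

# Illustrative only: a $100 stock paying a $3 annual dividend,
# discounted at 8 percent, implies a 5 percent dividend growth rate.
g = implied_growth(price=100, dividend=3, discount_rate=0.08)
print(f"{g:.1%}")  # 5.0%
```

The higher the price relative to the dividend, the larger the implied g; Sirkin’s point was that even 1929 prices implied growth rates below what dividends actually achieved after the Second World War.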

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by that person’s subjective expectations of each firm’s future earnings and dividends and of the future prices of shares of each firm’s stock. Because of this element of subjectivity, not only can we never accurately know those values, but we can also never know how they varied among individuals. The market price we observe is the end result of all the actions of the market participants, and the observed price may differ from the price almost all of the participants expected.

In fact, there are some indications that there were differences in 1928 and 1929. Yields on common stocks were somewhat lower in 1928 and 1929. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers’ loans through 1927, the rates on brokers’ loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow-Jones index and related this to the DJI. Through 1927 the two track closely, but in 1928 and 1929 the index of stock prices grows much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andrei Shleifer (1991). They examined closed-end mutual funds, a type of fund whose shares trade among investors rather than being redeemed by the fund, and whose underlying portfolio value, and thus fundamental value, is directly measurable. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929, the Standard and Poor’s composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929: a sharp divergence between the growth of stock prices and dividends; increasing premiums on call and time brokers’ loans in 1928 and 1929; rising margin requirements; and rising stock market volatility in the wake of the 1929 stock market crash.
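The closed-end fund test works because a fund’s net asset value (NAV), the market value of the securities it holds, is observable, so any premium of the fund’s share price over NAV directly measures investor over-optimism. A minimal sketch, with numbers that are purely illustrative (chosen only to echo the 30 percent figure):

```python
def premium(share_price, nav_per_share):
    """Premium (or discount, if negative) of a closed-end fund's
    share price over its net asset value per share."""
    return (share_price - nav_per_share) / nav_per_share

# Illustrative: shares trading at $26 against $20 of underlying assets
# per share imply a 30 percent premium over fundamental value.
print(f"{premium(26, 20):.0%}")  # 30%
```

In normal times closed-end funds tend to trade near, or even below, NAV, which is why large positive premiums in 1929 are read as evidence of a bubble.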

There are several reasons for the creation of such a bubble. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that “While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends.” As a result, investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated, and they were more likely to be caught up in the euphoria of the boom and bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks, and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith. (Bishop, 1987) In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As these investors in the experiments began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and ultimately stocks began trading at their fundamental dividend values. These bubbles and crashes occurred repeatedly, leading Smith to conjecture that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals governed the overall movements. And the end of the long bull market was almost certainly governed by this. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of 1929 and was clearly evident by August. By the middle of August, the rise in stock prices had slowed as better information on the contraction was received. There were repeated statements by leading figures that stocks were “overpriced,” and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased, and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, stock prices fell further. The late October crash made the decline occur much more rapidly, and the margin purchases and consequent forced selling of many of those stocks contributed to a more severe fall in prices. The recovery of stock prices from November 13 into April of 1930 suggests that stock prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the ending of the 1928-29 stock market bubble. The stock market crash did make the downturn become more severe beginning in November 1929. It reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty helping to bring on the contraction (Flacco and Parker, 1992). Though stock market prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll and by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either decide to buy or not buy at the fixed price. The policy dramatically lowered transactions costs for both the retailer and the purchaser. Prices were reduced with a smaller markup over the wholesale price, and a large sales volume and a quicker turnover of the store’s inventory generated profits.

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both located in Chicago due to its central location in the nation’s rail network and both had benefited from the advent of Rural Free Delivery in 1896 and low cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who was able to convince Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these stores in the central business district (CBD), Wood located many on major streets closer to the residential areas. These moves by Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions and in 1912 their first “cash-and-carry” full-range grocery was opened. Soon they were opening 50 of these stores each week and by the 1920s A&P had 14,000 stores. They then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

Shopping centers, another retailing innovation that began in the twenties, were not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as ownership and use of the car expanded, population began to move out of the crowded central cities toward the more open suburbs. When General Robert Wood set Sears off on its development of urban stores, he located them not in the central business district (CBD) but as free-standing stores on major arteries away from the CBD, with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it did work was sufficient to promote stability and growth in international transactions.
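The equilibrating mechanism described above can be sketched in a toy simulation. All parameters below are hypothetical, chosen only to make the textbook price-specie-flow logic visible; this is not an estimate of any historical economy:

```python
# Stylized price-specie-flow mechanism under a gold standard.
# Country A starts with a higher price level, so it runs a trade deficit;
# gold flows out, its money stock and prices fall, and the gap closes.
# All numbers are illustrative only.

def simulate(periods=50):
    gold_a, gold_b = 100.0, 100.0   # monetary gold stocks of countries A and B
    k_a, k_b = 0.011, 0.010          # quantity-theory shortcut: price level
                                     # is proportional to the gold stock
    for _ in range(periods):
        price_a, price_b = k_a * gold_a, k_b * gold_b
        # A's deficit is proportional to the price gap: higher prices at
        # home make exports dearer and imports cheaper.
        deficit_a = 5.0 * (price_a - price_b)
        # Gold settles the deficit, flowing from deficit A to surplus B,
        # deflating A and inflating B, which narrows the gap each period.
        gold_a -= deficit_a
        gold_b += deficit_a
    return k_a * gold_a, k_b * gold_b, gold_a, gold_b

p_a, p_b, g_a, g_b = simulate()
```

After enough periods the two price levels converge and the gold flow dies out: the deficit country ends with less gold and lower prices, which is exactly the adjustment the conventional gold-standard model relies on (and which capital flows often substituted for in practice).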

After the First World War it was argued that there was a “shortage” of fluid monetary gold to use for the gold standard, so some method of “economizing” on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped domestic circulation of gold. Second, the “gold exchange” system was created. Most countries held their international reserves in the form of U.S. dollars or British pounds and used dollars or pounds in international transactions, as long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed exchange rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to an inflow of gold that France did not allow to expand its money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal actions. First, the United States had to run an import surplus or, on net, export capital out of the United States to provide a pool of dollars overseas. Germany then had either to run an export surplus or to import American capital so as to build up dollar reserves—that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries that then shipped them back to the United States as payment on their U.S. debts. If these conditions did not occur (and note that the “new” gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated), disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act of 1921, which raised tariffs, particularly on manufactured goods. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff’s protection, and its rates on many items were extremely high, ranging from 60 to 100 percent ad valorem (that is, as a percent of the price of the item). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those of the more famous (or “infamous”) Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover’s election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930, and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929, the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on, finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain and hence allowed those countries to pay their debts to the United States came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938) and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though arguments now exist as to whether the booming stock market in the United States was to blame for this, it had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

By 1920 the United States held the largest share of the world’s monetary gold, about 40 percent. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which it lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as the ongoing boom in the stock market. It began to raise the discount rate to stop these outflows. Gold was also entering the United States so that foreigners could obtain dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem the loss of gold. In country after country these deflationary strategies began contracting economic activity, and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not been developed, this does not mean that during the twenties no concept of the government’s role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government. The first was that federal spending on public works could be an important force in reducing unemployment; the second was that such spending could help stabilize private investment. (Smiley and Keehn, 1995) Both concepts fit the ideas held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of “deliberate social engineering.”

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low and topped out at 7 percent on taxable income in excess of $750,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With American entry into the First World War, rates were dramatically increased, and to obtain additional revenue in 1918, marginal rates were increased again. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The tax rates had been extended downward so that more than 30 percent of the nation’s income recipients were subject to income taxes by 1918. However, through the purchase of tax-exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high-income taxpayers and their share of total taxes paid declined as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919, but the surtax rates, which made the income tax highly progressive, were retained. (Smiley and Keehn, 1995)

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how the rates should be cut. Democrats and Progressive Republicans argued for rate cuts targeted at lower-income taxpayers while maintaining most of the steep progressivity of the tax rates. (Smiley and Keehn, 1995) They believed that remedies could be found to change the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent.

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)
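A small worked example shows how marginal rates of this kind translate into tax owed. The top rates below (73 percent before the cuts, 25 percent for 1925) come from the text; the bracket thresholds and lower rates are simplified, hypothetical stand-ins, not the actual statutory schedules:

```python
# Illustrative marginal-rate computation. Only the top rates (73% and 25%)
# are taken from the text; the bracket structure is a simplified stand-in
# for the actual normal-tax-plus-surtax schedules of the period.

def tax_owed(income, brackets):
    """brackets: list of (upper_limit, marginal_rate), ascending."""
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        # Only the income falling inside this bracket is taxed at its rate.
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# Hypothetical three-bracket schedules around the era's top rates.
schedule_1921 = [(4_000, 0.04), (100_000, 0.20), (float("inf"), 0.73)]
schedule_1925 = [(4_000, 0.015), (100_000, 0.10), (float("inf"), 0.25)]

high_income = 500_000
before = tax_owed(high_income, schedule_1921)  # about $311,000
after = tax_owed(high_income, schedule_1925)   # about $110,000
```

Under these illustrative schedules, tax on a $500,000 income falls by roughly two-thirds, which helps explain why reported high incomes (and the top group’s share of taxes paid) could rise as rates fell: sheltering income in tax-exempt securities became much less attractive.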

Each year in the twenties the federal government generated a surplus, in some years as much as 1 percent of GNP. The surpluses were used to retire federal debt, which declined by 25 percent between 1920 and 1930. Contrary to simple macroeconomic models that argue a federal government budget surplus must be contractionary and tend to stop an economy from reaching full employment, the American economy operated at or close to full employment throughout the twenties and saw significant economic growth. In this case, the surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than pulled out as currency and held in a vault.

Monetary Policies

In 1913 fear of the “money trust” and its monopoly power led Congress to create the Federal Reserve System, consisting of 12 district banks. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the 12 district banks; it was composed of five presidential appointees plus the secretary of the treasury and the comptroller of the currency. All national banks had to become members of the Federal Reserve System (the Fed), and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be on deposit in the district bank. Commercial banks were allowed to rediscount commercial paper and given Federal Reserve currency. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations that involved the purchasing and selling of federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check clearinghouse for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary roles were to act as a lender of last resort to prevent banking panics and to serve as a check-clearing mechanism for the nation’s banks. The Federal Reserve Board and the governors of the district banks were to exercise these functions jointly, but the division of responsibilities was not clear, and a struggle for power ensued, mainly between the Federal Reserve Board and the New York Federal Reserve Bank, which was led through 1928 by J. P. Morgan’s protégé, Benjamin Strong. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the Gold Standard and the Real Bills Doctrine. The Gold Standard was supposed to be quasi-automatic, with an effective limit to the quantity of money. However, the Real Bills Doctrine (which required that all loans be made on short-term, self-liquidating commercial paper) had no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to lead to the required “elasticity” of the stock of money to “accommodate” the needs of industry and business. Actually the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and discounted banks’ customer loans used to purchase war bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918; in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and discount the Victory bonds necessary to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus and supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920 contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action was past and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were rapidly falling, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922, the rate had been lowered yet again to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and increased the discount rate from 4 percent to 4.5 percent because it believed the recovery was too rapid. However, by the fall of 1923 there were some signs of a business slump. McMillin and Parker (1994) argue that this contraction, as well as the 1927 contraction, was related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter the slump, and between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in three separate steps. In addition to moderating the mild business slump, the expansionary policy was also intended to reduce American interest rates relative to British interest rates. This reversed the gold flow back toward Great Britain, allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States and especially in Southeastern Florida. Land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments and in 1926 sold some securities to gently slow the real estate and stock market boom. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed but the stock market boom continued.

The American economy entered into another mild business recession in the fall of 1926 that lasted until the fall of 1927. One of the factors in this was Henry Ford’s shutdown of all of his factories to change over from the Model T to the Model A, which left his employees without a job and without income for over six months. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began, the Fed had already taken steps to counteract the business slump and reduce the gold inflow: in early 1927 it reduced discount rates and made large securities purchases. One result of this was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the gold exports went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to maintain the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the loss of gold and wanted to bring that to an end. To do this they sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district Bank policy changes. However, problems began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it, and the other district banks, were unwilling to do; it insisted that discount rates had to be increased. The Federal Reserve Board countered that this general policy change would slow down economic activity across the board rather than being specifically targeted at stock market speculation. The result was that little was done for a year: rates were not raised, but no open market purchases were undertaken either. Rates were finally raised to 6 percent in August of 1929, but by that time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced discount rates to 4.5 percent. In January it again decreased discount rates and began a series of decreases until the rate reached 2.5 percent at the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the Southeast in November and December of 1930, and in its wake the public’s holding of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not forecast the contraction from 1929 to 1933. Rather it was the depression of the 1930s and the Second World War that interrupted the economic growth begun in the 1920s and resumed after the Second World War. Just as the construction of skyscrapers that began in the 1920s resumed in the 1950s, so did real economic growth and progress resume. In retrospect we can see that the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, are echoed in the 1990s in the effects of the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.

References

Adams, Walter, ed. The Structure of American Industry, 5th ed. New York: Macmillan Publishing Co., 1977.

Aldcroft, Derek H. From Versailles to Wall Street, 1919-1929. Berkeley: The University of California Press, 1977.

Allen, Frederick Lewis. Only Yesterday. New York: Harper and Sons, 1931.

Alston, Lee J. “Farm Foreclosures in the United States During the Interwar Period.” The Journal of Economic History 43, no. 4 (1983): 885-904.

Alston, Lee J., Wayne A. Grove, and David C. Wheelock. “Why Do Banks Fail? Evidence from the 1920s.” Explorations in Economic History 31 (1994): 409-431.

Ankli, Robert. “Horses vs. Tractors on the Corn Belt.” Agricultural History 54 (1980): 134-148.

Ankli, Robert and Alan L. Olmstead. “The Adoption of the Gasoline Tractor in California.” Agricultural History 55 (1981): 213-230.

Appel, Joseph H. The Business Biography of John Wanamaker, Founder and Builder. New York: The Macmillan Co., 1930.

Baker, Jonathan B. “Identifying Cartel Pricing Under Uncertainty: The U.S. Steel Industry, 1933-1939.” The Journal of Law and Economics 32 (1989): S47-S76.

Barger, E. L, et al. Tractors and Their Power Units. New York: John Wiley and Sons, 1952.

Barnouw, Eric. A Tower in Babel: A History of Broadcasting in the United States: Vol. I—to 1933. New York: Oxford University Press, 1966.

Barnouw, Eric. The Golden Web: A History of Broadcasting in the United States: Vol. II—1933 to 1953. New York: Oxford University Press, 1968.

Beasley, Norman. Main Street Merchant: The Story of the J. C. Penney Company. New York: Whittlesey House, 1948.

Beckman, Theodore N. and Herman C. Nolen. The Chain Store Problem: A Critical Analysis. New York: McGraw-Hill Book Co., 1938.

Benson, Susan Porter. Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940. Urbana, IL: University of Illinois Press, 1986.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin Co., 1960.

Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929-1939. New York: Cambridge University Press, 1987.

Bishop, Jerry E. “Stock Market Experiment Suggests Inevitability of Booms and Busts.” The Wall Street Journal, 17 November, 1987.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics. Washington: USGPO, 1943.

Bogue, Allan G. “Changes in Mechanical and Plant Technology: The Corn Belt, 1910-1940.” The Journal of Economic History 43 (1983): 1-26.

Breit, William and Kenneth Elzinga. The Antitrust Casebook: Milestones in Economic Regulation, 2d ed. Chicago: The Dryden Press, 1989.

Bright, Arthur A., Jr. The Electric Lamp Industry: Technological Change and Economic Development from 1800 to 1947. New York: Macmillan, 1947.

Brody, David. Labor in Crisis: The Steel Strike. Philadelphia: J. B. Lippincott Co., 1965.

Brooks, John. Telephone: The First Hundred Years. New York: Harper and Row, 1975.

Brown, D. Clayton. Electricity for Rural America: The Fight for the REA. Westport, CT: The Greenwood Press, 1980.

Brown, William A., Jr. The International Gold Standard Reinterpreted, 1914-1934, 2 vols. New York: National Bureau of Economic Research, 1940.

Brunner, Karl and Allen Meltzer. “What Did We Learn from the Monetary Experience of the United States in the Great Depression?” Canadian Journal of Economics 1 (1968): 334-48.

Bryant, Keith L., Jr., and Henry C. Dethloff. A History of American Business. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1983.

Bucklin, Louis P. Competition and Evolution in the Distributive Trades. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Bullock, Roy J. “The Early History of the Great Atlantic & Pacific Tea Company,” Harvard Business Review 11 (1933): 289-93.

Bullock, Roy J. “A History of the Great Atlantic & Pacific Tea Company Since 1878.” Harvard Business Review 12 (1933): 59-69.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression. Edited by Mark Wheeler. Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1998.

Chandler, Alfred D., Jr. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, MA: The M.I.T. Press, 1962.

Chandler, Alfred D., Jr. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: The Belknap Press of Harvard University Press, 1977.

Chandler, Alfred D., Jr. Giant Enterprise: Ford, General Motors, and the American Automobile Industry. New York: Harcourt, Brace, and World, 1964.

Chester, Giraud, and Garnet R. Garrison. Radio and Television: An Introduction. New York: Appleton-Century Crofts, 1950.

Clewett, Richard C. “Mass Marketing of Consumers’ Goods.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Cochran, Thomas C. 200 Years of American Business. New York: Delta Books, 1977.

Cohen, Yehoshua S. Diffusion of an Innovation in an Urban System: The Spread of Planned Regional Shopping Centers in the United States, 1949-1968. Chicago: The University of Chicago, Department of Geography, Research Paper No. 140, 1972.

Daley, Robert. An American Saga: Juan Trippe and His Pan American Empire. New York: Random House, 1980.

De Long, J. Bradford, and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” The Journal of Economic History 51 (September 1991): 675-700.

Clarke, Sally. “New Deal Regulation and the Revolution in American Farm Productivity: A Case Study of the Diffusion of the Tractor in the Corn Belt, 1920-1940.” The Journal of Economic History 51, no. 1 (1991): 105-115.

Cohen, Avi. “Technological Change as Historical Process: The Case of the U.S. Pulp and Paper Industry, 1915-1940.” The Journal of Economic History 44 (1984): 775-79.

Davies, R. E. G. A History of the World’s Airlines. London: Oxford University Press, 1964.

Dearing, Charles L., and Wilfred Owen. National Transportation Policy. Washington: The Brookings Institution, 1949.

Degen, Robert A. The American Monetary System: A Concise Survey of Its Evolution Since 1896. Lexington, MA: Lexington Books, 1987.

Devine, Warren D., Jr. “From Shafts to Wires: Historical Perspectives on Electrification.” The Journal of Economic History 43 (1983): 347-372.

Eckert, Ross D., and George W. Hilton. “The Jitneys.” The Journal of Law and Economics 15 (October 1972): 293-326.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Methuen, 1985.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eis, Carl. “The 1919-1930 Merger Movement in American Industry.” The Journal of Law and Economics XII (1969): 267-96.

Emmet, Boris, and John E. Jeuck. Catalogues and Counters: A History of Sears Roebuck and Company. Chicago: University of Chicago Press, 1950.

Fearon, Peter. War, Prosperity, & Depression: The U.S. Economy, 1917-1945. Lawrence, KS: University Press of Kansas, 1987.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” The American Economic Review 93 (2003): 1399-1413.

Fischer, Claude. “The Revolution in Rural Telephony, 1900-1920.” Journal of Social History 21 (1987): 221-38.

Fischer, Claude. “Technology’s Retreat: The Decline of Rural Telephony in the United States, 1920-1940.” Social Science History 11 (Fall 1987): 295-327.

Fisher, Irving. The Stock Market Crash—and After. New York: Macmillan, 1930.

French, Michael J. “Structural Change and Competition in the United States Tire Industry, 1920-1937.” Business History Review 60 (1986): 28-54.

French, Michael J. The U.S. Tire Industry. Boston: Twayne Publishers, 1991.

Fricke, Ernest B. “The New Deal and the Modernization of Small Business: The McCreary Tire and Rubber Company, 1930-1940.” Business History Review 56 (1982): 559-76.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Galbraith, John Kenneth. The Great Crash. Boston: Houghton Mifflin, 1954.

Garnet, Robert W. The Telephone Enterprise: The Evolution of the Bell System’s Horizontal Structure, 1876-1900. Baltimore: The Johns Hopkins University Press, 1985.

Gideonse, Max. “Foreign Trade, Investments, and Commercial Policy.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Giedion, Sigfried. Mechanization Takes Command. New York: Oxford University Press, 1948.

Gordon, Robert Aaron. Economic Instability and Growth: The American Record. New York: Harper and Row, 1974.

Gray, Roy Burton. Development of the Agricultural Tractor in the United States, 2 vols. Washington, D. C.: USGPO, 1954.

Gunderson, Gerald. An Economic History of America. New York: McGraw-Hill, 1976.

Hadwiger, Don F., and Clay Cochran. “Rural Telephones in the United States.” Agricultural History 58 (July 1984): 221-38.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 19 (1987): 145-169.

Hamilton, James D. “The Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6 (1988): 67-89.

Hayek, Friedrich A. Prices and Production. New York: Augustus M. Kelley, reprint of the 1931 edition.

Hayford, Marc and Carl A. Pasurka, Jr. “The Political Economy of the Fordney-McCumber and Smoot-Hawley Tariff Acts.” Explorations in Economic History 29 (1992): 30-50.

Hendrickson, Robert. The Grand Emporiums: The Illustrated History of America’s Great Department Stores. Briarcliff Manor, NY: Stein and Day, 1979.

Herbst, Anthony F., and Joseph S. K. Wu. “Some Evidence of Subsidization of the U.S. Trucking Industry, 1900-1920.” The Journal of Economic History 33 (June 1973): 417-33.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Hilton, George W., and John Due. The Electric Interurban Railways in America. Stanford: Stanford University Press, 1960.

Hoffman, Elizabeth and Gary D. Libecap. “Institutional Choice and the Development of U.S. Agricultural Policies in the 1920s.” The Journal of Economic History 51 (1991): 397-412.

Holt, Charles F. “Who Benefited from the Prosperity of the Twenties?” Explorations in Economic History 14 (1977): 277-289.

Hower, Ralph W. History of Macy’s of New York, 1858-1919. Cambridge, MA: Harvard University Press, 1946.

Hubbard, R. Glenn, ed. Financial Markets and Financial Crises. Chicago: University of Chicago Press, 1991.

Hunter, Louis C. “Industry in the Twentieth Century.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Jerome, Harry. Mechanization in Industry. New York: National Bureau of Economic Research, 1934.

Johnson, H. Thomas. “Postwar Optimism and the Rural Financial Crisis.” Explorations in Economic History 11, no. 2 (1973-1974): 173-192.

Jones, Fred R. and William H. Aldred. Farm Power and Tractors, 5th ed. New York: McGraw-Hill, 1979.

Keller, Robert. “Factor Income Distribution in the United States During the 20’s: A Reexamination of Fact and Theory.” The Journal of Economic History 33 (1973): 252-95.

Kelly, Charles J., Jr. The Sky’s the Limit: The History of the Airlines. New York: Coward-McCann, 1963.

Kindleberger, Charles. The World in Depression, 1929-1939. Berkeley: The University of California Press, 1973.

Klebaner, Benjamin J. Commercial Banking in the United States: A History. New York: W. W. Norton and Co., 1974.

Kuznets, Simon. Shares of Upper Income Groups in Income and Savings. New York: NBER, 1953.

Lebhar, Godfrey M. Chain Stores in America, 1859-1962. New York: Chain Store Publishing Corp., 1963.

Lewis, Cleona. America’s Stake in International Investments. Washington: The Brookings Institution, 1938.

Livesay, Harold C. and Patrick G. Porter. “Vertical Integration in American Manufacturing, 1899-1948.” The Journal of Economic History 29 (1969): 494-500.

Lipartito, Kenneth. The Bell System and Regional Business: The Telephone in the South, 1877-1920. Baltimore: The Johns Hopkins University Press, 1989.

Liu, Tung, Gary J. Santoni, and Courteney C. Stone. “In Search of Stock Market Bubbles: A Comment on Rappoport and White.” The Journal of Economic History 55 (1995): 647-654.

Lorant, John. “Technological Change in American Manufacturing During the 1920s.” The Journal of Economic History 33 (1973): 243-47.

McDonald, Forrest. Insull. Chicago: University of Chicago Press, 1962.

Marburg, Theodore. “Domestic Trade and Marketing.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Markham, Jesse. “Survey of the Evidence and Findings on Mergers.” In Business Concentration and Price Policy, National Bureau of Economic Research. Princeton: Princeton University Press, 1955.

Markin, Rom J. The Supermarket: An Analysis of Growth, Development, and Change. Rev. ed. Pullman, WA: Washington State University Press, 1968.

McCraw, Thomas K. TVA and the Power Fight, 1933-1937. Philadelphia: J. B. Lippincott, 1971.

McCraw, Thomas K. and Forest Reinhardt. “Losing to Win: U.S. Steel’s Pricing, Investment Decisions, and Market Share, 1901-1938.” The Journal of Economic History 49 (1989): 592-620.

McMillin, W. Douglas and Randall E. Parker. “An Empirical Analysis of Oil Price Shocks in the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

McNair, Malcolm P., and Eleanor G. May. The Evolution of Retail Institutions in the United States. Cambridge, MA: The Marketing Science Institute, 1976.

Mercer, Lloyd J. “Comment on Papers by Scheiber, Keller, and Raup.” The Journal of Economic History 33 (1973): 291-95.

Mintz, Ilse. Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920-1930. New York: National Bureau of Economic Research, 1951.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises. Edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Morris, Lloyd. Not So Long Ago. New York: Random House, 1949.

Mosco, Vincent. Broadcasting in the United States: Innovative Challenge and Organizational Control. Norwood, NJ: Ablex Publishing Corp., 1979.

Moulton, Harold G. et al. The American Transportation Problem. Washington: The Brookings Institution, 1933.

Mueller, John. “Lessons of the Tax-Cuts of Yesteryear.” The Wall Street Journal, March 5, 1981.

Musoke, Moses S. “Mechanizing Cotton Production in the American South: The Tractor, 1915-1960.” Explorations in Economic History 18 (1981): 347-75.

Nelson, Daniel. “Mass Production and the U.S. Tire Industry.” The Journal of Economic History 48 (1987): 329-40.

Nelson, Ralph L. Merger Movements in American Industry, 1895-1956. Princeton: Princeton University Press, 1959.

Niemi, Albert W., Jr., U.S. Economic History, 2nd ed. Chicago: Rand McNally Publishing Co., 1980.

Norton, Hugh S. Modern Transportation Economics. Columbus, OH: Charles E. Merrill Books, Inc., 1963.

Nystrom, Paul H. Economics of Retailing, vol. 1, 3rd ed. New York: The Ronald Press Co., 1930.

Oshima, Harry T. “The Growth of U.S. Factor Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century.” The Journal of Economic History 44 (1984): 161-70.

Parker, Randall and Paul Flacco. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30 (1992): 154-171.

Parker, William N. “Agriculture.” In American Economic Growth: An Economist’s History of the United States, edited by Lance E. Davis, Richard A. Easterlin, William N. Parker, et al. New York: Harper and Row, 1972.

Passer, Harold C. The Electrical Manufacturers, 1875-1900. Cambridge: Harvard University Press, 1953.

Peak, Hugh S., and Ellen F. Peak. Supermarket Merchandising and Management. Englewood Cliffs, NJ: Prentice-Hall, 1977.

Pilgrim, John. “The Upper Turning Point of 1920: A Reappraisal.” Explorations in Economic History 11 (1974): 271-98.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: The M.I.T. Press, 1968.

Rae, John B. The American Automobile Industry. Boston: Twayne Publishers, 1984.

Rappoport, Peter and Eugene N. White. “Was the Crash of 1929 Expected?” American Economic Review 84 (1994): 271-281.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” The Journal of Economic History 53 (1993): 549-574.

Resseguie, Harry E. “Alexander Turney Stewart and the Development of the Department Store, 1823-1876,” Business History Review 39 (1965): 301-22.

Rezneck, Samuel. “Mass Production and the Use of Energy.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Rockwell, Llewellyn H., Jr., ed. The Gold Standard: An Austrian Perspective. Lexington, MA: Lexington Books, 1985.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” The Journal of Political Economy 94 (1986): 1-37.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” The Journal of Economic History 46 (1986): 341-352.

Romer, Christina. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (1988): 91-115.

Romer, Christina, and Jeffrey A. Miron. “A New Monthly Index of Industrial Production, 1884-1940.” The Journal of Economic History 50 (1990): 321-337.

Romer, Christina. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105 (1990): 597-625.

Romer, Christina. “Remeasuring Business Cycles.” The Journal of Economic History 54 (1994): 573-609.

Roose, Kenneth D. “The Production Ceiling and the Turning Point of 1920.” American Economic Review 48 (1958): 348-56.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: The Greenwood Press, 1980.

Rosen, Philip T. “Government, Business, and Technology in the 1920s: The Emergence of American Broadcasting.” In American Business History: Case Studies. Edited by Henry C. Dethloff and C. Joseph Pusateri. Arlington Heights, IL: Harlan Davidson, 1987.

Rothbard, Murray N. America’s Great Depression. Kansas City: Sheed and Ward, 1963.

Sampson, Roy J., and Martin T. Ferris. Domestic Transportation: Practice, Theory, and Policy, 4th ed. Boston: Houghton Mifflin Co., 1979.

Samuelson, Paul and Everett E. Hagen. After the War—1918-1920. Washington: National Resources Planning Board, 1943.

Santoni, Gary and Gerald P. Dwyer, Jr. “The Great Bull Markets, 1924-1929 and 1982-1987: Speculative Bubbles or Economic Fundamentals?” Federal Reserve Bank of St. Louis Review 69 (1987): 16-29.

Santoni, Gary, and Gerald P. Dwyer, Jr. “Bubbles vs. Fundamentals: New Evidence from the Great Bull Markets.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Scherer, Frederick M. and David Ross. Industrial Market Structure and Economic Performance, 3d ed. Boston: Houghton Mifflin, 1990.

Schlebecker, John T. Whereby We Thrive: A History of American Farming, 1607-1972. Ames, IA: The Iowa State University Press, 1975.

Shepherd, James. “The Development of New Wheat Varieties in the Pacific Northwest.” Agricultural History 54 (1980): 52-63.

Sirkin, Gerald. “The Stock Market of 1929 Revisited: A Note.” Business History Review 49 (Fall 1975): 233-41.

Smiley, Gene. The American Economy in the Twentieth Century. Cincinnati: South-Western Publishing Co., 1994.

Smiley, Gene. “New Estimates of Income Shares During the 1920s.” In Calvin Coolidge and the Coolidge Era: Essays on the History of the 1920s, edited by John Earl Haynes, 215-232. Washington, D.C.: Library of Congress, 1998.

Smiley, Gene. “A Note on New Estimates of the Distribution of Income in the 1920s.” The Journal of Economic History 60, no. 4 (2000): 1120-1128.

Smiley, Gene. Rethinking the Great Depression: A New View of Its Causes and Consequences. Chicago: Ivan R. Dee, 2002.

Smiley, Gene, and Richard H. Keehn. “Margin Purchases, Brokers’ Loans and the Bull Market of the Twenties.” Business and Economic History. 2d series. 17 (1988): 129-42.

Smiley, Gene and Richard H. Keehn. “Federal Personal Income Tax Policy in the 1920s.” The Journal of Economic History 55, no. 2 (1995): 285-303.

Sobel, Robert. The Entrepreneurs: Explorations Within the American Business Tradition. New York: Weybright and Talley, 1974.

Soule, George. Prosperity Decade: From War to Depression: 1917-1929. New York: Holt, Rinehart, and Winston, 1947.

Stein, Herbert. The Fiscal Revolution in America, revised ed. Washington, D.C.: AEI Press, 1990.

Stigler, George J. “Monopoly and Oligopoly by Merger.” American Economic Review, 40 (May 1950): 23-34.

Sumner, Scott. “The Role of the International Gold Standard in Commodity Price Deflation: Evidence from the 1929 Stock Market Crash.” Explorations in Economic History 29 (1992): 290-317.

Swanson, Joseph and Samuel Williamson. “Estimates of National Product and Income for the United States Economy, 1919-1941.” Explorations in Economic History 10, no. 1 (1972): 53-73.

Temin, Peter. “The Beginning of the Depression in Germany.” Economic History Review 24 (May 1971): 240-48.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W. W. Norton, 1976.

Temin, Peter. The Fall of the Bell System. New York: Cambridge University Press, 1987.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: The M.I.T. Press, 1989.

Thomas, Gordon, and Max Morgan-Witts. The Day the Bubble Burst. Garden City, NY: Doubleday, 1979.

Ulman, Lloyd. “The Development of Trades and Labor Unions,” In American Economic History, edited by Seymour E. Harris, chapter 14. New York: McGraw-Hill Book Co., 1961.

Ulman, Lloyd. The Rise of the National Trade Union. Cambridge, MA: Harvard University Press, 1955.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970, 2 volumes. Washington, D.C.: USGPO, 1976.

Walsh, Margaret. Making Connections: The Long Distance Bus Industry in the U.S.A. Burlington, VT: Ashgate, 2000.

Wanniski, Jude. The Way the World Works. New York: Simon and Schuster, 1978.

Weiss, Leonard W. Case Studies in American Industry, 3d ed. New York: John Wiley & Sons, 1980.

Whaples, Robert. “Hours of Work in U.S. History.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL: http://www.eh.net/encyclopedia/contents/whaples.work.hours.us.php

Whatley, Warren. “Southern Agrarian Labor Contracts as Impediments to Cotton Mechanization.” The Journal of Economic History 47 (1987): 45-70.

Wheelock, David C. and Subal C. Kumbhakar. “The Slack Banker Dances: Deposit Insurance and Risk-Taking in the Banking Collapse of the 1920s.” Explorations in Economic History 31 (1994): 357-375.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” The Journal of Economic Perspectives 4 (Spring 1990): 67-83.

White, Eugene N., ed. Crises and Panics: The Lessons of History. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “When the Ticker Ran Late: The Stock Market Boom and Crash of 1929.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “Stock Market Bubbles? A Reply.” The Journal of Economic History 55 (1995): 655-665.

White, William J. “Economic History of Tractors in the United States.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL: http://www.eh.net/encyclopedia/contents/white.tractors.history.us.php

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922-1933: A Reinterpretation.” Journal of Political Economy 73 (1965): 325-43.

Wicker, Elmus. “A Reconsideration of Federal Reserve Policy During the 1920-1921 Depression.” The Journal of Economic History 26 (1966): 223-38.

Wicker, Elmus. Federal Reserve Monetary Policy, 1917-1933. New York: Random House, 1966.

Wigmore, Barrie A. The Crash and Its Aftermath. Westport, CT: Greenwood Press, 1985.

Williams, Raburn McFetridge. The Politics of Boom and Bust in Twentieth-Century America. Minneapolis/St. Paul: West Publishing Co., 1994.

Williamson, Harold F., et al. The American Petroleum Industry: The Age of Energy, 1899-1959. Evanston, IL: Northwestern University Press, 1963.

Wilson, Thomas. Fluctuations in Income and Employment, 3d ed. New York: Pitman Publishing, 1948.

Wilson, Jack W., Richard E. Sylla, and Charles P. Jones. “Financial Market Panics and Volatility in the Long Run, 1830-1988.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Winkler, John Kennedy. Five and Ten: The Fabulous Life of F. W. Woolworth. New York: R. M. McBride and Co., 1940.

Wood, Charles. “Science and Politics in the War on Cattle Diseases: The Kansas Experience, 1900-1940.” Agricultural History 54 (1980): 82-92.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy Since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “The Origins of American Industrial Success, 1879-1940.” The American Economic Review 80 (1990): 651-668.

Citation: Smiley, Gene. “US Economy in the 1920s.” EH.Net Encyclopedia, edited by Robert Whaples. June 29, 2004. URL: http://eh.net/encyclopedia/the-u-s-economy-in-the-1920s/

The Dust Bowl

Geoff Cunfer, Southwest Minnesota State University

What Was “The Dust Bowl”?

The phrase “Dust Bowl” holds a powerful place in the American imagination. It connotes a confusing mixture of concepts. Is the Dust Bowl a place? Was it an event? An era? American popular culture employs the term in all three ways. Ask most people about the Dust Bowl and they can place it in the Middle West, though in the imagination it wanders widely, from the Rocky Mountains, through the Great Plains, to Illinois and Indiana. Many people can situate the event in the 1930s. Ask what happened then, and a variety of stories emerge. A combination of severe drought and economic depression created destitution among farmers. Millions of desperate people took to the roads, seeking relief in California where they became exploited itinerant farm laborers. Farmers plowed up a pristine wilderness for profit, and suffered ecological collapse because of their recklessness. Dust Bowl stories, like its definitions, are legion, and now approach the mythological.

The words also evoke powerful graphic images taken from art and literature. Consider these lines from the opening chapter of John Steinbeck’s The Grapes of Wrath (1939):

“Now the wind grew strong and hard and it worked at the rain crust in the corn fields. Little by little the sky was darkened by the mixing dust, and carried away. The wind grew stronger. The rain crust broke and the dust lifted up out of the fields and drove gray plumes into the air like sluggish smoke. The corn threshed the wind and made a dry, rushing sound. The finest dust did not settle back to earth now, but disappeared into the darkening sky. … The people came out of their houses and smelled the hot stinging air and covered their noses from it. And the children came out of the houses, but they did not run or shout as they would have done after a rain. Men stood by their fences and looked at the ruined corn, drying fast now, only a little green showing through the film of dust. The men were silent and they did not move often. And the women came out of the houses to stand beside their men – to feel whether this time the men would break.”

When Americans hear the words “Dust Bowl,” grainy black-and-white photographs of devastated landscapes and destitute people leap to mind. Dorothea Lange and Arthur Rothstein classics bring the Dust Bowl vividly to life in our imaginations (Figures 1-4). For the musically inclined, Woody Guthrie’s Dust Bowl ballads define the event with evocative lyrics such as those in “The Great Dust Storm” (Figure 5). Some of America’s most memorable art – literature, photography, music – emerged from the Dust Bowl, and that art helped to define the event and build the myth in American popular culture.

The Dust Bowl was an event defined by artists and by government bureaucrats. It has become part of American mythology, an episode in the nation’s progression from the Pilgrims to Lexington and Concord, through Civil War and frontier settlement, to industrial modernization, Depression, and Dust Bowl. Many of the great themes of American history are tied up in the Dust Bowl story: agricultural settlement and frontier struggle; industrial mechanization with the arrival of tractors; the migration from farm to city, the transformation from rural to urban. Add the Great Depression and the rise of a powerful federal government, and we have covered many of the themes of a standard U.S. history survey course.

Despite the multiple uses of the phrase “Dust Bowl,” it was an event that occurred in a specific place and time. The Dust Bowl was a coincidence of drought, severe wind erosion, and economic depression that occurred on the Southern and Central Great Plains during the 1930s. The drought – the longest and deepest in over a century of systematic meteorological observation – began in 1933 and continued through 1940. In 1941 rain poured down on the region, dust storms ceased, crops thrived, economic prosperity returned, and the Dust Bowl was over. But for those eight years crops failed, sandy soils blew and drifted over failed croplands, and rural people, unable to meet cash obligations, suffered through tax delinquency, farm foreclosure, business failure, and out-migration. The Dust Bowl was defined by a combination of:

  • extended severe drought and unusually high temperatures
  • episodic regional dust storms and routine localized wind erosion
  • agricultural failure, including both cropland and livestock operations
  • the collapse of the rural economy, affecting farmers, rural businesses, and local governments
  • an aggressive reform movement by the federal government
  • migration from rural to urban areas and out of the region

The Dust Bowl on the Great Plains coincided with the Great Depression. Though few plainsmen suffered directly from the 1929 stock market crash, they were too intimately connected to national and world markets to be immune from economic repercussions. The farm recession had begun in the 1920s; after the 1918 Armistice transformed Europe from an importer to an exporter of agricultural products, American farmers again faced their constant nemesis: production so high that prices were pushed downward. Farmers grew more cotton, wheat, and corn than the market could consume, and prices fell, fell more, and then hit rock bottom by the early 1930s. Cotton, one of the staple crops of the southern plains, for example, sold for 36 cents per pound in 1919, dropped to 18 cents in 1928, then collapsed to a dismal 6 cents per pound in 1931. One irony of the Dust Bowl is that the world could not really buy all of the crops Great Plains farmers produced. Even the severe drought and crop failures of the 1930s had little impact on the flood of farm commodities inundating the world market.
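The arithmetic behind the cotton-price collapse is easy to check; a minimal sketch in Python, using only the cents-per-pound figures quoted above:

```python
# Cotton prices cited in the text, in cents per pound.
prices = {1919: 36, 1928: 18, 1931: 6}

def pct_decline(earlier, later):
    """Percentage fall from an earlier price to a later one."""
    return 100 * (earlier - later) / earlier

# 1919 to 1928: the price fell by exactly half.
print(pct_decline(prices[1919], prices[1928]))            # 50.0
# 1919 to 1931: the price lost roughly five-sixths of its value.
print(round(pct_decline(prices[1919], prices[1931]), 1))  # 83.3
```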

Routine Dust Storms on the Southern and Central Plains

The location of the drought and the dust storms shifted from place to place between 1934 and 1940 (Figure 6). The core of the Dust Bowl was in the Texas and Oklahoma panhandles, southwestern Kansas, and southeastern Colorado. The drought began on the Great Plains, from the Dakotas through Texas and New Mexico, in 1931. The following year was wetter, but 1933 and 1934 set low rainfall records across the plains. In some places it did not rain at all. Others quickly accumulated a deep deficit. Figure 7 shows percent difference from average rainfall over five-year periods, with the location of the shifting Dust Bowl overlaid. Only a handful of counties (mapped in blue) had more rain than average between 1932 and 1940. And few counties fall into the 0 to -10 percent range. Most counties were 10 percent drier than average, or more, and more than eighty counties were at least 20 percent drier. Scientists now believe that the 1930s drought coincided with a severe La Niña event in the Pacific Ocean. Cool sea surface temperatures reduced the amount of moisture entering the jet stream and directed it south of the continental U.S. The drought was deep, extensive, and persisted for more than a decade.
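The “percent difference from average rainfall” statistic mapped in Figure 7 is a simple departure-from-normal calculation. A sketch of that calculation follows; the 18-inch and 14-inch county rainfall values are invented for illustration, not taken from the source:

```python
def pct_departure(period_mean, long_term_mean):
    """Percent difference of a period's mean rainfall from the long-term
    average. Negative values mean drier than average."""
    return 100 * (period_mean - long_term_mean) / long_term_mean

# Hypothetical county: long-term average of 18 inches/year, but only a
# 14 inches/year mean over 1932-1940 -- enough to fall in the
# "at least 20 percent drier" class described in the text.
print(round(pct_departure(14.0, 18.0), 1))  # -22.2
```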

Whenever there is drought on the southern and central plains dust blows. The flat topography and continental climate mean that winds are routinely high. When soil moisture declines, plant cover, whether native plants or crops, diminishes in tandem. Normally dry conditions mean that native plants typically cover less than 60 percent of the ground surface, leaving the other 40+ percent in bare, exposed soils. During the driest conditions native prairie vegetation sometimes covers less than 20 percent of the ground surface, exposing 80 percent or more of the soil to strong prairie winds. Failed crop fields are completely bare of vegetation. In these circumstances soil blows. Local wind erosion can drift soil from one field into ridges and ripples in a neighboring field (Figure 8). Stronger regional dust storms can move dirt many miles before it drifts down along fence lines and around buildings (Figure 9). In rare instances very large dust storms carry soils high into the air where they can travel for many hundreds of miles. These “black blizzards” are the most spectacular and memorable of dust storms, but happen only infrequently (Figure 10).
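The relationship between plant cover and exposed soil described above is simple complementarity: whatever fraction of the ground plants do not cover is bare and available to the wind. A minimal sketch, using the cover percentages quoted in the text:

```python
def exposed_soil_pct(plant_cover_pct):
    """Share of the ground surface left bare, given percent plant cover."""
    return 100 - plant_cover_pct

# Normally dry conditions: under 60 percent cover leaves 40+ percent bare.
print(exposed_soil_pct(60))  # 40
# Driest conditions: under 20 percent cover leaves 80+ percent bare.
print(exposed_soil_pct(20))  # 80
# A failed crop field is completely bare of vegetation.
print(exposed_soil_pct(0))   # 100
```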

When wind erosion and dust storms began in the 1930s experienced plains residents hardly welcomed the development, but neither did it surprise them. Dust storms were an occasional spring occurrence from Texas and New Mexico through Kansas and Colorado. They did not happen every year, but often enough to be treated casually. This series of excerpts from the Salina, Kansas Journal and Herald in 1879 indicates that dust storms were a routine part of plains life in dry years:

“For the past few days the gentle winds have enveloped the city with dust decorations. And some of this time it has been intensely hot. Imagine the pleasantness of the situation.”

“During the past few days we have had several exhibitions of what dust can do when propelled by a gale. We had the disagreeable March winds, and saw with ample disgust the evolutions and gyrations of the dust. We have had enough of it, but will undoubtedly get much more of the same kind during this very disagreeable month.”

“Real estate moved considerably this week.”

“Another ‘hardest’ blow ever seen in Kansas … Salina was tantalized with a small sprinkle of rain Thursday afternoon. The wind and dust soon resumed full sway.”

“People have just got through digging from the pores of the skin the dirt driven there by the furious dust storms which for several days since our last issue have been lifting this county ‘clean off its toes.’ Even sinners have stood some chance of being translated with such favoring gales.”

“The wind which held high carnival in this section last Thursday, filled the air with such clouds of dust that darkness of the ‘consistency of twilight’ prevailed. Buildings across the street could not be distinguished. The title of all land about for a while was not worth a cotton hat – it was so ‘unsettled.’ It was of the nature of personal property, because it was not a ‘fixture’ and very moveable. The air was so filled with dust as to be stifling even within houses.”

The Salina newspapers reported dust storms many springs through the late nineteenth century. An item in the Journal in 1885 epitomizes the local attitude: “When the March winds commenced raising dust Monday, the average citizen calmly smiled and whispered ‘so natural!'”

What Made the 1930s Different?

Dust storms were not new to the region in the 1930s, but a number of demographic and cultural factors were. First, many more people lived in the region in the 1930s than in the 1880s. The population of the Great Plains – 450 counties stretching from Texas and New Mexico to the Dakotas and Montana – stood at only 800,000 in 1880; by 1930 it was seven times that, at 5.6 million. The dust storms thus affected far more people than ever before. And many of those people were relative newcomers, having arrived only in recent years. They had no personal or family memory of life on the plains, and many interpreted the arrival of episodic dust storms as an entirely new phenomenon. An example is the reminiscence of Minnie Zeller Doehring, written in 1981. Having moved with her family to western Kansas in 1906, at age 7, she reported, “I remember the first Dirt storm in Western Kansas. I think it was about 1911. And a drouth that year followed by a severe winter.” Neither she nor her family had experienced any of the nineteenth-century dust storms reported in local newspapers, so when one arrived during a dry spring five years after they came, it seemed like a brand new development.

Second, this drought and sequence of dust storms coincided with an international economic depression, the worst in two centuries of American history. The financial stresses and personal misery of the Depression blended seamlessly into the environmental disasters of drought, crop failure, farm loss, and dust. It was difficult to assign blame. Were farmers failing because of the economic crisis? Bank failures? Landlords squeezing tenants? Drought? Dust storms? In the midst of these concurrent crises emerged an activist and newly powerful federal government. Franklin Roosevelt’s New Deal roared into Washington in 1933 with a landslide mandate from voters to fix all of the ills plaguing the nation: depression, bank failures, unemployment, agricultural overproduction, underconsumption – the list went on and on. Rural poverty, agricultural land use, soil erosion, and dust storms were quickly added to that list of ills to be fixed.

The drought and dust storms were certainly hard on farmers. Crop failure was widespread and repeated. In 1935, 46.6 million acres of crops failed on the Great Plains, with over 130 counties losing more than half their planted acreage. Many farmers lived on the edge of financial failure. In debt for land, tractor, automobile, and even for last year’s seed, a farmer could be pushed into bankruptcy by one or two years of reduced income. Tax delinquency became a serious problem throughout the plains. As landowners fell behind on their local property tax payments, county governments grew desperate. Many counties had delinquency rates over 40 percent for several consecutive years and were faced with laying off teachers, police, and other employees. A few counties considered closing county government altogether and merging with neighboring counties. Their only alternative was to foreclose on now nearly worthless farms that they could neither rent nor sell. Many families behind on mortgage payments and taxes simply packed up and left without notice. The crisis was not restricted to farmers, bankers, and county employees. Throughout the plains, sales of tractors, automobiles, and fertilizer declined in the early 1930s, hurting small-town merchants across the board.

Consider the example of William and Sallie DeLoach, typical southern plains farmers who moved from farm to farm through the early twentieth century, repeatedly trying to buy land and repeatedly losing it to the bank in the face of drought or low crop prices. After an earlier failed attempt to buy land, the family invested in a 177-acre cotton farm in Lamb County, Texas in 1924, paying 30 dollars per acre. A month later they passed up a chance to sell it for 35 dollars an acre. Within three months of the purchase, late summer rains failed to arrive, the cotton crop bloomed late, and the first freeze of winter killed it. Unable to make the upcoming mortgage payment, the DeLoaches forfeited their land and the 200 dollars they had already paid toward it. One bad season meant default. Through the rest of the 1920s the DeLoaches rented from Sallie’s father and farmed cotton in Lamb County. In September 1929, just weeks before the stock market crashed, William thought the time auspicious to invest in land again, and bought 90 acres. He farmed it, then rented part of it to another farmer. Rain was plentiful in 1931, and by the end of that year DeLoach had repaid back rent to his father-in-law, paid off all outstanding debts except his land mortgage, and started 1932 in good shape. But the 1930s were hard on the southern plains, with extended drought, dust storms, and widespread poverty. The one bright spot for farmers was the farm subsidies instituted by Franklin Roosevelt’s New Deal. In 1933 DeLoach plowed up 55 acres of already growing cotton in exchange for a check from the federal government. Lamb County led the state in the cotton reduction program, bringing nearly 1.4 million dollars into the county in 1933. Drought lingered over the Texas panhandle through 1934 and 1935, and by early 1936 DeLoach was beleaguered again. When the Supreme Court declared the Agricultural Adjustment Act (AAA) unconstitutional, it appeared that federal farm subsidies would disappear.
A few weeks after that decision, DeLoach had a visit from his real estate agent:

Mr. Gholson came by this A.M. and wanted to know what I was going to do about my land notes. I told him I could do nothing, only let them have the land back. … I told him I had payed the school tax for 1934. Owed the state and county for 1935, also the state for 1934. All tole [sic] about $37.50. He said he would pay that and we (wife & I) could deed the land back to the Nugent people. I hate to lose the land and what I have payed on it, but I can’t do any thing else. ‘Big fish eat the little ones.’ The law is take from the poor devil that wants a home, give to the rich. I have lost about $1000.00 on the land.

A week later:

Mr. Gholson came by. Told me about the deed he had drawn in Dallas. … He said if I would pay for the deed and stamps, which would be $5.00, the deal would be closed. I asked him if that meant just as the land stood now. He said yes. He said they would pay the balance of taxes. Well, they ought to. I have payed $800.00 or better on the land, but got behind and could not do any thing else. Any way my mind is at ease. I do not think Gholson or any of the cold blooded land grafters would lose any sleep on account of taking a home away from any poor devil.

For the third time in his career DeLoach defaulted and turned over his farm. Later that month Congress rewrote the AAA legislation to meet constitutional requirements, and the farm programs have continued ever since. With federal program income again assured, DeLoach purchased another farm, of 68 acres, in September 1936, moved the family onto it, and tried again. Other families were not as persistent; when crop failure led to bankruptcy they packed up and left the region. The term popularly assigned to such emigrants, “Dust Bowl refugees,” assigned a single cause – dust storms – to what was in fact a complex and multi-causal event (Figure 11).

Like dust storms and agricultural setbacks, high out-migration was not new to the plains. Throughout the settlement period, from about 1870 to 1920, there was very high turnover in population. Many people moved into the region, but many moved out as well. James Malin found that ten-year population turnover on the western Kansas frontier ranged from 41 to 67 percent between 1895 and 1930. Many people were half farmers, half land speculators, buying frontier land cheap (or homesteading it for free), then selling a few years later on a rising market. People moved from farm to farm, always looking for a better opportunity, often following a succession of frontiers over a lifetime, from Ohio to Illinois to Kansas to Colorado. Out-migration from the Great Plains in the 1930s was not considerably higher than it had been over the previous fifty years. What changed in the 1930s was that new immigrants stopped moving in to replace those leaving. Many rural areas of the grassland began a slow population decline that had not yet bottomed out in 2000.

The New Deal Response to Drought and Dust Storms

Emigrants from the Great Plains were not new in the 1930s. Neither was drought, agricultural crisis, or dust storms. This drought and these dust storms were certainly more severe than those that wracked the plains in 1879-1880, in the mid 1890s, and again in 1911. And more people were adversely affected because total population was higher. But what was most different about the 1930s was the response of the federal government. In past crises, when farmers went bankrupt, when grassland counties lost 20 percent of their population, when dust storms descended, the federal government stood aloof. It felt no responsibility for the problems, no popular mandate to solve them. Just the opposite was the case in the 1930s. The New Deal set out to solve the nation’s problems, and in the process contributed to the creation of the Dust Bowl as an historic event of mythological proportions.

The economic and agricultural disaster of the 1930s provided an opening for experimentation with federal land use management. The idea had begun among economists in agricultural colleges in the 1920s, who proposed removing “submarginal” land from crop production. “Submarginal” referred to land low in productivity, unsuited for the production of farm crops, or incapable of profitable cultivation. A “land utilization” movement emerged in the 1920s to classify farm land as good, poor, marginal, or submarginal, and to forcibly retire the last category from production. Such rational planning aimed to reduce farm poverty, curb chronic overproduction of farm crops, and protect land vulnerable to damage. M.L. Wilson, of Montana State Agricultural College, focused the academic movement, while Lewis C. Gray, at the Bureau of Agricultural Economics (BAE), led the effort within the U.S. Department of Agriculture. The land utilization movement began well before the 1930s, but the drought and dust storms of that decade provided a fortuitous justification for a land use policy already on the table, and newly created agencies like the Soil Conservation Service (SCS), the Resettlement Administration (RA), and the Farm Security Administration (FSA) were the loudest to publicize and deplore the Dust Bowl wracking America’s heartland.

Whereas the land use adjustment movement had begun as an attempt to solve chronic rural poverty, the arrival of dust storms in 1934 provided a second justification for aggressive federal action to change land use practices. Federal bureaucrats created the central narrative of the Dust Bowl, in part because it emphasized the need for these new reform agencies. The FSA launched a sophisticated public relations campaign to publicize the disaster unfolding in the Great Plains. It hired world-class photographers to document the suffering of plains people, giving them specific instructions from Washington to photograph the most eroded landscapes and the most destitute people. Dorothea Lange’s photographs of emigrants on the road to California still stand as some of the most evocative images in American history (Figures 12-13). The Resettlement Administration also hired filmmaker Pare Lorentz to make a series of movies, including “The Plow that Broke the Plains.”

The narrative behind this publicity campaign ran as follows: in the nineteenth and early twentieth centuries farmers had come to the dry western plains, encouraged by a misguided Homestead Act, where they plowed up land unsuited for farming. The grassland should have been left in native grass for grazing, but small farmers, hoping to profit from cash crops like wheat, had plowed the land, exposing soils to relentless winds. When serious drought struck in the 1930s the wounded landscape succumbed to dust storms that devastated farms, farmers, and local economies. The result was a mass exodus of desperately poor people, a social failure caused by misuse of land. The profit motive and private land ownership were behind this failure, and only a scientifically grounded federal bureaucracy could manage land use wisely in the interests of all Americans, rather than for the profit of a few individuals. Federal agents would retire land from cultivation, return it to grassland, and teach remaining farmers how to use their land more carefully to prevent erosion. This effort would, of course, require large budgets and thousands of employees, but it was vital to resolving a rural disaster.

The New Deal government, with Congressional support and appropriations, began to put the reform plan into place. A host of new agencies vied to manage the program, including the FSA, the SCS, the RA, and the Agricultural Adjustment Administration (AAA). Each implemented a variety of reforms. The RA began purchasing “submarginal” land from farmers, eventually acquiring some 10 million acres of former farmland in the Great Plains. (These lands are now mostly managed by the U.S. Forest Service as National Grasslands leased to nearby private ranchers for grazing.) The RA and the FSA worked to relocate destitute farmers on better lands, or to move them out of farming altogether. The SCS established demonstration projects in counties across the nation, where local cooperator farmers implemented recommended soil conservation techniques on their farms, such as fallowing, strip cropping, contour plowing, terracing, and growing cover crops. There were efforts in each county to establish Land Use Planning Committees made up of local farmers and federal agents who would have authority over land use practices on private farms. These committees functioned for several years in the late 1930s, but ended in most places by the early 1940s. The most important and expensive measure was the AAA’s development of a comprehensive system of farm subsidies, which paid farmers cash for reducing their acreage of commodity crops. The subsidies, created as an emergency Depression measure, have become routine and persist seventy years later. They brought millions of dollars into nearly every farming county in the U.S. and permanently transformed the economics of agriculture. In a multitude of innovative ways the federal government set out to remake American farming. The Dust Bowl narrative served exceedingly well to justify these massive and revolutionary changes in farming, America’s most common occupation for most of its history.

Conclusion

The Dust Bowl finally ended in 1941, with the arrival of drenching rains on the southern and central plains and with the advent of World War II. The rains restored crops and settled the dust. The war diverted public and government attention from the plains. In a telling move, the FSA photography corps was reconstituted as the Office of War Information, the propaganda wing of the government’s war effort. The narrative of World War II replaced the Dust Bowl narrative in the public’s attention. Congress diverted funding away from the Great Plains and toward mobilization. The Land Utilization Program stopped buying submarginal land, and the county Land Use Planning Committees ceased to function. Some of the New Deal reforms became permanent, however. The AAA subsidy system continues to the present, and the Soil Conservation Service (now the Natural Resources Conservation Service) carved out a stable niche promoting wise agricultural land management and soil mapping.

Ironically, overall land use on the Great Plains changed little during the decade. About the same amount of land was devoted to crops in the second half of the twentieth century as in the first half. Farmers grew the same crops in the same mixtures. Many implemented the milder reforms promoted by New Dealers – contour plowing, terracing – but little cropland was converted back to pasture. The “submarginal” regions have continued to grow wheat, sorghum, and other crops in roughly the same quantities. Despite these facts, the public has generally adopted the Dust Bowl narrative. If asked, most people will identify misuse of land as the cause of the Dust Bowl. The descendants of the federal agencies created in the 1930s still claim to have played a leading role in solving the crisis. Periodic droughts and dust storms have returned to the region since 1941, notably in the early 1950s and again in the 1970s. Towns in the core dust storm region still have dust storms in dry years; Lubbock, Texas, for example, experienced 35 dust storms in 1973-74. Rural depopulation continues in the Great Plains (although cities in the region have grown even faster than rural places have declined). None of these later droughts, dust storms, or periods of depopulation has received the concentrated public attention that those of the 1930s did. Nonetheless, environmentalists and critics of modern agricultural systems continue to warn that unless we reform modern farming the Dust Bowl may return.

References and Additional Reading

Bonnifield, Mathew P. The Dust Bowl: Men, Dirt, and Depression. Albuquerque: University of New Mexico Press, 1979.

Cronon, William. “A Place for Stories: Nature, History, and Narrative.” Journal of American History 78 (March 1992): 1347-1376.

Cunfer, Geoff. “Causes of the Dust Bowl.” In Past Time, Past Place: GIS for History, edited by Anne Kelly Knowles, 93-104. Redlands, CA: ESRI Press, 2002.

Cunfer, Geoff. “The New Deal’s Land Utilization Program in the Great Plains.” Great Plains Quarterly 21 (Summer 2001): 193-210.

Cunfer, Geoff. On the Great Plains: Agriculture and Environment. College Station: Texas A&M University Press, 2005.

The Future of the Great Plains: Report of the Great Plains Committee. Washington: Government Printing Office, 1936.

Ganzel, Bill. Dust Bowl Descent. Lincoln: University of Nebraska Press, 1984.

Great Plains Quarterly 6 (Spring 1986), special issue on the Dust Bowl.

Gregory, James N. American Exodus: The Dust Bowl Migration and Okie Culture in California. New York: Oxford University Press, 1989.

Guthrie, Woody. Dust Bowl Ballads. New York: Folkway Records, 1964.

Gutmann, Myron P. and Geoff Cunfer. “A New Look at the Causes of the Dust Bowl.” Charles L. Wood Agricultural History Lecture Series, no. 99-1. Lubbock: International Center for Arid and Semiarid Land Studies, Texas Tech University, 1999.

Hansen, Zeynep K. and Gary D. Libecap. “Small Farms, Externalities, and the Dust Bowl of the 1930s.” Journal of Political Economy 112 (2004): 665-694.

Hurt, R. Douglas. The Dust Bowl: An Agricultural and Social History. Chicago: Nelson-Hall, 1981.

Lookingbill, Brad. Dust Bowl USA: Depression America and the Ecological Imagination, 1929-1941. Athens: Ohio University Press, 2001.

Lorentz, Pare. The Plow that Broke the Plains. Washington: Resettlement Administration, 1936.

Malin, James C. “Dust Storms, 1850-1900.” Kansas Historical Quarterly 14 (May, August, and November 1946): 129-144, 265-296, 391-413.

Malin, James C. Essays on Historiography. Ann Arbor, Michigan: Edwards Brothers, 1946.

Malin, James C. The Grassland of North America: Prolegomena to Its History. Lawrence, Kansas, privately printed, 1961.

Riney-Kehrberg, Pamela. Rooted in Dust: Surviving Drought and Depression in Southwestern Kansas. Lawrence: University Press of Kansas, 1994.

Riney-Kehrberg, Pamela, editor. Waiting on the Bounty: The Dust Bowl Diary of Mary Knackstedt Dyck. Iowa City: University of Iowa Press, 1999.

Svobida, Lawrence. Farming the Dust Bowl: A Firsthand Account from Kansas. Lawrence: University Press of Kansas, 1986.

Wooten, H.H. The Land Utilization Program, 1934 to 1964: Origin, Development, and Present Status. U.S.D.A. Economic Research Service Agricultural Economic Report no. 85. Washington: Government Printing Office, 1965.

Worster, Donald. Dust Bowl: The Southern Plains in the 1930s. New York: Oxford University Press, 1979.

Wunder, John R., Frances W. Kaye, and Vernon Carstensen. Americans View Their Dust Bowl Experience. Niwot: University Press of Colorado, 1999.

Citation: Cunfer, Geoff. “The Dust Bowl”. EH.Net Encyclopedia, edited by Robert Whaples. August 18, 2004. URL http://eh.net/encyclopedia/the-dust-bowl/