
Japanese Industrialization and Economic Growth

Carl Mosk, University of Victoria

Japan achieved sustained growth in per capita income between the 1880s and 1970 through industrialization. Moving along an income growth trajectory through expansion of manufacturing is hardly unique. Indeed Western Europe, Canada, Australia and the United States all attained high levels of income per capita by shifting from agrarian-based production to manufacturing and technologically sophisticated service sector activity.

Still, there are four distinctive features of Japan’s development through industrialization that merit discussion:

The proto-industrial base

Japan’s agricultural productivity was high enough to sustain substantial craft (proto-industrial) production in both rural and urban areas of the country prior to industrialization.

Investment-led growth

Domestic investment in industry and infrastructure was the driving force behind growth in Japanese output. Both private and public sectors invested in infrastructure, with national and local governments serving as coordinating agents for the infrastructure build-up.

  • Investment in manufacturing capacity was largely left to the private sector.
  • Rising domestic savings made increasing capital accumulation possible.
  • Japanese growth was investment-led, not export-led.

Total factor productivity growth — achieving more output per unit of input — was rapid.

On the supply side, total factor productivity growth was extremely important. Scale economies — the reduction in per unit costs due to increased levels of output — contributed to total factor productivity growth. Scale economies existed due to geographic concentration, to growth of the national economy, and to growth in the output of individual companies. In addition, companies moved down the “learning curve,” reducing unit costs as their cumulative output rose and demand for their product soared.

The social capacity for importing and adapting foreign technology improved and this contributed to total factor productivity growth:

  • At the household level, investing in education of children improved social capability.
  • At the firm level, creating internalized labor markets that bound firms to workers and workers to firms, thereby giving workers a strong incentive to flexibly adapt to new technology, improved social capability.
  • At the government level, industrial policy that reduced the cost to private firms of securing foreign technology enhanced social capacity.

Shifting out of low-productivity agriculture into high productivity manufacturing, mining, and construction contributed to total factor productivity growth.

Dualism

Sharply segmented labor and capital markets emerged in Japan after the 1910s. The capital-intensive sector, enjoying high ratios of capital to labor, paid relatively high wages; the labor-intensive sector paid relatively low wages.

Dualism contributed to income inequality and therefore to domestic social unrest. After 1945 a series of public policy reforms addressed inequality and erased much of the social bitterness around dualism that ravaged Japan prior to World War II.

The remainder of this article will expand on a number of the themes mentioned above. The appendix reviews quantitative evidence concerning these points. The conclusion of the article lists references that provide a wealth of detailed evidence supporting the points above, which this article can only begin to explore.

The Legacy of Autarky and the Proto-Industrial Economy: Achievements of Tokugawa Japan (1600-1868)

Why Japan?

Given the relatively poor record of countries outside the European cultural area — few achieving the kind of “catch-up” growth Japan managed between 1880 and 1970 — the question naturally arises: why Japan? After all, when the United States forcibly “opened Japan” in the 1850s and Japan was forced to cede extra-territorial rights to a number of Western nations, as China had earlier in the 1840s, Japan’s prospects seemed dim indeed to many Westerners and Japanese alike.

Tokugawa achievements: urbanization, road networks, rice cultivation, craft production

In answering this question, Mosk (2001), Minami (1994) and Ohkawa and Rosovsky (1973) emphasize the achievements of Tokugawa Japan (1600-1868) during a long period of “closed country” autarky between the mid-seventeenth century and the 1850s: a high level of urbanization; well developed road networks; the channeling of river water flow with embankments and the extensive elaboration of irrigation ditches that supported and encouraged the refinement of rice cultivation based upon improving seed varieties, fertilizers and planting methods especially in the Southwest with its relatively long growing season; the development of proto-industrial (craft) production by merchant houses in the major cities like Osaka and Edo (now called Tokyo) and its diffusion to rural areas after 1700; and the promotion of education and population control among both the military elite (the samurai) and the well-to-do peasantry in the eighteenth and early nineteenth centuries.

Tokugawa political economy: daimyo and shogun

These developments were inseparable from the political economy of Japan. The system of confederation government consolidated under the Tokugawa at the beginning of the seventeenth century placed certain powers in the hands of feudal warlords, daimyo, and certain powers in the hands of the shogun, the most powerful of the warlords. Each daimyo — and the shogun — was assigned a geographic region, a domain, and given taxation authority over the peasants residing in the villages of the domain. Intercourse with foreign powers was monopolized by the shogun, thereby preventing daimyo from cementing alliances with other countries in an effort to overthrow the central government. The samurai military retainers of the daimyo were forced to abandon rice farming and reside in the castle town headquarters of their daimyo overlord. In exchange, samurai received rice stipends from the rice taxes collected from the villages of their domain. By removing samurai from the countryside — by demilitarizing rural areas — conflicts over local water rights were largely made a thing of the past. As a result irrigation ditches were extended throughout the valleys, and riverbanks were shored up with stone embankments, facilitating transport and preventing flooding.

The sustained growth of proto-industrialization in urban Japan, and its widespread diffusion to villages after 1700, was also inseparable from the productivity growth in paddy rice production and in industrial crops like tea, fruit, mulberry (which sustained the raising of silk cocoons) and cotton. Indeed, Smith (1988) has given pride of place to these “domestic sources” of Japan’s future industrial success.

Readiness to emulate the West

As a result of these domestic advances, Japan was well positioned to take up the Western challenge. It harnessed its infrastructure, its high level of literacy, and its proto-industrial distribution networks to the task of emulating Western organizational forms and Western techniques in energy production, first and foremost enlisting inorganic energy sources like coal and the other fossil fuels to generate steam power. Having intensively developed the organic economy depending upon natural energy flows like wind, water and fire, Japanese were quite prepared to master inorganic production after the Black Ships of the Americans forced Japan to jettison its long-standing autarky.

From Balanced to Dualistic Growth, 1887-1938: Infrastructure and Manufacturing Expand

Fukoku Kyohei

After the Tokugawa government collapsed in 1868, a new Meiji government committed to the twin policies of fukoku kyohei (wealthy country/strong military) took up the challenge of renegotiating its treaties with the Western powers. It created infrastructure that facilitated industrialization. It built a modern navy and army that could keep the Western powers at bay and establish a protective buffer zone in North East Asia that eventually formed the basis for a burgeoning Japanese empire in Asia and the Pacific.

Central government reforms in education, finance and transportation

Jettisoning the confederation-style government of the Tokugawa era, the new leaders of the Meiji government fashioned a unitary state with powerful ministries consolidating authority in the capital, Tokyo. The freshly minted Ministry of Education promoted compulsory primary schooling for the masses and elite university education aimed at deepening engineering and scientific knowledge. The Ministry of Finance created the Bank of Japan in 1882, laying the foundations for a private banking system backed up by a lender of last resort. The government began building a steam railroad trunk line girding the four major islands, encouraging private companies to participate in the project. In particular, the national government committed itself to constructing a Tokaido line connecting the Tokyo/Yokohama region to the Osaka/Kobe conurbation along the Pacific coastline of the main island of Honshu, and to creating deepwater harbors at Yokohama and Kobe that could accommodate deep-hulled steamships.

Not surprisingly, the merchants in Osaka, the merchant capital of Tokugawa Japan, already well versed in proto-industrial production, turned to harnessing steam and coal, investing heavily in integrated spinning and weaving steam-driven textile mills during the 1880s.

Diffusion of best-practice agriculture

At the same time, the abolition of the three hundred or so feudal fiefs that were the backbone of confederation-style Tokugawa rule and their consolidation into politically weak prefectures, under a strong national government that virtually monopolized taxation authority, gave a strong push to the diffusion of best-practice agricultural technique. The nationwide diffusion of seed varieties developed in the Southwest fiefs of Tokugawa Japan spearheaded a substantial improvement in agricultural productivity, especially in the Northeast. The result was simultaneous expansion of agriculture using traditional Japanese technology and of manufacturing using imported Western technology.

Balanced growth

Growth at the close of the nineteenth century was balanced in the sense that sectors using traditional and modern technology grew at roughly equal rates, and labor — especially young girls recruited out of farm households to work in the steam-driven textile mills — flowed back and forth between rural and urban Japan at wages that were roughly equal in industrial and agricultural pursuits.

Geographic economies of scale in the Tokaido belt

Concentration of industrial production first in Osaka and subsequently throughout the Tokaido belt fostered powerful geographic scale economies (the ability to reduce per unit costs as output levels increase), reducing the costs of securing energy, raw materials and access to global markets for enterprises located in the great harbor metropolises stretching from the massive Osaka/Kobe complex northward to the teeming Tokyo/Yokohama conurbation. Between 1904 and 1911, electrification, mainly due to the proliferation of intercity electrical railroads, created economies of scale in the nascent industrial belt facing outward onto the Pacific. The consolidation of two huge hydroelectric power grids during the 1920s — one servicing Tokyo/Yokohama, the other Osaka and Kobe — further solidified the comparative advantage of the Tokaido industrial belt in factory production. Finally, the widening and paving during the 1920s of roads that could handle buses and trucks was also pioneered by the great metropolises of the Tokaido, further bolstering their relative advantage in per capita infrastructure.

Organizational economies of scale — zaibatsu

In addition to geographic scale economies, organizational scale economies became increasingly important in the late nineteenth century. The formation of the zaibatsu (“financial cliques”), which gradually evolved into diversified industrial combines tied together through central holding companies, is a case in point. By the 1910s these had evolved into highly diversified combines, binding together enterprises in banking and insurance, trading companies, mining concerns, textiles, iron and steel plants, and machinery manufactures. By channeling profits from older industries into new lines of activity like electrical machinery manufacturing, the zaibatsu form of organization generated scale economies in finance, trade and manufacturing, drastically reducing information-gathering and transactions costs. By attracting relatively scarce managerial and entrepreneurial talent, the zaibatsu format economized on human resources.

Electrification

The push into electrical machinery production during the 1920s had a revolutionary impact on manufacturing. Effective exploitation of steam power required the use of large central steam engines simultaneously driving a large number of machines — power looms and mules in a spinning/weaving plant, for instance — throughout a factory. Small enterprises did not mechanize in the steam era. But with electrification the “unit drive” system of mechanization spread: each machine could be powered independently of the others. Mechanization spread rapidly to even the smallest factories.

Emergence of the dualistic economy

With the drive into heavy industries — chemicals, iron and steel, machinery — the demand for skilled labor that would flexibly respond to rapid changes in technique soared. Large firms in these industries began offering premium wages and guarantees of employment in good times and bad as a way of motivating and holding onto valuable workers. A dualistic economy emerged during the 1910s. Small firms, light industry and agriculture offered relatively low wages. Large enterprises in the heavy industries offered much more favorable remuneration, extending paternalistic benefits like company housing and company welfare programs to their “internal labor markets.” As a result a widening gulf opened up between the great metropolitan centers of the Tokaido and rural Japan. Income per head was far higher in the great industrial centers than in the hinterland.

Clashing urban/rural and landlord/tenant interests

The economic strains of emergent dualism were amplified by the slowing down of technological progress in the agricultural sector, which had exhaustively reaped the benefits of the regional diffusion, from the Southwest to the Northeast, of best-practice Tokugawa rice cultivation. Landlords — around 45% of the cultivable rice paddy land in Japan was held in some form of tenancy at the beginning of the twentieth century — who had played a crucial role in promoting the diffusion of traditional best-practice techniques, now lost interest in rural affairs and turned their attention to industrial activities. Tenants also found their interests disregarded by the national authorities in Tokyo, who were increasingly focused on supplying cheap foodstuffs to the burgeoning industrial belt by promoting agricultural production within the empire that Japan was assembling through military victories. Japan secured Taiwan from China in 1895, and formally brought Korea under its imperial rule in 1910 on the heels of its successful war against Russia in 1904-05. Tenant unions reacted to this callous disrespect of their needs through violence. Landlord/tenant disputes broke out in the early 1920s, and continued to plague Japan politically throughout the 1930s, with calls for land reform and bureaucratic proposals for reform rejected by a Diet (Japan’s legislature) politically dominated by landlords.

Japan’s military expansion

Japan’s thrust to imperial expansion was inflamed by the growing instability of the geopolitical and international trade regime of the later 1920s and early 1930s. The relative decline of the United Kingdom as an economic power doomed a gold standard regime tied to the British pound. The United States was becoming a potential contender to the United Kingdom as the backer of a gold standard regime but its long history of high tariffs and isolationism deterred it from taking over leadership in promoting global trade openness. Germany and the Soviet Union were increasingly becoming industrial and military giants on the Eurasian land mass committed to ideologies hostile to the liberal democracy championed by the United Kingdom and the United States. It was against this international backdrop that Japan began aggressively staking out its claim to being the dominant military power in East Asia and the Pacific, thereby bringing it into conflict with the United States and the United Kingdom in the Asian and Pacific theaters after the world slipped into global warfare in 1939.

Reform and Reconstruction in a New International Economic Order, Japan after World War II

Postwar occupation: economic and institutional restructuring

After Japan surrendered to the United States and its allies in 1945, its economy and infrastructure were revamped under the S.C.A.P. (Supreme Commander for the Allied Powers) Occupation, which lasted through 1951. As Nakamura (1995) points out, a variety of Occupation-sponsored reforms transformed the institutional environment conditioning economic performance in Japan. The major zaibatsu were liquidated by the Holding Company Liquidation Commission set up under the Occupation (they were revamped as keiretsu corporate groups mainly tied together through cross-shareholding of stock in the aftermath of the Occupation); land reform wiped out landlordism and gave a strong push to agricultural productivity through mechanization of rice cultivation; and collective bargaining, largely illegal under the Peace Preservation Act that was used to suppress union organizing during the interwar period, was given the imprimatur of constitutional legality. Finally, education was opened up, partly through making middle school compulsory, partly through the creation of national universities in each of Japan’s forty-six prefectures.

Improvement in the social capability for economic growth

In short, from a domestic point of view, the social capability for importing and adapting foreign technology was improved with the reforms in education and the fillip to competition given by the dissolution of the zaibatsu. Resolving tension between rural and urban Japan through land reform and the establishment of a rice price support program — that guaranteed farmers incomes comparable to blue collar industrial workers — also contributed to the social capacity to absorb foreign technology by suppressing the political divisions between metropolitan and hinterland Japan that plagued the nation during the interwar years.

Japan and the postwar international order

The revamped international economic order also contributed to the social capability of importing and adapting foreign technology. The instability of the 1920s and 1930s was replaced with a relatively predictable bipolar world in which the United States and the Soviet Union opposed each other in both geopolitical and ideological arenas. The United States became the architect of a multilateral framework designed to encourage trade through its sponsorship of the United Nations, the World Bank, the International Monetary Fund and the General Agreement on Tariffs and Trade (the predecessor to the World Trade Organization). Under the logic of building military alliances to contain Eurasian Communism, the United States brought Japan under its “nuclear umbrella” with a bilateral security treaty. American companies were encouraged to license technology to Japanese companies in the new international environment. Japan redirected its trade away from the areas that had been incorporated into the Japanese Empire before 1945, and towards the huge and expanding American market.

Miracle Growth: Soaring Domestic Investment and Export Growth, 1953-1970

Its infrastructure revitalized through the Occupation period reforms, its capacity to import and export enhanced by the new international economic order, and its access to American technology bolstered through its security pact with the United States, Japan experienced the dramatic “Miracle Growth” between 1953 and the early 1970s whose sources have been cogently analyzed by Denison and Chung (1976). Especially striking in the Miracle Growth period was the remarkable increase in the rate of domestic fixed capital formation, the rise in the investment proportion being matched by a rising savings rate whose secular increase — especially that of private household savings — has been well documented and analyzed by Horioka (1991). While Japan continued to close the gap in income per capita between itself and the United States after the early 1970s, most scholars believe that large Japanese manufacturing enterprises had by and large become internationally competitive by the early 1970s. In this sense it can be said that Japan had completed its nine-decade-long convergence to international competitiveness through industrialization by the early 1970s.

MITI

There is little doubt that the social capacity to import and adapt foreign technology was vastly improved in the aftermath of the Pacific War. Creating social consensus with land reform and agricultural subsidies reduced political divisiveness; extending compulsory education and breaking up the zaibatsu also had a positive impact. Fashioning the Ministry of International Trade and Industry (M.I.T.I.), which took responsibility for overseeing industrial policy, is also viewed as facilitating Japan’s social capability. There is no doubt that M.I.T.I. drove down the cost of securing foreign technology. By intervening between Japanese firms and foreign companies, it acted as a single buyer of technology, playing off competing American and European enterprises in order to reduce the royalties Japanese concerns had to pay on technology licenses. By keeping domestic patent periods short, M.I.T.I. encouraged rapid diffusion of technology. And in some cases — the experience of International Business Machines (I.B.M.), enjoying a virtual monopoly in global mainframe computer markets during the 1950s and early 1960s, is a classic case — M.I.T.I. made it a condition of entry into the Japanese market (through the creation of a subsidiary Japan I.B.M. in the case of I.B.M.) that foreign companies share many of their technological secrets with potential Japanese competitors.

How important industrial policy was for Miracle Growth remains controversial, however. The view of Johnson (1982), who hails industrial policy as a pillar of the Japanese Development State (government promoting economic growth through state policies) has been criticized and revised by subsequent scholars. The book by Uriu (1996) is a case in point.

Internal labor markets, just-in-time inventory and quality control circles

Furthering the internalization of labor markets — the premium wages and long-term employment guarantees largely restricted to white collar workers were extended to blue collar workers with the legalization of unions and collective bargaining after 1945 — also raised the social capability of adapting foreign technology. Internalizing labor created a highly flexible labor force in post-1950 Japan. As a result, Japanese workers embraced many of the key ideas of Just-in-Time inventory control and Quality Control circles in assembly industries, learning how to do rapid machine setups as part and parcel of an effort to produce components “just-in-time” and without defect. Ironically, the concepts of just-in-time and quality control were originally developed in the United States, just-in-time methods being pioneered by supermarkets and quality control by efficiency experts like W. Edwards Deming. Yet it was in Japan that these concepts were relentlessly pursued to revolutionize assembly line industries during the 1950s and 1960s.

Ultimate causes of the Japanese economic “miracle”

Miracle Growth was the completion of a protracted historical process involving the enhancement of human capital, the massive accumulation of physical capital including infrastructure and private manufacturing capacity, the importation and adaptation of foreign technology, and the creation of scale economies, a process that took decades to realize. Dubbed a miracle, it is best seen as the reaping of a bountiful harvest whose seeds were painstakingly planted in the six decades between 1880 and 1938. In the course of the nine decades between the 1880s and 1970, Japan amassed and lost a sprawling empire, reorienting its trade and geopolitical stance through the twists and turns of history. While the ultimate sources of growth can be ferreted out through some form of statistical accounting, the specific way these sources were marshaled in practice is inseparable from the history of Japan itself and of the global environment within which it realized its industrial destiny.

Appendix: Sources of Growth Accounting and Quantitative Aspects of Japan’s Modern Economic Development

One of the attractions of studying Japan’s post-1880 economic development is the abundance of quantitative data documenting Japan’s growth. Estimates of Japanese income and output by sector, capital stock and labor force extend back to the 1880s, a period when Japanese income per capita was low. Consequently statistical probing of Japan’s long-run growth from relative poverty to abundance is possible.

The remainder of this appendix is devoted to introducing the reader to the vast literature on quantitative analysis of Japan’s economic development from the 1880s until 1970, a nine decade period during which Japanese income per capita converged towards income per capita levels in Western Europe. As the reader will see, this discussion confirms the importance of factors discussed at the outset of this article.

Our initial touchstone is the excellent “sources of growth” accounting analysis carried out by Denison and Chung (1976) on Japan’s growth between 1953 and 1971. The standard approach attributes growth in national income to growth of inputs, the factors of production (capital and labor), and to growth in output per unit of the two inputs combined (total factor productivity), along the following lines:

G(Y) = a G(K) + [1 - a] G(L) + G(A)

where G(Y) is the (annual) growth rate of national output, G(K) is the growth rate of capital services, G(L) is the growth rate of labor services, a is capital’s share in national income (the share of income accruing to owners of capital), and G(A) is the growth rate of total factor productivity.

Using a variant of this type of decomposition that takes into account improvements in the quality of capital and labor, estimates of scale economies and adjustments for structural change (shifting labor out of agriculture helps explain why total factor productivity grows), Denison and Chung (1976) generate a useful set of estimates for Japan’s Miracle Growth era.

Operating with this “sources of growth” approach and proceeding under a variety of plausible assumptions, Denison and Chung (1976) estimate that of Japan’s average annual real national income growth of 8.77% over 1953-71, input growth accounted for 3.95% (45% of total growth) and growth in output per unit of input contributed 4.82% (55% of total growth). To be sure, the precise assumptions and techniques they use can be criticized, and the numerical results they arrive at can be argued over. Still, their general point is defensible: Japan’s growth was the result of improvements in the quality of factor inputs (health and education for workers, for instance) and of improvements in the way these inputs are utilized in production, due to technological and organizational change, reallocation of resources from agriculture to non-agriculture, and scale economies.
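The decomposition itself is simple arithmetic once the growth rates and the factor share are in hand. The following Python sketch implements the identity; the capital growth rate, labor growth rate and capital share used below are hypothetical round numbers chosen purely for illustration, with only the 8.77% output growth figure taken from the text.

```python
# Sources-of-growth accounting in the spirit of Denison and Chung (1976).
# A minimal sketch: g_K, g_L, and a below are hypothetical illustrative
# values, NOT Denison and Chung's actual series.

def growth_decomposition(g_Y, g_K, g_L, a):
    """Split output growth g_Y (percent per year) into the contribution
    of input growth, a*G(K) + (1-a)*G(L), and the total factor
    productivity residual G(A)."""
    input_contribution = a * g_K + (1 - a) * g_L
    tfp_residual = g_Y - input_contribution
    return input_contribution, tfp_residual

# Hypothetical example: capital services growing 9% per year, labor
# services 2%, capital's income share 0.3; the 8.77% output growth is
# the 1953-71 average reported in the text.
inputs, tfp = growth_decomposition(g_Y=8.77, g_K=9.0, g_L=2.0, a=0.3)
print(f"input contribution: {inputs:.2f} percentage points")  # 4.10
print(f"TFP residual:       {tfp:.2f} percentage points")     # 4.67
```

Denison and Chung’s own, more refined decomposition (which also adjusts for input quality, scale economies and structural change) attributes 3.95 percentage points to input growth and 4.82 points to growth in output per unit of input.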

With this in mind consider Table 1.

Table 1: Industrialization and Economic Growth in Japan, 1880-1970:
Selected Quantitative Characteristics

Panel A: Income and Structure of National Output

The first three columns give real income per capita [a]; the remaining columns give shares of national output (net domestic product) and relative labor productivity, the ratio of output per worker in agriculture to output per worker in the N sector [b].

Years        Absolute   Relative to    Year   Agriculture   Manufacturing    Manufacturing,       Relative Labor
                        U.S. Level                          & Mining (Ma)    Construction &       Productivity
                                                                             Facilitating (N)     (A/N)
1881-90         893        26.7%       1887      42.5%          13.6%            20.0%                68.3
1891-1900     1,049        28.5        1904      37.8           17.4             25.8                 44.3
1900-10       1,195        25.3        1911      35.5           20.3             31.1                 37.6
1911-20       1,479        27.9        1919      29.9           26.2             38.3                 32.5
1921-30       1,812        29.1        1930      20.0           25.8             43.3                 27.4
1930-38       2,197        37.7        1938      18.5           35.3             51.7                 20.8
1951-60       2,842        26.2        1953      22.0           26.3             39.7                 22.6
1961-70       6,434        47.3        1969       8.7           30.5             45.9                 19.1

Panel B: Domestic and External Sources of Aggregate Supply and Demand Growth: Manufacturing and Mining (Ma), Gross Domestic Fixed Capital Formation (GDFCF), and Trade (TR)

The first three columns give the percentage contributions to growth due to Ma and to GDFCF; the remaining columns give trade openness and growth in trade [c].

Years        Ma to Output   GDFCF to Effective     Years        Openness   Growth in Trade
             Growth         Demand Growth
1888-1900      19.3%          17.9%                1885-89        6.9%        11.4%
1900-10        29.2           30.5                 1890-1913     16.4          8.0
1910-20        26.5           27.9                 1919-29       32.4          4.6
1920-30        42.4            7.5                 1930-38       43.3          8.1
1930-38        50.5           45.3                 1954-59       19.3         12.0
1955-60        28.1           35.0                 1960-69       18.5         10.3
1960-70        33.5           38.5

Panel C: Infrastructure and Human Development

The first four columns give the Human Development Index (HDI) [d]; the remaining columns give electricity generation and National Broadcasting (NHK) subscribers per 100 persons [e].

Year   Educational   Infant Mortality   Overall      Year   Electricity   NHK Radio
       Attainment    Rate (IMR)         HDI Index                         Subscribers
1900      0.57            155             0.57        1914       0.28        n.a.
1910      0.69            161             0.61        1920       0.68        n.a.
1920      0.71            166             0.64        1930       2.46         1.2
1930      0.73            124             0.65        1938       4.51         7.8
1950      0.81             63             0.69        1950       5.54        11.0
1960      0.87             34             0.75        1960      12.28        12.6
1970      0.95             14             0.83        1970      34.46        21.9

Notes: [a] Maddison (2000) provides estimates of real income that take into account the purchasing power of national currencies.

[b] Ohkawa (1979) gives estimates for the “N” sector that is defined as manufacturing and mining (Ma) plus construction plus facilitating industry (transport, communications and utilities). It should be noted that the concept of an “N” sector is not standard in the field of economics.

[c] The estimates of trade are obtained by adding merchandise imports to merchandise exports. Trade openness is estimated by taking the ratio of total (merchandise) trade to national output, the latter defined as Gross Domestic Product (G.D.P.). The trade figures include trade with Japan’s empire (Korea, Taiwan, Manchuria, etc.); the income figures for Japan exclude income generated in the empire.

[d] The Human Development Index is a composite variable formed by combining indices for educational attainment, for health (using life expectancy, which is inversely related to the level of the infant mortality rate, the IMR), and for real per capita income. For a detailed discussion of this index see United Nations Development Programme (2000). (A short computational sketch of this note and note [c] follows note [e].)

[e] Electrical generation is measured in million kilowatts generated and supplied. For 1970, the figures on NHK subscribers are for television subscribers. The symbol n.a. = not available.
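The definitions in notes [c] and [d] reduce to simple arithmetic, illustrated in the short Python sketch below. Every input number here is a hypothetical placeholder rather than a value from the sources underlying Table 1, and the simple averaging of the HDI components follows the convention of the UNDP report cited in note [d].

```python
# Illustrative arithmetic behind notes [c] and [d]. All input values are
# hypothetical placeholders, not the series underlying Table 1.

# Note [c]: trade openness = (merchandise exports + merchandise imports) / GDP.
exports, imports, gdp = 120.0, 140.0, 800.0       # hypothetical values
openness = (exports + imports) / gdp
print(f"trade openness: {openness:.1%}")          # 32.5%

# Note [d]: the HDI combines sub-indices for educational attainment, health
# (life expectancy, inversely related to the infant mortality rate) and real
# per capita income, each scaled to lie between 0 and 1. Following the UNDP
# convention, the composite is the simple average of the three.
education, health, income = 0.81, 0.69, 0.55      # hypothetical sub-indices
hdi = (education + health + income) / 3
print(f"composite HDI: {hdi:.2f}")                # 0.68
```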

Sources: The figures in this table are taken from various pages and tables in Japan Statistical Association (1987), Maddison (2000), Minami (1994), and Ohkawa (1979).

Flowing from this table are a number of points that illustrate the lessons of the Denison and Chung (1976) decomposition. One cluster of points bears upon the timing of Japan’s income per capita growth and the relationship of manufacturing expansion to income growth. Another highlights improvements in the quality of the labor input. Yet another points to the overriding importance of domestic investment in manufacturing and the lesser significance of trade demand. A fourth group suggests that infrastructure has been important to economic growth and industrial expansion in Japan, as exemplified by the figures on electricity generating capacity and the mass diffusion of communications in the form of radio and television broadcasting.

Several parts of Table 1 point to industrialization, defined as an increase in the proportion of output (and labor force) attributable to manufacturing and mining, as the driving force in explaining Japan’s income per capita growth. Notable in Panels A and B of the table is that the gap between Japanese and American income per capita closed most decisively during the 1910s, the 1930s, and the 1960s, precisely the periods when manufacturing expansion was the most vigorous.

Equally noteworthy in the spurts of the 1910s, 1930s and 1960s is the overriding importance of gross domestic fixed capital formation (that is, investment) for growth in demand. By contrast, trade seems much less important to growth in demand during these critical decades, a point emphasized by both Minami (1994) and Ohkawa and Rosovsky (1973). The notion that Japanese growth was “export led” during the nine decades between 1880 and 1970, when Japan caught up technologically with the leading Western nations, is not defensible. Rather, domestic capital investment seems to be the driving force behind aggregate demand expansion. The periods of especially intense capital formation were also the periods when manufacturing production soared. Capital formation in manufacturing, or in infrastructure supporting manufacturing expansion, is the main agent pushing long-run income per capita growth.

Why? As Ohkawa and Rosovsky (1973) argue, spurts in manufacturing capital formation were associated with the import and adaptation of foreign technology, especially from the United States. These investment spurts were also associated with shifts of the labor force out of agriculture and into manufacturing, construction and facilitating sectors, where labor productivity was far higher than it was in farming centered around labor-intensive rice cultivation. The logic of productivity gain due to more efficient allocation of labor resources is apparent from the right-hand column of Panel A in Table 1.

Finally, Panel C of Table 1 suggests that infrastructure investment that facilitated health and educational attainment (combined public and private expenditure on sanitation, schools and research laboratories), and public/private investment in physical infrastructure including dams and hydroelectric power grids helped fuel the expansion of manufacturing by improving human capital and by reducing the costs of transportation, communications and energy supply faced by private factories. Mosk (2001) argues that investments in human-capital-enhancing (medicine, public health and education), financial (banking) and physical infrastructure (harbors, roads, power grids, railroads and communications) laid the groundwork for industrial expansions. Indeed, the “social capability for importing and adapting foreign technology” emphasized by Ohkawa and Rosovsky (1973) can be largely explained by an infrastructure-driven growth hypothesis like that given by Mosk (2001).

In sum, Denison and Chung (1976) argue that a combination of input factor improvement and growth in output per unit of combined factor inputs accounts for Japan’s most rapid spurt of economic growth. Table 1 suggests that labor quality improved because health was enhanced and educational attainment increased; that investment in manufacturing was important not only because it increased the capital stock itself but also because it reduced dependence on agriculture and went hand in glove with improvements in knowledge; and that the social capacity to absorb and adapt Western technology that fueled improvements in knowledge was associated with infrastructure investment.

References

Denison, Edward and William Chung. “Economic Growth and Its Sources.” In Asia’s New Giant: How the Japanese Economy Works, edited by Hugh Patrick and Henry Rosovsky, 63-151. Washington, DC: Brookings Institution, 1976.

Horioka, Charles Y. “Future Trends in Japan’s Savings Rate and the Implications Thereof for Japan’s External Imbalance.” Japan and the World Economy 3 (1991): 307-330.

Japan Statistical Association. Historical Statistics of Japan [Five Volumes]. Tokyo: Japan Statistical Association, 1987.

Johnson, Chalmers. MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925-1975. Stanford: Stanford University Press, 1982.

Maddison, Angus. Monitoring the World Economy, 1820-1992. Paris: Organization for Economic Co-operation and Development, 2000.

Minami, Ryoshin. Economic Development of Japan: A Quantitative Study. [Second edition]. Houndmills, Basingstoke, Hampshire: Macmillan Press, 1994.

Mitchell, Brian. International Historical Statistics: Africa and Asia. New York: New York University Press, 1982.

Mosk, Carl. Japanese Industrial History: Technology, Urbanization, and Economic Growth. Armonk, New York: M.E. Sharpe, 2001.

Nakamura, Takafusa. The Postwar Japanese Economy: Its Development and Structure, 1937-1994. Tokyo: University of Tokyo Press, 1995.

Ohkawa, Kazushi. “Production Structure.” In Patterns of Japanese Economic Development: A Quantitative Appraisal, edited by Kazushi Ohkawa and Miyohei Shinohara with Larry Meissner, 34-58. New Haven: Yale University Press, 1979.

Ohkawa, Kazushi and Henry Rosovsky. Japanese Economic Growth: Trend Acceleration in the Twentieth Century. Stanford, CA: Stanford University Press, 1973.

Smith, Thomas. Native Sources of Japanese Industrialization, 1750-1920. Berkeley: University of California Press, 1988.

Uriu, Robert. Troubled Industries: Confronting Economic Challenge in Japan. Ithaca: Cornell University Press, 1996.

United Nations Development Programme. Human Development Report, 2000. New York: Oxford University Press, 2000.

Citation: Mosk, Carl. “Japan, Industrialization and Economic Growth”. EH.Net Encyclopedia, edited by Robert Whaples. January 18, 2004. URL http://eh.net/encyclopedia/japanese-industrialization-and-economic-growth/

A Brief Economic History of Modern Israel

Nadav Halevi, Hebrew University

The Pre-state Background

The history of modern Israel begins in the 1880s, when the first Zionist immigrants came to Palestine, then under Ottoman rule, to join the small existing Jewish community, establishing agricultural settlements and some industry, restoring Hebrew as the spoken national language, and creating new economic and social institutions. The ravages of World War I reduced the Jewish population by a third, to 56,000, about what it had been at the beginning of the century.

As a result of the war, Palestine came under the control of Great Britain, whose Balfour Declaration had called for a Jewish National Home in Palestine. Britain’s control was formalized in 1920, when it was given the Mandate for Palestine by the League of Nations. During the Mandatory period, which lasted until May 1948, the social, political and economic structure for the future state of Israel was developed. Though the government of Palestine had a single economic policy, the Jewish and Arab economies developed separately, with relatively little connection.

Two factors were instrumental in fostering rapid economic growth of the Jewish sector: immigration and capital inflows. The Jewish population increased mainly through immigration; by the end of 1947 it had reached 630,000, about 35 percent of the total population. Immigrants came in waves, particularly large in the mid 1920s and mid 1930s. They consisted of ideological Zionists and refugees, economic and political, from Central and Eastern Europe. Capital inflows included public funds, collected by Zionist institutions, but were for the most part private funds. National product grew rapidly during periods of large immigration, but both waves of mass immigration were followed by recessions, periods of adjustment and consolidation.

In the period from 1922 to 1947 real net domestic product (NDP) of the Jewish sector grew at an average rate of 13.2 percent, and in 1947 accounted for 54 percent of the NDP of the Jewish and Arab economies together. NDP per capita in the Jewish sector grew at a rate of 4.8 percent; by the end of the period it was 8.5 times larger than in 1922, and 2.5 times larger than in the Arab sector (Metzer, 1998). Though agricultural development – an ideological objective – was substantial, this sector never accounted for more than 15 percent of total net domestic product of the Jewish economy. Manufacturing grew slowly for most of the period, but very rapidly during World War II, when Palestine was cut off from foreign competition and was a major provider to the British armed forces in the Middle East. By the end of the period, manufacturing accounted for a quarter of NDP. Housing construction, though a smaller component of NDP, was the most volatile sector, and contributed to sharp business cycle movements. A salient feature of the Jewish economy during the Mandatory period, which carried over into later periods, was the dominant size of the services sector – more than half of total NDP. This included a relatively modern educational and health sector, efficient financial and business sectors, and semi-governmental Jewish institutions, which later were ready to take on governmental duties.

The Formative Years: 1948-1965

The state of Israel came into being in mid-May 1948, in the midst of a war with its Arab neighbors. The immediate economic problems were formidable: to finance and wage a war, to take in as many immigrants as possible (first the refugees kept in camps in Europe and on Cyprus), to provide basic commodities to the old and new population, and to create a government bureaucracy to cope with all these challenges. The creation of a government went relatively smoothly, as semi-governmental Jewish institutions which had developed during the Mandatory period now became government departments.

Cease-fire agreements were signed during 1949. By the end of that year a total of 340,000 immigrants had arrived, and by the end of 1951 an additional 345,000 (the latter including immigrants from Arab countries), thus doubling the Jewish population. Immediate needs were met by a strict austerity program and inflationary government finance, repressed by price controls and rationing of basic commodities. However, the problems of providing housing and employment for the new population were solved only gradually. A New Economic Policy was introduced in early 1952. It consisted of exchange rate devaluation, the gradual relaxation of price controls and rationing, and curbing of monetary expansion, primarily by budgetary restraint. Active immigration encouragement was curtailed, to await the absorption of the earlier mass immigration.

From 1950 until 1965, Israel achieved a high rate of growth: Real GNP (gross national product) grew by an average annual rate of over 11 percent, and per capita GNP by greater than 6 percent. What made this possible? Israel was fortunate in receiving large sums of capital inflows: U.S. aid in the forms of unilateral transfers and loans, German reparations and restitutions to individuals, sale of State of Israel Bonds abroad, and unilateral transfers to public institutions, mainly the Jewish Agency, which retained responsibility for immigration absorption and agricultural settlement. Thus, Israel had resources available for domestic use – for public and private consumption and investment – about 25 percent more than its own GNP. This made possible a massive investment program, mainly financed through a special government budget. Both the enormity of needs and the socialist philosophy of the main political party in the government coalitions led to extreme government intervention in the economy.

Governmental budgets and strong protectionist measures to foster import-substitution enabled the development of new industries, chief among them textiles, and subsidies were given to help the development of exports, additional to the traditional exports of citrus products and cut diamonds.

During the four decades from the mid 1960s until the present, Israel’s economy developed and changed, as did economic policy. A major factor affecting these developments has been the Arab-Israeli conflict. Its influence is discussed first, and is followed by brief descriptions of economic growth and fluctuations, and evolution of economic policy.

The Arab-Israel Conflict

The most dramatic event of the 1960s was the Six Day War of 1967, at the end of which Israel controlled the West Bank (of the Jordan River) – the area of Palestine absorbed by Jordan since 1949 – and the Gaza Strip, controlled until then by Egypt.

As a consequence of the occupation of these territories Israel was responsible for the economic as well as the political life in the areas taken over. The Arab sections of Jerusalem were united with the Jewish section. Jewish settlements were established in parts of the occupied territories. As hostilities intensified, special investments in infrastructure were made to protect Jewish settlers. The allocation of resources to Jewish settlements in the occupied territories has been a political and economic issue ever since.

The economies of Israel and the occupied territories were partially integrated. Trade in goods and services developed, with restrictions placed on exports to Israel of products deemed too competitive, and Palestinian workers were employed in Israel particularly in construction and agriculture. At its peak, in 1996, Palestinian employment in Israel reached 115,000 to 120,000, about 40 percent of the Palestinian labor force, but never more than 6.5 percent of total Israeli employment. Thus, while employment in Israel was a major contributor to the economy of the Palestinians, its effects on the Israeli economy, except for the sectors of construction and agriculture, were not large.

The Palestinian economy developed rapidly – real per capita national income grew at an annual rate of close to 20 percent in 1969-1972 and 5 percent in 1973-1980 – but fluctuated widely thereafter, and actually decreased in times of hostilities. Palestinian per capita income equaled 10.2 percent of Israeli per capita income in 1968, 22.8 percent in 1986, and declined to 9.7 percent in 1998 (Kleiman, 2003).

As part of the peace process between Israel and the Palestinians initiated in the 1990s, an economic agreement was signed between the parties in 1994, which in effect transformed what had been essentially a one-sided customs agreement (which gave Israel full freedom to export to the Territories but put restrictions on Palestinian exports to Israel) into a more equal customs union: the uniform external trade policy was actually Israel’s, but the Palestinians were given limited sovereignty regarding imports of certain commodities.

Arab uprisings (intifadas), in the 1980s, and especially the more violent one beginning in 2000 and continuing into 2005, led to severe Israeli restrictions on interaction between the two economies, particularly employment of Palestinians in Israel, and even to military reoccupation of some areas given over earlier to Palestinian control. These measures set the Palestinian economy back many years, wiping out much of the gains in income which had been achieved since 1967 – per capita GNP in 2004 was $932, compared to about $1500 in 1999. Palestinian workers in Israel were replaced by foreign workers.

An important economic implication of the Arab-Israel conflict is that Israel must allocate a major part of its budget to defense. The size of the defense budget has varied, rising during wars and armed hostilities. The total defense burden (including expenses not in the budget) reached its maximum relative size during and after the Yom Kippur War of 1973, close to 30 percent of GNP in 1974-1978. In the 2000-2004 period, the defense budget alone reached about 22 to 25 percent of the total government budget. Israel has been fortunate in receiving generous amounts of U.S. aid. Until 1972 most of this came in the form of grants and loans, primarily for purchases of U.S. agricultural surpluses. But since 1973 U.S. aid has been closely connected to Israel’s defense needs. During 1973-1982 annual loans and grants averaged $1.9 billion, and covered some 60 percent of total defense imports. But even in more tranquil periods, the defense burden, exclusive of U.S. aid, has been much larger than usual in industrial countries during peace time.

Growth and Economic Fluctuations

The high rates of growth of income and income per capita which characterized Israel until 1973 were not achieved thereafter. GDP growth fluctuated, generally between 2 and 5 percent, reaching as high as 7.5 percent in 2000, but falling below zero in the recession years from 2001 to mid 2003. By the end of the twentieth century income per capita reached about $20,000, similar to many of the more developed industrialized countries.

Economic fluctuations in Israel have usually been associated with waves of immigration: a large flow of immigrants which abruptly increases the population requires an adjustment period until it is absorbed productively, with the investments for its absorption in employment and housing stimulating economic activity. Immigration never again reached the relative size of the first years after statehood, but again gained importance with the loosening of restrictions on emigration from the Soviet Union. The total number of immigrants in 1972-1982 was 325,000, and after the collapse of the Soviet Union immigration totaled 1,050,000 in 1990-1999, mostly from the former Soviet Union. Unlike the earlier period, these immigrants were gradually absorbed in productive employment (though often not in the same activity as abroad) without resort to make-work projects. By the end of the century the population of Israel passed 6,300,000, with the Jewish population being 78 percent of the total. The immigrants from the former Soviet Union were equal to about one-fifth of the Jewish population, and were a significant and important addition of human capital to the labor force.

As the economy developed, the structure of output changed. Though the service sectors are still relatively large – trade and services contributing 46 percent of the business sector’s product – agriculture has declined in importance, and industry makes up over a quarter of the total. The structure of manufacturing has also changed: both in total production and in exports the share of traditional, low-tech industries has declined, with sophisticated, high-tech products, particularly electronics, achieving primary importance.

Fluctuations in output were marked by periods of inflation and periods of unemployment. After a change in exchange rate policy in the late 1970s (discussed below), an inflationary spiral was unleashed. Hyperinflation rates were reached in the early 1980s, about 400 percent per year by the time a drastic stabilization policy was imposed in 1985. Exchange rate stabilization, budgetary and monetary restraint, and wage and price freezes sharply reduced the rate of inflation to less than 20 percent, and then to about 16 percent in the late 1980s. Very drastic monetary policy, from the late 1990s, finally reduced the inflation to zero by 2005. However, this policy, combined with external factors such as the bursting of the high-tech bubble, recession abroad, and domestic insecurity resulting from the intifada, led to unemployment levels above 10 percent at the beginning of the new century. The economic improvements since the latter half of 2003 have, as yet (February 2005), not significantly reduced the level of unemployment.

Policy Changes

The Israeli economy was initially subject to extensive government controls. Only gradually was the economy converted into a fairly free (though still not completely so) market economy. This process began in the 1960s. In response to a realization by policy makers that government intervention in the economy was excessive, and to the challenge posed by the creation in Europe of a customs union (which gradually progressed into the present European Union), Israel embarked upon a very gradual process of economic liberalization. This appeared first in foreign trade: quantitative restrictions on imports were replaced by tariff protection, which was slowly reduced, and both import-substitution and exports were encouraged by more realistic exchange rates rather than by protection and subsidies. Several partial trade agreements with the European Economic Community (EEC), starting in 1964, culminated in a free trade area agreement (FTA) in industrial goods in 1975, and an FTA agreement with the U.S. came into force in 1985.

By late 1977 a considerable degree of trade liberalization had taken place. In October of that year, Israel moved from a fixed exchange rate system to a floating rate system, and restrictions on capital movements were considerably liberalized. However, there followed a disastrous inflationary spiral which curbed the capital liberalization process. Capital flows were not completely liberalized until the beginning of the new century.

Throughout the 1980s and the 1990s there were additional liberalization measures: in monetary policy, in domestic capital markets, and in various instruments of governmental interference in economic activity. The role of government in the economy was considerably decreased. On the other hand, some governmental economic functions were increased: a national health insurance system was introduced, though private health providers continued to provide health services within the national system. Social welfare payments, such as unemployment benefits, child allowances, old age pensions and minimum income support, were expanded continuously, until they formed a major budgetary expenditure. These transfer payments compensated, to a large extent, for the continuous growth of income inequality, which had moved Israel from among the developed countries with the least income inequality to those with the most. By 2003, 15 percent of the government’s budget went to health services, 15 percent to education, and an additional 20 percent to transfer payments through the National Insurance Agency.

Beginning in 2003, the Ministry of Finance embarked upon a major effort to decrease welfare payments, induce greater participation in the labor force, privatize enterprises still owned by government, and reduce both the relative size of the government deficit and the government sector itself. These activities are the result of an ideological acceptance by the present policy makers of the concept that a truly free market economy is needed to fit into and compete in the modern world of globalization.

An important economic institution is the Histadrut, a federation of labor unions. What had made this institution unique is that, in addition to normal labor union functions, it encompassed agricultural and other cooperatives, major construction and industrial enterprises, and social welfare institutions, including the main health care provider. During the Mandatory period, and for many years thereafter, the Histadrut was an important factor in economic development and in influencing economic policy. During the 1990s, the Histadrut was divested of many of its non-union activities, and its influence in the economy has greatly declined. The major unions associated with it still have much say in wage and employment issues.

The Challenges Ahead

As it moves into the new century, the Israeli economy has proven to be prosperous, as it continuously introduces and applies economic innovation, and to be capable of dealing with economic fluctuations. However, it faces some serious challenges. Some of these are the same as those faced by most industrial economies: how to reconcile innovation (the switch from traditional activities that are no longer competitive to more sophisticated, skill-intensive products) with the dislocation of labor it involves and the income inequality it intensifies. Like other small economies, Israel has to see how it fits into the new global economy, marked by the two major markets of the EU and the U.S., and the emergence of China as a major economic factor.

Special issues relate to the relations of Israel with its Arab neighbors. First are the financial implications of continuous hostilities and military threats. Clearly, if peace can come to the region, resources can be transferred to more productive uses. Furthermore, foreign investment, so important for Israel’s future growth, is very responsive to political security. Other issues depend on the type of relations established: will there be the free movement of goods and workers between Israel and a Palestinian state? Will relatively free economic relations with other Arab countries lead to a greater integration of Israel in the immediate region, or, as is more likely, will Israel’s trade orientation continue to be directed mainly to the present major industrial countries? If the latter proves true, Israel will have to carefully maneuver between the two giants: the U.S. and the EU.

Citation: Halevi, Nadav. “A Brief Economic History of Modern Israel”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-brief-economic-history-of-modern-israel/

Industrial Sickness Funds

John E. Murray, University of Toledo

Overview and Definition

Industrial sickness funds provided an early form of health insurance. They were financial institutions that extended cash payments, and in some cases medical benefits, to members who became unable to work due to sickness or injury. The term “industrial sickness funds” is a later construct describing funds organized by companies (also known as establishment funds) and funds organized by labor unions. These funds were widespread geographically in the United States; the 1890 Census of Insurance found 1,259 nationwide, with concentrations in the Northeast, Midwest, California, Texas, and Louisiana (U.S. Department of the Interior, 1895). By the turn of the twentieth century, some industrial sickness funds had accumulated considerable experience at managing sickness benefits; a few predated the Civil War. When the U.S. Commissioner of Labor surveyed a sample of sickness funds in 1908, the survey found 867 non-fraternal funds nationwide that provided temporary disability benefits (U.S. Commissioner of Labor, 1909). By the time of World War I, these funds, together with similar funds sponsored by fraternal societies, covered 30 to 40 percent of non-agricultural wage workers in the more industrialized states, or by extension, eight to nine million workers nationwide (Murray 2007a). Sickness funds were numerous, widespread, and in general carefully operated.

Industrial sickness funds were among the earliest providers of any type of health or medical benefits in the United States. In fact, their earliest product was called “workingman’s insurance” or “sickness insurance,” terms that described their clientele and purpose accurately. In the late Progressive Era, reformers promoted government insurance programs that would supplant the sickness funds. To sound more British, they used the term “health insurance,” and that is the phrase we still use for this kind of insurance contract (Numbers 1978). In the history of health insurance, the funds were contemporary with the benefit operations of fraternal societies (see fraternal sickness insurance) and led into the period of group health insurance (see health insurance, U.S.). They should be distinguished from the sickness benefits provided by some industrial insurance policies, which required weekly premium payments and paid a cash benefit upon death that was intended to cover burial expenses.

Many written histories of health insurance have missed the important role industrial sickness funds played in both relief of worker suffering and in the political process. Recent historians have tended to criticize, patronize, or ignore sickness funds. Lubove (1986) complained that they stood in the way of government insurance for all workers. Klein (2003) claimed that they were inefficient, without making explicit her standard for that judgment. Quadagno (2005) simply asserted that no one had thought of health insurance before the 1920s. Contemporary commentators such as I. M. Rubinow and Irving Fisher criticized workers who preferred “hopelessly inadequate” sickness fund insurance over government insurance as “infantile” (Derickson 2005). But these criticisms stemmed more from their authors’ ideological preconceptions than from close study of these institutions.

Rise and Operations of Industrial Sickness Funds

The period of their greatest extent and importance ran from the 1880s to around 1940. The many state labor bureau surveys of individual workers, since digitized by the University of California’s Historical Labor Statistics Project and available for download at EH.net, often asked questions such as “do you belong to a benefit society,” meaning a fraternal sickness benefit fund or an industrial sickness fund. In the surveys from the early 1890s that included this question, around a quarter of respondents indicated that they belonged to such societies. Later, closer to 1920, several states examined the extent of sickness insurance coverage in response to movements to create governmental health insurance for workers (Table 1). These later studies indicated that in the Northeast, Midwest, and California, between thirty and forty percent of non-agricultural workers were covered. Thus, remarkably, these societies had actually increased their market share over a three-decade period in which the labor force itself grew from 13 to 30 million workers (Murray 2007a). Industrial sickness funds were dynamic institutions, capable of dealing with an ever-expanding labor market.

Table 1:
Sources of Insurance in Three States (thousands of workers)

Source/state Illinois Ohio California
Fraternal society 250 200 291
Establishment fund 116 130 50
Union fund 140 85 38
Other sick fund 12 N/a 35
Commercial insurance 140 85 2 (?)
Total 660 500 416
Eligible labor force 1,850 1,500 995
Share insured 36% 33% 42%
Sources: Illinois (1919), Ohio, (1919), California (1917), Lee et al. (1957).
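
The “share insured” row is simply each state’s insured total divided by its eligible labor force. A minimal sketch in Python reproduces it from the table’s figures (in thousands):

# Reproduce Table 1's "share insured" row (all figures in thousands of workers).
insured_total = {"Illinois": 660, "Ohio": 500, "California": 416}
labor_force = {"Illinois": 1850, "Ohio": 1500, "California": 995}

for state, total in insured_total.items():
    share = total / labor_force[state]
    print(f"{state}: {share:.0%}")
# Output: Illinois: 36%, Ohio: 33%, California: 42% -- matching the table.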

Industrial sickness funds operated in a relatively simple fashion, but one that enabled them to mitigate the usual information problems that emerge in insurance markets. The process of joining a fund and making a claim typically worked as follows. A newly hired worker in a plant with such a fund explicitly applied to join, often after a probationary period during which fund managers could observe his baseline health and work habits. After admission to the fund, he paid an entrance fee followed by weekly dues. Since the average industrial worker in the 1910s earned about ten dollars a week, the entrance fee of one dollar was a half-day’s pay and the dues of ten cents made the cost to the worker around one percent of his pay packet.

A member who was unable to work contacted his fund, which then sent either a committee of fellow fund members, a physician, or both to check on the member-now-claimant. If they found him as sick as he had claimed, and in their judgment unable to work, then after a one-week waiting period he received around half his weekly pay. The waiting period was intended to let transient, less serious illnesses resolve so that the fund could support members with longer-term medical problems. To continue receiving the sick pay, the claimant needed to allow periodic examinations by a physician or visiting committee. In rough terms, the average worker missed two percent of a work year, or about a week every year, a rate that varied by age and industry. The quarter of all workers who missed any work lost on average one month’s pay; thus a typical incapacitated worker received three and a half weeks of benefit per year. Comparing the cost of dues with the expected value of benefits shows that the sickness funds were close to an actuarially fair bet: $5.00 in annual dues compared to (0.25 chance of falling ill) x (3.5 weeks of benefits) x ($5.00 weekly benefit), or about four and a half dollars in expected benefits. Thus, sickness funds appear to have been a reasonably fair deal for workers.
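
That comparison is easy to verify. A minimal sketch in Python, using only the figures just cited:

# Compare a member's annual dues with the expected value of benefits.
weekly_dues = 0.10      # ten cents per week
dues_weeks = 50         # roughly a year of contributions
annual_dues = weekly_dues * dues_weeks          # $5.00

p_claim = 0.25          # share of workers who missed any work in a year
benefit_weeks = 3.5     # average weeks of benefit per incapacitated worker
weekly_benefit = 5.00   # about half the typical $10 weekly wage

expected_benefits = p_claim * benefit_weeks * weekly_benefit
print(f"dues ${annual_dues:.2f} vs. expected benefits ${expected_benefits:.2f}")
# dues $5.00 vs. expected benefits $4.38 -- close to actuarially fair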

Establishment funds did not invent sickness benefits by any means. Rather, they systematized previous arrangements for supporting sick workers or the survivors of deceased workers. The old way was to pass the hat, a practice characterized by random assessments and arbitrary financial awards. Workers and employers both observed that contributors and beneficiaries alike detested passing the hat. Fellow workers complained about the surprise nature of the hat’s appearance, and beneficiaries suffered humiliation on top of grief when the hat contained less money than had been collected for a more popular co-worker. Eventually rules replaced discretion, and benefits were paid according to a published schedule, either as a flat rate per diem or as a percentage of wages. The 1890 Census of Insurance reported that only a few funds extended benefits “at the discretion of the society,” and by the time of the 1908 Commissioner of Labor survey the practice had disappeared (Murray 2007a).

Labor union funds began in the early nineteenth century. In the earliest union funds, members of craft unions pledged to complete jobs that ill brothers had contracted to perform but could not finish due to illness. Eventually cash benefit payments replaced the in-kind promises of labor, accompanied by cash premium payments into the union’s kitty. While criticized by many observers as unstable, labor union funds actually operated in transparent fashion. Even funds that offered unemployment benefits survived the depression of the mid-1890s by reducing benefit payments and enacting other conservative measures. Another criticism was that their benefits were too small in amount and too brief in duration, but according to the 1908 Commissioner of Labor survey, labor union funds and establishment funds offered similar levels of benefits. The cost-benefit ratio did favor establishment funds, but establishment fund membership ended with employment at a particular company, while union funds offered the substantial attraction of benefits that were portable from job to job.

The cash payment to sick workers created an incentive to take sick leave that workers without sickness insurance did not face; this is the moral hazard of sick pay. Further, workers who believed that they were more likely to make a sick claim had a stronger incentive to join a sickness fund than workers in relatively good health; this is called adverse selection. Early twentieth-century commentators on government sickness insurance disagreed on the extent and even the existence of moral hazard and adverse selection in sickness insurance. Later statistical studies found evidence for both in establishment funds. However, the funds themselves had understood the potential financial damage each could wreak and strategized to mitigate such losses. The magnitude of the sick-pay moral hazard was small, and affected primarily the tendency of the worker to make a claim in the first place. Many sickness funds limited their liability here by paying for the physician who examined the claimant and thus was responsible for approving extended sickness payments. Physicians appear to have paid attention to the wishes of those who paid them. In funds that paid the examining physician directly, claims ended significantly earlier on average. By the same token, physicians who were paid by the worker tended to approve longer absences for that worker—a sign that physicians too responded to incentives.

Testing for adverse selection depends on whether membership in a company’s fund was the worker’s choice (that is, it was voluntary) or the company’s choice (that is, it was compulsory). In fact among establishment funds in which membership was voluntary, claim rates per member were significantly higher than in mandatory membership funds. This indicates that voluntary funds were especially attractive to sicker workers, which is the essence of adverse selection. To reduce the risks of adverse selection, funds imposed age limits to keep out older applicants, physical examinations to discourage the obviously ill, probationary periods to reveal chronic illness, and pre-existing condition clauses to avoid paying for such conditions (Murray 2007a). Sickness funds thus cleverly managed information problems typical of insurance markets.
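
The comparison behind that test is simple to state: claims per member in voluntary versus compulsory funds. A sketch in Python with purely invented figures (the actual estimates appear in Murray 2007a):

# Illustrative adverse-selection check: claim rates per member in
# voluntary vs. compulsory establishment funds (numbers are invented).
funds = {
    "voluntary":  {"claims": 310, "members": 1000},
    "compulsory": {"claims": 240, "members": 1000},
}

for kind, f in funds.items():
    rate = f["claims"] / f["members"]
    print(f"{kind}: {rate:.2f} claims per member")
# A persistently higher rate in voluntary funds is the signature of adverse
# selection: sicker workers opt in when membership is a choice.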

Industrial Sickness Funds and Progressive Era Politics

Industrial sickness funds were the linchpin of efforts to promote and to oppose the Progressive campaign for state-level mandatory government sickness insurance. One consistent claim made by government insurance supporters was that workers could neither afford to pay for sickness insurance nor to save in advance of financially damaging health problems. The leading advocacy organization, the American Association for Labor Legislation (AALL), reported in its magazine that “Savings of Wage-Earners Are Insufficient to Meet this Loss,” meaning lost income during sickness (American Association for Labor Legislation 1916a). However, worker surveys of savings, income, and insurance holdings revealed that workers rationally strategized according to their varying needs and abilities across the life-cycle. Young workers saved little and were less likely to belong to industrial sickness funds—but were less likely to miss work due to illness as well. Middle aged workers, married with families to support, were relatively more likely to belong to a sickness fund. Older workers pursued a different strategy, saving more and relying on sickness funds less; among other factors, they wanted greater liquidity in their financial assets (Murray 2007a). Worker strategies reflected varying needs at varying stages of life, some (but not all) of which could be adequately addressed by membership in sickness funds.

Despite claims to the contrary by some historians, there was little popular support for government sickness insurance in early twentieth century America. Lobbying by the AALL led twelve states to charge investigatory commissions with determining the need for and feasibility of government sickness insurance (Moss 1996). The AALL offered a basic bill that could be adjusted to meet a state’s particular needs (American Association for Labor Legislation 1916b). Typically the Association prodded states to adopt a version of German insurance, which would keep the many small industrial sickness funds while forcing new members into some and creating new funds for other workers. However, these bills met consistent defeat in statehouses, earning only a fleeting victory in the New York Senate in 1919, which was followed by the bill’s death in an Assembly committee (Hoffman 2001). In the previous year a California referendum on a constitutional amendment that would allow the government to provide sickness insurance lost by nearly three to one (Costa 1996).

After the Progressive campaign exhausted itself, industrial sickness funds continued to grow through the 1920s, but the Great Depression exposed deep flaws in their structure. Many labor union funds, without a sponsoring firm to act as lender of last resort, dissolved. Establishment funds failed at a surprisingly low rate, but their survival was made possible by the tendency of firms to fire less healthy workers. Federal surveys in Minnesota found that ill health led to earlier job loss in the Depression, and comparisons of self-reported health in later surveys indicated that the unemployed were in fact in poorer health than the employed, a disparity that grew as the Depression deepened. Thus, industrial sickness funds paradoxically enjoyed falling claim rates (and thus reduced expenses) as the economy deteriorated (Murray 2007a).

Decline and Rebirth of Sickness Funds

At the same time, commercial insurers had been engaging in ever more productive research into the actuarial science of group health insurance. Eventually the insurers cut premium rates while offering benefits comparable to those available through sickness funds. As a result, the commercial insurers and Blue Cross/Blue Shield came to dominate the market for health benefits. A federal survey covering the early 1930s found more firms with group health plans than with mutual benefit societies, but the benefit societies still insured more than twice as many workers (Sayers et al. 1937). By the later 1930s that gap in the number of firms had widened in favor of group health (Figure 1), and the numbers of workers insured were about equal. After the mid-1940s, industrial sickness funds were no longer a significant player in markets for health insurance (Murray 2007a).

Figure 1: Health Benefit Provision and Source
Source: Dobbin (1992) citing National Industrial Conference Board surveys.

More recently, a type of industrial sickness fund has begun to stage a comeback. Voluntary employee beneficiary associations (VEBAs) fall under a 1928 federal law that was created to govern industrial sickness funds. VEBAs are trusts set up to pay employee benefits without earning profits for the company. In late 2007, the Big Three automakers each contracted with the United Auto Workers (UAW) to operate a VEBA that would provide health insurance for UAW members. If the automakers and their workers succeed in establishing VEBAs that stand the test of time, they will have resurrected a once-successful financial institution previously thought relegated to the pre-World War II economy (Murray 2007b).

References

American Association for Labor Legislation. “Brief for Health Insurance.” American Labor Legislation Review 6 (1916a): 155–236.

American Association for Labor Legislation. “Tentative Draft of an Act.” American Labor Legislation Review 6 (1916b): 239–68.

California Social Insurance Commission. Report of the Social Insurance Commission of the State of California, January 25, 1917. Sacramento: California State Printing Office, 1917.

Costa, Dora L. “Demand for Private and State Provided Health Insurance in the 1910s: Evidence from California.” Photocopy, MIT, 1996.

Derickson, Alan. Health Security for All: Dreams of Universal Health Care in America. Baltimore: Johns Hopkins University Press, 2005.

Dobbin, Frank. “The Origins of Private Social Insurance: Public Policy and Fringe Benefits in America, 1920-1950.” American Journal of Sociology 97 (1992): 1416-50.

Hoffman, Beatrix. The Wages of Sickness: The Politics of Health Insurance in Progressive America. Chapel Hill: University of North Carolina Press, 2001.

Klein, Jennifer. For All These Rights: Business, Labor, and the Shaping of America’s Public-Private Welfare State. Princeton: Princeton University Press, 2003.

Lee, Everett S., Ann Ratner Miller, Carol P. Brainerd, and Richard A. Easterlin, under the direction of Simon Kuznets and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, 1870-1950: Volume I, Methodological Considerations and Reference Tables. Philadelphia: Memoirs of the American Philosophical Society 45, 1957.

Lubove, Roy. The Struggle for Social Security, 1900-1930. Second edition. Pittsburgh: University of Pittsburgh Press, 1986.

Moss, David. Socializing Security: Progressive-Era Economists and the Origins of American Social Policy. Cambridge: Harvard University Press, 1996.

Murray, John E. Origins of American Health Insurance: A History of Industrial Sickness Funds. New Haven: Yale University Press, 2007a.

Murray, John E. “UAW Members Must Treat Health Care Money as Their Own.” Detroit Free Press, 21 November 2007b.

Ohio Health and Old Age Insurance Commission. Health, Health Insurance, Old Age Pensions: Report, Recommendations, Dissenting Opinions. Columbus: Heer, 1919.

Quadagno, Jill. One Nation, Uninsured: Why the U. S. Has No National Health Insurance. New York: Oxford University Press, 2005.

Sayers, R. R., Gertrud Kroeger, and W. M. Gafafer. “General Aspects and Functions of the Sick Benefit Organization.” Public Health Reports 52 (November 5, 1937): 1563–80.

State of Illinois. Report of the Health Insurance Commission of the State of Illinois, May 1, 1919. Springfield: State of Illinois, 1919.

U.S. Department of the Interior. Report on Insurance Business in the United States at the Eleventh Census: 1890; pt. 2, “Life Insurance.” Washington, DC: GPO, 1895.

U.S. Commissioner of Labor. Twenty-third Annual Report of the Commissioner of Labor, 1908: Workmen’s Insurance and Benefit Funds in the United States. Washington, DC: GPO, 1909.

Citation: Murray, John. “Industrial Sickness Funds, US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/industrial-sickness-funds/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year | Weeks Report | Aldrich Report
1830 | 69.1 | n/a
1840 | 67.1 | 68.4
1850 | 65.5 | 69.0
1860 | 62.0 | 66.0
1870 | 61.1 | 63.0
1880 | 60.7 | 61.8
1890 | n/a | 60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.
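
The gap between the two series can be checked directly from Table 1’s overlapping years; a short Python calculation (values as tabulated above):

# Gap between the Aldrich and Weeks estimates for overlapping years (hours/week).
weeks_rpt = {1840: 67.1, 1850: 65.5, 1860: 62.0, 1870: 61.1, 1880: 60.7}
aldrich_rpt = {1840: 68.4, 1850: 69.0, 1860: 66.0, 1870: 63.0, 1880: 61.8}

gaps = {yr: round(aldrich_rpt[yr] - weeks_rpt[yr], 1) for yr in weeks_rpt}
print(gaps)                # {1840: 1.3, 1850: 3.5, 1860: 4.0, 1870: 1.9, 1880: 1.1}
print(max(gaps.values()))  # 4.0 -- the "as much as four hours" cited above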

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year | Census of Manufacturing | Jones Manufacturing | Owen Nonstudent Males | Greis Manufacturing | Greis All Workers | Census/CPS All Workers
1900 | 59.6* | 55.0 | 58.5 | | |
1904 | 57.9 | 53.6 | 57.1 | | |
1909 | 56.8 (57.3) | 53.1 | 55.7 | | |
1914 | 55.1 (55.5) | 50.1 | 54.0 | | |
1919 | 50.8 (51.2) | 46.1 | 50.0 | | |
1924 | 51.1* | 48.8 | 48.8 | | |
1929 | 50.6 | 48.0 | 48.7 | | |
1934 | 34.4 | | 40.6 | | |
1940 | 37.6 | | 42.5 | | | 43.3
1944 | 44.2 | | 46.9 | | |
1947 | 39.2 | | 42.4 | 43.4 | 44.7 |
1950 | 38.7 | | 41.1 | | | 42.7
1953 | 38.6 | | 41.5 | 43.2 | 44.0 |
1958 | 37.8* | | 40.9 | 42.0 | 43.4 |
1960 | | | 41.0 | | | 40.9
1963 | | | 41.6 | 43.2 | 43.2 |
1968 | | | 41.7 | 41.2 | 42.0 |
1970 | | | 41.1 | | | 40.3
1973 | | | 40.6 | | 41.0 |
1978 | 41.3* | | 39.7 | | 39.1 |
1980 | | | | | | 39.8
1988 | | | | | | 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the Census of Manufacturing column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which there is available information. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1910 anthracite coalminers’ workweeks were about forty percent shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year Manufacturing Construction Railroads Bituminous Coal Anthracite Coal
1850s about 66 about 66
1870s about 62 about 60
1890 60.0 51.3
1900 59.6 50.3 52.3 42.8 35.8
1910 57.3 45.2 51.5 38.9 43.3
1920 51.2 43.8 46.8 39.3 43.2
1930 50.6 42.9 33.3 37.0
1940 37.6 42.5 27.8 27.2
1955 38.5 37.1 32.4 31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. In addition, Coleman and Pencavel also find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime that is spent in paid work (due largely to lengthening periods of education and retirement) the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
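
Fogel’s “less than one-fourth” projection follows directly from Table 6, and Table 5’s columns each account for a full 24-hour day. A quick check in Python:

# Share of lifetime discretionary time devoted to work (Table 6).
lifetime_work = {1880: 182_100, 1995: 122_400, 2040: 75_900}
lifetime_disc = {1880: 225_900, 1995: 298_500, 2040: 321_900}
for yr in lifetime_work:
    print(yr, f"{lifetime_work[yr] / lifetime_disc[yr]:.1%}")
# 1880: 80.6%, 1995: 41.0%, 2040: 23.6% -- under one-fourth by 2040

# Table 5's activities sum to a full day in both years.
print(round(8 + 2 + 2 + 1 + 8.5 + 0.7 + 1.8, 1))  # 24.0 (1880)
print(round(8 + 2 + 2 + 1 + 4.7 + 0.5 + 5.8, 1))  # 24.0 (1995)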

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee fell from 1908 to 1704 in the U.S. between 1950 and 1979, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2170 hours to 1698 hours between 1950 and 1979. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, but greater than in Denmark, while less than in the USSR.

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity US USSR (Pskov)
Men Women Men Women
1965 1981 1965 1981 1965 1981 1965 1981
Total Work 63.1 57.8 60.9 54.4 64.4 65.7 75.3 66.3
Market Work 51.6 44.0 18.9 23.9 54.6 53.8 43.8 39.3
Commuting 4.8 3.5 1.6 2.0 4.9 5.2 3.7 3.4
Housework 11.5 13.8 41.8 30.5 9.8 11.9 31.5 27.0
Activity Japan Denmark
Men Women Men Women
1965 1985 1965 1985 1964 1987 1964 1987
Total Work 60.5 55.5 64.7 55.6 45.4 46.2 43.4 43.9
Market Work 57.7 52.0 33.2 24.6 41.7 33.4 13.3 20.8
Commuting 3.6 4.5 1.0 1.2 n.a n.a n.a n.a
Housework 2.8 3.5 31.5 31.0 3.7 12.8 30.1 23.1

Source: Juster and Stafford (1991)
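
Returning to Greis’s annual-hours comparison above, both percentage declines can be reproduced directly:

# Percentage declines in annual hours per employee, 1950-1979 (Greis 1984).
us_decline = (1908 - 1704) / 1908
europe_decline = (2170 - 1698) / 2170
print(f"U.S.: {us_decline:.1%}")            # 10.7%
print(f"W. Europe: {europe_decline:.1%}")   # 21.8%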

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good, and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work, with the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1867. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours and, by the late 1860s, efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago, an anarchist’s bomb and the gunfire that followed at an eight-hour rally left seven policemen dead, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law in 1874 set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some cases, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percent of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912) was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, which provided that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period, the Adamson Act of 1916, which was passed to counter a threatened nationwide strike and granted rail workers the basic eight-hour day. (The law set eight hours as the basic workday and required higher overtime pay for longer hours.)

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered if organized labor would maintain its newfound power and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays on Saturday or Saturday off — especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, while only 32 had it by 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Of the nation’s approximately 400,000 workers with five-day weeks, more than half were employed by Ford. However, Ford’s motives were questioned by many employers, who argued that productivity gains from reducing hours ceased beyond about forty-eight hours per week. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours’ Reduction during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing the remaining work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by arguments of business that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933 the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours’ Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level, the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers didn’t desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. With the end of the war in 1946, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to 8-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an 8-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told about the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation of being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If employers set the workweek too long, workers may quit and few replacements will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.
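This tradeoff can be made concrete with a small numerical sketch. Everything in it is hypothetical: three stylized jobs, a Cobb-Douglas utility function, and an income weight alpha that captures how much the worker values earnings relative to leisure.

    # A minimal sketch (all numbers hypothetical): a worker chooses among
    # jobs that bundle weekly hours with an hourly wage, trading income
    # against leisure. The income weight `alpha` governs the choice.

    jobs = {
        "short-hours": {"hours": 35, "wage": 9.00},
        "standard": {"hours": 40, "wage": 10.00},
        "long-hours": {"hours": 50, "wage": 11.00},  # wage premium for long weeks
    }

    def utility(hours, wage, alpha, waking_hours=112):
        # Cobb-Douglas utility over weekly income and weekly leisure hours
        income = hours * wage
        leisure = waking_hours - hours
        return income ** alpha * leisure ** (1 - alpha)

    for alpha in (0.5, 0.3, 0.2):
        best = max(jobs, key=lambda name: utility(alpha=alpha, **jobs[name]))
        print(f"alpha = {alpha}: chooses the {best} job")
    # alpha = 0.5 picks long hours, 0.3 picks standard, 0.2 picks short
    # hours: workers with different tastes sort into different workweeks.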

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996), is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work': Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen, “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

A History of Futures Trading in the United States

Joseph Santos, South Dakota State University

Many contemporary [nineteenth century] critics were suspicious of a form of business in which one man sold what he did not own to another who did not want it… Morton Rothstein (1966)

Anatomy of a Futures Market

The Futures Contract

A futures contract is a standardized agreement between a buyer and a seller to exchange an amount and grade of an item at a specific price and future date. The item or underlying asset may be an agricultural commodity, a metal, mineral or energy commodity, a financial instrument or a foreign currency. Because futures contracts are derived from these underlying assets, they belong to a family of financial instruments called derivatives.

Traders buy and sell futures contracts on an exchange – a marketplace that is operated by a voluntary association of members. The exchange provides buyers and sellers the infrastructure (trading pits or their electronic equivalent), legal framework (trading rules, arbitration mechanisms), contract specifications (grades, standards, time and method of delivery, terms of payment) and clearing mechanisms (see section titled The Clearinghouse) necessary to facilitate futures trading. Only exchange members are allowed to trade on the exchange. Nonmembers trade through commission merchants – exchange members who service nonmember trades and accounts for a fee.

The September 2004 light sweet crude oil contract is an example of a petroleum (mineral) future. It trades on the New York Mercantile Exchange (NYM). The contract is standardized – every one is an agreement to trade 1,000 barrels of grade light sweet crude in September, on a day of the seller’s choosing. As of May 25, 2004 the contract sold for $40,120 = $40.12 × 1,000.

The Clearinghouse

The clearinghouse is the counterparty to every trade – its members buy every contract that traders sell on the exchange and sell every contract that traders buy on the exchange. Absent a clearinghouse, traders would interact directly, and this would introduce two problems. First, traders’ concerns about their counterparty’s credibility would impede trading. For example, Trader A might refuse to sell to Trader B, who is supposedly untrustworthy.

Second, traders would lose track of their counterparties. This would occur because traders typically settle their contractual obligations by offset – traders buy/sell the contracts that they sold/bought earlier. For example, Trader A sells a contract to Trader B, who sells a contract to Trader C to offset her position, and so on.

The clearinghouse eliminates both of these problems. First, it is a guarantor of all trades. If a trader defaults on a futures contract, the clearinghouse absorbs the loss. Second, clearinghouse members, and not outside traders, reconcile offsets at the end of trading each day. Margin accounts and a process called marking-to-market all but assure the clearinghouse’s solvency.

A margin account is a balance that a trader maintains with a commission merchant in order to offset the trader’s daily unrealized losses in the futures markets. Commission merchants also maintain margins with clearinghouse members, who maintain them with the clearinghouse. The margin account begins as an initial lump sum deposit, or original margin.

To understand the mechanics and merits of marking-to-market, consider that the values of the long and short positions of an existing futures contract change daily, even though futures trading is a zero-sum game – a buyer’s gain/loss equals a seller’s loss/gain. So, the clearinghouse breaks even on every trade, while its individual members’ positions change in value daily.

With this in mind, suppose Trader B buys a 5,000 bushel soybean contract for $9.70 per bushel from Trader S. Technically, Trader B buys the contract from Clearinghouse Member S and Trader S sells the contract to Clearinghouse Member B. Now, suppose that at the end of the day the contract is priced at $9.71. That evening the clearinghouse marks-to-market each member’s account. That is to say, the clearinghouse credits Member B’s margin account $50 (one cent on 5,000 bushels) and debits Member S’s margin account the same amount.

Member B is now in a position to draw $50 from the clearinghouse, while Member S must pay the clearinghouse a $50 variation margin – incremental margin equal to the difference between a contract’s price and its current market value. In turn, clearinghouse members debit and credit accordingly the margin accounts of their commission merchants, who do the same to the margin accounts of their clients (i.e., traders). This iterative process all but assures the clearinghouse a sound financial footing. In the unlikely event that a trader defaults, the clearinghouse closes out the position and loses, at most, the trader’s one day loss.
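These mechanics are simple enough to replay in a few lines of code. The sketch below recomputes the soybean example; the contract size, entry price, and first settlement price come from the text, while the later settlement prices are made up.

    # A minimal sketch of daily marking-to-market, replaying the soybean
    # example above; the second and third settlement prices are made up.

    CONTRACT_SIZE = 5_000  # bushels per contract

    def mark_to_market(entry_price, settlement_prices, size=CONTRACT_SIZE):
        # Daily credit (+) or debit (-) to the long Member B's margin account;
        # the short Member S's account moves by the same amount, sign reversed.
        flows, prev = [], entry_price
        for settle in settlement_prices:
            flows.append(round((settle - prev) * size, 2))
            prev = settle
        return flows

    print(mark_to_market(9.70, [9.71, 9.69, 9.72]))
    # [50.0, -100.0, 150.0]: on day one the clearinghouse credits Member B
    # $50 and debits Member S $50, just as in the example above.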

Active Futures Markets

Futures exchanges create futures contracts. And, because futures exchanges compete for traders, they must create contracts that appeal to the financial community. For example, the New York Mercantile Exchange created its light sweet crude oil contract in order to fill an unexploited niche in the financial marketplace.

Not all contracts are successful and those that are may, at times, be inactive – the contract exists, but traders are not trading it. For example, of all contracts introduced by U.S. exchanges between 1960 and 1977, only 32% traded in 1980 (Stein 1986, 7). Consequently, entire exchanges can become active – e.g., the New York Futures Exchange opened in 1980 – or inactive – e.g., the New Orleans Exchange closed in 1983 (Leuthold 1989, 18). Government price supports or other such regulation can also render trading inactive (see Carlton 1984, 245).

Futures contracts succeed or fail for many reasons, but successful contracts do share certain basic characteristics (see for example, Baer and Saxon 1949, 110-25; Hieronymus 1977, 19-22). To wit, the underlying asset is homogeneous, reasonably durable, and standardized (easily describable); its supply and demand are ample; its price is unfettered; and all relevant information is available to all traders. For example, futures contracts have never derived from, say, artwork (heterogeneous and not standardized) or rent-controlled housing rights (supply, and hence price, is fettered by regulation).

Purposes and Functions

Futures markets have three fundamental purposes. The first is to enable hedgers to shift price risk – asset price volatility – to speculators in return for basis risk – changes in the difference between a futures price and the cash, or current spot price of the underlying asset. Because basis risk is typically less than asset price risk, the financial community views hedging as a form of risk management and speculating as a form of risk taking.

Generally speaking, to hedge is to take opposing positions in the futures and cash markets. Hedgers include (but are not restricted to) farmers, feedlot operators, grain elevator operators, merchants, millers, utilities, export and import firms, refiners, lenders, and hedge fund managers (see Peck 1985, 13-21). Meanwhile, to speculate is to take a position in the futures market with no counter-position in the cash market. Speculators may not be affiliated with the underlying cash markets.

To demonstrate how a hedge works, assume Hedger A buys, or longs, 5,000 bushels of corn, which is currently worth $2.40 per bushel, or $12,000 = $2.40 × 5,000; the date is May 1st and Hedger A wishes to preserve the value of his corn inventory until he sells it on June 1st. To do so, he takes a position in the futures market that is exactly opposite his position in the spot – current cash – market. For example, Hedger A sells, or shorts, a July futures contract for 5,000 bushels of corn at a price of $2.50 per bushel; put differently, Hedger A commits to sell in July 5,000 bushels of corn for $12,500 = $2.50 × 5,000. Recall that to sell (buy) a futures contract means to commit to sell (buy) an amount and grade of an item at a specific price and future date.

Absent basis risk, Hedger A’s spot and futures markets positions will preserve the value of the 5,000 bushels of corn that he owns, because a fall in the spot price of corn will be matched penny for penny by a fall in the futures price of corn. For example, suppose that by June 1st the spot price of corn has fallen five cents to $2.35 per bushel. Absent basis risk, the July futures price of corn has also fallen five cents to $2.45 per bushel.

So, on June 1st, Hedger A sells his 5,000 bushels of corn and loses $250 = ($2.40 - $2.35) × 5,000 in the spot market. At the same time, he buys a July futures contract for 5,000 bushels of corn and gains $250 = ($2.50 - $2.45) × 5,000 in the futures market. Notice, because Hedger A has both sold and bought a July futures contract for 5,000 bushels of corn, he has offset his commitment in the futures market.
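The arithmetic of this hedge can be tallied in a short sketch. All prices below come from the example, and basis risk is assumed away, as in the text.

    # A minimal sketch of Hedger A's textbook corn hedge; all prices are
    # taken from the example above and basis risk is assumed away.

    BUSHELS = 5_000

    def hedge_outcome(spot_buy, spot_sell, futures_sell, futures_buy, qty=BUSHELS):
        spot_pnl = round((spot_sell - spot_buy) * qty, 2)            # inventory gain/loss
        futures_pnl = round((futures_sell - futures_buy) * qty, 2)   # short future's gain/loss
        return spot_pnl, futures_pnl, spot_pnl + futures_pnl

    print(hedge_outcome(spot_buy=2.40, spot_sell=2.35,
                        futures_sell=2.50, futures_buy=2.45))
    # (-250.0, 250.0, 0.0): the $250 spot-market loss is exactly offset
    # by the $250 futures-market gain, preserving the inventory's value.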

This example of a textbook hedge – one that eliminates price risk entirely – is instructive but it is also a bit misleading because: basis risk exists; hedgers may choose to hedge more or less than 100% of their cash positions; and hedgers may cross hedge – trade futures contracts whose underlying assets are not the same as the assets that the hedger owns. So, in reality hedgers cannot immunize entirely their cash positions from market fluctuations and in some cases they may not wish to do so. Again, the purpose of a hedge is not to avoid risk, but rather to manage or even profit from it.

The second fundamental purpose of a futures market is to facilitate firms’ acquisitions of operating capital – short term loans that finance firms’ purchases of intermediate goods such as inventories of grain or petroleum. For example, lenders are relatively more likely to finance, at or near prime lending rates, hedged (versus non-hedged) inventories. The futures contract is an efficient form of collateral because it costs only a fraction of the inventory’s value, or the margin on a short position in the futures market.

Speculators make the hedge possible because they absorb the inventory’s price risk; for example, the ultimate counterparty to the inventory dealer’s short position is a speculator. In the absence of futures markets, hedgers could only engage in forward contracts – unique agreements between private parties, who operate independently of an exchange or clearinghouse. Hence, the collateral value of a forward contract is less than that of a futures contract.3

The third fundamental purpose of a futures market is to provide information to decision makers regarding the market’s expectations of future economic events. So long as a futures market is efficient – the market forms expectations by taking into proper consideration all available information – its forecasts of future economic events are relatively more reliable than an individual’s. Forecast errors are expensive, and well informed, highly competitive, profit-seeking traders have a relatively greater incentive to minimize them.

The Evolution of Futures Trading in the U.S.

Early Nineteenth Century Grain Production and Marketing

Into the early nineteenth century, the vast majority of American grains – wheat, corn, barley, rye and oats – were produced throughout the hinterlands of the United States by producers who acted primarily as subsistence farmers – agricultural producers whose primary objective was to feed themselves and their families. Although many of these farmers sold their surplus production on the market, most lacked access to large markets, as well as the incentive, affordable labor supply, and myriad technologies necessary to practice commercial agriculture – the large scale production and marketing of surplus agricultural commodities.

At this time, the principal trade route to the Atlantic seaboard was by river through New Orleans4; though the South was also home to terminal markets – markets of final destination – for corn, provisions and flour. Smaller local grain markets existed along the tributaries of the Ohio and Mississippi Rivers and east-west overland routes. The latter were used primarily to transport manufactured (high valued and nonperishable) goods west.

Most farmers, and particularly those in the East North Central States – the region consisting today of Illinois, Indiana, Michigan, Ohio and Wisconsin – could not ship bulk grains to market profitably (Clark 1966, 4, 15).5 Instead, most converted grains into relatively high value flour, livestock, provisions and whiskies or malt liquors and shipped them south or, in the case of livestock, drove them east (14).6 Oats traded locally, if at all; their low value-to-weight ratios made their shipment, in bulk or otherwise, prohibitive (15n).

The Great Lakes provided a natural water route east to Buffalo but, in order to ship grain this way, producers in the interior East North Central region needed local ports to receive their production. Although the Erie Canal connected Lake Erie to the port of New York by 1825, water routes that connected local interior ports throughout northern Ohio to the Canal were not operational prior to the mid-1830s. Indeed, initially the Erie aided the development of the Old Northwest, not because it facilitated eastward grain shipments, but rather because it allowed immigrants and manufactured goods easy access to the West (Clark 1966, 53).

By 1835 the mouths of rivers and streams throughout the East North Central States had become the hubs, or port cities, from which farmers shipped grain east via the Erie. By this time, shippers could also opt to go south on the Ohio River and then upriver to Pittsburgh and ultimately to Philadelphia, or north on the Ohio Canal to Cleveland, Buffalo and ultimately, via the Welland Canal, to Lake Ontario and Montreal (19).

By 1836 shippers carried more grain north on the Great Lakes and through Buffalo than south on the Mississippi through New Orleans (Odle 1964, 441). Even so, as late as 1840 Ohio was the only state or region that participated significantly in the Great Lakes trade. Illinois, Indiana, Michigan, and the region of modern day Wisconsin either produced for their respective local markets or relied upon Southern demand. As of 1837 only 4,107 residents populated the “village” of Chicago, which became an official city in that year (Hieronymus 1977, 72).7

Antebellum Grain Trade Finance in the Old Northwest

Before the mid-1860s, a network of banks, grain dealers, merchants, millers and commission houses – buying and selling agents located in the central commodity markets – employed an acceptance system to finance the U.S. grain trade (see Clark 1966, 119; Odle 1964, 442). For example, a miller who required grain would instruct an agent in, say, New York to establish, on the miller’s behalf, a line of credit with a merchant there. The merchant extended this line of credit in the form of sight drafts, which the merchant made payable, in sixty or ninety days, up to the amount of the line of credit.

With this credit line established, commission agents in the hinterland would arrange with grain dealers to acquire the necessary grain. The commission agent would obtain warehouse receipts – dealer certified negotiable titles to specific lots and quantities of grain in store – from dealers, attach these to drafts that he drew on the merchant’s line of credit, and discount these drafts at his local bank in return for banknotes; the local bank would forward these drafts on to the New York merchant’s bank for redemption. The commission agents would use these banknotes to advance – lend – grain dealers roughly three quarters of the current market value of the grain. The commission agent would pay dealers the remainder (minus finance and commission fees) when the grain was finally sold in the East. That is, commission agents and grain dealers entered into consignment contracts.

Unfortunately, this approach linked banks, grain dealers, merchants, millers and commission agents such that the “entire procedure was attended by considerable risk and speculation, which was assumed by both the consignee and consignor” (Clark 1966, 120). The system was reasonably adequate if grain prices went unchanged between the time the miller procured the credit and the time the grain (bulk or converted) was sold in the East, but this was rarely the case. The fundamental problem with this system of finance was that commission agents were effectively asking banks to lend them money to purchase as yet unsold grain. To be sure, this inadequacy was most apparent during financial panics, when many banks refused to discount these drafts (Odle 1964, 447).

Grain Trade Finance in Transition: Forward Contracts and Commodity Exchanges

In 1848 the Illinois-Michigan Canal connected the Illinois River to Lake Michigan. The canal enabled farmers in the hinterlands along the Illinois River to ship their produce to merchants located along the river. These merchants accumulated, stored and then shipped grain to Chicago, Milwaukee and Racine. At first, shippers tagged deliverables according to producer and region, while purchasers inspected and chose these tagged bundles upon delivery. Commercial activity at the three grain ports grew throughout the 1850s. Chicago emerged as a dominant grain (primarily corn) hub later that decade (Pierce 1957, 66).8

Amidst this growth of Lake Michigan commerce, a confluence of innovations transformed the grain trade and its method of finance. By the 1840s, grain elevators and railroads facilitated high volume grain storage and shipment, respectively. Consequently, country merchants and their Chicago counterparts required greater financing in order to store and ship this higher volume of grain.9 And, high volume grain storage and shipment required that inventoried grains be fungible – of such a nature that one part or quantity could be replaced by another equal part or quantity in the satisfaction of an obligation. For example, because a bushel of grade No. 2 Spring Wheat was fungible, its price did not depend on whether it came from Farmer A, Farmer B, Grain Elevator C, or Train Car D.

Merchants could secure these larger loans more easily and at relatively lower rates if they obtained firm price and quantity commitments from their buyers. So, merchants began to engage in forward (not futures) contracts. According to Hieronymus (1977), the first such “time contract” on record was made on March 13, 1851. It specified that 3,000 bushels of corn were to be delivered to Chicago in June at a price of one cent below the March 13th cash market price (74).10

Meanwhile, commodity exchanges serviced the trade’s need for fungible grain. In the 1840s and 1850s these exchanges emerged as associations for dealing with local issues such as harbor infrastructure and commercial arbitration (e.g., Detroit in 1847, Buffalo, Cleveland and Chicago in 1848 and Milwaukee in 1849) (see Odle 1964). By the 1850s they established a system of staple grades, standards and inspections, all of which rendered inventory grain fungible (Baer and Saxon 1949, 10; Chandler 1977, 211). As collection points for grain, cotton, and provisions, they weighed, inspected and classified commodity shipments that passed from west to east. They also facilitated organized trading in spot and forward markets (Chandler 1977, 211; Odle 1964, 439).11

The largest and most prominent of these exchanges was the Board of Trade of the City of Chicago, a grain and provisions exchange established in 1848 by a State of Illinois corporate charter (Boyle 1920, 38; Lurie 1979, 27); the exchange is known today as the Chicago Board of Trade (CBT). For at least its first decade, the CBT functioned as a meeting place for merchants to resolve contract disputes and discuss commercial matters of mutual concern. Participation was part-time at best. The Board’s first directorate of 25 members included “a druggist, a bookseller, a tanner, a grocer, a coal dealer, a hardware merchant, and a banker” and attendance was often encouraged by free lunches (Lurie 1979, 25).

However, in 1859 the CBT became a state- (of Illinois) chartered private association. As such, the exchange requested and received from the Illinois legislature sanction to establish rules “for the management of their business and the mode in which it shall be transacted, as they may think proper;” to arbitrate over and settle disputes with the authority as “if it were a judgment rendered in the Circuit Court;” and to inspect, weigh and certify grain and grain trades such that these certifications would be binding upon all CBT members (Lurie 1979, 27).

Nineteenth Century Futures Trading

By the 1850s traders sold and resold forward contracts prior to actual delivery (Hieronymus 1977, 75). A trader could not offset, in the futures market sense of the term, a forward contract. Nonetheless, the existence of a secondary market – a market for extant, as opposed to newly issued, securities – in forward contracts suggests, if nothing else, that speculators were active in these early time contracts.

On March 27, 1863, the Chicago Board of Trade adopted its first rules and procedures for trade in forwards on the exchange (Hieronymus 1977, 76). The rules addressed contract settlement, which was (and still is) the fundamental challenge associated with a forward contract – finding a trader who was willing to take a position in a forward contract was relatively easy to do; finding that trader at the time of contract settlement was not.

The CBT began to transform actively traded and reasonably homogeneous forward contracts into futures contracts in May, 1865. At this time, the CBT: restricted trade in time contracts to exchange members; standardized contract specifications; required traders to deposit margins; and specified formally contract settlement, including payments and deliveries, and grievance procedures (Hieronymus 1977, 76).

The inception of organized futures trading is difficult to date. This is due, in part, to semantic ambiguities – e.g., was a “to arrive” contract a forward contract or a futures contract or neither? However, most grain trade historians agree that storage (grain elevators), shipment (railroad), and communication (telegraph) technologies, a system of staple grades and standards, and the impetus to speculation provided by the Crimean and U.S. Civil Wars enabled futures trading to ripen by about 1874, at which time the CBT was the U.S.’s premier organized commodities (grain and provisions) futures exchange (Baer and Saxon 1949, 87; Chandler 1977, 212; CBT 1936, 18; Clark 1966, 120; Dies 1925, 15; Hoffman 1932, 29; Irwin 1954, 77, 82; Rothstein 1966, 67).

Nonetheless, futures exchanges in the mid-1870s lacked modern clearinghouses, with which most exchanges began to experiment only in the mid-1880s. For example, the CBT’s clearinghouse got its start in 1884, and a complete and mandatory clearing system was in place at the CBT by 1925 (Hoffman 1932, 199; Williams 1982, 306). The earliest formal clearing and offset procedures were established by the Minneapolis Grain Exchange in 1891 (Peck 1985, 6).

Even so, rudiments of a clearing system – one that freed traders from dealing directly with one another – were in place by the 1870s (Hoffman 1920, 189). That is to say, brokers assumed the counter-position to every trade, much as clearinghouse members would do decades later. Brokers settled offsets between one another, though in the absence of a formal clearing procedure these settlements were difficult to accomplish.

Direct settlements were simple enough. Here, two brokers would settle in cash their offsetting positions between one another only. Nonetheless, direct settlements were relatively uncommon because offsetting purchases and sales between brokers rarely balanced with respect to quantity. For example, B1 might buy a 5,000 bushel corn future from B2, who then might buy a 6,000 bushel corn future from B1; in this example, 1,000 bushels of corn remain unsettled between B1 and B2. Of course, the two brokers could offset the remaining 1,000 bushel contract if B2 sold a 1,000 bushel corn future to B1. But what if B2 had already sold a 1,000 bushel corn future to B3, who had sold a 1,000 bushel corn future to B1? In this case, each broker’s net futures market position is offset, but all three must meet in order to settle their respective positions. Brokers referred to such a meeting as a ring settlement. Finally, if, in this example, B1 and B3 did not have positions with each other, B2 could settle her position if she transferred her commitment (which she has with B1) to B3. Brokers referred to this method as a transfer settlement. In either ring or transfer settlements, brokers had to find other brokers who held and wished to settle open counter-positions. Often brokers used runners to literally search the offices and corridors for the requisite counterparties (see Hoffman 1932, 185-200).
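The bookkeeping behind these settlements amounts to netting quantities across brokers. A minimal sketch of the three-broker example above shows why every broker can be flat overall even though no pair of brokers balances bilaterally.

    # A minimal sketch of the netting problem behind ring settlements,
    # using the hypothetical trades from the example above.

    from collections import defaultdict

    # (buyer, seller, bushels)
    trades = [
        ("B1", "B2", 5_000),  # B1 buys 5,000 bu from B2
        ("B2", "B1", 6_000),  # B2 buys 6,000 bu from B1
        ("B3", "B2", 1_000),  # B2 had already sold 1,000 bu to B3
        ("B1", "B3", 1_000),  # B3 had already sold 1,000 bu to B1
    ]

    net = defaultdict(int)
    for buyer, seller, qty in trades:
        net[buyer] += qty   # a purchase is a long position
        net[seller] -= qty  # a sale is a short position

    print(dict(net))  # {'B1': 0, 'B2': 0, 'B3': 0}
    # Every broker is flat, yet 1,000 bu remains open between each pair;
    # only a three-way "ring" meeting, or a transfer of B2's commitment
    # to B3, settles the positions.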

Finally, the transformation in Chicago grain markets from forward to futures trading occurred almost simultaneously in New York cotton markets. Forward contracts for cotton traded in New York (and Liverpool, England) by the 1850s. And, like Chicago, organized trading in cotton futures began on the New York Cotton Exchange in about 1870; rules and procedures formalized the practice in 1872. Futures trading on the New Orleans Cotton Exchange began around 1882 (Hieronymus 1977, 77).

Other successful nineteenth century futures exchanges include the New York Produce Exchange, the Milwaukee Chamber of Commerce, the Merchant’s Exchange of St. Louis, the Chicago Open Board of Trade, the Duluth Board of Trade, and the Kansas City Board of Trade (Hoffman 1920, 33; see Peck 1985, 9).

Early Futures Market Performance

Volume

Data on grain futures volume prior to the 1880s are not available (Hoffman 1932, 30). Though in the 1870s “[CBT] officials openly admitted that there was no actual delivery of grain in more than ninety percent of contracts” (Lurie 1979, 59). Indeed, Chart 1 demonstrates that trading was relatively voluminous in the nineteenth century.

An annual average of 23,600 million bushels of grain futures traded between 1884 and 1888, or eight times the annual average amount of crops produced during that period. By comparison, an annual average of 25,803 million bushels of grain futures traded between 1966 and 1970, or four times the annual average amount of crops produced during that period. In 2002, futures volume outnumbered crop production by a factor of eleven.

The comparable data for cotton futures are presented in Chart 2. Again here, trading in the nineteenth century was significant. To wit, by 1879 futures volume had outnumbered production by a factor of five, and by 1896 this factor had reached eight.

Price of Storage

Nineteenth century observers of early U.S. futures markets either credited them for stabilizing food prices, or discredited them for wagering on, and intensifying, the economic hardships of Americans (Baer and Saxon 1949, 12-20, 56; Chandler 1977, 212; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115). To be sure, the performance of early futures markets remains relatively unexplored. The extant research on the subject has generally examined this performance in the context of two perspectives on the theory of efficiency: the price of storage and futures price efficiency more generally.

Holbrook Working pioneered research into the price of storage – the relationship, at a point in time, between prices (of storable agricultural commodities) applicable to different future dates (Working 1949, 1254).12 For example, what is the relationship between the current spot price of wheat and the current September 2004 futures price of wheat? Or, what is the relationship between the current September 2004 futures price of wheat and the current May 2005 futures price of wheat?

Working reasoned that these prices could not differ because of events that were expected to occur between these dates. For example, if the May 2004 wheat futures price is less than the September 2004 price, this cannot be due to, say, the expectation of a small harvest between May 2004 and September 2004. On the contrary, traders should factor such an expectation into both May and September prices. And, assuming that they do, then this difference can only reflect the cost of carrying – storing – these commodities over time.13 This strict interpretation has since been modified somewhat (see Peck 1985, 44).

So, for example, the September 2004 price equals the May 2004 price plus the cost of storing wheat between May 2004 and September 2004. If the difference between these prices is greater or less than the cost of storage, and the market is efficient, arbitrage will bring the difference back to the cost of storage – e.g., if the difference in prices exceeds the cost of storage, then traders can profit if they buy the May 2004 contract, sell the September 2004 contract, take delivery in May and store the wheat until September. Working (1953) demonstrated empirically that the theory of the price of storage could explain quite satisfactorily these inter-temporal differences in wheat futures prices at the CBT as early as the late 1880s (Working 1953, 556).
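The arbitrage argument reduces to simple arithmetic. In the sketch below the prices and the storage cost are hypothetical, and interest and fees are ignored; the point is only that a spread above the cost of carry invites the trade just described.

    # A minimal sketch of the price-of-storage arbitrage; all prices and
    # the storage cost are hypothetical, and interest and fees are ignored.

    def carry_profit(near_price, far_price, storage_cost):
        # Per-bushel profit from buying the near contract, taking delivery,
        # storing, and delivering against a short far-month position.
        return round(far_price - near_price - storage_cost, 4)

    # Suppose May wheat trades at $3.40, September at $3.60, and four
    # months of storage costs $0.12 per bushel:
    print(carry_profit(3.40, 3.60, 0.12))
    # 0.08 > 0: traders buy May, sell September, and store, which pushes
    # the spread back toward the $0.12 cost of storage.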

Futures Price Efficiency

Many contemporary economists tend to focus on futures price efficiency more generally (for example, Beck 1994; Kahl and Tomek 1986; Kofi 1973; McKenzie, et al. 2002; Tomek and Gray, 1970). That is to say, do futures prices shadow consistently (but not necessarily equal) traders’ rational expectations of future spot prices? Here, the research focuses on the relationship between, say, the cash price of wheat in September 2004 and the September 2004 futures price of wheat quoted two months earlier in July 2004.

Figure 1 illustrates the behavior of corn futures prices and their corresponding spot prices between 1877 and 1890. The data consist of the average month t futures price in the last full week of month t-2 and the average cash price in the first full week of month t.

The futures price and its corresponding spot price need not be equal; futures price efficiency does not mean that the futures market is clairvoyant. But, a difference between the two series should exist only because of an unpredictable forecast error and a risk premium – futures prices may be, say, consistently below the expected future spot price if long speculators require an inducement, or premium, to enter the futures market. Recent work finds strong evidence that these early corn (and corresponding wheat) futures prices are, in the long run, efficient estimates of their underlying spot prices (Santos 2002, 35). Although these results and Working’s empirical studies on the price of storage support, to some extent, the notion that early U.S. futures markets were efficient, this question remains largely unexplored by economic historians.
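A standard way to formalize such a test is to regress realized spot prices on the futures prices quoted two months earlier and ask whether the intercept is near zero and the slope near one. The sketch below illustrates the mechanics only; its six price pairs are made-up placeholders, not the 1877-1890 data.

    # A minimal sketch of a futures-price efficiency regression; the six
    # price pairs are made-up placeholders, not the series behind Figure 1.

    import numpy as np

    futures = np.array([0.42, 0.45, 0.39, 0.47, 0.44, 0.41])  # quoted in month t-2
    spot = np.array([0.43, 0.44, 0.40, 0.46, 0.45, 0.40])     # realized in month t

    X = np.column_stack([np.ones_like(futures), futures])
    (intercept, slope), *_ = np.linalg.lstsq(X, spot, rcond=None)
    print(f"intercept = {intercept:.3f}, slope = {slope:.3f}")
    # With no risk premium, efficiency implies an intercept near 0 and a
    # slope near 1; a nonzero intercept is consistent with a constant premium.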

The Struggle for Legitimacy

Nineteenth century America was both fascinated and appalled by futures trading. This is apparent from the litigation and many public debates surrounding its legitimacy (Baer and Saxon 1949, 55; Buck 1913, 131, 271; Hoffman 1932, 29, 351; Irwin 1954, 80; Lurie 1979, 53, 106). Many agricultural producers, the lay community and, at times, legislatures and the courts, believed trading in futures was tantamount to gambling. The difference between the latter and speculating, which required the purchase or sale of a futures contract but not the shipment or delivery of the commodity, was ostensibly lost on most Americans (Baer and Saxon 1949, 56; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115).

Many Americans believed that futures traders frequently manipulated prices. From the end of the Civil War until 1879 alone, corners – control of enough of the available supply of a commodity to manipulate its price – allegedly occurred with varying degrees of success in wheat (1868, 1871, 1878/9), corn (1868), oats (1868, 1871, 1874), rye (1868) and pork (1868) (Boyle 1920, 64-65). This manipulation continued throughout the century and culminated in the Three Big Corners – the Hutchinson (1888), the Leiter (1898), and the Patten (1909). The Patten corner was later debunked (Boyle 1920, 67-74), while the Leiter corner was the inspiration for Frank Norris’s classic The Pit: A Story of Chicago (Norris 1903; Rothstein 1982, 60).14 In any case, reports of market corners on America’s early futures exchanges were likely exaggerated (Boyle 1920, 62-74; Hieronymus 1977, 84), as were their long term effects on prices and hence consumer welfare (Rothstein 1982, 60).

By 1892 thousands of petitions to Congress called for the prohibition of “speculative gambling in grain” (Lurie, 1979, 109). And, attacks from state legislatures were seemingly unrelenting: in 1812 a New York act made short sales illegal (the act was repealed in 1858); in 1841 a Pennsylvania law made short sales, where the position was not covered in five days, a misdemeanor (the law was repealed in 1862); in 1882 an Ohio law and a similar one in Illinois tried unsuccessfully to restrict cash settlement of futures contracts; in 1867 the Illinois constitution forbade dealing in futures contracts (this was repealed by 1869); in 1879 California’s constitution invalidated futures contracts (this was effectively repealed in 1908); and, in 1882, 1883 and 1885, Mississippi, Arkansas, and Texas, respectively, passed laws that equated futures trading with gambling, thus making the former a misdemeanor (Peterson 1933, 68-69).

Two nineteenth century challenges to futures trading are particularly noteworthy. The first was the so-called Anti-Option movement. According to Lurie (1979), the movement was fueled by agrarians and their sympathizers in Congress who wanted to end what they perceived as wanton speculative abuses in futures trading (109). Although options were (are) not futures contracts, and were nonetheless already outlawed on most exchanges by the 1890s, the legislation did not distinguish between the two instruments and effectively sought to outlaw both (Lurie 1979, 109).

In 1890 the Butterworth Anti-Option Bill was introduced in Congress but never came to a vote. However, in 1892 the Hatch (and Washburn) Anti-Option bills passed both houses of Congress, and failed only on technicalities during reconciliation between the two houses. Had either bill become law, it would have effectively ended options and futures trading in the United States (Lurie 1979, 110).

A second notable challenge was the bucket shop controversy, which challenged the legitimacy of the CBT in particular. A bucket shop was essentially an association of gamblers who met outside the CBT and wagered on the direction of futures prices. These associations had legitimate-sounding names such as the Christie Grain and Stock Company and the Public Grain Exchange. To most Americans, these “exchanges” were no less legitimate than the CBT. That some CBT members were guilty of “bucket shopping” only made matters worse!

The bucket shop controversy was protracted and colorful (see Lurie 1979, 138-167). Between 1884 and 1887 Illinois, Iowa, Missouri and Ohio passed anti-bucket shop laws (Lurie 1979, 95). The CBT believed these laws entitled it to restrict bucket shops’ access to CBT price quotes, without which the bucket shops could not exist. Bucket shops argued that they were competing exchanges, and hence immune to extant anti-bucket shop laws. As such, they sued the CBT for access to these price quotes.15

The two sides and the telegraph companies fought in the courts for decades over access to these price quotes; the CBT’s very survival hung in the balance. After roughly twenty years of litigation, the Supreme Court of the U.S. effectively ruled in favor of the Chicago Board of Trade and against bucket shops (Board of Trade of the City of Chicago v. Christie Grain & Stock Co., 198 U.S. 236, 25 Sup. Ct. (1905)). Bucket shops disappeared completely by 1915 (Hieronymus 1977, 90).

Regulation

The anti-option movement, the bucket shop controversy and the American public’s discontent with speculation mask an ironic reality of futures trading: it escaped government regulation until after the First World War, though early exchanges did practice self-regulation or administrative law.16 The absence of any formal governmental oversight was due in large part to two factors. First, prior to 1895, the opposition tried unsuccessfully to outlaw rather than regulate futures trading. Second, strong agricultural commodity prices between 1895 and 1920 weakened the opposition, which blamed futures markets for low agricultural commodity prices (Hieronymus 1977, 313).

Grain prices fell significantly by the end of the First World War, and opposition to futures trading grew once again (Hieronymus 1977, 313). In 1922 the U.S. Congress enacted the Grain Futures Act, which required exchanges to be licensed, limited market manipulation and publicized trading information (Leuthold 1989, 369).17 However, regulators could rarely enforce the act because it enabled them to discipline exchanges, rather than individual traders. To discipline an exchange was essentially to suspend it, a punishment unfit (too harsh) for most exchange-related infractions.

The Commodity Exchange Act of 1936 enabled the government to deal directly with traders rather than exchanges. It established the Commodity Exchange Authority (CEA), a bureau of the U.S. Department of Agriculture, to monitor and investigate trading activities and prosecute price manipulation as a criminal offense. The act also: limited speculators’ trading activities and the sizes of their positions; regulated futures commission merchants; banned options trading on domestic agricultural commodities; and restricted futures trading – designated which commodities were to be traded on which licensed exchanges (see Hieronymus 1977; Leuthold, et al. 1989).

Although Congress amended the Commodity Exchange Act in 1968 in order to increase the regulatory powers of the Commodity Exchange Authority, the latter was ill-equipped to handle the explosive growth in futures trading in the 1960s and 1970s. So, in 1974 Congress passed the Commodity Futures Trading Act, which created far-reaching federal oversight of U.S. futures trading and established the Commodity Futures Trading Commission (CFTC).

Like the futures legislation before it, the Commodity Futures Trading Act seeks “to ensure proper execution of customer orders and to prevent unlawful manipulation, price distortion, fraud, cheating, fictitious trades, and misuse of customer funds” (Leuthold, et al. 1989, 34). Unlike the CEA, the CFTC was given broad regulatory powers over all futures trading and related exchange activities throughout the U.S. The CFTC oversees and approves modifications to extant contracts and the creation and introduction of new contracts. The CFTC consists of five presidential appointees who are confirmed by the U.S. Senate.

The Futures Trading Act of 1982 amended the Commodity Futures Trading Act of 1974. The 1982 act legalized options trading on agricultural commodities and identified more clearly the jurisdictions of the CFTC and Securities and Exchange Commission (SEC). The regulatory overlap between the two organizations arose because of the explosive popularity during the 1970s of financial futures contracts. Today, the CFTC regulates all futures contracts and options on futures contracts traded on U.S. futures exchanges; the SEC regulates all financial instrument cash markets as well as all other options markets.

Finally, in 2000 Congress passed the Commodity Futures Modernization Act, which reauthorized the Commodity Futures Trading Commission for five years and repealed an 18-year old ban on trading single stock futures. The bill also sought to increase competition and “reduce systematic risk in markets for futures and over-the-counter derivatives” (H.R. 5660, 106th Congress 2nd Session).

Modern Futures Markets

The growth in futures trading has been explosive in recent years (Chart 3).

Futures trading extended beyond physical commodities in the 1970s and 1980s – currency futures in 1972; interest rate futures in 1975; and stock index futures in 1982 (Silber 1985, 83). The enormous growth of financial futures at this time was likely because of the breakdown of the Bretton Woods exchange rate regime, which essentially fixed the relative values of industrial economies’ exchange rates to the American dollar (see Bordo and Eichengreen 1993), and relatively high inflation from the late 1960s to the early 1980s. Flexible exchange rates and inflation introduced, respectively, exchange and interest rate risks, which hedgers sought to mitigate through the use of financial futures. Finally, although futures contracts on agricultural commodities remain popular, financial futures and options dominate trading today. Trading volume in metals, minerals and energy remains relatively small.

Trading volume in agricultural futures contracts first dropped below 50% of total volume in 1982. By 1985 this volume had dropped to less than one fourth of all trading. In the same year the volume of futures trading in the U.S. Treasury bond contract alone exceeded trading volume in all agricultural commodities combined (Leuthold et al. 1989, 2). Today exchanges in the U.S. actively trade contracts on several underlying assets (Table 1). These range from the traditional – e.g., agriculture and metals – to the truly innovative – e.g., the weather. The latter’s payoff varies with the number of degree-days by which the temperature in a particular region deviates from 65 degrees Fahrenheit.
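The degree-day payoff is straightforward to compute. In the sketch below only the 65 degree Fahrenheit baseline comes from the text; the dollars-per-degree-day multiplier and the temperatures are hypothetical.

    # A minimal sketch of a heating-degree-day (HDD) payoff; the $20
    # multiplier and the temperatures are hypothetical, and only the
    # 65F baseline comes from the text.

    def hdd_payoff(daily_avg_temps, dollars_per_degree_day=20.0, base=65.0):
        # Sum the degrees by which each day's average temperature falls
        # below the baseline, then convert degree-days into dollars.
        degree_days = sum(max(base - temp, 0.0) for temp in daily_avg_temps)
        return degree_days * dollars_per_degree_day

    print(hdd_payoff([50, 60, 70, 55]))  # (15 + 5 + 0 + 10) * $20 = 600.0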

Table 1: Select Futures Contracts Traded as of 2002

Agriculture: Corn, Oats, Soybeans, Soybean meal, Soybean oil, Wheat, Barley, Flaxseed, Canola, Rye, Cattle, Hogs, Pork bellies, Cocoa, Coffee, Cotton, Milk, Orange juice, Sugar, Lumber, Rice

Currencies: British pound, Canadian dollar, Japanese yen, Euro, Swiss franc, Australian dollar, Mexican peso, Brazilian real

Equity indexes: S&P 500 index, Dow Jones Industrials, S&P Midcap 400, Nasdaq 100, NYSE index, Russell 2000 index, Nikkei 225, FTSE index, CAC-40, DAX-30, All ordinary, Toronto 35, Dow Jones Euro STOXX 50

Interest rates: Eurodollars, Euroyen, Euro-denominated bond, Euroswiss, Sterling, British gov. bond (gilt), German gov. bond, Italian gov. bond, Canadian gov. bond, Treasury bonds, Treasury notes, Treasury bills, LIBOR, EURIBOR, Municipal bond index, Federal funds rate, Bankers’ acceptance

Metals & energy: Copper, Aluminum, Gold, Platinum, Palladium, Silver, Crude oil, Heating oil, Gas oil, Natural gas, Gasoline, Propane, CRB index, Electricity, Weather

Source: Bodie, Kane and Marcus (2005), p. 796.

Table 2 provides a list of today’s major futures exchanges.

Table 2: Select Futures Exchanges as of 2002

Chicago Board of Trade (CBT)
Chicago Mercantile Exchange (CME)
Coffee, Sugar & Cocoa Exchange, New York (CSCE)
COMEX, a division of the NYME (CMX)
European Exchange (EUREX)
Financial Exchange, a division of the NYCE (FINEX)
International Petroleum Exchange (IPE)
Kansas City Board of Trade (KC)
London International Financial Futures Exchange (LIFFE)
Marche a Terme International de France (MATIF)
Montreal Exchange (ME)
Minneapolis Grain Exchange (MPLS)
Unit of Euronext.liffe (NQLX)
New York Cotton Exchange (NYCE)
New York Futures Exchange (NYFE)
New York Mercantile Exchange (NYME)
OneChicago (ONE)
Sydney Futures Exchange (SFE)
Singapore Exchange Ltd. (SGX)

Source: Wall Street Journal, 5/12/2004, C16.

Modern trading differs from its nineteenth-century counterpart in other respects as well. First, the popularity of open outcry trading is waning; for example, today the CBT executes roughly half of all trades electronically, and electronic trading is the rule rather than the exception throughout Europe. Second, today roughly 99% of all futures contracts are settled prior to maturity. Third, in 1982 the Commodity Futures Trading Commission approved cash settlement – delivery that takes the form of a cash balance – on financial index and Eurodollar futures, whose underlying assets are not deliverable, as well as on several non-financial contracts including lean hog, feeder cattle and weather (Carlton 1984, 253). And finally, on Dec. 6, 2002, the Chicago Mercantile Exchange became the first publicly traded financial exchange in the U.S.

References and Further Reading

Baer, Julius B. and Olin G. Saxon. Commodity Exchanges and Futures Trading. New York: Harper & Brothers, 1949.

Bodie, Zvi, Alex Kane and Alan J. Marcus. Investments. New York: McGraw-Hill/Irwin, 2005.

Bordo, Michael D. and Barry Eichengreen, editors. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Boyle, James E. Speculation and the Chicago Board of Trade. New York: MacMillan Company, 1920.

Buck, Solon J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic and Social Manifestations, 1870-1880. Cambridge: Harvard University Press, 1913.

Carlton, Dennis W. “Futures Markets: Their Purpose, Their History, Their Growth, Their Successes and Failures.” Journal of Futures Markets 4, no. 3 (1984): 237-271.

Chicago Board of Trade Bulletin. The Development of the Chicago Board of Trade. Chicago: Chicago Board of Trade, 1936.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge: Harvard University Press, 1977.

Clark, John G. The Grain Trade in the Old Northwest. Urbana: University of Illinois Press, 1966.

Commodity Futures Trading Commission. Annual Report. Washington, D.C. 2003.

Dies, Edward J. The Wheat Pit. Chicago: The Argyle Press, 1925.

Ferris, William G. The Grain Traders: The Story of the Chicago Board of Trade. East Lansing, MI: Michigan State University Press, 1988.

Hieronymus, Thomas A. Economics of Futures Trading for Commercial and Personal Profit. New York: Commodity Research Bureau, Inc., 1977.

Hoffman, George W. Futures Trading upon Organized Commodity Markets in the United States. Philadelphia: University of Pennsylvania Press, 1932.

Irwin, Harold S. Evolution of Futures Trading. Madison, WI: Mimir Publishers, Inc., 1954.

Leuthold, Raymond M., Joan C. Junkus and Jean E. Cordier. The Theory and Practice of Futures Markets. Champaign, IL: Stipes Publishing L.L.C., 1989.

Lurie, Jonathan. The Chicago Board of Trade 1859-1905. Urbana: University of Illinois Press, 1979.

National Agricultural Statistics Service. “Historical Track Records.” Agricultural Statistics Board, U.S. Department of Agriculture, Washington, D.C. April 2004.

Norris, Frank. The Pit: A Story of Chicago. New York, NY: Penguin Group, 1903.

Odle, Thomas. “Entrepreneurial Cooperation on the Great Lakes: The Origin of the Methods of American Grain Marketing.” Business History Review 38, (1964): 439-55.

Peck, Anne E., editor. Futures Markets: Their Economic Role. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Peterson, Arthur G. “Futures Trading with Particular Reference to Agricultural Commodities.” Agricultural History 8, (1933): 68-80.

Pierce, Bessie L. A History of Chicago: Volume III, the Rise of a Modern City. New York: Alfred A. Knopf, 1957.

Rothstein, Morton. “The International Market for Agricultural Commodities, 1850-1873.” In Economic Change in the Civil War Era, edited by David T. Gilchrist and W. David Lewis, 62-71. Greenville, DE: Eleutherian Mills-Hagley Foundation, 1966.

Rothstein, Morton. “Frank Norris and Popular Perceptions of the Market.” Agricultural History 56, (1982): 50-66.

Santos, Joseph. “Did Futures Markets Stabilize U.S. Grain Prices?” Journal of Agricultural Economics 53, no. 1 (2002): 25-36.

Silber, William L. “The Economic Role of Financial Futures.” In Futures Markets: Their Economic Role, edited by Anne E. Peck, 83-114. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Stein, Jerome L. The Economics of Futures Markets. Oxford: Basil Blackwell Ltd, 1986.

Taylor, Charles. H. History of the Board of Trade of the City of Chicago. Chicago: R. O. Law, 1917.

Werner, Walter and Steven T. Smith. Wall Street. New York: Columbia University Press, 1991.

Williams, Jeffrey C. “The Origin of Futures Markets.” Agricultural History 56, (1982): 306-16.

Working, Holbrook. “The Theory of the Price of Storage.” American Economic Review 39, (1949): 1254-62.

Working, Holbrook. “Hedging Reconsidered.” Journal of Farm Economics 35, (1953): 544-61.

1 The clearinghouse is typically a corporation owned by a subset of exchange members. For details regarding the clearing arrangements of a specific exchange, go to www.cftc.gov and click on “Clearing Organizations.”

2 The vast majority of contracts are offset. Outright delivery occurs when the buyer receives from, or the seller “delivers” to the exchange a title of ownership, and not the actual commodity or financial security – the urban legend of the trader who neglected to settle his long position and consequently “woke up one morning to find several car loads of a commodity dumped on his front yard” is indeed apocryphal (Hieronymus 1977, 37)!

3 Nevertheless, forward contracts remain popular today (see Peck 1985, 9-12).

4 The importance of New Orleans as a point of departure for U.S. grain and provisions prior to the Civil War is unquestionable. According to Clark (1966), “New Orleans was the leading export center in the nation in terms of dollar volume of domestic exports, except for 1847 and a few years during the 1850s, when New York’s domestic exports exceeded those of the Crescent City” (36).

5 This area was responsible for roughly half of U.S. wheat production and a third of U.S. corn production just prior to 1860. Southern planters dominated corn output during the early to mid-1800s.

6 Millers milled wheat into flour; pork producers fed corn to pigs, which producers slaughtered for provisions; distillers and brewers converted rye and barley into whiskey and malt liquors, respectively; and ranchers fed grains and grasses to cattle, which were then driven to eastern markets.

7 Significant advances in transportation made the grain trade’s eastward expansion possible, but the strong and growing demand for grain in the East made the trade profitable. The growth in domestic grain demand during the early to mid-nineteenth century reflected the strong growth in eastern urban populations. Between 1820 and 1860, the populations of Baltimore, Boston, New York and Philadelphia increased by over 500% (Clark 1966, 54). Moreover, as the 1840s approached, foreign demand for U.S. grain grew. Between 1845 and 1847, U.S. exports of wheat and flour rose from 6.3 million bushels to 26.3 million bushels and corn exports grew from 840,000 bushels to 16.3 million bushels (Clark 1966, 55).

8 Wheat production was shifting to the trans-Mississippi West, which produced 65% of the nation’s wheat by 1899 and 90% by 1909, and railroads based in the Lake Michigan port cities intercepted the Mississippi River trade that would otherwise have headed to St. Louis (Clark 1966, 95). Lake Michigan port cities also benefited from a growing concentration of corn production in the West North Central region – Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota and South Dakota – which by 1899 produced 40 percent of the country’s corn (Clark 1966, 4).

9 Corn had to be dried immediately after it was harvested, and it could be shipped profitably to Chicago only by water, and only after rivers and lakes had thawed; so, country merchants stored large quantities of corn. On the other hand, wheat was more valuable relative to its weight, and it could be shipped to Chicago by rail or road immediately after it was harvested; so, Chicago merchants stored large quantities of wheat.

10 This is consistent with Odle (1964), who adds that “the creators of the new system of marketing [forward contracts] were the grain merchants of the Great Lakes” (439). However, Williams (1982) presents evidence of such contracts between Buffalo and New York City as early as 1847 (309). To be sure, Williams proffers an intriguing case that forward and, in effect, future trading was active and quite sophisticated throughout New York by the late 1840s. Moreover, he argues that this trading grew not out of activity in Chicago, whose trading activities were quite primitive at this early date, but rather trading in London and ultimately Amsterdam. Indeed, “time bargains” were common in London and New York securities markets in the mid- and late 1700s, respectively. A time bargain was essentially a cash-settled financial forward contract that was unenforceable by law, and as such “each party was forced to rely on the integrity and credit of the other” (Werner and Smith 1991, 31). According to Werner and Smith, “time bargains prevailed on Wall Street until 1840, and were gradually replaced by margin trading by 1860” (68). They add that, “margin trading … had an advantage over time bargains, in which there was little protection against default beyond the word of another broker. Time bargains also technically violated the law as wagering contracts; margin trading did not” (135). Between 1818 and 1840 these contracts comprised anywhere from 0.7% (49-day average in 1830) to 34.6% (78-day average in 1819) of daily exchange volume on the New York Stock & Exchange Board (Werner and Smith 1991, 174).

11 Of course, forward markets could and indeed did exist in the absence of both grading standards and formal exchanges, though to what extent they existed is unclear (see Williams 1982).

12 In the parlance of modern financial futures, the term cost of carry is used instead of the term storage. For example, the cost of carrying a bond comprises the cost of acquiring and holding (or storing) it until delivery minus the return earned during the carry period.

13 More specifically, the price of storage comprises three components: (1) physical costs such as warehouse and insurance; (2) financial costs such as borrowing rates of interest; and (3) the convenience yield – the return that the merchant, who stores the commodity, derives from maintaining an inventory in the commodity. The marginal costs of (1) and (2) are increasing functions of the amount stored: the more the merchant stores, the greater the marginal costs of warehouse use, insurance and financing. The marginal benefit of (3), by contrast, is a decreasing function of the amount stored: the smaller the merchant’s inventory, the more valuable each additional unit of inventory becomes. Working used this convenience yield to explain a negative price of storage – the nearby contract is priced higher than the faraway contract – an event that is likely to occur when supplies are exceptionally low. In this instance, there is little for inventory dealers to store. Hence, dealers face extremely low physical and financial storage costs, but extremely high convenience yields. The price of storage turns negative; essentially, inventory dealers are willing to pay to store the commodity.
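Working’s decomposition in this footnote can be summarized in a short Python sketch (a stylized illustration, not from the source; the per-bushel cost figures are hypothetical):

    def price_of_storage(physical_cost, financial_cost, convenience_yield):
        # Net price of storage: components (1) and (2) less component (3).
        return physical_cost + financial_cost - convenience_yield

    # When stocks are exceptionally low, physical and financial costs are
    # small but the convenience yield is large, so the price of storage
    # turns negative and the nearby contract trades above the faraway one.
    print(price_of_storage(0.05, 0.03, 0.20))   # -0.12 per bushel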

14 Norris’ protagonist, Curtis Jadwin, is a wheat speculator emotionally consumed and ultimately destroyed, while the welfare of producers and consumers hangs in the balance, when a nineteenth-century CBT wheat futures corner backfires on him.

15 One particularly colorful incident in the controversy came when the Supreme Court of Illinois ruled that the CBT had to either make price quotes public or restrict access to everyone. When the Board opted for the latter, it found it needed to “prevent its members from running (often literally) between the [CBT and a bucket shop next door], but with minimal success. Board officials at first tried to lock the doors to the exchange…However, after one member literally battered down the door to the east side of the building, the directors abandoned this policy as impracticable if not destructive” (Lurie 1979, 140).

16 Administrative law is “a body of rules and doctrines which deals with the powers and actions of administrative agencies” that are organizations other than the judiciary or legislature. These organizations affect the rights of private parties “through either adjudication, rulemaking, investigating, prosecuting, negotiating, settling, or informally acting” (Lurie 1979, 9).

17 In 1921 Congress passed the Futures Trading Act, which was later declared unconstitutional.

Citation: Santos, Joseph. “A History of Futures Trading in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-history-of-futures-trading-in-the-united-states/

The Economic History of the Fur Trade: 1670 to 1870

Ann M. Carlos, University of Colorado
Frank D. Lewis, Queen’s University

Introduction

A commercial fur trade in North America grew out of the early contact between Indians and European fishermen who were netting cod on the Grand Banks off Newfoundland and on the Bay of Gaspé near Quebec. Indians would trade the pelts of small animals, such as mink, for knives and other iron-based products, or for textiles. Exchange at first was haphazard, and it was only in the late sixteenth century, when the wearing of beaver hats became fashionable, that firms were established that dealt exclusively in furs. High quality pelts are available only where winters are severe, so the trade took place predominantly in the regions we now know as Canada, although some activity took place further south along the Mississippi River and in the Rocky Mountains. There was also a trade in deer skins, which predominated in the Appalachians.

The first firms to participate in the fur trade were French, and under French rule the trade spread along the St. Lawrence and Ottawa Rivers, and down the Mississippi. In the seventeenth century, following the Dutch, the English developed a trade through Albany. Then in 1670, a charter was granted by the British crown to the Hudson’s Bay Company, which began operating from posts along the coast of Hudson Bay (see Figure 1). For roughly the next hundred years, this northern region saw competition of varying intensity between the French and the English. With the conquest of New France in 1763, the French trade shifted to Scottish merchants operating out of Montreal. After the negotiation of Jay’s Treaty (1794), the northern border was defined and trade along the Mississippi passed to the American Fur Company under John Jacob Astor. In 1821, the northern participants merged under the name of the Hudson’s Bay Company, and for many decades this merged company continued to trade in furs. Finally, in the 1990s, under pressure from animal rights groups, the Hudson’s Bay Company, which in the twentieth century had become a large Canadian retailer, ended the fur component of its operation.

Figure 1
Hudson’s Bay Company Hinterlands

Source: Ray (1987, plate 60)

The fur trade was based on pelts destined either for the luxury clothing market or for the felting industries, of which hatting was the most important. This was a transatlantic trade. The animals were trapped and exchanged for goods in North America, and the pelts were transported to Europe for processing and final sale. As a result, forces operating on the demand side of the market in Europe and on the supply side in North America determined prices and volumes; while intermediaries, who linked the two geographically separated areas, determined how the trade was conducted.

The Demand for Fur: Hats, Pelts and Prices

However much hats may be considered an accessory today, they were for centuries a mandatory part of everyday dress, for both men and women. Of course styles changed, and, in response to the vagaries of fashion and politics, hats took on various forms and shapes, from the high-crowned, broad-brimmed hat of the first two Stuarts to the conically-shaped, plainer hat of the Puritans. The Restoration of Charles II of England in 1660 and the Glorious Revolution in 1689 brought their own changes in style (Clarke, 1982, chapter 1). What remained a constant was the material from which hats were made – wool felt. The wool came from various animals, but towards the end of the fifteenth century beaver wool began to predominate. Over time, beaver hats became increasingly popular, eventually dominating the market. Only in the nineteenth century did silk replace beaver in high-fashion men’s hats.

Wool Felt

Furs have long been classified as either fancy or staple. Fancy furs are those demanded for the beauty and luster of their pelt. These furs – mink, fox, otter – are fashioned by furriers into garments or robes. Staple furs are sought for their wool. All staple furs have a double coating of hair with long, stiff, smooth hairs called guard hairs which protect the shorter, softer hair, called wool, that grows next to the animal skin. Only the wool can be felted. Each of the shorter hairs is barbed and once the barbs at the ends of the hair are open, the wool can be compressed into a solid piece of material called felt. The prime staple fur has been beaver, although muskrat and rabbit have also been used.

Wool felt was used for over two centuries to make high-fashion hats. Felt is stronger than a woven material. It will not tear or unravel in a straight line; it is more resistant to water, and it will hold its shape even if it gets wet. These characteristics made felt the prime material for hatters, especially when fashion called for hats with large brims. The highest quality hats would be made fully from beaver wool, whereas lower quality hats included inferior wool, such as rabbit.

Felt Making

The transformation of beaver skins into felt and then hats was a highly skilled activity. The process required first that the beaver wool be separated from the guard hairs and the skin, and that some of the wool have open barbs, since felt required some open-barbed wool in the mixture. Felt dates back to the nomads of Central Asia, who are said to have invented the process of felting and made their tents from this light but durable material. Although the art of felting disappeared from much of western Europe during the first millennium, felt-making survived in Russia, Sweden, and Asia Minor. As a result of the Medieval Crusades, felting was reintroduced through the Mediterranean into France (Crean, 1962).

In Russia, the felting industry was based on the European beaver (castor fiber). Given their long tradition of working with beaver pelts, the Russians had perfected the art of combing out the short barbed hairs from among the longer guard hairs, a technology that they safeguarded. As a consequence, the early felting trades in England and France had to rely on beaver wool imported from Russia, although they also used domestic supplies of wool from other animals, such as rabbit, sheep and goat. But by the end of the seventeenth century, Russian supplies were drying up, reflecting the serious depletion of the European beaver population.

Coincident with the decline in European beaver stocks was the emergence of a North American trade. North American beaver (castor canadensis) was imported through agents in the English, French and Dutch colonies. Although many of the pelts were shipped to Russia for initial processing, the growth of the beaver market in England and France led to the development of local technologies, and more knowledge of the art of combing. Separating the beaver wool from the pelt was only the first step in the felting process. It was also necessary that some of the barbs on the short hairs be raised or open. On the animal these hairs were naturally covered with keratin to prevent the barbs from opening; thus, to make felt, the keratin had to be stripped from at least some of the hairs. The process was difficult to refine and entailed considerable experimentation by felt-makers. For instance, one felt maker “bundled [the skins] in a sack of linen and boiled [them] for twelve hours in water containing several fatty substances and nitric acid” (Crean, 1962, p. 381). Although such processes removed the keratin, they did so at the price of a lower quality wool.

The opening of the North American trade not only increased the supply of skins for the felting industry, it also provided a subset of skins whose guard hairs had already been removed and the keratin broken down. Beaver pelts imported from North America were classified as either parchment beaver (castor sec – dry beaver), or coat beaver (castor gras – greasy beaver). Parchment beaver were from freshly caught animals, whose skins were simply dried before being presented for trade. Coat beaver were skins that had been worn by the Indians for a year or more. With wear, the guard hairs fell out and the pelt became oily and more pliable. In addition, the keratin covering the shorter hairs broke down. By the middle of the seventeenth century, hatters and felt-makers came to learn that parchment and coat beaver could be combined to produce a strong, smooth, pliable, top-quality waterproof material.

Until the 1720s, beaver felt was produced with relatively fixed proportions of coat and parchment skins, which led to periodic shortages of one or the other type of pelt. The constraint was relaxed when carotting was developed, a chemical process by which parchment skins were transformed into a type of coat beaver. The original carotting formula consisted of salts of mercury diluted in nitric acid, which was brushed on the pelts. The use of mercury was a big advance, but it also had serious health consequences for hatters and felters, who were forced to breathe the mercury vapor for extended periods. The expression “mad as a hatter” dates from this period, as the vapor attacked the nervous systems of these workers.

The Prices of Parchment and Coat Beaver

Drawn from the accounts of the Hudson’s Bay Company, Table 1 presents some eighteenth century prices of parchment and coat beaver pelts. From 1713 to 1726, before the carotting process had become established, coat beaver generally fetched a higher price than parchment beaver, averaging 6.6 shillings per pelt as compared to 5.5 shillings. Once carotting was widely used, however, the prices were reversed, and from 1730 to 1770 parchment exceeded coat in almost every year. The same general pattern is seen in the Paris data, although there the reversal was delayed, suggesting slower diffusion in France of the carotting technology. As Crean (1962, p. 382) notes, Nollet’s L’Art de faire des chapeaux included the exact formula, but it was not published until 1765.

A weighted average of parchment and coat prices in London reveals three episodes. From 1713 to 1722 prices were quite stable, fluctuating within the narrow band of 5.0 to 5.5 shillings per pelt. During the period 1723 to 1745, prices moved sharply higher and remained in the range of 7 to 9 shillings. The years 1746 to 1763 saw another big increase to over 12 shillings per pelt. There are far fewer prices available for Paris, but we do know that in the period 1739 to 1753 the trend was also sharply higher, with prices more than doubling.

Table 1
Price of Beaver Pelts in Britain: 1713-1763
(shillings per skin)

Year Parchment Coat Averagea Year Parchment Coat Averagea
1713 5.21 4.62 5.03 1739 8.51 7.11 8.05
1714 5.24 7.86 5.66 1740 8.44 6.66 7.88
1715 4.88 5.49 1741 8.30 6.83 7.84
1716 4.68 8.81 5.16 1742 7.72 6.41 7.36
1717 5.29 8.37 5.65 1743 8.98 6.74 8.27
1718 4.77 7.81 5.22 1744 9.18 6.61 8.52
1719 5.30 6.86 5.51 1745 9.76 6.08 8.76
1720 5.31 6.05 5.38 1746 12.73 7.18 10.88
1721 5.27 5.79 5.29 1747 10.68 6.99 9.50
1722 4.55 4.97 4.55 1748 9.27 6.22 8.44
1723 8.54 5.56 7.84 1749 11.27 6.49 9.77
1724 7.47 5.97 7.17 1750 17.11 8.42 14.00
1725 5.82 6.62 5.88 1751 14.31 10.42 12.90
1726 5.41 7.49 5.83 1752 12.94 10.18 11.84
1727 7.22 1753 10.71 11.97 10.87
1728 8.13 1754 12.19 12.68 12.08
1729 9.56 1755 12.05 12.04 11.99
1730 8.71 1756 13.46 12.02 12.84
1731 6.27 1757 12.59 11.60 12.17
1732 7.12 1758 13.07 11.32 12.49
1733 8.07 1759 15.99 14.68
1734 7.39 1760 13.37 13.06 13.22
1735 8.33 1761 10.94 13.03 11.36
1736 8.72 7.07 8.38 1762 13.17 16.33 13.83
1737 7.94 6.46 7.50 1763 16.33 17.56 16.34
1738 8.95 6.47 8.32

a A weighted average of the prices of parchment, coat and half parchment beaver pelts. Weights are based on the trade in these types of furs at Fort Albany. Prices of the individual types of pelts are not available for the years 1727 to 1735.

Source: Carlos and Lewis, 1999.
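Note a’s weighted average can be reproduced with a short Python sketch. The weights below are hypothetical stand-ins (the note bases the true weights on the Fort Albany trade shares, which are not reported here), chosen so that the 1713 prices from Table 1 yield the table’s average of 5.03 shillings:

    def weighted_average(prices, weights):
        # Average pelt price, weighting each pelt type by its trade share.
        return sum(prices[k] * weights[k] for k in prices) / sum(weights.values())

    prices = {"parchment": 5.21, "coat": 4.62}   # 1713 prices, Table 1
    weights = {"parchment": 0.7, "coat": 0.3}    # hypothetical trade shares
    print(round(weighted_average(prices, weights), 2))  # 5.03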

The Demand for Beaver Hats

The main cause of the rising beaver pelt prices in England and France was the increasing demand for beaver hats, which included hats made exclusively with beaver wool and referred to as “beaver hats,” and those hats containing a combination of beaver and a lower cost wool, such as rabbit. These were called “felt hats.” Unfortunately, aggregate consumption series for eighteenth-century Europe are not available. We do, however, have Gregory King’s contemporary work for England, which provides a good starting point. In a table entitled “Annual Consumption of Apparell, anno 1688,” King calculated that consumption of all types of hats was about 3.3 million, or nearly one hat per person. King also included a second category, caps of all sorts, for which he estimated consumption at 1.6 million (Harte, 1991, p. 293). This means that as early as 1700, the potential market for hats in England alone was nearly 5 million per year. Over the next century, the rising demand for beaver pelts was a result of a number of factors, including population growth, a greater export market, a shift toward beaver hats from hats made of other materials, and a shift from caps to hats.

The British export data indicate that demand for beaver hats was growing not just in England, but in Europe as well. In 1700 a modest 69,500 beaver hats were exported from England and almost the same number of felt hats; but by 1760, slightly over 500,000 beaver hats and 370,000 felt hats were shipped from English ports (Lawson, 1943, app. I). In total, over the seventy years to 1770, 21 million beaver and felt hats were exported from England. In addition to the final product, England exported the raw material, beaver pelts. In 1760, £15,000 in beaver pelts were exported along with a range of other furs. The hats and the pelts tended to go to different parts of Europe. Raw pelts were shipped mainly to northern Europe, including Germany, Flanders, Holland and Russia; whereas hats went to the southern European markets of Spain and Portugal. In 1750, Germany imported 16,500 beaver hats, while Spain imported 110,000 and Portugal 175,000 (Lawson, 1943, appendices F & G). Over the first six decades of the eighteenth century, these markets grew dramatically, such that the value of beaver hat sales to Portugal alone was £89,000 in 1756-1760, representing about 300,000 hats or two-thirds of the entire export trade.

European Intermediaries in the Fur Trade

By the eighteenth century, the demand for furs in Europe was being met mainly by exports from North America with intermediaries playing an essential role. The American trade, which moved along the main water systems, was organized largely through chartered companies. At the far north, operating out of Hudson Bay, was the Hudson’s Bay Company, chartered in 1670. The Compagnie d’Occident, founded in 1718, was the most successful of a series of monopoly French companies. It operated through the St. Lawrence River and in the region of the eastern Great Lakes. There was also an English trade through Albany and New York, and a French trade down the Mississippi.

The Hudson’s Bay Company and the Compagnie d’Occident, although similar in title, had very different internal structures. The English trade was organized along hierarchical lines with salaried managers, whereas the French monopoly issued licenses (congés) or leased out the use of its posts. The structure of the English company allowed for more control from the London head office, but required systems that could monitor the managers of the trading posts (Carlos and Nicholas, 1990). The leasing and licensing arrangements of the French made monitoring unnecessary, but led to a system where the center had little influence over the conduct of the trade.

The French and English were distinguished as well by how they interacted with the Natives. The Hudson’s Bay Company established posts around the Bay and waited for the Indians, often middlemen, to come to them. The French, by contrast, moved into the interior, directly trading with the Indians who harvested the furs. The French arrangement was more conducive to expansion, and by the end of the seventeenth century, they had moved beyond the St. Lawrence and Ottawa rivers into the western Great Lakes region (see Figure 1). Later they established posts in the heart of the Hudson Bay hinterland. In addition, the French explored the river systems to the south, setting up a post at the mouth of the Mississippi. As noted earlier, after Jay’s Treaty was signed, the French were replaced in the Mississippi region by U.S. interests which later formed the American Fur Company (Haeger, 1991).

The English takeover of New France at the end of the French and Indian Wars in 1763 did not, at first, fundamentally change the structure of the trade. Rather, French management was replaced by Scottish and English merchants operating in Montreal. But, within a decade, the Montreal trade was reorganized into partnerships between merchants in Montreal and traders who wintered in the interior. The most important of these arrangements led to the formation of the Northwest Company, which, for the first two decades of the nineteenth century, competed with the Hudson’s Bay Company (Carlos and Hoffman, 1986). By the early decades of the nineteenth century, the Hudson’s Bay Company, the Northwest Company, and the American Fur Company had, combined, a system of trading posts across North America, including posts in Oregon and British Columbia and on the Mackenzie River. In 1821, the Northwest Company and the Hudson’s Bay Company merged under the name of the Hudson’s Bay Company. The Hudson’s Bay Company then ran the trade as a monopsony until the late 1840s, when it began facing serious competition from trappers to the south. The Company’s role in the northwest changed again with the Canadian Confederation in 1867. Over the next decades treaties were signed with many of the northern tribes, forever changing the old fur trade order in Canada.

The Supply of Furs: The Harvesting of Beaver and Depletion

During the eighteenth century, the changing technology of felt production and the growing demand for felt hats were met by attempts to increase the supply of furs, especially the supply of beaver pelts. Any permanent increase, however, was ultimately dependent on the animal resource base. How that base changed over time must be a matter of speculation since no animal counts exist from that period; nevertheless, the evidence we do have points to a scenario in which over-harvesting, at least in some years, gave rise to serious depletion of the beaver and possibly other animals such as marten that were also being traded. Why the beaver were over-harvested was closely related to the prices Natives were receiving, but important as well was the nature of Native property rights to the resource.

Harvests in the Fort Albany and York Factory Regions

That beaver populations along the Eastern seaboard regions of North America were depleted as the fur trade advanced is widely accepted. In fact the search for new sources of supply further west, including the region of Hudson Bay, has been attributed in part to dwindling beaver stocks in areas where the fur trade had been long established. Although there has been little discussion of the impact that the Hudson’s Bay Company and the French, who traded in the region of Hudson Bay, were having on the beaver stock, the remarkably complete records of the Hudson’s Bay Company provide the basis for reasonable inferences about depletion. From 1700 there is an uninterrupted annual series of fur returns at Fort Albany; the fur returns from York Factory begin in 1716 (see Figure 1).

The beaver returns at Fort Albany and York Factory for the period 1700 to 1770 are described in Figure 2. At Fort Albany the number of beaver skins over the period 1700 to 1720 averaged roughly 19,000, with wide year-to-year fluctuations; the range was about 15,000 to 30,000. After 1720 and until the late 1740s average returns declined by about 5,000 skins, and remained within the somewhat narrower range of roughly 10,000 to 20,000 skins. The period of relative stability was broken in the final years of the 1740s. In 1748 and 1749, returns increased to an average of nearly 23,000. Following these unusually strong years, the trade fell precipitously so that in 1756 fewer than 6,000 beaver pelts were received. There was a brief recovery in the early 1760s but by the end of the decade trade had fallen below even the mid-1750s levels. In 1770, Fort Albany took in just 3,600 beaver pelts. This pattern – unusually large returns in the late 1740s and low returns thereafter – indicates that the beaver in the Fort Albany region were being seriously depleted.

Figure 2
Beaver Traded at Fort Albany and York Factory 1700 – 1770

Source: Carlos and Lewis, 1993.

The beaver returns at York Factory from 1716 to 1770, also described in Figure 2, have some of the key features of the Fort Albany data. After some low returns early on (from 1716 to 1720), the number of beaver pelts increased to an average of 35,000. There were extraordinary returns in 1730 and 1731, when the average was 55,600 skins, but beaver receipts then stabilized at about 31,000 over the remainder of the decade. The first break in the pattern came in the early 1740s shortly after the French established several trading posts in the area. Surprisingly perhaps, given the increased competition, trade in beaver pelts at the Hudson’s Bay Company post increased to an average of 34,300, this over the period 1740 to 1743. Indeed, the 1742 return of 38,791 skins was the largest since the French had established any posts in the region. The returns in 1745 were also strong, but after that year the trade in beaver pelts began a decline that continued through to 1770. Average returns over the rest of the decade were 25,000; the average during the 1750s was 18,000, and just 15,500 in the 1760s. The pattern of beaver returns at York Factory – high returns in the early 1740s followed by a large decline – strongly suggests that, as in the Fort Albany hinterland, the beaver population had been greatly reduced.

The overall carrying capacity of any region, or the size of the animal stock, depends on the nature of the terrain and the underlying biological determinants such as birth and death rates. A standard relationship between the annual harvest and the animal population is the Lotka-Volterra logistic, commonly used in natural resource models to relate the natural growth of a population to the size of that population:
F(X) = aX – bX², a, b > 0 (1)

where X is the population, F(X) is the natural growth in the population, a is the maximum proportional growth rate of the population, and b = a/X̄, where X̄ is the upper limit to population size. The population dynamics of the species exploited depend on the harvest each period:

ΔX = aX – bX² – H (2)

where ΔX is the annual change in the population and H is the harvest. The choices of the parameter a and the maximum population X̄ are central to the population estimates, and they have been based largely on estimates from the beaver ecology literature and Ontario provincial field reports of beaver densities (Carlos and Lewis, 1993).
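To see how equations (1) and (2) behave, the following minimal Python sketch iterates the harvest dynamics. The parameter values are purely illustrative, not the values estimated by Carlos and Lewis (1993); the point is only that a harvest held above the sustained yield steadily depletes the stock:

    def natural_growth(x, a, x_max):
        # Equation (1): F(X) = aX - bX^2, with b = a/X_max.
        b = a / x_max
        return a * x - b * x * x

    def simulate(x0, harvests, a, x_max):
        # Equation (2): next year's stock is X + F(X) - H, floored at zero.
        x, path = x0, []
        for h in harvests:
            x = max(x + natural_growth(x, a, x_max) - h, 0.0)
            path.append(round(x))
        return path

    a, x_max = 0.5, 100_000                      # illustrative parameters
    x_msy = x_max / 2                            # biological optimum
    msy = natural_growth(x_msy, a, x_max)        # maximum sustained yield
    print(simulate(x_msy, [1.3 * msy] * 10, a, x_max))  # steadily declining stock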

Simulations based on equation 2 suggest that, until the 1730s, beaver populations remained at levels roughly consistent with maximum sustained yield management, sometimes referred to as the biological optimum. But after the 1730s there was a decline in beaver stocks to about half the maximum sustained yield levels. The cause of the depletion was closely related to what was happening in Europe. There, buoyant demand for felt hats and dwindling local fur supplies resulted in much higher prices for beaver pelts. These higher prices, in conjunction with the resulting competition from the French in the Hudson Bay region, led the Hudson’s Bay Company to offer much better terms to Natives who came to their trading posts (Carlos and Lewis, 1999).

Figure 3 reports a price index for furs at Fort Albany and at York Factory. The index represents a measure of what Natives received in European goods for their furs. At Fort Albany, fur prices were close to 70 from 1713 to 1731, but in 1732, in response to higher European fur prices and the entry of la Vérendrye, an important French trader, the price jumped to 81. After that year, prices continued to rise. The pattern at York Factory was similar. Although prices were high in the early years when the post was being established, beginning in 1724 the price settled down to about 70. At York Factory, the jump in price came in 1738, which was the year la Vérendrye set up a trading post in the York Factory hinterland. Prices then continued to increase. It was these higher fur prices that led to over-harvesting and, ultimately, a decline in beaver stocks.

Figure 3
Price Index for Furs: Fort Albany and York Factory, 1713 – 1770

Source: Carlos and Lewis, 2001.

Property Rights Regimes

An increase in price paid to Native hunters did not have to lead to a decline in the animal stocks, because Indians could have chosen to limit their harvesting. Why they did not was closely related to their system of property rights. One can classify property rights along a spectrum with, at one end, open access, where anyone can hunt or fish, and at the other, complete private property, where a sole owner has full control over the resource. In between lies a range of property rights regimes with access controlled by a community or a government, and where individual members of the group do not necessarily have private property rights. Open access creates a situation where there is less incentive to conserve, because animals not harvested by a particular hunter will be available to other hunters in the future. Thus the closer a system is to open access, the more likely it is that the resource will be depleted.

Across aboriginal societies in North America, one finds a range of property rights regimes. Native Americans did have a concept of trespass and of property, but individual and family rights to resources were not absolute. Under what is sometimes referred to as the Good Samaritan principle (McManus, 1972), outsiders were not permitted to harvest furs on another’s territory for trade, but they were allowed to hunt game and even beaver for food. Combined with this limitation to private property was an Ethic of Generosity that included liberal gift-giving, where any visitor to one’s encampment was to be supplied with food and shelter.

Social norms such as gift-giving and the related Good Samaritan principle emerged because of the nature of the aboriginal environment. The primary objective of aboriginal societies was survival. Hunting was risky, and so rules were put in place that would reduce the risk of starvation. As Berkes et al. (1989, p. 153) note, for such societies: “all resources are subject to the overriding principle that no one can prevent a person from obtaining what he needs for his family’s survival.” Such actions were reciprocal, and especially in the sub-arctic world they were an insurance mechanism. These norms, however, also reduced the incentive to conserve the beaver and other animals that were part of the fur trade. The combination of these norms and the increasing price paid to Native traders led to the large harvests in the 1740s and ultimately depletion of the animal stock.

The Trade in European Goods

Indians were the primary agents in the North American commercial fur trade. It was they who hunted the animals, and transported and traded the pelts or skins to European intermediaries. The exchange was voluntary. In return for their furs, Indians obtained both access to an iron technology to improve production and access to a wide range of new consumer goods. It is important to recognize, however, that although the European goods were new to aboriginals, the concept of exchange was not. The archaeological evidence indicates an extensive trade between Native tribes in the north and south of North America prior to European contact.

The extraordinary records of the Hudson’s Bay Company allow us to form a clear picture of what Indians were buying. Table 2 lists the goods received by Natives at York Factory, which was by far the largest of the Hudson’s Bay Company trading posts. As is evident from the table, the commercial trade involved more than beads and baubles or even guns and alcohol; rather, Native traders were receiving a wide range of products that improved their ability to meet their subsistence requirements and allowed them to raise their living standards. The items have been grouped by use. The producer goods category was dominated by firearms, including guns, shot and powder, but it also included knives, awls and twine. The Natives traded for guns of different lengths. The 3-foot gun was used mainly for waterfowl and in heavily forested areas where game could be shot at close range. The 4-foot gun was more accurate and suitable for open spaces. In addition, the 4-foot gun could play a role in warfare. Maintaining guns in the harsh sub-arctic environment was a serious problem, and ultimately the Hudson’s Bay Company was forced to send gunsmiths to its trading posts to assess quality and help with repairs. Kettles and blankets were the main items in the “household goods” category. These goods probably became necessities to the Natives who adopted them. Then there were the luxury goods, which have been divided into two broad categories: “tobacco and alcohol,” and “other luxuries,” dominated by cloth of various kinds (Carlos and Lewis, 2001; 2002).

Table 2
Value of Goods Received at York Factory in 1740 (made beaver)

We have much less information about the French trade. The French are reported to have exchanged similar items, although given their higher transport costs, both the furs received and the goods traded tended to be higher in value relative to weight. The Europeans, it might be noted, supplied no food to the trade in the eighteenth century. In fact, Indians helped provision the posts with fish and fowl. This role of food purveyor grew in the nineteenth century as groups known as the “home guard Cree” came to live around the posts; as well, pemmican, supplied by Natives, became an important source of nourishment for Europeans involved in the buffalo hunts.

The value of the goods listed in Table 2 is expressed in terms of the unit of account, the made beaver, which the Hudson’s Bay Company used to record its transactions and determine the rate of exchange between furs and European goods. The price of a prime beaver pelt was 1 made beaver, and every other type of fur and good was assigned a price based on that unit. For example, a marten (a type of mink) was a third of a made beaver, a blanket was 7 made beaver, a gallon of brandy, 4 made beaver, and a yard of cloth, 3½ made beaver. These were the official prices at York Factory. Thus Indians, who traded at these prices, received, for example, a gallon of brandy for four prime beaver pelts, two yards of cloth for seven beaver pelts, and a blanket for 21 marten pelts. This was barter trade in that no currency was used; and although the official prices implied certain rates of exchange between furs and goods, Hudson’s Bay Company factors were encouraged to trade at rates more favorable to the Company. The actual rates, however, depended on market conditions in Europe and, most importantly, the extent of French competition in Canada. Figure 3 illustrates the rise in the price of furs at York Factory and Fort Albany in response to higher beaver prices in London and Paris, as well as to a greater French presence in the region (Carlos and Lewis, 1999). The increase in price also reflects the bargaining ability of Native traders during periods of direct competition between the English and French and later the Hudson’s Bay Company and the Northwest Company. At such times, the Native traders would play both parties off against each other (Ray and Freeman, 1978).
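The official price list lends itself to a small Python sketch (an illustration based only on the prices quoted above; the marten and cloth figures are the ones implied by the exchange examples in the text):

    # Official York Factory prices, in made beaver (MB), as quoted above.
    PRICES_MB = {
        "prime beaver pelt": 1.0,
        "marten pelt": 1.0 / 3.0,
        "blanket": 7.0,
        "gallon of brandy": 4.0,
        "yard of cloth": 3.5,
    }

    def pelts_for(good, quantity=1, pelt="prime beaver pelt"):
        # Pelts a trader must deliver for a given quantity of a good.
        return quantity * PRICES_MB[good] / PRICES_MB[pelt]

    print(round(pelts_for("gallon of brandy"), 2))           # 4.0
    print(round(pelts_for("yard of cloth", 2), 2))           # 7.0
    print(round(pelts_for("blanket", 1, "marten pelt"), 2))  # 21.0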

The records of the Hudson’s Bay Company provide us with a unique window to the trading process, including the bargaining ability of Native traders, which is evident in the range of commodities received. Natives only bought goods they wanted. Clear from the Company records is that it was the Natives who largely determined the nature and quality of those goods. As well the records tell us how income from the trade was being allocated. The breakdown differed by post and varied over time; but, for example, in 1740 at York Factory, the distribution was: producer goods – 44 percent; household goods – 9 percent; alcohol and tobacco – 24 percent; and other luxuries – 23 percent. An important implication of the trade data is that, like many Europeans and most American colonists, Native Americans were taking part in the consumer revolution of the eighteenth century (de Vries, 1993; Shammas, 1993). In addition to necessities, they were consuming a remarkable variety of luxury products. Cloth, including baize, duffel, flannel, and gartering, was by far the largest class, but they also purchased beads, combs, looking glasses, rings, shirts, and vermillion among a much longer list. Because these items were heterogeneous in nature, the Hudson’s Bay Company’s head office went to great lengths to satisfy the specific tastes of Native consumers. Attempts were also made, not always successfully, to introduce new products (Carlos and Lewis, 2002).

Perhaps surprising, given the emphasis that has been placed on it in the historical literature, was the comparatively small role of alcohol in the trade. At York Factory, Native traders received in 1740 a total of 494 gallons of brandy and “strong water,” which had a value of 1,976 made beaver. More than twice this amount was spent on tobacco in that year, nearly five times as much on firearms, twice as much on cloth, and more was spent on blankets and kettles than on alcohol. Thus, brandy, although a significant item of trade, was by no means a dominant one. In addition, alcohol could hardly have created serious social problems during this period. The amount received would have allowed for no more than ten two-ounce drinks per year for the adult Native population living in the region.

The Labor Supply of Natives

Another important question can be addressed using the trade data. Were Natives “lazy and improvident” as they have been described by some contemporaries, or were they “industrious” like the American colonists and many Europeans? Central to answering this question is how Native groups responded to the price of furs, which began rising in the 1730s. Much of the literature argues that Indian trappers reduced their effort in response to higher fur prices; that is, they had backward-bending supply curves of labor. The view is that Natives had a fixed demand for European goods that, at higher fur prices, could be met with fewer furs, and hence less effort. Although widely cited, this argument does not stand up. Not only were higher fur prices accompanied by larger total harvests of furs in the region, but the pattern of Native expenditure also points to a scenario of greater effort. From the late 1730s to the 1760s, as the price of furs rose, the share of expenditure on luxury goods increased dramatically (see Figure 4). Thus Natives were not content simply to accept their good fortune by working less; rather they seized the opportunity provided to them by the strong fur market by increasing their effort in the commercial sector, thereby dramatically augmenting the purchases of those goods, namely the luxuries, that could raise their living standards.

Figure 4
Native Expenditure Shares at York Factory 1716 – 1770

Source: Carlos and Lewis, 2001.

A Note on the Non-commercial Sector

As important as the fur trade was to Native Americans in the sub-arctic regions of Canada, commerce with the Europeans comprised just one, relatively small, part of their overall economy. Exact figures are not available, but the traditional sectors (hunting, gathering, food preparation and, to some extent, agriculture) must have accounted for at least 75 to 80 percent of Native labor during these decades. Nevertheless, despite the limited time spent in commercial activity, the fur trade had a profound effect on the nature of the Native economy and Native society. The introduction of European producer goods, such as guns, and household goods, mainly kettles and blankets, changed the way Native Americans achieved subsistence; and the European luxury goods expanded the range of products that allowed them to move beyond subsistence. Most importantly, the fur trade connected Natives to Europeans in ways that affected how and how much they chose to work, where they chose to live, and how they exploited the resources on which the trade and their survival was based.

References

Berkes, Fikret, David Feeny, Bonnie J. McCay, and James M. Acheson. “The Benefits of the Commons.” Nature 340 (July 13, 1989): 91-93.

Braund, Kathryn E. Holland. Deerskins and Duffels: The Creek Indian Trade with Anglo-America, 1685-1815. Lincoln: University of Nebraska Press, 1993.

Carlos, Ann M., and Elizabeth Hoffman. “The North American Fur Trade: Bargaining to a Joint Profit Maximum under Incomplete Information, 1804-1821.” Journal of Economic History 46, no. 4 (1986): 967-86.

Carlos, Ann M., and Frank D. Lewis. “Indians, the Beaver and the Bay: The Economics of Depletion in the Lands of the Hudson’s Bay Company, 1700-1763.” Journal of Economic History 53, no. 3 (1993): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Property Rights, Competition and Depletion in the Eighteenth-Century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann M., and Frank D. Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company.” In The Other Side of the Frontier: Economic Explorations in Native American History, edited by Linda Barrington, 131-149. Boulder, CO: Westview Press, 1999.

Carlos, Ann M., and Frank D. Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 2 (2002): 285-317.

Carlos, Ann M., and Stephen Nicholas. “Agency Problems in Early Chartered Companies: The Case of the Hudson’s Bay Company.” Journal of Economic History 50, no. 4 (1990): 853-75.

Clarke, Fiona. Hats. London: Batsford, 1982.

Crean, J. F. “Hats and the Fur Trade.” Canadian Journal of Economics and Political Science 28, no. 3 (1962): 373-386.

Corner, David. “The Tyranny of Fashion: The Case of the Felt-Hatting Trade in the Late Seventeenth and Eighteenth Centuries.” Textile History 22, no.2 (1991): 153-178.

de Vries, Jan. “Between Purchasing Power and the World of Goods: Understanding the Household Economy in Early Modern Europe.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 85-132. London: Routledge, 1993.

Ginsburg, Madeleine. The Hat: Trends and Traditions. London: Studio Editions, 1990.

Haeger, John D. John Jacob Astor: Business and Finance in the Early Republic. Detroit: Wayne State University Press, 1991.

Harte, N.B. “The Economics of Clothing in the Late Seventeenth Century.” Textile History 22, no. 2 (1991): 277-296.

Heidenreich, Conrad E., and Arthur J. Ray. The Early Fur Trade: A Study in Cultural Interaction. Toronto: McClelland and Stewart, 1976.

Helm, June, ed. Handbook of North American Indians 6, Subarctic. Washington: Smithsonian, 1981.

Innis, Harold. The Fur Trade in Canada (revised edition). Toronto: University of Toronto Press, 1956.

Krech III, Shepard. The Ecological Indian: Myth and History. New York: Norton, 1999.

Lawson, Murray G. Fur: A Study in English Mercantilism. Toronto: University of Toronto Press, 1943.

McManus, John. “An Economic Analysis of Indian Behavior in the North American Fur Trade.” Journal of Economic History 32, no.1 (1972): 36-53.

Ray, Arthur J. Indians in the Fur Trade: Their Role as Hunters, Trappers and Middlemen in the Lands Southwest of Hudson Bay, 1660-1870. Toronto: University of Toronto Press, 1974.

Ray, Arthur J. and Donald Freeman. “Give Us Good Measure”: An Economic Analysis of Relations between the Indians and the Hudson’s Bay Company before 1763. Toronto: University of Toronto Press, 1978.

Ray, Arthur J. “Bayside Trade, 1720-1780.” In Historical Atlas of Canada 1, edited by R. Cole Harris, plate 60. Toronto: University of Toronto Press, 1987.

Rich, E. E. Hudson’s Bay Company, 1670 – 1870. 2 vols. Toronto: McClelland and Stewart, 1960.

Rich, E.E. “Trade Habits and Economic Motivation among the Indians of North America.” Canadian Journal of Economics and Political Science 26, no. 1 (1960): 35-53.

Shammas, Carole. “Changes in English and Anglo-American Consumption from 1550-1800.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 177-205. London: Routledge, 1993.

Wien, Thomas. “Selling Beaver Skins in North America and Europe, 1720-1760: The Uses of Fur-Trade Imperialism.” Journal of the Canadian Historical Association, New Series 1 (1990): 293-317.

Citation: Carlos, Ann and Frank Lewis. “Fur Trade (1670-1870)”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-fur-trade-1670-to-1870/

The U.S. Economy in the 1920s

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, is a most interesting era. The decade of the 1930s marks the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing and developing role of government in economic activity in the 1930s, the decade of the 1920s often tends to get overlooked. This is unfortunate because the 1920s were a period of vigorous, vital economic growth. They marked the first truly modern decade, and dramatic economic developments are found in those years. The automobile was rapidly adopted, to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century, their growth had been tied to rail or trolley access, and this was limited to the largest cities. The flexibility of car access changed this, and the growth of suburbs began to accelerate. The demands of trucks and cars led to rapid growth in the construction of all-weather surfaced roads to facilitate their movement. Rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers, and the United States moved to a dominant position in international trade and global business. These developments make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of overall production in the economy as measured by GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929, according to the most widely used estimates. (Historical Statistics of the United States, or HSUS, 1976) Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth- and twentieth-century standards these were relatively rapid rates of real economic growth, and they would be considered rapid even today.
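To see what these rates imply, here is a minimal sketch of the compound-growth arithmetic in Python; the 4.2 percent figure comes from the text above, and the nine-year horizon is 1920 to 1929:

    # Compound growth: level_T = level_0 * (1 + g) ** T.
    # Using the 4.2 percent annual rate quoted above for 1920-1929.
    growth_factor = (1 + 0.042) ** 9
    print(round(growth_factor, 2))  # about 1.45: real GNP rose roughly 45 percent

    # The same relation inverted recovers a growth rate from two levels.
    def cagr(initial, final, years):
        """Compound annual growth rate between two levels."""
        return (final / initial) ** (1 / years) - 1

    print(round(cagr(100, 145, 9), 3))  # about 0.042

So growth at the quoted rate implies that real GNP in 1929 stood roughly 45 percent above its 1920 level.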

There were several interruptions to this growth. In mid-1920 the American economy began to contract, and the 1920-1921 depression lasted about a year, but a rapid recovery reestablished full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There was a very mild recession in 1924 and another mild recession in 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shutdown of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share was declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild, but the contraction accelerated after the crash of the stock market at the end of October. Real GNP fell 10.2 percent from 1929 to 1930, while real GNP per capita fell 11.5 percent.


Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up with the increasing foreign demand during the First World War. As European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as many other inputs that firms employ. In the scramble to beat price increases during 1919, firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression, firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce; the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s work (1988) suggests that prices changed much more easily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable, though they were more likely to fall than to rise.

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow-growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now make this argument. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower-growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing spread widely through the population. New products and new processes of producing those products drove this growth. The widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined. (Whaples 2001) New products and services created new markets, such as those for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other laborsaving household appliances. This electricity was distributed by the growing electric utilities. The stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends, but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services, and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and to a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s. (Figure 4) There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration. Urban families have tended to have fewer children than rural families because urban children cannot augment family incomes as unpaid workers the way rural children do. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.
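The quota arithmetic itself is simple; here is a minimal sketch in Python with made-up population figures (both numbers are hypothetical and chosen only to illustrate how the 1924 act tightened the limits):

    # Hypothetical nationality: a larger resident population in 1910 than
    # in 1890, as was typical for the newer immigrant groups.
    pop_1910 = 1_000_000  # residents counted in the 1910 census (hypothetical)
    pop_1890 = 400_000    # residents counted in the 1890 census (hypothetical)

    quota_1921_act = 0.03 * pop_1910  # 30,000 admissions per year
    quota_1924_act = 0.02 * pop_1890  #  8,000 admissions per year

Because the newer immigrant groups from central, southern, and eastern Europe had far smaller resident populations in 1890 than in 1910, shifting the base census back while cutting the percentage reduced their quotas disproportionately.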

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares, New England and the Plains. For New England this was a continuation of a long-term trend. The population share of the Plains region had been rising through the nineteenth century, but in the interwar period its reliance on agriculture, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population shares were the Southwest and, particularly, the Far West; California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than the population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rate, or the fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent, as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.
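As a rough consistency check on these figures, the overall participation rate is a population-weighted average of the male and female rates. A minimal sketch, assuming for illustration equal male and female shares of the population aged 14 and over:

    # Overall rate = male share * male rate + female share * female rate.
    # The 0.5 shares are an illustrative assumption, not from the source.
    lfp_1920 = 0.5 * 89.6 + 0.5 * 24.3  # about 57.0 percent (text: 57.7)
    lfp_1930 = 0.5 * 86.8 + 0.5 * 25.1  # about 56.0 percent (text: 56.3)

The small gaps between these back-of-the-envelope figures and the reported rates reflect the fact that males made up slightly more than half of the population aged 14 and over.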

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression from the summer of 1929 on brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. In these industries, male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers, while unskilled males received on average 35 percent more than females. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers, real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent over the same years. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties, and the real daily wage rate for farmworkers, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-21 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to the workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government-sponsored unemployment insurance, minimum wage proposals, maximum hours proposals, and social security programs. As Lloyd Ulman (1961) points out, the AFL under Gompers’s direction judged legislation on the basis of whether the statute would or would not aid collective bargaining. After Gompers’s death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. Only a few unions were closer to today’s industrial unions, in which the required skills are much less extensive (or nonexistent), making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers, UMW.

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)

Real average net income per farm fell 72.6 percent between 1920 and 1921 and, though rising during the twenties, never recovered the relative levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages, rather than on first mortgages as they were in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-1921 and 1929-1933 depressions from those that arose because agriculture was declining relative to the other sectors. Very slowly growing demand for basic agricultural products and significant increases in the productivity of labor, land, and machinery in agricultural production, combined with much more rapid extensive economic growth in the nonagricultural sectors of the economy, required a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to move voluntarily from one sector to another through income differentials, suggesting that even in the absence of the effects of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.
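In standard notation, the income elasticity of demand is the percentage change in quantity demanded per one percent change in income:

    \[ \eta_I \;=\; \frac{\Delta Q / Q}{\Delta I / I} \]

An elasticity below one means that demand grows less than proportionately with income; with \eta_I well below one for cereal grains, pork, and cotton, even rapidly rising incomes translated into only slowly rising demand for these staples.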

Table 2 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per-acre yields). Per-acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements in Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to the late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers’ complaints. In later years farmers dubbed the 1910-14 period agriculture’s “golden years” and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal government intervention to limit production and raise farm prices, such intervention did not come until Roosevelt took office. Rather, the government relied upon the traditional method of aiding injured groups, tariffs, and upon the “sanctioning and promotion of cooperative marketing associations.” In 1921 Congress attempted to control the grain exchanges and to compel merchants and stockyards to charge “reasonable rates” with the Packers and Stockyards Act and the Grain Futures Act. In 1922 Congress passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed “McNary-Haugen” bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish “fair” exchange values (based on the 1910-14 period) for each product and to maintain them through tariffs and a private corporation that would be chartered by the government and could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration passed the Agricultural Marketing Act in 1929 and the Hawley-Smoot Tariff in 1930. The Marketing Act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, manufacturing saw a rapid rate of productivity growth during the twenties. Rising real wages, due to immigration restrictions and the slower growth of the resident population, spurred this, as did transportation improvements and communications advances. These developments brought about differential growth in the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the northeast was the first area to really develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base, and the other regions began to create manufacturing bases in the last half of the nineteenth century, resulting in a relative westward and southward shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions’ shares of manufacturing employment fell while all of the other regions, excluding the West North Central region, gained. There was considerable variation in the growth of the industries and shifts in their rankings during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in the use of nonreproducible natural resources. The large American market was knit together as one large market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods, were associated with the “high throughput” of fuel and raw materials relative to labor and capital inputs. As a result, the United States became the dominant industrial force in the world in the 1920s and 1930s. According to Wright, after World War II “the process by which the United States became a unified ‘economy’ in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the ‘factor endowment’ of individual countries.” (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, first introduced by Frederick W. Taylor, were introduced on a wider scale.

One of the important forces contributing to mass production and increased productivity was the transfer to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities. In 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site; by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas and from raw unprocessed energy in the forms of coal and waterpower to processed energy in the form of internal combustion fuel and electricity increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade before the 1920s, but it increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as, particularly, developments in energy and transportation accelerated.

Table 3: Average Annual Rates of Labor Productivity and Capital Productivity Growth

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was that it served as an indirect “lever to increase production.” There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an “inestimable boon” to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines, since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was the technological developments in new machines and processes, of which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose when mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction by replacing the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, these continuing advances were the “cumulative process resulting from a vast number of successive small changes.” Because of the continuing advances in the quality of tires and in the manufacturing of tires, between 1910 and 1930 “tire costs per thousand miles of driving fell from $9.39 to $0.65.”

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizmann fermentation process. Similarly, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power and paper manufacturing processes,” especially the Fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms, even when they had become vertically integrated. When this began to change, the management of the large industrial firms had to change accordingly.

Because of these changes in the size and structure of the firm during the First World War, E. I. du Pont de Nemours and Company was led to adopt a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized, single-division structure that had served it so well was not suited to this strategy, and its poor business performance led its executives to develop, between 1919 and 1921, a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that those divisions essentially remained separate companies and there was little coordination between the operating divisions. A financial crisis at the end of 1920 ousted W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing the management of the firms. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a matter of public policy, concern with monopolies diminished in the 1920s even though firms were growing larger. But the growing size of businesses became one of the convenient scapegoats on which to blame the Great Depression.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials where that supply was not plentiful and was dispersed, and where firms “feared that raw materials might become controlled by competitors or independent suppliers.” (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network proved inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases firms had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases “the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors.” Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)

The steel industry was already highly concentrated by 1920 as U.S. Steel had around 50 percent of the market. But U. S. Steel’s market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U. S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with “the assumption that competition was a dominant strategy for steel manufacturers” until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series includes many of the smaller mergers, while the series constructed by Carl Eis (1969) includes only the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called “merger for oligopoly” rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department, but after the Great Depression began, the New Dealers moved to exempt much of business from the antitrust laws and to cartelize industries under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. The laws’ two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors getting together to agree on stable and higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms that are fixing prices. Vertical price-fixing involves firms setting the prices of intermediate products purchased at different stages of production. It also tends to eliminate substitutes and makes the demand less elastic.
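The claim about revenues follows from the standard elasticity relation. Writing demand as Q(P), the price elasticity as \varepsilon = (dQ/dP)(P/Q), and revenue as R(P) = P\,Q(P),

    \[ \frac{dR}{dP} \;=\; Q(P)\,\bigl(1 + \varepsilon\bigr) \]

so when effective collusion eliminates substitutes and pushes demand into the inelastic range (|\varepsilon| < 1), dR/dP is positive and raising the agreed price raises the colluding firms’ revenues.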

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of bathroom bowls. The evidence in the case suggested that the firms were not very successful at doing so, but the court found that they were guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent-setting in that it prohibited explicit pricing conspiracies per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U. S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than “raw” energy, such as that available from coal and water. The evolution of energy sources for lighting also continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long-distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War, cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and declined, while natural gas and LP (liquefied petroleum) gas were relatively unimportant. These changes, especially the declining coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply of coal due to productivity advances in coal production and in the decreasing demand for coal. Demand fell as industries turned from coal to electricity and as productivity advances in the use of coal to create energy in steel production, railroads, and electric utilities reduced the amount of coal needed. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt-hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many miners to their home region. The local alternatives were few, and ignorance of alternatives outside the Appalachian rural areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California, strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California, field in 1921. New discoveries in Powell, Texas, and Smackover, Arkansas, further increased the supply of petroleum in 1921. New supply increases occurred in 1926 to 1928 with petroleum strikes in Seminole, Oklahoma, and Hendricks, Texas. The supply of oil increased sharply in 1930 to 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down real oil prices and the prices of petroleum derivatives, and the growing production capacity led to a generally declining trend in petroleum prices. McMillin and Parker (1994) argue that the supply shocks generated by these new discoveries were a factor in the business cycles of the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.
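The yield figures quoted above imply a roughly threefold increase in gasoline output per barrel; a minimal sketch of the arithmetic, using the standard 42-gallon barrel of crude:

    # Gasoline recovered per 42-gallon barrel before and after cracking,
    # using the 15 and 45 percent yields quoted in the text.
    barrel_gallons = 42
    gasoline_distillation = 0.15 * barrel_gallons  # about 6.3 gallons
    gasoline_cracking = 0.45 * barrel_gallons      # about 18.9 gallons

The separate claim that a barrel yielded twice as much gasoline in 1927 as in 1917 is consistent with this range, since by 1917 some early cracking was already in use.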

The petroleum companies also developed new ways to distribute gasoline that made it more convenient to purchase. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans, and the purchaser used a funnel to pour the gasoline from the can into the car. Then “filling stations” appeared, which specialized in filling cars’ tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or to contract with independent stations to distribute their gasoline exclusively. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to “service stations” to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations by the firms and producers, governments began stepping in. Led by Texas, which created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws were usually termed prorationing laws and were quotas designed to limit each well’s output to some fraction of its potential. The purpose was as much to stabilize and reduce production and raise prices as anything else, although generally such laws were passed under the guise of conservation. Although the federal government supported such attempts, not until the New Deal were federal laws passed to assist this.

Electricity

By the mid-1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and the greater distances over which electricity could be transmitted more than offset the necessity of transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and on the development of an efficient, lower-cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing the electrical appliances for homes and an increasing number of machines based on electricity began to appear in industry. The problem of lower cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions charged with setting rates so that the utilities could receive a “fair return” on their investments. Disagreements over what constituted a “fair return” and how the rate base should be calculated led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction cost basis. Because of the difficulty and cost of making these calculations, rate-setting tended to remain in the hands of the electric utilities, which, it has been suggested, did not lower rates enough to reflect the rising productivity and falling costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not regulation increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.
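This pricing pattern is what the standard markup rule for a price-setting firm would predict. As an illustrative aside (the notation here is ours, not the article's), a profit-maximizing price-setter facing a price elasticity of demand ε chooses its price P so that (P - MC) / P = 1 / |ε|, where MC is marginal cost. Customer classes with less elastic demand (a smaller |ε|) therefore bear proportionally higher markups per kilowatt-hour, which is exactly the segmentation pattern described above.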

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford’s introduction of the moving assembly production line in 1914, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs, and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government’s policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but the question of how this should be done fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 represented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was allowed to prescribe exact rates that were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to have the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for the railroad when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.
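The recapture arithmetic is simple enough to sketch in a few lines. The function and the dollar figures below are invented for illustration only; they are not drawn from the act or from any actual railroad's accounts.

```python
def recapture_split(fair_value, net_income, fair_return=0.06):
    """Split earnings above a 6 percent return on fair value, as the
    recapture clause of the Transportation Act of 1920 prescribed.
    Purely illustrative; names and numbers are hypothetical."""
    excess = max(0.0, net_income - fair_return * fair_value)
    road_contingency_fund = excess / 2  # held for the earning railroad
    icc_loan_fund = excess / 2          # loans to railroads needing assistance
    return road_contingency_fund, icc_loan_fund

# A hypothetical road: $100 million fair value earning $7 million (7 percent)
print(recapture_split(100_000_000, 7_000_000))  # -> (500000.0, 500000.0)
```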

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act directed the ICC to encourage railroad consolidation, but little came of this in the 1920s. In order to facilitate its control of the railroads, the ICC was given two additional powers. The first was control over the issuance or purchase of securities by railroads, and the second was the power to control changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic deserted the railroads much more quickly than freight traffic. As the network of all-weather surfaced roads increased, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways that had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of the regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were consolidated into Greyhound Lines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating the road system nor pay taxes upon it. Beginning with the Federal Road-Aid Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties were commonly 60 percent of the total funds allocated, and these came primarily from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) However, the use of gasoline taxes moved closer to the goal of users paying the costs of the highways. Nor did trucks have to pay for all of the highway construction, because automobiles jointly used the highways; trucks were responsible only for the extra costs of building highways strong enough to accommodate larger and heavier vehicles. Ideally, the gasoline taxes collected from trucks should have covered these extra (or marginal) costs of highway construction incurred because of truck traffic, and gasoline taxes tended to do this.

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. But by 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroads’ market, and, though the regulators failed to understand this at the time, the railroads’ monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ right-of-ways, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and information transfers sped, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications had given way to the telephone, as new electronic amplifiers made long distance telephone calls between the east and west coasts possible in 1915. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly, there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer, 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating companies, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring—wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor was the dramatic decline in farm incomes in the early twenties. A second was a change in the farmers’ environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920 automobiles, surfaced roads, movies, and the radio loosened the isolation, and the telephone was no longer as crucial.

Ottmar Mergenthaler’s development of the linotype machine in the late nineteenth century had irrevocably altered printing and publishing. This machine, which quickly cast a line of soft, lead-based type that could be printed, melted down, and then recast as a new line of type, dramatically lowered the costs of printing. Previously, all type had to be painstakingly set by hand, with individual pieces of cast type picked out from compartments in drawers to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each individual letter placed back into its compartment in its drawer for use in the next printing job. Because the process was so laborious, newspapers often were not published every day and did not contain many pages; partly as a result, most cities had many competing newspapers. In contrast, the linotype used a keyboard on which the operator typed the words in one of the lines in a news column. As the operator typed each letter, matrices for the letters dropped down from a magazine of matrices and were assembled into a line of type with automatic spacers to justify the line (fill out the column width). When the line was completed, the machine mechanically cast the line of matrices into a line of lead type. The line of lead type was ejected into a tray, and the letter matrices were mechanically returned to the magazine while the operator continued typing the next line of the news story. The first Mergenthaler linotype machine was installed at the New York Tribune in 1886. The linotype dramatically lowered the costs of printing newspapers (as well as books and magazines). Prior to the linotype, a typical newspaper averaged no more than 11 pages, and many were published only a few times a week. The linotype allowed newspapers to grow in size and to be published more regularly, and a process of consolidation of daily and Sunday newspapers began that continues to this day. Many have termed the Mergenthaler linotype the most significant printing invention since the introduction of movable type 400 years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast of Westinghouse’s KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations on the radio dial and deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that under the Radio Law of 1912 Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared, and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927 creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up being affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act, except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and stores and companies operating radio stations wrote this off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened up a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system in which individuals could purchase time to broadcast a message transmitted to other stations in the toll network using AT&T’s long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers, Authors and Publishers (ASCAP) began demanding a performance fee anytime ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking for annual fees from such stations based on the station’s power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T’s creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience, and in return the stations received a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the rate of adoption in the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that also would accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became “department stores of finance.” Banks opened installment (or personal) loan departments, expanded their mortgage lending, opened trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers’ control and reduced lending during the 1920-21 depression, began relying more on retained earnings and on stock and bond issues to raise investment funds and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties, only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks because national regulators and most state regulators prohibited branching. However, in the twenties a few states began to permit limited branching; California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members. A high bank failure rate in the 1920s has usually been explained by “overbanking,” or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there had been overbanking, on average each bank would have been underutilized, resulting in intense competition for deposits, higher costs, and lower earnings; free entry for any bank meeting the minimum requirements then in force would have made this possible. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable had these changes not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the twenties. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis during the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s, commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning-asset portfolios and gained expertise in the securities markets, the larger banks established investment departments and by the late twenties were an important force in the underwriting of new securities issued by nonfinancial corporations.

Among the nonbank financial intermediaries, those serving the securities markets exhibited perhaps the most dramatic growth during the twenties, but others also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929; by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets. The mutual savings banks were concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts’ interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties—especially common and preferred stock—and in the trading of existing shares of those securities. (Figure 24) The late twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities; the automobile manufacturers produced over four and a half million new cars in 1929; and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. “Playing the market” seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3 and then slid to 320 on October 21. In the following week the stock market “crashed,” with a record number of shares being traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year, 198, a level 183 points below the September 3 peak.

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of the rising stock prices. Stock pools, which were not made illegal until the Securities Exchange Act of 1934, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool, a group of speculators would pool large amounts of their funds and begin purchasing large amounts of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would “churn” the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rising, would decide to purchase the stock. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock’s price usually fell quickly, bringing large losses for the unsuspecting outside investors while reaping large gains for the pool insiders.

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades: the usual requirement was 10 to 15 percent of the purchase price, and apparently more often around 10 percent. Brokers, recognizing the problems with margin lending in the rapidly changing market, began raising margin requirements in late 1928 at the urging of a special New York Clearinghouse committee, well before the crash, and by the fall of 1929 margin requirements were, on average, the highest in the history of the New York Stock Exchange. One brokerage house required the following of its clients: securities with a selling price below $10 could be purchased only for cash; securities with a selling price of $10 to $20 required a 50 percent margin; securities of $20 to $30, a 40 percent margin; and securities with a price above $30, a 30 percent margin. In the first half of 1929 margin requirements on customers’ accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, investors who could not provide additional margin saw their brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
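As a minimal sketch of how the quoted tier schedule would have worked in practice (the function is ours, and the treatment of the exact $10/$20/$30 boundaries is an assumption):

```python
def required_margin(price, shares):
    """Dollars a client must post under the brokerage tier schedule
    quoted above. Illustrative only; boundary handling is assumed."""
    if price < 10:
        rate = 1.00   # cash only: the full purchase price
    elif price <= 20:
        rate = 0.50
    elif price <= 30:
        rate = 0.40
    else:
        rate = 0.30
    return rate * price * shares

# 100 shares of a $25 stock cost $2,500; at a 40 percent margin the
# client posts $1,000 and borrows the rest from the broker.
print(required_margin(25, 100))  # -> 1000.0
```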

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day, bringing on the third call for more margin in that week. On Black Thursday, October 24, prices initially fell sharply, but rallied somewhat in the afternoon so that the net loss was only 7 points; the volume of thirteen million shares, however, set a NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow-Jones index fell 38 points on a volume of nine million shares—three million in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans, and this accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow-Jones index fell 30 points on a record volume of nearly sixteen and a half million shares exchanged. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to slowly rise, and by April of 1930 they had increased 96 points from the low of November 13, “only” 87 points less than the peak of September 3, 1929. From that point, stock prices resumed their depressing decline until the low point was reached in the summer of 1932.


There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level, creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, “The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before.” But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929, stock prices were where they should have been and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash; the market broke each time news arrived of advances in congressional consideration of the Hawley-Smoot tariff. However, the virtually perfect foresight that Wanniski’s explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found that their relative constancy did not suggest that stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative and lower than post-Second World War dividend growth rates.

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by that person’s subjective expectations of each firm’s future earnings and dividends and of the future prices of shares of each firm’s stock. Because of this element of subjectivity, not only can we never accurately know those values, but also we can never know how they varied among individuals. The market price we observe will be the end result of all of the actions of the market participants, and the observed price may be different from the price almost all of the participants expected.

In fact, there are some indications that conditions differed in 1928 and 1929. Yields on common stocks were somewhat lower in 1928 and 1929. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers’ loans through 1927, the rates on brokers’ loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow-Jones index and related this to the DJI. Through 1927 the two track closely, but in 1928 and 1929 the index of stock prices grows much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andrei Shleifer (1991). They examined closed-end mutual funds, a type of fund in which investors wishing to liquidate must sell their shares to other individual investors; because such a fund holds traded securities whose market value is directly observable, its fundamental value is exactly measurable and can be compared with the price of the fund’s own shares. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929, the Standard and Poor’s composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929. There was a sharp divergence between the growth of stock prices and dividends; there were increasing premiums on call and time brokers’ loans in 1928 and 1929; margin requirements rose; and stock market volatility rose in the wake of the 1929 stock market crash.

There are several reasons for the creation of such a bubble. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that “While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends.” As a result investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated, and they were more likely to be caught up in the euphoria of the boom and bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks, and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith. (Bishop, 1987) In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As these investors in the experiments began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and ultimately stocks began trading at their fundamental dividend values. These bubbles and crashes occurred repeatedly, leading Smith to conjecture that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals govern the overall movements. And the end of the long bull market was almost certainly governed by this. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of 1929 and was clearly evident by August. By the middle of August, the rise in stock prices had slowed as better information on the contraction was received. There were repeated statements by leading figures that stocks were “overpriced,” and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased, and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, stock prices fell more. The late October crash made the decline occur much more rapidly, and the margin purchases and consequent forced selling of many of those stocks contributed to a more severe price fall. The recovery of stock prices from November 13 into April of 1930 suggests that stock prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the ending of the 1928-29 stock market bubble. The stock market crash did make the downturn become more severe beginning in November 1929. It reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty helping to bring on the contraction (Flacco and Parker, 1992). Though stock market prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll and by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either decide to buy or not buy at the fixed price. The policy dramatically lowered transactions costs for both the retailer and the purchaser. Prices were reduced with a smaller markup over the wholesale price, and a large sales volume and a quicker turnover of the store’s inventory generated profits.

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century, and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both were located in Chicago due to its central location in the nation’s rail network, and both had benefited from the advent of Rural Free Delivery in 1896 and low-cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who was able to convince Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these in the central business district (CBD), Wood located many on major streets closer to the residential areas. These moves of Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions and in 1912 their first “cash-and-carry” full-range grocery was opened. Soon they were opening 50 of these stores each week and by the 1920s A&P had 14,000 stores. They then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

The shopping center, another innovation in retailing that began in the twenties, was not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as the ownership and use of the car began expanding, population began to move out of the crowded central cities toward the more open suburbs. When General Robert E. Wood set Sears off on its development of urban stores, he located these not in the central business district but as free-standing stores on major arteries away from the CBD with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it actually worked was sufficient to promote stability and growth in international transactions.
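The adjustment mechanism just described can be sketched in a few lines of code. Everything below is invented for illustration: prices are tied to the monetary gold stock by a crude quantity-theory assumption, and the deficit is assumed to grow with the gap between home and foreign prices.

```python
# Stylized price-specie-flow adjustment for a deficit country.
gold = 110.0                        # monetary gold stock (indexes the money supply)
for year in range(1, 8):
    prices = 0.01 * gold            # domestic price level; foreign level fixed at 1.0
    deficit = 40 * (prices - 1.0)   # trade deficit widens when home goods are dear
    gold -= deficit                 # the deficit is settled by an outflow of gold
    print(f"year {year}: prices {prices:.3f}, deficit {deficit:+.2f}")
# Prices converge toward the foreign level and the deficit shrinks toward zero,
# which is the equilibrating mechanism the gold exchange standard later lacked.
```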

After the First World War it was argued that there was a “shortage” of fluid monetary gold to use for the gold standard, so some method of “economizing” on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped the domestic circulation of gold. Second, the “gold exchange” system was created. Most countries held their international reserves in the form of U.S. dollars or British pounds, and international transactions used dollars or pounds, as long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed exchange rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to the importation of gold, but the French authorities sterilized these inflows rather than allowing them to expand the money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal actions. First, the United States had to run an import surplus or, on net, export capital out of the United States to provide a pool of dollars overseas. Germany then had either to have an export surplus or else import American capital so as to build up dollar reserves—that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries that then shipped them back to the United States as payment on their U.S. debts. If these conditions did not occur (and note that the “new” gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated), disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act, which raised tariffs, particularly on manufactured goods. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff of 1921, and its protection on many items was extremely high, ranging from 60 to 100 percent ad valorem (or as a percent of the price of the item). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those in the more famous (or “infamous”) Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover’s election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930, and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929 the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on, finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain and hence allowed those countries to pay their debts to the United States came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938) and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though arguments now exist as to whether the booming stock market in the United States was to blame for this, it had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

The United States had the majority of the world’s monetary gold, about 40 percent, by 1920. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which it lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as the ongoing boom in the stock market. It began to raise the discount rate to stop these outflows. Gold also began flowing into the United States as foreigners sought dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem the loss of gold. In country after country these deflationary strategies began contracting economic activity, and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not yet been developed, this does not mean that during the twenties no concept of the government’s role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government. The first was that federal spending on public works could be an important force in reducing unemployment during downturns; the second was that the timing of public works spending could be varied to help stabilize investment. Both concepts fit the ideas held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of “deliberate social engineering.”

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low and topped out at 7 percent on taxable income in excess of $750,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With American entry into the First World War, the rates were dramatically increased. To obtain additional revenue in 1918, marginal rates were again increased. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The income tax had also been extended downward, so that more than 30 percent of the nation’s income recipients were subject to income taxes by 1918. However, through the purchase of tax-exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high-income taxpayers and their share of total taxes paid declined as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919, but the surtax rates, which made the income tax highly progressive, were retained. (Smiley and Keehn, 1995)

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and that rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how they should be cut. Democrats and Progressive Republicans argued for rate cuts targeted at lower-income taxpayers while maintaining most of the steep progressivity of the tax rates; they believed that remedies could be found to change the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent.

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)

Each year in the twenties the federal government generated a surplus, in some years as much as 1 percent of GNP. The surpluses were used to pay down the federal debt, which declined by 25 percent between 1920 and 1930. Contrary to simple macroeconomic models that argue a federal government budget surplus must be contractionary and tend to keep an economy from reaching full employment, the American economy operated at or near full employment throughout the twenties and saw significant economic growth. In this case, the surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than pulled out of circulation and held idle.

Monetary Policies

In 1913 fear of the “money trust” and its monopoly power led Congress to create 12 district central banks when it established the Federal Reserve System. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the 12 district banks; it was composed of five presidential appointees plus the current secretary of the treasury and comptroller of the currency. All national banks had to become members of the Federal Reserve System (the “Fed”), and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be held on deposit at the district bank. Member commercial banks were allowed to rediscount commercial paper at their district bank, receiving Federal Reserve currency in return. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations: purchasing and selling federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check clearinghouse for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary role was to act as a lender of last resort to prevent banking panics and to serve as a check-clearing mechanism for the nation’s banks. The Federal Reserve Board and the governors of the district banks were to exercise these functions jointly, but the division of responsibilities was never clear, and a struggle for power ensued, mainly between the Federal Reserve Board and the New York Federal Reserve Bank, which was led through 1928 by J. P. Morgan’s protégé, Benjamin Strong. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the gold standard and the real bills doctrine. The gold standard was supposed to be quasi-automatic, with an effective limit on the quantity of money. The real bills doctrine, however, which required that all loans be made on short-term, self-liquidating commercial paper, placed no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to provide the required “elasticity” of the stock of money to “accommodate” the needs of industry and business. In practice, the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and discounted banks’ customer loans used to purchase war bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918; in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and continue discounting loans on Victory bonds so as to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus, together with supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920, contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and of the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action was past and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were falling rapidly, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922, the rate had been lowered to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and increased the discount rate from 4 to 4.5 percent because it believed the recovery was too rapid. However, by the fall of 1923 there were some signs of a business slump. McMillin and Parker (1994) argue that this contraction, like the 1927 contraction, was related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter the slump. Between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in a series of three separate steps. In addition to moderating the mild business slump, the expansionary policy was intended to reduce American interest rates relative to British interest rates, reversing the gold flow back toward Great Britain and allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States, especially in southeastern Florida, and land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments and in 1926 sold some securities to gently slow the real estate and stock market booms. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed, but the stock market boom continued.

The American economy entered another mild business recession in the fall of 1926 that lasted until the fall of 1927. One factor in this was Henry Ford’s shutdown of all of his factories for the changeover from the Model T to the Model A, which left his employees without jobs and without income for over six months. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began, the Fed had already taken steps to counteract the business slump and reduce the gold inflow: in early 1927 the Fed reduced discount rates and made large securities purchases. One result was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the gold exports went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to remain on the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster, and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the continuing loss of gold and wanted to bring it to an end. To do this it sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district bank policy changes. However, problems began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York Federal Reserve Bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it, and the other district banks, were unwilling to undertake; discount rates, it argued, had to be increased. The Federal Reserve Board countered that a general rate increase would slow economic activity across the board rather than be specifically targeted at stock market speculation. The result was that little was done for a year: rates were not raised, but no open market purchases were undertaken either. Rates were finally raised to 6 percent in August of 1929, but by that time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced the discount rate to 4.5 percent. In January it cut the rate again, beginning a series of reductions that brought the rate to 2.5 percent by the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the Southeast in November and December of 1930, and in its wake the public’s holdings of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not forecast the contraction from 1929 to 1933. Rather, it was the depression of the 1930s and the Second World War that interrupted the economic growth begun in the 1920s; that growth resumed after the war. Just as the construction of skyscrapers that began in the 1920s resumed in the 1950s, so did real economic growth and progress. In retrospect we can see that the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, are echoed in the 1990s in the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.

References

Adams, Walter, ed. The Structure of American Industry, 5th ed. New York: Macmillan Publishing Co., 1977.

Aldcroft, Derek H. From Versailles to Wall Street, 1919-1929. Berkeley: The University of California Press, 1977.

Allen, Frederick Lewis. Only Yesterday. New York: Harper and Brothers, 1931.

Alston, Lee J. “Farm Foreclosures in the United States During the Interwar Period.” The Journal of Economic History 43, no. 4 (1983): 885-904.

Alston, Lee J., Wayne A. Grove, and David C. Wheelock. “Why Do Banks Fail? Evidence from the 1920s.” Explorations in Economic History 31 (1994): 409-431.

Ankli, Robert. “Horses vs. Tractors on the Corn Belt.” Agricultural History 54 (1980): 134-148.

Ankli, Robert and Alan L. Olmstead. “The Adoption of the Gasoline Tractor in California.” Agricultural History 55 (1981): 213-230.

Appel, Joseph H. The Business Biography of John Wanamaker, Founder and Builder. New York: The Macmillan Co., 1930.

Baker, Jonathan B. “Identifying Cartel Pricing Under Uncertainty: The U.S. Steel Industry, 1933-1939.” The Journal of Law and Economics 32 (1989): S47-S76.

Barger, E. L., et al. Tractors and Their Power Units. New York: John Wiley and Sons, 1952.

Barnouw, Erik. A Tower in Babel: A History of Broadcasting in the United States, Vol. I: To 1933. New York: Oxford University Press, 1966.

Barnouw, Erik. The Golden Web: A History of Broadcasting in the United States, Vol. II: 1933 to 1953. New York: Oxford University Press, 1968.

Beasley, Norman. Main Street Merchant: The Story of the J. C. Penney Company. New York: Whittlesey House, 1948.

Beckman, Theodore N. and Herman C. Nolen. The Chain Store Problem: A Critical Analysis. New York: McGraw-Hill Book Co., 1938.

Benson, Susan Porter. Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940. Urbana, IL: University of Illinois Press, 1986.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin Co., 1960.

Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929-1939. New York: Cambridge University Press, 1987.

Bishop, Jerry E. “Stock Market Experiment Suggests Inevitability of Booms and Busts.” The Wall Street Journal, 17 November, 1987.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics. Washington: USGPO, 1943.

Bogue, Allan G. “Changes in Mechanical and Plant Technology: The Corn Belt, 1910-1940.” The Journal of Economic History 43 (1983): 1-26.

Breit, William and Elzinga, Kenneth. The Antitrust Casebook: Milestones in Economic Regulation, 2d ed. Chicago: The Dryden Press, 1989.

Bright, Arthur A., Jr. The Electric Lamp Industry: Technological Change and Economic Development from 1800 to 1947. New York: Macmillan, 1947.

Brody, David. Labor in Crisis: The Steel Strike. Philadelphia: J. B. Lippincott Co., 1965.

Brooks, John. Telephone: The First Hundred Years. New York: Harper and Row, 1975.

Brown, D. Clayton. Electricity for Rural America: The Fight for the REA. Westport, CT: The Greenwood Press, 1980.

Brown, William A., Jr. The International Gold Standard Reinterpreted, 1914-1934, 2 vols. New York: National Bureau of Economic Research, 1940.

Brunner, Karl and Allan Meltzer. “What Did We Learn from the Monetary Experience of the United States in the Great Depression?” Canadian Journal of Economics 1 (1968): 334-48.

Bryant, Keith L., Jr., and Henry C. Dethloff. A History of American Business. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1983.

Bucklin, Louis P. Competition and Evolution in the Distributive Trades. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Bullock, Roy J. “The Early History of the Great Atlantic & Pacific Tea Company.” Harvard Business Review 11 (1933): 289-93.

Bullock, Roy J. “A History of the Great Atlantic & Pacific Tea Company Since 1878.” Harvard Business Review 12 (1933): 59-69.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, Edited by Mark Wheeler. Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1998.

Chandler, Alfred D., Jr. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, MA: The M.I.T. Press, 1962.

Chandler, Alfred D., Jr. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: the Belknap Press Harvard University Press, 1977.

Chandler, Alfred D., Jr. Giant Enterprise: Ford, General Motors, and the American Automobile Industry. New York: Harcourt, Brace, and World, 1964.

Chester, Giraud, and Garnet R. Garrison. Radio and Television: An Introduction. New York: Appleton-Century Crofts, 1950.

Clewett, Richard C. “Mass Marketing of Consumers’ Goods.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Cochran, Thomas C. 200 Years of American Business. New York: Delta Books, 1977.

Cohen, Yehoshua S. Diffusion of an Innovation in an Urban System: The Spread of Planned Regional Shopping Centers in the United States, 1949-1968. Chicago: The University of Chicago, Department of Geography, Research Paper No. 140, 1972.

Daley, Robert. An American Saga: Juan Trippe and His Pan American Empire. New York: Random House, 1980.

Clarke, Sally. “New Deal Regulation and the Revolution in American Farm Productivity: A Case Study of the Diffusion of the Tractor in the Corn Belt, 1920-1940.” The Journal of Economic History 51, no. 1 (1991): 105-115.

Cohen, Avi. “Technological Change as Historical Process: The Case of the U.S. Pulp and Paper Industry, 1915-1940.” The Journal of Economic History 44 (1984): 775-79.

Davies, R. E. G. A History of the World’s Airlines. London: Oxford University Press, 1964.

Dearing, Charles L., and Wilfred Owen. National Transportation Policy. Washington: The Brookings Institution, 1949.

Degen, Robert A. The American Monetary System: A Concise Survey of Its Evolution Since 1896. Lexington, MA: Lexington Books, 1987.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” The Journal of Economic History 51 (September 1991): 675-700.

Devine, Warren D., Jr. “From Shafts to Wires: Historical Perspectives on Electrification.” The Journal of Economic History 43 (1983): 347-372.

Eckert, Ross D., and George W. Hilton. “The Jitneys.” The Journal of Law and Economics 15 (October 1972): 293-326.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Methuen, 1985.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eis, Carl. “The 1919-1930 Merger Movement in American Industry.” The Journal of Law and Economics 12 (1969): 267-96.

Emmet, Boris, and John E. Jeuck. Catalogues and Counters: A History of Sears Roebuck and Company. Chicago: University of Chicago Press, 1950.

Fearon, Peter. War, Prosperity, & Depression: The U.S. Economy, 1917-1945. Lawrence, KS: University of Kansas Press, 1987.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” The American Economic Review 93 (2003): 1399-1413.

Fischer, Claude. “The Revolution in Rural Telephony, 1900-1920.” Journal of Social History 21 (1987): 221-38.

Fischer, Claude. “Technology’s Retreat: The Decline of Rural Telephony in the United States, 1920-1940.” Social Science History 11 (Fall 1987): 295-327.

Fisher, Irving. The Stock Market Crash—and After. New York: Macmillan, 1930.

French, Michael J. “Structural Change and Competition in the United States Tire Industry, 1920-1937.” Business History Review 60 (1986): 28-54.

French, Michael J. The U.S. Tire Industry. Boston: Twayne Publishers, 1991.

Fricke, Ernest B. “The New Deal and the Modernization of Small Business: The McCreary Tire and Rubber Company, 1930-1940.” Business History Review 56 (1982): 559-76.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Galbraith, John Kenneth. The Great Crash. Boston: Houghton Mifflin, 1954.

Garnet, Robert W. The Telephone Enterprise: The Evolution of the Bell System’s Horizontal Structure, 1876-1900. Baltimore: The Johns Hopkins University Press, 1985.

Gideonse, Max. “Foreign Trade, Investments, and Commercial Policy.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Giedion, Sigfried. Mechanization Takes Command. New York: Oxford University Press, 1948.

Gordon, Robert Aaron. Economic Instability and Growth: The American Record. New York: Harper and Row, 1974.

Gray, Roy Burton. Development of the Agricultural Tractor in the United States, 2 vols. Washington, D. C.: USGPO, 1954.

Gunderson, Gerald. An Economic History of America. New York: McGraw-Hill, 1976.

Hadwiger, Don F., and Clay Cochran. “Rural Telephones in the United States.” Agricultural History 58 (July 1984): 221-38.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 19 (1987): 145-169.

Hamilton, James D. “The Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6 (1988): 67-89.

Hayek, Friedrich A. Prices and Production. New York: Augustus M. Kelley, reprint of the 1931 edition.

Hayford, Marc and Carl A. Pasurka, Jr. “The Political Economy of the Fordney-McCumber and Smoot-Hawley Tariff Acts.” Explorations in Economic History 29 (1992): 30-50.

Hendrickson, Robert. The Grand Emporiums: The Illustrated History of America’s Great Department Stores. Briarcliff Manor, NY: Stein and Day, 1979.

Herbst, Anthony F., and Joseph S. K. Wu. “Some Evidence of Subsidization of the U.S. Trucking Industry, 1900-1920.” The Journal of Economic History 33 (June 1973): 417-33.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Hilton, George W., and John Due. The Electric Interurban Railways in America. Stanford: Stanford University Press, 1960.

Hoffman, Elizabeth and Gary D. Libecap. “Institutional Choice and the Development of U.S. Agricultural Policies in the 1920s.” The Journal of Economic History 51 (1991): 397-412.

Holt, Charles F. “Who Benefited from the Prosperity of the Twenties?” Explorations in Economic History 14 (1977): 277-289.

Hower, Ralph W. History of Macy’s of New York, 1858-1919. Cambridge, MA: Harvard University Press, 1946.

Hubbard, R. Glenn, Ed. Financial Markets and Financial Crises. Chicago: University of Chicago Press, 1991.

Hunter, Louis C. “Industry in the Twentieth Century.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Jerome, Harry. Mechanization in Industry. New York: National Bureau of Economic Research, 1934.

Johnson, H. Thomas. “Postwar Optimism and the Rural Financial Crisis.” Explorations in Economic History 11, no. 2 (1973-1974): 173-192.

Jones, Fred R. and William H. Aldred. Farm Power and Tractors, 5th ed. New York: McGraw-Hill, 1979.

Keller, Robert. “Factor Income Distribution in the United States During the 20’s: A Reexamination of Fact and Theory.” The Journal of Economic History 33 (1973): 252-95.

Kelly, Charles J., Jr. The Sky’s the Limit: The History of the Airlines. New York: Coward-McCann, 1963.

Kindleberger, Charles. The World in Depression, 1929-1939. Berkeley: The University of California Press, 1973.

Klebaner, Benjamin J. Commercial Banking in the United States: A History. New York: W. W. Norton and Co., 1974.

Kuznets, Simon. Shares of Upper Income Groups in Income and Savings. New York: NBER, 1953.

Lebhar, Godfrey M. Chain Stores in America, 1859-1962. New York: Chain Store Publishing Corp., 1963.

Lewis, Cleona. America’s Stake in International Investments. Washington: The Brookings Institution, 1938.

Livesay, Harold C. and Patrick G. Porter. “Vertical Integration in American Manufacturing, 1899-1948.” The Journal of Economic History 29 (1969): 494-500.

Lipartito, Kenneth. The Bell System and Regional Business: The Telephone in the South, 1877-1920. Baltimore: The Johns Hopkins University Press, 1989.

Liu, Tung, Gary J. Santoni, and Courteney C. Stone. “In Search of Stock Market Bubbles: A Comment on Rappoport and White.” The Journal of Economic History 55 (1995): 647-654.

Lorant, John. “Technological Change in American Manufacturing During the 1920s.” The Journal of Economic History 33 (1967): 243-47.

McDonald, Forrest. Insull. Chicago: University of Chicago Press, 1962.

Marburg, Theodore. “Domestic Trade and Marketing.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Markham, Jesse. “Survey of the Evidence and Findings on Mergers.” In Business Concentration and Price Policy, National Bureau of Economic Research. Princeton: Princeton University Press, 1955.

Markin, Rom J. The Supermarket: An Analysis of Growth, Development, and Change. Rev. ed. Pullman, WA: Washington State University Press, 1968.

McCraw, Thomas K. TVA and the Power Fight, 1933-1937. Philadelphia: J. B. Lippincott, 1971.

McCraw, Thomas K. and Forest Reinhardt. “Losing to Win: U.S. Steel’s Pricing, Investment Decisions, and Market Share, 1901-1938.” The Journal of Economic History 49 (1989): 592-620.

McMillin, W. Douglas and Randall E. Parker. “An Empirical Analysis of Oil Price Shocks in the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

McNair, Malcolm P., and Eleanor G. May. The Evolution of Retail Institutions in the United States. Cambridge, MA: The Marketing Science Institute, 1976.

Mercer, Lloyd J. “Comment on Papers by Scheiber, Keller, and Raup.” The Journal of Economic History 33 (1973): 291-95.

Mintz, Ilse. Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920-1930. New York: National Bureau of Economic Research, 1951.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Morris, Lloyd. Not So Long Ago. New York: Random House, 1949.

Mosco, Vincent. Broadcasting in the United States: Innovative Challenge and Organizational Control. Norwood, NJ: Ablex Publishing Corp., 1979.

Moulton, Harold G. et al. The American Transportation Problem. Washington: The Brookings Institution, 1933.

Mueller, John. “Lessons of the Tax-Cuts of Yesteryear.” The Wall Street Journal, March 5, 1981.

Musoke, Moses S. “Mechanizing Cotton Production in the American South: The Tractor, 1915-1960.” Explorations in Economic History 18 (1981): 347-75.

Nelson, Daniel. “Mass Production and the U.S. Tire Industry.” The Journal of Economic History 48 (1987): 329-40.

Nelson, Ralph L. Merger Movements in American Industry, 1895-1956. Princeton: Princeton University Press, 1959.

Niemi, Albert W., Jr. U.S. Economic History, 2nd ed. Chicago: Rand McNally Publishing Co., 1980.

Norton, Hugh S. Modern Transportation Economics. Columbus, OH: Charles E. Merrill Books, Inc., 1963.

Nystrom, Paul H. Economics of Retailing, vol. 1, 3rd ed. New York: The Ronald Press Co., 1930.

Oshima, Harry T. “The Growth of U.S. Factor Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century.” The Journal of Economic History 44 (1984): 161-70.

Parker, Randall and Paul Flacco. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30 (1992): 154-171.

Parker, William N. “Agriculture.” In American Economic Growth: An Economist’s History of the United States, edited by Lance E. Davis, Richard A. Easterlin, William N. Parker, et al. New York: Harper and Row, 1972.

Passer, Harold C. The Electrical Manufacturers, 1875-1900. Cambridge: Harvard University Press, 1953.

Peak, Hugh S., and Ellen F. Peak. Supermarket Merchandising and Management. Englewood Cliffs, NJ: Prentice-Hall, 1977.

Pilgrim, John. “The Upper Turning Point of 1920: A Reappraisal.” Explorations in Economic History 11 (1974): 271-98.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: The M.I.T. Press, 1968.

Rae, John B. The American Automobile Industry. Boston: Twayne Publishers, 1984.

Rappoport, Peter and Eugene N. White. “Was the Crash of 1929 Expected?” American Economic Review 84 (1994): 271-281.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” The Journal of Economic History 53 (1993): 549-574.

Resseguie, Harry E. “Alexander Turney Stewart and the Development of the Department Store, 1823-1876.” Business History Review 39 (1965): 301-22.

Rezneck, Samuel. “Mass Production and the Use of Energy.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Rockwell, Llewellyn H., Jr., ed. The Gold Standard: An Austrian Perspective. Lexington, MA: Lexington Books, 1985.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” The Journal of Political Economy 94 (1986): 1-37.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46 (1986): 341-352.

Romer, Christina. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (1988): 91-115.

Romer, Christina and Jeffrey A. Miron. “A New Monthly Index of Industrial Production, 1884-1940.” Journal of Economic History 50 (1990): 321-337.

Romer, Christina. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105 (1990): 597-625.

Romer, Christina. “Remeasuring Business Cycles.” The Journal of Economic History 54 (1994): 573-609.

Roose, Kenneth D. “The Production Ceiling and the Turning Point of 1920.” American Economic Review 48 (1958): 348-56.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: The Greenwood Press, 1980.

Rosen, Philip T. “Government, Business, and Technology in the 1920s: The Emergence of American Broadcasting.” In American Business History: Case Studies. Edited by Henry C. Dethloff and C. Joseph Pusateri. Arlington Heights, IL: Harlan Davidson, 1987.

Rothbard, Murray N. America’s Great Depression. Kansas City: Sheed and Ward, 1963.

Sampson, Roy J., and Martin T. Ferris. Domestic Transportation: Practice, Theory, and Policy, 4th ed. Boston: Houghton Mifflin Co., 1979.

Samuelson, Paul and Everett E. Hagen. After the War—1918-1920. Washington: National Resources Planning Board, 1943.

Santoni, Gary and Gerald P. Dwyer, Jr. “The Great Bull Markets, 1924-1929 and 1982-1987: Speculative Bubbles or Economic Fundamentals?” Federal Reserve Bank of St. Louis Review 69 (1987): 16-29.

Santoni, Gary, and Gerald P. Dwyer, Jr. “Bubbles vs. Fundamentals: New Evidence from the Great Bull Markets.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Scherer, Frederick M. and David Ross. Industrial Market Structure and Economic Performance, 3d ed. Boston: Houghton Mifflin, 1990.

Schlebecker, John T. Whereby We Thrive: A History of American Farming, 1607-1972. Ames, IA: The Iowa State University Press, 1975.

Shepherd, James. “The Development of New Wheat Varieties in the Pacific Northwest.” Agricultural History 54 (1980): 52-63.

Sirkin, Gerald. “The Stock Market of 1929 Revisited: A Note.” Business History Review 49 (Fall 1975): 233-41.

Smiley, Gene. The American Economy in the Twentieth Century. Cincinnati: South-Western Publishing Co., 1994.

Smiley, Gene. “New Estimates of Income Shares During the 1920s.” In Calvin Coolidge and the Coolidge Era: Essays on the History of the 1920s, edited by John Earl Haynes, 215-232. Washington, D.C.: Library of Congress, 1998.

Smiley, Gene. “A Note on New Estimates of the Distribution of Income in the 1920s.” The Journal of Economic History 60, no. 4 (2000): 1120-1128.

Smiley, Gene. Rethinking the Great Depression: A New View of Its Causes and Consequences. Chicago: Ivan R. Dee, 2002.

Smiley, Gene, and Richard H. Keehn. “Margin Purchases, Brokers’ Loans and the Bull Market of the Twenties.” Business and Economic History, 2d series, 17 (1988): 129-42.

Smiley, Gene and Richard H. Keehn. “Federal Personal Income Tax Policy in the 1920s.” The Journal of Economic History 55, no. 2 (1995): 285-303.

Sobel, Robert. The Entrepreneurs: Explorations Within the American Business Tradition. New York: Weybright and Talley, 1974.

Soule, George. Prosperity Decade: From War to Depression: 1917-1929. New York: Holt, Rinehart, and Winston, 1947.

Stein, Herbert. The Fiscal Revolution in America, revised ed. Washington, D.C.: AEI Press, 1990.

Stigler, George J. “Monopoly and Oligopoly by Merger.” American Economic Review 40 (May 1950): 23-34.

Sumner, Scott. “The Role of the International Gold Standard in Commodity Price Deflation: Evidence from the 1929 Stock Market Crash.” Explorations in Economic History 29 (1992): 290-317.

Swanson, Joseph and Samuel Williamson. “Estimates of National Product and Income for the United States Economy, 1919-1941.” Explorations in Economic History 10, no. 1 (1972): 53-73.

Temin, Peter. “The Beginning of the Depression in Germany.” Economic History Review 24 (May 1971): 240-48.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W. W. Norton, 1976.

Temin, Peter. The Fall of the Bell System. New York: Cambridge University Press, 1987.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: The M.I.T. Press, 1989.

Thomas, Gordon, and Max Morgan-Witts. The Day the Bubble Burst. Garden City, NY: Doubleday, 1979.

Ulman, Lloyd. “The Development of Trades and Labor Unions.” In American Economic History, edited by Seymour E. Harris, chapter 14. New York: McGraw-Hill Book Co., 1961.

Ulman, Lloyd. The Rise of the National Trade Union. Cambridge, MA: Harvard University Press, 1955.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970, 2 volumes. Washington, D.C.: USGPO, 1976.

Walsh, Margaret. Making Connections: The Long Distance Bus Industry in the U.S.A. Burlington, VT: Ashgate, 2000.

Wanniski, Jude. The Way the World Works. New York: Simon and Schuster, 1978.

Weiss, Leonard W. Case Studies in American Industry, 3d ed. New York: John Wiley & Sons, 1980.

Whaples, Robert. “Hours of Work in U.S. History.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL http://www.eh.net/encyclopedia/contents/whaples.work.hours.us.php

Whatley, Warren. “Southern Agrarian Labor Contracts as Impediments to Cotton Mechanization.” The Journal of Economic History 47 (1987): 45-70.

Wheelock, David C. and Subal C. Kumbhakar. “The Slack Banker Dances: Deposit Insurance and Risk-Taking in the Banking Collapse of the 1920s.” Explorations in Economic History 31 (1994): 357-375.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” The Journal of Economic Perspectives 4 (Spring 1990): 67-83.

White, Eugene N., Ed. Crises and Panics: The Lessons of History. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “When the Ticker Ran Late: The Stock Market Boom and Crash of 1929.” In Crises and Panics: The Lessons of History, edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “Stock Market Bubbles? A Reply.” The Journal of Economic History 55 (1995): 655-665.

White, William J. “Economic History of Tractors in the United States.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL http://www.eh.net/encyclopedia/contents/white.tractors.history.us.php

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922-1933: A Reinterpretation.” Journal of Political Economy 73 (1965): 325-43.

Wicker, Elmus. “A Reconsideration of Federal Reserve Policy During the 1920-1921 Depression.” The Journal of Economic History 26 (1966): 223-38.

Wicker, Elmus. Federal Reserve Monetary Policy, 1917-1933. New York: Random House, 1966.

Wigmore, Barrie A. The Crash and Its Aftermath. Westport, CT: Greenwood Press, 1985.

Williams, Raburn McFetridge. The Politics of Boom and Bust in Twentieth-Century America. Minneapolis/St. Paul: West Publishing Co., 1994.

Williamson, Harold F., et al. The American Petroleum Industry: The Age of Energy, 1899-1959. Evanston, IL: Northwestern University Press, 1963.

Wilson, Thomas. Fluctuations in Income and Employment, 3d ed. New York: Pitman Publishing, 1948.

Wilson, Jack W., Richard E. Sylla, and Charles P. Jones. “Financial Market Panics and Volatility in the Long Run, 1830-1988.” In Crises and Panics: The Lessons of History, edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Winkler, John Kennedy. Five and Ten: The Fabulous Life of F. W. Woolworth. New York: R. M. McBride and Co., 1940.

Wood, Charles. “Science and Politics in the War on Cattle Diseases: The Kansas Experience, 1900-1940.” Agricultural History 54 (1980): 82-92.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy Since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “The Origins of American Industrial Success, 1879-1940.” The American Economic Review 80 (1990): 651-668.

Citation: Smiley, Gene. “US Economy in the 1920s”. EH.Net Encyclopedia, edited by Robert Whaples. June 29, 2004. URL http://eh.net/encyclopedia/the-u-s-economy-in-the-1920s/

African Americans in the Twentieth Century

Thomas N. Maloney, University of Utah

The nineteenth century was a time of radical transformation in the political and legal status of African Americans. Blacks were freed from slavery and began to enjoy greater rights as citizens (though full recognition of their rights remained a long way off). Despite these dramatic developments, many economic and demographic characteristics of African Americans at the end of the nineteenth century were not that different from what they had been in the mid-1800s. Tables 1 and 2 present characteristics of black and white Americans in 1900, as recorded in the Census for that year. (The 1900 Census did not record information on years of schooling or on income, so these important variables are left out of these tables, though they will be examined below.) According to the Census, ninety percent of African Americans still lived in the Southern US in 1900 — roughly the same percentage as lived in the South in 1870. Three-quarters of black households were located in rural places. Only about one-fifth of African American household heads owned their own homes (less than half the percentage among whites). About half of black men and about thirty-five percent of black women who reported an occupation to the Census said that they worked as a farmer or a farm laborer, as opposed to about one-third of white men and about eight percent of white women. Outside of farm work, African American men and women were greatly concentrated in unskilled labor and service jobs. Most black children had not attended school in the year before the Census, and white children were much more likely to have attended. So the members of a typical African American family at the start of the twentieth century lived and worked on a farm in the South and did not own their home. Children in these families were unlikely to be in school even at very young ages.

By 1990 (the most recent Census for which such statistics are available at the time of this writing), the economic conditions of African Americans had changed dramatically (see Tables 1 and 2). They had become much less concentrated in the South, in rural places, and in farming jobs and had entered better blue-collar jobs and the white-collar sector. They were nearly twice as likely to own their own homes at the end of the century as in 1900, and their rates of school attendance at all ages had risen sharply. Even after this century of change, though, African Americans were still relatively disadvantaged in terms of education, labor market success, and home ownership.

Table 1: Characteristics of Households in 1900 and 1990

1900 1990
Black White Black White
A. Region of Residence
South 90.1% 23.5% 53.0% 32.9%
Northeast 3.6% 31.8% 18.9% 20.9%
Midwest 5.8% 38.5% 18.9% 25.3%
West 0.5% 6.2% 9.2% 21.0%
B. Share Rural
75.8% 56.1% 11.9% 25.7%
C. Share of Homes Owner-Occupied
22.1% 49.2% 43.4% 67.3%

Based on household heads in Integrated Public Use Microdata Series Census samples for 1900 and 1990.

Table 2: Characteristics of Individuals in 1900 and 1990

1900 1990
Male Female Male Female
Black White Black White Black White Black White
A. Occupational Distribution
Professional/Technical 1.3% 3.8% 1.6% 10.7% 9.9% 17.2% 16.6% 21.9%
Proprietor/Manager/Official 0.8 6.9 0.2 2.6 6.5 14.7 5.4 10.0
Clerical 0.2 4.0 0.2 5.6 10.7 7.2 29.7 31.9
Sales 0.3 4.2 0.2 4.1 2.9 6.7 4.1 7.3
Craft 4.2 15.9 0 3.1 17.4 20.7 2.3 2.1
Operative 7.3 13.4 1.8 24.5 20.7 14.9 12.4 8.0
Laborer 25.5 14.0 6.5 1.5 12.2 7.2 2.0 1.5
Private Service 2.2 0.4 33.0 33.2 0.1 0 2.0 0.8
Other Service 4.8 2.4 20.6 6.6 18.5 9.0 25.3 15.8
Farmer 30.8 23.9 6.7 6.1 0.2 1.4 0.1 0.4
Farm Laborer 22.7 11.0 29.4 2.0 1.0 1.0 0.4 0.5
B. Percent Attending School by Age
Ages 6 to 13 37.8% 72.2% 41.9% 71.9% 94.5% 95.3% 94.2% 95.5%
Ages 14 to 17 26.7 47.9 36.2 51.5 91.1 93.4 92.6 93.5
Ages 18 to 21 6.8 10.4 5.9 8.6 47.7 54.3 52.9 57.1

Based on Integrated Public Use Microdata Series Census samples for 1900 and 1990. Occupational distributions based on individuals aged 18 to 64 with recorded occupation. School attendance in 1900 refers to attendance at any time in the previous year. School attendance in 1990 refers to attendance since February 1 of that year.

These changes in the lives of African Americans did not occur continuously and steadily throughout the twentieth century. Rather, we can divide the century into three distinct eras: (1) the years from 1900 to 1915, prior to large-scale movement out of the South; (2) the years from 1916 to 1964, marked by migration and urbanization, but prior to the most important government efforts to reduce racial inequality; and (3) the years since 1965, characterized by government antidiscrimination efforts but also by economic shifts which have had a great impact on racial inequality and African American economic status.

1900-1915: Continuation of Nineteenth-Century Patterns

As was the case in the 1800s, African American economic life in the early 1900s centered on Southern cotton agriculture. African Americans grew cotton under a variety of contracts and institutional arrangements. Some were laborers hired for a short period for specific tasks. Many were tenant farmers, renting a piece of land and some of their tools and supplies, and paying the rent at the end of the growing season with a portion of their harvest. Records from Southern farms indicate that white and black farm laborers were paid similar wages, and that white and black tenant farmers worked under similar contracts for similar rental rates. Whites in general, however, were much more likely to own land. A similar pattern is found in Southern manufacturing in these years. Among the fairly small number of individuals employed in manufacturing in the South, white and black workers were often paid comparable wages if they worked at the same job for the same company. However, blacks were much less likely to hold better-paying skilled jobs, and they were more likely to work for lower-paying companies.

While the concentration of African Americans in cotton agriculture persisted, Southern black life changed in other ways in the early 1900s. Limitations on the legal rights of African Americans grew more severe in the South in this era. The 1896 Supreme Court decision in the case of Plessy v. Ferguson provided a legal basis for greater explicit segregation in American society. This decision allowed for the provision of separate facilities and services to blacks and whites as long as the facilities and services were equal. Through the early 1900s, many new laws, known as Jim Crow laws, were passed in Southern states creating legally segregated schools, transportation systems, and lodging. The requirement of equality was not generally enforced, however. Perhaps the most important and best-known example of separate and unequal facilities in the South was the system of public education. Through the first decades of the twentieth century, resources were funneled to white schools, raising teacher salaries and per-pupil funding while reducing class size. Black schools experienced no real improvements of this type. The result was a sharp decline in the relative quality of schooling available to African-American children.

1916-1964: Migration and Urbanization

The mid-1910s witnessed the first large-scale movement of African Americans out of the South. The share of African Americans living in the South fell by about four percentage points between 1910 and 1920 (with nearly all of this movement after 1915) and another six points between 1920 and 1930 (see Table 3). What caused this tremendous relocation of African Americans? The worsening political and social conditions in the South, noted above, certainly played a role. But the specific timing of the migration appears to be connected to economic factors. Northern employers in many industries faced strong demand for their products and so had a great need for labor. Their traditional source of cheap labor, European immigrants, dried up in the mid-1910s as World War I interrupted international migration. After the end of the war, new laws limiting immigration to the US would keep the flow of European labor at a low level. Northern employers thus needed a new source of cheap labor, and they turned to Southern blacks. In some cases, employers would send recruiters to the South to find workers and to pay their way North. In addition to this pull from the North, economic events in the South served to push out many African Americans. Destruction of the cotton crop by the boll weevil, an insect that feeds on cotton plants, and poor weather in some places during these years made new opportunities in the North even more attractive.

Table 3: Share of African Americans Residing in the South

Year Share Living in South
1890 90%
1900 90%
1910 89%
1920 85%
1930 79%
1940 77%
1950 68%
1960 60%
1970 53%
1980 53%
1990 53%

Sources: 1890 to 1960: Historical Statistics of the United States, volume 1, pp. 22-23; 1970: Statistical Abstract of the United States, 1973, p. 27; 1980: Statistical Abstract of the United States, 1985, p. 31; 1990: Statistical Abstract of the United States, 1996, p. 31.

Pay was certainly better, and opportunities were wider, in the North. Nonetheless, the region was not entirely welcoming to these migrants. As the black population in the North grew in the 1910s and 1920s, residential segregation grew more pronounced, as did school segregation. In some cases, racial tensions boiled over into deadly violence. The late 1910s were scarred by severe race riots in a number of cities, including East St. Louis (1917) and Chicago (1919).

Access to Jobs in the North

Within the context of this broader turmoil, black migrants did gain entry to new jobs in Northern manufacturing. As in Southern manufacturing, pay differences between blacks and whites working the same job at the same plant were generally small. However, black workers had access to a limited set of jobs and remained heavily concentrated in unskilled laborer positions. Black workers gained admittance to only a limited set of firms, as well. For instance, in the auto industry, the Ford Motor Company hired a tremendous number of black workers, while other auto makers in Detroit typically excluded these workers. Because their alternatives were limited, black workers could be worked very intensely and could also be used in particularly unpleasant and dangerous settings, such as the killing and cutting areas of meat packing plants, foundry departments in auto plants, and blast furnaces in steel plants.

Unions

Through the 1910s and 1920s, relations between black workers and Northern labor unions were often antagonistic. Many unions in the North had explicit rules barring membership by black workers. When faced with a strike (or the threat of a strike), employers often brought in black workers, knowing that these workers were unlikely to become members of the union or to be sympathetic to its goals. Indeed, there is evidence that black workers were used as strikebreakers in a great number of labor disputes in the North in the 1910s and 1920s. Beginning in the mid-1930s, African Americans gained greater inclusion in the union movement. By that point, it was clear that black workers were entrenched in manufacturing, and any broad-based organizing effort would have to include them.

Conditions around 1940

As is apparent in Table 3, black migration slowed in the 1930s, due to the onset of the Great Depression and the resulting high level of unemployment in the North. Beginning in about 1940, however, preparations for war again created tight labor markets in Northern cities, and, as in the late 1910s, African Americans journeyed north to take advantage of new opportunities. In some ways, moving to the North in the 1940s may have appeared less risky than it had during the World War I era. By 1940, there were large black communities in a number of Northern cities. Newspapers produced by these communities circulated in the South, providing information about housing, jobs, and social conditions. Many Southern African Americans now had friends and relatives in the North to help with the transition.

In other ways, though, labor market conditions were less auspicious for black workers in 1940 than they had been during the World War I years. Unemployment remained high in 1940, with about fourteen percent of white workers either unemployed or participating in government work relief programs. Employers hired these unemployed whites before turning to African American labor. Even as labor markets tightened, black workers gained little access to war-related employment. The President issued orders in 1941 that companies doing war-related work had to hire in a non-discriminatory way, and the Fair Employment Practice Committee was created to monitor the hiring practices of these companies. Initially, few resources were devoted to this effort, but in 1943 the government began to enforce fair employment policies more aggressively. These efforts appear to have aided black employment, at least for the duration of the war.

Gains during the 1940s and 1950s

In 1940, the Census Bureau began to collect data on individual incomes, so we can track changes in black income levels and in black/white income ratios in more detail from this date forward. Table 4 provides annual earnings figures for black and white men and women from 1939 (recorded in the 1940 Census) to 1989 (recorded in the 1990 Census). The big gains of the 1940s, both in the level of earnings and in the black/white income ratio, are very obvious. Often, we focus on the role of education in producing higher earnings, but the gap between average schooling levels for blacks and whites did not change much in the 1940s (particularly for men), so schooling levels could not have contributed much to the relative income gains for blacks in that decade (see Table 5). Rather, much of the improvement in the black/white pay ratio in this decade simply reflects ongoing migration: blacks were leaving the South, a low-wage region, and entering the North, a high-wage region. Some of the improvement reflects access to new jobs and industries for black workers, due to the tight labor markets and antidiscrimination efforts of the war years.

Table 4: Mean Annual Earnings of Wage and Salary Workers Aged 20 and Over

                  Male                            Female
Year    Black       White       Ratio    Black       White       Ratio
1939    $537.45     $1234.41    .44      $331.32     $771.69     .43
1949    1761.06     2984.96     .59      992.35      1781.96     .56
1959    2848.67     5157.65     .55      1412.16     2371.80     .59
1969    5341.64     8442.37     .63      3205.12     3786.45     .85
1979    11404.46    16703.67    .68      7810.66     7893.76     .99
1989    19417.03    28894.69    .67      15319.29    16135.65    .95

Source: Integrated Public Use Microdata Series Census samples for 1940, 1950, 1960, 1970, 1980, and 1990. Includes only those with non-zero earnings who were not in school. All figures are in current (nominal) dollars.
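For readers checking Table 4, each entry in the Ratio columns is simply mean black earnings divided by mean white earnings for the same year and sex. As an illustrative computation (not part of the original table notes), the 1939 figure for men is

\[ \text{ratio} = \frac{537.45}{1234.41} \approx 0.44 \]

and the corresponding figure for women is 331.32 / 771.69 ≈ 0.43.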

Table 5: Years of School Attended for Individuals 20 and Over

                 Male                           Female
Year    Black    White    Difference    Black    White    Difference
1940    5.9      9.1      3.2           6.9      10.5     3.6
1950    6.8      9.8      3.0           7.8      10.8     3.0
1960    7.9      10.5     2.6           8.8      11.0     2.2
1970    9.4      11.4     2.0           10.3     11.7     1.4
1980    11.2     12.5     1.3           11.8     12.4     0.6

Source: Integrated Public Use Microdata Series Census samples for 1940, 1950, 1960, 1970, and 1980. Based on highest grade attended by wage and salary workers aged 20 and over who had non-zero earnings in the previous year and who were not in school at the time of the census. Comparable figures are not available in the 1990 Census.

Black workers’ relative incomes were also boosted by some general changes in labor demand and supply, and in labor market policy, in the 1940s. During the war, demand for labor was particularly strong in the blue-collar manufacturing sector. Workers were needed to build tanks, jeeps, and planes, and these jobs did not require a great deal of formal education or skill. In addition, the minimum wage was raised in 1945, and wartime regulations allowed greater pay increases for low-paid workers than for highly paid workers. After the war, the supply of college-educated workers increased dramatically. The GI Bill, passed in 1944, provided large subsidies to help pay the expenses of World War II veterans who wanted to attend college, helping a generation of men further their education and earn a college degree. So strong labor demand, government policies that raised wages at the bottom, and a rising supply of well-educated workers meant that less-educated, less-skilled workers received particularly large wage increases in the 1940s. Because African Americans were concentrated among the less-educated, low-earning workers, these general economic forces were especially helpful to them and served to raise their pay relative to that of whites.

The effect of these broader forces on racial inequality helps to explain the contrast between the 1940s and 1950s evident in Table 4. The black-white pay ratio may have actually fallen a bit for men in the 1950s, and it rose much more slowly in the 1950s than in the 1940s for women. Some of this slowdown in progress reflects weaker labor markets in general, which reduced black access to new jobs. In addition, the general narrowing of the wage distribution that occurred in the 1940s stopped in the 1950s. Less-educated, lower-paid workers were no longer getting particularly large pay increases. As a result, blacks did not gain ground on white workers. It is striking that pay gains for black workers slowed in the 1950s despite a more rapid decline in the black-white schooling gap during these years (Table 5).

Unemployment

On the whole, migration and entry to new industries played a large role in promoting black relative pay increases through the years from World War I to the late 1950s. However, these changes also had some negative effects on black labor market outcomes. As black workers left Southern agriculture, their relative rate of unemployment rose. For the nation as a whole, black and white unemployment rates were about equal as late as 1930. This equality was to a great extent the result of lower rates of unemployment for everyone in the rural South relative to the urban North. Farm owners and sharecroppers tended not to lose their work entirely during weak markets, whereas manufacturing employees might be laid off or fired during downturns. Still, while unemployment was greater for everyone in the urban North, it was disproportionately greater for black workers. Their unemployment rates in Northern cities were much higher than white unemployment rates in the same cities. One result of black migration, then, was a dramatic increase in the ratio of black unemployment to white unemployment. The black/white unemployment ratio rose from about 1 in 1930 (indicating equal unemployment rates for blacks and whites) to about 2 by 1960. The ratio remained at this high level through the end of the twentieth century.

1965-1999: Civil Rights and New Challenges

In the 1960s, black workers again began to experience more rapid increases in relative pay levels (see Table 4). These years also marked a new era in government involvement in the labor market, particularly with regard to racial inequality and discrimination. One of the most far-reaching changes in government policy regarding race actually occurred a bit earlier, in the Supreme Court’s 1954 decision in Brown v. Board of Education of Topeka, Kansas. In that case, the Supreme Court ruled that racial segregation of schools was unconstitutional. However, substantial desegregation of Southern schools (and some Northern schools) would not take place until the late 1960s and early 1970s.

School desegregation, therefore, was probably not a primary force in generating the relative pay gains of the 1960s and 1970s. Other anti-discrimination policies enacted in the mid-1960s did play a large role, however. The Civil Rights Act of 1964 outlawed discrimination in a broad set of social arenas. Title VII of this law banned discrimination in hiring, firing, pay, promotion, and working conditions and created the Equal Employment Opportunity Commission to investigate complaints of workplace discrimination. A second policy, Executive Order 11246 (issued by President Johnson in 1965), set up more stringent anti-discrimination rules for businesses working on government contracts. There has been much debate regarding the importance of these policies in promoting better jobs and wages for African Americans. There is now increasing agreement that these policies had positive effects on labor market outcomes for black workers at least through the mid-1970s. Several pieces of evidence point to this conclusion. First, the timing is right. Many indicators of employment and wage gains show marked improvement beginning in 1965, soon after the implementation of these policies. Second, job and wage gains for black workers in the 1960s were, for the first time, concentrated in the South. Enforcement of anti-discrimination policy was focused on the South in this era. It is also worth noting that rates of black migration out of the South dropped substantially after 1965, perhaps reflecting a sense of greater opportunity there due to these policies. Finally, these gains for black workers occurred simultaneously in many industries and many places, under a variety of labor market conditions. Whatever generated these improvements had to come into effect broadly at one point in time. Federal antidiscrimination policy fits this description.

Return to Stagnation in Relative Income

The years from 1979 to 1989 saw the return of stagnation in black relative incomes. Part of this stagnation may reflect the reversal of the shifts in wage distribution that occurred during the 1940s. In the late 1970s and especially in the 1980s, the US wage distribution grew more unequal. Individuals with less education, particularly those with no college education, saw their pay decline relative to the better-educated. Workers in blue-collar manufacturing jobs were particularly hard hit. The concentration of black workers, especially black men, in these categories meant that their pay suffered relative to that of whites. Another possible factor in the stagnation of black relative pay in the 1980s was weakened enforcement of antidiscrimination policies at this time.

While black relative incomes stagnated on average, black residents of urban centers suffered particular hardships in the 1970s and 1980s. The loss of blue-collar manufacturing jobs was most severe in these areas. For a variety of reasons, including the introduction of new technologies that required larger plants, many firms relocated their production facilities outside of central cities, to suburbs and even more peripheral areas. Central cities increasingly became information-processing and financial centers. Jobs in these industries generally required a college degree or even more education. Despite decades of rising educational levels, African Americans were still barely half as likely as whites to have completed four years of college or more: in 1990, 11.3% of blacks over the age of 25 had four years of college or more, versus 22% of whites. As a result of these developments, many blacks in urban centers found themselves surrounded by jobs for which they were poorly qualified, and at some distance from the types of jobs for which they were qualified, the jobs their parents had moved to the city for in the first place. Their ability to relocate near these blue-collar jobs seems to have been limited both by ongoing discrimination in the housing market and by a lack of resources. Those African Americans with the resources to exit the central city often did so, leaving behind communities marked by extremely high rates of poverty and unemployment.

Over the fifty years from 1939 to 1989, through these episodes of gain and stagnation, the ratio of black men’s average annual earnings to white men’s average annual earnings rose about 23 points, from .44 to .67. The timing of improvement in the black female/white female income ratio was similar. However, black women gained much more ground overall: the black-white income ratio for women rose about 50 points over these fifty years and stood at .95 in 1989 (down from .99 in 1979). The education gap between black women and white women declined more than the education gap between black and white men, which contributed to the faster pace of improvement in black women’s relative earnings. Furthermore, black female workers were more likely to be employed full-time than were white female workers, which raised their annual income. The reverse was true among men: white male workers were somewhat more likely to be employed full-time than were black male workers.

Comparable data on annual incomes from the 2000 Census are not available at the time of this writing. Evidence from other labor market surveys suggests that the tight labor markets of the late 1990s may have brought renewed relative pay gains for black workers. Black workers also experienced sharp declines in unemployment during these years, though black unemployment remained about twice as great as white unemployment.

Beyond the Labor Market: Persistent Gaps in Wealth and Health

When we look beyond these basic measures of labor market success, we find disturbingly large and persistent gaps between African Americans and white Americans. Wealth differences between blacks and whites continue to be very large. In the mid-1990s, black households held only about one-quarter the amount of wealth that white households held, on average. If we leave out equity in one’s home and personal possessions and focus on more strictly financial, income-producing assets, black households held only about ten to fifteen percent as much wealth as white households. Big differences in wealth holding remain even if we compare black and white households with similar incomes.

Much of this wealth gap reflects the ongoing effects of the historical patterns described above. When freed from slavery, African Americans held no wealth, and their lower incomes prevented them from accumulating wealth at the rate whites did. African Americans found it particularly difficult to buy homes, traditionally a household’s most important asset, due to discrimination in real estate markets. Government housing policies in the 1930s and 1940s may have also reduced their rate of home-buying. While the federal government made low-interest loans and loan insurance available through the Home Owners’ Loan Corporation and the Federal Housing Administration, these programs generally could not be used to acquire homes in black or mixed neighborhoods, usually the only neighborhoods in which blacks could buy, because these were considered to be areas of high risk for loan default. Because wealth is passed on from parents to children, the wealth differences of the mid-twentieth century continue to have an important impact today.

Differences in life expectancy have also proven to be remarkably stubborn. Certainly, black and white mortality patterns are more similar today than they once were. In 1929, the first year for which national figures are available, white life expectancy at birth was 58.6 years and black life expectancy was 46.7 years (for men and women combined). By 2000, white life expectancy had risen to 77.4 years and black life expectancy was 71.8 years. Thus, the black-white gap had fallen from about twelve years to less than six. However, almost all of this reduction in the gap was completed by the early 1960s. In 1961, the black-white gap was 6.5 years. The past forty years have seen very little change in the gap, though life expectancy has risen for both groups.

Some of this remaining difference in life expectancy can be traced to income differences between blacks and whites. Black children face a particularly high risk of accidental death in the home, often due to dangerous conditions in low-quality housing. African Americans of all ages face a high risk of homicide, which is related in part to residence in poor neighborhoods. Among older people, African Americans face high risk of death due to heart disease, and the incidence of heart disease is correlated with income. Still, black-white mortality differences, especially those related to disease, are complex and are not yet fully understood.

Infant mortality is a particularly large and particularly troubling form of health difference between blacks and whites. In 2000, the white infant mortality rate (5.7 per 1000 live births) was less than half the rate for African Americans (14.0 per 1000). Again, some of this mortality difference is related to the effect of lower incomes on the nutrition, medical care, and living conditions available to African American mothers and newborns. However, the full set of relevant factors is the subject of ongoing research.

Summary and Conclusions

It is undeniable that the economic fortunes of African Americans changed dramatically during the twentieth century. African Americans moved from tremendous concentration in Southern agriculture to much greater diversity in residence and occupation. Over the period in which income can be measured, there are large increases in black incomes in both relative and absolute terms. Schooling differentials between blacks and whites fell sharply, as well. When one looks beyond the starting and ending points, though, more complex realities present themselves. The progress that we observe grew out of periods of tremendous social upheaval, particularly during the world wars. It was shaped in part by conflict between black workers and white workers, and it coincided with growing residential segregation. It was not continuous and gradual. Rather, it was punctuated by periods of rapid gain and periods of stagnation. The rapid gains are attributable to actions on the part of black workers (especially migration), broad economic forces (especially tight labor markets and narrowing of the general wage distribution), and specific antidiscrimination policy initiatives (such as the Fair Employment Practice Committee in the 1940s and Title VII and contract compliance policy in the 1960s). Finally, we should note that this century of progress ended with considerable gaps remaining between African Americans and white Americans in terms of income, unemployment, wealth, and life expectancy.

Sources

Butler, Richard J., James J. Heckman, and Brook Payner. “The Impact of the Economy and the State on the Economic Status of Blacks: A Study of South Carolina.” In Markets in History: Economic Studies of the Past, edited by David W. Galenson, 52-96. New York: Cambridge University Press, 1989.

Collins, William J. “Race, Roosevelt, and Wartime Production: Fair Employment in World War II Labor Markets.” American Economic Review 91, no. 1 (2001): 272-86.

Conley, Dalton. Being Black, Living in the Red: Race, Wealth, and Social Policy in America. Berkeley, CA: University of California Press, 1999.

Donohue, John H. III, and James Heckman. “Continuous vs. Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks.” Journal of Economic Literature 29, no. 4 (1991): 1603-43.

Goldin, Claudia, and Robert A. Margo. “The Great Compression: The Wage Structure in the United States at Mid-Century.” Quarterly Journal of Economics 107, no. 1 (1992): 1-34.

Halcoussis, Dennis and Gary Anderson. “The Political Economy of Legal Segregation: Jim Crow and Racial Employment Patterns.” Economics and Politics 8, no. 1 (1996): 1-15.

Herbst, Alma. The Negro in the Slaughtering and Meat Packing Industry in Chicago. New York: Houghton Mifflin, 1932.

Higgs, Robert. Competition and Coercion: Blacks in the American Economy 1865-1914. New York: Cambridge University Press, 1977.

Jaynes, Gerald David and Robin M. Williams, Jr., editors. A Common Destiny: Blacks and American Society. Washington, DC: National Academy Press, 1989.

Johnson, Daniel M. and Rex R. Campbell. Black Migration in America: A Social Demographic History. Durham, NC: Duke University Press, 1981.

Juhn, Chinhui, Kevin M. Murphy, and Brooks Pierce. “Accounting for the Slowdown in Black-White Wage Convergence.” In Workers and Their Wages: Changing Patterns in the United States, edited by Marvin H. Kosters, 107-43. Washington, DC: AEI Press, 1991.

Kaminski, Robert, and Andrea Adams. Educational Attainment in the US: March 1991 and 1990 (Current Population Reports P20-462). Washington, DC: US Census Bureau, May 1992.

Kasarda, John D. “Urban Industrial Transition and the Underclass.” In The Ghetto Underclass: Social Science Perspectives, edited by William J. Wilson, 43-64. Newbury Park, CA: Russell Sage, 1993.

Kennedy, Louise V. The Negro Peasant Turns Cityward: The Effects of Recent Migrations to Northern Centers. New York: Columbia University Press, 1930.

Leonard, Jonathan S. “The Impact of Affirmative Action Regulation and Equal Employment Law on Black Employment.” Journal of Economic Perspectives 4, no. 4 (1990): 47-64.

Maloney, Thomas N. “Wage Compression and Wage Inequality between Black and White Males in the United States, 1940-1960.” Journal of Economic History 54, no. 2 (1994): 358-81.

Maloney, Thomas N. “Racial Segregation, Working Conditions, and Workers’ Health: Evidence from the A.M. Byers Company, 1916-1930.” Explorations in Economic History 35, no. 3 (1998): 272-295.

Maloney, Thomas N., and Warren C. Whatley. “Making the Effort: The Contours of Racial Discrimination in Detroit’s Labor Markets, 1920-1940.” Journal of Economic History 55, no. 3 (1995): 465-93.

Margo, Robert A. Race and Schooling in the South, 1880-1950. Chicago: University of Chicago Press, 1990.

Margo, Robert A. “Explaining Black-White Wage Convergence, 1940-1950.” Industrial and Labor Relations Review 48, no. 3 (1995): 470-81.

Marshall, Ray F. The Negro and Organized Labor. New York: John Wiley and Sons, 1965.

Minino, Arialdi M., and Betty L. Smith. “Deaths: Preliminary Data for 2000.” National Vital Statistics Reports 49, no. 12 (2001).

Oliver, Melvin L., and Thomas M. Shapiro. “Race and Wealth.” Review of Black Political Economy 17, no. 4 (1989): 5-25.

Ruggles, Steven, and Matthew Sobek. Integrated Public Use Microdata Series: Version 2.0. Minneapolis: Social Historical Research Laboratory, University of Minnesota, 1997.

Sugrue, Thomas J. The Origins of the Urban Crisis: Race and Inequality in Postwar Detroit. Princeton, NJ: Princeton University Press, 1996.

Sundstrom, William A. “Last Hired, First Fired? Unemployment and Urban Black Workers During the Great Depression.” Journal of Economic History 52, no. 2 (1992): 416-29.

United States Bureau of the Census. Statistical Abstract of the United States 1973 (94th Edition). Washington, DC: Department of Commerce, Bureau of the Census, 1973.

United States Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970. Washington, DC: Department of Commerce, Bureau of the Census, 1975.

United States Bureau of the Census. Statistical Abstract of the United States 1985 (105th Edition). Washington, DC: Department of Commerce, Bureau of the Census, 1985.

United States Bureau of the Census. Statistical Abstract of the United States 1996 (116th Edition). Washington, DC: Department of Commerce, Bureau of the Census, 1996.

Vedder, Richard K. and Lowell Gallaway. “Racial Differences in Unemployment in the United States, 1890-1980.” Journal of Economic History 52, no. 3 (1992): 696-702.

Whatley, Warren C. “African-American Strikebreaking from the Civil War to the New Deal.” Social Science History 17, no. 4 (1993): 525-58.

Wilson, William J. The Truly Disadvantaged: The Inner City, the Underclass, and Public Policy. Chicago, IL: University of Chicago Press, 1987.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Citation: Maloney, Thomas. “African Americans in the Twentieth Century”. EH.Net Encyclopedia, edited by Robert Whaples. January 14, 2002. URL http://eh.net/encyclopedia/african-americans-in-the-twentieth-century/

The Federal Reserve and the Financial Crisis

Author(s): Bernanke, Ben S.
Reviewer(s): Mitchener, Kris James

Published by EH.Net (August 2013)

Ben S. Bernanke, The Federal Reserve and the Financial Crisis. Princeton, NJ: Princeton University Press, 2013. vii + 134 pp. $20 (hardcover), ISBN: 978-0-691-15873-0.

Reviewed for EH.Net by Kris James Mitchener, Department of Economics, University of Warwick.

This book is the product of a series of university lectures given by Federal Reserve Chairman Ben Bernanke in March 2012 at George Washington University. It is short, crisp, and clear, with only five footnotes and no references. The four chapters are called “lectures,” and the prose is written in a conversational style reflecting the original form of delivery. Indeed, the unedited lectures are also available for free on the web. They could easily be absorbed while driving one’s car, though this reviewer does not necessarily endorse that form of consumption, nor would one expect that to be the Fed’s official position. Since the original audience consisted largely of undergraduates, concepts are kept simple throughout, largely at an introductory level of economics. Student questions from the lectures are included at the end of each chapter.

Not often does one get to read a book that articulates the beliefs and actions of an incumbent policymaker, let alone one who was charged with conducting monetary policy during a severe financial crisis – unless, of course, one is reading sworn testimony given to a public agency, but that is an altogether different exercise than what is undertaken here. The value of this book is that it allows one to observe how the Chairman of the Federal Reserve reacted to the events of 2007-2009 and how that response is justified. What makes it especially delightful to read or listen to is that Chairman Bernanke puts his decision making in a long-run context, describing the particular “lessons” from the Fed’s history that he drew on during the crisis period. It thus provides a shining example of how policymakers use history in formulating economic policy.

It probably comes as no surprise to those familiar with his academic research on central bank transparency that Chairman Bernanke is the first standing Fed Chairman to write a book while in office. That said, one must keep expectations in check while reading the book. Although he is not afraid to discuss mistakes that the Fed made in its past, nor to acknowledge that it could possibly have done more prior to the recent crisis, the chairman presents the official rationale for the most controversial decisions. Some economists, such as Alan Blinder (2013), have drawn attention to differences between the rationale policymakers provided to the President, Congress, and the American public and their actual motivations during the crisis.

Lecture 1 provides a review of the history of the founding of the Federal Reserve System and discusses the tools that central banks have at their disposal for maintaining financial stability and limiting the size and duration of aggregate fluctuations. It covers themes that will be very familiar to most readers of this review: the origins of central banking, the advantages and disadvantages of the gold standard, nineteenth-century banking panics, and “Bagehot’s Rule.” This lecture and the subsequent one serve the purpose of providing the historical and institutional context for the lessons that Chairman Bernanke applied during the financial crisis that peaked in 2008-2009. They also lay out a case for the Fed’s learning process: the Fed made a variety of mistakes in the 1930s and 1970s, for example, which it subsequently drew on in formulating later policies. In this lecture, Chairman Bernanke acknowledges that the Fed failed with respect to monetary policy and financial stability during the Great Depression, as witnessed by the severity of the banking crisis and the depth and duration of the economic decline. He suggests that FDR’s abandonment of the gold standard and the enactment of federal deposit insurance were actions taken to offset policy errors. Detailing the Fed’s policy mistakes of the 1930s allows him to later contrast the Fed’s policy response to the recent financial crisis.

Lecture 2 continues the examination of the Fed’s history, focusing on the post-World War II period. The first part of it is devoted to the Great Inflation and the associated policy mistakes (overconfidence in the ability to fine-tune the economy and loose fiscal and monetary policy) as well as the Great Moderation, with substantial credit given to Paul Volcker and Alan Greenspan’s stewardship of monetary policy. An omission from this period of policymaking is a discussion of the S&L crisis. Although savings and loans were outside the Fed’s regulatory domain, the episode might have been one that the Fed could have learned from, at least with respect to the idea that it could have refocused the Fed’s attention on ensuring financial stability. (An interesting theme throughout the book is that the Fed’s role of providing financial stability fell into neglect until the recent crisis hit.)

The second half of lecture 2 presents his views on what factors led to the intensity of the financial crisis of 2008-2009, which he casts as a “classic financial panic” that took place in a broader institutional context (multiple financial markets rather than just banks). He provides a laundry list of weaknesses in the financial system that likely transformed a modest recession into a more severe crisis. For example, he points to household leveraging (driven partly by a decline in the standards for mortgage underwriting and exotic mortgage products), inadequate risk management by banks, short-term funding exposure of banks, and the use of CDS and other exotic derivatives as private-sector catalysts. With respect to public sector vulnerabilities, he suggests that supervision of insurance companies, investment banks, and GSEs was inadequate and that the economy lacked a systemic regulator that could oversee risks across different types of financial institutions. It is unsurprising that he places little stock in the view that the Fed set rates too low early in the 2000s, citing cross-country evidence of other housing booms, the timing of the bubble, and the size of the house price increases relative to changes in monetary policy as evidence against this argument. However, he does acknowledge that the Fed did not fully anticipate how large an effect a decline in house prices could have on the overall economy.

Lecture 3 provides a description of the Fed’s response to the recent financial crisis and a sense of the real-time decision making that was required during the peak period of the crisis, when problems in different sectors of the financial system were springing up on an almost daily basis. This is where it is entertaining for the reader to play armchair central banker and think whether one’s own policy choices would have deviated that far from the path that the Fed actually took. Important for his description of the Fed’s response to the crisis is the fact that, even though the total losses due to subprime mortgages were not very big, they were spread out across different financial markets, making the size of the losses and the bearers of those losses uncertain. Because many financial firms were using wholesale funding, the uncertainty over losses created the potential for short-term funding to dry up as lenders re-assessed the health of borrowers. Firms in need of short-term funding faced fire sales of assets, and runs rippled through the financial system. In response, the Fed provided liquidity to illiquid banks via the discount window and to other financial firms like broker-dealers through special liquidity and credit facilities. Interestingly, although he does not state that the Fed could have done more to save Lehman Brothers (arguing it was an investment bank and the Fed and Treasury tried to find either a buyer or more capital), he does seem to acknowledge that its failure was systemically important (p. 75), and he goes on to describe the effects its failure had on money market mutual funds such as the Reserve Primary Fund. Finally, he discusses the coordinated international response of central banks to the financial crisis, contrasting it with the lack of coordination of the 1930s.

The last lecture provides a discussion of what the Fed has been doing in the wake of the crisis, how it is working to implement Dodd-Frank, and what that law means for future Fed conduct. This lecture includes a cogent discussion of the Fed’s quantitative easing policies, which are aimed at influencing long-term interest rates and stimulating the housing sector, and it discusses its continued effort to satisfy its dual mandate by focusing on the persistently weak labor market conditions. Since this lecture provides a detailed description of the expansion of the Fed’s balance sheet and the piling up of reserves by Fed member banks, it would have been nice to see this discussion connected more directly to the continued low levels of bank lending.

This book will be particularly useful for those teaching a class in either macroeconomics or economic history of the twentieth century at the undergraduate level, as these lectures provide a succinct and accessible account of U.S. macro policymaking over the last hundred years. Companion questions, written by Stephen Buckles of Vanderbilt and referencing the video presentation, are also available on the Fed?s website.

Reference:
Alan Blinder, After the Music Stopped: The Financial Crisis, the Response, and the Work Ahead, New York: Penguin, 2013.

Kris James Mitchener is professor of economics at the University of Warwick and Research Associate at NBER and CAGE. Recent publications include “Globalization, Trade and Wages: What Does History Tell Us about China?” (with Se Yan), International Economic Review (February 2014), and “Shadowy Banks and Financial Contagion during the Great Depression: A Retrospective on Friedman and Schwartz” (with Gary Richardson), American Economic Review (May 2013).
Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (August 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s): Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s): North America
Time Period(s): 20th Century: Pre WWII
20th Century: WWII and post-WWII

The Ascent of Money: A Financial History of the World

Author(s): Ferguson, Niall
Reviewer(s): Horesh, Niv

Published by EH.NET (July 2009)

Niall Ferguson, The Ascent of Money: A Financial History of the World. New York: Penguin, 2008. v + 441 pp. $30 (hardcover), ISBN: 1594201929.

Reviewed for EH.NET by Niv Horesh, Faculty of Arts and Social Sciences, University of New South Wales.

Harvard’s Niall Ferguson is perhaps best known for his magisterial history of the House of Rothschild and, more recently, his warnings about the risks of unbridled government borrowing and nebulous stimulus packages ostensibly designed to avert what is often termed the worst global economic crisis since the Great Depression. In The Ascent of Money, he harnesses his narrative skills to offer a lay readership a captivating account of global monetary history from time immemorial to the twenty-first century. The book’s release coincided with an eponymous television series that has already been broadcast in much of the English-speaking world. Both the series and the book are immensely entertaining and readily accessible, but the latter arguably makes for a more convenient platform from which academics can approach Ferguson’s many insights.

The Introduction (pp. 1-17) prepares readers for what Ferguson perceptively identifies as the core stories attending the evolution of money over the last four millennia. These are many and varied, as one would expect. He is concerned with, inter alia, the “recurrent hostility” to financial intermediaries and religious minorities associated with them in early-modern European history; the triumph of the Dutch Republic over the Hapsburg Empire, the latter’s possession of silver mines in South America notwithstanding; the spread of paper money, fiat currency and invisible means of payment in the twentieth century; right through to the possible eclipse of American global primacy in the next two decades.

Titled “Dreams of Avarice,” Chapter One sets course by recounting how the Incas were flabbergasted by the “insatiable lust for gold and silver” that seemed to grip the Spanish conquistadors (p. 21). It then lays out with humor and verve the well-known story of Potosí, now a fairly sleepy town in the Bolivian Andes, which once provisioned Spain with untold amounts of silver. In the same breath, the chapter goes on to offer an overview of coinage since the seventh century BC. Notably, Ferguson sees the flow of silver from the Andes to Europe as a “resource curse” which removed the incentives for more productive economic activity, while strengthening “rent-seeking autocrats” in seventeenth-century Spain. Contrary to criticism of Eurocentrism often leveled at him, Ferguson carefully emphasizes here the contribution other peoples have made to modern finance: “… economic life in the Eastern world – in the Abbasid caliphate or in Song China – was far more advanced” at least until Fibonacci introduced Indian algebraic precepts in early thirteenth-century Italy (p. 32); these were later reified by the Medicis into double-entry bookkeeping in the Florentine republic (p. 43).

By the early seventeenth century, European financial innovation had shifted from the Italian city-states to the Low Countries, though it was still driven by the exigencies of costly and recurrent warfare and ambitions of monopolizing trade with the East (pp. 48-49). This spurt of European financial innovation had actually long “preceded the industrial revolution,” a complex but much better-studied spate of events (p. 52). The financial and industrial revolutions then converged with the spread of joint-stock companies and prototypes of central banks in the latter half of the nineteenth century.

Subsequent chapters flesh out Ferguson’s analysis. Titled “Of Human Bondage,” Chapter Two (pp. 65-118) explores, for example, the distinctness of the European economic trajectory, beginning with how the majority of Florentine citizenry partook of financing the Republic’s debt in the fourteenth century. In the seventeenth century, the United Provinces of the Netherlands combined the borrowing techniques of an Italian city-state “… with the scale of a nation-state.” The Dutch were able to finance their wars by pitching Amsterdam “as the market for a whole range of new securities” (p. 75). The eighteenth and nineteenth centuries are characterized by Anglo-French friction, but here Ferguson sees a yawning gap between Protestant Britain – where public debt defaults became rarer, public debt itself increased many-fold, the powers of the landed aristocracy diminished, and a professional civil service became more influential – and Catholic France, where public offices were often sold to raise money, tax collection was farmed out, and government bond issues lost credibility. Notably, the incremental spread of, and popular faith in, British government bonds allowed Whitehall to borrow overseas as well, much to the detriment of Napoleon’s armies. Ferguson similarly believes (p. 97) that the reluctance of European investors to buy into Confederate bonds during the American Civil War doomed the South’s endeavors. This historic lesson is invoked toward the end of the chapter when discussing, in passing, the Bush Administration’s large budget deficits.

Chapter Three (“Blowing Bubbles,” pp. 119-178) zooms in on arguably the most significant economic entity of our time: the joint-stock company. Ferguson aptly dubs it “perhaps the single greatest Dutch invention of all.” Here, he elides earlier – though fairly short-lived – occurrences of comparable entities both in Europe and in pre-modern Asia. But there can be little doubt that the establishment of the Dutch VOC (1602) marked a veritable turning point, not least because it underlay the growth of the world’s first bourse. Indeed, the establishment of royally-chartered companies principally aimed at trade with Asia seems to have underpinned the rise of stock exchanges and public debt in Europe’s Northeast as a whole. The rise of public debt and publicly-listed equity was beset by frequent speculative bubbles, from which emerged a more sophisticated British credit economy.

Chapter Four (“The Return of Risk,” pp. 176-229) takes up a swag of issues, from the impact of Hurricane Katrina on the U.S. psyche, through how the Great Fire of London (1666) created demand for insurance policies, to Japan’s welfare system and Milton Friedman’s mentorship of Latin American finance ministers. By comparison, Chapter Five (“Safe as Houses,” pp. 230-82) is more singularly framed around what Ferguson perceptively calls “the passion for property” in the home-owning democracies of Anglo-Saxondom. He aptly reminds us (pp. 233, 241) that as recently as the 1930s, little more than two-fifths of U.S. households owned their home, compared with over 65% today, and traces this staggering social transformation back to the New Deal and the Civil Rights Movement. The expansion of home ownership was facilitated in the late 1930s by then-novel institutions like Fannie Mae, which are at the heart of the recent sub-prime meltdown. In that sense, but not in that sense only, Ferguson does a wonderful job of explaining, well beyond clichés, the linkages between the Great Depression and today’s global financial crisis. He then points the finger (p. 269) at rating agencies such as Moody’s and Standard & Poor’s for obfuscating the precariousness of collateralized sub-prime mortgages, which financial “alchemists” turned into tradable debt obligations.

In essence, the last chapter (“From Empire to Chimerica,” pp. 283-340) is dedicated to China’s resurgence in the twenty-first century, and subtly considers whether this might ultimately result in a catastrophic Sino-American military confrontation. From a China specialist’s perspective, it is perhaps a pity that a scholar of Ferguson’s wisdom and insight stops short of opining whether we are witnessing at present the rise of a new form of capitalism with Chinese characteristics (e.g., capitalism without democracy) or simply gradual Chinese adaptation to Western market norms. Academic pedants might also quip that Ferguson draws heavily on Kenneth Pomeranz’s path-breaking book, The Great Divergence, when writing that living standards in Europe and China were on par as late as the eighteenth century (p. 285). This might have called for a more detailed discussion, given that earlier parts of the book allude to the Italian city-states (fourteenth century) as the progenitors of Europe’s financial revolution. Similarly, Ferguson’s assertion that the “… ease with which the [Chinese] Empire could finance its deficits by printing money discouraged the emergence of European-style capital markets” (p. 286) might sound a little facile to specialists, not least because note issuance was all but abandoned by late-Imperial dynasties.

However, these are minor criticisms that do not detract in any way from the wonderful feat of storytelling which Ferguson has again pulled off. This book makes for a bold and original attempt to provide a comprehensive history of what, some say, makes the world go around. It is likely to turn into a best-selling classic, and a must-read item in countless undergraduate courses.

Niv Horesh is Lecturer in Chinese Studies at the School of Languages and Linguistics, University of New South Wales, Sydney, Australia. His first book, Shanghai’s Bund and Beyond (Yale University Press, 2009), is the first comparative study in English of foreign banks and banknote issuance in pre-war China. His second book (forthcoming in 2010) is a comprehensive socio-economic account of Shanghai’s rise to prominence (1842-2010).

Subject(s): Financial Markets, Financial Institutions, and Monetary History
Geographic Area(s): North America
Time Period(s): General or Comparative