
Japanese Industrialization and Economic Growth

Carl Mosk, University of Victoria

Japan achieved sustained growth in per capita income between the 1880s and 1970 through industrialization. Moving along an income growth trajectory through expansion of manufacturing is hardly unique. Indeed Western Europe, Canada, Australia and the United States all attained high levels of income per capita by shifting from agrarian-based production to manufacturing and technologically sophisticated service sector activity.

Still, there are four distinctive features of Japan’s development through industrialization that merit discussion:

The proto-industrial base

Japan’s agricultural productivity was high enough to sustain substantial craft (proto-industrial) production in both rural and urban areas of the country prior to industrialization.

Investment-led growth

Domestic investment in industry and infrastructure was the driving force behind growth in Japanese output. Both the private and public sectors invested in infrastructure, with national and local governments serving as coordinating agents for infrastructure build-up.

  • Investment in manufacturing capacity was largely left to the private sector.
  • Rising domestic savings made increasing capital accumulation possible.
  • Japanese growth was investment-led, not export-led.

Total factor productivity growth — achieving more output per unit of input — was rapid.

On the supply side, total factor productivity growth was extremely important. Scale economies — the reduction in per unit costs due to increased levels of output — contributed to total factor productivity growth. Scale economies existed due to geographic concentration, to growth of the national economy, and to growth in the output of individual companies. In addition, companies moved down the “learning curve,” reducing unit costs as their cumulative output rose and demand for their product soared.

The social capacity for importing and adapting foreign technology improved and this contributed to total factor productivity growth:

  • At the household level, investing in education of children improved social capability.
  • At the firm level, creating internalized labor markets that bound firms to workers and workers to firms, thereby giving workers a strong incentive to flexibly adapt to new technology, improved social capability.
  • At the government level, industrial policy that reduced the cost to private firms of securing foreign technology enhanced social capacity.

Shifting out of low-productivity agriculture into high productivity manufacturing, mining, and construction contributed to total factor productivity growth.

Dualism

Sharply segmented labor and capital markets emerged in Japan after the 1910s. The capital-intensive sector, enjoying high ratios of capital to labor, paid relatively high wages, and the labor-intensive sector paid relatively low wages.

Dualism contributed to income inequality and therefore to domestic social unrest. After 1945 a series of public policy reforms addressed inequality and erased much of the social bitterness around dualism that ravaged Japan prior to World War II.

The remainder of this article will expand on a number of the themes mentioned above. The appendix reviews quantitative evidence concerning these points. The conclusion of the article lists references that provide a wealth of detailed evidence supporting the points above, which this article can only begin to explore.

The Legacy of Autarky and the Proto-Industrial Economy: Achievements of Tokugawa Japan (1600-1868)

Why Japan?

Given the relatively poor record of countries outside the European cultural area — few achieving the kind of “catch-up” growth Japan managed between 1880 and 1970 — the question naturally arises: why Japan? After all, when the United States forcibly “opened Japan” in the 1850s and Japan was compelled to cede extra-territorial rights to a number of Western nations, as China had earlier in the 1840s, many Westerners and Japanese alike thought Japan’s prospects seemed dim indeed.

Tokugawa achievements: urbanization, road networks, rice cultivation, craft production

In answering this question, Mosk (2001), Minami (1994) and Ohkawa and Rosovsky (1973) emphasize the achievements of Tokugawa Japan (1600-1868) during a long period of “closed country” autarky between the mid-seventeenth century and the 1850s: a high level of urbanization; well developed road networks; the channeling of river water flow with embankments and the extensive elaboration of irrigation ditches that supported and encouraged the refinement of rice cultivation based upon improving seed varieties, fertilizers and planting methods especially in the Southwest with its relatively long growing season; the development of proto-industrial (craft) production by merchant houses in the major cities like Osaka and Edo (now called Tokyo) and its diffusion to rural areas after 1700; and the promotion of education and population control among both the military elite (the samurai) and the well-to-do peasantry in the eighteenth and early nineteenth centuries.

Tokugawa political economy: daimyo and shogun

These developments were inseparable from the political economy of Japan. The system of confederation government introduced at the end of the sixteenth century placed certain powers in the hands of feudal warlords, daimyo, and certain powers in the hands of the shogun, the most powerful of the warlords. Each daimyo — and the shogun — was assigned a geographic region, a domain, and was given taxation authority over the peasants residing in the villages of the domain. Intercourse with foreign powers was monopolized by the shogun, thereby preventing daimyo from cementing alliances with other countries in an effort to overthrow the central government. The samurai military retainers of the daimyo were forced to abandon rice farming and reside in the castle town headquarters of their daimyo overlord. In exchange, samurai received rice stipends from the rice taxes collected from the villages of their domain. By removing samurai from the countryside — by demilitarizing rural areas — conflicts over local water rights were largely made a thing of the past. As a result irrigation ditches were extended throughout the valleys, and riverbanks were shored up with stone embankments, facilitating transport and preventing flooding.

The sustained growth of proto-industrialization in urban Japan, and its widespread diffusion to villages after 1700, were also inseparable from productivity growth in paddy rice production and in the growing of industrial crops like tea, fruit, mulberry (which sustained the raising of silk cocoons) and cotton. Indeed, Smith (1988) has given pride of place to these “domestic sources” of Japan’s future industrial success.

Readiness to emulate the West

As a result of these domestic advances, Japan was well positioned to take up the Western challenge. It harnessed its infrastructure, its high level of literacy, and its proto-industrial distribution networks to the task of emulating Western organizational forms and Western techniques in energy production, first and foremost enlisting inorganic energy sources like coal and other fossil fuels to generate steam power. Having intensively developed the organic economy dependent upon natural energy flows like wind, water and fire, the Japanese were well prepared to master inorganic production after the Black Ships of the Americans forced Japan to jettison its long-standing autarky.

From Balanced to Dualistic Growth, 1887-1938: Infrastructure and Manufacturing Expand

Fukoku Kyohei

After the Tokugawa government collapsed in 1868, a new Meiji government committed to the twin policies of fukoku kyohei (wealthy country/strong military) took up the challenge of renegotiating its treaties with the Western powers. It created infrastructure that facilitated industrialization. It built a modern navy and army that could keep the Western powers at bay and establish a protective buffer zone in North East Asia that eventually formed the basis for a burgeoning Japanese empire in Asia and the Pacific.

Central government reforms in education, finance and transportation

Jettisoning the confederation style government of the Tokugawa era, the new leaders of the new Meiji government fashioned a unitary state with powerful ministries consolidating authority in the capital, Tokyo. The freshly minted Ministry of Education promoted compulsory primary schooling for the masses and elite university education aimed at deepening engineering and scientific knowledge. The Ministry of Finance created the Bank of Japan in 1882, laying the foundations for a private banking system backed up by a lender of last resort. The government began building a steam railroad trunk line girding the four major islands, encouraging private companies to participate in the project. In particular, the national government committed itself to constructing a Tokaido line connecting the Tokyo/Yokohama region to the Osaka/Kobe conurbation along the Pacific coastline of the main island of Honshu, and to creating deepwater harbors at Yokohama and Kobe that could accommodate deep-hulled steamships.

Not surprisingly, the merchants in Osaka, the merchant capital of Tokugawa Japan, already well versed in proto-industrial production, turned to harnessing steam and coal, investing heavily in integrated spinning and weaving steam-driven textile mills during the 1880s.

Diffusion of best-practice agriculture

At the same time, the abolition of the three hundred or so feudal fiefs that were the backbone of confederation-style Tokugawa rule, and their consolidation into politically weak prefectures under a strong national government that virtually monopolized taxation authority, gave a strong push to the diffusion of best practice agricultural technique. The nationwide diffusion of seed varieties developed in the Southwest fiefs of Tokugawa Japan spearheaded a substantial improvement in agricultural productivity, especially in the Northeast. The result was simultaneous expansion on two fronts: agriculture grew using traditional Japanese technology, while manufacturing grew using imported Western technology.

Balanced growth

Growth at the close of the nineteenth century was balanced in the sense that sectors using traditional technology and sectors using modern technology grew at roughly equal rates, and labor — especially young girls recruited out of farm households to work in the steam-using textile mills — flowed back and forth between rural and urban Japan at wages that were roughly equal in industrial and agricultural pursuits.

Geographic economies of scale in the Tokaido belt

Concentration of industrial production first in Osaka and subsequently throughout the Tokaido belt fostered powerful geographic scale economies (the ability to reduce per unit costs as output levels increase), reducing the costs of securing energy, raw materials and access to global markets for enterprises located in the great harbor metropolises stretching from the massive Osaka/Kobe complex northward to the teeming Tokyo/Yokohama conurbation. Between 1904 and 1911, electrification — mainly due to the proliferation of intercity electrical railroads — created economies of scale in the nascent industrial belt facing outward onto the Pacific. The consolidation of two huge hydroelectric power grids during the 1920s — one servicing Tokyo/Yokohama, the other Osaka and Kobe — further solidified the comparative advantage of the Tokaido industrial belt in factory production. Finally, the widening and paving during the 1920s of roads that could handle buses and trucks was also pioneered by the great metropolises of the Tokaido, which further bolstered their relative advantage in per capita infrastructure.

Organizational economies of scale — zaibatsu

In addition to geographic scale economies, organizational scale economies became increasingly important in the late nineteenth century. The formation of the zaibatsu (“financial cliques”), which gradually evolved into diversified industrial combines tied together through central holding companies, is a case in point. By the 1910s these had evolved into highly diversified combines, binding together enterprises in banking and insurance, trading companies, mining concerns, textiles, iron and steel plants, and machinery manufactures. By channeling profits from older industries into new lines of activity like electrical machinery manufacturing, the zaibatsu form of organization generated scale economies in finance, trade and manufacturing, drastically reducing information-gathering and transactions costs. By attracting relatively scarce managerial and entrepreneurial talent, the zaibatsu format economized on human resources.

Electrification

The push into electrical machinery production during the 1920s had a revolutionary impact on manufacturing. Effective exploitation of steam power required the use of large central steam engines simultaneously driving a large number of machines throughout a factory — power looms and mules in a spinning/weaving plant, for instance. Small enterprises did not mechanize in the steam era. But with electrification the “unit drive” system of mechanization spread: each machine could be powered independently of the others. Mechanization spread rapidly to even the smallest factories.

Emergence of the dualistic economy

With the drive into heavy industries — chemicals, iron and steel, machinery — the demand for skilled labor that would flexibly respond to rapid changes in technique soared. Large firms in these industries began offering premium wages and guarantees of employment in good times and bad as a way of motivating and holding onto valuable workers. A dualistic economy emerged during the 1910s. Small firms, light industry and agriculture offered relatively low wages. Large enterprises in the heavy industries offered much more favorable remuneration, extending paternalistic benefits like company housing and company welfare programs to their “internal labor markets.” As a result a widening gulf opened up between the great metropolitan centers of the Tokaido and rural Japan. Income per head was far higher in the great industrial centers than in the hinterland.

Clashing urban/rural and landlord/tenant interests

The economic strains of emergent dualism were amplified by the slowing down of technological progress in the agricultural sector, which had exhaustively reaped the benefits due to regional diffusion from the Southwest to the Northeast of best practice Tokugawa rice cultivation. Landlords — around 45% of the cultivable rice paddy land in Japan was held in some form of tenancy at the beginning of the twentieth century — who had played a crucial role in promoting the diffusion of traditional best practice techniques now lost interest in rural affairs and turned their attention to industrial activities. Tenants also found their interests disregarded by the national authorities in Tokyo, who were increasingly focused on supplying cheap foodstuffs to the burgeoning industrial belt by promoting agricultural production within the empire that it was assembling through military victories. Japan secured Taiwan from China in 1895, and formally brought Korea under its imperial rule in 1910 upon the heels of its successful war against Russia in 1904-05. Tenant unions reacted to this callous disrespect of their needs through violence. Landlord/tenant disputes broke out in the early 1920s, and continued to plague Japan politically throughout the 1930s, calls for land reform and bureaucratic proposals for reform being rejected by a Diet (Japan’s legislature) politically dominated by landlords.

Japan’s military expansion

Japan’s thrust toward imperial expansion was inflamed by the growing instability of the geopolitical and international trade regime of the later 1920s and early 1930s. The relative decline of the United Kingdom as an economic power doomed a gold standard regime tied to the British pound. The United States was becoming a potential contender to the United Kingdom as the backer of a gold standard regime but its long history of high tariffs and isolationism deterred it from taking over leadership in promoting global trade openness. Germany and the Soviet Union were increasingly becoming industrial and military giants on the Eurasian land mass committed to ideologies hostile to the liberal democracy championed by the United Kingdom and the United States. It was against this international backdrop that Japan began aggressively staking out its claim to being the dominant military power in East Asia and the Pacific, thereby bringing it into conflict with the United States and the United Kingdom in the Asian and Pacific theaters after the world slipped into global warfare in 1939.

Reform and Reconstruction in a New International Economic Order, Japan after World War II

Postwar occupation: economic and institutional restructuring

After Japan surrendered to the United States and its allies in 1945, its economy and infrastructure were revamped under the S.C.A.P. (Supreme Commander for the Allied Powers) Occupation lasting through 1951. As Nakamura (1995) points out, a variety of Occupation-sponsored reforms transformed the institutional environment conditioning economic performance in Japan. The major zaibatsu were liquidated by the Holding Company Liquidation Commission set up under the Occupation (they were revamped as keiretsu corporate groups, tied together mainly through cross-shareholding of stock, in the aftermath of the Occupation); land reform wiped out landlordism and gave a strong push to agricultural productivity through mechanization of rice cultivation; and collective bargaining, largely illegal under the Peace Preservation Act that was used to suppress union organizing during the interwar period, was given the imprimatur of constitutional legality. Finally, education was opened up, partly through making middle school compulsory, partly through the creation of national universities in each of Japan’s forty-six prefectures.

Improvement in the social capability for economic growth

In short, from a domestic point of view, the social capability for importing and adapting foreign technology was improved with the reforms in education and the fillip to competition given by the dissolution of the zaibatsu. Resolving tension between rural and urban Japan through land reform and the establishment of a rice price support program — that guaranteed farmers incomes comparable to blue collar industrial workers — also contributed to the social capacity to absorb foreign technology by suppressing the political divisions between metropolitan and hinterland Japan that plagued the nation during the interwar years.

Japan and the postwar international order

The revamped international economic order contributed to the social capability of importing and adapting foreign technology. The instability of the 1920s and 1930s was replaced with a relatively predictable bipolar world in which the United States and the Soviet Union opposed each other in both geopolitical and ideological arenas. The United States became the architect of a multilateral framework designed to encourage trade through its sponsorship of the United Nations, the World Bank, the International Monetary Fund and the General Agreement on Tariffs and Trade (the predecessor to the World Trade Organization). Under the logic of building military alliances to contain Eurasian Communism, the United States brought Japan under its “nuclear umbrella” with a bilateral security treaty. American companies were encouraged to license technology to Japanese companies in the new international environment. Japan redirected its trade away from the areas that had been incorporated into the Japanese Empire before 1945, and towards the huge and expanding American market.

Miracle Growth: Soaring Domestic Investment and Export Growth, 1953-1970

Its infrastructure revitalized through the Occupation period reforms, its capacity to import and export enhanced by the new international economic order, and its access to American technology bolstered through its security pact with the United States, Japan experienced the dramatic “Miracle Growth” between 1953 and the early 1970s whose sources have been cogently analyzed by Denison and Chung (1976). Especially striking in the Miracle Growth period was the remarkable increase in the rate of domestic fixed capital formation, the rise in the investment proportion being matched by a rising savings rate whose secular increase — especially that of private household savings — has been well documented and analyzed by Horioka (1991). While Japan continued to close the gap in income per capita between itself and the United States after the early 1970s, most scholars believe that large Japanese manufacturing enterprises had by and large become internationally competitive by the early 1970s. In this sense it can be said that Japan had completed its nine decade long convergence to international competitiveness through industrialization by the early 1970s.

MITI

There is little doubt that the social capacity to import and adapt foreign technology was vastly improved in the aftermath of the Pacific War. Creating social consensus with land reform and agricultural subsidies reduced political divisiveness, while extending compulsory education and breaking up the zaibatsu also had a positive impact. Fashioning the Ministry of International Trade and Industry (M.I.T.I.) that took responsibility for overseeing industrial policy is also viewed as facilitating Japan’s social capability. There is no doubt that M.I.T.I. drove down the cost of securing foreign technology. By intervening between Japanese firms and foreign companies, it acted as a single buyer of technology, playing off competing American and European enterprises in order to reduce the royalties Japanese concerns had to pay on technology licenses. By keeping domestic patent periods short, M.I.T.I. encouraged rapid diffusion of technology. And in some cases — the experience of International Business Machines (I.B.M.), enjoying a virtual monopoly in global mainframe computer markets during the 1950s and early 1960s, is a classic case — M.I.T.I. made it a condition of entry into the Japanese market (through the creation of a subsidiary, Japan I.B.M., in the case of I.B.M.) that foreign companies share many of their technological secrets with potential Japanese competitors.

How important industrial policy was for Miracle Growth remains controversial, however. The view of Johnson (1982), who hails industrial policy as a pillar of the Japanese Development State (government promoting economic growth through state policies) has been criticized and revised by subsequent scholars. The book by Uriu (1996) is a case in point.

Internal labor markets, just-in-time inventory and quality control circles

Furthering the internalization of labor markets — the premium wages and long-term employment guarantees largely restricted to white collar workers were extended to blue collar workers with the legalization of unions and collective bargaining after 1945 — also raised the social capability of adapting foreign technology. Internalizing labor created a highly flexible labor force in post-1950 Japan. As a result, Japanese workers embraced many of the key ideas of Just-in-Time inventory control and Quality Control circles in assembly industries, learning how to do rapid machine setups as part and parcel of an effort to produce components “just-in-time” and without defect. Ironically, the concepts of just-in-time and quality control were originally developed in the United States, just-in-time methods being pioneered by supermarkets and quality control by efficiency experts like W. Edwards Deming. Yet it was in Japan that these concepts were relentlessly pursued to revolutionize assembly line industries during the 1950s and 1960s.

Ultimate causes of the Japanese economic “miracle”

Miracle Growth was the completion of a protracted historical process involving enhancing human capital, massive accumulation of physical capital including infrastructure and private manufacturing capacity, the importation and adaptation of foreign technology, and the creation of scale economies, which took decades and decades to realize. Dubbed a miracle, it is best seen as the reaping of a bountiful harvest whose seeds were painstakingly planted in the six decades between 1880 and 1938. In the course of the nine decades between the 1880s and 1970, Japan amassed and lost a sprawling empire, reorienting its trade and geopolitical stance through the twists and turns of history. While the ultimate sources of growth can be ferreted out through some form of statistical accounting, the specific way these sources were marshaled in practice is inseparable from the history of Japan itself and of the global environment within which it has realized its industrial destiny.

Appendix: Sources of Growth Accounting and Quantitative Aspects of Japan’s Modern Economic Development

One of the attractions of studying Japan’s post-1880 economic development is the abundance of quantitative data documenting Japan’s growth. Estimates of Japanese income and output by sector, capital stock and labor force extend back to the 1880s, a period when Japanese income per capita was low. Consequently statistical probing of Japan’s long-run growth from relative poverty to abundance is possible.

The remainder of this appendix is devoted to introducing the reader to the vast literature on quantitative analysis of Japan’s economic development from the 1880s until 1970, a nine decade period during which Japanese income per capita converged towards income per capita levels in Western Europe. As the reader will see, this discussion confirms the importance of factors discussed at the outset of this article.

Our initial touchstone is the excellent “sources of growth” accounting analysis carried out by Denison and Chung (1976) on Japan’s growth between 1953 and 1971. The standard accounting approach attributes growth in national income to growth in inputs — the factors of production, capital and labor — and to growth in output per unit of the two inputs combined (total factor productivity), along the following lines:

G(Y) = { a G(K) + [1 - a] G(L) } + G(A)

where G(Y) is the (annual) growth rate of national output, G(K) is the growth rate of capital services, G(L) is the growth rate of labor services, a is capital’s share in national income (the share of income accruing to owners of capital), and G(A) is the growth rate of total factor productivity.
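
To illustrate the mechanics with purely hypothetical numbers (illustrative values only, not Denison and Chung’s estimates): if capital’s share were a = 0.3, capital services grew at G(K) = 9% per year, and labor services grew at G(L) = 2% per year, the combined input contribution would be

{ 0.3 × 9% + 0.7 × 2% } = 4.1% per year,

so observed output growth of, say, G(Y) = 8% would leave G(A) = 8% - 4.1% = 3.9% per year attributed to total factor productivity.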

Using a variant of this type of decomposition that takes into account improvements in the quality of capital and labor, estimates of scale economies and adjustments for structural change (shifting labor out of agriculture helps explain why total factor productivity grows), Denison and Chung (1976) generate a useful set of estimates for Japan’s Miracle Growth era.

Operating with this “sources of growth” approach and proceeding under a variety of plausible assumptions, Denison and Chung (1976) estimate that of Japan’s average annual real national income growth of 8.77% over 1953-71, input growth accounted for 3.95% (45% of total growth) and growth in output per unit of input contributed 4.82% (55% of total growth). To be sure, the precise assumptions and techniques they use can be criticized, and the precise numerical results they arrive at can be argued over. Still, their general point is defensible: Japan’s growth was the result of improvements in the quality of factor inputs (health and education for workers, for instance) and of improvements in the way these inputs are utilized in production, due to technological and organizational change, reallocation of resources from agriculture to non-agriculture, and scale economies.
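
A quick check of the arithmetic behind those attributions, using Denison and Chung’s own figures: 3.95% + 4.82% = 8.77%, while 3.95/8.77 ≈ 0.45 and 4.82/8.77 ≈ 0.55, which is where the 45% and 55% shares come from.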

With this in mind consider Table 1.

Table 1: Industrialization and Economic Growth in Japan, 1880-1970:
Selected Quantitative Characteristics

Panel A: Income and Structure of National Output

Columns, left to right: years; real income per capita, absolute [a]; real income per capita relative to the U.S. level; year; agriculture’s share of net domestic product; the share of manufacturing and mining (Ma); the share of manufacturing, construction and facilitating sectors (N) [b]; and relative labor productivity, the ratio of output per worker in agriculture to output per worker in the N sector (A/N).

Years       Absolute   Rel. to U.S.   Year   Agriculture   Ma      N       A/N
1881-90        893        26.7%       1887      42.5%      13.6%   20.0%   68.3
1891-1900    1,049        28.5        1904      37.8       17.4    25.8    44.3
1900-10      1,195        25.3        1911      35.5       20.3    31.1    37.6
1911-20      1,479        27.9        1919      29.9       26.2    38.3    32.5
1921-30      1,812        29.1        1930      20.0       25.8    43.3    27.4
1930-38      2,197        37.7        1938      18.5       35.3    51.7    20.8
1951-60      2,842        26.2        1953      22.0       26.3    39.7    22.6
1961-70      6,434        47.3        1969       8.7       30.5    45.9    19.1

Panel B: Domestic and External Sources of Aggregate Supply and Demand Growth: Manufacturing and Mining (Ma), Gross Domestic Fixed Capital Formation (GDFCF), and Trade (TR)

Columns, left to right: years; percentage contribution of manufacturing and mining (Ma) to output growth; percentage contribution of gross domestic fixed capital formation (GDFCF) to effective demand growth; years; trade openness; and growth in trade [c].

Years       Ma to Output Growth   GDFCF to Demand Growth   Years       Openness   Trade Growth
1888-1900          19.3%                  17.9%            1885-89       6.9%        11.4%
1900-10            29.2                   30.5             1890-1913    16.4          8.0
1910-20            26.5                   27.9             1919-29      32.4          4.6
1920-30            42.4                    7.5             1930-38      43.3          8.1
1930-38            50.5                   45.3             1954-59      19.3         12.0
1955-60            28.1                   35.0             1960-69      18.5         10.3
1960-70            33.5                   38.5

Panel C: Infrastructure and Human Development

Columns, left to right: year; educational attainment index; infant mortality rate (IMR); overall Human Development Index (HDI) [d]; year; electricity generation; and NHK radio subscribers per 100 persons [e].

Year   Educational Attainment   IMR   Overall HDI   Year   Electricity   NHK Subscribers
1900           0.57             155      0.57       1914       0.28           n.a.
1910           0.69             161      0.61       1920       0.68           n.a.
1920           0.71             166      0.64       1930       2.46            1.2
1930           0.73             124      0.65       1938       4.51            7.8
1950           0.81              63      0.69       1950       5.54           11.0
1960           0.87              34      0.75       1960      12.28           12.6
1970           0.95              14      0.83       1970      34.46           21.9

Notes: [a] Maddison (2000) provides estimates of real income that take into account the purchasing power of national currencies.

[b] Ohkawa (1979) gives estimates for the “N” sector that is defined as manufacturing and mining (Ma) plus construction plus facilitating industry (transport, communications and utilities). It should be noted that the concept of an “N” sector is not standard in the field of economics.

[c] The estimates of trade are obtained by adding merchandise imports to merchandise exports. Trade openness is estimated by taking the ratio of total (merchandise) trade to national output, the latter defined as Gross Domestic Product (G.D.P.). The trade figures include trade with Japan’s empire (Korea, Taiwan, Manchuria, etc.); the income figures for Japan exclude income generated in the empire.

[d] The Human Development Index is a composite variable formed by adding together indices for educational attainment, for health (using life expectancy that is inversely related to the level of the infant mortality rate, the IMR), and for real per capita income. For a detailed discussion of this index see United Nations Development Programme (2000).
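
In the UNDP formulation used here, the overall index is, in essence, the simple (unweighted) average of the three component indices, each scaled to lie between zero and one:

HDI = (education index + health index + income index) / 3

The exact scaling of each component follows United Nations Development Programme (2000).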

[e] Electrical generation is measured in million kilowatts generated and supplied. For 1970, the figures on NHK subscribers are for television subscribers. The symbol n.a. = not available.

Sources: The figures in this table are taken from various pages and tables in Japan Statistical Association (1987), Maddison (2000), Minami (1994), and Ohkawa (1979).

Flowing from this table are a number of points that bear out the lessons of the Denison and Chung (1976) decomposition. One cluster of points bears upon the timing of Japan’s income per capita growth and the relationship of manufacturing expansion to income growth. Another highlights improvements in the quality of the labor input. Yet another points to the overriding importance of domestic investment in manufacturing and the lesser significance of trade demand. A fourth group suggests that infrastructure has been important to economic growth and industrial expansion in Japan, as exemplified by the figures on electricity generating capacity and the mass diffusion of communications in the form of radio and television broadcasting.

Several parts of Table 1 point to industrialization, defined as an increase in the proportion of output (and labor force) attributable to manufacturing and mining, as the driving force in explaining Japan’s income per capita growth. Notable in Panels A and B of the table is that the gap between Japanese and American income per capita closed most decisively during the 1910s, the 1930s, and the 1960s, precisely the periods when manufacturing expansion was the most vigorous.

Equally noteworthy in the spurts of the 1910s, 1930s and 1960s is the overriding importance of gross domestic fixed capital formation, that is, investment, for growth in demand. By contrast, trade seems much less important to growth in demand during these critical decades, a point emphasized by both Minami (1994) and Ohkawa and Rosovsky (1973). The notion that Japanese growth was “export led” during the nine decades between 1880 and 1970 when Japan caught up technologically with the leading Western nations is not defensible. Rather, domestic capital investment seems to be the driving force behind aggregate demand expansion. The periods of especially intense capital formation were also the periods when manufacturing production soared. Capital formation in manufacturing, or in infrastructure supporting manufacturing expansion, is the main agent pushing long-run income per capita growth.

Why? As Ohkawa and Rosovsky (1973) argue, spurts in manufacturing capital formation were associated with the import and adaptation of foreign technology, especially from the United States. These investment spurts were also associated with shifts of the labor force out of agriculture and into manufacturing, construction and facilitating sectors, where labor productivity was far higher than in farming centered around labor-intensive rice cultivation. The logic of productivity gain due to more efficient allocation of labor resources is apparent from the right hand column of Panel A in Table 1.

Finally, Panel C of Table 1 suggests that infrastructure investment that facilitated health and educational attainment (combined public and private expenditure on sanitation, schools and research laboratories), and public/private investment in physical infrastructure including dams and hydroelectric power grids, helped fuel the expansion of manufacturing by improving human capital and by reducing the costs of transportation, communications and energy supply faced by private factories. Mosk (2001) argues that investments in human-capital-enhancing (medicine, public health and education), financial (banking) and physical infrastructure (harbors, roads, power grids, railroads and communications) laid the groundwork for industrial expansion. Indeed, the “social capability for importing and adapting foreign technology” emphasized by Ohkawa and Rosovsky (1973) can be largely explained by an infrastructure-driven growth hypothesis like that given by Mosk (2001).

In sum, Denison and Chung (1976) argue that a combination of input factor improvement and growth in output per combined factor inputs account for Japan’s most rapid spurt of economic growth. Table 1 suggests that labor quality improved because health was enhanced and educational attainment increased; that investment in manufacturing was important not only because it increased capital stock itself but also because it reduced dependence on agriculture and went hand in glove with improvements in knowledge; and that the social capacity to absorb and adapt Western technology that fueled improvements in knowledge was associated with infrastructure investment.

References

Denison, Edward and William Chung. “Economic Growth and Its Sources.” In Asia’s New Giant: How the Japanese Economy Works, edited by Hugh Patrick and Henry Rosovsky, 63-151. Washington, DC: Brookings Institution, 1976.

Horioka, Charles Y. “Future Trends in Japan’s Savings Rate and the Implications Thereof for Japan’s External Imbalance.” Japan and the World Economy 3 (1991): 307-330.

Japan Statistical Association. Historical Statistics of Japan [Five Volumes]. Tokyo: Japan Statistical Association, 1987.

Johnson, Chalmers. MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925-1975. Stanford: Stanford University Press, 1982.

Maddison, Angus. Monitoring the World Economy, 1820-1992. Paris: Organization for Economic Co-operation and Development, 2000.

Minami, Ryoshin. Economic Development of Japan: A Quantitative Study. [Second edition]. Houndmills, Basingstoke, Hampshire: Macmillan Press, 1994.

Mitchell, Brian. International Historical Statistics: Africa and Asia. New York: New York University Press, 1982.

Mosk, Carl. Japanese Industrial History: Technology, Urbanization, and Economic Growth. Armonk, New York: M.E. Sharpe, 2001.

Nakamura, Takafusa. The Postwar Japanese Economy: Its Development and Structure, 1937-1994. Tokyo: University of Tokyo Press, 1995.

Ohkawa, Kazushi. “Production Structure.” In Patterns of Japanese Economic Development: A Quantitative Appraisal, edited by Kazushi Ohkawa and Miyohei Shinohara with Larry Meissner, 34-58. New Haven: Yale University Press, 1979.

Ohkawa, Kazushi and Henry Rosovsky. Japanese Economic Growth: Trend Acceleration in the Twentieth Century. Stanford, CA: Stanford University Press, 1973.

Smith, Thomas. Native Sources of Japanese Industrialization, 1750-1920. Berkeley: University of California Press, 1988.

Uriu, Robert. Troubled Industries: Confronting Economic Challenge in Japan. Ithaca: Cornell University Press, 1996.

United Nations Development Programme. Human Development Report, 2000. New York: Oxford University Press, 2000.

Citation: Mosk, Carl. “Japan, Industrialization and Economic Growth”. EH.Net Encyclopedia, edited by Robert Whaples. January 18, 2004. URL http://eh.net/encyclopedia/japanese-industrialization-and-economic-growth/

A Brief Economic History of Modern Israel

Nadav Halevi, Hebrew University

The Pre-state Background

The history of modern Israel begins in the 1880s, when the first Zionist immigrants came to Palestine, then under Ottoman rule, to join the small existing Jewish community, establishing agricultural settlements and some industry, restoring Hebrew as the spoken national language, and creating new economic and social institutions. The ravages of World War I reduced the Jewish population by a third, to 56,000, about what it had been at the beginning of the century.

As a result of the war, Palestine came under the control of Great Britain, whose Balfour Declaration had called for a Jewish National Home in Palestine. Britain’s control was formalized in 1920, when it was given the Mandate for Palestine by the League of Nations. During the Mandatory period, which lasted until May 1948, the social, political and economic structure for the future state of Israel was developed. Though the government of Palestine had a single economic policy, the Jewish and Arab economies developed separately, with relatively little connection.

Two factors were instrumental in fostering rapid economic growth of the Jewish sector: immigration and capital inflows. The Jewish population increased mainly through immigration; by the end of 1947 it had reached 630,000, about 35 percent of the total population. Immigrants came in waves, particularly large in the mid 1920s and mid 1930s. They consisted of ideological Zionists and refugees, economic and political, from Central and Eastern Europe. Capital inflows included public funds, collected by Zionist institutions, but were for the most part private funds. National product grew rapidly during periods of large immigration, but both waves of mass immigration were followed by recessions, periods of adjustment and consolidation.

In the period from 1922 to 1947 real net domestic product (NDP) of the Jewish sector grew at an average rate of 13.2 percent, and in 1947 accounted for 54 percent of the NDP of the Jewish and Arab economies together. NDP per capita in the Jewish sector grew at a rate of 4.8 percent; by the end of the period it was 8.5 times larger than in 1922, and 2.5 times larger than in the Arab sector (Metzer, 1998). Though agricultural development – an ideological objective – was substantial, this sector never accounted for more than 15 percent of total net domestic product of the Jewish economy. Manufacturing grew slowly for most of the period, but very rapidly during World War II, when Palestine was cut off from foreign competition and was a major provider to the British armed forces in the Middle East. By the end of the period, manufacturing accounted for a quarter of NDP. Housing construction, though a smaller component of NDP, was the most volatile sector, and contributed to sharp business cycle movements. A salient feature of the Jewish economy during the Mandatory period, which carried over into later periods, was the dominant size of the services sector – more than half of total NDP. This included a relatively modern educational and health sector, efficient financial and business sectors, and semi-governmental Jewish institutions, which later were ready to take on governmental duties.

The Formative Years: 1948-1965

The state of Israel came into being, in mid May 1948, in the midst of a war with its Arab neighbors. The immediate economic problems were formidable: to finance and wage a war, to take in as many immigrants as possible (first the refugees kept in camps in Europe and on Cyprus), to provide basic commodities to the old and new population, and to create a government bureaucracy to cope with all these challenges. The creation of a government went relatively smoothly, as semi-governmental Jewish institutions which had developed during the Mandatory period now became government departments.

Cease-fire agreements were signed during 1949. By the end of that year a total of 340,000 immigrants had arrived, and by the end of 1951 an additional 345,000 (the latter including immigrants from Arab countries), thus doubling the Jewish population. Immediate needs were met by a strict austerity program and inflationary government finance, repressed by price controls and rationing of basic commodities. However, the problems of providing housing and employment for the new population were solved only gradually. A New Economic Policy was introduced in early 1952. It consisted of exchange rate devaluation, the gradual relaxation of price controls and rationing, and curbing of monetary expansion, primarily by budgetary restraint. Active immigration encouragement was curtailed, to await the absorption of the earlier mass immigration.

From 1950 until 1965, Israel achieved a high rate of growth: Real GNP (gross national product) grew by an average annual rate of over 11 percent, and per capita GNP by greater than 6 percent. What made this possible? Israel was fortunate in receiving large sums of capital inflows: U.S. aid in the forms of unilateral transfers and loans, German reparations and restitutions to individuals, sale of State of Israel Bonds abroad, and unilateral transfers to public institutions, mainly the Jewish Agency, which retained responsibility for immigration absorption and agricultural settlement. Thus, Israel had resources available for domestic use – for public and private consumption and investment – about 25 percent more than its own GNP. This made possible a massive investment program, mainly financed through a special government budget. Both the enormity of needs and the socialist philosophy of the main political party in the government coalitions led to extreme government intervention in the economy.

Governmental budgets and strong protectionist measures to foster import-substitution enabled the development of new industries, chief among them textiles, and subsidies were given to help the development of exports, in addition to the traditional exports of citrus products and cut diamonds.

During the four decades from the mid 1960s until the present, Israel’s economy developed and changed, as did economic policy. A major factor affecting these developments has been the Arab-Israeli conflict. Its influence is discussed first, and is followed by brief descriptions of economic growth and fluctuations, and evolution of economic policy.

The Arab-Israel Conflict

The most dramatic event of the 1960s was the Six Day War of 1967, at the end of which Israel controlled the West Bank (of the Jordan River) – the area of Palestine absorbed by Jordan since 1949 – and the Gaza Strip, controlled until then by Egypt.

As a consequence of the occupation of these territories Israel was responsible for the economic as well as the political life in the areas taken over. The Arab sections of Jerusalem were united with the Jewish section. Jewish settlements were established in parts of the occupied territories. As hostilities intensified, special investments in infrastructure were made to protect Jewish settlers. The allocation of resources to Jewish settlements in the occupied territories has been a political and economic issue ever since.

The economies of Israel and the occupied territories were partially integrated. Trade in goods and services developed, with restrictions placed on exports to Israel of products deemed too competitive, and Palestinian workers were employed in Israel particularly in construction and agriculture. At its peak, in 1996, Palestinian employment in Israel reached 115,000 to 120,000, about 40 percent of the Palestinian labor force, but never more than 6.5 percent of total Israeli employment. Thus, while employment in Israel was a major contributor to the economy of the Palestinians, its effects on the Israeli economy, except for the sectors of construction and agriculture, were not large.

The Palestinian economy developed rapidly – real per capita national income grew at an annual rate of close to 20 percent in 1969-1972 and 5 percent in 1973-1980 – but fluctuated widely thereafter, and actually decreased in times of hostilities. Palestinian per capita income equaled 10.2 percent of Israeli per capita income in 1968, 22.8 percent in 1986, and declined to 9.7 percent in 1998 (Kleiman, 2003).

As part of the peace process between Israel and the Palestinians initiated in the 1990s, an economic agreement was signed between the parties in 1994, which in effect transformed what had been essentially a one-sided customs agreement (which gave Israel full freedom to export to the Territories but put restrictions on Palestinian exports to Israel) into a more equal customs union: the uniform external trade policy was actually Israel’s, but the Palestinians were given limited sovereignty regarding imports of certain commodities.

Arab uprisings (intifadas), in the 1980s, and especially the more violent one beginning in 2000 and continuing into 2005, led to severe Israeli restrictions on interaction between the two economies, particularly employment of Palestinians in Israel, and even to military reoccupation of some areas given over earlier to Palestinian control. These measures set the Palestinian economy back many years, wiping out much of the gains in income which had been achieved since 1967 – per capita GNP in 2004 was $932, compared to about $1500 in 1999. Palestinian workers in Israel were replaced by foreign workers.

An important economic implication of the Arab-Israel conflict is that Israel must allocate a major part of its budget to defense. The size of the defense budget has varied, rising during wars and armed hostilities. The total defense burden (including expenses not in the budget) reached its maximum relative size during and after the Yom Kippur War of 1973, close to 30 percent of GNP in 1974-1978. In the 2000-2004 period, the defense budget alone reached about 22 to 25 percent of GDP. Israel has been fortunate in receiving generous amounts of U.S. aid. Until 1972 most of this came in the form of grants and loans, primarily for purchases of U.S. agricultural surpluses. But since 1973 U.S. aid has been closely connected to Israel’s defense needs. During 1973-1982 annual loans and grants averaged $1.9 billion, and covered some 60 percent of total defense imports. But even in more tranquil periods, the defense burden, exclusive of U.S. aid, has been much larger than usual in industrial countries during peace time.

Growth and Economic Fluctuations

The high rates of growth of income and income per capita which characterized Israel until 1973 were not achieved thereafter. GDP growth fluctuated, generally between 2 and 5 percent, reaching as high as 7.5 percent in 2000, but falling below zero in the recession years from 2001 to mid 2003. By the end of the twentieth century income per capita reached about $20,000, similar to many of the more developed industrialized countries.

Economic fluctuations in Israel have usually been associated with waves of immigration: a large flow of immigrants which abruptly increases the population requires an adjustment period until it is absorbed productively, with the investments for its absorption in employment and housing stimulating economic activity. Immigration never again reached the relative size of the first years after statehood, but again gained importance with the loosening of restrictions on emigration from the Soviet Union. The total number of immigrants in 1972-1982 was 325,000, and after the collapse of the Soviet Union immigration totaled 1,050,000 in 1990-1999, mostly from the former Soviet Union. Unlike the earlier period, these immigrants were gradually absorbed in productive employment (though often not in the same activity as abroad) without resort to make-work projects. By the end of the century the population of Israel passed 6,300,000, with the Jewish population being 78 percent of the total. The immigrants from the former Soviet Union were equal to about one-fifth of the Jewish population, and were a significant and important addition of human capital to the labor force.

As the economy developed, the structure of output changed. Though the service sectors are still relatively large – trade and services contributing 46 percent of the business sector’s product – agriculture has declined in importance, and industry makes up over a quarter of the total. The structure of manufacturing has also changed: both in total production and in exports the share of traditional, low-tech industries has declined, with sophisticated, high-tech products, particularly electronics, achieving primary importance.

Fluctuations in output were marked by periods of inflation and periods of unemployment. After a change in exchange rate policy in the late 1970s (discussed below), an inflationary spiral was unleashed. Hyperinflation rates were reached in the early 1980s, running about 400 percent per year by the time a drastic stabilization policy was imposed in 1985. Exchange rate stabilization, budgetary and monetary restraint, and wage and price freezes sharply reduced the rate of inflation to less than 20 percent, and then to about 16 percent in the late 1980s. Very drastic monetary policy, from the late 1990s, finally reduced the inflation to zero by 2005. However, this policy, combined with external factors such as the bursting of the high-tech bubble, recession abroad, and domestic insecurity resulting from the intifada, led to unemployment levels above 10 percent at the beginning of the new century. The economic improvements since the latter half of 2003 have, as yet (February 2005), not significantly reduced the level of unemployment.

Policy Changes

The Israeli economy was initially subject to extensive government controls. Only gradually was the economy converted into a fairly free (though still not completely so) market economy. This process began in the 1960s. In response to a realization by policy makers that government intervention in the economy was excessive, and to the challenge posed by the creation in Europe of a customs union (which gradually progressed into the present European Union), Israel embarked upon a very gradual process of economic liberalization. This appeared first in foreign trade: quantitative restrictions on imports were replaced by tariff protection, which was slowly reduced, and both import-substitution and exports were encouraged by more realistic exchange rates rather than by protection and subsidies. Several partial trade agreements with the European Economic Community (EEC), starting in 1964, culminated in a free trade area agreement (FTA) in industrial goods in 1975, and an FTA agreement with the U.S. came into force in 1985.

By late 1977 a considerable degree of trade liberalization had taken place. In October of that year, Israel moved from a fixed exchange rate system to a floating rate system, and restrictions on capital movements were considerably liberalized. However, there followed a disastrous inflationary spiral which curbed the capital liberalization process. Capital flows were not completely liberalized until the beginning of the new century.

Throughout the 1980s and the 1990s there were additional liberalization measures: in monetary policy, in domestic capital markets, and in various instruments of governmental interference in economic activity. The role of government in the economy was considerably decreased. On the other hand, some governmental economic functions were increased: a national health insurance system was introduced, though private health providers continued to provide health services within the national system. Social welfare payments, such as unemployment benefits, child allowances, old age pensions and minimum income support, were expanded continuously, until they formed a major budgetary expenditure. These transfer payments compensated, to a large extent, for the continuous growth of income inequality, which had moved Israel from among the developed countries with the least income inequality to those with the most. By 2003, 15 percent of the government’s budget went to health services, 15 percent to education, and an additional 20 percent were transfer payments through the National Insurance Agency.

Beginning in 2003, the Ministry of Finance embarked upon a major effort to decrease welfare payments, induce greater participation in the labor force, privatize enterprises still owned by government, and reduce both the relative size of the government deficit and the government sector itself. These activities are the result of an ideological acceptance by the present policy makers of the concept that a truly free market economy is needed to fit into and compete in the modern world of globalization.

An important economic institution is the Histadrut, a federation of labor unions. What had made this institution unique is that, in addition to normal labor union functions, it encompassed agricultural and other cooperatives, major construction and industrial enterprises, and social welfare institutions, including the main health care provider. During the Mandatory period, and for many years thereafter, the Histadrut was an important factor in economic development and in influencing economic policy. During the 1990s, the Histadrut was divested of many of its non-union activities, and its influence in the economy has greatly declined. The major unions associated with it still have much say in wage and employment issues.

The Challenges Ahead

As it moves into the new century, the Israeli economy has proven to be prosperous, as it continuously introduces and applies economic innovation, and to be capable of dealing with economic fluctuations. However, it faces some serious challenges. Some of these are the same as those faced by most industrial economies: how to reconcile innovation, the switch from traditional activities which are no longer competitive to more sophisticated, skill-intensive products, with the dislocation of labor it involves and the income inequality it intensifies. Like other small economies, Israel has to see how it fits into the new global economy, marked by the two major markets of the EU and the U.S., and the emergence of China as a major economic factor.

Special issues relate to the relations of Israel with its Arab neighbors. First are the financial implications of continuous hostilities and military threats. Clearly, if peace can come to the region, resources can be transferred to more productive uses. Furthermore, foreign investment, so important for Israel’s future growth, is very responsive to political security. Other issues depend on the type of relations established: will there be the free movement of goods and workers between Israel and a Palestinian state? Will relatively free economic relations with other Arab countries lead to a greater integration of Israel in the immediate region, or, as is more likely, will Israel’s trade orientation continue to be directed mainly to the present major industrial countries? If the latter proves true, Israel will have to carefully maneuver between the two giants: the U.S. and the EU.


Citation: Halevi, Nadav. “A Brief Economic History of Modern Israel”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-brief-economic-history-of-modern-israel/

Economic History of Hawai’i

Sumner La Croix, University of Hawai’i and East-West Center

The Hawaiian Islands are a chain of 132 islands, shoals, and reefs extending over 1,523 miles in the Northeast Pacific Ocean. Eight islands — Hawai’i, Maui, O’ahu, Kaua’i, Moloka’i, Lana’i, Ni’ihau, and Kaho’olawe — possess 99 percent of the land area (6,435 square miles) and are noted for their volcanic landforms, unique flora and fauna, and diverse climates.

From Polynesian Settlement to Western Contact

The Islands were uninhabited until sometime around 400 AD when Polynesian voyagers sailing double-hulled canoes arrived from the Marquesas Islands (Kirch, 1985, p. 68). Since the settlers had no written language and virtually no contact with the Western world until 1778, our knowledge of Hawai’i’s pre-history comes primarily from archaeological investigations and oral legends. A relatively egalitarian society and subsistence economy were coupled with high population growth rates until about 1100 when continued population growth led to a major expansion of the areas of settlement and cultivation. Perhaps under pressures of increasing resource scarcity, a new, more hierarchical social structure emerged, characterized by chiefs (ali’i) and subservient commoners (maka’ainana). In the two centuries prior to Western contact, there is considerable evidence that ruling chiefs (ali’i nui) competed to extend their lands by conquest and that this led to cycles of expansion and retrenchment.

Captain James Cook’s ships reached Hawai’i in 1778, thereby ending a long period of isolation for the Islands. Captain James King observed in 1779 that Hawaiians were generally “above the middle size” of Europeans, a rough indicator that Hawaiians generally had a diet superior to eighteenth-century Europeans. At contact, Hawaiian social and political institutions were similar to those found in other Polynesian societies. Hawaiians were sharply divided into three main social classes: ali’i (chiefs), maka’ainana (commoners), and kahuna (priests). Oral legends tell us that the Islands were usually divided into six to eight small kingdoms consisting of an island or part of an island, each governed by an ali’i nui (ruling chief). The ali’i nui had extensive rights to all lands and material goods and the ability to confiscate or redistribute material wealth at any time. Redistribution usually occurred only when a new ruling chief took office or when lands were conquered or lost. The ali’i nui gave temporary land grants to ali’i who, in turn, gave temporary land grants to konohiki (managers), who then “contracted” with maka’ainana, the great majority of the populace, to work the lands.

Hawaiian society and economy had their roots in extended families (‘ohana) working cooperatively on an ahupua’a, a land unit running from the mountains to the sea. Numerous tropical root, tuber, and tree crops were cultivated. Taro, a wetland crop, was cultivated primarily in windward areas, while sweet potatoes and yams, both dryland crops, were cultivated in drier leeward areas. The maka’ainana apparently lived well above subsistence levels, with extensive time available for cultural activities, sports, and games. There were unquestionably periods of hardship, but these times tended to be associated with drought or other causes of poor harvest.

Unification of Hawai’i and Population Decline

The long-prevailing political equilibrium began to disintegrate shortly after the introduction of guns and the spread of new diseases to the Islands. In 1784, the most powerful ali’i nui, Kamehameha, began a war of conquest, and with his superior use of modern weapons and western advisors, he subdued all other chiefdoms, with the exception of Kaua’i, by 1795. Each chief in his ruling coalition received the right to administer large areas of land, consisting of smaller strips on various islands. Sumner La Croix and James Roumasset (1984) have argued that the strip system conveyed durability to the newly unified kingdom (by making it more costly for an ali’i to accumulate a power base on one island) and facilitated monitoring of ali’i production by the new king. In 1810, Kamehameha reached a negotiated settlement with Kaumuali’i, the ruling chief of Kaua’i, which brought the island under his control, thereby bringing the entire island chain under a single monarchy.

Exposure to Western diseases produced a massive decline in the native population of Hawai’i from 1778 through 1900 (Table 1). Estimates of Hawai’i’s population at the time of contact vary wildly, from approximately 110,000 to one million people (Bushnell, 1993; Dye, 1994). The first missionary census in 1831-1832 counted 130,313 people. A substantial portion of the decline can be attributed to a series of epidemics beginning after contact, including measles, influenza, diarrhea, and whooping cough. The introduction of venereal diseases was a factor behind declining crude birth rates. The first accurate census conducted in the Islands revealed a population of 80,641 in 1849. The native Hawaiian population reached its lowest point in 1900 when the U.S. census revealed only 39,656 full or part Hawaiians.

Table 1: Population of Hawai’i

Year       Total Population      Native Hawaiian Population
1778       110,000-1,000,000     110,000-1,000,000
1831-32    130,313               n.a.
1853       73,137                71,019
1872       56,897                51,531
1890       89,990                40,622
1900       154,001               39,656
1920       255,881               41,750
1940       422,770               64,310
1960       632,772               102,403
1980       964,691               115,500
2000       1,211,537             239,655

Sources: Total population from http://www.hawaii.gov/dbedt/db99/index.html, Table 1.01, Dye (1994), and Bushnell (1993). Native Hawaiian population for 1853-1960 from Schmitt (1977), p. 25. Data from the 2000 census includes people declaring “Native Hawaiian” as their only race or one of two races. See http://factfinder.census.gov/servlet/DTTable?_ts=18242084330 for the 2000 census population.

The Rise and Fall of Sandalwood and Whaling

With the unification of the Islands came the opening of foreign trade. Trade in sandalwood, a wood in demand in China for ornamental uses and burning as incense, began in 1805. The trade was interrupted by the War of 1812 and then flourished from 1816 to the late 1820s before fading away in the 1830s and 1840s (Kuykendall, 1957, I, pp. 86-87). La Croix and Roumasset (1984) have argued that the centralized organization of the sandalwood trade under King Kamehameha provided the king with incentives to harvest sandalwood efficiently. The adoption of a decentralized production system by his successor (Liholiho) led to the sandalwood being treated by ali’i as a common property resource. The reallocation of resources from agricultural production to sandalwood production not only led to rapid exhaustion of the sandalwood resource but also to famine.

As the sandalwood industry declined, Hawai’i became the base for the north-central Pacific whaling trade. The impetus for the new trade was the 1818 discovery of the “Offshore Ground” west of Peru and the 1820 discovery of rich sperm whale grounds off the coast of Japan. The first whaling ship visited the Islands in 1820, and by the late 1820s over 150 whaling ships were stopping in Hawai’i annually. While ship visits declined somewhat during the 1830s, by 1843 over 350 whaling ships annually visited the two major ports of Honolulu and Lahaina. Through the 1850s over 500 whaling ships visited Hawai’i annually. The demise of the Pacific whaling fleet during the U.S. Civil War and the rapid rise of the petroleum industry led to steep declines in the number of ships visiting Hawai’i, and after 1870 only a trickle of ships continued to visit.

Missionaries and Land Tenure

In 1819, King Kamehameha’s successor, Liholiho, abandoned the system of religious practices known as the kapu system and ordered temples (heiau) and images of the gods desecrated and burnt. In April 1820, missionaries from New England arrived and began filling the religious void with conversions to Protestant Christianity. Over the next two decades, as church attendance became widespread, the missionaries suppressed many traditional Hawaiian cultural practices, operated over 1,000 common schools, and instructed the ali’i in western political economy. The king promulgated a constitution with provisions for a Hawai’i legislature in 1840. It was followed, later in the decade, by laws establishing a cabinet, civil service, and judiciary. Under the 1852 constitution, male citizens received the right to vote in elections for a legislative lower house. Missionaries and other foreigners regularly served in cabinets through the end of the monarchy.

In 1844, the government began a 12-year program, known as the Great Mahele (Division), to dismantle the traditional system of land tenure. King Kauikeaouli gave up his interest in all island lands, retaining ownership only in selected estates. Ali’i had the right to take out fee simple title to lands held at the behest of the king. Maka’ainana had the right to claim fee simple title to small farms (kuleana). At the end of the claiming period, maka’ainana had received less than 40,000 acres of land, while the government (roughly 1.5 million acres), the king (roughly 900,000 acres), and the ali’i (roughly 1.5 million acres) all received substantial shares. Foreigners were initially not allowed to own land in fee simple, but an 1850 law overturned this restriction. By the end of the nineteenth century, commoners and chiefs had sold, lost, or given up their lands, with foreigners and large estates owning most non-government lands.

Lilikala Kame’eleihiwa (1992) found the origins of the Mahele in the traditional duty of a king to undertake a redistribution of land and the difficulty of such an undertaking during the initial years of missionary influence. By contrast, La Croix and Roumasset (1990) found the origins of the Mahele in the rising value of Hawaii land in sugar cultivation, with fee simple title facilitating investment in the land, irrigation facilities, and processing factories.

Sugar, Immigration, and Population Increase

The first commercially viable sugar plantation, Ladd and Co., was started on Kaua’i in 1835, and the sugar industry achieved moderate growth through the 1850s. Hawai’i’s sugar exports to California soared during the U.S. Civil War, but the end of hostilities in 1865 also meant the end of the sugar boom. The U.S. tariff on sugar posed a major obstacle to expanding sugar production in Hawai’i during peacetime, as the high tariff, ranging from 20 to 42 percent between 1850 and 1870, limited the extent of profitable sugar cultivation in the islands. Sugar interests helped elect King Kalakaua to the Hawaiian throne over the British-leaning Queen Emma in February 1874, and Kalakaua immediately sought a trade agreement with the United States. The 1876 reciprocity treaty between Hawai’i and the United States allowed duty-free sales of Hawai’i sugar and other selected agricultural products in the United States as well as duty-free sales of most U.S. manufactured goods in Hawai’i. Sugar exports from Hawai’i to the United States soared after the treaty’s promulgation, rising from 21 million pounds in 1876 to 114 million pounds in 1883 to 224.5 million pounds in 1890 (Table 2).

Table 2: Hawai’i Sugar Production (1000 short tons)

Year   Exports   Year   Production   Year   Production
1850   0.4       1900   289.5        1950   961
1860   0.7       1910   529.9        1960   935.7
1870   9.4       1920   560.4        1970   1162.1
1880   31.8      1930   939.3        1990   819.6
1890   129.9     1940   976.7        1999   367.5

Sources: Data for 1850-1970 are from Schmitt (1977), pp. 418-420. Data for 1990 and 1999 are from http://www.hawaii.gov/dbedt/db99/index.html, Table 22.09. Data for 1850-1880 are exports. Data for 1910-1990 are converted to 96° raw value.

The reciprocity treaty set the tone for Hawai’i’s economy and society over the next 80 years by establishing the sugar industry as Hawai’i’s leading industry and altering the demographic composition of the Islands via the industry’s labor demands. Rapid expansion of the sugar industry after reciprocity sharply increased its demand for labor: plantation employment rose from 3,921 in 1872 to 10,243 in 1882 to 20,536 in 1892. The increase in labor demand occurred while the native Hawaiian population continued its precipitous decline, and the Hawai’i government responded to labor shortages by allowing sugar planters to bring in overseas contract laborers bound to serve at fixed wages for three- to five-year periods. The enormous increase in the plantation workforce consisted of first Chinese, then Japanese, then Portuguese contract laborers.

The extensive investment in sugar industry lands and irrigation systems, coupled with the rapid influx of overseas contract laborers, changed the bargaining positions of Hawai’i and the United States when the reciprocity treaty was due for renegotiation in 1883. La Croix and Christopher Grandy (1997) argued that the profitability of the planters’ new investment was dependent on access to the U.S. market, and this improved the bargaining position of the United States. As a condition for renewal of the treaty, the United States demanded access to Pearl Bay [now Pearl Harbor]. King Kalakaua opposed this demand, and in July 1887, opponents of the government forced the king to accept a new constitution and cabinet. With the election of a new pro-American government in September 1887, the king signed an extension of the reciprocity treaty in October 1887 that granted access rights to Pearl Bay to the United States for the life of the treaty.

Annexation and the Sugar Economy

In 1890, the U.S. Congress enacted the McKinley Tariff, which allowed raw sugar to enter the United States free of duty and established a two-cent per pound bounty for domestic producers. The overall effect of the McKinley Tariff was to completely erase the advantages that the reciprocity treaty had provided to Hawaiian sugar producers over other foreign sugar producers selling in the U.S. market. The value of Hawaiian merchandise exports plunged from $13 million in 1890 to $10 million in 1891 to a low point of $8 million in 1892.

La Croix and Grandy (1997) argued that the McKinley Tariff threatened the wealth of the planters and induced important changes in Hawai’i’s domestic politics. King Kalakaua died in January 1891, and his sister succeeded him. After Queen Lili’uokalani proposed to declare a new constitution in January 1893, a group of U.S. residents, with the incautious assistance of the U.S. Minister and troops from a U.S. warship, overthrew the monarchy. The new government, dominated by the white minority, offered Hawai’i to the United States for annexation beginning in 1893. Annexation was first opposed by U.S. President Cleveland and then, during U.S. President McKinley’s term, failed to obtain Congressional approval. The advent of the Spanish-American War and the ensuing hostilities in the Philippines raised Hawai’i’s strategic value to the United States, and Hawai’i was annexed by a joint resolution of Congress in July 1898. Hawai’i became a U.S. territory with the passage of the Organic Act on June 14, 1900.

Economic Integration with the United States

Annexation by the United States in 1900 voided bound labor contracts, freeing the existing labor force. After annexation, the sugar planters and the Hawai’i government recruited workers from Japan, Korea, the Philippines, Spain, Portugal, Puerto Rico, England, Germany, and Russia. The ensuing flood of immigrants swelled the population of the Hawaiian Islands from 109,020 people in 1896 to 232,856 people in 1915. The growth in the plantation labor force was one factor behind the expansion of sugar production from 289,500 short tons in 1900 to 939,300 short tons in 1930. Pineapple production also expanded, from just 2,000 cases of canned fruit in 1903 to 12,808,000 cases in 1931.

La Croix and Price Fishback (2000) established that European and American workers on sugar plantations were paid job-specific wage premiums relative to Asian workers and that the premium paid for unskilled American workers fell by one third between 1901 and 1915 and for European workers by 50 percent or more over the same period. While similar wage gaps disappeared during this period on the U.S. West Coast, Hawai’i plantations were able to maintain a portion of the wage gaps because they constantly found new low-wage immigrants to work in the Hawai’i market. Immigrant workers from Asia failed, however, to climb many rungs up the job ladder on Hawai’i sugar plantations, and this was a major factor behind labor unrest in the sugar industry. Edward Beechert (1985) concluded that large-scale strikes on sugar plantations during 1909 and 1920 improved the welfare of sugar plantation workers but did not lead to recognition of labor unions. Between 1900 and 1941, many sugar workers responded to limited advancement and wage prospects on the sugar plantation by leaving the plantations for jobs in Hawai’i’s growing urban areas.

The rise of the sugar industry and the massive inflow of immigrant workers into Hawai’i were accompanied by a decline in the Native Hawaiian population and its overall welfare (La Croix and Rose, 1999). Native Hawaiians and their political representatives argued that government lands should be made available for homesteading to enable Hawaiians to resettle in rural areas and to return to farming occupations. The U.S. Congress enacted legislation in 1921 to reserve specified rural and urban lands for a new Hawaiian Homes Program. La Croix and Louis Rose have argued that the Hawaiian Homes Program has functioned poorly, providing benefits for only a small portion of the Hawaiian population over the course of the twentieth century.

Five firms (Castle & Cooke, Alexander & Baldwin, C. Brewer & Co., Theo. Davies & Co., and American Factors) came to dominate the sugar industry. Originally established to provide financial, labor recruiting, transportation, and marketing services to plantations, they gradually acquired the plantations and also gained control over other vital industries such as banking, insurance, retailing, and shipping. By 1933, their plantations produced 96 percent of the sugar crop. The “Big Five’s” dominance would continue until the rise of the tourism industry and statehood induced U.S. and foreign firms to enter Hawai’i’s markets.

The Great Depression hit Hawai’i hard, as employment in the sugar and pineapple industries declined during the early 1930s. In December 1936, about one-quarter of Hawai’i’s labor force was unemployed. Full recovery would not occur until the military began a buildup in the mid-1930s in reaction to Japan’s occupation of Manchuria. With the Japanese invasion of China in 1937, the number of U.S. military personnel in Hawai’i increased to 48,000 by September 1940.

World War II and its Aftermath

The Japanese attack on the American Pacific Fleet at Pearl Harbor on December 7, 1941 led to a declaration of martial law, a state that continued until October 24, 1944. The war was accompanied by a massive increase in American armed service personnel in Hawai’i, with numbers increasing from 28,000 in 1940 to 378,000 in 1944. The total population increased from 429,000 in 1940 to 858,000 in 1944, thereby substantially increasing the demand for retail, restaurant, and other consumer services. An enormous construction program to house the new personnel was undertaken in 1941 and 1942. The wartime interruption of commercial shipping reduced the tonnage of civilian cargo arriving in Hawai’i by more than 50 percent. Employees working in designated high priority organizations, including sugar plantations, had their jobs and wages frozen in place by General Order 18 which also suspended union activity.

In March 1943, the National Labor Relations Board was allowed to resume operations, and the International Longshoremen’s and Warehousemen’s Union (ILWU) organized 34 of Hawai’i’s 35 sugar plantations, the pineapple plantations, and the longshoremen by November 1945. The passage of the Hawai’i Employment Relations Act in 1945 facilitated union organizing by providing agricultural workers with the same union organizing rights as industrial workers.

After the War, Hawai’i’s economy stagnated, as demobilized armed services personnel left Hawai’i for the U.S. mainland. With the decline in population, real per capita personal income declined at an annual rate of 5.7 percent between 1945 and 1949 (Schmitt, 1977, pp. 148, 167). During this period, Hawai’i’s newly formed unions embarked on a series of disruptive strikes covering West Coast and Hawai’i longshoremen (1946-1949); the sugar industry (1946); and the pineapple industry (1947, 1951). The economy began a nine-year period of moderate expansion in 1949, with the annual growth rate of real personal income averaging 2.3 percent. The expansion of propeller-driven commercial air service sent visitor numbers soaring, from 15,000 in 1946 to 171,367 in 1958, and induced construction of new hotels and other tourism facilities and infrastructure. The onset of the Korean War increased the number of armed service personnel stationed in Hawai’i from 21,000 in 1950 to 50,000 in 1958. Pineapple production and canning also displayed substantial increases over the decade, increasing from 13,697,000 cases in 1949 to 18,613,000 cases in 1956.

Integration and Growth after Statehood

In 1959, Hawai’i became the fiftieth state. The transition from territorial to statehood status was one factor behind the 1958-1973 boom, in which real per capita personal income increased at an annual rate of 4 percent. The most important factor behind the long expansion was the introduction of commercial jet service in 1959, as the jet plane dramatically reduced the money and time costs of traveling to Hawai’i. Also fueled by rapidly rising real incomes in the United States and Japan, the tourism industry would continue its rapid growth through 1990. Visitor arrivals (see Table 3) increased from 171,367 in 1958 to 6,723,531 in 1990. Growth in visitor arrivals was once again accompanied by growth in the construction industry, particularly from 1965 to 1975. The military build-up during the Vietnam War also contributed to the boom by increasing defense expenditures in Hawai’i by 3.9 percent annually from 1958 to 1973 (Schmitt, 1977, pp. 148, 668).

Table 3: Visitor Arrivals to Hawai’i

Year   Visitor Arrivals   Year   Visitor Arrivals
1930   18,651             1970   1,745,904
1940   25,373             1980   3,928,789
1950   46,593             1990   6,723,531
1960   296,249            2000   6,975,866

Source: Hawai’i Tourism Authority, http://www.hawaii.gov/dbedt/monthly/historical-r.xls at Table 5 and http://www.state.hi.us/dbedt/monthly/index2k.html.

From 1973 to 1990, growth in real per capita personal income slowed to 1.1 percent annually. The defense and agriculture sectors stagnated, with most growth generated by the relentless increase in visitor arrivals. Japan’s persistently high rates of economic growth during the 1970s and 1980s spilled over to Hawai’i in the form of huge increases in the numbers of Japanese tourists and in the value of Japanese foreign investment in Hawai’i. At the end of the 1980s, the Hawai’i unemployment rate was just 2-3 percent, employment had been steadily growing since 1983, and prospects looked good for continued expansion of both tourism and the overall economy.

The Malaise of the 1990s

From 1991 to 1998, Hawai’i’s economy was hit by several negative shocks. The 1990-1991 recession in the United States, the closure of California military bases and defense plants, and uncertainty over the safety of air travel during the 1991 Gulf War combined to reduce visitor arrivals from the United States in the early and mid-1990s. Volatile and slow growth in Japan throughout the 1990s led to declines in Japanese visitor arrivals in the late 1990s. The ongoing decline in sugar and pineapple production gathered steam in the 1990s, with only a handful of plantations still in business by 2001. The cumulative impact of these adverse shocks was severe, as real per capita personal income did not change between 1991 and 1998.

A recovery took hold after 1998 and continued through summer 2001 despite a slowing U.S. economy. It came to an abrupt halt with the terrorist attacks of September 11, 2001, as domestic and foreign tourism declined sharply.

References

Beechert, Edward D. Working in Hawaii: A Labor History. Honolulu: University of Hawaii Press, 1985.

Bushnell, Andrew F. “The ‘Horror’ Reconsidered: An Evaluation of the Historical Evidence for Population Decline in Hawai’i, 1778-1803.” Pacific Studies 16 (1993): 115-161.

Daws, Gavan. Shoal of Time: A History of the Hawaiian Islands. Honolulu: University of Hawaii Press, 1968.

Dye, Tom. “Population Trends in Hawai’i before 1778.” The Hawaiian Journal of History 28 (1994): 1-20.

Hitch, Thomas Kemper. Islands in Transition: The Past, Present, and Future of Hawaii’s Economy. Honolulu: First Hawaiian Bank, 1992.

Kame’eleihiwa, Lilikala. Native Land and Foreign Desires: Pehea La E Pono Ai? Honolulu: Bishop Museum Press, 1992.

Kirch, Patrick V. Feathered Gods and Fishhooks: An Introduction to Hawaiian Archaeology and Prehistory. Honolulu: University of Hawaii Press, 1985.

Kuykendall, Ralph S. A History of the Hawaiian Kingdom. 3 vols. Honolulu: University of Hawaii Press, 1938-1967.

La Croix, Sumner J., and Price Fishback. “Firm-Specific Evidence on Racial Wage Differentials and Workforce Segregation in Hawaii’s Sugar Industry.” Explorations in Economic History 26 (1989): 403-423.

La Croix, Sumner J., and Price Fishback. “Migration, Labor Market Dynamics, and Wage Differentials in Hawaii’s Sugar Industry.” Advances in Agricultural Economic History 1 (2000): 31-72.

La Croix, Sumner J., and Christopher Grandy. “The Political Instability of Reciprocal Trade and the Overthrow of the Hawaiian Kingdom.” Journal of Economic History 57 (1997): 161-189.

La Croix, Sumner J., and Louis A. Rose. “The Political Economy of the Hawaiian Homelands Program.” In The Other Side of the Frontier: Economic Explorations into Native American History, edited by Linda Barrington. Boulder, Colorado: Westview Press, 1999.

La Croix, Sumner J., and James Roumasset. “An Economic Theory of Political Change in Pre-Missionary Hawaii.” Explorations in Economic History 21 (1984): 151-168.

La Croix, Sumner J., and James Roumasset. “The Evolution of Property Rights in Nineteenth-Century Hawaii.” Journal of Economic History 50 (1990): 829-852.

Morgan, Theodore. Hawaii, A Century of Economic Change: 1778-1876. Cambridge, MA: Harvard University Press, 1948.

Schmitt, Robert C. Historical Statistics of Hawaii. Honolulu: University Press of Hawaii, 1977.

Citation: La Croix, Sumner. “Economic History of Hawai’i”. EH.Net Encyclopedia, edited by Robert Whaples. September 27, 2001. URL http://eh.net/encyclopedia/economic-history-of-hawaii/

An Overview of the Great Depression

Randall Parker, East Carolina University

This article provides an overview of selected events and economic explanations of the interwar era. What follows is not intended to be a detailed and exhaustive review of the literature on the Great Depression, or of any one theory in particular. Rather, it will attempt to describe the “big picture” events and topics of interest. For the reader who wishes more extensive analysis and detail, references to additional materials are also included.

The 1920s

The Great Depression, and the economic catastrophe that it was, is perhaps properly scaled in reference to the decade that preceded it, the 1920s. By conventional macroeconomic measures, this was a decade of brisk economic growth in the United States. Perhaps the moniker “the roaring twenties” summarizes this period most succinctly. The disruptions and shocking nature of World War I had been survived and it was felt the United States was entering a “new era.” In January 1920, the Federal Reserve’s seasonally adjusted index of industrial production, a standard measure of aggregate economic activity, stood at 81 (1935–39 = 100). When the index peaked in July 1929 it was at 114, for a growth rate of 40.6 percent over this period. Similar rates of growth over the 1920–29 period equal to 47.3 percent and 42.4 percent are computed using annual real gross national product data from Balke and Gordon (1986) and Romer (1988), respectively. Further computations using the Balke and Gordon (1986) data indicate an average annual growth rate of real GNP over the 1920–29 period equal to 4.6 percent. In addition, the relative international economic strength of this country was clearly displayed by the fact that nearly one-half of world industrial output in 1925–29 was produced in the United States (Bernanke, 1983).
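As a minimal illustration of the growth arithmetic used throughout this article, the following sketch (in Python) recomputes the index growth from the rounded index values quoted above; the small difference from the reported 40.6 percent presumably reflects rounding in the underlying series.

    # Growth of the industrial production index (1935-39 = 100)
    # from January 1920 (index = 81) to July 1929 (index = 114).
    start, end = 81.0, 114.0
    total_growth = (end - start) / start        # fractional growth over the period
    print(f"{100 * total_growth:.1f} percent")  # ~40.7 percent with these rounded values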

Consumer Durables Market

The decade of the 1920s also saw major innovations in the consumption behavior of households. The development of installment credit over this period led to substantial growth in the consumer durables market (Bernanke, 1983). Purchases of automobiles, refrigerators, radios and other such durable goods all experienced explosive growth during the 1920s as small borrowers, particularly households and unincorporated businesses, utilized their access to available credit (Persons, 1930; Bernanke, 1983; Soule, 1947).

Economic Growth in the 1920s

Economic growth during this period was mitigated only somewhat by three recessions. According to the National Bureau of Economic Research (NBER) business cycle chronology, two of these recessions were from May 1923 through July 1924 and October 1926 through November 1927. Both of these recessions were very mild and unremarkable. In contrast, the 1920s began with a recession lasting 18 months from the peak in January 1920 until the trough of July 1921. Original estimates of real GNP from the Commerce Department showed that real GNP fell 8 percent between 1919 and 1920 and another 7 percent between 1920 and 1921 (Romer, 1988). The behavior of prices contributed to the naming of this recession “the Depression of 1921,” as the implicit price deflator for GNP fell 16 percent and the Bureau of Labor Statistics wholesale price index fell 46 percent between 1920 and 1921. Although this downturn was long thought to be severe, Romer (1988) has argued that the so-called “postwar depression” was milder than the original estimates suggest. While the deflation from war-time prices was substantial, revised estimates of real GNP show falls in output of only 1 percent between 1919 and 1920 and 2 percent between 1920 and 1921. Romer (1988) also argues that the behaviors of output and prices are inconsistent with the conventional explanation of the Depression of 1921 being primarily driven by a decline in aggregate demand. Rather, the deflation and the mild recession are better understood as resulting from a decline in aggregate demand together with a series of positive supply shocks, particularly in the production of agricultural goods, and significant decreases in the prices of imported primary commodities. Overall, the upshot is that the growth path of output was hardly impeded by the three minor downturns, so the decade of the 1920s can properly be viewed economically as a very healthy period.

Fed Policies in the 1920s

Friedman and Schwartz (1963) label the 1920s “the high tide of the Reserve System.” As they explain, the Federal Reserve became increasingly confident in the tools of policy and in its knowledge of how to use them properly. The synchronous movements of economic activity and explicit policy actions by the Federal Reserve did not go unnoticed. Taking the next step and concluding there was cause and effect, the Federal Reserve in the 1920s began to use monetary policy as an implement to stabilize business cycle fluctuations. “In retrospect, we can see that this was a major step toward the assumption by government of explicit continuous responsibility for economic stability. As the decade wore on, the System took – and perhaps even more was given – credit for the generally stable conditions that prevailed, and high hopes were placed in the potency of monetary policy as then administered” (Friedman and Schwartz, 1963).

This giving and taking of credit is particularly relevant to the recession of 1920–21. Although suggesting the Federal Reserve probably tightened too much, too late, Friedman and Schwartz (1963) call this episode “the first real trial of the new system of monetary control introduced by the Federal Reserve Act.” It is clear from the history of the time that the Federal Reserve felt as though it had successfully passed this test. The data showed that the economy had quickly recovered, and brisk growth followed the recession of 1920–21 for the remainder of the decade.

Questionable Lessons “Learned” by the Fed

Moreover, Eichengreen (1992) suggests that the episode of 1920–21 led the Federal Reserve System to believe that the economy could be successfully deflated or “liquidated” without paying a severe penalty in terms of reduced output. This conclusion, however, proved to be mistaken at the onset of the Depression. As argued by Eichengreen (1992), the Federal Reserve did not appreciate the extent to which the successful deflation could be attributed to the unique circumstances that prevailed during 1920–21. The European economies were still devastated after World War I, so the demand for United States exports remained strong for many years after the War. Moreover, the gold standard was not in operation at the time. Therefore, European countries were not forced to match the deflation initiated in the United States by the Federal Reserve (as explained below in the discussion of the gold standard hypothesis).

The implication is that the Federal Reserve thought that deflation could be generated with little effect on real economic activity. Therefore, the Federal Reserve was not vigorous in fighting the Great Depression in its initial stages. It viewed the early years of the Depression as another opportunity to successfully liquidate the economy, especially after the perceived speculative excesses of the 1920s. However, the state of the economic world in 1929 was not a duplicate of 1920–21. By 1929, the European economies had recovered and the interwar gold standard was a vehicle for the international transmission of deflation. Deflation in 1929 would not operate as it did in 1920–21. The Federal Reserve failed to understand the economic implications of this change in the international standing of the United States’ economy. The result was that the Depression was permitted to spiral out of control and was made much worse than it otherwise would have been had the Federal Reserve not considered it to be a repeat of the 1920–21 recession.

The Beginnings of the Great Depression

In January 1928 the seeds of the Great Depression, whenever they were planted, began to germinate. For it is around this time that two of the most prominent explanations for the depth, length, and worldwide spread of the Depression first came to be manifest. Without any doubt, the economics profession would come to a firm consensus around the idea that the economic events of the Great Depression cannot be properly understood without a solid linkage to both the behavior of the supply of money together with Federal Reserve actions on the one hand and the flawed structure of the interwar gold standard on the other.

It is well documented that many public officials, such as President Herbert Hoover and members of the Federal Reserve System in the latter 1920s, were intent on ending what they perceived to be the speculative excesses that were driving the stock market boom. Moreover, as explained by Hamilton (1987), despite plentiful denials to the contrary, the Federal Reserve assumed the role of “arbiter of security prices.” Although there continues to be debate as to whether or not the stock market was overvalued at the time (White, 1990; DeLong and Shleifer, 1991), the main point is that the Federal Reserve believed there to be a speculative bubble in equity values. Hamilton (1987) describes how the Federal Reserve, intending to “pop” the bubble, embarked on a highly contractionary monetary policy in January 1928. Between December 1927 and July 1928 the Federal Reserve conducted $393 million of open market sales of securities so that only $80 million remained in the Open Market account. Buying rates on bankers’ acceptances were raised from 3 percent in January 1928 to 4.5 percent by July, reducing Federal Reserve holdings of such bills by $193 million, leaving a total of only $185 million of these bills on balance. Further, the discount rate was increased from 3.5 percent to 5 percent, the highest level since the recession of 1920–21. “In short, in terms of the magnitudes consciously controlled by the Fed, it would be difficult to design a more contractionary policy than that initiated in January 1928” (Hamilton, 1987).

The pressure did not stop there, however. The death of Federal Reserve Bank of New York Governor Benjamin Strong and the subsequent control of policy ascribed to Adolph Miller of the Federal Reserve Board ensured that the fall in the stock market was going to be made a reality. Miller believed the speculative excesses of the stock market were hurting the economy, and the Federal Reserve continued attempting to put an end to this perceived harm (Cecchetti, 1998). The amount of Federal Reserve credit that was being extended to market participants in the form of broker loans became an issue in 1929. The Federal Reserve adamantly discouraged lending that was collateralized by equities. The intentions of the Board of Governors of the Federal Reserve were made clear in a letter dated February 2, 1929 sent to Federal Reserve banks. In part the letter read:

The board has no disposition to assume authority to interfere with the loan practices of member banks so long as they do not involve the Federal reserve banks. It has, however, a grave responsibility whenever there is evidence that member banks are maintaining speculative security loans with the aid of Federal reserve credit. When such is the case the Federal reserve bank becomes either a contributing or a sustaining factor in the current volume of speculative security credit. This is not in harmony with the intent of the Federal Reserve Act, nor is it conducive to the wholesome operation of the banking and credit system of the country. (Board of Governors of the Federal Reserve 1929: 93–94, quoted from Cecchetti, 1998)

The deflationary pressure on stock prices had been applied. It was now a question of when the market would break. Although the effects were not immediate, the wait was not long.

The Economy Stumbles

The NBER business cycle chronology dates the start of the Great Depression in August 1929. For this reason many have said that the Depression started on Main Street and not Wall Street. Be that as it may, the stock market plummeted in October of 1929. The bursting of the speculative bubble had been achieved and the economy was now headed in an ominous direction. The Federal Reserve’s seasonally adjusted index of industrial production stood at 114 (1935–39 = 100) in August 1929. By October it had fallen to 110 for a decline of 3.5 percent (annualized percentage decline = 14.7 percent). After the crash, the incipient recession intensified, with the industrial production index falling from 110 in October to 100 in December 1929, or 9 percent (annualized percentage decline = 41 percent). In 1930, the index fell further from 100 in January to 79 in December, or an additional 21 percent.
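The annualized figures in this paragraph are consistent with compounding a roughly two-month decline over four periods per year; the sketch below assumes that convention (an inference from the reported numbers, not something stated in the source) and reproduces the figures to within rounding.

    # Annualize a short-period decline by compounding it four times per year,
    # treating each roughly two-month fall as one quarter.
    def annualized_decline(start_index, end_index, periods_per_year=4):
        per_period = (start_index - end_index) / start_index
        return (1 + per_period) ** periods_per_year - 1

    print(annualized_decline(114, 110))  # ~0.148, vs. the 14.7 percent reported
    print(annualized_decline(110, 100))  # ~0.416, vs. the 41 percent reported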

Links between the Crash and the Depression?

While popular history treats the crash and the Depression as one and the same event, economists know that they were not. But there is no doubt that the crash was one of the things that got the ball rolling. Several authors have offered explanations for the linkage between the crash and the recession of 1929–30. Mishkin (1978) argues that the crash and an increase in liabilities led to a deterioration in households’ balance sheets. The reduced liquidity led consumers to defer consumption of durable goods and housing and thus contributed to a fall in consumption. Temin (1976) suggests that the fall in stock prices had a negative wealth effect on consumption, but attributes only a minor role to this given that stocks were not a large fraction of total wealth; the stock market in 1929, although falling dramatically, remained above the value it had achieved in early 1928, and the propensity to consume from wealth was small during this period. Romer (1990) provides evidence suggesting that if the stock market were thought to be a predictor of future economic activity, then the crash can rightly be viewed as a source of increased consumer uncertainty that depressed spending on consumer durables and accelerated the decline that had begun in August 1929. Flacco and Parker (1992) confirm Romer’s findings using different data and alternative estimation techniques.

Looking back on the behavior of the economy during the year of 1930, industrial production declined 21 percent, the consumer price index fell 2.6 percent, the supply of high-powered money (that is, the liabilities of the Federal Reserve that are usable as money, consisting of currency in circulation and bank reserves; also called the monetary base) fell 2.8 percent, the nominal supply of money as measured by M1 (the product of the monetary base multiplied by the money multiplier) dipped 3.5 percent and the ex post real interest rate turned out to be 11.3 percent, the highest it had been since the recession of 1920–21 (Hamilton, 1987). In spite of this, when put into historical context, there was no reason to view the downturn of 1929–30 as historically unprecedented. Its magnitude was comparable to that of many recessions that had previously occurred. Perhaps there was justifiable optimism in December 1930 that the economy might even shake off the negative movement and embark on the path to recovery, rather like what had occurred after the recession of 1920–21 (Bernanke, 1983). As we know, the bottom would not come for another 27 months.
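The two monetary definitions invoked here can be written out explicitly. The sketch below illustrates both identities with hypothetical input values chosen only to show the accounting; they are not Hamilton’s data.

    # Identity 1: M1 = monetary base (high-powered money) x money multiplier.
    monetary_base = 7.0          # hypothetical, in billions of dollars
    money_multiplier = 3.7       # hypothetical
    print(monetary_base * money_multiplier)   # 25.9; a fall in either term pulls M1 down

    # Identity 2: ex post real rate = nominal rate minus realized inflation.
    nominal_rate = 0.05          # hypothetical nominal interest rate
    realized_inflation = -0.026  # the CPI fell 2.6 percent in 1930 (deflation)
    print(nominal_rate - realized_inflation)  # 0.076; deflation raises the realized real rate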

The Economy Crumbles

Banking Failures

During 1931, there was a “change in the character of the contraction” (Friedman and Schwartz, 1963). The first of a series of banking panics, beginning in October 1930 and lasting until December 1930, now accompanied the downward spasms of the business cycle. Although bank failures had occurred throughout the 1920s, the magnitude of the failures that occurred in the early 1930s was of a different order altogether (Bernanke, 1983). The absence of any type of deposit insurance resulted in the contagion of the panics spreading to sound financial institutions, not just those on the margin.

Traditional Methods of Combating Bank Runs Not Used

Moreover, institutional arrangements that had existed in the private banking system to provide liquidity – to convert assets into cash – and fight bank runs before 1913 were not exercised after the creation of the Federal Reserve System. For example, during the panic of 1907, the effects of the financial upheaval had been contained through a combination of lending by consortia of private banks, known as clearinghouses, and the suspension of deposit convertibility into currency. While these countermeasures enacted by private banks did not prevent bank runs and the financial panic, they lessened the economic impact to a significant extent, as the economy quickly recovered in 1908. The aftermath of the panic of 1907 and the desire to have a central authority to combat the contagion of financial disruptions was one of the factors that led to the establishment of the Federal Reserve System. After the creation of the Federal Reserve, clearinghouse lending and suspension of deposit convertibility by private banks were not undertaken. Believing the Federal Reserve to be the “lender of last resort,” it was apparently thought that the responsibility to fight bank runs was the domain of the central bank (Friedman and Schwartz, 1963; Bernanke, 1983). Unfortunately, when the banking panics came in waves and the financial system was collapsing, being the “lender of last resort” was a responsibility that the Federal Reserve either could not or would not assume.

Money Supply Contracts

The economic effects of the banking panics were devastating. Aside from the obvious impact of the closing of failed banks and the subsequent loss of deposits by bank customers, the money supply accelerated its downward spiral. Although the economy had flattened out after the first wave of bank failures in October–December 1930, with the industrial production index steadying from 79 in December 1930 to 80 in April 1931, the remainder of 1931 brought a series of shocks from which the economy was not to recover for some time.

Second Wave of Banking Failure

In May 1931, the failure of Austria’s largest bank, the Kreditanstalt, touched off financial panics in Europe. In September 1931, having had enough of the distress associated with the international transmission of economic depression, Britain abandoned its participation in the gold standard. Further, just as the United States’ economy appeared to be trying to begin recovery, the second wave of bank failures hit the financial system in June and did not abate until December. In addition, the Hoover administration in December 1931, adhering to its principles of limited government, embarked on a campaign to balance the federal budget. Tax increases resulted the following June, just as the economy was to hit the first low point of its so-called “double bottom” (Hoover, 1952).

The results of these events are now evident. Between January and December 1931 the industrial production index declined from 78 to 66, or 15.4 percent, the consumer price index fell 9.4 percent, the nominal supply of M1 dipped 5.7 percent, the ex post real interest rate remained at 11.3 percent, and although the supply of high-powered money actually increased 5.5 percent, the currency–deposit and reserve–deposit ratios began their ascent, and thus the money multiplier started its plunge (Hamilton, 1987). If the economy had flattened out in the spring of 1931, then by December output, the money supply, and the price level were all on negative growth paths that were dragging the economy deeper into depression.
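The mechanism linking the two ratios to the multiplier is the textbook identity m = (1 + c)/(c + r), where c is the currency–deposit ratio and r is the reserve–deposit ratio. A minimal sketch with hypothetical ratios (not Hamilton’s figures) shows why rising c and r drag the multiplier down.

    # Money multiplier identity: m = (1 + c) / (c + r),
    # with c = currency-deposit ratio and r = reserve-deposit ratio.
    def money_multiplier(c, r):
        return (1 + c) / (c + r)

    # As the public converts deposits to currency (c rises) and banks
    # hold more reserves (r rises), the multiplier falls.
    print(money_multiplier(0.15, 0.12))  # ~4.26
    print(money_multiplier(0.25, 0.18))  # ~2.91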

Third Wave of Banking Failure

The economic difficulties were far from over. The economy displayed some evidence of recovery in late summer/early fall of 1932. However, in December 1932 the third, and largest, wave of banking panics hit the financial markets, and the collapse of the economy arrived with the business cycle hitting bottom in March 1933. Industrial production between January 1932 and March 1933 fell an additional 15.6 percent. For the combined years of 1932 and 1933, the consumer price index fell a cumulative 16.2 percent, the nominal supply of M1 dropped 21.6 percent, the nominal M2 money supply fell 34.7 percent, and although the supply of high-powered money increased 8.4 percent, the currency–deposit and reserve–deposit ratios accelerated their ascent. Thus the money multiplier continued its plunge, which was not arrested until March 1933. Similar behaviors for real GDP, prices, money supplies and other key macroeconomic variables occurred in many European economies as well (Snowdon and Vane, 1999; Temin, 1989).

An examination of the macroeconomic data in August 1929 compared to March 1933 provides a stark contrast. The unemployment rate of 3 percent in August 1929 was at 25 percent in March 1933. The industrial production index of 114 in August 1929 was at 54 in March 1933, or a 52.6 percent decrease. The money supply had fallen 35 percent, prices plummeted by about 33 percent, and more than one-third of banks in the United States were either closed or taken over by other banks. The “new era” ushered in by “the roaring twenties” was over. Roosevelt took office in March 1933, a nationwide bank holiday was declared from March 6 until March 13, and the United States abandoned the international gold standard in April 1933. Recovery commenced immediately and the economy began its long path back to the pre-1929 secular growth trend.

Table 1 summarizes the drop in industrial production in the major economies of Western Europe and North America. Table 2 gives gross national product estimates for the United States from 1928 to 1941. The constant price series adjusts for inflation and deflation.

Table 1
Indices of Total Industrial Production, 1927 to 1935 (1929 = 100)

1927 1928 1929 1930 1931 1932 1933 1934 1935
Britain 95 94 100 94 86 89 95 105 114
Canada 85 94 100 91 78 68 69 82 90
France 84 94 100 99 85 74 83 79 77
Germany 95 100 100 86 72 59 68 83 96
Italy 87 99 100 93 84 77 83 85 99
Netherlands 87 94 100 109 101 90 90 93 95
Sweden 85 88 100 102 97 89 93 111 125
U.S. 85 90 100 83 69 55 63 69 79

Source: Industrial Statistics, 1900-57 (Paris, OEEC, 1958), Table 2.

Table 2
U.S. GNP at Constant (1929) and Current Prices, 1928-1941

Year GNP at constant (1929) prices (billions of $) GNP at current prices (billions of $)
1928 98.5 98.7
1929 104.4 104.6
1930 95.1 91.2
1931 89.5 78.5
1932 76.4 58.6
1933 74.2 56.1
1934 80.8 65.5
1935 91.4 76.5
1936 100.9 83.1
1937 109.1 91.2
1938 103.2 85.4
1939 111.0 91.2
1940 121.0 100.5
1941 131.7 124.7

Contemporary Explanations

The economics profession during the 1930s was at a loss to explain the Depression. The most prominent conventional explanations were of two types. First, some observers at the time firmly grounded their explanations on the two pillars of classical macroeconomic thought, Say’s Law and the belief in the self-equilibrating powers of the market. Many argued that it was simply a question of time before wages and prices adjusted fully enough for the economy to return to full employment and achieve the realization of the putative axiom that “supply creates its own demand.” Second, the Austrian school of thought argued that the Depression was the inevitable result of overinvestment during the 1920s. The best remedy for the situation was to let the Depression run its course so that the economy could be purified from the negative effects of the false expansion. Government intervention was viewed by the Austrian school as a mechanism that would simply prolong the agony and make any subsequent depression worse than it would ordinarily be (Hayek, 1966; Hayek, 1967).

Liquidationist Theory

The Hoover administration and the Federal Reserve Board also contained several so-called “liquidationists.” These individuals basically believed that economic agents should be forced to re-arrange their spending proclivities and alter their alleged profligate use of resources. If it took mass bankruptcies to produce this result and wipe the slate clean so that everyone could have a fresh start, then so be it. The liquidationists viewed the events of the Depression as an economic penance for the speculative excesses of the 1920s. Thus, the Depression was the price that was being paid for the misdeeds of the previous decade. This is perhaps best exemplified in the well-known quotation of Treasury Secretary Andrew Mellon, who advised President Hoover to “Liquidate labor, liquidate stocks, liquidate the farmers, liquidate real estate.” Mellon continued, “It will purge the rottenness out of the system. High costs of living and high living will come down. People will work harder, live a more moral life. Values will be adjusted, and enterprising people will pick up the wrecks from less competent people” (Hoover, 1952). Hoover apparently followed this advice as the Depression wore on. He continued to reassure the public that if the principles of orthodox finance were faithfully followed, recovery would surely be the result.

The business press at the time was not immune from such liquidationist prescriptions either. The Commercial and Financial Chronicle, in an August 3, 1929 editorial entitled “Is Not Group Speculating Conspiracy, Fostering Sham Prosperity?” complained of the economy being replete with profligate spending including:

(a) The luxurious diversification of diet advantageous to dairy men … and fruit growers …; (b) luxurious dressing … more silk and rayon …; (c) free spending for automobiles and their accessories, gasoline, house furnishings and equipment, radios, travel, amusements and sports; (d) the displacement from the farms by tractors and autos of produce-consuming horses and mules to a number aggregating 3,700,000 for the period 1918–1928 … (e) the frills of education to thousands for whom places might better be reserved at bench or counter or on the farm. (Quoted from Nelson, 1991)

Persons, in a paper that appeared in the November 1930 Quarterly Journal of Economics, demonstrates that some academic economists also held similar liquidationist views.

Although certainly not universal, the descriptions above suggest that no small part of the conventional wisdom at the time believed the Depression to be a penitence for past sins. In addition, it was thought that the economy would be restored to full employment equilibrium once wages and prices adjusted sufficiently. Say’s Law will ensure the economy will return to health, and supply will create its own demand sufficient to return to prosperity, if we simply let the system work its way through. In his memoirs published in 1952, 20 years after his election defeat, Herbert Hoover continued to steadfastly maintain that if Roosevelt and the New Dealers would have stuck to the policies his administration put in place, the economy would have made a full recovery within 18 months after the election of 1932. We have to intensify our resolve to “stay the course.” All will be well in time if we just “take our medicine.” In hindsight, it challenges the imagination to think up worse policy prescriptions for the events of 1929–33.

Modern Explanations

There remains considerable debate regarding the economic explanations for the behavior of the business cycle between August 1929 and March 1933. This section describes the main hypotheses that have been presented in the literature attempting to explain the causes for the depth, protracted length, and worldwide propagation of the Great Depression.

The United States’ experience, considering the preponderance of empirical results and historical simulations contained in the economic literature, can largely be accounted for by the monetary hypothesis of Friedman and Schwartz (1963) together with the nonmonetary/financial hypotheses of Bernanke (1983) and Fisher (1933). That is, most, but not all, of the characteristic phases of the business cycle and the depth to which output fell from 1929 to 1933 can be accounted for by the monetary and nonmonetary/financial hypotheses. The international experience, well documented in Choudhri and Kochin (1980), Hamilton (1988), Temin (1989), Bernanke and James (1991), and Eichengreen (1992), can be properly understood as resulting from a flawed interwar gold standard. Each of these hypotheses is explained in greater detail below.

Nonmonetary/Nonfinancial Theories

It should be noted that I do not include a section covering the nonmonetary/nonfinancial theories of the Great Depression. These theories, including Temin’s (1976) focus on autonomous consumption decline, the collapse of housing construction contained in Anderson and Butkiewicz (1980), the effects of the stock market crash, the uncertainty hypothesis of Romer (1990), and the Smoot–Hawley Tariff Act of 1930, are all worthy of mention and can rightly be apportioned some of the responsibility for initiating the Depression. However, any theory of the Depression must be able to account for the protracted problems associated with the punishing deflation imposed on the United States and the world during that era. While the nonmonetary/nonfinancial theories go a long way toward accounting for the impetus for, and the first year of, the Depression, my reading of the empirical results of the economic literature indicates that they do not have the explanatory power of the three other theories mentioned above to account for the depths to which the economy plunged.

Moreover, recent research by Olney (1999) argues convincingly that the decline in consumption was not autonomous at all. Rather, it resulted from high consumer indebtedness: because default was expensive, heavily indebted households cut spending to protect future consumption. Olney shows that households were shouldering an unprecedented burden of installment debt, especially for automobiles; down payments were large and contracts were short. Missed installment payments triggered repossession, which reduced consumer wealth in 1930 because households lost all acquired equity. Cutting consumption was therefore the only viable strategy in 1930 for avoiding default.

The Monetary Hypothesis

In reviewing the economic history of the Depression above, it was mentioned that the supply of money fell by 35 percent, prices dropped by about 33 percent, and one-third of all banks vanished. Milton Friedman and Anna Schwartz, in their 1963 book A Monetary History of the United States, 1867–1960, call this massive drop in the supply of money “The Great Contraction.”

Friedman and Schwartz (1963) discuss and painstakingly document the synchronous movements of the real economy with the disruptions that occurred in the financial sector. They point out that the series of bank failures that occurred beginning in October 1930 worsened economic conditions in two ways. First, bank shareholder wealth was reduced as banks failed. Second, and most importantly, the bank failures were exogenous shocks and led to the drastic decline in the money supply. The persistent deflation of the 1930s follows directly from this “great contraction.”

Criticisms of Fed Policy

However, this raises an important question: Where was the Federal Reserve while the money supply and the financial system were collapsing? If the Federal Reserve was created in 1913 primarily to be the “lender of last resort” for troubled financial institutions, it was failing miserably. Friedman and Schwartz pin the blame squarely on the Federal Reserve and the failure of monetary policy to offset the contractions in the money supply. As the money multiplier continued on its downward path, the monetary base, rather than being aggressively increased, merely drifted gently upward. As banks were failing in waves, was the Federal Reserve attempting to contain the panics by aggressively lending to banks scrambling for liquidity? The unfortunate answer is “no.” When the panics were occurring, was there discussion of suspending deposit convertibility or suspending the gold standard, both of which had been successfully employed in the past? Again the unfortunate answer is “no.” Did the Federal Reserve consider the fact that it had an abundant supply of free gold, and therefore that monetary expansion was feasible? Once again the unfortunate answer is “no.” The argument can be summarized by the following quotation:
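
The mechanics can be made concrete with a small Python sketch. The balance-sheet numbers are hypothetical, chosen only to illustrate how a collapsing multiplier can swamp a gently rising base; the multiplier formula is the one given in note 4.

# Hypothetical balance sheets (in billions): C = currency held by the public,
# D = deposits, R = required reserves, E = excess reserves.
C29, D29, R29, E29 = 4.0, 22.0, 2.2, 0.1   # "1929": confidence in banks
C33, D33, R33, E33 = 5.0, 14.0, 1.2, 0.6   # "1933": hoarding of currency

def multiplier(C, D, R, E):
    # Money multiplier: money supply (C + D) over monetary base (C + R + E).
    return (C + D) / (C + R + E)

print(multiplier(C29, D29, R29, E29))     # about 4.1
print(multiplier(C33, D33, R33, E33))     # about 2.8: the multiplier collapses
print(C29 + R29 + E29, C33 + R33 + E33)   # base: 6.3 -> 6.8, drifting slightly up
print(C29 + D29, C33 + D33)               # money supply: 26.0 -> 19.0, down about 27 percent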

At all times throughout the 1929–33 contraction, alternative policies were available to the System by which it could have kept the stock of money from falling, and indeed could have increased it at almost any desired rate. Those policies did not involve radical innovations. They involved measures of a kind the System had taken in earlier years, of a kind explicitly contemplated by the founders of the System to meet precisely the kind of banking crisis that developed in late 1930 and persisted thereafter. They involved measures that were actually proposed and very likely would have been adopted under a slightly different bureaucratic structure or distribution of power, or even if the men in power had had somewhat different personalities. Until late 1931 – and we believe not even then – the alternative policies involved no conflict with the maintenance of the gold standard. Until September 1931, the problem that recurrently troubled the System was how to keep the gold inflows under control, not the reverse. (Friedman and Schwartz, 1963)

The inescapable conclusion is that it was a failure of the policies of the Federal Reserve System in responding to the crises of the time that made the Depression as bad as it was. If monetary policy had responded differently, the economic events of 1929–33 need not have been as they occurred. This assertion is supported by the results of Fackler and Parker (1994). Using counterfactual historical simulations, they show that if the Federal Reserve had kept the M1 money supply growing along its pre-October 1929 trend of 3.3 percent annually, most of the Depression would have been averted. McCallum (1990) also reaches similar conclusions employing a monetary base feedback policy in his counterfactual simulations.
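
The flavor of the counterfactual can be conveyed with back-of-the-envelope arithmetic. The sketch below is only an illustration of the trend comparison, not a reproduction of the Fackler-Parker or McCallum simulations.

# Index M1 to 100 in October 1929 and compare March 1933 outcomes.
actual_1933 = 100 * (1 - 0.35)       # M1 actually fell roughly 35 percent
trend_1933 = 100 * 1.033 ** 3.5      # 3.3 percent annual growth for about 3.5 years
print(actual_1933)                   # 65.0
print(round(trend_1933, 1))          # about 112.0
print(round(trend_1933 / actual_1933, 2))   # the trend path is about 1.7 times the actual stock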

Lack of Leadership at the Fed

Friedman and Schwartz trace the seeds of these regrettable events to the death of Federal Reserve Bank of New York President Benjamin Strong in 1928. Strong’s death altered the locus of power in the Federal Reserve System and left it without effective leadership. Friedman and Schwartz maintain that Strong had the personality, confidence and reputation in the financial community to lead monetary policy and sway policy makers to his point of view. Friedman and Schwartz believe that Strong would not have permitted the financial panics and liquidity crises to persist and affect the real economy. Instead, after Governor Strong died, the conduct of open market operations passed from a five-man committee dominated by the New York Federal Reserve to a 12-man committee of Federal Reserve Bank governors. Decisiveness in leadership was replaced by inaction and drift. Others (Temin, 1989; Wicker, 1965) reject this point, claiming that the policies of the Federal Reserve in the 1930s were not inconsistent with the policies pursued in the decade of the 1920s.

The Fed’s Failure to Distinguish between Nominal and Real Interest Rates

Meltzer (1976) also points out errors made by the Federal Reserve. His argument is that the Federal Reserve failed to distinguish between nominal and real interest rates. That is, while nominal rates were falling, the Federal Reserve did virtually nothing, since it construed this to be a sign of an “easy” credit market. However, in the face of deflation, real rates were rising and there was in fact a “tight” credit market. Failure to make this distinction made money a contributing factor to the initial decline of 1929.
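
Meltzer’s distinction is an application of the Fisher equation, under which the ex ante real rate equals the nominal rate minus expected inflation. A worked example with hypothetical round numbers:

# Fisher equation: real rate = nominal rate - expected inflation.
nominal_rate = 0.03           # a 3 percent nominal rate looks like "easy" credit
expected_inflation = -0.10    # but prices are expected to fall 10 percent
real_rate = nominal_rate - expected_inflation
print(real_rate)              # 0.13: a punishing 13 percent ex ante real rate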

Deflation

Cecchetti (1992) and Nelson (1991) bolster the monetary hypothesis by demonstrating that the deflation during the Depression was anticipated at short horizons, once it was under way. The result, using the Fisher equation, is that high ex ante real interest rates were the transmission mechanism that led from falling prices to falling output. In addition, Cecchetti (1998) and Cecchetti and Karras (1994) argue that if the lower bound of the nominal interest rate is reached, then continued deflation renders the opportunity cost of holding money negative. In this instance the nature of money changes. Now the rate of deflation places a floor on the real return nonmoney assets must provide to make them attractive to hold. If nonmoney assets cannot exceed the real return on money holdings, then agents will move their assets into cash and the result will be negative net investment and a decapitalization of the economy.
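
Continuing the same hypothetical numbers, once the nominal rate reaches its lower bound the deflation rate itself becomes the real return on cash, and hence the hurdle rate for every other asset:

nominal_rate = 0.0            # nominal rate at its lower bound
deflation_rate = 0.10         # prices falling 10 percent per year
real_return_on_cash = nominal_rate + deflation_rate
print(real_return_on_cash)    # 0.10: cash earns 10 percent real, risk free
# Any nonmoney asset yielding less than 10 percent real is dominated by
# holding money, so agents hoard cash and net investment turns negative.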

Critics of the Monetary Hypothesis

The monetary hypothesis, however, is not without its detractors. Paul Samuelson observes that the monetary base did not fall during the Depression. Moreover, expecting the Federal Reserve to have aggressively increased the monetary base by whatever amount was necessary to stop the decline in the money supply is to reason with hindsight: a course of action for monetary policy such as this was beyond the scope of discussion prevailing at the time. In addition, others, like Moses Abramovitz, point out that the money supply had endogenous components that were beyond the Federal Reserve’s ability to control. Namely, the money supply may have been falling as a result of declining economic activity, so-called “reverse causation.” Moreover, the gold standard, to which the United States continued to adhere until March 1933, also tied the hands of the Federal Reserve insofar as gold outflows required the Federal Reserve to contract the supply of money. These views are also contained in Temin (1989) and Eichengreen (1992), as discussed below.

Bernanke (1983) argues that the monetary hypothesis: (i) is not a complete explanation of the link between the financial sector and aggregate output in the 1930s; (ii) does not explain how decreases in the money supply caused output to keep falling over many years, especially since it is widely believed that changes in the money supply only change prices and other nominal economic values in the long run, not real economic values like output; and (iii) is quantitatively insufficient to explain the depth of the decline in output. Bernanke (1983) not only resurrected and sharpened Fisher’s (1933) debt-deflation hypothesis, but also made further contributions to what has come to be known as the nonmonetary/financial hypothesis.

The Nonmonetary/Financial Hypothesis

Bernanke (1983), building on the monetary hypothesis of Friedman and Schwartz (1963), presents an alternative interpretation of the way in which the financial crises may have affected output. The argument involves both the effects of debt deflation and the impact that bank panics had on the ability of financial markets to efficiently allocate funds from lenders to borrowers. These nonmonetary/financial theories hold that events in financial markets other than shocks to the money supply can help to account for the paths of output and prices during the Great Depression.

Fisher (1933) asserted that the dominant forces that account for “great” depressions are (nominal) over-indebtedness and deflation. Specifically, he argued that real debt burdens were substantially increased when there were dramatic declines in the price level and nominal incomes. The combination of deflation, falling nominal income and increasing real debt burdens led to debtor insolvency, lowered aggregate demand, and thereby contributed to a continuing decline in the price level and thus further increases in the real burden of debt.
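
Fisher’s mechanism is simple arithmetic. A sketch with a hypothetical borrower whose nominal debt is fixed while prices fall by a third, roughly as they did between 1929 and 1933:

debt = 1000.0                        # fixed nominal debt
p0, p1 = 1.00, 0.67                  # the price level falls by about a third
print(debt / p0, round(debt / p1))   # real burden: 1000 -> about 1493 in base-year goods
income0 = 2000.0                     # nominal income falls roughly with prices
income1 = income0 * p1
print(debt / income0, round(debt / income1, 2))   # debt-to-income ratio: 0.5 -> 0.75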

The “Credit View”

Bernanke (1983), in what is now called the “credit view,” provided additional details to help explain Fisher’s debt deflation hypothesis. He argued that in normal circumstances, an initial decline in prices merely reallocates wealth from debtors to creditors, such as banks. Usually, such wealth redistributions are minor in magnitude and have no first-order impact on the economy. However, in the face of large shocks, deflation in the prices of assets forfeited to banks by debtor bankruptcies leads to a decline in the nominal value of assets on bank balance sheets. For a given value of bank liabilities, also denominated in nominal terms, this deterioration in bank assets threatens insolvency. As banks reallocate away from loans to safer government securities, some borrowers, particularly small ones, are unable to obtain funds, often at any price. Further, if this reallocation is long-lived, the shortage of credit for these borrowers helps to explain the persistence of the downturn. As the disappearance of bank financing forces lower expenditure plans, aggregate demand declines, which again contributes to the downward deflationary spiral. For debt deflation to be operative, it is necessary to demonstrate that there was a substantial build-up of debt prior to the onset of the Depression and that the deflation of the 1930s was at least partially unanticipated at medium- and long-term horizons at the time that the debt was being incurred. Both of these conditions appear to have been in place (Fackler and Parker, 2001; Hamilton, 1992; Evans and Wachtel, 1993).

The Breakdown in Credit Markets

In addition, the financial panics that occurred hindered the credit allocation mechanism. Bernanke (1983) explains that the process of credit intermediation requires substantial information gathering and non-trivial market-making activities. The financial disruptions of 1930–33 were substantial impediments to the performance of these services and thus impaired the efficient allocation of credit between lenders and borrowers. That is, financial panics and debtor and business bankruptcies resulted in an increase in the real cost of credit intermediation. As the cost of credit intermediation increased, sources of credit for many borrowers (especially households, farmers and small firms) became expensive or even unobtainable at any price. This tightening of credit put downward pressure on aggregate demand and helped turn the recession of 1929–30 into the Great Depression. The empirical support for the validity of the nonmonetary/financial hypothesis during the Depression is substantial (Bernanke, 1983; Fackler and Parker, 1994, 2001; Hamilton, 1987, 1992), although support for the “credit view” of the transmission mechanism of monetary policy in post-World War II economic activity is substantially weaker. In combination, considering the preponderance of empirical results and historical simulations contained in the economic literature, the monetary hypothesis and the nonmonetary/financial hypothesis go a substantial distance toward accounting for the economic experiences of the United States during the Great Depression.

The Role of Pessimistic Expectations

To this combination, the behavior of expectations should also be added. As explained by James Tobin, there was another reason for a “change in the character of the contraction” in 1931. Although Friedman and Schwartz attribute this “change” to the bank panics that occurred, Tobin points out that change also took place because of the emergence of pessimistic expectations. If it was thought that the early stages of the Depression were symptomatic of a recession that was not different in kind from similar episodes in our economic history, and that recovery was a real possibility, the public need not have had pessimistic expectations. Instead the public may have anticipated that things would get better. However, after the British left the gold standard, expectations changed in a very pessimistic way. The public may very well have believed that the business cycle downturn was not going to be reversed, but rather was going to get worse. The depressing effects on consumption and investment that follow when households and business investors begin to plan for further decline rather than for recovery are common knowledge in the modern macroeconomic literature. For the literature on the Great Depression, the empirical research conducted on the expectations hypothesis focuses almost exclusively on uncertainty (which is not the same thing as pessimistic/optimistic expectations) and its contribution to the onset of the Depression (Romer, 1990; Flacco and Parker, 1992). Although Keynes (1936) writes extensively about the state of expectations and their economic influence, the literature is silent regarding the empirical validity of the expectations hypothesis in 1931–33. Yet the continued shocks that the United States’ economy received demonstrated that the business cycle downturn of 1931–33 was of a different kind than had previously been known. Once the public believed this to be so and made their plans accordingly, the results had to have been economically devastating. Lacking formal empirical confirmation, I have not segregated the expectations hypothesis as a separate hypothesis in this overview. However, the logic of the argument above compels me to believe that the expectations hypothesis provides an impressive addition to the monetary hypothesis and the nonmonetary/financial hypothesis in accounting for the economic experiences of the United States during the Great Depression.

The Gold Standard Hypothesis

Recent research on the operation of the interwar gold standard has deepened our understanding of the Depression and its international character. The way in which the interwar gold standard was structured and operated provides a convincing explanation of the international transmission of deflation and depression that occurred in the 1930s.

The story has its beginning in the 1870–1914 period. During this time the gold standard functioned as a pegged exchange rate system where certain rules were observed. Namely, it was necessary for countries to permit their money supplies to be altered in response to gold flows in order for the price-specie flow mechanism to function properly. It operated successfully because countries that were gaining gold allowed their money supply to increase and raise the domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Countries that were losing gold were obligated to permit their money supply to decrease and generate a decline in their domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Eichengreen (1992) discusses and extensively documents that the gold standard of this period functioned as smoothly as it did because of the international commitment countries had to the gold standard and the level of international cooperation exhibited during this time. “What rendered the commitment to the gold standard credible, then, was that the commitment was international, not merely national. That commitment was activated through international cooperation” (Eichengreen, 1992).

The gold standard was suspended when the hostilities of World War I broke out. By the end of 1928, major countries such as the United States, the United Kingdom, France and Germany had re-established ties to a functioning fixed exchange rate gold standard. However, Eichengreen (1992) points out that the world in which the gold standard functioned before World War I was not the same world in which the gold standard was being re-established. A credible commitment to the gold standard, as Hamilton (1988) explains, required that a country maintain fiscal soundness and political objectives that ensured the monetary authority could pursue a monetary policy consistent with long-run price stability and continuous convertibility of the currency. Successful operation required these conditions to be in place before re-establishment of the gold standard was operational. However, many governments during the interwar period went back on the gold standard under the opposite set of circumstances. They re-established ties to the gold standard precisely because, amid the political chaos generated after World War I, they were incapable of fiscal soundness and lacked political objectives conducive to reforming monetary policy so that it could ensure long-run price stability. “By this criterion, returning to the gold standard could not have come at a worse time or for poorer reasons” (Hamilton, 1988). Kindleberger (1973) stresses the fact that the pre-World War I gold standard functioned as well as it did because of the unquestioned leadership exercised by Great Britain. After World War I and the relative decline of Britain, the United States did not exhibit the same strength of leadership Britain had shown before. The upshot is that it was an unsuitable environment in which to re-establish the gold standard after World War I, and the interwar gold standard was destined to drift in a state of malperformance as no one took responsibility for its proper functioning. However, the problems did not end there.

Flaws in the Interwar International Gold Standard

Lack of Symmetry in the Response of Gold-Gaining and Gold-Losing Countries

The interwar gold standard operated with four structural/technical flaws that almost certainly doomed it to failure (Eichengreen, 1986; Temin, 1989; Bernanke and James, 1991). The first, and most damaging, was the lack of symmetry in the response of gold-gaining countries and gold-losing countries that resulted in a deflationary bias that was to drag the world deeper into deflation and depression. If a country was losing gold reserves, it was required to decrease its money supply to maintain its commitment to the gold standard. Given that a minimum gold reserve had to be maintained and that countries became concerned when the gold reserve fell within 10 percent of this minimum, little gold could be lost before the necessity of monetary contraction, and thus deflation, became a reality. Moreover, with a fractional gold reserve ratio of 40 percent, the result was a decline in the domestic money supply equal to 2.5 times the gold outflow. On the other hand, there was no such constraint on countries that experienced gold inflows. Gold reserves were accumulated without the binding requirement that the domestic money supply be expanded. Thus the price–specie flow mechanism ceased to function and the equilibrating forces of the pre-World War I gold standard were absent during the interwar period. If a country attracting gold reserves were to embark on a contractionary path, the result would be the further extraction of gold reserves from other countries on the gold standard and the imposition of deflation on their economies as well, as they were forced to contract their money supplies. “As it happened, both of the two major gold surplus countries – France and the United States, who at the time together held close to 60 percent of the world’s monetary gold – took deflationary paths in 1928–1929” (Bernanke and James, 1991).
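
The asymmetry is easy to quantify with the 40 percent reserve ratio cited above (a stylized sketch that abstracts from institutional detail):

reserve_ratio = 0.40                  # gold must back 40 percent of the money supply
gold_outflow = 100.0                  # gold lost by the deficit country
print(gold_outflow / reserve_ratio)   # 250.0: money must contract 2.5 times the gold lost
# The surplus country faces no mirror-image rule; it may sterilize the inflow,
# so the contraction abroad is not offset by an expansion at home.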

Foreign Exchange Reserves

Second, countries that did not have reserve currencies could hold their minimum reserves in the form of both gold and convertible foreign exchange reserves. If the threat of devaluation of a reserve currency appeared likely, a country holding foreign exchange reserves could divest itself of the foreign exchange, as holding it became a more risky proposition. Further, the convertible reserves were usually only fractionally backed by gold. Thus, if countries were to prefer gold holdings as opposed to foreign exchange reserves for whatever reason, the result would be a contraction in the world money supply as reserves were destroyed in the movement to gold. This effect can be thought of as equivalent to the effect on the domestic money supply in a fractional reserve banking system of a shift in the public’s money holdings toward currency and away from bank deposits.
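
A stylized sketch with hypothetical magnitudes shows why a flight from foreign exchange into gold was contractionary for the world as a whole:

gold, foreign_exchange = 600.0, 400.0   # world reserves: gold plus convertible FX
print(gold + foreign_exchange)          # 1000.0 before the flight to gold
converted = 200.0                       # FX cashed in for gold by nervous holders
foreign_exchange -= converted           # the FX component is extinguished, while
print(gold + foreign_exchange)          # the gold merely changes hands: 800.0,
                                        # so total world reserves have shrunk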

The Bank of France and Open Market Operations

Third, the powers of many European central banks were restricted or eliminated outright. In particular, as discussed by Eichengreen (1986), the Bank of France was prohibited from engaging in open market operations, i.e. the purchase or sale of government securities. Given that France was one of the countries amassing gold reserves, this restriction largely prevented it from adhering to the rules of the gold standard. The proper response would have been to expand the money supply and inflate so as not to continue attracting gold reserves and imposing deflation on the rest of the world. This was not done. France continued to accumulate gold until 1932 and did not leave the gold standard until 1936.

Inconsistent Currency Valuations

Lastly, the gold standard was re-established at parities that were unilaterally determined by each individual country. When France returned to the gold standard in 1926, it returned at a parity rate that is believed to have undervalued the franc. When Britain returned to the gold standard in 1925, it returned at a parity rate that is believed to have overvalued the pound. In this situation, the only sustainable equilibrium required the French to inflate their economy in response to the gold inflows. However, given its legacy of inflation during the 1921–26 period, France steadfastly resisted inflation (Eichengreen, 1986). The maintenance of the gold standard and the resistance to inflation were now inconsistent policy objectives. The Bank of France’s inability to conduct open market operations only made matters worse. The accumulation of gold and the exporting of deflation to the world was the result.

The Timing of Recoveries

Taken together, the flaws described above made the interwar gold standard dysfunctional and, in the end, unsustainable. Looking back, we observe that the timing of departure from the gold standard, and of the subsequent recovery, differed across countries. For some countries recovery came sooner; for some it came later. It is in this timing of departure from the gold standard that recent research has produced a remarkable empirical finding. From the work of Choudri and Kochin (1980), Eichengreen and Sachs (1985), Temin (1989), and Bernanke and James (1991), we now know that the sooner a country abandoned the gold standard, the quicker its recovery commenced. Spain, which never restored its participation in the gold standard, missed the ravages of the Depression altogether. Britain left the gold standard in September 1931, and started to recover. Sweden left the gold standard at the same time as Britain, and started to recover. The United States left in March 1933, and recovery commenced. France, Holland, and Poland, which continued to adhere to the gold standard until 1936, saw their economies struggle long after the United States’ recovery began. Only after they left did recovery start; departure from the gold standard freed a country from the ravages of deflation.

The Fed and the Gold Standard: The “Midas Touch”

Temin (1989) and Eichengreen (1992) argue that it was the unbending commitment to the gold standard that generated deflation and depression worldwide. They emphasize that the gold standard required fiscal and monetary authorities around the world to submit their economies to internal adjustment and economic instability in the face of international shocks. Given how the gold standard tied countries together, if the gold parity were to be defended and devaluation was not an option, unilateral monetary actions by any one country were pointless. The end result is that Temin (1989) and Eichengreen (1992) reject Friedman and Schwartz’s (1963) claim that the Depression was caused by a series of policy failures on the part of the Federal Reserve. Actions taken in the United States, according to Temin (1989) and Eichengreen (1992), cannot be properly understood in isolation from the rest of the world. If the commitment to the gold standard was to be maintained, monetary and fiscal authorities worldwide had little choice in responding to the crises of the Depression. Why did the Federal Reserve continue a policy of inaction during the banking panics? Because the commitment to the gold standard, what Temin (1989) has labeled “The Midas Touch,” gave them no choice but to let the banks fail. Monetary expansion and the injection of liquidity would lower interest rates, lead to a gold outflow, and potentially be contrary to the rules of the gold standard. Continued deflation due to gold outflows would begin to call into question the monetary authority’s commitment to the gold standard. “Defending gold parity might require the authorities to sit idly by as the banking system crumbled, as the Federal Reserve did at the end of 1931 and again at the beginning of 1933” (Eichengreen, 1992). Thus, if the adherence to the gold standard were to be maintained, the money supply was endogenous with respect to the balance of payments and beyond the influence of the Federal Reserve.

Eichengreen (1992) concludes further that what made the pre-World War I gold standard so successful was absent during the interwar period: credible commitment to the gold standard activated through international cooperation in its implementation and management. Had these important ingredients of the pre-World War I gold standard been present during the interwar period, twentieth-century economic history may have been very different.

Recovery and the New Deal

March 1933 was the rock bottom of the Depression, and the inauguration of Franklin D. Roosevelt represented a sharp break with the status quo. Upon taking office, Roosevelt declared a bank holiday; the United States left the interwar gold standard the following month; and the government commenced with several measures designed to resurrect the financial system. These measures included: (i) the Reconstruction Finance Corporation, created under Hoover in 1932, which set about funneling large sums of liquidity to banks and other intermediaries; (ii) the Securities Exchange Act of 1934, which established margin requirements for bank loans used to purchase stocks and bonds and increased information requirements for potential investors; and (iii) the Glass–Steagall Act, which strictly separated commercial banking and investment banking. Although these measures delivered some immediate relief to financial markets, lenders remained reluctant to extend credit after the events of 1929–33, and the recovery of financial markets was slow and incomplete. Bernanke (1983) estimates that the United States’ financial system did not begin to shed the inefficiencies under which it was operating until the end of 1935.

The NIRA

Policies designed to promote different economic institutions were enacted as part of the New Deal. The National Industrial Recovery Act (NIRA) was passed on June 16, 1933 and was designed to raise prices and wages. In addition, the Act mandated the formation of planning boards in critical sectors of the economy. The boards were charged with setting output goals for their respective sectors, and the usual result was a restriction of production. In effect, the NIRA was a license for industries to form cartels, and it was struck down as unconstitutional in 1935. The Agricultural Adjustment Act of 1933 was similar legislation designed to reduce output and raise prices in the farming sector. It too was ruled unconstitutional, in 1936.

Relief and Jobs Programs

Other policies intended to provide relief directly to people who were destitute and out of work were rapidly enacted. The Civilian Conservation Corps (CCC), the Tennessee Valley Authority (TVA), the Public Works Administration (PWA) and the Federal Emergency Relief Administration (FERA) were set up shortly after Roosevelt took office and provided jobs for the unemployed and grants to states for direct relief. The Civil Works Administration (CWA), which operated in 1933–34, and the Works Progress Administration (WPA), created in 1935, were also designed to provide work relief to the jobless. The Social Security Act was also passed in 1935. There surely are other programs with similar acronyms that have been left out, but the intent was the same. In the words of Roosevelt himself, addressing Congress in 1938:

Government has a final responsibility for the well-being of its citizenship. If private co-operative endeavor fails to provide work for the willing hands and relief for the unfortunate, those suffering hardship from no fault of their own have a right to call upon the Government for aid; and a government worthy of its name must make fitting response. (Quoted from Polenberg, 2000)

The Depression had shown the inaccuracy of classifying the 1920s as a “new era.” Rather, the “new era,” summarized by Roosevelt’s words above and marked by the government’s new involvement in the economy, began in March 1933.

The NBER business cycle chronology shows continuous growth from March 1933 until May 1937, at which time a 13-month recession hit the economy. The business cycle rebounded in June 1938 and continued on its upward march to and through the beginning of the United States’ involvement in World War II. The recovery that started in 1933 was impressive, with real GNP growing at annual rates in the 10 percent range between 1933 and December 1941, excluding the recession of 1937–38 (Romer, 1993). However, as reported by Romer (1993), real GNP did not return to its pre-Depression level until 1937, and it did not catch up to its pre-Depression secular trend until 1942. Indeed, the unemployment rate, having peaked at 25 percent in March 1933, continued to dwell at or near double-digit levels until 1940. It is in this sense that most economists attribute the ending of the Depression to the onset of World War II. The War brought complete recovery, as the unemployment rate quickly plummeted after December 1941 to its wartime low of below 2 percent.

Explanations for the Pace of Recovery

The question remains, however: if the War completed the recovery, what initiated it and sustained it through the end of 1941? Should we point to the relief programs of the New Deal and the leadership of Roosevelt? Certainly, they had psychological/expectational effects on consumers and investors and helped to heal the suffering experienced during that time. However, as shown by Brown (1956), Peppers (1973), and Raynold, McMillin and Beard (1991), fiscal policy contributed little to the recovery, and certainly could have done much more.

Once again we return to the financial system for answers. The abandonment of the gold standard, the impact this had on the money supply, and the deliverance from the economic effects of deflation would have to be singled out as the most important contributor to the recovery. Romer (1993) stresses that Eichengreen and Sachs (1985) have it right; recovery did not come before the decision to abandon the old gold parity was made operational. Once this became reality, devaluation of the currency permitted expansion in the money supply and inflation which, rather than promoting a policy of beggar-thy-neighbor, allowed countries to escape the deflationary vortex of economic decline. As discussed in connection with the gold standard hypothesis, the simultaneity of leaving the gold standard and recovery is a robust empirical result that reflects more than simple temporal coincidence.

Romer (1993) reports an increase in the monetary base in the United States of 52 percent between April 1933 and April 1937. The M1 money supply virtually matched this increase in the monetary base, growing 49 percent over the same period. The sources of this increase were two-fold. First, aside from the immediate monetary expansion permitted by devaluation, as Romer (1993) explains, monetary expansion continued into 1934 and beyond as gold flowed to the United States from Europe due to the increasing political unrest and heightened probability of hostilities that began the progression to World War II. Second, the Treasury chose not to sterilize the gold inflows, so the increase in the money supply matched the increase in the monetary base. This is evidence that the monetary expansion resulted from policy decisions and not endogenous changes in the money multiplier. The new regime was freed from the constraints of the gold standard, and the policy makers were intent on taking actions of a different nature from what had been done between 1929 and 1933.
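
In annualized terms, Romer’s figures imply (simple compound-growth arithmetic):

years = 4.0                       # April 1933 to April 1937
print(1.52 ** (1 / years) - 1)    # about 0.110: the base grew roughly 11 percent per year
print(1.49 ** (1 / years) - 1)    # about 0.105: M1 grew roughly 10.5 percent per year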

Incompleteness of the Recovery before WWII

The Depression had turned a corner and the economy was emerging from the abyss in 1933. However, it still had a long way to go to reach full recovery. Friedman and Schwartz (1963) comment that “the most notable feature of the revival after 1933 was not its rapidity but its incompleteness.” They claim that monetary policy and the Federal Reserve were passive after 1933. The monetary authorities did nothing to stop the fall from 1929 to 1933 and did little to promote the recovery. The Federal Reserve made no effort to increase the stock of high-powered money through the use of either open market operations or rediscounting; Federal Reserve credit outstanding remained “almost perfectly constant from 1934 to mid-1940” (Friedman and Schwartz, 1963). As we have seen above, it was the Treasury that was generating increases in the monetary base at the time by issuing gold certificates equal to the amount of gold reserve inflow and depositing them at the Federal Reserve. When the government spent the money, the Treasury swapped the gold certificates for Federal Reserve notes and this expanded the monetary base (Romer, 1993). Monetary policy was thought to be powerless to promote recovery, and instead it was fiscal policy that became the implement of choice. The research shows that fiscal policy could have done much more to aid in recovery – ironically fiscal policy was the vehicle that was now the focus of attention. There is an easy explanation for why this is so.

The Emergence of Keynes

The economics profession as a whole was at a loss to provide cogent explanations for the events of 1929–33. In the words of Robert Gordon (1998), “economics had lost its intellectual moorings, and it was time for a new diagnosis.” There were no convincing answers regarding why the earlier theories of macroeconomic behavior failed to explain the events that were occurring, and worse, there was no set of principles that established a guide for proper actions in the future. That changed in 1936 with the publication of Keynes’s book The General Theory of Employment, Interest and Money. Perhaps there has been no other person and no other book in economics about which so much has been written. Many consider the arrival of Keynesian thought to have been a “revolution,” although this too is hotly contested (see, for example, Laidler, 1999). The debates that The General Theory generated have been many and long-lasting. There is little that can be said here to add or subtract from the massive literature devoted to the ideas promoted by Keynes, whether they are viewed as right or wrong. But the influence over academic thought and economic policy that was generated by The General Theory is not in doubt.

The time was right for a set of ideas that not only explained the Depression’s course of events, but also provided a prescription for remedies that would create better economic performance in the future. Keynes and The General Theory, at the time the events were unfolding, provided just such a package. When all is said and done, we can look back in hindsight and argue endlessly about what Keynes “really meant” or what the “true” contribution of Keynesianism has been to the world of economics. At the time the Depression happened, Keynes represented a new paradigm for young scholars to latch on to. The stage was set for the nurturing of macroeconomics for the remainder of the twentieth century.

This article is a modified version of the introduction to Randall Parker, editor, Reflections on the Great Depression, Edward Elgar Publishing, 2002.

Bibliography

Anderson, Barry L. and James L. Butkiewicz. “Money, Spending and the Great Depression.” Southern Economic Journal 47 (1980): 388-403.

Balke, Nathan S. and Robert J. Gordon. “Historical Data.” In The American Business Cycle: Continuity and Change, edited by Robert J. Gordon. Chicago: University of Chicago Press, 1986.

Bernanke, Ben S. “Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression.” American Economic Review 73, no. 3 (1983): 257-76.

Bernanke, Ben S. and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Brown, E. Cary. “Fiscal Policy in the Thirties: A Reappraisal.” American Economic Review 46, no. 5 (1956): 857-79.

Cecchetti, Stephen G. “Prices during the Great Depression: Was the Deflation of 1930-1932 Really Anticipated?” American Economic Review 82, no. 1 (1992): 141-56.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, edited by Mark Wheeler. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research, 1998.

Cecchetti, Stephen G. and Georgios Karras. “Sources of Output Fluctuations during the Interwar Period: Further Evidence on the Causes of the Great Depression.” Review of Economics and Statistics 76, no. 1 (1994): 80-102.

Choudri, Ehsan U. and Levis A. Kochin. “The Exchange Rate and the International Transmission of Business Cycle Disturbances: Some Evidence from the Great Depression.” Journal of Money, Credit, and Banking 12, no. 4 (1980): 565-74.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Eichengreen, Barry. “The Bank of France and the Sterilization of Gold, 1926–1932.” Explorations in Economic History 23, no. 1 (1986): 56-84.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939. New York: Oxford University Press, 1992.

Eichengreen, Barry and Jeffrey Sachs. “Exchange Rates and Economic Recovery in the 1930s.” Journal of Economic History 45, no. 4 (1985): 925-46.

Evans, Martin and Paul Wachtel. “Were Price Changes during the Great Depression Anticipated? Evidence from Nominal Interest Rates.” Journal of Monetary Economics 32, no. 1 (1993): 3-34.

Fackler, James S. and Randall E. Parker. “Accounting for the Great Depression: A Historical Decomposition.” Journal of Macroeconomics 16 (1994): 193-220.

Fackler, James S. and Randall E. Parker. “Was Debt Deflation Operative during the Great Depression?” East Carolina University Working Paper, 2001.

Fisher, Irving. “The Debt–Deflation Theory of Great Depressions.” Econometrica 1, no. 4 (1933): 337-57.

Flacco, Paul R. and Randall E. Parker. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30, no. 1 (1992): 154-71.

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States, 1867–1960. Princeton, NJ: Princeton University Press, 1963.

Gordon, Robert J. Macroeconomics, seventh edition. New York: Addison Wesley, 1998.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 13 (1987): 1-25.

Hamilton, James D. “Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6, no. 2 (1988): 67-89.

Hamilton, James D. “Was the Deflation during the Great Depression Anticipated? Evidence from the Commodity Futures Market.” American Economic Review 82, no. 1 (1992): 157-78.

Hayek, Friedrich A. von. Monetary Theory and the Trade Cycle. New York: A. M. Kelley, 1967 (originally published in 1929).

Hayek, Friedrich A. von. Prices and Production. New York: A. M. Kelley, 1966 (originally published in 1931).

Hoover, Herbert. The Memoirs of Herbert Hoover: The Great Depression, 1929–1941. New York: Macmillan, 1952.

Keynes, John M. The General Theory of Employment, Interest, and Money. London: Macmillan, 1936.

Kindleberger, Charles P. The World in Depression, 1929–1939. Berkeley: University of California Press, 1973.

Laidler, David. Fabricating the Keynesian Revolution. Cambridge: Cambridge University Press, 1999.

McCallum, Bennett T. “Could a Monetary Base Rule Have Prevented the Great Depression?” Journal of Monetary Economics 26 (1990): 3-26.

Meltzer, Allan H. “Monetary and Other Explanations of the Start of the Great Depression.” Journal of Monetary Economics 2 (1976): 455-71.

Mishkin, Frederick S. “The Household Balance Sheet and the Great Depression.” Journal of Economic History 38, no. 4 (1978): 918-37.

Nelson, Daniel B. “Was the Deflation of 1929–1930 Anticipated? The Monetary Regime as Viewed by the Business Press.” Research in Economic History 13 (1991): 1-65.

Olney, Martha. “Avoiding Default: The Role of Credit in the Consumption Collapse of 1930.” Quarterly Journal of Economics 114, no. 1 (1999): 319-35.

Peppers, Larry. “Full Employment Surplus Analysis and Structural Change: The 1930s.” Explorations in Economic History 10 (1973): 197-210.

Persons, Charles E. “Credit Expansion, 1920 to 1929, and Its Lessons.” Quarterly Journal of Economics 45, no. 1 (1930): 94-130.

Polenberg, Richard. The Era of Franklin D. Roosevelt, 1933–1945: A Brief History with Documents. Boston: Bedford/St. Martin’s, 2000.

Raynold, Prosper, W. Douglas McMillin and Thomas R. Beard. “The Impact of Federal Government Expenditures in the 1930s.” Southern Economic Journal 58, no. 1 (1991): 15-28.

Romer, Christina D. “World War I and the Postwar Depression: A Reappraisal Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22, no. 1 (1988): 91-115.

Romer, Christina D. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105, no. 3 (1990): 597-624.

Romer, Christina D. “The Nation in Depression.” Journal of Economic Perspectives 7, no. 2 (1993): 19-39.

Snowdon, Brian and Howard R. Vane. Conversations with Leading Economists: Interpreting Modern Macroeconomics. Cheltenham, UK: Edward Elgar, 1999.

Soule, George H. Prosperity Decade, From War to Depression: 1917–1929. New York: Rinehart, 1947.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W.W. Norton, 1976.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1989.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” Journal of Economic Perspectives 4, no. 2 (1990): 67-83.

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922–33: A Reinterpretation.” Journal of Political Economy 73, no. 4 (1965): 325-43.

1 Bankers’ acceptances are explained at http://www.rich.frb.org/pubs/instruments/ch10.html.

2 Liquidity is the ease of converting an asset into money.

3 The monetary base is measured as the sum of currency in the hands of the public plus reserves in the banking system. It is also called high-powered money since the monetary base is the quantity that gets multiplied into greater amounts of money supply as banks make loans and people spend and thereby create new bank deposits.

4 The money multiplier equals (C + D)/(C + R + E), where D = deposits, R = required reserves, C = currency held by the public, and E = excess reserves in the banking system; it is the ratio of the money supply (currency plus deposits) to the monetary base (see note 3).

5 The real interest rate adjusts the observed (nominal) interest rate for inflation or deflation. Ex post refers to the real interest rate after the actual change in prices has been observed; ex ante refers to the real interest rate that is expected at the time the lending occurs.

6 See note 3.

Citation: Parker, Randall. “An Overview of the Great Depression”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-overview-of-the-great-depression/

Gold Standard

Lawrence H. Officer, University of Illinois at Chicago

The gold standard is the most famous monetary system that ever existed. The periods in which the gold standard flourished, the groupings of countries under the gold standard, and the dates during which individual countries adhered to this standard are delineated in the first section. Then characteristics of the gold standard (what elements make for a gold standard), the various types of the standard (domestic versus international, coin versus other, legal versus effective), and implications for the money supply of a country on the standard are outlined. The longest section is devoted to the “classical” gold standard, the predominant monetary system that ended in 1914 (when World War I began), followed by a section on the “interwar” gold standard, which operated between the two World Wars (the 1920s and 1930s).

Countries and Dates on the Gold Standard

Countries on the gold standard and the periods (or beginning and ending dates) during which they were on gold are listed in Tables 1 and 2 for the classical and interwar gold standards. Types of gold standard, ambiguities of dates, and individual-country cases are considered in later sections. The country groupings reflect the importance of countries to establishment and maintenance of the standard. Center countries — Britain in the classical standard, the United Kingdom (Britain’s legal name since 1922) and the United States in the interwar period — were indispensable to the spread and functioning of the gold standard. Along with the other core countries — France and Germany, and the United States in the classical period — they attracted other countries to adopt the gold standard, in particular, British colonies and dominions, Western European countries, and Scandinavia. Other countries — and, for some purposes, also British colonies and dominions — were in the periphery: acted on, rather than actors, in the gold-standard eras, and generally not as committed to the gold standard.

Table 1: Countries on Classical Gold Standard
Country Type of Gold Standard Period
Center Country
Britaina Coin 1774-1797b, 1821-1914
Other Core Countries
United Statesc Coin 1879-1917d
Francee Coin 1878-1914
Germany Coin 1871-1914
British Colonies and Dominions
Australia Coin 1852-1915
Canadaf Coin 1854-1914
Ceylon Coin 1901-1914
Indiag Exchange (British pound) 1898-1914
Western Europe
Austria-Hungaryh Coin 1892-1914
Belgiumi Coin 1878-1914
Italy Coin 1884-1894
Liechtenstein Coin 1898-1914
Netherlandsj Coin 1875-1914
Portugalk Coin 1854-1891
Switzerland Coin 1878-1914
Scandinavia
Denmarkl Coin 1872-1914
Finland Coin 1877-1914
Norway Coin 1875-1914
Sweden Coin 1873-1914
Eastern Europe
Bulgaria Coin 1906-1914
Greece Coin 1885, 1910-1914
Montenegro Coin 1911-1914
Romania Coin 1890-1914
Russia Coin 1897-1914
Middle East
Egypt Coin 1885-1914
Turkey (Ottoman Empire) Coin 1881m-1914
Asia
Japann Coin 1897-1917
Philippines Exchange (U.S. dollar) 1903-1914
Siam Exchange (British pound) 1908-1914
Straits Settlementso Exchange (British pound) 1906-1914
Mexico and Central America
Costa Rica Coin 1896-1914
Mexico Coin 1905-1913
South America
Argentina Coin 1867-1876, 1883-1885, 1900-1914
Bolivia Coin 1908-1914
Brazil Coin 1888-1889, 1906-1914
Chile Coin 1895-1898
Ecuador Coin 1898-1914
Peru Coin 1901-1914
Uruguay Coin 1876-1914
Africa
Eritrea Exchange (Italian lira) 1890-1914
German East Africa Exchange (German mark) 1885p-1914
Italian Somaliland Exchange (Italian lira) 1889p-1914

a Including colonies (except British Honduras) and possessions without a national currency: New Zealand and certain other Oceanic colonies, South Africa, Guernsey, Jersey, Malta, Gibraltar, Cyprus, Bermuda, British West Indies, British Guiana, British Somaliland, Falkland Islands, other South and West African colonies.
b Or perhaps 1798.
c Including countries and territories with U.S. dollar as exclusive or predominant currency: British Honduras (from 1894), Cuba (from 1898), Dominican Republic (from 1901), Panama (from 1904), Puerto Rico (from 1900), Alaska, Aleutian Islands, Hawaii, Midway Islands (from 1898), Wake Island, Guam, and American Samoa.
d Except August – October 1914.
e Including Tunisia (from 1891) and all other colonies except Indochina.
f Including Newfoundland (from 1895).
g Including British East Africa, Uganda, Zanzibar, Mauritius, and Ceylon (to 1901).
h Including Montenegro (to 1911).
i Including Belgian Congo.
j Including Netherlands East Indies.
k Including colonies, except Portuguese India.
l Including Greenland and Iceland.
m Or perhaps 1883.
n Including Korea and Taiwan.
o Including Borneo.
p Approximate beginning date.

Sources: Bloomfield (1959, pp. 13, 15; 1963), Bordo and Kydland (1995), Bordo and Schwartz (1996), Brown (1940, pp. 15-16), Bureau of the Mint (1929), de Cecco (1984, p. 59), Ding (1967, pp. 6-7), Director of the Mint (1913, 1917), Ford (1985, p. 153), Gallarotti (1995, pp. 272-75), Gunasekera (1962), Hawtrey (1950, p. 361), Hershlag (1980, p. 62), Ingram (1971, p. 153), Kemmerer (1916; 1940, pp. 9-10; 1944, p. 39), Kindleberger (1984, pp. 59-60), Lampe (1986, p. 34), MacKay (1946, p. 64), MacLeod (1994, p. 13), Norman (1892, pp. 83-84), Officer (1996, chs. 3-4), Pamuk (2000, p. 217), Powell (1999, p. 14), Rifaat (1935, pp. 47, 54), Shinjo (1962, pp. 81-83), Spalding (1928), Wallich (1950, pp. 32-36), Yeager (1976, p. 298), Young (1925).

Table 2: Countries on Interwar Gold Standard
Country Exchange-Rate Stabilization Ending Date
Center Countries
United Kingdomb 1925 1931
Other Core Countries
Germany 1924 1931
British Colonies and Dominions
Australiag 1925 1930
Canadai 1925 1929
Indiaj 1925 1931
South Africa 1925 1933
Western Europe
Austria 1922 1931
Danzig 1925 1935
Italym 1927 1934
Portugalo 1929 1931
Scandinavia
Finland 1925 1931
Sweden 1922 1931
Eastern Europe
Albania 1922 1939
Czechoslovakia 1923 1931
Greece 1927 1932
Latvia 1922 1931
Poland 1926 1936
Yugoslavia 1925 1932
Middle East
Egypt 1925 1931
Palestine 1927 1931
Asia
Malayat 1925 1931
Philippines 1922 1933
Mexico and Central America
Guatemala 1925 1933
Honduras 1923 1933
Nicaragua 1915 1932
South America
Bolivia 1926 1931
Chile 1925 1931
Ecuador 1927 1932
Peru 1928 1932
Venezuela 1923 1930

a And freedom of gold export and import.
b Including colonies (except British Honduras) and possessions without a national currency: Guernsey, Jersey, Malta, Gibraltar, Cyprus, Bermuda, British West Indies, British Guiana, British Somaliland, Falkland Islands, British West African and certain South African colonies, certain Oceanic colonies.
c Including countries and territories with U.S. dollar as exclusive or predominant currency: British Honduras, Cuba, Dominican Republic, Panama, Puerto Rico, Alaska, Aleutian Islands, Hawaii, Midway Islands, Wake Island, Guam, and American Samoa.
d Not applicable; “the United States dollar…constituted the central point of reference in the whole post-war stabilization effort and was throughout the period of stabilization at par with gold.” — Brown (1940, p. 394)
e 1919 for freedom of gold export.
f Including colonies and possessions, except Indochina and Syria.
g Including Papua (New Guinea) and adjoining islands.
h Kenya, Uganda, and Tanganyika.
i Including Newfoundland.
j Including Bhutan, Nepal, British Swaziland, Mauritius, Pemba Island, and Zanzibar.
k 1925 for freedom of gold export.
l Including Luxemburg and Belgian Congo.
m Including Italian Somaliland and Tripoli.
n Including Dutch Guiana and Curacao (Netherlands Antilles).
o Including territories, except Portuguese India.
p Including Liechtenstein.
q Including Greenland and Iceland.
r Including Greater Lebanon.
s Including Korea and Taiwan.
t Including Straits Settlements, Sarawak, Labuan, and Borneo.

Sources: Bett (1957, p. 36), Brown (1940), Bureau of the Mint (1929), Ding (1967, pp. 6-7), Director of the Mint (1917), dos Santos (1996, pp. 191-92), Eichengreen (1992, p. 299), Federal Reserve Bulletin (1928, pp. 562, 847; 1929, pp. 201, 265, 549; 1930, pp. 72, 440; 1931, p. 554; 1935, p. 290; 1936, pp. 322, 760), Gunasekera (1962), Jonung (1984, p. 361), Kemmerer (1954, pp. 301-302), League of Nations (1926, pp. 7, 15; 1927, pp. 165-69; 1929, pp. 208-13; 1931, pp. 265-69; 1937/38, p. 107; 1946, p. 2), Moggridge (1989, p. 305), Officer (1996, chs. 3-4), Powell (1999, pp. 23-24), Spalding (1928), Wallich (1950, pp. 32-37), Yeager (1976, pp. 330, 344, 359), Young (1925, p. 76).

Characteristics of Gold Standards

Types of Gold Standards

Pure Coin and Mixed Standards

In theory, “domestic” gold standards — those that do not depend on interaction with other countries — are of two types: “pure coin” standard and “mixed” (meaning coin and paper, but also called simply “coin”) standard. The two systems share several properties. (1) There is a well-defined and fixed gold content of the domestic monetary unit. For example, the dollar is defined as a specified weight of pure gold. (2) Gold coin circulates as money with unlimited legal-tender power (meaning it is a compulsorily acceptable means of payment of any amount in any transaction or obligation). (3) Privately owned bullion (gold in mass, foreign coin considered as mass, or gold in the form of bars) is convertible into gold coin in unlimited amounts at the government mint or at the central bank, and at the “mint price” (of gold, the inverse of the gold content of the monetary unit). (4) Private parties have no restriction on their holding or use of gold (except possibly that privately created coined money may be prohibited); in particular, they may melt coin into bullion. The effect is as if coin were sold to the monetary authority (central bank or Treasury acting as a central bank) for bullion. It would make sense for the authority to sell gold bars directly for coin, even though not legally required, thus saving the cost of coining. Conditions (3) and (4) commit the monetary authority in effect to transact in coin and bullion in each direction such that the mint price, or gold content of the monetary unit, governs in the marketplace.

Under a pure coin standard, gold is the only money. Under a mixed standard, there are also paper currency (notes) — issued by the government, central bank, or commercial banks — and demand-deposit liabilities of banks. Government or central-bank notes (and central-bank deposit liabilities) are directly convertible into gold coin at the fixed established price on demand. Commercial-bank notes and demand deposits might be converted not directly into gold but rather into gold-convertible government or central-bank currency. This indirect convertibility of commercial-bank liabilities would apply certainly if the government or central-bank currency were legal tender but also generally even if it were not. As legal tender, gold coin is always exchangeable for paper currency or deposits at the mint price, and usually the monetary authority would provide gold bars for its coin. Again, two-way transactions in unlimited amounts fix the currency price of gold at the mint price. The credibility of the monetary-authority commitment to a fixed price of gold is the essence of a successful, ongoing gold-standard regime.

A pure coin standard did not exist in any country during the gold-standard periods. Indeed, over time, gold coin declined from about one-fifth of the world money supply in 1800 (2/3 for gold and silver coin together, as silver was then the predominant monetary standard) to 17 percent in 1885 (1/3 for gold and silver, for an eleven-major-country aggregate), 10 percent in 1913 (15 percent for gold and silver, for the major-country aggregate), and essentially zero in 1928 for the major-country aggregate (Triffin, 1964, pp. 15, 56). See Table 3. The zero figure means not that gold coin did not exist, but rather that its main use was as reserves for Treasuries, central banks, and (generally to a lesser extent) commercial banks.

Table 3
Structure of Money: Major-Countries Aggregatea (end of year, percent)

Metallic moneyc as percent of money supplyb: 33 (1885), 15 (1913), 0d (1928)
Official goldf as percent of official plus money goldf: 33 (1885), 54 (1913), 99 (1928)

[Two further rows of the original table, with surviving values of 8 and 50, and 18 and 21, for 1885 and 1928, could not be reconstructed.]

a Core countries: Britain, United States, France, Germany. Western Europe: Belgium, Italy, Netherlands, Switzerland. Other countries: Canada, Japan, Sweden.
b Metallic money, minor coin, paper currency, and demand deposits.
c 1885: Gold and silver coin; overestimate, as includes commercial-bank holdings that could not be isolated from coin held outside banks by the public. 1913: Gold and silver coin. 1928: Gold coin.
d Less than 0.5 percent.
e 1885 and 1913: Gold, silver, and foreign exchange. 1928: Gold and foreign exchange.
f Official gold: Gold in official reserves. Money gold: Gold-coin component of money supply.

Sources: Triffin (1964, p. 62), Sayers (1976, pp. 348, 352) for 1928 Bank of England dollar reserves (dated January 2, 1929).

An “international” gold standard, which naturally requires that more than one country be on gold, requires in addition freedom both of international gold flows (private parties are permitted to import or export gold without restriction) and of foreign-exchange transactions (an absence of exchange control). Then the fixed mint prices of any two countries on the gold standard imply a fixed exchange rate (“mint parity”) between the countries’ currencies. For example, the dollar-sterling mint parity was $4.8665635 per pound sterling (the British pound).

Gold-Bullion and Gold-Exchange Standards

In principle, a country can choose among four kinds of international gold standards — the pure coin and mixed standards, already mentioned, a gold-bullion standard, and a gold-exchange standard. Under a gold-bullion standard, gold coin neither circulates as money nor is used as commercial-bank reserves, and the government does not coin gold. The monetary authority (Treasury or central bank) stands ready to transact with private parties, buying or selling gold bars (usable only for import or export, not as domestic currency) for its notes, and generally a minimum size of transaction is specified. For example, in 1925-1931 the Bank of England was on the bullion standard and would sell gold bars only in the minimum amount of 400 fine (pure) ounces, approximately £1699 or $8269. Finally, the monetary authority of a country on a gold-exchange standard buys and sells not gold in any form but rather gold-convertible foreign exchange, that is, the currency of a country that itself is on the gold coin or bullion standard.

Gold Points and Gold Export/Import

A fixed exchange rate (the mint parity) for two countries on the gold standard is an oversimplification that is often made but is misleading. There are costs of importing or exporting gold. These costs include freight, insurance, handling (packing and cartage), interest on money committed to the transaction, risk premium (compensation for risk), normal profit, any deviation of purchase or sale price from the mint price, possibly mint charges, and possibly abrasion (wearing out or removal of gold content of coin — should the coin be sold abroad by weight or as bullion). Expressing the exporting costs as a percent of the amount invested (or, equivalently, as a percent of parity), the gold-export point — the exchange rate at which gold is exported — is obtained by adding 1/100th of these costs times mint parity (the number of units of domestic currency per unit of foreign currency) to mint parity. The gold-import point is obtained by subtracting 1/100th of the importing costs times mint parity from mint parity.
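
To make the computation concrete, here is a minimal sketch in Python. The parity is the dollar-sterling figure quoted above, and the cost percentages are the 1881-1890 dollar-sterling estimates from Table 4; the function and variable names are illustrative rather than drawn from the sources.

    MINT_PARITY = 4.8665635  # dollars per pound sterling (quoted above)

    def gold_points(parity, export_cost_pct, import_cost_pct):
        # Export point: mint parity plus 1/100th of exporting costs times parity.
        export_point = parity * (1 + export_cost_pct / 100)
        # Import point: mint parity minus 1/100th of importing costs times parity.
        import_point = parity * (1 - import_cost_pct / 100)
        return export_point, import_point

    export_pt, import_pt = gold_points(MINT_PARITY, 0.6585, 0.7141)
    print(f"export point ${export_pt:.4f}, import point ${import_pt:.4f}")
    # Prints roughly $4.8986 and $4.8318; between these rates no gold moves,
    # and beyond them gold-point arbitrageurs earn a profit proportional to
    # the divergence of the exchange rate from the gold point.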

If the exchange rate is greater than the gold-export point, private-sector “gold-point arbitrageurs” export gold, thereby obtaining foreign currency. Conversely, for the exchange rate less than the gold-import point, gold is imported and foreign currency relinquished. Usually the gold is, directly or indirectly, purchased from the monetary authority of the one country and sold to the monetary authority in the other. The domestic-currency cost of the transaction per unit of foreign currency obtained is the gold-export point. That per unit of foreign currency sold is the gold-import point. Also, foreign currency is sold, or purchased, at the exchange rate. Therefore arbitrageurs receive a profit proportional to the exchange-rate/gold-point divergence.

Gold-Point Arbitrage

However, the arbitrageurs’ supply of foreign currency eliminates profit by returning the exchange rate to below the gold-export point. Therefore perfect “gold-point arbitrage” would ensure that the exchange rate has the gold-export point as an upper limit. Similarly, the arbitrageurs’ demand for foreign currency returns the exchange rate to above the gold-import point, and perfect arbitrage ensures that the exchange rate has that point as a lower limit. It is important to note what induces the private sector to engage in gold-point arbitrage: (1) the profit motive; and (2) the credibility of the commitment to (a) the fixed gold price and (b) freedom of foreign exchange and gold transactions, on the part of the monetary authorities of both countries.

Gold-Point Spread

The difference between the gold points is called the (gold-point) spread. The gold points and the spread may be expressed as percentages of parity. Estimates of gold points and spreads involving center countries are provided for the classical and interwar gold standards in Tables 4 and 5. Noteworthy is that the spread for a given country pair generally declines over time both over the classical gold standard (evidenced by the dollar-sterling figures) and for the interwar compared to the classical period.

Table 4
Gold-Point Estimates: Classical Gold Standard

Countries Period Gold Pointsa (percent): Exportb Importc Spreadd (percent) Method of Computation
U.S./Britain 1881-1890 0.6585 0.7141 1.3726 PA
U.S./Britain 1891-1900 0.6550 0.6274 1.2824 PA
U.S./Britain 1901-1910 0.4993 0.5999 1.0992 PA
U.S./Britain 1911-1914 0.5025 0.5915 1.0940 PA
France/U.S. 1877-1913 0.6888 0.6290 1.3178 MED
Germany/U.S. 1894-1913 0.4907 0.7123 1.2030 MED
France/Britain 1877-1913 0.4063 0.3964 0.8027 MED
Germany/Britain 1877-1913 0.3671 0.4405 0.8076 MED
Germany/France 1877-1913 0.4321 0.5556 0.9877 MED
Austria/Britain 1912 0.6453 0.6037 1.2490 SE
Netherlands/Britain 1912 0.5534 0.3552 0.9086 SE
Scandinaviae/Britain 1912 0.3294 0.6067 0.9361 SE

a For numerator country.
b Gold-import point for denominator country.
c Gold-export point for denominator country.
d Gold-export point plus gold-import point.
e Denmark, Sweden, and Norway.

Method of Computation: PA = period average. MED = median of exchange-rate-form estimates of various authorities for various dates, converted to percent deviation from parity. SE = single exchange-rate-form estimate, converted to percent deviation from parity.

Sources: U.S./Britain — Officer (1996, p. 174). France/U.S., Germany/U.S., France/Britain, Germany/Britain, Germany/France — Morgenstern (1959, pp. 178-81). Austria/Britain, Netherlands/Britain, Scandinavia/Britain — Easton (1912, pp. 358-63).

Table 5
Gold-Point Estimates: Interwar Gold Standard

Countries Period Gold Pointsa (percent): Exportb Importc Spreadd (percent) Method of Computation
U.S./Britain 1925-1931 0.6287 0.4466 1.0753 PA
U.S./France 1926-1928e 0.4793 0.5067 0.9860 PA
U.S./France 1928-1933f 0.5743 0.3267 0.9010 PA
U.S./Germany 1926-1931 0.8295 0.3402 1.1697 PA
France/Britain 1926 0.2042 0.4302 0.6344 SE
France/Britain 1929-1933 0.2710 0.3216 0.5926 MED
Germany/Britain 1925-1933 0.3505 0.2676 0.6181 MED
Canada/Britain 1929 0.3521 0.3465 0.6986 SE
Netherlands/Britain 1929 0.2858 0.5146 0.8004 SE
Denmark/Britain 1926 0.4432 0.4930 0.9362 SE
Norway/Britain 1926 0.6084 0.3828 0.9912 SE
Sweden/Britain 1926 0.3881 0.3828 0.7709 SE

a For numerator country.
b Gold-import point for denominator country.
c Gold-export point for denominator country.
d Gold-export point plus gold-import point.
e To end of June 1928. French-franc exchange-rate stabilization, but absence of currency convertibility; see Table 2.
f Beginning July 1928. French-franc convertibility; see Table 2.

Method of Computation: PA = period average. MED = median of exchange-rate-form estimates of various authorities for various dates, converted to percent deviation from parity. SE = single exchange-rate-form estimate, converted to percent deviation from parity.

Sources: U.S./Britain — Officer (1996, p. 174). U.S./France, U.S./Germany, France/Britain 1929-1933, Germany/Britain — Morgenstern (1959, pp. 185-87). Canada/Britain, Netherlands/Britain — Einzig (1929, pp. 98-101) [Netherlands/Britain currencies’ mint parity from Spalding (1928, p. 135)]. France/Britain 1926, Denmark/Britain, Norway/Britain, Sweden/Britain — Spalding (1926, pp. 429-30, 436).

The effective monetary standard of a country is distinguishable from its legal standard. For example, a country legally on bimetallism usually is effectively on either a gold or silver monometallic standard, depending on whether its “mint-price ratio” (the ratio of its mint price of gold to mint price of silver) is greater or less than the world price ratio. In contrast, a country might be legally on a gold standard but its banks (and government) have “suspended specie (gold) payments” (refusing to convert their notes into gold), so that the country is in fact on a “paper standard.” The criterion adopted here is that a country is deemed on the gold standard if (1) gold is the predominant effective metallic money, or is the monetary bullion, (2) specie payments are in force, and (3) there is a limitation on the coinage and/or the legal-tender status of silver (the only practical and historical competitor to gold), thus providing institutional or legal support for the effective gold standard emanating from (1) and (2).
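
A minimal sketch of this test (Python) follows; the ratios are illustrative, roughly in the range of the post-1717 British situation described below, and the function name is ours, not from the sources.

    def effective_standard(mint_ratio, world_ratio):
        # mint_ratio: mint price of gold divided by mint price of silver.
        # If the mint values gold more highly than the world market does,
        # gold flows to the mint and silver to the market: the country is
        # effectively on gold despite legal bimetallism.
        return "gold" if mint_ratio > world_ratio else "silver"

    print(effective_standard(mint_ratio=15.2, world_ratio=15.0))  # gold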

Implications for Money Supply

Consider first the domestic gold standard. Under a pure coin standard, the gold in circulation, monetary base, and money supply are all one. With a mixed standard, the money supply is the product of the money multiplier (dependent on the commercial banks’ reserves/deposit ratio and the nonbank public’s currency/deposit ratio) and the monetary base (the actual and potential reserves of the commercial banking system, with potential reserves held by the nonbank public). The monetary authority alters the monetary base by changing its gold holdings and its loans, discounts, and securities portfolio (non-gold assets, called its “domestic assets”). However, the level of its domestic assets is dependent on its gold reserves, because the authority generates demand liabilities (notes and deposits) by increasing its assets, and convertibility of these liabilities must be supported by a gold reserve, if the gold standard is to be maintained. Therefore the gold standard provides a constraint on the level (or growth) of the money supply.

The international gold standard involves balance-of-payments surpluses settled by gold imports at the gold-import point, and deficits financed by gold exports at the gold-export point. (Within the spread, there are no gold flows and the balance of payments is in equilibrium.) The change in the money supply is then the product of the money multiplier and the gold flow, providing the monetary authority does not change its domestic assets. For a country on a gold-exchange standard, holdings of “foreign exchange” (the reserve currency) take the place of gold. In general, the “international assets” of a monetary authority may consist of both gold and foreign exchange.
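
A minimal sketch of this arithmetic (Python; the ratios and the size of the gold flow are hypothetical illustrations, not historical estimates):

    def money_multiplier(reserves_per_deposit, currency_per_deposit):
        # Textbook multiplier: (1 + c) / (r + c), where r is the commercial
        # banks' reserves/deposit ratio and c the public's currency/deposit ratio.
        c, r = currency_per_deposit, reserves_per_deposit
        return (1 + c) / (r + c)

    m = money_multiplier(reserves_per_deposit=0.10, currency_per_deposit=0.20)

    # With the monetary authority's domestic assets unchanged, a gold inflow
    # raises the money supply by the multiplier times the flow.
    gold_inflow = 50.0
    print(f"multiplier {m:.2f}; money-supply change {m * gold_inflow:.0f}")
    # multiplier 4.00; money-supply change 200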

The Classical Gold Standard

Dates of Countries Joining the Gold Standard

Table 1 (above) lists all countries that were on the classical gold standard, the gold-standard type to which each adhered, and the period(s) on the standard. Discussion here concentrates on the four core countries. For centuries, Britain was on an effective silver standard under legal bimetallism. The country switched to an effective gold standard early in the eighteenth century, solidified by the (mistakenly) gold-overvalued mint-price ratio established by Isaac Newton, Master of the Mint, in 1717. In 1774 the legal-tender property of silver was restricted, and Britain entered the gold standard in the full sense on that date. In 1798 coining of silver was suspended, and in 1816 the gold standard was formally adopted, ironically during a paper-standard regime (the “Bank Restriction Period,” 1797-1821), with the gold standard effectively resuming in 1821.

The United States was on an effective silver standard dating back to colonial times, legally bimetallic from 1786, and on an effective gold standard from 1834. The legal gold standard began in 1873-1874, when Acts ended silver-dollar coinage and limited legal tender of existing silver coins. Ironically, again, the move from formal bimetallism to a legal gold standard occurred during a paper standard (the “greenback period,” 1861-1878), with a dual legal and effective gold standard from 1879.

International Shift to the Gold Standard

The rush to the gold standard occurred in the 1870s, with the adherence of Germany, the Scandinavian countries, France, and other European countries. Countries on legal bimetallism had shifted from effective silver to effective gold monometallism around 1850, as gold discoveries in the United States and Australia resulted in gold being overvalued at the mints. The gold/silver market situation subsequently reversed itself, and, to avoid a huge inflow of silver, many European countries suspended the coinage of silver and limited its legal-tender property. Some countries (France, Belgium, Switzerland) adopted a “limping” gold standard, in which existing former-standard silver coin retained full legal tender, permitting the monetary authority to redeem its notes in silver as well as gold.

As Table 1 shows, most countries were on a gold-coin (always meaning mixed) standard. The gold-bullion standard did not exist in the classical period (although in Britain that standard was embedded in legislation of 1819 that established a transition to restoration of the gold standard). A number of countries in the periphery were on a gold-exchange standard, usually because they were colonies or territories of a country on a gold-coin standard. In situations in which the periphery country lacked even its own coined currency, the gold-exchange standard existed almost by default. Some countries — China, Persia, parts of Latin America — never joined the classical gold standard, instead retaining their silver or bimetallic standards.

Sources of Instability of the Classical Gold Standard

There were three elements making for instability of the classical gold standard. First, the use of foreign exchange as reserves increased as the gold standard progressed. Available end-of-year data indicate that, worldwide, foreign exchange in official reserves (the international assets of the monetary authority) increased by 36 percent from 1880 to 1899 and by 356 percent from 1899 to 1913. In comparison, gold in official reserves increased by 160 percent from 1880 to 1903 but only by 88 percent from 1903 to 1913 (Lindert, 1969, pp. 22, 25). While in 1913 only Germany among the center countries held any measurable amount of foreign exchange — 15 percent of total reserves excluding silver (which was of limited use) — the percentage for the rest of the world was double that for Germany (Table 6). If there were a rush to cash in foreign exchange for gold, reduction or depletion of the gold of reserve-currency countries could place the gold standard in jeopardy.

Table 6
Share of Foreign Exchange in Official Reserves (end of year, percent)

Country 1913a 1928b
Britain 0 10
United States 0 0c
France 0d 51
Germany 13 16
Rest of World 27 32

a Official reserves: gold, silver, and foreign exchange.
b Official reserves: gold and foreign exchange.
c Less than 0.05 percent.
d Less than 0.5 percent.

Sources: 1913 — Lindert (1969, pp. 10-11). 1928 — Britain: Board of Governors of the Federal Reserve System [cited as BG] (1943, p. 551), Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929). United States: BG (1943, pp. 331, 544), foreign exchange consisting of Federal Reserve Banks holdings of foreign-currency bills. France and Germany: Nurkse (1944, p. 234). Rest of world [computed as residual]: gold, BG (1943, pp. 544-51); foreign exchange, from “total” (Triffin, 1964, p. 66), France, and Germany.

Second, Britain — the predominant reserve-currency country — was in a particularly sensitive situation. Again considering end-of-1913 data, almost half of world foreign-exchange reserves were in sterling, but the Bank of England had only three percent of world gold reserves (Tables 7-8). Defining the “reserve ratio” of the reserve-currency-country monetary authority as the ratio of (i) official reserves to (ii) liabilities to foreign monetary authorities held in financial institutions in the country, in 1913 this ratio was only 31 percent for the Bank of England, far lower than those of the monetary authorities of the other core countries (Table 9). An official run on sterling could easily force Britain off the gold standard. Because sterling was an international currency, private foreigners also held considerable liquid assets in London, and could themselves initiate a run on sterling.

Table 7
Composition of World Official Foreign-Exchange Reserves (end of year, percent)

Currency 1913a 1928
British pounds 47 77
French francs 30 } 2 (combined)
German marks 16 }
Otherb 5 …

[The remaining 1928 shares, covering other currencies including the U.S. dollar, were lost from this table.]

a Excluding holdings for which currency unspecified.
b Primarily Dutch guilders and Scandinavian kroner.

Sources: 1913 — Lindert (1969, pp. 18-19). 1928 — Components of world total: Triffin (1964, pp. 22, 66), Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929), Board of Governors of the Federal Reserve System [cited as BG] (1943, p. 331) for Federal Reserve Banks holdings of foreign-currency bills.

Table 8
Official-Reserves Components: Percent of World Total (end of year)

[The body of this table was garbled beyond reliable reconstruction. Figures recoverable from the text: in 1913 Britain held only three percent of world official gold, while the U.S. Treasury held more than the three other core countries combined; in 1928 Britain held seven percent of world official gold, the United States 37 percent, and France 13 percent, while France held 39 percent of world official foreign exchange.]

Table 9
Reserve Ratiosa of Reserve-Currency Countries (end of year)

Country 1913b 1928c
Britain 0.31 0.33
United States 90.55 5.45
France 2.38 not available
Germany 2.11 not available

a Ratio of official reserves to official liquid liabilities (that is, liabilities to foreign governments and central banks).
b Official reserves: gold, silver, and foreign exchange.
c Official reserves: gold and foreign exchange.

Sources: 1913 — Lindert (1969, pp. 10-11, 19). Foreign-currency holdings for which currency unspecified allocated proportionately to the four currencies based on known distribution. 1928 — Gold reserves: Board of Governors of the Federal Reserve System [cited as BG] (1943, pp. 544, 551). Foreign-exchange reserves: Sayers (1976, pp. 348, 352) for Bank of England dollar reserves (dated January 2, 1929); BG (1943, p. 331) for Federal Reserve Banks holdings of foreign-currency bills. Official liquid liabilities: Triffin (1964, p. 22), Sayers (1976, pp. 348, 352).

Third, the United States, though a center country, was a great source of instability to the gold standard. Its Treasury held a high percentage of world gold reserves (more than that of the three other core countries combined in 1913), resulting in an absurdly high reserve ratio (Tables 7-9). With no central bank and a decentralized banking system, financial crises were frequent. Far from the United States assisting Britain, gold often flowed from the Bank of England to the United States to satisfy increases in U.S. demand for money. Though in economic size the United States was the largest of the core countries, in many years it was a net importer rather than exporter of capital to the rest of the world — the opposite of the other core countries. The political power of silver interests and recurrent financial panics led to imperfect credibility in the U.S. commitment to the gold standard. Runs on banks and runs on the Treasury gold reserve placed the U.S. gold standard near collapse in the early and mid-1890s. During that period, the credibility of the Treasury’s commitment to the gold standard was shaken. Indeed, the gold standard was saved in 1895 (and again in 1896) only by cooperative action of the Treasury and a bankers’ syndicate that stemmed gold exports.

Rules of the Game

According to the “rules of the [gold-standard] game,” central banks were supposed to reinforce, rather than “sterilize” (moderate or eliminate) or ignore, the effect of gold flows on the money supply. A gold outflow typically decreases the international assets of the central bank and thence the monetary base and money supply. The central bank’s proper response is: (1) raise its “discount rate,” the central-bank interest rate for rediscounting securities (cashing, at a further deduction from face value, a short-term security from a financial institution that previously discounted the security), thereby inducing commercial banks to adopt a higher reserves/deposit ratio and therefore decreasing the money multiplier; and (2) decrease lending and sell securities, thereby decreasing domestic assets and thence the monetary base. On both counts the money supply is further decreased. Should the central bank rather increase its domestic assets when it loses gold, it engages in “sterilization” of the gold flow and is decidedly not following the “rules of the game.” The converse argument (involving gold inflow and increases in the money supply) also holds, with sterilization involving the central bank decreasing its domestic assets when it gains gold.

Price Specie-Flow Mechanism

A country experiencing a balance-of-payments deficit loses gold and its money supply decreases, both automatically and by policy in accordance with the “rules of the game.” Money income contracts and the price level falls, thereby increasing exports and decreasing imports. Similarly, a surplus country gains gold, the money supply increases, money income expands, the price level rises, exports decrease and imports increase. In each case, balance-of-payments equilibrium is restored via the current account. This is called the “price specie-flow mechanism.” To the extent that wages and prices are inflexible, movements of real income in the same direction as money income occur; in particular, the deficit country suffers unemployment but the payments imbalance is nevertheless corrected.
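
A toy simulation can illustrate the mechanism, under stylized assumptions (price levels proportional to gold money stocks, and a trade balance responding to the price gap); all parameters are hypothetical illustrations, not historical data.

    gold_home, gold_abroad = 80.0, 120.0  # monetary gold stocks
    k = 0.05                              # responsiveness of trade to prices

    for year in range(25):
        p_home, p_abroad = gold_home / 100, gold_abroad / 100
        surplus = k * (p_abroad - p_home) * 100  # low-price country runs a surplus
        gold_home += surplus                     # surplus settled by gold inflow
        gold_abroad -= surplus

    print(f"after 25 years: home {gold_home:.1f}, abroad {gold_abroad:.1f}")
    # Gold stocks converge (to about 98.6 and 101.4 here), eliminating the
    # price gap and the payments imbalance.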

The capital account also acts to restore balance, via interest-rate increases in the deficit country inducing a net inflow of capital. The interest-rate increases also reduce real investment and thence real income and imports. Similarly, interest-rate decreases in the surplus country elicit capital outflow and increase real investment, income, and imports. This process enhances the current-account correction of the imbalance.

One problem with the “rules of the game” is that, on “global-monetarist” theoretical grounds, they were inconsequential. Under fixed exchange rates, gold flows simply adjust money supply to money demand; the money supply is not determined by policy. Also, prices, interest rates, and incomes are determined worldwide. Even core countries can influence these variables domestically only to the extent that they help determine them in the global marketplace. Therefore the price-specie-flow and like mechanisms cannot occur. Historical data support this conclusion: gold flows were too small to be suggestive of these mechanisms; and prices, incomes, and interest rates moved closely in correspondence (rather than in the opposite directions predicted by the adjustment mechanisms induced by the “rules of the game”) — at least among non-periphery countries, especially the core group.

Discount Rate Rule and the Bank of England

However, the Bank of England did, in effect, manage its discount rate (“Bank Rate”) in accordance with rule (1). The Bank’s primary objective was to maintain convertibility of its notes into gold, that is, to preserve the gold standard, and its principal policy tool was Bank Rate. When its “liquidity ratio” of gold reserves to outstanding note liabilities decreased, it would usually increase Bank Rate. The increase in Bank Rate carried with it increases in market short-term interest rates, inducing a short-term capital inflow and thereby moving the exchange rate away from the gold-export point by increasing the exchange value of the pound. The converse also held, with a rise in the liquidity ratio involving a Bank Rate decrease, capital outflow, and movement of the exchange rate away from the gold-import point. The Bank was constantly monitoring its liquidity ratio, and in response altered Bank Rate almost 200 times over 1880-1913.
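
A minimal sketch of the rule (Python; the thresholds and step size are hypothetical, since the Bank’s actual decisions were judgmental rather than mechanical):

    def adjust_bank_rate(bank_rate, gold_reserves, note_liabilities,
                         floor=0.35, ceiling=0.50, step=0.5):
        liquidity_ratio = gold_reserves / note_liabilities
        if liquidity_ratio < floor:
            # Gold draining: raise Bank Rate to attract short-term capital and
            # push the exchange rate away from the gold-export point.
            return bank_rate + step
        if liquidity_ratio > ceiling:
            # Gold ample: lower Bank Rate, permitting capital outflow.
            return max(bank_rate - step, 0.0)
        return bank_rate

    print(adjust_bank_rate(3.0, gold_reserves=30.0, note_liabilities=100.0))
    # 3.5 -- the ratio of 0.30 is below the floor, so the rate rises.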

While the Reichsbank (the German central bank), like the Bank of England, generally moved its discount rate inversely to its liquidity ratio, most other central banks often violated the rule, with changes in their discount rates of inappropriate direction, or of insufficient amount or frequency. The Bank of France, in particular, kept its discount rate stable. Unlike the Bank of England, it chose to have large gold reserves (see Table 8), with payments imbalances accommodated by fluctuations in its gold rather than financed by short-term capital flows. The United States, lacking a central bank, had no discount rate to use as a policy instrument.

Sterilization Was Dominant

As for rule (2), that the central bank’s domestic and international assets move in the same direction: in fact the opposite behavior, sterilization, was dominant, as shown in Table 10. The Bank of England followed the rule more than any other central bank, but even so violated it more often than not! How then did the classical gold standard cope with payments imbalances? Why was it a stable system?

Table 10
Annual Changes in Internationala and Domesticb Assets of Central Bank: Percent of Changes in the Same Directionc

[The body of this table was garbled beyond reliable reconstruction. Its columns cover 1880-1913d and 1922-1936, and its rows cover Britain, France, the British Dominionse, Western Europef, Scandinaviag, Eastern Europeh, and South Americai. The surviving entries, ranging from 13 to 33 percent, show that in every case international and domestic assets moved in the same direction in only a minority of years.]

a 1880-1913: Gold, silver and foreign exchange. 1922-1936: Gold and foreign exchange.
b Domestic income-earning assets: discounts, loans, securities.
c Implying country is following “rules of the game.” Observations with zero or negligible changes in either class of assets excluded.
d Years when country is off gold standard excluded. See Tables 1 and 2.
e Australia and South Africa.
f 1880-1913: Austria-Hungary, Belgium, and Netherlands. 1922-1936: Austria, Italy, Netherlands, and Switzerland.
g Denmark, Finland, Norway, and Sweden.
h 1880-1913: Russia. 1922-1936: Bulgaria, Czechoslovakia, Greece, Hungary, Poland, Romania, and Yugoslavia.
i Chile, Colombia, Peru, and Uruguay.

Sources: Bloomfield (1959, p. 49), Nurkse (1944, p. 69).

The Stability of the Classical Gold Standard

The fundamental reason for the stability of the classical gold standard is that there was always absolute private-sector credibility in the commitment to the fixed domestic-currency price of gold on the part of the center country (Britain), two (France and Germany) of the three remaining core countries, and certain other European countries (Belgium, Netherlands, Switzerland, and Scandinavia). Certainly, that was true from the late-1870s onward. (For the United States, this absolute credibility applied from about 1900.) In earlier periods, that commitment had a contingency aspect: it was recognized that convertibility could be suspended in the event of dire emergency (such as war); but, after normal conditions were restored, convertibility would be re-established at the pre-existing mint price and gold contracts would again be honored. The Bank Restriction Period is an example of the proper application of the contingency, as is the greenback period (even though the United States, effectively on the gold standard, was legally on bimetallism).

Absolute Credibility Meant Zero Convertibility and Exchange Risk

The absolute credibility in countries’ commitment to convertibility at the existing mint price implied that there was extremely low, essentially zero, convertibility risk (the probability that Treasury or central-bank notes would not be redeemed in gold at the established mint price) and exchange risk (the probability that the mint parity between two currencies would be altered, or that exchange control or prohibition of gold export would be instituted).

Reasons Why Commitment to Convertibility Was So Credible

There were many reasons why the commitment to convertibility was so credible. (1) Contracts were expressed in gold; if convertibility were abandoned, contracts would inevitably be violated — an undesirable outcome for the monetary authority. (2) Shocks to the domestic and world economies were infrequent and generally mild. There was basically international peace and domestic calm.

(3) The London capital market was the largest, most open, most diversified in the world, and its gold market was also dominant. A high proportion of world trade was financed in sterling, London was the most important reserve-currency center, and balances of payments were often settled by transferring sterling assets rather than gold. Therefore sterling was an international currency — not merely supplemental to gold but perhaps better: a boon to non-center countries, because sterling involved positive, not zero, interest return and its transfer costs were much less than those of gold. Advantages to Britain were the charges for services as an international banker, differential interest returns on its financial intermediation, and the practice of countries on a sterling (gold-exchange) standard of financing payments surpluses with Britain by piling up short-term sterling assets rather than demanding Bank of England gold.

(4) There was widespread ideology — and practice — of “orthodox metallism,” involving authorities’ commitment to an anti-inflation, balanced-budget, stable-money policy. In particular, the ideology implied low government spending and taxes and limited monetization of government debt (financing of budget deficits by printing money). Therefore it was not expected that a country’s price level or inflation would get out of line with that of other countries, with resulting pressure on the country’s adherence to the gold standard. (5) This ideology was mirrored in, and supported by, domestic politics. Gold had won over silver and paper, and stable-money interests (bankers, industrialists, manufacturers, merchants, professionals, creditors, urban groups) over inflationary interests (farmers, landowners, miners, debtors, rural groups).

(6) There was freedom from government regulation and a competitive environment, domestically and internationally. Therefore prices and wages were more flexible than in other periods of human history (before and after). The core countries had virtually no capital controls; the center country (Britain) had adopted free trade, and the other core countries had moderate tariffs. Balance-of-payments financing and adjustment could proceed without serious impediments.

(7) Internal balance (domestic macroeconomic stability, at a high level of real income and employment) was an unimportant goal of policy. Preservation of convertibility of paper currency into gold would not be superseded as the primary policy objective. While sterilization of gold flows was frequent (see above), the purpose was more “meeting the needs of trade” (passive monetary policy) than fighting unemployment (active monetary policy).

(8) The gradual establishment of mint prices over time ensured that the implied mint parities (exchange rates) were in line with relative price levels; so countries joined the gold standard with exchange rates in equilibrium. (9) Current-account and capital-account imbalances tended to be offsetting for the core countries, especially for Britain. A trade deficit induced a gold loss and a higher interest rate, attracting a capital inflow and reducing capital outflow. Indeed, the capital-exporting core countries — Britain, France, and Germany — could eliminate a gold loss simply by reducing lending abroad.

Rareness of Violations of Gold Points

Many of the above reasons not only enhanced credibility in existing mint prices and parities but also kept international-payments imbalances, and hence necessary adjustment, of small magnitude. Responding to the essentially zero convertibility and exchange risks implied by the credible commitment, private agents further reduced the need for balance-of-payments adjustment via gold-point arbitrage (discussed above) and also via a specific kind of speculation. When the exchange rate moved beyond a gold point, arbitrage acted to return it to the spread. So it is not surprising that “violations of the gold points” were rare on a monthly average basis, as demonstrated in Table 11 for the dollar, franc, and mark exchange rates versus sterling. Certainly, gold-point violations did occur; but they rarely persisted sufficiently to be counted on monthly average data. Such measured violations were generally associated with financial crises. (The number of dollar-sterling violations for 1890-1906 exceeding that for 1889-1908 is due to the results emanating from different researchers using different data. Nevertheless, the important common finding is the low percent of months encompassed by violations.)

Table 11
Violations of Gold Points

Exchange Rate Time Period Number of Months Number of Violations Percent of Months
dollar-sterling 1890-1906 … 3 …
dollar-sterling 1925-1931a 76 0 0
dollar-sterling 1889-1908 240 … 0.4
franc-sterling 1889-1908 … 12b …
mark-sterling 1889-1908 240 … 7.5

[Cells marked … were lost from this table.]

a May 1925 – August 1931: full months during which both United States and Britain on gold standard.
b Approximate number, deciphered from graph.

Sources: Dollar-sterling, 1890-1906 and 1925-1931 — Officer (1996, p. 235). All other — Giovannini (1993, pp. 130-31).

Stabilizing Speculation

The perceived extremely low convertibility and exchange risks gave private agents profitable opportunities not only outside the spread (gold-point arbitrage) but also within the spread (exchange-rate speculation). As the exchange value of a country’s currency weakened, the exchange rate approaching the gold-export point, speculators had an ever greater incentive to purchase domestic currency with foreign currency (a capital inflow); for they had good reason to believe that the exchange rate would move in the opposite direction, whereupon they would reverse their transaction at a profit. Similarly, a strengthened currency, with the exchange rate approaching the gold-import point, involved speculators selling the domestic currency for foreign currency (a capital outflow). Clearly, the exchange rate would either not go beyond the gold point (via the actions of other speculators of the same ilk) or would quickly return to the spread (via gold-point arbitrage). Also, the further the exchange rate moved toward the gold point, the greater the potential profit opportunity; for there was a decreased distance to that gold point and an increased distance from the other point.
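
The bounded-risk logic can be sketched as follows (Python; the gold points reuse the dollar-sterling values computed earlier, and the scenario is hypothetical):

    EXPORT_POINT, IMPORT_POINT = 4.8986, 4.8318  # dollars per pound, from above

    def payoff_bounds(rate):
        # A speculator buying the weak domestic currency near the export point
        # can gain as much as the fall back to the import point, while credible
        # arbitrage caps the adverse move at the export point itself.
        max_favorable = rate - IMPORT_POINT
        max_adverse = EXPORT_POINT - rate
        return max_favorable, max_adverse

    for rate in (4.88, 4.89, 4.895):
        fav, adv = payoff_bounds(rate)
        print(f"at ${rate:.3f}: favorable up to {fav:.4f}, adverse at most {adv:.4f}")
    # The nearer the rate is to the gold point, the larger the potential
    # reversal and the smaller the credibly bounded adverse move.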

This “stabilizing speculation” enhanced the exchange value of depreciating currencies that were about to lose gold, thereby preventing the gold loss. The speculation was all the more powerful, because the absence of controls on capital movements meant private capital flows were highly responsive to exchange-rate changes. Dollar-sterling data, in Table 12, show that this speculation was extremely efficient in keeping the exchange rate away from the gold points — and increasingly effective over time. Interestingly, these statements hold even for the 1890s, during which at times U.S. maintenance of currency convertibility was precarious. The average deviation of the exchange rate from the midpoint of the spread fell decade-by-decade from about 1/3 of one percent of parity in 1881-1890 (23 percent of the gold-point spread) to only 12/100th of one percent of parity in 1911-1914 (11 percent of the spread).

Table 12
Average Deviation of Dollar-Sterling Exchange Rate from Gold-Point-Spread Midpoint

Period Percent of Parity Percent of Spread
Quarterly observations
1881-1890 0.32 23
1891-1900 … 19
1901-1910 0.15 …
1911-1914a 0.12 11
1925-1931b 0.28 …
Monthly observations
1925-1931c 0.24 26

[Cells marked … were lost from this table.]

a Ending with second quarter of 1914.
b Third quarter 1925 – second quarter 1931: full quarters during which both United States and Britain on gold standard.
c May 1925 – August 1931: full months during which both United States and Britain on gold standard.

Source: Officer (1996, pp. 182, 191, 272).

Government Policies That Enhanced Gold-Standard Stability

Government policies also enhanced gold-standard stability. First, by the turn of the century South Africa — the main world gold producer — sold all its gold in London, either to private parties or to the Bank of England, with the Bank serving as residual purchaser of the gold. Thus the Bank had the means to replenish its gold reserves. Second, the orthodox-metallism ideology and the leadership of the Bank of England — other central banks would often gear their monetary policy to that of the Bank — kept monetary policies harmonized. Monetary discipline was maintained.

Third, countries used “gold devices,” primarily the manipulation of gold points, to affect gold flows. For example, the Bank of England would foster gold imports by lowering the foreign gold-export point (number of units of foreign currency per pound, the British gold-import point) through interest-free loans to gold importers or raising its purchase price for bars and foreign coin. The Bank would discourage gold exports by lowering the foreign gold-import point (the British gold-export point) via increasing its selling prices for gold bars and foreign coin, refusing to sell bars, or redeeming its notes in underweight domestic gold coin. These policies were alternatives to increasing Bank Rate.
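
In effect, a gold device alters the cost term in the gold-point formula. A sketch (Python; the fee is hypothetical, and the dollar-sterling framing is purely illustrative of the mechanism, not of any particular episode):

    PARITY = 4.8665635          # dollars per pound sterling
    BASE_COST_PCT = 0.6585      # exporting costs, percent of parity (Table 4)
    DEVICE_FEE_PCT = 0.10       # hypothetical surcharge on the sale of bars

    export_point = PARITY * (1 + BASE_COST_PCT / 100)
    export_point_devised = PARITY * (1 + (BASE_COST_PCT + DEVICE_FEE_PCT) / 100)
    print(f"export point ${export_point:.4f} -> ${export_point_devised:.4f}")
    # About $4.8986 -> $4.9035: the exchange rate must move further from
    # parity before gold export becomes profitable, discouraging the outflow.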

The Bank of France and Reichsbank employed gold devices relative to discount-rate changes more than Britain did. Some additional policies included converting notes into gold only in Paris or Berlin rather than at branches elsewhere in the country, the Bank of France converting its notes in silver rather than gold (permitted under its “limping” gold standard), and the Reichsbank using moral suasion to discourage the export of gold. The U.S. Treasury followed similar policies at times. In addition to providing interest-free loans to gold importers and changing the premium at which it would sell bars (or refusing to sell bars outright), the Treasury condoned banking syndicates that put pressure on gold arbitrageurs to desist from gold export in 1895 and 1896, a time when U.S. adherence to the gold standard was under stress.

Fourth, the monetary system was adept at conserving gold, as evidenced in Table 3. This was important, because the increased gold required for a growing world economy could be obtained only from mining or from nonmonetary hoards. While the money supply for the eleven-major-country aggregate more than tripled from 1885 to 1913, the percent of the money supply in the form of metallic money (gold and silver) more than halved. This process did not make the gold standard unstable, because gold moved into commercial-bank and central-bank (or Treasury) reserves: the ratio of gold in official reserves to official plus money gold increased from 33 to 54 percent. The relative influence of the public versus private sector in reducing the proportion of metallic money in the money supply is an issue warranting exploration by monetary historians.

Fifth, central-bank cooperation, while not regularized, was not generally required in the stable environment in which the gold standard operated; yet this cooperation was forthcoming when needed, that is, during financial crises. Although Britain was the center country, the precarious liquidity position of the Bank of England meant that it was more often the recipient than the provider of financial assistance. In crises, it would obtain loans from the Bank of France (also on occasion from other central banks), and the Bank of France would sometimes purchase sterling to push up that currency’s exchange value. Assistance also went from the Bank of England to other central banks, as needed. Further, the credible commitment was so strong that private bankers did not hesitate to make loans to central banks in difficulty.

In sum, “virtuous” two-way interactions were responsible for the stability of the gold standard. The credible commitment to convertibility of paper money at the established mint price, and therefore the fixed mint parities, were both a cause and a result of (1) the stable environment in which the gold standard operated, (2) the stabilizing behavior of arbitrageurs and speculators, and (3) the responsible policies of the authorities — and (1), (2), and (3), and their individual elements, also interacted positively among themselves.

Experience of Periphery

An important reason for periphery countries to join and maintain the gold standard was the access to the capital markets of the core countries thereby fostered. Adherence to the gold standard connoted that the peripheral country would follow responsible monetary, fiscal, and debt-management policies — and, in particular, faithfully repay the interest on and principal of debt. This “good housekeeping seal of approval” (the term coined by Bordo and Rockoff, 1996), by reducing the risk premium, involved a lower interest rate on the country’s bonds sold abroad, and very likely a higher volume of borrowing. The favorable terms and greater borrowing enhanced the country’s economic development.

However, periphery countries bore the brunt of the burden of adjustment of payments imbalances with the core (and other Western European) countries, for three reasons. First, some of the periphery countries were on a gold-exchange standard. When they ran a surplus, they typically increased — and with a deficit, decreased — their liquid balances in London (or other reserve-currency country) rather than withdraw gold from the reserve-currency country. The monetary base of the periphery country would increase, or decrease, but that of the reserve-currency country would remain unchanged. This meant that the changes in domestic variables (prices, incomes, interest rates, portfolios, and so on) that occurred to correct the surplus or deficit took place primarily in the periphery country. The periphery, rather than the core, “bore the burden of adjustment.”

Second, when Bank Rate increased, London drew funds from France and Germany, which in turn attracted funds from other Western European and Scandinavian countries, which drew capital from the periphery. Also, it was easy for a core country to correct a deficit by reducing lending to, or bringing capital home from, the periphery. Third, the periphery countries were underdeveloped; their exports were largely primary products (agriculture and mining), which inherently were extremely sensitive to world market conditions. This feature made adjustment in the periphery, compared to the core, take the form more of real than of financial correction. This conclusion also follows from the fact that capital obtained from core countries for the purpose of economic development was subject to interruption and even reversal. While the periphery was probably better off with access to the capital than in isolation, its welfare gain was reduced by the instability of capital import.

The experience of adherence to the gold standard differed among periphery groups. The important British dominions and colonies — Australia, New Zealand, Canada, and India — successfully maintained the gold standard. They were politically stable and, of course, heavily influenced by Britain. They paid the price of serving as an economic cushion to the Bank of England’s financial situation; but, compared to the rest of the periphery, they gained a relatively stable long-term capital inflow. In underdeveloped Latin America and Asia, adherence to the gold standard was fragile, with lack of complete credibility in the commitment to convertibility. Many of the reasons for credible commitment that applied to the core countries were absent — for example, there were powerful inflationary interests, strong balance-of-payments shocks, and rudimentary banking sectors. For Latin America and Asia, the cost of adhering to the gold standard was very apparent: loss of the ability to depreciate the currency to counter reductions in exports. Yet the gain, in terms of a steady capital inflow from the core countries, was not as stable or reliable as for the British dominions and colonies.

The Breakdown of the Classical Gold Standard

The classical gold standard was at its height at the end of 1913, ironically just before it came to an end. The proximate cause of the breakdown of the classical gold standard was political: the advent of World War I in August 1914. However, the Bank of England’s precarious liquidity position and the gold-exchange standard were the underlying causes. With the outbreak of war, a run on sterling led Britain to impose extreme exchange control — a postponement of both domestic and international payments — that made the international gold standard non-operational. Convertibility was not legally suspended; but moral suasion, legalistic action, and regulation had the same effect. Gold exports were restricted by extralegal means (and by Trading with the Enemy legislation), with the Bank of England commandeering all gold imports and applying moral suasion to bankers and bullion brokers.

Almost all other gold-standard countries undertook similar policies in 1914 and 1915. The United States entered the war and ended its gold standard late, adopting extralegal restrictions on convertibility in 1917 (although in 1914 New York banks had temporarily imposed an informal embargo on gold exports). An effect of the universal removal of currency convertibility was the ineffectiveness of mint parities and inapplicability of gold points: floating exchange rates resulted.

Interwar Gold Standard

Return to the Gold Standard

In spite of the tremendous disruption to domestic economies and the worldwide economy caused by World War I, a general return to gold took place. However, the resulting interwar gold standard differed institutionally from the classical gold standard in several respects. First, the new gold standard was led not by Britain but rather by the United States. The U.S. embargo on gold exports (imposed in 1917) was removed in 1919, and currency convertibility at the prewar mint price was restored in 1922. The gold value of the dollar rather than of the pound sterling would typically serve as the reference point around which other currencies would be aligned and stabilized. Second, it follows that the core would now have two center countries, the United Kingdom and the United States.

Third, for many countries there was a time lag between stabilizing a country’s currency in the foreign-exchange market (fixing the exchange rate or mint parity) and resuming currency convertibility. Given a lag, the former typically occurred first, currency stabilization operating via central-bank intervention in the foreign-exchange market (transacting in the domestic currency and a reserve currency, generally sterling or the dollar). Table 2 presents the dates of exchange-rate stabilization and currency convertibility resumption for the countries on the interwar gold standard. It is fair to say that the interwar gold standard was at its height at the end of 1928, after all core countries were fully on the standard and before the Great Depression began.

Fourth, the contingency aspect of convertibility, which required restoration of convertibility at the mint price that existed prior to the emergency (World War I), was broken by various countries — even core countries. Some countries (including the United States, United Kingdom, Denmark, Norway, Netherlands, Sweden, Switzerland, Australia, Canada, Japan, Argentina) stabilized their currencies at the prewar mint price. However, other countries (France, Belgium, Italy, Portugal, Finland, Bulgaria, Romania, Greece, Chile) established a gold content of their currency that was a fraction of the prewar level: the currency was devalued in terms of gold, the mint price being higher than prewar. A third group of countries (Germany, Austria, Hungary) stabilized new currencies adopted after hyperinflation. A fourth group (Czechoslovakia, Danzig, Poland, Estonia, Latvia, Lithuania) consisted of countries that became independent or were created following the war and that joined the interwar gold standard. A fifth group (some Latin American countries) had been on silver or paper standards during the classical period but went on the interwar gold standard. A sixth country group (Russia) had been on the classical gold standard but did not join the interwar gold standard. A seventh group (Spain, China, Iran) joined neither gold standard.

The fifth way in which the interwar gold standard diverged from the classical experience was the mix of gold-standard types. As Table 2 shows, the gold coin standard, dominant in the classical period, was far less prevalent in the interwar period. In particular, all four core countries had been on coin in the classical gold standard; but, of them, only the United States was on coin interwar. The gold-bullion standard, nonexistent prewar, was adopted by two core countries (United Kingdom and France) as well as by two Scandinavian countries (Denmark and Norway). Most countries were on a gold-exchange standard. The central banks of countries on the gold-exchange standard would convert their currencies not into gold but rather into “gold-exchange” currencies (currencies themselves convertible into gold), in practice often sterling, sometimes the dollar (the reserve currencies).

Instability of the Interwar Gold Standard

The features that fostered stability of the classical gold standard did not apply to the interwar standard; instead, many forces made for instability. (1) The process of establishing fixed exchange rates was piecemeal and haphazard, resulting in disequilibrium exchange rates. The United Kingdom restored convertibility at the prewar mint price without sufficient deflation, resulting in a currency overvalued by about ten percent. (Expressed in a common currency at mint parity, the British price level was ten percent higher than that of its trading partners and competitors.) A depressed export sector and chronic balance-of-payments difficulties were to result. Other overvalued currencies (in terms of mint parity) were those of Denmark, Italy, and Norway. In contrast, France, Germany, and Belgium had undervalued currencies. (2) Wages and prices were less flexible than in the prewar period. In particular, powerful unions kept wages and unemployment high in British export industries, hindering balance-of-payments correction.

(3) Higher trade barriers than prewar also restrained adjustment.

(4) The gold-exchange standard economized on total world gold, because the gold of the reserve-currency countries did double duty: it backed their own currencies and also served as the ultimate reserve for countries on the gold-exchange standard, as well as for countries on a coin or bullion standard that elected to hold part of their reserves in London or New York. (Another economizing element was continuation of the move of gold out of the money supply and into banking and official reserves that began in the classical period: for the eleven-major-country aggregate, gold declined to less than ½ of one percent of the money supply in 1928, and the ratio of official gold to official-plus-money gold reached 99 percent — Table 3.) The gold-exchange standard was inherently unstable, because of the conflict between (a) the expansion of sterling and dollar liabilities to foreign central banks to expand world liquidity, and (b) the resulting deterioration in the reserve ratio of the Bank of England, and of the U.S. Treasury and Federal Reserve Banks.

This instability was particularly severe in the interwar period, for several reasons. First, France was now a large official holder of sterling, with over half the official reserves of the Bank of France in foreign exchange in 1928, versus essentially none in 1913 (Table 6); and France was resentful that the United Kingdom had used its influence in the League of Nations to induce financially reconstructed countries in Europe to adopt the gold-exchange (sterling) standard. Second, many more countries were on the gold-exchange standard than prewar. Cooperation in restraining a run on sterling or the dollar would be difficult to achieve. Third, the gold-exchange standard, associated with colonies in the classical period, was viewed as a system inferior to a coin standard.

(5) In the classical period, London was the one dominant financial center; in the interwar period it was joined by New York and, in the late 1920s, Paris. Both private and official holdings of foreign currency could shift among the two or three centers, as interest-rate differentials and confidence levels changed.

(6) The problem with gold was not overall scarcity but rather maldistribution. In 1928, official reserve-currency liabilities were much more concentrated than in 1913: the United Kingdom accounted for 77 percent of world foreign-exchange reserves and France less than two percent (versus 47 and 30 percent in 1913 — Table 7). Yet the United Kingdom held only seven percent of world official gold and France 13 percent (Table 8). Reflecting its undervalued currency, France also possessed 39 percent of world official foreign exchange. Incredibly, the United States held 37 percent of world official gold — more than all the non-core countries together.

(7) Britain’s financial position was even more precarious than in the classical period. In 1928, the gold and dollar reserves of the Bank of England covered only one-third of London’s liquid liabilities to official foreigners, a ratio hardly greater than in 1913 (and compared to a U.S. ratio of almost 5½ — Table 9). Various elements made the financial position difficult compared to prewar. First, U.K. liquid liabilities were concentrated on stronger countries (France, United States), whereas its liquid assets were predominantly in weaker countries (such as Germany). Second, there was ongoing tension with France, which resented the sterling-dominated gold-exchange standard and desired to cash in its sterling holdings for gold to aid its objective of achieving first-class financial status for Paris.

(8) Internal balance was an important goal of policy, which hindered balance-of-payments adjustment, and monetary policy was affected greatly by domestic politics rather than geared to preservation of currency convertibility. (9) Especially because of (8), the credibility of authorities’ commitment to the gold standard was not absolute. Convertibility risk and exchange risk could be well above zero, and currency speculation could be destabilizing rather than stabilizing; so that when a country’s currency approached or reached its gold-export point, speculators might anticipate that currency convertibility would not be maintained and that the currency would be devalued. Hence they would sell rather than buy the currency, which, of course, would help bring about the very outcome anticipated.

(10) The “rules of the game” were infrequently followed and, for most countries, violated even more often than in the classical gold standard — Table 10. Sterilization of gold inflows by the Bank of England can be viewed as an attempt to correct the overvalued pound by means of deflation. However, the U.S. and French sterilization of their persistent gold inflows reflected exclusive concern for the domestic economy and placed the burden of adjustment on other countries in the form of deflation.

(11) The Bank of England did not play a leadership role in any important way, and central-bank cooperation was insufficient to establish credibility in the commitment to currency convertibility.

Breakdown of the Interwar Gold Standard

Although Canada effectively abandoned the gold standard early in 1929, this was a special case in two respects. First, the action was an early, drastic reaction to the high U.S. interest rates established to fight the stock-market boom, which threatened other countries with unsustainable capital outflow and gold loss. Second, gold devices were the technique used to restrict gold exports and informally terminate the Canadian gold standard.

The beginning of the end of the interwar gold standard came with the Great Depression. The depression began in the periphery, where low export prices and heavy debt-service requirements led to insurmountable balance-of-payments difficulties for countries on the gold standard. However, U.S. monetary policy was an important catalyst. In the second half of 1927 the Federal Reserve pursued an easy-money policy, which supported foreign currencies but also fed the boom in the New York stock market. When policy was reversed to fight the Wall Street boom, higher interest rates attracted funds to New York, which weakened sterling in particular. The stock market crash in October 1929, while helpful to sterling, was followed by a passive monetary policy that did not prevent the U.S. depression that started shortly thereafter and that spread to the rest of the world via declines in U.S. trade and lending. In 1929 and 1930 a number of periphery countries either formally suspended currency convertibility or restricted it so that their currencies went beyond the gold-export point.

It was destabilizing speculation, emanating from lack of confidence in the authorities’ commitment to currency convertibility, that ended the interwar gold standard. In May 1931 there was a run on Austria’s largest commercial bank, and the bank failed. The run spread to Germany, where an important bank also collapsed. The two countries’ central banks lost substantial reserves; international financial assistance was too late; and in July 1931 Germany adopted exchange control, followed by Austria in October. These countries were definitively off the gold standard.

The Austrian and German experiences, together with British budgetary and political difficulties, were among the factors that destroyed confidence in sterling in mid-July 1931. Runs on sterling ensued, and the Bank of England lost much of its reserves. Loans from abroad were insufficient, and in any event were taken as a sign of weakness. The gold standard was abandoned in September, and the pound quickly and sharply depreciated on the foreign-exchange market, as its overvaluation would imply.

Amazingly, there were no violations of the dollar-sterling gold points on a monthly average basis to the very end of August 1931 (Table 11). In contrast, the average deviation of the dollar-sterling exchange rate from the midpoint of the gold-point spread in 1925-1931 was more than double that in 1911-1914, by either of two measures (Table 12), suggesting that stabilizing speculation was less dominant than before the war. Yet the 1925-1931 average deviation was not much greater (by one measure, even smaller) than in earlier decades of the classical gold standard. Trust in the Bank of England had a long tradition, and the shock to confidence in sterling that occurred in July 1931 was unexpected by the British authorities.
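
The deviation measure cited above can be written schematically (again in illustrative notation) as

$$d_t = \left| R_t - \frac{g^X + g^M}{2} \right|,$$

where $R_t$ is the dollar-sterling exchange rate and $g^X$ and $g^M$ are the gold-export and gold-import points. A larger average $d_t$ means the exchange rate strayed farther from the center of the gold-point spread, consistent with weaker stabilizing speculation.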

After the U.K. abandonment of the gold standard, many countries followed suit, some to maintain their competitiveness via currency devaluation, others in response to destabilizing capital flows. The United States held on until 1933, when both domestic and foreign demands for gold, manifested in runs on U.S. commercial banks, became intolerable. The “gold bloc” countries (France, Belgium, Netherlands, Switzerland, Italy, Poland) and Danzig lasted even longer; but, with their currencies now overvalued and susceptible to destabilizing speculation, these countries succumbed to the inevitable by the end of 1936. Albania stayed on gold until occupied by Italy in 1939. The Great Depression was as much a consequence of the gold standard as a cause of its breakdown, for gold-standard countries hesitated to inflate their economies for fear of weakening the balance of payments, suffering loss of gold and foreign-exchange reserves, and being forced to abandon convertibility or the gold parity. The gold standard thus involved “golden fetters” (the title of the classic work of Eichengreen, 1992) that inhibited monetary and fiscal policy to fight the depression. Therefore, some have argued, these fetters seriously exacerbated the severity of the Great Depression within countries (because expansionary policy to fight unemployment was not adopted) and fostered the international transmission of the Depression (because as a country’s output decreased, its imports fell, thus reducing the exports and income of other countries).

The “international gold standard,” defined as the period of time during which all four core countries were on the gold standard, existed from 1879 to 1914 (36 years) in the classical period and from 1926 or 1928 to 1931 (four or six years) in the interwar period. The interwar gold standard was a dismal failure in longevity, as well as in its association with the greatest depression the world has known.

References

Bayoumi, Tamim, Barry Eichengreen, and Mark P. Taylor, eds. Modern Perspectives on the Gold Standard. Cambridge: Cambridge University Press, 1996.

Bernanke, Ben, and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” In Financial Market and Financial Crises, edited by R. Glenn Hubbard, 33-68. Chicago: University of Chicago Press, 1991.

Bett, Virgil M. Central Banking in Mexico: Monetary Policies and Financial Crises, 1864-1940. Ann Arbor: University of Michigan, 1957.

Bloomfield, Arthur I. Monetary Policy under the International Gold Standard, 1880-1914. New York: Federal Reserve Bank of New York, 1959.

Bloomfield, Arthur I. Short-Term Capital Movements Under the Pre-1914 Gold Standard. Princeton: International Finance Section, Princeton University, 1963.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics, 1914-1941. Washington, DC, 1943.

Bordo, Michael D. “The Classical Gold Standard: Some Lessons for Today.” Federal Reserve Bank of St. Louis Review 63, no. 5 (1981): 2-17.

Bordo, Michael D. “The Classical Gold Standard: Lessons from the Past.” In The International Monetary System: Choices for the Future, edited by Michael B. Connolly, 229-65. New York: Praeger, 1982.

Bordo, Michael D. “Gold Standard: Theory.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 267-71. London: Macmillan, 1992.

Bordo, Michael D. “The Gold Standard, Bretton Woods and Other Monetary Regimes: A Historical Appraisal.” Federal Reserve Bank of St. Louis Review 75, no. 2 (1993): 123-91.

Bordo, Michael D. The Gold Standard and Related Regimes: Collected Essays. Cambridge: Cambridge University Press, 1999.

Bordo, Michael D., and Forrest Capie, eds. Monetary Regimes in Transition. Cambridge: Cambridge University Press, 1994.

Bordo, Michael D., and Barry Eichengreen, eds. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Bordo, Michael D., and Finn E. Kydland. “The Gold Standard as a Rule: An Essay in Exploration.” Explorations in Economic History 32, no. 4 (1995): 423-64.

Bordo, Michael D., and Hugh Rockoff. “The Gold Standard as a ‘Good Housekeeping Seal of Approval’.” Journal of Economic History 56, no. 2 (1996): 389-428.

Bordo, Michael D., and Anna J. Schwartz, eds. A Retrospective on the Classical Gold Standard, 1821-1931. Chicago: University of Chicago Press, 1984.

Bordo, Michael D., and Anna J. Schwartz. “The Operation of the Specie Standard: Evidence for Core and Peripheral Countries, 1880-1990.” In Currency Convertibility: The Gold Standard and Beyond, edited by Jorge Braga de Macedo, Barry Eichengreen, and Jaime Reis, 11-83. London: Routledge, 1996.

Bordo, Michael D., and Anna J. Schwartz. “Monetary Policy Regimes and Economic Performance: The Historical Record.” In Handbook of Macroeconomics, vol. 1A, edited by John B. Taylor and Michael Woodford, 149-234. Amsterdam: Elsevier, 1999.

Broadberry, S. N., and N. F. R. Crafts, eds. Britain in the International Economy. Cambridge: Cambridge University Press, 1992.

Brown, William Adams, Jr. The International Gold Standard Reinterpreted, 1914-1934. New York: National Bureau of Economic Research, 1940.

Bureau of the Mint. Monetary Units and Coinage Systems of the Principal Countries of the World, 1929. Washington, DC: Government Printing Office, 1929.

Cairncross, Alec, and Barry Eichengreen. Sterling in Decline: The Devaluations of 1931, 1949 and 1967. Oxford: Basil Blackwell, 1983.

Calleo, David P. “The Historiography of the Interwar Period: Reconsiderations.” In Balance of Power or Hegemony: The Interwar Monetary System, edited by Benjamin M. Rowland, 225-60. New York: New York University Press, 1976.

Clarke, Stephen V. O. Central Bank Cooperation: 1924-31. New York: Federal Reserve Bank of New York, 1967.

Cleveland, Harold van B. “The International Monetary System in the Interwar Period.” In Balance of Power or Hegemony: The Interwar Monetary System, edited by Benjamin M. Rowland, 1-59. New York: New York University Press, 1976.

Cooper, Richard N. “The Gold Standard: Historical Facts and Future Prospects.” Brookings Papers on Economic Activity 1 (1982): 1-45.

Dam, Kenneth W. The Rules of the Game: Reform and Evolution in the International Monetary System. Chicago: University of Chicago Press, 1982.

De Cecco, Marcello. The International Gold Standard. New York: St. Martin’s Press, 1984.

De Cecco, Marcello. “Gold Standard.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 260-66. London: Macmillan, 1992.

De Cecco, Marcello. “Central Bank Cooperation in the Inter-War Period: A View from the Periphery.” In International Monetary Systems in Historical Perspective, edited by Jaime Reis, 113-34. Houndmills, Basingstoke, Hampshire: Macmillan, 1995.

De Macedo, Jorge Braga, Barry Eichengreen, and Jaime Reis, eds. Currency Convertibility: The Gold Standard and Beyond. London: Routledge, 1996.

Ding, Chiang Hai. “A History of Currency in Malaysia and Singapore.” In The Monetary System of Singapore and Malaysia: Implications of the Split Currency, edited by J. Purcal, 1-9. Singapore: Stamford College Press, 1967.

Director of the Mint. The Monetary Systems of the Principal Countries of the World, 1913. Washington: Government Printing Office, 1913.

Director of the Mint. Monetary Systems of the Principal Countries of the World, 1916. Washington: Government Printing Office, 1917.

Dos Santos, Fernando Teixeira. “Last to Join the Gold Standard, 1931.” In Currency Convertibility: The Gold Standard and Beyond, edited by Jorge Braga de Macedo, Barry Eichengreen, and Jaime Reis, 182-203. London: Routledge, 1996.

Dowd, Kevin, and Richard H. Timberlake, Jr., eds. Money and the National State: The Financial Revolution, Government and the World Monetary System. New Brunswick (U.S.): Transaction, 1998.

Drummond, Ian M. The Gold Standard and the International Monetary System, 1900-1939. Houndmills, Basingstoke, Hampshire: Macmillan, 1987.

Easton, H. T. Tate’s Modern Cambist. London: Effingham Wilson, 1912.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Methuen, 1985.

Eichengreen, Barry. Elusive Stability: Essays in the History of International Finance, 1919-1939. New York: Cambridge University Press, 1990.

Eichengreen, Barry. “International Monetary Instability between the Wars: Structural Flaws or Misguided Policies?” In The Evolution of the International Monetary System: How can Efficiency and Stability Be Attained? edited by Yoshio Suzuki, Junichi Miyake, and Mitsuaki Okabe, 71-116. Tokyo: University of Tokyo Press, 1990.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eichengreen, Barry. “The Endogeneity of Exchange-Rate Regimes.” In Understanding Interdependence: The Macroeconomics of the Open Economy, edited by Peter B. Kenen, 3-33. Princeton: Princeton University Press, 1995.

Eichengreen, Barry. “History of the International Monetary System: Implications for Research in International Macroeconomics and Finance.” In The Handbook of International Macroeconomics, edited by Frederick van der Ploeg, 153-91. Cambridge, MA: Basil Blackwell, 1994.

Eichengreen, Barry, and Marc Flandreau. The Gold Standard in Theory and History, second edition. London: Routledge, 1997.

Einzig, Paul. International Gold Movements. London: Macmillan, 1929.

Federal Reserve Bulletin, various issues, 1928-1936.

Ford, A. G. The Gold Standard 1880-1914: Britain and Argentina. Oxford: Clarendon Press, 1962.

Ford, A. G. “Notes on the Working of the Gold Standard before 1914.” In The Gold Standard in Theory and History, edited by Barry Eichengreen, 141-65. New York: Methuen, 1985.

Ford, A. G. “International Financial Policy and the Gold Standard, 1870-1914.” In The Industrial Economies: The Development of Economic and Social Policies, The Cambridge Economic History of Europe, vol. 8, edited by Peter Mathias and Sidney Pollard, 197-249. Cambridge: Cambridge University Press, 1989.

Frieden, Jeffry A. “The Dynamics of International Monetary Systems: International and Domestic Factors in the Rise, Reign, and Demise of the Classical Gold Standard.” In Coping with Complexity in the International System, edited by Jack Snyder and Robert Jervis, 137-62. Boulder, CO: Westview, 1993.

Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Gallarotti, Giulio M. The Anatomy of an International Monetary Regime: The Classical Gold Standard, 1880-1914. New York: Oxford University Press, 1995.

Giovannini, Alberto. “Bretton Woods and its Precursors: Rules versus Discretion in the History of International Monetary Regimes.” In A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform, edited by Michael D. Bordo and Barry Eichengreen, 109-47. Chicago: University of Chicago Press, 1993.

Gunasekera, H. A. de S. From Dependent Currency to Central Banking in Ceylon: An Analysis of Monetary Experience, 1825-1957. London: G. Bell, 1962.

Hawtrey, R. G. The Gold Standard in Theory and Practice, fifth edition. London: Longmans, Green, 1947.

Hawtrey, R. G. Currency and Credit, fourth edition. London: Longmans, Green, 1950.

Hershlag, Z. Y. Introduction to the Modern Economic History of the Middle East. London: E. J. Brill, 1980.

Ingram, James C. Economic Changes in Thailand, 1850-1970. Stanford, CA: Stanford University, 1971.

Jonung, Lars. “Swedish Experience under the Classical Gold Standard, 1873-1914.” In A Retrospective on the Classical Gold Standard, 1821-1931, edited by Michael D. Bordo and Anna J. Schwartz, 361-99. Chicago: University of Chicago Press, 1984.

Kemmerer, Donald L. “Statement.” In Gold Reserve Act Amendments, Hearings, U.S. Senate, 83rd Cong., second session, pp. 299-302. Washington, DC: Government Printing Office, 1954.

Kemmerer, Edwin Walter. Modern Currency Reforms: A History and Discussion of Recent Currency Reforms in India, Puerto Rico, Philippine Islands, Straits Settlements and Mexico. New York: Macmillan, 1916.

Kemmerer, Edwin Walter. Inflation and Revolution: Mexico’s Experience of 1912-1917. Princeton: Princeton University Press, 1940.

Kemmerer, Edwin Walter. Gold and the Gold Standard: The Story of Gold Money, Past, Present and Future. New York: McGraw-Hill, 1944.

Kenwood, A. G., and A. L. Lougheed. The Growth of the International Economy, 1820-1960. London: George Allen & Unwin, 1971.

Kettell, Brian. Gold. Cambridge, MA: Ballinger, 1982.

Kindleberger, Charles P. A Financial History of Western Europe. London: George Allen & Unwin, 1984.

Kindleberger, Charles P. The World in Depression, 1929-1939, revised edition. Berkeley: University of California Press, 1986.

Lampe, John R. The Bulgarian Economy in the Twentieth Century. London: Croom Helm, 1986.

League of Nations. Memorandum on Currency and Central Banks, 1913-1925, second edition, vol. 1. Geneva, 1926.

League of Nations. International Statistical Yearbook, 1926. Geneva, 1927.

League of Nations. International Statistical Yearbook, 1928. Geneva, 1929.

League of Nations. Statistical Yearbook, 1930/31. Geneva, 1931.

League of Nations. Money and Banking, 1937/38, vol. 1: Monetary Review. Geneva.

League of Nations. The Course and Control of Inflation. Geneva, 1946.

Lindert, Peter H. Key Currencies and Gold, 1900-1913. Princeton: International Finance Section, Princeton University, 1969.

McCloskey, Donald N., and J. Richard Zecher. “How the Gold Standard Worked, 1880-1913.” In The Monetary Approach to the Balance of Payments, edited by Jacob A. Frenkel and Harry G. Johnson, 357-85. Toronto: University of Toronto Press, 1976.

MacKay, R. A., ed. Newfoundland: Economic, Diplomatic, and Strategic Studies. Toronto: Oxford University Press, 1946.

MacLeod, Malcolm. Kindred Countries: Canada and Newfoundland before Confederation. Ottawa: Canadian Historical Association, 1994.

Moggridge, D. E. British Monetary Policy, 1924-1931: The Norman Conquest of $4.86. Cambridge: Cambridge University Press, 1972.

Moggridge, D. E. “The Gold Standard and National Financial Policies, 1919-39.” In The Industrial Economies: The Development of Economic and Social Policies, The Cambridge Economic History of Europe, vol. 8, edited by Peter Mathias and Sidney Pollard, 250-314. Cambridge: Cambridge University Press, 1989.

Morgenstern, Oskar. International Financial Transactions and Business Cycles. Princeton: Princeton University Press, 1959.

Norman, John Henry. Complete Guide to the World’s Twenty-nine Metal Monetary Systems. New York: G. P. Putnam, 1892.

Nurkse, Ragnar. International Currency Experience: Lessons of the Inter-War Period. Geneva: League of Nations, 1944.

Officer, Lawrence H. Between the Dollar-Sterling Gold Points: Exchange Rates, Parity, and Market Behavior. Cambridge: Cambridge University Press, 1996.

Martín Aceña, Pablo, and Jaime Reis, eds. Monetary Standards in the Periphery: Paper, Silver and Gold, 1854-1933. Houndmills, Basingstoke, Hampshire: Macmillan, 2000.

Palyi, Melchior. The Twilight of Gold, 1914-1936: Myths and Realities. Chicago: Henry Regnery, 1972.

Pamuk, Sevket. A Monetary History of the Ottoman Empire. Cambridge: Cambridge University Press, 2000.

Panić, M. European Monetary Union: Lessons from the Classical Gold Standard. Houndmills, Basingstoke, Hampshire: St. Martin’s Press, 1992.

Powell, James. A History of the Canadian Dollar. Ottawa: Bank of Canada, 1999.

Redish, Angela. Bimetallism: An Economic and Historical Analysis. Cambridge: Cambridge University Press, 2000.

Rifaat, Mohammed Ali. The Monetary System of Egypt: An Inquiry into its History and Present Working. London: George Allen & Unwin, 1935.

Rockoff, Hugh. “Gold Supply.” In The New Palgrave Dictionary of Money & Finance, vol. 2, edited by Peter Newman, Murray Milgate, and John Eatwell, 271 73. London: Macmillan, 1992.

Sayers, R. S. The Bank of England, 1891-1944, Appendixes. Cambridge: Cambridge University Press, 1976.

Sayers, R. S. The Bank of England, 1891-1944. Cambridge: Cambridge University Press, 1986.

Schwartz, Anna J. “Alternative Monetary Regimes: The Gold Standard.” In Alternative Monetary Regimes, edited by Colin D. Campbell and William R. Dougan, 44-72. Baltimore: Johns Hopkins University Press, 1986.

Shinjo, Hiroshi. History of the Yen: 100 Years of Japanese Money-Economy. Kobe: Kobe University, 1962.

Spalding, William F. Tate’s Modern Cambist. London: Effingham Wilson, 1926.

Spalding, William F. Dictionary of the World’s Currencies and Foreign Exchange. London: Isaac Pitman, 1928.

Triffin, Robert. The Evolution of the International Monetary System: Historical Reappraisal and Future Perspectives. Princeton: International Finance Section, Princeton University, 1964.

Triffin, Robert. Our International Monetary System: Yesterday, Today, and Tomorrow. New York: Random House, 1968.

Wallich, Henry Christopher. Monetary Problems of an Export Economy: The Cuban Experience, 1914-1947. Cambridge, MA: Harvard University Press, 1950.

Yeager, Leland B. International Monetary Relations: Theory, History, and Policy, second edition. New York: Harper & Row, 1976.

Young, John Parke. Central American Currency and Finance. Princeton: Princeton University Press, 1925.

Citation: Officer, Lawrence. “Gold Standard”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/gold-standard/

Fire Insurance in the United States

Dalit Baranoff

Fire Insurance before 1810

Marine Insurance

The first American insurers modeled themselves after British marine and fire insurers, who were already well-established by the eighteenth century. In eighteenth-century Britain, individual merchants wrote most marine insurance contracts. Shippers and ship owners were able to acquire insurance through an informal exchange centering on London’s coffeehouses. Edward Lloyd’s Coffee-house, the predecessor of Lloyd’s of London, came to dominate the individual underwriting business by the middle of the eighteenth century.

Similar insurance offices where local merchants could underwrite individual voyages began to appear in a number of American port cities in the 1720s. The trade centered on Philadelphia, where at least fifteen different brokerages helped place insurance in the hands of some 150 private underwriters over the course of the eighteenth century. But only a limited amount of coverage was available. American shippers could also acquire insurance through the agents of Lloyd’s and other British insurers, but often had to wait months for payment of losses.

Mutual Fire Insurance

When fire insurance first appeared in Britain after the Great London Fire of 1666, mutual societies, in which each policyholder owned a share of the risk, predominated. The earliest American fire insurers followed this model as well. Established in the few urban centers where capital was concentrated, American mutuals were not considered money-making ventures, but rather were outgrowths of volunteer firefighting organizations. In 1735 Charleston residents formed the first American mutual insurance company, the Friendly Society of Mutual Insuring of Homes against Fire. It lasted only until 1741, when a major fire put it out of business.

Benjamin Franklin was the organizing force behind the next, more successful, mutual insurance venture, the Philadelphia Contributionship for the Insurance of Houses from Loss by Fire,[1] known familiarly by the name of its symbol, the “Hand in Hand.” By the 1780s, growing demand had led to the formation of other fire mutuals in Philadelphia, New York, Baltimore, Norwich (CT), Charleston, Richmond, Boston, Providence, and elsewhere. (See Table 1.)

Joint-Stock Companies

Joint-stock insurance companies, which raise capital through the sale of shares and distribute dividends, rose to prominence in American fire and marine insurance after the War of Independence. While only a few British insurers were granted the royal charters that allowed them to sell stock and to claim limited liability, insurers in the young United States found it relatively easy to obtain charters from state legislatures eager to promote a domestic insurance industry.

Joint-stock companies first appeared in the marine sector, where demand and the potential for profit were greater. Because they did not rely on the fortunes of any one individual, joint-stock companies provided greater security than private underwriting. In addition to their premium income, joint-stock companies maintained a fixed capital, allowing them to cover larger amounts than mutuals could.

The first successful joint-stock company, the Insurance Company of North America, was formed in 1792 in Philadelphia to sell marine, fire, and life insurance. By 1810, more than seventy such companies had been chartered in the United States. Most of the firms incorporated before 1810 operated primarily in marine insurance, although they were often chartered to handle other lines. (See Table 1.)

Table 1: American Insurance Companies, 1735-1810

Connecticut
1794 Norwich Mutual Fire Insurance Co. (Norwich)
1796 New Haven Insurance Co.
1797 New Haven Insurance Co. (Marine)
1801 Mutual Assurance Co. (New Haven)
1803 Hartford Insurance Co. (M)
1803 Middletown Insurance Co. (Middletown) (M)
1803 Norwich Marine Insurance Co.
1805 Union Insurance Co. (New London) (M)
1810 Hartford Fire Insurance Co.
Maryland
1787 Baltimore Fire Insurance Co. (Baltimore)
1791 Maryland I. Insurance Co. (Baltimore)
1794 Baltimore Equitable Society (Baltimore)
1795 Baltimore Fire Insurance Co. (Baltimore)
1795 Maryland Insurance Co. (Baltimore)
1796 Charitable Marine Society (Baltimore)
1798 Georgetown Mutual Insurance Co. (Georgetown)
1804 Chesapeake Insurance Co. (Baltimore)
1804 Marine Insurance Co. (Baltimore)
1804 Union Insurance Co. of MD (Baltimore)
Massachusetts
1795 Massachusetts Fire and Marine Insurance Co. (Boston)
1798 Massachusetts Mutual Ins. Co. (Boston)
1799 Boston Marine Insurance Co. (Boston)
1799 Newburyport Marine Insurance Co. (Newburyport)
1800 Maine Fire and Marine Ins. Co. (Portland)
1800 Salem Marine Insurance Co. (Salem)
1803 New England Marine Insurance Co. (Boston)
1803 Suffolk Insurance Co. (Boston)
1803 Cumberland Marine and Fire Insurance Co. (Portland, ME)
1803 Essex Fire and Marine Insurance Co. (Salem)
1803 Gloucester Marine Ins. Co. (Gloucester)
1803 Lincoln and Kennebeck Marine Ins. Co. (Wiscasset)
1803 Merrimac Marine and Fire Ins. Co. (Newburyport)
1803 Marblehead Marine Insurance Co. (Marblehead)
1803 Nantucket Marine Insurance Co. (Nantucket)
1803 Portland Marine and Fire Insurance Co. (Portland)
1804 North American Insurance Co. (Boston)
1804 Union Insurance Co. (Boston)
1804 Hampshire Mutual Fire Insurance Co. (Northampton)
1804 Kennebunk Marine Ins. Co. (Wells)
1804 Nantucket Union Marine Insurance Co. (Nantucket)
1804 Plymouth Marine Insurance Co. (Plymouth)
1804 Union Marine Insurance Co. (Salem)
1805 Bedford Marine Insurance Co. (New Bedford)
1806 Newburyport Marine Insurance Co. (Newburyport)
1807 Bath Fire and Marine Insurance Co. (Bath)
1807 Middlesex Insurance Co. (Charlestown)
1807 Union Marine and Fire Insurance Co. (Newburyport)
1808 Kennebeck Marine Ins. Co. (Bath)
1809 Beverly Marine Insurance Co. (Beverly)
1809 Marblehead Social (Marblehead)
1809 Social Insurance Co. (Salem)
Pennsylvania
1752 Philadelphia Contributionship for the Insurance of Houses from Loss by Fire
1784 Mutual Assurance Co. (Philadelphia)
1794 Insurance Co. of North America (Philadelphia)
1794 Insurance Co. of the State of Pennsylvania (Philadelphia)
1803 Phoenix Insurance Co. (Philadelphia)
1803 Philadelphia Insurance Co. (Philadelphia)
1804 Delaware Insurance Co. (Philadelphia)
1804 Union Insurance Co. (Chester County)
1807 Lancaster and Susquehanna Insurance Co.
1809 Marine and Fire Insurance Co. (Philadelphia)
1810 United States Insurance Co. (Philadelphia)
1810 American Fire Insurance Co. (Philadelphia)
Delaware
1810 Farmers’ Bank of the State of Delaware (Dover)
Rhode Island
1799 Providence Insurance Co.
1800 Washington Insurance Co.
1800 Providence Mutual Fire Insurance Co.
South Carolina
1735 Friendly Society (Charleston) – royal charter
1797 Charleston Insurance Co. (Charleston)
1797 Charleston Mutual Insurance Co. (Charleston)
1805 South Carolina Insurance Co. (Charleston)
1807 Union Insurance Co. (Charleston)
New Hampshire
1799 New Hampshire Insurance Co. (Portsmouth)
New York City
1787 Knickerbocker Fire Insurance Co. (originally Mutual Insurance Co. of the City of New York)
1796 New York Insurance Co.
1796 Insurance Co. of New York
1797 Associated Underwriters
1798 Mutual Assurance Co.
1800 Columbian Insurance Co.
1802 Washington Mutual Assurance Co.
1802 Marine Insurance Co.
1804 Commercial Insurance Co.
1804 Eagle Fire Insurance Co.
1807 Phoenix Insurance Co.
1809 Mutual Insurance Co.
1810 Fireman’s Insurance Co.
1810 Ocean Insurance Co.
North Carolina
1803 Mutual Insurance Co. (Raleigh)
Virginia
1794 Mutual Assurance Society (Richmond)

The Embargo Act (1807-1809) and the War of 1812 (1812-1814) interrupted shipping, drying up marine insurers’ premiums and forcing them to look for other sources of revenue. These same events also stimulated the development of domestic industries, such as textiles, which created new demand for fire insurance. Together, these events led many marine insurers into the fire field, previously a sideline for most. After 1810, new joint-stock companies appeared whose business centered on fire insurance from the outset. Unlike mutuals, these new fire underwriters insured contents as well as real estate, a growing necessity as Americans’ personal wealth began to expand.

1810-1870

Geographic Diversification

Until the late 1830s, most fire insurers concentrated on their local markets, with only a few experimenting with representation through agents in distant cities. Many state legislatures discouraged “foreign” competition by taxing the premiums of out-of-state insurers. This situation prevailed through 1835, when fire insurers learned a lesson they were not to forget. A devastating fire destroyed New York City’s business district, causing between $15 million and $26 million in damage and bankrupting 23 of the 26 local fire insurance companies. From this point on, fire insurers regarded the geographic diversification of risks as imperative.

Insurers sought to enter new markets in order to reduce their exposure to large-scale conflagrations. They gradually discovered that contracting with agents allowed them to expand broadly, rapidly, and at relatively low cost. Pioneered mainly by companies based in Hartford and Philadelphia, the agency system did not become truly widespread until the 1850s, but once it began to emerge in earnest, it took off rapidly. By 1855, for example, New York State had authorized 38 out-of-state companies to sell insurance there, most of them less than five years old. By 1860, national companies relying on networks of local agents had replaced purely local operations as the mainstay of the industry.

Competition

As the agency system grew, so too did competition. By the 1860s, national fire insurance firms competed in hundreds of local markets simultaneously. Low capitalization requirements and the widespread adoption of general incorporation laws provided for easy entry into the field.

Competition forced insurers to base their premiums on short-term costs. As a result, fire insurance rates were inadequate to cover the long-term costs associated with the city-wide conflagrations that might occur unpredictably once or twice in a generation. When another large fire occurred, many consumers would be left with worthless policies.

Aware of this danger, insurers struggled to raise rates through cooperation. Their most notable effort was the National Board of Fire Underwriters. Formed in 1866 with 75 member companies, it established local boards throughout the country to set uniform rates. But by 1870, renewed competition led the members of the National Board to give up the attempt.

Regulation

Insurance regulation developed during this period to protect consumers from the threat of insurance company insolvency. Beginning with New York (1849) and Massachusetts (1852), a number of states began to codify their insurance laws. Following New York’s lead in 1851, some states adopted $100,000-minimum capitalization requirements. But these rules did little to protect consumers when a large fire resulted in losses in excess of that amount.

By 1860 four states had established insurance departments. Two decades later, insurance departments, headed by a commissioner or superintendent, existed in some 25 states. In states without formal departments, the state treasurer, comptroller, or secretary of state typically oversaw insurance regulation.

State Insurance Departments through 1910
(Departments headed by insurance commissioner or superintendent unless otherwise indicated)

Source: Harry C. Brearley, Fifty Years of a Civilizing Force (1916), 261-274.
Year listed is year department began operating, not year legislation creating it was passed.

1852
  • New Hampshire
  • Vermont (state treasurer served as insurance commissioner)
1855
  • Massachusetts (annual returns required since 1837)
1860
  • New York (comptroller first authorized to prepare reports in 1853, first annual report 1855)
1862
  • Rhode Island
1865
  • Indiana (1852-1865, state auditor headed)
  • Connecticut
1867
  • West Virginia (state auditor supervised 1865 until 1907, when reorganized)
1868
  • California
  • Maine
1869
  • Missouri
1870
  • Kentucky (part of a bureau of the state auditor’s department)
1871
  • Kansas
  • Michigan
1872
  • Florida
  • Ohio (1867-72, state auditor supervised)
  • Maryland
  • Minnesota
1873
  • Arkansas
  • Nebraska
  • Pennsylvania
  • Tennessee (state treasurer acted as insurance commissioner)
1876
  • Texas
1878
  • Wisconsin (1867-78, secretary of state supervised insurance)
1879
  • Delaware
1881
  • Nevada (1864-1881, state comptroller supervised insurance)
1883
  • Colorado
1887
  • Georgia (1869-1887, insurance supervised by state comptroller general)
1889
  • North Dakota
  • Washington (secretary of state acted as insurance commissioner until 1908)
1890
  • Oklahoma (secretary of territory headed through 1907)
1891
  • New Jersey (1875-1891, secretary of state supervised insurance)
1893
  • Illinois (auditor of public accounts supervised insurance 1869-1893)
1896
  • Utah (1884-1896, supervised by territorial secretary. Supervised by secretary of state until department reorganized in 1909)
1897
  • Alabama (1860-1897, insurance supervised by state auditor)
  • Wyoming (territorial auditor supervised insurance 1868-1896)
  • South Dakota (1889-1897, state auditor supervised)
1898
  • Louisiana (secretary of state acted as superintendent)
1900
  • Alaska (administered by the surveyor-general of the territory)
1901
  • Arizona (1887-1901 supervised by territorial treasurer)
  • Idaho (1891-1901, state treasurer headed)
1902
  • Mississippi (1857-1902, auditor of public accounts supervised insurance)
  • District of Columbia
1905
  • New Mexico (1882-1904, territorial auditor supervised)
1906
  • Virginia (from 1866 auditor of public accounts supervised)
1908
  • South Carolina (1876-1908, comptroller general supervised insurance)
1909
  • Montana (supervised by territorial/state auditor 1883-1909)

The Supreme Court affirmed state supervision of insurance in 1868 in Paul v. Virginia, which found insurance not to be interstate commerce. As a result, insurance would not be subject to federal regulation for decades to come.

1871-1906

Chicago and Boston Fires

The Great Chicago Fire of October 8-10, 1871, destroyed over 2,000 acres (more than three square miles) of the city. With close to 18,000 buildings burned, including 1,500 “substantial business structures,” 100,000 people were left homeless and thousands jobless. Insurance losses totaled between $90 million and $100 million. Many firms’ losses exceeded their available assets.

About 200 fire insurance companies did business in Chicago at the time. The fire bankrupted 68 of them. At least one-half of the property in the burnt district was covered by insurance, but as a result of the insurance company failures, Chicago policyholders recovered only about 40 percent of what they were owed.

A year later, on November 9 and 10, 1872, a fire destroyed Boston’s entire mercantile district, an area of 40 acres. Insured losses in this case totaled more than $50 million, bankrupting an additional 32 companies. The rate of insurance coverage was higher in Boston, where commercial property, everywhere more likely to be insured, happened to bear the brunt of the fire. Some 75 percent of ruined buildings and their contents were insured against fire. In this case, policyholders recovered about 70 percent of their insured losses.

Local Boards

After the Chicago and Boston fires revealed the inadequacy of insurance rates, surviving insurers again tried to set rates collectively. By 1875, a revitalized National Board had organized over 1,000 local boards, placing them under the supervision of district organizations. State auxiliary boards oversaw the districts, and the National Board itself was the final arbiter of rates. But this top-down structure encountered resistance from the local agents, long accustomed to setting their own rates. In the midst of the economic downturn that followed the Panic of 1873, the National Board’s efforts again collapsed.

In 1877, the membership took a fresh approach. They voted to dismantle the centralized rating bureaucracy, instead leaving rate-setting to local boards composed of agents. The National Board now focused its attention on promoting fire prevention and collecting statistics. By the mid-1880s, local rate-setting cartels operated in cities throughout the U.S. Regional boards or private companies rated smaller communities outside the jurisdiction of a local board.

The success of the new breed of local rate-setting cartels owed much to the ever-expanding scale of commerce and property, which fostered a system of mutual dependence among local agents. Although individual agents typically represented multiple companies, they had come routinely to split risks among themselves and the several firms they served. Responding to the imperative of diversification, companies rarely covered more than $10,000 on an individual property, or even within a single block of a city.

As property values rose, it was not unusual to see single commercial buildings insured by 20 or more firms, each underwriting a $1,000 or $2,000 chunk of a given risk. Insurers who shared their business had few incentives to compete on price. Undercutting other insurers might even cost them future business. When a sufficiently large group of agents joined forces to set minimum prices, they effectively could shut out any agents who refused to follow the tariff.

Cooperative price-setting by local boards allowed insurers to maintain higher rates, taking periodic conflagrations into account as long-term costs. Cooperation also resulted, for the first time, in rates that followed a stable pattern in which aggregate prices reflected aggregate costs: the so-called underwriting cycle.

(Note: The underwriting cycle is conventionally illustrated using combined ratios, which are the ratio of losses and expenses to premium income in any given year. Because combined ratios include dividend payments but not investment income, they are often greater than 100.)
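
In symbols, the combined ratio described in the note can be sketched as follows (the decomposition is illustrative; the original does not give a formula):

$$\text{combined ratio} = \frac{\text{losses} + \text{expenses} + \text{dividends}}{\text{premium income}} \times 100,$$

so a value above 100 means that underwriting outgo exceeded premium income in that year, with the shortfall covered by investment earnings.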

Local boards helped fire insurance companies diversify their risks and stabilize their rates. The companies, in turn, supported the local boards. As a result, the local rate-setting boards that formed during the early 1880s proved remarkably durable and successful. Despite brief disruptions in some cities during the severe economic downturn of the mid-1890s, the local boards did not fail.

As an additional benefit, insurers were able to accomplish collectively what they could not afford to do individually: collect and analyze data on a large scale. The “science” of fire insurance remained in its infancy. The local boards inspected property and created detailed rating charts. Some even instituted scheduled rating – a system in which property owners were penalized for defects, such as the lack of fire doors, and rewarded for improvements. Previously, agents had set rates based on their personal, idiosyncratic knowledge of local conditions. Within the local boards, agents shared both their subjective personal knowledge and objective data. The result was a crude approximation of actuarial science.

Anti-Compact Laws

Price-setting by local boards was not viewed favorably by the many policyholders who had to pay higher prices for insurance. Since Paul v. Virginia had placed insurance beyond the reach of federal antitrust law, consumers encouraged their state legislatures to pass laws outlawing price collusion among insurers. Ohio adopted the first anti-compact law in 1885, followed by Michigan (1887); Arkansas, Nebraska, Texas, and Kansas (1889); and Maine, New Hampshire, and Georgia (1891). By 1906, 19 states had anti-compact laws, but they had limited effectiveness. Where open collusion was outlawed, insurers simply established private rating bureaus to set “advisory” rates.

Spread of Insurance

Local boards flourished in prosperous times. During the boom years of the 1880s, new capital flowed into every sector. The increasing concentration of wealth in cities steadily drove the amounts and rates of covered property upward. Between 1880 and 1889, insurance coverage rose by an average rate of 4.6 percent a year, increasing 50 percent overall. By 1890, close to 60 percent of burned property in the U.S. was insured, a figure that would not be exceeded until the 1910s, when upwards of 70 percent of property was insured.
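
The two growth figures just cited are mutually consistent as a matter of arithmetic: nine annual increments at the average rate compound to roughly the stated overall increase,

$$(1 + 0.046)^{9} \approx 1.50.$$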

In 1889, the dollar value of property insured against fire in the United States approached $12 billion. Fifteen years later, $20 billion in property was covered.

Baltimore and San Francisco

The ability of higher, more stable prices to insulate industry and society from the consequences of citywide conflagrations can be seen in the strikingly different outcomes of the sequels to Boston and Chicago, which occurred in Baltimore and San Francisco in the early 1900s. The Baltimore Fire of February 7-9, 1904, resulted in $55 million in insurance claims, 90 percent of which were paid. Only a few Maryland-based companies went bankrupt.

San Francisco’s disaster dwarfed Baltimore’s. The earthquake that struck the city on April 18, 1906 set off fires that burned for three days, destroying over 500 blocks that contained at least 25,000 buildings. The damages totaled $350 million, some two-thirds covered by insurance. In the end, $225 million was paid out, or around 90 percent of what was owed. Only 20 companies operating in San Francisco were forced to suspend business, some only temporarily.

Improvements in construction and firefighting would eventually put an end to the giant blazes that had plagued America’s cities. But by the middle of the first decade of the twentieth century, cooperative price-setting in fire insurance had already ameliorated the worst economic consequences of these disasters.

1907-1920

State Rate-Setting

Despite the passage of anti-compact legislation, fire insurance in the early 1900s was regulated as much by companies as by state governments. After Baltimore and San Francisco, state governments, recognizing the value of cooperative price-setting, began to abandon anti-compact laws in favor of state involvement in rate-setting, which took one of two forms: rates set by the state, or state review of industry-set rates.

Kansas was the first to adopt strict rate regulation, in 1909, followed by Texas in 1910 and Missouri in 1911. These laws required insurers to submit their rates for review by the state insurance department, which could overrule them. Contesting the constitutionality of the Kansas law, the insurance industry took the State of Kansas to court. In 1914, in German Alliance Insurance Co. v. Lewis, the Supreme Court of the United States decided in favor of Kansas, declaring insurance to be a business affected with a public interest and therefore subject to rate regulation.

While the case was pending, New York entered the rating arena in 1911 with a much less restrictive law. New York’s law was greatly influenced by a legislative investigation, the Merritt Committee. The Armstrong Committee’s investigation of New York’s life insurance industry in 1905 had uncovered numerous financial improprieties, leading legislators to call for investigations into the fire insurance industry, where they hoped to discover similar evidence of corruption or profiteering. The Merritt Committee, which met in 1910 and 1911, instead found that most fire insurance companies brought in only modest profits.

The Merritt Committee further concluded that cooperation among firms was often in the public interest, and recommended that insurance boards continue to set rates. The ensuing law mandated state review of rates to prevent discrimination, requiring companies to charge the same rates for the same types of property. The law also required insurance companies to submit uniform statistics on premiums and losses for the first time. Other states soon adopted similar requirements. By the early 1920s, nearly thirty states had some form of rate regulation.

Data Collection

New York’s data-collection requirement had far-reaching consequences for the entire fire insurance industry. Because every major insurer in the United States did business in New York (and often a great deal of it), any regulatory act passed there had national implications. And once New York mandated that companies submit data, the imperative for a uniform classification system was born.

In 1914, the industry responded by creating an Actuarial Bureau within the National Board of Fire Underwriters to collect uniformly organized data and submit it to the states. Supported by the National Convention of Insurance Commissioners (today called the National Association of Insurance Commissioners, or NAIC), the Actuarial Bureau was soon able to establish uniform, industry-wide classification standards. The regular collection of uniform data enabled the development of modern actuarial science in the fire field.

1920 to the Present

Federal Regulation

Through the 1920s and 1930s, property insurance rating continued as it had before, with various rating bureaus determining the rates that insurers were to charge, and the states reviewing or approving them. In 1944, the Supreme Court decided a federal antitrust suit against the South-Eastern Underwriters Association, which set rates in a number of southern states. The Court found the SEUA to be in violation of the Sherman Act, thereby overturning Paul v. Virginia. The industry had become subject to federal regulation for the first time.

Within a year, Congress had passed the McCarran-Ferguson Act, allowing the states to continue regulating insurance so long as they met certain federal requirements. The law also granted the industry a limited exemption from antitrust statutes. The Act gave the National Association of Insurance Commissioners three years to develop model rating laws for the states to adopt.

State Rating Laws

In 1946, the NAIC adopted model rate laws for fire and casualty insurance that required “prior approval” of rates by the states before they could be used by insurers. While most of the industry supported this requirement as a way to prevent price competition, a group of “independent” insurers opposed prior approval and instead supported “file and use” rates.

By the 1950s, all states had passed rating laws, although not necessarily the model laws. Some allowed insurers to file deviations from bureau rates, while others required bureau membership and strict prior approval of rates. Most regulatory activity through the late 1950s involved the industry’s attempts to protect the bureau rating system.

The bureaus’ tight hold on rates was soon to loosen. In 1959, an investigation into bureau practices by a U.S. Senate antitrust subcommittee (the O’Mahoney Committee) concluded that competition should be the main regulator of the industry. As a result, some states began to make it easier for insurers to deviate from prior-approval rates.

During the 1960s, two different systems of property/casualty insurance regulation developed. While many states abandoned prior approval in favor of competitive rating, others strengthened strict rating laws. At the same time, the many rating bureaus that had provided rates for different states began to consolidate. By the 1970s, the rates that these combined rating bureaus provided were officially only advisory. Insurers could choose whether to use them or develop their own rates.

Although membership in rating bureaus is no longer mandatory, advisory organizations continue to play an important part in property/casualty insurance by providing required statistics to the states. They also give new firms easy access to rating data. The Insurance Services Office (ISO), one of the largest “bureaus,” became a for-profit corporation in 1997 and is no longer controlled by the insurance industry. Even in its current, mature state, however, the property/casualty field functions largely according to the patterns set in fire insurance by the 1920s.

References and Further Reading:

Bainbridge, John. Biography of an Idea: The Story of Mutual Fire and Casualty Insurance. New York: Doubleday, 1952.

Baranoff, Dalit. “Shaped By Risk: Fire Insurance in America 1790-1920.” Ph.D. dissertation, Johns Hopkins University, 2003.

Brearley, Harry Chase. Fifty Years of a Civilizing Force: An Historical and Critical Study of the Work of the National Board of Fire Underwriters. New York: Frederick A. Stokes Company, 1916.

Grant, H. Roger. Insurance Reform: Consumer Action in the Progressive Era. Ames: Iowa State University Press, 1979.

Harrington, Scott E. “Insurance Rate Regulation in the Twentieth Century.” Journal of Risk and Insurance 19, no. 2 (2000): 204-18.

Lilly, Claude C. “A History of Insurance Regulation in the United States.” CPCU Annals 29 (1976): 99-115.

Perkins, Edwin J. American Public Finance and Financial Services, 1700-1815. Columbus: Ohio State University Press, 1994.

Pomeroy, Earl, and Carole Olson Gates. “State and Federal Regulation of the Business of Insurance.” Journal of Risk and Insurance 19, no. 2 (2000): 179-88.

Tebeau, Mark. Eating Smoke: Fire in Urban America, 1800-1950. Baltimore: Johns Hopkins University Press, 2003.

Wagner, Tim. “Insurance Rating Bureaus.” Journal of Risk and Insurance 19, no. 2 (2000): 189-203.

[1] The name appears in various sources as either the “Contributionship” or the “Contributorship.”

Citation: Baranoff, Dalit. “Fire Insurance in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/fire-insurance-in-the-united-states/

An Economic History of Finland

Riitta Hjerppe, University of Helsinki

Finland in the early 2000s is a small industrialized country with a standard of living ranked among the top twenty in the world. At the beginning of the twentieth century it was a poor agrarian country with a gross domestic product per capita less than half of that of the United Kingdom and the United States, world leaders at the time in this respect. Finland was part of Sweden until 1809, and a Grand Duchy of Russia from 1809 to 1917, with relatively broad autonomy in its economic and many internal affairs. It became an independent republic in 1917. While not directly involved in the fighting in World War I, the country went through a civil war in 1918, during its first years of independence, and fought against the Soviet Union during World War II. Participation in Western trade liberalization and bilateral trade with the Soviet Union required careful balancing of foreign policy, but also enhanced the welfare of the population. Finland has been a member of the European Union since 1995, and has belonged to the European Economic and Monetary Union since 1999, when it adopted the euro as its currency.

Figure: Gross Domestic Product per capita in Finland and in the EU 15, 1860-2004, index 2004 = 100. Sources: Eurostat (2001–2005).

Finland has large forest areas of coniferous trees, and forests have been and still are an important natural resource in its economic development. Other natural resources are scarce: there is no coal or oil, and relatively few minerals. Outokumpu, the biggest copper mine in Europe in its time, was depleted in the 1980s. Even water power is scarce, despite the large number of lakes, because of the small differences in elevation. The country is among the larger ones in Europe in area, but it is sparsely populated, with 44 people per square mile and 5.3 million people altogether. The population is very homogeneous: people of foreign origin make up only about two percent, and for historical reasons there are two official language groups, the Finnish-speaking majority and a Swedish-speaking minority. In recent years the population has grown at about 0.3 percent per year.

The Beginnings of Industrialization and Accelerating Growth

Finland was an agrarian country in the 1800s, despite poor climatic conditions for efficient grain growing. Seventy percent of the population was engaged in agriculture and forestry, and half of the value of production came from these primary industries in 1900. Slash-and-burn cultivation finally gave way to field cultivation during the nineteenth century, even in the eastern parts of the country.

Some ironworks were founded in the southwestern part of the country as early as the seventeenth century in order to process Swedish iron ore. Significant tar burning, sawmilling and fur trading brought cash with which to buy a few imported items such as salt, and some luxuries – coffee, sugar, wines and fine cloths. The small towns in the coastal areas flourished through the shipping of these items, even if restrictive legislation in the eighteenth century required transport via Stockholm. The income from tar and timber shipping accumulated capital for the first industrial plants.

The nineteenth century saw the modest beginnings of industrialization, clearly later than in Western Europe. The first modern cotton factories started up in the 1830s and 1840s, as did the first machine shops. The first steam engines appeared in the cotton factories in the 1840s, as did the first rag-paper machine. The first steam sawmills were allowed to start only in 1860. The first railroad shortened the traveling time from the inland towns to the coast in 1862, and the first telegraphs came at around the same time. Some new inventions, such as electrical power and the telephone, came into use early in the 1880s, but in general the diffusion of new technology into everyday use took a long time.

The export of various industrial and artisan products to Russia from the 1840s on, as well as the opening of British markets to Finnish sawmill products in the 1860s, were important triggers of industrial development. From the 1870s on, pulp and paper based on wood fiber became major export items to the Russian market, and before World War I one-third of the demand of the vast Russian empire was satisfied with Finnish paper. Finland became a very open economy after the 1860s and 1870s, with an export share equaling one-fifth of GDP and an import share of one-fourth. A happy coincidence was the considerable improvement in the terms of trade (export prices/import prices) from the late 1860s to 1900, when timber and other export prices improved relative to the international prices of grain and industrial products.

Figure: Openness of the economies (exports + imports of goods / GDP, percent) in Finland and the EU 15, 1960-2005. Sources: Heikkinen and van Zanden 2004; Hjerppe 1989.

Finland participated fully in the global economy of the first gold-standard era, importing much of its grain tariff-free, along with many other foodstuffs. Half of the imports consisted of food, beverages and tobacco. Agriculture turned to dairy farming, as in Denmark, but with poorer results. The Finnish currency, the markka from 1865, was tied to gold in 1878, and the Finnish Senate borrowed money from Western banking houses in order to build railways and schools.

GDP grew at a slightly accelerating average rate of 2.6 percent per annum, and GDP per capita rose 1.5 percent per year on average between 1860 and 1913. The population was also growing rapidly, and from two million in the 1860s it reached three million on the eve of World War I. Only about ten percent of the population lived in towns. The investment rate was a little over 10 percent of GDP between the 1860s and 1913 and labor productivity was low compared to the leading nations. Accordingly, economic growth depended mostly on added labor inputs, as well as a growing cultivated area.
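As a back-of-the-envelope check on these figures, note that per capita growth is approximately total GDP growth minus population growth (the relation is exact for continuously compounded rates):

\[
g_{\text{per capita}} \approx g_{\text{GDP}} - g_{\text{population}},
\]

so the quoted averages imply population growth of roughly \(2.6 - 1.5 = 1.1\) percent per year over 1860-1913, consistent with the rapid population growth described above.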

Catching up in the Interwar Years

The revolution of 1917 in Russia and Finland’s independence cut off Russian trade, which was devastating for Finland’s economy. The food situation was particularly difficult, as 60 percent of the grain required had been imported.

Postwar reconstruction in Europe and the consequent demand for timber soon put the economy on a swift growth path. The gap between the Finnish economy and the Western economies narrowed dramatically in the interwar period, although the distance to the Scandinavian countries, which also experienced fast growth, remained unchanged: GDP grew by 4.7 percent per annum and GDP per capita by 3.8 percent in 1920–1938. The investment rate rose to new heights, which also improved labor productivity. The 1930s depression was milder than in many other European countries because of the continued demand for pulp and paper. In addition, Finnish industries went into depression at different times, which made the downturn milder than it would have been if all the industries had experienced their troughs simultaneously. The Depression, however, had serious and long-drawn-out consequences for poor people.

The land reform of 1918 secured land for tenant farmers and farm workers. A large number of new, small farms were established, which could only support families if they had extra income from forest work. The country remained largely agrarian. On the eve of World War II, almost half of the labor force and one-third of the production were still in the primary industries. Small-scale agriculture used horses and horse-drawn machines, lumberjacks went into the forest with axes and saws, and logs were transported from the forest by horses or by floating. Tariff protection and other policy measures helped to raise the domestic grain production to 80–90 percent of consumption by 1939.

Soon after the end of World War I, Finnish sawmill products, pulp and paper found old and new markets in the Western world. The structure of exports became more one-sided, however. Textiles and metal products found no markets in the West and had to compete hard with imports on the domestic market. More than four-fifths of exports were based on wood, and one-third of industrial production was in sawmilling, other wood products, pulp and paper. Other growing industries included mining, basic metal industries and machine production, but they operated on the domestic market, protected by the customs barriers that were typical of Europe at that time.

The Postwar Boom until the 1970s

Finland came out of World War II crippled by the loss of a full tenth of its territory and with 400,000 evacuees from Karelia. Productive units were dilapidated and the raw-material situation was poor. The huge war reparations to the Soviet Union were the priority problem for decision makers. The favorable development of the domestic machinery and shipbuilding industries, based on domestic demand during the interwar period and on arms deliveries to the army during the war, made the war-reparations deliveries possible. They were paid on time and according to the agreements. At the same time, timber exports to the West started again. Gradually the productive capacity was modernized and the whole industry was reformed. Evacuees and soldiers were given land on which to settle, and this contributed to a decrease in average farm size.

Finland became part of the Western European trade-liberalization movement by joining the World Bank, the International Monetary Fund (IMF) and the Bretton Woods agreement in 1948, becoming a member of the General Agreement on Tariffs and Trade (GATT) two years later, and joining Finnefta (an agreement between the European Free Trade Area (EFTA) and Finland) in 1961. The government chose not to receive Marshall Aid because of the world political situation. Bilateral trade agreements with the Soviet Union started in 1947 and continued until 1991. Tariffs were eased and imports from market economies were liberalized from 1957. Exports and imports, which had stayed at internationally high levels during the interwar years, only slowly returned to their earlier relative levels.

The investment rate climbed to new levels soon after World War II under a government policy favoring investment, and it remained at this very high level until the end of the 1980s. Labor-force growth stopped in the early 1960s, and economic growth has since depended on increases in productivity rather than increased labor inputs. GDP growth was 4.9 percent and GDP per capita growth 4.3 percent in 1950–1973 – matching the rapid pace of many other European countries.

Exports, and accordingly the structure of the manufacturing industry, were diversified first by Soviet and later by Western orders for machinery products, including paper machines, cranes, elevators, and special ships such as icebreakers. The vast Soviet Union provided good markets for clothing and footwear, while Finnish wool and cotton factories slowly disappeared because of competition from low-wage countries. The modern chemical industry started to develop in the early twentieth century, often led by foreign entrepreneurs, and the first small oil refinery was built by the government in the 1950s. The government became actively involved in industrial activities in the early twentieth century, with investments in mining, basic industries, energy production and transmission, and the construction of infrastructure, and this continued in the postwar period.

The new agricultural policy, the aim of which was to secure reasonable incomes and favorable loans to the farmers and the availability of domestic agricultural products for the population, soon led to overproduction in several product groups, and further to government-subsidized dumping on the international markets. The first limitations on agricultural production were introduced at the end of the 1960s.

The population reached four million in 1950, and the postwar baby boom put extra pressure on the educational system. The educational level of the Finnish population was low in Western European terms in the 1950s, even if everybody could read and write. The underdeveloped educational system was expanded and renewed as new universities and vocational schools were founded, and the number of years of basic, compulsory education increased. Education has been government run since the 1960s and 1970s, and is free at all levels. Finland started to follow the so-called Nordic welfare model, and similar improvements in health and social care have been introduced, normally somewhat later than in the other Nordic countries. Public child-health centers, cash allowances for children, and maternity leave were established in the 1940s, and pension plans have covered the whole population since the 1950s. National unemployment programs had their beginnings in the 1930s and were gradually expanded. A public health-care system was introduced in 1970, and national health insurance also covers some of the cost of private health care. During the 1980s the income distribution became one of the most even in the world.

Slower Growth from the 1970s

The oil crises of the 1970s put the Finnish economy under pressure. Although the oil reserves of the main supplier, the Soviet Union, showed no signs of running out, the price increased in line with world market prices. This was a source of devastating inflation in Finland. On the other hand, it was possible to increase exports under the terms of the bilateral trade agreement with the Soviet Union. This boosted export demand and helped Finland to avoid the high and sustained unemployment that plagued Western Europe.

Economic growth in the 1980s was somewhat better than in most Western economies, and at the end of the 1980s Finland caught up with the sluggishly-growing Swedish GDP per capita for the first time. In the early 1990s the collapse of Soviet trade, Western European recession and problems in adjusting to the new liberal order of international capital movement led the Finnish economy into a depression that was worse than that of the 1930s. GDP fell by over 10 percent in three years, and unemployment rose to 18 percent. The banking crisis triggered a profound structural change in the Finnish financial sector. The economy then revived to a brisk growth rate of 3.6 percent per annum in 1994-2005; over the whole period 1973-2005, GDP growth averaged 2.5 percent and GDP per capita growth 2.1 percent.

Electronics started its spectacular rise in the 1980s and is now the largest single manufacturing industry, with a 25 percent share of all manufacturing. Nokia is the world’s largest producer of mobile phones and a major transmission-station constructor. Connected to this development was the increase in the research-and-development outlay to three percent of GDP, one of the highest shares in the world. The Finnish paper companies UPM-Kymmene and M-real and the Finnish-Swedish Stora-Enso are among the largest paper producers in the world, although paper production now accounts for only 10 percent of manufacturing output. Recent discussion about the future of the industry is alarming, however. The position of the Nordic paper industry, which is based on expensive, slowly-growing timber, is threatened by new paper factories founded near the expanding consumption areas in Asia and South America, which use local, fast-growing tropical timber. The formerly significant sawmilling operations now constitute a very small percentage of activities, although production volumes have been growing. The textile and clothing industries have shrunk into insignificance.

What has typified the last couple of decades is globalization, which has spread to all areas. Exports and imports have increased as a result of export-favoring policies. Some 80 percent of the shares of Finnish public companies are now in foreign hands, whereas foreign ownership was limited and controlled until the early 1990s. A quarter of the companies operating in Finland are foreign-owned, and Finnish companies have even bigger investments abroad. Most big companies are truly international nowadays. Migration to Finland has increased, and since the collapse of the eastern bloc Russian immigrants have become the largest single foreign group. The number of foreigners is still lower than in many other countries – there are about 120,000 people of foreign background out of a population of 5.3 million.

The directions of foreign trade have been changing, as trade with the rising Asian economies has gained in importance while Russian trade has fluctuated. Otherwise, almost the same country distribution prevails as has been common for over a century. Western Europe has a share of three-fifths, which has been typical. The United Kingdom was long Finland’s biggest trading partner, with a share of one-third, but this started to diminish in the 1960s. Russia accounted for one-third of Finnish foreign trade in the early 1900s, but the Soviet Union had minimal trade with the West at first, and its share of Finnish foreign trade was just a few percent. After World War II Soviet-Finnish trade increased gradually until it reached 25 percent of Finnish foreign trade in the 1970s and early 1980s. Trade with Russia is now gradually gaining ground again from the low point of the early 1990s, and had risen to about ten percent in 2006. This makes Russia one of Finland’s three biggest trading partners, Sweden and Germany being the other two, with a ten percent share each.

The balance of payments was a continuing problem in the Finnish economy until the 1990s. Particularly in the post-World War II period inflation repeatedly eroded the competitive capacity of the economy and led to numerous devaluations of the currency. An economic policy favoring exports helped the country out of the depression of the 1990s and improved the balance of payments.

Agriculture continued its problematic development of overproduction and high subsidies, which finally became very unpopular. The number of farms has shrunk since the 1960s and the average size has recently risen to average European levels. The shares of agricultural production and labor are also at Western European levels nowadays. Finnish agriculture is incorporated into the Common Agricultural Policy of the European Union and shares its problems, even if Finnish overproduction has been virtually eliminated.

The share of forestry is equally low, even if it supplies four-fifths of the wood used in Finnish sawmills and paper factories: the remaining fifth is imported mainly from the northwestern parts of Russia. The share of manufacturing is somewhat above Western European levels and, accordingly, that of services is high but slightly lower than in the old industrialized countries.

Recent discussion on the state of the economy mainly focuses on two issues. Finland’s very open economy is strongly influenced by the rather sluggish economic development of the European Union; accordingly, very high growth rates are not to be expected in Finland either. Since the 1990s depression, the investment rate has remained at a lower level than was common in the postwar period, and this is a cause for concern.

The other issue concerns the prominent role of the public sector in the economy. The Nordic welfare model is basically approved of, but its costs create tensions. High taxation is one consequence, and political parties debate whether the large public-sector share slows down economic growth.

The aging population, high unemployment and the decreasing numbers of taxpayers in the rural areas of eastern and central Finland place a burden on the local governments. There is also continuing discussion about tax competition inside the European Union: how does the high taxation in some member countries affect the location decisions of companies?

Development of Finland’s exports by commodity group 1900-2005, percent

Source: Finnish National Board of Customs, Statistics Unit

Note on classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.

Development of Finland’s imports by commodity group 1900-2005, percent

Source: Finnish National Board of Customs, Statistics Unit

Note on classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.

References:

Heikkinen, S. and J.L. van Zanden, eds. Explorations in Economic Growth. Amsterdam: Aksant, 2004.

Heikkinen, S. Labour and the Market: Workers, Wages and Living Standards in Finland, 1850–1913. Commentationes Scientiarum Socialium 51 (1997).

Hjerppe, R. The Finnish Economy 1860–1985: Growth and Structural Change. Studies on Finland’s Economic Growth XIII. Helsinki: Bank of Finland Publications, 1989.

Jalava, J., S. Heikkinen and R. Hjerppe. “Technology and Structural Change: Productivity in the Finnish Manufacturing Industries, 1925-2000.” Transformation, Integration and Globalization Economic Research (TIGER), Working Paper No. 34, December 2002.

Kaukiainen, Yrjö. A History of Finnish Shipping. London: Routledge, 1993.

Myllyntaus, Timo. Electrification of Finland: The Transfer of a New Technology into a Late Industrializing Economy. Worcester: Macmillan, 1991.

Ojala, J., J. Eloranta and J. Jalava, editors. The Road to Prosperity: An Economic History of Finland. Helsinki: Suomalaisen Kirjallisuuden Seura, 2006.

Pekkarinen, J. and J. Vartiainen. Finlands ekonomiska politik: den långa linjen 1918–2000. Stockholm: Stiftelsen Fackföreningsrörelsens institut för ekonomisk forskning FIEF, 2001.

Citation: Hjerppe, Riitta. “An Economic History of Finland”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-finland/

An Economic History of Denmark

Ingrid Henriksen, University of Copenhagen

Denmark is located in Northern Europe between the North Sea and the Baltic. Today Denmark consists of the Jutland Peninsula bordering Germany and the Danish Isles, and covers 43,069 square kilometers (16,629 square miles).1 The present nation is the result of several cessions of territory throughout history. The last of the former Danish territories in southern Sweden were lost to Sweden in 1658, following one of the numerous wars between the two nations, which especially marred the sixteenth and seventeenth centuries. Following defeat in the Napoleonic Wars, Norway was separated from Denmark in 1814. After the last major war, the Second Schleswig War in 1864, Danish territory was further reduced by a third when Schleswig and Holstein were ceded to Germany. After a regional referendum in 1920 only North Schleswig returned to Denmark. Finally, Iceland withdrew from the union with Denmark in 1944. The following will deal with the geographical unit of today’s Denmark.

Prerequisites of Growth

Throughout history a number of advantageous factors have shaped the Danish economy. From this perspective it may not be surprising to find today’s Denmark among the richest societies in the world. According to the OECD, it ranked seventh in 2004, with income of $29,231 per capita (PPP). Although we can identify a number of turning points and breaks, this long-run position has changed little over the period for which we have quantitative evidence. Thus Maddison (2001), in his estimate of GDP per capita around 1600, places Denmark as number six. One interpretation could be that favorable circumstances, rather than ingenious institutions or policies, have determined Danish economic development. Nevertheless, this article also deals with time periods in which the Danish economy was either diverging from or converging towards the leading economies.

Table 1: Average Annual GDP Growth (at factor costs)

Period       Total   Per capita
1870-1880    1.9%    0.9%
1880-1890    2.5%    1.5%
1890-1900    2.9%    1.8%
1900-1913    3.2%    2.0%
1913-1929    3.0%    1.6%
1929-1938    2.2%    1.4%
1938-1950    2.4%    1.4%
1950-1960    3.4%    2.6%
1960-1973    4.6%    3.8%
1973-1982    1.5%    1.3%
1982-1993    1.6%    1.5%
1993-2004    2.2%    2.0%

Sources: Johansen (1985) and Statistics Denmark ‘Statistikbanken’ online.

Denmark’s geographical location in close proximity to the most dynamic nations of sixteenth-century Europe, the Netherlands and the United Kingdom, no doubt exerted a positive influence on the Danish economy and Danish institutions. The North German area influenced Denmark both through long-term economic links and through the Lutheran Protestant Reformation, which the Danes embraced in 1536.

The Danish economy traditionally specialized in agriculture like most other small and medium-sized European countries. It is, however, rather unique to find a rich European country in the late-nineteenth and mid-twentieth century which retained such a strong agrarian bias. Only in the late 1950s did the workforce of manufacturing industry overtake that of agriculture. Thus an economic history of Denmark must take its point of departure in agricultural development for quite a long stretch of time.

Looking at resource endowments, Denmark enjoyed a relatively high agricultural land-to-labor ratio compared to other European countries, with the exception of the UK. This mattered all the more because it was accompanied by a comparatively wealthy peasantry.

Denmark had no mineral resources to speak of until the exploitation of oil and gas in the North Sea began in 1972 and 1984, respectively. From 1991 on Denmark has been a net exporter of energy although on a very modest scale compared to neighboring Norway and Britain. The small deposits are currently projected to be depleted by the end of the second decade of the twenty-first century.

Figure 1. Percent of GDP in selected sectors

Source: Johansen (1985) and Statistics Denmark ’Nationalregnskaber’

Good logistics can be regarded as a resource in pre-industrial economies. The Danish coastline of 7,314 km, and the fact that no point in the country is more than 50 km from the sea, were advantages in an age in which transport by sea was more economical than transport by land.

Decline and Transformation, 1500-1750

The year of the Lutheran Reformation (1536) conventionally marks the end of the Middle Ages in Danish historiography. Only around 1500 did population growth begin to pick up after the devastating effect of the Black Death. Growth thereafter was modest and at times probably stagnant, with large fluctuations in mortality following major wars, particularly during the seventeenth century, and years of bad harvests. About 80-85 percent of the population lived from subsistence agriculture in small rural communities, and this changed little over the period. Exports are estimated to have been about 5 percent of GDP between 1550 and 1650. The main export products were oxen and grain. The period after 1650 was characterized by a long-lasting slump, with a marked decline in exports to the neighboring countries, the Netherlands in particular.

The institutional development after the Black Death showed a return to more archaic forms. Unlike other parts of northwestern Europe, the peasantry on the Danish Isles afterwards became a victim of a process of re-feudalization during the last decades of the fifteenth century. A likely explanation is the low population density that encouraged large landowners to hold on to their labor by all means. Freehold tenure among peasants effectively disappeared during the seventeenth century. Institutions like bonded labor that forced peasants to stay on the estate where they were born, and labor services on the demesne as part of the land rent bring to mind similar arrangements in Europe east of the Elbe River. One exception to the East European model was crucial, however. The demesne land, that is the land worked directly under the estate, never made up more than nine percent of total land by the mid eighteenth century. Although some estate owners saw an interest in encroaching on peasant land, the state protected the latter as production units and, more importantly, as a tax base. Bonded labor was codified in the all-encompassing Danish Law of Christian V in 1683. It was further intensified by being extended, though under another label, to all Denmark during 1733-88, as a means for the state to tide the large landlords over an agrarian crisis. One explanation for the long life of such an authoritarian institution could be that the tenants were relatively well off, with 25-50 acres of land on average. Another reason could be that reality differed from the formal rigor of the institutions.

Following the Protestant Reformation in 1536, the Crown took over all church land, thereby making it the owner of 50 percent of all land. The costs of warfare during most of the sixteenth century could still be covered by the revenue from these substantial possessions. Around 1600, income from taxation and customs – mostly the Sound Toll collected from ships passing the narrow strait between Denmark and today’s Sweden – was as large as the revenue from Crown lands. About 50 years later, after a major fiscal crisis had led to the sale of about half of all Crown lands, the revenue from royal demesnes had declined in relative terms to about one-third, and after 1660 the full transition from domain state to tax state was completed.

The bulk of the former Crown land had been sold to nobles and a few commoners who owned estates. Consequently, although the Danish constitution of 1665 was the most stringent version of absolutism found anywhere in Europe at the time, the Crown depended heavily on estate owners to perform a number of important local tasks. Thus conscription of troops for warfare, collection of land taxes, and maintenance of law and order enhanced the landlords’ power over their tenants.

Reform and International Market Integration, 1750-1870

The driving force of Danish economic growth, which took off during the late eighteenth century, was population growth at home and abroad, which triggered technological and institutional innovation. Whereas the Danish population during the previous hundred years had grown by about 0.4 percent per annum, growth climbed to about 0.6 percent, accelerating after 1775 and especially from the second decade of the nineteenth century (Johansen 2002). As elsewhere in Northern Europe, accelerating growth can be ascribed to a decline in mortality, mainly child mortality. Probably this development was initiated by fewer spells of epidemic disease due to fewer wars and greater inherited immunity against contagious diseases. Vaccination against smallpox and the formal education of midwives from the early nineteenth century may also have played a role (Banggaard 2004). Land reforms that entailed some scattering of the farm population may also have had a positive influence. Prices rose from the late eighteenth century in response to the increase in population in Northern Europe, but also following a number of international conflicts. This in turn caused a boom in Danish transit shipping and in grain exports.

Population growth rendered the old institutional setup obsolete. Landlords no longer needed to bind labor to their estates, as a new class of landless laborers, or cottagers with little land, emerged. The work of these day-laborers was to replace the labor services of tenant farmers on the demesnes. The old system of labor services presented an obvious incentive problem, all the more so since the work was often carried out by the live-in servants of the tenant farmers. Thus the labor days on the demesnes represented a loss to both landlords and tenants (Henriksen 2003). Part of the land rent was originally paid in grain. Some of it had been converted to money, which meant that real rents declined during the inflation. The solution to these problems was massive land sales, both from the remaining Crown lands and from private landlords to their tenants. As a result, two-thirds of all Danish farmers became owner-occupiers, compared to only ten percent in the mid-eighteenth century. This development was halted during the next two and a half decades but resumed as the business cycle picked up during the 1840s and 1850s. It was to become of vital importance to the modernization of Danish agriculture towards the end of the nineteenth century that 75 percent of all agricultural land was farmed by owners of middle-sized farms of about 50 acres. Population growth may also have put pressure on common lands in the villages. At any rate, enclosure began in the 1760s, accelerated in the 1790s with the support of legislation, and was almost complete by the third decade of the nineteenth century.

The initiative for the sweeping land reforms from the 1780s is thought to have come from below – that is, from the landlords and in some instances also from the peasantry. The absolute monarch and his counselors were, however, strongly supportive of these measures. The desire for peasant land as a tax base weighed heavily, and the reforms were believed to enhance the efficiency of peasant farming. Besides, the central government was by now more powerful than in the preceding centuries and less dependent on landlords for local administrative tasks.

Production per capita rose modestly before the 1830s and more markedly thereafter, when a better allocation of labor and land followed the reforms and some new crops, like clover and potatoes, were introduced on a larger scale. Most importantly, the Danes no longer lived at the margin of hunger. We no longer find a correlation between demographic variables – deaths and births – and bad harvest years (Johansen 2002).

A liberalization of import tariffs in 1797 marked the end of a short spell of late mercantilism. Further liberalizations during the nineteenth and the beginning of the twentieth century established the Danish liberal tradition in international trade that was only to be broken by the protectionism of the 1930s.

Following the loss of the secured Norwegian market for grain in 1814, Danish exports began to target the British market. The great rush forward came as the British Corn Law was repealed in 1846. The export share of the production value in agriculture rose from roughly 10 to around 30 percent between 1800 and 1870.

In 1849 absolute monarchy was peacefully replaced by a free constitution. The long-term benefits of fundamental principles such as the inviolability of private property rights, the freedom of contracting and the freedom of association were probably essential to future growth though hard to quantify.

Modernization and Convergence, 1870-1914

During this period Danish economic growth outperformed that of most other European countries. A convergence in real wages towards the richest countries, Britain and the U.S., as shown by O’Rourke and Williamson (1999), can only in part be explained by open-economy forces. Denmark became a net importer of foreign capital from the 1890s and foreign debt was well above 40 percent of GDP on the eve of World War I. Overseas emigration reduced the potential workforce, but as mortality declined population growth stayed around one percent per annum. The increase in foreign trade was substantial, as in many other economies during the heyday of the gold standard. Thus the export share of Danish agriculture surged to 60 percent.

The background for the latter development has featured prominently in many international comparative analyses. Part of the explanation for the success, as in other Protestant parts of Northern Europe, was a high rate of literacy that allowed a fast spread of new ideas and new technology.

The driving force of growth was that of a small open economy, which responded effectively to a change in international product prices, in this instance caused by the invasion of cheap grain to Western Europe from North America and Eastern Europe. Like Britain, the Netherlands and Belgium, Denmark did not impose a tariff on grain, in spite of the strong agrarian dominance in society and politics.

Proposals to impose tariffs on grain, and later on cattle and butter, were turned down by Danish farmers. The majority seems to have realized the advantages accruing from the free import of cheap animal feed during the ongoing transition from vegetable to animal production, at a time when the prices of animal products did not decline as much as grain prices. The dominant middle-sized farm was inefficient for wheat but had its comparative advantage in intensive animal farming with the given technology. O’Rourke (1997) found that the grain invasion only lowered Danish rents by 4-5 percent, while real wages rose, as expected, and by more than in any other agrarian economy – indeed by more than in industrialized Britain.

The move from grain exports to exports of animal products, mainly butter and bacon, was to a great extent facilitated by the spread of agricultural cooperatives. This form of organization allowed the middle-sized and small farms that dominated Danish agriculture to benefit from economies of scale in processing and marketing. The newly invented steam-driven continuous cream separator skimmed more cream from a kilo of milk than conventional methods and had the further advantage of allowing transported milk, brought together from a number of suppliers, to be skimmed. From the 1880s the majority of these creameries in Denmark were established as cooperatives, and about 20 years later, in 1903, the owners of 81 percent of all milk cows supplied their milk to a cooperative (Henriksen 1999). The Danish dairy industry captured over a third of the rapidly expanding British butter-import market, establishing a reputation for consistent quality that was reflected in high prices. Furthermore, the cooperatives played an active role in persuading the dairy farmers to expand production from summer to year-round dairying. The costs of intensive feeding during the wintertime were more than made up for by a winter price premium (Henriksen and O’Rourke 2005). Year-round dairying resulted in a higher rate of utilization of agrarian capital – that is, of farm animals and of the modern cooperative creameries. Not least, this intensive production meant a higher utilization of hitherto underemployed labor. From the late 1890s in particular, labor productivity in agriculture rose at an unanticipated speed, on par with the productivity increase in the urban trades.

Industrialization in Denmark took its modest beginning in the 1870s, with a temporary acceleration in the late 1890s. It may be a prime example of an industrialization process governed by domestic demand for industrial goods. Industry’s exports never exceeded 10 percent of value added before 1914, compared to agriculture’s export share of 60 percent. The export drive of agriculture towards the end of the nineteenth century was a major force in developing other sectors of the economy, not least transport, trade and finance.

Weathering War and Depression, 1914-1950

Denmark, as a neutral nation, escaped the devastating effects of World War I and was even allowed to carry on exports to both sides in the conflict. The ensuing trade surplus resulted in a trebling of the money supply. As the monetary authorities failed to contain the inflationary effects of this development, the value of the Danish currency slumped to about 60 percent of its pre-war value in 1920. The effects of this monetary policy failure were aggravated by a decision to return to the gold standard at the 1913 level. When monetary policy was finally tightened in 1924, it resulted in fierce speculation on an appreciation of the Krone. During 1925-26 the currency quickly returned to its pre-war parity. As this was not counterbalanced by an equal decline in prices, the result was a sharp real appreciation and a subsequent deterioration in Denmark’s competitive position (Klovland 1998).
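The real appreciation described here can be made precise with the standard definition of the real exchange rate, the measure plotted in Figure 2 below; the notation is illustrative rather than taken from Abildgren (2005):

\[
q = e \cdot \frac{P}{P^{*}},
\]

where \(e\) is the nominal exchange rate (foreign currency per krone), \(P\) the Danish wholesale price index and \(P^{*}\) the foreign one. A rise in \(q\) is a real appreciation: in 1925-26 \(e\) rose back to its pre-war parity while \(P/P^{*}\) did not fall correspondingly, so \(q\) rose sharply and competitiveness deteriorated.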

Figure 2. Indices of the Krone Real Exchange Rate and Terms of Trade (1980=100; real rates based on the Wholesale Price Index)

Source: Abildgren (2005)

Note: Trade with Germany is included in the calculation of the real effective exchange rate for the whole period, including 1921-23.

When, in September 1931, Britain decided to leave the gold standard again, Denmark, together with Sweden and Norway, followed only a week later. This move was beneficial, as the large real depreciation led to a long-lasting improvement in Denmark’s competitiveness in the 1930s. It was, no doubt, the single most important policy decision of the depression years. Keynesian demand management, even if it had been fully understood, was barred by a small public sector, only about 13 percent of GDP. As it was, fiscal orthodoxy ruled and policy was slightly procyclical, as taxes were raised to cover the deficit created by crisis and unemployment (Topp 1995).

Structural development during the 1920s, surprisingly for a rich nation at this stage, was in favor of agriculture. The total labor force in Danish agriculture grew by 5 percent from 1920 to 1930. The number of employees in agriculture was stagnating, whereas the number of self-employed farmers increased considerably. The development in relative incomes cannot account for this trend; part of the explanation must be found in a flawed Danish land policy, which actively supported the further parceling out of land into smallholdings and restricted consolidation into larger, more viable farms. It took until the early 1960s before this policy began to be unwound.

When the world depression hit Denmark with a minor time lag, agriculture still employed one-third of the total workforce while its contribution to total GDP was a bit less than one-fifth. Perhaps more importantly, agricultural goods still made up 80 percent of total exports.

Denmark’s terms of trade, as a consequence, declined by 24 percent from 1930 to 1932. In 1933 and 1934 bilateral trade agreements were forced upon Denmark by Britain and Germany. In 1932 Denmark had adopted exchange control, a harsh measure even for its time, to stem the net flow of foreign exchange out of the country. By rationing imports, exchange control also offered some protection to domestic industry. By the end of the decade manufacturing’s contribution to GDP had surpassed that of agriculture. In spite of the protectionist policy, unemployment soared to 13-15 percent of the workforce.

The policy mistakes of World War I and its immediate aftermath served as a lesson for policymakers during World War II. The German occupation force (April 9, 1940 until May 5, 1945) drew the funds for its sustenance and for exports to Germany on the Danish central bank, whereby the money supply more than doubled. In response, the Danish authorities in 1943 launched a policy of absorbing money through open market operations and, for the first time in history, through a surplus on the state budget.

Economic reconstruction after World War II was swift, as again Denmark had been spared the worst consequences of a major war. In 1946 GDP recovered its highest pre-war level. In spite of this, Denmark received relatively generous support through the Marshall Plan of 1948-52, when measured in dollars per capita.

From Riches to Crisis, 1950-1973: Liberalizations and International Integration Once Again

Growth performance during 1950-1957 was markedly lower than the Western European average. The main reason was the high share of agricultural goods in Danish exports, 63 percent in 1950, at a time when international trade in agricultural products to a large extent remained regulated. Large deteriorations in the terms of trade – caused by the British devaluation of 1949, when Denmark followed suit, the outbreak of the Korean War in 1950, and the Suez crisis of 1956 – made matters worse. The ensuing deficits on the balance of payments led the government to contractionary policy measures, which restrained growth.

The liberalization of the flow of goods and capital in Western Europe within the framework of the OEEC (the Organization for European Economic Cooperation) during the 1950s probably dealt a blow to some Danish manufacturing firms, especially in the textile industry, that had been sheltered by exchange control and wartime conditions. Nevertheless, the export share of industrial production doubled from 10 percent to 20 percent before 1957, at the same time as employment in industry surpassed agricultural employment.

On the question of European economic integration, Denmark linked up with its largest trading partner, Britain. After the establishment of the European Common Market in 1958, and when the attempts to create a large European free trade area failed, Denmark entered the European Free Trade Association (EFTA), created under British leadership in 1960. When Britain was finally able to join the European Economic Community (EEC) in 1973, Denmark followed, after a referendum on the issue. Long before admission to the EEC, the advantages to Danish agriculture from the Common Agricultural Policy (CAP) had been emphasized. The higher prices within the EEC were capitalized into higher land prices, at the same time as investments were increased on the basis of the expected gains from membership. As a result, the most indebted farmers, who had borrowed at fixed interest rates, were hit hard by two developments from the early 1980s. The EEC started to reduce the producers’ benefits of the CAP because of overproduction, and, after 1982, the Danish economy adjusted to a lower level of inflation and, therefore, of nominal interest rates. According to Andersen (2001), Danish farmers were left with the highest interest burden of all European Union (EU) farmers in the 1990s.

Denmark’s relations with the EU, while enthusiastic at the beginning, have since been characterized by a certain amount of reserve. A national referendum in 1992 turned down the treaty on the European Union, the Maastricht Treaty. The Danes then opted out of four areas: common citizenship, a common currency, common foreign and defense policy, and a common policy on police and legal matters. Once more, in 2000, adoption of the common currency, the Euro, was turned down by the Danish electorate. In the debate leading up to the referendum, the possible economic advantages of the Euro in the form of lower transaction costs were considered modest compared to the existing regime of fixed exchange rates vis-à-vis the Euro. All the major political parties are nevertheless pro-European, with only the extreme Right and the extreme Left being against. There seems to be a discrepancy between the general public and the politicians on this particular issue.

As far as domestic economic policy is concerned, the heritage from the 1940s was a new commitment to high employment modified by a balance of payment constraint. The Danish policy differed from that of some other parts of Europe in that the remains of the planned economy from the war and reconstruction period in the form of rationing and price control were dismantled around 1950 and that no nationalizations took place.

Instead of direct regulation, economic policy relied on demand management with fiscal policy as its main instrument. Monetary policy remained a bone of contention between politicians and economists. Coordination of policies was the buzzword but within that framework monetary policy was allotted a passive role. The major political parties for a long time were wary of letting the market rate of interest clear the loan market. Instead, some quantitative measures were carried out with the purpose of dampening the demand for loans.

From Agricultural Society to Service Society: The Growth of the Welfare State

Structural problems in foreign trade extended into the high-growth period of 1958-73, as Danish agricultural exports met with constraints both from the then EEC-member countries and from most EFTA countries as well. During the same decade, the 1960s, as the importance of agriculture was declining, the share of employment in the public sector grew rapidly, a growth that lasted until 1983. Building and construction also took a growing share of the workforce until 1970. These developments left manufacturing industry in a secondary position. Consequently, as pointed out by Pedersen (1995), the sheltered sectors of the economy crowded out the sectors exposed to international competition – mostly industry and agriculture – by putting pressure on labor and other costs during the years of strong expansion.

Perhaps the most conspicuous feature of the Danish economy during the Golden Age was the steep increase in welfare-related costs from the mid 1960s and not least the corresponding increases in the number of public employees. Although the seeds of the modern Scandinavian welfare state were sown at a much earlier date, the 1960s was the time when public expenditure as a share of GDP exceeded that of most other countries.

As in other modern welfare states, important elements in the growth of the public sector during the 1960s were the expansion of public health care and education, both free for all citizens. The background for much of the increase in the number of public employees from the late 1960s was the rise in labor participation by married women from the late 1960s until about 1990, itself partly a consequence of this public-sector expansion. In response, public day-care facilities for young children and old people were expanded. Whereas in 1965 only 7 percent of 0-6 year olds were in a day nursery or kindergarten, this share rose to 77 percent in 2000. This again spawned more employment opportunities for women in the public sector. Today labor participation for women, around 75 percent of 16-66 year olds, is among the highest in the world.

Originally social welfare programs targeted low income earners who were encouraged to take out insurance against sickness (1892), unemployment (1907) and disability (1922). The public subsidized these schemes and initiated a program for the poor among old people (1891). The high unemployment period in the 1930s inspired some temporary relief and some administrative reform, but little fundamental change.

Welfare policy in the first four decades following World War II is commonly believed to have been strongly influenced by the Social Democrat party, which held around 30 percent of the votes in general elections and was the party in power for long periods of time. One of the distinctive features of the Danish welfare state has been its focus on the needs of the individual person rather than on the family context. Another important characteristic is the universal nature of a number of benefits, starting with a basic old-age pension for all in 1956. The compensation rates of a number of schemes are high by international standards, particularly for low-income earners. Public transfers gained a larger share of total public outlays both because standards were raised – that is, benefits became higher – and because the number of recipients increased dramatically under the high-unemployment regime from the mid 1970s to the mid 1990s. To pay for the high transfers and the large public sector – around 30 percent of the workforce – the tax load is also high by international standards. The share of public-sector and social expenditure has risen to above 50 percent of GDP, second only to the share in Sweden.

Figure 3. Unemployment, Denmark (percent of total labor force)

Source: Statistics Denmark ‘50 års-oversigten’ and ADAM’s databank

The Danish labor market model has recently attracted favorable international attention (OECD 2005). It has been declared successful in fighting unemployment – especially compared to the policies of countries like Germany and France. The so-called Flexicurity model rests on three pillars: low employment protection, relatively high compensation rates for the unemployed, and a requirement of active participation by the unemployed. Low employment protection has a long tradition in Denmark, and there is no change in this factor when comparing the twenty years of high unemployment – 8-12 percent of the labor force – from the mid 1970s to the mid 1990s with the past ten years, during which unemployment has declined to a mere 4.5 percent in 2006. The rules governing compensation to the unemployed were tightened from 1994, limiting the number of years the unemployed could receive benefits from 7 to 4. Most noticeably, labor market policy in 1994 turned from ‘passive’ measures – besides unemployment benefits, an early retirement scheme and a temporary paid-leave scheme – toward ‘active’ measures devoted to getting people back to work by providing training and jobs. It is commonly supposed that the strengthening of economic incentives helped to lower unemployment. However, as Andersen and Svarer (2006) point out, while unemployment has declined substantially, a large and growing share of Danes of employable age receives transfers other than unemployment benefit – that is, benefits related to sickness or social problems of various kinds, early retirement benefits, etc. This makes it hazardous to compare the Danish labor market model with that of many other countries.

Exchange Rates and Macroeconomic Policy

Denmark has traditionally adhered to a fixed exchange rate regime, in the belief that, for a small and open economy, a floating regime could lead to very volatile exchange rates that would harm foreign trade. After the gold standard was abandoned in 1931, the Danish currency (the Krone) was for a while pegged to the British pound, and then joined the IMF system of fixed but adjustable exchange rates, the so-called Bretton Woods system, after World War II. The close link with the British economy still manifested itself when the Danish currency was devalued along with the pound in 1949 and, by roughly half as much, in 1967. The 1967 devaluation also reflected that, after 1960, Denmark’s international competitiveness had gradually been eroded by rising real wages, corresponding to a 30 percent real appreciation of the currency (Pedersen 1996).

When the Bretton Woods system broke down in the early 1970s, Denmark joined the European exchange rate cooperation, the “Snake” arrangement, set up in 1972, an arrangement that was continued in the form of the Exchange Rate Mechanism within the European Monetary System from 1979. The Deutschmark was effectively the nominal anchor in European currency cooperation until the launch of the Euro in 1999, a fact that put Danish competitiveness under severe pressure because inflation was markedly higher in Denmark than in Germany. In the end the Danish government gave way under the pressure and undertook four discrete devaluations from 1979 to 1982. Since compensatory increases in wages were held back, the balance of trade improved perceptibly.

This improvement could not, however, make up for the soaring costs of old loans at a time when international real rates of interest were high. The Danish devaluation strategy exacerbated this problem: the anticipation of further devaluations was mirrored in a steep increase in the long-term rate of interest, which peaked at 22 percent in nominal terms in 1982, with an interest spread to Germany of 10 percentage points. Combined with the effects of the second oil crisis on the Danish terms of trade, unemployment rose to 10 percent of the labor force. Given the relatively high compensation ratios for the unemployed, the public deficit increased rapidly and public debt grew to about 70 percent of GDP.
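One standard way to read such a spread – an interpretation supplied here for clarity rather than drawn from the original text – is through uncovered interest parity, under which the interest differential approximates the expected rate of depreciation of the domestic currency:

\[
i_{\mathrm{DKK}} - i_{\mathrm{DEM}} \approx E[\text{rate of depreciation of the krone}],
\]

so a spread of 10 percentage points suggests that markets priced in further krone devaluations against the Deutschmark of roughly that magnitude per year.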

Figure 4. Current Account and Foreign Debt (Denmark)

Source: Statistics Denmark Statistical Yearbooks and ADAM’s Databank

In September 1982 the Social Democrat minority government resigned without a general election and was replaced by a Conservative-Liberal minority government. The new government launched a program to improve the competitiveness of the private sector and to rebalance public finances. An important element was a disinflationary economic policy based on fixed exchange rates, pegging the Krone to the currencies of the EMS participants and, from 1999, to the Euro. Furthermore, automatic wage indexation, which had operated with short interruptions since 1920 (with a short lag and high coverage), was abolished. Fiscal policy was tightened, thus bringing an end to the real increases in public expenditure that had lasted since the 1960s.

The stabilization policy was successful in bringing down inflation and long-term interest rates. Pedersen (1995) finds that this process was nevertheless slower than might have been expected: in view of past Danish exchange rate policy, it took some time for the markets to accept the commitment to fixed exchange rates as credible. Since the late 1990s, however, the interest spread to Germany/Euroland has been negligible.

The initial success of the stabilization policy brought a boom to the Danish economy that, once again, caused overheating in the form of high wage increases (in 1987) and a deterioration of the current account. The solution was a number of reforms in 1986-87 aimed at encouraging private savings, which had by then fallen to a historical low. Most notable was the reform that reduced the tax deductibility of private interest on debts. These measures resulted in a hard landing for the economy, as the housing market collapsed.

The period of low growth was further prolonged by the international recession in 1992. In 1993 yet another shift of regime occurred in Danish economic policy. A new Social Democrat government decided to ‘kick start’ the economy by means of a moderate fiscal expansion whereas, in 1994, the same government tightened labor market policies substantially, as we have seen. Mainly as a consequence of these measures the Danish economy from 1994 entered a period of moderate growth with unemployment steadily falling to the level of the 1970s. A new feature that still puzzles Danish economists is that the decline in unemployment over these years has not yet resulted in any increase in wage inflation.

Denmark at the beginning of the twenty-first century in many ways fits the description of a Small Successful European Economy according to Mokyr (2006). Unlike most of the other small economies, however, Denmark has broad-based exports with no particular “niche” in the world market. Like some other small European countries – Ireland, Finland and Sweden – Denmark’s short-term economic fluctuations have not followed the European business cycle very closely for the past thirty years (Andersen 2001). Domestic demand and domestic economic policy have, after all, played a crucial role even in a very small and very open economy.

References

Abildgren, Kim. “Real Effective Exchange Rates and Purchasing-Power-Parity Convergence: Empirical Evidence for Denmark, 1875-2002.” Scandinavian Economic History Review 53, no. 3 (2005): 58-70.

Andersen, Torben M. et al. The Danish Economy: An International Perspective. Copenhagen: DJØF Publishing, 2001.

Andersen, Torben M. and Michael Svarer. “Flexicurity: den danska arbetsmarknadsmodellen.” Ekonomisk debatt 34, no. 1 (2006): 17-29.

Banggaard, Grethe. Befolkningsfremmende foranstaltninger og faldende børnedødelighed: Danmark, ca. 1750-1850. Odense: Syddansk Universitetsforlag, 2004.

Hansen, Sv. Aage. Økonomisk vækst i Danmark: Volume I: 1720-1914 and Volume II: 1914-1983. København: Akademisk Forlag, 1984.

Henriksen, Ingrid. “Avoiding Lock-in: Cooperative Creameries in Denmark, 1882-1903.” European Review of Economic History 3, no. 1 (1999): 57-78.

Henriksen, Ingrid. “Freehold Tenure in Late Eighteenth-Century Denmark.” Advances in Agricultural Economic History 2 (2003): 21-40.

Henriksen, Ingrid and Kevin H. O’Rourke. “Incentives, Technology and the Shift to Year-round Dairying in Late Nineteenth-century Denmark.” Economic History Review 58, no. 3 (2005): 520-54.

Johansen, Hans Chr. Danish Population History, 1600-1939. Odense: University Press of Southern Denmark, 2002.

Johansen, Hans Chr. Dansk historisk statistik, 1814-1980. København: Gyldendal, 1985.

Klovland, Jan T. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 3 (1998): 309-44.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Mokyr, Joel. “Successful Small Open Economies and the Importance of Good Institutions.” In The Road to Prosperity. An Economic History of Finland, edited by Jari Ojala, Jari Eloranta and Jukka Jalava, 8-14. Helsinki: SKS, 2006.

Pedersen, Peder J. “Postwar Growth of the Danish Economy.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. Cambridge: Cambridge University Press, 1995.

OECD. Employment Outlook. Paris: OECD, 2005.

O’Rourke, Kevin H. “The European Grain Invasion, 1870-1913.” Journal of Economic History 57, no. 4 (1997): 775-99.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Topp, Niels-Henrik. “Influence of the Public Sector on Activity in Denmark, 1929-39.” Scandinavian Economic History Review 43, no. 3 (1995): 339-56.


Footnotes

1 Denmark also includes the Faeroe Islands, with home rule since 1948, and Greenland, with home rule since 1979, both in the North Atlantic. These territories are left out of this account.

Citation: Henriksen, Ingrid. “An Economic History of Denmark”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2006. URL http://eh.net/encyclopedia/an-economic-history-of-denmark/

An Economic History of Copyright in Europe and the United States

B. Zorina Khan, Bowdoin College

Introduction

Copyright is a form of intellectual property that provides legal protection against unauthorized copying of the producer’s original expression in products such as art, music, books, articles, and software. Economists have paid relatively little scholarly attention to copyrights, although recent debates about piracy and “the digital dilemma” (free use of digital property) have prompted closer attention to theoretical and historical issues. Like other forms of intellectual property, copyright is directed to the protection of cultural creations that are nonrivalrous and nonexclusive in nature. It is generally proposed that, in the absence of private or public forms of exclusion, prices will tend to be driven down to the low or zero marginal cost, and the original producer will be unable to recover the initial investment.
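A minimal formalization of this standard argument (the symbols are illustrative and not from the original text): if unauthorized copying drives the price down to marginal cost, a producer who has sunk a fixed creation cost cannot break even,

\[
\pi = (p - c)\,q - F, \qquad p \to c \;\Rightarrow\; \pi \to -F < 0,
\]

where \(c\) is the (low or zero) marginal cost of reproduction, \(q\) the quantity sold, and \(F > 0\) the fixed cost of creating the original work.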

Part of the debate about copyright exists because it is still not clear whether state enforcement is necessary to enable owners to gain returns, or whether the producers of copyrightable products respond significantly to financial incentives. Producers of these public goods might still be able to appropriate returns without copyright laws or in the face of widespread infringement, through such strategies as encryption, cartelization, the provision of complementary products, private monitoring and enforcement, market segmentation, network externalities, first mover effects and product differentiation. Patronage, taxation, subsidies, or public provision might also comprise alternatives to copyright protection. In some instances “authors” (broadly defined) might be more concerned about nonfinancial rewards such as enhanced reputations or more extensive diffusion.

During the past three centuries the grant of property rights to authors has always been controversial, with positions ranging from the notion that cultural creativity should be rewarded with perpetual rights to the complete rejection of any intellectual property rights at all for copyrightable commodities. Historically, however, the primary emphasis has been on the provision of copyright protection through the formal legal system. Europeans have generally tended to adopt the philosophical position that authorship embodies rights of personhood or moral rights that should be accorded strong protections. The American approach to copyright has been more utilitarian: policies were based on a comparison of costs and benefits, and the primary emphasis of early copyright policies was on the advancement of public welfare. However, the harmonization of international laws has created a melding of these two approaches. The tendency at present is toward stronger enforcement of copyrights, prompted by the lobbying of publishers and the globalization of culture and commerce. Technological change has always exerted an exogenous force for change in copyright laws, and modern innovations in particular provoke questions about the extent to which copyright systems can respond effectively to such challenges.

Copyright in Europe

Copyright in France

In the early years of printing, books and other written matter became part of the public domain when they were published. Like patents, the grant of book privileges originated in the Republic of Venice in the fifteenth century, a practice which soon became prevalent in a number of other European countries. Donatus Bossius, a Milanese author, petitioned the duke in 1492 for an exclusive privilege for his book, successfully arguing that he would be unjustly deprived of the benefits of his efforts if others were able to copy his work freely. He was given the privilege for a term of ten years. However, authorship was not required for the grant of a privilege, and printers and publishers obtained monopolies over existing books as well as new works. Since privileges were granted on a case-by-case basis, they varied in geographical scope, duration, and breadth of coverage, as well as in terms of the attendant penalties for their violation. Grantors included religious orders and authorities, universities, political figures, and the representatives of the Crown.

The French privilege system was introduced in 1498 and was well-developed by the end of the sixteenth century. Privileges were granted under the auspices of the monarch, generally for a brief period of two to three years, although the term could be as much as ten years. Protection was granted to new books or translations, maps, type designs, engravings and artwork. Petitioners paid formal fees and informal gratuities to the officials concerned. Since applications could only be sealed if the King were present, petitions had to be carefully timed to take advantage of his route or his return from trips and campaigns. It became somewhat more convenient when the courts of appeal such as the Parlement de Paris began to issue grants that were privileges in all but name, although this could lead to conflicting rights if another authority had already allocated the monopoly elsewhere. The courts sometimes imposed limits on the rights conferred, in the form of stipulations about the prices that could be charged. Privileges were property that could be assigned or licensed to another party, and their infringement was punished by a fine and at times confiscation of all the output of “pirates.”

After 1566, the Edict of Moulins required that all new books be approved and licensed by the Crown. Favored parties were able to get renewals of their monopolies that also allowed them to lay claim to works that were already in the public domain. By the late eighteenth century an extensive administrative procedure was in place that was designed to restrict the number of presses and to conduct surveillance and censorship of the publishing industry. Manuscripts first had to be read by a censor, and only after a permit was requested and granted could the book be printed, although the permit could later be revoked if complaints were lodged by sufficiently influential individuals. Decrees in 1777 established that authors who did not alienate their property were entitled to exclusive rights in perpetuity. Since few authors had the will or resources to publish and distribute books, their privileges were likely to be sold outright to professional publishers. The law, however, distinguished the rights accorded to publishers: if the right was sold, the privilege was limited in duration, with a minimum term of ten years and the exact term determined in accordance with the value of the work, and once the publisher’s term expired the work passed into the public domain. The fee for a privilege was thirty-six livres. Approvals to print a work, or a “permission simple,” which did not entail exclusive rights, could also be obtained after payment of a substantial fee. Between 1700 and 1789, a total of 2,586 petitions for exclusive privileges were filed, and about two-thirds were granted. The result was a system of “odious monopolies”: higher prices and greater scarcity, large transfers to officials of the Crown and their allies, and pervasive censorship. It likewise disadvantaged smaller book producers, provincial publishers, and the academic and broader community.

The French Revolutionary decrees of 1791 and 1793 replaced the idea of privilege with that of uniform statutory claims to literary property, based on the principle that “the most sacred, the most unassailable and the most personal of possessions is the fruit of a writer’s thought.” The subject matter of copyrights covered books, dramatic productions and the output of the “beaux arts” including designs and sculpture. Authors were required to deposit two copies of their books with the Bibliothèque Nationale or risk losing their copyright. Some observers felt that copyrights in France were the least protected of all property rights, since they were enforced with a care to protecting the public domain and social welfare. Although France is associated with the author’s rights approach to copyright and proclamations of the “droit d’auteur,” these ideas evolved slowly and hesitatingly, mainly in order to meet the self-interest of the various members of the book trade. During the ancien régime, the rhetoric of authors’ rights had been promoted by French owners of book privileges as a way of deflecting criticism of monopoly grants and of protecting their profits, and by their critics as a means of attacking the same monopolies and profits. This language was retained in the statutes after the Revolution, so the changes in interpretation and enforcement may not have been universally evident.

By the middle of the nineteenth century, French jurisprudence and philosophy tended to explicate copyrights in terms of rights of personality, but the idea of the moral claim of authors to property rights was not incorporated in the law until early in the twentieth century. The droit d’auteur first appeared in a law of April 1910. In 1920 visual artists were granted a “droit de suite,” or a claim to a portion of the revenues from resale of their works. Subsequent evolution of French copyright laws led to the recognition of the right of disclosure, the right of retraction, the right of attribution, and the right of integrity. These moral rights are (at least in theory) perpetual and inalienable, and thus can be bequeathed to the heirs of the author or artist, regardless of whether or not the work was sold to someone else. The self-interested rhetoric of the owners of monopoly privileges now fully emerged as the keystone of the “French system of literary property” that would shape international copyright laws in the twentieth century.

Copyright in England

England similarly experienced a period during which privileges were granted, such as a seven-year grant from the Chancellor of Oxford University for a work published in 1518. In 1557, the Worshipful Company of Stationers, a publishers’ guild, was founded on the authority of a royal charter and controlled the book trade for the next one hundred and fifty years. The company created and controlled the right of its constituent members to make copies, so in effect this “copy right” was a private property right that existed in perpetuity, independently of state or statutory rights. Enforcement and regulation were carried out by the corporation itself through its Court of Assistants. The Stationers’ Company maintained a register of books, issued licenses, and sanctioned individuals who violated its regulations. Thus, in both England and France, copyright law began as a monopoly grant to benefit and regulate the printers’ guilds, and as a form of surveillance and censorship over public opinion on behalf of the Crown.

The English system of privileges was replaced in 1710 by a copyright statute (the “Statute of Anne,” or “An Act for the Encouragement of Learning, by Vesting the Copies of Printed Books in the Authors or Purchasers of Such Copies, During the Times Therein Mentioned,” 1709-10, 8 Anne, ch. 19). The statute was not directed toward the authors of books and their rights. Rather, its intent was to restrain the publishing industry and destroy its monopoly power. According to the law, the grant of copyright was available to anyone, not just to the Stationers. Instead of a perpetual right, the term was limited to fourteen years, with a right of renewal, after which the work would enter the public domain. The statute also permitted the importation of books in foreign languages.

Subsequent litigation and judicial interpretation added a new and fundamentally different dimension to copyright. In order to protect their perpetual copyright, publishers tried to promote the idea that copyright was based on the natural rights of authors or creative individuals and that, as the agent of the author, those rights devolved to the publisher. If indeed copyrights derived from these inherent principles, they represented property that existed independently of statutory provisions and could be protected under common law. The booksellers pursued a campaign of strategic litigation that culminated in their defeat in the landmark case of Donaldson v. Beckett, 98 Eng. Rep. 257 (1774). The court ruled that authors had a common law right in their unpublished works, but that on publication this right was extinguished by the statute, whose provisions determined the nature and scope of any copyright claims. This transition from publisher’s rights to statutory author’s rights implied that copyright had transmuted from a straightforward license to protect monopoly profits into an expanding property right whose boundaries would henceforth increase at the expense of the public domain.

Between 1735 and 1875 fourteen Acts of Parliament amended the copyright legislation. Copyrights extended to sheet music, maps, charts, books, sculptures, paintings, photographs, dramatic works and songs sung in a dramatic fashion, and lectures outside of educational institutions. Copyright owners had no remedies at law unless they complied with a number of stipulations, which included registration, the payment of fees, and the delivery of free copies of every edition to the British Museum (delinquents were fined), as well as complimentary copies for four libraries, including the Bodleian and Trinity College. The ubiquitous Stationers’ Company administered registration, and the registrar personally benefited from the fees of 5 shillings when the book was registered, an equal amount for each assignment and each copy of an entry, and one shilling for each entry searched. Foreigners could only obtain copyrights if they presented themselves in a part of the British Empire at the time of publication. The book had to be published in the United Kingdom, and prior publication in a foreign country – even in a British colony – was an obstacle to copyright protection.

The term of the copyright in books was the longer of 42 years from publication or the lifetime of the author plus seven years, and after the death of the author a compulsory license could be issued to ensure that works of sufficient public benefit would be published. The “work for hire” doctrine was in force for books, reviews, newspapers, magazines and essays unless a distinct contractual clause specified that the copyright was to accrue to the author. In addition, unauthorized use of a publication was permitted for purposes of “fair use.” Only the copyright holder and his agents were allowed to import the protected works into Britain.

The British Commission that reported on the state of the copyright system in 1878 felt that the laws were “obscure, arbitrary and piecemeal” and were compounded by the confused state of the common law. The numerous uncoordinated laws that were simultaneously in force led to conflicts and unintended defects in the system. The report discussed but did not recommend an alternative to the grant of copyrights, in the form of a royalty system whereby “any person would be entitled to copy or republish the work on paying or securing to the owner a remuneration, taking the form of royalty or definite sum prescribed by law.” The main benefit would accrue to the public in the form of early access to cheap editions, whereas the main cost would fall on publishers, whose risk and return would be negatively affected.

The Commission noted that the implications for the colonies were “anomalous and unsatisfactory.” The publishers in England practiced price discrimination, modifying the initial high prices for copyrighted material through discounts given to reading clubs, circulating libraries and the like, benefits which were not available in the colonies. In 1846 the Colonial Office acknowledged “the injurious effects produced upon our more distant colonists” and passed the Foreign Reprints Act in the following year. This allowed colonies that adopted the terms of British copyright legislation to import cheap reprints of British copyrighted material subject to a tariff of 12.5 percent, the proceeds of which were to be remitted to the copyright owners. However, enforcement of the tariff seems to have been less than vigorous: between 1866 and 1876 only £1,155 was received from the 19 colonies that took advantage of the legislation (£1,084 of it from Canada, which benefited significantly from the American reprint trade). The Canadians argued that it was difficult to monitor imports, so it would be more effective to allow them to publish the reprints themselves and collect taxes for the benefit of the copyright owners. This proposal was rejected, but under the Canadian Copyright Act of 1875 British copyright owners could obtain Canadian copyrights for Canadian editions that were sold at much lower prices than in Britain or even in the United States.
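
Back-of-the-envelope arithmetic (a calculation offered here for illustration, not one the Commission itself reported) shows how little trade the tariff actually captured. At a 12.5 percent rate, remittances of £1,155 over the decade imply declared imports of only

\[
\frac{\pounds 1{,}155}{0.125} \;=\; \pounds 9{,}240,
\]

or less than £50 worth of declared reprints per colony per year across the 19 participating colonies.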

The Commission made two recommendations. First, the bigger colonies with domestic publishing facilities should be allowed to reprint copyrighted material on payment of a license fee set by law. Second, the benefits to the smaller colonies of access to British literature should take precedence over lobbying to repeal the Foreign Reprints Act, which should be better enforced rather than removed entirely. Some had argued that the public interest required Britain to allow the importation of cheap colonial reprints, since the high prices of books were “altogether prohibitory to the great mass of the reading public,” but the Commission felt that this should only be adopted with the consent of the copyright owner. It also devoted a great deal of attention to what was termed “The American Question,” but took the “highest public ground” and recommended against retaliatory policies.

Copyright in the United States

Colonial Copyright

In the period before the Declaration of Independence the American colonies recognized and promoted patenting activity, but copyright protection was not considered to be of equal importance, for a number of reasons. First, in a democracy the claims of the public and the wish to foster freedom of expression were paramount. Second, to a new colony, pragmatic concerns were likely of greater importance than the arts, and the more substantial literary works were imported. Markets were sufficiently narrow that an individual could saturate the market with a first-run printing, and most local publishers produced ephemera such as newspapers, almanacs, and bills. Third, it was unclear that copyright protection was needed as an incentive for creativity, especially since a significant fraction of output was devoted to works such as medical treatises and religious tracts whose authors wished simply to maximize the number of readers, rather than the amount of income they received.

In 1783, Connecticut became the first state to approve an “Act for the encouragement of literature and genius” because “it is perfectly agreeable to the principles of natural equity and justice, that every author should be secured in receiving the profits that may arise from the sale of his works, and such security may encourage men of learning and genius to publish their writings; which may do honor to their country, and service to mankind.” Although this preamble might seem to strongly favor author’s rights, the statute also specified that books were to be offered at reasonable prices and in sufficient quantities, or else a compulsory license would issue.

Federal Copyright Grants

Despite their common source in the intellectual property clause of the U.S. Constitution, copyright policies provided a marked contrast to the patent system. According to Wheaton v. Peters, 33 U.S. 591, 684 (1834): “It has been argued at the bar, that as the promotion of the progress of science and the useful arts is here united in the same clause in the constitution, the rights of the authors and inventors were considered as standing on the same footing; but this, I think, is a non sequitur, for when congress came to execute this power by legislation, the subjects are kept distinct, and very different provisions are made respecting them.”

The earliest federal statute to protect the product of authors was approved on May 31, 1790, “for the encouragement of learning, by securing the copies of maps, charts, and books to the authors and proprietors of such copies, during the times therein mentioned.” John Barry obtained the first federal copyright when he registered his spelling book in the District Court of Pennsylvania, and early grants reflected the same utilitarian character. Policy makers felt that copyright protection would serve to increase the flow of learning and information, and by encouraging publication would contribute to democratic principles of free speech. The diffusion of knowledge would also ensure broad-based access to the benefits of social and economic development. The copyright act required authors and proprietors to deposit a copy of the title of their work in the office of the district court in the area where they lived, for a nominal fee of sixty cents. Registration secured the right to print, publish and sell maps, charts and books for a term of fourteen years, with the possibility of an extension for another like term. Amendments to the original act extended protection to other works including musical compositions, plays and performances, engravings and photographs. Legislators refused to grant perpetual terms, but the length of protection was extended in the general revisions of the laws in 1831 and 1909.

In the case of patents, the rights of inventors, whether domestic or foreign, were widely viewed as coincident with public welfare. In stark contrast, policy makers showed from the very beginning an acute sensitivity to trade-offs between the rights of authors (or publishers) and social welfare. The protections provided to authors under copyrights were as a result much more limited than those provided by the laws based on moral rights that were applied in many European countries. Of relevance here are stipulations regarding first sale, work for hire, and fair use. Under a moral rights-based system, an artist or his heirs can claim remedies if subsequent owners alter or distort the work in a way that allegedly injures the artist’s honor or reputation. According to the first sale doctrine, by contrast, the copyright holder’s rights in a particular copy ended once that copy was sold. In the American system, if the copyright holder’s welfare were enhanced by nonmonetary concerns, these individualized concerns could be addressed and enforced through contract law, rather than through a generic federal statutory clause that would affect all property holders. Similarly, “work for hire” doctrines also repudiated the right of personality, in favor of facilitating market transactions. For example, in 1895 Thomas Donaldson filed a complaint that Carroll D. Wright’s editing of Donaldson’s report for the Census Bureau was “damaging and injurious to the plaintiff, and to his reputation” as a scholar. The court rejected his claim and ruled that as a paid employee he had no rights in the bulletin; to rule otherwise would create problems in situations where employees were hired to prepare data and statistics.

This difficult quest for balance between private and public good was most evident in the copyright doctrine of “fair use,” which (unlike with patents) allowed unauthorized access to copyrighted works under certain conditions. Joseph Story ruled in Folsom v. Marsh, 9 F. Cas. 342 (1841): “we must often, in deciding questions of this sort, look to the nature and objects of the selections made, the quantity and value of the materials used, and the degree in which the use may prejudice the sale, or diminish the profits, or supersede the objects, of the original work.” One of the striking features of the fair use doctrine is the extent to which property rights were defined in terms of market valuations, or the impact on sales and profits, as opposed to a clear holding of the exclusivity of property. Fair use doctrine thus illustrates the extent to which the early policy makers weighed the costs and benefits of private property rights against the rights of the public and the provisions for a democratic society. If copyrights were as strictly construed as patents, this would serve to reduce scholarship, prohibit public access for noncommercial purposes, increase transaction costs for potential users, and inhibit the learning which the statutes were meant to promote.

Nevertheless, like other forms of intellectual property, the copyright system evolved to encompass improvements in technology and changes in the marketplace. Technological changes in nineteenth-century printing included the use of stereotyping, which lowered the costs of reprints, improvements in paper making machinery, and the advent of steam-powered printing presses. Graphic design also benefited from innovations, most notably the development of lithography and photography. The number of new products also expanded significantly, encompassing recorded music and moving pictures by the end of the nineteenth century, and commercial television, video recordings, audiotapes, and digital music in the twentieth century.

The subject matter, scope and duration of copyrights expanded over the course of the nineteenth century to include musical compositions, plays, engravings, sculpture, and photographs. By 1910 the original copyright holder had also been granted derivative rights, such as the right to translate literary works into other languages, to perform works, and to adapt musical compositions, among others. Congress also lengthened the term of copyright several times, although by 1890 the terms of copyright protection in Greece and the United States were the most abbreviated in the world. New technologies stimulated change by creating new subjects for copyright protection, and by lowering the costs of infringement of copyrighted works. In Edison v. Lubin, 122 F. 240 (1903), the lower court rejected Edison’s copyright of moving pictures under the statutory category of photographs. This decision was overturned by the appellate court: “[Congress] must have recognized there would be change and advance in making photographs, just as there has been in making books, printing chromos, and other subjects of copyright protection.” Copyright enforcement was largely the concern of commercial interests, and not of the creative individual. The fraction of copyright plaintiffs who were authors (broadly defined) was initially quite low, and fell continuously during the nineteenth century. By 1900-1909, only 8.6 percent of all plaintiffs in copyright cases were the creators of the item that was the subject of the litigation. Instead, by the same period, the majority of parties bringing cases were publishers and other assignees of copyrights.

In 1909 Congress revised the copyright law and composers were given the right to make the first mechanical reproductions of their music. However, after the first recording, the statute permitted a compulsory license to issue for copyrighted musical compositions: that is to say, anyone could subsequently make their own recording of the composition on payment of a fee that was set by the statute at two cents per recording. In effect, the property right was transformed into a liability rule. The next major legislative change in 1976 similarly allowed compulsory licenses to issue for works that are broadcast on cable television. The prevalence of compulsory licenses for copyrighted material is worth noting for a number of reasons: they underline some of the statutory differences between patents and copyrights in the United States; they reflect economic reasons for such distinctions; and they are also the result of political compromises among the various interest groups that are affected.

Allied Rights

The debate about the scope of patents and copyrights often underestimates or ignores the importance of allied rights that are available through other forms of the law such as contract and unfair competition. A noticeable feature of the case law is the willingness of the judiciary in the nineteenth century to extend protection to noncopyrighted works under alternative doctrines in the common law. More than 10 percent of copyright cases dealt with issues of unfair competition, and 7.7 percent with contracts; a further 12 percent encompassed issues of right to privacy, trade secrets, and misappropriation. For instance, in Keene v. Wheatley et al., 14 F. Cas. 180 (1860), the plaintiff did not have a statutory copyright in the play that was infringed. However, she was awarded damages on the basis of her proprietary common law right in an unpublished work, and because the defendants had taken advantage of a breach of confidence by one of her former employees. Similarly, the courts offered protection against misappropriation of information, such as occurred when the defendants in Chamber of Commerce of Minneapolis v. Wells et al., 111 N.W. 157 (1907) surreptitiously obtained stock market information by peering in windows, eavesdropping, and spying.

Several other examples relate to the more traditional copyright subject of the book trade. E. P. Dutton & Company published a series of Christmas books which another publisher photographed, and offered as a series with similar appearance and style but at lower prices. Dutton claimed to have been injured by a loss of profits and a loss of reputation as a maker of fine books. The firm did not have copyrights in the series, but it essentially claimed a right in the “look and feel” of the books. The court agreed: “the decisive fact is that the defendants are unfairly and fraudulently attempting to trade upon the reputation which plaintiff has built up for its books. The right to injunctive relief in such a case is too firmly established to require the citation of authorities.” In a case that will resonate with academics, a surgery professor at the University of Pennsylvania was held to have a common law property right in the lectures he presented, and a student could not publish them without his permission. Titles could not be copyrighted, but were protected as trademarks and under unfair competition doctrines. In this way, in numerous lawsuits G. & C. Merriam Co., the original publishers of Webster’s Dictionary, restrained the actions of competitors who published the dictionary once the copyrights had expired.

International Copyrights in the United States

The U.S. was long a net importer of literary and artistic works, especially from England, which implied that recognition of foreign copyrights would have led to a net deficit in international royalty payments. The Copyright Act recognized this when it specified that “nothing in this act shall be construed to extend to prohibit the importation or vending, reprinting or publishing within the United States, of any map, chart, book or books … by any person not a citizen of the United States.” Thus, the statutes explicitly authorized Americans to take free advantage of the cultural output of other countries. As a result, it was alleged that American publishers “indiscriminately reprinted books by foreign authors without even the pretence of acknowledgement.” The tendency to reprint foreign works was encouraged by the existence of tariffs on imported books that ranged as high as 25 percent.

The United States stood out in contrast to countries such as France, where Louis Napoleon’s Decree of 1852 prohibited counterfeiting of both foreign and domestic works. Other countries which were affected by American piracy retaliated by refusing to recognize American copyrights. Despite the lobbying of numerous authors and celebrities on both sides of the Atlantic, the American copyright statutes did not allow for copyright protection of foreign works for fully one century. As a result, American publishers and producers freely pirated foreign literature, art, and drama.

Effects of Copyright Piracy

What were the effects of piracy? First, did the American industry suffer from cheaper foreign books being dumped on the domestic market? This does not seem to have been the case. After controlling for the type of work, the cost of the work, and other variables, the prices of American books were lower than the prices of foreign books. American book prices may have been lower to reflect lower perceived quality or other factors that caused imperfect substitutability between foreign and local products. As might be expected, prices were not exogenously and arbitrarily fixed, but varied in accordance with a publisher’s estimation of market factors such as the degree of competition and the responsiveness of demand. The reading public appears to have gained from the lack of copyright, which increased access to the superior products of more developed markets in Europe, and in the long run this likely improved both the demand for and the supply of domestic science and literature.
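
“Controlling for” here refers to a comparison of the hedonic-regression type. A minimal sketch, with illustrative variable names rather than the actual specification used in the underlying studies, is

\[
\ln(\mathit{price}_i) \;=\; \beta_0 \;+\; \beta_1\,\mathit{Foreign}_i \;+\; X_i'\gamma \;+\; \varepsilon_i,
\]

where Foreign indicates a foreign title and X collects characteristics such as the type and production cost of the work. The finding reported above corresponds to an estimate of β1 > 0: foreign books commanded a price premium over otherwise comparable American books.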

Second, according to observers, professional authorship in the United States was discouraged because it was difficult to compete with established authors such as Scott, Dickens and Tennyson. Whether native authors were deterred by foreign competition would depend on the extent to which foreign works prevailed in the American market. Early in American history the majority of books were reprints of foreign titles. However, nonfiction titles written by foreigners were less likely to be substitutable for nonfiction written by Americans; consequently, the supply of nonfiction soon tended to be provided by native authors. From an early period grammars, readers, and juvenile texts were also written by Americans. Geology, geography, history and similar works had to be adapted or completely rewritten to be appropriate for the American market, which reduced their attractiveness as reprints. Thus, publishers of schoolbooks, medical volumes and other nonfiction did not feel that the reforms of 1891 were relevant to their undertakings. Academic and religious books were less likely to be written for monetary returns, and their authors probably benefited from the wider circulation that the lack of international copyright encouraged. However, the writers of these works declined in importance relative to writers of fiction, a category that grew from 6.4 percent before 1830 to 26.4 percent by the 1870s.

On the other hand, foreign authors dominated the field of fiction for much of the century. One study estimates that about fifty percent of all fiction best sellers in the antebellum period were pirated from foreign works. In 1895 American authors accounted for two of the top ten best sellers, but by 1910 nine of the top ten were written by Americans. This fall over time in the fraction of foreign authorship may have been due to a natural evolutionary process, as the development of the market for domestic literature encouraged specialization. The growth in the number of fiction authors was associated with an increase in the number of books per author over the same period. Improvements in transportation and the increase in the academic population probably played a large role in enabling individuals who lived outside the major publishing centers to become writers. As the market expanded, a larger fraction of writers could become professionals.

Although the lack of copyright protection may not have discouraged authors, this does not imply that intellectual property policy in this dimension had no costs. It is likely that the lack of foreign copyrights led to some misallocation of efforts or resources, such as in attempting to circumvent the rules. Authors changed their residence temporarily when books were about to be published in order to qualify for copyright. Others obtained copyrights by arranging to co-author with a foreign citizen. T. H. Huxley adopted this strategy, arranging to co-author with “a young Yankee friend … Otherwise the thing would be pillaged at once.” An American publisher suggested that Kipling should find “a hack writer, whose name would be of use simply on account of its carrying the copyright.” Harriet Beecher Stowe proposed a partnership with Elizabeth Gaskell, so they could “secure copyright mutually in our respective countries and divide the profits.”

It is widely acknowledged that copyrights in books tended to be the concern of publishers rather than of authors (although the two are naturally not independent of each other). As a result of lack of legal copyrights in foreign works, publishers raced to be first on the market with the “new” pirated books, and the industry experienced several decades of intense, if not quite “ruinous” competition. These were problems that publishers in England had faced before, in the market for books that were uncopyrighted, such as Shakespeare and Fielding. Their solution was to collude in the form of strictly regulated cartels or “printing congers.” The congers created divisible property in books that they traded, such as a one hundred and sixtieth share in Johnson’s Dictionary that was sold for £23 in 1805. Cooperation resulted in risk sharing and a greater ability to cover expenses. The unstable races in the United States similarly settled down during the 1840s to collusive standards that were termed “trade custom” or “courtesy of the trade.”

The industry achieved relative stability because the dominant firms cooperated in establishing synthetic property rights in foreign-authored books. American publishers made payments (termed “copyrights”) to foreign authors to secure early sheets, and other firms recognized their exclusive property in the “authorized reprint.” Advance payments to foreign authors not only served to ensure the coincidence of publishers’ and authors’ interests – they were also recognized by “reputable” publishers as “copyrights.” These exclusive rights were tradable, and enforced by threats of predatory pricing and retaliation. Such practices suggest that publishers were able to simulate the legal grant through private means.

However, private rights naturally did not confer property rights that could be enforced at law. The case of Sheldon v. Houghton, 21 F. Cas. 1239 (1865), illustrates that these rights were considered to be “very valuable, and is often made the subject of contracts, sales, and transfers, among booksellers and publishers.” The very fact that a firm would file a plea for the court to protect its claim indicates how vested a right it had become. The plaintiff argued that “such custom is a reasonable one, and tends to prevent injurious competition in business, and to the investment of capital in publishing enterprises that are of advantage to the reading public.” The courts rejected this claim, since synthetic rights differed from copyrights in the degree of security that was offered by the enforcement power of the courts. Nevertheless, these title-specific rights of exclusion decreased uncertainty, enabled publishers to recoup their fixed costs, and avoided the wasteful duplication of resources that would otherwise have occurred.

It was not until 1891 that the Chace Act granted copyright protection to selected foreign residents. Thus, after a century of lobbying by interested parties on both sides of the Atlantic, based on reasons that ranged from the economic to the moral, copyright laws only changed when the United States became more competitive in the international market for literary and artistic works. However, the act also included significant concessions to printers’ unions and printing establishments in the form of “manufacturing clauses.” First, a book had to be published in the United States before or at the same time as its publication date in its country of origin. Second, the work had to be printed in the United States, or printed from type set in the United States or from plates made from type set in the United States. Copyright protection still depended on conformity with stipulations such as formal registration of the work. These clauses resulted in U.S. failure to qualify for admission to the international Berne Convention until 1988, more than one hundred years after the first Convention.

After the copyright reforms of 1891, both English and American authors were disappointed to find that the change in the law did not lead to significant gains. Foreign authors realized that they might even have benefited from the lack of copyright protection in the United States. Despite the cartelization of publishing, competition for synthetic copyrights ensured that foreign authors were able to obtain the payments American firms made to secure the right to be first on the market. It can also be argued that foreign authors were able to reap higher total returns from the expansion of the market through piracy. The lack of copyright protection may have functioned as a form of price discrimination, whereby the product was sold at a higher price in the developed country and at a lower or zero price in the poorer country. Returns under such circumstances may have been higher for goods with demand externalities or network effects, such as “bestsellers,” where consumer valuation of the book increased with the size of the market. For example, Charles Dickens, Anthony Trollope, and other foreign writers were able to gain considerable income from complementary lecture tours in the extensive United States market.
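
A minimal sketch of one channel of this argument, using illustrative notation rather than a model drawn from the sources: suppose a foreign author earned p per copy on each of n_B copies sold in the protected British market, nothing on the n_A pirated American copies, and complementary income (lecture fees, sales of later titles) of λ per reader. Total returns were then

\[
R \;=\; p\,n_B \;+\; \lambda\,(n_B + n_A),
\]

which is increasing in n_A: so long as pirated copies did not displace protected sales, wider unauthorized circulation raised the author’s total return.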

Harmonization of Copyright Laws

In view of the strong protection accorded to inventors under the U.S. patent system, to foreign observers its copyright policies appeared all the more reprehensible. The United States, the most liberal in its policies towards patentees, had led the movement for harmonization of patent laws. In marked contrast, throughout the history of the U.S. system, its copyright grants were in general more abridged than those of almost all other countries in the world. The term of copyright grants to American citizens was among the shortest in the world, the country applied the broadest interpretation of fair use doctrines, and the validity of the copyright depended on strict compliance with the requirements. U.S. failure to recognize the rights of foreign authors was also unique among the major industrial nations. Throughout the nineteenth century proposals to reform the law and to acknowledge foreign copyrights were repeatedly brought before Congress and rejected. Even the bill that finally recognized international copyrights almost failed, passed only at the last possible moment, and required longstanding exemptions in favor of printing workers and enterprises.

In a parallel fashion to the status of the United States in patent matters, France’s influence was evident in the subsequent evolution of international copyright laws. Other countries had long recognized the rights of foreign authors in national laws and bilateral treaties, but France stood out in its favorable treatment of domestic and foreign copyrights as “the foremost of all nations in the protection it accords to literary property.” This was especially true of its concessions to foreign authors and artists. For instance, France allowed copyrights to foreigners conditioned on manufacturing clauses in 1810, and granted foreign and domestic authors equal rights in 1852. In the following decade France entered into almost two dozen bilateral treaties, prompting a movement towards multilateral negotiations, such as the Congress on Literary and Artistic Property in 1858. The International Literary and Artistic Association, which the French novelist Victor Hugo helped to establish, conceived of and organized the Convention which first met in Berne in 1883.

The Berne Convention included a number of countries that wished to establish an “International Union for the Protection of Literary and Artistic Works.” The preamble declared their intent to “protect effectively, and in as uniform a manner as possible, the rights of authors over their literary and artistic works.” The actual Articles were more modest in scope, requiring national treatment of authors belonging to the Union and minimum protection for translation and public performance rights. The Convention authorized the establishment of a physical office in Switzerland, whose official language would be French. The rules were revised in 1908 to extend the duration of copyright and to include modern technologies. Perhaps the most significant aspect of the convention was not its specific provisions, but the underlying property rights philosophy which was decidedly from the natural rights school. Berne abolished compliance with formalities as a prerequisite for copyright protection since the creative act itself was regarded as the source of the property right. This measure had far-reaching consequences, because it implied that copyright was now the default, whereas additions to the public domain would have to be achieved through affirmative actions and by means of specific limited exemptions. In 1928 the Berne Convention followed the French precedent and acknowledged the moral rights of authors and artists.

Unlike its leadership in patent conventions, the United States declined an invitation to the pivotal copyright conference in Berne in 1883; it attended but refused to sign the 1886 agreement of the Berne Convention. Instead, the United States pursued international copyright policies in the context of the weaker Universal Copyright Convention (UCC), which was adopted in 1952 and formalized in 1955 as a complementary agreement to the Berne Convention. The UCC membership included many developing countries that did not wish to comply with the Berne Convention because they viewed its provisions as overly favorable to the developed world. The United States was among the last wave of entrants into the Berne Convention when it finally joined in 1988. In order to do so it removed prerequisites for copyright protection such as registration, and also lengthened the term of copyrights. However, it still has not introduced federal legislation in accordance with Article 6bis, which declares the moral rights of authors “independently of the author’s economic rights, and even after the transfer of the said rights.” Similarly, individual countries continue to differ in the extent to which multilateral provisions govern domestic legislation and practices.

The quest for harmonization of intellectual property laws resulted in a “race to the top,” directed by the efforts and self-interest of the countries with the strongest property rights. The movement to harmonize patents was driven by American efforts to ensure that its extraordinary patenting activity was remunerated beyond as well as within its borders. At the same time, the United States ignored international conventions to unify copyright legislation. Nevertheless, the harmonization of copyright laws proceeded, promoted by France and other civil law regimes, which urged stronger protection for authors based on their “natural rights” even as they infringed on the rights of foreign inventors. The net result was that international pressure was applied to developing countries in the twentieth century to establish strong patents and strong copyrights, although no individual developed country had adhered to both concepts simultaneously during its own early growth phase. This occurred even though theoretical models did not offer persuasive support for intellectual property harmonization, and indeed suggested that uniform policies might be detrimental even to some developed countries and to overall global welfare.

Conclusion

The past three centuries stand out in terms of the diversity across nations in intellectual property institutions, but the nineteenth century saw the origins of the movement towards the “harmonization” of laws that at present dominates global debates. Among the now-developed countries, the United States stood out for its conviction that broad access to intellectual property rules and standards was key to achieving economic development. Europeans were less concerned about enhancing mass literacy and public education, and viewed copyright owners as inherently meritorious and deserving of strong protection. European copyright regimes thus evolved in the direction of author’s rights, while the United States lagged behind the rest of the world in terms of both domestic and foreign copyright protection.

By design, American statutes differentiated between patents and copyrights in ways that seemed warranted if the objective was to increase social welfare. The patent system early on discriminated between nonresident and domestic inventors, but within a few decades changed to protect the right of any inventor who filed for an American patent regardless of nationality. The copyright statutes, in contrast, openly encouraged piracy of foreign goods on an astonishing scale for one hundred years, in defiance of the recriminations and pressures exerted by other countries. The American patent system required an initial search and examination that ensured the patentee was the “first and true” creator of the invention in the world, whereas copyrights were granted through mere registration. Patents were based on the assumption of novelty and held invalid if this assumption was violated, whereas essentially similar but independent creation was copyrightable. Copyright holders were granted the right to derivative works, whereas the patent holder was not. Unauthorized use of patented inventions was prohibited, whereas “fair use” of copyrighted material was permissible if certain conditions were met. Patented inventions involved greater initial investments, effort, and novelty than copyrighted products and tended to be more responsive to material incentives, whereas in many cases cultural goods would still be produced, or their output only slightly reduced, in the absence of such incentives. Fair use was not allowed in the case of patents because the disincentive effect was likely to be higher, while the costs of negotiation between the patentee and the narrower market of potential users would generally be lower. If copyrights were as strongly enforced as patents, the change would benefit publishers and a small literary elite at the cost of social investments in learning and education.

The United States created a utilitarian market-based model of intellectual property grants which created incentives for invention, but always with the primary objective of increasing social welfare and protecting the public domain. The checks and balances of interest group lobbies, the legislature and the judiciary worked effectively as long as each institution was relatively well-matched in terms of size and influence. However, a number of legal and economic scholars are increasingly concerned that the political influence of corporate interests, the vast number of uncoordinated users over whom the social costs are spread, and international harmonization of laws have upset these counterchecks, leading to over-enforcement at both the private and public levels.

International harmonization with European doctrines introduced significant distortions in the fundamental principles of American copyright and its democratic provisions. One of the most significant of these changes was also one of the least debated: compliance with the precepts of the Berne Convention accorded automatic copyright protection to all creations on their fixation in tangible form. This rule reversed the relationship between copyright and the public domain that the U.S. Constitution stipulated. According to original U.S. copyright doctrines, the public domain was the default, and copyright merely comprised a limited exemption to the public domain; after the alignment with Berne, copyright became the default, and the rights of the public and of the public domain now merely comprise a limited exception to the primacy of copyright. The pervasive uncertainty that characterizes the intellectual property arena today leads risk-averse individuals and educational institutions to err on the side of abandoning their right to free access rather than invite potential challenges and costly litigation. A number of commentators are equally concerned about other dimensions of the globalization of intellectual property rights, such as the movement to emulate European grants of property rights in databases, which has the potential to inhibit diffusion and learning.

Copyright law and policy has always altered, and been altered by, social, economic and technological changes, in the United States and elsewhere. However, the one constant across the centuries is that copyright protection has turned on crucial political questions to a far greater extent than on its economic implications.

Additional Readings

Economic History

B. Zorina Khan. The Democratization of Invention: Patents and Copyrights in American Economic Development, 1790-1920. New York: Cambridge University Press, 2005.

Law and Economics

Besen, Stanley, and L. Raskind. “An Introduction to the Law and Economics of Intellectual Property.” Journal of Economic Perspectives 5 (1991): 3-27.

Breyer, Stephen. “The Uneasy Case for Copyright: A Study of Copyright in Books, Photocopies and Computer Programs.” Harvard Law Review 84 (1970): 281-351.

Gallini, Nancy and S. Scotchmer. “Intellectual Property: When Is It the Best Incentive System?” Innovation Policy and the Economy 2 (2002): 51-78.

Gordon, Wendy, and R. Watt, editors. The Economics of Copyright: Developments in Research and Analysis. Cheltenham, UK: Edward Elgar, 2002.

Hurt, Robert M., and Robert M. Shuchman. “The Economic Rationale of Copyright.” American Economic Review Papers and Proceedings 56 (1966): 421-32.

Johnson, William R. “The Economics of Copying.” Journal of Political Economy 93 (1985): 158-74.

Landes, William M., and Richard A. Posner. “An Economic Analysis of Copyright Law.” Journal of Legal Studies 18 (1989): 325-63.

Landes, William M., and Richard A. Posner. The Economic Structure of Intellectual Property Law. Cambridge, MA: Harvard University Press, 2003.

Liebowitz, S. J. “Copying and Indirect Appropriability: Photocopying of Journals.” Journal of Political Economy 93 (1985): 945-57.

Merges, Robert P. “Contracting into Liability Rules: Intellectual Property Rights and Collective Rights Organizations.” California Law Review 84, no. 5 (1996): 1293-1393.

Meurer, Michael J. “Copyright Law and Price Discrimination.” Cardozo Law Review 23 (2001): 55-148.

Novos, Ian E., and Michael Waldman. “The Effects of Increased Copyright Protection: An Analytic Approach.” Journal of Political Economy 92 (1984): 236-46.

Plant, Arnold. “The Economic Aspects of Copyright in Books.” Economica 1 (1934): 167-95.

Takeyama, L. “The Welfare Implications of Unauthorized Reproduction of Intellectual Property in the Presence of Demand Network Externalities.” Journal of Industrial Economics 42 (1994): 155–66.

Takeyama, L. “The Intertemporal Consequences of Unauthorized Reproduction of Intellectual Property.” Journal of Law and Economics 40 (1997): 511–22.

Varian, Hal. “Buying, Sharing and Renting Information Goods.” Journal of Industrial Economics 48, no. 4 (2000): 473–88.

Varian, Hal. “Copying and Copyright.” Journal of Economic Perspectives 19, no. 2 (2005): 121-38.

Watt, Richard. Copyright and Economic Theory: Friends or Foes? Cheltenham, UK: Edward Elgar, 2000.

History of Economic Thought

Hadfield, Gillian K. “The Economics of Copyright: A Historical Perspective.” Copyright Law Symposium (ASCAP) 38 (1992): 1-46.

History

Armstrong, Elizabeth. Before Copyright: The French Book-Privilege System, 1498-1526. Cambridge: Cambridge University Press, 1990.

Birn, Raymond. “The Profits of Ideas: Privileges en librairie in Eighteenth-century France.” Eighteenth-Century Studies 4, no. 2 (1970-71): 131-68.

Bugbee, Bruce. The Genesis of American Patent and Copyright Law. Washington, DC: Public Affairs Press, 1967.

Dawson, Robert L. The French Booktrade and the “Permission Simple” of 1777: Copyright and the Public Domain. Oxford: Voltaire Foundation, 1992.

Hackett, Alice P., and James Henry Burke. Eighty Years of Best Sellers, 1895-1975. New York: Bowker, 1977.

Nowell-Smith, Simon. International Copyright Law and the Publisher in the Reign of Queen Victoria. Oxford: Clarendon Press, 1968.

Patterson, Lyman. Copyright in Historical Perspective. Nashville: Vanderbilt University Press, 1968.

Rose, Mark. Authors and Owners: The Invention of Copyright. Cambridge, MA: Harvard University Press, 1993.

Saunders, David. Authorship and Copyright. London: Routledge, 1992.

Citation: Khan, B. Zorina. “An Economic History of Copyright in Europe and the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-copyright-in-europe-and-the-united-states/

A History of the U.S. Carpet Industry

Randall L. Patton, Kennesaw State University

Paul Krugman (1993, p. 5) has written that “the most striking feature of the geography of economic activity… is surely concentration” (emphasis in the original). There are few better examples of highly concentrated economic activity than the U.S. carpet industry. Today, carpet mills located within a 65-mile radius of Dalton, Georgia, produce about 85% of the carpet sold in the U.S. market. The U.S. industry accounts for about 45% of the world’s carpet production. While many segments of the textile industry have struggled in the post-World War II era, carpet makers have prospered. The U.S. carpet industry also exemplifies the southward drift of textile production within the United States during the twentieth century. Indeed, it is probably useful to conceptualize the U.S. carpet industry as two distinct industries with different trajectories. The first American carpet industry was, like other textile segments, a product of technology and skills borrowed from the United Kingdom, and it struggled throughout its existence against imports. The second American carpet industry grew from deep southern roots and utilized locally developed technology and skills. The second industry also came along at just the right time to ride the boom in consumer spending associated with the economic golden age that followed World War II.

The First U.S. Carpet Industry

The first U.S. carpet industry emerged at the end of the eighteenth century, when skilled weavers produced carpets and rugs with handloom technology. In its early years, the industry encountered the same problem as other textile manufacturers – imports. Congress protected the infant U.S. industry, along with textiles generally, in 1816 and raised protective tariffs in the 1820s. In an early survey of the industry conducted in 1834, Timothy Pitkin found 20 carpet mills producing about 1 million square yards. By 1850, a government survey found 116 mills employing more than 6,000 workers and producing 8 million square yards of carpets and rugs. Twenty years later, U.S. carpet mills numbered 215, wove more than 20 million square yards, and employed 12,000 persons. In the nineteenth century Americans used carpet to cover poor quality, soft wood floors. A commentator wrote in 1872 that the “general use of carpets was a necessity some few years ago, from the fact that the floors of our houses were generally built of such poor material, and in such a shiftless manner, that the floor was too unsightly to be left exposed” (Greeley, 1872). The mid-nineteenth century saw the introduction of the varnished hardwood floor, and with it came a declining demand for wall-to-wall carpets and an increasing demand for smaller rugs to provide stylistic accents.

Employment and production figures indicate that, although there was an incremental increase in productivity, production effectively rose in concert with the number of workers. Erastus Bigelow introduced power loom technology for various types of carpeting in the early 1840s, and others quickly followed with competing designs. Though Bigelow’s idea – the use of power looms in carpet production – would eventually result in great productivity gains, his own looms were not the primary source of those gains, nor did the gains materialize overnight. Handloom production outweighed power loom production in the Philadelphia area as late as the 1870s. Power looms were expensive, and manufacturers had great difficulty matching the quality of goods produced on handlooms.

Boom and Bust in the First Half of the Twentieth Century

After 1870, refinements in power loom technology allowed manufacturers to produce reasonable substitutes for higher quality handloom woven goods. As the price of higher quality weaves declined, consumers traded up and production of the cheapest carpets fell. Large rugs became a staple in upper-middle class American homes by the early twentieth century. Sales ballooned to more than 83 million square yards by 1923. Firms such as Bigelow-Hartford produced lavish catalogs and advertised products direct to consumers in the early twentieth century, bypassing the traditional commission agents who had dominated marketing in the nineteenth century. The industry seemed, however, to have peaked in 1923. Sales fell off even before the Great Depression, and the economic disaster of the 1930s offered no respite. Firms such as Bigelow and Mohawk struggled. Industry production hovered in the 60 million square yard range throughout the 1930s. Most mills converted to war production during the Second World War, a move that helped forestall a deeper crisis. Just after World War II, the industry experienced a brief boom, with sales jumping to nearly 90 million square yards in 1948, but the boom quickly turned bust. Even the seemingly robust sales of 1948 amounted to a scant increase over the peak of a quarter century earlier; measured against population growth, the industry’s sales had actually declined. Worse still, sales fell through the early 1950s back into the 60 million yard range.

The Second U.S. Carpet Industry

Carpet in the United States had three salient characteristics in 1950. Carpets were (1) woven on power looms out of (2) wool in (3) mills located in the northeastern United States. In just one short decade, each of those critical elements had changed dramatically. By 1960, most carpet in the United States was made on tufting machines from synthetic fibers such as nylon in factories located in the southeastern United States – and the vast majority of these new mills were located in and around the Appalachian foothills town of Dalton, Georgia.

The U.S. economy entered a prolonged boom period after World War II that many historians have labeled the “golden age.” The release of pent-up consumer demand associated with the sacrifices of World War II, Keynesian government policies aimed at maintaining a high level of demand, and other factors helped produce a period of unparalleled economic growth. Northeastern carpet manufacturers tried a variety of approaches during the late 1940s and early 1950s to reverse their industry’s fortunes, but had little success. Annual per household carpet consumption stood at 1.97 square yards in 1950, virtually unchanged from the beginning of the twentieth century. Industry executives expressed increasing frustration throughout the early 1950s with their inability to tap the booming housing market of the postwar period. Many northern carpet mills began to open new plants in the South. Moving south allowed older firms to escape unionized work forces, take advantage of the region’s lower labor costs and, occasionally, benefit from incentives offered by state and local governments in the region (Greenville, Mississippi, built a $4 million facility to entice the Alexander Smith Company in the early 1950s, for example). Bigelow, Mohawk, and other northeastern companies built facilities in Virginia, South Carolina, Georgia, and Mississippi during the 1950s.

With few exceptions, these facilities produced carpet using weaving technology. The shining new mills in Greenville, Mississippi, and Liberty, South Carolina, used the latest and most productive looms and were constructed according to the most up-to-date standards – single-floor construction and concrete floors, for example, to make the use of lift trucks possible. Yet the industry encountered one insurmountable barrier. In spite of decades of incremental progress, woven carpets were still too expensive to penetrate the working class market. The wholesale price of woven carpets rose only slightly during the 1950s, and holding increases to such modest levels was interpreted within the industry as something of a success.

The woven carpet manufacturers also tried other strategies to boost sales in the 1950s. Some manufacturers experimented with selling carpet “on time” (credit) through retailers; others emphasized style and elegance. The chief impact of the advertising campaigns seems to have been to raise awareness of and desire for carpeting in general. In 1949, stimulating demand for carpet in general would have seemed a winning strategy; within a decade, much of that demand was being met by a cheaper southern substitute.

Tufted Textiles Take the Floor

During the same decade, however, a new southern industry produced a cheaper substitute for woven goods – tufted carpets and rugs, whose sales grew from near zero in the late 1940s to more than 100 million square yards by 1958. The origins of this new carpet industry in the South can be traced to a combination of purposeful action and historical accident.

The Tufted Bedspread Industry

The historical accident, as Krugman called it, was the revival of the hand tufting tradition in northwest Georgia (and elsewhere in the region) in the early twentieth century. To create a tufted bedspread, the craftsperson inserted raised tufts of yarn into a pre-woven piece of backing material (generally cotton sheeting) to form a pattern, then boiled the sheeting to shrink it and lock in the tufts of yarn. Catherine Evans, a young woman living near Dalton, Georgia, saw an old hand tufted bedspread at a friend’s house in 1895. Evans duplicated the design and made a similar spread as a wedding gift. Evans and some of her relatives began teaching other area women the art of tufting. From these beginnings, a cottage industry developed. By the 1920s, local entrepreneurs had created numerous “spread houses.” The spread houses operated a putting out system, sending “haulers” into the countryside with sheeting and yarn. The haulers returned later to pay the farm families for their hand work and pick up tufted spreads for finishing – washing and, for some, dyeing. These spreads found a ready market, not just regionally, but in the northeast as well. (Wanamaker’s department stores stocked Georgia bedspreads in the 1930s.) This cottage industry became a source of economic growth in north Georgia even during the Great Depression.

Here the residue of purposeful action intersected with Catherine Evans’ historical accident. By the 1920s, the South had become home to the lion’s share of U.S. textile production. Some of this shift southward was due to capital movement from North to South, but most of the shift could be accounted for by new southern firms – large firms such as Georgia’s West Point Manufacturing and North Carolina’s Burlington Mills and smaller firms like Dalton, Georgia’s Crown Cotton Mill and American Hosiery Mill. After the Civil War, and especially after 1880, southern firms had borrowed northern technology, begun at the bottom of the quality chain with the coarsest fabrics, and initiated what might be called a process of regional learning. Much of this development was the result of a purposeful effort to industrialize the region. By the early twentieth century, the South still had not developed a regional textile machine-making industry, but the cotton mills, hosiery mills, and other textile firms had recruited and trained a large number of mechanics to maintain machinery purchased in the northeast. Mechanics from the Dalton area and nearby Chattanooga began adapting sewing machines for the purpose of inserting raised yarn tufts, and in the early 1930s many of the spread houses moved toward becoming spread mills, or factories. Spread mill owners employed a largely female work force to operate the sewing machines that now created the raised patterns.

From Spread Mills to Carpet Mills

By the end of the 1930s, a number of these firms had begun to experiment with multi-needle machines that could tuft wider swaths of backing material more quickly. Some firms, such as the cleverly named Cabin Crafts (meant to conjure the image of a cottage industry that had already ceased to exist), had begun making small rugs by covering the entire surface of a piece of backing material with tufts. Hosiery mill mechanics like Albert and Joe Cobble founded firms in the southern industrial dynamo of Chattanooga, Tennessee (less than 30 miles from Dalton) to build special machines for the tufted bedspread and small rug industry. From these technological roots, area entrepreneurs began experimenting with making large rugs and wall-to-wall carpeting with the tufting process. About 1949, the Cobble Brothers firm and the innovative Dalton spread maker Cabin Crafts introduced tufting machinery wide enough to produce carpeting in a single pass. Carpet makers could buy cheap pre-woven backing materials. Manufacturers tried cotton, with mostly poor results; Indian jute eventually became the primary backing material for tufted carpets and remained so through the 1960s. In the 1970s, manufacturers developed suitable synthetic substitutes for jute.

The traditional woven carpet industry primarily used wool. (One manufacturer lamented in 1950 that it was “unfortunate that the carpet industry was tied to the back of a sheep.”) Wool made an excellent material for floor coverings – it was durable and resilient. The new southern tufting mills used cotton yarn at first. Cotton did not compare with wool as a floor covering material – it crushed easily and wore more quickly. Yet already by 1955, southern carpet mills were selling more carpets than northern mills, in spite of the clearly inferior nature of the product. The key was price: the wholesale price of tufted carpet was about half that of woven products. Consumer surveys in the 1950s demonstrated that few carpet buyers could name the manufacturer of the carpets they had purchased. The same consumers were almost without exception unable to distinguish between a tufted and a woven construction by visual inspection. The old woven firms’ ad campaigns of the 1950s probably helped move more tufted carpet than woven.

Synthetic Fibers

The tufted carpet industry experienced a meteoric rise in the 1950s, but many skeptics saw it as a fad that would fade. According to industry observers, one machinery executive quipped that in the 1950s “every year was the last big year for tufting.” The obvious inferiority of cotton made the argument plausible. Surely consumers, many in the old woven industry argued, would eventually tire of placing glorified bedspreads on their floors. Tufted manufacturers experimented with rayon (disastrously) and staple (chopped, spun) nylon (with some success) in the 1950s. The most significant breakthrough in terms of raw materials came in the mid-1950s from the DuPont Corporation. Woven manufacturers and others had experimented with DuPont’s nylon as a carpet fiber, but nylon lacked the bulk needed in floor coverings. DuPont helped ensure that the bust never came by developing bulked continuous filament (BCF) nylon in the mid-1950s. DuPont’s initiative was clearly stimulated by the growth of carpet sales; in essence, tufted manufacturers created a market large enough to justify DuPont’s research and development costs. DuPont even helped the new industry along by launching its own ad campaign for carpets made with its trademark 501 nylon in the late 1950s and early 1960s.

BCF nylon helped ensure the long-term future of the tufted carpet industry. Tufted carpets used, and still use, a variety of fibers. Staple nylon could be used in constructions and styles that were not possible with a continuous filament yarn – plush, lustrous constructions. And in recent years, the industry has made increasing use of polypropylene and other continuous filament yarns. DuPont’s BCF nylon (and similar products introduced by Monsanto a bit later), however, fit perfectly with the least expensive, low pile height, loop constructions that sold best in the emerging modest income market.

By the end of the 1950s, the new tufted carpet industry had raced past the old woven industry. While the total volume of carpet sales skyrocketed, woven sales actually fell. Tufted products accounted for all the growth in the industry through the 1970s. Tufted carpet sales increased from about 6 million square yards in 1951 to nearly 400 million yards in 1968. Carpet finally became a staple of middle and working class home furnishings – indeed, it became the default floor covering over much of the nation for decades. The logjam had been broken by product substitution. Per household sales increased for the first time since the turn of the century. By 1990, Americans consumed over 12 square yards of carpet per family per year, up from 1.97 in the early 1950s. Woven sales drifted downward in the same period from 67 million yards to just over 40 million. Woven products did not disappear. High-end consumers still sought the assumed quality of woven goods, and woven products continued to dominate specialty commercial markets – hotel lobbies, casinos, etc. But tufted carpet achieved total dominance of not just the residential carpet market, but the residential flooring market in general.

Table 1

Average Mill Value of Carpet Shipments, 1950-1965 (price per square yard)

Year    All Broadloom Carpet and Rugs    Woven    Tufted
1950    $6.26                            $6.26    n.a.
1955    $5.30                            $6.19    $3.36
1960    $4.50                            $6.56    $3.49
1965    $3.76                            $6.09    $3.40

Table 2

Carpet Industry Output, 1951-1968 (square yards)

Year    Tufted Carpet Shipments    Woven Carpet Shipments    Total Industry Shipments
1951    6,076,000                  66,924,000                73,000,000
1960    113,764,000                52,044,000                165,808,000
1963    250,000,000                41,000,000                291,000,000
1968    395,000,000                40,000,000                435,000,000
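
A rough calculation makes the scale of this substitution concrete. The short Python sketch below is an illustrative back-of-the-envelope calculation added here (it is not part of the original article); it derives tufted carpet’s share of shipments and its implied compound annual growth rate from Table 2, along with the tufted-to-woven price ratio from Table 1 and the per-household consumption figures quoted in the text.

    # Illustrative calculations from Tables 1 and 2 above; a sketch only,
    # not part of the original article.

    # Table 2: shipments in square yards
    shipments = {
        1951: {"tufted": 6_076_000, "woven": 66_924_000, "total": 73_000_000},
        1960: {"tufted": 113_764_000, "woven": 52_044_000, "total": 165_808_000},
        1963: {"tufted": 250_000_000, "woven": 41_000_000, "total": 291_000_000},
        1968: {"tufted": 395_000_000, "woven": 40_000_000, "total": 435_000_000},
    }

    # Tufted carpet's share of total industry shipments, by year
    for year, row in sorted(shipments.items()):
        print(f"{year}: tufted share = {row['tufted'] / row['total']:.1%}")
    # 1951: 8.3%, 1960: 68.6%, 1963: 85.9%, 1968: 90.8%

    # Implied compound annual growth rate of tufted shipments, 1951-1968
    growth = (shipments[1968]["tufted"] / shipments[1951]["tufted"]) ** (1 / 17) - 1
    print(f"Tufted CAGR, 1951-1968: {growth:.1%}")  # roughly 28% per year

    # Table 1: 1955 average mill values per square yard; tufted carpet sold
    # for roughly half the price of woven goods, as the text notes
    print(f"Tufted/woven price ratio, 1955: {3.36 / 6.19:.2f}")  # about 0.54

    # Per-household consumption (from the text): 1.97 sq. yd. in 1950 to
    # over 12 sq. yd. by 1990
    print(f"Per-household growth: {12 / 1.97:.1f}x")  # about 6x
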

The tufted carpet industry was the nation’s fourth fastest growing industry in the 1960s, trailing only aircraft, television picture tubes, and computers. Robert Shaw, CEO of Shaw Industries (for two decades the nation’s leading manufacturer of carpet), recalled the late 1950s and 1960s as the era of the “gold coast” in the Dalton area, an era in which demand constantly outstripped supply and manufacturers large and small could succeed with few controls and a “seat-of-the-pants” management style.

Carpet Capital: An Industrial District

The brief narrative sketched above outlines the emergence of an industrial district. By the 1960s, the district had developed several distinct features. The carpet complex was characterized by the rapid emergence of new firms; no single firm accounted for as much as ten percent of the industry’s output. The industry had developed from the deep roots of textile manufacture and, specifically, bedspread making. Carpet making emerged out of a process of regional learning (albeit a small region, similar to Jane Jacobs’ “city regions”). Carpet manufacture was also a decentralized affair. A few large firms, such as Cabin Crafts and E.T. Barwick Mills, spun some of their own yarn and finished some of their own carpets in-house by the 1960s, but most of the hundreds of small firms relied on independent yarn spinning mills and independent commission finishing firms. Carpet finishing provided the industry with significant flexibility. Mills produced some carpets with pre-dyed yarns, but tufted significant yardage with undyed yarn, which allowed manufacturers to delay the critical decision on color until later in the production process. Commission finishing companies provided these services. Initially, post-production dyeing was handled in dye becks, or large drums; that is, finishers dyed carpets by the piece (albeit large pieces, 900 feet or more in length). Dye becks were produced locally and regionally.

The Dalton district offered a classic example of the great Victorian economist Alfred Marshall’s industrial district based on external economies. Clearly this industry originated in northwest Georgia because of the peculiar skill set developed among managers, mechanics, and workers. The finishing companies and other suppliers clearly filled the role of Marshall’s “subsidiary trades” devoted “to one small branch of the process of production.” Innovation and ideas were “in the air,” as Marshall put it. With so many firms and workers in close proximity, improvements in technology, management practices, marketing, and other arenas were rapidly transmitted throughout the industry. Paul Krugman has observed that, though different in many ways, the relatively low-tech carpet industry of the Appalachian foothills was quite similar to high-tech Silicon Valley in these respects.

In the 1960s, European firms introduced continuous dyeing equipment to the U.S. carpet market. Continuous dyeing equipment held out the potential for more effective use of mass production techniques – an endless stream of white carpet moving through a dye range capable of rapidly shifting colors. The continuous ranges were, however, frightfully expensive compared to dye becks. The relative expense of the equipment in this evolving industry offers a window into the strategic options available to management. A tufting machine might have sold for $10,000 in the late 1950s, with Cobble Brothers or some other firm offering in-house financing. Through the 1960s, the well-nigh indestructible tufting machines were available second-hand – a bit slower than brand new models installed by larger mills, but still effective for smaller product runs. That particular barrier to entry into the new industry was quite low. To establish a beck dyeing operation, by contrast, the equipment alone would have cost more than $700,000 by the end of the 1960s. The stakes in finishing were much higher, but the risks were shared between the finisher and his many customers. Just one of the new continuous dye ranges cost more than $800,000 in the early 1970s, raising the capital stakes for finishers still further.

The Maturing of the Industry

The carpet boom slowed in the 1970s, as did the rest of the U.S. economy. The recessions of the mid-1970s brought an end to the double-digit annual growth rates of the earlier period. In a slower growth environment, attention to cost became critical. Some firms adapted to the changing environment, but many did not. Adaptation generally involved vertical integration. Particularly during the 1980s, a few firms took the lead in bringing yarn spinning (and eventually production of extruded, continuous filament yarn) in-house, integrating backward toward raw materials. The most successful large manufacturers also integrated forward through finishing, investing in their own dyeing facilities. The recession of 1981-82 proved a pivotal moment. Many smaller and mid-sized firms had struggled along, and occasionally prospered, during the inflationary 1970s, but the early-1980s downturn claimed nearly half of the 285 mills that had been in operation in 1980; by 1992 the industry counted only about 100 mills, down dramatically from the early 1970s peak of more than 400. Shaw Industries, a revamped Mohawk Industries, and a few others bought competitors and moved the industry towards greater consolidation. By the early 1990s, the top four firms, led by Shaw Industries, accounted for more than 80% of total production.

The Industry Today

The carpet industry today is essentially the domain of a few large firms, led by Shaw Industries and Mohawk. The nation’s largest carpet making firms are headquartered in northwest Georgia. Shaw and other carpet firms have moved into the production and distribution of other flooring surfaces – tile, wood, vinyl, etc. – as carpet has slipped in market share. No longer the unchallenged leader in covering America’s floors, carpet is still the single most popular choice. Perhaps the most notable change associated with the industry today is its increasing use of workers of Hispanic descent. Since the late 1980s, Hispanic immigrants have moved in large numbers to Dalton, as they have to many new destinations throughout the nation. The region’s employers laud the immigrant workers as the saviors of the industry, a solution to the region’s recurrent labor shortages. Some community leaders and longtime residents express anxiety about the pace of cultural change in the small communities that still serve as hosts to the industry.

Bibliography

Cole, Arthur H., and Harold Williamson. The American Carpet Manufacture: A History and an Analysis. Cambridge, MA: Harvard University Press, 1926.

Deaton, Thomas M. From Bedspreads to Broadloom: The Story of the Tufted Carpet Industry. Acton, MA: Tapestry Press, 1993.

Ewing, John S., and Nancy Norton. Broadlooms and Businessmen: A History of the Bigelow-Sanford Company. Cambridge, MA: Harvard University Press, 1955.

Flamming, Douglas. Creating the Modern South: Millhands and Managers in Dalton, Georgia, 1884-1984. Chapel Hill: University of North Carolina Press, 1992.

Friedman, Tami J. “Communities in Competition: Capital Migration and Plant Relocation in the United States Carpet Industry, 1929-1975.” Ph.D. diss., Columbia University, 2001.

Greeley, Horace, et al. Great Industries of the United States: Being an Historical Summary of the Origin, Growth, and Perfection of the Chief Industrial Arts of This Country. Hartford: J.B. Burr and Hyde, 1872.

Krugman, Paul. Geography and Trade. Cambridge, MA: MIT Press, 1993.

Patton, Randall L. Shaw Industries: A History. Athens, GA: University of Georgia Press, 2002.

Patton, Randall L., with David B. Parker. Carpet Capital: The Rise of a New South Industry. Athens, GA: University of Georgia Press, 1999.

Scranton, Philip. Proprietary Capitalism: The Textile Manufacture at Philadelphia, 1800-1885. Cambridge, MA: Cambridge University Press, 1985.

Scranton, Philip. Figured Tapestry: Production, Markets, and Power in Philadelphia Textiles, 1885-1941. Cambridge, MA: Cambridge University Press, 1989.

Walters, Billie J., and James O. Wheeler. “Localization Economies in the American Carpet Industry.” Geographical Review 74 (Spring 1984): 183-91.

Zuniga, Victor, and Ruben Hernandez-Leon. “Making Carpet by the Mile: The Emergence of a Mexican Immigrant Community in an Industrial Community of the U.S. Historic South.” Social Science Quarterly 81, no. 1 (2000): 49-66.

Citation: Patton, Randall. “A History of the U.S. Carpet Industry”. EH.Net Encyclopedia, edited by Robert Whaples. September 22, 2006. URL http://eh.net/encyclopedia/a-history-of-the-u-s-carpet-industry/