
Between Slavery and Capitalism: The Legacy of Emancipation in the American South

Author(s):Ruef, Martin
Reviewer(s):Wright, Gavin

Published by EH.Net (December 2014)

Martin Ruef, Between Slavery and Capitalism: The Legacy of Emancipation in the American South.  Princeton, NJ: Princeton University Press, 2014. xvii + 285 pp.  $35 (hardcover), ISBN: 978-0-691-16277-5.

Reviewed for EH.Net by Gavin Wright, Department of Economics, Stanford University.

Martin Ruef, the Egan Family Professor of Sociology and director of Markets and Management Studies at Duke University, has published numerous analyses over the past decade on the transition from slavery to new labor systems in the American South, primarily in sociological journals.  Between Slavery and Capitalism collects and updates these articles.  Because these subject areas have received much attention from economic historians, the book provides an opportunity to compare approaches and perspectives between two hybrid disciplines.

Between Slavery and Capitalism is notable in its effort to develop data sets that allow comparisons between antebellum and postbellum outcomes.  Most studies specialize in either one of these eras or the other, perhaps because the historical issues and institutional structures seem so different, but also because direct quantitative comparisons are difficult.  Ruef makes effective use of sources that bridge the wartime divide, including life histories of ex-slaves from interviews conducted by the WPA’s Federal Writers’ Project; R.G. Dun Credit Reports for southern businesses; and labor contracts recorded by the Freedmen’s Bureau between 1865 and 1867.  The last of these is used in Chapter 2 to compare the relative evaluations of labor attributes (age, gender, and occupational skill) under slavery and free labor markets.  The exercise shows that differentials by age and gender were more pronounced under slavery than under free labor, and that the relative value of slave labor peaked much earlier in the life cycle.   One would not want to draw major historical conclusions from such a narrow sample of labor contracts, but it is helpful to have quantitative confirmation of the proposition that the change in property-rights regimes did make an economic difference.

Subsequent chapters take up a diverse array of topics: persistence of antebellum status distinctions among emancipated slaves; restructuring of labor systems on plantations; trade and credit networks in the South; differences among counties in growth performance; and U.S. emancipation in comparative perspective.  My discussion here will focus on the plantation chapter, which features new evidence bearing on an episode in institutional change much-discussed by economic historians.

In the aftermath of war and emancipation, reports of large-scale black migration were widespread.  Whereas economists tend to view labor mobility as a natural individual response to market opportunities, Ruef instead takes the view that the decision to leave the plantation was a bold and risky move with uncertain consequences, best understood as an interdependent choice process featuring network externalities and “tipping points” (pp. 109-113).  The author tracks plantation departures from wartime through 1870, using the WPA interviews (pp. 120-125).  The resulting pattern does exhibit a classic S-shaped form (p. 120), but because the inflection point occurs with the end of the war in 1865, this is not particularly strong confirmation for a threshold effect.  Nonetheless, Ruef’s analysis is an interesting possible addition to the economic historian’s toolkit, with potential connections to recent work by Kenneth Chay and Kaivan Munshi (2014), who model black voting behavior and regional outmigration as collective-action phenomena.   When it comes to relating these departures to the emergence of new labor systems, however, doubts begin to arise.

From this reviewer’s perspective, Ruef gets off on the wrong foot by entitling the chapter “The Demise of the Plantation.”  True, Ransom and Sutch (1977) have a chapter with the same name, drawing on the same census data showing declining farm size.  The author acknowledges that big land ownership units were largely maintained, and he even quotes Charles Aiken’s view (1998) that the disappearance of the plantation is a myth (p. 106); but he seems to view this as merely terminological rather than substantive persistence.  That labor relationships changed fundamentally after emancipation is not at issue.  The question is the survival of the plantation as a managerial entity. The 1880 census figures cannot be used to settle the matter, because enumerators were instructed to count each tenant plot as an independent farm, even if it was part of a larger operational unit (Virts 1987).   Many of these “tenant plantations” retained aspects of centralized management, such as the “through-and-through” system, but the agricultural census did not enumerate these operations until a special report issued in 1916.  Ruef does not acknowledge this phenomenon, much less address the interpretive issue.

Indeed, the analysis never gets very deeply into the substance of the choices faced by landlords or laborers.  The author does not engage the work of Ralph Shlomowitz (1982) or Gerald Jaynes (1986), both of whom recount the process by which the centralized “wage plantation” gave way first to an intermediate form known as the “squad system,” before devolving to the nuclear family tenant as the basic unit.  A governing constraint was that payments had to be post-harvest, because of the two-peak character of labor requirements in cotton and uncertainty about price; early decisions were strongly influenced by the fact that the price of cotton was falling rapidly, as world markets adjusted to the return of American supply. In this setting, bargains struck at the start of the season looked like bad deals (and were often defaulted) by the end.  All of these considerations are neglected by Ruef.  For a book whose unifying theme is uncertainty, these are major omissions.

The author’s claim to methodological distinctiveness is the notion of uncertainty, distinguishing “classical” uncertainty (where the probability distribution of outcomes is unknown) from “categorical” uncertainty (where even the kinds of outcomes are not known), both of which are to be distinguished from “risk” with a known probability distribution.   This all sounds profound, but this reviewer finds it hard to see the historical content behind these abstractions. The issue comes to a head in a concluding section called “the escalation of uncertainty” in which the author concludes that “predicting the position of the freedman and woman in Southern society seemed a far more uncertain exercise in 1880 than it had been after the Civil War” (p. 190).  An informed historical observer might well argue precisely the opposite.  The wide-open range of political and economic outcomes that seemed possible in 1865 had been sharply limited by 1880, as cotton laborers could then choose among at most a handful of reasonably well-defined tenure options.  Perhaps this is a matter of disciplinary perspectives, but the variability of plausible interpretations suggests that the uncertainty trope does not really add much to historical understanding.

References:

Aiken, Charles (1998). The Cotton Plantation South since the Civil War. Baltimore, MD: Johns Hopkins University Press.

Chay, Kenneth, and Kaivan Munshi (2014). “Black Networks After Emancipation: Evidence from Reconstruction and the Great Migration,” Working Paper.

Jaynes, Gerald (1986).  Branches without Roots: Genesis of the Black Working Class in the American South, 1862-1882.  New York: Oxford University Press.

Ransom, Roger L., and Richard Sutch (1977). One Kind of Freedom: The Economic Consequences of Emancipation.  New York: Cambridge University Press.

Shlomowitz, Ralph (1982).  “The Squad System on Postbellum Cotton Plantations,” in Orville Vernon Burton and Robert C. McMath (eds.), Toward a New South? Studies in Post-Civil War Southern Communities.  Westport, CT: Greenwood Press.

Virts, Nancy (1987).  “Estimating the Importance of the Plantation System in Southern Agriculture in 1880,” Journal of Economic History 47: 984-988.

Gavin Wright is the William Robertson Coe Professor of American Economic History at Stanford. His latest book is Sharing the Prize: The Economics of the Civil Rights Revolution in the American South (Belknap Press, 2013).

Copyright (c) 2014 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (December 2014). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Agriculture, Natural Resources, and Extractive Industries
Business History
Social and Cultural History, including Race, Ethnicity and Gender
Geographic Area(s):North America
Time Period(s):19th Century

Economic History Classics

Selections for 2006

During 2006 EH.NET published a series of “Classic Reviews.” Modeled along the lines of our earlier Project 2000 and Project 2001 series, reviewers were asked to “reintroduce” each of the books to the profession, “explaining its significance at the time of publication and why it has endured as a classic.” Each review summarizes the book’s key findings, methods and arguments, places it in a larger context, and discusses any weaknesses.

This year’s selections are (alphabetically by author):

Selection Committee

  • Gareth Austin, London School of Economics
  • Ann Carlos, University of Colorado
  • John Murray, University of Toledo
  • Lawrence Officer, University of Illinois at Chicago
  • Cormac Ó Gráda, University College Dublin
  • Peter Scott, University of Reading
  • Catherine Schenk, University of Glasgow
  • Pierre van der Eng, Australian National University
  • Jenny Wahl, Carleton College

The State and the Stork: The Population Debate and Policy Making in U.S. History

Author(s):Hoff, Derek S.
Reviewer(s):Hammond, J. Daniel

Published by EH.Net (October 2013)

Derek S. Hoff, The State and the Stork: The Population Debate and Policy Making in U.S. History. Chicago: University of Chicago Press, 2012. xii + 378 pp. $49 (hardcover), ISBN: 978-0-22634-762-2.

Reviewed for EH.Net by J. Daniel Hammond, Department of Economics, Wake Forest University.

The State and the Stork is an economic and political history of ideas about population in the United States since the nation’s founding. The narrative is presented chronologically and structured around three ideas that have waxed and waned in the American economic and political spheres. These are (1) the “Malthusian” fear that growth of the population tends to outpace growth of the resource base; (2) the aesthetic concern that crowdedness reduces the quality of life; and (3) the belief that population growth enhances economic growth and thereby enhances the quality of life. The fact that these ideas run throughout American history is thoroughly documented in this study. Clearly, the question of optimal population has not been settled, and it is unlikely to be settled soon. Whatever one’s opinion might be on population questions, including immigration policy, Hoff’s book will help to set the opinion in historical and intellectual context. Historical documentation in support of Hoff’s narrative is extensive.

The first chapter, “Foundations,” sets up the narrative on the basis of British ideas on population in classical economics as they were brought to America in the eighteenth and first half of the nineteenth century. Britons John Locke, Adam Smith, David Ricardo, John Stuart Mill, and especially T.R. Malthus appear, among others. On the American side, the founders Ben Franklin, Thomas Jefferson, James Madison, and Alexander Hamilton, along with Friedrich List, Henry Carey, John McVickar, George Tucker and others, wrote on population and demographics. They made use of or criticized Malthusian ideas in the context of American issues such as slavery, industrialization, and western expansion. Hoff’s thesis is that there was an ambivalence among Americans about population from the beginning that has persisted to the present day.

Chapter two, “The Birth of the Modern Population Debate,” provides a primer on the transition from classical economic theory to marginalist theories of consumers and producers, and a survey of population ideas in economic and political context from the closing of the American frontier in the 1890s to the Great Depression. J.M. Keynes and Keynesian economics in relation to Malthusian ideas of over-population are the subject of chapter three, “Population Depressed.” Keynes himself regarded population growth as supportive of economic growth. But some Keynesians combined his insights on management of aggregate demand with Malthusian ideas of population management, creating what Hoff calls Stable Population Keynesianism. Chapter four, “Population Unbound,” covers the post-World War II baby boom and the emerging debate between cornucopians and doomsayers.

Chapter five moves to the next economic and social era with “Managing the Great Society’s Population Growth.” This chapter also marks a shift in focus from the ideas of economists to those of politicians during the Kennedy and Johnson presidencies. Chapter six covers the growth of environmentalism and the campaign to halt population growth, which Hoff calls radical Malthusianism, personified by Paul Ehrlich and The Population Bomb (1968). Chapter seven, “Defusing the Population Bomb,” covers the Nixon presidency, with special attention to the Commission on Population Growth and the American Future, established by Congress in 1969 and chaired by John D. Rockefeller III. In chapter eight Hoff argues that in the 1970s fear of the fiscal consequences of an aging population and an emerging conservative political economy that embraced population growth pushed Malthusian concerns off stage. This chapter brings the historical account up to the present, setting the stage for the Epilogue.

The primary theme one finds in The State and the Stork is that population questions have loomed large in American public life since the colonial period, with population growth seen at different times as either a portent of danger or of hope. Pessimism is predominant at some times and optimism at others. At the present time Hoff finds Malthusian concerns being suppressed by fear of the economic consequences of an inverted age pyramid. The latest conventional wisdom is optimistic about the effects of population growth, but pessimistic about the prospects for near-term population growth. In the epilogue Hoff makes clear what is indistinct throughout the book, that he is a population pessimist. He is discouraged that few Americans today take seriously the dangers posed by economic growth and overpopulation for natural resources and the quality of life.

Hoff chose to look at population issues through the eyes of economists. But he might have made other choices. He could, for instance, have used biologists’ ideas on population. It is interesting to ponder what difference this would have made. I suspect that the ebb and flow of concerns about under- and over-population would have been much the same, for there is cross-pollination between disciplines. Scholars from different disciplines tend to flock together around the same issues and general points of view. Yet the details of Hoff’s history might have been different in crucial ways, hinging on persistent differences in the way economists and biologists view humans.

Biologists tend to view humans as animals. Economists tend to view humans as rational animals. This difference can have profound implications. Animals tend to breed up to the physical limits of their environments. Human beings do not. This was recognized by Malthus, but not by Malthusians, including economists who are Malthusians. Malthusians draw more deeply on the intellectual legacy of the biologist Darwin than they do on the economist Malthus. To explain this point I will attempt to briefly demonstrate that Hoff fundamentally misinterprets Malthus’s 1798 Essay on Population. In misinterpreting Malthus, Hoff is far from alone. His is in fact the conventional interpretation of Malthus by economists, as his book amply illustrates.

Hoff’s conventional interpretation of Malthus is first encountered on page 15: “Thus Americans had engaged in substantial population debates long before the Rev. Thomas Malthus argued in An Essay on the Principle of Population as it Affects the Future Improvement of Society (1798) that population growth doomed human societies by overwhelming natural resources.”

The book’s first chapter, “Foundations,” covers the theories of Malthus and the other leading classical economists of the late eighteenth and early nineteenth centuries, “whose ideas, two centuries later, remain the starting point for serious discussion of population, resources, and the economy” (p. 16). Further on Hoff writes: “Like many in this era, [Benjamin] Franklin assumed that human population growth followed the same biological laws as plants and animals. In a line Malthus echoed, Franklin wrote, ‘There is in short, no Bound to the prolific Nature of Plants or Animals, but what is made by their crowding and interfering with each other’s Means of Subsistence’” (p. 20).

And one final quotation to illustrate Hoff’s biological interpretation of Malthus: “But even if Malthus’s Essay on Population detoured from the prevailing optimism of the Enlightenment, it was born of immediate political and intellectual circumstances. It reflected the burgeoning of biological science. It also was part of a broader attack by the classical economists on the doctrine of mercantilism. Insisting that all societies progress toward overpopulation and misery, Malthus conformed to Enlightenment stages theory” (p. 26).

Hoff comes to the verge of a more accurate interpretation of Malthus when he notes that Malthus wrote his Essay to challenge the utopian ideas of the radical political philosopher William Godwin … “who, inspired by the revolutionary epoch of the late eighteenth century, believed that paradise, plenty, and human perfectibility were within the grasp of the people of his age” (p. 25). But Hoff fails to acknowledge how Godwin expected paradise, plenty, and human perfectibility to come about, and thus he fails to grasp the nature of Malthus’s response.

The point of Malthus’s growth projections of food and population was to show what would happen if population was unchecked. But, he wrote, and this is crucial, population is always checked, though differently for plants and animals and for humans: “Among plants and animals the view of the subject is simple. They are all impelled by a powerful instinct to the increase of their species; and this instinct is interrupted by no reasoning, or doubts about providing for their offspring. Wherever therefore there is liberty, the power of increase is exerted; and the superabundant effects are repressed afterwards by want of room and nourishment, which is common to animals and plants; and among animals by becoming the prey of others. The effects of this check on man are more complicated. Impelled to the increase of his species by an equally powerful instinct, reason interrupts his career, and asks him whether he may not bring beings into the world, for whom he cannot provide the means of subsistence. In a state of equality, this would be the simple question. In the present state of society, other considerations occur. Will he not lower his rank in life? Will he not subject himself to greater difficulties than he at present feels? Will he not be obliged to labour harder?” (Malthus, Essay, Chapter 2, Online Library of Liberty).

Godwin’s vision of a just society, in which humans reach their full potential of perfection, was one with perfect equality, without private property or accumulation of wealth, without markets, and even without marriage. Life is blissful, with the workday as short as half an hour and with children having no need to know the identity of their parents. Malthus meant to show that in the type of society envisioned by Godwin there would be overpopulation. Overpopulation would follow the dismantling of social institutions, and this would lead people to rediscover the benefits of the very institutions they had pulled down.

Charles Darwin and Alfred Russel Wallace both read Malthus’s Essay as they developed their theories of natural selection in the plant and animal worlds. However, as the passages quoted above show, Malthus did not conceive of humans as animals who breed up to the limit of the food supply. Given the enormous influence of the theory of natural selection in the late nineteenth and twentieth centuries, it is reasonable to suspect that Malthusianism and its complement, eugenics, owe more to the theory of natural selection than to Robert Malthus’s economic and demographic theory. Malthus, as the presumed originator of Malthusianism, looms large in Hoff’s account. But Darwin makes only a brief appearance.

It may be that among the consequences of Hoff’s decision to examine population questions through the lens of economics rather than biology is his presumption that Malthusian concern about population numbers and eugenic concern about population quality are separable. It is widely acknowledged that the eugenics movement was Darwinist. My suggestion is that the population control movement was also Darwinist. Hoff separates concerns about overpopulation, which presumably are from an enlightened point of view, from concerns over the fitness of members of the population, which presumably are from an unenlightened point of view. He also sorts economists into political classifications of liberal and conservative. Liberals are, by the standards of most intellectuals, more enlightened; conservatives less enlightened. If Hoff had covered eugenics, which had very broad appeal across the social and natural sciences in the first three decades of the twentieth century, this might have prompted him to question the usefulness of his political classification of liberals and conservatives.[1]

In the Epilogue Hoff writes that currently “a continued emphasis on the aging of the population, however justified by spiraling deficits, has encouraged policy makers to think of babies as future taxpayers rather than as potential environmental or social externalities” (p. 246). If our culture has come to the point where we either welcome the birth of a new human being because he or she is a future taxpayer or bemoan the birth as the arrival of an external cost, we are perhaps in danger of descending from the rational animals studied by Parson Malthus to the non-rational animals studied by Darwin.

Note:
1. See Thomas C. Leonard (2009) “American Economic Reform in the Progressive Era: Its Foundational Beliefs and Their Relationship to Eugenics,” History of Political Economy 41 (1): 109-41.

J. Daniel Hammond will present “Malthus, Utopians, and Economists” at the History of Economics Society session, “New Perspectives on Malthus: What Was He Really Saying about Population Growth and Human Societies?” at the Philadelphia ASSA meeting in January 2014.

Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (October 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Economic Planning and Policy
Historical Demography, including Migration
History of Economic Thought; Methodology
Geographic Area(s):North America
Time Period(s):18th Century
19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

An Overview of the Economic History of Uruguay since the 1870s

Luis Bértola, Universidad de la República — Uruguay

Uruguay’s Early History

Without silver or gold, without valuable spices, and only sparsely peopled by gatherers and fishers, the Eastern Strand of the Uruguay River (Banda Oriental was the colonial name; República Oriental del Uruguay is the official name today) was, in the sixteenth and seventeenth centuries, distant and unattractive to the European nations that conquered the region. The major export product was the hides of wild descendants of cattle introduced in the early 1600s by the Spaniards. As cattle preceded humans, the state preceded society: Uruguay’s first settlement was Colonia del Sacramento, a Portuguese military fortress founded in 1680, placed precisely across from Buenos Aires, Argentina. Montevideo, also a fortress, was founded by the Spaniards in 1724. Uruguay was on the border between the Spanish and Portuguese empires, a feature which would be decisive for the creation, with strong British involvement, of an independent state in 1828-1830.

Montevideo had the best natural harbor in the region, and rapidly became the end-point of the trans-Atlantic routes into the region, the base for a strong commercial elite, and for the Spanish navy in the region. During the first decades after independence, however, Uruguay was plagued by political instability, precarious institution building and economic retardation. Recurrent civil wars with intensive involvement by Britain, France, Portugal-Brazil and Argentina, made Uruguay a center for international conflicts, the most important being the Great War (Guerra Grande), which lasted from 1839 to 1851. At its end Uruguay had only about 130,000 inhabitants.

“Around the middle of the nineteenth century, Uruguay was dominated by the latifundium, with its ill-defined boundaries and enormous herds of native cattle, from which only the hides were exported to Great Britain and part of the meat, as jerky, to Brazil and Cuba. There was a shifting rural population that worked on the large estates and lived largely on the parts of beef carcasses that could not be marketed abroad. Often the landowners were also the caudillos of the Blanco or Colorado political parties, the protagonists of civil wars that a weak government was unable to prevent” (Barrán and Nahum, 1984, 655). This picture still holds, even if it has been excessively stylized, neglecting the importance of subsistence or domestic-market oriented peasant production.

Economic Performance in the Long Run

Despite its precarious beginnings, Uruguay’s per capita gross domestic product (GDP) growth from 1870 to 2002 shows an amazing persistence, with the long-run rate averaging around one percent per year. However, this apparent stability hides some important shifts. As shown in Figure 1, both GDP and population grew much faster before the 1930s; from 1930 to 1960 immigration vanished and population grew much more slowly, while decades of GDP stagnation and fast growth alternated; after the 1960s Uruguay became a net-emigration country, with low natural growth rates and a still spasmodic GDP growth.

GDP growth shows a pattern of Kuznets-like swings (Bértola and Lorenzo 2004), with extremely destructive downward phases, as shown in Table 1. This cyclical pattern is correlated with movements in the terms of trade (the relative price of exports versus imports), world demand and international capital flows. In the expansive phases exports performed well, due to increased demand and/or positive terms-of-trade shocks (the 1880s, 1900s, 1920s, 1940s and even the Mercosur years from 1991 to 1998). Capital flows would sometimes follow these booms and prolong the cycle, or even be a decisive force in setting the cycle up, as financial flows were in the 1970s and 1990s. The usual outcome, however, has been an overvalued currency, which blurred the debt problem and threatened the balance of trade by overpricing exports. Crises have been the result of a combination of changing trade conditions, devaluation and over-indebtedness, as in the 1880s, early 1910s, late 1920s, 1950s, early 1980s and late 1990s.

Figure 1: Population and per capita GDP of Uruguay, 1870-2002 (1913=100)

Table 1: Swings in the Uruguayan Economy, 1870-2003

Period        Per capita GDP fall (%)   Length of recession (years)   Time to pre-crisis levels (years)   Time to next crisis (years)
1872-1875               26                          3                              15                                 16
1888-1890               21                          2                              19                                 25
1912-1915               30                          3                              15                                 19
1930-1933               36                          3                              17                                 24-27
1954/57-59               9                         2-5                           18-21                               27-24
1981-1984               17                          3                              11                                 17
1998-2003               21                          5

Sources: See Figure 1.

Besides its cyclical movement, the terms of trade showed a sharp positive trend in 1870-1913, a strongly fluctuating pattern around similar levels in 1913-1960 and a deteriorating trend since then. While the volume of exports grew quickly up to the 1920s, it stagnated in 1930-1960 and started to grow again after 1970. As a result, the purchasing power of exports grew fourfold in 1870-1913, fluctuated along with the terms of trade in 1930-1960, and exhibited a moderate growth in 1970-2002.
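
In this usage the purchasing power of exports is simply the terms of trade applied to export volumes. A minimal statement of the identity, in generic notation rather than anything taken from Bértola's sources, is

$$\text{purchasing power of exports}_t \;=\; \frac{P^{x}_{t}}{P^{m}_{t}} \times Q^{x}_{t},$$

where $P^{x}$ and $P^{m}$ are export and import price indices (their ratio is the terms of trade) and $Q^{x}$ is the export volume. The fourfold rise in 1870-1913 thus compounds the positive terms-of-trade trend with growing volumes, while the stagnant volumes of 1930-1960 left purchasing power to move with prices alone.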

The Uruguayan economy was very open to trade in the period up to 1913, featuring high export shares, which naturally declined as the rapidly growing population filled in rather empty areas. In 1930-1960 the economy was increasingly and markedly closed to international trade, but since the 1970s the economy opened up to trade again. Nevertheless, exports, which earlier were mainly directed to Europe (beef, wool, leather, linseed, etc.), were increasingly oriented to Argentina and Brazil, in the context of bilateral trade agreements in the 1970s and 1980s and of Mercosur (the trading zone encompassing Argentina, Brazil, Paraguay and Uruguay) in the 1990s.

While industrial output kept pace with agrarian export-led growth during the first globalization boom before World War I, the industrial share in GDP increased in 1930-54, when production was mainly domestic-market oriented. Deindustrialization has been profound since the mid-1980s. The service sector was always large: focused on commerce, transport and traditional state bureaucracy during the first globalization boom; on health care, education and social services during the import-substituting industrialization (ISI) period in the middle of the twentieth century; and on military expenditure, tourism and finance since the 1970s.

The income distribution changed markedly over time. During the first globalization boom before World War I, an already uneven distribution of income and wealth seems to have worsened, due to massive immigration and increasing demand for land, both rural and urban. By the 1920s, however, the relative prices of land and labor reversed their previous trend, reducing income inequality. This equalizing trend was later reinforced by industrialization policies, democratization, the introduction of wage councils, and the expansion of a welfare state based on an egalitarian ideology. Inequality diminished in many respects: between sectors, within sectors, between genders and between workers and pensioners. The military dictatorship and the liberal economic policy implemented since the 1970s initiated a drastic reversal of the trend toward economic equality, and the globalizing movements of the 1980s and 1990s under democratic rule did not increase equality. Thus, inequality remains at the higher levels reached during the period of dictatorship (1973-85).

Comparative Long-run Performance

If the stable long-run rate of Uruguayan per capita GDP growth hides important internal transformations, Uruguay’s changing position in the international scene is even more remarkable. During the first globalization boom the world became more unequal: the United States forged ahead as the world leader (closely followed by other settler economies), while Asia and Africa lagged far behind. Latin America presented a mixed picture, in which countries such as Argentina and Uruguay performed rather well while others, such as those of the Andean region, lagged far behind (Bértola and Williamson 2003). Uruguay’s strong initial position tended to deteriorate relative to the successful core countries during the late 1800s, as shown in Figure 2. This trend of relative decline was somewhat weak during the first half of the twentieth century, deepened significantly during the 1960s, as the import-substituting industrialization model became exhausted, and has continued since the 1970s, despite policies favoring increased integration into the global economy.

Figure 2: Per capita GDP of Uruguay relative to four core countries, 1870-2002

If school enrollment and literacy rates are reasonable proxies for human capital, then in the late 1800s both Argentina and Uruguay had a great handicap in relation to the United States, as shown in Table 2. The gap in literacy rates tended to disappear, as did this proxy’s ability to measure comparative levels of human capital. School enrollment, however, which includes college-level and technical education, showed a catching-up trend until the 1960s but reversed afterwards.

The gap in life expectancy at birth has always been much smaller than for the other development indicators. Nevertheless, some trends are noticeable: the gap increased in 1900-1930; decreased in 1930-1950; and increased again after the 1970s.

Table 2: Uruguayan Performance in Comparative Perspective, 1870-2000 (US = 100)

                 1870  1880  1890  1900  1910  1920  1930  1940  1950  1960  1970  1980  1990  2000

GDP per capita
  Uruguay         101    65    63    27    32    27    33    27    26    24    19    18    15    16
  Argentina                   63    34    38    31    32    29    25    25    24    21    15    16
  Brazil                      23     8     8     8     8     8     7     9     9    13    11    10
  Latin America                                 13    12    13    10     9     9     9     6     6
  USA             100   100   100   100   100   100   100   100   100   100   100   100   100   100

Literacy rates
  Uruguay                           57    65    72    79    85    91    92    94    95    97    99
  Argentina                         57    65    72    79    85    91    93    94    94    96    98
  Brazil                            39    38    37    42    46    51    61    69    76    81    86
  Latin America                     28    30    34    37    42    47    56    65    71    77    83
  USA                              100   100   100   100   100   100   100   100   100   100   100

School enrollment
  Uruguay                                       23    31    31    30    34    42    52    46    43
  Argentina                                     28    41    42    36    39    43    55    44    45
  Brazil                                              12    11    12    14    18    22    30    42
  Latin America
  USA                                          100   100   100   100   100   100   100   100   100

Life expectancy at birth
  Uruguay                          102   100    91    85    91    97    97    97    95    96    96
  Argentina                         81    85    86    90    88    90    93    94    95    96    95
  Brazil                            60    60    56    58    58    63    79    83    85    88    88
  Latin America                     65    63    58    58    59    63    71    77    81    88    87
  USA                              100   100   100   100   100   100   100   100   100   100   100

Note: Blank cells indicate no data.

Sources: Per capita GDP: Maddison (2001) and Astorga, Bergés and FitzGerald (2003). Literacy rates and life expectancy: Astorga, Bergés and FitzGerald (2003). School enrollment: Bértola and Bertoni (1998).

Uruguay during the First Globalization Boom: Challenge and Response

During the post-Guerra Grande reconstruction after 1851, the Uruguayan population grew rapidly (fueled by high natural growth rates and immigration) and so did per capita output. Productivity grew due to several causes, including: the steamship revolution, which critically reduced the price spread between Europe and America and eased access to the European market; railways, which contributed to the unification of domestic markets and reduced domestic transport costs; the diffusion and adaptation to domestic conditions of innovations in cattle-breeding and services; and a significant reduction in transaction costs, related to a fluctuating but noticeable process of institution building and the strengthening of the coercive power of the state.

Wool and woolen products, hides and leather were exported mainly to Europe; salted beef (tasajo) went to Brazil and Cuba. Livestock-breeding (both cattle and sheep) was intensive in natural resources and dominated by large estates. By the 1880s, the agrarian frontier was exhausted, landholdings were fenced and property rights strengthened. Labor became abundant and concentrated in urban areas, especially around Montevideo’s harbor, which played an important role as a regional (supranational) commercial center. By 1908, Montevideo contained 40 percent of the nation’s population, which had risen to more than a million inhabitants, and it provided the bulk of Uruguay’s services, civil servants and its weak, handicraft-dominated manufacturing sector.

By the 1910s, Uruguayan competitiveness had started to weaken. As the benefits of the old technological paradigm eroded, the new one was not particularly favorable to resource-intensive countries such as Uruguay. International demand shifted away from primary products, the population of Europe grew slowly, and European countries struggled for self-sufficiency in primary production in a context of soaring world supply. Beginning in the 1920s, the cattle-breeding sector performed very poorly, owing to a lack of innovation beyond natural pastures. In the 1930s, its performance deteriorated further, mainly due to unfavorable international conditions. Export volumes stagnated until the 1970s, while purchasing power fluctuated strongly with the terms of trade.

Inward-looking Growth and Structural Change

The Uruguayan economy grew inwards until the 1950s, with the multiple exchange rate system as the main economic policy tool. Agrarian production was re-oriented towards wool, crops, dairy products and other industrial inputs, away from beef. The manufacturing industry grew rapidly and diversified significantly, with the help of protectionist tariffs. It was light, however, and lacked capital-goods or technology-intensive sectors. Productivity growth hinged upon technology transfers embodied in imported capital goods and an intensive domestic process of adapting mature technologies. Domestic demand also grew through an expanding public sector and a growing corporate welfare state. The terms of trade substantially shaped protectionism, productivity growth and domestic demand: the government raised revenue by manipulating exchange rates, so that when export prices rose the state had a greater capacity to protect the manufacturing sector through low exchange rates for imports of capital goods, raw materials and fuel, and to spur productivity increases through capital imports, while protection allowed industry to pay higher wages and thus expand domestic demand.
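
A stylized numerical example may help clarify how a multiple exchange rate system works as a combined tax-and-subsidy instrument. The sketch below is illustrative only; every rate in it is an assumption, not a historical Uruguayan value.

```python
# Stylized sketch of a multiple exchange rate system (all rates are invented
# for illustration): the state buys exporters' dollars cheaply and resells
# them at different rates depending on the import's purpose.

BUY_FROM_EXPORTERS = 7.0   # pesos per dollar paid to wool/beef exporters
SELL_FOR_CAPITAL = 8.0     # favored rate for capital goods, raw materials, fuel
SELL_FOR_CONSUMER = 13.0   # punitive rate for imports competing with local industry

export_earnings = 100.0    # dollars surrendered by exporters

# Suppose half the dollars are resold for capital-goods imports and half for
# consumer-goods imports; the spread over the surrender rate is state revenue.
fiscal_margin = (0.5 * export_earnings * (SELL_FOR_CAPITAL - BUY_FROM_EXPORTERS)
                 + 0.5 * export_earnings * (SELL_FOR_CONSUMER - BUY_FROM_EXPORTERS))
print(f"State revenue per $100 of exports: {fiscal_margin:.0f} pesos")  # 350 pesos
```

When export prices boomed, the same rate spreads yielded more pesos, which is why favorable terms of trade raised the state's capacity both to protect industry and to subsidize imports of capital goods.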

However, rent-seeking industries in search of protection, together with a weak clientelist state crowded with civil servants recruited in exchange for political favors to the parties, steered structural change towards a closed economy and inefficient management. The obvious limits to the inward-looking growth of a country of only about two million inhabitants were exacerbated in the late 1950s as the terms of trade deteriorated. The clientelist political system, created by both traditional parties as the state expanded at the national and local level, was now unable to absorb the increasing social conflicts, colored by sharp ideological confrontation, in a context of stagnation and huge fiscal deficits.

Re-globalization and Regional Integration

The dictatorship (1973-1985) began a period of increasing openness to trade and deregulation which has persisted until the present. Dynamic integration into the world market is still incomplete, however. An attempt to return to cattle-breeding exports as the engine of growth was hindered by the oil crises and the ensuing European response, which restricted meat exports to that destination. The export sector was re-oriented towards “non-traditional exports” — i.e., exports of industrial goods made of traditional raw materials, to which low-quality and low-wage labor was added. Exports were also stimulated by means of strong fiscal exemptions and negative real interest rates and were re-oriented to the regional market (Argentina and Brazil) and to other developing regions. At the end of the 1970s, this policy was replaced by the monetarist approach to the balance of payments. The main goal was to defeat inflation (which had remained above 50 percent since the 1960s) through the deregulation of foreign trade and a pre-announced exchange rate, the “tablita.” A strong wave of capital inflows led to a transitory success, but the Uruguayan peso became more and more overvalued, limiting exports, encouraging imports and deepening the chronic trade deficit. The “tablita” remained dependent on increasing capital inflows and collapsed when the risk of a huge devaluation became real. Recession and the debt crisis dominated the scene of the early 1980s.

Democratic regimes since 1985 have combined natural-resource-intensive exports to the region and other emerging markets with modest intra-industry trade, mainly with Argentina. In the 1990s, once again, Uruguay was overexposed to financial capital inflows, which fueled a rather volatile growth period. By the year 2000, however, Uruguay was in a much worse position in relation to the leaders of the world economy, as measured by per capita GDP, real wages, equity and education coverage, than it had been fifty years earlier.

Medium-run Prospects

In the 1990s Mercosur as a whole and each of its member countries exhibited a strong trade deficit with non-Mercosur countries. This was the result of a growth pattern fueled by, and highly dependent on, foreign capital inflows, combined with the traditional specialization in commodities. The whole Mercosur project is still mainly oriented toward price competitiveness. Nevertheless, the strongly divergent macroeconomic policies within Mercosur during the deep Argentine and Uruguayan crises at the beginning of the twenty-first century seem to have given way to increased coordination between Argentina and Brazil, making the region a more stable environment.

The big question is whether the ongoing political revival of Mercosur will be able to achieve convergent macroeconomic policies, success in international trade negotiations and, above all, the development of productive networks that allow Mercosur to compete outside its home market with knowledge-intensive goods and services. On that hangs Uruguay’s chance to break away from its long-run divergent siesta.

References

Astorga, Pablo, Ame R. Bergés and Valpy FitzGerald. “The Standard of Living in Latin America during the Twentieth Century.” University of Oxford Discussion Papers in Economic and Social History 54 (2004).

Barrán, José P. and Benjamín Nahum. “Uruguayan Rural History.” Hispanic American Historical Review 64, no. 4 (1984): 655-73.

Bértola, Luis. The Manufacturing Industry of Uruguay, 1913-1961: A Sectoral Approach to Growth, Fluctuations and Crisis. Publications of the Department of Economic History, University of Göteborg, 61; Institute of Latin American Studies of Stockholm University, Monograph No. 20, 1990.

Bértola, Luis and Reto Bertoni. “Educación y aprendizaje en escenarios de convergencia y divergencia.” Documento de Trabajo, no. 46, Unidad Multidisciplinaria, Facultad de Ciencias Sociales, Universidad de la República, 1998.

Bértola, Luis and Fernando Lorenzo. “Witches in the South: Kuznets-like Swings in Argentina, Brazil and Uruguay since the 1870s.” In The Experience of Economic Growth, edited by J.L. van Zanden and S. Heikenen. Amsterdam: Aksant, 2004.

Bértola, Luis and Gabriel Porcile. “Argentina, Brasil, Uruguay y la Economía Mundial: una aproximación a diferentes regímenes de convergencia y divergencia.” In Ensayos de Historia Económica: Uruguay en la región y el mundo, by Luis Bértola. Montevideo, 2000.

Bértola, Luis and Jeffrey Williamson. “Globalization in Latin America before 1940.” National Bureau of Economic Research Working Paper, no. 9687 (2003).

Bértola, Luis and others. El PBI uruguayo 1870-1936 y otras estimaciones. Montevideo, 1998.

Maddison, A. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, A. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Citation: Bertola, Luis. “An Overview of the Economic History of Uruguay since the 1870s”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/article/Bertola.Uruguay.final

Urban Decline (and Success) in the United States

Fred Smith and Sarah Allen, Davidson College

Introduction

Any discussion of urban decline must begin with a difficult task – defining what is meant by urban decline. Urban decline (or “urban decay”) is a term that evokes images of abandoned homes, vacant storefronts, and crumbling infrastructure, and if asked to name a city that has suffered urban decline, people often think of a Rust Belt city like Cleveland, Detroit, or Buffalo. Yet, while nearly every American has seen or experienced urban decline, the term is descriptive and not easily quantifiable. Further complicating the story is this simple fact – metropolitan areas, like greater Detroit, may experience the symptoms of severe urban decline in one neighborhood while remaining economically robust in others. Indeed, the city of Detroit is a textbook case of urban decline, but many of the surrounding communities in metropolitan Detroit are thriving. An additional complication comes from the fact that modern American cities – cities like Dallas, Charlotte, and Phoenix – don’t look much like their early twentieth-century counterparts. Phoenix of the early twenty-first century is an economically vibrant city, yet its urban core looks very different from the urban core found in “smaller” cities like Boston or San Francisco.[1] It is unlikely that a weekend visitor to downtown Phoenix would come away with the impression that Phoenix is a rapidly growing city, for downtown Phoenix does not contain the housing, shopping, or recreational venues that are found in downtown San Francisco or Boston.

No single variable serves as a perfect measure of urban decline, but this article takes an in-depth look at the phenomenon by focusing on the best available measure of a city’s well-being – population. In order to provide a thorough understanding of urban decline, this article contains three additional sections. The next section employs data from a handful of sources to familiarize the reader with the location and severity of urban decline in the United States. Section three is dedicated to explaining the causes of urban decline in the U.S. Finally, the fourth section looks at the future of cities in the United States and provides some concluding remarks.

Urban Decline in the United States – Quantifying the Population Decline

Between 1950 and 2000 the population of the United States increased by approximately 130 million people, from 152 million to 281 million. Despite the dramatic increase in population experienced by the country as a whole, different cities and states experienced radically different rates of growth. Table 1 shows the population figures for a handful of U.S. cities for the years 1950 to 2000. (It should be noted that these figures are population totals for the cities in the list, not for the associated metropolitan areas.)

Table 1: Population for Selected U.S. Cities, 1950-2000

City               1950        1960        1970        1980        1990        2000   % Change 1950-2000
New York        7,891,957   7,781,984   7,895,563   7,071,639   7,322,564   8,008,278         1.5
Philadelphia    2,071,605   2,002,512   1,949,996   1,688,210   1,585,577   1,517,550       -26.7
Boston            801,444     697,177     641,071     562,994     574,283     589,141       -26.5
Chicago         3,620,962   3,550,404   3,369,357   3,005,072   2,783,726   2,896,016       -20.0
Detroit         1,849,568   1,670,144   1,514,063   1,203,339   1,027,974     951,270       -48.6
Cleveland         914,808     876,050     750,879     573,822     505,616     478,403       -47.7
Kansas City       456,622     475,539     507,330     448,159     435,146     441,545        -3.3
Denver            415,786     493,887     514,678     492,365     467,610     554,636        33.4
Omaha             251,117     301,598     346,929     314,255     335,795     390,007        55.3
Los Angeles     1,970,358   2,479,015   2,811,801   2,966,850   3,485,398   3,694,820        87.5
San Francisco     775,357     740,316     715,674     678,974     723,959     776,733         0.2
Seattle           467,591     557,087     530,831     493,846     516,259     563,374        20.5
Houston           596,163     938,219   1,233,535   1,595,138   1,630,553   1,953,631       227.7
Dallas            434,462     679,684     844,401     904,078   1,006,877   1,188,580       173.6
Phoenix           106,818     439,170     584,303     789,704     983,403   1,321,045      1136.7
New Orleans       570,445     627,525     593,471     557,515     496,938     484,674       -15.0
Atlanta           331,314     487,455     495,039     425,022     394,017     416,474        25.7
Nashville         174,307     170,874     426,029     455,651     488,371     545,524       213.0
Washington        802,178     763,956     756,668     638,333     606,900     572,059       -28.7
Miami             249,276     291,688     334,859     346,865     358,548     362,470        45.4
Charlotte         134,042     201,564     241,178     314,447     395,934     540,828       303.5

Source: U.S. Census Bureau.

Several trends emerge from the data in Table 1. The cities in the table are clustered together by region, and the cities at the top of the table – cities from the Northeast and Midwest – experience either no significant population growth (New York City) or dramatic population loss (Detroit and Cleveland). These cities’ experiences stand in stark contrast to those of the cities located in the South and West – cities found farther down the list. Phoenix, Houston, Dallas, Charlotte, and Nashville all experience triple-digit population increases during the five decades from 1950 to 2000. Figure 1 displays this information even more dramatically:

Figure 1: Percent Change in Population, 1950 – 2000

Source: U.S. Census Bureau.

While Table 1 and Figure 1 clearly display the population trends within these cities, they do not provide any information about what was happening to the metropolitan areas in which these cities are located. Table 2 fills this gap. (Please note – these metropolitan areas do not correspond directly to the metropolitan areas identified by the U.S. Census Bureau. Rather, Jordan Rappaport – an economist at the Kansas City Federal Reserve Bank – created these metropolitan areas for his 2005 article “The Shared Fortunes of Cities and Suburbs.”)

Table 2: Population of Selected Metropolitan Areas, 1950 to 2000

Metropolitan Area                      1950        1960        1970        2000   % Change 1950 to 2000
New York-Newark-Jersey City, NY  13,047,870  14,700,000  15,812,314  16,470,048          26.2
Philadelphia, PA                  3,658,905   4,175,988   4,525,928   4,580,167          25.2
Boston, MA                        3,065,344   3,357,607   3,708,710   4,001,752          30.5
Chicago-Gary, IL-IN               5,612,248   6,805,362   7,606,101   8,573,111          52.8
Detroit, MI                       3,150,803   3,934,800   4,434,034   4,366,362          38.6
Cleveland, OH                     1,640,319   2,061,668   2,238,320   1,997,048          21.7
Kansas City, MO-KS                  972,458   1,232,336   1,414,503   1,843,064          89.5
Denver, CO                          619,774     937,677   1,242,027   2,414,649         289.6
Omaha, NE                           471,079     568,188     651,174     803,201          70.5
Los Angeles-Long Beach, CA        4,367,911   6,742,696   8,452,461  12,365,627         183.1
San Francisco-Oakland, CA         2,531,314   3,425,674   4,344,174   6,200,867         145.0
Seattle, WA                         920,296   1,191,389   1,523,601   2,575,027         179.8
Houston, TX                       1,021,876   1,527,092   2,121,829   4,540,723         344.4
Dallas, TX                          780,827   1,119,410   1,555,950   3,369,303         331.5
Phoenix, AZ                              NA     663,510     967,522   3,251,876         390.1*
New Orleans, LA                     754,856     969,326   1,124,397   1,316,510          74.4
Atlanta, GA                         914,214   1,224,368   1,659,080   3,879,784         324.4
Nashville, TN                       507,128     601,779     704,299   1,238,570         144.2
Washington, DC                    1,543,363   2,125,008   2,929,483   4,257,221         175.8
Miami, FL                           579,017   1,268,993   1,887,892   3,876,380         569.5
Charlotte, NC                       751,271     876,022   1,028,505   1,775,472         136.3

* The percentage change is for the period from 1960 to 2000.

Source: Rappaport; http://www.kc.frb.org/econres/staff/jmr.htm

Table 2 highlights several of the difficulties in conducting a meaningful discussion about urban decline. First, by glancing at the metro population figures for Cleveland and Detroit, it becomes clear that while these cities were experiencing severe urban decay, the suburbs surrounding them were not. The Detroit metropolitan area grew more rapidly than the Boston, Philadelphia, or New York metro areas, and even the Cleveland metro area experienced growth between 1950 and 2000. Next, we can see from Tables 1 and 2 that some of the cities experiencing dramatic growth between 1950 and 2000 did not enjoy similar increases in population at the metro level. The Phoenix, Charlotte, and Nashville metro areas experienced tremendous growth, but their metro growth rates were not nearly as large as their city growth rates. This raises an important question – did these cities experience tremendous growth rates because the population was growing rapidly or because the cities were annexing large amounts of land from the surrounding suburbs? Table 3 helps to answer this question. In Table 3, land area, measured in square miles, is provided for each of the cities initially listed in Table 1. The data in Table 3 clearly indicate that Nashville and Charlotte, as well as Dallas, Phoenix, and Houston, owe some of their growth to the expansion of their physical boundaries. Charlotte, Phoenix, and Nashville are particularly obvious examples of this phenomenon, for each city increased its physical footprint by over seven hundred percent between 1950 and 2000.

Table 3: Land Area for Selected U.S. Cities, 1950 – 2000

(land area in square miles)

City                   1950     1960     1970     2000   % Change 1950 to 2000
New York, NY          315.1      300    299.7    303.3         -3.74
Philadelphia, PA      127.2      129    128.5    135.1          6.21
Boston, MA             47.8       46       46     48.4          1.26
Chicago, IL           207.5      222    222.6    227.1          9.45
Detroit, MI           139.6      138      138    138.8         -0.57
Cleveland, OH            75       76     75.9     77.6          3.47
Kansas City, MO        80.6      130    316.3    313.5        288.96
Denver, CO             66.8       68     95.2    153.4        129.64
Omaha, NE              40.7       48     76.6    115.7        184.28
Los Angeles, CA       450.9      455    463.7    469.1          4.04
San Francisco, CA      44.6       45     45.4     46.7          4.71
Seattle, WA            70.8       82     83.6     83.9         18.50
Houston, TX             160      321    433.9    579.4        262.13
Dallas, TX              112      254    265.6    342.5        205.80
Phoenix, AZ            17.1      187    247.9    474.9       2677.19
New Orleans, LA       199.4      205    197.1    180.6         -9.43
Atlanta, GA            36.9      136    131.5    131.7        256.91
Nashville, TN            22       29    507.8    473.3       2051.36
Washington, DC         61.4       61     61.4     61.4          0.00
Miami, FL              34.2       34     34.3     35.7          4.39
Charlotte, NC            30     64.8       76    242.3        707.67

Sources: Rappaport, http://www.kc.frb.org/econres/staff/jmr.htm; Gibson, Population of the 100 Largest Cities.
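
One rough way to separate organic growth from annexation-driven growth is to combine Tables 1 and 3 and compute population density in each year. The short sketch below does exactly that for a few of the cities above; the figures are copied from the two tables.

```python
# Population density from Tables 1 and 3: population / land area (sq mi).
# Values are copied from the tables above.

city_data = {
    # city: (pop 1950, pop 2000, sq mi 1950, sq mi 2000)
    "Phoenix":   (106_818, 1_321_045,  17.1, 474.9),
    "Nashville": (174_307,   545_524,  22.0, 473.3),
    "Charlotte": (134_042,   540_828,  30.0, 242.3),
    "Detroit":   (1_849_568, 951_270, 139.6, 138.8),
}

for city, (p50, p00, a50, a00) in city_data.items():
    print(f"{city:10s} density 1950: {p50 / a50:7,.0f}/sq mi   "
          f"density 2000: {p00 / a00:7,.0f}/sq mi")
```

Phoenix's density actually falls, from roughly 6,200 to about 2,800 persons per square mile, so much of its headline 1136.7 percent population growth reflects an expanding boundary rather than a densifying core; Detroit's density falls by half within an essentially fixed boundary.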

Taken together, Tables 1 through 3 paint a clear picture of what has happened in urban areas in the United States between 1950 and 2000: Cities in the Southern and Western U.S. have experienced relatively high rates of growth when they are compared to their neighbors in the Midwest and Northeast. And, as a consequence of this, central cities in the Midwest and Northeast have remained the same size or they have experienced moderate to severe urban decay. But, to complete this picture, it is worth considering some additional data. Table 4 presents regional population and housing data for the United States during the period from 1950 to 2000.

Table 4: Regional Population and Housing Data for the U.S., 1950 – 2000

                                                  1950        1960        1970        1980        1990        2000

Population density (persons per square mile)      50.9        50.7        57.4        64          70.3        79.6

Population by region
  West                                       19,561,525  28,053,104  34,804,193  43,172,490  52,786,082  63,197,932
  South                                      47,197,088  54,973,113  62,795,367  75,372,362  85,445,930 100,236,820
  Midwest                                    44,460,762  51,619,139  56,571,663  58,865,670  59,668,632  64,392,776
  Northeast                                  39,477,986  44,677,819  49,040,703  49,135,283  50,809,229  53,594,378

Population by region (% of total)
  West                                             13          15.6        17.1        19.1        21.2        22.5
  South                                            31.3        30.7        30.9        33.3        34.4        35.6
  Midwest                                          29.5        28.8        27.8        26          24          22.9
  Northeast                                        26.2        24.9        24.1        21.7        20.4        19

Population living in non-metropolitan
areas (millions)                                   66.2        65.9        63          57.1        56          55.4

Population living in metropolitan
areas (millions)                                   84.5       113.5       140.2       169.4       192.7       226

Percent in suburbs in metropolitan area            23.3        30.9        37.6        44.8        46.2        50

Percent in central city in metropolitan area       32.8        32.3        31.4        30          31.3        30.3

Percent living in the ten largest cities           14.4        12.1        10.8         9.2         8.8         8.5

Percentage minority by region
  West                                                                                 26.5        33.3        41.6
  South                                                                                25.7        28.2        34.2
  Midwest                                                                              12.5        14.2        18.6
  Northeast                                                                            16.6        20.6        26.6

Housing units by region
  West                                        6,532,785   9,557,505  12,031,802  17,082,919  20,895,221  24,378,020
  South                                      13,653,785  17,172,688  21,031,346  29,419,692  36,065,102  42,382,546
  Midwest                                    13,745,646  16,797,804  18,973,217  22,822,059  24,492,718  26,963,635
  Northeast                                  12,051,182  14,798,360  16,642,665  19,086,593  20,810,637  22,180,440

Source: Hobbs and Stoops (2002).

There are several items of particular interest in Table 4. Every region in the United States becomes more diverse between 1980 and 2000. No region has a minority population share greater than 26.5 percent in 1980, but only the Midwest remains below that level by 2000. The U.S. population becomes increasingly urbanized over time, yet the percentage of Americans who live in central cities remains nearly constant. Thus, it is the number of Americans living in suburban communities that has fueled the dramatic increase in “urban” residents. This finding is reinforced by the figures on average population density for the United States as a whole, on the numbers of Americans living in metropolitan versus non-metropolitan areas, and on the percentage of Americans living in the ten largest cities.

Other Measures of Urban Decline

While the population decline documented in the first part of this section suggests that cities in the Northeast and Midwest experienced severe urban decline, anyone who has visited Detroit and Boston can attest that urban decline has affected their downtowns in very different ways. The central city in Boston is, for the most part, economically vibrant. A visitor to Boston would find manicured public spaces as well as thriving retail, housing, and commercial sectors. Detroit’s downtown is still scarred by vacant office towers, abandoned retail space, and relatively little housing. Furthermore, the city’s public spaces do not compare favorably to those of Boston. While the leaders of Detroit have made some needed improvements to the city’s downtown in the past several years, the central city remains a mere shadow of its former self. Thus, the loss of population experienced by Detroit and Boston does not tell the full story of how urban decline has affected these cities. Both have lost population, yet Detroit has lost a great deal more – it no longer possesses a well-functioning urban economy.

To date, there have been relatively few attempts to quantify the loss of economic vitality in cities afflicted by urban decay. This is due, in part, to the complexity of the problem. There are few reliable historical measures of economic activity available at the city level. However, economists and other social scientists are beginning to better understand the process and the consequences of severe urban decline.

Economists Edward Glaeser and Joseph Gyourko (2005) developed a model that explains the process of urban decline. One of their principal insights is that the durable nature of housing means that the process of urban decline will not mirror the process of urban expansion. In a growing city, the demand for housing is met through the construction of new dwellings. When a city faces a reduction in economic productivity and the resulting reduction in the demand for labor, workers begin to leave the city. Yet when a city’s population begins to decline, housing units do not simply disappear from the urban landscape. Thus, in Glaeser and Gyourko’s model a declining city is characterized by a durable stock of housing interacting with reduced housing demand, which produces a rapid fall in the real price of housing. Empirical evidence supports the model’s predictions: in cities like Cleveland, Detroit, and Buffalo the real price of housing declined in the second half of the twentieth century. An important implication of the Glaeser and Gyourko model is that declining housing prices are likely to attract individuals who are poor and who have acquired relatively little human capital. The presence of these workers makes it difficult for a declining city – like Detroit – to reverse its economic decline, because it becomes relatively difficult to attract businesses that need workers with high levels of human capital.
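
The asymmetry at the heart of the model can be seen in a minimal numerical sketch. The code below is not Glaeser and Gyourko’s actual specification – the linear demand curve, the construction cost, and all parameter values are hypothetical, chosen for illustration only – but it captures the mechanism: a positive demand shock is absorbed by new construction at roughly the cost of building, while a negative shock leaves the durable stock in place and drives prices far below that cost.

# Minimal sketch (hypothetical parameters, not Glaeser and Gyourko's model).
# Housing supply can expand through construction but never contracts,
# because existing houses remain standing when residents leave.

CONSTRUCTION_COST = 100.0  # assumed cost of building one unit

def equilibrium(demand_intercept, slope, stock):
    """Price and stock for inverse demand P = demand_intercept - slope * Q,
    given a pre-existing (durable) housing stock."""
    price = demand_intercept - slope * stock
    if price > CONSTRUCTION_COST:
        # Growing city: builders add units until price equals building cost.
        stock = (demand_intercept - CONSTRUCTION_COST) / slope
        price = CONSTRUCTION_COST
    # Declining city: the stock cannot shrink, so the price simply falls.
    return price, stock

stock = 0.0
for label, intercept in [("boom", 200.0), ("bust", 120.0)]:
    price, stock = equilibrium(intercept, slope=0.5, stock=stock)
    print(f"{label}: price = {price:.1f}, housing stock = {stock:.1f}")
# boom: price = 100.0, housing stock = 200.0
# bust: price = 20.0, housing stock = 200.0

In the bust, the price falls far below construction cost while the stock stays put, which is why new building stops and why, as the model implies, cheap housing then attracts low-income residents rather than disappearing.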

Complementing the theoretical work of Glaeser and Gyourko, Fred H. Smith (2003) used property values as a proxy for economic activity in order to quantify the urban decline experienced by Cleveland, Ohio. Smith found that the aggregate assessed value of the property in the downtown core of Cleveland fell from a peak of nearly $600 million in 1930 to a mere $45 million by 1980. (Both figures are expressed in 1980 dollars.) Economists William Collins and Robert Margo have also examined the impact of urban decline on property values. Their work focuses on how the value of owner-occupied housing declined in cities that experienced race riots in the 1960s, and, in particular, on the gap in property values that developed between white- and black-owned homes. Nonetheless, a great deal of work remains to be done before the magnitude of urban decay in the United States is fully understood.

What Caused Urban Decline in the United States?

Having examined the timing and the magnitude of the urban decline experienced by U.S. cities, it is now necessary to consider why these cities decayed. In the subsections that follow, each of the principal causes of urban decline is considered in turn.

Decentralizing Technologies

In “Sprawl and Urban Growth,” Edward Glaeser and Matthew Kahn (2001) assert that “while many factors may have helped the growth of sprawl, it ultimately has only one root cause: the automobile” (p. 2). Urban sprawl is simply a popular term for the decentralization of economic activity, one of the principal symptoms of urban decline. So it should come as no surprise that many of the forces that have caused urban sprawl are in fact the same forces that have driven the decline of central cities. As Glaeser and Kahn suggest, the list of causal forces must begin with the emergence of the automobile.

In order to maximize profits, firm owners must choose their locations carefully. Input prices and transportation costs (for inputs and outputs) vary across locations. Firm owners ultimately face two important location decisions, and economic forces dictate the choices made in each instance. First, owners must decide in which city they will do business. Then, they must decide where the business should be located within the chosen city. In each case, transportation costs and input costs dominate the owners’ decision making. For example, a business owner whose firm will produce steel must consider the costs of transporting inputs (e.g. iron ore), the costs of transporting the output (steel), and the cost of other inputs in the production process (e.g. labor). For steel firms operating in the late nineteenth century these concerns were balanced by choosing locations in the Midwest, either on the Great Lakes (e.g. Cleveland) or on major rivers (e.g. Pittsburgh). Cleveland and Pittsburgh were cities with plentiful labor and relatively low transport costs for both inputs and output. However, steel firm owners choosing Cleveland or Pittsburgh also had to choose a location within these cities, and, not surprisingly, they chose locations that minimized transportation costs. In Cleveland, for example, the steel mills were built near the shore of Lake Erie and relatively close to the main rail terminal. This minimized the cost of getting iron ore from ships that had come to the city via Lake Erie, and it provided easy access to water or rail transportation for shipping the finished product. The cost of choosing a site near the rail terminal and the city’s docks was not insignificant: land close to the city’s transportation hub was in high demand and therefore relatively expensive. It would have been cheaper to buy land on the periphery of these cities, but firm owners chose not to because the costs of moving inputs and outputs to and from the transportation hub would have swamped the savings from cheaper peripheral land. Ultimately, it was the absence of cheap intra-city transport that compressed economic activity into the center of an urban area.

Yet transportation costs and input prices have not simply varied across space; they have also changed over time. The introduction of the car and truck had a profound impact on transportation costs. In 1890, moving a ton of goods one mile cost 18.5 cents (measured in 2001 dollars). By 2003 the cost had fallen to 2.3 cents (measured in 2001 dollars) per ton-mile (Glaeser and Kahn 2001, p. 4). While the car and truck dramatically lowered transportation costs, they did not immediately affect firm owners’ choices about which city to choose as their base of operations. Rather, the immediate impact was felt in the choice of where within a city a firm should locate. The intra-city truck made it easy for a firm to locate on the periphery of the city, where land was plentiful and relatively cheap. Returning to the example from the previous paragraph, the introduction of the intra-city truck allowed the owners of steel mills in Cleveland to build new plants on the periphery of the urban area, where land was much cheaper (Encyclopedia of Cleveland History). Similarly, the car made it possible for residents to move away from the city center and out to the periphery of the city – or even to newly formed suburbs. (The suburbanization of the urban population had begun in the late nineteenth century, when streetcar lines extended from the central city out to the periphery of the city or to surrounding communities; the automobile simply accelerated the process of decentralization.) The retail price of a Ford Model T dropped considerably between 1910 and 1925 – from approximately $1850 to $470, measured in constant 1925 dollars (roughly $21,260 and $5400 in 2006 dollars) – and the market responded accordingly. As Table 5 illustrates, the number of passenger car registrations increased dramatically during the twentieth century.

Table 5: Passenger Car Registrations in the United States, 1910-1980

Year Millions of Registered Vehicles
1910 0.5
1920 8.1
1930 23.0
1940 27.5
1950 40.4
1960 61.7
1970 89.2
1980 131.6

Source: Muller, p. 36.
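
The constant-dollar comparisons in the preceding paragraph all rest on the same deflation arithmetic: a nominal price is multiplied by the ratio of the price index in the base year to the price index in the year of the transaction. The snippet below is a sketch of that mechanics only; the prices and index values in the example are placeholders, not actual CPI figures.

def to_constant_dollars(nominal_price, index_in_year, index_in_base_year):
    """Re-express a nominal price in base-year dollars."""
    return nominal_price * index_in_base_year / index_in_year

# Purely illustrative: a $900 purchase made in 1910, re-expressed in 1925
# dollars under assumed index values of 9.0 (1910) and 17.5 (1925).
print(to_constant_dollars(900, index_in_year=9.0, index_in_base_year=17.5))
# 1750.0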

While changes in transportation technology had a profound effect on firms’ and residents’ choices about where to locate within a given city, they also affected the choice of which city would be best for the firm or resident. Americans began demanding more and better roads to capitalize on the mobility made possible by the car. The automotive, construction, and tourism-related industries also lobbied state and federal governments to become heavily involved in funding road construction, a responsibility previously left to local governments. The landmark National Interstate and Defense Highway Act of 1956 signified a long-term commitment by the national government to unite the country through an extensive network of interstates, while also improving access between cities’ central business districts and outlying suburbs. As cars became affordable for the average American, and paved roads became increasingly ubiquitous, not only did the suburban frontier open up to a rising proportion of the population; it was now possible to live almost anywhere in the United States. (It is important to note, however, that the widespread availability of air conditioning was a critical factor in Americans’ willingness to move to the South and West.)

Another factor that opened up the rest of the United States for urban development was a change in the cost of obtaining energy. Obtaining abundant, cheap energy is a concern for firm owners and for households. Historical constraints on production and residential locations continued to fall away in the late nineteenth and early twentieth century as innovations in energy production began to take hold. One of the most important of these advances was the spread of the alternating-current electric grid, which further expanded firms’ choices regarding plant location and layout. Energy could be generated at any site and could travel long distances through thin copper wires. Over a fifty-year period from 1890 to 1940, the proportion of goods manufactured using electrical power soared from 0.1 percent to 85.6 percent (Nye 1990). With the complementary advancements in transportation, factories now had the option of locating outside of the city where they could capture savings from cheaper land. The flexibility of electrical power also offered factories new freedom in the spatial organization of production. Whereas steam engines had required a vertical system of organization in multi-level buildings, the AC grid made possible a form of production that permanently transformed the face of manufacturing – the assembly line (Nye 1990).

The Great Migration

Technological advances were not bound by urban limits; they also extended into rural America where they had sweeping social and economic repercussions. Historically, the vast majority of African Americans had worked on Southern farms, first as slaves and then as sharecroppers. But progress in the mechanization of farming – particularly the development of the tractor and the mechanical cotton-picker – reduced the need for unskilled labor on farms. The dwindling need for farm laborers coupled with continuing racial repression in the South led hundreds of thousands of southern African Americans to migrate North in search of new opportunities. The overall result was a dramatic shift in the spatial distribution of African Americans. In 1900, more than three-fourths of black Americans lived in rural areas, and all but a handful of rural blacks lived in the South. By 1960, 73% of blacks lived in urban areas, and the majority of the urban blacks lived outside of the South (Cahill 1974).

Blacks had begun moving to Northern cities in large numbers at the onset of World War I, drawn by the lure of booming wartime industries. In the 1940s, Southern blacks began pouring into the industrial centers at more than triple the rate of the previous decade, bringing with them a legacy of poverty, poor education, and repression. The swell of impoverished and uneducated African Americans rarely received a friendly reception in Northern communities. Instead they frequently faced more of the treatment they had sought to escape (Groh 1972). Furthermore, the abundance of unskilled manufacturing jobs that had greeted the first waves of migrants had begun to dwindle. Manufacturing firms in the upper Midwest (the Rustbelt) faced increased competition from foreign firms, and many of the American firms that remained in business relocated to the suburbs or the Sunbelt to take advantage of cheap land. African Americans had difficulty accessing jobs at suburban locations, and the result for many was a “spatial mismatch” – they lived in the inner city, where employment opportunities were scarce, yet lacked access to the transportation that would allow them to commute to suburban jobs (Kain 1968). Institutionalized racism, which hindered blacks’ attempts to purchase real estate in the suburbs, as well as the proliferation of inner-city public housing projects, reinforced the spatial mismatch problem. High unemployment, high crime rates, and urban disturbances such as the race riots of the 1960s were obvious symptoms of the economic distress facing inner-city African Americans, and the crime and riots accelerated the demographic transformation of Northern cities. White city residents had once been “pulled” to the suburbs by the availability of cheap land and cheap transportation when the automobile became affordable; now white residents were being “pushed” by racism and the desire to escape the poverty and crime that had become common in the inner city. Indeed, by 2000 more than 80 percent of Detroit’s residents were African American – a stark contrast with 1950, when only 16 percent of the population was black.

The American City in the Twenty-First Century

Some believe that technology – specifically advances in information technology – will render the city obsolete in the twenty-first century. Urban economists find their arguments unpersuasive (Glaeser 1998). Recent history shows that the way we interact with one another has changed dramatically in a very short period of time. E-mail, cell phones, and text messages belonged to the world of science fiction as recently as 1980. Clearly, changes in information technology no longer require us to locate in close proximity to the people we want to interact with. Thus, one can understand the temptation to think that we will no longer need to live so close to one another in New York, San Francisco, or Chicago. Ultimately, a person or a firm will only locate in a city if the benefits from being in the city outweigh the costs. What this analysis misses, though, is that people and firms locate in cities for reasons that are not immediately obvious.

Economists point to economies of agglomeration as one of the main reasons that firms will continue to choose urban locations over rural locations. Economies of agglomeration exist when a firm’s productivity is enhanced (or its cost of doing business is lowered) because it is located in a cluster of complementary firms or in a densely populated area. A classic example of an urban area that displays substantial economies of agglomeration is “Silicon Valley” (near San Jose, California). Firms choosing to locate in Silicon Valley benefit from several sources of economies of agglomeration, but two of the most easily understood are knowledge spillovers and labor pooling. Knowledge spillovers in Silicon Valley occur because individuals who work at “computer firms” (firms producing software, hardware, etc.) are likely to interact with one another on a regular basis. These interactions can be informal – playing together on a softball team, running into one another at a child’s soccer game, etc. – but they are still very meaningful because they promote the exchange of ideas, and exchanging ideas and information makes it possible for workers to (potentially) increase productivity in their own jobs. Another example of economies of agglomeration in Silicon Valley is labor pooling. Because workers who are trained in computer-related fields know that computer firms are located in Silicon Valley, they are more likely to choose to live in or around Silicon Valley. Thus, firms operating in Silicon Valley have an abundant supply of labor in close proximity, and, similarly, workers enjoy the opportunities associated with having several firms that can make use of their skills in a small geographic area. The clustering of computer industry workers and firms allows firms to save money when they need to hire another worker, and it makes it easier for workers who need a job to find one.

In addition to economies of agglomeration, other economic forces make the disappearance of the city unlikely. One of the benefits that some individuals associate with urban living is the diversity of products and experiences available in a city. For example, in a large city like Chicago it is possible to find deep dish pizza, thin crust pizza, Italian food, Persian food, Greek food, Swedish food, Indian food, Chinese food… almost any type of food that you might imagine. Why is all of this food available in Chicago but not in a small town in southern Illinois? Economists answer this question using the concept of demand density. Lots of people like Chinese food, so it is not uncommon to find a Chinese restaurant in a small town. Fewer people, though, have been exposed to Persian cuisine. While it is quite likely that the average American would like Persian food if it were available, most Americans haven’t had the opportunity to try it. Hence, the average American is unlikely to demand much Persian food in a given time period. So, individuals who are interested in operating a Persian restaurant logically choose to operate in Chicago instead of a small town in southern Illinois. While each individual living in Chicago may not demand Persian food any more frequently than the individuals living in the small town, the presence of so many people in a relatively small area makes it possible for the Persian restaurant to operate and thrive. Moreover, exposure to Persian food may change people’s tastes and preferences, so that over time the amount of Persian food demanded (on average) by each inhabitant of the city may increase.

Individuals who value Persian food – or any of the other experiences that can only be found in a large city – will value the opportunity to live in a large city more than they will value the opportunity to live in a rural area. But the incredible diversity that a large city has to offer is a huge benefit to some individuals, not to everyone. Rural areas will continue to be populated as long as there are people who prefer the pleasures of low-density living. For these individuals, the pleasure of being able to walk in the woods or hike in the mountains may be more than enough compensation for living in a part of the country that doesn’t have a Persian restaurant.

As long as there are people (and firm owners) who believe that the benefits of locating in a city outweigh the costs, cities will continue to exist. The data shown above make it clear that Americans continue to value urban living. Indeed, the population figures for Chicago and New York suggest that in the 1990s more people were finding net benefits to living in very large cities. The rapid expansion of cities in the South and Southwest reinforces this idea. To be sure, the urban living experience in Charlotte is not the same as the urban living experience in Chicago or New York. So, while the urban cores of cities like Detroit and Cleveland are not likely to return to their former size anytime soon, and urban decline will remain a problem for these cities in the foreseeable future, it is clear that Americans enjoy the benefits of urban living and that the American city will continue to thrive.

References

Cahill, Edward E. “Migration and the Decline of the Black Population in Rural and Non-Metropolitan Areas.” Phylon 35, no. 3, (1974): 284-92.

Casadesus-Masanell, Ramon. “Ford’s Model-T: Pricing over the Product Life Cycle.” ABANTE – Studies in Business Management 1, no. 2 (1998): 143-65.

Chudacoff, Howard and Judith Smith. The Evolution of American Urban Society, fifth edition. Upper Saddle River, NJ: Prentice Hall, 2000.

Collins, William and Robert Margo. “The Economic Aftermath of the 1960s Riots in American Cities: Evidence from Property Values.” Journal of Economic History 67, no. 4 (2007): 849-83.

Collins, William and Robert Margo. “Race and the Value of Owner-Occupied Housing, 1940-1990.” Regional Science and Urban Economics 33, no. 3 (2003): 255-86.

Cutler, David et al. “The Rise and Decline of the American Ghetto.” Journal of Political Economy 107, no. 3 (1999): 455-506.

Frey, William and Alden Speare, Jr. Regional and Metropolitan Growth and Decline in the United States. New York: Russell Sage Foundation, 1988.

Gibson, Campbell. “Population of the 100 Largest Cities and Other Urban Places in the United States: 1790 to 1990.” Population Division Working Paper, no. 27, U.S. Bureau of the Census, June 1998. Accessed at: http://www.census.gov/population/www/documentation/twps0027.html

Glaeser, Edward. “Are Cities Dying?” Journal of Economic Perspectives 12, no. 2 (1998): 139-60.

Glaeser, Edward and Joseph Gyourko. “Urban Decline and Durable Housing.” Journal of Political Economy 113, no. 2 (2005): 345-75.

Glaeser, Edward and Matthew Kahn. “Decentralized Employment and the Transformation of the American City.” Brookings-Wharton Papers on Urban Affairs, 2001.

Glaeser, Edward and Janet Kohlhase. “Cities, Regions, and the Decline of Transport Costs.” NBER Working Paper Series, National Bureau of Economic Research, 2003.

Glaeser, Edward and Albert Saiz. “The Rise of the Skilled City.” Brookings-Wharton Papers on Urban Affairs, 2004.

Glaeser, Edward and Jesse Shapiro. “Urban Growth in the 1990s: Is City Living Back?” Journal of Regional Science 43, no. 1 (2003): 139-65.

Groh, George. The Black Migration: The Journey to Urban America. New York: Weybright and Talley, 1972.

Gutfreund, Owen D. Twentieth Century Sprawl: Highways and the Reshaping of the American Landscape. Oxford: Oxford University Press, 2004.

Hanson, Susan, ed. The Geography of Urban Transportation. New York: Guilford Press, 1986.

Hobbs, Frank and Nicole Stoops. Demographic Trends in the Twentieth Century: Census 2000 Special Reports. Washington, DC: U.S. Census Bureau, 2002.
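
Kain, John. “Housing Segregation, Negro Employment, and Metropolitan Decentralization.” Quarterly Journal of Economics 82, no. 2 (1968): 175-97.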

Kim, Sukkoo. “Urban Development in the United States, 1690-1990.” NBER Working Paper Series, National Bureau of Economic Research, 1999.

Mieszkowski, Peter and Edwin Mills. “The Causes of Metropolitan Suburbanization.” Journal of Economic Perspectives 7, no. 3 (1993): 135-47.

Muller, Peter. “Transportation and Urban Form: Stages in the Spatial Evolution of the American Metropolis.” In The Geography of Urban Transportation, edited by Susan Hanson. New York: Guilford Press, 1986.

Nye, David. Electrifying America: Social Meanings of a New Technology, 1880-1940. Cambridge, MA: MIT Press, 1990.

Nye, David. Consuming Power: A Social History of American Energies. Cambridge, MA: MIT Press, 1998.

Rae, Douglas. City: Urbanism and Its End. New Haven: Yale University Press, 2003.

Rappaport, Jordan. “U.S. Urban Decline and Growth, 1950 to 2000.” Economic Review: Federal Reserve Bank of Kansas City, no. 3, 2003: 15-44.

Rodwin, Lloyd and Hidehiko Sazanami, eds. Deindustrialization and Regional Economic Transformation: The Experience of the United States. Boston: Unwin Hyman, 1989.

Smith, Fred H. “Decaying at the Core: Urban Decline in Cleveland, Ohio.” Research in Economic History 21 (2003): 135-84.

Stanback, Thomas M. Jr. and Thierry J. Noyelle. Cities in Transition: Changing Job Structures in Atlanta, Denver, Buffalo, Phoenix, Columbus (Ohio), Nashville, Charlotte. Totowa, NJ: Allanheld, Osmun, 1982.

Van Tassel, David D. and John J. Grabowski, editors, The Encyclopedia of Cleveland History. Bloomington: Indiana University Press, 1996. Available at http://ech.case.edu/


[1] Reporting the size of a “city” should be done with care. In day-to-day usage, many Americans might talk about the size (population) of Boston and assert that Boston is a larger city than Phoenix. Strictly speaking, this is not true. The 2000 Census reports that the population of Boston was 589,000 while Phoenix had a population of 1.3 million. However, the Boston metropolitan area contained 4.4 million inhabitants in 2000 – substantially more than the 3.3 million residents of the Phoenix metropolitan area.

Citation: Smith, Fred and Sarah Allen. “Urban Decline (and Success), US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/urban-decline-and-success-in-the-united-states/

The Economic History of Taiwan

Kelly Olds, National Taiwan University

Geography

Taiwan is a sub-tropical island, roughly 180 miles long, located less than 100 miles off the shore of China’s Fujian province. Most of the island is covered with rugged mountains that rise to over 13,000 feet. These mountains rise directly out of the ocean along the eastern shore facing the Pacific, so that this shore and the central parts of the island are sparsely populated. Throughout its history, most of Taiwan’s people have lived on the Western Coastal Plain that faces China. This plain is crossed by east-west rivers that occasionally bring floods of water down from the mountains, creating broad, boulder-strewn flood plains. Until modern times these rivers made north-south travel costly and limited the island’s economic integration. The most important river is the Chuo Shuei-Hsi (between present-day Changhua and Yunlin counties), which has been an important economic and cultural divide.

Aboriginal Economy

Little is known about Taiwan prior to the seventeenth century. When the Dutch came to the island in 1622, they found a population of roughly 70,000 Austronesian aborigines, at least 1,000 Chinese and a smaller number of Japanese. The aborigine women practiced subsistence agriculture while aborigine men harvested deer for export. The Chinese and Japanese population was primarily male and transient. Some of the Chinese were fishermen who congregated at the mouths of Taiwanese rivers, but most Chinese and Japanese were merchants. Chinese merchants usually lived in aborigine villages and acted as middlemen, exporting deerskins, primarily to Japan, and importing salt and various manufactures. The harbor alongside which the Dutch built their first fort (in present-day Tainan City) was already an established place of rendezvous for Chinese and Japanese trade when the Dutch arrived.

Taiwan under the Dutch and Koxinga

The Dutch took control of most of Taiwan in a series of campaigns that lasted from the mid-1630s to the mid-1640s. The Dutch taxed the deerskin trade, hired aborigine men as soldiers and tried to introduce new forms of agriculture, but otherwise interfered little with the aborigine economy. The Tainan harbor grew in importance as an international entrepot. The most important change in the economy was an influx of about 35,000 Chinese to the island. These Chinese developed land, mainly in southern Taiwan, and specialized in growing rice and sugar. Sugar became Taiwan’s primary export. One of the most important Chinese investors in the Taiwanese economy was the leader of the Chinese community in Dutch Batavia (on Java) and during this period the Chinese economy on Taiwan bore a marked resemblance to the Batavian economy.

Koxinga, a Chinese-Japanese sea lord, drove the Dutch off the island in 1661. Under the rule of Koxinga and his heirs (1661-1683), Chinese settlement continued to spread in southern Taiwan. On the one hand, Chinese civilians made the crossing to flee the chaos that accompanied the Ming-Qing transition. On the other hand, Koxinga and his heirs brought over soldiers who were required to clear land and farm when they were not being used in wars. The Chinese population probably rose to about 120,000. Taiwan’s exports changed little, but the Tainan harbor lost importance as a center of international trade, as much of this trade now passed through Xiamen (Amoy), a port across the strait in Fujian that was also under the control of Koxinga and his heirs.

Taiwan under Qing Rule

The Qing dynasty defeated Koxinga’s grandson and took control of Taiwan in 1683. Taiwan remained part of the Chinese empire until it was ceded to Japan in 1895. The Qing government originally saw control of Taiwan as an economic burden that had to be borne in order to keep the island out of the hands of pirates. In the first year of occupation, the Qing government shipped as many Chinese residents as possible back to the mainland. The island lost perhaps one-third of its Chinese population. Travel to Taiwan by all but male migrant workers was illegal until 1732, and this prohibition was reinstated off and on until it was permanently rescinded in 1788. Nevertheless, the island’s Chinese population grew about two percent per year in the century following the Qing takeover. Both illegal immigration and natural increase were important components of this growth. The Qing government feared the expense of Chinese-aborigine confrontations and tried futilely to restrain Chinese settlement and keep the populations apart. Chinese pioneers, however, constantly pushed the bounds of Chinese settlement northward and eastward, and the aborigines were forced to adapt. Some groups permanently leased their land to Chinese settlers. Others learned Chinese farming skills and eventually assimilated, or else moved toward the mountains, where they continued hunting, learned to raise cattle or served as Qing soldiers. Due to the lack of Chinese women, intermarriage was also common.

Individual entrepreneurs or land companies usually organized Chinese pioneering enterprises. These people obtained land from aborigines or the government, recruited settlers, supplied loans to the settlers and sometimes invested in irrigation projects. Large land developers often lived in the village during the early years but moved to a city after the village was established. They remained responsible for paying the land tax and they received “large rents” from the settlers amounting to 10-15 percent of the expected harvest. However, they did not retain control of land usage or have any say in land sales or rental. The “large rents” were, in effect, a tax paid to a tax farmer who shared this revenue with the government. The payers of the large rents were the true owners who controlled the land. These people often chose to rent out their property to tenants who did the actual farming and paid a “small rent” of about 50 percent of the expected harvest.

Chinese pioneers made extensive use of written contracts but government enforcement of contracts was minimal. In the pioneers’ homeland across the strait, protecting property and enforcing agreements was usually a function of the lineage. Being part of a strong lineage was crucial to economic success and violent struggles among lineages were a problem endemic to south China. Taiwanese settlers had crossed the strait as individuals or in small groups and lacked strong lineages. Like other Chinese immigrants throughout the world, they created numerous voluntary associations based on one’s place of residence, occupation, place of origin, surname, etc. These organizations substituted for lineages in protecting property and enforcing contracts, and violent conflict among these associations over land and water rights was frequent. Due to property rights problems, land sales contracts often included the signature of not only the owner, but also his family and neighbors agreeing to the transfer. The difficulty of seizing collateral led to the common use of “conditional sales” as a means of borrowing money. Under the terms of a conditional sale, the lender immediately took control of the borrower’s property and retained the right to the property’s production in lieu of rent until the borrower paid back the loan. Since the borrower could wait an indefinite period of time before repaying the loan, this led to an awkward situation in which the person who controlled the land did not have permanent ownership and had no incentive to invest in land improvements.

Taiwan prospered during a sugar boom in the early eighteenth century, but afterwards its sugar industry had a difficult time keeping up with advances in foreign production. Until the Japanese occupation in 1895, Taiwan’s sugar farms and sugar mills remained small-scale operations. The sugar industry was centered in the south of the island and throughout the nineteenth century, the southern population showed little growth and may have declined. By the end of the nineteenth century, the south of the island was poorer than the north of the island and its population was shorter in stature and had a lower life expectancy. The north of the island was better suited to rice production and the northern economy seems to have grown robustly. As the Chinese population moved into the foothills of the northern mountains in the mid-nineteenth century, they began growing tea, which added to the north’s economic vitality and became the island’s leading export during the last quarter of the nineteenth century. The tea industry’s most successful product was oolong tea produced primarily for the U.S. market.

During the last years of Qing rule, Taiwan was made a full province of China and some attempts were made to modernize the island by carrying out a land survey and building infrastructure. Taiwan’s first railroad was constructed, linking several cities in the north.

Taiwan under Japanese Rule

The Japanese gained control of Taiwan in 1895 after the Sino-Japanese War. After several years of suppressing both Chinese resistance and banditry, the Japanese began to modernize the island’s economy. A railroad was constructed running the length of the island and modern roads and bridges were built. A modern land survey was carried out. Large rents were eliminated and those receiving these rents were compensated with bonds. Ownership of approximately twenty percent of the land could not be established to Japanese satisfaction and was confiscated. Much of this land was given to Japanese conglomerates that wanted land for sugarcane. Several banks were established and reorganized irrigation districts began borrowing money to make improvements. Since many Japanese soldiers had died of disease, improving the island’s sanitation and disease environment was also a top priority.

Under the Japanese, Taiwan remained an agricultural economy. Although sugarcane continued to be grown mainly on family farms, sugar processing was modernized and sugar once again became Taiwan’s leading export. During the early years of modernization, native Taiwanese sugar refiners remained important but, largely due to government policy, Japanese refiners holding regional monopsony power came to control the industry. Taiwanese sugar remained uncompetitive on the international market, but was sold duty free within the protected Japanese market. Rice, also bound for the protected Japanese market, displaced tea to become the second major export crop. Altogether, almost half of Taiwan’s agricultural production was being exported in the 1930s. After 1935, the government began encouraging investment in non-agricultural industry on the island. The war that followed was a time of destruction and economic collapse.

Growth in Taiwan’s per-capita economic product during this colonial period roughly kept up with that of Japan. Population also grew quickly as health improved and death rates fell. The native Taiwanese population’s per-capita consumption grew about one percent per year, slower than the growth in consumption in Japan, but greater than the growth in China. Better property rights enforcement, population growth, transportation improvements and protected agricultural markets caused the value of land to increase quickly, but real wage rates increased little. Most Taiwanese farmers did own some land but since the poor were more dependent on wages, income inequality increased.

Taiwan Under Nationalist Rule

Taiwan’s economy recovered from the war more slowly than the Japanese economy. The Chinese Nationalist government took control of Taiwan in 1945 and lost control of its original territory on the mainland in 1949. The Japanese population, which had grown to over five percent of Taiwan’s population (and a much greater proportion of Taiwan’s urban population), was shipped to Japan, and the new government confiscated Japanese property, creating large public corporations. The late 1940s was a period of civil war in China, and Taiwan also experienced violence and hyperinflation. In 1949, soldiers and refugees from the mainland flooded onto the island, increasing Taiwan’s population by about twenty percent. Mainlanders tended to settle in cities and were predominant in the public sector.

In the 1950s, Taiwan was dependent on American aid, which allowed its government to maintain a large military without overburdening the economy. Taiwan’s agricultural economy was left in shambles by the events of the 1940s. It had lost its protected Japanese markets and the low-interest-rate formal-sector loans to which even tenant farmers had access in the 1930s were no longer available. With American help, the government implemented a land reform program. This program (1) sold public land to tenant farmers, (2) limited rent to 37.5% of the expected harvest and (3) severely restricted the size of individual landholdings forcing landlords to sell most of their land to the government in exchange for stocks and bonds valued at 2.5 times the land’s annual expected harvest. This land was then redistributed. The land reform increased equality among the farm population and strengthened government control of the countryside. Its justice and effect on agricultural investment and productivity are still hotly debated.
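
The arithmetic of the reform’s terms is easy to make concrete. The figures below are hypothetical – a farm whose expected annual harvest is worth 1,000 units – and serve only to show the relation between the rent ceiling and the compensation formula described above.

# Worked example of the land-reform terms, using a hypothetical farm.
expected_harvest = 1000  # assumed value of one year's expected harvest

rent_ceiling = 0.375 * expected_harvest  # rent capped at 37.5% of expected harvest
compensation = 2.5 * expected_harvest    # landlord paid 2.5 years' expected harvest

print(rent_ceiling)                 # 375.0
print(compensation)                 # 2500.0
print(compensation / rent_ceiling)  # ~6.7 years of capped rent

Seen this way, a landlord gave up a perpetual rent stream of at most 37.5 percent of the harvest in exchange for securities worth fewer than seven years of that capped rent – one reason, perhaps, why the reform’s justice remains debated.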

High-speed growth accompanied by rapid industrialization began in the late 1950s. Taiwan became known for its cheap manufactured exports produced by small enterprises bound together by flexible sub-contracting networks. Taiwan’s postwar industrialization is usually attributed to (1) the decline in land per capita, (2) the change in export markets and (3) government policy. Between 1940 and 1962, Taiwan’s population increased at an annual rate of slightly over three percent, cutting the amount of land per capita in half. Taiwan’s agricultural exports had been sold tariff-free at higher-than-world-market prices in pre-war Japan, while Taiwan’s only important pre-war manufactured export, imitation panama hats, faced a 25% tariff in the U.S., their primary market. After the war, agricultural products generally faced the greatest trade barriers. As for government policy, Taiwan went through a period of import substitution policy in the 1950s, followed by promotion of manufactured exports in the 1960s and 1970s. Subsidies were available for certain manufactures under both regimes. During the import substitution regime, domestic manufactures were protected both by tariffs and by multiple overvalued exchange rates. Under the later export promotion regime, export processing zones were set up in which privileges were extended to businesses producing goods that would not be sold domestically.

Historical research into the “Taiwanese miracle” has focused on government policy and its effects, but statistical data for the first few post-war decades is poor and the overall effect of the various government policies is unclear. During the 1960s and 1970s, real GDP grew about 10% (7% per capita) each year. Most of this growth can be explained by increases in factors of production. Savings rates began rising after the currency was stabilized and reached almost 30% by 1970. Meanwhile, primary education, in which 70% of Taiwanese children had participated under the Japanese, became universal, and students in higher education increased many-fold. Although recent research has emphasized the importance of factor growth in the Asian “miracle economies,” studies show that productivity also grew substantially in Taiwan.

Further Reading

Chang, Han-Yu and Ramon Myers. “Japanese Colonial Development Policy in Taiwan, 1895-1906.” Journal of Asian Studies 22, no. 4 (August 1963): 433-450.

Davidson, James. The Island of Formosa: Past and Present. London: MacMillan & Company, 1903.

Fei, John, et al. Growth with Equity: The Taiwan Case. New York: Oxford University Press, 1979.

Gardella, Robert. Harvesting Mountains: Fujian and the China Tea Trade, 1757-1937. Berkeley: University of California Press, 1994.

Ho, Samuel. Economic Development of Taiwan 1860-1970. New Haven: Yale University Press, 1978.

Ho, Yhi-Min. Agricultural Development of Taiwan, 1903-1960. Nashville: Vanderbilt University Press, 1966.

Ka, Chih-Ming. Japanese Colonialism in Taiwan: Land Tenure, Development, and Dependency, 1895-1945. Boulder: Westview Press, 1995.

Knapp, Ronald, editor. China’s Island Frontier: Studies in the Historical Geography of Taiwan. Honolulu: University Press of Hawaii, 1980.

Li, Kuo-Ting. The Evolution of Policy Behind Taiwan’s Development Success. New Haven: Yale University Press, 1988.

Koo Hui-Wen and Chun-Chieh Wang. “Indexed Pricing: Sugarcane Price Guarantees in Colonial Taiwan, 1930-1940.” Journal of Economic History 59, no. 4 (December 1999): 912-926.

Mazumdar, Sucheta. Sugar and Society in China: Peasants, Technology, and the World Market. Cambridge, MA: Harvard University Asia Center, 1998.

Meskill, Johanna. A Chinese Pioneer Family: The Lins of Wu-feng, Taiwan, 1729-1895. Princeton, NJ: Princeton University Press, 1979.

Ng, Chin-Keong. Trade and Society: The Amoy Network on the China Coast 1683-1735. Singapore: Singapore University Press, 1983.

Olds, Kelly. “The Risk Premium Differential in Japanese-Era Taiwan and Its Effect.” Journal of Institutional and Theoretical Economics 158, no. 3 (September 2002): 441-463.

Olds, Kelly. “The Biological Standard of Living in Taiwan under Japanese Occupation.” Economics and Human Biology, 1 (2003): 1-20.

Olds, Kelly and Ruey-Hua Liu. “Economic Cooperation in Nineteenth-Century Taiwan.” Journal of Institutional and Theoretical Economics 156, no. 2 (June 2000): 404-430.

Rubinstein, Murray, editor. Taiwan: A New History. Armonk, NY: M.E. Sharpe, 1999.

Shepherd, John. Statecraft and Political Economy on the Taiwan Frontier, 1600-1800. Stanford: Stanford University Press, 1993.

Citation: Olds, Kelly. “The Economic History of Taiwan”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-taiwan/

Sweden – Economic Growth and Structural Change, 1800-2000

Lennart Schön, Lund University

This article presents an overview of Swedish economic growth performance in international and statistical perspective, along with an account of major trends in Swedish economic development during the nineteenth and twentieth centuries.1

Modern economic growth in Sweden took off in the middle of the nineteenth century and in international comparative terms Sweden has been rather successful during the past 150 years. This is largely thanks to the transformation of the economy and society from agrarian to industrial. Sweden is a small economy that has been open to foreign influences and highly dependent upon the world economy. Thus, successive structural changes have put their imprint upon modern economic growth.

Swedish Growth in International Perspective

The century-long period from the 1870s to the 1970s comprises the most successful part of Swedish industrialization and growth. On a per capita basis the Japanese economy performed equally well (see Table 1). The neighboring Scandinavian countries also grew rapidly but at a somewhat slower rate than Sweden. Growth in the rest of industrial Europe and in the U.S. was clearly outpaced. Growth in the entire world economy, as measured by Maddison, was even slower.

Table 1 Annual Economic Growth Rates per Capita in Industrial Nations and the World Economy, 1871-2005

Year Sweden Rest of Nordic Countries Rest of Western Europe United States Japan World Economy
1871/1875-1971/1975 2.4 2.0 1.7 1.8 2.4 1.5
1971/1975-2001/2005 1.7 2.2 1.9 2.0 2.2 1.6

Note: Rest of Nordic countries = Denmark, Finland and Norway. Rest of Western Europe = Austria, Belgium, Britain, France, Germany, Italy, the Netherlands, and Switzerland.

Source: Maddison (2006); Krantz/Schön (forthcoming 2007); World Bank, World Development Indicator 2000; Groningen Growth and Development Centre, www.ggdc.com.

The Swedish advance in a global perspective is illustrated in Figure 1. In the mid-nineteenth century the Swedish average income level was close to the average global level (as measured by Maddison). In a European perspective Sweden was a rather poor country. By the 1970s, however, the Swedish income level was more than three times higher than the global average and among the highest in Europe.

Figure 1: Swedish GDP per Capita in Relation to World GDP per Capita, 1870-2004 (nine-year moving averages)

[Figure omitted.]

Sources: Maddison (2006); Krantz/Schön (forthcoming 2007).

Note: The annual variation in world production between Maddison’s benchmarks 1870, 1913 and 1950 is estimated from his supply of annual country series.

To some extent this was a catch-up story. Sweden was able to take advantage of technological and organizational advances made in Western Europe and North America. Furthermore, Scandinavian countries with large natural resource bases, such as Sweden and Finland, had been rather disadvantaged as long as agriculture was the main source of income. The shift to industry expanded the resource base, and industrial development – directed to a growing domestic market but even more to a widening world market – became the main lever of growth from the late nineteenth century.

Catch-up is not the whole story, though. In many industrial areas Swedish companies took a position at the technological frontier from an early point in time. Thus, in certain sectors there was also forging ahead,2 quickening the pace of structural change in the industrializing economy. Furthermore, during a century of fairly rapid growth new conditions have arisen that have required profound adaptation and a renewal of entrepreneurial activity as well as of economic policies.

The slowdown in Swedish growth from the 1970s may be considered in this perspective. While in most other countries growth from the 1970s fell only in relation to the growth rates of the golden post-war age, Swedish growth fell clearly below the historical long-run growth trend. It also fell to a very low level internationally. The 1970s certainly meant the end of a number of successful growth trajectories of the industrial society. At the same time, new growth forces appeared with the electronic revolution, as well as with the advance of a more service-based economy. It may be that this structural change hit the Swedish economy harder than most other industrial capitalist economies. Sweden was forced into a transformation of its industrial economy and of its political economy in the 1970s and the 1980s that was more profound than in most other Western economies.

A Statistical Overview, 1800-2000

Swedish economic development since 1800 may be divided into six periods with different growth trends, as well as different composition of growth forces.

Table 2 Annual Growth Rates in per Capita Production, Total Investments, Foreign Trade and Population in Sweden, 1800-2000

Period Per capita GDP Investments Foreign Trade Population
1800-1840 0.6 0.3 0.7 0.8
1840-1870 1.2 3.0 4.6 1.0
1870-1910 1.7 3.0 3.3 0.6
1910-1950 2.2 4.2 2.0 0.5
1950-1975 3.6 5.5 6.5 0.6
1975-2000 1.4 2.1 4.3 0.4
1800-2000 1.9 3.4 3.8 0.7

Source: Krantz/Schön (forthcoming 2007).

In the first decades of the nineteenth century the agricultural sector dominated and growth was slow in all aspects but population. Still there was per capita growth, though to some extent this was a recovery from the low levels during the Napoleonic Wars. The acceleration during the next period, around the mid-nineteenth century, is marked in all aspects. Investments and foreign trade became very dynamic ingredients with the onset of industrialization, and they remained so during the following periods as well. Up to the 1970s, per capita growth rates increased in each successive period. In an international perspective it is most notable that per capita growth rates increased even in the interwar period, despite the slowdown in foreign trade. The interwar period is crucial for the long-run relative success of Swedish economic growth. The decisive culmination in the post-war period, with high growth rates in investments and in foreign trade, stands out as well, as does the deceleration in all aspects in the late twentieth century.

An analysis in a traditional growth accounting framework gives a long-term pattern with certain periodic similarities (see Table 3). Total factor productivity growth increased over time up to the 1970s, only to decrease to its long-run level in the last decades. This deceleration in productivity growth may be looked upon either as a failure of the “Swedish Model” to accommodate new growth forces or as another case of the “productivity paradox” that accompanied the information technology revolution.3

Table 3 Total Factor Productivity (TFP) Growth and Relative Contribution of Capital, Labor and TFP to GDP Growth in Sweden, 1840-2000

Period TFP Growth Capital Labor TFP
1840-1870 0.4 55 27 18
1870-1910 0.7 50 18 32
1910-1950 1.0 39 24 37
1950-1975 2.1 45 7 48
1975-2000 1.0 44 1 55
1840-2000 1.1 45 16 39

Source: See Table 2.

In terms of contribution to overall growth, TFP has increased its share in every period. The TFP share was low in the 1840s, but there was a very marked increase with the onset of modern industrialization from the 1870s. In relative terms TFP reached its highest level so far from the 1970s, indicating an increasing role of human capital, technology and knowledge in economic growth. The role of capital accumulation was markedly more pronounced in early industrialization, with the build-up of a modern infrastructure and with urbanization, but capital still retained much of its importance during the twentieth century. Its contribution to growth during the post-war golden age was significant, with very high levels of material investments. At the same time TFP growth culminated with positive structural shifts, as well as with increased knowledge intensity complementary to the investments. In quantitative terms, labor has progressively reduced its role in economic growth. One should observe, however, the relatively large importance of labor in Swedish economic growth during the interwar period. This was largely due to demographic factors and to the employment situation, which will be commented upon further below.
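
For readers unfamiliar with the framework, the decomposition behind Table 3 follows standard growth accounting. The sketch below assumes a constant-returns production function with capital share $\alpha$; the functional form and the shares are the conventional textbook assumptions, not parameters reported in the table.

% Standard growth-accounting decomposition (a conventional sketch;
% Table 3 does not report the exact weights used).
\begin{align*}
Y_t &= A_t K_t^{\alpha} L_t^{1-\alpha}
  && \text{(output, with TFP level } A_t\text{)} \\
g_Y &= g_A + \alpha\, g_K + (1-\alpha)\, g_L
  && \text{(growth rates, from logs)} \\
1 &= \underbrace{\frac{\alpha\, g_K}{g_Y}}_{\text{capital}}
   + \underbrace{\frac{(1-\alpha)\, g_L}{g_Y}}_{\text{labor}}
   + \underbrace{\frac{g_A}{g_Y}}_{\text{TFP}}
  && \text{(relative contributions, as in Table 3)}
\end{align*}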

In the first decades of the nineteenth century, growth was still led by the primary production of agriculture, accompanied by services and transport. Secondary production in manufacturing and building was, on the contrary, very stagnant. From the 1840s the industrial sector accelerated, increasingly supported by transport and communications, as well as by private services. The sectoral shift from agriculture to industry became more pronounced at the turn of the twentieth century when industry and transportation boomed, while agricultural growth decelerated into subsequent stagnation. In the post-war period the volume of services, both private and public, increased strongly, although still not outpacing industry. From the 1970s the focus shifted to private services and to transport and communications, indicating fundamental new prerequisites of growth.

Table 4 Growth Rates of Industrial Sectors, 1800-2000

Period Agriculture Industry and Handicraft Transport and Communication Building Private Services Public Services GDP
1800-1840 1.5 0.3 1.1 -0.1 1.4 1.5 1.3
1840-1870 2.1 3.7 1.8 2.4 2.7 0.8 2.3
1870-1910 1.0 5.0 3.9 1.3 2.7 1.0 2.3
1910-1950 0.0 3.5 4.9 1.4 2.2 2.2 2.7
1950-1975 0.4 5.1 4.4 3.8 4.3 4.0 4.3
1975-2000 -0.4 1.9 2.6 -0.8 2.2 0.2 1.8
1800-2000 0.9 3.8 3.7 1.8 2.7 1.7 2.6

Source: See Table 2.

Note: Private services are exclusive of dwelling services.

Growth and Transformation in the Agricultural Society of the Early Nineteenth Century

During the first half of the nineteenth century the agricultural sector and the rural society dominated the Swedish economy. Thus, more than three-quarters of the population were occupied in agriculture while roughly 90 percent lived in the countryside. Many non-agrarian activities such as the iron industry, the saw mill industry and many crafts as well as domestic, religious and military services were performed in rural areas. Although growth was slow, a number of structural and institutional changes occurred that paved the way for future modernization.

Most important was the transformation of agriculture. From the late eighteenth century, commercialization of the primary sector intensified. Particularly during the Napoleonic Wars, the domestic market for foodstuffs widened. The population increase, in combination with a temporary decrease in imports, stimulated enclosures and the reclamation of land, the introduction of new crops and new methods, and above all a greater degree of market orientation. In the decades after the war, the traditional Swedish trade deficit in grain even shifted to a trade surplus, with increasing exports of oats, primarily to Britain.

Concomitant with the agricultural transformation were a number of infrastructural and institutional changes. Domestic transportation costs were reduced through investments in canals and roads. Trade of agricultural goods was liberalized, reducing transaction costs and integrating the domestic market even further. Trading companies became more effective in attracting agricultural surpluses for more distant markets. In support of the agricultural sector new means of information were introduced by, for example, agricultural societies that published periodicals on innovative methods and on market trends. Mortgage societies were established to supply agriculture with long term capital for investments that in turn intensified the commercialization of production.

All these elements meant a profound institutional change in the sense that the price mechanism became much more effective in directing human behavior. Furthermore, a greater interest arose in information and in its main instrument, literacy. Traditionally, popular literacy had been upheld by the church and was mainly devoted to knowledge of the primary Lutheran texts. In the new economic environment, literacy was secularized and transformed into a more functional literacy, marked by the advent of schools for public education in the 1840s.

The Breakthrough of Modern Economic Growth in the Mid-nineteenth Century

In the decades around the middle of the nineteenth century new dynamic forces appeared that accelerated growth. Most notably, foreign trade expanded by leaps and bounds in the 1850s and 1860s. With new export sectors, industrial investment increased. Furthermore, railways became the most prominent component of a new infrastructure, and their construction introduced a new element in Swedish growth: heavy capital imports.

The upswing in industrial growth in Western Europe during the 1850s, in combination with demand induced by the Crimean War, led to a particularly strong expansion of Swedish exports, with sharp price increases for three staple goods – bar iron, wood and oats. Charcoal-based Swedish bar iron had been the traditional export good and had completely dominated Swedish exports until the mid-nineteenth century. Bar iron met increasingly strong competition, however, from the British and continental iron and steel industries, and Swedish exports had stagnated in the first half of the nineteenth century. The upswing in international demand, following the diffusion of industrialization and railway construction, gave an impetus to the modernization of Swedish steel production in the following decades.

The saw mill industry was a genuinely new export industry that grew dramatically in the 1850s and 1860s. Up until this time, the vast forests of Sweden had been regarded mainly as a fuel resource for the iron industry, for household heating and for local residential construction. With sharp price increases on the Western European market from the 1840s and 1850s, the resources of the sparsely populated northern part of Sweden suddenly became valuable. A formidable explosion of saw mill construction at the mouths of the rivers along the northern coastline followed. Within a few decades Swedish merchants, as well as Norwegian, German, British and Dutch merchants, became saw mill owners running large-scale capitalist enterprises at the fringe of European civilization.

Less dramatic but equally important was the sudden expansion of Swedish oat exports. The market for oats arose mainly in Britain, where short-distance transportation in rapidly growing urban centers expanded the fleet of horses. Swedish oats thus became an important energy resource in the decades around the mid-nineteenth century. For Sweden this had a special significance, since oats could be cultivated on rather barren and marginal soils, with which Sweden was richly endowed. The market for oats, with its strongly rising prices, thereby further stimulated the commercialization of agriculture and the diffusion of new methods. This was all the more so since market-oriented oat cultivation substituted for local flax production – which also thrived on barren soils – while domestic linen was increasingly supplanted by factory-produced cotton goods.

The Swedish economy responded very successfully to the impetus from Western Europe during these decades, diffusing the new influences through the economy and integrating them into its development. The barriers to change seem to have been weak. This is partly explained by the prior transformation of agriculture and the evolution of market institutions in the rural economy. People reacted to the price mechanism. New social classes of commercial peasants, capitalists and wage laborers had emerged in an era of domestic market expansion, increased regional specialization and population growth.

The composition of export goods also contributed to broad participation in, and a wide distribution of, export income. Iron, wood and oats spread gains both regionally and socially. The value of previously marginal resources, such as soils in the south and forests in the north, rose sharply. The technology was simple and labor intensive in industry, forestry, agriculture and transportation. The demand for unskilled labor increased strongly, which was to leave an imprint on Swedish wage development in the second half of the nineteenth century. Commercial houses and industrial companies made profits, but export income was distributed to many segments of the population.

The integration of the Swedish economy was further reinforced by initiatives taken by the State. The parliamentary decision in the 1850s to construct the railway trunk lines meant, first, a more direct involvement of the State in the development of a modern infrastructure and, second, new principles of finance, since the State had to rely upon capital imports. At the same time, markets for goods, labor and capital were liberalized, and integration both within Sweden and with the world market deepened. The Swedish adoption of the Gold Standard in 1873 put a final stamp on this institutional development.

A Second Industrial Revolution around 1900

In the late nineteenth century, particularly in the 1880s, international competition became fiercer for agriculture and the early industrial branches. The integration of world markets led to falling prices and stagnating demand for Swedish staple goods such as iron, sawn wood and oats. Profits were squeezed and expansion thwarted. On the other hand, new markets arose. Rising wages intensified mechanization both in agriculture and in industry, increasing the demand for more sophisticated machinery. At the same time, consumer demand shifted towards better foodstuffs – such as milk, butter and meat – and towards more highly fabricated industrial goods.

The decades around the turn of the twentieth century brought a profound structural change in the composition of Swedish industrial expansion that was crucial for long-term growth. New and more sophisticated enterprises were founded and expanded, particularly from the 1890s, in the upswing after the Baring Crisis.

The new enterprises were closely related to the so-called Second Industrial Revolution, in which scientific knowledge and more complex engineering skills were main components. The electrical motor became especially important in Sweden. A new development block was created around this innovation, combining engineering skills in companies such as ASEA (later ABB) with large demand in energy-intensive processes and with Sweden’s large supply of hydropower.4 Financing the rapid development of this large block engaged the commercial banks, knitting closer ties between financial capital and industry. The State once again engaged in infrastructural development in support of electrification, still resorting to heavy capital imports.

A number of innovative industries were founded in this period, all related to the increased demand for mechanization and engineering skills. Companies such as AGA, ASEA, Ericsson, Separator (AlfaLaval) and SKF have been labeled “enterprises of genius,” and all are associated with renowned inventors and innovators. This was, of course, not an exclusively Swedish phenomenon. These branches developed simultaneously on the Continent, particularly in nearby Germany, and in the U.S. Knowledge and innovative stimulus diffused among these economies. The question is rather why this new development became so strong in Sweden that, within a relatively short period of time, new industries were able to supplant old resource-based industries as the main driving forces of industrialization.

Traditions of engineering skill were certainly important, developed in old heavy industrial branches such as the iron and steel industries and stimulated further by State initiatives such as railway construction and, more directly, the founding of the Royal Institute of Technology. Beyond that, however, economic development in the second half of the nineteenth century fundamentally changed relative factor prices and the relative profitability of allocating resources to different lines of production.

The relative increase in the wages of unskilled labor had been stimulated by the composition of early Swedish exports. It was much reinforced by two components of the subsequent development – emigration and capital imports.

Within approximately the same period, 1850-1910, the Swedish economy received a huge amount of capital, mainly from Germany and France, while delivering an equally huge amount of labor, primarily to the U.S. Swedish relative factor prices thus changed dramatically. Swedish interest rates remained rather high compared to those of the leading European countries until 1910, owing to a continuously large demand for capital in Sweden, but relative wages rose persistently (see Table 5). As in the rest of Scandinavia, wage increases were much stronger than GDP growth, indicating a shift in income distribution in favor of labor, particularly unskilled labor, during this period of increased world market integration.

Table 5: Annual Increase in Real Wages of Unskilled Labor and Annual GDP Growth per Capita, 1870-1910 (percent)

Country Annual real wage increase, 1870-1910 Annual GDP growth per capita, 1870-1910
Sweden 2.8 1.7
Denmark and Norway 2.6 1.3
France, Germany and Great Britain 1.1 1.2
United States 1.1 1.6

Sources: Wages from Williamson (1995); GDP growth see Table 1.

Relative profitability fell in traditional industries, which exploited rich natural resources and cheap labor, while more sophisticated industries were favored. But the causality runs both ways. Had this structural shift with the growth of new and more profitable industries not occurred, the Swedish economy would not have been able to sustain the wage increase.5

Accelerated Growth in the War-stricken Period, 1910-1950

The most notable feature of long-term Swedish growth is the acceleration in growth rates during the period 1910-1950, a period that in Europe at large was full of problems and catastrophes.6 Swedish per capita production grew at 2.2 percent annually, while growth in the rest of Scandinavia was somewhat below 2 percent and growth in the rest of Europe hovered around 1 percent. The Swedish acceleration rested mainly on three pillars.

First, the structure created at the end of the nineteenth century was very viable, with considerable long-term growth potential. It consisted of new industries and new infrastructures that involved industrialists and financial capitalists, as well as public sector support. It also comprised industries that met relatively strong demand in wartime, as well as in the interwar period, both domestically and abroad.

Second, the First World War brought an immense financial bonus to the Swedish market. A huge export surplus at inflated prices during the war led to the domestication of the Swedish national debt. This in turn further capitalized the Swedish financial market, lowering interest rates and facilitating subsequent innovative activity in industry. A domestic money market arose that provided the State with new instruments for economic policy, which were to become important for the implementation of the new social democratic “Keynesian” policies of the 1930s.

Third, demographic developments favored the Swedish economy in this period. The share of the population in the economically active age group 15-64 grew substantially. This was due partly to the fact that earlier emigration had reduced the cohorts that would otherwise now have reached pension age. Comparatively low mortality among young people during the 1910s, as well as the end of mass emigration, further enhanced the share of the active population. Both the labor market and domestic demand were stimulated, particularly during the 1930s, when the household-forming age group of 25-30 year-olds increased.

The augmented labor supply would have increased unemployment had it not been combined with the richer supply of capital and innovative industrial development that met elastic demand both domestically and in Europe.

Thus, a richer supply of both capital and labor stimulated the domestic market in a period when international market integration deteriorated. Above all, it stimulated the development of mass production of consumption goods based upon the innovations of the Second Industrial Revolution. Significant new enterprises that emerged from the interwar period, such as Volvo, SAAB, Electrolux, Tetra Pak and IKEA, were very much bound up with the new logic of industrial society.

The Golden Age of Growth, 1950-1975

The Swedish economy was clearly part of the European Golden Age of growth, although the Swedish acceleration from the 1950s was less pronounced than in the rest of Western Europe, which had been plagued to a much larger extent by wars and crises.7 The Swedish post-war period was characterized primarily by two phenomena – the full fruition of development blocks based upon the great innovations of the late nineteenth century (the electrical motor and the combustion engine) and the cementing of the “Swedish Model” of the welfare state. These two phenomena were highly complementary.

The Swedish Model had basically two components. One was a greater public responsibility for social security and for the creation and preservation of human capital. This led to a rapid increase in the supply of public services in the realms of education, health and children’s day care, as well as to expanded social security programs and public saving for transfers to pensioners. The consequence was high taxation. The other component was the regulation of labor and capital markets. This was the most ingenious part of the model, constructed to sustain growth in the industrial society and to increase equality in combination with the social security programs and taxation.

The labor market program was the result of negotiations between the trade unions and the employers’ organization. It was labeled the “solidaristic wage policy” and had two elements. One was to achieve equal wages for equal work, regardless of an individual company’s ability to pay. The other was to raise wage levels in low-paid areas and thus to compress the wage distribution. The aim of the program was in fact to speed up the structural rationalization of industry and to eliminate less productive companies and branches: labor should be transferred to the most productive, export-oriented sectors, while income should at the same time be distributed more equally. A drawback of the solidaristic wage policy, from an egalitarian point of view, was that profits soared in the productive sectors since wage increases were held back. However, capital market regulations hindered the conversion of high profits into very high incomes for shareholders. Profits were taxed lightly if they were converted into further investments within the company (the timing of the use of these funds was controlled by the State as part of its stabilization policy) but taxed heavily if distributed to shareholders. The result was that investment within existing profitable companies was supported and in effect subsidized, while the mobility of capital dwindled and activity on the stock market fell.

As long as the export sectors grew, the program worked well.8 Companies founded in the late nineteenth century and in the interwar period developed into successful multinationals in engineering – machinery, automobiles and shipbuilding – as well as in the resource-based steel and paper industries. The expansion of the export sector was the main force behind the high growth rates and productivity increases, but the sector was strongly supported by public or publicly subsidized investments in infrastructure and residential construction.

Hence, during the Golden Age of growth the development blocks around electrification and motorization matured in a broad modernization of society, in which mass consumption and mass production were supported by social programs, investment programs and labor market policy.

Crisis and Restructuring from the 1970s

In the 1970s and early 1980s a number of industries – steel works, pulp and paper, shipbuilding, and mechanical engineering – ran into crisis. New global competition, changing consumer behavior and profound innovative renewal, especially in microelectronics, made some of the industrial pillars of the Swedish Model crumble. At the same time the disadvantages of the old model became more apparent: it put obstacles in the way of flexibility and entrepreneurial initiative, and it reduced individual incentives for mobility. Thus, while the Swedish Model had fostered the rationalization of existing industries well adapted to the post-war period, it did not support a more profound transformation of the economy.

One should not exaggerate the obstacles to transformation, though. The Swedish economy remained very open in the markets for goods and many services, and the pressure to transform increased rapidly. During the 1980s a far-reaching structural change took place within industry as well as in economic policy, engaging both private and public actors. Shipbuilding was almost completely discontinued, pulp mills were integrated into modernized paper works, the steel industry was concentrated and specialized, and mechanical engineering was digitalized. New and more knowledge-intensive growth industries appeared in the 1980s, such as IT-based telecommunications, pharmaceuticals and biotechnology, as well as new service industries.

During the 1980s some of the constituent components of the Swedish Model were weakened or eliminated. Centralized negotiations and the solidaristic wage policy disappeared. Regulations in the capital market were dismantled under the pressure of increasing international capital flows, accompanied by a forceful revival of the stock market. The expansion of public sector services came to an end, and the taxation system was reformed with a reduction of marginal tax rates. Swedish economic policy and the welfare system thus became more closely aligned with the European mainstream, which facilitated Sweden’s application for membership and eventual entry into the European Union in 1995.

It is also clear that the period from the 1970s to the early twenty-first century comprises two growth trends, before and after 1990 respectively. During the 1970s and 1980s, growth in Sweden was very slow and marked by the great structural problems that the Swedish economy had to cope with. The slow growth prior to 1990 did not signify stagnation in a real sense, but rather the transformation of industrial structures and the reformulation of economic policy, which did not immediately accelerate growth but instead produced imbalances and bottlenecks that took years to eliminate. From the 1990s up to 2005, Swedish growth accelerated quite forcefully in comparison with most Western economies.9 Thus, the 1980s may be considered a Swedish case of “the productivity paradox,” with innovative renewal but a delayed acceleration of productivity and growth from the 1990s – although a delayed productivity effect of more profound transformation and radical innovative behavior is not paradoxical.

Table 6: Annual Growth Rates per Capita, 1971-2005 (percent)

Period Sweden Rest of Nordic Countries Rest of Western Europe United States World Economy
1971/1975-1991/1995 1.2 2.1 1.8 1.6 1.4
1991/1995-2001/2005 2.4 2.5 1.7 2.1 2.1

Sources: See Table 1.

The recent acceleration in growth may also indicate that some basic traits from early industrialization still pertain to the Swedish economy: the international orientation of a small open economy fosters transformation and the adaptation of human skills to new circumstances as a major force behind long-term growth.

References

Abramovitz, Moses. “Catching Up, Forging Ahead and Falling Behind.” Journal of Economic History 46, no. 2 (1986): 385-406.

Dahmén, Erik. “Development Blocks in Industrial Economics.” Scandinavian Economic History Review 36 (1988): 3-14.

David, Paul A. “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox.” American Economic Review 80, no. 2 (1990): 355-61.

Eichengreen, Barry. “Institutions and Economic Growth: Europe after World War II.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. New York: Cambridge University Press, 1996.

Krantz, Olle and Lennart Schön. Swedish Historical National Accounts, 1800-2000. Lund: Almqvist and Wiksell International, forthcoming 2007.

Maddison, Angus. The World Economy, Volumes 1 and 2. Paris: OECD, 2006.

Schön, Lennart. “Development Blocks and Transformation Pressure in a Macro-Economic Perspective: A Model of Long-Cyclical Change.” Skandinaviska Enskilda Banken Quarterly Review 20, no. 3-4 (1991): 67-76.

Schön, Lennart. “External and Internal Factors in Swedish Industrialization.” Scandinavian Economic History Review 45, no. 3 (1997): 209-223.

Schön, Lennart. En modern svensk ekonomisk historia: Tillväxt och omvandling under två sekel (A Modern Swedish Economic History: Growth and Transformation in Two Centuries). Stockholm: SNS, 2000.

Schön, Lennart. “Total Factor Productivity in Swedish Manufacturing in the Period 1870-2000.” In Exploring Economic Growth: Essays in Measurement and Analysis: A Festschrift for Riitta Hjerppe on Her Sixtieth Birthday, edited by S. Heikkinen and J.L. van Zanden. Amsterdam: Aksant, 2004.

Schön, Lennart. “Swedish Industrialization 1870-1930 and the Heckscher-Ohlin Theory.” In Eli Heckscher, International Trade, and Economic History, edited by Ronald Findlay et al. Cambridge, MA: MIT Press, 2006.

Svennilson, Ingvar. Growth and Stagnation in the European Economy. Geneva: United Nations Economic Commission for Europe, 1954.

Temin, Peter. “The Golden Age of European Growth Reconsidered.” European Review of Economic History 6, no. 1 (2002): 3-22.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32, no. 2 (1995): 141-96.

Citation: Schön, Lennart. “Sweden – Economic Growth and Structural Change, 1800-2000.” EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/sweden-economic-growth-and-structural-change-1800-2000/

A History of the Standard of Living in the United States

Richard H. Steckel, Ohio State University

Methods of Measuring the Standard of Living

During many years of teaching, I have introduced the topic of the standard of living by asking students to pretend that they would be born again to unknown (random) parents in a country they could choose based on three of its characteristics. The list put forward in the classroom invariably includes many of the categories usually suggested by scholars who have studied the standard of living over the centuries: access to material goods and services; health; socio-economic fluidity; education; inequality; the extent of political and religious freedom; and climate. Thus, there is little disagreement among people, whether newcomers or professionals, on the relevant categories of social performance.

Components and Weights

Significant differences of opinion emerge, both among students and research specialists, over the precise measures to be used within each category and over the weights, or relative importance, that should be attached to each. There are numerous ways to measure health, for example: some approaches emphasize length of life, while others give high priority to morbidity (illness or disability) or to aspects of the quality of life while living (e.g., physical fitness). Conceivably one might attempt comparisons using all feasible measures, but this is expensive and time-consuming, and in any event many good measures within a category are highly correlated.

Weighting the various components is the most contentious issue in any attempt to summarize the standard of living, or otherwise compress diverse measures into a single number. Some people give high priority to income, for example, while others claim that health is most important. Economists and other social scientists recognize that tastes or preferences are individualistic and diverse, and following this logic to the extreme, one might argue that all interpersonal comparisons are invalid. On the other hand, there are general tendencies in preferences. Every class that I have taught has emphasized the importance of income and health, and for this reason I discuss historical evidence on these measures.
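
To make the weighting problem concrete, the following minimal Python sketch compresses diverse category scores into a single number. The categories, the 0-100 scoring, and the two weighting schemes are hypothetical illustrations rather than measures proposed in the literature; the point is only that observers who agree on the scores but disagree on the weights will reach different summary judgments.

def living_standard_index(scores, weights):
    """Weighted average of category scores, each scored 0-100."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[k] * weights[k] for k in weights)

scores = {"income": 70, "health": 85, "education": 60}  # hypothetical scores

# Two observers who accept the same scores but weight them differently:
income_first = {"income": 0.6, "health": 0.2, "education": 0.2}
health_first = {"income": 0.2, "health": 0.6, "education": 0.2}

print(living_standard_index(scores, income_first))  # 71.0
print(living_standard_index(scores, health_first))  # 77.0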

Material Aspects of the Standard of Living

Gross Domestic Product

The most widely used measure of the material standard of living is Gross Domestic Product (GDP) per capita, adjusted for changes in the price level (inflation or deflation). This measure, real GDP per capita, reflects only economic activities that flow through markets, omitting productive endeavors unrecorded in market exchanges, such as preparing meals at home or maintenance done by the homeowner. It ignores the work effort required to produce income and does not consider conditions surrounding the work environment, which might affect health and safety. Crime, pollution, and congestion, which many people consider important to their quality of life, are also excluded from GDP. Moreover, technological change, relative prices, and tastes affect the course of GDP and the products and services it includes, which creates what economists call an “index number” problem that is not readily solvable. Nevertheless, most economists believe that real GDP per capita does summarize or otherwise quantify important aspects of the average availability of goods and services.

Time Trends in Real GDP per Capita

Table 1 shows the course of the material standard of living in the United States from 1820 to 1998. Over this period of 178 years real GDP per capita increased 21.7 fold, or an average of 1.73 percent per year. Although the evidence available to estimate GDP directly is meager, this rate of increase was probably many times higher than that experienced during the colonial period. This conclusion is justified by considering the implications of extrapolating the level observed in 1820 ($1,257) backward in time at the growth rate measured since 1820 (1.73 percent). Under this supposition, real per capita GDP would have doubled every forty years (halved every forty years going backward in time), and so by the mid-1700s there would have been insufficient income to support life. Because the cheapest diet able to sustain good health would have cost nearly $500 per year, the tentative assumption of modern economic growth contradicts what actually happened. Moreover, historical evidence suggests that important ingredients of modern economic growth, such as technological change and human and physical capital, accumulated relatively slowly during the colonial period.
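
The backward-extrapolation argument is simple compound-growth arithmetic, and the Python sketch below checks it. The 1820 level ($1,257 in 1990 international dollars), the 1.73 percent growth rate, and the roughly $500 cost of a subsistence diet come from the text; taking 1750 to stand for “the mid-1700s” is an illustrative assumption.

import math

gdp_1820 = 1257.0   # real GDP per capita in 1820, 1990 international dollars
growth = 0.0173     # average annual growth rate, 1820-1998

def extrapolate_backward(level, rate, years):
    """Project a level backward in time at a constant growth rate."""
    return level / (1 + rate) ** years

# Projecting the 1820 level back to 1750 yields roughly $378,
# well below the ~$500 needed for a subsistence diet.
print(round(extrapolate_backward(gdp_1820, growth, 1820 - 1750)))  # 378

# Doubling time at 1.73 percent is about forty years, as the text notes.
print(round(math.log(2) / math.log(1 + growth), 1))  # 40.4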

Table 1: GDP per Capita in the United States

Year GDP per capita (a) Annual growth rate from previous period (percent)
1820 1,257
1870 2,445 1.34
1913 5,301 1.82
1950 9,561 1.61
1973 16,689 2.45
1990 23,214 1.94
1998 27,331 2.04

(a) Measured in 1990 international dollars.

Source: Maddison (2001), Tables A-1c and A-1d.
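
The growth-rate column in Table 1 can be reproduced as the compound annual growth rate between benchmark years, as in this short sketch (all values taken from the table itself):

def cagr(earlier, later, years):
    """Compound annual growth rate between two levels, in percent."""
    return ((later / earlier) ** (1 / years) - 1) * 100

print(round(cagr(1257, 2445, 1870 - 1820), 2))   # 1.34, matching the table
print(round(cagr(9561, 16689, 1973 - 1950), 2))  # 2.45, matching the table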

Cycles in Real GDP per Capita

Although real GDP per capita is given for only seven dates in Table 1, it is apparent that economic progress has been uneven over time. If annual or quarterly data were given, they would show that business cycles have been a major feature of the economic landscape since industrialization began in the 1820s. By far the worst downturn in U.S. history occurred during the Great Depression of the 1930s, when real per capita GDP declined by approximately one-third and the unemployment rate reached 25 percent.

Regional Differences

The aggregate numbers also disguise regional differences in the standard of living. In 1840 personal income per capita was twice as high in the Northeast as in the North Central States. Regional divergence increased after the Civil War when the South Atlantic became the nation’s poorest region, attaining a level only one-third of that in the Northeast. Regional convergence occurred in the twentieth century and industrialization in the South significantly improved the region’s economic standing after World War II.

Health and the Standard of Living

Life Expectancy

Two measures of health are widely used in economic history: life expectancy at birth (or average length of life) and average height, which measures nutritional conditions during the growing years. Table 2 shows that life expectancy approximately doubled over the past century and a half, reaching 76.7 years in 1998. If depressions and recessions have adversely affected the material standard of living, epidemics have been a major cause of sudden declines in health in the past. Fluctuations during the nineteenth century are evident from the table, but as a rule growth rates in health have been considerably less volatile than those for GDP, particularly during the twentieth century.

Table 2: Life Expectancy at Birth in the United States

Year Life Expectancy
1850 38.3
1860 41.8
1870 44.0
1880 39.4
1890 45.2
1900 47.8
1910 53.1
1920 54.1
1930 59.7
1940 62.9
1950 68.2
1960 69.7
1970 70.8
1980 73.7
1990 75.4
1998 76.7

Source: Haines (2002)

Childhood mortality greatly affects life expectancy, which was low in the mid-1800s largely because mortality rates were very high for this age group. For example, roughly one child in five born alive in 1850 did not survive to age one; today the infant mortality rate is under one percent. The past century and a half witnessed a significant shift in deaths from early childhood to old age. At the same time, the major causes of death have shifted from infectious diseases, originating with germs or microorganisms, to degenerative processes that are affected by life-style choices such as diet, smoking and exercise.

The largest gains were concentrated in the first half of the twentieth century, when life expectancy increased from 47.8 years in 1900 to 68.2 years in 1950. Factors behind the growing longevity include the ascent of the germ theory of disease, programs of public health and personal hygiene, better medical technology, higher incomes, better diets, more education, and the emergence of health insurance.

Explanations of Increases in Life Expectancy

Numerous important medical developments contributed to improving health. The research of Pasteur and Koch was particularly influential in leading to acceptance of the germ theory in the late 1800s. Prior to their work, many diseases were thought to have arisen from miasmas or vapors created by rotting vegetation. Thus, swamps were accurately viewed as unhealthy, but not because they were home to mosquitoes and malaria. The germ theory gave public health measures a sound scientific basis, and shortly thereafter cities began cost-effective measures to remove garbage, purify water supplies, and process sewage. The notion that “cleanliness is next to Godliness” also emerged in the home, where bathing and the washing of clothes, dishes, and floors became routine.

The discovery of Salvarsan in 1910 marked the first use of an antibiotic (against syphilis), that is, a drug effective in altering the course of a disease. This was an important medical event, but broad-spectrum antibiotics were not available until the middle of the century. The most famous of these early drugs was penicillin, which was not manufactured in large quantities until the 1940s. Much of the gain in life expectancy was attained before chemotherapy and a host of other medical technologies were widely available. A cornerstone of improving health from the late 1800s to the middle of the twentieth century was therefore the prevention of disease by reducing exposure to pathogens. Also important were improvements in immune systems created by better diets and by vaccination against diseases such as smallpox and diphtheria.

Heights

In the past quarter century, historians have increasingly used average heights to assess the health aspects of the standard of living. Average height is a good proxy for the nutritional status of a population because height at a particular age reflects an individual’s history of net nutrition, or diet minus the claims on the diet made by work (or physical activity) and disease. The growth of poorly nourished children may cease, and repeated bouts of biological stress — whether from food deprivation, hard work, or disease — often lead to stunting, or a reduction in adult height. The average heights of children and of adults in countries around the world are highly correlated with life expectancy at birth and with the log of per capita GDP in the country where they live.

This interpretation of average height has led to its use in studying the health of slaves, health inequality, living standards during industrialization, and trends in mortality. The first important results in the “new anthropometric history” dealt with the nutrition and health of American slaves, as determined from stature recorded for identification purposes on the manifests required in the coastwise slave trade. The subject of slave health has been a contentious issue among historians, in part because vital statistics and nutrition information were never systematically collected for slaves (or for the vast majority of the American population in the mid-nineteenth century, for that matter). Yet the height data showed that slave children were astonishingly small and malnourished, while working slaves were remarkably well fed. Slaves grew rapidly as adolescents and were reasonably well off in the nutritional aspects of health.

Time Trends in Average Height

Table 3 shows the time pattern in the height of native-born American men, obtained for historical periods from military muster rolls and, for men and women in recent decades, from the National Health and Nutrition Examination Surveys. The historical trend is notable for the tall stature of the colonial period, the mid-nineteenth-century decline, and the surge in heights of the past century. Comparisons with average heights from military organizations in Europe show that Americans were taller by two to three inches. Behind this achievement were a relatively good diet, little exposure to epidemic disease, and relative equality in the distribution of wealth. Americans could choose their foods from the best of European and Western Hemisphere plants and animals, and this dietary diversity, combined with favorable weather, meant that Americans never had to contend with harvest failures. Thus, even the poor were reasonably well fed in colonial America.

Table 3: Average Height of Native-Born American Men and Women by Year of Birth

Year Men (cm) Women (cm) Men (inches) Women (inches)
1710 171.5 —- 67.5 —-
1720 171.8 —- 67.6 —-
1730 172.1 —- 67.8 —-
1740 172.1 —- 67.8 —-
1750 172.2 —- 67.8 —-
1760 172.3 —- 67.8 —-
1770 172.8 —- 68.0 —-
1780 173.2 —- 68.2 —-
1790 172.9 —- 68.1 —-
1800 172.9 —- 68.1 —-
1810 173.0 —- 68.1 —-
1820 172.9 —- 68.1 —-
1830 173.5 —- 68.3 —-
1840 172.2 —- 67.8 —-
1850 171.1 —- 67.4 —-
1860 170.6 —- 67.2 —-
1870 171.2 —- 67.4 —-
1880 169.5 —- 66.7 —-
1890 169.1 —- 66.6 —-
1900 170.0 —- 66.9 —-
1910 172.1 —- 67.8 —-
1920 173.1 —- 68.1 —-
1930 175.8 162.6 69.2 64.0
1940 176.7 163.1 69.6 64.2
1950 177.3 163.1 69.8 64.2
1960 177.9 164.2 70.0 64.6
1970 177.4 163.6 69.8 64.4

Note: Women’s heights are available only for cohorts born in 1930 and later.

Source: Steckel (2002) and sources therein.

Explaining Height Cycles

Loss of stature began in the second quarter of the nineteenth century, when the transportation revolution of canals, steamboats and railways brought people into greater contact with diseases. The rise of public schools meant that children were newly exposed to major diseases such as whooping cough, diphtheria, and scarlet fever. Food prices also rose during the 1830s, and growing inequality in the distribution of income or wealth accompanied industrialization. Business depressions, which were most hazardous for the health of those who were already poor, also emerged with industrialization. The Civil War of the 1860s and its troop movements further spread disease and disrupted food production and distribution. A large volume of immigration brought new varieties of disease to the United States at a time when urbanization was bringing a growing proportion of the population into closer contact with contagious diseases. Estimates of life expectancy among adults at ages 20, 30 and 50, assembled from family histories, also declined in the middle of the nineteenth century.

Rapid Increases in Heights in the First Half of the Twentieth Century

In the twentieth century, heights grew most rapidly for those born between 1910 and 1950, an era when public health and personal hygiene measures took vigorous hold, incomes rose rapidly, and congestion in housing declined. The latter part of the era also witnessed a larger share of income or wealth going to the lower portion of the distribution, implying that the incomes of the less well-off were rising relatively rapidly. Note that most of the rise in heights occurred before modern antibiotics were available, which means that disease prevention, rather than the ability to alter a disease’s course after onset, was the most important basis of improving health. The growing control that humans have exercised over their environment, particularly the increased food supply and reduced exposure to disease, may be leading to biological (but not genetic) evolution of humans, with more durable vital organ systems, larger body size, and later onset of chronic diseases.

Recent Stagnation

Between the middle of the twentieth century and the present, however, the average heights of American men have stagnated, increasing by only a small fraction of an inch over the past half century. Table 3 refers to the native-born, so recent increases in immigration cannot account for the stagnation. In the absence of other information, one might be tempted to suppose that environmental conditions for growth are so good that most Americans have simply reached their genetic potential for growth. Yet heights and life expectancy have continued to grow in Europe, which supplied the genetic stock from which most Americans descend. By the 1970s several American health indicators had fallen behind those in Norway, Sweden, the Netherlands, and Denmark, and while American heights were essentially flat after the 1970s, heights continued to grow significantly in Europe. Dutch men are now the tallest, averaging six feet, about two inches more than American men. Lagging heights raise questions about the adequacy of health care and life-style choices in America. It is doubtful that a lack of resources committed to health care is the problem, because America spends far more than the Netherlands. Greater inequality and unequal access to health care could be important factors in the difference. But access to health care alone, whether limited by low income or lack of insurance coverage, may not be the only issue — health insurance coverage must also be used regularly and wisely. In this regard, Dutch mothers are known for regular pre- and post-natal checkups, which are important for early childhood health.

Note that significant differences in health and the quality of life follow from these height patterns. The comparisons are not part of an odd contest that emphasizes height, nor is big per se assumed to be beautiful. Instead, we know that on average, stunted growth has functional implications for longevity, cognitive development, and work capacity. Children who fail to grow adequately are often sick, suffer learning impairments and have a lower quality of life. Growth failure in childhood has a long reach into adulthood because individuals whose growth has been stunted are at greater risk of death from heart disease, diabetes, and some types of cancer. Therefore it is important to know why Americans are falling behind.

International Comparisons

Per capita GDP

Table 4 places American economic performance in perspective relative to other countries. In 1820 the United States was fifth in world rankings, falling roughly thirty percent below the leaders (United Kingdom and the Netherlands), but still two-to-three times better off than the poorest sections of the globe. It is notable that in 1820 the richest country (the Netherlands at $1,821) was approximately 4.4 times better off than the poorest (Africa at $418) but by 1950 the ratio of richest-to-poorest had widened to 21.8 ($9,561 in the United States versus $439 in China), which is roughly the level it is today (in 1998, it was $27,331 in the United States versus $1,368 in Africa). These calculations understate the growing disparity in the material standard of living because several African countries today fall significantly below the average, whereas it is unlikely that they did so in 1820 because GDP for the continent as a whole was close to the level of subsistence.

Table 4: GDP per Capita by Country and Year (1990 International $)

Country 1820 1870 1913 1950 1973 1998 Ratio 1998 to 1820
Austria 1,218 1,863 3,465 3,706 11,235 18,905 15.5
Belgium 1,319 2,697 4,220 5,462 12,170 19,442 14.7
Denmark 1,274 2,003 3,912 6,946 13,945 22,123 17.4
Finland 781 1,140 2,111 4,253 11,085 18,324 23.5
France 1,230 1,876 3,485 5,270 13,123 19,558 15.9
Germany 1,058 1,821 3,648 3,881 11,966 17,799 16.8
Italy 1,117 1,499 2,564 3,502 10,643 17,759 15.9
Netherlands 1,821 2,753 4,049 5,996 13,082 20,224 11.1
Norway 1,104 1,432 2,501 5,463 11,246 23,660 21.4
Sweden 1,198 1,664 3,096 6,738 13,493 18,685 15.6
Switzerland 1,280 2,202 4,266 9,064 18,204 21,367 16.7
United Kingdom 1,707 3,191 4,921 6,907 12,022 18,714 11.0
Portugal 963 997 1,244 2,069 7,343 12,929 13.4
Spain 1,063 1,376 2,255 2,397 8,739 14,227 13.4
United States 1,257 2,445 5,301 9,561 16,689 27,331 21.7
Mexico 759 674 1,732 2,365 4,845 6,655 8.8
Japan 669 737 1,387 1,926 11,439 20,413 30.5
China 600 530 552 439 839 3,117 5.2
India 533 533 673 619 853 1,746 3.3
Africa 418 444 585 852 1,365 1,368 3.3
World 667 867 1,510 2,114 4,104 5,709 8.6
Ratio of richest to poorest 4.4 7.2 8.9 20.6 21.7 20.0

Source: Maddison (2001), Table B-21.

It is clear that the poorer countries are better off today than they were in 1820 (a 3.3-fold increase in both Africa and India). But the countries that are now rich grew at a much faster rate. The last column of Table 4 shows that Japan realized the most spectacular gain, climbing from approximately the world average in 1820 to the fifth richest today, with a more than thirty-fold increase in real per capita GDP. All countries that are rich today experienced rapid increases in their material standard of living, realizing more than ten-fold increases since 1820. The underlying reasons for this diversity of economic success are a central question in the field of economic history.

Life Expectancy

Table 5 shows that disparities in life expectancy have been much smaller than those in per capita GDP. In 1820 all countries were bunched in the range of 21 to 41 years, with Germany at the top and India at the bottom, giving a ratio of less than 2 to 1. It is doubtful that any country or region has had a life expectancy below 20 years for long periods of time, because death rates would have exceeded any plausible upper limit for birth rates, leading to population implosion. The twentieth century witnessed a compression in life expectancies across countries, with the ratio of levels in 1999 being 1.56 (81 in Japan versus 52 in Africa). Japan has also been a spectacular performer in health, increasing life expectancy from 34 years in 1820 to 81 years in 1999. Among poor, unhealthy countries, the health aspects of the standard of living have improved more rapidly than the material standard of living relative to the world average. Because many public health measures are cheap and effective, it has been easier to extend life than to promote material prosperity, which has numerous complicated causes.

Table 5: Life Expectancy at Birth by Country and Year

Country 1820 1900 1950 1999
France 37 47 65 78
Germany 41 47 67 77
Italy 30 43 66 78
Netherlands 32 52 72 78
Spain 28 35 62 78
Sweden 39 56 70 79
United Kingdom 40 50 69 77
United States 39 47 68 77
Japan 34 44 61 81
Russia 28 32 65 67
Brazil 27 36 45 67
Mexico n.a. 33 50 72
China n.a. 24 41 71
India 21 24 32 60
Africa 23 24 38 52
World 26 31 49 66

n.a.: not available.

Source: Maddison (2001), Table 1-5a.

Height Comparisons

Figure 1 compares stature in the United States and the United Kingdom. Americans were very tall by global standards in the early nineteenth century as a result of their rich and varied diets, low population density, and relative equality of wealth. Unlike other countries that have been studied (France, the Netherlands, Sweden, Germany, Japan and Australia), both the U.S. and the U.K. suffered significant height declines during industrialization (as defined primarily by the achievement of modern economic growth) in the nineteenth century. (Note, however, that the amount and timing of the height decline in the U.K. has been the subject of a lively debate in the Economic History Review involving Roderick Floud, Kenneth Wachter and John Komlos; only the Floud-Wachter figures are given here.)

Figure 1: Stature in the United States and the United Kingdom. Source: Steckel (2002, Figure 12) and Floud, Wachter and Gregory (1990, Table 4.8).

One may speculate that the timing of the declines shown in Figure 1 is more coincidental than emblematic of similar causal factors at work in the two countries. While it is possible that growing trade and commerce spread disease, as in the United States, it is more likely that a major culprit in the U.K. was rapid urbanization and the associated increase in exposure to disease. This conclusion is reached by noting that urban-born men were substantially shorter than the rural-born, and that between the periods 1800-1830 and 1830-1870 the share of the British population living in urban areas leaped from 38.7 to 54.1 percent.

References

Easterlin, Richard A. “Regional Income Trends, 1840-1950.” In The Reinterpretation of American Economic History, edited by Robert William Fogel and Stanley L. Engerman. New York: Harper and Row, 1971.

Engerman, Stanley L. “The Standard of Living Debate in International Perspective: Measures and Indicators.” In Health and Welfare during Industrialization, edited by Richard H. Steckel and Roderick Floud. Chicago: University of Chicago Press, 1997.

Floud, Roderick, Kenneth W. Wachter and Annabel S. Gregory. Height, Health, and History: Nutritional Status in the United Kingdom, 1750-1980. Cambridge: Cambridge University Press, 1990.

Haines, Michael. “Vital Statistics.” In Historical Statistics of the United States: Millennial Edition, edited by Susan Carter, Scott Gartner, Michael Haines, Alan Olmstead, Richard Sutch, and Gavin Wright. New York: Cambridge University Press, forthcoming, 2002.

Komlos, John. “Shrinking in a Growing Economy? The Mystery of Physical Stature during the Industrial Revolution.” Journal of Economic History 58, no. 3 (1998): 779-802.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Meeker, Edward. “Medicine and Public Health.” In Encyclopedia of American Economic History, edited by Glenn Porter. New York: Scribner, 1980.

Pope, Clayne L. “Adult Mortality in America before 1900: A View from Family Histories.” In Strategic Factors in Nineteenth-Century American Economic History: A Volume to Honor Robert W. Fogel, edited by Claudia Goldin and Hugh Rockoff. Chicago: University of Chicago Press, 1992.

Porter, Roy, editor. The Cambridge Illustrated History of Medicine. Cambridge: Cambridge University Press, 1996.

Steckel, Richard H. “Health, Nutrition and Physical Well-Being.” In Historical Statistics of the United States: Millennial Edition, edited by Susan Carter, Scott Gartner, Michael Haines, Alan Olmstead, Richard Sutch, and Gavin Wright. New York: Cambridge University Press, forthcoming, 2002.

Steckel, Richard H. “Industrialization and Health in Historical Perspective.” In Poverty, Inequality and Health, edited by David Leon and Gill Walt. Oxford: Oxford University Press, 2000.

Steckel, Richard H. “Strategic Ideas in the Rise of the New Anthropometric History and Their Implications for Interdisciplinary Research.” Journal of Economic History 58, no. 3 (1998): 803-21.

Steckel, Richard H. “Stature and the Standard of Living.” Journal of Economic Literature 33, no. 4 (1995): 1903-1940.

Steckel, Richard H. “A Peculiar Population: The Nutrition, Health, and Mortality of American Slaves from Childhood to Maturity.” Journal of Economic History 46, no. 3 (1986): 721-41.

Steckel, Richard H. and Roderick Floud, editors. Health and Welfare during Industrialization. Chicago: University of Chicago Press, 1997.

Citation: Steckel, Richard. “A History of the Standard of Living in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. July 21, 2002. URL http://eh.net/encyclopedia/a-history-of-the-standard-of-living-in-the-united-states/

Economic History of Retirement in the United States

Joanna Short, Augustana College

One of the most striking changes in the American labor market over the twentieth century has been the virtual disappearance of older men from the labor force. Moen (1987) and Costa (1998) estimate that the labor force participation rate of men age 65 and older declined from 78 percent in 1880 to less than 20 percent in 1990 (see Table 1). In recent decades, the labor force participation rate of somewhat younger men (age 55-64) has been declining as well. When coupled with the increase in life expectancy over this period, it is clear that men today can expect to spend a much larger proportion of their lives in retirement, relative to men living a century ago.

Table 1

Labor Force Participation Rates of Men Age 65 and Over

Year Labor Force Participation Rate (percent)
1850 76.6
1860 76.0
1870 —–
1880 78.0
1890 73.8
1900 65.4
1910 58.1
1920 60.1
1930 58.0
1940 43.5
1950 47.0
1960 40.8
1970 35.2
1980 24.7
1990 18.4
2000 17.5

Sources: Moen (1987), Costa (1998), Bureau of Labor Statistics

Notes: Prior to 1940, ‘gainful employment’ was the standard the U.S. Census used to determine whether or not an individual was working. This standard is similar to the ‘labor force participation’ standard used since 1940. With the exception of the figure for 2000, the data in the table are based on the gainful employment standard.

How can we explain the rise of retirement? Certainly, the development of government programs like Social Security has made retirement more feasible for many people. However, about half of the total decline in the labor force participation of older men from 1880 to 1990 occurred before the first Social Security payments were made in 1940. Therefore, factors other than the Social Security program have influenced the rise of retirement.

In addition to the increase in the prevalence of retirement over the twentieth century, the nature of retirement appears to have changed. In the late nineteenth century, many retirements involved a few years of dependence on children at the end of life. Today, retirement is typically an extended period of self-financed independence and leisure. This article documents trends in the labor force participation of older men, discusses the decision to retire, and examines the causes of the rise of retirement including the role of pensions and government programs.

Trends in U.S. Retirement Behavior

Trends by Gender

Research on the history of retirement focuses on the behavior of men because retirement, in the sense of leaving the labor force permanently in old age after a long career, is a relatively new phenomenon among women. Goldin (1990) concludes that “even as late as 1940, most young working women exited the labor force on marriage, and only a small minority would return.” The employment of married women accelerated after World War II, and recent evidence suggests that the retirement behavior of men and women is now very similar. Gendell (1998) finds that the average age at exit from the labor force in the U.S. was virtually identical for men and women from 1965 to 1995.

Trends by Race and Region

Among older men at the beginning of the twentieth century, labor force participation rates varied greatly by race, region of residence, and occupation. In the early part of the century, older black men were much more likely to be working than older white men. In 1900, for example, 84.1 percent of black men age 65 and over and 64.4 percent of white men were in the labor force. The racial retirement gap remained at about twenty percentage points until 1920, then narrowed dramatically by 1950. After 1950, the racial retirement gap reversed. In recent decades older black men have been slightly less likely to be in the labor force than older white men (see Table 2).

Table 2

Labor Force Participation Rates of Men Age 65 and Over, by Race

Labor Force Participation Rate (percent)
Year White Black
1880 76.7 87.3
1890 —- —-
1900 64.4 84.1
1910 58.5 86.0
1920 57.0 76.8
1930 —- —-
1940 44.1 54.6
1950 48.7 51.3
1960 40.3 37.3
1970 36.6 33.8
1980 27.1 23.7
1990 18.6 15.7
2000 17.8 16.6

Sources: Costa (1998), Bureau of Labor Statistics

Notes: Census data are unavailable for the years 1890 and 1930.

With the exception of the figures for 2000, participation rates are based on the gainful employment standard.

Similarly, the labor force participation rate of men age 65 and over living in the South was higher than that of men living in the North in the early twentieth century. In 1900, for example, the labor force participation rate for older Southerners was nearly seventeen percentage points higher than for Northerners. The regional retirement gap began to narrow between 1910 and 1920, and narrowed substantially by 1940 (see Table 3).

Table 3

Labor Force Participation Rates of Men Age 65 and Over, by Region

Labor Force Participation Rate (percent)
Year North South
1880 73.7 85.2
1890 —- —-
1900 66.0 82.9
1910 56.6 72.8
1920 58.8 69.9
1930 —- —-
1940 42.8 49.4
1950 43.2 42.9

Source: Calculated from Ruggles and Sobek, Integrated Public Use Microdata Series for 1880, 1900, 1910, 1920, 1940, and 1950, Version 2.0, 1997

Note: North includes the New England, Middle Atlantic, and North Central regions

South includes the South Atlantic and South Central regions

Differences in retirement behavior by race and region of residence are related. One reason Southerners appear less likely to retire in the late nineteenth and early twentieth centuries is that a relatively large proportion of Southerners were black. In 1900, 90 percent of black households were located in the South (see Maloney on African Americans in this Encyclopedia). In the early part of the century, black men were effectively excluded from skilled occupations. The vast majority worked for low pay as tenant farmers or manual laborers. Even controlling for race, southern per capita income lagged behind the rest of the nation well into the twentieth century. Easterlin (1971) estimates that in 1880, per capita income in the South was only half that in the Midwest, and per capita income remained less than 70 percent of the Midwestern level until 1950. Lower levels of income among blacks, and in the South as a whole during this period, may have made it more difficult for these men to accumulate resources sufficient to rely on in retirement.

Trends by Occupation

Older men living on farms have long been more likely to be working than men living in nonfarm households. In 1900, for example, 80.6 percent of farm residents and 62.7 percent of nonfarm residents over the age of 65 were in the labor force. Durand (1948), Graebner (1980), and others have suggested that older farmers could remain in the labor force longer than urban workers because of help from children or hired labor. Urban workers, on the other hand, were frequently forced to retire once they became physically unable to keep up with the pace of industry.

Despite the large difference in the labor force participation rates of farm and nonfarm residents, the actual gap in the retirement rates of farmers and nonfarmers was not that great. Confusion on this issue stems from the fact that the labor force participation rate of farm residents does not provide a good representation of the retirement behavior of farmers. Moen (1994) and Costa (1995a) point out that farmers frequently moved off the farm in retirement. When the comparison is made by occupation, farmers have labor force participation rates only slightly higher than laborers or skilled workers. Lee (2002) finds that excluding the period 1900-1910 (a period of exceptional growth in the value of farm property), the labor force participation rate of older farmers was on average 9.3 percentage points higher than that of nonfarmers from 1880-1940.

Trends in Living Arrangements

In addition to the overall rise of retirement, and the closing of differences in retirement behavior by race and region, over the twentieth century retired men became much more independent. In 1880, nearly half of retired men lived with children or other relatives. Today, fewer than 5 percent of retired men live with relatives. Costa (1998) finds that between 1910 and 1940, men who were older, had a change in marital status (typically from married to widowed), or had low income were much more likely to live with family members as a dependent. Rising income appears to explain most of the movement away from coresidence, suggesting that the elderly have always preferred to live by themselves, but they have only recently had the means to do so.

Explaining Trends in the Retirement Decision

One way to understand the rise of retirement is to consider the individual retirement decision. In order to retire permanently from the labor force, one must have enough resources to live on to the end of the expected life span. In retirement, one can live on pension income, accumulated savings, and anticipated contributions from family and friends. Without at least the minimum amount of retirement income necessary to survive, the decision-maker has little choice but to remain in the labor force. If the resource constraint is met, individuals choose to retire once the net benefits of retirement (e.g., leisure time) exceed the net benefits of working (labor income less the costs associated with working). From this model, we can predict that anything that increases the costs associated with working, such as advancing age, an illness, or a disability, will increase the probability of retirement. Similarly, an increase in pension income increases the probability of retirement in two ways. First, an increase in pension income makes it more likely the resource constraint will be satisfied. In addition, higher pension income makes it possible to enjoy more leisure in retirement, thereby increasing the net benefits of retirement.
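This decision rule is simple enough to state directly. The following is a minimal sketch in Python; the function name and dollar amounts are illustrative assumptions, not estimates drawn from the literature discussed here.

# A minimal sketch of the individual retirement decision described above.
def chooses_retirement(resources, minimum_needed,
                       benefits_of_retirement, labor_income, costs_of_working):
    # (1) Resource constraint: without enough to live on, one must keep working.
    if resources < minimum_needed:
        return False
    # (2) Retire once the net benefits of retirement exceed those of working.
    net_benefit_of_working = labor_income - costs_of_working
    return benefits_of_retirement > net_benefit_of_working

# A pension increase works through both channels: it relaxes the resource
# constraint and raises the benefits enjoyed in retirement.
print(chooses_retirement(8_000, 10_000, 5_000, 6_000, 2_000))   # False: constraint binds
print(chooses_retirement(12_000, 10_000, 5_000, 6_000, 2_000))  # True: retirement preferred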

Health Status

Empirically, age, disability, and pension income have all been shown to increase the probability that an individual is retired. In the context of the individual model, we can use this observation to explain the overall rise of retirement. Disability, for example, has been shown to increase the probability of retirement, both today and especially in the past. However, it is unlikely that the rise of retirement was caused by increases in disability rates — advances in health have made the overall population much healthier. Costa (1998), for example, shows that chronic conditions were much more prevalent for the elderly born in the nineteenth century than for men born in the twentieth century.

The Decline of Agriculture

Older farmers are somewhat more likely to be in the labor force than nonfarmers. Furthermore, the proportion of people employed in agriculture has declined steadily, from 51 percent of the work force in 1880, to 17 percent in 1940, to about 2 percent today (Lebergott, 1964). Therefore, as argued by Durand (1948), the decline in agriculture could explain the rise in retirement. Lee (2002) finds, though, that the decline of agriculture only explains about 20 percent of the total rise of retirement from 1880 to 1940. Since most of the shift away from agricultural work occurred before 1940, the decline of agriculture explains even less of the retirement trend since 1940. Thus, the occupational shift away from farming explains part of the rise of retirement. However, the underlying trend has been a long-term increase in the probability of retirement within all occupations.

Rising Income: The Most Likely Explanation

The most likely explanation for the rise of retirement is the overall increase in income, both from labor market earnings and from pensions. Costa (1995b) has shown that the pension income received by Union Army veterans in the early twentieth century had a strong effect on the probability that the veteran was retired. Over the period from 1890 to 1990, economic growth led to nearly an eightfold increase in real gross domestic product (GDP) per capita. In 1890, GDP per capita was $3430 (in 1996 dollars), which is comparable to the levels of production in Morocco or Jamaica today. In 1990, real GDP per capita was $26,889. On average, Americans today enjoy a standard of living commensurate with eight times the income of Americans living a century ago. More income has made it possible to save for an extended retirement.
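As a back-of-the-envelope check on the "nearly eightfold" figure, dividing the two estimates quoted above gives

\[ \frac{\$26{,}889}{\$3{,}430} \approx 7.8. \]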

Rising income also explains the closing of differences in retirement behavior by race and region by the 1950s. Early in the century blacks and Southerners earned much lower income than Northern whites, but these groups made substantial gains in earnings by 1950. In the second half of the twentieth century, the increasing availability of pension income has also made retirement more attractive. Expansions in Social Security benefits, Medicare, and growth in employer-provided pensions all serve to increase the income available to people in retirement.

Costa (1998) has found that income is now less important to the decision to retire than it once was. In the past, only the rich could afford to retire. Income is no longer a binding constraint. One reason is that Social Security provides a safety net for those who are unable or unwilling to save for retirement. Another reason is that leisure has become much cheaper over the last century. Television, for example, allows people to enjoy concerts and sporting events at a very low price. Golf courses and swimming pools, once available only to the rich, are now publicly provided. Meanwhile, advances in health have allowed people to enjoy leisure and travel well into old age. All of these factors have made retirement so much more attractive that people of all income levels now choose to leave the labor force in old age.

Financing Retirement

Rising income also provided the young with a new strategy for planning for old age and retirement. Ransom and Sutch (1986a,b) and Sundstrom and David (1988) hypothesize that in the nineteenth century men typically used the promise of a bequest as an incentive for children to help their parents in old age. As more opportunities for work off the farm became available, children left home and defaulted on the implicit promise to care for retired parents. Children became an unreliable source of old age support, so parents stopped relying on children — had fewer babies — and began saving (in bank accounts) for retirement.

To support the “babies-to-bank accounts” theory, Sundstrom and David look for evidence of an inheritance-for-old age support bargain between parents and children. They find that many wills, particularly in colonial New England and some ethnic communities in the Midwest, included detailed clauses specifying the care of the surviving parent. When an elderly parent transferred property directly to a child, the contracts were particularly specific, often specifying the amount of food and firewood with which the parent was to be supplied. There is also some evidence that people viewed children and savings as substitute strategies for retirement planning. Haines (1985) uses budget studies from northern industrial workers in 1890 and finds a negative relationship between the number of children and the savings rate. Short (2001) conducts similar studies for southern men that indicate the two strategies were not substitutes until at least 1920. This suggests that the transition from babies to bank accounts occurred later in the South, only as income began to approach northern levels.

Pensions and Government Retirement Programs

Military and Municipal Pensions (1781-1934)

In addition to the rise in labor market income, the availability of pension income greatly increased with the development of Social Security and the expansion of private (employer-provided) pensions. In the U.S., public (government-provided) pensions originated with the military pensions that have been available to disabled veterans and widows since the colonial era. Military pensions became available to a large proportion of Americans after the Civil War, when the federal government provided pensions to Union Army widows and veterans disabled in the war. The Union Army pension program expanded greatly as a result of the Pension Act of 1890. As a result of this law, pensions were available for all veterans age 65 and over who had served more than 90 days and were honorably discharged, regardless of current employment status. In 1900, about 20 percent of all white men age 55 and over received a Union Army pension. The Union Army pension was generous even by today’s standards. Costa (1995b) finds that the average pension replaced about 30 percent of the income of a laborer. At its peak of nearly one million pensioners in 1902, the program consumed about 30 percent of the federal budget.

Each of the formerly Confederate states also provided pensions to its Confederate veterans. Most southern states began paying pensions to veterans disabled in the war and to war widows around 1880. These pensions were gradually liberalized to include most poor or disabled veterans and their widows. Confederate veteran pensions were much less generous than Union Army pensions. By 1910, the average Confederate pension was only about one-third the amount awarded to the average Union veteran.

By the early twentieth century, state and municipal governments also began paying pensions to their employees. Most major cities provided pensions for their firemen and police officers. By 1916, 33 states had passed retirement provisions for teachers. In addition, some states provided limited pensions to poor elderly residents. By 1934, 28 states had established these pension programs (See Craig in this Encyclopedia for more on public pensions).

Private Pensions (1875-1934)

As military and civil service pensions became available to more men, private firms began offering pensions to their employees. The American Express Company developed the first formal pension in 1875. Railroads, among the largest employers in the country, also began providing pensions in the late nineteenth century. Williamson (1992) finds that early pension plans, like that of the Pennsylvania Railroad, were funded entirely by the employer. Thirty years of service were required to qualify for a pension, and retirement was mandatory at age 70. Because of the lengthy service requirement and mandatory retirement provision, firms viewed pensions as a way to reduce labor turnover and as a more humane way to remove older, less productive employees. In addition, the 1926 Revenue Act excluded from current taxation all income earned in pension trusts. This tax advantage provided additional incentive for firms to provide pensions. By 1930, a majority of large firms had adopted pension plans, covering about 20 percent of all industrial workers.

In the early twentieth century, labor unions also provided pensions to their members. By 1928, thirteen unions paid pension benefits. Most of these were craft unions, whose members were typically employed by smaller firms that did not provide pensions.

Most private pensions survived the Great Depression. Exceptions were those plans that were funded under a ‘pay as you go’ system — where benefits were paid out of current earnings, rather than from built-up reserves. Many union pensions were financed under this system, and hence failed in the 1930s. Thanks to strong political allies, the struggling railroad pensions were taken over by the federal government in 1937.

Social Security (1935-1991)

The Social Security system was designed in 1935 to extend pension benefits to those not covered by a private pension plan. The Social Security Act consisted of two programs, Old Age Assistance (OAA) and Old Age Insurance (OAI). The OAA program provided federal matching funds to subsidize state old age pension programs. The availability of federal funds quickly motivated many states to develop a pension program or to increase benefits. By 1950, 22 percent of the population age 65 and over received OAA benefits. The OAA program peaked at this point, though, as the newly liberalized OAI program began to dominate Social Security. The OAI program is administered by the federal government, and financed by payroll taxes. Retirees (and later, survivors, dependents of retirees, and the disabled) who have paid into the system are eligible to receive benefits. The program remained small until 1950, when coverage was extended to include farm and domestic workers, and average benefits were increased by 77 percent. In 1965, the Social Security Act was amended to include Medicare, which provides health insurance to the elderly. The Social Security program continued to expand in the late 1960s and early 1970s — benefits increased 13 percent in 1968, another 15 percent in 1969, and 20 percent in 1972.
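Because these increases compound, the cumulative rise in benefits over 1968-1972 was larger than the sum of the three percentages; a back-of-the-envelope calculation (the product below is a check, not a figure from the sources) gives

\[ 1.13 \times 1.15 \times 1.20 \approx 1.56, \]

or roughly a 56 percent increase in under five years.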

In the late 1970s and early 1980s Congress was finally forced to slow the growth of Social Security benefits, as the struggling economy introduced the possibility that the program would not be able to pay beneficiaries. In 1977, the formula for determining benefits was adjusted downward. Reforms in 1983 included the delay of a cost-of-living adjustment, the taxation of up to half of benefits, and payroll tax increases.

Today, Social Security benefits are the main source of retirement income for most retirees. Poterba, Venti, and Wise (1994) find that Social Security wealth was three times as large as all the other financial assets of those age 65-69 in 1991. The role of Social Security benefits in the budgets of elderly households varies greatly. In elderly households with less than $10,000 in income in 1990, 75 percent of income came from Social Security. Higher income households gain larger shares of income from earnings, asset income, and private pensions. In households with $30,000 to $50,000 in income, less than 30 percent was derived from Social Security.

The Growth of Private Pensions (1935-2000)

Even in the shadow of the Social Security system, employer-provided pensions continued to grow. The Wage and Salary Act of 1942 froze wages in an attempt to contain wartime inflation. In order to attract employees in a tight labor market, firms increasingly offered generous pensions. Providing pensions had the additional benefit that the firm’s contributions were tax deductible. Therefore, pensions provided firms with a convenient tax shelter from high wartime tax rates. From 1940 to 1960, the number of people covered by private pensions increased from 3.7 million to 23 million, or to nearly 30 percent of the labor force.

In the 1960s and 1970s, the federal government acted to regulate private pensions and to extend comparable tax incentives to workers without access to them. Since 1962, the self-employed have been able to establish 'Keogh plans,' tax-deferred accounts for retirement savings. In 1974, the Employee Retirement Income Security Act (ERISA) regulated private pensions to ensure their solvency. Under this law, firms are required to meet funding requirements and to insure against unexpected events that could cause insolvency. To further level the playing field, ERISA gave those not covered by a private pension the option of saving in a tax-deductible Individual Retirement Account (IRA). The option of saving in a tax-advantaged IRA was extended to everyone in 1981.

Over the last thirty years, the type of pension plan that firms offer employees has shifted from 'defined benefit' to 'defined contribution' plans. Defined benefit plans, like Social Security, specify the amount of benefits the retiree will receive. Defined contribution plans, on the other hand, specify only how much the employer will contribute to the plan. Actual benefits then depend on the performance of the pension investments. The switch from defined benefit to defined contribution plans therefore shifts the risk of poor investment performance from the employer to the employee. The employee stands to benefit, though, because the high long-run average returns on stock market investments may lead to a larger retirement nest egg. Recently, 401(k) plans have become a popular type of pension plan, particularly in the service industries. These plans typically involve voluntary employee contributions that are tax deductible to the employee, employer matching of these contributions, and more choice about how the pension is invested.
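The contrast between the two plan types can be sketched in a few lines of Python. The parameters below (final salary, accrual rate, contribution, rate of return) are illustrative assumptions, not figures from the text.

# Defined benefit: the employer promises a formula-based annual benefit.
def defined_benefit(final_salary, years_of_service, accrual_rate=0.015):
    return accrual_rate * years_of_service * final_salary

# Defined contribution: only the contribution is fixed; the eventual
# nest egg depends on investment performance.
def defined_contribution(annual_contribution, years, annual_return=0.06):
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + annual_return)
    return balance

print(f"{defined_benefit(50_000, 30):,.0f}")      # 22,500 per year, guaranteed
print(f"{defined_contribution(3_000, 30):,.0f}")  # about 251,000, return-dependent

Raising the assumed return raises the defined-contribution nest egg while leaving the defined-benefit promise unchanged, which is exactly the risk shift described above.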

Summary and Conclusions

The retirement pattern we see today, typically involving decades of self-financed leisure, developed gradually over the last century. Economic historians have shown that rising labor market and pension income largely explain the dramatic rise of retirement. Rather than being pushed out of the labor force because of increasing obsolescence, older men have increasingly chosen to use their rising income to finance an earlier exit from the labor force. In addition to rising income, the decline of agriculture, advances in health, and the declining cost of leisure have contributed to the popularity of retirement. Rising income has also provided the young with a new strategy for planning for old age and retirement. Instead of being dependent on children in retirement, men today save for their own, more independent, retirement.

References

Achenbaum, W. Andrew. Social Security: Visions and Revisions. New York: Cambridge University Press, 1986.

Bureau of Labor Statistics, cpsaat3.pdf

Costa, Dora L. The Evolution of Retirement: An American Economic History, 1880-1990. Chicago: University of Chicago Press, 1998.

Costa, Dora L. “Agricultural Decline and the Secular Rise in Male Retirement Rates.” Explorations in Economic History 32, no. 4 (1995a): 540-552.

Costa, Dora L. “Pensions and Retirement: Evidence from Union Army Veterans.” Quarterly Journal of Economics 110, no. 2 (1995b): 297-319.

Durand, John D. The Labor Force in the United States 1890-1960. New York: Gordon and Breach Science Publishers, 1948.

Easterlin, Richard A. “Interregional Differences in per Capita Income, Population, and Total Income, 1840-1950.” In Trends in the American Economy in the Nineteenth Century: A Report of the National Bureau of Economic Research, Conference on Research in Income and Wealth. Princeton, NJ: Princeton University Press, 1960.

Easterlin, Richard A. “Regional Income Trends, 1840-1950.” In The Reinterpretation of American Economic History, edited by Robert W. Fogel and Stanley L. Engerman. New York: Harper & Row, 1971.

Gendell, Murray. “Trends in Retirement Age in Four Countries, 1965-1995.” Monthly Labor Review 121, no. 8 (1998): 20-30.

Glasson, William H. Federal Military Pensions in the United States. New York: Oxford University Press, 1918.

Glasson, William H. “The South’s Pension and Relief Provisions for the Soldiers of the Confederacy.” Publications of the North Carolina Historical Commission, Bulletin no. 23, Raleigh, 1918.

Goldin, Claudia. Understanding the Gender Gap: An Economic History of American Women. New York: Oxford University Press, 1990.

Graebner, William. A History of Retirement: The Meaning and Function of an American Institution, 1885-1978. New Haven: Yale University Press, 1980.

Haines, Michael R. “The Life Cycle, Savings, and Demographic Adaptation: Some Historical Evidence for the United States and Europe.” In Gender and the Life Course, edited by Alice S. Rossi, pp. 43-63. New York: Aldine Publishing Co., 1985.

Kingson, Eric R. and Edward D. Berkowitz. Social Security and Medicare: A Policy Primer. Westport, CT: Auburn House, 1993.

Lebergott, Stanley. Manpower in Economic Growth. New York: McGraw Hill, 1964.

Lee, Chulhee. “Sectoral Shift and the Labor-Force Participation of Older Males in the United States, 1880-1940.” Journal of Economic History 62, no. 2 (2002): 512-523.

Maloney, Thomas N. “African Americans in the Twentieth Century.” EH.Net Encyclopedia, edited by Robert Whaples, Jan 18, 2002. http://www.eh.net/encyclopedia/contents/maloney.african.american.php

Moen, Jon R. Essays on the Labor Force and Labor Force Participation Rates: The United States from 1860 through 1950. Ph.D. dissertation, University of Chicago, 1987.

Moen, Jon R. “Rural Nonfarm Households: Leaving the Farm and the Retirement of Older Men, 1860-1980.” Social Science History 18, no. 1 (1994): 55-75.

Ransom, Roger and Richard Sutch. “Babies or Bank Accounts, Two Strategies for a More Secure Old Age: The Case of Workingmen with Families in Maine, 1890.” Paper prepared for presentation at the Eleventh Annual Meeting of the Social Science History Association, St. Louis, 1986a.

Ransom, Roger L. and Richard Sutch. “Did Rising Out-Migration Cause Fertility to Decline in Antebellum New England? A Life-Cycle Perspective on Old-Age Security Motives, Child Default, and Farm-Family Fertility.” California Institute of Technology, Social Science Working Paper, no. 610, April 1986b.

Ruggles, Steven and Matthew Sobek, et al. Integrated Public Use Microdata Series: Version 2.0. Minneapolis: Historical Census Projects, University of Minnesota, 1997. http://www.ipums.umn.edu

Short, Joanna S. “The Retirement of the Rebels: Georgia Confederate Pensions and Retirement Behavior in the New South.” Ph.D. dissertation, Indiana University, 2001.

Sundstrom, William A. and Paul A. David. “Old-Age Security Motives, Labor Markets, and Farm Family Fertility in Antebellum America.” Explorations in Economic History 25, no. 2 (1988): 164-194.

Williamson, Samuel H. “United States and Canadian Pensions before 1930: A Historical Perspective.” In Trends in Pensions, U.S. Department of Labor, Vol. 2, 1992, pp. 34-45.

Williamson, Samuel H. The Development of Industrial Pensions in the United States during the Twentieth Century. World Bank, Policy Research Department, 1995.

Citation: Short, Joanna. “Economic History of Retirement in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. September 30, 2002. URL http://eh.net/encyclopedia/economic-history-of-retirement-in-the-united-states/

The Economic History of Norway

Ola Honningdal Grytten, Norwegian School of Economics and Business Administration

Overview

Norway, with its population of 4.6 million on the northern flank of Europe, is today one of the wealthiest nations in the world, measured both by GDP per capita and by capital stock. Norway has ranked among the top three countries on the United Nations Human Development Index for several years, in some years taking the very top spot. Huge stocks of natural resources, combined with a skilled labor force and the adoption of new technology, made Norway a prosperous country during the nineteenth and twentieth centuries.

Table 1 shows rates of growth in the Norwegian economy from 1830 to the present using inflation-adjusted gross domestic product (GDP). This article splits the economic history of Norway into two major phases — before and after the nation gained its independence in 1814.

Table 1
Phases of Growth in the Real Gross Domestic Product of Norway, 1830-2003

(annual growth rates as percentages)

Period GDP GDP per capita
1830-1843 1.91 0.86
1843-1875 2.68 1.59
1875-1914 2.02 1.21
1914-1945 2.28 1.55
1945-1973 4.73 3.81
1973-2003 3.28 2.79
1830-2003 2.83 2.00

Source: Grytten (2004b)
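The table reports compound annual rates, so modest-looking differences cumulate into large gaps over long spans. A minimal sketch of the compounding, using the 2.83 percent overall GDP growth rate from the table's last row:

# How Table 1's compound annual rates cumulate over the full 1830-2003 span.
years = 2003 - 1830   # 173 years
g = 0.0283            # average annual GDP growth, from Table 1
print(f"Implied GDP multiple over {years} years: {(1 + g) ** years:.0f}x")  # roughly 125x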

Before Independence

The Norwegian economy was traditionally based on local farming communities combined with other industries, chiefly fishing, hunting, and timber, along with a merchant fleet engaged in domestic and international trade. Due to topography and climatic conditions, the communities in the north and west were more dependent on fishing and foreign trade than the communities in the south and east, which relied mainly on agriculture. Prior to independence, agricultural output, fish catches, and wars were decisive for the swings in the economy. This is reflected in Figure 1, which reports a consumer price index for Norway from 1516 to the present.

The peaks in this figure mark the sixteenth-century Price Revolution (1530s to 1590s), the Thirty Years War (1618-1648), the Great Nordic War (1700-1721), the Napoleonic Wars (1800-1815), the only period of hyperinflation in Norway — World War I (1914-1918) — and the stagflation period, i.e. high rates of inflation combined with a slowdown in production, in the 1970s and early 1980s.

Figure 1
Consumer Price Index for Norway, 1516-2003 (1850 = 100).

Source: Grytten (2004a)

During the last decades of the eighteenth century the Norwegian economy boomed along with a first era of liberalism. Foreign trade in fish and timber had been important to the Norwegian economy for centuries, and the merchant fleet was now growing rapidly. Bergen, on the west coast, was the major city, with a Hanseatic office and one of the Nordic countries’ largest ports for domestic and foreign trade.

When Norway gained its independence from Denmark in 1814, after a close union lasting 417 years, it was a typically egalitarian country with a high degree of self-sufficiency in agriculture, fisheries and hunting. According to the population censuses of 1801 and 1815, more than ninety percent of the population of 0.9 million lived in rural areas, mostly on small farms.

After Independence (1814)

Figure 2 shows the annual development of GDP by expenditure (in fixed 2000 prices) from 1830 to 2003. The series reveals, with few exceptions, steady growth and few large fluctuations. Economic growth as a more or less continuous process, however, started in the 1840s, and the growth process slowed during the last three decades of the nineteenth century. The years 1914-1945 were more volatile than any other period under consideration, while growth was impressive and steady until the mid-1970s and slower thereafter.

Figure 2
Gross Domestic Product for Norway by Expenditure Category
(in 2000 Norwegian Kroner)

Source: Grytten (2004b)

Stagnation and Institution Building, 1814-1843

The newborn state lacked its own institutions, industrial entrepreneurs and domestic capital. However, due to its huge stocks of natural resources and its geographical closeness to the sea and to the United Kingdom, the new state, linked to Sweden in a loose royal union, seized its opportunities after some decades. By 1870 it had become a relatively wealthy nation. Measured in GDP per capita, Norway was well above the European average, in the middle of the West European countries, and in fact well above Sweden.

During the first decades after its independence from Denmark, the new state struggled with the international recession after the Napoleonic wars, deflationary monetary policy, and protectionism from the UK.

The Central Bank of Norway was founded in 1816, and a national currency, the spesidaler, pegged to silver, was introduced. The daler depreciated heavily during the troubled years of recession in the 1820s.

The Great Boom, 1843-1875

After the Norwegian spesidaler regained its par value against silver in 1842, Norway saw a period of significant economic growth up to the mid-1870s. Only a few other countries matched this impressive growth. The growth process was driven largely by high productivity growth in agriculture and the success of the foreign sector. The adoption of new structures and technology, along with the substitution from arable to livestock production, raised labor productivity in agriculture by about 150 percent between 1835 and 1910. Exports of timber, fish and, in particular, maritime services achieved high growth rates. In fact, Norway became a major power in shipping services during this period, accounting for about seven percent of the world merchant fleet in 1875. Norwegian sailing vessels carried international goods all over the world at low prices.

The success of the Norwegian foreign sector can be explained by a number of factors. Liberalization of world trade and high international demand secured a market for Norwegian goods and services. In addition, Norway had vast stocks of fish and timber, along with maritime skills. According to recent calculations, GDP per capita grew at an annual rate of 1.6 percent from 1843 to 1876, well above the European average. At the same time, Norwegian exports grew at an annual rate of 4.8 percent. The first modern large-scale manufacturing industry in Norway emerged in the 1840s, when textile plants and mechanized industry were established. A second wave of industrialization took place in the 1860s and 1870s. Following the rapid productivity growth in agriculture, the food processing and dairy industries also grew strongly in this period.

During this great boom, capital was imported mainly from Britain, but also from Sweden, Denmark and Germany, the four most important Norwegian trading partners at the time. In 1536 the King of Denmark and Norway had chosen the Lutheran faith as the state religion. Because the Reformation made reading compulsory, Norway acquired a generally skilled and independent labor force. The constitution of 1814 also cleared the way for liberalism and democracy. The puritan revivals of the nineteenth century created a business environment that fostered entrepreneurship, domestic capital formation and a productive labor force. In the western and southern parts of the country these puritan movements remain strong, both in daily life and within business.

Relative Stagnation with Industrialization, 1875-1914

Norway’s economy was hit hard during the “depression” from the mid-1870s to the early 1890s. GDP stagnated, particularly during the 1880s, and prices fell until 1896. This stagnation is mirrored in the large-scale emigration from Norway to North America in the 1880s. At its peak in 1882, as many as 28,804 persons, 1.5 percent of the population, left the country. All in all, 250,000 emigrated in the period 1879-1893, equal to 60 percent of the birth surplus. Only Ireland had higher emigration rates than Norway between 1836 and 1930, a period in which 860,000 Norwegians left the country.

The long slowdown can largely be explained by Norway’s dependence on the international economy, and in particular on the United Kingdom, which experienced slower economic growth than the other major economies of the time. As a result of the international slowdown, Norwegian exports contracted in several years, though they expanded in others. A second reason for the slowdown was the introduction of the international gold standard. Norway adopted gold in January 1874, and due to the trade deficit, lack of gold and lack of capital, the country experienced a huge contraction in gold reserves and in the money stock. The deflationary effect strangled the economy. Going onto the gold standard also caused the Norwegian currency, the krone, to appreciate, as gold became relatively more expensive compared to silver. A third explanation of Norway’s economic problems in the 1880s is the transformation from sailing to steam vessels. By 1875 Norway had the fourth-largest merchant fleet in the world. However, due to lack of capital and technological skills, the transformation from sail to steam was slow. Norwegian ship owners found a niche in cheap second-hand sailing vessels, but their market was diminishing, and when the Norwegian steam fleet finally surpassed the sailing fleet in 1907, Norway was no longer a major maritime power.

A short boom occurred from the early 1890s to 1899. Then a crash in the Norwegian building industry led to a major financial crash and stagnation in GDP per capita from 1900 to 1905. Thus, from the mid-1870s until 1905, Norway performed relatively poorly. Measured in GDP per capita, Norway, like Britain, experienced a significant stagnation relative to most western economies.

After 1905, when Norway gained full independence from Sweden, a heavy wave of industrialization took place. In the 1890s the fish-preserving and the cellulose and paper industries had started to grow rapidly. From 1905, when Norsk Hydro was established, manufacturing industry connected to hydroelectric power took off. It is argued, quite convincingly, that if there was an industrial breakthrough in Norway, it took place during the years 1905-1920. However, the primary sector, with its labor-intensive agriculture and increasingly capital-intensive fisheries, was still the biggest sector.

Crises and Growth, 1914-1945

Officially Norway was neutral during World War I. In economic terms, however, the government clearly took the side of the British and their allies. Through several treaties Norway gave privileges to the allied powers, which in turn protected the Norwegian merchant fleet. During the war’s first years Norwegian ship owners profited handsomely, and the economy boomed. From 1917, when Germany declared unrestricted submarine warfare against neutral vessels, Norway took heavy losses. A recession replaced the boom.

Norway suspended gold redemption in August 1914, and due to inflationary monetary policy during the war and in the first couple of years afterward, demand was very high. When the war came to an end, this excess demand was met by a positive shift in supply. Thus Norway, like other Western countries, experienced a significant boom from the spring of 1919 to the early autumn of 1920. The boom was accompanied by high inflation, trade deficits, currency depreciation and an overheated economy.

The international postwar recession that began in autumn 1920 hit Norway more severely than most other countries. In 1921 GDP per capita fell by eleven percent, a decline exceeded only by that of the United Kingdom. There are two major reasons for the devastating effect of the postwar recession. First, as a small open economy, Norway was more sensitive to international recessions than most other countries, particularly because the recession hit its most important trading partners, the United Kingdom and Sweden, so hard. Second, the combination of a strong and mostly pro-cyclical inflationary monetary policy from 1914 to 1920, followed by a hard deflationary policy, made the crisis worse (Figure 3).

Figure 3
Money Aggregates for Norway, 1910-1930

Source: Klovland (2004a)

In fact, Norway pursued a long, though inconsistently applied, deflationary monetary policy aimed at restoring the par value of the krone (NOK), which was finally achieved in May 1928. In consequence, another recession hit the economy during the middle of the 1920s. Norway was thus one of the worst performers in the western world in the 1920s, as can be seen in the number of bankruptcies, a huge financial crisis and mass unemployment. Bank losses amounted to seven percent of GDP in 1923. Total unemployment rose from about one percent in 1919 to more than eight percent in 1926 and 1927. In manufacturing it exceeded 18 percent in the same years.

Despite a rapid boom and success in the whaling industry and shipping services, the country never saw a convincing recovery before the Great Depression hit Europe in the late summer of 1930. The worst year for Norway was 1931, when GDP per capita fell by 8.4 percent. This, however, was due not only to the international crisis, but also to a massive and violent labor conflict that year. According to the implicit GDP deflator, prices fell by more than 63 percent from 1920 to 1933.

All in all, however, the depression of the 1930s was milder and shorter in Norway than in most western countries. This was partly due to the deflationary monetary policy of the 1920s, which had forced Norwegian companies to become more efficient in order to survive. Probably more important, however, was that Norway left gold as early as September 27th, 1931, only a week after the United Kingdom. The countries that left gold early, and could thereby pursue a more inflationary monetary policy, were the best performers in the 1930s. Among them were Norway and its most important trading partners, the United Kingdom and Sweden.

During the recovery period Norway saw particularly strong growth in manufacturing output, exports and import substitution. This can to a large extent be explained by currency depreciation. Moreover, while the international merchant fleet contracted during the drop in international trade, the Norwegian fleet grew rapidly, as Norwegian ship owners were pioneers in the transformation from steam to diesel engines, from tramp to liner freight, and into a new, expanding niche: oil tankers.

The primary sector was still the largest in the economy during the interwar years. Both fisheries and agriculture struggled with overproduction problems, however. These were dealt with by introducing market controls and cartels, partly controlled by the industries themselves and partly by the government.

The business cycle reached its bottom in late 1932. Despite a relatively rapid recovery and significant growth in both GDP and employment, unemployment stayed high, reaching 10-11 percent on an annual basis from 1931 to 1933 (Figure 4).

Figure 4
Unemployment Rate and Public Relief Work
as a Percent of the Work Force, 1919-1939

Source: Hodne and Grytten (2002)

The standard of living deteriorated in the primary sector, among those employed in domestic services, and for the underemployed and unemployed and their households. However, due to the strong deflation, which made consumer prices fall by more than 50 percent from autumn 1920 to summer 1933, employees in manufacturing, construction and crafts experienced an increase in real wages. Unemployment stayed persistently high due to huge growth in the labor supply, a result of the immigration restrictions imposed by the North American countries from the 1920s onwards.

Denmark and Norway were both victims of a German surprise attack on the 9th of April 1940. After two months of fighting, the Allied troops in Norway surrendered on June 7th, and the Norwegian royal family and government escaped to Britain.

From then until the end of the war there were two Norwegian economies: the domestic, German-controlled economy and the foreign, Norwegian- and Allied-controlled economy. The foreign economy was primarily based on the huge Norwegian merchant fleet, which was again among the biggest in the world, accounting for more than seven percent of world tonnage. Ninety percent of this floating capital escaped the Germans. The ships were united into one state-controlled company, Nortraship, which earned money to finance the foreign economy. The domestic economy, however, struggled with a significant fall in production, inflationary pressure and the rationing of important goods, which three million Norwegians had to share with the 400,000 Germans occupying the country.

Economic Planning and Growth, 1945-1973

After the war the challenge was to reconstruct the economy and re-establish political and economic order. The Labor Party, in office from 1935, seized the opportunity to establish strict social democratic rule, with a growing public sector and widespread centralized economic planning. Norway at first declined the U.S. offer of financial aid after the war. However, due to a lack of hard currency, it accepted the Marshall aid program. Receiving 400 million dollars from 1948 to 1952, Norway was one of the biggest per capita recipients.

As part of the reconstruction efforts Norway joined the Bretton Woods system, GATT, the IMF and the World Bank. Norway also chose to become a member of NATO and the United Nations. In 1958 Norway made the krone convertible to the U.S. dollar, as many other western countries did with their currencies, and in 1960 it became a founding member of the European Free Trade Association (EFTA).

The years from 1950 to 1973 are often called the golden era of the Norwegian economy. GDP per capita grew at an annual rate of 3.3 percent. Foreign trade grew even faster, unemployment barely existed and the inflation rate was stable. This has often been attributed to the large public sector and sound economic planning, and the Nordic model, with its huge public sector, has been called a success in this period. A closer look, nevertheless, shows that the Norwegian growth rate in the period was lower than that of most western nations. The same is true for Sweden and Denmark. The Nordic model delivered social security and evenly distributed wealth, but it did not necessarily deliver very high economic growth.

Figure 5
Public Sector as a Percent of GDP, 1900-1990

Source: Hodne and Grytten (2002)

Petroleum Economy and Neoliberalism, 1973 to the Present

After the Bretton Woods system fell apart (between August 1971 and March 1973) and the oil price shock hit in autumn 1973, most developed economies went into a period of prolonged recession and slow growth. In 1969 Phillips Petroleum had discovered petroleum resources at the Ekofisk field, which was defined as part of the Norwegian continental shelf. This enabled Norway to run a countercyclical fiscal policy during the stagflation period of the 1970s. Thus, economic growth was higher and unemployment lower than in most other western countries. However, since the countercyclical policy focused on branch and company subsidies, Norwegian firms soon learned to adapt to policy makers rather than to markets. Hence, neither productivity nor business structure had the incentives to keep pace with changes in international markets.

Norway lost significant competitive power, and large-scale deindustrialization took place, despite efforts to save manufacturing industry. Another reason for deindustrialization was the huge growth of the profitable petroleum sector. Persistently high oil prices from autumn 1973 to the end of 1985 pushed labor costs upward through spillover effects from high wages in the petroleum sector. High labor costs made the Norwegian foreign sector less competitive, and Norway deindustrialized at a more rapid pace than most of its largest trading partners. Due to the petroleum sector, however, Norway experienced high growth rates in the last three decades of the twentieth century, bringing it to the top of the world GDP per capita list at the dawn of the new millennium. Nevertheless, Norway had economic problems in both the eighties and the nineties.

In 1981 a conservative government replaced Labor, which had been in power for most of the postwar period. Norway had already joined the international wave of credit liberalization, and the new government gave further fuel to this policy. However, alongside the credit liberalization, the parliament still ran a policy that prevented market forces from setting interest rates. Instead, rates were set by politicians, in contradiction to the credit liberalization policy. Because the level of interest rates was an important part of the political contest for power, rates were set significantly below the market level. In consequence, a substantial credit boom was created in the early 1980s and continued to the late spring of 1986. As a result, Norway had monetary expansion and an artificial boom, which created an overheated economy. When oil prices fell dramatically from December 1985 onwards, the trade surplus suddenly turned into a huge deficit (Figure 6).

Figure 6
North Sea Oil Prices and Norway’s Trade Balance, 1975-2000

Source: Statistics Norway

The conservative-center government was forced to adopt a tighter fiscal policy, which the new Labor government pursued from May 1986. Interest rates were kept persistently high as the government tried to run a credible fixed-currency policy. In the summer of 1990 the Norwegian krone was officially pegged to the ECU. When the international wave of currency speculation reached Norway in autumn 1992, the central bank finally had to suspend the fixed exchange rate and later devalue the krone.

As a consequence of these years of monetary expansion followed by contraction, most western countries experienced financial crises, and the crisis hit Norway relatively hard. Prices of dwellings slid, consumers could not pay their bills, and bankruptcies and unemployment reached new heights. The state took over most of the larger commercial banks to avoid a total financial collapse.

After the suspension of the ECU peg and the subsequent devaluation, Norway enjoyed growth until 1998, due to optimism, an international boom and high petroleum prices. Then the Asian financial crisis rattled the Norwegian stock market, and at the same time petroleum prices fell rapidly, due to internal problems among the OPEC countries. Hence, the krone depreciated. The fixed exchange rate policy had to be abandoned, and the government adopted inflation targeting. Along with the changes in monetary policy, the center coalition government was also able to pursue a tighter fiscal policy, and interest rates remained high. As a result, Norway escaped the overheating of 1993-1997 without any devastating effects. Today the country has a strong and sound economy.

The petroleum sector is still very important in Norway. In this respect, the historical tradition of raw material dependency has had a renaissance. Unlike in many other countries rich in raw materials, natural resources have helped make Norway one of the most prosperous economies in the world. Important factors in Norway’s ability to turn resource abundance into economic prosperity are an educated work force, the adoption of advanced technology from other leading countries, stable and reliable institutions, and democratic rule.

References

Basberg, Bjørn L. Handelsflåten i krig: Nortraship: Konkurrent og alliert. Oslo: Grøndahl and Dreyer, 1992.

Bergh, Tore Hanisch, Even Lange and Helge Pharo. Growth and Development. Oslo: NUPI, 1979.

Brautaset, Camilla. “Norwegian Exports, 1830-1865: In Perspective of Historical National Accounts.” Ph.D. dissertation. Norwegian School of Economics and Business Administration, 2002.

Bruland, Kristine. British Technology and European Industrialization. Cambridge: Cambridge University Press, 1989.

Danielsen, Rolf, Ståle Dyrvik, Tore Grønlie, Knut Helle and Edgar Hovland. Norway: A History from the Vikings to Our Own Times. Oslo: Scandinavian University Press, 1995.

Eitrheim, Øyvind, Jan T. Klovland and Jan F. Qvigstad, editors. Historical Monetary Statistics for Norway, 1819-2003. Oslo: Norges Banks skriftserie/Occasional Papers, no. 35, 2004.

Hanisch, Tore Jørgen. “Om virkninger av paripolitikken.” Historisk tidsskrift 58, no. 3 (1979): 223-238.

Hanisch, Tore Jørgen, Espen Søilen and Gunhild Ecklund. Norsk økonomisk politikk i det 20. århundre. Verdivalg i en åpen økonomi. Kristiansand: Høyskoleforlaget, 1999.

Grytten, Ola Honningdal. “A Norwegian Consumer Price Index 1819-1913 in a Scandinavian Perspective.” European Review of Economic History 8, no. 1 (2004): 61-79.

Grytten, Ola Honningdal. “A Consumer Price Index for Norway, 1516-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 47-98.

Grytten, Ola Honningdal. “The Gross Domestic Product for Norway, 1830-2003.” Norges Bank: Occasional Papers, no. 1 (2004b): 241-288.

Hodne, Fritz. An Economic History of Norway, 1815-1970. Tapir: Trondheim, 1975.

Hodne, Fritz. The Norwegian Economy, 1920-1980. London: Croom Helm and St. Martin’s, 1983.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 19. århundre. Bergen: Fagbokforlaget, 2000.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 20. århundre. Bergen: Fagbokforlaget, 2002.

Klovland, Jan Tore. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 2 (1998):

Klovland, Jan Tore. “Monetary Aggregates in Norway, 1819-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 181-240.

Klovland, Jan Tore. “Historical Exchange Rate Data, 1819-2003”. Norges Bank: Occasional Papers, no. 1 (2004b): 289-328.

Lange, Even, editor. Teknologi i virksomhet. Verkstedsindustri i Norge etter 1840. Oslo: Ad Notam Forlag, 1989.

Nordvik, Helge W. “Finanspolitikken og den offentlige sektors rolle i norsk økonomi i mellomkrigstiden”. Historisk tidsskrift 58, no. 3 (1979): 239-268.

Sejersted, Francis. Demokratisk kapitalisme. Oslo: Universitetsforlaget, 1993.

Søilen, Espen. “Fra frischianisme til keynesianisme? En studie av norsk økonomisk politikk i lys av økonomisk teori, 1945-1980.” Ph.D. dissertation. Bergen: Norwegian School of Economics and Business Administration, 1998.

Citation: Grytten, Ola. “The Economic History of Norway”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-norway/